**id:** 1605.09090
**title:** Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
**authors:** Yang Liu, Chengjie Sun, Lei Lin, Xiaolong Wang
**categories:** cs.CL (primary: cs.CL)
**published:** 2016-05-30 · **updated:** 2016-05-30
**source:** http://arxiv.org/pdf/1605.09090 (arXiv:1605.09090v1 [cs.CL] 30 May 2016)
**summary:** In this paper, we propose a sentence encoding-based model for recognizing text entailment. In our approach, the encoding of a sentence is a two-stage process. Firstly, average pooling is used over word-level bidirectional LSTM (biLSTM) outputs to generate a first-stage sentence representation. Secondly, an attention mechanism is employed to replace average pooling on the same sentence for better representations. Instead of using the target sentence to attend over words in the source sentence, we utilize the sentence's first-stage representation to attend over the words of the sentence itself, which we call "Inner-Attention". Experiments conducted on the Stanford Natural Language Inference (SNLI) corpus prove the effectiveness of the "Inner-Attention" mechanism. With fewer parameters, our model outperforms the existing best sentence encoding-based approach by a large margin.
# Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
Yang Liu, Chengjie Sun, Lei Lin and Xiaolong Wang Harbin Institute of Technology, Harbin, P.R.China {yliu,cjsun,linl,wangxl}@insun.hit.edu.cn
# Abstract
In this paper, we propose a sentence encoding-based model for recognizing text entailment. In our approach, the encoding of a sentence is a two-stage process. Firstly, average pooling is used over word-level bidirectional LSTM (biLSTM) outputs to generate a first-stage sentence representation. Secondly, an attention mechanism is employed to replace average pooling on the same sentence for better representations. Instead of using the target sentence to attend over words in the source sentence, we utilize the sentence's first-stage representation to attend over the words of the sentence itself, which we call "Inner-Attention". Experiments conducted on the Stanford Natural Language Inference (SNLI) corpus prove the effectiveness of the "Inner-Attention" mechanism. With fewer parameters, our model outperforms the existing best sentence encoding-based approach by a large margin.
| | Sentence | Label |
| --- | --- | --- |
| P | The boy is running through a grassy area. | |
| H | The boy is in his room. | C |
| H | A boy is running outside. | E |
| H | The boy is in a park. | N |
Table 1: Examples of three types of label in RTE, where P stands for Premises and H stands for Hypothesis
# 1 Introduction
Given a pair of sentences, the goal of recognizing text entailment (RTE) is to determine whether the hypothesis can reasonably be inferred from the premise. There are three types of relations in RTE: Entailment (inferred to be true), Contradiction (inferred to be false), and Neutral (truth unknown). A few examples are given in Table 1.
Traditional approaches to RTE have been dominated by classifiers employing hand-engineered features, which rely heavily on natural language processing pipelines and external resources. Formal reasoning methods (Bos and Markert, 2005) were also explored by many researchers, but have not been widely used because of their complexity and domain limitations.

The recently published Stanford Natural Language Inference (SNLI¹) corpus makes it possible to use deep learning methods to solve RTE problems. The deep learning approaches proposed so far can be roughly categorized into two groups: sentence encoding-based models and matching encoding-based models. As the name implies, the encoding of the sentence is the core of the former methods, while the latter methods directly model the relation between two sentences and do not generate sentence representations at all.
In view of universality, we focused our efforts on sentence encoding-based models. Existing methods of this kind include LSTM-based, GRU-based, TBCNN-based and SPINN-based models. Unidirectional LSTMs and GRUs suffer from the weakness of not utilizing the contextual information from future tokens, and convolutional neural networks do not make full use of the information contained in word order. Bidirectional LSTMs utilize both the previous and future context by processing the sequence in two directions, which helps to address the drawbacks mentioned above (Tan et al., 2015).
¹ http://nlp.stanford.edu/projects/snli/
[Figure 1 diagram: (a) sentence input (premise and hypothesis); (b) sentence encoding with mean pooling; (c) sentence matching with multiplication and concatenation.]
Figure 1: Architecture of Bidirectional LSTM model with Inner-Attention
A recent work by (Rocktäschel et al., 2015) improved the performance by applying a neural attention model that does not yield sentence embeddings.
In this paper, we propose a unified deep learning framework for recognizing textual entailment which does not require any feature engineering or external resources. The basic model builds biLSTM models on both premise and hypothesis. The basic mean-pooling encoder can roughly form an intuition about what the sentence is talking about. Having obtained this representation, we extend the model by utilizing an Inner-Attention mechanism on both sides. This mechanism helps generate more accurate and focused sentence representations for classification. In addition, we introduce a simple but effective input strategy that gets rid of words shared by hypothesis and premise, which further boosts performance. Without parameter tuning, we improved the state-of-the-art performance of sentence encoding-based models by nearly 2%.
# 2 Our approach
In our work, we treat the RTE task as a supervised three-way classification problem. The overall architecture of our model is shown in Figure 1. The design of this model follows the idea of Siamese networks: the two identical sentence encoders share the same set of weights during training, and the two sentence representations are then combined together to generate a "relation vector" for classification. As we can see from the figure, the model mainly consists of three parts, from top to bottom: (A) the sentence input module; (B) the sentence encoding module; (C) the sentence matching module. We explain the last two parts in detail in the following subsections; the sentence input module is introduced in Section 3.3.
# 2.1 Sentence Encoding Module
The sentence encoding module is the fundamental part of this model. To generate better sentence representations, we employed a two-step strategy to encode sentences. Firstly, an average pooling layer was built on top of the word-level biLSTM to produce a sentence vector. This simple encoder, combined with the sentence matching module, forms the basic architecture of our model. With many fewer parameters, this basic model alone can outperform the state-of-the-art method by a small margin (refer to Table 3). Secondly, an attention mechanism was employed on the same sentence: instead of using a target sentence representation to attend over words in the source sentence, we used the representation generated in the previous stage to attend over the words of the sentence itself, which results in a weight distribution similar to those of other attention mechanisms. More attention is given to important words.²

² (Yang et al., 2016) proposed a Hierarchical Attention model on the task of document classification; attention is also used in their model, but the target representation in their attention mechanism is randomly initialized.
The idea of "Inner-Attention" was inspired by the observation that when humans read a sentence, they can usually form a rough intuition about which part of the sentence is more important according to past experience. We implemented this idea using an attention mechanism in our model. The attention mechanism is formalized as follows:
$$M = \tanh\big(W^y Y + W^h R_{ave} \otimes e_L\big)$$

$$\alpha = \mathrm{softmax}\big(w^T M\big)$$

$$R_{att} = Y \alpha^T$$
where Y is a matrix consisting of the output vectors of the biLSTM, R_ave is the output of the mean pooling layer (the outer product with e_L repeats it across all L words), α denotes the attention vector, and R_att is the attention-weighted sentence representation.
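A minimal NumPy sketch of these three equations may make the shapes concrete; the toy dimensions and random weights below stand in for the learned parameters W^y, W^h and w, and are not from the paper.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

L, h = 8, 6                      # assumed toy sizes: L tokens, hidden size h
Y = np.random.randn(h, L)        # biLSTM output vectors, one column per word
R_ave = Y.mean(axis=1)           # first-stage representation (mean pooling)

W_y = np.random.randn(h, h)      # attention parameters, learned in practice
W_h = np.random.randn(h, h)
w = np.random.randn(h)

# M = tanh(W^y Y + W^h R_ave ⊗ e_L): broadcast R_ave across all L positions
M = np.tanh(W_y @ Y + (W_h @ R_ave)[:, None])
alpha = softmax(w @ M)           # one weight per word, summing to 1
R_att = Y @ alpha                # attention-weighted sentence representation
print(alpha.round(3), R_att.shape)
```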
# 2.2 Sentence Matching Module
Once the sentence vectors are generated, three matching methods are applied to extract relations between premise and hypothesis:
• Concatenation of the two representations
• Element-wise product
• Element-wise difference
This matching architecture was first used by (Mou et al., 2015). Finally, we use a SoftMax layer over the output of a non-linear projection of the generated matching vector for classification.
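A minimal sketch of how the three heuristics combine into the relation vector; the function name and toy dimensions are ours, and the composition follows (Mou et al., 2015).

```python
import numpy as np

def relation_vector(r_p, r_h):
    """Concatenation, element-wise product, and element-wise difference
    of the premise and hypothesis representations."""
    return np.concatenate([r_p, r_h, r_p * r_h, r_p - r_h])

r_p, r_h = np.random.randn(6), np.random.randn(6)  # toy sentence vectors
rel = relation_vector(r_p, r_h)  # then a non-linear projection + softmax
print(rel.shape)                 # (24,)
```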
# 3 Experiments
# 3.1 Dataset
To evaluate the performance of our model, we conducted experiments on the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015). At 570K pairs, SNLI is two orders of magnitude larger than all other resources of its type. The dataset is constructed by crowdsourced efforts, with each sentence written by humans. The labels comprise three classes: Entailment, Contradiction, and Neutral (two irrelevant sentences). We applied the standard train/validation/test split, containing 550k, 10k, and 10k samples, respectively.
# 3.2 Parameter Setting
The training objective of our model is cross-entropy loss, and we use minibatch SGD with RMSProp (Tieleman and Hinton, 2012) for optimization. The batch size is 128. A dropout layer is applied to the output of the network with the dropout rate set to 0.25. We use pretrained 300D GloVe 840B vectors (Pennington et al., 2014) to initialize the word embeddings. Out-of-vocabulary words in the training set are randomly initialized by sampling values uniformly from (−0.05, 0.05). None of these embeddings are updated during training. We did not tune word representations for two reasons: 1. to reduce the number of parameters needed to train; 2. to keep their representations close to those of unseen similar words at inference time, which improves the model's generalization ability. The model is implemented using the open-source framework Keras.³
# 3.3 The Input Strategy
In this part, we investigated four strategies for modifying the input to our basic model that helped increase performance. The four strategies are:
• Inverting Premises (Sutskever et al., 2014)
• Doubling Premises (Zaremba and Sutskever, 2014)
• Doubling Hypotheses
• Differentiating Inputs (removing words that appear in both premise and hypothesis; see the sketch below)
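The paper gives no code for the differentiating strategy; a plausible reading is sketched below. Applied to the sentence pair of Figure 1, it keeps exactly the differing words.

```python
def differentiate(premise, hypothesis):
    """Remove words that appear in both sentences (Differentiating Inputs)."""
    p, h = premise.split(), hypothesis.split()
    shared = set(p) & set(h)
    keep = lambda tokens: " ".join(w for w in tokens if w not in shared)
    return keep(p), keep(h)

p = "Two men in polo shirts and tan pants immersed in a pleasant conversation about photograph"
h = "Two men in polo shirts and tan pants involved in a heated discussion about Canon"
print(differentiate(p, h))
# ('immersed pleasant conversation photograph', 'involved heated discussion Canon')
```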
Experimental results are illustrated in Table 2. As we can see, doubling the hypothesis and differentiating inputs both improved our model's performance. Since hypotheses are usually much shorter than premises, doubling the hypothesis may absorb this difference and emphasize its meaning twice. The differentiating-inputs strategy forces the model to focus on the differing parts of the two sentences, which may help classification of Neutral and Contradiction examples, as we observed that our model tended to assign unconfident instances to Entailment. The original input sentences shown in Figure 1 are:
Premise: Two men in polo shirts and tan pants immersed in a pleasant conversation about photograph.
³ http://keras.io/
| Input Strategy | Test Acc. |
| --- | --- |
| Original Sequences | 83.24% |
| Inverting Premises | 82.60% |
| Doubling Premises | 82.83% |
| Doubling Hypothesis | 83.66% |
| Differentiating Inputs | 83.72% |
Table 2: Comparison of different input strategies
Hypothesis: Two men in polo shirts and tan pants involved in a heated discussion about Canon.
Label: Contradiction

While most of the words in this pair of sentences are the same or semantically close, it is hard for the model to distinguish the difference between them, which results in labeling the pair Neutral or Entailment. The differentiating-inputs strategy can solve this kind of problem.
# 3.4 Comparison Methods

In this part, we compared our model against the following state-of-the-art baseline approaches:
• LSTM enc: 100D LSTM encoders + MLP. (Bowman et al., 2015)
• GRU enc: 1024D GRU encoders + skip-thoughts + cat, −. (Vendrov et al., 2015)
• TBCNN enc: 300D Tree-based CNN encoders + cat, ⊙, −. (Mou et al., 2015)
• SPINN enc: 300D SPINN-NP encoders + cat, ⊙, −. (Bowman et al., 2016)
• Static-Attention: 100D LSTM + static attention. (Rocktäschel et al., 2015)
• WbW-Attention: 100D LSTM + word-by-word attention. (Rocktäschel et al., 2015)
Here cat refers to concatenation, and − and ⊙ denote element-wise difference and product, respectively. Compared with these approaches, our model is much simpler and easier to understand.
# 3.5 Results and Qualitative Analysis
Although the classification of an RTE example does not rely solely on representations obtained from attention, it is instructive to analyze the Inner-Attention mechanism, as we witnessed a large performance increase after employing it. We hand-picked several examples from the dataset to visualize. In order to make the weights more discriminative, we did not use a uniform colour atlas across sentences. That is, each sentence has its own colour atlas, and the lightest and darkest colours denote the smallest and the biggest attention values
| Model | Params | Test Acc. |
| --- | --- | --- |
| *Sentence encoding-based models* | | |
| LSTM enc | 3.0M | 80.6% |
| GRU enc | 15M | 81.4% |
| TBCNN enc | 3.5M | 82.1% |
| SPINN enc | 3.7M | 83.2% |
| Basic model | 2.0M | 83.3% |
| + Inner-Attention | 2.8M | 84.2% |
| + Diversing Input | 2.8M | 85.0% |
| *Other neural network models* | | |
| Static-Attention | 242K | 82.4% |
| WbW-Attention | 252K | 83.5% |
Table 3: Performance comparison of different models on SNLI.
within the sentence, respectively. Visualizations of Inner-Attention on these examples are depicted in Figure 2.
Figure 2: Inner-Attention Visualizations.
We observed that more attention was given to nouns, verbs and adjectives. This conforms to our experience that these words are semantically richer than function words. While mean pooling regards each word as equally important, the attention mechanism helps re-weight words according to their importance, and more focused and accurate sentence representations are generated based on the produced attention vectors.
# 4 Conclusion and Future work
In this paper, we proposed a bidirectional LSTM-based model with Inner-Attention to solve the RTE problem. We came up with the idea of utilizing an attention mechanism within a sentence, which can teach the model to attend to words without information from the other sentence. The Inner-Attention mechanism helps produce more accurate sentence representations through attention vectors. In addition, the simple but effective diversing input strategy we introduced further boosts our results. This model can also be easily adapted to other sentence-matching models. Our future work includes:
1. Employing this architecture on other sentence-matching tasks such as question answering, paraphrase detection, and sentence text similarity.

2. Trying more heuristic matching methods to make full use of the sentence vectors.
# Acknowledgments
We thank all anonymous reviewers for their hard work!
# References
[Bos and Markert2005] Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 628–635. Association for Computational Linguistics.

[Bowman et al.2015] Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.

[Bowman et al.2016] Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. arXiv preprint arXiv:1603.06021.

[Mou et al.2015] Lili Mou, Men Rui, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2015. Recognizing entailment and contradiction by tree-based convolution. arXiv preprint arXiv:1512.08422.

[Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, volume 14, pages 1532–1543.

[Rocktäschel et al.2015] Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664.

[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.

[Tan et al.2015] Ming Tan, Bing Xiang, and Bowen Zhou. 2015. LSTM-based deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108.

[Tieleman and Hinton2012] Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop. COURSERA: Neural Networks for Machine Learning.

[Vendrov et al.2015] Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2015. Order-embeddings of images and language. arXiv preprint arXiv:1511.06361.

[Yang et al.2016] Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

[Zaremba and Sutskever2014] Wojciech Zaremba and Ilya Sutskever. 2014. Learning to execute. arXiv preprint arXiv:1410.4615.
---

**id:** 1605.08803
**title:** Density estimation using Real NVP
**authors:** Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio
**categories:** cs.LG, cs.AI, cs.NE, stat.ML (primary: cs.LG)
**published:** 2016-05-27 · **updated:** 2017-02-27
**comment:** 10 pages of main content, 3 pages of bibliography, 18 pages of appendix. Accepted at ICLR 2017.
**source:** http://arxiv.org/pdf/1605.08803 (arXiv:1605.08803v3 [cs.LG] 27 Feb 2017)
**summary:** Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation and latent variable manipulations.
Published as a conference paper at ICLR 2017
# DENSITY ESTIMATION USING REAL NVP
Laurent Dinh∗
Montreal Institute for Learning Algorithms
University of Montreal
Montreal, QC H3T 1J4
Jascha Sohl-Dickstein Google Brain
Samy Bengio Google Brain
# ABSTRACT
Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.
# 1 Introduction
The domain of representation learning has undergone tremendous advances due to improved supervised learning techniques. However, unsupervised learning has the potential to leverage large pools of unlabeled data, and extend these advances to modalities that are otherwise impractical or impossible.
One principled approach to unsupervised learning is generative probabilistic modeling. Not only do generative probabilistic models have the ability to create novel content, they also have a wide range of reconstruction related applications including inpainting [61, 46, 59], denoising [3], colorization [71], and super-resolution [9].
As data of interest are generally high-dimensional and highly structured, the challenge in this domain is building models that are powerful enough to capture its complexity yet still trainable. We address this challenge by introducing real-valued non-volume preserving (real NVP) transformations, a tractable yet expressive approach to modeling high-dimensional data.
This model can perform efficient and exact inference, sampling and log-density estimation of data points. Moreover, the architecture presented in this paper enables exact and efficient reconstruction of input images from the hierarchical features extracted by this model.
# 2 Related work
Substantial work on probabilistic generative models has focused on training models using maximum likelihood. One class of maximum likelihood models are those described by probabilistic undirected graphs, such as Restricted Boltzmann Machines [58] and Deep Boltzmann Machines [53]. These models are trained by taking advantage of the conditional independence property of their bipartite structure to allow efficient exact or approximate posterior inference on latent variables. However, because of the intractability of the associated marginal distribution over latent variables, their training, evaluation, and sampling procedures necessitate the use of approximations like Mean Field inference and Markov Chain Monte Carlo, whose convergence time for such complex models
∗Work was done when author was at Google Brain.
[Figure 1 panels: data space X and latent space Z. Inference: x ∼ p̂_X, z = f(x). Generation: z ∼ p_Z, x = f⁻¹(z).]
Figure 1: Real NVP learns an invertible, stable, mapping between a data distribution p̂_X and a latent distribution p_Z (typically a Gaussian). Here we show a mapping that has been learned on a toy 2-d dataset. The function f(x) maps samples x from the data distribution in the upper left into approximate samples z from the latent distribution, in the upper right. This corresponds to exact inference of the latent state given the data. The inverse function, f⁻¹(z), maps samples z from the latent distribution in the lower right into approximate samples x from the data distribution in the lower left. This corresponds to exact generation of samples from the model. The transformation of grid lines in X and Z space is additionally illustrated for both f(x) and f⁻¹(z).
remains undetermined, often resulting in generation of highly correlated samples. Furthermore, these approximations can often hinder their performance [7].
Directed graphical models are instead defined in terms of an ancestral sampling procedure, which is appealing both for its conceptual and computational simplicity. They lack, however, the conditional independence structure of undirected models, making exact and approximate posterior inference on latent variables cumbersome [56]. Recent advances in stochastic variational inference [27] and amortized inference [13, 43, 35, 49] allowed efficient approximate inference and learning of deep directed graphical models by maximizing a variational lower bound on the log-likelihood [45]. In particular, the variational autoencoder algorithm [35, 49] simultaneously learns a generative network, that maps gaussian latent variables z to samples x, and a matched approximate inference network that maps samples x to a semantically meaningful latent representation z, by exploiting the reparametrization trick [68]. Its success in leveraging recent advances in backpropagation [51, 39] in deep neural networks resulted in its adoption for several applications ranging from speech synthesis [12] to language modeling [8]. Still, the approximation in the inference process limits its ability to learn high dimensional deep representations, motivating recent work in improving approximate inference [42, 48, 55, 63, 10, 59, 34].
Such approximations can be avoided altogether by abstaining from using latent variables. Auto-regressive models [18, 6, 37, 20] can implement this strategy while typically retaining a great deal of flexibility. This class of algorithms tractably models the joint distribution by decomposing it into a product of conditionals using the probability chain rule according to a fixed ordering over dimensions, simplifying log-likelihood evaluation and sampling. Recent work in this line of research has taken advantage of recent advances in recurrent networks [51], in particular long-short term memory [26], and residual networks [25, 24] in order to learn state-of-the-art generative image models [61, 46] and language models [32]. The ordering of the dimensions, although often arbitrary, can be critical to the training of the model [66]. The sequential nature of this model limits its computational efficiency. For example, its sampling procedure is sequential and non-parallelizable, which can become cumbersome in applications like speech and music synthesis, or real-time rendering. Additionally, there is no natural latent representation associated with autoregressive models, and they have not yet been shown to be useful for semi-supervised learning.
Generative Adversarial Networks (GANs) [21] on the other hand can train any differentiable generative network by avoiding the maximum likelihood principle altogether. Instead, the generative network is associated with a discriminator network whose task is to distinguish between samples and real data. Rather than using an intractable log-likelihood, this discriminator network provides the training signal in an adversarial fashion. Successfully trained GAN models [21, 15, 47] can consistently generate sharp and realistic-looking samples [38]. However, metrics that measure the diversity in the generated samples are currently intractable [62, 22, 30]. Additionally, instability in their training process [47] requires careful hyperparameter tuning to avoid diverging behavior.
Training such a generative network g that maps a latent variable z ∼ p_Z to a sample x ∼ p_X does not in theory require a discriminator network as in GANs, or approximate inference as in variational autoencoders. Indeed, if g is bijective, it can be trained through maximum likelihood using the change of variable formula:
$$p_X(x) = p_Z(z)\left|\det\left(\frac{\partial g(z)}{\partial z^T}\right)\right|^{-1} \tag{1}$$
This formula has been discussed in several papers including the maximum likelihood formulation of independent components analysis (ICA) [4, 28], gaussianization [14, 11] and deep density models [5, 50, 17, 3]. As the existence proof of nonlinear ICA solutions [29] suggests, auto-regressive models can be seen as tractable instances of maximum likelihood nonlinear ICA, where the residual corresponds to the independent components. However, naive application of the change of variable formula produces models which are computationally expensive and poorly conditioned, and so large scale models of this type have not entered general use.
# 3 Model definition
In this paper, we will tackle the problem of learning highly nonlinear models in high-dimensional continuous spaces through maximum likelihood. In order to optimize the log-likelihood, we introduce a more flexible class of architectures that enables the computation of log-likelihood on continuous data using the change of variable formula. Building on our previous work in [17], we define a powerful class of bijective functions which enable exact and tractable density evaluation and exact and tractable inference. Moreover, the resulting cost function does not rely on a fixed-form reconstruction cost such as squared error [38, 47], and generates sharper samples as a result. Also, this flexibility helps us leverage recent advances in batch normalization [31] and residual networks [24, 25] to define a very deep multi-scale architecture with multiple levels of abstraction.
# 3.1 Change of variable formula
Given an observed data variable x ∈ X, a simple prior probability distribution p_Z on a latent variable z ∈ Z, and a bijection f : X → Z (with g = f⁻¹), the change of variable formula defines a model distribution on X by
$$p_X(x) = p_Z\big(f(x)\big)\left|\det\left(\frac{\partial f(x)}{\partial x^T}\right)\right| \tag{2}$$

$$\log\big(p_X(x)\big) = \log\Big(p_Z\big(f(x)\big)\Big) + \log\left(\left|\det\left(\frac{\partial f(x)}{\partial x^T}\right)\right|\right) \tag{3}$$

where $\frac{\partial f(x)}{\partial x^T}$ is the Jacobian of $f$ at $x$.
Exact samples from the resulting distribution can be generated by using the inverse transform sampling rule [16]. A sample z ∼ p_Z is drawn in the latent space, and its inverse image x = f⁻¹(z) = g(z) generates a sample in the original space. Computing the density at a point x is accomplished by computing the density of its image f(x) and multiplying by the associated Jacobian determinant det(∂f(x)/∂x^T). See also Figure 1. Exact and efficient inference enables the accurate and fast evaluation of the model.
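As a one-dimensional illustration of Equations (2) and (3) together with inverse transform sampling (the toy bijection f(x) = log(x) is ours, not from the paper):

```python
import numpy as np

# Toy model: x = g(z) = exp(z) with z ~ N(0, 1), so f(x) = log(x)
# and |det(∂f/∂x)| = 1/x.
def log_p_Z(z):                 # standard normal log-density
    return -0.5 * (z ** 2 + np.log(2 * np.pi))

def log_p_X(x):                 # Eq. (3): log p_Z(f(x)) + log|det ∂f/∂x|
    return log_p_Z(np.log(x)) - np.log(x)

z = np.random.randn(5)          # exact sampling: draw z ~ p_Z ...
x = np.exp(z)                   # ... and push it through g = f^{-1}
print(log_p_X(x))               # exact log-likelihood of the samples
```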
(a) Forward propagation (b) Inverse propagation
Figure 2: Computational graphs for forward and inverse propagation. A coupling layer applies a simple invertible transformation consisting of scaling followed by addition of a constant offset to one part x₂ of the input vector, conditioned on the remaining part of the input vector x₁. Because of its simple nature, this transformation is both easily invertible and possesses a tractable determinant. However, the conditional nature of this transformation, captured by the functions s and t, significantly increases the flexibility of this otherwise weak function. The forward and inverse propagation operations have identical computational cost.
# 3.2 Coupling layers
Computing the Jacobian of functions with high-dimensional domain and codomain and computing the determinants of large matrices are in general computationally very expensive. This combined with the restriction to bijective functions makes Equation 2 appear impractical for modeling arbitrary distributions.
As shown however in [17], by careful design of the function f, a bijective model can be learned which is both tractable and extremely flexible. As computing the Jacobian determinant of the transformation is crucial to effectively train using this principle, this work exploits the simple observation that the determinant of a triangular matrix can be efficiently computed as the product of its diagonal terms.
We will build a flexible and tractable bijective function by stacking a sequence of simple bijections. In each simple bijection, part of the input vector is updated using a function which is simple to invert, but which depends on the remainder of the input vector in a complex way. We refer to each of these simple bijections as an affine coupling layer. Given a D dimensional input x and d < D, the output y of an affine coupling layer follows the equations
$$y_{1:d} = x_{1:d} \tag{4}$$

$$y_{d+1:D} = x_{d+1:D} \odot \exp\big(s(x_{1:d})\big) + t(x_{1:d}), \tag{5}$$

where $s$ and $t$ stand for scale and translation, and are functions from $\mathbb{R}^d \mapsto \mathbb{R}^{D-d}$, and $\odot$ is the Hadamard product or element-wise product (see Figure 2(a)).
# 3.3 Properties
The Jacobian of this transformation is
$$\frac{\partial y}{\partial x^T} = \begin{bmatrix} \mathbb{I}_d & 0 \\ \frac{\partial y_{d+1:D}}{\partial x_{1:d}^T} & \mathrm{diag}\big(\exp[s(x_{1:d})]\big) \end{bmatrix} \tag{6}$$

where $\mathrm{diag}\big(\exp[s(x_{1:d})]\big)$ is the diagonal matrix whose diagonal elements correspond to the vector $\exp[s(x_{1:d})]$. Given the observation that this Jacobian is triangular, we can efficiently compute its determinant as $\exp\big[\sum_j s(x_{1:d})_j\big]$. Since computing the Jacobian determinant of the coupling layer operation does not involve computing the Jacobian of s or t, those functions can be arbitrarily complex. We will make them deep convolutional neural networks. Note that the hidden layers of s and t can have more features than their input and output layers.
Another interesting property of these coupling layers in the context of defining probabilistic models is their invertibility. Indeed, computing the inverse is no more complex than the forward propagation
Figure 3: Masking schemes for affine coupling layers. On the left, a spatial checkerboard pattern mask. On the right, a channel-wise masking. The squeezing operation reduces the 4 × 4 × 1 tensor (on the left) into a 2 × 2 × 4 tensor (on the right). Before the squeezing operation, a checkerboard pattern is used for coupling layers while a channel-wise masking pattern is used afterward.
(see Figure 2(b)),
$$\begin{cases} y_{1:d} = x_{1:d} \\ y_{d+1:D} = x_{d+1:D} \odot \exp\big(s(x_{1:d})\big) + t(x_{1:d}) \end{cases} \tag{7}$$

$$\begin{cases} x_{1:d} = y_{1:d} \\ x_{d+1:D} = \big(y_{d+1:D} - t(y_{1:d})\big) \odot \exp\big(-s(y_{1:d})\big), \end{cases} \tag{8}$$
meaning that sampling is as efficient as inference for this model. Note again that computing the inverse of the coupling layer does not require computing the inverse of s or t, so these functions can be arbitrarily complex and difficult to invert.
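A runnable NumPy sketch of Equations (4)-(8) and of the log-determinant from Equation (6); the tiny one-layer maps below stand in for the deep networks s and t, and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 6, 3
Ws, bs = rng.normal(size=(D - d, d)), rng.normal(size=D - d)
Wt, bt = rng.normal(size=(D - d, d)), rng.normal(size=D - d)
s = lambda x1: np.tanh(Ws @ x1 + bs)   # scale network (toy stand-in)
t = lambda x1: Wt @ x1 + bt            # translation network (toy stand-in)

def coupling_forward(x):
    """Eqs. (4)-(5): y_{1:d} = x_{1:d}; y_{d+1:D} = x_{d+1:D}*exp(s) + t."""
    x1, x2 = x[:d], x[d:]
    y = np.concatenate([x1, x2 * np.exp(s(x1)) + t(x1)])
    log_det = s(x1).sum()              # log|det J| = sum_j s(x_{1:d})_j
    return y, log_det

def coupling_inverse(y):
    """Eq. (8): x_{d+1:D} = (y_{d+1:D} - t) * exp(-s); no inverse of s, t."""
    y1, y2 = y[:d], y[d:]
    return np.concatenate([y1, (y2 - t(y1)) * np.exp(-s(y1))])

x = rng.normal(size=D)
y, log_det = coupling_forward(x)
assert np.allclose(coupling_inverse(y), x)   # inversion is exact
print(log_det)
```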
# 3.4 Masked convolution
Partitioning can be implemented using a binary mask b, and using the functional form for y,
$$y = b \odot x + (1 - b) \odot \Big(x \odot \exp\big(s(b \odot x)\big) + t(b \odot x)\Big). \tag{9}$$
We use two partitionings that exploit the local correlation structure of images: spatial checkerboard patterns, and channel-wise masking (see Figure 3). The spatial checkerboard pattern mask has value 1 where the sum of spatial coordinates is odd, and 0 otherwise. The channel-wise mask b is 1 for the first half of the channel dimensions and 0 for the second half. For the models presented here, both s(·) and t(·) are rectified convolutional networks.
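A sketch of Equation (9) with a spatial checkerboard mask; the toy s(·) and t(·) below stand in for the rectified convolutional networks the paper uses.

```python
import numpy as np

def checkerboard(height, width):
    """Mask with value 1 where the sum of spatial coordinates is odd."""
    ii, jj = np.indices((height, width))
    return ((ii + jj) % 2).astype(float)

def masked_coupling(x, b, s, t):
    """Eq. (9): y = b*x + (1 - b)*(x*exp(s(b*x)) + t(b*x))."""
    bx = b * x
    return b * x + (1 - b) * (x * np.exp(s(bx)) + t(bx))

b = checkerboard(4, 4)
x = np.random.randn(4, 4)
s = lambda v: 0.1 * np.tanh(v)   # toy stand-ins for the conv nets
t = lambda v: 0.1 * v
print(masked_coupling(x, b, s, t).shape)   # (4, 4)
```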
# 3.5 Combining coupling layers
Although coupling layers can be powerful, their forward transformation leaves some components unchanged. This difficulty can be overcome by composing coupling layers in an alternating pattern, such that the components that are left unchanged in one coupling layer are updated in the next (see Figure 4(a)).
The Jacobian determinant of the resulting function remains tractable, relying on the fact that
$$\frac{\partial (f_b \circ f_a)}{\partial x_a^T}(x_a) = \frac{\partial f_a}{\partial x_a^T}(x_a) \cdot \frac{\partial f_b}{\partial x_b^T}\big(x_b = f_a(x_a)\big) \tag{10}$$

$$\det(A \cdot B) = \det(A)\,\det(B). \tag{11}$$
Similarly, its inverse can be computed easily as
$$(f_b \circ f_a)^{-1} = f_a^{-1} \circ f_b^{-1}. \tag{12}$$
(a) In this alternating pattern, units which remain identical in one transformation are modified in the next.

(b) Factoring out variables. At each step, half the variables are directly modeled as Gaussians, while the other half undergo further transformation.

Figure 4: Composition schemes for affine coupling layers.
# 3.6 Multi-scale architecture
We implement a multi-scale architecture using a squeezing operation: for each channel, it divides the image into subsquares of shape 2 × 2 × c, then reshapes them into subsquares of shape 1 × 1 × 4c. The squeezing operation transforms an s × s × c tensor into an (s/2) × (s/2) × 4c tensor (see Figure 3), effectively trading spatial size for number of channels.
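A NumPy sketch of the squeezing operation on a single-image tensor; the exact ordering of the four sub-pixels within the new channels is our assumption, since the paper only fixes the 2 × 2 subsquare grouping.

```python
import numpy as np

def squeeze(x):
    """Turn an s x s x c tensor into (s/2) x (s/2) x 4c, trading
    spatial size for channels (Figure 3)."""
    s1, s2, c = x.shape
    x = x.reshape(s1 // 2, 2, s2 // 2, 2, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(s1 // 2, s2 // 2, 4 * c)

x = np.arange(4 * 4 * 1).reshape(4, 4, 1)   # the 4 x 4 x 1 case of Figure 3
print(squeeze(x).shape)                      # (2, 2, 4)
```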
At each scale, we combine several operations into a sequence: we first apply three coupling layers with alternating checkerboard masks, then perform a squeezing operation, and finally apply three more coupling layers with alternating channel-wise masking. The channel-wise masking is chosen so that the resulting partitioning is not redundant with the previous checkerboard masking (see Figure 3). For the final scale, we only apply four coupling layers with alternating checkerboard masks.
Propagating a D dimensional vector through all the coupling layers would be cumbersome, in terms of computational and memory cost, and in terms of the number of parameters that would need to be trained. For this reason we follow the design choice of [57] and factor out half of the dimensions at regular intervals (see Equation 14). We can define this operation recursively (see Figure 4(b)),
$$h^{(0)} = x \tag{13}$$

$$\big(z^{(i+1)}, h^{(i+1)}\big) = f^{(i+1)}\big(h^{(i)}\big) \tag{14}$$

$$z^{(L)} = f^{(L)}\big(h^{(L-1)}\big) \tag{15}$$

$$z = \big(z^{(1)}, \ldots, z^{(L)}\big). \tag{16}$$
In our experiments, we use this operation for i < L. The sequence of coupling-squeezing-coupling operations described above is performed per layer when computing f^(i) (Equation 14). At each layer, as the spatial resolution is reduced, the number of hidden layer features in s and t is doubled. All variables which have been factored out at different scales are concatenated to obtain the final transformed output (Equation 16).
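A schematic sketch of the recursion in Equations (13)-(16); the per-scale maps below are toy invertible stand-ins for the coupling-squeeze-coupling stacks.

```python
import numpy as np

def multiscale_flow(x, flows):
    """At each scale, transform h and factor out half of the dimensions
    as z^(i+1); the final flow emits the remainder (Eqs. 13-16)."""
    h, zs = x, []
    for f in flows[:-1]:
        h = f(h)
        z_i, h = np.split(h, 2)      # half the variables are Gaussianized now
        zs.append(z_i)
    zs.append(flows[-1](h))          # z^(L) = f^(L)(h^(L-1))
    return np.concatenate(zs)        # z = (z^(1), ..., z^(L))

flows = [lambda v: 2 * v, lambda v: v + 1, lambda v: -v]  # toy scale maps
print(multiscale_flow(np.ones(8), flows))   # dimensionality is preserved
```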
As a consequence, the model must Gaussianize units which are factored out at a finer scale (in an earlier layer) before those which are factored out at a coarser scale (in a later layer). This results in the definition of intermediary levels of representation [53, 49] corresponding to more local, fine-grained features as shown in Appendix D.
Moreover, Gaussianizing and factoring out units in earlier layers has the practical benefit of distributing the loss function throughout the network, following a philosophy similar to guiding intermediate layers using intermediate classifiers [40]. It also significantly reduces the amount of computation and memory used by the model, allowing us to train larger models.
# 3.7 Batch normalization
To further improve the propagation of training signal, we use deep residual networks [24, 25] with batch normalization [31] and weight normalization [2, 54] in s and t. As described in Appendix E we introduce and use a novel variant of batch normalization which is based on a running average over recent minibatches, and is thus more robust when training with very small minibatches.
We also apply batch normalization to the whole coupling layer output. The effects of batch normalization are easily included in the Jacobian computation, since it acts as a linear rescaling on each dimension. That is, given the estimated batch statistics $\tilde{\mu}$ and $\tilde{\sigma}^2$, the rescaling function
$$x \mapsto \frac{x - \tilde{\mu}}{\sqrt{\tilde{\sigma}^2 + \epsilon}} \tag{17}$$
has a Jacobian determinant
$$\prod_i \big(\tilde{\sigma}_i^2 + \epsilon\big)^{-\frac{1}{2}}. \tag{18}$$
This form of batch normalization can be seen as similar to reward normalization in deep reinforcement learning [44, 65].
We found that the use of this technique not only allowed training with a deeper stack of coupling layers, but also alleviated the instability problem that practitioners often encounter when training conditional distributions with a scale parameter through a gradient-based approach.
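A sketch of the log-Jacobian bookkeeping in Equations (17) and (18), using plain batch statistics rather than the running-average variant described in Appendix E:

```python
import numpy as np

def batchnorm_with_logdet(x, eps=1e-5):
    """Rescale x -> (x - mu)/sqrt(var + eps) per dimension and return the
    per-example log|det|, i.e. -0.5 * sum_i log(var_i + eps)."""
    mu, var = x.mean(axis=0), x.var(axis=0)   # batch statistics
    y = (x - mu) / np.sqrt(var + eps)
    log_det = -0.5 * np.log(var + eps).sum()  # added to each log-likelihood
    return y, log_det

x = np.random.randn(64, 10)                   # a minibatch of 64 examples
y, log_det = batchnorm_with_logdet(x)
print(log_det)
```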
# 4 Experiments
# 4.1 Procedure
The algorithm described in Equation (2) shows how to learn distributions on unbounded spaces. In general, the data of interest have bounded magnitude. For example, the pixel values of an image typically lie in [0, 256] after application of the recommended jittering procedure [64, 62]. In order to reduce the impact of boundary effects, we instead model the density of logit(α + (1 − α) ⊙ x/256), where α is picked here as 0.05. We take this transformation into account when computing log-likelihood and bits per dimension. We also augment the CIFAR-10, CelebA and LSUN datasets during training to also include horizontal flips of the training examples.
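A sketch of this preprocessing; the uniform jitter added below follows the usual dequantization recipe of [64, 62], and the exact noise placement is our assumption.

```python
import numpy as np

def logit(p):
    return np.log(p) - np.log1p(-p)

def preprocess(pixels, alpha=0.05):
    """Model the density of logit(alpha + (1 - alpha) * x/256) instead of
    raw pixel values, reducing boundary effects at 0 and 256."""
    x = pixels + np.random.rand(*pixels.shape)   # jitter: add U[0, 1) noise
    return logit(alpha + (1 - alpha) * x / 256.0)

img = np.random.randint(0, 256, size=(32, 32, 3)).astype(float)
print(preprocess(img).mean())
```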
We train our model on four natural image datasets: CIFAR-10 [36], Imagenet [52], Large-scale Scene Understanding (LSUN) [70], CelebFaces Attributes (CelebA) [41]. More specifically, we train on the downsampled 32 × 32 and 64 × 64 versions of Imagenet [46]. For the LSUN dataset, we train on the bedroom, tower and church outdoor categories. The procedure for LSUN is the same as in [47]: we downsample the image so that the smallest side is 96 pixels and take random crops of 64 × 64. For CelebA, we use the same procedure as in [38]: we take an approximately central crop of 148 × 148 then resize it to 64 × 64.
We use the multi-scale architecture described in Section 3.6 and use deep convolutional residual networks in the coupling layers with rectifier nonlinearity and skip-connections as suggested by [46]. To compute the scaling functions s, we use a hyperbolic tangent function multiplied by a learned scale, whereas the translation function t has an affine output. Our multi-scale architecture is repeated recursively until the input of the last recursion is a 4 × 4 × c tensor. For datasets of images of size 32 × 32, we use 4 residual blocks with 32 hidden feature maps for the first coupling layers with checkerboard masking. Only 2 residual blocks are used for images of size 64 × 64. We use a batch size of 64. For CIFAR-10, we use 8 residual blocks, 64 feature maps, and downscale only once. We optimize with ADAM [33] with default hyperparameters and use an L2 regularization on the weight scale parameters with coefficient 5·10⁻⁵.
We set the prior p_Z to be an isotropic unit norm Gaussian. However, any distribution could be used for p_Z, including distributions that are also learned during training, such as from an auto-regressive model, or (with slight modifications to the training objective) a variational autoencoder.
| Dataset | PixelRNN [46] | Real NVP | Conv DRAW [22] | IAF-VAE [34] |
| --- | --- | --- | --- | --- |
| CIFAR-10 | 3.00 | 3.49 | < 3.59 | < 3.28 |
| Imagenet (32 × 32) | 3.86 (3.83) | 4.28 (4.26) | < 4.40 (4.35) | |
| Imagenet (64 × 64) | 3.63 (3.57) | 3.98 (3.75) | < 4.10 (4.04) | |
| LSUN (bedroom) | | 2.72 (2.70) | | |
| LSUN (tower) | | 2.81 (2.78) | | |
| LSUN (church outdoor) | | 3.08 (2.94) | | |
| CelebA | | 3.02 (2.97) | | |

Table 1: Bits/dim results for CIFAR-10, Imagenet, LSUN datasets and CelebA. Test results for CIFAR-10 and validation results for Imagenet, LSUN and CelebA (with training results in parentheses for reference).
Figure 5: On the left column, examples from the dataset. On the right column, samples from the model trained on the dataset. The datasets shown in this figure are, in order: CIFAR-10, Imagenet (32 × 32), Imagenet (64 × 64), CelebA, LSUN (bedroom).
# 4.2 Results
We show in Table 1 that the number of bits per dimension, while not improving over the Pixel RNN [46] baseline, is competitive with other generative methods. As we notice that our performance increases with the number of parameters, larger models are likely to further improve performance. For CelebA and LSUN, the bits per dimension for the validation set was decreasing throughout training, so little overfitting is expected.
We show in Figure 5 samples generated from the model with training examples from the dataset for comparison. As mentioned in [62, 22], maximum likelihood is a principle that values diversity
Figure 6: Manifold generated from four examples in the dataset. Clockwise from top left: CelebA, Imagenet (64 × 64), LSUN (tower), LSUN (bedroom).
over sample quality in a limited capacity setting. As a result, our model sometimes outputs highly improbable samples, as we can notice especially on CelebA. As opposed to variational autoencoders, the samples generated from our model look not only globally coherent but also sharp. Our hypothesis is that, as opposed to these models, real NVP does not rely on a fixed-form reconstruction cost like an L2 norm, which tends to reward capturing low frequency components more heavily than high frequency components. Unlike autoregressive models, sampling from our model is done very efficiently as it is parallelized over input dimensions. On Imagenet and LSUN, our model seems to have captured well the notion of background/foreground and lighting interactions such as luminosity and consistent light source direction for reflectance and shadows.
We also illustrate the smooth, semantically consistent meaning of our latent variables. In the latent space, we define a manifold based on four validation examples z^(1), z^(2), z^(3), z^(4), parametrized by two parameters φ and φ′ by
$$z = \cos(\phi)\big(\cos(\phi') z^{(1)} + \sin(\phi') z^{(2)}\big) + \sin(\phi)\big(\cos(\phi') z^{(3)} + \sin(\phi') z^{(4)}\big). \tag{19}$$
We project the resulting manifold back into the data space by computing g(z). Results are shown in Figure 6. We observe that the model seems to have organized the latent space with a notion of meaning that goes well beyond pixel-space interpolation. More visualizations are shown in the Appendix. To further test whether the latent space has a consistent semantic interpretation, we trained a class-conditional model on CelebA, and found that the learned representation had a consistent semantic meaning across class labels (see Appendix F).
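A sketch of the manifold construction in Equation (19); the anchor latents and the grid of (φ, φ′) values over [0, π/2] are illustrative assumptions.

```python
import numpy as np

def manifold_point(z1, z2, z3, z4, phi, phi2):
    """Eq. (19): a two-parameter manifold through four latent anchors."""
    return (np.cos(phi) * (np.cos(phi2) * z1 + np.sin(phi2) * z2)
            + np.sin(phi) * (np.cos(phi2) * z3 + np.sin(phi2) * z4))

anchors = [np.random.randn(16) for _ in range(4)]   # z = f(x) of 4 examples
grid = np.array([[manifold_point(*anchors, a, b)    # decode with x = g(z)
                  for b in np.linspace(0, np.pi / 2, 5)]
                 for a in np.linspace(0, np.pi / 2, 5)])
print(grid.shape)   # (5, 5, 16)
```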
# 5 Discussion and conclusion
In this paper, we have defined a class of invertible functions with tractable Jacobian determinant, enabling exact and tractable log-likelihood evaluation, inference, and sampling. We have shown that this class of generative model achieves competitive performance, both in terms of sample quality and log-likelihood. Many avenues exist to further improve the functional form of the transformations, for instance by exploiting the latest advances in dilated convolutions [69] and residual network architectures [60].
This paper presented a technique bridging the gap between auto-regressive models, variational autoencoders, and generative adversarial networks. Like auto-regressive models, it allows tractable and exact log-likelihood evaluation for training. It allows, however, a much more flexible functional form, similar to that in the generative model of variational autoencoders. This allows for fast and exact sampling from the model distribution. Like GANs, and unlike variational autoencoders, our technique does not require the use of a fixed-form reconstruction cost, and instead defines a cost in terms of higher level features, generating sharper images. Finally, unlike both variational
autoencoders and GANs, our technique is able to learn a semantically meaningful latent space which is as high dimensional as the input space. This may make the algorithm particularly well suited to semi-supervised learning tasks, as we hope to explore in future work.
Real NVP generative models can additionally be conditioned on additional variables (for instance class labels) to create a structured output algorithm. More so, as the resulting class of invertible transformations can be treated as a probability distribution in a modular way, it can also be used to improve upon other probabilistic models like auto-regressive models and variational autoencoders. For variational autoencoders, these transformations could be used both to enable a more flexible reconstruction cost [38] and a more flexible stochastic inference distribution [48]. Probabilistic models in general can also benefit from batch normalization techniques as applied in this paper.
The definition of powerful and trainable invertible functions can also benefit domains other than generative unsupervised learning. For example, in reinforcement learning, these invertible functions can help extend the set of functions for which an argmax operation is tractable for continuous Q-learning [23], or find representations where local linear Gaussian approximations are more appropriate [67].
# 6 Acknowledgments
The authors thank the developers of Tensorflow [1]. We thank Sherry Moore, David Andersen and Jon Shlens for their help in implementing the model. We thank Aäron van den Oord, Yann Dauphin, Kyle Kastner, Chelsea Finn, Maithra Raghu, David Warde-Farley, Daniel Jiwoong Im and Oriol Vinyals for fruitful discussions. Finally, we thank Ben Poole, Rafal Jozefowicz and George Dahl for their input on a draft of the paper.
# References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Vijay Badrinarayanan, Bamdev Mishra, and Roberto Cipolla. Understanding symmetries in deep networks. arXiv preprint arXiv:1511.01029, 2015.
[3] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. Density modeling of images using a generalized normalization transformation. arXiv preprint arXiv:1511.06281, 2015.
[4] Anthony J Bell and Terrence J Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural computation, 7(6):1129–1159, 1995.
[5] Yoshua Bengio. Artificial neural networks and their application to sequence recognition. 1991.
[6] Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS, volume 99, pages 400–406, 1999.
[7] Mathias Berglund and Tapani Raiko. Stochastic gradient estimate variance in contrastive divergence and persistent contrastive divergence. arXiv preprint arXiv:1312.6002, 2013.
[8] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
[9] Joan Bruna, Pablo Sprechmann, and Yann LeCun. Super-resolution with deep convolutional sufficient statistics. arXiv preprint arXiv:1511.05666, 2015.
[10] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
[11] Scott Shaobing Chen and Ramesh A Gopinath. Gaussianization. In Advances in Neural Information Processing Systems, 2000.
[12] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pages 2962–2970, 2015.
[13] Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The helmholtz machine. Neural computation, 7(5):889–904, 1995.
[14] Gustavo Deco and Wilfried Brauer. Higher order statistical decorrelation without information loss. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 247–254. MIT Press, 1995.
[15] Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems 28:
Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1486–1494, 2015.
[16] Luc Devroye. Sample-based non-uniform random variate generation. In Proceedings of the 18th conference on Winter simulation, pages 260–265. ACM, 1986.
[17] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
[18] Brendan J Frey. Graphical models for machine learning and digital communication. MIT press, 1998.
[19] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 262–270, 2015.
[20] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: masked autoencoder for distribution estimation. CoRR, abs/1502.03509, 2015.
[21] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2672–2680, 2014.
[22] Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards conceptual compression. arXiv preprint arXiv:1604.08772, 2016.
[23] Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep q-learning with model-based acceleration. arXiv preprint arXiv:1603.00748, 2016.
[24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
[25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. CoRR, abs/1603.05027, 2016.
[26] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[27] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[28] Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. Independent component analysis, volume 46. John Wiley & Sons, 2004.
[29] Aapo Hyvärinen and Petteri Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12(3):429–439, 1999.
[30] Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
[31] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[32] Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016.
[33] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[34] Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.
[35] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[36] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.
[37] Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.
[38] Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. CoRR, abs/1512.09300, 2015.
[39] Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural networks: Tricks of the trade, pages 9–48. Springer, 2012.
[40] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. arXiv preprint arXiv:1409.5185, 2014.
[41] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
[42] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
[43] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
[44] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[45] Radford M Neal and Geoffrey E Hinton. A view of the em algorithm that justifies incremental, sparse, and other variants. In Learning in graphical models, pages 355–368. Springer, 1998.
[46] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
[47] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
[48] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
[49] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[50] Oren Rippel and Ryan Prescott Adams. High-dimensional probability estimation with deep density models. arXiv preprint arXiv:1302.5125, 2013.
[51] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Cognitive modeling, 5(3):1, 1988.
[52] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
[53] Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines. In International conference on artificial intelligence and statistics, pages 448–455, 2009.
[54] Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
[55] Tim Salimans, Diederik P Kingma, and Max Welling. Markov chain monte carlo and variational inference: Bridging the gap. arXiv preprint arXiv:1410.6460, 2014.
[56] Lawrence K Saul, Tommi Jaakkola, and Michael I Jordan. Mean field theory for sigmoid belief networks. Journal of artificial intelligence research, 4(1):61–76, 1996.
[57] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recogni- tion. arXiv preprint arXiv:1409.1556, 2014.
[58] Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, DTIC Document, 1986.
[59] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 2256–2265, 2015.
[60] Sasha Targ, Diogo Almeida, and Kevin Lyman. Resnet in resnet: Generalizing residual architectures. CoRR, abs/1603.08029, 2016.
[61] Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In Advances in Neural Information Processing Systems, pages 1918–1926, 2015.
[62] Lucas Theis, Aäron Van Den Oord, and Matthias Bethge. A note on the evaluation of generative models. CoRR, abs/1511.01844, 2015.
[63] Dustin Tran, Rajesh Ranganath, and David M Blei. Variational gaussian process. arXiv preprint arXiv:1511.06499, 2015.
[64] Benigno Uria, Iain Murray, and Hugo Larochelle. RNADE: The real-valued neural autoregressive density-estimator. In Advances in Neural Information Processing Systems, pages 2175–2183, 2013.
[65] Hado van Hasselt, Arthur Guez, Matteo Hessel, and David Silver. Learning functions across many orders of magnitudes. arXiv preprint arXiv:1602.07714, 2016.
[66] Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015.
[67] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pages 2728–2736, 2015.
[68] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
[69] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
[70] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
[71] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. arXiv preprint arXiv:1603.08511, 2016.
# A Samples
Figure 7: Samples from a model trained on Imagenet (64 × 64).
Figure 8: Samples from a model trained on CelebA.
Figure 9: Samples from a model trained on LSUN (bedroom category).
Figure 10: Samples from a model trained on LSUN (church outdoor category).
Figure 11: Samples from a model trained on LSUN (tower category).
B Manifold
Figure 12: Manifold from a model trained on Imagenet (64 × 64). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to φ, and the y-axis to φ′, and where φ, φ′ ∈ {0, π/4, · · · , 3π/4}.
Figure 13: Manifold from a model trained on CelebA. Images with red borders are taken from the training set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to φ, and the y-axis to φ′, and where φ, φ′ ∈ {0, π/4, · · · , 3π/4}.
Figure 14: Manifold from a model trained on LSUN (bedroom category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to φ, and the y-axis to φ′, and where φ, φ′ ∈ {0, π/4, · · · , 3π/4}.
Figure 15: Manifold from a model trained on LSUN (church outdoor category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to φ, and the y-axis to φ′, and where φ, φ′ ∈ {0, π/4, · · · , 3π/4}.
Figure 16: Manifold from a model trained on LSUN (tower category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to φ, and the y-axis to φ′, and where φ, φ′ ∈ {0, π/4, · · · , 3π/4}.
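Equation 19 itself is not reproduced in this excerpt, but the captions describe a two-angle trigonometric combination of four anchor latent codes (the red-bordered images, once encoded). The Python sketch below is a minimal illustration consistent with those captions; the function name, the grid construction, and the exact angle set are assumptions rather than the authors' code.

```python
import numpy as np

def manifold_grid(z1, z2, z3, z4, angles=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Span a grid of latent codes from four anchor codes z1..z4.

    Each grid point mixes the anchors with trigonometric weights: the
    x-axis sweeps phi and the y-axis sweeps phi_prime, as in the captions.
    """
    grid = []
    for phi in angles:
        row = []
        for phi_p in angles:
            z = (np.cos(phi) * (np.cos(phi_p) * z1 + np.sin(phi_p) * z2)
                 + np.sin(phi) * (np.cos(phi_p) * z3 + np.sin(phi_p) * z4))
            row.append(z)
        grid.append(row)
    return grid

# Each grid point would then be decoded back to pixel space with the
# model's inverse map to render one cell of the manifold figure.
```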
# C Extrapolation
Inspired by the texture generation work by [19, 61] and the extrapolation test with DCGAN [47], we also evaluate the statistics captured by our model by generating images twice or ten times as large as those present in the dataset. As we can observe in the following figures, our model seems to successfully create a "texture" representation of the dataset while maintaining a spatial smoothness through the image. Our convolutional architecture is only aware of the position of the considered pixel through edge effects in convolutions, therefore our model is similar to a stationary process. This also explains why these samples are more consistent in LSUN, where the training data was obtained using random crops.
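As a minimal sketch of this test: because the architecture is convolutional, a latent tensor can be drawn at a larger spatial size than seen during training and pushed through the inverse map. The `model` object and its `inverse` method below are assumptions of this sketch, not the authors' API.

```python
import torch

@torch.no_grad()
def sample_larger(model, factor=2, base_hw=64, channels=3, batch=4):
    """Draw samples `factor` times larger than the training resolution.

    Assumes a fully convolutional flow whose latent space has the same
    spatial layout as the image, with a standard Gaussian prior.
    """
    hw = base_hw * factor
    z = torch.randn(batch, channels, hw, hw)  # larger-than-training latent
    return model.inverse(z)                   # hypothetical inverse map
```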
(a) ×2
(b) ×10
Figure 17: We generate samples a factor bigger than the training set image size on Imagenet (64×64).
(a) ×2
(b) ×10
Figure 18: We generate samples a factor bigger than the training set image size on CelebA.
(a) ×2
(b) ×10
Figure 19: We generate samples a factor bigger than the training set image size on LSUN (bedroom category).
(a) ×2
(b) ×10
Figure 20: We generate samples a factor bigger than the training set image size on LSUN (church outdoor category).
(a) ×2
(b) ×10
Figure 21: We generate samples a factor bigger than the training set image size on LSUN (tower category).
# D Latent variables semantics
As in [22], we further try to grasp the semantics of our learned layers' latent variables by doing ablation tests. We infer the latent variables and resample the lowest levels of latent variables from a standard Gaussian, increasing the highest level affected by this resampling. As we can see in the following figures, the semantics of our latent space seem to operate more at a graphical level than at the level of higher concepts. Although the heavy use of convolution improves learning by exploiting image prior knowledge, it is also likely to be responsible for this limitation.
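A minimal sketch of this ablation is given below; it assumes the latent variables come factored out as a list of tensors ordered from lowest (most local) to highest level, and the `encode`/`decode` method names stand in for the forward and inverse maps, which the excerpt does not specify.

```python
import torch

@torch.no_grad()
def conceptual_compression(model, x, keep=0.25):
    """Keep roughly the top `keep` fraction of latent levels; resample the rest.

    `model.encode(x)` is assumed to return latents ordered from lowest to
    highest level, and `model.decode(zs)` to invert the encoding.
    """
    zs = list(model.encode(x))
    n_resample = int(round(len(zs) * (1.0 - keep)))
    for i in range(n_resample):            # lowest levels are resampled first
        zs[i] = torch.randn_like(zs[i])    # draw from the standard Gaussian prior
    return model.decode(zs)
```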
Figure 22: Conceptual compression from a model trained on Imagenet (64 × 64). The leftmost column represents the original image; the subsequent columns were obtained by storing higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.

Figure 23: Conceptual compression from a model trained on CelebA. The leftmost column represents the original image; the subsequent columns were obtained by storing higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.
Figure 24: Conceptual compression from a model trained on LSUN (bedroom category). The leftmost column represents the original image; the subsequent columns were obtained by storing higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.

Figure 25: Conceptual compression from a model trained on LSUN (church outdoor category). The leftmost column represents the original image; the subsequent columns were obtained by storing higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.
Figure 26: Conceptual compression from a model trained on LSUN (tower category). The leftmost column represents the original image; the subsequent columns were obtained by storing higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.
# E Batch normalization
We further experimented with batch normalization by using a weighted average of a moving average of the layer statistics $\tilde{\mu}_t, \tilde{\sigma}^2_t$ and the current batch statistics $\hat{\mu}_t, \hat{\sigma}^2_t$:

$$\tilde{\mu}_{t+1} = \rho \tilde{\mu}_t + (1 - \rho) \hat{\mu}_t \qquad (20)$$

$$\tilde{\sigma}^2_{t+1} = \rho \tilde{\sigma}^2_t + (1 - \rho) \hat{\sigma}^2_t \qquad (21)$$

where $\rho$ is the momentum. When using $\tilde{\mu}_{t+1}, \tilde{\sigma}^2_{t+1}$, we only propagate gradient through the current batch statistics $\hat{\mu}_t, \hat{\sigma}^2_t$. We observe that using this lag helps the model train with very small minibatches.
We used batch normalization with a moving average for our results on CIFAR-10.
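A minimal NumPy sketch of the update in Equations (20)–(21) follows; the momentum value and the small epsilon added for numerical stability are assumptions, and the stop-gradient behaviour is only indicated in a comment since plain NumPy has no autograd.

```python
import numpy as np

def lagged_batchnorm_step(mu_tilde, var_tilde, x_batch, rho=0.99, eps=1e-5):
    """One update of the lagged batch-norm statistics, Eqs. (20)-(21).

    x_batch has shape (batch, features); mu_tilde and var_tilde are the
    moving averages carried between steps.
    """
    mu_hat = x_batch.mean(axis=0)    # current batch statistics
    var_hat = x_batch.var(axis=0)
    mu_tilde = rho * mu_tilde + (1.0 - rho) * mu_hat
    var_tilde = rho * var_tilde + (1.0 - rho) * var_hat
    # Normalise with the lagged statistics; in an autograd framework,
    # gradients would flow only through mu_hat / var_hat (the current
    # batch), treating the moving-average contribution as a constant.
    x_norm = (x_batch - mu_tilde) / np.sqrt(var_tilde + eps)
    return x_norm, mu_tilde, var_tilde
```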
# F Attribute change
Additionally, we exploit the attribute information y in CelebA to build a conditional model, i.e. the invertible function f from image to latent variable uses the labels in y to define its parameters. In order to observe the information stored in the latent variables, we choose to encode a batch of images x with their original attributes y and decode them using a new set of attributes y′, built by shuffling the original attributes inside the batch. We obtain the new images x′ = g(f(x; y); y′).
We observe that, although the faces are changed so as to respect the new attributes, several properties remain unchanged, like position and background.
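A minimal sketch of this attribute-swapping procedure follows; the call signatures of the conditional maps f and g are assumptions of this sketch.

```python
import torch

@torch.no_grad()
def swap_attributes(f, g, x, y):
    """Re-decode a batch with shuffled attributes: x' = g(f(x; y); y')."""
    z = f(x, y)                        # encode with the original attributes
    perm = torch.randperm(y.size(0))   # shuffle attributes inside the batch
    y_prime = y[perm]
    return g(z, y_prime)               # decode under the new attributes
```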
Figure 27: Examples x from the CelebA dataset.
Figure 28: From a model trained on pairs of images and attributes from the CelebA dataset, we encode a batch of images with their original attributes before decoding them with a new set of attributes. We notice that the new images often share similar characteristics with those in Fig 27, including position and background.
| {
"id": "1602.05110"
} |
1605.07725 | Adversarial Training Methods for Semi-Supervised Text Classification | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text. | http://arxiv.org/pdf/1605.07725 | Takeru Miyato, Andrew M. Dai, Ian Goodfellow | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160525 | 20211116 |
Published as a conference paper at ICLR 2017
# ADVERSARIAL TRAINING METHODS FOR SEMI-SUPERVISED TEXT CLASSIFICATION
Takeru Miyato1,2∗, Andrew M Dai2, Ian Goodfellow3 takeru.miyato@gmail.com, adai@google.com, ian@openai.com 1 Preferred Networks, Inc., ATR Cognitive Mechanisms Laboratories, Kyoto University 2 Google Brain 3 OpenAI
# ABSTRACT
Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself. The proposed method achieves state of the art results on multiple benchmark semi-supervised and purely supervised tasks. We provide visualizations and analysis showing that the learned word embeddings have improved in quality and that while training, the model is less prone to overfitting. Code is available at https://github.com/tensorflow/models/tree/master/research/adversarial_text.
# 1 INTRODUCTION
Adversarial examples are examples that are created by making small perturbations to the input designed to significantly increase the loss incurred by a machine learning model (Szegedy et al., 2014; Goodfellow et al., 2015). Several models, including state of the art convolutional neural networks, lack the ability to classify adversarial examples correctly, sometimes even when the adversarial perturbation is constrained to be so small that a human observer cannot perceive it. Adversarial training is the process of training a model to correctly classify both unmodified examples and adversarial examples. It improves not only robustness to adversarial examples, but also generalization performance for original examples. Adversarial training requires the use of labels when training models that use a supervised cost, because the label appears in the cost function that the adversarial perturbation is designed to maximize. Virtual adversarial training (Miyato et al., 2016) extends the idea of adversarial training to the semi-supervised regime and unlabeled examples. This is done by regularizing the model so that given an example, the model will produce the same output distribution as it produces on an adversarial perturbation of that example. Virtual adversarial training achieves good generalization performance for both supervised and semi-supervised learning tasks.

Previous work has primarily applied adversarial and virtual adversarial training to image classification tasks. In this work, we extend these techniques to text classification tasks and sequence models. Adversarial perturbations typically consist of making small modifications to very many real-valued inputs. For text classification, the input is discrete, and usually represented as a series of high-dimensional one-hot vectors. Because the set of high-dimensional one-hot vectors does not admit infinitesimal perturbation, we define the perturbation on continuous word embeddings instead of discrete word inputs. Traditional adversarial and virtual adversarial training can be interpreted both as a regularization strategy (Szegedy et al., 2014; Goodfellow et al., 2015; Miyato et al., 2016) and as defense against an adversary who can supply malicious inputs (Szegedy et al., 2014; Goodfellow et al., 2015). Since the perturbed embedding does not map to any word and the adversary presumably does not have access to the word embedding layer, our proposed training strategy is no longer intended as
∗This work was done when the author was at Google Brain.
a defense against an adversary. We thus propose this approach exclusively as a means of regularizing a text classifier by stabilizing the classification function.

We show that our approach with neural language model unsupervised pretraining as proposed by Dai & Le (2015) achieves state of the art performance for multiple semi-supervised text classification tasks, including sentiment classification and topic classification. We emphasize that optimization of only one additional hyperparameter ε, the norm constraint limiting the size of the adversarial perturbations, achieved such state of the art performance. These results strongly encourage the use of our proposed method for other text classification tasks. We believe that text classification is an ideal setting for semi-supervised learning because there are abundant unlabeled corpora for semi-supervised learning algorithms to leverage. This work is the first work we know of to use adversarial and virtual adversarial training to improve a text or RNN model.

We also analyzed the trained models to qualitatively characterize the effect of adversarial and virtual adversarial training. We found that adversarial and virtual adversarial training improved word embeddings over the baseline methods.
# 2 MODEL
We denote a sequence of T words as {w(t) | t = 1, . . . , T}, and a corresponding target as y. To transform a discrete word input to a continuous vector, we define the word embedding matrix V ∈ R^{(K+1)×D} where K is the number of words in the vocabulary and each row v_k corresponds to the word embedding of the k-th word. Note that the (K + 1)-th word embedding is used as an embedding of an "end of sequence (eos)" token, v_eos. As a text classification model, we used a simple LSTM-based neural network model, shown in Figure 1a. At time step t, the input is the discrete word w(t), and the corresponding word embedding is v(t). We additionally tried the bidirectional
(a) LSTM-based text classification model. (b) The model with perturbed embeddings.
Figure 1: Text classification models with clean embeddings (a) and with perturbed embeddings (b).

LSTM architecture (Graves & Schmidhuber, 2005) since this is used by the current state of the art method (Johnson & Zhang, 2016b). For constructing the bidirectional LSTM model for text classification, we add an additional LSTM on the reversed sequence to the unidirectional LSTM model described in Figure 1. The model then predicts the label on the concatenated LSTM outputs of both ends of the sequence.

In adversarial and virtual adversarial training, we train the classifier to be robust to perturbations of the embeddings, shown in Figure 1b. These perturbations are described in detail in Section 3. At present, it is sufficient to understand that the perturbations are of bounded norm. The model could trivially learn to make the perturbations insignificant by learning embeddings with very large norm. To prevent this pathological solution, when we apply adversarial and virtual adversarial training to the model we defined above, we replace the embeddings v_k with normalized embeddings v̄_k, defined as:
$$\bar{v}_k = \frac{v_k - \mathrm{E}(v)}{\sqrt{\mathrm{Var}(v)}} \quad \text{where} \quad \mathrm{E}(v) = \sum_{j=1}^{K} f_j v_j, \qquad \mathrm{Var}(v) = \sum_{j=1}^{K} f_j \left(v_j - \mathrm{E}(v)\right)^2, \qquad (1)$$
where $f_i$ is the frequency of the $i$-th word, calculated within all training examples.
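A minimal NumPy sketch of the normalization in Eq. (1) follows; the array shapes and the assumption that the frequencies sum to one are ours.

```python
import numpy as np

def normalize_embeddings(V, freqs):
    """Frequency-weighted normalization of word embeddings, Eq. (1).

    V has shape (K, D); freqs holds each word's frequency over the
    training examples and is assumed to sum to one.
    """
    f = freqs[:, None]                                       # (K, 1)
    mean = (f * V).sum(axis=0, keepdims=True)                # E(v)
    var = (f * (V - mean) ** 2).sum(axis=0, keepdims=True)   # Var(v)
    return (V - mean) / np.sqrt(var)
```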
# 3 ADVERSARIAL AND VIRTUAL ADVERSARIAL TRAINING
Adversarial training (Goodfellow et al., 2015) is a novel regularization method for classifiers to improve robustness to small, approximately worst case perturbations. Let us denote x as the input and θ as the parameters of a classifier. When applied to a classifier, adversarial training adds the following term to the cost function:
$$-\log p(y \mid x + r_{\mathrm{adv}}; \theta) \quad \text{where} \quad r_{\mathrm{adv}} = \operatorname*{arg\,min}_{r,\, \|r\| \le \epsilon} \log p(y \mid x + r; \hat{\theta}) \qquad (2)$$
where r is a perturbation on the input and θ̂ is a constant set to the current parameters of the classifier. The use of the constant copy θ̂ rather than θ indicates that the backpropagation algorithm should not be used to propagate gradients through the adversarial example construction process. At each step of training, we identify the worst case perturbations r_adv against the current model p(y | x; θ̂) in Eq. (2), and train the model to be robust to such perturbations by minimizing Eq. (2) with respect to θ. However, we cannot calculate this value exactly in general, because exact minimization with respect to r is intractable for many interesting models such as neural networks. Goodfellow et al. (2015) proposed to approximate this value by linearizing log p(y | x; θ̂) around x. With a linear approximation and an L2 norm constraint in Eq. (2), the resulting adversarial perturbation is
$$r_{\mathrm{adv}} = -\epsilon\, g / \|g\|_2 \quad \text{where} \quad g = \nabla_x \log p(y \mid x; \hat{\theta}).$$
This perturbation can be easily computed using backpropagation in neural networks.
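A minimal PyTorch sketch of this computation is shown below, written for a generic differentiable input (the same recipe applies to the embedding sequence s of Eq. (5) later in this section); `logits_fn` is a stand-in for the classifier and the value of ε is illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_perturbation(logits_fn, x, y, epsilon=5.0):
    """Compute r_adv = -eps * g / ||g||_2 with g the gradient of log p(y|x).

    Since the cross-entropy loss is -log p(y|x), its gradient is -g and
    the two minus signs cancel: the perturbation points along the loss
    gradient. The norm here is taken over the whole tensor for simplicity.
    """
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(logits_fn(x), y)   # -log p(y | x; theta_hat)
    g, = torch.autograd.grad(loss, x)
    return epsilon * g / (g.norm() + 1e-12)
```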
Virtual adversarial training (Miyato et al., 2016) is a regularization method closely related to adversarial training. The additional cost introduced by virtual adversarial training is the following:
$$\mathrm{KL}\!\left[p(\cdot \mid x; \hat{\theta})\,\big\|\,p(\cdot \mid x + r_{\text{v-adv}}; \theta)\right] \qquad (3)$$

$$\text{where} \quad r_{\text{v-adv}} = \operatorname*{arg\,max}_{r,\, \|r\| \le \epsilon} \mathrm{KL}\!\left[p(\cdot \mid x; \hat{\theta})\,\big\|\,p(\cdot \mid x + r; \hat{\theta})\right] \qquad (4)$$
where KL[p‖q] denotes the KL divergence between distributions p and q. By minimizing Eq. (3), a classifier is trained to be smooth. This can be considered as making the classifier resistant to perturbations in directions to which it is most sensitive on the current model p(y | x; θ̂). Virtual adversarial loss Eq. (3) requires only the input x and does not require the actual label y, while adversarial loss defined in Eq. (2) requires the label y. This makes it possible to apply virtual adversarial training to semi-supervised learning. Although we also in general cannot analytically calculate the virtual adversarial loss, Miyato et al. (2016) proposed to calculate the approximated Eq. (3) efficiently with backpropagation.

As described in Sec. 2, in our work, we apply the adversarial perturbation to word embeddings, rather than directly to the input. To define adversarial perturbation on the word embeddings, let us denote a concatenation of a sequence of (normalized) word embedding vectors [v̄(1), v̄(2), . . . , v̄(T)] as s, and the model conditional probability of y given s as p(y | s; θ) where θ are model parameters. Then we define the adversarial perturbation r_adv on s as:
$$r_{\mathrm{adv}} = -\epsilon\, g / \|g\|_2 \quad \text{where} \quad g = \nabla_s \log p(y \mid s; \hat{\theta}). \qquad (5)$$
To be robust to the adversarial perturbation defined in Eq. (5), we define the adversarial loss by
$$L_{\mathrm{adv}}(\theta) = -\frac{1}{N} \sum_{n=1}^{N} \log p(y_n \mid s_n + r_{\mathrm{adv},n}; \theta) \qquad (6)$$
where N is the number of labeled examples. In our experiments, adversarial training refers to minimizing the negative log-likelihood plus L_adv with stochastic gradient descent.
In virtual adversarial training on our text classification model, at each training step, we calculate the below approximated virtual adversarial perturbation:
$$r_{\text{v-adv}} = \epsilon\, g / \|g\|_2 \quad \text{where} \quad g = \nabla_{s+d}\, \mathrm{KL}\!\left[p(\cdot \mid s; \hat{\theta})\,\big\|\,p(\cdot \mid s + d; \hat{\theta})\right] \qquad (7)$$
where d is a TD-dimensional small random vector. This approximation corresponds to a second-order Taylor expansion and a single iteration of the power method on Eq. (3) as in previous work (Miyato et al., 2016). Then the virtual adversarial loss is defined as:
$$L_{\text{v-adv}}(\theta) = \frac{1}{N'} \sum_{n'=1}^{N'} \mathrm{KL}\!\left[p(\cdot \mid s_{n'}; \hat{\theta})\,\big\|\,p(\cdot \mid s_{n'} + r_{\text{v-adv},n'}; \theta)\right] \qquad (8)$$
where N′ is the number of both labeled and unlabeled examples.
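A minimal PyTorch sketch of the single power-iteration approximation in Eqs. (7)–(8) follows; `logits_fn`, the scale ξ of the random starting direction, and ε are assumptions of this sketch rather than values from the paper.

```python
import torch
import torch.nn.functional as F

def virtual_adversarial_perturbation(logits_fn, s, epsilon=5.0, xi=1e-6):
    """One power iteration for r_v-adv, Eq. (7)."""
    s = s.detach()
    with torch.no_grad():
        p = F.softmax(logits_fn(s), dim=-1)          # p(. | s; theta_hat)
    d = xi * F.normalize(torch.randn_like(s), dim=-1)
    d.requires_grad_(True)
    log_q = F.log_softmax(logits_fn(s + d), dim=-1)  # p(. | s + d; theta_hat)
    kl = F.kl_div(log_q, p, reduction="batchmean")   # KL[p || q]
    g, = torch.autograd.grad(kl, d)                  # gradient w.r.t. s + d
    return epsilon * g / (g.norm() + 1e-12)
```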
See Warde-Farley & Goodfellow (2016) for a recent review of adversarial training methods.
# 4 EXPERIMENTAL SETTINGS
All experiments used TensorFlow (Abadi et al., 2016) on GPUs. To compare our method with other text classification methods, we tested on 5 different text datasets. We summarize information about each dataset in Table 1.

IMDB (Maas et al., 2011)¹ is a standard benchmark movie review dataset for sentiment classification. Elec (Johnson & Zhang, 2015b)² ³ is an Amazon electronic product review dataset. Rotten Tomatoes (Pang & Lee, 2005) consists of short snippets of movie reviews, for sentiment classification. The Rotten Tomatoes dataset does not come with separate test sets, thus we divided all examples randomly into 90% for the training set, and 10% for the test set. We repeated training and evaluation five times with different random seeds for the division. For the Rotten Tomatoes dataset, we also collected unlabeled examples using movie reviews from the Amazon Reviews dataset (McAuley & Leskovec, 2013)⁴. DBpedia (Lehmann et al., 2015; Zhang et al., 2015) is a dataset of Wikipedia pages for category classification. Because the DBpedia dataset has no additional unlabeled examples, the results on DBpedia are for the supervised learning task only. RCV1 (Lewis et al., 2004) consists of news articles from the Reuters Corpus. For the RCV1 dataset, we followed previous works (Johnson & Zhang, 2015b) and we conducted a single topic classification task on the second level topics. We used the same division into training, test and unlabeled sets as Johnson & Zhang (2015b). Regarding pre-processing, we treated any punctuation as spaces. We converted all words to lower-case on the Rotten Tomatoes, DBpedia, and RCV1 datasets. We removed words which appear in only one document on all datasets. On RCV1, we also removed words in the English stop-words list provided by Lewis et al. (2004)⁵.
Table 1: Summary of datasets. Note that unlabeled examples for the Rotten Tomatoes dataset are not provided so we instead use the unlabeled Amazon reviews dataset.
| Dataset | Classes | Train | Test | Unlabeled | Avg. T | Max T |
|---|---|---|---|---|---|---|
| IMDB | 2 | 25,000 | 25,000 | 50,000 | 239 | 2,506 |
| Elec | 2 | 24,792 | 24,897 | 197,025 | 110 | 5,123 |
| Rotten Tomatoes | 2 | 9,596 | 1,066 | 7,911,684 | 20 | 54 |
| DBpedia | 14 | 560,000 | 70,000 | — | 49 | 953 |
| RCV1 | 55 | 15,564 | 49,838 | 668,640 | 153 | 9,852 |
4.1 RECURRENT LANGUAGE MODEL PRE-TRAINING
Following Dai & Le (2015), we initialized the word embedding matrix and LSTM weights with a pre-trained recurrent language model (Bengio et al., 2006; Mikolov et al., 2010) that was trained on
¹ http://ai.stanford.edu/~amaas/data/sentiment/
² http://riejohnson.com/cnn_data.html
³ There are some duplicated reviews in the original Elec dataset, and we used the dataset with the duplicated reviews removed, provided by Johnson & Zhang (2015b); thus there are slightly fewer examples shown in Table 1 than in previous works (Johnson & Zhang, 2015b; 2016b).
⁴ http://snap.stanford.edu/data/web-Amazon.html
⁵ http://www.ai.mit.edu/projects/jmlr/papers/volume5/lewis04a/lyrl2004_rcv1v2_README.htm
both labeled and unlabeled examples. We used a unidirectional single-layer LSTM with 1024 hidden units. The word embedding dimension D was 256 on IMDB and 512 on the other datasets. We used a sampled softmax loss with 1024 candidate samples for training. For the optimization, we used the Adam optimizer (Kingma & Ba, 2015), with batch size 256, an initial learning rate of 0.001, and a 0.9999 learning rate exponential decay factor at each training step. We trained for 100,000 steps. We applied gradient clipping with norm set to 1.0 on all the parameters except word embeddings. To reduce runtime on GPU, we used truncated backpropagation up to 400 words from each end of the sequence. For regularization of the recurrent language model, we applied dropout (Srivastava et al., 2014) on the word embedding layer with 0.5 dropout rate.
For the bidirectional LSTM model, we used 512 hidden units LSTM for both the standard order and reversed order sequences, and we used 256 dimensional word embeddings which are shared with both of the LSTMs. The other hyperparameters are the same as for the unidirectional LSTM. We tested the bidirectional LSTM model on IMDB, Elec and RCV1 because there are relatively long sentences in the datasets.
Pretraining with a recurrent language model was very effective on classification performance on all the datasets we tested on and so our results in Section 5 are with this pretraining.
4.2 TRAINING CLASSIFICATION MODELS
After pre-training, we trained the text classification model shown in Figure 1a with adversarial and virtual adversarial training as described in Section 3. Between the softmax layer for the target y and the final output of the LSTM, we added a hidden layer, which has dimension 30 on IMDB, Elec and Rotten Tomatoes, and 128 on DBpedia and RCV1. The activation function on the hidden layer was ReLU (Jarrett et al., 2009; Nair & Hinton, 2010; Glorot et al., 2011). For optimization, we again used the Adam optimizer, with a 0.0005 initial learning rate and 0.9998 exponential decay. Batch sizes are 64 on IMDB, Elec, RCV1, and 128 on DBpedia. For the Rotten Tomatoes dataset, for each step, we take a batch of size 64 for calculating the loss of the negative log-likelihood and adversarial training, and 512 for calculating the loss of virtual adversarial training. Also for Rotten Tomatoes, we used texts with lengths T less than 25 in the unlabeled dataset. We iterated 10,000 training steps on all datasets except IMDB and DBpedia, for which we used 15,000 and 20,000 training steps respectively. We again applied gradient clipping with the norm as 1.0 on all the parameters except the word embedding. We also used truncated backpropagation up to 400 words, and also generated the adversarial and virtual adversarial perturbation up to 400 words from each end of the sequence.

We found the bidirectional LSTM to converge more slowly, so we iterated for 15,000 training steps when training the bidirectional LSTM classification model.

For each dataset, we divided the original training set into training set and validation set, and we roughly optimized some hyperparameters shared with all of the methods (model architecture, batch size, training steps) with the validation performance of the base model with embedding dropout. For each method, we optimized two scalar hyperparameters with the validation set. These were the dropout rate on the embeddings and the norm constraint ε of adversarial and virtual adversarial training. Note that for adversarial and virtual adversarial training, we generate the perturbation after applying embedding dropout, which we found performed the best. We did not do early stopping with these methods. The method with only pretraining and embedding dropout is used as the baseline (referred to as Baseline in each table).
# 5 RESULTS
5.1 TEST PERFORMANCE ON IMDB DATASET AND MODEL ANALYSIS
Figure 2 shows the learning curves on the IMDB test set with the baseline method (only embedding dropout and pretraining), adversarial training, and virtual adversarial training. We can see in Figure 2a that adversarial and virtual adversarial training achieved lower negative log likelihood than the baseline. Furthermore, virtual adversarial training, which can utilize unlabeled data, maintained this low negative log-likelihood while the other methods began to overfit later in training. Regarding adversarial and virtual adversarial loss in Figure 2b and 2c, we can see the same tendency as for negative log likelihood; virtual adversarial training was able to keep these values lower than other
methods. Because adversarial training operates only on the labeled subset of the training data, it eventually overfits even the task of resisting adversarial perturbations.
(a) Negative log likelihood (b) $L_{\mathrm{adv}}(\theta)$ (c) $L_{\text{v-adv}}(\theta)$
Figure 2: Learning curves of (a) negative log likelihood, (b) adversarial loss (defined in Eq. (6)) and (c) virtual adversarial loss (defined in Eq. (8)) on IMDB. All values were evaluated on the test set. Adversarial and virtual adversarial loss were evaluated with ε = 5.0. The optimal value of ε differs between adversarial training and virtual adversarial training, but the value of 5.0 performs very well for both and provides a consistent point of comparison.

Table 2 shows the test performance on IMDB with each training method. "Adversarial + Virtual Adversarial" means the method with both adversarial and virtual adversarial loss with the shared norm constraint ε. With only embedding dropout, our model achieved a 7.39% error rate. Adversarial and virtual adversarial training improved the performance relative to our baseline, and virtual adversarial training achieved performance on par with the state of the art, 5.91% error rate. This is despite the fact that the state of the art model requires training a bidirectional LSTM whereas our model only uses a unidirectional LSTM. We also show results with a bidirectional LSTM. Our bidirectional LSTM model has the same performance as a unidirectional LSTM with virtual adversarial training.
A common misconception is that adversarial training is equivalent to training on noisy examples. Noise is actually a far weaker regularizer than adversarial perturbations because, in high-dimensional input spaces, an average noise vector is approximately orthogonal to the cost gradient. Adversarial perturbations are explicitly chosen to consistently increase the cost. To demonstrate the superiority of adversarial training over the addition of noise, we include control experiments which replaced adversarial perturbations with random perturbations from a multivariate Gaussian with scaled norm, on each embedding in the sequence. In Table 2, "Random perturbation with labeled examples" is the method in which we replace r_adv with random perturbations, and "Random perturbation with labeled and unlabeled examples" is the method in which we replace r_v-adv with random perturbations. Every adversarial training method outperformed every random perturbation method.
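As a minimal sketch, the control perturbation can be drawn as follows; matching the norm of the adversarial perturbation is the only constraint, and the function name is illustrative.

```python
import torch

def random_perturbation(s, epsilon=5.0):
    """Control experiment: a random Gaussian direction scaled to norm eps,
    standing in for r_adv or r_v-adv (Section 5.1)."""
    d = torch.randn_like(s)
    return epsilon * d / (d.norm() + 1e-12)
```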
To visualize the effect of adversarial and virtual adversarial training on embeddings, we examined embeddings trained using each method. Table 3 shows the 10 top nearest neighbors to "good" and "bad" with trained embeddings. The baseline and random methods are both strongly influenced by the grammatical structure of language, due to the language model pretraining step, but are not strongly influenced by the semantics of the text classification task. For example, "bad" appears in the list of nearest neighbors to "good" on the baseline and the random perturbation method. Both "bad" and "good" are adjectives that can modify the same set of nouns, so it is reasonable for a language model to assign them similar embeddings, but this clearly does not convey much information about the actual meaning of the words. Adversarial training ensures that the meaning of a sentence cannot be inverted via a small change, so these words with similar grammatical role but different meaning become separated. When using adversarial and virtual adversarial training, "bad" no longer appears in the 10 top nearest neighbors to "good". "bad" falls to the 19th nearest neighbor for adversarial training and 21st nearest neighbor for virtual adversarial training, with cosine distances of 0.463 and 0.464, respectively. For the baseline and random perturbation method, the cosine distances were 0.361 and 0.377, respectively. In the other direction, the nearest neighbors to "bad" included "good" as the 4th nearest neighbor for the baseline method and random perturbation method. For both adversarial methods, "good" drops to the 36th nearest neighbor of "bad".

We also investigated the 15 nearest neighbors to "great" and its cosine distances with the trained embeddings. We saw that cosine distances on adversarial and virtual adversarial training (0.159–0.331) were much smaller than those on the baseline and random perturbation method (0.244–0.399).
Table 2: Test performance on the IMDB sentiment classification task. * indicates using pretrained embeddings of CNN and bidirectional LSTM.

| Method | Test error rate |
|---|---|
| Baseline (without embedding normalization) | 7.33% |
| Baseline | 7.39% |
| Random perturbation with labeled examples | 7.20% |
| Random perturbation with labeled and unlabeled examples | 6.78% |
| Adversarial | 6.21% |
| Virtual Adversarial | 5.91% |
| Adversarial + Virtual Adversarial | 6.09% |
| Virtual Adversarial (on bidirectional LSTM) | 5.91% |
| Adversarial + Virtual Adversarial (on bidirectional LSTM) | 6.02% |
| Full+Unlabeled+BoW (Maas et al., 2011) | 11.11% |
| Transductive SVM (Johnson & Zhang, 2015b) | 9.99% |
| NBSVM-bigrams (Wang & Manning, 2012) | 8.78% |
| Paragraph Vectors (Le & Mikolov, 2014) | 7.42% |
| SA-LSTM (Dai & Le, 2015) | 7.24% |
| One-hot bi-LSTM* (Johnson & Zhang, 2016b) | 5.94% |
Table 3: 10 top nearest neighbors to "good" and "bad" with the word embeddings trained on each method. We used cosine distance for the metric. "Baseline" means training with embedding dropout and "Random" means training with random perturbation with labeled examples. "Adversarial" and "Virtual Adversarial" mean adversarial training and virtual adversarial training.

| | "good": Baseline | Random | Adversarial | Virtual Adversarial | "bad": Baseline | Random | Adversarial | Virtual Adversarial |
|---|---|---|---|---|---|---|---|---|
| 1 | great | great | decent | decent | terrible | terrible | terrible | terrible |
| 2 | decent | decent | great | great | awful | awful | awful | awful |
| 3 | bad | excellent | nice | nice | horrible | horrible | horrible | horrible |
| 4 | excellent | nice | fine | fine | good | good | poor | poor |
| 5 | Good | Good | entertaining | entertaining | Bad | poor | BAD | BAD |
| 6 | fine | bad | interesting | interesting | BAD | BAD | stupid | stupid |
| 7 | nice | fine | Good | Good | poor | Bad | Bad | Bad |
| 8 | interesting | interesting | excellent | cool | stupid | stupid | laughable | laughable |
| 9 | solid | entertaining | solid | enjoyable | Horrible | Horrible | lame | lame |
| 10 | entertaining | solid | cool | excellent | horrendous | horrendous | Horrible | Horrible |

The much weaker positive word "good" also moved from the 3rd nearest neighbor to the 15th after virtual adversarial training.
5.2 TEST PERFORMANCE ON ELEC, RCV1 AND ROTTEN TOMATOES DATASET
Table 4 shows the test performance on the Elec and RCV1 datasets. We can see our proposed method improved test performance over the baseline method and achieved state of the art performance on both datasets, even though the state of the art method uses a combination of CNN and bidirectional LSTM models. Our unidirectional LSTM model improves on the state of the art method, and our method with a bidirectional LSTM further improves results on RCV1. The bidirectional models likely perform better on the RCV1 dataset because it contains some very long sentences compared with the other datasets, and the bidirectional model can better handle such long sentences through the shorter dependencies available from the reverse-order sequence.

Table 5 shows test performance on the Rotten Tomatoes dataset. Adversarial training was able to improve over the baseline method, and with both adversarial and virtual adversarial cost, achieved almost the same performance as the current state of the art method. However, the test performance of only virtual adversarial training was worse than the baseline. We speculate that this is because the Rotten Tomatoes dataset has very few labeled sentences and the labeled sentences are very short.
Table 4: Test performance on the Elec and RCV1 classification tasks. * indicates using pretrained embeddings of CNN, and † indicates using pretrained embeddings of CNN and bidirectional LSTM.

| Method | Elec | RCV1 |
|---|---|---|
| Baseline | 6.24% | 7.40% |
| Adversarial | 5.61% | 7.12% |
| Virtual Adversarial | 5.54% | 7.05% |
| Adversarial + Virtual Adversarial | 5.40% | 6.97% |
| Virtual Adversarial (on bidirectional LSTM) | 5.55% | 6.71% |
| Adversarial + Virtual Adversarial (on bidirectional LSTM) | 5.45% | 6.68% |
| Transductive SVM (Johnson & Zhang, 2015b) | 16.41% | 10.77% |
| NBLM (Naïve Bayes logistic regression model) (Johnson & Zhang, 2015a) | 8.11% | 13.97% |
| One-hot CNN* (Johnson & Zhang, 2015b) | 6.27% | 7.71% |
| One-hot CNN† (Johnson & Zhang, 2016b) | 5.87% | 7.15% |
| One-hot bi-LSTM† (Johnson & Zhang, 2016b) | 5.55% | 8.52% |
In this case, the virtual adversarial loss on unlabeled examples overwhelmed the supervised loss, so the model prioritized being robust to perturbation rather than obtaining the correct answer.
Table 5: Test performance on the Rotten Tomatoes sentiment classification task. * indicates using pretrained embeddings from word2vec Google News, and † indicates using unlabeled data from Amazon reviews.

| Method | Test error rate |
|---|---|
| Baseline | 17.9% |
| Adversarial | 16.8% |
| Virtual Adversarial | 19.1% |
| Adversarial + Virtual Adversarial | 16.6% |
| NBSVM-bigrams (Wang & Manning, 2012) | 20.6% |
| CNN* (Kim, 2014) | 18.5% |
| AdaSent* (Zhao et al., 2015) | 16.9% |
| SA-LSTM† (Dai & Le, 2015) | 16.7% |
5.3 PERFORMANCE ON THE DBPEDIA PURELY SUPERVISED CLASSIFICATION TASK
Table 6 shows the test performance of each method on DBpedia. The "Random perturbation" is the same method as the "Random perturbation with labeled examples" explained in Section 5.1. Note that DBpedia has only labeled examples, as we explained in Section 4, so this task is purely supervised learning. We can see that the baseline method has already achieved nearly the current state of the art performance, and our proposed method improves on the baseline method.
# 6 RELATED WORKS
Dropout (Srivastava et al., 2014) is a regularization method widely used for many domains including text. There are some previous works adding random noise to the input and hidden layer during training, to prevent overfitting (e.g. (Sietsma & Dow, 1991; Poole et al., 2013)). However, in our experiments and in previous works (Miyato et al., 2016), training with adversarial and virtual adversarial perturbations outperformed the method with random perturbations.

For semi-supervised learning with neural networks, a common approach, especially in the image domain, is to train a generative model whose latent features may be used as features for classification (e.g. (Hinton et al., 2006; Maaløe et al., 2016)). These models now achieve state of the art
Table 6: Test performance on the DBpedia topic classification task

| Method | Test error rate |
|---|---|
| Baseline (without embedding normalization) | 0.87% |
| Baseline | 0.90% |
| Random perturbation | 0.85% |
| Adversarial | 0.79% |
| Virtual Adversarial | 0.76% |
| Bag-of-words (Zhang et al., 2015) | 3.57% |
| Large-CNN (character-level) (Zhang et al., 2015) | 1.73% |
| SA-LSTM (word-level) (Dai & Le, 2015) | 1.41% |
| N-grams TFIDF (Zhang et al., 2015) | 1.31% |
| SA-LSTM (character-level) (Dai & Le, 2015) | 1.19% |
| Word CNN (Johnson & Zhang, 2016a) | 0.84% |
performance on the image domain. However, these methods require numerous additional hyperparameters with generative models, and the conditions under which the generative model will provide good supervised learning performance are poorly understood. By comparison, adversarial and virtual adversarial training requires only one hyperparameter, and has a straightforward interpretation as robust optimization.

Adversarial and virtual adversarial training resemble some semi-supervised or transductive SVM approaches (Joachims, 1999; Chapelle & Zien, 2005; Collobert et al., 2006; Belkin et al., 2006) in that both families of methods push the decision boundary far from training examples (or in the case of transductive SVMs, test examples). However, adversarial training methods insist on margins on the input space, while SVMs insist on margins on the feature space defined by the kernel function. This property allows adversarial training methods to achieve models with a more flexible function on the space where the margins are imposed. In our experiments (Table 2, 4) and Miyato et al. (2016), adversarial and virtual adversarial training achieve better performance than SVM based methods.

There have also been semi-supervised approaches applied to text classification with both CNNs and RNNs. These approaches utilize "view embeddings" (Johnson & Zhang, 2015b; 2016b) which use the window around a word to generate its embedding. When these are used as a pretrained model for the classification model, they are found to improve generalization performance. These methods and our method are complementary, as we showed that our method improved over a recurrent pretrained language model.
# 7 CONCLUSION
In our experiments, we found that adversarial and virtual adversarial training have good regularization performance in sequence models on text classification tasks. On all datasets, our proposed method exceeded or was on par with the state of the art performance. We also found that adversarial and virtual adversarial training improved not only classification performance but also the quality of word embeddings. These results suggest that our proposed method is promising for other text domain tasks, such as machine translation (Sutskever et al., 2014), learning distributed representations of words or paragraphs (Mikolov et al., 2013; Le & Mikolov, 2014) and question answering tasks. Our approach could also be used for other general sequential tasks, such as for video or speech.
ACKNOWLEDGMENTS
We thank the developers of TensorFlow. We thank the members of the Google Brain team for their warm support and valuable comments. This work is partly supported by NEDO.
# REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. The Journal of Machine Learning Research, 7(Nov):2399–2434, 2006.

Yoshua Bengio, Holger Schwenk, Jean-Sébastien Senécal, Fréderic Morin, and Jean-Luc Gauvain. Neural probabilistic language models. In Innovations in Machine Learning, pp. 137–186. Springer, 2006.

Olivier Chapelle and Alexander Zien. Semi-supervised classification by low density separation. In AISTATS, 2005.

Ronan Collobert, Fabian Sinz, Jason Weston, and Léon Bottou. Large scale transductive SVMs. Journal of Machine Learning Research, 7(Aug):1687–1712, 2006.
Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In NIPS, 2015.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In AISTATS, 2011.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602–610, 2005.

Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527–1554, 2006.

Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multi-stage architecture for object recognition? In ICCV, 2009.

Thorsten Joachims. Transductive inference for text classification using support vector machines. In ICML, 1999.
Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. NAACL HLT, 2015a.
Rie Johnson and Tong Zhang. Semi-supervised convolutional neural networks for text categorization via region embedding. In NIPS, 2015b.
Rie Johnson and Tong Zhang. Convolutional neural networks for text categorization: Shallow word-level vs. deep character-level. arXiv preprint arXiv:1609.00718, 2016a.
Rie Johnson and Tong Zhang. Supervised and semi-supervised text categorization using LSTM for region embeddings. In ICML, 2016b.
Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, 2014.
Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, et al. DBpedia – a large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167–195, 2015.

David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361–397, 2004.
Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. In ICML, 2016.
Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In ACL: Human Language Technologies-Volume 1, 2011.
Julian McAuley and Jure Leskovec. Hidden factors and hidden topics: understanding rating dimensions with review text. In ACM conference on Recommender systems, 2013.
Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. In ICLR, 2016.
Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, 2005.
Ben Poole, Jascha Sohl-Dickstein, and Surya Ganguli. Analyzing noise in autoencoders and deep networks. In Deep Learning Workshop at NIPS, 2013.

J. Sietsma and R. Dow. Creating artificial neural networks that generalize. Neural Networks, 4(1), 1991.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 2014.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014.
Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL: Short Papers, 2012.
David Warde-Farley and Ian Goodfellow. Adversarial perturbations of deep neural networks. In Tamir Hazan, George Papandreou, and Daniel Tarlow (eds.), Perturbations, Optimization, and Statistics, chapter 11. 2016. Book in preparation for MIT Press.
Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.
Han Zhao, Zhengdong Lu, and Pascal Poupart. Self-adaptive hierarchical sentence model. In IJCAI, 2015.
| {
"id": "1603.04467"
} |
1605.07678 | An Analysis of Deep Neural Network Models for Practical Applications | Since the emergence of Deep Neural Networks (DNNs) as a prominent technique
in the field of computer vision, the ImageNet classification challenge has
played a major role in advancing the state-of-the-art. While accuracy figures
have steadily increased, the resource utilisation of winning models has not
been properly taken into account. In this work, we present a comprehensive
analysis of important metrics in practical applications: accuracy, memory
footprint, parameters, operations count, inference time and power consumption.
Key findings are: (1) power consumption is independent of batch size and
architecture; (2) accuracy and inference time are in a hyperbolic relationship;
(3) energy constraint is an upper bound on the maximum achievable accuracy and
model complexity; (4) the number of operations is a reliable estimate of the
inference time. We believe our analysis provides a compelling set of
information that helps design and engineer efficient DNNs. | http://arxiv.org/pdf/1605.07678 | Alfredo Canziani, Adam Paszke, Eugenio Culurciello | cs.CV | 7 pages, 10 figures, legend for Figure 2 got lost :/ | null | cs.CV | 20160524 | 20170414 |
AN ANALYSIS OF DEEP NEURAL NETWORK MODELS FOR PRACTICAL APPLICATIONS
Alfredo Canziani & Eugenio Culurciello Weldon School of Biomedical Engineering Purdue University {canziani,euge}@purdue.edu
# Adam Paszke Faculty of Mathematics, Informatics and Mechanics University of Warsaw a.paszke@students.mimuw.edu.pl
# ABSTRACT
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
# 1 INTRODUCTION
Since the breakthrough in the 2012 ImageNet competition (Russakovsky et al., 2015) achieved by AlexNet (Krizhevsky et al., 2012) — the first entry that used a Deep Neural Network (DNN) — several other DNNs with increasing complexity have been submitted to the challenge in order to achieve better performance.
In the ImageNet classification challenge, the ultimate goal is to obtain the highest accuracy in a multi-class classification problem framework, regardless of the actual inference time. We believe that this has given rise to several problems. Firstly, it is now normal practice to run several trained instances of a given model over multiple similar instances of each validation image. This practice, also known as model averaging or ensemble of DNNs, dramatically increases the amount of computation required at inference time to achieve the published accuracy. Secondly, model selection is hindered by the fact that different submissions are evaluating their (ensemble of) models a different number of times on the validation images, and therefore the reported accuracy is biased on the specific sampling technique (and ensemble size). Thirdly, there is currently no incentive in speeding up inference time, which is a key element in practical applications of these models, and affects resource utilisation, power-consumption, and latency.
This article aims to compare state-of-the-art DNN architectures, submitted for the ImageNet challenge over the last 4 years, in terms of computational requirements and accuracy. We compare these architectures on multiple metrics related to resource utilisation in actual deployments: accuracy, memory footprint, parameters, operations count, inference time and power consumption. The purpose of this paper is to stress the importance of these figures, which are essential hard constraints for the optimisation of these networks in practical deployments and applications.
# 2 METHODS
In order to compare the quality of different models, we collected and analysed the accuracy values reported in the literature. We immediately found that different sampling techniques do not allow for a direct comparison of resource utilisation. For example, central-crop (top-5 validation) errors of a
Figure 1: Top1 vs. network. Single-crop top-1 validation accuracies for top scoring single-model architectures. We introduce with this chart our choice of colour scheme, which will be used throughout this publication to distinguish effectively different architectures and their corresponding authors. Notice that networks of the same group share the same hue; for example, ResNet are all variations of pink.

Figure 2: Top1 vs. operations, size ∝ parameters. Top-1 one-crop accuracy versus amount of operations required for a single forward pass. The size of the blobs is proportional to the number of network parameters; a legend is reported in the bottom right corner, spanning from 5×10⁶ to 155×10⁶ params. Both these figures share the same y-axis, and the grey dots highlight the centre of the blobs.
single run of VGG-16¹ (Simonyan & Zisserman, 2014) and GoogLeNet (Szegedy et al., 2014) are 8.70% and 10.07% respectively, suggesting that VGG-16 performs better than GoogLeNet. When models are run with 10-crop sampling,² however, the errors become 9.33% and 9.15% respectively, so VGG-16 then performs worse than GoogLeNet. For this reason, we decided to base our analysis on re-evaluations of top-1 accuracies³ for all networks with a single central-crop sampling technique (Zagoruyko, 2016).
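To make the two sampling schemes concrete, here is a minimal sketch of single central-crop versus 10-crop extraction (four corners plus centre, each with its horizontal mirror, as described in footnote 2). The 224-pixel crop size and NumPy image layout are illustrative assumptions, not details taken from the evaluation code.

```python
import numpy as np

def central_crop(img, size=224):
    h, w, _ = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def ten_crop(img, size=224):
    h, w, _ = img.shape
    corners = [(0, 0), (0, w - size), (h - size, 0), (h - size, w - size)]
    crops = [img[t:t + size, le:le + size] for t, le in corners]
    crops.append(central_crop(img, size))       # four corners + centre
    crops += [c[:, ::-1] for c in crops]        # horizontally mirrored twins
    return np.stack(crops)                      # shape: (10, size, size, 3)

img = np.random.rand(256, 256, 3)               # stand-in for a validation image
assert ten_crop(img).shape == (10, 224, 224, 3)
```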
For inference time and memory usage measurements we have used Torch7 with cuDNN-v5 and CUDA-v8 back-end. All experiments were conducted on a JetPack-2.3 NVIDIA Jetson TX1 board (nVIDIA): an embedded visual computing system with a 64-bit ARM A57 CPU, a 1 T-Flop/s 256-core NVIDIA Maxwell GPU and 4 GB of shared LPDDR4 RAM. We use this resource-limited device to better underline the differences between network architectures, but similar results can be obtained on more recent GPUs, such as the NVIDIA K40 or Titan X, to name a few. Operation counts were obtained using an open-source tool that we developed (Paszke, 2016). For measuring the power consumption, a Keysight 1146B Hall effect current probe was used with a Keysight MSO-X 2024A 200 MHz digital oscilloscope, with a sampling period of 2 s and a 50 kSa/s sample rate. The system was powered by a Keysight E3645A GPIB-controlled DC power supply.
# 3 RESULTS
In this section we report our results and comparisons. We analysed the following DNNs: AlexNet (Krizhevsky et al., 2012), batch normalised AlexNet (Zagoruyko, 2016), batch normalised Network In Network (NIN) (Lin et al., 2013), ENet (Paszke et al., 2016) for ImageNet (Culurciello, 2016), GoogLeNet (Szegedy et al., 2014), VGG-16 and -19 (Simonyan & Zisserman, 2014), ResNet-18, -34, -50, -101 and -152 (He et al., 2015), Inception-v3 (Szegedy et al., 2015) and Inception-v4 (Szegedy et al., 2016), since they obtained the highest performance, in these four years, on the ImageNet (Russakovsky et al., 2015) challenge.
1 In the original paper this network is called VGG-D, which is the best performing network. Here we prefer to highlight the number of layers utilised, so we will call it VGG-16 in this publication.
2 From a given image multiple patches are extracted: four corners plus central crop, and their horizontally mirrored twins.
3 Accuracy and error rate always sum to 100, therefore in this paper they are used interchangeably.
Figure 3: Inference time vs. batch size. This chart shows inference time across different batch sizes with a logarithmic ordinate and logarithmic abscissa. Missing data points are due to lack of enough system memory required to process larger batches. A speed-up of 3× is achieved by AlexNet due to better optimisation of its fully connected layers for larger batches.
Figure 4: Power vs. batch size. Net power consumption (due only to the forward processing of several DNNs) for different batch sizes. The idle power of the TX1 board, with no HDMI screen connected, was 1.30 W on average. The maximum frequency component of the power supply current was 1.4 kHz, corresponding to a Nyquist sampling frequency of 2.8 kHz.
# 3.1 ACCURACY
Figure 1 shows one-crop accuracies of the most relevant entries submitted to the ImageNet challenge, from AlexNet (Krizhevsky et al., 2012), on the far left, to the best performing Inception-v4 (Szegedy et al., 2016). The newest ResNet and Inception architectures surpass all other architectures by a significant margin of at least 7%.
Figure 2 provides a different, but more informative view of the accuracy values, because it also visualises computational cost and number of network parameters. The first thing that is very apparent is that VGG, even though it is widely used in many applications, is by far the most expensive architecture, both in terms of computational requirements and number of parameters. Its 16- and 19-layer implementations are in fact isolated from all other networks. The other architectures form a steep straight line that seems to start to flatten with the latest incarnations of Inception and ResNet. This might suggest that models are reaching an inflection point on this data set. At this inflection point, the costs, in terms of complexity, start to outweigh gains in accuracy. We will later show that this trend is hyperbolic.
3.2 INFERENCE TIME
Figure 3 reports inference time per image on each architecture, as a function of image batch size (from 1 to 64). We notice that VGG processes one image in a fifth of a second, making it a less likely contender in real-time applications on an NVIDIA TX1. AlexNet shows a speed-up of roughly 3× going from a batch of 1 to 64 images, due to better optimisation of its fully connected layers for larger batches. This is a very surprising finding, which will be further discussed in the next subsection.
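A hedged sketch of this timing protocol is given below: average forward time per image as a function of batch size, with a warm-up run before measuring. PyTorch and the toy convolution are stand-ins for the Torch7 setup actually used, and the printed numbers are illustrative only.

```python
import time
import torch

def time_per_image(model, batch_sizes=(1, 2, 4, 8, 16, 32, 64), reps=10):
    model.eval()
    results = {}
    with torch.no_grad():
        for b in batch_sizes:
            x = torch.randn(b, 3, 224, 224)
            model(x)                                  # warm-up run
            start = time.perf_counter()
            for _ in range(reps):
                model(x)
            elapsed = time.perf_counter() - start
            results[b] = elapsed / (reps * b) * 1000  # ms per image
    return results

print(time_per_image(torch.nn.Conv2d(3, 8, 3)))        # toy stand-in model
```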
# 3.3 POWER
Power measurements are complicated by the high frequency swings in current consumption, which required a high sampling rate for the current read-out to avoid aliasing. In this work, we used a 200 MHz digital oscilloscope with a current probe, as reported in section 2. Other measuring instruments, such as an AC power strip with a 2 Hz sampling rate, or a GPIB-controlled DC power supply with a 12 Hz sampling rate, did not provide enough bandwidth to properly conduct power measurements.
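As a rough sketch of how such oscilloscope captures reduce to a net power number: average the instantaneous power V·I over the capture window, then subtract the 1.30 W idle floor of the board. The supply voltage and current waveform below are invented for illustration.

```python
import numpy as np

IDLE_POWER_W = 1.30

def net_power(voltage_v, current_a, idle_w=IDLE_POWER_W):
    instantaneous = voltage_v * current_a        # element-wise P = V * I
    return float(np.mean(instantaneous)) - idle_w

t = np.linspace(0.0, 2.0, 100_000)               # 50 kSa/s over a 2 s window
v = np.full_like(t, 19.0)                        # assumed DC supply voltage
i = 0.69 + 0.05 * np.sin(2 * np.pi * 1400 * t)   # ~1.4 kHz current ripple
print(f"net power ~ {net_power(v, i):.2f} W")
```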
In figure 4 we see that the power consumption is mostly independent of the batch size. Low power values for AlexNet (batch of 1) and VGG (batch of 2) are associated with slower forward times per image, as shown in figure 3.
Figure 5: Memory vs. batch size. Maximum system memory utilisation for batches of different sizes. Memory usage shows a knee, due to the static allocation of the network model and the variable memory used by the batch, which grows with batch size.
Figure 6: Memory vs. parameters count. Detailed view of the static parameter allocation and the corresponding memory utilisation. Minimum memory of 200 MB, linear afterwards with slope 1.30.
Figure 7: Operations vs. inference time, size ∝ parameters. Relationship between operations and inference time, for batches of size 1 and 16 (the biggest size for which all architectures can still run). Not surprisingly, we notice a linear trend, and therefore operations count represents a good estimation of inference time. Furthermore, we can notice an increase in the slope of the trend for larger batches, corresponding to shorter inference times due to batch processing optimisation.
3.4 MEMORY
We analysed system memory consumption of the TX1 device, which uses shared memory for both CPU and GPU. Figure 5 shows that the maximum system memory usage is initially constant and then rises with the batch size. This is due to the initial memory allocation of the network model, which is the large static component, and the contribution of the memory required while processing the batch, which increases proportionally with the number of images. In figure 6 we can also notice that the initial allocation never drops below 200 MB for networks sized below 100 MB, and is linear afterwards with respect to the parameters, with a slope of 1.30.
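One plausible reading of figures 5 and 6 as a back-of-the-envelope memory model is sketched below; the piecewise form (a flat 200 MB floor below 100 MB of parameters, then the 1.30 slope) is our interpretation of the plots, not a formula given in the text.

```python
def estimated_memory_mb(params_mb):
    """Rough TX1 system-memory estimate (batch of 1), read off figures 5-6."""
    if params_mb <= 100.0:
        return 200.0                              # static floor below ~100 MB
    return 200.0 + 1.30 * (params_mb - 100.0)     # linear regime, slope 1.30

for p in (20, 100, 250, 500):
    print(f"{p:>3} MB of parameters -> ~{estimated_memory_mb(p):.0f} MB used")
```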
# 3.5 OPERATIONS
Operations count is essential for establishing a rough estimate of inference time and hardware circuit size, in case of custom implementation of neural network accelerators. In figure 7, for a batch of 16 images, there is a linear relationship between operations count and inference time per image. Therefore, at design time, we can pose a constraint on the number of operations to keep processing speed in a usable range for real-time applications or resource-limited deployments.
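The design-time check described here can be captured in a few lines: fit a line to (operations, time) pairs, use it as a predictor, and invert it for an operations budget. The sample points below are made up for illustration.

```python
import numpy as np

gops = np.array([0.7, 1.6, 2.0, 3.9, 15.5, 19.6])        # per-image operations
ms = np.array([10.0, 18.0, 22.0, 40.0, 150.0, 185.0])    # measured ms/image

slope, intercept = np.polyfit(gops, ms, deg=1)           # linear trend

def predicted_time_ms(g_ops):
    return slope * g_ops + intercept

print(f"~{predicted_time_ms(5.0):.0f} ms/image predicted at 5 G-Ops")
budget_ms = 50.0                       # e.g. a 20 fps real-time requirement
max_gops = (budget_ms - intercept) / slope
print(f"stay below ~{max_gops:.1f} G-Ops to meet {budget_ms:.0f} ms/image")
```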
Figure 8: Operations vs. power consumption, size ∝ parameters. The independence of power and operations is shown by the lack of directionality of the distributions in these scatter charts. Full resource utilisation and lower inference time for the AlexNet architecture are reached with larger batches.
Figure 9: Accuracy vs. inferences per second, size ∝ operations. A non-trivial linear upper bound is shown in these scatter plots, illustrating the relationship between prediction accuracy and throughput of all examined architectures. These are the first charts in which the area of the blobs is proportional to the amount of operations, instead of the parameters count. We can notice that larger blobs are concentrated on the left side of the charts, corresponding to low throughput, i.e. longer inference times. Most of the architectures lie on the linear interface between the grey and white areas. If a network falls in the shaded area, it achieves exceptional accuracy or inference speed. The white area indicates a suboptimal region. E.g., both AlexNet architectures improve processing speed as larger batches are adopted, gaining 80 Hz.
3.6 OPERATIONS AND POWER
In this section we analyse the relationship between power consumption and the number of operations required by a given model. Figure 8 shows that there is no specific power footprint for different architectures. When full resource utilisation is reached, generally with larger batch sizes, all networks consume roughly an additional 11.8 W, with a standard deviation of 0.7 W; together with the 1.30 W idle power, this corresponds to the maximum system power at full utilisation. Therefore, if energy consumption is one of our concerns, for example for battery-powered devices, one can simply choose the slowest architecture which satisfies the application's minimum requirements.
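A sketch of this selection rule under the constant-power observation (roughly 11.8 W net at full utilisation): energy per image reduces to power times forward time, so the cheapest admissible model is simply the slowest one meeting the accuracy bound. The candidate names and numbers below are placeholders, not measured values.

```python
NET_POWER_W = 11.8                     # constant net power at full utilisation

candidates = {                         # name: (top-1 accuracy %, ms per image)
    "A": (68.0, 65.0),
    "B": (72.0, 120.0),
    "C": (76.0, 310.0),
}

def energy_mj_per_image(ms):
    return NET_POWER_W * ms            # W * ms = mJ per forward pass

def pick(min_accuracy):
    ok = {n: v for n, v in candidates.items() if v[0] >= min_accuracy}
    return min(ok, key=lambda n: energy_mj_per_image(ok[n][1]))

print(pick(min_accuracy=70.0))         # -> "B": lowest energy meeting the bound
```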
3.7 ACCURACY AND THROUGHPUT
We note that there is a non-trivial linear upper bound between accuracy and number of inferences per unit time. Figure 9 illustrates that, for a given frame rate, the maximum accuracy that can be achieved is linearly proportional to the frame rate itself. All networks analysed here come from several publications, and have been independently trained by other research groups. A linear fit of the accuracy values shows that all architectures trade accuracy against speed. Moreover, having chosen a specific inference time, one can now come up with the theoretical accuracy upper bound when resources are fully
Figure 10: Accuracy per parameter vs. network. Information density (accuracy per parameter) is an efficiency metric that highlights the capacity of a specific architecture to better utilise its parametric space. Models like VGG and AlexNet are clearly oversized, and do not take full advantage of their potential learning ability. On the far right, ResNet-18, BN-NIN, GoogLeNet and ENet (marked by grey arrows) do a better job at "squeezing" all their neurons to learn the given task, and are the winners of this section.
utilised, as seen in section 3.6. Since the power consumption is constant, we can even go one step further, and obtain an upper bound on accuracy under an energy constraint, which could possibly be an essential design factor for a network that needs to run on an embedded system.
As the spoiler in section 3.1 already gave away, the linear nature of the accuracy vs. throughput relationship translates into a hyperbolic one when the forward inference time is considered instead. Then, given that the operations count is linear in the inference time, we get that the accuracy has a hyperbolic dependency on the amount of computation that a network requires.
3.8 PARAMETERS UTILISATION
DNNs are known to be highly inefficient in utilising their full learning power (number of parameters / degrees of freedom). Prominent work (Han et al., 2015) exploits this flaw to reduce network file size up to 50×, using weight pruning, quantisation and variable-length symbol encoding. It is worth noticing that using more efficient architectures to begin with may produce even more compact representations. In figure 10 we clearly see that, although VGG has a better accuracy than AlexNet (as shown in figure 1), its information density is worse. This means that the degrees of freedom introduced in the VGG architecture bring a lesser improvement in terms of accuracy. Moreover, ENet (Paszke et al., 2016), which we have specifically designed to be highly efficient and which has been adapted and retrained on ImageNet (Culurciello, 2016) for this work, achieves the highest score, showing that 24× fewer parameters are sufficient to provide state-of-the-art results.
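The information-density metric itself is a one-liner; the sketch below uses rough, illustrative accuracy and parameter figures rather than the measured values behind figure 10.

```python
models = {            # name: (top-1 accuracy %, parameters in millions)
    "AlexNet": (57.0, 61.0),
    "VGG-19": (71.0, 144.0),
    "GoogLeNet": (69.0, 7.0),
    "ENet": (67.0, 2.3),
}

density = {n: acc / m_params for n, (acc, m_params) in models.items()}
for name, d in sorted(density.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} {d:6.2f} %/M-params")   # accuracy per million params
```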
# 4 CONCLUSIONS
In this paper we analysed multiple state-of-the-art deep neural networks submitted to the ImageNet challenge, in terms of accuracy, memory footprint, parameters, operations count, inference time and power consumption. Our goal is to provide insights into the design choices that can lead to efficient neural networks for practical applications, and optimisation of the often-limited resources in actual deployments, which led us to the creation of ENet (Efficient-Network) for ImageNet. We show that accuracy and inference time are in a hyperbolic relationship: a little increment in accuracy costs a lot of computational time. We show that the number of operations in a network model can effectively estimate inference time. We show that an energy constraint sets a specific upper bound on the maximum achievable accuracy and model complexity, in terms of operations counts. Finally, we show that ENet is the best architecture in terms of parameter space utilisation, squeezing up to 13× more information per parameter used with respect to the reference model AlexNet, and 24× with respect to VGG-19.
# ACKNOWLEDGMENTS
This paper would not have looked so pretty without the Python Software Foundation, the matplotlib library and the communities of Stack Overflow and TeX StackExchange, which I ought to thank. This work is partly supported by the Office of Naval Research (ONR) grants N00014-12-1-0167, N00014-15-1-2791 and MURI N00014-10-1-0278. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the TX1, Titan X, and K40 GPUs used for this research.
# REFERENCES
Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014.
Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
Eugenio Culurciello. Training ENet. https://culurciello.github.io/tech/2016/06/20/training-enet.html, 2016.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
nVIDIA. Jetson TX1 module. http://www.nvidia.com/object/jetson-tx1-module.html.
Adam Paszke. torch-opcounter. https://github.com/apaszke/torch-opCounter, 2016.
Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.
Sergey Zagoruyko. imagenet-validation.torch. https://github.com/szagoruyko/imagenet-validation.torch, 2016.
| {
"id": "1602.07261"
} |
1605.07427 | Hierarchical Memory Networks | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task. | http://arxiv.org/pdf/1605.07427 | Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio | stat.ML, cs.CL, cs.LG, cs.NE | 10 pages | null | stat.ML | 20160524 | 20160524 |
# Hierarchical Memory Networks
# Sarath Chandar*1, Sungjin Ahn1, Hugo Larochelle2,4, Pascal Vincent1,4, Gerald Tesauro3, Yoshua Bengio1,4
1 Université de Montréal, Canada. 2 Twitter Cortex, USA. 3 IBM Watson Research Center, USA. 4 CIFAR, Canada.
# Abstract
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the-art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
# 1 Introduction
Until recently, traditional machine learning approaches for challenging tasks such as image captioning, object detection, or machine translation have consisted of complex pipelines of algorithms, each being separately tuned for better performance. With the recent success of neural networks and deep learning research, it has now become possible to train a single model end-to-end, using backpropagation. Such end-to-end systems often outperform traditional approaches, since the entire model is directly optimized with respect to the final task at hand. However, simple encode-decode style neural networks often underperform on knowledge-based reasoning tasks like question-answering or dialog systems. Indeed, in such cases it is nearly impossible for regular neural networks to store all the necessary knowledge in their parameters.
Neural networks with memory [1, 2] can deal with knowledge bases by having an external memory component which can be used to explicitly store knowledge. The memory is accessed by reader and writer functions, which are both made differentiable so that the entire architecture (neural network, reader, writer and memory components) can be trained end-to-end using backpropagation. Memory-based architectures can also be considered as generalizations of RNNs and LSTMs, where the memory is analogous to recurrent hidden states. However they are much richer in structure and can handle very long-term dependencies because once a vector (i.e., a memory) is stored, it is copied
*Corresponding author: apsarathchandar@gmail.com
from time step to time step and can thus stay there for a very long time (and gradients correspondingly flow back through time unhampered).
There exist several variants of neural networks with a memory component: Memory Networks [2], Neural Turing Machines (NTM) [1], and Dynamic Memory Networks (DMN) [3]. They all share five major components: memory, input module, reader, writer, and output module.
Memory: The memory is an array of cells, each capable of storing a vector. The memory is often initialized with external data (e.g. a database of facts), by filling in its cells with pre-trained vector representations of that data.
Input module: The input module computes a representation of the input that can be used by the other modules.
Writer: The writer takes the input representation and updates the memory based on it. The writer can be as simple as filling the slots in the memory with input vectors in a sequential way (as often done in memory networks). If the memory is bounded, instead of sequential writing, the writer has to decide where to write and when to rewrite cells (as often done in NTMs).
Reader: Given an input and the current state of the memory, the reader retrieves content from the memory, which will then be used by an output module. This often requires comparing the input's representation or a function of the recurrent state with memory cells using some scoring function such as a dot product.
Output module: Given the content retrieved by the reader, the output module generates a prediction, which often takes the form of a conditional distribution over multiple labels for the output.
For the rest of the paper, we will use the name memory network to describe any model which has any form of these five components. We would like to highlight that all the components except the memory are learnable. Depending on the application, any of these components can also be fixed. In this paper, we will focus on the situation where a network does not write and only reads from the memory.
In this paper, we focus on the application of memory networks to large-scale tasks. Specifically, we focus on large scale factoid question answering. For this problem, given a large set of facts and a natural language question, the goal of the system is to answer the question by retrieving the supporting fact for that question, from which the answer can be derived. Application of memory networks to this task has been studied in [4]. However, [4] depended on keyword-based heuristics to filter the facts to a smaller set which is manageable for training. Heuristics are invariably dataset dependent, and we are interested in a more general solution which can be used when the facts are of any structure. One can design soft attention retrieval mechanisms, where a convex combination of all the cells is retrieved, or hard attention retrieval mechanisms, where one or a few cells from the memory are retrieved. Soft attention is achieved by using a softmax over the memory, which makes the reader differentiable and hence learning can be done using gradient descent. Hard attention is achieved by using methods like REINFORCE [5], which provides a noisy gradient estimate when discrete stochastic decisions are made by a model.
Both soft attention and hard attention have limitations. As the size of the memory grows, soft attention using softmax weighting is not scalable: it is computationally very expensive, since its complexity is linear in the size of the memory. Also, at initialization, gradients are dispersed so much that this can reduce the effectiveness of gradient descent. These problems can be alleviated by a hard attention mechanism, for which the training method of choice is REINFORCE. However, REINFORCE can be brittle due to its high variance, and existing variance reduction techniques are complex. Thus, it is rarely used in memory networks (even in cases of a small memory).
In this paper, we propose a new memory selection mechanism based on Maximum Inner Product Search (MIPS) which is both scalable and easy to train. This can be considered as a hybrid of soft and hard attention mechanisms. The key idea is to structure the memory in a hierarchical way such that it is easy to perform MIPS, hence the name Hierarchical Memory Network (HMN). HMNs are scalable at both training and inference time. The main contributions of the paper are as follows:
⢠We explore hierarchical memory networks, where the memory is organized in a hierarchical fashion, which allows the reader to efï¬ciently access only a subset of the memory.
⢠While there are several ways to decide which subset to access, we propose to pose memory access as a maximum inner product search (MIPS) problem.
⢠We empirically show that exact MIPS-based algorithms not only enjoy similar convergence as soft attention models, but can even improve the performance of the memory network.
⢠Since exact MIPS is as computationally expensive as a full soft attention model, we propose to train the memory networks using approximate MIPS techniques for scalable memory access.
⢠We empirically show that unlike exact MIPS, approximate MIPS algorithms provide a speedup and scalability of training, though at the cost of some performance.
# 2 Hierarchical Memory Networks
In this section, we describe the proposed Hierarchical Memory Network (HMN). In this paper, HMNs only differ from regular memory networks in two of their components: the memory and the reader.
Memory: Instead of a flat array of cells for the memory structure, HMNs leverage a hierarchical memory structure. Memory cells are organized into groups, and the groups can further be organized into higher level groups. The choice for the memory structure is tightly coupled with the choice of reader, which is essential for fast memory access. We consider three classes of approaches for the memory's structure: hashing-based approaches, tree-based approaches, and clustering-based approaches. This is explained in detail in the next section.
Reader: The reader in the HMN is different from the readers in flat memory networks. Flat memory-based readers use either soft attention over the entire memory or hard attention that retrieves a single cell. While these mechanisms might work with small memories, with HMNs we are more interested in achieving scalability towards very large memories. So instead, HMN readers use soft attention only over a selected subset of the memory. Selecting memory subsets is guided by a maximum inner product search algorithm, which can exploit the hierarchical structure of the organized memory to retrieve the most relevant facts in sub-linear time. The MIPS-based reader is explained in more detail in the next section.
In HMNs, the reader is thus trained to create MIPS queries such that it can retrieve a sufficient set of facts. While most of the standard applications of MIPS [6–8] so far have focused on settings where both query vector and database (memory) vectors are precomputed and fixed, memory readers in HMNs are learning to do MIPS by updating the input representation such that the result of MIPS retrieval contains the correct fact(s).
# 3 Memory Reader with K-MIPS attention
In this section, we describe how the HMN memory reader uses Maximum Inner Product Search (MIPS) during learning and inference.
We begin with a formal definition of K-MIPS. Given a set of points X = {x_1, . . . , x_n} and a query vector q, our goal is to find
argmax^(K)_{i ∈ X} q^T x_i    (1)

where argmax^(K) returns the indices of the top-K maximum values. In the case of HMNs, X corresponds to the memory and q corresponds to the vector computed by the input module.
A simple but inefficient solution for K-MIPS involves a linear search over the cells in memory by performing the dot product of q with all the memory cells. While this will return the exact result for K-MIPS, it is too costly to perform when we deal with a large-scale memory. However, in many practical applications, it is often sufficient to have an approximate result for K-MIPS, trading speed-up at the cost of some accuracy. There exist several approximate K-MIPS solutions in the literature [8, 9, 7, 10].
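For reference, a minimal NumPy sketch of this exact linear-scan K-MIPS (one dot product per cell, then top-K selection); the memory size and dimensionality below are arbitrary stand-ins.

```python
import numpy as np

def exact_k_mips(memory, query, k):
    """memory: (N, d) array, query: (d,) array -> indices of top-K products."""
    scores = memory @ query                      # N inner products, O(N)
    top = np.argpartition(-scores, k - 1)[:k]    # unordered top-K
    return top[np.argsort(-scores[top])]         # ordered by inner product

M = np.random.randn(10_000, 600)                 # e.g. 600-d fact embeddings
q = np.random.randn(600)
print(exact_k_mips(M, q, k=10))
```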
All the approximate K-MIPS solutions add a form of hierarchical structure to the memory and visit only a subset of the memory cells to find the maximum inner product for a given query. Hashing-based approaches [8–10] hash cells into multiple bins, and given a query they search for K-MIPS cell vectors only in bins that are close to the bin associated with the query. Tree-based approaches [6, 7] create search trees with cells in the leaves of the tree. Given a query, a path in the tree is followed and MIPS is performed only for the leaf for the chosen path. Clustering-based approaches [11] cluster
cells into multiple clusters (or a hierarchy of clusters) and given a query, they perform MIPS on the centroids of the top few clusters. We refer the readers to [11] for an extensive comparison of various state-of-the-art approaches for approximate K-MIPS.
Our proposal is to exploit this rich approximate K-MIPS literature to achieve scalable training and inference in HMNs. Instead of filtering the memory with heuristics, we propose to organize the memory based on approximate K-MIPS algorithms and then train the reader to learn to perform MIPS. Specifically, consider the following softmax over the memory which the reader has to perform for every reading step to retrieve a set of relevant candidates:
R_out = softmax(h(q) M^T)    (2)

where h(q) ∈ R^d is the representation of the query and M ∈ R^{N×d} is the memory, with N being the total number of cells in the memory. We propose to replace this softmax with softmax^(K), which is defined as follows:

C = argmax^(K) h(q) M^T    (3)

R_out = softmax^(K)(h(q) M^T) = softmax(h(q) M[C]^T)    (4)

where C is the set of indices of the top-K MIP candidate cells and M[C] is the sub-matrix of M whose rows are indexed by C. One advantage of using softmax^(K) is that it naturally focuses on cells that would normally receive the strongest gradients during learning. In a full softmax, the gradients are otherwise more dispersed across cells, given the large number of cells, despite many contributing only a small gradient. As our experiments will show, this results in slower training. One problematic situation when learning with softmax^(K) is when, at the initial stages of training, the K-MIPS reader does not include the correct fact candidate. To avoid this issue, we always include the correct candidate in the top-K candidates retrieved by the K-MIPS algorithm, effectively performing a fully supervised form of learning.
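A hedged NumPy sketch of this softmax^(K) reader, including the forced inclusion of the correct fact during training, is given below. The candidate step here is the exact linear scan; an approximate K-MIPS routine would replace the argpartition line. M, q and all sizes are stand-ins.

```python
import numpy as np

def k_softmax_read(memory, h_q, k, correct_idx=None):
    scores = memory @ h_q                              # h(q) M^T over all cells
    cand = np.argpartition(-scores, k - 1)[:k]         # C = argmax^(K)
    if correct_idx is not None and correct_idx not in cand:
        cand = np.append(cand[:-1], correct_idx)       # keep the supporting fact
    logits = scores[cand]                              # scores over M[C] only
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return cand, probs                                 # R_out over K cells

M = np.random.randn(10_000, 600)                       # stand-in fact memory
q = np.random.randn(600)                               # stand-in query h(q)
cand, p = k_softmax_read(M, q, k=10, correct_idx=42)
print(cand, round(p.sum(), 6))
```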
During training, the reader is updated by backpropagation from the output module, through the subset of memory cells. Additionally, the log-likelihood of the correct fact computed using K-softmax is also maximized. This second supervision helps the reader learn to modify the query such that the maximum inner product of the query with respect to the memory will yield the correct supporting fact in the top K candidate set.
Until now, we described the exact K-MIPS-based learning framework, which still requires a linear look-up over all memory cells and would be prohibitive for large-scale memories. In such scenarios, we can replace the exact K-MIPS in the training procedure with approximate K-MIPS. This is achieved by deploying a suitable hierarchical memory structure. The same approximate K-MIPS-based reader can be used during the inference stage as well. Of course, approximate K-MIPS algorithms might not return the exact MIPS candidates and will likely hurt performance, but with the benefit of achieving scalability.
While the memory representation is fixed in this paper, updating the memory along with the query representation should improve the likelihood of choosing the correct fact. However, updating the memory will reduce the precision of the approximate K-MIPS algorithms, since all of them assume that the vectors in the memory are static. Designing efficient dynamic K-MIPS should improve the performance of HMNs even further, a challenge that we hope to address in future work.
# 3.1 Reader with Clustering-based approximate K-MIPS
Clustering-based approximate K-MIPS was proposed in [11] and has been shown to outperform various other state-of-the-art data-dependent and data-independent approximate K-MIPS approaches for inference tasks. As we will show in the experiments section, clustering-based MIPS also performs better when used for training HMNs. Hence, we focus our presentation on the clustering-based approach and propose changes that were found to be helpful for learning HMNs.
Following most of the other approximate K-MIPS algorithms, [11] converts MIPS to a Maximum Cosine Similarity Search (MCSS) problem:
argmax^(K)_{i ∈ X} q^T x_i / (||q|| ||x_i||) = argmax^(K)_{i ∈ X} q^T x_i / ||x_i||    (5)
When all the data vectors x_i have the same norm, then MCSS is equivalent to MIPS. However, it is often restrictive to have this additional constraint. Instead, [11] appends additional dimensions to both query and data vectors to convert MIPS to MCSS. In HMN terminology, this would correspond to adding a few more dimensions to the memory cells and input representations. The algorithm introduces two hyper-parameters, U < 1 and m ∈ N*. The first step is to scale all the vectors in the memory by the same factor, such that max_i ||x_i||_2 = U. We then apply two mappings, P and Q, on the memory cells and on the input vector, respectively. These two mappings simply concatenate m new components to the vectors and make the norms of the data points all roughly the same [9]. The mappings are defined as follows:
P(x) = [x, 1/2 − ||x||_2^2, 1/2 − ||x||_2^4, . . . , 1/2 − ||x||_2^{2^m}]    (6)

Q(x) = [x, 0, 0, . . . , 0]    (7)
We thus have the following approximation of MIPS by MCSS for any query vector q:
argmax^(K)_i q^T x_i ≈ argmax^(K)_i Q(q)^T P(x_i) / (||Q(q)||_2 · ||P(x_i)||_2)    (8)
Once we convert MIPS to MCSS, we can use spherical K-means [12] or its hierarchical version to approximate and speed up the cosine similarity search. Once the memory is clustered, every read operation requires only K dot-products, where K is the number of cluster centroids.
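A hedged sketch of this pipeline (the P/Q augmentation of equations (6)-(7) followed by a clustered candidate lookup) is given below. scikit-learn's KMeans on L2-normalised vectors stands in for a true spherical k-means, and U = 0.83 and m = 3 are illustrative hyper-parameter choices, not values fixed by the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

M = np.random.randn(10_000, 600)                       # stand-in fact memory

def augment_memory(X, U=0.83, m=3):
    X = X * (U / np.linalg.norm(X, axis=1).max())      # enforce max_i ||x_i|| = U
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    extra = [0.5 - norms ** (2 ** j) for j in range(1, m + 1)]
    return np.hstack([X] + extra)                      # P(x), equation (6)

def augment_query(q, m=3):
    return np.concatenate([q, np.zeros(m)])            # Q(q), equation (7)

P = augment_memory(M)
Pn = P / np.linalg.norm(P, axis=1, keepdims=True)      # cosine = dot on the sphere
km = KMeans(n_clusters=100, n_init=4).fit(Pn)          # stand-in spherical k-means

def cluster_candidates(q, top_clusters=5):
    scores = km.cluster_centers_ @ augment_query(q)
    best = np.argsort(-scores)[:top_clusters]
    return np.flatnonzero(np.isin(km.labels_, best))   # candidate cell indices

print(len(cluster_candidates(np.random.randn(600))))
```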
Since this is an approximation, it is error-prone. As we are using this approximation for the learning process, this introduces some bias in gradients, which can affect the overall performance of HMN. To alleviate this bias, we propose three simple strategies.
⢠Instead of using only the top-K candidates for a single read query, we also add top-K candidates retrieved for every other read query in the mini-batch. This serves two purposes. First, we can do efï¬cient matrix multiplications by leveraging GPUs since all the K-softmax in a minibatch are over the same set of elements. Second, this also helps to decrease the bias introduced by the approximation error.
⢠For every read access, instead of only using the top few clusters which has a maximum product with the read query, we also sample some clusters from the rest, based on a probability distribution log-proportional to the dot product with the cluster centroids. This also decreases the bias.
⢠We can also sample random blocks of memory and add it to top-K candidates.
We empirically investigate the effect of these variations in Section 5.5.
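The first two candidate-set variations above can be sketched as follows: take the top clusters, sample a few extra clusters with probability proportional to exp(dot product), and share the resulting candidates across the mini-batch. The centroid and label arrays are toy stand-ins, and the Top-K / sampled-cluster counts are arbitrary illustrative settings.

```python
import numpy as np

def batch_candidates(centroids, labels, queries, top=3, sampled=2,
                     rng=np.random.default_rng(0)):
    """Union of candidate cells over a mini-batch of (augmented) queries."""
    cand = set()
    for q in queries:
        scores = centroids @ q
        best = list(np.argsort(-scores)[:top])            # Top-K clusters
        rest = np.setdiff1d(np.arange(len(scores)), best)
        p = np.exp(scores[rest] - scores[rest].max())     # log-proportional
        best += list(rng.choice(rest, size=sampled, replace=False, p=p / p.sum()))
        cand.update(np.flatnonzero(np.isin(labels, best)).tolist())
    return np.array(sorted(cand))                         # shared K-softmax support

centroids = np.random.randn(50, 16)                       # toy cluster centroids
labels = np.random.randint(0, 50, size=2_000)             # toy cell -> cluster map
queries = np.random.randn(4, 16)                          # a mini-batch of queries
print(len(batch_candidates(centroids, labels, queries)))
```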
# 4 Related Work
Memory networks were introduced in [2] and have so far been applied to comprehension-based question answering [13, 14], large scale question answering [4] and dialogue systems [15]. While [2] considered supervised memory networks, in which the correct supporting fact is given during the training stage, [14] introduced semi-supervised memory networks that can learn the supporting fact by themselves. [3, 16] introduced Dynamic Memory Networks (DMNs), which can be considered as memory networks with two types of memory: a regular large memory and an episodic memory. Another related class of model is the Neural Turing Machine [1], which uses softmax-based soft attention. Later, [17] extended the NTM to hard attention using reinforcement learning. [15, 4] alleviate the problem of the scalability of soft attention by having an initial keyword-based filtering stage, which reduces the number of facts being considered. Our work generalizes this filtering by using MIPS. This is desirable because MIPS can be applied for any modality of data, or even when there is no overlap between the words in a question and the words in facts.
The softmax arises in various situations, and most relevant to this work are scaling methods for large vocabulary neural language modeling. In neural language modeling, the final layer is a softmax distribution over the next word and there exist several approaches to achieve scalability. [18] proposes a hierarchical softmax based on a prior clustering of the words into a binary, or more generally n-ary, tree that serves as a fixed structure for the learning process of the model. The complexity of training
is reduced from O(n) to O(log n). Due to its clustering and tree structure, it resembles the clustering-based MIPS techniques we explore in this paper. However, the approaches differ at a fundamental level. Hierarchical softmax defines the probability of a leaf node as the product of all the probabilities computed by all the intermediate softmaxes on the way to that leaf node. By contrast, an approximate MIPS search imposes no such constraining structure on the probabilistic model, and is better thought of as efficiently searching for the top winners of what amounts to a large ordinary flat softmax. Other methods such as Noise Contrastive Estimation [19] and Negative Sampling [20] avoid an expensive normalization constant by sampling negative samples from some marginal distribution. By contrast, our approach approximates the softmax by explicitly including in its negative samples candidates that likely would have a large softmax value. [21] introduces an importance sampling approach that considers all the words in a mini-batch as the candidate set. This, in general, might also not include the MIPS candidates with the highest softmax values.
[22] is the only work that we know of proposing to use MIPS during learning. It proposes hashing-based MIPS to sort the hidden layer activations and reduce the computation in every layer. However, only a small scale application was considered, and data-independent methods like hashing will likely suffer as dimensionality increases.
# 5 Experiments
In this section, we report experiments on factoid question answering using hierarchical memory networks. Specifically, we use the SimpleQuestions dataset [4]. The aim of these experiments is not to achieve state-of-the-art results on this dataset. Rather, we aim to propose and analyze various approaches to make memory networks more scalable and explore the achieved tradeoffs between speed and accuracy.
# 5.1 Dataset
We use SimpleQuestions [4], which is a large scale factoid question answering dataset. SimpleQuestions consists of 108,442 natural language questions, each paired with a corresponding fact from Freebase. Each fact is a triple (subject, relation, object) and the answer to the question is always the object. The dataset is divided into training (75910), validation (10845), and test (21687) sets. Unlike [4], who additionally considered FB2M (10M facts) or FB5M (12M facts) with keyword-based heuristics for filtering most of the facts for each question, we only use SimpleQuestions, with no keyword-based heuristics. This allows us to do a direct comparison with the full softmax approach in a reasonable amount of time. Moreover, we would like to highlight that for this dataset, keyword-based filtering is a very efficient heuristic, since all questions have an appropriate source entity with a matching word. Nevertheless, our goal is to design a general purpose architecture without such strong assumptions on the nature of the data.
# 5.2 Model
Let V_q be the vocabulary of all words in the natural language questions and let W_q be a |V_q| × m matrix where each row is an m-dimensional embedding for a word in the question vocabulary. This matrix is initialized with random values and learned during training. Given any question, we represent it with a bag-of-words representation by summing the vector representations of each word in the question. Let q = {w_i}^p_{i=1} be the question; then

h(q) = Σ_{i=1}^{p} W_q[w_i]
Then, to find the relevant fact from the memory M, we call the K-MIPS-based reader module with h(q) as the query. This uses Equations 3 and 4 to compute the output of the reader, R_out. The reader is trained by minimizing the Negative Log Likelihood (NLL) of the correct fact:
J_θ = Σ_{i=1}^{N} −log(R_out[f_i])
where f_i is the index of the correct fact in the memory M. We fix the memory embeddings to the TransE [23] embeddings and learn only the question embeddings.
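Putting the pieces together, a minimal NumPy sketch of this model's forward pass and loss follows: a bag-of-words question encoding over a learned W_q, a fixed fact memory M, and the K-softmax NLL with supervised inclusion of the correct fact. Sizes, the toy vocabulary and the random initialisation are assumptions, and the Adam gradient updates are omitted.

```python
import numpy as np

V, d = 5_000, 600                             # vocab size; embedding dimension
W_q = 0.01 * np.random.randn(V, d)            # learnable question embeddings
M = np.random.randn(50_000, d)                # fixed (stand-in) fact embeddings

def h(word_ids):
    return W_q[word_ids].sum(axis=0)          # h(q) = sum_i W_q[w_i]

def nll(word_ids, correct_idx, k=10):
    scores = M @ h(word_ids)
    cand = np.argpartition(-scores, k - 1)[:k]           # K-MIPS candidates
    if correct_idx not in cand:                          # supervised inclusion
        cand = np.append(cand[:-1], correct_idx)
    logits = scores[cand]
    log_probs = logits - logits.max()
    log_probs -= np.log(np.exp(log_probs).sum())         # K-softmax, in log space
    return -log_probs[np.flatnonzero(cand == correct_idx)[0]]

print(nll([1, 17, 256, 33], correct_idx=42))
```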
This model is simpler than the one reported in [4], so that it is easy to analyze the effect of various memory reading strategies.
# 5.3 Training Details
We trained the model with the Adam optimizer [24], with a fixed learning rate of 0.001. We used mini-batches of size 128. We used 200-dimensional embeddings for the TransE entities, yielding 600-dimensional embeddings for facts by concatenating the embeddings of the subject, relation and object. We also experimented with summing the entities in the triple instead of concatenating, but we found that it was difficult for the model to differentiate facts this way. The only parameters learned by the HMN model are the question word embeddings. The entity distribution in SimpleQuestions is extremely sparse and hence, following [4], we also add artificial questions for all the facts for which we do not have natural language questions. Unlike [4], we do not add any other additional tasks like paraphrase detection to the model, mainly to study the effect of the reader. We stopped training each model when its validation accuracy had consistently decreased for 3 epochs.
# 5.4 Exact K-MIPS improves accuracy
In this section, we compare the performance of the full soft attention reader and exact K-MIPS attention readers. Our goal is to verify that K-MIPS attention is in fact a valid and useful attention mechanism and see how it fares when compared to full soft attention. For K-MIPS attention, we tried K ∈ {10, 50, 100, 1000}. We would like to emphasize that, at training time, along with the K candidates for a particular question, we also add the K candidates for each other question in the mini-batch, so the exact size of the softmax layer is higher than K during training. In Table 1, we report the test performance of memory networks using the soft attention reader and K-MIPS attention readers. We also report the average softmax size during training. From the table, it is clear that the K-MIPS attention readers improve the performance of the network compared to the soft attention reader. In fact, the smaller the value of K, the better the performance. This result suggests that it is better to use a K-MIPS layer instead of a softmax layer whenever possible. It is interesting to see that the convergence of the model is not slowed down by this change in softmax computation (as shown in Figure 1).
Model        | Test Acc. | Avg. Softmax Size
Full-softmax | 59.5      | 108442
10-MIPS      | 62.2      | 1290
50-MIPS      | 61.2      | 6180
100-MIPS     | 60.6      | 11928
1000-MIPS    | 59.6      | 70941
Clustering   | 51.5      | 20006
PCA-Tree     | 32.4      | 21108
WTA-Hash     | 40.2      | 20008

Table 1: Accuracy on the SQ test set and average size of memory used. The 10-softmax has high performance while using only a small amount of memory.

Figure 1: Validation curves for the various models. Convergence is not slowed down by the K-softmax.
This experiment confirms the usefulness of K-MIPS attention. However, exact K-MIPS has the same complexity as a full softmax. Hence, to scale up the training, we need more efficient forms of K-MIPS attention, which is the focus of the next experiment.
# 5.5 Approximate K-MIPS based learning
As mentioned previously, designing faster algorithms for K-MIPS is an active area of research. [11] compared several state-of-the-art data-dependent and data-independent methods for faster approximate K-MIPS and found that clustering-based MIPS performs significantly better than other approaches. However, the focus of that comparison was on performance during the inference
stage. In HMNs, K-MIPS must be used at both the training and inference stages. To verify whether the same trend holds during the learning stage as well, we compared three different approaches:
Clustering: This was explained in detail in section 3.
WTA-Hash: Winner Takes All hashing [25] is a hashing-based K-MIPS algorithm which also converts MIPS to MCSS by augmenting additional dimensions to the vectors. This method uses n hash functions, and each hash function applies p different random permutations to the vector. The prefix constituted by the first k elements of each permuted vector is then used to construct the hash for the vector.
PCA-Tree: PCA-Tree [7] is the state-of-the-art tree-based method, which converts MIPS to NNS by vector augmentation. It uses the principal components of the data to construct a balanced binary tree with data residing in the leaves.
For a fair comparison, we varied the hyper-parameters of each algorithm in such a way that the average speedup is approximately the same. Table 1 shows the performance of all three methods, compared to a full softmax. From the table, it is clear that the clustering-based method performs significantly better than the other two methods. However, performances are lower when compared to the performance of the full softmax.
As a next experiment, we analyze the various strategies proposed in Section 3.1 to reduce the approximation bias of clustering-based K-MIPS:
Top-K: This strategy picks the vectors in the top K clusters as candidates.
Sample-K: This strategy samples K clusters, without replacement, according to a probability distribution based on the dot product of the query with the cluster centroids. When combined with the Top-K strategy, we ignore clusters selected by the Top-K strategy for sampling.
Rand-block: This strategy divides the memory into several blocks and uniformly samples a random block as candidate.
We experimented with 1000 clusters and 2000 clusters. While comparing the various training strategies, we made sure that the effective speedup is approximately the same. Memory access to facts per query for all the models is approximately 20,000, hence yielding a 5× speedup.
Top-K | Sample-K | rand-block | 1000 clusters: Test Acc. / epochs | 2000 clusters: Test Acc. / epochs
Yes   | No       | No         | 50.2 / 16                         | 51.5 / 22
No    | Yes      | No         | 52.5 / 68                         | 52.8 / 63
Yes   | Yes      | No         | 52.8 / 31                         | 53.1 / 26
Yes   | No       | Yes        | 51.8 / 32                         | 52.3 / 26
Yes   | Yes      | Yes        | 52.5 / 38                         | 52.7 / 19

Table 2: Accuracy on the SQ test set and number of epochs to convergence.
Results are given in Table 2. We observe that the best approach is to combine the Top-K and Sample-K strategies, with rand-block not being beneficial. Interestingly, the worst performances correspond to the cases where the Sample-K strategy is ignored.
# 6 Conclusion
In this paper, we proposed a hierarchical memory network that exploits K-MIPS for its attention-based reader. Unlike soft attention readers, the K-MIPS attention reader is easily scalable to larger memories. This is achieved by organizing the memory in a hierarchical way. Experiments on the SimpleQuestions dataset demonstrate that exact K-MIPS attention is better than soft attention. However, existing state-of-the-art approximate K-MIPS techniques provide a speedup at the cost of some accuracy. Future research will investigate designing efficient dynamic K-MIPS algorithms, where the memory can be dynamically updated during training. This should reduce the approximation bias and hence improve the overall performance.
# References
[1] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
[2] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings Of The International Conference on Representation Learning (ICLR 2015), 2015. In Press.
[3] Ankit Kumar et al. Ask me anything: Dynamic memory networks for natural language processing. CoRR, abs/1506.07285, 2015.
[4] Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.
[5] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
[6] Parikshit Ram and Alexander G. Gray. Maximum inner-product search using cone trees. KDD '12, pages 931–939, 2012.
[7] Yoram Bachrach et al. Speeding up the Xbox recommender system using a Euclidean transformation for inner-product spaces. RecSys '14, pages 257–264, 2014.
[8] Anshumali Shrivastava and Ping Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems 27, pages 2321–2329, 2014.
[9] Anshumali Shrivastava and Ping Li. Improved asymmetric locality sensitive hashing (ALSH) for maximum inner product search (MIPS). In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2015.
[10] Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric LSHs for inner product search. In Proceedings of the 31st International Conference on Machine Learning, 2015.
[11] Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, and Yoshua Bengio. Clustering is efficient for approximate maximum inner product search. arXiv preprint arXiv:1507.05910, 2015.
[12] Shi Zhong. Efficient online spherical k-means clustering. In Neural Networks, 2005. IJCNN'05. Proceedings. 2005 IEEE International Joint Conference on, volume 5, pages 3180–3185. IEEE, 2005.
[13] Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.
[14] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015.
[15] Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931, 2015.
[16] Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016.
[17] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. CoRR, abs/1505.00521, 2015.
[18] Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In Robert G. Cowell and Zoubin Ghahramani, editors, Proceedings of AISTATS, pages 246–252, 2005.
[19] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
[20] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations, Workshop Track, 2013.
[21] Sébastien Jean, KyungHyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of ACL 2015, pages 1–10, 2015.
[22] Ryan Spring and Anshumali Shrivastava. Scalable and sustainable deep learning via randomized hashing. CoRR, abs/1602.08194, 2016.
[23] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in NIPS, pages 2787–2795, 2013.
[24] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[25] Sudheendra Vijayanarasimhan, Jon Shlens, Rajat Monga, and Jay Yagnik. Deep networks with large output spaces. arXiv preprint arXiv:1412.7479, 2014.
| {
"id": "1507.05910"
} |
1605.07683 | Learning End-to-End Goal-Oriented Dialog | Traditional dialog systems used in goal-oriented applications require a lot
of domain-specific handcrafting, which hinders scaling up to new domains.
End-to-end dialog systems, in which all components are trained from the dialogs
themselves, escape this limitation. But the encouraging success recently
obtained in chit-chat dialog may not carry over to goal-oriented settings. This
paper proposes a testbed to break down the strengths and shortcomings of
end-to-end dialog systems in goal-oriented applications. Set in the context of
restaurant reservation, our tasks require manipulating sentences and symbols,
so as to properly conduct conversations, issue API calls and use the outputs of
such calls. We show that an end-to-end dialog system based on Memory Networks
can reach promising, yet imperfect, performance and learn to perform
non-trivial operations. We confirm those results by comparing our system to a
hand-crafted slot-filling baseline on data from the second Dialog State
Tracking Challenge (Henderson et al., 2014a). We show similar result patterns
on data extracted from an online concierge service. | http://arxiv.org/pdf/1605.07683 | Antoine Bordes, Y-Lan Boureau, Jason Weston | cs.CL | Accepted as a conference paper at ICLR 2017 | null | cs.CL | 20160524 | 20170330 |
Published as a conference paper at ICLR 2017
# LEARNING END-TO-END GOAL-ORIENTED DIALOG
Antoine Bordes, Y-Lan Boureau & Jason Weston
Facebook AI Research, New York, USA
{abordes, ylan, jase}@fb.com
# ABSTRACT
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols in order to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
# 1 INTRODUCTION
The most useful applications of dialog systems such as digital personal assistants or bots are currently goal-oriented and transactional: the system needs to understand a user request and complete a related task with a clear goal within a limited number of dialog turns. The workhorse of traditional dialog systems is slot-filling (Lemon et al., 2006; Wang and Lemon, 2013; Young et al., 2013), which predefines the structure of a dialog state as a set of slots to be filled during the dialog. For a restaurant reservation system, such slots can be the location, price range or type of cuisine of a restaurant. Slot-filling has proven reliable but is inherently hard to scale to new domains: it is impossible to manually encode all features and slots that users might refer to in a conversation.
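As a concrete illustration, the sketch below shows what such a dialog state might look like in code; the slot names and the update interface are illustrative assumptions, not an implementation of any specific system discussed in this paper.

```python
# Minimal sketch of a slot-filling dialog state for restaurant
# reservation; slot names and the NLU interface are illustrative.

RESTAURANT_SLOTS = ("location", "price_range", "cuisine")

class SlotFillingState:
    def __init__(self):
        self.slots = {name: None for name in RESTAURANT_SLOTS}

    def update(self, detected):
        # `detected` maps slot names to values extracted from the last
        # user utterance by some (hand-crafted) understanding step.
        for name, value in detected.items():
            if name in self.slots:
                self.slots[name] = value

    def missing(self):
        return [name for name, value in self.slots.items() if value is None]

state = SlotFillingState()
state.update({"cuisine": "british", "location": "london"})
print(state.missing())  # -> ['price_range']
```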
End-to-end dialog systems, usually based on neural networks (Shang et al., 2015; Vinyals and Le, 2015; Sordoni et al., 2015; Serban et al., 2015a; Dodge et al., 2016), escape such limitations: all their components are directly trained on past dialogs, with no assumption on the domain or dialog state structure, thus making it easy to automatically scale up to new domains. They have shown promising performance in non goal-oriented chit-chat settings, where they were trained to predict the next utterance in social media and forum threads (Ritter et al., 2011; Wang et al., 2013; Lowe et al., 2015) or movie conversations (Banchs, 2012). But the performance achieved on chit-chat may not necessarily carry over to goal-oriented conversations. As illustrated in Figure 1 in a restaurant reservation scenario, conducting goal-oriented dialog requires skills that go beyond language modeling, e.g., asking questions to clearly define a user request, querying Knowledge Bases (KBs), interpreting results from queries to display options to users or completing a transaction. This makes it hard to ascertain how well end-to-end dialog models would do, especially since evaluating chit-chat performance in itself is not straightforward (Liu et al., 2016). In particular, it is unclear if end-to-end models are in a position to replace traditional dialog methods in a goal-directed setting: can end-to-end dialog models be competitive with traditional methods even in the well-defined narrow-domain tasks where they excel? If not, where do they fall short?
This paper aims to make it easier to address these questions by proposing an open resource to test end- to-end dialog systems in a way that 1) favors reproducibility and comparisons, and 2) is lightweight and easy to use. We aim to break down a goal-directed objective into several subtasks to test some crucial capabilities that dialog systems should have (and hence provide error analysis by design).
Figure 1: Goal-oriented dialog tasks. A user (in green) chats with a bot (in blue) to book a table at a restaurant. Models must predict bot utterances and API calls (in dark red). Task 1 tests the capacity of interpreting a request and asking the right questions to issue an API call. Task 2 checks the ability to modify an API call. Task 3 and 4 test the capacity of using outputs from an API call (in light red) to propose options (sorted by rating) and to provide extra-information. Task 5 combines everything.
In the spirit of the bAbI tasks conceived as question answering testbeds (Weston et al., 2015b), we designed a set of five tasks within the goal-oriented context of restaurant reservation. Grounded with an underlying KB of restaurants and their properties (location, type of cuisine, etc.), these tasks cover several dialog stages and test if models can learn various abilities such as performing dialog management, querying KBs, interpreting the output of such queries to continue the conversation or dealing with new entities not appearing in dialogs from the training set. In addition to showing how the set of tasks we propose can be used to test the goal-directed capabilities of an end-to-end dialog system, we also propose results on two additional datasets extracted from real interactions with users, to confirm that the pattern of results observed in our tasks is indeed a good proxy for what would be observed on real data, with the added benefit of better reproducibility and interpretability.
The goal here is explicitly not to improve the state of the art in the narrow domain of restaurant booking, but to take a narrow domain where traditional handcrafted dialog systems are known to perform well, and use that to gauge the strengths and weaknesses of current end-to-end systems with no domain knowledge. Solving our tasks requires manipulating both natural language and symbols from a KB. Evaluation uses two metrics, per-response and per-dialog accuracies, the latter tracking completion of the actual goal. Figure 1 depicts the tasks and Section 3 details them. Section 4 compares multiple methods on these tasks. As an end-to-end neural model, we tested Memory Networks (Weston et al., 2015a), an attention-based architecture that has proven competitive for non goal-oriented dialog (Dodge et al., 2016). Our experiments in Section 5 show that Memory Networks can be trained to perform non-trivial operations such as issuing API calls to KBs and manipulating entities unseen in training. We confirm our findings on real human-machine dialogs
Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (*) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words.
| | T1 | T2 | T3 | T4 | T5 | T6 | Concierge |
|---|---|---|---|---|---|---|---|
| Number of utterances | 12 | 17 | 43 | 15 | 55 | 54 | 8 |
| - user utterances | 5 | 7 | 7 | 4 | 13 | 6 | 4 |
| - bot utterances | 7 | 10 | 10 | 4 | 18 | 8 | 4 |
| - outputs from API calls | 0 | 0 | 23 | 7 | 24 | 40 | 0 |
| Vocabulary size | 3,747 (shared by T1-5) | | | | | 1,229 | 8,629 |
| Candidate set size | 4,212 (shared by T1-5) | | | | | 2,406 | 11,482 |
| Training dialogs | 1,000 (shared by T1-5) | | | | | 1,618 | 3,249 |
| Validation dialogs | 1,000 (shared by T1-5) | | | | | 500 | 403 |
| Test dialogs | 1,000(*) (shared by T1-5) | | | | | 1,117 | 402 |
from the restaurant reservation dataset of the 2nd Dialog State Tracking Challenge, or DSTC2 (Henderson et al., 2014a), which we converted into our task format, showing that Memory Networks can outperform a dedicated slot-filling rule-based baseline. We also evaluate on a dataset of human-human dialogs extracted from an online concierge service that books restaurants for users. Overall, the per-response performance is encouraging, but the per-dialog one remains low, indicating that end-to-end models still need to improve before being able to reliably handle goal-oriented dialog.
# 2 RELATED WORK
The most successful goal-oriented dialog systems model conversation as partially observable Markov decision processes (POMDP) (Young et al., 2013). However, despite recent efforts to learn modules (Henderson et al., 2014b), they still require many hand-crafted features for the state and action space representations, which restrict their usage to narrow domains. Our simulation, used to generate goal-oriented datasets, can be seen as an equivalent of the user simulators used to train POMDP (Young et al., 2013; Pietquin and Hastie, 2013), but for training end-to-end systems.
Serban et al. (2015b) list available corpora for training dialog systems. Unfortunately, no good resources exist to train and test end-to-end models in goal-oriented scenarios. Goal-oriented datasets are usually designed to train or test dialog state tracker components (Henderson et al., 2014a) and are hence of limited scale and not suitable for end-to-end learning (annotated at the state level and noisy). However, we do convert the Dialog State Tracking Challenge data into our framework. Some datasets are not open source, and require a particular license agreement or the participation to a challenge (e.g., the end-to-end task of DSTC4 (Kim et al., 2016)) or are proprietary (e.g., Chen et al. (2016)). Datasets are often based on interactions between users and existing systems (or ensemble of systems) like DSTC datasets, SFCore (Gašic et al., 2014) or ATIS (Dahl et al., 1994). This creates noise and makes it harder to interpret the errors of a model. Lastly, resources designed to connect dialog systems to users, in particular in the context of reinforcement learning, are usually built around a crowdsourcing setting such as Amazon Mechanical Turk, e.g., (Hixon et al., 2015; Wen et al., 2015; Su et al., 2015a;b). While this has clear advantages, it prevents reproducibility and consistent comparisons of methods in the exact same setting.
The closest resource to ours might be the set of tasks described in (Dodge et al., 2016), since some of them can be seen as goal-oriented. However, those are question answering tasks rather than dialog, i.e. the bot only responds with answers, never questions, which does not reflect full conversation.
# 3 GOAL-ORIENTED DIALOG TASKS
All our tasks involve a restaurant reservation system, where the goal is to book a table at a restaurant. The first five tasks are generated by a simulation, the last one uses real human-bot dialogs. The data for all tasks is available at http://fb.ai/babi. We also give results on a proprietary dataset extracted from an online restaurant reservation concierge service with anonymized users.
3.1 RESTAURANT RESERVATION SIMULATION
The simulation is based on an underlying KB, whose facts contain the restaurants that can be booked and their properties. Each restaurant is defined by a type of cuisine (10 choices, e.g., French, Thai), a location (10 choices, e.g., London, Tokyo), a price range (cheap, moderate or expensive) and a rating (from 1 to 8). For simplicity, we assume that each restaurant only has availability for a single party size (2, 4, 6 or 8 people). Each restaurant also has an address and a phone number listed in the KB.
The KB can be queried using API calls, which return the list of facts related to the corresponding restaurants. Each query must contain four fields: a location, a type of cuisine, a price range and a party size. It can return facts concerning one, several or no restaurant (depending on the party size).
Using the KB, conversations are generated in the format shown in Figure 1. Each example is a dialog comprising utterances from a user and a bot, as well as API calls and the resulting facts. Dialogs are generated after creating a user request by sampling an entry for each of the four required fields: e.g. the request in Figure 1 is [cuisine: British, location: London, party size: six, price range: expensive]. We use natural language patterns to create user and bot utterances. There are 43 patterns for the user and 20 for the bot (the user can use up to 4 ways to say something, while the bot always uses the same). Those patterns are combined with the KB entities to form thousands of different utterances.
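A rough sketch of this generation step is shown below; the entity values and the two template strings are invented stand-ins for the KB entries and the 43 user patterns, chosen only to illustrate the mechanism.

```python
import random

# Sketch of sampling a user request and rendering it with a natural
# language pattern; value lists and patterns are illustrative examples.

CUISINES = ["british", "french", "indian", "italian", "thai"]
LOCATIONS = ["london", "paris", "tokyo", "rome", "bangkok"]
PRICES = ["cheap", "moderate", "expensive"]
PARTY_SIZES = ["two", "four", "six", "eight"]

def sample_request():
    return {
        "cuisine": random.choice(CUISINES),
        "location": random.choice(LOCATIONS),
        "price": random.choice(PRICES),
        "party_size": random.choice(PARTY_SIZES),
    }

USER_PATTERNS = [
    "i'd like to book a table for {party_size} people with {cuisine} food "
    "in {location} in a {price} price range",
    "may i have a {price} restaurant with {cuisine} food for {party_size} "
    "in {location}",
]

request = sample_request()
print(random.choice(USER_PATTERNS).format(**request))
```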
3.1.1 TASK DEFINITIONS
We now detail each task. Tasks 1 and 2 test dialog management to see if end-to-end systems can learn to implicitly track dialog state (never given explicitly), whereas Tasks 3 and 4 check if they can learn to use KB facts in a dialog setting. Task 3 also requires learning to sort. Task 5 combines all tasks.
Task 1: Issuing API calls A user request implicitly defines a query that can contain from 0 to 4 of the required fields (sampled uniformly; in Figure 1, it contains 3). The bot must ask questions for filling the missing fields and eventually generate the correct corresponding API call. The bot asks for information in a deterministic order, making prediction possible.
Task 2: Updating API calls Starting by issuing an API call as in Task 1, users then ask to update their requests between 1 and 4 times (sampled uniformly). The order in which fields are updated is random. The bot must ask users if they are done with their updates and issue the updated API call.
Task 3: Displaying options Given a user request, we query the KB using the corresponding API call and add the facts resulting from the call to the dialog history. The bot must propose options to users by listing the restaurant names sorted by their corresponding rating (from highest to lowest) until users accept. For each option, users have a 25% chance of accepting. If they do, the bot must stop displaying options, otherwise propose the next one. Users always accept the option if this is the last remaining one. We only keep examples with API calls retrieving at least 3 options.
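A possible rendering of this option-display loop, under the assumption that results are simple dicts with a name and a rating, is sketched below; the utterance strings are illustrative.

```python
import random

# Sketch of the Task 3 option loop: propose restaurants by decreasing
# rating until the simulated user accepts (25% chance per option,
# always accepting the last one). Data format is an assumption.

def display_options(results, accept_prob=0.25):
    options = sorted(results, key=lambda r: r["rating"], reverse=True)
    transcript = []
    for i, option in enumerate(options):
        transcript.append(f"what do you think of this option: {option['name']}")
        if i == len(options) - 1 or random.random() < accept_prob:
            break
        transcript.append("no, i don't like that")
    return transcript

results = [{"name": "resto_1", "rating": 6}, {"name": "resto_2", "rating": 8}]
print(display_options(results))
```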
Task 4: Providing extra information Given a user request, we sample a restaurant and start the dialog as if users had agreed to book a table there. We add all KB facts corresponding to it to the dialog. Users then ask for the phone number of the restaurant, its address or both, with proportions 25%, 25% and 50% respectively. The bot must learn to use the KB facts correctly to answer.
Task 5: Conducting full dialogs We combine Tasks 1-4 to generate full dialogs just as in Figure 1. Unlike in Task 3, we keep examples if API calls return at least 1 option instead of 3.
3.1.2 DATASETS
We want to test how well models handle entities appearing in the KB but not in the dialog training sets. We split types of cuisine and locations in half, and create two KBs, one with all facts about restaurants within the first halves and one with the rest. This yields two KBs of 4,200 facts and 600 restaurants each (5 types of cuisine × 5 locations × 3 price ranges × 8 ratings) that only share price ranges, ratings and party sizes, but have disjoint sets of restaurants, locations, types of cuisine, phones and addresses. We use one of the KBs to generate the standard training, validation and test dialogs, and use the other KB only to generate test dialogs, termed Out-Of-Vocabulary (OOV) test sets.
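The KB split can be sketched as follows; the cuisine and location names are hypothetical placeholders, and only the counts (5 × 5 × 3 × 8 = 600 restaurants per KB) follow the text.

```python
import itertools

# Sketch of the KB split: cuisines and locations are halved, and each
# half generates 5 x 5 x 3 x 8 = 600 restaurants. Names are invented.

cuisines = ["british", "french", "indian", "italian", "thai",
            "cantonese", "korean", "spanish", "turkish", "vietnamese"]
locations = ["london", "paris", "tokyo", "rome", "bangkok",
             "seoul", "madrid", "beijing", "bombay", "hanoi"]
prices = ["cheap", "moderate", "expensive"]
ratings = range(1, 9)

def build_kb(cuisine_half, location_half):
    kb = []
    for c, l, p, r in itertools.product(cuisine_half, location_half,
                                        prices, ratings):
        kb.append({"name": f"resto_{l}_{p}_{c}_{r}stars",
                   "cuisine": c, "location": l, "price": p, "rating": r})
    return kb

kb_train = build_kb(cuisines[:5], locations[:5])
kb_oov = build_kb(cuisines[5:], locations[5:])
print(len(kb_train), len(kb_oov))  # -> 600 600
```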
For training, systems have access to the training examples and both KBs. We then evaluate on both test sets, plain and OOV. Beyond the intrinsic difficulty of each task, the challenge on the OOV test
sets is for models to generalize to new entities (restaurants, locations and cuisine types) unseen in any training dialog â something natively impossible for embedding methods. Ideally, models could, for instance, leverage information coming from the entities of the same type seen during training.
We generate five datasets, one per task defined in 3.1.1. Table 1 gives their statistics. Training sets are relatively small (1,000 examples) to create realistic learning conditions. The dialogs from the training and test sets are different, never being based on the same user requests. Thus, we test if models can generalize to new combinations of fields. Dialog systems are evaluated in a ranking, not a generation, setting: at each turn of the dialog, we test whether they can predict bot utterances and API calls by selecting a candidate, not by generating it.1 Candidates are ranked from a set of all bot utterances and API calls appearing in training, validation and test sets (plain and OOV) for all tasks combined.
3.2 DIALOG STATE TRACKING CHALLENGE
Since our tasks rely on synthetically generated language for the user, we supplement our dataset with real human-bot dialogs. We use data from DSTC2 (Henderson et al., 2014a), that is also in the restaurant booking domain. Unlike our tasks, its user requests only require 3 fields: type of cuisine (91 choices), location (5 choices) and price range (3 choices). The dataset was originally designed for dialog state tracking hence every dialog turn is labeled with a state (a user intent + slots) to be predicted. As our goal is to evaluate end-to-end training, we did not use that, but instead converted the data into the format of our 5 tasks and included it in the dataset as Task 6.
We used the provided speech transcriptions to create the user and bot utterances, and given the dialog states we created the API calls to the KB and their outputs, which we added to the dialogs. We also added ratings to the restaurants returned by the API calls, so that the options proposed by the bots can be consistently predicted (by using the highest rating). We used the original test set but a slightly different training/validation split. Our evaluation differs from the challenge (we do not predict the dialog state), so we cannot compare with the results from (Henderson et al., 2014a).
This dataset has similar statistics to our Task 5 (see Table 1) but is harder. The dialogs are noisier and the bots made mistakes due to speech recognition errors or misinterpretations and also do not always have a deterministic behavior (the order in which they can ask for information varies).
3.3 ONLINE CONCIERGE SERVICE
Tasks 1-6 are, at least partially, artificial. This provides perfect control over their design (at least for Tasks 1-5), but no guarantee that good performance would carry over from such synthetic to more realistic conditions. To quantify this, we also evaluate the models from Section 4 on data extracted from a real online concierge service performing restaurant booking: users make requests through a text-based chat interface that are handled by human operators who can make API calls. All conversations are between native English speakers.
We collected around 4k chats to create this extra dataset, denoted Concierge. All conversations have been anonymized by (1) removing all user identifiers, (2) using the Stanford NER tagger to remove named entities (locations, timestamps, etc.), (3) running some manually defined regex to filter out any remaining salient information (phone numbers, etc.). The dataset does not contain results from API calls, but still records when operators made use of an external service (Yelp or OpenTable) to gather information. Hence, these have to be predicted, but without any argument (unlike in Task 2).
The statistics of Concierge are given in Table 1. The dialogs are shorter than in Tasks 1-6, especially since they do not include results of API calls, but the vocabulary is more diverse and so is the candidate set; the candidate set is made of all utterances of the operator appearing in the training, validation and test sets. Beyond the higher variability of the language used by human operators compared to bots, the dataset offers additional challenges. The set of user requests is much wider, ranging from managing restaurant reservations to asking for recommendations or specific information. Users do not always stay focused on the request. API calls are not always used (e.g., the operator might use neither Yelp nor OpenTable to find a restaurant), and facts about restaurants are not structured nor constrained as in a KB. The structure of dialogs is thus much more variable. Users and operators also make typos, spelling and grammar mistakes.
1 Lowe et al. (2016) termed this setting Next-Utterance-Classification.
# 4 MODELS
To demonstrate how to use the dataset and provide baselines, we evaluate several learning methods on our goal-oriented dialog tasks: rule-based systems, classical information retrieval methods, supervised embeddings, and end-to-end Memory networks.
4.1 RULE-BASED SYSTEMS
Our tasks T1-T5 are built with a simulator so as to be completely predictable. Thus it is possible to hand-code a rule-based system that achieves 100% on them, similar to the bAbI tasks of Weston et al. (2015b). Indeed, the point of these tasks is not to check whether a human is smart enough to be able to build a rule-based system to solve them, but to help analyze in which circumstances machine learning algorithms are smart enough to work, and where they fail.
However, the Dialog State Tracking Challenge task (T6) contains some real interactions with users. This makes rule-based systems less straightforward and not so accurate (which is where we expect machine learning to be useful). We implemented a rule-based system for this task in the following way. We initialized a dialog state using the 3 relevant slots for this task: cuisine type, location and price range. Then we analyzed the training data and wrote a series of rules that fire for triggers like word matches, positions in the dialog, entity detections or dialog state, to output particular responses, API calls and/or update a dialog state. Responses are created by combining patterns extracted from the training set with entities detected in the previous turns or stored in the dialog state. Overall we built 28 rules and extracted 21 patterns. We optimized the choice of rules and their application priority (when needed) using the validation set, reaching a validation per-response accuracy of 40.7%. We did not build a rule-based system for Concierge data as it is even less constrained.
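The toy sketch below conveys the flavor of such trigger-based rules; the two rules and the response pattern shown are invented examples, not among the 28 rules actually built.

```python
# Toy sketch of trigger-based rules: each rule fires on a word match
# and updates a dialog state; a response is then built from the state.

state = {"cuisine": None, "location": None, "price": None}

RULES = [
    (lambda u: "indian" in u, lambda u: state.update(cuisine="indian")),
    (lambda u: "paris" in u, lambda u: state.update(location="paris")),
]

def respond(utterance):
    for trigger, action in RULES:
        if trigger(utterance):
            action(utterance)
    missing = [k for k, v in state.items() if v is None]
    if missing:
        return f"which {missing[0]} are you looking for"
    return "api_call {cuisine} {location} {price}".format(**state)

print(respond("i love indian food in paris"))
# -> "which price are you looking for"
```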
4.2 CLASSICAL INFORMATION RETRIEVAL MODELS
Classical information retrieval (IR) models with no machine learning are standard baselines that often perform surprisingly well on dialog tasks (Isbell et al., 2000; Jafarpour et al., 2010; Ritter et al., 2011; Sordoni et al., 2015). We tried two standard variants:
TF-IDF Match For each possible candidate response, we compute a matching score between the input and the response, and rank the responses by score. The score is the TF-IDF weighted cosine similarity between the bag-of-words of the input and bag-of-words of the candidate response. We consider the case of the input being either only the last utterance or the entire conversation history, and choose the variant that works best on the validation set (typically the latter).
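A minimal version of this baseline can be written with scikit-learn as below; the candidate and history strings are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Sketch of TF-IDF Match: rank all candidate responses by TF-IDF
# weighted cosine similarity with the conversation history.

candidates = [
    "what do you think of this option: resto_1",
    "api_call indian paris six moderate",
    "hello what can i help you with today",
]
history = "may i have a table in paris i love indian food we will be six"

vectorizer = TfidfVectorizer().fit(candidates + [history])
scores = cosine_similarity(vectorizer.transform([history]),
                           vectorizer.transform(candidates))[0]
ranked = sorted(zip(scores, candidates), reverse=True)
print(ranked[0][1])  # best-scoring candidate
```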
Nearest Neighbor Using the input, we find the most similar conversation in the training set, and output the response from that example. In this case we consider the input to only be the last utterance, and consider the training set as (utterance, response) pairs that we select from. We use word overlap as the scoring method. When several responses are associated with the same utterance in training, we sort them by decreasing co-occurrence frequency.
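The sketch below illustrates this baseline with word overlap as the scoring method; the training pairs are invented.

```python
# Sketch of the Nearest Neighbor baseline: score stored (utterance,
# response) pairs by word overlap with the last user utterance and
# return the response paired with the best match.

train_pairs = [
    ("may i have a table in paris", "i'm on it"),
    ("i love indian food", "how many people would be in your party"),
]

def word_overlap(a, b):
    return len(set(a.split()) & set(b.split()))

def respond(last_utterance):
    return max(train_pairs,
               key=lambda pair: word_overlap(pair[0], last_utterance))[1]

print(respond("could i have a table in paris please"))  # -> "i'm on it"
```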
4.3 SUPERVISED EMBEDDING MODELS
A standard, often strong, baseline is to use supervised word embedding models for scoring (conversation history, response) pairs. The embedding vectors are trained directly for this goal. In contrast, word embeddings are most well-known in the context of unsupervised training on raw text as in word2vec (Mikolov et al., 2013). Such models are trained by learning to predict the middle word given the surrounding window of words, or vice-versa. However, given training data consisting of dialogs, a much more direct and strongly performing training procedure can be used: predict the next response given the previous conversation. In this setting a candidate response y is scored against the input x: f(x, y) = (Ax)^⊤ By, where A and B are d × V word embedding matrices, i.e. input and response are treated as summed bags-of-embeddings. We also consider the case of enforcing A = B, which sometimes works better, and optimize the choice on the validation set.
The embeddings are trained with a margin ranking loss: f(x, y) > m + f(x, ȳ), with m the size of the margin, and we sample N negative candidate responses ȳ per example, and train with SGD. This approach has been previously shown to be very effective in a range of contexts (Bai et al., 2009;
Dodge et al., 2016). This method can be thought of as a classical information retrieval model, but where the matching function is learnt.
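In code, the scorer f(x, y) = (Ax)^⊤ By and the margin ranking loss can be sketched as follows; dimensions and weights are toy values, and the training loop itself is omitted.

```python
import numpy as np

# Minimal numpy sketch of the supervised embedding scorer and its
# margin ranking loss; V, d and the initialization are toy values.

V, d = 1000, 32          # vocabulary size, embedding dimension
rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((d, V))
B = 0.1 * rng.standard_normal((d, V))

def bag_of_words(word_ids):
    x = np.zeros(V)
    for i in word_ids:
        x[i] += 1.0
    return x

def score(x, y):
    # f(x, y) = (A x)^T (B y): inner product of summed embedding bags
    return (A @ x) @ (B @ y)

def margin_loss(x, y_pos, y_negs, m=0.1):
    s_pos = score(x, y_pos)
    # hinge on each of the N sampled negative candidate responses
    return sum(max(0.0, m + score(x, y_neg) - s_pos) for y_neg in y_negs)

x = bag_of_words([1, 5, 7])
print(margin_loss(x, bag_of_words([2, 3]), [bag_of_words([4])]))
```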
4.4 MEMORY NETWORKS
Memory Networks (Weston et al., 2015a; Sukhbaatar et al., 2015) are a recent class of models that have been applied to a range of natural language processing tasks, including question answering (Weston et al., 2015b), language modeling (Sukhbaatar et al., 2015), and non-goal-oriented dialog (Dodge et al., 2016). By first writing and then iteratively reading from a memory component (using hops) that can store historical dialogs and short-term context to reason about the required response, they have been shown to perform well on those tasks and to outperform some other end-to-end architectures based on Recurrent Neural Networks. Hence, we chose them as end-to-end model baseline.
We use the MemN2N architecture of Sukhbaatar et al. (2015), with an additional modification to leverage exact matches and types, described shortly. Apart from that addition, the main components of the model are (i) how it stores the conversation in memory, (ii) how it reads from the memory to reason about the response; and (iii) how it outputs the response. The details are given in Appendix A.
4.5 MATCH TYPE FEATURES TO DEAL WITH ENTITIES
Words denoting entities have two important traits: 1) exact matches are usually more appropriate to deal with them than approximate matches, and 2) they frequently appear as OOV words (e.g., the name of a new restaurant). Both are a challenge for embedding-based methods. Firstly, embedding into a low dimensional space makes it hard to differentiate between exact word matches, and matches between words with similar meaning (Bai et al., 2009). While this can be a virtue (e.g. when using synonyms), it is often a flaw when dealing with entities (e.g. failure to differentiate between phone numbers since they have similar embeddings). Secondly, when a new word is used (e.g. the name of a new restaurant) not seen before in training, no word embedding is available, typically resulting in failure (Weston et al., 2015a).
Both problems can be alleviated with match type features. Specifically, we augment the vocabulary with 7 special words, one for each of the KB entity types (cuisine type, location, price range, party size, rating, phone number and address). For each type, the corresponding type word is added to the candidate representation if a word is found that appears 1) as a KB entity of that type, 2) in the candidate, and 3) in the input or memory. Any word that matches as a KB entity can be typed even if it has never been seen before in training dialogs. These features allow the model to learn to rely on type information using exact matching words cues when OOV entity embeddings are not known, as long as it has access to a KB with the OOV entities. We assess the impact of such features for TF-IDF Match, Supervised Embeddings and Memory Networks.
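The augmentation can be sketched as below; the KB contents and the special-token naming are illustrative assumptions.

```python
# Sketch of match type features: append a special type word to a
# candidate's bag-of-words when a KB entity of that type appears both
# in the candidate and in the dialog context. KB contents are invented.

KB_ENTITIES = {
    "resto_1_phone": "phone",
    "resto_1_address": "address",
    "paris": "location",
    "indian": "cuisine",
}

def add_match_type_features(candidate_words, context_words):
    augmented = list(candidate_words)
    for word in candidate_words:
        entity_type = KB_ENTITIES.get(word)
        if entity_type is not None and word in context_words:
            augmented.append(f"<match_{entity_type}>")
    return augmented

context = {"here", "is", "the", "phone", "resto_1_phone"}
print(add_match_type_features(["here", "it", "is", "resto_1_phone"], context))
# -> ['here', 'it', 'is', 'resto_1_phone', '<match_phone>']
```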
# 5 EXPERIMENTS
Our main results across all the models and tasks are given in Table 2 (extra results are also given in Table 10 of Appendix D). The first 5 rows show tasks T1-T5, and rows 6-10 show the same tasks in the out-of-vocabulary setting. Rows 11 and 12 give results for the Dialog State Tracking Challenge task (T6) and Concierge respectively. Columns 2-7 give the results of each method tried in terms of per-response accuracy and per-dialog accuracy, the latter given in parenthesis. Per-response accuracy counts the percentage of responses that are correct (i.e., the correct candidate is chosen out of all possible candidates). Per-dialog accuracy counts the percentage of dialogs where every response is correct. Ultimately, if only one response is incorrect this could result in a failed dialog, i.e. failure to achieve the goal (in this case, of achieving a restaurant booking). Note that we test Memory Networks (MemNNs) with and without match type features, the results are shown in the last two columns. The hyperparameters for all models were optimized on the validation sets; values for best performing models are given in Appendix C.
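The two metrics can be computed as in the following sketch, assuming each dialog is given as a list of (predicted, gold) response pairs.

```python
# Sketch of the two evaluation metrics: per-response accuracy over all
# turns, and per-dialog accuracy requiring every turn to be correct.

def evaluate(dialogs):
    n_turns = n_correct_turns = n_perfect_dialogs = 0
    for dialog in dialogs:
        correct = [pred == gold for pred, gold in dialog]
        n_turns += len(correct)
        n_correct_turns += sum(correct)
        n_perfect_dialogs += all(correct)
    return (n_correct_turns / n_turns,          # per-response accuracy
            n_perfect_dialogs / len(dialogs))   # per-dialog accuracy

dialogs = [[("a", "a"), ("b", "b")], [("a", "a"), ("b", "c")]]
print(evaluate(dialogs))  # -> (0.75, 0.5)
```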
The classical IR method TF-IDF Match performs the worst of all methods, and much worse than the Nearest Neighbor IR method, which is true on both the simulated tasks T1-T5 and on the real data of T6 and Concierge. Supplementing TF-IDF Match with match type features noticeably improves performance, which however still remains far behind Nearest Neighbor IR (adding bigrams to the
Table 2: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog State Tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parenthesis. (*) For Concierge, an example is considered correctly answered if the correct response is ranked among the top 10 candidates by the bot, to accommodate the much larger range of semantically equivalent responses among candidates (see ex. in Tab. 7). (†) We did not implement MemNNs+match type on Concierge, because this method requires a KB and there is none associated with it.
| Task | Rule-based Systems | TF-IDF Match (no type) | TF-IDF Match (+ type) | Nearest Neighbor | Supervised Embeddings | Memory Networks (no match type) | Memory Networks (+ match type) |
|---|---|---|---|---|---|---|---|
| T1: Issuing API calls | 100 (100) | 5.6 (0) | 22.4 (0) | 55.1 (0) | 100 (100) | 99.9 (99.6) | 100 (100) |
| T2: Updating API calls | 100 (100) | 3.4 (0) | 16.4 (0) | 68.3 (0) | 68.4 (0) | 100 (100) | 98.3 (83.9) |
| T3: Displaying options | 100 (100) | 8.0 (0) | 8.0 (0) | 58.8 (0) | 64.9 (0) | 74.9 (2.0) | 74.9 (0) |
| T4: Providing information | 100 (100) | 9.5 (0) | 17.8 (0) | 28.6 (0) | 57.2 (0) | 59.5 (3.0) | 100 (100) |
| T5: Full dialogs | 100 (100) | 4.6 (0) | 8.1 (0) | 57.1 (0) | 75.4 (0) | 96.1 (49.4) | 93.4 (19.7) |
| T1(OOV): Issuing API calls | 100 (100) | 5.8 (0) | 22.4 (0) | 44.1 (0) | 60.0 (0) | 72.3 (0) | 96.5 (82.7) |
| T2(OOV): Updating API calls | 100 (100) | 3.5 (0) | 16.8 (0) | 68.3 (0) | 68.3 (0) | 78.9 (0) | 94.5 (48.4) |
| T3(OOV): Displaying options | 100 (100) | 8.3 (0) | 8.3 (0) | 58.8 (0) | 65.0 (0) | 74.4 (0) | 75.2 (0) |
| T4(OOV): Providing inform. | 100 (100) | 9.8 (0) | 17.2 (0) | 28.6 (0) | 57.0 (0) | 57.6 (0) | 100 (100) |
| T5(OOV): Full dialogs | 100 (100) | 4.6 (0) | 9.0 (0) | 48.4 (0) | 58.2 (0) | 65.5 (0) | 77.7 (0) |
| T6: Dialog state tracking 2 | 33.3 (0) | 1.6 (0) | 1.6 (0) | 21.9 (0) | 22.6 (0) | 41.1 (0) | 41.0 (0) |
| Concierge(*) | n/a | 1.1 (0.2) | n/a | 13.4 (0.5) | 14.6 (0.5) | 16.7 (1.2) | n/a(†) |
dictionary has no effect on performance). This is in sharp contrast to other recent results on data-driven non-goal directed conversations, e.g. over dialogs on Twitter (Ritter et al., 2011) or Reddit (Dodge et al., 2016), where it was found that TF-IDF Match outperforms Nearest Neighbor, as general conversations on a given subject typically share many words. We conjecture that the goal-oriented nature of the conversation means that the conversation moves forward more quickly, sharing fewer words per (input, response) pair, e.g. consider the example in Figure 1.
Supervised embeddings outperform classical IR methods in general, indicating that learning mappings between words (via word embeddings) is important. However, only one task (T1, Issuing API calls) is completely successful. In the other tasks, some responses are correct, as shown by the per-response accuracy, however there is no dialog where the goal is actually achieved (i.e., the mean dialog accuracy is 0). Typically the model can provide correct responses for greeting messages, asking to wait, making API calls and asking if there are any other options necessary. However, it fails to interpret the results of API calls to display options, provide information or update the calls with new information, resulting in most of its errors, even when match type features are provided.
Memory Networks (without match type features) outperform classical IR and supervised embeddings across all of the tasks. They can solve the first two tasks (issuing and updating API calls) adequately. On the other tasks, they give improved results, but do not solve them. While the per-response accuracy is improved, the per-dialog accuracy is still close to 0 on T3 and T4. Some examples of predictions of the MemNN for T1-4 are given in Appendix B. On the OOV tasks again performance is improved, but this is all due to better performance on known words, as unknown words are simply not used without the match type features. As stated in Appendix C, optimal hyperparameters on several of the tasks involve 3 or 4 hops, indicating that iterative accessing and reasoning over the conversation helps, e.g. on T3 using 1 hop gives 64.8% while 2 hops yields 74.7%. Appendix B displays illustrative examples of Memory Networks predictions on T1-4 and Concierge.
Memory Networks with match type features give two performance gains over the same models without match type features: (i) T4 (providing information) becomes solvable because matches can be made to the results of the API call; and (ii) out-of-vocabulary results are significantly improved as well. Still, tasks T3 and T5 remain fail cases, performance drops slightly on T2 compared to not using match type features, and no relative improvement is observed on T6. Finally, note that matching words on its own is not enough, as evidenced by the poor performance of TF-IDF matching; this idea must be combined with types and the other properties of the MemNN model.
Unsurprisingly, perfectly coded rule-based systems can solve the simulated tasks T1-T5 perfectly, whereas our machine learning methods cannot. However, it is not easy to build an effective rule-based
system when dealing with real language on real problems, and our rule-based system is outperformed by MemNNs on the more realistic task T6.
Overall, while the methods we tried made some inroads into these tasks, there are still many challenges left unsolved. Our best models can learn to track implicit dialog states and manipulate OOV words and symbols (T1-T2) to issue API calls and progress in conversations, but they are still unable to perfectly handle interpreting knowledge about entities (from returned API calls) to present results to the user, e.g. displaying options in T3. The improvement observed on the simulated tasks, e.g. where MemNNs outperform supervised embeddings which in turn outperform IR methods, is also seen on the realistic data of T6 with similar relative gains. This is encouraging as it indicates that future work on breaking down, analysing and developing models over the simulated tasks should help in the real tasks as well. Results on Concierge confirm this observation: the pattern of relative performances of methods is the same on Concierge and on our series of tasks. This suggests that our synthetic data can indeed be used as an effective evaluation proxy.
# 6 CONCLUSION
We have introduced an open dataset and task set for evaluating end-to-end goal-oriented dialog learning methods in a systematic and controlled way. We hope this will help foster progress of end-to-end conversational agents because (i) existing measures of performance either prevent reproducibility (different Mechanical Turk jobs) or do not correlate well with human judgements (Liu et al., 2016); (ii) the breakdown in tasks will help focus research and development to improve the learning methods; and (iii) goal-oriented dialog has clear utility in real applications. We illustrated how to use the testbed using a variant of end-to-end Memory Networks, which prove an effective model on these tasks relative to other baselines, but are still lacking in some key areas.
ACKNOWLEDGMENTS
The authors would like to thank Martin Raison, Alex Lebrun and Laurent Landowski for their help with the Concierge data.
# REFERENCES
Bai, B., Weston, J., Grangier, D., Collobert, R., Sadamasa, K., Qi, Y., Chapelle, O., and Weinberger, K. (2009). Supervised semantic indexing. In Proceedings of ACM CIKM, pages 187–196. ACM.
Banchs, R. E. (2012). Movie-dic: a movie dialogue corpus for research and development. In Proceedings of the 50th Annual Meeting of the ACL.
Chen, Y.-N., Hakkani-Tür, D., Tur, G., Gao, J., and Deng, L. (2016). End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In Proceedings of Interspeech.
Dahl, D. A., Bates, M., Brown, M., Fisher, W., Hunicke-Smith, K., Pallett, D., Pao, C., Rudnicky, A., and Shriberg, E. (1994). Expanding the scope of the atis task: The atis-3 corpus. In Proceedings of the workshop on Human Language Technology, pages 43–48. Association for Computational Linguistics.
Dodge, J., Gane, A., Zhang, X., Bordes, A., Chopra, S., Miller, A., Szlam, A., and Weston, J. (2016). Evaluating prerequisite qualities for learning end-to-end dialog systems. In Proc. of ICLR.
Gašic, M., Kim, D., Tsiakoulis, P., Breslin, C., Henderson, M., Szummer, M., Thomson, B., and Young, S. (2014). Incremental on-line adaptation of pomdp-based dialogue managers to extended domains. In Proceedings on InterSpeech.
Henderson, M., Thomson, B., and Williams, J. (2014a). The second dialog state tracking challenge. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 263.
Henderson, M., Thomson, B., and Young, S. (2014b). Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292–299.
Hixon, B., Clark, P., and Hajishirzi, H. (2015). Learning knowledge graphs for question answering through conversational dialog. In Proceedings of the the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA.
Isbell, C. L., Kearns, M., Kormann, D., Singh, S., and Stone, P. (2000). Cobot in lambdamoo: A social statistics agent. In AAAI/IAAI, pages 36–41.
Jafarpour, S., Burges, C. J., and Ritter, A. (2010). Filter, rank, and transfer the knowledge: Learning to chat. Advances in Ranking, 10.
Kim, S., DâHaro, L. F., Banchs, R. E., Williams, J. D., and Henderson, M. (2016). The fourth dialog state tracking challenge. In Proceedings of the 7th International Workshop on Spoken Dialogue Systems (IWSDS).
Lemon, O., Georgila, K., Henderson, J., and Stuttle, M. (2006). An ISU dialogue system exhibiting reinforcement learning of dialogue policies: generic slot-filling in the TALK in-car system. In Proceedings of the 11th Conference of the European Chapter of the ACL: Posters & Demonstrations, pages 119–122.
Liu, C.-W., Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L., and Pineau, J. (2016). How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023.
Lowe, R., Pow, N., Serban, I., and Pineau, J. (2015). The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909.
Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L., and Pineau, J. (2016). On the evaluation of dialogue systems with next utterance classification. arXiv preprint arXiv:1605.05414.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv:1301.3781.
Pietquin, O. and Hastie, H. (2013). A survey on metrics for the evaluation of user simulations. The knowledge engineering review, 28(01), 59–73.
Ritter, A., Cherry, C., and Dolan, W. B. (2011). Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Serban, I. V., Sordoni, A., Bengio, Y., Courville, A., and Pineau, J. (2015a). Building end-to-end dialogue systems using generative hierarchical neural network models. In Proc. of the AAAI Conference on Artificial Intelligence.
Serban, I. V., Lowe, R., Charlin, L., and Pineau, J. (2015b). A survey of available corpora for building data-driven dialogue systems. arXiv preprint arXiv:1512.05742.
Shang, L., Lu, Z., and Li, H. (2015). Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364.
Sordoni, A., Galley, M., Auli, M., Brockett, C., Ji, Y., Mitchell, M., Nie, J.-Y., Gao, J., and Dolan, B. (2015). A neural network approach to context-sensitive generation of conversational responses. Proceedings of NAACL.
Su, P.-H., Vandyke, D., Gasic, M., Kim, D., Mrksic, N., Wen, T.-H., and Young, S. (2015a). Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. arXiv preprint arXiv:1508.03386.
Su, P.-H., Vandyke, D., Gasic, M., Mrksic, N., Wen, T.-H., and Young, S. (2015b). Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. arXiv preprint arXiv:1508.03391.
Sukhbaatar, S., Szlam, A., Weston, J., and Fergus, R. (2015). End-to-end memory networks. Proceedings of NIPS.
Vinyals, O. and Le, Q. (2015). A neural conversational model. arXiv preprint arXiv:1506.05869.
Wang, H., Lu, Z., Li, H., and Chen, E. (2013). A dataset for research on short-text conversations. In EMNLP.
Wang, Z. and Lemon, O. (2013). A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Proceedings of the SIGDIAL 2013 Conference.
Wen, T.-H., Gasic, M., Mrksic, N., Su, P.-H., Vandyke, D., and Young, S. (2015). Semantically conditioned lstm-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745.
Weston, J., Chopra, S., and Bordes, A. (2015a). Memory networks. Proceedings of ICLR.
Weston, J., Bordes, A., Chopra, S., and Mikolov, T. (2015b). Towards ai-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.
Young, S., Gasic, M., Thomson, B., and Williams, J. D. (2013). Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5), 1160–1179.
# A MEMORY NETWORKS IMPLEMENTATION
Storing and representing the conversation history As the model conducts a conversation with the user, at each time step t the previous utterance (from the user) and response (from the model) are appended to the memory. Hence, at any given time there are c^u_1, c^r_1, ..., c^u_{t-1}, c^r_{t-1} user utterances and model responses stored (i.e. the entire conversation).² The aim at time t is thus to choose the next response c^r_t. We train on existing full dialog transcripts, so at training time we know the upcoming utterance c^r_t and can use it as a training target. Following Dodge et al. (2016), we represent each utterance as a bag-of-words and in memory it is represented as a vector using the embedding matrix A, i.e. the memory is an array with entries:
m = (AΦ(c^u_1), AΦ(c^r_1), ..., AΦ(c^u_{t-1}), AΦ(c^r_{t-1}))
where Φ(·) maps the utterance to a bag of dimension V (the vocabulary), and A is a d × V matrix, where d is the embedding dimension. We retain the last user utterance c^u_t as the "input" to be used directly in the controller. The contents of each memory slot m_i so far do not contain any information of which speaker spoke an utterance, and at what time during the conversation. We therefore encode both of those pieces of information in the mapping Φ by extending the vocabulary to contain T = 1000 extra "time features" which encode the index i into the bag-of-words, and two more features that encode whether the utterance was spoken by the user or the model.
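The extended mapping Φ can be sketched as follows; the toy vocabulary and the exact feature layout are illustrative.

```python
# Sketch of the extended bag-of-words mapping: the base vocabulary is
# followed by T time features and 2 speaker features. Layout is an
# illustrative assumption.

def phi(utterance, index, speaker, vocab, T=1000):
    v = [0.0] * (len(vocab) + T + 2)
    for w in utterance.split():
        if w in vocab:
            v[vocab[w]] += 1.0
    v[len(vocab) + index] = 1.0                              # time feature
    v[len(vocab) + T + (0 if speaker == "user" else 1)] = 1.0  # speaker
    return v

vocab = {"hello": 0, "table": 1, "paris": 2}
print(sum(phi("table in paris", index=3, speaker="user", vocab=vocab)))
# -> 4.0 (two vocab words + time feature + speaker feature)
```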
Attention over the memory The last user utterance c^u_t is embedded using the same matrix A, giving q = AΦ(c^u_t), which can also be seen as the initial state of the controller. At this point the controller reads from the memory to find salient parts of the previous conversation that are relevant to producing a response. The match between q and the memories is computed by taking the inner product followed by a softmax: p_i = Softmax(q^⊤ m_i), giving a probability vector over the memories. The vector that is returned back to the controller is then computed by o = R Σ_i p_i m_i, where R is a d × d square matrix. The controller state is then updated with q_2 = o + q. The memory can be iteratively reread to look for additional pertinent information using the updated state of the controller q_2 instead of q, and in general using q_h on iteration h, with a fixed number of iterations N (termed N hops). Empirically we find improved performance on our tasks with up to 3 or 4 hops.
Choosing the response The final prediction is then defined as:
â = Softmax(q_{N+1}^⊤ WΦ(y_1), ..., q_{N+1}^⊤ WΦ(y_C))
where there are C candidate responses in y, and W is of dimension d à V . In our tasks the set y is a (large) set of candidate responses which includes all possible bot utterances and API calls.
The entire model is trained using stochastic gradient descent (SGD), minimizing a standard cross-entropy loss between â and the true label a.
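Putting the pieces above together, a single forward pass can be sketched in numpy as below; shapes are toy values, weights are random, and training, match type features and the time-feature encoding are omitted.

```python
import numpy as np

# Minimal sketch of one MemN2N forward pass following the equations
# above; all dimensions and weights are toy illustrative values.

V, d, T, C, N_HOPS = 500, 32, 10, 6, 3
rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((d, V))   # memory / input embeddings
R = 0.1 * rng.standard_normal((d, d))   # controller update matrix
W = 0.1 * rng.standard_normal((d, V))   # candidate embeddings

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(memory_bags, input_bag, candidate_bags):
    m = np.stack([A @ phi for phi in memory_bags])  # (T, d) memory
    q = A @ input_bag                               # controller state
    for _ in range(N_HOPS):
        p = softmax(m @ q)                          # attention over memory
        o = R @ (p @ m)                             # read vector
        q = q + o                                   # controller update
    scores = np.array([q @ (W @ y) for y in candidate_bags])
    return softmax(scores)                          # distribution over candidates

mem = [rng.integers(0, 2, V).astype(float) for _ in range(T)]
inp = rng.integers(0, 2, V).astype(float)
cands = [rng.integers(0, 2, V).astype(float) for _ in range(C)]
print(forward(mem, inp, cands).argmax())
```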
# B EXAMPLES OF PREDICTIONS OF A MEMORY NETWORK
Tables 3, 4, 5 and 6 display examples of predictions of the best performing Memory Network on full dialogs, Task 5, (with 3 hops) on test examples of Tasks 1-4 along with the values of the attention over each memory for each hop (p_i as defined in Sec. A). This model does not use match type features. Then, Table 7 displays an example of prediction of the best performing Memory Network on Concierge (with 2 hops) on a test example along with the values of the attention over each memory for each hop.
# C HYPERPARAMETERS
Tables 8 and 9 respectively display the values of the hyperparameters of the best Supervised Embeddings and Memory Networks selected for each task. These models were selected as the best performing on the validation sets.
# D ADDITIONAL RESULTS
Table 10 provides results for additional variants of supervised embeddings, using either a dictionary that includes all bigrams to leverage some word order information, or match type features. On some tasks, supervised embeddings perform better when the last user utterance is used as sole input, without the full dialog history (see Table 8). When no history is used, we slightly adapt match type features to only record type: a special word corresponding to type T (e.g., phone, address, etc) is appended to the representation of a candidate if the
² API calls are stored as bot utterances c^r_i, and KB facts resulting from such calls as user utterances c^u_i.
Table 3: Task 1 (Issue API call) The model learns to direct its attention towards the 4 memories containing the information key to issue the API call. More hops help to strengthen this signal. <silence> is a special token used to indicate that the user did not speak at this turn; the model has to carry out the conversation with no additional input.
| Time | Locutor | Dialog history | Hop #1 | Hop #2 | Hop #3 |
|---|---|---|---|---|---|
| 1 | User | hi | .016 | .054 | .020 |
| 2 | Bot | hello what can i help you with today | .024 | .040 | .008 |
| 3 | User | may i have a table in paris | .137 | .099 | .141 |
| 4 | Bot | i'm on it | .028 | .048 | .004 |
| 5 | User | <silence> | .099 | .181 | .037 |
| 6 | Bot | any preference on a type of cuisine | .090 | .056 | .014 |
| 7 | User | i love indian food | .188 | .159 | .238 |
| 8 | Bot | how many people would be in your party | .022 | .051 | .010 |
| 9 | User | we will be six | .144 | .129 | .114 |
| 10 | Bot | which price range are looking for | .028 | .039 | .006 |
| 11 | User | in a moderate price range please | .213 | .128 | .405 |
| 12 | Bot | ok let me look into some options for you | .011 | .016 | .003 |

User input: <silence>
Correct answer: api_call indian paris six moderate
Predicted answer: api_call indian paris six moderate [Correct]
Table 4: Task 2 (Update API call) Out of the multiple memories from the current dialog, the model correctly focuses on the 2 important pieces: the original API call and the utterance giving the update.
[Table 4 body: per-memory attention weights over 16 memories for hops 1-3; the corresponding dialog-history entries were lost in extraction.]
Table 5: Task 3 (Displaying options) The model knows it has to display options but the attention is wrong: it should attend on the ratings to select the best option (with highest rating). It cannot learn that properly and match type features do not help. It is correct here by luck, the task is not solved overall (see Tab. 2). We do not show all memories in the table, only those with meaningful attention.
| Time | Locutor | Dialog history | Hop #1 | Hop #2 | Hop #3 |
|---|---|---|---|---|---|
| 14 | Bot | api_call indian paris six moderate | .000 | .012 | .000 |
| 15 | User | instead could it be with french food | .103 | .067 | .147 |
| 20 | Bot | api_call french paris six moderate | .000 | .012 | .000 |
| 21 | User | resto_1 r_phone rest_1_phone | .004 | .018 | .000 |
| 23 | User | resto_1 r_cuisine french | .005 | .029 | .000 |
| 24 | User | resto_1 r_location paris | .292 | .060 | .094 |
| 25 | User | resto_1 r_number six | .298 | .050 | .745 |
| 26 | User | resto_1 r_price moderate | .090 | .060 | .002 |
| 27 | User | resto_1 r_rating 6 | .002 | .016 | .000 |
| 30 | User | resto_2 r_cuisine french | .007 | .031 | .000 |
| 31 | User | resto_2 r_location paris | .081 | .040 | .004 |
| 32 | User | resto_2 r_number six | .012 | .020 | .000 |
| 33 | User | resto_2 r_price moderate | .009 | .029 | .000 |
| 37 | User | resto_3 r_cuisine french | .001 | .014 | .000 |
| 38 | User | resto_3 r_location paris | .016 | .028 | .001 |
| 39 | User | resto_3 r_number six | .022 | .024 | .004 |
| 40 | User | resto_3 r_price moderate | .015 | .039 | .001 |

User input: <silence>
Correct answer: what do you think of this option: resto_1
Predicted answer: what do you think of this option: resto_1 [Correct]
Table 6: Task 4 (Providing extra-information) The model knows it must display a phone or an address, but, as explained in Section A the embeddings mix up the information and make it hard to distinguish between different phone numbers or addresses, making answering correctly very hard. As shown in the results of Tab. 2, this problem can be solved by adding match type features, that allow to emphasize entities actually appearing in the history. The attention is globally wrong here.
| Time | Locutor | Dialog history | Hop #1 | Hop #2 | Hop #3 |
|---|---|---|---|---|---|
| 14 | Bot | api_call indian paris six moderate | .000 | .006 | .000 |
| 15 | User | instead could it be with french food | .011 | .024 | .007 |
| 20 | Bot | api_call french paris six moderate | .000 | .005 | .001 |
| 21 | User | resto_1 r_phone resto_1_phone | .005 | .011 | .004 |
| 22 | User | resto_1 r_address resto_1_address | .004 | .018 | .001 |
| 23 | User | resto_1 r_cuisine french | .003 | .018 | .001 |
| 24 | User | resto_1 r_location paris | .091 | .068 | .108 |
| 25 | User | resto_1 r_number six | .078 | .086 | .020 |
| 26 | User | resto_1 r_price moderate | .225 | .070 | .369 |
| 27 | User | resto_1 r_rating 6 | .006 | .014 | .008 |
| 28 | User | resto_2 r_phone resto_2_phone | .009 | .015 | .006 |
| 29 | User | resto_2 r_address resto_2_address | .004 | .014 | .001 |
| 31 | User | resto_2 r_location paris | .176 | .075 | .193 |
| 32 | User | resto_2 r_number six | .126 | .100 | .026 |
| 33 | User | resto_2 r_price moderate | .090 | .038 | .167 |
| 35 | User | resto_3 r_phone resto_3_phone | .001 | .004 | .001 |
| 36 | User | resto_3 r_address resto_3_address | .002 | .005 | .001 |
| 37 | User | resto_3 r_location paris | .028 | .028 | .026 |
| 39 | User | resto_3 r_number six | .013 | .039 | .002 |
| 40 | User | resto_3 r_price moderate | .008 | .018 | .013 |
| 42 | Bot | what do you think of this option: resto_1 | .001 | .074 | .000 |
| 43 | User | let's do it | .004 | .032 | .001 |
| 44 | Bot | great let me do the reservation | .000 | .003 | .000 |

User input: do you have its address
Correct answer: here it is resto_1_address
Predicted answer: here it is: resto_8_address [Incorrect]
Table 7: Concierge Data The model is also able to learn from human-human dialogs. <person>, <org>, <number> and <date> are special tokens used to anonymize the data. We report the top 5 answers predicted by the model. They are all semantically equivalent. Note that the utterances, while all produced by humans, are not perfect English ("rservation", "I'll check into it").
| Time | Locutor | Dialog History | Hop #1 | Hop #2 |
|---|---|---|---|---|
| 1 | User | hey concierge | .189 | .095 |
| 2 | User | could you check if i can get a rservation at <org> <date> for brunch | .209 | .178 |
| 3 | User | <number> people | .197 | .142 |
| 4 | User | <silence> | .187 | .167 |
| 5 | Bot | hi <person> unfortunately <org> is fully booked for <date> and there's <number> people on the waiting list | .225 | .410 |

User input: when's the earliest availability
Correct answer: i'll check
Pred. answer #1: i'm on it [Incorrect]
Pred. answer #2: i'll find out [Incorrect]
Pred. answer #3: i'll take a look [Incorrect]
Pred. answer #4: i'll check [Correct]
Pred. answer #5: i'll check into it [Incorrect]
Table 8: Hyperparameters of Supervised Embeddings. When Use History is True, the whole conversation history is concatenated with the latest user utterance to create the input. If False, only the latest utterance is used as input.
| Task | Learning Rate | Margin m | Embedding Dim d | Negative Cand. N | Use History |
|---|---|---|---|---|---|
| Task 1 | 0.01 | 0.01 | 32 | 100 | True |
| Task 2 | 0.01 | 0.01 | 128 | 100 | False |
| Task 3 | 0.01 | 0.1 | 128 | 1000 | False |
| Task 4 | 0.001 | 0.1 | 128 | 1000 | False |
| Task 5 | 0.01 | 0.01 | 32 | 100 | True |
| Task 6 | 0.001 | 0.01 | 128 | 100 | False |
| Concierge | 0.001 | 0.1 | 64 | 100 | False |
Table 9: Hyperparameters of Memory Networks. The longer and more complex the dialogs are, the more hops are needed.
| Task | Learning Rate | Margin m | Embedding Dim d | Negative Cand. N | Nb Hops |
|---|---|---|---|---|---|
| Task 1 | 0.01 | 0.1 | 128 | 100 | 1 |
| Task 2 | 0.01 | 0.1 | 32 | 100 | 1 |
| Task 3 | 0.01 | 0.1 | 32 | 100 | 3 |
| Task 4 | 0.01 | 0.1 | 128 | 100 | 2 |
| Task 5 | 0.01 | 0.1 | 32 | 100 | 3 |
| Task 6 | 0.01 | 0.1 | 128 | 100 | 4 |
| Concierge | 0.001 | 0.1 | 128 | 100 | 2 |
candidate contains a word that appears in the knowledge base as an entity of type T, regardless of whether the same word appeared earlier in the conversation. As seen in Table 10, match type features improve performance on out-of-vocabulary Tasks 1 and 5, bringing it closer to that of Memory Networks without match type features, but still well behind Memory Networks with match type features. Bigrams slightly hurt rather than help performance, except in Task 5 in the standard in-vocabulary setup (performance is lower in the OOV setup).
Table 10: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog state tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parenthesis.
| Task | Supervised Embeddings (+ match type, no bigram) | Supervised Embeddings (no match type, no bigram) | Supervised Embeddings (+ bigrams, no match type) | Memory Networks (no match type) | Memory Networks (+ match type) |
|---|---|---|---|---|---|
| T1: Issuing API calls | 83.2 (0) | 100 (100) | 98.6 (92.4) | 99.9 (99.6) | 100 (100) |
| T2: Updating API calls | 68.4 (0) | 68.4 (0) | 68.3 (0) | 100 (100) | 98.3 (83.9) |
| T3: Displaying options | 64.9 (0) | 64.9 (0) | 64.9 (0) | 74.9 (2.0) | 74.9 (0) |
| T4: Providing information | 57.2 (0) | 57.2 (0) | 57.3 (0) | 59.5 (3.0) | 100 (100) |
| T5: Full dialogs | 76.2 (0) | 75.4 (0) | 83.4 (0) | 96.1 (49.4) | 93.4 (19.7) |
| T1(OOV): Issuing API calls | 67.2 (0) | 60.0 (0) | 58.8 (0) | 72.3 (0) | 96.5 (82.7) |
| T2(OOV): Updating API calls | 68.3 (0) | 68.3 (0) | 68.3 (0) | 78.9 (0) | 94.5 (48.4) |
| T3(OOV): Displaying options | 65.0 (0) | 65.0 (0) | 62.1 (0) | 74.4 (0) | 75.2 (0) |
| T4(OOV): Providing inform. | 57.1 (0) | 57.0 (0) | 57.0 (0) | 57.6 (0) | 100 (100) |
| T5(OOV): Full dialogs | 64.4 (0) | 58.2 (0) | 50.4 (0) | 65.5 (0) | 77.7 (0) |
| T6: Dialog state tracking 2 | 22.1 (0) | 22.6 (0) | 21.8 (0) | 41.1 (0) | 41.0 (0) |
| {
"id": "1512.05742"
} |
1605.06431 | Residual Networks Behave Like Ensembles of Relatively Shallow Networks | In this work we propose a novel interpretation of residual networks showing
that they can be seen as a collection of many paths of differing length.
Moreover, residual networks seem to enable very deep networks by leveraging
only the short paths during training. To support this observation, we rewrite
residual networks as an explicit collection of paths. Unlike traditional
models, paths through residual networks vary in length. Further, a lesion study
reveals that these paths show ensemble-like behavior in the sense that they do
not strongly depend on each other. Finally, and most surprising, most paths are
shorter than one might expect, and only the short paths are needed during
training, as longer paths do not contribute any gradient. For example, most of
the gradient in a residual network with 110 layers comes from paths that are
only 10-34 layers deep. Our results reveal one of the key characteristics that
seem to enable the training of very deep networks: Residual networks avoid the
vanishing gradient problem by introducing short paths which can carry gradient
throughout the extent of very deep networks. | http://arxiv.org/pdf/1605.06431 | Andreas Veit, Michael Wilber, Serge Belongie | cs.CV, cs.AI, cs.LG, cs.NE | NIPS 2016 | null | cs.CV | 20160520 | 20161027 | arXiv:1605.06431v2 [cs.CV] 27 Oct 2016
# Residual Networks Behave Like Ensembles of Relatively Shallow Networks
Andreas Veit, Michael Wilber, Serge Belongie
Department of Computer Science & Cornell Tech, Cornell University
{av443, mjw285, sjb344}@cornell.edu
# Abstract
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
# 1 Introduction
Most modern computer vision systems follow a familiar architecture, processing inputs from low- level features up to task speciï¬c high-level features. Recently proposed residual networks [5, 6] challenge this conventional view in three ways. First, they introduce identity skip-connections that bypass residual layers, allowing data to ï¬ow from any layers directly to any subsequent layers. This is in stark contrast to the traditional strictly sequential pipeline. Second, skip connections give rise to networks that are two orders of magnitude deeper than previous models, with as many as 1202 layers. This is contrary to architectures like AlexNet [13] and even biological systems [17] that can capture complex concepts within half a dozen layers.1 Third, in initial experiments, we observe that removing single layers from residual networks at test time does not noticeably affect their performance. This is surprising because removing a layer from a traditional architecture such as VGG [18] leads to a dramatic loss in performance.
In this work we investigate the impact of these differences. To address the inï¬uence of identity skip- connections, we introduce the unraveled view. This novel representation shows residual networks can be viewed as a collection of many paths instead of a single deep network. Further, the perceived resilience of residual networks raises the question whether the paths are dependent on each other or whether they exhibit a degree of redundancy. To ï¬nd out, we perform a lesion study. The results show ensemble-like behavior in the sense that removing paths from residual networks by deleting layers or corrupting paths by reordering layers only has a modest and smooth impact on performance. Finally, we investigate the depth of residual networks. Unlike traditional models, paths through residual networks vary in length. The distribution of path lengths follows a binomial distribution, meaning
1Making the common assumption that a layer in a neural network corresponds to a cortical area.
that the majority of paths in a network with 110 layers are only about 55 layers deep. Moreover, we show most gradient during training comes from paths that are even shorter, i.e., 10-34 layers deep.
This reveals a tension. On the one hand, residual network performance improves with adding more and more layers [6]. However, on the other hand, residual networks can be seen as collections of many paths and the only effective paths are relatively shallow. Our results could provide a ï¬rst explanation: residual networks do not resolve the vanishing gradient problem by preserving gradient ï¬ow throughout the entire depth of the network. Rather, they enable very deep networks by shortening the effective paths. For now, short paths still seem necessary to train very deep networks.
In this paper we make the following contributions:
⢠We introduce the unraveled view, which illustrates that residual networks can be viewed as a collection of many paths, instead of a single ultra-deep network.
⢠We perform a lesion study to show that these paths do not strongly depend on each other, even though they are trained jointly. Moreover, they exhibit ensemble-like behavior in the sense that their performance smoothly correlates with the number of valid paths.
⢠We investigate the gradient ï¬ow through residual networks, revealing that only the short paths contribute gradient during training. Deep paths are not required during training.
# 2 Related Work
The sequential and hierarchical computer vision pipeline Visual processing has long been un- derstood to follow a hierarchical process from the analysis of simple to complex features. This formalism is based on the discovery of the receptive ï¬eld [10], which characterizes the visual system as a hierarchical and feedforward system. Neurons in early visual areas have small receptive ï¬elds and are sensitive to basic visual features, e.g., edges and bars. Neurons in deeper layers of the hierarchy capture basic shapes, and even deeper neurons respond to full objects. This organization has been widely adopted in the computer vision and machine learning literature, from early neural networks such as the Neocognitron [4] and the traditional hand-crafted feature pipeline of Malik and Perona [15] to convolutional neural networks [13, 14]. The recent strong results of very deep neural networks [18, 20] led to the general perception that it is the depth of neural networks that govern their expressive power and performance. In this work, we show that residual networks do not necessarily follow this tradition.
Residual networks [5, 6] are neural networks in which each layer consists of a residual module fi and a skip connection² bypassing fi. Since layers in residual networks can comprise multiple convolutional layers, we refer to them as residual blocks in the remainder of this paper. For clarity of notation, we omit the initial pre-processing and final classification steps. With y_{i−1} as its input, the output of the i-th block is recursively defined as

y_i ≡ f_i(y_{i−1}) + y_{i−1},   (1)

where f_i(x) is some sequence of convolutions, batch normalization [11], and Rectified Linear Units (ReLU) as nonlinearities. Figure 1 (a) shows a schematic view of this architecture. In the most recent formulation of residual networks [6], f_i(x) is defined by

f_i(x) = W_i' · σ(B(W_i · σ(B(x)))),   (2)

where W_i and W_i' are weight matrices, · denotes convolution, B(x) is batch normalization and σ(x) ≡ max(x, 0). Other formulations are typically composed of the same operations, but may differ in their order.
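As a concrete illustration of Equations (1) and (2), the following is a minimal PyTorch sketch of one pre-activation residual block; the channel count and kernel size are illustrative choices, not values from the paper:

```python
import torch
import torch.nn as nn

class PreActResidualBlock(nn.Module):
    """One residual block y_i = f_i(y_{i-1}) + y_{i-1} with f_i as in Eq. (2):
    two BN -> ReLU -> conv stages plus an identity skip connection."""
    def __init__(self, channels=16):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return out + x  # identity skip connection

y = PreActResidualBlock()(torch.randn(2, 16, 8, 8))
```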
The idea of branching paths in neural networks is not new. For example, in the regime of convolutional neural networks, models based on inception modules [20] were among the ï¬rst to arrange layers in blocks with parallel paths rather than a strict sequential order. We choose residual networks for this study because of their simple design principle.
Highway networks Residual networks can be viewed as a special case of highway networks [19]. The output of each layer of a highway network is deï¬ned as
y_{i+1} ≡ f_{i+1}(y_i) · t_{i+1}(y_i) + y_i · (1 − t_{i+1}(y_i))   (3)
2We only consider identity skip connections, but this framework readily generalizes to more complex projection skip connections when downsampling is required.
(a) Conventional 3-block residual network (b) Unraveled view of (a)

Figure 1: Residual Networks are conventionally shown as (a), which is a natural representation of Equation (1). When we expand this formulation to Equation (6), we obtain an unraveled view of a 3-block residual network (b). Circular nodes represent additions. From this view, it is apparent that residual networks have O(2^n) implicit paths connecting input and output and that adding a block doubles the number of paths.
This follows the same structure as Equation (1). Highway networks also contain residual modules and skip connections that bypass them. However, the output of each path is attenuated by a gating function t, which has learned parameters and is dependent on its input. Highway networks are equivalent to residual networks when ti(·) = 0.5, in which case data ï¬ows equally through both paths. Given an omnipotent solver, highway networks could learn whether each residual module should affect the data. This introduces more parameters and more complexity.
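A minimal sketch of Equation (3) may make the comparison concrete; the use of linear layers for f and the gate t is an illustrative assumption:

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    """Eq. (3): y' = f(y) * t(y) + y * (1 - t(y)). With t(.) fixed at 0.5 the
    data flows equally through both paths, as in a (rescaled) residual block."""
    def __init__(self, dim=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.t = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())  # gate in (0, 1)

    def forward(self, y):
        gate = self.t(y)
        return self.f(y) * gate + y * (1.0 - gate)

out = HighwayLayer()(torch.randn(4, 64))
```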
Investigating neural networks Several investigative studies seek to better understand convolutional neural networks. For example, Zeiler and Fergus [23] visualize convolutional ï¬lters to unveil the concepts learned by individual neurons. Further, Szegedy et al. [21] investigate the function learned by neural networks and how small changes in the input called adversarial examples can lead to large changes in the output. Within this stream of research, the closest study to our work is from Yosinski et al. [22], which performs lesion studies on AlexNet. They discover that early layers exhibit little co-adaptation and later layers have more co-adaptation. These papers, along with ours, have the common thread of exploring speciï¬c aspects of neural network performance. In our study, we focus our investigation on structural properties of neural networks.
Ensembling Since the early days of neural networks, researchers have used simple ensembling techniques to improve performance. Though boosting has been used in the past [16], one simple approach is to arrange a committee [3] of neural networks in a simple voting scheme, where the ï¬nal output predictions are averaged. Top performers in several competitions use this technique almost as an afterthought [6, 13, 18]. Generally, one key characteristic of ensembles is their smooth performance with respect to the number of members. In particular, the performance increase from additional ensemble members gets smaller with increasing ensemble size. Even though they are not strict ensembles, we show that residual networks behave similarly.
Dropout Hinton et al. [7] show that dropping out individual neurons during training leads to a network that is equivalent to averaging over an ensemble of exponentially many networks. Similar in spirit, stochastic depth [9] trains an ensemble of networks by dropping out entire layers during training. In this work, we show that one does not need a special training strategy such as stochastic depth to drop out layers. Entire layers can be removed from plain residual networks without impacting performance, indicating that they do not strongly depend on each other.
# 3 The unraveled view of residual networks
To better understand residual networks, we introduce a formulation that makes it easier to reason about their recursive nature. Consider a residual network with three building blocks from input y0 to output y3. Equation (1) gives a recursive deï¬nition of residual networks. The output of each stage is based on the combination of two subterms. We can make the shared structure of the residual network apparent by unrolling the recursion into an exponential number of nested terms, expanding one layer
(a) Deleting f2 from unraveled view (b) Ordinary feedforward network
Figure 2: Deleting a layer in residual networks at test time (a) is equivalent to zeroing half of the paths. In ordinary feed-forward networks (b) such as VGG or AlexNet, deleting individual layers alters the only viable path from input to output.
at each substitution step:
y_3 = y_2 + f_3(y_2)   (4)
    = [y_1 + f_2(y_1)] + f_3(y_1 + f_2(y_1))   (5)
    = [y_0 + f_1(y_0) + f_2(y_0 + f_1(y_0))] + f_3(y_0 + f_1(y_0) + f_2(y_0 + f_1(y_0)))   (6)
We illustrate this expression tree graphically in Figure 1 (b). With subscripts in the function modules indicating weight sharing, this graph is equivalent to the original formulation of residual networks. The graph makes clear that data flows along many paths from input to output. Each path is a unique configuration of which residual module to enter and which to skip. Conceivably, each unique path through the network can be indexed by a binary code b ∈ {0, 1}^n where b_i = 1 iff the input flows through residual module f_i and 0 if f_i is skipped. It follows that residual networks have 2^n paths connecting input to output layers.
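The path-wise decomposition can be checked numerically in the special case of linear modules, where each of the 2^n paths contributes independently (for general nonlinear f_i, the unraveled view is an expression tree rather than a simple sum of independent paths). The coefficients below are arbitrary illustrative values:

```python
from itertools import product

# Toy check of the unraveled view with linear modules f_i(y) = a_i * y.
a = [0.5, -0.2, 0.3]
y0 = 2.0

y = y0
for ai in a:                                 # Eq. (1): y_i = f_i(y_{i-1}) + y_{i-1}
    y = ai * y + y

paths = 0.0
for b in product([0, 1], repeat=len(a)):     # 2^n binary path codes
    contrib = y0
    for bi, ai in zip(b, a):
        contrib *= ai if bi else 1.0         # enter f_i, or take the skip
    paths += contrib

assert abs(y - paths) < 1e-12                # both views agree over all 8 paths
```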
In the classical visual hierarchy, each layer of processing depends only on the output of the previous layer. Residual networks cannot strictly follow this pattern because of their inherent structure. Each module f_i(·) in the residual network is fed data from a mixture of 2^{i−1} different distributions generated from every possible configuration of the previous i − 1 residual modules.
Compare this to a strictly sequential network such as VGG or AlexNet, depicted conceptually in Figure 2 (b). In these networks, input always ï¬ows from the ï¬rst layer straight through to the last in a single path. Written out, the output of a three-layer feed-forward network is
y_3^{FF} = f_3^{FF}(f_2^{FF}(f_1^{FF}(y_0)))   (7)

where each f_i^{FF}(x) is typically a convolution followed by batch normalization and ReLU. In these networks, each f_i^{FF} is only fed data from a single path configuration, the output of f_{i−1}^{FF}(·).
It is worthwhile to note that ordinary feed-forward neural networks can also be "unraveled" using the above thought process at the level of individual neurons rather than layers. This renders the network as a collection of different paths, where each path is a unique configuration of neurons from each layer connecting input to output. Thus, all paths through ordinary neural networks are of the same length. However, paths in residual networks have varying length. Further, each path in a residual network goes through a different subset of layers.
Based on these observations, we formulate the following questions and address them in our experi- ments below. Are the paths in residual networks dependent on each other or do they exhibit a degree of redundancy? If the paths do not strongly depend on each other, do they behave like an ensemble? Do paths of varying lengths impact the network differently?
# 4 Lesion study
In this section, we use three lesion studies to show that paths in residual networks do not strongly depend on each other and that they behave like an ensemble. All experiments are performed at test
Figure 3: Deleting individual layers from VGG and a residual network on CIFAR-10. VGG performance drops to random chance when any one of its layers is deleted, but deleting individual modules from residual networks has a minimal impact on performance. Removing downsampling modules has a slightly higher impact.

Figure 4: Results when dropping individual blocks from residual networks trained on ImageNet are similar to CIFAR results. However, downsampling layers tend to have more impact on ImageNet.
time on CIFAR-10 [12]. Experiments on ImageNet [2] show comparable results. We train residual networks with the standard training strategy, dataset augmentation, and learning rate policy of [6]. For our CIFAR-10 experiments, we train a 110-layer (54-module) residual network with modules of the "pre-activation" type, which contain batch normalization as the first step. For ImageNet we use 200 layers (66 modules). It is important to note that we did not use any special training strategy to adapt the network. In particular, we did not use any perturbations such as stochastic depth during training.
# 4.1 Experiment: Deleting individual layers from neural networks at test time
As a motivating experiment, we will show that not all transformations within a residual network are necessary by deleting individual modules from the neural network after it has been fully trained. To do so, we remove the residual module from a single building block, leaving the skip connection (or downsampling projection, if any) untouched. That is, we change y_i = y_{i−1} + f_i(y_{i−1}) to y_i' = y_{i−1}. We can measure the importance of each building block by varying which residual module we remove. To compare to conventional convolutional neural networks, we train a VGG network with 15 layers, setting the number of channels to 128 for all layers to allow the removal of any layer.
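A minimal sketch of this lesion procedure, with tiny stand-in residual modules (sizes and layer types are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Lesion sketch: delete the i-th residual module at test time by keeping only
# its skip connection (y_i' = y_{i-1}).
class Block(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
    def forward(self, y):
        return self.f(y) + y            # residual module plus identity skip

blocks = nn.ModuleList([Block() for _ in range(8)])

def forward_pass(y, deleted=None):
    for i, b in enumerate(blocks):
        y = y if i == deleted else b(y)  # drop f_i, keep the skip connection
    return y

y0 = torch.randn(1, 16)
print(forward_pass(y0).shape, forward_pass(y0, deleted=3).shape)
```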
It is unclear whether any neural network can withstand such a drastic change to the model structure. We expect them to break because dropping any layer drastically changes the input distribution of all subsequent layers.
The results are shown in Figure 3. As expected, deleting any layer in VGG reduces performance to chance levels. Surprisingly, this is not the case for residual networks. Removing downsampling blocks does have a modest impact on performance (peaks in Figure 3 correspond to downsampling building blocks), but no other block removal leads to a noticeable change. This result shows that, to some extent, the structure of a residual network can be changed at runtime without affecting performance. Experiments on ImageNet show comparable results, as seen in Figure 4.
Why are residual networks resilient to dropping layers but VGG is not? Expressing residual networks in the unraveled view provides a ï¬rst insight. It shows that residual networks can be seen as a collection of many paths. As illustrated in Figure 2 (a), when a layer is removed, the number of paths is reduced from 2n to 2nâ1, leaving half the number of paths valid. VGG only contains a single usable path from input to output. Thus, when a single layer is removed, the only viable path is corrupted. This result suggests that paths in a residual network do not strongly depend on each other although they are trained jointly.
# 4.2 Experiment: Deleting many modules from residual networks at test-time
Having shown that paths do not strongly depend on each other, we investigate whether the collection of paths shows ensemble-like behavior. One key characteristic of ensembles is that their performance
Figure 5: (a) Error increases smoothly when randomly deleting several modules from a residual network. (b) Error also increases smoothly when re-ordering a residual network by shufï¬ing building blocks. The degree of reordering is measured by the Kendall Tau correlation coefï¬cient. These results are similar to what one would expect from ensembles.
depends smoothly on the number of members. If the collection of paths were to behave like an ensemble, we would expect test-time performance of residual networks to smoothly correlate with the number of valid paths. This is indeed what we observe: deleting increasing numbers of residual modules increases error smoothly, Figure 5 (a). This implies residual networks behave like ensembles.
When deleting k residual modules from a network originally of length n, the number of valid paths decreases to O(2^{n−k}). For example, the original network started with 54 building blocks, so deleting 10 blocks leaves 2^44 paths. Though the collection is now a factor of roughly 10^{−3} of its original size, there are still many valid paths and error remains around 0.2.
# 4.3 Experiment: Reordering modules in residual networks at test-time
Our previous experiments were only about dropping layers, which have the effect of removing paths from the network. In this experiment, we consider changing the structure of the network by re-ordering the building blocks. This has the effect of removing some paths and inserting new paths that have never been seen by the network during training. In particular, it moves high-level transformations before low-level transformations.
To re-order the network, we swap k randomly sampled pairs of building blocks with compatible dimensionality, ignoring modules that perform downsampling. We graph error with respect to the Kendall Tau rank correlation coefï¬cient which measures the amount of corruption. The results are shown in Figure 5 (b). As corruption increases, the error smoothly increases as well. This result is surprising because it suggests that residual networks can be reconï¬gured to some extent at runtime.
# 5 The importance of short paths in residual networks
Now that we have seen that there are many paths through residual networks and that they do not necessarily depend on each other, we investigate their characteristics.
Distribution of path lengths Not all paths through residual networks are of the same length. For example, there is precisely one path that goes through all modules and n paths that go only through a single module. From this reasoning, the distribution of all possible path lengths through a residual network follows a Binomial distribution. Thus, we know that the path lengths are closely centered around the mean of n/2. Figure 6 (a) shows the path length distribution for a residual network with 54 modules; more than 95% of paths go through 19 to 35 modules.
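This claim is easy to verify directly from the Binomial distribution; the snippet below computes the probability mass on path lengths 19-35 for n = 54:

```python
from math import comb

# Path-length distribution for a residual network with n = 54 modules: a path
# of length k exists for every choice of k modules, so the pmf is Binomial(n, 0.5).
n = 54
pmf = [comb(n, k) / 2**n for k in range(n + 1)]
print(sum(pmf[19:36]))  # mass on lengths 19..35 -> more than 95% of all paths
```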
Vanishing gradients in residual networks Generally, data ï¬ows along all paths in residual networks. However, not all paths carry the same amount of gradient. In particular, the length of the paths through the network affects the gradient magnitude during backpropagation [1, 8]. To empirically investigate the effect of vanishing gradients on residual networks we perform the following experiment. Starting from a trained network with 54 blocks, we sample individual paths of a certain length and measure the norm of the gradient that arrives at the input. To sample a path of length k, we ï¬rst feed a batch forward through the whole network. During the backward pass, we randomly sample k residual
Figure 6: How much gradient do the paths of different lengths contribute in a residual network? To find out, we first show the distribution of all possible path lengths (a). This follows a Binomial distribution. Second, we record how much gradient is induced on the first layer of the network through paths of varying length (b), which appears to decay roughly exponentially with the number of modules the gradient passes through. Finally, we can multiply these two functions (c) to show how much gradient comes from all paths of a certain length. Though there are many paths of medium length, paths longer than ~20 modules are generally too long to contribute noticeable gradient during training. This suggests that the effective paths in residual networks are relatively shallow.
blocks. For those k blocks, we only propagate through the residual module; for the remaining n â k blocks, we only propagate through the skip connection. Thus, we only measure gradients that ï¬ow through the single path of length k. We sample 1,000 measurements for each length k using random batches from the training set. The results show that the gradient magnitude of a path decreases exponentially with the number of modules it went through in the backward pass, Figure 6 (b).
The effective paths in residual networks are relatively shallow Finally, we can use these results to deduce whether shorter or longer paths contribute most of the gradient during training. To ï¬nd the total gradient magnitude contributed by paths of each length, we multiply the frequency of each path length with the expected gradient magnitude. The result is shown in Figure 6 (c). Surprisingly, almost all of the gradient updates during training come from paths between 5 and 17 modules long. These are the effective paths, even though they constitute only 0.45% of all paths through this network. Moreover, in comparison to the total length of the network, the effective paths are relatively shallow.
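The reasoning can be sketched as follows; note that the per-module decay rate r below is an assumed placeholder (chosen so that the product peaks near the reported 5-17 module range), whereas the paper measures the decay curve empirically:

```python
from math import comb

# Reasoning behind Figure 6(c): gradient contributed by paths of length k is
# (#paths of length k) x (per-path gradient norm, decaying roughly as r**k).
# The decay rate r is an assumed placeholder, not a value from the paper.
n, r = 54, 0.26
contrib = [comb(n, k) * r**k for k in range(n + 1)]
total = sum(contrib)
peak = max(range(n + 1), key=lambda k: contrib[k])
print(peak, sum(contrib[5:18]) / total)  # peak length and mass on lengths 5-17
```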
To validate this result, we retrain a residual network from scratch that only sees the effective paths during training. This ensures that no long path is ever used. If the retrained model is able to perform competitively compared to training the full network, we know that long paths in residual networks are not needed during training. We achieve this by only training a subset of the modules during each mini batch. In particular, we choose the number of modules such that the distribution of paths during training aligns with the distribution of the effective paths in the whole network. For the network with 54 modules, this means we sample exactly 23 modules during each training batch. Then, the path lengths during training are centered around 11.5 modules, well aligned with the effective paths. In our experiment, the network trained only with the effective paths achieves a 5.96% error rate, whereas the full model achieves a 6.10% error rate. There is no statistically signiï¬cant difference. This demonstrates that indeed only the effective paths are needed.
# 6 Discussion
Removing residual modules mostly removes long paths Deleting a module from a residual network mainly removes the long paths through the network. In particular, when deleting d residual modules from a network of length n, the fraction of paths remaining per path length x is given by
fraction of remaining paths of length x = C(n − d, x) / C(n, x)   (8)
Figure 7 illustrates the fraction of remaining paths after deleting 1, 10 and 20 modules from a 54 module network. It becomes apparent that the deletion of residual modules mostly affects the long paths. Even after deleting 10 residual modules, many of the effective paths between 5 and 17 modules long are still valid. Since mainly the effective paths are important for performance, this result is in line with the experiment shown in Figure 5 (a). Performance only drops slightly up to the removal of 10 residual modules, however, for the removal of 20 modules, we observe a severe drop in performance.
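Equation (8) is straightforward to evaluate; the snippet below prints the surviving fraction for a few path lengths after deleting d = 1, 10 and 20 of n = 54 modules:

```python
from math import comb

# Eq. (8): a path of length x survives iff all x of its modules avoid the
# deleted set, giving C(n-d, x) / C(n, x). comb() returns 0 when x > n-d.
def remaining_fraction(n, d, x):
    return comb(n - d, x) / comb(n, x)

n = 54
for d in (1, 10, 20):
    print(d, [round(remaining_fraction(n, d, x), 3) for x in (5, 17, 35)])
```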
Figure 7: Fraction of paths remain- ing after deleting individual layers. Deleting layers mostly affects long paths through the networks.
Figure 8: Impact of stochastic depth on resilience to layer deletion. Training with stochastic depth only improves resilience slightly, indicating that plain residual networks already don't depend on individual layers. Compare to Fig. 3.
Connection to highway networks In highway networks, ti(·) multiplexes data ï¬ow through the residual and skip connections and ti(·) = 0.5 means both paths are used equally. For highway networks in the wild, [19] observe empirically that the gates commonly deviate from ti(·) = 0.5. In particular, they tend to be biased toward sending data through the skip connection; in other words, the network learns to use short paths. Similar to our results, it reinforces the importance of short paths.
Effect of stochastic depth training procedure Recently, an alternative training procedure for resid- ual networks has been proposed, referred to as stochastic depth [9]. In that approach a random subset of the residual modules is selected for each mini-batch during training. The forward and backward pass is only performed on those modules. Stochastic depth does not affect the number of paths in the network because all paths are available at test time. However, it changes the distribution of paths seen during training. In particular, mainly short paths are seen. Further, by selecting a different subset of short paths in each mini-batch, it encourages the paths to produce good results independently.
Does this training procedure significantly reduce the dependence between paths? We repeat the experiment of deleting individual modules for a residual network trained using stochastic depth. The result is shown in Figure 8. Training with stochastic depth improves resilience slightly; only the dependence on the downsampling layers seems to be reduced. By now, this is not surprising: we know that plain residual networks already don't depend on individual layers.
# 7 Conclusion
What is the reason behind residual networks' increased performance? In the most recent iteration of residual networks, He et al. [6] provide one hypothesis: "We obtain these results via a simple but essential concept - going deeper." While it is true that they are deeper than previous approaches, we present a complementary explanation. First, our unraveled view reveals that residual networks can be viewed as a collection of many paths, instead of a single ultra deep network. Second, we perform lesion studies to show that, although these paths are trained jointly, they do not strongly depend on each other. Moreover, they exhibit ensemble-like behavior in the sense that their performance smoothly correlates with the number of valid paths. Finally, we show that the paths through the network that contribute gradient during training are shorter than expected. In fact, deep paths are not required during training as they do not contribute any gradient. Thus, residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network. This insight reveals that depth is still an open research question. These promising observations provide a new lens through which to examine neural networks.
# Acknowledgements
We would like to thank Sam Kwak and Theofanis Karaletsos for insightful feedback. We also thank the reviewers of NIPS 2016 for their very constructive and helpful feedback and for suggesting the paper title. This work is partly funded by AOL through the Connected Experiences Laboratory (Author 1), an NSF Graduate Research Fellowship award (NSF DGE-1144153, Author 2), and a Google Focused Research award (Author 3).
# References
[1] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.
[2] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition, 2009.
[3] Harris Drucker, Corinna Cortes, Lawrence D. Jackel, Yann LeCun, and Vladimir Vapnik. Boosting and other ensemble methods. Neural Computation, 6(6):1289-1301, 1994.
[4] Kunihiko Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193-202, 1980.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
[7] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[8] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, Institut für Informatik, Technische Universität, München, 1991.
[9] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016.
[10] David H Hubel and Torsten N Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1):106-154, 1962.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 2015.
[12] Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[15] Jitendra Malik and Pietro Perona. Preattentive texture discrimination with early vision mechanisms. Journal of the Optical Society of America, 1990.
[16] Robert E Schapire. The strength of weak learnability. Machine Learning, 5(2):197-227, 1990.
[17] Thomas Serre, Aude Oliva, and Tomaso Poggio. A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Sciences, 104(15):6424-6429, 2007.
[18] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
[20] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015.
[21] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[22] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, 2014.
[23] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision - ECCV 2014, pages 818-833. Springer, 2014.
| {
"id": "1603.09382"
} |
1605.04711 | Ternary Weight Networks | We present a memory and computation efficient ternary weight networks (TWNs)
- with weights constrained to +1, 0 and -1. The Euclidian distance between full
(float or double) precision weights and the ternary weights along with a
scaling factor is minimized in training stage. Besides, a threshold-based
ternary function is optimized to get an approximated solution which can be fast
and easily computed. TWNs have shown better expressive abilities than binary
precision counterparts. Meanwhile, TWNs achieve up to 16$\times$ model
compression rate and need fewer multiplications compared with the float32
precision counterparts. Extensive experiments on MNIST, CIFAR-10, and ImageNet
datasets show that the TWNs achieve much better result than the
Binary-Weight-Networks (BWNs) and the classification performance on MNIST and
CIFAR-10 is very close to the full precision networks. We also verify our
method on object detection task and show that TWNs significantly outperforms
BWN by more than 10\% mAP on PASCAL VOC dataset. The pytorch version of source
code is available at: https://github.com/Thinklab-SJTU/twns. | http://arxiv.org/pdf/1605.04711 | Fengfu Li, Bin Liu, Xiaoxing Wang, Bo Zhang, Junchi Yan | cs.CV | 5 pages, 3 figures, conference | null | cs.CV | 20160516 | 20221120 | arXiv:1605.04711v3 [cs.CV] 20 Nov 2022
# TERNARY WEIGHT NETWORKS
Fengfu Li1*, Bin Liu2*, Xiaoxing Wang2, Bo Zhang1†, Junchi Yan2†

1Institute of Applied Math., AMSS, CAS, Beijing, China; lifengfu12@mails.ucas.ac.cn, b.zhang@amt.ac.cn
2MOE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, China; {binliu_sjtu, figure1_wxx, yanjunchi}@sjtu.edu.cn
# ABSTRACT
We present memory and computation efficient ternary weight networks (TWNs), with weights constrained to +1, 0 and -1. The Euclidean distance between the full (float or double) precision weights and the ternary weights, along with a scaling factor, is minimized in the training stage. Besides, a threshold-based ternary function is optimized to get an approximated solution which can be fast and easily computed. TWNs have shown better expressive abilities than their binary precision counterparts. Meanwhile, TWNs achieve up to a 16× model compression rate and need fewer multiplications compared with the float32 precision counterparts. Extensive experiments on the MNIST, CIFAR-10, and ImageNet datasets show that TWNs achieve much better results than the Binary-Weight-Networks (BWNs), and the classification performance on MNIST and CIFAR-10 is very close to the full precision networks. We also verify our method on the object detection task and show that TWNs significantly outperform BWNs by more than 10% mAP on the PASCAL VOC dataset. The pytorch version of the source code is available at: https://github.com/Thinklab-SJTU/twns.
# 1. INTRODUCTION AND RELATED WORK

Deep neural networks (DNNs) have made significant improvements in many computer vision tasks such as object recognition [1, 2, 3, 4] and object detection [5, 6]. This motivates interest in deploying state-of-the-art DNN models to real-world applications like smart phones, wearable embedded devices or other edge computing devices. However, these models often need considerable storage and computational power [7], and can easily overburden the limited storage, battery power, and compute capabilities of such devices. As a result, deployment remains a challenge. To mitigate the storage and computational problem [8, 9], methods that seek to binarize weights or activations in DNN models have been proposed. BinaryConnect [10] uses a single sign function to binarize the weights. Binary Weight Networks [7] adopts the same binarization function but adds an extra scaling factor. The extensions of these methods are BinaryNet [11] and XNOR-Net [7], where both weights and activations are binary-valued. These models eliminate most of the multiplications in the forward and backward propagations, and thus have the potential of gaining significant benefits with specialized deep learning (DL) hardware by replacing many multiply-accumulate operations with simple accumulations [12]. Besides, binary weight networks achieve up to a 32× model compression rate. Apart from binarization, some other compression methods focus on identifying models with few parameters while preserving accuracy by compressing existing state-of-the-art DNN models in a lossy way. SqueezeNet [13] is such a model: it has 50× fewer parameters than AlexNet [2] but maintains AlexNet-level accuracy on ImageNet. MobileNet [14] and ShuffleNet [15] propose lightweight architectures to reduce parameters and computation cost. Other methods search for efficient architectures and achieve great performance on both classification [16, 17] and object detection [18]. Deep Compression [9] is another recently proposed method that uses pruning, trained quantization and Huffman coding to compress neural networks. It reduced the storage requirement of AlexNet and VGG-16 [3] by 35× and 49×, respectively, without loss of accuracy. This paper has the following contributions:
*: Equal contribution. †: Corresponding authors.
1) To our best knowledge, this was the first (at least at its debut on arXiv) ternary weight quantization scheme to reduce the storage and computational cost of deep neural networks.
2) We propose an approximated and universal solution with threshold-based ternary function for calculating the ternary weights of the raw neural networks.
3) Experiments show the efï¬cacy of our approach on public benchmarks for both image classiï¬cation and detection.
# 2. TERNARY WEIGHT NETWORKS
# 2.1. Advantage Overview
We address the limited storage and computational resources issues by introducing ternary weight networks (TWNs), which constrain the weights to be ternary-valued: +1, 0 and -1. TWNs seek to make a balance between the full precision weight net- works (FPWNs) counterparts and the binary precision weight
networks (BPWNs) counterparts. The detailed features are listed as follows.
Expressive ability In most recent network architectures such as VGG [3], GoogLeNet [4] and ResNet [1], the most commonly used convolutional filter is of size 3×3. With binary precision, there are only 2^{3×3} = 512 templates. However, a ternary filter of the same size owns 3^{3×3} = 19683 templates, which gives it 38× stronger expressive ability than the binary counterpart.
Model compression In TWNs, a 2-bit storage requirement is needed for a unit of weight. Thus, TWNs achieve up to a 16× model compression rate compared with the float32 precision counterparts. Take VGG-19 [3] as an example: the float version of the model needs ~500M of storage, which can be reduced to ~32M with ternary precision. Thus, although the compression rate of TWNs is 2× less than that of BPWNs, it is fair enough for compressing most of the existing state-of-the-art DNN models.
Computational requirement Compared with BPWNs, TWNs have an extra zero state. However, zero terms need not be accumulated in any multiply operation. Thus, the number of multiply-accumulate operations in TWNs remains unchanged compared with the binary precision counterparts. As a result, it is also hardware-friendly for training large-scale networks with specialized DL hardware.
In the following parts, we will give detailed descriptions about the ternary weight networks problem and an approx- imated but efï¬cient solution. After that, a simple training algorithm with error back-propagation is introduced and the run time usage is described at last.
# 2.2. Problem Formulation

To make the ternary weight networks perform well, we seek to minimize the Euclidean distance between the full precision weights W and the ternary-valued weights W̃, along with a nonnegative scaling factor α [7]. The optimization problem is formulated as follows,
α*, W̃* = argmin_{α, W̃} J(α, W̃) = ||W − αW̃||_2^2,
s.t. α ≥ 0, W̃_i ∈ {−1, 0, +1}, i = 1, 2, ..., n.   (1)

Here n is the size of the filter. With the approximation W ≈ αW̃, a basic block of forward propagation in ternary weight networks is as follows,

Z = X ∗ W ≈ X ∗ (αW̃) = (αX) ⊕ W̃,   X^next = g(Z),   (2)

where X is the input of the block; ∗ is a convolution or inner product operation; g is a nonlinear activation function; ⊕ indicates a convolution or an inner product operation without multiplication; Z is the output feature map of the neural network block. It can also be used as the input of the next block.
# 2.3. Threshold-based Ternary Function

One way to solve the optimization in Eq. 1 is to expand the cost function J(α, W̃) and take the derivatives w.r.t. α and W̃_i respectively. However, the resulting α* and W̃*_i depend on each other, so there is no deterministic solution in this way [19]. To overcome this, we seek an approximated optimal solution with a threshold-based ternary function,

W̃_i = f(W_i | Δ) = { +1 if W_i > Δ;  0 if |W_i| ≤ Δ;  −1 if W_i < −Δ }.   (3)

Here Δ is a positive threshold parameter. With Eq. 3, the original problem can be transformed to

α*, Δ* = argmin_{α≥0, Δ>0} ( |I_Δ| α^2 − 2 (Σ_{i∈I_Δ} |W_i|) α + c_Δ )   (4)

where I_Δ = {i : |W_i| > Δ} and |I_Δ| denotes the number of elements in I_Δ; c_Δ = Σ_{i∈I_Δ^c} W_i^2 is an α-independent constant. Thus, for any given Δ, the optimal α can be computed as follows,

α*_Δ = (1/|I_Δ|) Σ_{i∈I_Δ} |W_i|.   (5)

By substituting α*_Δ into Eq. 4, we get a Δ-dependent equation, which can be simplified as follows,

Δ* = argmax_{Δ>0} (1/|I_Δ|) (Σ_{i∈I_Δ} |W_i|)^2.   (6)

The above equation has no straightforward solution. Though discrete optimization could solve the problem (since the states of W_i are finite), it would be very time consuming. As a viable alternative, we make a single assumption that the W_i are generated from a uniform or normal distribution. In case the W_i are uniformly distributed in [−α, α] and Δ lies in (0, α], the approximated Δ* is α/3, which equals (2/3)·E(|W|). When the W_i are generated from a normal distribution N(0, σ^2), the approximated Δ* is 0.6σ, which equals 0.75·E(|W|). Thus, we can use a rule of thumb, Δ* ≈ 0.75·E(|W|) ≈ (0.75/n) Σ_{i=1}^{n} |W_i|, for fast and easy computation.
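Putting Eqs. 3 and 5 together with the rule of thumb gives a very small ternarization routine; the following NumPy sketch uses a random filter as a stand-in for trained float32 weights:

```python
import numpy as np

# Sketch of the threshold-based ternarization above (Eqs. 3 and 5) with the
# rule-of-thumb threshold Delta ~ 0.75 * E(|W|).
def ternarize(W):
    delta = 0.75 * np.mean(np.abs(W))                        # threshold Delta*
    W_t = np.where(W > delta, 1.0, np.where(W < -delta, -1.0, 0.0))  # Eq. 3
    mask = np.abs(W) > delta                                 # the index set I_Delta
    alpha = np.abs(W[mask]).mean() if mask.any() else 0.0    # Eq. 5
    return alpha, W_t

W = np.random.normal(0.0, 0.02, size=(64, 3, 3)).astype(np.float32)
alpha, W_t = ternarize(W)
print(alpha, np.unique(W_t))   # alpha * W_t approximates W in the L2 sense
```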
# 2.4. Training of Ternary-Weight-Networks

CNNs typically include convolution layers, fully-connected layers, pooling layers (e.g., max-pooling, avg-pooling), Batch-Normalization (BN) layers [20] and activation layers (e.g., ReLU, Sigmoid). In TWNs, we also follow the traditional neural network block design philosophy; the order of layers in a typical ternary block of TWNs is shown in Fig. 1.

We borrow the parameter optimization strategy successfully applied in BinaryConnect [10] and XNOR-Net [7]: in our design, ternarization only happens in the forward and backward passes of the convolution and fully-connected layers, while in the parameter update stage we still keep a copy of the full-precision parameters.
Algorithm 1: Training an M-layer CNN with ternary weights

Inputs: a minibatch of inputs and targets (I, Y), loss function L(Ŷ, Y) and current weights W^t.
Hyper-parameter: current learning rate η^t.
Outputs: updated weights W^{t+1} and updated learning rate η^{t+1}.

1: Ternarize the float32 weight filters:
2: for m = 1 to M do
3:   for the k-th filter in the m-th layer do
4:     Δ_mk = (0.75/n) ||W_mk||_1
5:     W̃_mk ∈ {−1, 0, +1}^n, refer to Eq. 3
6:     α_mk = (W̃^T_mk W_mk) / (W̃^T_mk W̃_mk)
7:     T_mk = α_mk W̃_mk
8: Ŷ = TernaryForward(I, W̃, α)  // standard forward propagation
9: ∂L/∂T = TernaryBackward(∂L/∂Ŷ)  // standard backward propagation, except that gradients are computed using T instead of W^t
10: W^{t+1} = UpdateParameters(W^t, ∂L/∂T, η^t)  // we use SGD in this paper
11: η^{t+1} = UpdateLearningrate(η^t, t)  // we use learning rate step decay in this paper
Fig. 1. A typical ternary block in TWNs. In the forward pass, we apply the ternarization operation to the weights of the convolution layer, while the float32 weights are cached for the future parameter update; in the backward pass, we calculate the gradient w.r.t. the ternary weights to update the float32 weights.
In addition, two effective tricks are adopted: Batch-Normalization and learning rate step decay, which drops the learning rate by a factor every few epochs. We use stochastic gradient descent (SGD) with momentum to update the parameters when training TWNs; the detailed training settings are shown in Table 1.
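The following is a minimal PyTorch sketch of one such training step, written with a common straight-through formulation so that the forward pass sees the ternarized weights while SGD updates the cached float32 master weights; layer sizes are illustrative and the implementation details are assumptions rather than the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical single training step: the forward pass runs on ternarized
# weights T while SGD updates the float32 master weights W, applying the
# gradient computed w.r.t. T to W (a straight-through formulation).
layer = nn.Linear(32, 10)                        # holds the float32 master weights
opt = torch.optim.SGD(layer.parameters(), lr=0.01, momentum=0.9)

def ternarize(W):
    delta = 0.75 * W.abs().mean()                # rule-of-thumb threshold
    W_t = torch.zeros_like(W)
    W_t[W > delta], W_t[W < -delta] = 1.0, -1.0  # Eq. 3
    mask = W.abs() > delta
    alpha = W.abs()[mask].mean() if mask.any() else W.new_zeros(())  # Eq. 5
    return alpha * W_t

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
W = layer.weight
T = ternarize(W.detach())
out = F.linear(x, W + (T - W).detach(), layer.bias)  # forward uses T's values
loss = F.cross_entropy(out, y)
opt.zero_grad()
loss.backward()                                  # gradient w.r.t. T flows to W
opt.step()
```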
# 2.5. Inference of Ternary-Weight-Networks

In the forward pass, the scaling factor α can be folded into the inputs according to Eq. 2. Thus, for deployment we only need to keep the ternary-valued weights and the scaling factors. This results in up to a 16× model compression rate compared with the float32 precision counterparts.
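As an illustration of the 2-bit deployment format implied here, the sketch below packs four ternary weights per byte, alongside the one float32 scaling factor kept per filter; the particular 2-bit encoding is an arbitrary illustrative choice:

```python
import numpy as np

# Pack {-1, 0, +1} weights at 2 bits each, four per byte.
# Encoding choice (00 -> 0, 01 -> +1, 10 -> -1) is illustrative.
def pack_ternary(W_t):
    codes = np.select([W_t == 0, W_t == 1, W_t == -1], [0, 1, 2]).astype(np.uint8)
    codes = codes.ravel()
    codes = np.pad(codes, (0, (-len(codes)) % 4))   # pad to a multiple of 4
    b = codes.reshape(-1, 4)
    return (b[:, 0] | (b[:, 1] << 2) | (b[:, 2] << 4) | (b[:, 3] << 6)).astype(np.uint8)

W_t = np.random.choice([-1.0, 0.0, 1.0], size=(64, 3, 3))
packed = pack_ternary(W_t)
print(W_t.size * 4, packed.nbytes)   # float32 bytes vs packed bytes (~16x smaller)
```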
# 3. EXPERIMENTS AND DISCUSSION
We benchmark Ternary Weight Networks (TWNs) against binary precision weight networks (BPWNs) and full precision weight networks (FPWNs) on both classification tasks (MNIST, CIFAR-10 and ImageNet) and an object detection task (PASCAL VOC).
Table 1. Backbones and hyperparameter settings used by our method on the three benchmarks.

                              MNIST     CIFAR-10   ImageNet
backbone architecture         LeNet-5   VGG-7      ResNet18B
weight decay                  1e-4      1e-4       1e-4
mini-batch size               50        100        64(x4)^1
initial learning rate         0.01      0.1        0.1
learning rate adjust step^2   15, 25    80, 120    30, 40, 50
momentum                      0.9       0.9        0.9
For a fair comparison, we keep the following configurations the same: network architecture, regularization method (L2 weight decay), learning rate schedule (multi-step) and optimization method (SGD with momentum). BPWNs use the sign function to binarize the weights, and FPWNs use float-valued weights. See Table 1 for the training configurations.
# 3.1. Experiments of Classification

MNIST is a collection of handwritten digits. It is a very popular dataset in the field of image processing. The LeNet-5 [21] architecture we use in the MNIST experiment is "32-C5 + MP2 + 64-C5 + MP2 + 512-FC + SVM", which starts with a 5×5 convolutional block that includes a convolution layer, a BN layer and a ReLU layer. A max-pooling layer with stride 2 follows. "FC" is a fully connected block with 512 nodes. The top layer is an SVM classifier with 10 labels. Finally, hinge loss is minimized with SGD.

CIFAR-10 consists of 10 classes with 6K color images of 32×32 resolution per class. It is divided into 50K training and 10K testing images. We define a VGG-inspired architecture, denoted as VGG-7, by "2×(128-C3) + MP2 + 2×(256-C3) + MP2 + 2×(512-C3) + MP2 + 1024-FC + Softmax". Compared with the architecture in [10], we omit the last fully connected layer. We follow the data augmentation in [1, 22] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. At testing time, we only evaluate the single view of the original 32×32 image.

ImageNet consists of about 1.2 million training images from 1000 categories and 50,000 validation images. ImageNet has higher resolution and greater diversity, and is closer to real life than MNIST and CIFAR-10. We adopt the popular ResNet18 architecture [1] as backbone. Besides, we also benchmark an enlarged counterpart, termed ResNet18B, whose number of filters in each block is 1.5× that of the original. In each training iteration, images are randomly cropped to 224×224. We do not use any resize tricks [7] or any color augmentation.
Table 2 shows the classification results. On the small datasets (MNIST and CIFAR-10), TWNs achieve performance similar to that of FPWNs.
1We use 4 GPUs to speed up the training. 2Learning rate is divided by 10 at these epochs.
Fig. 2. Classification accuracy over training epochs: (a) MNIST (top-1), (b) CIFAR-10 (top-1), (c) ImageNet (top-5).
Table 2. Classiï¬cation accuracy (%) on ImageNet with ResNet18 (or ResNet18B in bracket) as backbones. MNIST CIFAR-10
ImageNet (top-1) ImageNet (top-5) 99.35 99.05 99.41 98.82 98.60 - - 92.56 90.18 92.88 91.73 89.85 - - 61.80 (65.3) 57.50 (61.6) 65.4 (67.6) - - 60.8 51.2 84.20 (86.2) 81.20 (83.9) 86.76 (88.0) - - 83.0 73.2
TWNs (our main approach) BPWNs (binary precision counterpart) FPWNs (full precision counterpart) BinaryConnect [10] Binarized Neural Networks [11] Binary Weight Networks [7] XNOR-Net [7]
Table 3. Detection performance (%) with YOLOv5 (small) as detector on PASCAL VOC.
Method                                 Precision   Recall   mAP_50   mAP_50:95
TWNs (our main approach)               78.0%       69.1%    76.8%    51.5%
BPWNs (binary precision counterpart)   69.8%       56.7%    62.9%    39.4%
FPWNs (full precision counterpart)     83.3%       80.8%    86.7%    63.7%
They also beat BPWNs. On the large-scale ImageNet dataset, BPWNs and TWNs both perform worse than FPWNs. However, the accuracy gap between TWNs and FPWNs is smaller than the gap between BPWNs and TWNs. In addition, when we change the backbone from ResNet18 to ResNet18B, i.e., as the model size grows, the performance gap between TWNs (or BPWNs) and FPWNs is reduced. This indicates that low precision networks gain more merit from larger models than their full precision counterparts. The validation accuracy curves of the different approaches across all training epochs on the MNIST, CIFAR-10 and ImageNet datasets are illustrated in Fig. 2. As we can see in the figure, BPWNs converge slowly and their training loss is not stable compared with TWNs and FPWNs. However, TWNs converge almost as fast and stably as FPWNs.
# 3.2. Experiments of Detection

PASCAL VOC [23] consists of 20 classes with 11540 images and 27450 labeled objects. We adopt the popular YOLOv5 (small) [24] architecture and compare the performance of full precision, binary precision and ternary precision in Table 3.
Specifically, we initialize each model with the weights trained on the MS-COCO dataset [25] (provided by YOLOv5) and fine-tune each model for 150 epochs. We observe that TWNs significantly outperform BPWNs by more than 10% mAP, showing the great effectiveness of our method.
# 4. CONCLUSION
In this paper, we have introduced simple, efficient, and accurate ternary weight networks for real-world AI applications, which reduce memory usage by about 16x and computation by about 2x. We present the optimization problem of TWNs and give an approximated solution with a simple but effective ternary function. The proposed TWNs achieve a balance between accuracy and the model compression rate, while retaining the potentially low computational requirements of BPWNs. Empirical results on public benchmarks show the superior performance of the proposed method.
# 5. REFERENCES
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, "Deep residual learning for image recognition," arXiv preprint arXiv:1512.03385, 2015.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," Advances in neural information processing systems, p. 1097-1105, 2012.
[3] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[4] W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," CVPR, p. 1-9, 2015.
[5] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed, "SSD: Single shot multibox detector," arXiv preprint arXiv:1512.02325, 2015.
[6] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," Advances in neural information processing systems, p. 91-99, 2015.
[7] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "Xnor-net: Imagenet classification using binary convolutional neural networks," arXiv preprint arXiv:1603.05279, 2016.
[8] Steven K. Esser, Paul A. Merolla, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, and et al., "Convolutional networks for fast, energy-efficient neuromorphic computing," Proceedings of the National Academy of Sciences, vol. 113, no. 41, pp. 11441-11446, 2016.
[9] Song Han, Huizi Mao, and William J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding," arXiv preprint arXiv:1510.00149, 2015.
[10] M. Courbariaux, Y. Bengio, and J.-P. David, "Binaryconnect: Training deep neural networks with binary weights during propagations," NeurIPS, p. 3123-3131, 2015.
[11] I. Hubara, D. Soudry, and R. E. Yaniv, "Binarized neural networks," Advances in neural information processing systems, 2016.
[12] Z. Lin, M. Courbariaux, R. Memisevic, and Y. Bengio, "Neural networks with few multiplications," arXiv preprint arXiv:1510.03009, 2015.
[13] F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, and K. Keutzer, "Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <1MB model size," arXiv preprint arXiv:1602.07360, 2016.
[14] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam, "Mobilenets: Efficient convolutional neural networks for mobile vision applications," CoRR, vol. abs/1704.04861, 2017.
[15] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun, "Shufflenet: An extremely efficient convolutional neural network for mobile devices," in CVPR, 2018.
[16] Hanxiao Liu, Karen Simonyan, and Yiming Yang, "DARTS: differentiable architecture search," in ICLR, 2019.
[17] Xiaoxing Wang, Chao Xue, Junchi Yan, Xiaokang Yang, Yonggang Hu, and Kewei Sun, "Mergenas: Merge operations into one for differentiable architecture search," in IJCAI, 2020, pp. 3065-3072.
[18] Xiaoxing Wang, Jiale Lin, Juanping Zhao, Xiaokang Yang, and Junchi Yan, "Eautodet: Efficient architecture search for object detection," in ECCV, 2022.
[19] K. Hwang and W. Sung, "Fixed-point feedforward deep neural network design using weights +1, 0, and -1," IEEE Workshop on Signal Processing Systems (SiPS), pp. 1-6, 2014.
[20] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," Proceedings of The 32nd International Conference on Machine Learning, p. 448-456, 2015.
[21] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[22] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu, "Deeply-supervised nets," Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, p. 562-570, 2015.
[23] Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John M. Winn, and Andrew Zisserman, "The pascal visual object classes (VOC) challenge," Int. J. Comput. Vis., vol. 88, no. 2, pp. 303-338, 2010.
[24] Glenn Jocher, "Yolov5 documentation," https://docs.ultralytics.com/, May 2020.
[25] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick, "Microsoft COCO: common objects in context," in ECCV, 2014.
"id": "1602.07360"
} |
1604.06778 | Benchmarking Deep Reinforcement Learning for Continuous Control | Recently, researchers have made significant progress combining the advances
in deep learning for learning feature representations with reinforcement
learning. Some notable examples include training agents to play Atari games
based on raw pixel data and to acquire advanced manipulation skills using raw
sensory inputs. However, it has been difficult to quantify progress in the
domain of continuous control due to the lack of a commonly adopted benchmark.
In this work, we present a benchmark suite of continuous control tasks,
including classic tasks like cart-pole swing-up, tasks with very high state and
action dimensionality such as 3D humanoid locomotion, tasks with partial
observations, and tasks with hierarchical structure. We report novel findings
based on the systematic evaluation of a range of implemented reinforcement
learning algorithms. Both the benchmark and reference implementations are
released at https://github.com/rllab/rllab in order to facilitate experimental
reproducibility and to encourage adoption by other researchers. | http://arxiv.org/pdf/1604.06778 | Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel | cs.LG, cs.AI, cs.RO | 14 pages, ICML 2016 | null | cs.LG | 20160422 | 20160527 | 6 1 0 2
y a M 7 2 ] G L . s c [
3 v 8 7 7 6 0 . 4 0 6 1 : v i X r a
# Benchmarking Deep Reinforcement Learning for Continuous Control
Yan Duan† Xi Chen† Rein Houthooft†‡ John Schulman†§ Pieter Abbeel†
† University of California, Berkeley, Department of Electrical Engineering and Computer Sciences
‡ Ghent University - iMinds, Department of Information Technology
§ OpenAI
ROCKYDUAN@EECS.BERKELEY.EDU C.XI@EECS.BERKELEY.EDU REIN.HOUTHOOFT@UGENT.BE JOSCHU@EECS.BERKELEY.EDU PABBEEL@CS.BERKELEY.EDU
# Abstract
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
# 1. Introduction
Reinforcement learning addresses the problem of how agents should learn to take actions to maximize cumulative reward through interactions with the environment. The traditional approach for reinforcement learning algorithms requires carefully chosen feature representations, which are
Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s). Also available at https://arxiv.org/abs/1604.06778
usually hand-engineered. Recently, significant progress has been made by combining advances in deep learning for learning feature representations (Krizhevsky et al., 2012; Hinton et al., 2012) with reinforcement learning, tracing back to much earlier work of Tesauro (1995) and Bertsekas & Tsitsiklis (1995). Notable examples are training agents to play Atari games based on raw pixels (Guo et al., 2014; Mnih et al., 2015; Schulman et al., 2015a) and to acquire advanced manipulation skills using raw sensory inputs (Levine et al., 2015; Lillicrap et al., 2015; Watter et al., 2015). Impressive results have also been obtained in training deep neural network policies for 3D locomotion and manipulation tasks (Schulman et al., 2015a;b; Heess et al., 2015b).

Along with this recent progress, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a popular benchmark for evaluating algorithms designed for tasks with high-dimensional state inputs and discrete actions. However, these algorithms do not always generalize straightforwardly to tasks with continuous actions, leading to a gap in our understanding. For instance, algorithms based on Q-learning quickly become infeasible when naive discretization of the action space is performed, due to the curse of dimensionality (Bellman, 1957; Lillicrap et al., 2015). In the continuous control domain, where actions are continuous and often high-dimensional, we argue that the existing control benchmarks fail to provide a comprehensive set of challenging problems (see Section 7 for a review of existing benchmarks). Benchmarks have played a significant role in other areas such as computer vision and speech recognition. Examples include MNIST (LeCun et al., 1998), Caltech101 (Fei-Fei et al., 2006), CIFAR (Krizhevsky & Hinton, 2009), ImageNet (Deng et al., 2009), PASCAL VOC (Everingham et al., 2010), BSDS500 (Martin et al., 2001), SWITCHBOARD (Godfrey et al., 1992), TIMIT (Garofolo et al., 1993), Aurora (Hirsch & Pearce, 2000), and VoiceSearch (Yu et al., 2007). The lack
of a standardized and challenging testbed for reinforcement learning and continuous control makes it difficult to quantify scientific progress. Systematic evaluation and comparison will not only further our understanding of the strengths of existing algorithms, but also reveal their limitations and suggest directions for future research.

We attempt to address this problem and present a benchmark consisting of 31 continuous control tasks. These tasks range from simple tasks, such as cart-pole balancing, to challenging tasks such as high-DOF locomotion, tasks with partial observations, and hierarchically structured tasks. Furthermore, a range of reinforcement learning algorithms are implemented on which we report novel findings based on a systematic evaluation of their effectiveness in training deep neural network policies. The benchmark and reference implementations are available at https://github.com/rllab/rllab, allowing for the development, implementation, and evaluation of new algorithms and tasks.
# 2. Preliminaries

In this section, we define the notation used in subsequent sections.

The implemented tasks conform to the standard interface of a finite-horizon discounted Markov decision process (MDP), defined by the tuple (S, A, P, r, ρ0, γ, T), where S is a (possibly infinite) set of states, A is a set of actions, P : S × A × S → R≥0 is the transition probability distribution, r : S × A → R is the reward function, ρ0 : S → R≥0 is the initial state distribution, γ ∈ (0, 1] is the discount factor, and T is the horizon.

For partially observable tasks, which conform to the interface of a partially observable Markov decision process (POMDP), two more components are required, namely Ω, a set of observations, and O : S × Ω → R≥0, the observation probability distribution.

Most of our implemented algorithms optimize a stochastic policy πθ : S × A → R≥0. Let η(πθ) denote its expected discounted reward: η(πθ) = E_τ[Σ_{t=0}^T γ^t r(s_t, a_t)], where τ = (s_0, a_0, ...) denotes the whole trajectory, s_0 ∼ ρ0(s_0), a_t ∼ πθ(a_t | s_t), and s_{t+1} ∼ P(s_{t+1} | s_t, a_t).

For deterministic policies, we use the notation μθ : S → A to denote the policy instead. The objective for it has the same form as above, except that now we have a_t = μθ(s_t).

# 3. Tasks

The tasks in the presented benchmark can be divided into four categories: basic tasks, locomotion tasks, partially observable tasks, and hierarchical tasks. We briefly describe them in this section. More detailed specifications are given in the supplementary materials and in the source code.

We choose to implement all tasks using physics simulators rather than symbolic equations, since the former approach is less error-prone and permits easy modification of each task. Tasks with simple dynamics are implemented using Box2D (Catto, 2011), an open-source, freely available 2D physics simulator. Tasks with more complicated dynamics, such as locomotion, are implemented using MuJoCo (Todorov et al., 2012), a 3D physics simulator with better modeling of contacts.

# 3.1. Basic Tasks

We implement five basic tasks that have been widely analyzed in the reinforcement learning and control literature: Cart-Pole Balancing (Stephenson, 1908; Donaldson, 1960; Widrow, 1964; Michie & Chambers, 1968), Cart-Pole Swing Up (Kimura & Kobayashi, 1999; Doya, 2000), Mountain Car (Moore, 1990), Acrobot Swing Up (DeJong & Spong, 1994; Murray & Hauser, 1991; Doya, 2000), and Double Inverted Pendulum Balancing (Furuta et al., 1978). These relatively low-dimensional tasks provide quick evaluations and comparisons of RL algorithms.

# 3.2. Locomotion Tasks

In this category, we implement seven locomotion tasks of varying dynamics and difficulty: Swimmer (Purcell, 1977; Coulom, 2002; Levine & Koltun, 2013; Schulman et al., 2015a), Hopper (Murthy & Raibert, 1984; Erez et al., 2011; Levine & Koltun, 2013; Schulman et al., 2015a), Walker (Raibert & Hodgins, 1991; Erez et al., 2011; Levine & Koltun, 2013; Schulman et al., 2015a), Half-Cheetah (Wawrzyński, 2007; Heess et al., 2015b), Ant (Schulman et al., 2015b), Simple Humanoid (Tassa et al., 2012; Schulman et al., 2015b), and Full Humanoid (Tassa et al., 2012). The goal for all the tasks is to move forward as quickly as possible. These tasks are more challenging than the basic tasks due to their high degrees of freedom. In addition, a great amount of exploration is needed to learn to move forward without getting stuck at local optima. Since we penalize for excessive controls as well as falling over, during the initial stage of learning, when the robot is not yet able to move forward for a sufficient distance without falling, apparent local optima exist, including staying at the origin or diving forward slowly.

# 3.3. Partially Observable Tasks

In real-life situations, agents are often not endowed with perfect state information. This can be due to sensor noise, sensor occlusions, or even sensor limitations that result in partial observations. To evaluate algorithms in more realistic settings, we implement three variations of partially observable tasks for each of the five basic tasks described in Section 3.1, leading to a total of 15 additional tasks. These variations are described below.
Figure 1. Illustration of locomotion tasks: (a) Swimmer; (b) Hopper; (c) Walker; (d) Half-Cheetah; (e) Ant; (f) Simple Humanoid; and (g) Full Humanoid.

Figure 2. Illustration of hierarchical tasks: (a) Locomotion + Food Collection; and (b) Locomotion + Maze.
Limited Sensors: For this variation, we restrict the observations to only provide positional information (including joint angles), excluding velocities. An agent now has to learn to infer velocity information in order to recover the full state. Similar tasks have been explored in Gomez & Miikkulainen (1998); Schäfer & Udluft (2005); Heess et al. (2015a); Wierstra et al. (2007).

Noisy Observations and Delayed Actions: In this case, sensor noise is simulated through the addition of Gaussian noise to the observations. We also introduce a time delay between taking an action and the action being in effect, accounting for physical latencies (Hester & Stone, 2013). Agents now need to learn to integrate both past observations and past actions to infer the current state. Similar tasks have been proposed in Bakker (2001).

System Identification: For this category, the underlying physical model parameters are varied across different episodes (Szita et al., 2003). The agents must learn to generalize across different models, as well as to infer the model parameters from their observation and action history.

# 3.4. Hierarchical Tasks

Many real-world tasks exhibit hierarchical structure, where higher level decisions can reuse lower level skills (Parr & Russell, 1998; Sutton et al., 1999; Dietterich, 2000). For instance, robots can reuse locomotion skills when exploring the environment. We propose several tasks where both low-level motor controls and high-level decisions are needed. These two components each operate on a different time scale and call for a natural hierarchy in order to efficiently learn the task.

Locomotion + Food Collection: For this task, the agent needs to learn to control either the swimmer or the ant robot to collect food and avoid bombs in a finite region. The agent receives range sensor readings about nearby food and bomb units. It is given a positive reward when it reaches a food unit, or a negative reward when it reaches a bomb.

Locomotion + Maze: For this task, the agent needs to learn to control either the swimmer or the ant robot to reach a goal position in a fixed maze. The agent receives range sensor readings about nearby obstacles as well as its goal (when visible). A positive reward is given only when the robot reaches the goal region.

# 4. Algorithms

In this section, we briefly summarize the algorithms implemented in our benchmark, and note any modifications made to apply them to general parametrized policies. We implement a range of gradient-based policy search methods, as well as two gradient-free methods for comparison with the gradient-based approaches.

# 4.1. Batch Algorithms

Most of the implemented algorithms are batch algorithms. At each iteration, N trajectories {τ_i}_{i=1}^N are generated, where τ_i = {(s_t^i, a_t^i, r_t^i)}_{t=0}^T contains the data collected along the ith trajectory. For on-policy gradient-based methods, all the trajectories are sampled under the current policy. For gradient-free methods, they are sampled under perturbed versions of the current policy.

REINFORCE (Williams, 1992): This algorithm estimates the gradient of the expected return ∇θ η(πθ) using the likelihood ratio trick:

∇θ η(πθ) ≈ (1/(NT)) Σ_{i=1}^N Σ_{t=0}^T ∇θ log π(a_t^i | s_t^i; θ) (R_t^i - b_t^i),

where R_t^i = Σ_{t'=t}^T γ^{t'-t} r_{t'}^i, and b_t^i is a baseline that only depends on the state s_t^i to reduce variance.
Hereafter, an ascent step is taken in the direction of the estimated gradient. This process continues until θk converges.
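To make the estimator concrete, here is a minimal NumPy sketch for a toy linear-Gaussian policy a ∼ N(θᵀs, σ²); the helper names and the toy policy are ours, for illustration only, and the benchmark's actual implementations live in the rllab repository.

```python
import numpy as np

def discounted_returns(rewards, gamma):
    # R_t = sum_{t' >= t} gamma^(t' - t) * r_{t'}
    R = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        R[t] = running
    return R

def reinforce_gradient(trajectories, theta, sigma, gamma, baseline=0.0):
    """Likelihood-ratio gradient for a policy a ~ N(theta^T s, sigma^2).

    trajectories: list of (states, actions, rewards) arrays.
    For this Gaussian policy, grad_theta log pi(a|s) = (a - theta^T s) / sigma^2 * s.
    """
    grad = np.zeros_like(theta)
    n_steps = 0
    for states, actions, rewards in trajectories:
        R = discounted_returns(rewards, gamma)
        for s, a, R_t in zip(states, actions, R):
            grad += (a - theta @ s) / sigma**2 * s * (R_t - baseline)
            n_steps += 1
    return grad / n_steps
```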
Truncated Natural Policy Gradient (TNPG) (Kakade, 2002; Peters et al., 2003; Bagnell & Schneider, 2003; Schulman et al., 2015a): Natural Policy Gradient improves upon REINFORCE by computing an ascent direction that approximately ensures a small change in the policy distribution. This direction is derived to be I(θ)^{-1} ∇θ η(πθ), where I(θ) is the Fisher information matrix (FIM). We use the step size suggested by Peters & Schaal (2008): α = sqrt( δKL (∇θ η(πθ)^T I(θ)^{-1} ∇θ η(πθ))^{-1} ). Finally, we replace ∇θ η(πθ) and I(θ) by their empirical estimates.

For neural network policies with tens of thousands of parameters or more, generic Natural Policy Gradient incurs prohibitive computation cost by forming and inverting the empirical FIM. Instead, we study Truncated Natural Policy Gradient (TNPG) in this paper, which computes the natural gradient direction without explicitly forming the matrix inverse, using a conjugate gradient algorithm that only requires computing I(θ)v for an arbitrary vector v. TNPG makes it practical to apply natural gradient in policy search settings with high-dimensional parameters, and we refer the reader to Schulman et al. (2015a) for more details.

Reward-Weighted Regression (RWR) (Peters & Schaal, 2007; Kober & Peters, 2009): This algorithm formulates the policy optimization as an Expectation-Maximization problem to avoid the need to manually choose a learning rate, and the method is guaranteed to converge to a locally optimal solution. At each iteration, this algorithm optimizes a lower bound of the log-expected return: θ = argmax_{θ'} L(θ'), where

L(θ) = (1/(NT)) Σ_{i=1}^N Σ_{t=0}^T log π(a_t^i | s_t^i; θ) ρ(R_t^i - b_t^i).

Here, ρ : R → R≥0 is a function that transforms raw returns to nonnegative values. Following Deisenroth et al. (2013), we choose ρ to be ρ(R) = R - R_min, where R_min is the minimum return among all trajectories collected in the current iteration.

Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a): This algorithm allows more precise control on the expected policy improvement than TNPG through the introduction of a surrogate loss. At each iteration, we solve the following constrained optimization problem (replacing expectations with samples):

maximize_θ  E_{s∼ρ_{θk}, a∼π_{θk}} [ (πθ(a|s) / π_{θk}(a|s)) A_{θk}(s, a) ]
s.t.  E_{s∼ρ_{θk}} [ D_KL(π_{θk}(·|s) || πθ(·|s)) ] ≤ δKL,

where ρθ = ρ_{πθ} is the discounted state-visitation frequency induced by πθ, A_{θk}(s, a), known as the advantage function, is estimated by the empirical return minus the baseline, and δKL is a step size parameter which controls how much the policy is allowed to change per iteration. We follow the procedure described in the original paper for solving the optimization, which results in the same descent direction as TNPG with an extra line search in the objective and KL constraint.

Relative Entropy Policy Search (REPS) (Peters et al., 2010): This algorithm limits the loss of information per iteration and aims to ensure a smooth learning progress (Deisenroth et al., 2013). At each iteration, we collect all trajectories into a dataset D = {(s_i, a_i, r_i, s'_i)}_{i=1}^M, where M is the total number of samples. Then, we first solve for the dual parameters [η*, ν*] = argmin_{η', ν'} g(η', ν') s.t. η' > 0, where

g(η, ν) = η δKL + η log( (1/M) Σ_{i=1}^M e^{δ_i(ν)/η} ).

Here δKL > 0 controls the step size of the policy, and δ_i(ν) = r_i + ν^T(φ(s'_i) - φ(s_i)) is the sample Bellman error. We then solve for the new policy parameters:

θ_{k+1} = argmax_θ (1/M) Σ_{i=1}^M e^{δ_i(ν*)/η*} log π(a_i | s_i; θ).

Cross Entropy Method (CEM) (Rubinstein, 1999; Szita & Lőrincz, 2006): Unlike the previously mentioned methods, which perform exploration through stochastic actions, CEM performs exploration directly in the policy parameter space. At each iteration, we produce N perturbations of the policy parameter: θ_i ∼ N(μ_k, Σ_k), and perform a rollout for each sampled parameter. Then, we compute the new mean and diagonal covariance using the parameters that correspond to the top q-quantile returns.

Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen & Ostermeier, 2001): Similar to CEM, CMA-ES is a gradient-free evolutionary approach for optimizing nonconvex objective functions. In our case, this objective function equals the average sampled return. In contrast to CEM, CMA-ES estimates the covariance matrix of a multivariate normal distribution through incremental adaptation along evolution paths, which contain information about the correlation between consecutive updates.

# 4.2. Online Algorithms

Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015): Compared to batch algorithms, the DDPG algorithm continuously improves the policy as it explores the environment. It applies gradient descent to the policy with minibatch data sampled from a replay pool, where the gradient is computed via

∇θ η(μθ) ≈ (1/B) Σ_{i=1}^B ∇_a Q_φ(s_i, a)|_{a=μθ(s_i)} ∇θ μθ(s_i),

where B is the batch size. The critic Q is trained via gradient descent on the ℓ2 loss of the Bellman error L = (1/B) Σ_i (y_i - Q_φ(s_i, a_i))^2, where y_i = r_i + γ Q'_{φ'}(s'_i, μ'_{θ'}(s'_i)). To improve stability of the algorithm, we use target networks for both the critic and the policy when forming the regression target y_i. We refer the reader to Lillicrap et al. (2015) for a more detailed description of the algorithm.
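Returning to TNPG for a moment: the conjugate gradient subroutine it relies on touches the FIM only through matrix-vector products. Below is a generic sketch of that subroutine; the dense toy matrix stands in for the real Fisher-vector product, which rllab computes via automatic differentiation, and all names here are ours.

```python
import numpy as np

def conjugate_gradient(matvec, g, iters=10, tol=1e-10):
    """Approximately solve I(theta) x = g using only products I(theta) v."""
    x = np.zeros_like(g)
    r = g.copy()          # residual g - I x, with x = 0 initially
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy check against a direct solve (a real FVP never forms I explicitly).
A = np.random.randn(50, 50)
I_fim = A @ A.T + 50 * np.eye(50)   # symmetric positive definite stand-in
g = np.random.randn(50)
x = conjugate_gradient(lambda v: I_fim @ v, g, iters=50)
print(np.allclose(I_fim @ x, g, atol=1e-4))
```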
# 4.3. Recurrent Variants

We implement direct applications of the aforementioned batch-based algorithms to recurrent policies. The only modification required is to replace π(a_t^i | s_t^i) by π(a_t^i | o_{1:t}^i, a_{1:t-1}^i), where o_{1:t}^i and a_{1:t-1}^i are the histories of past and current observations and past actions. Recurrent versions of reinforcement learning algorithms have been studied in many existing works, such as Bakker (2001), Schäfer & Udluft (2005), Wierstra et al. (2007), and Heess et al. (2015a).

# 5. Experiment Setup

In this section, we elaborate on the experimental setup used to generate the results.

Performance Metrics: For each report unit (a particular algorithm running on a particular task), we define its performance as (Σ_{i=1}^I Σ_{n=1}^{N_i} R_{in}) / (Σ_{i=1}^I N_i), where I is the number of training iterations, N_i is the number of trajectories collected in the ith iteration, and R_{in} is the undiscounted return for the nth trajectory of the ith iteration.

Hyperparameter Tuning: For the DDPG algorithm, we used the hyperparameters reported in Lillicrap et al. (2015). For the other algorithms, we follow the approach in (Mnih et al., 2015), and we select two tasks in each category, on which a grid search of hyperparameters is performed. Each choice of hyperparameters is executed under five random seeds. The criterion for the best hyperparameters is defined as mean(returns) - std(returns). This metric selects against large fluctuations of performance due to overly large step sizes.

For the other tasks, we try both of the best hyperparameters found in the same category, and report the better performance of the two. This gives us insights into both the maximum possible performance when extensive hyperparameter tuning is performed, and the robustness of the best hyperparameters across different tasks.

Policy Representation: For basic, locomotion, and hierarchical tasks and for batch algorithms, we use a feed-forward neural network policy with 3 hidden layers, consisting of 100, 50, and 25 hidden units with tanh nonlinearity at the first two hidden layers, which map each state to the mean of a Gaussian distribution. The log-standard deviation is parameterized by a global vector independent of the state, as done in Schulman et al. (2015a). For all partially observable tasks, we use a recurrent neural network with a single hidden layer consisting of 32 LSTM hidden units (Hochreiter & Schmidhuber, 1997).

For the DDPG algorithm which trains a deterministic policy, we follow Lillicrap et al. (2015). For both the policy and the Q function, we use the same architecture of a feed-forward neural network with 2 hidden layers, consisting of 400 and 300 hidden units with relu activations.

Baseline: For all gradient-based algorithms except REPS, we can subtract a baseline from the empirical return to reduce variance of the optimization. We use a linear function as the baseline with a time-varying feature vector.

# 6. Results and Discussion

The main evaluation results are presented in Table 1. The tasks on which the grid search is performed are marked with (*). In each entry, the pair of numbers shows the mean and standard deviation of the normalized cumulative return using the best possible hyperparameters.

REINFORCE: Despite its simplicity, REINFORCE is an effective algorithm in optimizing deep neural network policies in most basic and locomotion tasks. Even for high-DOF tasks like Ant, REINFORCE can achieve competitive results. However, we observe that REINFORCE sometimes suffers from premature convergence to local optima, as noted by Peters & Schaal (2008), which explains the performance gaps between REINFORCE and TNPG on tasks such as Walker (Figure 3(a)). By visualizing the final policies, we can see that REINFORCE results in policies that tend to jump forward and fall over to maximize short-term return instead of acquiring a stable walking gait to maximize long-term return. In Figure 3(b), we can observe that even with a small learning rate, steps taken by REINFORCE can sometimes result in large changes to the policy distribution, which may explain the fast convergence to local optima.

TNPG and TRPO: Both TNPG and TRPO outperform other batch algorithms by a large margin on most tasks, confirming that constraining the change in the policy distribution results in more stable learning (Peters & Schaal, 2008).
Table 1. Performance of the implemented algorithms (Random, REINFORCE, TNPG, RWR, REPS, TRPO, CEM, CMA-ES, DDPG) in terms of average return over all training iterations for five different random seeds (same across all algorithms). The results of the best-performing algorithm on each task, as well as all algorithms with performances that are not statistically significantly different (Welch's t-test with p < 0.05), are highlighted in boldface. In the tasks column, the partially observable variants are annotated as follows: LS stands for limited sensors, NO for noisy observations and delayed actions, and SI for system identification. The notation N/A denotes that an algorithm has failed on the task at hand, e.g., CMA-ES leading to out-of-memory errors in the Full Humanoid task. Tasks covered: Cart-Pole Balancing, Inverted Pendulum, Mountain Car, Acrobot, Double Inverted Pendulum, Swimmer, Hopper, 2D Walker, Half-Cheetah, Ant, Simple Humanoid, Full Humanoid, their LS/NO/SI variants, Swimmer + Gathering, Ant + Gathering, Swimmer + Maze, and Ant + Maze.
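The boldfacing rule in the caption can be reproduced with an off-the-shelf Welch's t-test; a minimal sketch under that reading (function and argument names are ours):

```python
from scipy import stats

def tie_with_best(best_seed_returns, other_seed_returns, alpha=0.05):
    # Welch's t-test: unequal variances, one return per random seed.
    # Boldface an algorithm when it is not significantly different
    # from the best-performing one on that task.
    _, p_value = stats.ttest_ind(best_seed_returns, other_seed_returns,
                                 equal_var=False)
    return p_value >= alpha
```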
Figure 3. Performance as a function of the number of iterations; the shaded area depicts the mean ± the standard deviation over five different random seeds: (a) Performance comparison of all algorithms in terms of the average reward on the Walker task; (b) Comparison between REINFORCE, TNPG, and TRPO in terms of the mean KL-divergence on the Walker task; (c) Performance comparison of TNPG and TRPO on the Swimmer task; (d) Performance comparison of all algorithms in terms of the average reward on the Half-Cheetah task.
Compared to TNPG, TRPO offers better control over each policy update by performing a line search in the natural gradient direction to ensure an improvement in the surrogate loss function. We observe that hyperparameter grid search tends to select conservative step sizes (δKL) for TNPG, which alleviates the issue of performance collapse caused by a large update to the policy. By contrast, TRPO can robustly enforce constraints with a larger δKL value and hence speeds up learning in some cases. For instance, grid search on the Swimmer task reveals that the best step size for TNPG is δKL = 0.05, whereas TRPO's best step size is larger: δKL = 0.1. As shown in Figure 3(c), this larger step size enables slightly faster learning.

RWR: RWR is the only gradient-based algorithm we implemented that does not require any hyperparameter tuning. It can solve some basic tasks to a satisfactory degree, but fails to solve more challenging tasks such as locomotion. We observe empirically that RWR shows fast initial improvement followed by significant slow-down, as shown in Figure 3(d).

REPS: Our main observation is that REPS is especially prone to early convergence to local optima in the case of continuous states and actions. Its final outcome is greatly affected by the performance of the initial policy, an observation that is consistent with the original work of Peters et al. (2010). This leads to a bad performance on average, although under particular initial settings the algorithm can perform on par with others. Moreover, the tasks presented here do not assume the existence of a stationary distribution, which is assumed in Peters et al. (2010). In particular, for many of our tasks, transient behavior is of much greater interest than steady-state behavior, which agrees with previous observations by van Hoof et al. (2015).

Gradient-free methods: Surprisingly, even when training deep neural network policies with thousands of parameters, CEM achieves very good performance on certain basic tasks such as Cart-Pole Balancing and Mountain Car, suggesting that the dimensionality of the parameter space is not always the limiting factor of the method. However, the performance degrades quickly as the system dynamics become more complicated. We also observe that CEM outperforms CMA-ES, which is remarkable as CMA-ES estimates the full covariance matrix. For higher-dimensional policy parameterizations, the computational complexity and memory requirement for CMA-ES become noticeable. On tasks with high-dimensional observations, such as the Full Humanoid, the CMA-ES algorithm runs out of memory and fails to yield any results, denoted as N/A in Table 1.

DDPG: Compared to batch algorithms, we found that DDPG was able to converge significantly faster on certain tasks like Half-Cheetah due to its greater sample efficiency. However, it was less stable than batch algorithms, and the performance of the policy can degrade significantly during training. We also found it to be more susceptible to scaling of the reward. In our experiment for DDPG, we rescaled the reward of all tasks by a factor of 0.1, which seems to improve the stability.

Partially Observable Tasks: We experimentally verify that recurrent policies can find better solutions than feed-forward policies in partially observable tasks, but recurrent policies are also more difficult to train. As shown in Table 1, derivative-free algorithms like CEM and CMA-ES work considerably worse with recurrent policies. We also note that the performance gap between REINFORCE and TNPG widens when they are applied to optimize recurrent policies, which can be explained by the fact that a small change in parameter space can result in a bigger change in policy distribution with recurrent policies than with feed-forward policies.

Hierarchical Tasks: We observe that all of our implemented algorithms achieve poor performance on the hierarchical tasks, even with extensive hyperparameter search and 500 iterations of training. It is an interesting direction to develop algorithms that can automatically discover and exploit the hierarchical structure in these tasks.
# 7. Related Work

In this section, we review existing benchmarks of continuous control tasks. The earliest efforts of evaluating reinforcement learning algorithms started in the form of individual control problems described in symbolic form. Some widely adopted tasks include the inverted pendulum (Stephenson, 1908; Donaldson, 1960; Widrow, 1964), mountain car (Moore, 1990), and Acrobot (DeJong & Spong, 1994). These problems are frequently incorporated into more comprehensive benchmarks.

Some reinforcement learning benchmarks contain low-dimensional continuous control tasks, such as the ones introduced above, including RLLib (Abeyruwan, 2013), MMLF (Metzen & Edgington, 2011), RL-Toolbox (Neumann, 2006), JRLF (Kochenderfer, 2006), Beliefbox (Dimitrakakis et al., 2007), Policy Gradient Toolbox (Peters, 2002), and ApproxRL (Busoniu, 2010). A series of RL competitions has also been held in recent years (Dutech et al., 2005; Dimitrakakis et al., 2014), again with relatively low-dimensional actions. In contrast, our benchmark contains a wider range of tasks with high-dimensional continuous state and action spaces.

Previously, other benchmarks have been proposed for high-dimensional control tasks. Tdlearn (Dann et al., 2014) includes a 20-link pole balancing task, DotRL (Papis & Wawrzyński, 2013) includes a variable-DOF octopus arm and a 6-DOF planar cheetah model, PyBrain (Schaul et al., 2010) includes a 16-DOF humanoid robot with standing and jumping tasks, RoboCup Keepaway (Stone et al., 2005) is a multi-agent game which can have a flexible dimension of actions by varying the number of agents, and SkyAI (Yamaguchi & Ogasawara, 2010) includes a 17-DOF humanoid robot with crawling and turning tasks. Other libraries such as CL-Square (Riedmiller et al., 2012) and RLPark (Degris et al., 2013) provide interfaces to actual hardware, e.g., Bioloid and iRobot Create. In contrast to these aforementioned testbeds, our benchmark makes use of simulated environments to reduce computation time and to encourage experimental reproducibility. Furthermore, it provides a much larger collection of tasks of varying difficulty.

# 8. Conclusion

In this work, a benchmark of continuous control problems for reinforcement learning is presented, covering a wide variety of challenging tasks. We implemented several reinforcement learning algorithms, and presented them in the context of general policy parameterizations. Results show that among the implemented algorithms, TNPG, TRPO, and DDPG are effective methods for training deep neural network policies. Still, the poor performance on the proposed hierarchical tasks calls for new algorithms to be developed. Implementing and evaluating existing and newly proposed algorithms will be our continued effort. By providing an open-source release of the benchmark, we encourage other researchers to evaluate their algorithms on the proposed tasks.

# Acknowledgements

We thank Emo Todorov and Yuval Tassa for providing the MuJoCo simulator, and Sergey Levine, Aviv Tamar, Chelsea Finn, and the anonymous ICML reviewers for insightful comments. We also thank Shixiang Gu and Timothy Lillicrap for helping us diagnose the DDPG implementation. This work was supported in part by DARPA, the Berkeley Vision and Learning Center (BVLC), the Berkeley Artificial Intelligence Research (BAIR) laboratory, and Berkeley Deep Drive (BDD). Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO).

# References

Abeyruwan, S. RLLib: Lightweight standard and on/off policy reinforcement learning library (C++). http://web.cs.miami.edu/home/saminda/rilib.html, 2013.

Bagnell, J. A. and Schneider, J. Covariant policy search. In IJCAI, pp. 1019–1024, 2003.

Bakker, B. Reinforcement learning with long short-term memory. In NIPS, pp. 1475–1482, 2001.

Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The Arcade Learning Environment: An evaluation platform for general agents. J. Artif. Intell. Res., 47:253–279, 2013.

Bellman, R. Dynamic Programming. Princeton University Press, 1957.

Bertsekas, D. P. and Tsitsiklis, J. N. Neuro-dynamic programming: an overview. In CDC, pp. 560–564, 1995.

Busoniu, L. ApproxRL: A Matlab toolbox for approximate RL and DP. http://busoniu.net/files/repository/readme-approxrl.html, 2010.

Catto, E. Box2D: A 2D physics engine for games, 2011.

Coulom, R. Reinforcement learning using neural networks, with applications to motor control. PhD thesis, Institut National Polytechnique de Grenoble-INPG, 2002.

Dann, C., Neumann, G., and Peters, J. Policy evaluation with temporal differences: A survey and comparison. J. Mach. Learn. Res., 15(1):809–883, 2014.

Degris, T., Béchu, J., White, A., Modayil, J., Pilarski, P. M., and Denk, C. RLPark. http://rlpark.github.io, 2013.
Deisenroth, M. P., Neumann, G., and Peters, J. A survey on policy search for robotics, foundations and trends in robotics. Found. Trends Robotics, 2(1-2):1–142, 2013.

DeJong, G. and Spong, M. W. Swinging up the Acrobot: An example of intelligent control. In ACC, pp. 2158–2162, 1994.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In CVPR, pp. 248–255, 2009.

Dietterich, T. G. Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res., 13:227–303, 2000.

Dimitrakakis, C., Tziortziotis, N., and Tossou, A. Beliefbox: A framework for statistical methods in sequential decision making. http://code.google.com/p/beliefbox/, 2007.

Dimitrakakis, C., Li, G., and Tziortziotis, N. The reinforcement learning competition 2014. AI Magazine, 35(3):61–65, 2014.

Donaldson, P. E. K. Error decorrelation: a technique for matching a class of functions. In Proc. 3rd Intl. Conf. Medical Electronics, pp. 173–178, 1960.

Doya, K. Reinforcement learning in continuous time and space. Neural Comput., 12(1):219–245, 2000.

Dutech, A., Edmunds, T., Kok, J., Lagoudakis, M., Littman, M., Riedmiller, M., Russell, B., Scherrer, B., Sutton, R., Timmer, S., et al. Reinforcement learning benchmarks and bake-offs II. Advances in Neural Information Processing Systems (NIPS), 17, 2005.

Erez, T., Tassa, Y., and Todorov, E. Infinite horizon model predictive control for nonlinear periodic tasks. Manuscript under review, 4, 2011.

Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J., and Zisserman, A. The pascal visual object classes (VOC) challenge. Int. J. Comput. Vision, 88(2):303–338, 2010.

Fei-Fei, L., Fergus, R., and Perona, P. One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell., 28(4):594–611, 2006.

Furuta, K., Okutani, T., and Sone, H. Computer control of a double inverted pendulum. Comput. Electr. Eng., 5(1):67–84, 1978.

Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G., and Pallett, D. S. DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon Technical Report N, 93, 1993.

Godfrey, J. J., Holliman, E. C., and McDaniel, J. SWITCHBOARD: Telephone speech corpus for research and development. In ICASSP, pp. 517–520, 1992.

Gomez, F. and Miikkulainen, R. 2-D pole balancing with recurrent evolutionary networks. In ICANN, pp. 425–430, 1998.

Guo, X., Singh, S., Lee, H., Lewis, R. L., and Wang, X. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In NIPS, pp. 3338–3346, 2014.

Hansen, N. and Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evol. Comput., 9(2):159–195, 2001.

Heess, N., Hunt, J., Lillicrap, T., and Silver, D. Memory-based control with recurrent neural networks. arXiv:1512.04455, 2015a.

Heess, N., Wayne, G., Silver, D., Lillicrap, T., Erez, T., and Tassa, T. Learning continuous control policies by stochastic value gradients. In NIPS, pp. 2926–2934, 2015b.

Hester, T. and Stone, P. The open-source TEXPLORE code release for reinforcement learning on robots. In RoboCup 2013: Robot World Cup XVII, pp. 536–543, 2013.

Hinton, G., Deng, L., Yu, D., Mohamed, A.-R., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Dahl, T. S. G., and Kingsbury, B. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Process. Mag., 29(6):82–97, 2012.

Hirsch, H.-G. and Pearce, D. The Aurora experimental framework for the performance evaluation of speech recognition systems under noisy conditions. In ASR2000 - Automatic Speech Recognition: Challenges for the New Millennium, ISCA Tutorial and Research Workshop (ITRW), 2000.

Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Comput., 9(8):1735–1780, 1997.

Kakade, S. M. A natural policy gradient. In NIPS, pp. 1531–1538, 2002.

Kimura, H. and Kobayashi, S. Stochastic real-valued reinforcement learning to solve a nonlinear control problem. In IEEE SMC, pp. 510–515, 1999.

Kober, J. and Peters, J. Policy search for motor primitives in robotics. In NIPS, pp. 849–856, 2009.

Kochenderfer, M. JRLF: Java reinforcement learning framework. http://mykel.kochenderfer.com/jrlf, 2006.

Krizhevsky, A. and Hinton, G. Learning multiple layers of features from tiny images. Technical report, 2009.

Krizhevsky, A., Sutskever, I., and Hinton, G. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105, 2012.

LeCun, Y., Cortes, C., and Burges, C. The MNIST database of handwritten digits, 1998.

Levine, S. and Koltun, V. Guided policy search. In ICML, pp. 1–9, 2013.

Levine, S., Finn, C., Darrell, T., and Abbeel, P. End-to-end training of deep visuomotor policies. arXiv:1504.00702, 2015.

Lillicrap, T., Hunt, J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.

Martin, D., Fowlkes, C., Tal, D., and Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV, pp. 416–423, 2001.

Metzen, J. M. and Edgington, M. Maja machine learning framework. http://mloss.org/software/view/220/, 2011.

Michie, D. and Chambers, R. A. BOXES: An experiment in adaptive control. Machine Intelligence, 2:137–152, 1968.

Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Moore, A. Efficient memory-based learning for robot control. Technical report, University of Cambridge, Computer Laboratory, 1990.

Murray, R. M. and Hauser, J. A case study in approximate linearization: The Acrobot example. Technical report, UC Berkeley, EECS Department, 1991.

Murthy, S. S. and Raibert, M. H. 3D balance in legged locomotion: modeling and simulation for the one-legged case. ACM SIGGRAPH Computer Graphics, 18(1):27–27, 1984.

Neumann, G. A reinforcement learning toolbox and RL benchmarks for the control of dynamical systems. Dynamical principles for neuroscience and intelligent biomimetic devices, pp. 113, 2006.

Papis, B. and Wawrzyński, P. dotRL: A platform for rapid reinforcement learning methods development and validation. In FedCSIS, pp. 129–136, 2013.

Parr, R. and Russell, S. Reinforcement learning with hierarchies of machines. Advances in Neural Information Processing Systems, pp. 1043–1049, 1998.

Peters, J. Policy Gradient Toolbox. http://www.ausy.tu-darmstadt.de/Research/PolicyGradientToolbox, 2002.

Peters, J. and Schaal, S. Reinforcement learning by reward-weighted regression for operational space control. In ICML, pp. 745–750, 2007.

Peters, J. and Schaal, S. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008.

Peters, J., Vijaykumar, S., and Schaal, S. Policy gradient methods for robot control. Technical report, 2003.

Peters, J., Mülling, K., and Altün, Y. Relative entropy policy search. In AAAI, pp. 1607–1612, 2010.

Purcell, E. M. Life at low Reynolds number. Am. J. Phys., 45(1):3–11, 1977.

Raibert, M. H. and Hodgins, J. K. Animation of dynamic legged locomotion. In ACM SIGGRAPH Computer Graphics, volume 25, pp. 349–358, 1991.

Riedmiller, M., Blum, M., and Lampe, T. CLS2: Closed loop simulation system. http://ml.informatik.uni-freiburg.de/research/clsquare, 2012.

Rubinstein, R. The cross-entropy method for combinatorial and continuous optimization. Methodol. Comput. Appl. Probab., 1(2):127–190, 1999.

Schäfer, A. M. and Udluft, S. Solving partially observable reinforcement learning problems with recurrent neural networks. In ECML Workshops, pp. 71–81, 2005.

Schaul, T., Bayer, J., Wierstra, D., Sun, Y., Felder, M., Sehnke, F., Rückstieß, T., and Schmidhuber, J. PyBrain. J. Mach. Learn. Res., 11:743–746, 2010.

Schulman, J., Levine, S., Abbeel, P., Jordan, M. I., and Moritz, P. Trust region policy optimization. In ICML, pp. 1889–1897, 2015a.

Schulman, J., Moritz, P., Levine, S., Jordan, M. I., and Abbeel, P. High-dimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015b.

Stephenson, A. On induced stability. Philos. Mag., 15(86):233–236, 1908.

Stone, P., Kuhlmann, G., Taylor, M. E., and Liu, Y. Keepaway soccer: From machine learning testbed to benchmark. In RoboCup 2005: Robot Soccer World Cup IX, pp. 93–105. Springer, 2005.

Sutton, R. S., Precup, D., and Singh, S. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181–211, 1999.

Szita, I. and Lőrincz, A. Learning Tetris using the noisy cross-entropy method. Neural Comput., 18(12):2936–2941, 2006.

Szita, I., Takács, B., and Lőrincz, A. ε-MDPs: Learning in varying environments. J. Mach. Learn. Res., 3:145–174, 2003.

Tassa, Y., Erez, T., and Todorov, E. Synthesis and stabilization of complex behaviors through online trajectory optimization. In IROS, pp. 4906–4913. IEEE, 2012.

Tesauro, G. Temporal difference learning and TD-Gammon. Commun. ACM, 38(3):58–68, 1995.

Todorov, E., Erez, T., and Tassa, Y. MuJoCo: A physics engine for model-based control. In IROS, pp. 5026–5033, 2012.

van Hoof, H., Peters, J., and Neumann, G. Learning of non-parametric control policies with high-dimensional state features. In AISTATS, pp. 995–1003, 2015.

Watter, M., Springenberg, J., Boedecker, J., and Riedmiller, M. Embed to control: A locally linear latent dynamics model for control from raw images. In NIPS, pp. 2728–2736, 2015.

Wawrzyński, P. Learning to control a 6-degree-of-freedom walking robot. In IEEE EUROCON, pp. 698–705, 2007.

Widrow, B. Pattern recognition and adaptive control. IEEE Trans. Ind. Appl., 83(74):269–277, 1964.

Wierstra, D., Foerster, A., Peters, J., and Schmidhuber, J. Solving deep memory POMDPs with recurrent policy gradients. In ICANN, pp. 697–706, 2007.

Williams, R. J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn., 8:229–256, 1992.

Yamaguchi, A. and Ogasawara, T. SkyAI: Highly modularized reinforcement learning library. In IEEE-RAS Humanoids, pp. 118–123, 2010.

Yu, D., Ju, Y.-C., Wang, Y.-Y., Zweig, G., and Acero, A. Automated directory assistance system - from theory to practice. In Interspeech, pp. 2709–2712, 2007.
# Supplementary Material
# 1. Task Specifications

Below we provide some specifications for the task observations, actions, and rewards. Please refer to the benchmark source code (https://github.com/rllab/rllab) for the complete specification of physics parameters.
# 1.1. Basic Tasks
Cart-Pole Balancing: In this task, an inverted pendulum is mounted on a pivot point on a cart. The cart itself is restricted to linear movement, achieved by applying horizontal forces. Due to the system's inherent instability, continuous cart movement is needed to keep the pendulum upright. The observation consists of the cart position x, the pole angle θ, the cart velocity ẋ, and the pole velocity θ̇. The 1D action consists of the horizontal force applied to the cart body. The reward function is given by r(s, a) := 10 - (1 - cos(θ)) - 10^{-5} ||a||_2^2. The episode terminates when |x| > 2.4 or |θ| > 0.2.
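Transcribed directly into code, the reward and termination rule read as follows (a sketch; function names are ours):

```python
import numpy as np

def cartpole_balancing_reward(theta, action):
    # r(s, a) := 10 - (1 - cos(theta)) - 1e-5 * ||a||_2^2
    return 10.0 - (1.0 - np.cos(theta)) - 1e-5 * float(np.sum(np.square(action)))

def cartpole_balancing_done(x, theta):
    # Terminate when |x| > 2.4 or |theta| > 0.2.
    return abs(x) > 2.4 or abs(theta) > 0.2
```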
Cart-Pole Swing Up: This is a more complicated version of the previous task, in which the system should not only be able to balance the pole, but first succeed in swinging it up into an upright position. This task extends the working range of the inverted pendulum to 360°. This is a nonlinear extension of the previous task. It has the same observation and action as in balancing. The reward function is given by r(s, a) := cos(θ). The episode terminates when |x| > 3, with a penalty of -100.

Mountain Car: In this task, a car has to escape a valley by repetitive application of tangential forces. Because the maximal tangential force is limited, the car has to alternately drive up along the two slopes of the valley in order to build up enough inertia to overcome gravity. This brings a challenge of exploration, since before first reaching the goal among all trials, a locally optimal solution exists, which is to drive to the point closest to the target and stay there for the rest of the episode. The observation is given by the horizontal position x and the horizontal velocity ẋ of the car. The reward is given by r(s, a) := -1 + height, with height the car's vertical offset. The episode terminates when the car reaches a target height of 0.6. Hence the goal is to reach the target as soon as possible.

Acrobot Swing Up: In this task, an under-actuated, two-link robot has to swing itself into an upright position. It consists of two joints, of which the first one has a fixed position and only the second one can exert torque. The goal is to swing the robot into an upright position and stabilize around that position. The controller not only has to swing the pendulum in order to build up inertia, similar to the Mountain Car task, but also has to decelerate it in order to prevent it from tipping over. The observation includes the two joint angles, θ1 and θ2, and their velocities, θ̇1 and θ̇2. The action is the torque applied at the second joint. The reward is defined as r(s, a) := -||tip(s) - tip_target||_2, where tip(s) computes the Cartesian position of the tip of the robot given the joint angles. No termination condition is applied.
Double Inverted Pendulum Balancing: This task extends the Cart-Pole Balancing task by replacing the single-link pole with a two-link rigid structure. As in the former task, the goal is to stabilize the two-link pole near the upright position. This task is more difficult than single-pole balancing, since the system is even more unstable and requires the controller to actively maintain balance. The observation includes the cart position x, the joint angles (θ₁ and θ₂), and the joint velocities (θ̇₁ and θ̇₂). We encode each joint angle as its sine and cosine values. The action is the same as in the cart-pole tasks. The reward is given by r(s, a) := 10 − 0.01x_tip² − (y_tip − 2)², where x_tip, y_tip are the coordinates of the tip of the pole. The episode is terminated when y_tip ≤ 1.
# 1.2. Locomotion Tasks
Swimmer: The swimmer is a planar robot with 3 links and 2 actuated joints. Fluid is simulated through viscosity forces, which apply drag on each link, allowing the swimmer to move forward. This task is the simplest of all locomotion tasks, since there are no irrecoverable states in which the swimmer can get stuck, unlike other robots which may fall down or flip over. This places less burden on exploration. The 13-dim observation includes the joint angles, joint velocities, as well as
the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 0.005‖a‖₂², where v_x is the forward velocity. No termination condition is applied.
Hopper: The hopper is a planar monopod robot with 4 rigid links, corresponding to the torso, upper leg, lower leg, and foot, along with 3 actuated joints. More exploration is needed than in the swimmer task, since a stable hopping gait has to be learned without falling. Otherwise, it may get stuck in a local optimum of diving forward. The 20-dim observation includes joint angles, joint velocities, the coordinates of the center of mass, and constraint forces. The reward is given by r(s, a) := v_x − 0.005‖a‖₂² + 1, where the last term is a bonus for being "alive." The episode is terminated when z_body < 0.7, where z_body is the z-coordinate of the body, or when |θ_y| ≥ 0.2, where θ_y is the forward pitch of the body.
Walker: The walker is a planar biped robot consisting of 7 links, corresponding to two legs and a torso, along with 6 actuated joints. This task is more challenging than hopper, since it has more degrees of freedom, and is also prone to falling. The 21-dim observation includes joint angles, joint velocities, and the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 0.005‖a‖₂². The episode is terminated when z_body < 0.8, when z_body > 2.0, or when |θ_y| > 1.0.
Half-Cheetah: The half-cheetah is a planar biped robot with 9 rigid links, including two legs and a torso, along with 6 actuated joints. The 20-dim observation includes joint angles, joint velocities, and the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 0.05‖a‖₂². No termination condition is applied.
Ant: The ant is a quadruped with 13 rigid links, including four legs and a torso, along with 8 actuated joints. This task is more challenging than the previous tasks due to the higher degrees of freedom. The 125-dim observation includes joint angles, joint velocities, coordinates of the center of mass, a (usually sparse) vector of contact forces, as well as the rotation matrix for the body. The reward is given by r(s, a) := v_x − 0.005‖a‖₂² − C_contact + 0.05, where C_contact penalizes contacts with the ground and is given by 5 × 10⁻⁴‖F_contact‖₂², where F_contact is the contact force vector clipped to values between −1 and 1. The episode is terminated when z_body < 0.2 or when z_body > 1.0.
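The clipping of the contact forces is the detail most easily lost; a minimal sketch of the Ant reward as specified above (names are ours, not rllab's):

```python
import numpy as np

def ant_reward(v_x, action, contact_forces):
    # Clip the (usually sparse) contact force vector to [-1, 1] before penalizing.
    F = np.clip(np.asarray(contact_forces), -1.0, 1.0)
    c_contact = 5e-4 * np.sum(F ** 2)
    return v_x - 0.005 * np.sum(np.asarray(action) ** 2) - c_contact + 0.05
```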
Simple Humanoid: This is a simplified humanoid model with 13 rigid links, including the head, body, arms, and legs, along with 10 actuated joints. The increased difficulty comes from the increased degrees of freedom as well as the need to maintain balance. The 102-dim observation includes the joint angles, joint velocities, vector of contact forces, and the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 5 × 10⁻⁴‖a‖₂² − C_contact − C_deviation + 0.2, where C_contact = 5 × 10⁻⁶‖F_contact‖₂², and C_deviation = 5 × 10⁻³(v_y² + v_z²) penalizes deviation from the forward direction. The episode is terminated when z_body < 0.8 or when z_body > 2.0.
Full Humanoid: This is a humanoid model with 19 rigid links and 28 actuated joints. It has more degrees of freedom below the knees and elbows, which makes the system higher-dimensional and harder for learning. The 142-dim observation includes the joint angles, joint velocities, vector of contact forces, and the coordinates of the center of mass. The reward and termination condition is the same as in the Simple Humanoid model.
# 1.3. Partially Observable Tasks
Limited Sensors: The full description is included in the main text.
Noisy Observations and Delayed Actions: For all tasks, we use Gaussian noise with σ = 0.1. The time delay is as follows: Cart-Pole Balancing 0.15 sec, Cart-Pole Swing Up 0.15 sec, Mountain Car 0.15 sec, Acrobot Swing Up 0.06 sec, and Double Inverted Pendulum Balancing 0.06 sec. This corresponds to 3 discretization frames for each task.
System Identifications: For Cart-Pole Balancing and Cart-Pole Swing Up, the pole length is varied uniformly between 50% and 150%. For Mountain Car, the width of the valley varies uniformly between 75% and 125%. For Acrobot Swing Up, each of the pole lengths varies uniformly between 50% and 150%. For Double Inverted Pendulum Balancing, each of the pole lengths varies uniformly between 83% and 167%. Please refer to the benchmark source code for reference values.
# 1.4. Hierarchical Tasks
Locomotion + Food Collection: During each episode, 8 food units and 8 bombs are placed in the environment. Collecting a food unit gives +1 reward, and collecting a bomb gives −1 reward. Hence the best cumulative reward for a given episode is 8.
Locomotion + Maze: During each episode, a +1 reward is given when the robot reaches the goal. Otherwise, the robot receives a zero reward throughout the episode.
# 2. Experiment Parameters
For all batch gradient-based algorithms, we use the same time-varying feature encoding for the linear baseline:
φ(s, t) = concat(s, s ⊙ s, 0.01t, (0.01t)², (0.01t)³, 1)

where s is the state vector and ⊙ represents the element-wise product.
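A minimal sketch of this feature encoding (the function name is ours):

```python
import numpy as np

def baseline_features(s, t):
    """phi(s, t) = concat(s, s * s, 0.01t, (0.01t)^2, (0.01t)^3, 1)."""
    s = np.asarray(s, dtype=np.float64)
    tt = 0.01 * t
    return np.concatenate([s, s * s, [tt, tt ** 2, tt ** 3, 1.0]])

print(baseline_features([0.5, -1.2], t=10))  # 2 state dims -> 8 features
```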
Table 2 shows the experiment parameters for all four categories. We will then detail the hyperparameter search range for the selected tasks and report best hyperparameters, shown in Tables 3, 4, 5, 6, 7, and 8.
Table 2. Experiment Setup
                  Basic & Locomotion   Partially Observable   Hierarchical
Batch size        50,000               50,000                 50,000
Discount          0.99                 0.99                   0.99
Horizon           500                  100                    500
Num. iterations   500                  300                    500
Table 3. Learning Rate α for REINFORCE
                           Search Range            Best
Cart-Pole Swing Up         [1 × 10⁻⁴, 1 × 10⁻¹]    5 × 10⁻³
Double Inverted Pendulum   [1 × 10⁻⁴, 1 × 10⁻¹]    5 × 10⁻³
Swimmer                    [1 × 10⁻⁴, 1 × 10⁻¹]    1 × 10⁻²
Ant                        [1 × 10⁻⁴, 1 × 10⁻¹]    5 × 10⁻³
Table 4. Step Size δKL for TNPG
                           Search Range            Best
Cart-Pole Swing Up         [1 × 10⁻³, 5 × 10⁰]     5 × 10⁻²
Double Inverted Pendulum   [1 × 10⁻³, 5 × 10⁰]     3 × 10⁻²
Swimmer                    [1 × 10⁻³, 5 × 10⁰]     1 × 10⁻¹
Ant                        [1 × 10⁻³, 5 × 10⁰]     3 × 10⁻¹
Table 5. Step Size δKL for TRPO
                           Search Range            Best
Cart-Pole Swing Up         [1 × 10⁻³, 5 × 10⁰]     5 × 10⁻²
Double Inverted Pendulum   [1 × 10⁻³, 5 × 10⁰]     1 × 10⁻³
Swimmer                    [1 × 10⁻³, 5 × 10⁰]     5 × 10⁻²
Ant                        [1 × 10⁻³, 5 × 10⁰]     8 × 10⁻²
Table 6. Step Size δKL for REPS
                           Search Range            Best
Cart-Pole Swing Up         [1 × 10⁻³, 5 × 10⁰]     1 × 10⁻²
Double Inverted Pendulum   [1 × 10⁻³, 5 × 10⁰]     8 × 10⁻¹
Swimmer                    [1 × 10⁻³, 5 × 10⁰]     3 × 10⁻¹
Ant                        [1 × 10⁻³, 5 × 10⁰]     8 × 10⁻¹
Table 7. Initial Extra Noise for CEM
                           Search Range            Best
Cart-Pole Swing Up         [1 × 10⁻³, 1]           1 × 10⁻²
Double Inverted Pendulum   [1 × 10⁻³, 1]           1 × 10⁻¹
Swimmer                    [1 × 10⁻³, 1]           1 × 10⁻¹
Ant                        [1 × 10⁻³, 1]           1 × 10⁻¹
Table 8. Initial Standard Deviation for CMA-ES
                           Search Range            Best
Cart-Pole Swing Up         [1 × 10⁻³, 1 × 10³]     1 × 10³
Double Inverted Pendulum   [1 × 10⁻³, 1 × 10³]     3 × 10⁻¹
Swimmer                    [1 × 10⁻³, 1 × 10³]     1 × 10⁻¹
Ant                        [1 × 10⁻³, 1 × 10³]     1 × 10⁻¹

| { "id": "1506.02438" } |
1604.06174 | Training Deep Nets with Sublinear Memory Cost | We propose a systematic approach to reduce the memory consumption of deep
neural network training. Specifically, we design an algorithm that costs
O(sqrt(n)) memory to train a n layer network, with only the computational cost
of an extra forward pass per mini-batch. As many of the state-of-the-art models
hit the upper bound of the GPU memory, our algorithm allows deeper and more
complex models to be explored, and helps advance the innovations in deep
learning research. We focus on reducing the memory cost to store the
intermediate feature maps and gradients during training. Computation graph
analysis is used for automatic in-place operation and memory sharing
optimizations. We show that it is possible to trade computation for memory -
giving a more memory efficient training algorithm with a little extra
computation cost. In the extreme case, our analysis also shows that the memory
consumption can be reduced to O(log n) with as little as O(n log n) extra cost
for forward computation. Our experiments show that we can reduce the memory
cost of a 1,000-layer deep residual network from 48G to 7G with only 30 percent
additional running time cost on ImageNet problems. Similarly, significant
memory cost reduction is observed in training complex recurrent neural networks
on very long sequences. | http://arxiv.org/pdf/1604.06174 | Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin | cs.LG | null | null | cs.LG | 20160421 | 20160422 |
# Training Deep Nets with Sublinear Memory Cost
# Tianqi Chen 1, Bing Xu 2, Chiyuan Zhang 3, and Carlos Guestrin 1
1 University of Washington 2 Dato. Inc 3 Massachusetts Institute of Technology
# Abstract
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(√n) memory to train an n-layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory, giving a more memory-efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
# 1 Introduction
In this paper, we propose a systematic approach to reduce the memory consumption of deep neural network training. We mainly focus on reducing the memory cost to store intermediate results (feature maps) and gradients, as the size of the parameters is relatively small compared to the size of the intermediate feature maps in many common deep architectures. We use a computation graph analysis to do automatic in-place operation and memory sharing optimizations. More importantly, we propose a novel method to trade computation for memory. As a result, we give a practical algorithm that costs O(√n) memory for feature maps to train an n-layer network with only double the forward pass computational cost. Interestingly, we also show that in the extreme case, it is possible to use as little as O(log n) memory for the feature maps to train an n-layer network.
We have recently witnessed the success of deep neural networks in many domains [8], such as computer vision, speech recognition, natural language processing and reinforcement learning. Many of the successes are brought by innovations in new architectures of deep neural networks. Convolutional neural networks [15, 14, 13, 10] model spatial patterns and give state-of-the-art results in computer vision tasks. Recurrent neural networks, such as long short-term memory [12], show inspiring results in sequence modeling and structure prediction. One common trend in those new models is to use deeper architectures [18, 14, 13, 10] to capture the complex patterns in a large amount of training data. Since the cost of storing feature maps and their gradients scales linearly with the depth of the network, our capability of exploring deeper models is limited by the device (usually a GPU) memory. For example, we already run out of memory in one of the current state-of-the-art models as described in [11]. In the long run, an ideal machine learning system should be able to continuously learn from an increasing amount of training data. Since the optimal model size and complexity often grows with more training data, it is very important to have memory-efficient training algorithms.
Reducing memory consumption not only allows us to train bigger models. It also enables larger batch sizes for better device utilization and stability of batchwise operators such as batch normalization [13]. For memory-limited devices, it helps improve memory locality and potentially leads to better memory access patterns. It also enables us to switch from model parallelism to data parallelism for training deep convolutional neural networks, which can be beneficial in certain circumstances. Our solution enables us to train deeper convolutional neural networks, as well as recurrent neural networks with longer unrolling steps. We provide guidelines for deep learning frameworks to incorporate the memory optimization techniques proposed in this paper. We will also make our implementation of the memory optimization algorithm publicly available.
# 2 Related Works
We can trace the idea of computational graphs and liveness analysis back to the literature of compiler optimizations [3]. Analogies between optimizing a computer program and optimizing a deep neural network computational graph can be found. For example, memory allocation in deep networks is similar to register allocation in a compiler. The formal analysis of computational graphs allows us to save memory in a principled way. Theano [5, 4] is a pioneering framework to bring the computation graph to deep learning, which is joined by recently introduced frameworks such as CNTK [2], Tensorflow [1] and MXNet [6]. Theano and Tensorflow use reference-count based recycling and runtime garbage collection to manage memory during training, while MXNet uses a static memory allocation strategy prior to the actual computation. However, most of the existing frameworks focus on graph analysis to optimize computation after the gradient graph is constructed, but do not discuss the computation and memory trade-off.
The trade-off between memory and computation has been a long-standing topic in systems research. Although not widely known, the idea of dropping intermediate results is also known as the gradient checkpointing technique in the automatic differentiation literature [9]. We bring this idea to neural network gradient graph construction for general deep neural networks. Through discussions with our colleagues [19], we know that the idea of dropping computation has been applied in some limited specific use-cases. In this paper, we propose a general methodology that works for general deep neural networks, including both convolutional and recurrent neural networks. Our results show that it is possible to train a general deep neural network with sublinear memory cost. More importantly, we propose an automatic planning algorithm to provide a good memory plan for real use-cases. The proposed gradient graph optimization algorithm can be readily combined with all the existing memory optimizations in the computational graph to further reduce the memory consumption of deep learning frameworks.
There are other ways to train big models, such as swapping of CPU/GPU memory and use of model parallel training [7, 16]. These are orthogonal approaches and can be used together with our algorithm to train even bigger models with fewer resources. Moreover, our algorithm does not need additional communication over PCI-E and can save the bandwidth for model/data parallel training.
# 3 Memory Optimization with Computation Graph
We start by reviewing the concept of the computation graph and the memory optimization techniques. Some of these techniques are already used by existing frameworks such as Theano [5, 4], Tensorflow [1] and MXNet [6]. A computation graph consists of operational nodes and edges that represent the dependencies between the operations. Fig. 1 gives an example of the computation graph of a two-layer fully connected neural network. Here we use coarse-grained forward and backward operations to make the graph simpler. We further simplify the graph by hiding the weight nodes and gradients of the weights. A computation graph used in practice can be more complicated and contains a mixture of fine/coarse-grained operations. The analysis presented in this paper can be directly used in those more general cases.
Once the network configuration (forward graph) is given, we can construct the corresponding backward pathway for gradient calculation. A backward pathway can be constructed by traversing
Figure 1: Computation graph and possible memory allocation plan of a two-layer fully connected neural network training procedure. Each node represents an operation and each edge represents a dependency between the operations. The nodes with the same color share the memory to store output or back-propagated gradient in each operator. To make the graph clearer, we omit the weights and their output gradient nodes from the graph and assume that the gradients of the weights are also calculated during backward operations. We also annotate two places where the in-place and sharing strategies are used.
the configuration in reverse topological order, and applying the backward operators as in the normal back-propagation algorithm. The backward pathway in Fig. 1 represents the gradient calculation steps explicitly, so that the gradient calculation step in training is simplified to just a forward pass on the entire computation graph (including the gradient calculation pathway). An explicit gradient path also offers some other benefits (e.g., being able to calculate higher order gradients), which is beyond our scope and will not be covered in this paper.
When training a deep convolutional/recurrent network, a great proportion of the memory is usually used to store the intermediate outputs and gradients. Each of these intermediate results corresponds to a node in the graph. A smart allocation algorithm is able to assign the least amount of memory to these nodes by sharing memory when possible. Fig. 1 shows a possible allocation plan of the example two-layer neural network. Two types of memory optimizations can be used:
• In-place operation: directly store the output values in the memory of an input value.

• Memory sharing: memory used by intermediate results that are no longer needed can be recycled and used by another node.
The allocation plan in Fig. 1 contains examples of both cases. The first sigmoid transformation is carried out using an in-place operation to save memory, which is then reused by its backward operation. The storage of the softmax gradient is shared with the gradient of the first fully connected layer. Ad hoc application of these optimizations can lead to errors. For example, if the input of an operation is still needed by another operation, applying an in-place operation on the input will lead to a wrong result. We can only share memory between nodes whose lifetimes do not overlap. There are multiple ways to solve this problem. One option is to construct the conflict graph, with each variable as a node and edges between variables with overlapping lifespans, and then run a graph-coloring algorithm. This will cost O(n²) computation time. We adopt a simpler heuristic with only O(n) time. The algorithm is demonstrated in Fig. 2. It traverses the graph in topological order, and uses a counter to indicate the liveness of each record. An in-place operation can happen when there is no other pending operation that depends on its input. Memory sharing happens when a recycled tag is used by another node. This can also serve as a dynamic runtime algorithm that traverses the graph, and uses a garbage collector to recycle the outdated memory. We use this as a static memory allocation algorithm, to allocate the memory to each node before the execution starts, in order to avoid the overhead of garbage collection during runtime.
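A minimal sketch of this O(n) heuristic on a topologically sorted graph; the interface and names are ours, not MXNet's:

```python
def plan_memory(nodes, inputs, consumers):
    """Assign a memory tag to each node; equal tags share storage.

    nodes     : node ids in topological order
    inputs    : dict node -> list of input nodes
    consumers : dict node -> number of operations that will read it
    """
    free_tags, next_tag, tag_of = [], 0, {}
    pending = dict(consumers)  # liveness counter per node
    for v in nodes:
        ins = inputs.get(v, [])
        # In-place: reuse an input's storage if v is its only remaining reader.
        donor = next((u for u in ins if pending[u] == 1), None)
        if donor is not None:
            tag_of[v] = tag_of[donor]
        elif free_tags:
            tag_of[v] = free_tags.pop()   # memory sharing with a dead record
        else:
            tag_of[v] = next_tag          # allocate a fresh tag
            next_tag += 1
        for u in ins:
            pending[u] -= 1
            if pending[u] == 0 and tag_of[u] != tag_of[v]:
                free_tags.append(tag_of[u])  # recycle the outdated tag
    return tag_of

# Toy chain a -> b -> c: each step can be computed in place of its input.
print(plan_memory(["a", "b", "c"],
                  {"b": ["a"], "c": ["b"]},
                  {"a": 1, "b": 1, "c": 0}))  # {'a': 0, 'b': 0, 'c': 0}
```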
Figure 2: Memory allocation algorithm on a computation graph. Each node is associated with a liveness counter that counts the operations yet to be fulfilled. A temporal tag is used to indicate memory sharing. An in-place operation can be carried out when the current operation is the only one left (the input's counter equals 1). The tag of a node can be recycled when the node's counter goes to zero.
Guidelines for Deep Learning Frameworks. As we can see from the algorithm demonstration in Fig. 2, the data dependency causes a longer lifespan for each output and increases the memory consumption of big networks. It is important for deep learning frameworks to
• Declare the dependency requirements of gradient operators in a minimal manner.

• Apply liveness analysis on the dependency information and enable memory sharing.
It is important to declare minimum dependencies. For example, the allocation plan in Fig. 1 won't be possible if sigmoid-backward also depended on the output of the first fullc-forward. The dependency analysis can usually reduce the memory footprint of deep network prediction of an n-layer network from O(n) to nearly O(1), because sharing can be done between each of the intermediate results. The technique also helps to reduce the memory footprint of training, although only up to a constant factor.
# 4 Trade Computation for Memory
# 4.1 General Methodology
The techniques introduced in Sec. 3 can reduce the memory footprint for both training and prediction of deep neural networks. However, due to the fact that most gradient operators will depend on the intermediate results of the forward pass, we still need O(n) memory for intermediate results to train an n-layer convolutional network or a recurrent neural network with a sequence of length n. In order to further reduce the memory, we propose to drop some of the intermediate results, and recover them from an extra forward computation when needed.
More specifically, during the backpropagation phase, we can re-compute the dropped intermediate results by running forward from the closest recorded results. To present the idea more clearly, we show a simplified algorithm for a linear chain feed-forward neural network in Alg. 1. Specifically, the neural network is divided into several segments. The algorithm only remembers the output of each segment and drops all the intermediate results within each segment. The dropped results are recomputed at the segment level during back-propagation. As a result, we only need to pay the memory cost to store the outputs of each segment plus the maximum memory cost to do backpropagation on each segment.
Alg. 1 can also be generalized to common computation graphs as long as we can divide the graph into segments. However, there are two drawbacks to directly applying Alg. 1: 1) users have to manually divide the graph and write a customized training loop; 2) we cannot benefit from other memory optimizations presented in Sec. 3. We solve this problem by introducing a general gradient graph construction algorithm that uses essentially the same idea. The algorithm is given in Alg. 2.
Algorithm 1: Backpropagation with Data Dropping in a Linear Chain Network
v ← input
for k = 1 to length(segments) do
    temp[k] ← v
    for i = segments[k].begin to segments[k].end − 1 do
        v ← layer[i].forward(v)
    end
end
g ← gradient(v, label)
for k = length(segments) to 1 do
    v ← temp[k]
    localtemp ← empty hashtable
    for i = segments[k].begin to segments[k].end − 1 do
        localtemp[i] ← v
        v ← layer[i].forward(v)
    end
    for i = segments[k].end − 1 to segments[k].begin do
        g ← layer[i].backward(g, localtemp[i])
    end
end
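A runnable toy version of Alg. 1, assuming each layer exposes a forward callable and a backward callable (this interface is ours):

```python
def train_step(layers, segments, x, grad_loss):
    """Backprop with data dropping in a linear chain (Alg. 1 sketch).

    layers    : list of (forward, backward) pairs; backward(g, saved_input)
    segments  : list of (begin, end) index pairs covering range(len(layers))
    grad_loss : maps the final output to its gradient
    """
    # Forward: remember only each segment's input, drop everything inside.
    temp, v = [], x
    for begin, end in segments:
        temp.append(v)
        for i in range(begin, end):
            v = layers[i][0](v)
    g = grad_loss(v)
    # Backward: re-run each segment's forward to recover dropped results.
    for (begin, end), v in reversed(list(zip(segments, temp))):
        local = {}
        for i in range(begin, end):
            local[i] = v
            v = layers[i][0](v)
        for i in reversed(range(begin, end)):
            g = layers[i][1](g, local[i])
    return g

# Toy usage: each layer doubles its input; backward multiplies the gradient by 2.
layers = [(lambda v: 2 * v, lambda g, saved: 2 * g) for _ in range(4)]
print(train_step(layers, [(0, 2), (2, 4)], x=1.0, grad_loss=lambda out: 1.0))  # 16.0
```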
In this algorithm, the user specifies a function m : V → ℕ on the nodes of the computation graph to indicate how many times a result can be recomputed. We call m the mirror count function, as the re-computation is essentially duplicating (mirroring) the nodes. When all the mirror counts are set to 0, the algorithm degenerates to the normal gradient graph. To specify the re-computation pattern in Alg. 2, the user only needs to set m(v) = 1 for nodes within each segment and m(v) = 0 for the output node of each segment. The mirror count can also be larger than 1, which leads to a recursive generalization to be discussed in Sec. 4.4. Fig. 3 shows an example of a memory optimized gradient graph. Importantly, Alg. 2 also outputs a traversal order for the computation, so the memory usage can be optimized. Moreover, this traversal order can help introduce control flow dependencies for frameworks that depend on runtime allocation.
# 4.2 Drop the Results of Low Cost Operations
One quick application of the general methodology is to drop the results of low cost operations and keep the results that are time consuming to compute. This is usually useful in a Conv-BatchNorm-Activation pipeline in convolutional neural networks. We can always keep the result of convolution, but drop the result of the batch normalization, activation function and pooling. In practice this will translate to a memory saving with little computation overhead, as the computation for both batch normalization and activation functions are cheap.
â
# 4.3 An O( n) Memory Cost Algorithm
Alg. 2 provides a general way to trade computation for memory. It remains to ask which intermediate result we should keep and which ones to re-compute. Assume we divide the n network into k segments the memory cost to train this network is given as follows.
cost-total = max cost-of-segment(é) +O(k) =O (<) + O(k) (1)
The ï¬rst part of the equation is the memory cost to run back-propagation on each of the segment. Given that the segment is equally divided, this translates into O(n/k) cost. The second part of n, we get equation is the cost to store the intermediate outputs between segments. Setting k = n). This algorithm only requires an additional forward pass during training, but the cost of O(2
5
Network Normal Memory Optimized Configuration Gradient Graph Gradient Graph input input input-grad input input-grad i conv-forward âsonw-forward tN conv-backward cony-forward it~ bneforward ! br-forward bn-backward bn forward a. relu-forward " relu-forward relu-backward â_relu-forward conv-forward conv-forward conv-backward conv-forward > bn-forward bn-forward bn-backward bn-forward conv-backward bn-backward relu-backward conv-backward bn-backward relu-forward relu-forward relu-backward _relu-forward --- > relu-backward ââ* data dependency ----» control dependency [5] Memory allocation for each output of op, same color indicates shared
# memory.
Figure 3: Memory optimized gradient graph generation example. The forward path is mirrored to represent the re-computation happened at gradient calculation. User speciï¬es the mirror factor to control whether a result should be dropped or kept.
# Algorithm 2: Memory Optimized Gradient Graph Construction
Input: G = (V, pred), input computation graph, the pred[v] gives the predecessors array of
node v. Input: gradient(succ_grads, output, inputs), symbolic gradient function that creates a gradient node given successor gradients and output and inputs Input: m : V + Nt, m(v) gives how many time node v should be duplicated, m(v) = 0 means do no drop output of node v. alu] + v forv EV for k = 1 to max,cy m(v) do for v in topological-order(V) do if k < m(v) then a{v] < new node, same operator as v pred{a[v]] â U, cpredjaj{ale} end end end Vâ & topological-order(V) for v in reverse-topological-order(V) do giv] <â gradient(|g{v] for v in successor(v)], alu], [a{v] for v in pred{u]]) V' & append(Vâ, topological-order(acenstors(g[v])) â Vâ) end Output: Gâ = (Vâ, pred) the new graph, the order in Vâ gives the logical execution order.
In the most general case, the memory cost of each layer is not the same, so we cannot simply set k = √n. However, the trade-off between the intermediate outputs and the cost of each stage still holds. In this case, we use Alg. 3 to do a greedy allocation, with the given budget for the memory cost within each segment as a single parameter B. Varying B gives us various allocation plans that either assign more memory to the intermediate outputs, or to the computation within each stage. When we do static memory allocation, we can get the exact memory cost given each allocation plan. We can use this information to do a heuristic search over B to find an optimal memory plan that balances the cost of the two. The details of the searching step are presented in the supplementary material. We find this approach works well in practice. We can also generalize this algorithm by considering the cost to run each operation, to try to keep time-consuming operations when possible.
Algorithm 3: Memory Planning with Budget
Input: G = (V, pred), input computation graph.
Input: C ⊂ V, candidate stage splitting points; we will search splitting points over v ∈ C.
Input: B, approximate memory budget. We can search over B to optimize the memory allocation.
temp ← 0, x ← 0, y ← 0
for v in topological-order(V) do
    temp ← temp + size-of-output(v)
    if v ∈ C and temp > B then
        x ← x + size-of-output(v), y ← max(y, temp)
        m(v) ← 0, temp ← 0
    else
        m(v) ← 1
    end
end
Output: x, the approximate cost to store inter-stage feature maps
Output: y, the approximate memory cost for each sub-stage
Output: m, the mirror plan to feed to Alg. 2
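A direct, runnable translation of Alg. 3; the data structures are our own choices:

```python
def plan_with_budget(nodes, size_of_output, candidates, budget):
    """Greedy memory planning with budget (Alg. 3 sketch)."""
    temp, x, y, mirror = 0, 0, 0, {}
    for v in nodes:  # nodes must be in topological order
        temp += size_of_output[v]
        if v in candidates and temp > budget:
            x += size_of_output[v]   # v's output is kept between stages
            y = max(y, temp)         # approximate peak cost within a stage
            mirror[v] = 0
            temp = 0
        else:
            mirror[v] = 1            # v's output is dropped and recomputed
    return x, y, mirror
```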
Figure 4: Recursion view of the memory optimized allocations. The segment can be viewed as a single operator that combines all the operators within the segment. Inside each operator, a sub-graph is executed to calculate the gradient.
# 4.4 More General View: Recursion and Subroutine
In this section, we provide an alternative view of the memory optimization scheme described above. Specifically, we can view each segment as a bulk operator that combines all the operations inside the segment together. The idea is illustrated in Fig. 4. The combined operator calculates the gradient by executing over the sub-graph that describes its internal computation. This view allows us to treat a series of operations as subroutines. The optimization within the sub-graph does not affect the external world. As a result, we can recursively apply our memory optimization scheme to each sub-graph.
Pay Even Less Memory with Recursion. Let g(n) be the memory cost to do a forward and backward pass on an n-layer neural network. Assume that we store k intermediate results in the graph and apply the same strategy recursively when doing the forward and backward pass on the sub-path. We have the following recursion formula:
g(n) = k + g(n/(k + 1))    (2)
Solving this recursion formula gives us
g(n) = k · log_{k+1}(n)    (3)
As a special case, if we set k = 1, we get g(n) = log₂ n. This is an interesting conclusion, as all the existing implementations take O(n) memory in feature maps to train an n-layer neural network. This will require O(n log₂ n) forward pass cost, so it may not be used commonly. But it demonstrates how we can trade memory even further by using recursion.
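The recursion and its closed form can be checked with a few lines:

```python
import math

def g(n, k):
    """g(n) = k + g(n / (k + 1)); memory cost of the recursive schedule."""
    return 0 if n <= 1 else k + g(n / (k + 1), k)

for k in (1, 3):
    print(k, g(4096, k), k * math.log(4096, k + 1))  # 12 vs 12.0, 18 vs 18.0
```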
# 4.5 Guideline for Deep Learning Frameworks
In this section, we have shown that it is possible to trade computation for memory and combine it with the system optimizations proposed in Sec. 3. It is helpful for deep learning frameworks to
• Enable an option to drop the results of low cost operations.

• Provide planning algorithms to give an efficient memory plan.

• Enable the user to set the mirror attribute in the computation graph for memory optimization.
While the last option is not strictly necessary, providing such an interface enables users to hack their own memory optimizers and encourages future research in the related directions. Under this spirit, we support the customization of graph mirror plans and will make the source code publicly available.
# 5 Experiments
# 5.1 Experiment Setup
We evaluate the memory cost of storing intermediate feature maps using the methods described in this paper. We implement our method on top of MXNet [6], which statically allocates all the intermediate feature maps before computation. This enables us to report the exact memory cost spent on feature maps. Note that the memory cost of parameters and temporal memory (e.g., required by convolution) is not part of the memory cost report. We also record the runtime total memory cost by running training steps on a Titan X GPU. Note that all the memory optimizations proposed in this paper give equivalent weight gradients for training and can always be safely applied. We compare the following memory allocation algorithms:
• no optimization, directly allocate memory to each node in the graph without any optimization.

• inplace, enable in-place optimization when possible.

• sharing, enable in-place optimization as well as sharing. This represents all the system optimizations presented in Sec. 3.

• drop bn-relu, apply all system optimizations and drop the results of batch norm and relu; this is only shown in the convolutional net benchmark.

• sublinear plan, apply all system optimizations and use plan search with Alg. 3 to trade computation for memory.
# 5.2 Deep Convolutional Network
We first evaluate the proposed method on convolutional neural networks for image classification. We use the deep residual network architecture [11] (ResNet), which gives the state-of-the-art result on this task. Specifically, we use a batch size of 32 and set the input image shape as (3, 224, 224). We generate different depth configurations of ResNet¹ by increasing the depth of each residual stage.
We show the results in Fig. 5. We can find that the system optimizations introduced in Sec. 3 help to reduce the memory cost by a factor of two to three. However, the memory cost after optimization still exhibits a linear trend with respect to the number of layers. Even with all the system optimizations, it is only possible to train a 200-layer ResNet with the best GPU we can get. On the other hand, the proposed algorithm gives a sub-linear trend in terms of the number of layers. By trading computation for memory, we can train a 1000-layer ResNet using less than 7GB of GPU memory.
¹ We count a conv-bn-relu as one layer.
(a) Feature map memory cost estimation (b) Runtime total memory cost
Figure 5: The memory cost of different allocation strategies on deep residual net configurations. The feature map memory cost is generated from the static memory allocation plan. We also use nvidia-smi to measure the total memory cost during runtime (the missing points are due to out of memory). The figures are in log-scale, so y = αx^β translates to log(y) = β log(x) + log α. We can find that the graph based allocation strategy indeed helps to reduce the memory cost by a factor of two to three. More importantly, the sub-linear planning algorithm indeed gives a sub-linear memory trend with respect to the workload. The real runtime result also confirms that we can use our method to greatly reduce the memory cost of deep net training.
(a) Feature map memory cost estimation (b) Runtime total memory cost
Figure 6: The memory cost of different memory allocation strategies on LSTM configurations. System optimization gives a lot of memory saving on the LSTM graph, which contains a lot of fine-grained operations. The sub-linear plan can give more than a 4x reduction over the optimized plan that does not trade computation for memory.
# 5.3 LSTM for Long Sequences
We also evaluate the algorithms on an LSTM under a long sequence unrolling setting. We unrolled a four-layer LSTM with 1024 hidden states for 64 steps over time. The batch size is set to 64. The input of each timestamp is a continuous 50-dimension vector and the output is a softmax over 5000 classes. This is a typical setting for speech recognition [17], but our result can also be generalized to other recurrent networks. Using a long unrolling step can potentially help a recurrent model to learn long
(a) ResNet (b) LSTM
Figure 7: The runtime speed of different allocation strategies in the two settings. The speed is measured by running 20 batches on a Titan X GPU. We can see that using the sub-linear memory plan incurs roughly 30% additional runtime cost compared to linear memory allocation. The general trend of speed vs. workload remains linear for both strategies.
term dependencies over time. We show the results in Fig. 6. We can find that inplace helps a lot here. This is because the in-place optimization in our experiment enables direct addition of the weight gradient to a single memory cell, avoiding the allocation of space for the gradient at each timestamp. The sub-linear plan gives more than a 4x reduction over the optimized memory plan.
# Impact on Training Speed
We also measure the runtime cost of each strategy. The speed is benchmarked on a single Titan X GPU. The results are shown in Fig. 7. Because of the double forward cost in gradient calculation, the sublinear allocation strategy costs 30% additional runtime compared to the normal strategy. By paying this small price, we are now able to train a much wider range of deep learning models.
# 6 Conclusion
In this paper, we proposed a systematic approach to reduce the memory consumption of the intermediate feature maps when training deep neural networks. Computation graph liveness analysis is used to enable memory sharing between feature maps. We also showed that we can trade computation for memory. By combining the techniques, we can train an n-layer deep neural network with only O(√n) memory cost, by paying nothing more than one extra forward computation per mini-batch.
# Acknowledgement
We thank the MXNet community and developers for their helpful feedback. We thank Ian Goodfellow and Yu Zhang for helpful discussions on computation-memory tradeoffs. We would like to thank David Warde-Farley for pointing out the relation to gradient checkpointing. We would like to thank Nvidia for the hardware support. This work was supported in part by ONR (PECASE) N000141010672, NSF IIS 1258741 and the TerraSwarm Research Center sponsored by MARCO and DARPA. Chiyuan Zhang acknowledges the support of a Nuance Foundation Grant.
# References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

[2] Amit Agarwal, Eldar Akchurin, Chris Basoglu, Guoguo Chen, Scott Cyphers, Jasha Droppo, Adam Eversole, Brian Guenter, Mark Hillebrand, Ryan Hoens, Xuedong Huang, Zhiheng Huang, Vladimir Ivanov, Alexey Kamenev, Philipp Kranen, Oleksii Kuchaiev, Wolfgang Manousek, Avner May, Bhaskar Mitra, Olivier Nano, Gaizka Navarro, Alexey Orlov, Marko Padmilac, Hari Parthasarathi, Baolin Peng, Alexey Reznichenko, Frank Seide, Michael L. Seltzer, Malcolm Slaney, Andreas Stolcke, Yongqiang Wang, Huaming Wang, Kaisheng Yao, Dong Yu, Yu Zhang, and Geoffrey Zweig. An introduction to computational networks and the computational network toolkit. Technical Report MSR-TR-2014-112, August 2014.

[3] Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1986.

[4] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.

[5] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.

[6] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems (LearningSys'15), 2015.

[7] Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, 2012.

[8] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MIT Press, 2016.

[9] Andreas Griewank and Andrea Walther. Algorithm 799: Revolve: An implementation of checkpointing for the reverse or adjoint mode of computational differentiation. ACM Trans. Math. Softw., 26(1):19–45, March 2000.

[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.

[12] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, November 1997.

[13] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML'15), 2015.

[14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097–1105. 2012.

[15] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In S. Haykin and B. Kosko, editors, Intelligent Signal Processing, pages 306–351. IEEE Press, 2001.

[16] Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, and Stephen W. Keckler. Virtualizing deep neural networks for memory-efficient neural network design. arXiv preprint arXiv:1602.08124, 2016.

[17] Hasim Sak, Andrew W. Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH 2014, 15th Annual Conference of the International Speech Communication Association, Singapore, September 14-18, 2014, pages 338–342, 2014.

[18] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. arXiv preprint arXiv:1507.06228, 2015.

[19] Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yao, Sanjeev Khudanpur, and James Glass. Highway long short-term memory RNNs for distant speech recognition. arXiv preprint arXiv:1510.08983, 2015.
# A Search over Budget B
Alg. 3 allows us to generate an optimized memory plan given a single parameter B. This algorithm relies on approximate memory estimation for faster speed. After we get the plan, we can use the static allocation algorithm to calculate the exact memory cost. We can then do a grid search over B to find a good memory plan.
To get the setting of the grid, we first run the allocation algorithm with B = 0, then run the allocation algorithm again with B = √(xy). Here x and y are the outputs from Alg. 3 in the first run: x is the approximate cost to store inter-stage feature maps and y is the approximate cost to run each stage, so B = √(xy) gives an estimation of each stage's memory cost. This can already give a good memory plan. We then set a grid around B = √(xy) to further refine the solution.
We found that a grid of [B/√2, B, √2·B] can already give good memory plans in the experiments. We implemented the allocation algorithm in Python without any attempt to optimize for speed. Our code costs a few seconds to get the plans needed in the experiments.
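A sketch of this search; run_alloc and exact_cost stand for Alg. 3 and the static allocator, and their signatures are our assumption:

```python
import math

def search_budget(run_alloc, exact_cost):
    """run_alloc(B) -> (x, y, plan); exact_cost(plan) -> exact memory cost."""
    x, y, _ = run_alloc(0)
    b0 = math.sqrt(x * y)                          # estimated per-stage cost
    grid = (b0 / math.sqrt(2), b0, b0 * math.sqrt(2))
    return min((run_alloc(b)[2] for b in grid), key=exact_cost)
```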
| { "id": "1512.03385" } |
1604.03168 | Hardware-oriented Approximation of Convolutional Neural Networks | High computational complexity hinders the widespread usage of Convolutional
Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are
arguably the most promising approach for reducing both execution time and power
consumption. One of the most important steps in accelerator development is
hardware-oriented model approximation. In this paper we present Ristretto, a
model approximation framework that analyzes a given CNN with respect to
numerical resolution used in representing weights and outputs of convolutional
and fully connected layers. Ristretto can condense models by using fixed point
arithmetic and representation instead of floating point. Moreover, Ristretto
fine-tunes the resulting fixed point network. Given a maximum error tolerance
of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit.
The code for Ristretto is available. | http://arxiv.org/pdf/1604.03168 | Philipp Gysel, Mohammad Motamedi, Soheil Ghiasi | cs.CV | 8 pages, 4 figures, Accepted as a workshop contribution at ICLR 2016.
Updated comparison to other works | null | cs.CV | 20160411 | 20161020 |
Accepted as a workshop contribution at ICLR 2016
# HARDWARE-ORIENTED APPROXIMATION OF CONVOLUTIONAL NEURAL NETWORKS
Philipp Gysel, Mohammad Motamedi & Soheil Ghiasi Department of Electrical and Computer Engineering University of California, Davis Davis, CA 95616, USA {pmgysel,mmotamedi,ghiasi}@ucdavis.edu
# ABSTRACT
High computational complexity hinders the widespread usage of Convolutional Neural Networks (CNNs), especially in mobile devices. Hardware accelerators are arguably the most promising approach for reducing both execution time and power consumption. One of the most important steps in accelerator development is hardware-oriented model approximation. In this paper we present Ristretto, a model approximation framework that analyzes a given CNN with respect to the numerical resolution used in representing weights and outputs of convolutional and fully connected layers. Ristretto can condense models by using fixed point arithmetic and representation instead of floating point. Moreover, Ristretto fine-tunes the resulting fixed point network. Given a maximum error tolerance of 1%, Ristretto can successfully condense CaffeNet and SqueezeNet to 8-bit. The code for Ristretto is available.
# 1 INTRODUCTION
The annually held ILSVRC competition has seen state-of-the-art classification accuracies by deep networks such as AlexNet by Krizhevsky et al. (2012), VGG by Simonyan & Zisserman (2015), GoogleNet (Szegedy et al., 2015) and ResNet (He et al., 2015). These networks contain millions of parameters and require billions of arithmetic operations.
Various solutions have been offered to reduce the resource requirements of CNNs. Fixed point arithmetic is less resource hungry compared to floating point. Moreover, it has been shown that fixed point arithmetic is adequate for neural network computation (Hammerstrom, 1990). This observation has been leveraged recently to condense deep CNNs. Gupta et al. (2015) show that networks on datasets like CIFAR-10 (10 image classes) can be trained in 16-bit. Further trimming of the same network uses as low as 7-bit multipliers (Courbariaux et al., 2014). Another approach by Courbariaux et al. (2016) uses binary weights and activations, again on the same network.
The complexity of deep CNNs can be split into two parts. First, the convolutional layers contain more than 90% of the required arithmetic operations. By turning these floating point operations into operations with small fixed point numbers, both the chip area and energy consumption can be significantly reduced. The second resource-intense layer type is the fully connected layer, which contains over 90% of the network parameters. As a nice by-product of using bit-width reduced fixed point numbers, the data transfer to off-chip memory is reduced for fully connected layers. In this paper, we concentrate on approximating convolutional and fully connected layers only. Using fixed point arithmetic is a hardware-friendly way of approximating CNNs. It allows the use of smaller processing elements and reduces the memory requirements without adding any computational overhead such as decompression.
Even though it has been shown that CNNs perform well with small fixed point numbers, there exists no thorough investigation of the delicate trade-off between bit-width reduction and accuracy loss. In this paper we present Ristretto, which automatically finds a good balance between the bit-width reduction and the given maximum error tolerance. Ristretto performs a fast and fully automated trimming analysis of any given network. This post-training tool can be used for application-specific trimming of neural networks.
# 2 MIXED FIXED POINT PRECISION
In the next two sections we discuss the quantization of a floating point CNN to fixed point. Moreover, we explain dynamic fixed point, and show how it can be used to further decrease network size while maintaining the classification accuracy.
Figure 1: Data path of quantized convolutional and fully connected layers.
The data path of fully connected and convolutional layers consists of a series of MAC operations (multiplication and accumulation), as shown in Figure 1. The layer activations are multiplied with the network weights, and the results are accumulated to form the output. As shown by Qiu et al. (2016), it is a good approach to use mixed precision, i.e., different parts of a CNN use different bit-widths.
In Figure 1, m and n refer to the number of bits for layer outputs and layer weights, respectively. Multiplication results are accumulated using an adder tree which gets thicker towards the end. The adder outputs in the first level are m + n + 2 bits wide, and the bit-width grows by 1 bit in each level. In the last level, the bit-width is m + n + lg₂ x, where x is the number of multiplication operations per output value. In the last stage, the bias is added to form the layer output. For each network layer, we need to find the right balance between reducing the bit-widths (m and n) and maintaining a good classification accuracy.
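For illustration, the bit-width growth along the adder tree can be tabulated; this helper is ours, not part of Ristretto:

```python
import math

def adder_tree_widths(m, n, x):
    """Bit-widths per adder-tree level for x products of (m+n+1)-bit values."""
    levels = math.ceil(math.log2(x))
    # First-level adder outputs are m+n+2 bits; each level adds one bit.
    return [m + n + 1 + lvl for lvl in range(1, levels + 1)]

# 3x3 convolution over 64 input channels: x = 3*3*64 multiplications.
print(adder_tree_widths(m=8, n=8, x=3 * 3 * 64))  # final width grows as m+n+lg2(x)
```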
# 3 DYNAMIC FIXED POINT
The different parts of a CNN have a significant dynamic range. In large layers, the outputs are the result of thousands of accumulations, thus the network parameters are much smaller than the layer outputs. Fixed point has only limited capability to cover a wide dynamic range. Dynamic fixed point (Williamson, 1991; Courbariaux et al., 2014) is a solution to this problem. In dynamic fixed point, each number is represented as follows: (−1)^s · 2^(−fl) · Σ_{i=0}^{B−2} 2^i · x_i. Here B denotes the bit-width, s the sign bit, fl the fractional length, and x the mantissa bits. The intermediate values in a network have different ranges. Therefore it is desirable to assign fixed point numbers into groups with constant fl, such that the number of bits allocated to the fractional part is constant within that group. Each network layer is split into two groups: one for the layer outputs, one for the layer weights. This allows us to better cover the dynamic range of both layer outputs and weights, as weights are normally significantly smaller. On the hardware side, it is possible to realize dynamic fixed point arithmetic using bit shifters.
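A sketch of round-to-nearest quantization onto this dynamic fixed point grid (our helper, not Ristretto's API):

```python
def to_dynamic_fixed(value, bit_width, frac_len):
    """Quantize to (-1)^s * 2^(-frac_len) * sum_i 2^i * x_i with B = bit_width."""
    step = 2.0 ** (-frac_len)                     # grid spacing
    max_mag = (2 ** (bit_width - 1) - 1) * step   # largest representable magnitude
    q = round(value / step) * step                # round-to-nearest
    return max(-max_mag, min(max_mag, q))         # saturate instead of wrapping

print(to_dynamic_fixed(0.8374, bit_width=8, frac_len=7))  # -> 0.8359375
```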
Different hardware accelerators for the deployment of neural networks have been proposed (Motamedi et al., 2016; Qiu et al., 2016; Han et al., 2016a). The first important step in accelerator design is the compression of the network in question. In the next section we present Ristretto, a tool which can condense any neural network in a fast and automated fashion.
# 4 RISTRETTO: APPROXIMATION FRAMEWORK IN CAFFE
From Caffe to Ristretto. According to Wikipedia, Ristretto is "a short shot of espresso coffee made with the normal amount of ground coffee but extracted with about half the amount of water". Similarly, our compressor removes the unnecessary parts of a CNN, while making sure the essence, the ability to predict image classes, is preserved. With its strong community and fast training for deep CNNs, Caffe (Jia et al., 2014) is an excellent framework to build on.
Ristretto takes a trained model as input, and automatically brews a condensed network version. The input and output of Ristretto are a network description file (prototxt) and the network parameters. Optionally, the quantized network can be fine-tuned with Ristretto. The resulting fixed point model in Caffe format can then be used for a hardware accelerator.
(Flow stages: weight analysis and activation analysis, each determining statistical parameters for effective quantization; bit-width reduction, determining the required bit-width for different layers; fine-tuning, retraining the fixed point network parameters; the accuracy is tested on the training set and the effect reviewed after each step.)
Figure 2: Network approximation flow with Ristretto.
Quantization flow. Ristretto's quantization flow has five stages (Figure 2) to compress a floating point network into fixed point. In the first step, the dynamic range of the weights is analyzed to find a good fixed point representation. For the quantization from floating point to fixed point, we use round-to-nearest. The second step runs several thousand images in the forward path. The generated layer activations are analyzed to generate statistical parameters. Ristretto uses enough bits in the integer part of fixed point numbers to avoid saturation of layer activations. Next, Ristretto performs a binary search to find the optimal number of bits for convolutional weights, fully connected weights, and layer outputs. In this step, a certain network part is quantized, while the rest remains in floating point. Since there are three network parts that should use independent bit-widths (weights of convolutional and fully connected layers as well as layer outputs), iteratively quantizing one network part allows us to find the optimal bit-width for each part. Once a good trade-off between small number representation and classification accuracy is found, the resulting fixed point network is retrained.
Fine-tuning In order to make up for the accuracy drop incurred by quantization, the fixed point network is fine-tuned in Ristretto. During this retraining procedure, the network learns how to classify images with fixed point parameters. Since the network weights can only have discrete values, the main challenge consists in the weight update. We adopt the idea of previous work (Courbariaux et al., 2015) which uses full precision shadow weights. Small weight updates Δw are applied to the full precision weights w, whereas the discrete weights w' are sampled from the full precision weights. The sampling during fine-tuning is done with stochastic rounding. This rounding scheme was successfully used by Gupta et al. (2015) for weight updates of 16-bit fixed point networks.
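Stochastic rounding itself takes only a few lines. The snippet below is our own illustration of the scheme, not Ristretto code, and assumes the fixed point grid is defined by a fractional length fl as in the dynamic fixed point representation above.

```python
import numpy as np

def stochastic_round(x, fl):
    """Round to the fixed point grid with step 2**(-fl), picking the upper
    grid point with probability equal to the fractional remainder. The
    rounding is therefore unbiased: E[stochastic_round(x, fl)] == x."""
    step = 2.0 ** (-fl)
    scaled = np.asarray(x, dtype=np.float64) / step
    lower = np.floor(scaled)
    go_up = np.random.rand(*scaled.shape) < (scaled - lower)
    return (lower + go_up) * step

w = np.array([0.30, -0.12, 0.07])
w_discrete = stochastic_round(w, fl=4)   # grid step 2**-4 = 0.0625
```

Because the sampled discrete weights equal the shadow weights in expectation, small gradient updates accumulated in full precision are not systematically lost to rounding.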
Ristretto uses the fine-tuning procedure illustrated in Figure 3. For each batch, the full precision weights are quantized to fixed point. During forward propagation, these discrete weights are used to compute the layer outputs y. Each layer l turns its input batch x_l into an output y_l according to its function f_l : (x_l, w') → y_l. Assuming the last layer computes the loss, we denote f as the overall CNN function.
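Putting the pieces together, one fine-tuning step might look like the sketch below (our illustration, reusing stochastic_round from the previous snippet). grad_fn is a placeholder for a full forward/backward pass through f, and a plain gradient step stands in for the actual parameter update rule.

```python
import numpy as np

def finetune_step(shadow_w, grad_fn, fl, lr=1e-4):
    """One retraining step with full precision shadow weights w. The
    discrete weights w' are resampled every batch; only w persists."""
    w_q = stochastic_round(shadow_w, fl)   # sample discrete weights w'
    grad = grad_fn(w_q)                    # fprop/bprop uses w', not w
    return shadow_w - lr * grad            # small update applied to w

# Dummy quadratic loss sum(w'**2), whose gradient is 2 * w'.
shadow = np.random.default_rng(0).normal(scale=0.05, size=1000)
for _ in range(100):
    shadow = finetune_step(shadow, grad_fn=lambda w_q: 2.0 * w_q, fl=8)
```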
[Figure 3 diagram: stochastic sampling of the discrete weights w' from the full precision parameters w during training (fprop, bprop, apply parameter update Δw), contrasted with round-nearest sampling when measuring accuracy on the validation data.]
Figure 3: Fine-tuning with shadow weights. The left side shows the training process with full-precision shadow weights. On the right side the fine-tuned network is benchmarked on the validation data set. Fixed point values are represented in orange.
The goal of back propagation is to compute the error gradient δf/δw with respect to each fixed point parameter. For parameter updates we use the Adam rule by Kingma & Ba (2015). As an important observation, we do not quantize layer outputs to fixed point during fine-tuning. We use floating point layer outputs instead, which enables Ristretto to analytically compute the error gradient with respect to each parameter. In contrast, the validation of the network is done with fixed point layer outputs.
To achieve the best fine-tuning results, we used a learning rate that is an order of magnitude lower than the last full precision training iteration. Since the choice of hyperparameters for retraining is crucial (Bergstra & Bengio, 2012), Ristretto relies on minimal human intervention in this step.
Fast fine-tuning with fixed point parameters Ristretto brews a condensed network with fixed point weights and fixed point layer activations. For simulation of the forward propagation in hardware, Ristretto uses full floating point for accumulation. This follows the approach of Gupta et al. (2015) and conforms with our description of the forward data path in hardware (Figure 2). During fine-tuning, the full precision weights need to be converted to fixed point for each batch, but after that all computation can be done in floating point (Figure 3). Therefore Ristretto can fully leverage optimized matrix-matrix multiplication routines for both forward and backward propagation. Thanks to its fast implementation on the GPU, a fixed point CaffeNet can be tested on the ILSVRC 2014 validation dataset (50k images) in less than 2 minutes (using one Tesla K-40 GPU).
# 5 RESULTS
In this section we present the results of approximating 32-bit floating point networks by condensed fixed point models. All classification accuracies were obtained running the respective network on the whole validation dataset. We present approximation results of Ristretto for five different networks. First, we consider LeNet (LeCun et al., 1998), which can classify handwritten digits (MNIST dataset). Second, the CIFAR-10 Full model provided by Caffe is used to classify images into 10 different classes. Third, we condense CaffeNet, which is the Caffe version of AlexNet and classifies images into the 1000 ImageNet categories. Fourth, we use the BVLC version of GoogLeNet (Szegedy et al., 2015) to classify images of the same data set. Finally, we approximate SqueezeNet (Iandola et al., 2016), a recently proposed architecture with the classification accuracy of AlexNet, but >50X fewer parameters.
Impact of dynamic fixed point We used Ristretto to quantize CaffeNet (AlexNet) into fixed point, and compare traditional fixed point with dynamic fixed point. To allow a simpler comparison, all layer outputs and network parameters share the same bit-width. Results show a good performance of static fixed point for as low as 18-bit (Figure 4). However, when reducing the bit-width further, the accuracy starts to drop significantly, while dynamic fixed point has a stable accuracy.
[Figure 4 plot: classification accuracy versus bit-width, comparing dynamic fixed point against static fixed point with integer lengths of 9, 10, and 11 bits.]
Figure 4: Impact of dynamic fixed point: The figure shows top-1 accuracy for CaffeNet on the ILSVRC 2014 validation dataset. Integer length refers to the number of bits assigned to the integer part of fixed point numbers.
We can conclude that dynamic fixed point performs significantly better for such a large network. With dynamic fixed point, we can adapt the number of bits allocated to the integer and fractional parts according to the dynamic range of different parts of the network. We will therefore concentrate on dynamic fixed point for the subsequent experiments.
Quantization of individual network parts In this section, we analyze the impact of quantization on different parts of a floating point CNN. Table 1 shows the classification accuracy when the layer outputs, the convolution kernels, or the parameters of fully connected layers are quantized to dynamic fixed point.
In all three nets, the convolution kernels and layer activations can be trimmed to 8-bit with an absolute accuracy change of only 0.3%. Fully connected layers are more affected by trimming to 8-bit weights; the absolute change is at most 0.9%. Interestingly, LeNet weights can be trimmed to as low as 2-bit, with an absolute accuracy change below 0.4%.
Table 1: Quantization results for different parts of three networks. Only one number category is cast to fixed point, and the remaining numbers are in floating point format.
Fixed point bit-width     16-bit   8-bit    4-bit    2-bit

LeNet, 32-bit floating point accuracy: 99.1%
Layer output              99.1%    99.1%    98.9%    85.9%
CONV parameters           99.1%    99.1%    99.1%    98.9%
FC parameters             99.1%    99.1%    98.9%    98.7%

Full CIFAR-10, 32-bit floating point accuracy: 81.7%
Layer output              81.6%    81.6%    79.6%    48.0%
CONV parameters           81.7%    81.4%    75.9%    19.1%
FC parameters             81.7%    80.8%    79.9%    77.5%

CaffeNet top-1, 32-bit floating point accuracy: 56.9%
Layer output              56.8%    56.7%    06.0%    00.1%
CONV parameters           56.9%    56.7%    00.1%    00.1%
FC parameters             56.9%    56.3%    00.1%    00.1%
Fine-tuning of all considered network parts Here we report the accuracy of five networks that were condensed and fine-tuned with Ristretto. All networks use dynamic fixed point parameters as well as dynamic fixed point layer outputs for convolutional and fully connected layers. LeNet performs well in 2/4-bit, while CIFAR-10 and
the three ImageNet CNNs can be trimmed to 8-bit (see Table 2). Surprisingly, these compressed networks still perform nearly as well as their floating point baselines. The relative accuracy drops of LeNet, CIFAR-10 and SqueezeNet are very small (<0.6%), whereas the approximation of the larger CaffeNet and GoogLeNet incurs a slightly higher cost (0.9% and 2.3%, respectively). We hope to further improve the fine-tuning results of these larger networks in the future.
The SqueezeNet architecture was developed by Iandola et al. (2016) with the goal of a small CNN that performs well on the ImageNet data set. Ristretto can make the already small network even smaller, so that its parameter size is less than 2 MB. This condensed network is well-suited for deployment in smart mobile systems.
All five 32-bit floating point networks can be approximated well in 8-bit and 4-bit fixed point. For a hardware implementation, this reduces the size of multiplication units by about one order of magnitude. Moreover, the required memory bandwidth is reduced by 4-8X. Finally, it helps to hold 4-8X more parameters in on-chip buffers. The code for reproducing the quantization and fine-tuning results is available1.
Table 2: Fine-tuned networks with dynamic fixed point parameters and outputs for convolutional and fully connected layers. The numbers in brackets indicate accuracy without fine-tuning.
Network            Layer outputs   CONV parameters   FC parameters   32-bit floating point baseline   Fixed point accuracy
LeNet (Exp 1)      4-bit           4-bit             4-bit           99.1%                            99.0% (98.7%)
LeNet (Exp 2)      4-bit           2-bit             2-bit           99.1%                            98.8% (98.0%)
Full CIFAR-10      8-bit           8-bit             8-bit           81.7%                            81.4% (80.6%)
SqueezeNet top-1   8-bit           8-bit             8-bit           57.7%                            57.1% (55.2%)
CaffeNet top-1     8-bit           8-bit             8-bit           56.9%                            56.0% (55.8%)
GoogLeNet top-1    8-bit           8-bit             8-bit           68.9%                            66.6% (66.1%)
A previous work by Courbariaux et al. (2014) concentrates on training with limited numerical precision. They can train a dynamic fixed point network on the MNIST data set using just 7 bits to represent activations and weights. Ristretto doesn't reduce the resource requirements for training, but concentrates on inference instead. Ristretto can produce a LeNet network with 2-bit parameters and 4-bit activations. Our approach is different in that we train with high numerical precision, then quantize to fixed point, and finally fine-tune the fixed point network.
Other works (Courbariaux et al., 2016; Rastegari et al., 2016) can reduce the bit-width even further, to as low as 1-bit, using more advanced number encodings than dynamic fixed point. Ristretto's strength lies in its capability to approximate a large number of existing floating point models on challenging data sets. For the five considered networks, Ristretto can quantize activations and weights to 8-bit or lower, at an accuracy drop below 2.3% compared to the floating point baseline.
While more sophisticated data compression schemes could be used to achieve higher network size reduction, our approach is very hardware friendly and imposes no additional overhead such as decompression.
# 6 CONCLUSION AND FUTURE WORK
In this work we presented Ristretto, a Caffe-based approximation framework for deep convolutional neural networks. The framework reduces the memory requirements, area for processing elements, and overall power consumption for hardware accelerators. A large net like CaffeNet can be quantized to 8-bit for both weights and layer outputs while keeping the network's accuracy change below 1% compared to its 32-bit floating point counterpart. Ristretto is both fast and automated, and we release the code as an open source project.
Ristretto is in its first development stage. We consider adding new features in the future: 1. Shared weights: fetching codebook indices from off-chip memory, instead of real values (Han et al.,
1 https://github.com/pmgysel/caffe
2016b). 2. Network pruning as shown by the same authors. 3. Network binarization as shown by Courbariaux et al. (2016) and Rastegari et al. (2016). These additional features will help to reduce the bit-width even further, and to reduce the computational complexity of trimmed networks.
# REFERENCES
Bergstra, J. and Bengio, Y. Random Search for Hyper-Parameter Optimization. The Journal of Machine Learning Research, 13(1):281–305, 2012.
Courbariaux, M., David, J.-P., and Bengio, Y. Training Deep Neural Networks with Low Precision Multiplications. arXiv preprint arXiv:1412.7024, 2014.
Courbariaux, M., Bengio, Y., and David, J.-P. BinaryConnect: Training Deep Neural Networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3105–3113, 2015.
Courbariaux, M., Hubara, I., Soudry, D., El-Yaniv, R., and Bengio, Y. Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
Gupta, S., Agrawal, A., Gopalakrishnan, K., and Narayanan, P. Deep Learning with Limited Numerical Precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1737–1746, 2015.

Hammerstrom, D. A VLSI Architecture for High-Performance, Low-Cost, On-chip Learning. In IJCNN International Joint Conference on Neural Networks, 1990, pp. 537–544. IEEE, 1990.

Han, S., Liu, X., Mao, H., Pu, J., Pedram, A., Horowitz, M. A., and Dally, W. J. EIE: Efficient Inference Engine on Compressed Deep Neural Network. arXiv preprint arXiv:1602.01528, 2016a.
Han, S., Mao, H., and Dally, W. J. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In International Conference on Learning Representations, 2016b.
He, K., Zhang, X., Ren, S., and Sun, J. Deep Residual Learning for Image Recognition. arXiv preprint arXiv:1512.03385, 2015.
Iandola, F. N., Moskewicz, M. W., Ashraf, K., Han, S., Dally, W. J., and Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv:1602.07360, 2016.
Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., and Darrell, T. Caffe: Convolutional Architecture for Fast Feature Embedding. In Proceedings of the ACM International Conference on Multimedia, pp. 675–678. ACM, 2014.
Kingma, D. and Ba, J. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations, 2015.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Motamedi, M., Gysel, P., Akella, V., and Ghiasi, S. Design Space Exploration of FPGA-Based Deep Convolutional Neural Networks. In 2016 21st Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 575–580. IEEE, 2016.

Qiu, J., Wang, J., Yao, S., Guo, K., Li, B., Zhou, E., Yu, J., Tang, T., Xu, N., Song, S., Wang, Y., and Yang, H. Going Deeper with Embedded FPGA Platform for Convolutional Neural Network. In Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pp. 26–35, 2016.
Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. arXiv preprint arXiv:1603.05279, 2016.

Simonyan, K. and Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations, 2015.

Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.

Williamson, D. Dynamically scaled fixed point arithmetic. In IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, 1991, pp. 315–318. IEEE, 1991.
| {
"id": "1602.07360"
} |
1604.00289 | Building Machines That Learn and Think Like People | Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models. | http://arxiv.org/pdf/1604.00289 | Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman | cs.AI, cs.CV, cs.LG, cs.NE, stat.ML | In press at Behavioral and Brain Sciences. Open call for commentary
proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentary | null | cs.AI | 20160401 | 20161102 | arXiv:1604.00289v3 [cs.AI] 2 Nov 2016
In press at Behavioral and Brain Sciences.
# Building Machines That Learn and Think Like People
Brenden M. Lake,1 Tomer D. Ullman,2,4 Joshua B. Tenenbaum,2,4 and Samuel J. Gershman3,4 1Center for Data Science, New York University 2Department of Brain and Cognitive Sciences, MIT 3Department of Psychology and Center for Brain Science, Harvard University 4Center for Brains Minds and Machines
# Abstract
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
# 1 Introduction
Artificial intelligence (AI) has been a story of booms and busts, yet by any traditional measure of success, the last few years have been marked by exceptional progress. Much of this progress has come from recent advances in "deep learning," characterized by learning large neural-network-style models with multiple layers of representation. These models have achieved remarkable gains in many domains spanning object recognition, speech recognition, and control (LeCun, Bengio, & Hinton, 2015; Schmidhuber, 2015). In object recognition, Krizhevsky, Sutskever, and Hinton (2012) trained a deep convolutional neural network (convnets; LeCun et al., 1989) that nearly halved the error rate of the previous state-of-the-art on the most challenging benchmark to date. In the years since, convnets continue to dominate, recently approaching human-level performance on some object recognition benchmarks (He, Zhang, Ren, & Sun, 2015; Russakovsky et al., 2015; Szegedy et al., 2014). In automatic speech recognition, Hidden Markov Models (HMMs) have been the leading approach since the late 1980s (Juang & Rabiner, 1990), yet this framework has been chipped away piece by piece and replaced with deep learning components (Hinton et al.,
2012). Now, the leading approaches to speech recognition are fully neural network systems (Graves, Mohamed, & Hinton, 2013; Weng, Yu, Watanabe, & Juang, 2014). Ideas from deep learning have also been applied to learning complex control problems. V. Mnih et al. (2015) combined ideas from deep learning and reinforcement learning to make a "deep reinforcement learning" algorithm that learns to play large classes of simple video games from just frames of pixels and the game score, achieving human or superhuman level performance on many of these games (see also Guo, Singh, Lee, Lewis, & Wang, 2014; Schaul, Quan, Antonoglou, & Silver, 2016; Stadie, Levine, & Abbeel, 2016).
These accomplishments have helped neural networks regain their status as a leading paradigm in machine learning, much as they were in the late 1980s and early 1990s. The recent success of neural networks has captured attention beyond academia. In industry, companies such as Google and Facebook have active research divisions exploring these technologies, and object and speech recognition systems based on deep learning have been deployed in core products on smart phones and the web. The media has also covered many of the recent achievements of neural networks, often expressing the view that neural networks have achieved this recent success by virtue of their brain-like computation and thus their ability to emulate human learning and human cognition.
In this article, we view this excitement as an opportunity to examine what it means for a machine to learn or think like a person. We first review some of the criteria previously offered by cognitive scientists, developmental psychologists, and AI researchers. Second, we articulate what we view as the essential ingredients for building such a machine that learns or thinks like a person, synthesizing theoretical ideas and experimental data from research in cognitive science. Third, we consider contemporary AI (and deep learning in particular) in light of these ingredients, finding that deep learning models have yet to incorporate many of them and so may be solving some problems in different ways than people do. We end by discussing what we view as the most plausible paths towards building machines that learn and think like people. This includes prospects for integrating deep learning with the core cognitive ingredients we identify, inspired in part by recent work fusing neural networks with lower-level building blocks from classic psychology and computer science (attention, working memory, stacks, queues) that have traditionally been seen as incompatible.
Beyond the specific ingredients in our proposal, we draw a broader distinction between two different computational approaches to intelligence. The statistical pattern recognition approach treats prediction as primary, usually in the context of a specific classification, regression, or control task. In this view, learning is about discovering features that have high value states in common (a shared label in a classification setting or a shared value in a reinforcement learning setting) across a large, diverse set of training data. The alternative approach treats models of the world as primary, where learning is the process of model-building. Cognition is about using these models to understand the world, to explain what we see, to imagine what could have happened that didn't, or what could be true that isn't, and then planning actions to make it so. The difference between pattern recognition and model-building, between prediction and explanation, is central to our view of human intelligence. Just as scientists seek to explain nature, not simply predict it, we see human thought as fundamentally a model-building activity. We elaborate this key point with numerous examples below. We also discuss how pattern recognition, even if it is not the core of intelligence, can nonetheless support model-building, through "model-free" algorithms that learn through experience how to make essential inferences more computationally efficient.
Before proceeding, we provide a few caveats about the goals of this article and a brief overview of the key ideas.
# 1.1 What this article is not
For nearly as long as there have been neural networks, there have been critiques of neural networks (Crick, 1989; Fodor & Pylyshyn, 1988; Marcus, 1998, 2001; Minsky & Papert, 1969; Pinker & Prince, 1988). While we are critical of neural networks in this article, our goal is to build on their successes rather than dwell on their shortcomings. We see a role for neural networks in developing more human-like learning machines: They have been applied in compelling ways to many types of machine learning problems, demonstrating the power of gradient-based learning and deep hierarchies of latent variables. Neural networks also have a rich history as computational models of cognition (McClelland, Rumelhart, & the PDP Research Group, 1986; Rumelhart, McClelland, & the PDP Research Group, 1986), a history we describe in more detail in the next section. At a more fundamental level, any computational model of learning must ultimately be grounded in the brain's biological neural networks.
We also believe that future generations of neural networks will look very different from the current state-of-the-art. They may be endowed with intuitive physics, theory of mind, causal reasoning, and other capacities we describe in the sections that follow. More structure and inductive biases could be built into the networks or learned from previous experience with related tasks, leading to more human-like patterns of learning and development. Networks may learn to effectively search for and discover new mental models or intuitive theories, and these improved models will, in turn, enable subsequent learning, allowing systems that learn-to-learn, using previous knowledge to make richer inferences from very small amounts of training data.
It is also important to draw a distinction between AI that purports to emulate or draw inspiration from aspects of human cognition, and AI that does not. This article focuses on the former. The latter is a perfectly reasonable and useful approach to developing AI algorithms, avoiding cognitive or neural inspiration as well as claims of cognitive or neural plausibility. Indeed, this is how many researchers have proceeded, and this article has little pertinence to work conducted under this research strategy.1 On the other hand, we believe that reverse engineering human intelligence can usefully inform AI and machine learning (and has already done so), especially for the types of domains and tasks that people excel at. Despite recent computational achievements, people are better than machines at solving a range of difficult computational problems, including concept learning, scene understanding, language acquisition, language understanding, speech recognition, etc. Other human cognitive abilities remain difficult to understand computationally, including creativity, common sense, and general purpose reasoning. As long as natural intelligence remains the best example of intelligence, we believe that the project of reverse engineering the human solutions to difficult computational problems will continue to inform and advance AI.
Finally, while we focus on neural network approaches to AI, we do not wish to give the impression that these are the only contributors to recent advances in AI.
1In their influential textbook, Russell and Norvig (2003) state that "The quest for 'artificial flight' succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics." (p. 3).
# Table 1: Glossary
Neural network: A network of simple neuron-like processing units that collectively perform complex computations. Neural networks are often organized into layers, including an input layer that presents the data (e.g., an image), hidden layers that transform the data into intermediate representations, and an output layer that produces a response (e.g., a label or an action). Recurrent connections are also popular when processing sequential data.

Deep learning: A neural network with at least one hidden layer (some networks have dozens). Most state-of-the-art deep networks are trained using the backpropagation algorithm to gradually adjust their connection strengths.

Backpropagation: Gradient descent applied to training a deep neural network. The gradient of the objective function (e.g., classification error or log-likelihood) with respect to the model parameters (e.g., connection weights) is used to make a series of small adjustments to the parameters in a direction that improves the objective function.

Convolutional network (convnet): A neural network that uses trainable filters instead of (or in addition to) fully-connected layers with independent weights. The same filter is applied at many locations across an image (or across a time series), leading to neural networks that are effectively larger but with local connectivity and fewer free parameters.

Model-free and model-based reinforcement learning: Model-free algorithms directly learn a control policy without explicitly building a model of the environment (reward and state transition distributions). Model-based algorithms learn a model of the environment and use it to select actions by planning.

Deep Q-learning: A model-free reinforcement learning algorithm used to train deep neural networks on control tasks such as playing Atari games. A network is trained to approximate the optimal action-value function Q(s, a), which is the expected long-term cumulative reward of taking action a in state s and then optimally selecting future actions.

Generative model: A model that specifies a probability distribution over the data. For instance, in a classification task with examples X and class labels y, a generative model specifies the distribution of data given labels P(X|y), as well as a prior on labels P(y), which can be used for sampling new examples or for classification by using Bayes' rule to compute P(y|X). A discriminative model specifies P(y|X) directly, possibly by using a neural network to predict the label for a given data point, and cannot directly be used to sample new examples or to compute other queries regarding the data. We will generally be concerned with directed generative models (such as Bayesian networks or probabilistic programs) which can be given a causal interpretation, although undirected (non-causal) generative models (such as Boltzmann machines) are also possible.

Program induction: Constructing a program that computes some desired function, where that function is typically specified by training data consisting of example input-output pairs. In the case of probabilistic programs, which specify candidate generative models for data, an abstract description language is used to define a set of allowable programs and learning is a search for the programs likely to have generated the data.
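To make the reinforcement learning entries above concrete, here is a minimal tabular Q-learning sketch of our own (not from the paper); deep Q-learning replaces the lookup table with a deep network that maps raw pixels to the action values Q(s, a).

```python
import random
from collections import defaultdict

Q = defaultdict(lambda: defaultdict(float))   # Q[s][a]: estimated value
alpha, gamma, epsilon = 0.1, 0.99, 0.1        # step size, discount, exploration

def q_update(s, a, r, s_next, actions):
    """Model-free update: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = r + gamma * max(Q[s_next][a2] for a2 in actions)
    Q[s][a] += alpha * (target - Q[s][a])

def epsilon_greedy(s, actions):
    """Behavior policy: mostly exploit the current value estimates."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[s][a])
```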
On the contrary, some of the most exciting recent progress has been in new forms of probabilistic machine learning (Ghahramani, 2015). For example, researchers have developed automated statistical reasoning techniques (Lloyd, Duvenaud, Grosse, Tenenbaum, & Ghahramani, 2014), automated techniques for model building and selection (Grosse, Salakhutdinov, Freeman, & Tenenbaum, 2012), and probabilistic programming languages (e.g., Gelman, Lee, & Guo, 2015; Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2008; Mansinghka, Selsam, & Perov, 2014). We believe that these approaches will play important roles in future AI systems, and they are at least as compatible with the ideas from cognitive science we discuss here, but a full discussion of those connections is beyond the scope of the current article.
# 1.2 Overview of the key ideas
The central goal of this paper is to propose a set of core ingredients for building more human-like learning and thinking machines. We will elaborate on each of these ingredients and topics in Section 4, but here we briefly overview the key ideas.
The first set of ingredients focuses on developmental "start-up software," or cognitive capabilities present early in development. There are several reasons for this focus on development. If an ingredient is present early in development, it is certainly active and available well before a child or adult would attempt to learn the types of tasks discussed in this paper. This is true regardless of whether the early-present ingredient is itself learned from experience or innately present. Also, the earlier an ingredient is present, the more likely it is to be foundational to later development and learning.
We focus on two pieces of developmental start-up software (see Wellman & Gelman, 1992, for a review of both). First is intuitive physics (Section 4.1.1): Infants have primitive object concepts that allow them to track objects over time and allow them to discount physically implausible trajectories. For example, infants know that objects will persist over time and that they are solid and coherent. Equipped with these general principles, people can learn more quickly and make more accurate predictions. While a task may be new, physics still works the same way. A second type of software present in early development is intuitive psychology (Section 4.1.2): Infants understand that other people have mental states like goals and beliefs, and this understanding strongly constrains their learning and predictions. A child watching an expert play a new video game can infer that the avatar has agency and is trying to seek reward while avoiding punishment. This inference immediately constrains other inferences, allowing the child to infer what objects are good and what objects are bad. These types of inferences further accelerate the learning of new tasks.
Our second set of ingredients focuses on learning. While there are many perspectives on learning, we see model building as the hallmark of human-level learning, or explaining observed data through the construction of causal models of the world (Section 4.2.2). Under this perspective, the early-present capacities for intuitive physics and psychology are also causal models of the world. A primary job of learning is to extend and enrich these models, and to build analogous causally structured theories of other domains.
Compared to state-of-the-art algorithms in machine learning, human learning is distinguished by its
richness and its efficiency. Children come with the ability and the desire to uncover the underlying causes of sparsely observed events and to use that knowledge to go far beyond the paucity of the data. It might seem paradoxical that people are capable of learning these richly structured models from very limited amounts of experience. We suggest that compositionality and learning-to-learn are ingredients that make this type of rapid model learning possible (Sections 4.2.1 and 4.2.3, respectively).
A final set of ingredients concerns how the rich models our minds build are put into action, in real time (Section 4.3). It is remarkable how fast we are to perceive and to act. People can comprehend a novel scene in a fraction of a second, and a novel utterance in little more than the time it takes to say it and hear it. An important motivation for using neural networks in machine vision and speech systems is to respond as quickly as the brain does. Although neural networks are usually aiming at pattern recognition rather than model-building, we will discuss ways in which these "model-free" methods can accelerate slow model-based inferences in perception and cognition (Section 4.3.1). By learning to recognize patterns in these inferences, the outputs of inference can be predicted without having to go through costly intermediate steps. Integrating neural networks that "learn to do inference" with rich model-building learning mechanisms offers a promising way to explain how human minds can understand the world so well, so quickly.
We will also discuss the integration of model-based and model-free methods in reinforcement learning (Section 4.3.2), an area that has seen rapid recent progress. Once a causal model of a task has been learned, humans can use the model to plan action sequences that maximize future reward; when rewards are used as the metric for success in model-building, this is known as model-based reinforcement learning. However, planning in complex models is cumbersome and slow, making the speed-accuracy trade-off unfavorable for real-time control. By contrast, model-free reinforcement learning algorithms, such as current instantiations of deep reinforcement learning, support fast control but at the cost of inflexibility and possibly accuracy. We will review evidence that humans combine model-based and model-free learning algorithms both competitively and cooperatively, and that these interactions are supervised by metacognitive processes. The sophistication of human-like reinforcement learning has yet to be realized in AI systems, but this is an area where crosstalk between cognitive and engineering approaches is especially promising.
# 2 Cognitive and neural inspiration in artificial intelligence
The questions of whether and how AI should relate to human cognitive psychology are older than the terms "artificial intelligence" and "cognitive psychology." Alan Turing suspected that it is easier to build and educate a child-machine than try to fully capture adult human cognition (Turing, 1950). Turing pictured the child's mind as a notebook with "rather little mechanism and lots of blank sheets," and the mind of a child-machine as filling in the notebook by responding to rewards and punishments, similar to reinforcement learning. This view on representation and learning echoes behaviorism, a dominant psychological tradition in Turing's time. It also echoes the strong empiricism of modern connectionist models, the idea that we can learn almost everything we know from the statistical patterns of sensory inputs.
Cognitive science repudiated the over-simplified behaviorist view and came to play a central role
in early AI research (Boden, 2006). Newell and Simon (1961) developed their "General Problem Solver" as both an AI algorithm and a model of human problem solving, which they subsequently tested experimentally (Newell & Simon, 1972). AI pioneers in other areas of research explicitly referenced human cognition, and even published papers in cognitive psychology journals (e.g., Bobrow & Winograd, 1977; Hayes-Roth & Hayes-Roth, 1979; Winograd, 1972). For example, Schank (1972), writing in the journal Cognitive Psychology, declared that
We hope to be able to build a program that can learn, as a child does, how to do what we have described in this paper instead of being spoon-fed the tremendous information necessary.
A similar sentiment was expressed by Minsky (1974):
I draw no boundary between a theory of human thinking and a scheme for making an intelligent machine; no purpose would be served by separating these today since neither domain has theories good enough to explain, or to produce, enough mental capacity.
Much of this research assumed that human knowledge representation is symbolic and that reasoning, language, planning and vision could be understood in terms of symbolic operations. Parallel to these developments, a radically different approach was being explored, based on neuron-like "sub-symbolic" computations (e.g., Fukushima, 1980; Grossberg, 1976; Rosenblatt, 1958). The representations and algorithms used by this approach were more directly inspired by neuroscience than by cognitive psychology, although ultimately it would flower into an influential school of thought about the nature of cognition: parallel distributed processing (PDP) (McClelland et al., 1986; Rumelhart, McClelland, & the PDP Research Group, 1986). As its name suggests, PDP emphasizes parallel computation by combining simple units to collectively implement sophisticated computations. The knowledge learned by these neural networks is thus distributed across the collection of units rather than localized as in most symbolic data structures. The resurgence of recent interest in neural networks, more commonly referred to as "deep learning," shares the same representational commitments and often even the same learning algorithms as the earlier PDP models. "Deep" refers to the fact that more powerful models can be built by composing many layers of representation (see LeCun et al., 2015; Schmidhuber, 2015, for recent reviews), still very much in the PDP style while utilizing recent advances in hardware and computing capabilities, as well as massive datasets, to learn deeper models.
It is also important to clarify that the PDP perspective is compatible with "model building" in addition to "pattern recognition." Some of the original work done under the banner of PDP (Rumelhart, McClelland, & the PDP Research Group, 1986) is closer to model building than pattern recognition, whereas the recent large-scale discriminative deep learning systems more purely exemplify pattern recognition (see Bottou, 2014, for a related discussion). But, as discussed, there is also a question of the nature of the learned representations within the model (their form, compositionality, and transferability) and the developmental start-up software that was used to get there. We focus on these issues in this paper.
Neural network models and the PDP approach offer a view of the mind (and intelligence more broadly) that is sub-symbolic and often populated with minimal constraints and inductive biases
to guide learning. Proponents of this approach maintain that many classic types of structured knowledge, such as graphs, grammars, rules, objects, structural descriptions, programs, etc. can be useful yet misleading metaphors for characterizing thought. These structures are more epiphenomenal than real, emergent properties of more fundamental sub-symbolic cognitive processes (McClelland et al., 2010). Compared to other paradigms for studying cognition, this position on the nature of representation is often accompanied by a relatively "blank slate" vision of initial knowledge and representation, much like Turing's blank notebook.
When attempting to understand a particular cognitive ability or phenomenon within this paradigm, a common scientific strategy is to train a relatively generic neural network to perform the task, adding additional ingredients only when necessary. This approach has shown that neural networks can behave as if they learned explicitly structured knowledge, such as a rule for producing the past tense of words (Rumelhart & McClelland, 1986), rules for solving simple balance-beam physics problems (McClelland, 1988), or a tree to represent types of living things (plants and animals) and their distribution of properties (Rogers & McClelland, 2004). Training large-scale relatively generic networks is also the best current approach for object recognition (He et al., 2015; Krizhevsky et al., 2012; Russakovsky et al., 2015; Szegedy et al., 2014), where the high-level feature representations of these convolutional nets have also been used to predict patterns of neural response in human and macaque IT cortex (Khaligh-Razavi & Kriegeskorte, 2014; Kriegeskorte, 2015; Yamins et al., 2014) as well as human typicality ratings (Lake, Zaremba, Fergus, & Gureckis, 2015) and similarity ratings (Peterson, Abbott, & Griffiths, 2016) for images of common objects. Moreover, researchers have trained generic networks to perform structured and even strategic tasks, such as the recent work on using a Deep Q-learning Network (DQN) to play simple video games (V. Mnih et al., 2015). If neural networks have such broad application in machine vision, language, and control, and if they can be trained to emulate the rule-like and structured behaviors that characterize cognition, do we need more to develop truly human-like learning and thinking machines? How far can relatively generic neural networks bring us towards this goal?
# 3 Challenges for building more human-like machines
While cognitive science has not yet converged on a single account of the mind or intelligence, the claim that a mind is a collection of general purpose neural networks with few initial constraints is rather extreme in contemporary cognitive science. A different picture has emerged that highlights the importance of early inductive biases, including core concepts such as number, space, agency and objects, as well as powerful learning algorithms that rely on prior knowledge to extract knowledge from small amounts of training data. This knowledge is often richly organized and theory-like in structure, capable of the graded inferences and productive capacities characteristic of human thought.
Here we present two challenge problems for machine learning and AI: learning simple visual concepts (Lake, Salakhutdinov, & Tenenbaum, 2015) and learning to play the Atari game Frostbite (V. Mnih et al., 2015). We also use the problems as running examples to illustrate the importance of core cognitive ingredients in the sections that follow.
# 3.1 The Characters Challenge
The first challenge concerns handwritten character recognition, a classic problem for comparing different types of machine learning algorithms. Hofstadter (1985) argued that the problem of recognizing characters in all the ways people do, both handwritten and printed, contains most if not all of the fundamental challenges of AI. Whether or not this statement is right, it highlights the surprising complexity that underlies even "simple" human-level concepts like letters. More practically, handwritten character recognition is a real problem that children and adults must learn to solve, with practical applications ranging from reading envelope addresses to reading checks in an ATM machine. Handwritten character recognition is also simpler than more general forms of object recognition: the object of interest is two-dimensional, separated from the background, and usually unoccluded. Compared to how people learn and see other types of objects, it seems possible, in the near term, to build algorithms that can see most of the structure in characters that people can see.
The standard benchmark is the MNIST data set for digit recognition, which involves classifying images of digits into the categories '0' through '9' (LeCun, Bottou, Bengio, & Haffner, 1998). The training set provides 6,000 images per class for a total of 60,000 training images. With a large amount of training data available, many algorithms achieve respectable performance, including K-nearest neighbors (5% test error), support vector machines (about 1% test error), and convolutional neural networks (below 1% test error; LeCun et al., 1998). The best results achieved using deep convolutional nets are very close to human-level performance at an error rate of 0.2% (Ciresan, Meier, & Schmidhuber, 2012). Similarly, recent results applying convolutional nets to the far more challenging ImageNet object recognition benchmark have shown that human-level performance is within reach on that data set as well (Russakovsky et al., 2015).
While humans and neural networks may perform equally well on the MNIST digit recognition task and other large-scale image classification tasks, it does not mean that they learn and think in the same way. There are at least two important differences: people learn from fewer examples and they learn richer representations, a comparison true for both learning handwritten characters as well as learning more general classes of objects (Figure 1). People can learn to recognize a new handwritten character from a single example (Figure 1A-i), allowing them to discriminate between novel instances drawn by other people and similar looking non-instances (Lake, Salakhutdinov, & Tenenbaum, 2015; E. G. Miller, Matsakis, & Viola, 2000). Moreover, people learn more than how to do pattern recognition: they learn a concept, that is, a model of the class that allows their acquired knowledge to be flexibly applied in new ways. In addition to recognizing new examples, people can also generate new examples (Figure 1A-ii), parse a character into its most important parts and relations (Figure 1A-iii; Lake, Salakhutdinov, and Tenenbaum (2012)), and generate new characters given a small set of related characters (Figure 1A-iv). These additional abilities come for free along with the acquisition of the underlying concept.
Even for these simple visual concepts, people are still better and more sophisticated learners than the best algorithms for character recognition. People learn a lot more from a lot less, and capturing these human-level learning abilities in machines is the Characters Challenge. We recently reported progress on this challenge using probabilistic program induction (Lake, Salakhutdinov, & Tenenbaum, 2015), yet aspects of the full human cognitive ability remain out of reach. While both people and the model represent characters as a sequence of pen strokes and relations, people have
Figure 1: The characters challenge: human-level learning of novel handwritten characters (A), with the same abilities also illustrated for a novel two-wheeled vehicle (B). A single example of a new visual concept (red box) can be enough information to support the (i) classification of new examples, (ii) generation of new examples, (iii) parsing an object into parts and relations, and (iv) generation of new concepts from related concepts. Adapted from Lake, Salakhutdinov, and Tenenbaum (2015).
a far richer repertoire of structural relations between strokes. Furthermore, people can efficiently integrate across multiple examples of a character to infer which have optional elements, such as the horizontal cross-bar in '7's, combining different variants of the same character into a single coherent representation. Additional progress may come by combining deep learning and probabilistic program induction to tackle even richer versions of the Characters Challenge.
# 3.2 The Frostbite Challenge
The second challenge concerns the Atari game Frostbite (Figure 2), which was one of the control problems tackled by the DQN of V. Mnih et al. (2015). The DQN was a significant advance in reinforcement learning, showing that a single algorithm can learn to play a wide variety of complex tasks. The network was trained to play 49 classic Atari games, proposed as a test domain for reinforcement learning (Bellemare, Naddaf, Veness, & Bowling, 2013), impressively achieving human-level performance or above on 29 of the games. It did, however, have particular trouble with Frostbite and other games that required temporally extended planning strategies.
In Frostbite, players control an agent (Frostbite Bailey) tasked with constructing an igloo within a time limit. The igloo is built piece-by-piece as the agent jumps on ice floes in water (Figure 2A-C). The challenge is that the ice floes are in constant motion (moving either left or right), and ice floes only contribute to the construction of the igloo if they are visited in an active state (white rather than blue). The agent may also earn extra points by gathering fish while avoiding a number of fatal hazards (falling in the water, snow geese, polar bears, etc.). Success in this game requires a
Figure 2: Screenshots of Frostbite, a 1983 video game designed for the Atari game console. A) The start of a level in Frostbite. The agent must construct an igloo by hopping between ice floes and avoiding obstacles such as birds. The floes are in constant motion (either left or right), making multi-step planning essential to success. B) The agent receives pieces of the igloo (top right) by jumping on the active ice floes (white), which then deactivates them (blue). C) At the end of a level, the agent must safely reach the completed igloo. D) Later levels include additional rewards (fish) and deadly obstacles (crabs, clams, and bears).
temporally extended plan to ensure the agent can accomplish a sub-goal (such as reaching an ice floe) and then safely proceed to the next sub-goal. Ultimately, once all of the pieces of the igloo are in place, the agent must proceed to the igloo and thus complete the level before time expires (Figure 2C).
The DQN learns to play Frostbite and other Atari games by combining a powerful pattern recognizer (a deep convolutional neural network) and a simple model-free reinforcement learning algorithm (Q-learning; Watkins & Dayan, 1992). These components allow the network to map sensory inputs (frames of pixels) onto a policy over a small set of actions, and both the mapping and the policy are trained to optimize long-term cumulative reward (the game score). The network embodies the strongly empiricist approach characteristic of most connectionist models: very little is built into the network apart from the assumptions about image structure inherent in convolutional networks, so the network has to essentially learn a visual and conceptual system from scratch for each new game. In V. Mnih et al. (2015), the network architecture and hyper-parameters were fixed, but
the network was trained anew for each game, meaning the visual system and the policy are highly specialized for the games it was trained on. More recent work has shown how these game-specific networks can share visual features (Rusu et al., 2016) or be used to train a multi-task network (Parisotto, Ba, & Salakhutdinov, 2016), achieving modest benefits of transfer when learning to play new games.
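As a schematic of the training setup just described (our illustration, not the authors' code), the convnet maps a game frame to one value per action, and Q-learning regresses Q(s, a) toward the observed reward plus the discounted value of the best next action; q_net, the target network, and the replay buffer below are placeholders.

```python
import random

def dqn_batch_loss(q_net, target_net, replay_buffer, batch_size, gamma=0.99):
    """Sample past transitions from experience replay and accumulate the
    squared TD error; q_net(s) is assumed to return a dict action -> value."""
    loss = 0.0
    for s, a, r, s_next, done in random.sample(replay_buffer, batch_size):
        target = r if done else r + gamma * max(target_net(s_next).values())
        loss += (q_net(s)[a] - target) ** 2
    return loss / batch_size

# Toy stand-ins: a constant "network" and a replay buffer of one transition.
toy_net = lambda s: {"left": 0.0, "right": 0.1}
buffer = [("s0", "right", 1.0, "s1", False)] * 32
loss = dqn_batch_loss(toy_net, toy_net, buffer, batch_size=8)
```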
Although it is interesting that the DQN learns to play games at human-level performance while assuming very little prior knowledge, the DQN may be learning to play Frostbite and other games in a very different way than people do. One way to examine the differences is by considering the amount of experience required for learning. In V. Mnih et al. (2015), the DQN was compared with a professional gamer who received approximately two hours of practice on each of the 49 Atari games (although he or she likely had prior experience with some of the games). The DQN was trained on 200 million frames from each of the games, which equates to approximately 924 hours of game time (about 38 days), or almost 500 times as much experience as the human received.2 Additionally, the DQN incorporates experience replay, where each of these frames is replayed approximately 8 more times on average over the course of learning.
With the full 924 hours of unique experience and additional replay, the DQN achieved less than 10% of human-level performance during a controlled test session (see DQN in Fig. 3). More recent variants of the DQN have demonstrated superior performance (Schaul et al., 2016; Stadie et al., 2016; van Hasselt, Guez, & Silver, 2016; Wang et al., 2016), reaching 83% of the professional gamer's score by incorporating smarter experience replay (Schaul et al., 2016) and 96% by using smarter replay and more efficient parameter sharing (Wang et al., 2016) (see DQN+ and DQN++ in Fig. 3).3 But they require a lot of experience to reach this level: the learning curve provided in Schaul et al. (2016) shows performance is around 46% after 231 hours, 19% after 116 hours, and below 3.5% after just 2 hours (which is close to random play, approximately 1.5%). The differences between the human and machine learning curves suggest that they may be learning different kinds of knowledge, using different learning mechanisms, or both.
The contrast becomes even more dramatic if we look at the very earliest stages of learning. While both the original DQN and these more recent variants require multiple hours of experience to perform reliably better than random play, even non-professional humans can grasp the basics of the game after just a few minutes of play. We speculate that people do this by inferring a general schema to describe the goals of the game and the object types and their interactions, using the kinds of intuitive theories, model-building abilities and model-based planning mechanisms we describe below. While novice players may make some mistakes, such as inferring that fish are harmful rather than helpful, they can learn to play better than chance within a few minutes. If humans are able to first watch an expert playing for a few minutes, they can learn even faster. In informal experiments with two of the authors playing Frostbite on a Javascript emulator (http://www.virtualatari.org/soft.php?soft=Frostbite), after watching videos of expert play on YouTube for just two minutes, we found that we were able to reach scores comparable to or
2 The time required to train the DQN (compute time) is not the same as the game (experience) time. Compute time can be longer.
3 The reported scores use the "human starts" measure of test performance, designed to prevent networks from just memorizing long sequences of successful actions from a single starting point. Both faster learning (Blundell et al., 2016) and higher scores (Wang et al., 2016) have been reported using other metrics, but it is unclear how well the networks are generalizing with these alternative metrics.
[Figure 3 plot: Frostbite score (y-axis, 0 to 5000) as a function of the amount of game experience in hours (x-axis, 2 to 924).]
Figure 3: Comparing learning speed for people versus Deep Q-Networks (DQNs). Test performance on the Atari 2600 game "Frostbite" is plotted as a function of game experience (in hours at a frame rate of 60 fps), which does not include additional experience replay. Learning curves (if available) and scores are shown from different networks: DQN (V. Mnih et al., 2015), DQN+ (Schaul et al., 2016), and DQN++ (Wang et al., 2016). Random play achieves a score of 66.4. The "human starts" performance measure is used (van Hasselt et al., 2016).
better than the human expert reported in V. Mnih et al. (2015) after at most 15-20 minutes of total practice.4
There are other behavioral signatures that suggest fundamental differences in representation and learning between people and the DQN. For instance, the game of Frostbite provides incremental rewards for reaching each active ice floe, providing the DQN with the relevant sub-goals for completing the larger task of building an igloo. Without these sub-goals, the DQN would have to take random actions until it accidentally builds an igloo and is rewarded for completing the entire level. In contrast, people likely do not rely on incremental scoring in the same way when figuring out how to play a new game. In Frostbite, it is possible to figure out the higher-level goal of building an igloo without incremental feedback; similarly, sparse feedback is a source of difficulty in other Atari 2600 games such as Montezuma's Revenge where people substantially outperform current DQN approaches.
The learned DQN network is also rather inflexible to changes in its inputs and goals: changing the color or appearance of objects or changing the goals of the network would have devastating consequences on performance if the network is not retrained. While any specific model is necessarily
4 More precisely, the human expert in V. Mnih et al. (2015) scored an average of 4335 points across 30 game sessions of up to five minutes of play. In individual sessions lasting no longer than five minutes, author TDU obtained scores of 3520 points after approximately 5 minutes of gameplay, 3510 points after 10 minutes, and 7810 points after 15 minutes. Author JBT obtained 4060 after approximately 5 minutes of gameplay, 4920 after 10-15 minutes, and 6710 after no more than 20 minutes. TDU and JBT each watched approximately two minutes of expert play on YouTube (e.g., https://www.youtube.com/watch?v=ZpUFztf9Fjc, but there are many similar examples that can be found in a YouTube search).
simplified and should not be held to the standard of general human intelligence, the contrast between DQN and human flexibility is striking nonetheless. For example, imagine you are tasked with playing Frostbite with any one of these new goals:
Get the lowest possible score.
Get closest to 100, or 300, or 1000, or 3000, or any level, without going over.
Beat your friend, who's playing next to you, but just barely, not by too much, so as not to embarrass them.
Go as long as you can without dying.
Die as quickly as you can.
Pass each level at the last possible minute, right before the temperature timer hits zero and you die (i.e., come as close as you can to dying from frostbite without actually dying).
Get to the furthest unexplored level without regard for your score.
See if you can discover secret Easter eggs.
Get as many fish as you can.
Touch all the individual ice floes on screen once and only once.
Teach your friend how to play as efficiently as possible.
This range of goals highlights an essential component of human intelligence: people can learn models and use them for arbitrary new tasks and goals. While neural networks can learn multiple mappings or tasks with the same set of stimuli -- adapting their outputs depending on a specified goal -- these models require substantial training or reconfiguration to add new tasks (e.g., Collins & Frank, 2013; Eliasmith et al., 2012; Rougier, Noelle, Braver, Cohen, & O'Reilly, 2005). In contrast, people require little or no retraining or reconfiguration, adding new tasks and goals to their repertoire with relative ease.
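In reinforcement learning terms, each goal in the list above is just a different reward function over the same game. The hedged sketch below shows how a few of them might be written down; `score_delta` and `died` are hypothetical quantities read from the game state, and all names are ours. A DQN-style agent would need retraining for each function, whereas a model-based agent could in principle re-plan against any of them using the same learned game model.

```python
# Hypothetical reward functions for alternative Frostbite goals (illustrative).

def reward_lowest_score(score_delta, died):
    return -score_delta                       # "get the lowest possible score"

def reward_closest_without_over(score, target=300):
    # "get closest to 300 without going over"
    return -abs(target - score) if score <= target else -1000.0

def reward_survive(score_delta, died):
    return -1.0 if died else 0.0              # "go as long as you can without dying"

def reward_die_quickly(score_delta, died):
    return 1.0 if died else -0.01             # "die as quickly as you can"
```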
The Frostbite example is a particularly telling contrast when compared with human play. Even the best deep networks learn gradually over many thousands of game episodes, take a long time to reach good performance and are locked into particular input and goal patterns. Humans, after playing just a small number of games over a span of minutes, can understand the game and its goals well enough to perform better than deep networks do after almost a thousand hours of experience. Even more impressively, people understand enough to invent or accept new goals, generalize over changes to the input, and explain the game to others. Why are people different? What core ingredients of human intelligence might the DQN and other modern machine learning methods be missing?
One might object that both the Frostbite and Characters challenges draw an unfair comparison between the speed of human learning and neural network learning. We discuss this objection in detail in Section 5, but we feel it is important to anticipate here as well. To paraphrase one reviewer of an earlier draft of this article, "It is not that DQN and people are solving the same task
differently. They may be better seen as solving different tasks. Human learners -- unlike DQN and many other deep learning systems -- approach new problems armed with extensive prior experience. The human is encountering one in a years-long string of problems, with rich overlapping structure. Humans as a result often have important domain-specific knowledge for these tasks, even before they 'begin.' The DQN is starting completely from scratch." We agree, and indeed this is another way of putting our point here. Human learners fundamentally take on different learning tasks than today's neural networks, and if we want to build machines that learn and think like people, our machines need to confront the kinds of tasks that human learners do, not shy away from them. People never start completely from scratch, or even close to "from scratch," and that is the secret to their success. The challenge of building models of human learning and thinking then becomes: How do we bring to bear rich prior knowledge to learn new tasks and solve new problems so quickly? What form does that prior knowledge take, and how is it constructed, from some combination of inbuilt capacities and previous experience? The core ingredients we propose in the next section offer one route to meeting this challenge.
# 4 Core ingredients of human intelligence
In the Introduction, we laid out what we see as core ingredients of intelligence. Here we consider the ingredients in detail and contrast them with the current state of neural network modeling. While these are hardly the only ingredients needed for human-like learning and thought (see our discussion of language in Section 5), they are key building blocks which are not present in most current learning-based AI systems -- certainly not all present together -- and for which additional attention may prove especially fruitful. We believe that integrating them will produce significantly more powerful and more human-like learning and thinking abilities than we currently see in AI systems.
Before considering each ingredient in detail, it is important to clarify that by "core ingredient" we do not necessarily mean an ingredient that is innately specified by genetics or must be "built in" to any learning algorithm. We intend our discussion to be agnostic with regards to the origins of the key ingredients. By the time a child or an adult is picking up a new character or learning how to play Frostbite, they are armed with extensive real world experience that deep learning systems do not benefit from -- experience that would be hard to emulate in any general sense. Certainly, the core ingredients are enriched by this experience, and some may even be a product of the experience itself. Whether learned, built in, or enriched, the key claim is that these ingredients play an active and important role in producing human-like learning and thought, in ways contemporary machine learning has yet to capture.
# 4.1 Developmental start-up software
Early in development, humans have a foundational understanding of several core domains (Spelke, 2003, 2007). These domains include number (numerical and set operations), space (geometry and navigation), physics (inanimate objects and mechanics) and psychology (agents and groups). These core domains cleave cognition at its conceptual joints, and each domain
is organized by a set of entities and abstract principles relating the entities. The underlying cognitive representations can be understood as "intuitive theories," with a causal structure resembling a scientific theory (Carey, 2004, 2009; Gopnik et al., 2004; Gopnik & Meltzoff, 1999; Gweon, Tenenbaum, & Schulz, 2010; L. Schulz, 2012; Wellman & Gelman, 1992, 1998). The "child as scientist" proposal further views the process of learning itself as also scientist-like, with recent experiments showing that children seek out new data to distinguish between hypotheses, isolate variables, test causal hypotheses, make use of the data-generating process in drawing conclusions, and learn selectively from others (Cook, Goodman, & Schulz, 2011; Gweon et al., 2010; L. E. Schulz, Gopnik, & Glymour, 2007; Stahl & Feigenson, 2015; Tsividis, Gershman, Tenenbaum, & Schulz, 2013). We will address the nature of learning mechanisms in Section 4.2.
Each core domain has been the target of a great deal of study and analysis, and together the domains are thought to be shared cross-culturally and partly with non-human animals. All of these domains may be important augmentations to current machine learning, though below we focus in particular on the early understanding of objects and agents.
# 4.1.1 Intuitive physics
Young children have rich knowledge of intuitive physics. Whether learned or innate, important physical concepts are present at ages far earlier than when a child or adult learns to play Frostbite, suggesting these resources may be used for solving this and many everyday physics-related tasks.
At the age of 2 months and possibly earlier, human infants expect inanimate objects to follow principles of persistence, continuity, cohesion and solidity. Young infants believe objects should move along smooth paths, not wink in and out of existence, not interpenetrate and not act at a distance (Spelke, 1990; Spelke, Gutheil, & Van de Walle, 1995). These expectations guide object segmentation in early infancy, emerging before appearance-based cues such as color, texture, and perceptual goodness (Spelke, 1990).
These expectations also go on to guide later learning. At around 6 months, infants have already developed different expectations for rigid bodies, soft bodies and liquids (Rips & Hespos, 2015). Liquids, for example, are expected to go through barriers, while solid objects cannot (Hespos, Ferry, & Rips, 2009). By their first birthday, infants have gone through several transitions of comprehending basic physical concepts such as inertia, support, containment and collisions (Baillargeon, 2004; Baillargeon, Li, Ng, & Yuan, 2009; Hespos & Baillargeon, 2008).
There is no single agreed-upon computational account of these early physical principles and concepts, and previous suggestions have ranged from decision trees (Baillargeon et al., 2009), to cues, to lists of rules (Siegler & Chen, 1998). A promising recent approach sees intuitive physical reasoning as similar to inference over a physics software engine, the kind of simulators that power modern-day animations and games (Bates, Yildirim, Tenenbaum, & Battaglia, 2015; Battaglia, Hamrick, & Tenenbaum, 2013; Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2015; Sanborn, Mansinghka, & Griffiths, 2013). According to this hypothesis, people reconstruct a perceptual scene using internal representations of the objects and their physically relevant properties (such as mass, elasticity, and surface friction), and forces acting on objects (such as gravity, friction, or collision impulses). Relative to physical ground truth, the intuitive physical state representation
[Figure 4 graphic: (A) a pipeline from (1) inputs, through (2) an intuitive physics engine, to (3) outputs such as "Will it fall? Which direction?"; (B) possible changes to the input: add blocks, blocks made of styrofoam, blocks made of lead, blocks made of goo, table is made of rubber, table is actually quicksand, pour water on the tower, pour honey on the tower, blue blocks are glued together, red blocks are magnetic, gravity is reversed, wind blows over table, table has slippery ice on top.]
Figure 4: The intuitive physics-engine approach to scene understanding, illustrated through tower stability. (A) The engine takes in inputs through perception, language, memory and other faculties. It then constructs a physical scene with objects, physical properties and forces, simulates the scene's development over time and hands the output to other reasoning systems. (B) Many possible "tweaks" to the input can result in much different scenes, requiring the potential discovery, training and evaluation of new features for each tweak. Adapted from Battaglia et al. (2013).
is approximate and probabilistic, and oversimplified and incomplete in many ways. Still, it is rich enough to support mental simulations that can predict how objects will move in the immediate future, either on their own or in response to forces we might apply.
This "intuitive physics engine" approach enables flexible adaptation to a wide range of everyday scenarios and judgments in a way that goes beyond perceptual cues. For example (Figure 4), a physics-engine reconstruction of a tower of wooden blocks from the game Jenga can be used to predict whether (and how) a tower will fall, finding close quantitative fits to how adults make these predictions (Battaglia et al., 2013) as well as simpler kinds of physical predictions that have been studied in infants (Téglás et al., 2011). Simulation-based models can also capture how people make hypothetical or counterfactual predictions: What would happen if certain blocks are taken away, more blocks are added, or the table supporting the tower is jostled? What if certain blocks were glued together, or attached to the table surface? What if the blocks were made of different materials (Styrofoam, lead, ice)? What if the blocks of one color were much heavier than other colors? Each of these physical judgments may require new features or new training for a pattern recognition account to work at the same level as the model-based simulator.
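A minimal sketch of the probabilistic-simulation idea is shown below: perceptual uncertainty is represented by jittering the inferred block positions, each sample is run through a (drastically simplified, one-dimensional) stability rule standing in for a full physics engine, and the proportion of falling samples approximates the judged probability. All names and numbers are our own illustrative assumptions, in the spirit of Battaglia et al. (2013).

```python
import random

def will_fall(blocks):
    """Crude stand-in for a physics engine: a block 'falls' if its center of
    mass is unsupported by the block beneath it (1-D toy version)."""
    for lower, upper in zip(blocks, blocks[1:]):
        if abs(upper - lower) > 0.5:   # half the (unit) block width
            return True
    return False

def p_fall(observed_blocks, noise=0.1, n_samples=1000):
    """Monte Carlo estimate of P(tower falls) under perceptual uncertainty."""
    falls = 0
    for _ in range(n_samples):
        sample = [x + random.gauss(0.0, noise) for x in observed_blocks]
        falls += will_fall(sample)
    return falls / n_samples

print(p_fall([0.0, 0.1, 0.2]))   # a fairly stable tower: low probability
print(p_fall([0.0, 0.4, 0.8]))   # a precarious tower: much higher probability
```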
What are the prospects for embedding or acquiring this kind of intuitive physics in deep learning systems? Connectionist models in psychology have previously been applied to physical reasoning tasks such as balance-beam rules (McClelland, 1988; Shultz, 2003) or rules relating distance, velocity, and time in motion (Buckingham & Shultz, 2000), but these networks do not attempt to work with complex scenes as input or a wide range of scenarios and judgments as in Figure 4.
A recent paper from Facebook AI researchers (Lerer, Gross, & Fergus, 2016) represents an exciting step in this direction. Lerer et al. (2016) trained a deep convolutional network-based system (PhysNet) to predict the stability of block towers from simulated images similar to those in Figure 4A but with much simpler configurations of two, three or four cubical blocks stacked vertically. Impressively, PhysNet generalized to simple real images of block towers, matching human performance on these images, while exceeding human performance on synthetic images. Human and PhysNet confidence were also correlated across towers, although not as strongly as for the approximate probabilistic simulation models and experiments of Battaglia et al. (2013). One limitation is that PhysNet currently requires extensive training -- between 100,000 and 200,000 scenes -- to learn judgments for just a single task (will the tower fall?) on a narrow range of scenes (towers with two to four cubes). It has been shown to generalize, but also only in limited ways (e.g., from towers of two and three cubes to towers of four cubes). In contrast, people require far less experience to perform any particular task, and can generalize to many novel judgments and complex scenes with no new training required (although they receive large amounts of physics experience through interacting with the world more generally). Could deep learning systems such as PhysNet capture this flexibility, without explicitly simulating the causal interactions between objects in three dimensions? We are not sure, but we hope this is a challenge they will take on.
Alternatively, instead of trying to make predictions without simulating physics, could neural networks be trained to emulate a general-purpose physics simulator, given the right type and quantity of training data, such as the raw input experienced by a child? This is an active and intriguing area of research, but it too faces significant challenges. For networks trained on object classification, deeper layers often become sensitive to successively higher-level features, from edges to textures to shape-parts to full objects (Yosinski, Clune, Bengio, & Lipson, 2014; Zeiler & Fergus, 2014). For deep networks trained on physics-related data, it remains to be seen whether higher layers will encode objects, general physical properties, forces and approximately Newtonian dynamics. A generic network trained on dynamic pixel data might learn an implicit representation of these concepts, but would it generalize broadly beyond training contexts as people's more explicit physical concepts do? Consider for example a network that learns to predict the trajectories of several balls bouncing in a box (Kodratoff & Michalski, 2014). If this network has actually learned something like Newtonian mechanics, then it should be able to generalize to interestingly different scenarios -- at a minimum different numbers of differently shaped objects, bouncing in boxes of different shapes and sizes and orientations with respect to gravity, not to mention more severe generalization tests such as all of the tower tasks discussed above, which also fall under the Newtonian domain. Neural network researchers have yet to take on this challenge, but we hope they will. Whether such models can be learned with the kind (and quantity) of data available to human infants is not clear, as we discuss further in Section 5.
It may be difficult to integrate object and physics-based primitives into deep neural networks, but the payoff in terms of learning speed and performance could be great for many tasks. Consider the case of learning to play Frostbite. Although it can be difficult to discern exactly how a network learns to solve a particular task, the DQN probably does not parse a Frostbite screenshot in terms of stable objects or sprites moving according to the rules of intuitive physics (Figure 2). But incorporating a physics-engine-based representation could help DQNs learn to play games such as Frostbite in a faster and more general way, whether the physics knowledge is captured implicitly in a neural network or more explicitly in a simulator. Beyond reducing the amount of training data and
potentially improving the level of performance reached by the DQN, it could eliminate the need to retrain a Frostbite network if the objects (e.g., birds, ice floes and fish) are slightly altered in their behavior, reward-structure, or appearance. When a new object type such as a bear is introduced, as in the later levels of Frostbite (Figure 2D), a network endowed with intuitive physics would also have an easier time adding this object type to its knowledge (the challenge of adding new objects was also discussed in Marcus, 1998, 2001). In this way, the integration of intuitive physics and deep learning could be an important step towards more human-like learning algorithms.
# 4.1.2 Intuitive psychology
Intuitive psychology is another early-emerging ability with an important influence on human learning and thought. Pre-verbal infants distinguish animate agents from inanimate objects. This distinction is partially based on innate or early-present detectors for low-level cues, such as the presence of eyes, motion initiated from rest, and biological motion (Johnson, Slaughter, & Carey, 1998; Premack & Premack, 1997; Schlottmann, Ray, Mitchell, & Demetriou, 2006; Tremoulet & Feldman, 2000). Such cues are often sufficient but not necessary for the detection of agency.
Beyond these low-level cues, infants also expect agents to act contingently and reciprocally, to have goals, and to take efficient actions towards those goals subject to constraints (Csibra, 2008; Csibra, Biro, Koos, & Gergely, 2003; Spelke & Kinzler, 2007). These goals can be socially directed; at around three months of age, infants begin to discriminate anti-social agents that hurt or hinder others from neutral agents (Hamlin, 2013; Hamlin, Wynn, & Bloom, 2010), and they later distinguish between anti-social, neutral, and pro-social agents (Hamlin, Ullman, Tenenbaum, Goodman, & Baker, 2013; Hamlin, Wynn, & Bloom, 2007).
It is generally agreed that infants expect agents to act in a goal-directed, efficient, and socially sensitive fashion (Spelke & Kinzler, 2007). What is less agreed on is the computational architecture that supports this reasoning and whether it includes any reference to mental states and explicit goals.
One possibility is that intuitive psychology is simply cues "all the way down" (Schlottmann, Cole, Watts, & White, 2013; Scholl & Gao, 2013), though this would require more and more cues as the scenarios become more complex. Consider for example a scenario in which an agent A is moving towards a box, and an agent B moves in a way that blocks A from reaching the box. Infants and adults are likely to interpret B's behavior as "hindering" (Hamlin, 2013). This inference could be captured by a cue that states "if an agent's expected trajectory is prevented from completion, the blocking agent is given some negative association."
While the cue is easily calculated, the scenario is also easily changed to necessitate a different type of cue. Suppose A was already negatively associated (a "bad guy"); acting negatively towards A could then be seen as good (Hamlin, 2013). Or suppose something harmful was in the box which A didn't know about. Now B would be seen as helping, protecting, or defending A. Suppose A knew there was something bad in the box and wanted it anyway. B could be seen as acting paternalistically. A cue-based account would be twisted into gnarled combinations such as "If an expected trajectory is prevented from completion, the blocking agent is given some negative association, unless that trajectory leads to a negative outcome or the blocking agent is previously associated as positive,
or the blocked agent is previously associated as negative, or..."
One alternative to a cue-based account is to use generative models of action choice, as in the Bayesian inverse planning (or "Bayesian theory-of-mind") models of Baker, Saxe, and Tenenbaum (2009) or the "naive utility calculus" models of Jara-Ettinger, Gweon, Tenenbaum, and Schulz (2015) (see also Jern and Kemp (2015) and Tauber and Steyvers (2011), and a related alternative based on predictive coding from Kilner, Friston, and Frith (2007)). These models formalize explicitly mentalistic concepts such as "goal," "agent," "planning," "cost," "efficiency," and "belief," used to describe core psychological reasoning in infancy. They assume adults and children treat agents as approximately rational planners who choose the most efficient means to their goals. Planning computations may be formalized as solutions to Markov Decision Processes (or POMDPs), taking as input utility and belief functions defined over an agent's state-space and the agent's state-action transition functions, and returning a series of actions the agent should perform to most efficiently fulfill their goals (or maximize their utility). By simulating these planning processes, people can predict what agents might do next, or use inverse reasoning from observing a series of actions to infer the utilities and beliefs of agents in a scene. This is directly analogous to how simulation engines can be used for intuitive physics, to predict what will happen next in a scene or to infer objects' dynamical properties from how they move. It yields similarly flexible reasoning abilities: Utilities and beliefs can be adjusted to take into account how agents might act for a wide range of novel goals and situations. Importantly, unlike in intuitive physics, simulation-based reasoning in intuitive psychology can be nested recursively to understand social interactions -- we can think about agents thinking about other agents.
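The core computation can be sketched compactly: Bayes' rule combines a prior over goals with a likelihood given by an approximately rational action model, P(goal | actions) proportional to P(actions | goal) P(goal). The toy code below uses a softmax-rational stand-in for full POMDP planning, with invented goals and names throughout; it is a hedged illustration of the inverse-planning idea in Baker et al. (2009), not their model.

```python
import math

def likelihood(actions, goal, beta=2.0):
    """Softmax-rational action model: actions that move efficiently toward
    the goal are exponentially more probable (a stand-in for full planning)."""
    progress = sum(1.0 if a == goal["best_action"] else -1.0 for a in actions)
    return math.exp(beta * progress)

def infer_goal(actions, goals, prior):
    """Bayesian inverse planning: P(goal | actions) ~ P(actions | goal) P(goal)."""
    post = {g["name"]: likelihood(actions, g) * prior[g["name"]] for g in goals}
    z = sum(post.values())
    return {name: p / z for name, p in post.items()}

goals = [{"name": "reach_box", "best_action": "toward_box"},
         {"name": "avoid_box", "best_action": "away_from_box"}]
prior = {"reach_box": 0.5, "avoid_box": 0.5}
print(infer_goal(["toward_box", "toward_box"], goals, prior))
```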
As in the case of intuitive physics, the success that generic deep networks will have in capturing intuitive psychological reasoning will depend in part on the representations humans use. Although deep networks have not yet been applied to scenarios involving theory-of-mind and intuitive psychology, they could probably learn visual cues, heuristics and summary statistics of a scene that happens to involve agents.5 If that is all that underlies human psychological reasoning, a data-driven deep learning approach can likely find success in this domain.
However, it seems to us that any full formal account of intuitive psychological reasoning needs to include representations of agency, goals, efficiency, and reciprocal relations. As with objects and forces, it is unclear whether a complete representation of these concepts (agents, goals, etc.) could emerge from deep neural networks trained in a purely predictive capacity. Similar to the intuitive physics domain, it is possible that with a tremendous number of training trajectories in a variety of scenarios, deep learning techniques could approximate the reasoning found in infancy even without learning anything about goal-directed or social-directed behavior more generally. But this is also unlikely to resemble how humans learn, understand, and apply intuitive psychology unless the concepts are genuine. In the same way that altering the setting of a scene or the target of inference in a physics-related task may be difficult to generalize without an understanding of objects, altering the setting of an agent or their goals and beliefs is difficult to reason about without understanding intuitive psychology.
In introducing the Frostbite challenge, we discussed how people can learn to play the game extremely
5 While connectionist networks have been used to model the general transition that children undergo between the ages of 3 and 4 regarding false belief (e.g., Berthiaume, Shultz, & Onishi, 2013), we are referring here to scenarios which require inferring goals, utilities, and relations.
quickly by watching an experienced player for just a few minutes and then playing a few rounds themselves. Intuitive psychology provides a basis for efficient learning from others, especially in teaching settings with the goal of communicating knowledge efficiently (Shafto, Goodman, & Griffiths, 2014). In the case of watching an expert play Frostbite, whether or not there is an explicit goal to teach, intuitive psychology lets us infer the beliefs, desires, and intentions of the experienced player. For instance, we can learn that the birds are to be avoided from seeing how the experienced player appears to avoid them. We do not need to experience a single example of encountering a bird -- and watching the Frostbite Bailey die because of the bird -- in order to infer that birds are probably dangerous. It is enough to see that the experienced player's avoidance behavior is best explained as acting under that belief.
Similarly, consider how a sidekick agent (increasingly popular in video-games) is expected to help a player achieve their goals. This agent can be useful in different ways under different circumstances, such as getting items, clearing paths, fighting, defending, healing, and providing information -- all under the general notion of being helpful (Macindoe, 2013). An explicit agent representation can predict how such an agent will be helpful in new circumstances, while a bottom-up pixel-based representation is likely to struggle.
There are several ways that intuitive psychology could be incorporated into contemporary deep learning systems. While it could be built in, intuitive psychology may arise in other ways. Connectionists have argued that innate constraints in the form of hard-wired cortical circuits are unlikely (Elman, 2005; Elman et al., 1996), but a simple inductive bias, for example the tendency to notice things that move other things, can bootstrap reasoning about more abstract concepts of agency (S. Ullman, Harari, & Dorfman, 2012).6 Similarly, a great deal of goal-directed and socially-directed action can also be boiled down to a simple utility-calculus (e.g., Jara-Ettinger et al., 2015), in a way that could be shared with other cognitive abilities. While the origins of intuitive psychology are still a matter of debate, it is clear that these abilities are early-emerging and play an important role in human learning and thought, as exemplified in the Frostbite challenge and when learning to play novel video games more broadly.
# 4.2 Learning as rapid model building
Since their inception, neural network models have stressed the importance of learning. There are many learning algorithms for neural networks, including the perceptron algorithm (Rosenblatt, 1958), Hebbian learning (Hebb, 1949), the BCM rule (Bienenstock, Cooper, & Munro, 1982), backpropagation (Rumelhart, Hinton, & Williams, 1986), the wake-sleep algorithm (Hinton, Dayan, Frey, & Neal, 1995), and contrastive divergence (Hinton, 2002). Whether the goal is supervised or unsupervised learning, these algorithms implement learning as a process of gradual adjustment of connection strengths. For supervised learning, the updates are usually aimed at improving the algorithm's pattern recognition capabilities. For unsupervised learning, the updates work towards gradually matching the statistics of the model's internal patterns with the statistics of the input data.
6 We must be careful here about what "simple" means. An inductive bias may appear simple in the sense that we can compactly describe it, but it may require complex computation (e.g., motion analysis, parsing images into objects, etc.) just to produce its inputs in a suitable form.
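Despite their differences, all of the algorithms above share the same abstract form: a gradual adjustment of connection strengths. The sketch below shows the simplest supervised case, repeated delta-rule (gradient) updates for a single linear neuron on made-up data; the point is the incremental character of the learning, not the specific rule.

```python
def gradient_step(weights, inputs, target, lr=0.01):
    """One gradient step on squared error for a single linear neuron."""
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = prediction - target
    return [w - lr * error * x for w, x in zip(weights, inputs)]

w = [0.0, 0.0]
for _ in range(100):                 # many small updates, not one-shot learning
    w = gradient_step(w, [1.0, 2.0], target=1.0)
print(w)                             # weights drift gradually toward a solution
```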
In recent years, machine learning has found particular success using backpropagation and large data sets to solve difficult pattern recognition problems. While these algorithms have reached human-level performance on several challenging benchmarks, they are still far from matching human-level learning in other ways. Deep neural networks often need more data than people do in order to solve the same types of problems, whether it is learning to recognize a new type of object or learning to play a new game. When learning the meanings of words in their native language, children make meaningful generalizations from very sparse data (Carey & Bartlett, 1978; Landau, Smith, & Jones, 1988; E. M. Markman, 1989; Smith, Jones, Landau, Gershkoff-Stowe, & Samuelson, 2002; F. Xu & Tenenbaum, 2007, although see Horst and Samuelson 2008 regarding memory limitations). Children may only need to see a few examples of the concepts hairbrush, pineapple or lightsaber before they largely "get it," grasping the boundary of the infinite set that defines each concept from the infinite set of all possible objects. Children are far more practiced than adults at learning new concepts -- learning roughly nine or ten new words each day after beginning to speak through the end of high school (Bloom, 2000; Carey, 1978) -- yet the ability for rapid "one-shot" learning does not disappear in adulthood. An adult may need to see a single image or movie of a novel two-wheeled vehicle to infer the boundary between this concept and others, allowing him or her to discriminate new examples of that concept from similar looking objects of a different type (Fig. 1B-i).
Contrasting with the efficiency of human learning, neural networks -- by virtue of their generality as highly flexible function approximators -- are notoriously data hungry (the bias/variance dilemma; Geman, Bienenstock, & Doursat, 1992). Benchmark tasks such as the ImageNet data set for object recognition provide hundreds or thousands of examples per class (Krizhevsky et al., 2012; Russakovsky et al., 2015) -- 1000 hairbrushes, 1000 pineapples, etc. In the context of learning new handwritten characters or learning to play Frostbite, the MNIST benchmark includes 6000 examples of each handwritten digit (LeCun et al., 1998), and the DQN of V. Mnih et al. (2015) played each Atari video game for approximately 924 hours of unique training experience (Figure 3). In both cases, the algorithms are clearly using information less efficiently than a person learning to perform the same tasks.
It is also important to mention that there are many classes of concepts that people learn more slowly. Concepts that are learned in school are usually far more challenging and more difficult to acquire, including mathematical functions, logarithms, derivatives, integrals, atoms, electrons, gravity, DNA, evolution, etc. There are also domains for which machine learners outperform human learners, such as combing through financial or weather data. But for the vast majority of cognitively natural concepts -- the types of things that children learn as the meanings of words -- people are still far better learners than machines. This is the type of learning we focus on in this section, which is more suitable for the enterprise of reverse engineering and articulating additional principles that make human learning successful. It also opens the possibility of building these ingredients into the next generation of machine learning and AI algorithms, with potential for making progress on learning concepts that are both easy and difficult for humans to acquire.
Even with just a few examples, people can learn remarkably rich conceptual models. One indicator of richness is the variety of functions that these models support (A. B. Markman & Ross, 2003; Solomon, Medin, & Lynch, 1999). Beyond classification, concepts support prediction (Murphy & Ross, 1994; Rips, 1975), action (Barsalou, 1983), communication (A. B. Markman & Makin, 1998), imagination (Jern & Kemp, 2013; Ward, 1994), explanation (Lombrozo, 2009; Williams
& Lombrozo, 2010), and composition (Murphy, 1988; Osherson & Smith, 1981). These abilities are not independent; rather they hang together and interact (Solomon et al., 1999), coming for free with the acquisition of the underlying concept. Returning to the previous example of a novel two-wheeled vehicle, a person can sketch a range of new instances (Figure 1B-ii), parse the concept into its most important components (Figure 1B-iii), or even create a new complex concept through the combination of familiar concepts (Figure 1B-iv). Likewise, as discussed in the context of Frostbite, a learner who has acquired the basics of the game could flexibly apply their knowledge to an infinite set of Frostbite variants (Section 3.2). The acquired knowledge supports reconfiguration to new tasks and new demands, such as modifying the goals of the game to survive while acquiring as few points as possible, or to efficiently teach the rules to a friend.
This richness and flexibility suggests that learning as model building is a better metaphor than learning as pattern recognition. Furthermore, the human capacity for one-shot learning suggests that these models are built upon rich domain knowledge rather than starting from a blank slate (Mikolov, Joulin, & Baroni, 2016; Mitchell, Keller, & Kedar-Cabelli, 1986). In contrast, much of the recent progress in deep learning has been on pattern recognition problems, including object recognition, speech recognition, and (model-free) video game learning, that utilize large data sets and little domain knowledge.
There has been recent work on other types of tasks including learning generative models of images (Denton, Chintala, Szlam, & Fergus, 2015; Gregor, Danihelka, Graves, Rezende, & Wierstra, 2015), caption generation (Karpathy & Fei-Fei, 2015; Vinyals, Toshev, Bengio, & Erhan, 2014; K. Xu et al., 2015), question answering (Sukhbaatar, Szlam, Weston, & Fergus, 2015; Weston, Chopra, & Bordes, 2015), and learning simple algorithms (Graves, Wayne, & Danihelka, 2014; Grefenstette, Hermann, Suleyman, & Blunsom, 2015); we discuss question answering and learning simple algorithms in Section 6.1. Yet, at least for image and caption generation, these tasks have been mostly studied in the big data setting that is at odds with the impressive human ability for generalizing from small data sets (although see Rezende, Mohamed, Danihelka, Gregor, & Wierstra, 2016, for a deep learning approach to the Character Challenge). And it has been difficult to learn neural-network-style representations that effortlessly generalize to new tasks that they were not trained on (see Davis & Marcus, 2015; Marcus, 1998, 2001). What additional ingredients may be needed in order to rapidly learn more powerful and more general-purpose representations?
A relevant case study is from our own work on the Characters Challenge (Section 3.1; Lake, 2014; Lake, Salakhutdinov, & Tenenbaum, 2015). People and various machine learning approaches were compared on their ability to learn new handwritten characters from the world's alphabets. In addition to evaluating several types of deep learning models, we developed an algorithm using Bayesian Program Learning (BPL) that represents concepts as simple stochastic programs -- that is, structured procedures that generate new examples of a concept when executed (Figure 5A). These programs allow the model to express causal knowledge about how the raw data are formed, and the probabilistic semantics allow the model to handle noise and perform creative tasks. Structure sharing across concepts is accomplished by the compositional reuse of stochastic primitives that can combine in new ways to create new concepts.
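To give a flavor of what "concepts as simple stochastic programs" means, the sketch below samples a character type by choosing parts from a primitive library together with relations, and then generates tokens by re-running the type's program with motor noise. It is a loose paraphrase of the generative story in Figure 5A; the primitive library, relation set, and noise model are all invented here for illustration.

```python
import random

PRIMITIVES = ["arc", "line", "hook", "loop"]          # invented stroke library
RELATIONS = ["start", "end", "along", "independent"]  # how a part attaches

def sample_type():
    """Sample a new character concept: parts plus attachment relations."""
    n_parts = random.randint(1, 4)
    parts = [random.choice(PRIMITIVES) for _ in range(n_parts)]
    relations = ["independent"] + [random.choice(RELATIONS) for _ in parts[1:]]
    return {"parts": parts, "relations": relations}

def sample_token(char_type, motor_noise=0.05):
    """Re-run the concept's program with motor noise to produce one exemplar."""
    return [(part, rel, random.gauss(1.0, motor_noise))   # noisy scale per stroke
            for part, rel in zip(char_type["parts"], char_type["relations"])]

concept = sample_type()                              # a new lower-level program
tokens = [sample_token(concept) for _ in range(3)]   # three handwritten examples
```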
Note that we are overloading the word "model" to refer to both the BPL framework as a whole (which is a generative model), as well as the individual probabilistic models (or concepts) that it infers from images to represent novel handwritten characters. There is a hierarchy of models: a
[Figure 5 graphic: (A) the generative process from a library of primitives through sub-parts, parts, and relations to tokens and rendered binary images; (B) paired grids of handwritten characters for the "Human or Machine?" visual Turing Test.]
Figure 5: A causal, compositional model of handwritten characters. A) New types are generated compositionally by choosing primitive actions (color coded) from a library (i), combining these sub-parts (ii) to make parts (iii), and combining parts with relations to define simple programs (iv). These programs can create different tokens of a concept (v) that are rendered as binary images (vi). B) Probabilistic inference allows the model to generate new examples from just one example of a new concept, shown here in a visual Turing Test. An example image of a new concept is shown above each pair of grids. One grid was generated by 9 people and the other is 9 samples from the BPL model. Which grid in each pair (A or B) was generated by the machine? Answers by row: 1,2;1,1. Adapted from Lake, Salakhutdinov, and Tenenbaum (2015).
higher-level program that generates different types of concepts, which are themselves programs that can be run to generate tokens of a concept. Here, describing learning as "rapid model building" refers to the fact that BPL constructs generative models (lower-level programs) that produce tokens of a concept (Figure 5B).
Learning models of this form allows BPL to perform a challenging one-shot classification task at human-level performance (Figure 1A-i) and to outperform current deep learning models such as convolutional networks (Koch, Zemel, & Salakhutdinov, 2015).7 The representations that BPL learns also enable it to generalize in other, more creative human-like ways, as evaluated using "visual Turing tests" (e.g., Figure 5B). These tasks include generating new examples (Figure 1A-ii and Figure 5B), parsing objects into their essential components (Figure 1A-iii), and generating new concepts in the style of a particular alphabet (Figure 1A-iv). The following sections discuss the three main ingredients -- compositionality, causality, and learning-to-learn -- that were important to the success of this framework and we believe are important to understanding human learning as rapid model building more broadly. While these ingredients fit naturally within a BPL or a probabilistic program induction framework, they could also be integrated into deep learning models and other types of machine learning algorithms, prospects we discuss in more detail below.
7 A new approach using convolutional "matching networks" achieves good one-shot classification performance when discriminating between characters from different alphabets (Vinyals, Blundell, Lillicrap, Kavukcuoglu, & Wierstra, 2016). It has not yet been directly compared with BPL, which was evaluated on one-shot classification with characters from the same alphabet.
# 4.2.1 Compositionality
Compositionality is the classic idea that new representations can be constructed through the combination of primitive elements. In computer programming, primitive functions can be combined together to create new functions, and these new functions can be further combined to create even more complex functions. This function hierarchy provides an efficient description of higher-level functions, like a part hierarchy for describing complex objects or scenes (Bienenstock, Geman, & Potter, 1997). Compositionality is also at the core of productivity: an infinite number of representations can be constructed from a finite set of primitives, just as the mind can think an infinite number of thoughts, utter or understand an infinite number of sentences, or learn new concepts from a seemingly infinite space of possibilities (Fodor, 1975; Fodor & Pylyshyn, 1988; Marcus, 2001; Piantadosi, 2011).
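The programming analogy can be made literal in a few lines: a finite library of primitive functions supports an unbounded space of new functions through composition (a minimal sketch with arbitrary primitives of our own choosing):

```python
def compose(f, g):
    """Build a new function from two existing ones: (f . g)(x) = f(g(x))."""
    return lambda x: f(g(x))

double = lambda x: 2 * x       # primitive
increment = lambda x: x + 1    # primitive

double_then_increment = compose(increment, double)
deeper = compose(double, compose(increment, double))   # a deeper hierarchy
print(double_then_increment(3))   # 7
print(deeper(3))                  # 14
```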
Compositionality has been broadly influential in both AI and cognitive science, especially as it pertains to theories of object recognition, conceptual representation, and language. Here we focus on compositional representations of object concepts for illustration. Structural description models represent visual concepts as compositions of parts and relations, which provides a strong inductive bias for constructing models of new concepts (Biederman, 1987; Hummel & Biederman, 1992; Marr & Nishihara, 1978; van den Hengel et al., 2015; Winston, 1975). For instance, the novel two-wheeled vehicle in Figure 1B might be represented as two wheels connected by a platform, which provides the base for a post, which holds the handlebars, etc. Parts can themselves be composed of sub-parts, forming a "partonomy" of part-whole relationships (G. A. Miller & Johnson-Laird, 1976; Tversky & Hemenway, 1984). In the novel vehicle example, the parts and relations can be shared and reused from existing related concepts, such as cars, scooters, motorcycles, and unicycles. Since the parts and relations are themselves a product of previous learning, their facilitation of the construction of new models is also an example of learning-to-learn -- another ingredient that is covered below. While compositionality and learning-to-learn fit naturally together, there are also forms of compositionality that rely less on previous learning, such as the bottom-up parts-based representation of Hoffman and Richards (1984).
Learning models of novel handwritten characters can be operationalized in a similar way. Handwritten characters are inherently compositional, where the parts are pen strokes and relations describe how these strokes connect to each other. Lake, Salakhutdinov, and Tenenbaum (2015) modeled these parts using an additional layer of compositionality, where parts are complex movements created from simpler sub-part movements. New characters can be constructed by combining parts, sub-parts, and relations in novel ways (Figure 5). Compositionality is also central to the construction of other types of symbolic concepts beyond characters, where new spoken words can be created through a novel combination of phonemes (Lake, Lee, Glass, & Tenenbaum, 2014) or a new gesture or dance move can be created through a combination of more primitive body movements.
An efficient representation for Frostbite should be similarly compositional and productive. A scene from the game is a composition of various object types, including birds, fish, ice floes, igloos, etc. (Figure 2). Representing this compositional structure explicitly is both more economical and better for generalization, as noted in previous work on object-oriented reinforcement learning (Diuk, Cohen, & Littman, 2008). Many repetitions of the same objects are present at different locations in the scene, and thus representing each as an identical instance of the same object with the
[Figure 6 images with generated captions: "a woman riding a horse on a dirt road"; "an airplane is parked on the tarmac at an airport"; "a group of people standing on top of a beach".]
Figure 6: Perceiving scenes without intuitive physics, intuitive psychology, compositionality, and causality. Image captions are generated by a deep neural network (Karpathy & Fei-Fei, 2015) using code from github.com/karpathy/neuraltalk2. Image credits: Gabriel Villena Fernández (left), TVBS Taiwan / Agence France-Presse (middle) and AP Photo / Dave Martin (right). Similar examples using images from Reuters news can be found at twitter.com/interesting_jpg.
same properties is important for efficient representation and quick learning of the game. Further, new levels may contain different numbers and combinations of objects, where a compositional representation of objects -- using intuitive physics and intuitive psychology as glue -- would aid in making these crucial generalizations (Figure 2D).
Deep neural networks have at least a limited notion of compositionality. Networks trained for object recognition encode part-like features in their deeper layers (Zeiler & Fergus, 2014), whereby the presentation of new types of objects can activate novel combinations of feature detectors. Similarly, a DQN trained to play Frostbite may learn to represent multiple replications of the same object with the same features, facilitated by the invariance properties of a convolutional neural network architecture. Recent work has shown how this type of compositionality can be made more explicit, where neural networks can be used for efficient inference in more structured generative models (both neural networks and 3D scene models) that explicitly represent the number of objects in a scene (Eslami et al., 2016). Beyond the compositionality inherent in parts, objects, and scenes, compositionality can also be important at the level of goals and sub-goals. Recent work on hierarchical-DQNs shows that by providing explicit object representations to a DQN, and then defining sub-goals based on reaching those objects, DQNs can learn to play games with sparse rewards (such as Montezuma's Revenge) by combining these sub-goals together to achieve larger goals (Kulkarni, Narasimhan, Saeedi, & Tenenbaum, 2016).
We look forward to seeing these new ideas continue to develop, potentially providing even richer notions of compositionality in deep neural networks that lead to faster and more flexible learning. To capture the full extent of the mind's compositionality, a model must include explicit representations of objects, identity, and relations -- all while maintaining a notion of "coherence" when understanding novel configurations. Coherence is related to our next principle, causality, which is discussed in the section that follows.
# 4.2.2 Causality
In concept learning and scene understanding, causal models represent hypothetical real world processes that produce the perceptual observations. In control and reinforcement learning, causal models represent the structure of the environment, such as modeling state-to-state transitions or action/state-to-state transitions.
Concept learning and vision models that utilize causality are usually generative (as opposed to discriminative; see Glossary in Table 1), but not every generative model is also causal. While a generative model describes a process for generating data, or at least assigns a probability distribution over possible data points, this generative process may not resemble how the data are produced in the real world. Causality refers to the subclass of generative models that resemble, at an abstract level, how the data are actually generated. While generative neural networks such as Deep Belief Networks (Hinton, Osindero, & Teh, 2006) or variational auto-encoders (Gregor, Besse, Rezende, Danihelka, & Wierstra, 2016; Kingma, Rezende, Mohamed, & Welling, 2014) may generate compelling handwritten digits, they mark one end of the "causality spectrum," since the steps of the generative process bear little resemblance to steps in the actual process of writing. In contrast, the generative model for characters using Bayesian Program Learning (BPL) does resemble the steps of writing, although even more causally faithful models are possible.
Causality has been influential in theories of perception. "Analysis-by-synthesis" theories of perception maintain that sensory data can be more richly represented by modeling the process that generated it (Bever & Poeppel, 2010; Eden, 1962; Halle & Stevens, 1962; Neisser, 1966). Relating data to its causal source provides strong priors for perception and learning, as well as a richer basis for generalizing in new ways and to new tasks. The canonical examples of this approach are speech and visual perception. For instance, Liberman, Cooper, Shankweiler, and Studdert-Kennedy (1967) argued that the richness of speech perception is best explained by inverting the production plan, at the level of vocal tract movements, in order to explain the large amounts of acoustic variability and the blending of cues across adjacent phonemes. As discussed, causality does not have to be a literal inversion of the actual generative mechanisms, as proposed in the motor theory of speech. For the BPL model of learning handwritten characters, causality is operationalized by treating concepts as motor programs, or abstract causal descriptions of how to produce examples of the concept, rather than concrete configurations of specific muscles (Figure 5A). Causality is an important factor in the model's success in classifying and generating new examples after seeing just a single example of a new concept (Lake, Salakhutdinov, & Tenenbaum, 2015) (Figure 5B).
Causal knowledge has also been shown to influence how people learn new concepts; providing a learner with different types of causal knowledge changes how they learn and generalize. For example, the structure of the causal network underlying the features of a category influences how people categorize new examples (Rehder, 2003; Rehder & Hastie, 2001). Similarly, as related to the Characters Challenge, the way people learn to write a novel handwritten character influences later perception and categorization (Freyd, 1983, 1987).
To explain the role of causality in learning, conceptual representations have been likened to intuitive theories or explanations, providing the glue that lets core features stick while other equally applicable features wash away (Murphy & Medin, 1985). Borrowing examples from Murphy and Medin (1985), the feature "flammable" is more closely attached to wood than money due to the
underlying causal roles of the concepts, even though the feature is equally applicable to both; these causal roles derive from the functions of objects. Causality can also glue some features together by relating them to a deeper underlying cause, explaining why some features such as "can fly," "has wings," and "has feathers" co-occur across objects while others do not.
Beyond concept learning, people also understand scenes by building causal models. Human-level scene understanding involves composing a story that explains the perceptual observations, drawing upon and integrating the ingredients of intuitive physics, intuitive psychology, and compositionality. Perception without these ingredients, and absent the causal glue that binds them together, can lead to revealing errors. Consider image captions generated by a deep neural network (Figure 6; Karpathy & Fei-Fei, 2015). In many cases, the network gets the key objects in a scene correct but fails to understand the physical forces at work, the mental states of the people, or the causal relationships between the objects -- in other words, it does not build the right causal model of the data.
There have been steps towards deep neural networks and related approaches that learn causal models. Lopez-Paz, Muandet, Schölkopf, and Tolstikhin (2015) introduced a discriminative, data-driven framework for distinguishing the direction of causality from examples. While it outperforms existing methods on various causal prediction tasks, it is unclear how to apply the approach to inferring rich hierarchies of latent causal variables, as needed for the Frostbite Challenge and (especially) the Characters Challenge. Graves (2014) learned a generative model of cursive handwriting using a recurrent neural network trained on handwriting data. While it synthesizes impressive examples of handwriting in various styles, it requires a large training corpus and has not been applied to other tasks. The DRAW network performs both recognition and generation of handwritten digits using recurrent neural networks with a window of attention, producing a limited circular area of the image at each time step (Gregor et al., 2015). A more recent variant of DRAW was applied to generating examples of a novel character from just a single training example (Rezende et al., 2016). While the model demonstrates an impressive ability to make plausible generalizations that go beyond the training examples, it generalizes too broadly in other cases, in ways that are not especially human-like. It is not clear that it could yet pass any of the "visual Turing tests" in Lake, Salakhutdinov, and Tenenbaum (2015) (Figure 5B), although we hope DRAW-style networks will continue to be extended and enriched, and could be made to pass these tests.
Incorporating causality may greatly improve these deep learning models; they were trained without access to causal data about how characters are actually produced, and without any incentive to learn the true causal process. An attentional window is only a crude approximation to the true causal process of drawing with a pen, and in Rezende et al. (2016) the attentional window is not pen-like at all, although a more accurate pen model could be incorporated. We anticipate that these sequential generative neural networks could make sharper one-shot inferences -- with the goal of tackling the full Characters Challenge -- by incorporating additional causal, compositional, and hierarchical structure (and by continuing to utilize learning-to-learn, described next), potentially leading to a more computationally efficient and neurally grounded variant of the BPL model of handwritten characters (Figure 5).
A causal model of Frostbite would have to be more complex, gluing together object representations and explaining their interactions with intuitive physics and intuitive psychology, much like the game engine that generates the game dynamics and ultimately the frames of pixel images. Inference is
the process of inverting this causal generative model, explaining the raw pixels as objects and their interactions, such as the agent stepping on an ice floe to deactivate it or a crab pushing the agent into the water (Figure 2). Deep neural networks could play a role in two ways: serving as a bottom-up proposer to make probabilistic inference more tractable in a structured generative model (Section 4.3.1) or by serving as the causal generative model if imbued with the right set of ingredients.
# 4.2.3 Learning-to-learn
When humans or machines make inferences that go far beyond the data, strong prior knowledge (or inductive biases or constraints) must be making up the difference (Geman et al., 1992; Griffiths, Chater, Kemp, Perfors, & Tenenbaum, 2010; Tenenbaum, Kemp, Griffiths, & Goodman, 2011). One way people acquire this prior knowledge is through "learning-to-learn," a term introduced by Harlow (1949) and closely related to the machine learning notions of "transfer learning," "multi-task learning," or "representation learning." These terms refer to ways that learning a new task (or a new concept) can be accelerated through previous or parallel learning of other related tasks (or other related concepts). The strong priors, constraints, or inductive bias needed to learn a particular task quickly are often shared to some extent with other related tasks. A range of mechanisms have been developed to adapt the learner's inductive bias as they learn specific tasks, and then apply these inductive biases to new tasks.
In hierarchical Bayesian modeling (Gelman, Carlin, Stern, & Rubin, 2004), a general prior on concepts is shared by multiple specific concepts, and the prior itself is learned over the course of learning the specific concepts (Salakhutdinov, Tenenbaum, & Torralba, 2012, 2013). These models have been used to explain the dynamics of human learning-to-learn in many areas of cognition, including word learning, causal learning, and learning intuitive theories of physical and social domains (Tenenbaum et al., 2011). In machine vision, for deep convolutional networks or other discriminative methods that form the core of recent recognition systems, learning-to-learn can occur through the sharing of features between the models learned for old objects (or old tasks) and the models learned for new objects (or new tasks) (Anselmi et al., 2016; Baxter, 2000; Bottou, 2014; Lopez-Paz, Bottou, Schölkopf, & Vapnik, 2016; Rusu et al., 2016; Salakhutdinov, Torralba, & Tenenbaum, 2011; Srivastava & Salakhutdinov, 2013; Torralba, Murphy, & Freeman, 2007; Zeiler & Fergus, 2014). Neural networks can also learn-to-learn by optimizing hyperparameters, including the form of their weight update rule (Andrychowicz et al., 2016), over a set of related tasks.
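To make the hierarchical idea concrete, the following is a minimal sketch of our own (not a model from the papers cited above): a set of hypothetical concepts shares a Beta prior on a feature probability, the prior is fit by empirical Bayes on a coarse grid, and the learned prior then supports one-shot learning of a new concept. All numbers and variable names are illustrative assumptions.

```python
import math
import numpy as np

# Toy hierarchical-Bayes sketch: old concepts share a Beta(a, b) prior,
# which is itself learned and then reused for a brand-new concept.
def betaln(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

rng = np.random.default_rng(0)
n_obs = 10
thetas = rng.beta(8.0, 2.0, size=20)            # old concepts' true rates
counts = rng.binomial(n_obs, thetas)            # 10 observations each

def log_evidence(a, b):
    # log p(counts | a, b), up to a constant: Beta-Binomial marginals
    return sum(betaln(a + k, b + n_obs - k) - betaln(a, b) for k in counts)

grid = np.linspace(0.5, 15.0, 30)               # coarse empirical-Bayes fit
a_hat, b_hat = max(((a, b) for a in grid for b in grid),
                   key=lambda ab: log_evidence(*ab))

# One-shot learning: a single success under the learned prior already
# yields a confident estimate for the new concept's feature rate.
print((a_hat + 1) / (a_hat + b_hat + 1))
```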
While transfer learning and multi-task learning are already important themes across AI, and in deep learning in particular, they have not yet led to systems that learn new tasks as rapidly and flexibly as humans do. Capturing more human-like learning-to-learn dynamics in deep networks and other machine learning approaches could facilitate much stronger transfer to new tasks and new problems. To gain the full benefit that humans get from learning-to-learn, however, AI systems might first need to adopt the more compositional (or more language-like, see Section 5) and causal forms of representations that we have argued for above.
We can see this potential in both of our Challenge problems. In the Characters Challenge as presented in Lake, Salakhutdinov, and Tenenbaum (2015), all viable models use "pre-training"
on many character concepts in a background set of alphabets to tune the representations they use to learn new character concepts in a test set of alphabets. But to perform well, current neural network approaches require much more pre-training than do people or our Bayesian program learning approach, and they are still far from solving the Characters Challenge.8
We cannot be sure how people get to the knowledge they have in this domain, but we do understand how this works in BPL, and we think people might be similar. BPL transfers readily to new concepts because it learns about object parts, sub-parts, and relations, capturing learning about what each concept is like and what concepts are like in general. It is crucial that learning-to-learn occurs at multiple levels of the hierarchical generative process. Previously learned primitive actions and larger generative pieces can be re-used and re-combined to define new generative models for new characters (Figure 5A). Further transfer occurs by learning about the typical levels of variability within a typical generative model; this provides knowledge about how far and in what ways to generalize when we have seen only one example of a new character, which on its own could not possibly carry any information about variance. BPL could also benefit from deeper forms of learning-to-learn than it currently does: Some of the important structure it exploits to generalize well is built into the prior and not learned from the background pre-training, whereas people might learn this knowledge, and ultimately a human-like machine learning system should as well.
Analogous learning-to-learn occurs for humans in learning many new object models, in vision and cognition: Consider the novel two-wheeled vehicle in Figure 1B, where learning-to-learn can operate through the transfer of previously learned parts and relations (sub-concepts such as wheels, motors, handle bars, attached, powered by, etc.) that reconfigure compositionally to create a model of the new concept. If deep neural networks could adopt similarly compositional, hierarchical, and causal representations, we expect they might benefit more from learning-to-learn.
In the Frostbite Challenge, and in video games more generally, there is a similar interdependence between the form of the representation and the effectiveness of learning-to-learn. People seem to transfer knowledge at multiple levels, from low-level perception to high-level strategy, exploiting compositionality at all levels. Most basically, they immediately parse the game environment into objects, types of objects, and causal relations between them. People also understand that video games like this have goals, which often involve approaching or avoiding objects based on their type. Whether the person is a child or a seasoned gamer, it seems obvious that interacting with the birds and fish will change the game state in some way, either good or bad, because video games typically yield costs or rewards for these types of interactions (e.g., dying or points). These types of hypotheses can be quite specific and rely on prior knowledge: When the polar bear first appears and tracks the agent's location during advanced levels (Figure 2D), an attentive learner is sure to avoid it. Depending on the level, ice floes can be spaced far apart (Figure 2A-C) or close together (Figure 2D), suggesting the agent may be able to cross some gaps but not others. In this way,
8Humans typically have direct experience with only one or a few alphabets, and even with related drawing experience, this likely amounts to the equivalent of a few hundred character-like visual concepts at most. For BPL, pre-training with characters in only five alphabets (for around 150 character types in total) is sufficient to perform human-level one-shot classification and generation of new examples. The best neural network classifiers (deep convolutional networks) have error rates approximately five times higher than humans when pre-trained with five alphabets (23% versus 4% error), and two to three times higher when pre-training on six times as much data (30 alphabets) (Lake, Salakhutdinov, & Tenenbaum, 2015). The current need for extensive pre-training is illustrated for deep generative models by Rezende et al. (2016), who present extensions of the DRAW architecture capable of one-shot learning.
general world knowledge and previous video games may help inform exploration and generalization in new scenarios, helping people learn maximally from a single mistake or avoid mistakes altogether.
Deep reinforcement learning systems for playing Atari games have had some impressive successes in transfer learning, but they still have not come close to learning to play new games as quickly as humans can. For example, Parisotto et al. (2016) present the "Actor-mimic" algorithm that first learns 13 Atari games by watching an expert network play and trying to mimic the expert network's action selection and/or internal states (for about four million frames of experience each, or 18.5 hours per game). This algorithm can then learn new games faster than a randomly initialized DQN: Scores that might have taken four or five million frames of learning to reach might now be reached after one or two million frames of practice. But anecdotally we find that humans can still reach these scores with a few minutes of practice, requiring far less experience than the DQNs.
In sum, the interaction between representation and previous experience may be key to building machines that learn as fast as people do. A deep learning system trained on many video games may not, by itself, be enough to learn new games as quickly as people do. Yet if such a system aims to learn compositionally structured causal models of each game – built on a foundation of intuitive physics and psychology – it could transfer knowledge more efficiently and thereby learn new games much more quickly.
# 4.3 Thinking Fast
The previous section focused on learning rich models from sparse data and proposed ingredients for achieving these human-like learning abilities. These cognitive abilities are even more striking when considering the speed of perception and thought – the amount of time required to understand a scene, think a thought, or choose an action. In general, richer and more structured models require more complex (and slower) inference algorithms – similar to how complex models require more data – making the speed of perception and thought all the more remarkable.
The combination of rich models with efficient inference suggests another way psychology and neuroscience may usefully inform AI. It also suggests an additional way to build on the successes of deep learning, where efficient inference and scalable learning are important strengths of the approach. This section discusses possible paths towards resolving the conflict between fast inference and structured representations, including Helmholtz-machine-style approximate inference in generative models (Dayan, Hinton, Neal, & Zemel, 1995; Hinton et al., 1995) and cooperation between model-free and model-based reinforcement learning systems.
# 4.3.1 Approximate inference in structured models
Hierarchical Bayesian models operating over probabilistic programs (Goodman et al., 2008; Lake, Salakhutdinov, & Tenenbaum, 2015; Tenenbaum et al., 2011) are equipped to deal with theory-like structures and rich causal representations of the world, yet there are formidable algorithmic challenges for efficient inference. Computing a probability distribution over an entire space of programs is usually intractable, and often even finding a single high-probability program poses an intractable search problem. In contrast, while representing intuitive theories and structured causal
models is less natural in deep neural networks, recent progress has demonstrated the remarkable effectiveness of gradient-based learning in high-dimensional parameter spaces. A complete account of learning and inference must explain how the brain does so much with limited computational resources (Gershman, Horvitz, & Tenenbaum, 2015; Vul, Goodman, Griffiths, & Tenenbaum, 2014).
Popular algorithms for approximate inference in probabilistic machine learning have been proposed as psychological models (see Griffiths, Vul, & Sanborn, 2012, for a review). Most prominently, it has been proposed that humans can approximate Bayesian inference using Monte Carlo methods, which stochastically sample the space of possible hypotheses and evaluate these samples according to their consistency with the data and prior knowledge (Bonawitz, Denison, Griffiths, & Gopnik, 2014; Gershman, Vul, & Tenenbaum, 2012; T. D. Ullman, Goodman, & Tenenbaum, 2012; Vul et al., 2014). Monte Carlo sampling has been invoked to explain behavioral phenomena ranging from children's response variability (Bonawitz et al., 2014) to garden-path effects in sentence processing (Levy, Reali, & Griffiths, 2009) and perceptual multistability (Gershman et al., 2012; Moreno-Bote, Knill, & Pouget, 2011). Moreover, we are beginning to understand how such methods could be implemented in neural circuits (Buesing, Bill, Nessler, & Maass, 2011; Huang & Rao, 2014; Pecevski, Buesing, & Maass, 2011).9
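As a concrete illustration of the sampling idea (a toy of our own, not any of the cited models), the sketch below approximates a posterior over a single hypothesis parameter with a Metropolis-Hastings random walk; the data and proposal scale are made-up assumptions.

```python
import numpy as np

# Toy Metropolis-Hastings sketch: sample hypotheses in proportion to
# prior x likelihood, keeping a running chain of accepted hypotheses.
rng = np.random.default_rng(1)
data = np.array([1, 1, 0, 1, 1, 1, 0, 1])       # made-up observations

def log_post(theta):                             # flat prior on (0, 1)
    if not 0.0 < theta < 1.0:
        return -np.inf
    k, n = data.sum(), len(data)
    return k * np.log(theta) + (n - k) * np.log(1.0 - theta)

samples, theta = [], 0.5
for _ in range(5000):
    prop = theta + rng.normal(scale=0.1)         # propose nearby hypothesis
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                             # accept with MH probability
    samples.append(theta)

print(np.mean(samples[1000:]))                   # exact answer here: 0.7
```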
While Monte Carlo methods are powerful and come with asymptotic guarantees, it is challenging to make them work on complex problems like program induction and theory learning. When the hypothesis space is vast and only a few hypotheses are consistent with the data, how can good models be discovered without exhaustive search? In at least some domains, people may not have an especially clever solution to this problem, instead grappling with the full combinatorial complexity of theory learning (T. D. Ullman et al., 2012). Discovering new theories can be slow and arduous, as testified by the long timescale of cognitive development, and learning in a saltatory fashion (rather than through gradual adaptation) is characteristic of aspects of human intelligence, including discovery and insight during development (L. Schulz, 2012), problem-solving (Sternberg & Davidson, 1995), and epoch-making discoveries in scientific research (Langley, Bradshaw, Simon, & Zytkow, 1987). Discovering new theories can also happen much more quickly – a person learning the rules of Frostbite will probably undergo a loosely ordered sequence of "Aha!" moments: they will learn that jumping on ice floes causes them to change color, that changing the color of ice floes causes an igloo to be constructed piece-by-piece, that birds make you lose points, that fish make you gain points, that you can change the direction of an ice floe at the cost of one igloo piece, and so on. These little fragments of a "Frostbite theory" are assembled to form a causal understanding of the game relatively quickly, in what seems more like a guided process than arbitrary proposals in a Monte Carlo inference scheme. Similarly, as described in the Characters Challenge, people can quickly infer motor programs to draw a new character in a similarly guided process.
For domains where program or theory learning happens quickly, it is possible that people employ inductive biases not only to evaluate hypotheses, but also to guide hypothesis selection. L. Schulz (2012) has suggested that abstract structural properties of problems contain information about the abstract forms of their solutions. Even without knowing the answer to the question "Where is the deepest point in the Pacific Ocean?" one still knows that the answer must be a location on a
9In the interest of brevity, we do not discuss here another important vein of work linking neural circuits to variational approximations (Bastos et al., 2012), which have received less attention in the psychological literature.
map. The answer "20 inches" to the question "What year was Lincoln born?" can be invalidated a priori, even without knowing the correct answer. In recent experiments, Tsividis, Tenenbaum, and Schulz (2015) found that children can use high-level abstract features of a domain to guide hypothesis selection, by reasoning about distributional properties like the ratio of seeds to flowers, and dynamical properties like periodic or monotonic relationships between causes and effects (see also Magid, Sheskin, & Schulz, 2015).
How might efficient mappings from questions to a plausible subset of answers be learned? Recent work in AI spanning both deep learning and graphical models has attempted to tackle this challenge by "amortizing" probabilistic inference computations into an efficient feed-forward mapping (Eslami, Tarlow, Kohli, & Winn, 2014; Heess, Tarlow, & Winn, 2013; A. Mnih & Gregor, 2014; Stuhlmüller, Taylor, & Goodman, 2013). We can also think of this as "learning to do inference," which is independent from the ideas of learning as model building discussed in the previous section. These feed-forward mappings can be learned in various ways, for example, using paired generative/recognition networks (Dayan et al., 1995; Hinton et al., 1995) and variational optimization (Gregor et al., 2015; A. Mnih & Gregor, 2014; Rezende, Mohamed, & Wierstra, 2014) or nearest-neighbor density estimation (Kulkarni, Kohli, Tenenbaum, & Mansinghka, 2015; Stuhlmüller et al., 2013). One implication of amortization is that solutions to different problems will become correlated due to the sharing of amortized computations; some evidence for inferential correlations in humans was reported by Gershman and Goodman (2014). This trend is an avenue of potential integration of deep learning models with probabilistic models and probabilistic programming: training neural networks to help perform probabilistic inference in a generative model or a probabilistic program (Eslami et al., 2016; Kulkarni, Whitney, Kohli, & Tenenbaum, 2015; Yildirim, Kulkarni, Freiwald, & Tenenbaum, 2015). Another avenue for potential integration is through differentiable programming (Dalrymple, 2016), by ensuring that the program-like hypotheses are differentiable and thus learnable via gradient descent, a possibility discussed in the concluding section (Section 6.1).
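A minimal sketch of the amortization idea, under our own toy generative model rather than any cited system: pairs of latents and data are sampled from the model, a feed-forward recognizer is fit to map data back to latents, and test-time inference is then a single fast pass.

```python
import numpy as np

# Toy amortized-inference sketch: train a recognizer on samples from the
# generative model, so inference no longer requires search or sampling.
rng = np.random.default_rng(2)

z = rng.normal(size=10000)                  # latent causes
x = z + rng.normal(scale=0.5, size=10000)   # noisy observations of them

w = np.polyfit(x, z, deg=1)                 # linear recognizer for E[z | x]

x_test = 1.2
print(np.polyval(w, x_test))                # ~0.96; exact: x / (1 + 0.25)
```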
# 4.3.2 Model-based and model-free reinforcement learning
The DQN introduced by V. Mnih et al. (2015) used a simple form of model-free reinforcement learning in a deep neural network that allows for fast selection of actions. There is indeed substantial evidence that the brain uses similar model-free learning algorithms in simple associative learning or discrimination learning tasks (see Niv, 2009, for a review). In particular, the phasic firing of midbrain dopaminergic neurons is qualitatively (Schultz, Dayan, & Montague, 1997) and quantitatively (Bayer & Glimcher, 2005) consistent with the reward prediction error that drives updating of model-free value estimates.
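For concreteness, here is the tabular temporal-difference update behind this account (a textbook sketch, not the DQN itself); the state names and constants are illustrative.

```python
# Tabular temporal-difference (TD) sketch of the reward prediction error
# discussed above: delta is the quantity dopamine firing is thought to track.
alpha, gamma = 0.1, 0.99            # learning rate, discount (illustrative)

def td_update(V, s, r, s_next):
    delta = r + gamma * V[s_next] - V[s]    # reward prediction error
    V[s] += alpha * delta                   # move value toward the target
    return delta

V = {"cue": 0.0, "outcome": 0.0}
print(td_update(V, "cue", r=1.0, s_next="outcome"), V["cue"])
```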
Model-free learning is not, however, the whole story. Considerable evidence suggests that the brain also has a model-based learning system, responsible for building a "cognitive map" of the environment and using it to plan action sequences for more complex tasks (Daw, Niv, & Dayan, 2005; Dolan & Dayan, 2013). Model-based planning is an essential ingredient of human intelligence, enabling flexible adaptation to new tasks and goals; it is where all of the rich model-building abilities discussed in the previous sections earn their value as guides to action. As we argued in our discussion of Frostbite, one can design numerous variants of this simple video game that are
identical except for the reward function – that is, governed by an identical environment model of state-action-dependent transitions. We conjecture that a competent Frostbite player can easily shift behavior appropriately, with little or no additional learning, and it is hard to imagine a way of doing that other than having a model-based planning approach in which the environment model can be modularly combined with arbitrary new reward functions and then deployed immediately for planning. One boundary condition on this flexibility is the fact that the skills become "habitized" with routine application, possibly reflecting a shift from model-based to model-free control. This shift may arise from a rational arbitration between learning systems to balance the trade-off between flexibility and speed (Daw et al., 2005; Keramati, Dezfouli, & Piray, 2011).
Similarly to how probabilistic computations can be amortized for efficiency (see previous section), plans can be amortized into cached values by allowing the model-based system to simulate training data for the model-free system (Sutton, 1990). This process might occur offline (e.g., in dreaming or quiet wakefulness), suggesting a form of consolidation in reinforcement learning (Gershman, Markman, & Otto, 2014). Consistent with the idea of cooperation between learning systems, a recent experiment demonstrated that model-based behavior becomes automatic over the course of training (Economides, Kurth-Nelson, Lübbert, Guitart-Masip, & Dolan, 2015). Thus, a marriage of flexibility and efficiency might be achievable if we use the human reinforcement learning systems as guidance.
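A minimal Dyna-style sketch of this idea (our simplification in the spirit of Sutton, 1990, not his exact formulation): real transitions update both a value table and a one-step model, and the model then replays simulated transitions "offline" to train the model-free values further. All names and constants here are illustrative.

```python
import random

# Minimal Dyna-style sketch: the learned model "dreams" extra transitions
# so the model-free Q table is trained beyond the real experience.
alpha, gamma = 0.1, 0.95
Q, model = {}, {}                   # Q[(s, a)]; model[(s, a)] = (r, s2)

def q_update(s, a, r, s2, actions):
    q = Q.get((s, a), 0.0)
    target = r + gamma * max(Q.get((s2, b), 0.0) for b in actions)
    Q[(s, a)] = q + alpha * (target - q)

def real_step(s, a, r, s2, actions, n_dream=10):
    q_update(s, a, r, s2, actions)          # learn from real experience
    model[(s, a)] = (r, s2)                 # update the learned model
    for _ in range(n_dream):                # replay sampled model memories
        (ss, aa), (rr, ss2) = random.choice(list(model.items()))
        q_update(ss, aa, rr, ss2, actions)

real_step("s0", "right", 1.0, "s1", actions=["left", "right"])
print(Q[("s0", "right")])
```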
Intrinsic motivation also plays an important role in human learning and behavior (Berlyne, 1966; Deci & Ryan, 1975; Harlow, 1950). While much of the previous discussion assumes the standard view of behavior as seeking to maximize reward and minimize punishment, all externally provided rewards are reinterpreted according to the "internal value" of the agent, which may depend on the current goal and mental state. There may also be an intrinsic drive to reduce uncertainty and construct models of the environment (Edelman, 2015; Schmidhuber, 2015), closely related to learning-to-learn and multi-task learning. Deep reinforcement learning is only just starting to address intrinsically motivated learning (Kulkarni et al., 2016; Mohamed & Rezende, 2015).
# 5 Responses to common questions
In discussing the arguments in this paper with colleagues, three lines of questioning or critiques have come up frequently. We think it is helpful to address these points directly, to maximize the potential for moving forward together.
1. Comparing the learning speeds of humans and neural networks on specific tasks is not meaningful, because humans have extensive prior experience.
It may seem unfair to compare neural networks and humans on the amount of training experience required to perform a task, such as learning to play new Atari games or learning new handwritten characters, when humans have had extensive prior experience that these networks have not benefited from. People have had many hours playing other games, and experience reading or writing many other handwritten characters, not to mention experience in a variety of more loosely related tasks. If neural networks were "pre-trained" on the same experience, the argument goes, then they might generalize similarly to humans when exposed to novel tasks.
This has been the rationale behind multi-task learning or transfer learning, a strategy with a long history that has shown some promising results recently with deep networks (e.g., Donahue et al., 2013; Luong, Le, Sutskever, Vinyals, & Kaiser, 2015; Parisotto et al., 2016). Furthermore, some deep learning advocates argue, the human brain effectively benefits from even more experience through evolution. If deep learning researchers see themselves as trying to capture the equivalent of humans' collective evolutionary experience, this would be equivalent to a truly immense "pre-training" phase.
We agree that humans have a much richer starting point than neural networks when learning most new tasks, including learning a new concept or to play a new video game. That is the point of the "developmental start-up software" and other building blocks that we argued are key to creating this richer starting point. We are less committed to a particular story regarding the origins of the ingredients, including the relative roles of genetically programmed and experience-driven developmental mechanisms in building these components in early infancy. Either way, we see them as fundamental building blocks for facilitating rapid learning from sparse data.
Learning-to-learn across multiple tasks is conceivably one route to acquiring these ingredients, but simply training conventional neural networks on many related tasks may not be sufficient to generalize in human-like ways for novel tasks. As we argued in Section 4.2.3, successful learning-to-learn – or at least, human-level transfer learning – is enabled by having models with the right representational structure, including the other building blocks discussed in this paper. Learning-to-learn is a powerful ingredient, but it can be more powerful when operating over compositional representations that capture the underlying causal structure of the environment, while also building on the intuitive physics and psychology.
Finally, we recognize that some researchers still hold out hope that if only they can just get big enough training datasets, sufficiently rich tasks, and enough computing power – far beyond what has been tried out so far – then deep learning methods might be sufficient to learn representations equivalent to what evolution and learning provides humans with. We can sympathize with that hope and believe it deserves further exploration, although we are not sure it is a realistic one. We understand in principle how evolution could build a brain with the cognitive ingredients we discuss here. Stochastic hill-climbing is slow – it may require massively parallel exploration, over millions of years with innumerable dead-ends – but it can build complex structures with complex functions if we are willing to wait long enough. In contrast, trying to build these representations from scratch using backpropagation, deep Q-learning or any stochastic gradient-descent weight update rule in a fixed network architecture may be unfeasible regardless of how much training data are available. To build these representations from scratch might require exploring fundamental structural variations in the network's architecture, which gradient-based learning in weight space is not prepared to do. Although deep learning researchers do explore many such architectural variations, and have been devising increasingly clever and powerful ones recently, it is the researchers who are driving and directing this process. Exploration and creative innovation in the space of network architectures have not yet been made algorithmic. Perhaps they could, using genetic programming methods (Koza, 1992) or other structure-search algorithms (Yamins et al., 2014). We think this would be a fascinating and promising direction to explore, but we may have to acquire more patience than machine learning researchers typically express with their algorithms: the dynamics of structure-search may look much more like the slow random hill-climbing of evolution than the smooth, methodical progress of stochastic gradient-descent. An alternative strategy is to
build in appropriate infant-like knowledge representations and core ingredients as the starting point for our learning-based AI systems, or to build learning systems with strong inductive biases that guide them in this direction.
Regardless of which way an AI developer chooses to go, our main points are orthogonal to this objection. There is a set of core cognitive ingredients for human-like learning and thought. Deep learning models could incorporate these ingredients through some combination of additional structure and perhaps additional learning mechanisms, but for the most part have yet to do so. Any approach to human-like AI, whether based on deep learning or not, is likely to gain from incorporating these ingredients.
2. Biological plausibility suggests theories of intelligence should start with neural networks.
We have focused on how cognitive science can motivate and guide efforts to engineer human-like AI, in contrast to some advocates of deep neural networks who cite neuroscience for inspiration. Our approach is guided by a pragmatic view that the clearest path to a computational formalization of human intelligence comes from understanding the "software" before the "hardware." In the case of this article, we proposed key ingredients of this software in previous sections.
Nonetheless, a cognitive approach to intelligence should not ignore what we know about the brain. Neuroscience can provide valuable inspirations for both cognitive models and AI researchers: the centrality of neural networks and model-free reinforcement learning in our proposals for "Thinking fast" (Section 4.3) are prime exemplars. Neuroscience can also in principle impose constraints on cognitive accounts, both at the cellular and systems level. If deep learning embodies brain-like computational mechanisms and those mechanisms are incompatible with some cognitive theory, then this is an argument against that cognitive theory and in favor of deep learning. Unfortunately, what we "know" about the brain is not all that clear-cut. Many seemingly well-accepted ideas regarding neural computation are in fact biologically dubious, or uncertain at best – and thus should not disqualify cognitive ingredients that pose challenges for implementation within that approach.
For example, most neural networks use some form of gradient-based (e.g., backpropagation) or Hebbian learning. It has long been argued, however, that backpropagation is not biologically plausible; as Crick (1989) famously pointed out, backpropagation seems to require that information be transmitted backwards along the axon, which does not fit with realistic models of neuronal function (although recent models circumvent this problem in various ways; see Liao, Leibo, & Poggio, 2015; Lillicrap, Cownden, Tweed, & Akerman, 2014; Scellier & Bengio, 2016). This has not prevented backpropagation being put to good use in connectionist models of cognition or in building deep neural networks for AI. Neural network researchers must regard it as a very good thing, in this case, that concerns of biological plausibility did not hold back research on this particular algorithmic approach to learning.10 We strongly agree: Although neuroscientists have not found any mechanisms for implementing backpropagation in the brain, neither have they produced definitive evidence against it. The existing data simply offer little constraint either way, and backpropagation has been of obviously great value in engineering today's best pattern recognition systems.
10Michael Jordan made this point forcefully in his 2015 speech accepting the Rumelhart Prize.
Hebbian learning is another case in point. In the form of long-term potentiation (LTP) and spike-timing-dependent plasticity (STDP), Hebbian learning mechanisms are often cited as biologically supported (Bi & Poo, 2001). However, the cognitive significance of any biologically grounded form of Hebbian learning is unclear. Gallistel and Matzel (2013) have persuasively argued that the critical interstimulus interval for LTP is orders of magnitude smaller than the intervals that are behaviorally relevant in most forms of learning. In fact, experiments that simultaneously manipulate the interstimulus and intertrial intervals demonstrate that no critical interval exists. Behavior can persist for weeks or months, whereas LTP decays to baseline over the course of days (Power, Thompson, Moyer, & Disterhoft, 1997). Learned behavior is rapidly reacquired after extinction (Bouton, 2004), whereas no such facilitation is observed for LTP (de Jonge & Racine, 1985). Most relevantly for our focus, it would be especially challenging to try to implement the ingredients described in this article using purely Hebbian mechanisms.
Claims of biological plausibility or implausibility usually rest on rather stylized assumptions about the brain that are wrong in many of their details. Moreover, these claims usually pertain to the cellular and synaptic level, with few connections made to systems-level neuroscience and subcortical brain organization (Edelman, 2015). Understanding which details matter and which do not requires a computational theory (Marr, 1982). Moreover, in the absence of strong constraints from neuroscience, we can turn the biological argument around: Perhaps a hypothetical biological mechanism should be viewed with skepticism if it is cognitively implausible. In the long run, we are optimistic that neuroscience will eventually place more constraints on theories of intelligence. For now, we believe cognitive plausibility offers a surer foundation.
3. Language is essential for human intelligence. Why is it not more prominent here?
We have said little in this article about people's ability to communicate and think in natural language, a distinctively human cognitive capacity where machine capabilities lag strikingly. Certainly one could argue that language should be included on any short list of key ingredients in human intelligence: for instance, Mikolov et al. (2016) featured language prominently in their recent paper sketching challenge problems and a road map for AI. Moreover, while natural language processing is an active area of research in deep learning (e.g., Bahdanau, Cho, & Bengio, 2015; Mikolov, Sutskever, & Chen, 2013; K. Xu et al., 2015), it is widely recognized that neural networks are far from implementing human language abilities. The question is, how do we develop machines with a richer capacity for language?
We ourselves believe that understanding language and its role in intelligence goes hand-in-hand with understanding the building blocks discussed in this article. It is also true that language builds on the core abilities for intuitive physics, intuitive psychology, and rapid learning with compositional, causal models that we do focus on. These capacities are in place before children master language, and they provide the building blocks for linguistic meaning and language acquisition (Carey, 2009; Jackendoff, 2003; Kemp, 2007; O'Donnell, 2015; Pinker, 2007; F. Xu & Tenenbaum, 2007). We hope that by better understanding these earlier ingredients and how to implement and integrate them computationally, we will be better positioned to understand linguistic meaning and acquisition in computational terms, and to explore other ingredients that make human language possible.
What else might we need to add to these core ingredients to get language? Many researchers have speculated about key features of human cognition that give rise to language and other uniquely
human modes of thought: Is it recursion, or some new kind of recursive structure building ability (Berwick & Chomsky, 2016; Hauser, Chomsky, & Fitch, 2002)? Is it the ability to reuse symbols by name (Deacon, 1998)? Is it the ability to understand others intentionally and build shared intentionality (Bloom, 2000; Frank, Goodman, & Tenenbaum, 2009; Tomasello, 2010)? Is it some new version of these things, or is it just more of the aspects of these capacities that are already present in infants? These are important questions for future work with the potential to expand the list of key ingredients; we did not intend our list to be complete.
Finally, we should keep in mind all the ways that acquiring language extends and enriches the ingredients of cognition we focus on in this article. The intuitive physics and psychology of infants is likely limited to reasoning about objects and agents in their immediate spatial and temporal vicinity, and to their simplest properties and states. But with language, older children become able to reason about a much wider range of physical and psychological situations (Carey, 2009). Language also facilitates more powerful learning-to-learn and compositionality (Mikolov et al., 2016), allowing people to learn more quickly and flexibly by representing new concepts and thoughts in relation to existing concepts (Lupyan & Bergen, 2016; Lupyan & Clark, 2015). Ultimately, the full project of building machines that learn and think like humans must have language at its core.
# 6 Looking forward
In the last few decades, AI and machine learning have made remarkable progress: Computer programs beat chess masters; AI systems beat Jeopardy champions; apps recognize photos of your friends; machines rival humans on large-scale object recognition; smart phones recognize (and, to a limited extent, understand) speech. The coming years promise still more exciting AI applications, in areas as varied as self-driving cars, medicine, genetics, drug design and robotics. As a field, AI should be proud of these accomplishments, which have helped move research from academic journals into systems that improve our daily lives.
We should also be mindful of what AI has achieved and what it has not. While the pace of progress has been impressive, natural intelligence is still by far the best example of intelligence. Machine performance may rival or exceed human performance on particular tasks, and algorithms may take inspiration from neuroscience or aspects of psychology, but it does not follow that the algorithm learns or thinks like a person. This is a higher bar worth reaching for, potentially leading to more powerful algorithms while also helping unlock the mysteries of the human mind.
When comparing people and the current best algorithms in AI and machine learning, people learn from less data and generalize in richer and more flexible ways. Even for relatively simple concepts such as handwritten characters, people need to see just one or a few examples of a new concept before being able to recognize new examples, generate new examples, and generate new concepts based on related ones (Figure 1A). So far, these abilities elude even the best deep neural networks for character recognition (Ciresan et al., 2012), which are trained on many examples of each concept and do not flexibly generalize to new tasks. We suggest that the comparative power and flexibility of people's inferences come from the causal and compositional nature of their representations.
We believe that deep learning and other learning paradigms can move closer to human-like learning
and thought if they incorporate psychological ingredients including those outlined in this paper. Before closing, we discuss some recent trends that we see as some of the most promising developments in deep learning – trends we hope will continue and lead to more important advances.
# 6.1 Promising directions in deep learning
There has been recent interest in integrating psychological ingredients with deep neural networks, especially selective attention (Bahdanau et al., 2015; V. Mnih, Heess, Graves, & Kavukcuoglu, 2014; K. Xu et al., 2015), augmented working memory (Graves et al., 2014, 2016; Grefenstette et al., 2015; Sukhbaatar et al., 2015; Weston et al., 2015), and experience replay (McClelland, McNaughton, & O'Reilly, 1995; V. Mnih et al., 2015). These ingredients are lower-level than the key cognitive ingredients discussed in this paper, yet they suggest a promising trend of using insights from cognitive psychology to improve deep learning, one that may be furthered by incorporating higher-level cognitive ingredients.
Paralleling the human perceptual apparatus, selective attention forces deep learning models to process raw perceptual data as a series of high-resolution "foveal glimpses" rather than all at once. Somewhat surprisingly, the incorporation of attention has led to substantial performance gains in a variety of domains, including in machine translation (Bahdanau et al., 2015), object recognition (V. Mnih et al., 2014), and image caption generation (K. Xu et al., 2015). Attention may help these models in several ways. It helps to coordinate complex (often sequential) outputs by attending to only specific aspects of the input, allowing the model to focus on smaller sub-tasks rather than solving an entire problem in one shot. For instance, during caption generation, the attentional window has been shown to track the objects as they are mentioned in the caption, where the network may focus on a boy and then a Frisbee when producing a caption like, "A boy throws a Frisbee" (K. Xu et al., 2015). Attention also allows larger models to be trained without requiring every model parameter to affect every output or action. In generative neural network models, attention has been used to concentrate on generating particular regions of the image rather than the whole image at once (Gregor et al., 2015). This could be a stepping stone towards building more causal generative models in neural networks, such as a neural version of the Bayesian Program Learning model that could be applied to tackling the Characters Challenge (Section 3.1).
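The core computation is simple to sketch (our toy version, not the exact parameterization of the cited models): a query scores each input position, a softmax turns the scores into weights, and the "glimpse" is the weighted average of the inputs.

```python
import numpy as np

# Toy soft-attention sketch: score positions with a query, normalize with
# a softmax, and read a weighted summary instead of the whole input.
def attend(query, keys, values):
    scores = keys @ query                       # relevance of each position
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()
    return weights @ values, weights            # glimpse, attention map

rng = np.random.default_rng(0)
keys = values = rng.normal(size=(7, 16))        # 7 positions, 16-dim each
glimpse, w = attend(rng.normal(size=16), keys, values)
print(w.round(2))                               # where the model "looks"
```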
Researchers are also developing neural networks with "working memories" that augment the shorter-term memory provided by unit activation and the longer-term memory provided by the connection weights (Graves et al., 2014, 2016; Grefenstette et al., 2015; Reed & de Freitas, 2016; Sukhbaatar et al., 2015; Weston et al., 2015). These developments are also part of a broader trend towards "differentiable programming," the incorporation of classic data structures, such as random access memory, stacks, and queues, into gradient-based learning systems (Dalrymple, 2016). For example, the Neural Turing Machine (NTM; Graves et al., 2014) and its successor the Differentiable Neural Computer (DNC; Graves et al., 2016) are neural networks augmented with a random access external memory with read and write operations that maintains end-to-end differentiability. The NTM has been trained to perform sequence-to-sequence prediction tasks such as sequence copying and sorting, and the DNC has been applied to solving block puzzles and finding paths between nodes in a graph (after memorizing the graph). Additionally, Neural Programmer-Interpreters learn to represent and execute algorithms such as addition and sorting from fewer examples by observing
input-output pairs (like the NTM and DNC) as well as execution traces (Reed & de Freitas, 2016). Each model seems to learn genuine programs from examples, albeit in a representation more like assembly language than a high-level programming language.
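To illustrate why such memories stay trainable end-to-end, here is a toy content-based read in the spirit of the NTM/DNC (a simplification of ours, not the published architectures): because the read is a soft mixture over all memory slots, gradients can flow through the addressing itself.

```python
import numpy as np

# Toy content-based memory read: address slots by similarity to a key,
# then return a soft (differentiable) blend of their contents.
def read(memory, key, beta=5.0):
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sims = memory @ key / norms                 # cosine similarity per slot
    w = np.exp(beta * sims)
    w /= w.sum()                                # sharp-but-soft address
    return w @ memory                           # blended read vector

M = np.eye(4)                                   # 4 slots of 4-dim content
print(read(M, np.array([0.0, 1.0, 0.0, 0.0])).round(2))
```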
While this new generation of neural networks has yet to tackle the types of challenge problems introduced in this paper, differentiable programming suggests the intriguing possibility of combining the best of program induction and deep learning. The types of structured representations and model building ingredients discussed in this paper – objects, forces, agents, causality, and compositionality – help to explain important facets of human learning and thinking, yet they also bring challenges for performing efficient inference (Section 4.3.1). Deep learning systems have not yet shown they can work with these representations, but they have demonstrated the surprising effectiveness of gradient descent in large models with high-dimensional parameter spaces. A synthesis of these approaches, able to perform efficient inference over programs that richly model the causal structure an infant sees in the world, would be a major step forward for building human-like AI.
Another example of combining pattern recognition and model-based search comes from recent AI research into the game Go. Go is considerably more difficult for AI than chess, and it was only recently that a computer program – AlphaGo – first beat a world-class player (Chouard, 2016) by using a combination of deep convolutional neural networks (convnets) and Monte Carlo Tree Search (Silver et al., 2016). Each of these components has made gains against artificial and real Go players (Gelly & Silver, 2008, 2011; Silver et al., 2016; Tian & Zhu, 2016), and the notion of combining pattern recognition and model-based search goes back decades in Go and other games. Showing that these approaches can be integrated to beat a human Go champion is an important AI accomplishment (see Figure 7). Just as important, however, are the new questions and directions it opens up for the long-term project of building genuinely human-like AI.
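As a rough sketch of how the two components can combine (our simplification, not the published AlphaGo system), tree search can select moves with a PUCT-style rule that trades off a child's backed-up value against the convnet's prior probability for that move:

```python
import math

# Toy PUCT-style selection: a child's score mixes its backed-up value with
# the network prior, with the prior's influence shrinking as visits grow.
class Node:
    def __init__(self, prior):
        self.prior = prior          # convnet's probability for this move
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}          # move -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_move(node, c_puct=1.0):
    total = sum(ch.visits for ch in node.children.values())
    def score(ch):
        explore = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return ch.value() + explore
    return max(node.children, key=lambda m: score(node.children[m]))

root = Node(prior=1.0)
root.children = {"a": Node(0.7), "b": Node(0.3)}
print(select_move(root))            # with no visits yet, priors dominate
```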
One worthy goal would be to build an AI system that beats a world-class player with the amount and kind of training human champions receive – rather than overpowering them with Google-scale computational resources. AlphaGo is initially trained on 28.4 million positions and moves from 160,000 unique games played by human experts; it then improves through reinforcement learning, playing 30 million more games against itself. Between the publication of Silver et al. (2016) and before facing world champion Lee Sedol, AlphaGo was iteratively retrained several times in this way; the basic system always learned from 30 million games, but it played against successively stronger versions of itself, effectively learning from 100 million or more games altogether (Silver, 2016). In contrast, Lee has probably played around 50,000 games in his entire life. Looking at numbers like these, it is impressive that Lee can even compete with AlphaGo at all. What would it take to build a professional-level Go AI that learns from only 50,000 games? Perhaps a system that combines the advances of AlphaGo with some of the complementary ingredients for intelligence we argue for here would be a route to that end.
AI could also gain much by trying to match the learning speed and flexibility of normal human Go players. People take a long time to master the game of Go, but as with the Frostbite and Characters challenges (Sections 3.1 and 3.2), humans can learn the basics of the game quickly through a combination of explicit instruction, watching others, and experience. Playing just a few games teaches a human enough to beat someone who has just learned the rules but never played before. Could AlphaGo model these earliest stages of real human learning curves? Human Go players can also adapt what they have learned to innumerable game variants. The Wikipedia page
Figure 7: An AI system for playing Go combining a deep convolutional network (convnet) and model-based search through Monte Carlo Tree Search (MCTS). (A) The convnet on its own can be used to predict the next k moves given the current board. (B) A search tree with the current board state as its root and the current "win/total" statistics at each node. A new MCTS rollout selects moves along the tree according to the MCTS policy (red arrows) until it reaches a new leaf (red circle), where the next move is chosen by the convnet. From there, play proceeds until the game's end according to a pre-defined default policy based on the Pachi program (Baudiš & Gailly, 2012), itself based on MCTS. (C) The end-game result of the new leaf is used to update the search tree. Adapted from Tian and Zhu (2016) with permission.
"Go variants" describes versions such as playing on bigger or smaller board sizes (ranging from 9×9 to 38×38, rather than the usual 19×19 board), or playing on boards of different shapes and connectivity structures (rectangles, triangles, hexagons, even a map of the English city Milton Keynes). The board can be a torus, a Möbius strip, a cube or a diamond lattice in three dimensions. Holes can be cut in the board, in regular or irregular ways. The rules can be adapted to what is known as First Capture Go (the first player to capture a stone wins), NoGo (the player who avoids capturing any enemy stones the longest wins) or Time Is Money Go (players begin with a fixed amount of time and at the end of the game, the number of seconds remaining on each player's clock is added to their score). Players may receive bonuses for creating certain stone patterns or capturing territory near certain landmarks. There could be four or more players, competing individually or in teams. In each of these variants, effective play needs to change from the basic game, but a skilled player can adapt and does not simply have to relearn the game from scratch. Could AlphaGo? While techniques for handling variable-sized inputs in convnets may help for playing on different board sizes (Sermanet et al., 2014), the value functions and policies that AlphaGo learns seem unlikely to generalize as flexibly and automatically as people do. Many of the variants described above would require significant reprogramming and retraining, directed by the smart humans who programmed AlphaGo, not the system itself. As impressive as AlphaGo is in beating the world's best players at the standard game – and it is extremely impressive – the fact that it cannot even conceive of these variants, let alone adapt to them autonomously, is a sign that it does not understand the game as humans do. Human players can understand these variants and adapt to them because they explicitly represent Go as a game, with a goal to beat an adversary who is playing to achieve the same goal they are, governed by rules about how stones can be placed on a board and how board positions are scored. Humans represent their strategies as a response to these constraints, such that if the game changes, they can begin to adjust their strategies accordingly.
In sum, Go presents compelling challenges for AI beyond matching world-class human performance, in trying to match human levels of understanding and generalization, based on the same kinds and amounts of data, explicit instructions, and opportunities for social learning afforded to people. In learning to play Go as quickly and as flexibly as they do, people are drawing on most of the cognitive ingredients this paper has laid out. They are learning-to-learn with compositional knowledge. They are using their core intuitive psychology, and aspects of their intuitive physics (spatial and object representations). And like AlphaGo, they are also integrating model-free pattern recognition with model-based search. We believe that Go AI systems could be built to do all of these things, potentially capturing better how humans learn and understand the game. We believe it would be richly rewarding for AI and cognitive science to pursue this challenge together, and that such systems could be a compelling testbed for the principles this paper argues for – as well as building on all of the progress to date that AlphaGo represents.
# 6.2 Future applications to practical AI problems
In this paper, we suggested some ingredients for building computational models with more human-like learning and thought. These principles were explained in the context of the Characters and Frostbite Challenges, with special emphasis on reducing the amount of training data required and facilitating transfer to novel yet related tasks. We also see ways these ingredients can spur progress on core AI problems with practical applications. Here we offer some speculative thoughts on these
applications.
1. Scene understanding. Deep learning is moving beyond object recognition and towards scene understanding, as evidenced by a flurry of recent work focused on generating natural language captions for images (Karpathy & Fei-Fei, 2015; Vinyals et al., 2014; K. Xu et al., 2015). Yet current algorithms are still better at recognizing objects than understanding scenes, often getting the key objects right but their causal relationships wrong (Figure 6). We see compositionality, causality, intuitive physics and intuitive psychology as playing an increasingly important role in reaching true scene understanding. For example, picture a cluttered garage workshop with screwdrivers and hammers hanging from the wall, wood pieces and tools stacked precariously on a work desk, and shelving and boxes framing the scene. In order for an autonomous agent to effectively navigate and perform tasks in this environment, the agent would need intuitive physics to properly reason about stability and support. A holistic model of the scene would require the composition of individual object models, glued together by relations. Finally, causality helps infuse the recognition of existing tools (or the learning of new ones) with an understanding of their use, helping to connect different object models in the proper way (e.g., hammering a nail into a wall, or using a sawhorse to support a beam being cut by a saw). If the scene includes people acting or interacting, it will be nearly impossible to understand their actions without thinking about their thoughts, and especially their goals and intentions towards the other objects and agents they believe are present.
2. Autonomous agents and intelligent devices. Robots and personal assistants (such as cellphones) cannot be pre-trained on all possible concepts they may encounter. Like a child learning the meaning of new words, an intelligent and adaptive system should be able to learn new concepts from a small number of examples, as they are encountered naturally in the environment. Common concept types include new spoken words (names like "Ban Ki-Moon" or "Kofi Annan"), new gestures (a secret handshake or a "fist bump"), and new activities, and a human-like system would be able to learn to both recognize and produce new instances from a small number of examples. Like with handwritten characters, a system may be able to quickly learn new concepts by constructing them from pre-existing primitive actions, informed by knowledge of the underlying causal process and learning-to-learn.
3. Autonomous driving. Perfect autonomous driving requires intuitive psychology. Beyond detecting and avoiding pedestrians, autonomous cars could more accurately predict pedestrian behavior by inferring mental states, including their beliefs (e.g., Do they think it is safe to cross the street? Are they paying attention?) and desires (e.g., Where do they want to go? Do they want to cross? Are they retrieving a ball lost in the street?). Similarly, other drivers on the road have similarly complex mental states underlying their behavior (e.g., Do they want to change lanes? Pass another car? Are they swerving to avoid a hidden hazard? Are they distracted?). This type of psychological reasoning, along with other types of model-based causal and physical reasoning, is likely to be especially valuable in challenging and novel driving circumstances for which there is little relevant training data (e.g., navigating unusual construction zones, natural disasters, etc.).
4. Creative design. Creativity is often thought to be a pinnacle of human intelligence: chefs design new dishes, musicians write new songs, architects design new buildings, and entrepreneurs
start new businesses. While we are still far from developing AI systems that can tackle these types of tasks, we see compositionality and causality as central to this goal. Many commonplace acts of creativity are combinatorial, meaning they are unexpected combinations of familiar concepts or ideas (Boden, 1998; Ward, 1994). As illustrated in Figure 1-iv, novel vehicles can be created as a combination of parts from existing vehicles, and similarly novel characters can be constructed from the parts of stylistically similar characters, or familiar characters can be re-conceptualized in novel styles (Rehling, 2001). In each case, the free combination of parts is not enough on its own: While compositionality and learning-to-learn can provide the parts for new ideas, causality provides the glue that gives them coherence and purpose.
# 6.3 Towards more human-like learning and thinking machines
Since the birth of AI in the 1950s, people have wanted to build machines that learn and think like people. We hope researchers in AI, machine learning, and cognitive science will accept our challenge problems as a testbed for progress. Rather than just building systems that recognize handwritten characters and play Frostbite or Go as the end result of an asymptotic process, we suggest that deep learning and other computational paradigms should aim to tackle these tasks using as little training data as people need, and also to evaluate models on a range of human-like generalizations beyond the one task the model was trained on. We hope that the ingredients outlined in this article will prove useful for working towards this goal: seeing objects and agents rather than features, building causal models and not just recognizing patterns, recombining representations without needing to retrain, and learning-to-learn rather than starting from scratch.
# Acknowledgments
We are grateful to Peter Battaglia, Matt Botvinick, Y-Lan Boureau, Shimon Edelman, Nando de Freitas, Anatole Gershman, George Kachergis, Leslie Kaelbling, Andrej Karpathy, George Konidaris, Tejas Kulkarni, Tammy Kwan, Michael Littman, Gary Marcus, Kevin Murphy, Steven Pinker, Pat Shafto, David Sontag, Pedro Tsividis, and four anonymous reviewers for helpful comments on early versions of this manuscript. Tom Schaul was very helpful in answering questions regarding the DQN learning curves and Frostbite scoring. This work was supported by the Center for Minds, Brains and Machines (CBMM), under NSF STC award CCF-1231216, and the Moore-Sloan Data Science Environment at NYU.
# References
Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M. W., Pfau, D., Schaul, T., & de Freitas, N. (2016). Learning to learn by gradient descent by gradient descent. arXiv preprint.
Anselmi, F., Leibo, J. Z., Rosasco, L., Mutch, J., Tacchetti, A., & Poggio, T. (2016). Unsupervised learning of invariant representations. Theoretical Computer Science.
Bahdanau, D., Cho, K., & Bengio, Y. (2015). Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1409.0473v3
Baillargeon, R. (2004). Infants' physical world. Current Directions in Psychological Science, 13, 89–94. doi: 10.1111/j.0963-7214.2004.00281.x

Baillargeon, R., Li, J., Ng, W., & Yuan, S. (2009). An account of infants' physical reasoning. Learning and the infant mind, 66–116.

Baker, C. L., Saxe, R., & Tenenbaum, J. B. (2009). Action understanding as inverse planning. Cognition, 113(3), 329–349.

Barsalou, L. W. (1983). Ad hoc categories. Memory & Cognition, 11(3), 211–227.

Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., & Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron, 76, 695–711.

Bates, C. J., Yildirim, I., Tenenbaum, J. B., & Battaglia, P. W. (2015). Humans predict liquid dynamics using probabilistic simulation. In Proceedings of the 37th Annual Conference of the Cognitive Science Society.

Battaglia, P. W., Hamrick, J. B., & Tenenbaum, J. B. (2013). Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110(45), 18327–18332.

Baudiš, P., & Gailly, J.-L. (2012). Pachi: State of the art open source Go program. In Advances in Computer Games (pp. 24–38). Springer.

Baxter, J. (2000). A model of inductive bias learning. Journal of Artificial Intelligence Research, 12, 149–198.

Bayer, H. M., & Glimcher, P. W. (2005). Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129–141.

Bellemare, M. G., Naddaf, Y., Veness, J., & Bowling, M. (2013). The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47, 253–279.

Berlyne, D. E. (1966). Curiosity and exploration. Science, 153, 25–33.

Berthiaume, V. G., Shultz, T. R., & Onishi, K. H. (2013). A constructivist connectionist model of transitions on false-belief tasks. Cognition, 126(3), 441–458.
Berwick, R. C., & Chomsky, N. (2016). Why only us: Language and evolution. Cambridge, MA: MIT Press.
Bever, T. G., & Poeppel, D. (2010). Analysis by synthesis: a (re-)emerging program of research for language and vision. Biolinguistics, 4, 174–200.

Bi, G.-q., & Poo, M.-m. (2001). Synaptic modification by correlated activity: Hebb's postulate revisited. Annual Review of Neuroscience, 24, 139–166.

Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2), 115–147.

Bienenstock, E., Cooper, L. N., & Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. The Journal of Neuroscience, 2(1), 32–48.
Bienenstock, E., Geman, S., & Potter, D. (1997). Compositionality, MDL Priors, and Object Recognition. In Advances in Neural Information Processing Systems.
Bloom, P. (2000). How Children Learn the Meanings of Words. Cambridge, MA: MIT Press.

Blundell, C., Uria, B., Pritzel, A., Li, Y., Ruderman, A., Leibo, J. Z., ... Hassabis, D. (2016). Model-free episodic control. arXiv preprint.

Bobrow, D. G., & Winograd, T. (1977). An overview of KRL, a knowledge representation language. Cognitive Science, 1, 3–46.

Boden, M. A. (1998). Creativity and artificial intelligence. Artificial Intelligence, 103, 347–356.

Boden, M. A. (2006). Mind as machine: A history of cognitive science. Oxford University Press.

Bonawitz, E., Denison, S., Griffiths, T. L., & Gopnik, A. (2014). Probabilistic models, learning algorithms, and response variability: sampling in cognitive development. Trends in Cognitive Sciences, 18, 497–500.

Bottou, L. (2014). From machine learning to machine reasoning. Machine Learning, 94(2), 133–149.

Bouton, M. E. (2004). Context and behavioral processes in extinction. Learning & Memory, 11, 485–494.

Buckingham, D., & Shultz, T. R. (2000). The developmental course of distance, time, and velocity concepts: A generative connectionist model. Journal of Cognition and Development, 1(3), 305–345.
Buesing, L., Bill, J., Nessler, B., & Maass, W. (2011). Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons. PLoS Computational Biology, 7 , e1002211.
Carey, S. (1978). The Child as Word Learner. In J. Bresnan, G. Miller, & M. Halle (Eds.), Linguistic theory and psychological reality (pp. 264–293).

Carey, S. (2004). Bootstrapping and the origin of concepts. Daedalus, 133(1), 59–68.

Carey, S. (2009). The Origin of Concepts. New York, NY: Oxford University Press.

Carey, S., & Bartlett, E. (1978). Acquiring a single new word. Papers and Reports on Child Language Development, 15, 17–29.

Chouard, T. (2016, March). The Go files: AI computer wraps up 4-1 victory against human champion. [Online; posted 15-March-2016]

Ciresan, D., Meier, U., & Schmidhuber, J. (2012). Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition (CVPR) (pp. 3642–3649).

Collins, A. G. E., & Frank, M. J. (2013). Cognitive control over learning: Creating, clustering, and generalizing task-set structure. Psychological Review, 120(1), 190–229.

Cook, C., Goodman, N. D., & Schulz, L. E. (2011). Where science starts: spontaneous experiments in preschoolers' exploratory play. Cognition, 120(3), 341–9.

Crick, F. (1989). The recent excitement about neural networks. Nature, 337, 129–132.

Csibra, G. (2008). Goal attribution to inanimate agents by 6.5-month-old infants. Cognition, 107, 705–717.

Csibra, G., Biro, S., Koos, O., & Gergely, G. (2003). One-year-old infants use teleological representations of actions productively. Cognitive Science, 27, 111–133.
Dalrymple, D. (2016). Differentiable programming. Retrieved from https://www.edge.org/response-detail/26794
Davis, E., & Marcus, G. (2015). Commonsense reasoning and commonsense knowledge in artificial intelligence. Communications of the ACM, 58(9), 92–103.

Daw, N. D., Niv, Y., & Dayan, P. (2005). Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8, 1704–1711.

Dayan, P., Hinton, G. E., Neal, R. M., & Zemel, R. S. (1995). The Helmholtz machine. Neural Computation, 7(5), 889–904.
Deacon, T. W. (1998). The symbolic species: The co-evolution of language and the brain. WW Norton & Company.
Deci, E. L., & Ryan, R. M. (1975). Intrinsic motivation. Wiley Online Library.

de Jonge, M., & Racine, R. J. (1985). The effects of repeated induction of long-term potentiation in the dentate gyrus. Brain Research, 328, 181–185.

Denton, E., Chintala, S., Szlam, A., & Fergus, R. (2015). Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems 29. Retrieved from http://arxiv.org/abs/1506.05751

Diuk, C., Cohen, A., & Littman, M. L. (2008). An object-oriented representation for efficient reinforcement learning. In Proceedings of the 25th International Conference on Machine Learning (ICML) (pp. 240–247).

Dolan, R. J., & Dayan, P. (2013). Goals and habits in the brain. Neuron, 80, 312–325.

Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., & Darrell, T. (2013). DeCAF: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531.

Economides, M., Kurth-Nelson, Z., Lübbert, A., Guitart-Masip, M., & Dolan, R. J. (2015). Model-based reasoning in humans becomes automatic with training. PLoS Computational Biology, 11, e1004463.

Edelman, S. (2015). The minority report: some common assumptions to reconsider in the modelling of the brain and behaviour. Journal of Experimental & Theoretical Artificial Intelligence, 28(4), 751–776.

Eden, M. (1962). Handwriting and pattern recognition. IRE Transactions on Information Theory, 160–166.

Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., & Rasmussen, D. (2012). A large-scale model of the functioning brain. Science, 338(6111), 1202–1205.

Elman, J. L. (2005). Connectionist models of cognitive development: Where next? Trends in Cognitive Sciences, 9(3), 111–117.
Elman, J. L., Bates, E. A., Johnson, M. H., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking innateness. Cambridge, MA: MIT Press.
Eslami, S. M. A., Heess, N., Weber, T., Tassa, Y., Kavukcuoglu, K., & Hinton, G. E. (2016). Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575 .
Eslami, S. M. A., Tarlow, D., Kohli, P., & Winn, J. (2014). Just-in-time learning for fast and flexible inference. In Advances in Neural Information Processing Systems (pp. 154–162).

Fodor, J. A. (1975). The Language of Thought. Harvard University Press.

Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3–71.

Frank, M. C., Goodman, N. D., & Tenenbaum, J. B. (2009). Using speakers' referential intentions to model early cross-situational word learning. Psychological Science, 20, 578–585.

Freyd, J. (1983). Representing the dynamics of a static form. Memory and Cognition, 11(4), 342–346.

Freyd, J. (1987). Dynamic mental representations. Psychological Review, 94(4), 427–438.

Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36, 193–202.

Gallistel, C., & Matzel, L. D. (2013). The neuroscience of learning: beyond the Hebbian synapse. Annual Review of Psychology, 64, 169–200.

Gelly, S., & Silver, D. (2008). Achieving master level play in 9 × 9 computer Go.

Gelly, S., & Silver, D. (2011). Monte-Carlo tree search and rapid action value estimation in computer Go. Artificial Intelligence, 175(11), 1856–1875.
Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2004). Bayesian Data Analysis. Chapman and Hall/CRC.
Gelman, A., Lee, D., & Guo, J. (2015). Stan: A probabilistic programming language for Bayesian inference and optimization. Journal of Educational and Behavioral Statistics, 40, 530–543.

Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4, 1–58.
Gershman, S. J., & Goodman, N. D. (2014). Amortized inference in probabilistic reasoning. In Proceedings of the 36th Annual Conference of the Cognitive Science Society.
Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349, 273–278.

Gershman, S. J., Markman, A. B., & Otto, A. R. (2014). Retrospective revaluation in sequential decision making: A tale of two systems. Journal of Experimental Psychology: General, 143, 182–194.
Gershman, S. J., Vul, E., & Tenenbaum, J. B. (2012). Multistability and perceptual inference. Neural Computation, 24, 1–24.

Gerstenberg, T., Goodman, N. D., Lagnado, D. A., & Tenenbaum, J. B. (2015). How, whether, why: Causal judgments as counterfactual contrasts. Proceedings of the 37th Annual Conference of the Cognitive Science Society.

Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521, 452–459.

Goodman, N. D., Mansinghka, V. K., Roy, D. M., Bonawitz, K., & Tenenbaum, J. B. (2008). Church: A language for generative models. Uncertainty in Artificial Intelligence.

Gopnik, A., Glymour, C., Sobel, D. M., Schulz, L. E., Kushnir, T., & Danks, D. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, 111(1), 3–32.

Gopnik, A., & Meltzoff, A. N. (1999). Words, Thoughts, and Theories. Mind: A Quarterly Review of Philosophy, 108.
Graves, A. (2014). Generating sequences with recurrent neural networks. arXiv preprint. Retrieved from http://arxiv.org/abs/1308.0850
Graves, A., Mohamed, A.-r., & Hinton, G. (2013). Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on (pp. 6645–6649).
Graves, A., Wayne, G., & Danihelka, I. (2014). Neural Turing Machines. arXiv preprint. Retrieved from http://arxiv.org/abs/1410.5401v1
Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., ... Hassabis, D. (2016). Hybrid computing using a neural network with dynamic external memory. Nature.
Grefenstette, E., Hermann, K. M., Suleyman, M., & Blunsom, P. (2015). Learning to Transduce with Unbounded Memory. In Advances in Neural Information Processing Systems.
Gregor, K., Besse, F., Rezende, D. J., Danihelka, I., & Wierstra, D. (2016). Towards Conceptual Compression. arXiv preprint. Retrieved from http://arxiv.org/abs/1604.08772
Gregor, K., Danihelka, I., Graves, A., Rezende, D. J., & Wierstra, D. (2015). DRAW: A Recurrent Neural Network For Image Generation. In International Conference on Machine Learning (ICML).
Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. (2010). Probabilistic models of cognition: exploring representations and inductive biases. Trends in Cognitive Sciences, 14(8), 357–64.

Griffiths, T. L., Vul, E., & Sanborn, A. N. (2012). Bridging levels of analysis for probabilistic models of cognition. Current Directions in Psychological Science, 21, 263–268.

Grossberg, S. (1976). Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors. Biological Cybernetics, 23, 121–134.
Grosse, R., Salakhutdinov, R., Freeman, W. T., & Tenenbaum, J. B. (2012). Exploiting compositionality to explore a large space of model structures. In Uncertainty in Artificial Intelligence.

Guo, X., Singh, S., Lee, H., Lewis, R. L., & Wang, X. (2014). Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems (pp. 3338–3346).

Gweon, H., Tenenbaum, J. B., & Schulz, L. E. (2010). Infants consider both the sample and the sampling process in inductive generalization. Proceedings of the National Academy of Sciences, 107, 9066–9071. doi: 10.1073/pnas.1003095107
Halle, M., & Stevens, K. (1962). Speech recognition: A model and a program for research. IRE Transactions on Information Theory, 8(2), 155–159.

Hamlin, K. J. (2013). Moral judgment and action in preverbal infants and toddlers: Evidence for an innate moral core. Current Directions in Psychological Science, 22, 186–193. doi: 10.1177/0963721412470687

Hamlin, K. J., Ullman, T., Tenenbaum, J., Goodman, N. D., & Baker, C. (2013). The mentalistic basis of core social cognition: Experiments in preverbal infants and a computational model. Developmental Science, 16, 209–226. doi: 10.1111/desc.12017

Hamlin, K. J., Wynn, K., & Bloom, P. (2007). Social evaluation by preverbal infants. Nature, 450, 557–560.

Hamlin, K. J., Wynn, K., & Bloom, P. (2010). Three-month-olds show a negativity bias in their social evaluations. Developmental Science, 13, 923–929. doi: 10.1111/j.1467-7687.2010.00951.x

Harlow, H. F. (1949). The formation of learning sets. Psychological Review, 56(1), 51–65.

Harlow, H. F. (1950). Learning and satiation of response in intrinsically motivated complex puzzle performance by monkeys. Journal of Comparative and Physiological Psychology, 43, 289–294.

Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: what is it, who has it, and how did it evolve? Science, 298, 1569–1579.

Hayes-Roth, B., & Hayes-Roth, F. (1979). A cognitive model of planning. Cognitive Science, 3, 275–310.

He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep residual learning for image recognition. arXiv preprint. Retrieved from http://arxiv.org/abs/1512.03385

Hebb, D. O. (1949). The organization of behavior. Wiley.

Heess, N., Tarlow, D., & Winn, J. (2013). Learning to pass expectation propagation messages. In Advances in Neural Information Processing Systems (pp. 3219–3227).

Hespos, S. J., & Baillargeon, R. (2008). Young infants' actions reveal their developing knowledge of support variables: Converging evidence for violation-of-expectation findings. Cognition, 107, 304–316.

Hespos, S. J., Ferry, A. L., & Rips, L. J. (2009). Five-month-old infants have different expectations for solids and liquids. Psychological Science, 20(5), 603–611.
Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8), 1771–800.

Hinton, G. E., Dayan, P., Frey, B. J., & Neal, R. M. (1995). The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214), 1158–61.

Hinton, G. E., Deng, L., Yu, D., Dahl, G. E., Mohamed, A.-r., Jaitly, N., ... Kingsbury, B. (2012). Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29, 82–97.

Hinton, G. E., Osindero, S., & Teh, Y. W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.

Hoffman, D. D., & Richards, W. A. (1984). Parts of recognition. Cognition, 18, 65–96.

Hofstadter, D. R. (1985). Metamagical themas: Questing for the essence of mind and pattern. New York: Basic Books.

Horst, J. S., & Samuelson, L. K. (2008). Fast mapping but poor retention by 24-month-old infants. Infancy, 13(2), 128–157.

Huang, Y., & Rao, R. P. (2014). Neurons as Monte Carlo samplers: Bayesian inference and learning in spiking networks. In Advances in Neural Information Processing Systems (pp. 1943–1951).

Hummel, J. E., & Biederman, I. (1992). Dynamic binding in a neural network for shape recognition. Psychological Review, 99(3), 480–517.

Jackendoff, R. (2003). Foundations of Language. Oxford University Press.

Jara-Ettinger, J., Gweon, H., Tenenbaum, J. B., & Schulz, L. E. (2015). Children's understanding of the costs and rewards underlying rational action. Cognition, 140, 14–23.

Jern, A., & Kemp, C. (2013). A probabilistic account of exemplar and category generation. Cognitive Psychology, 66(1), 85–125.

Jern, A., & Kemp, C. (2015). A decision network account of reasoning about other people's choices. Cognition, 142, 12–38.

Johnson, S. C., Slaughter, V., & Carey, S. (1998). Whose gaze will infants follow? The elicitation of gaze-following in 12-month-olds. Developmental Science, 1, 233–238. doi: 10.1111/1467-7687.00036
Juang, B. H., & Rabiner, L. R. (1990). Hidden Markov models for speech recognition. Technometrics, 33(3), 251–272.

Karpathy, A., & Fei-Fei, L. (2015). Deep visual-semantic alignments for generating image descriptions. In Computer Vision and Pattern Recognition (CVPR).
Kemp, C. (2007). The acquisition of inductive constraints. Unpublished doctoral dissertation, MIT.
Keramati, M., Dezfouli, A., & Piray, P. (2011). Speed/accuracy trade-off between the habitual and the goal-directed processes. PLoS Computational Biology, 7, e1002055.
Khaligh-Razavi, S.-M., & Kriegeskorte, N. (2014). Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation. PLoS Computational Biology, 10 (11), e1003915.
Kilner, J. M., Friston, K. J., & Frith, C. D. (2007). Predictive coding: An account of the mirror neuron system. Cognitive Processing, 8(3), 159–166.

Kingma, D. P., Rezende, D. J., Mohamed, S., & Welling, M. (2014). Semi-supervised learning with deep generative models. In Neural Information Processing Systems (NIPS).
Koch, G., Zemel, R. S., & Salakhutdinov, R. (2015). Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop.
Kodratoff, Y., & Michalski, R. S. (2014). Machine learning: An artificial intelligence approach (Vol. 3). Morgan Kaufmann.
Koza, J. R. (1992). Genetic programming: on the programming of computers by means of natural selection (Vol. 1). MIT press.
Kriegeskorte, N. (2015). Deep neural networks: A new framework for modeling biological vision and brain information processing. Annual Review of Vision Science, 1, 417–446.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25 (pp. 1097–1105).
Kulkarni, T. D., Kohli, P., Tenenbaum, J. B., & Mansinghka, V. (2015). Picture: A probabilistic programming language for scene perception. In Computer Vision and Pattern Recognition (CVPR).
Kulkarni, T. D., Narasimhan, K. R., Saeedi, A., & Tenenbaum, J. B. (2016). Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. arXiv preprint.
Kulkarni, T. D., Whitney, W., Kohli, P., & Tenenbaum, J. B. (2015). Deep Convolutional Inverse Graphics Network. In Computer Vision and Pattern Recognition (CVPR).
Lake, B. M. (2014). Towards more human-like concept learning in machines: Compositionality, causality, and learning-to-learn. Unpublished doctoral dissertation, MIT.
Lake, B. M., Lee, C.-y., Glass, J. R., & Tenenbaum, J. B. (2014). One-shot learning of generative speech concepts. In Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 803–808).
Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2012). Concept learning as motor program induction: A large-scale empirical study. In Proceedings of the 34th Annual Conference of the Cognitive Science Society.
Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332–1338.
Lake, B. M., Zaremba, W., Fergus, R., & Gureckis, T. M. (2015). Deep Neural Networks Predict Category Typicality Ratings for Images. In Proceedings of the 37th Annual Conference of the Cognitive Science Society.
Landau, B., Smith, L. B., & Jones, S. S. (1988). The importance of shape in early lexical learning. Cognitive Development, 3(3), 299–321.

Langley, P., Bradshaw, G., Simon, H. A., & Zytkow, J. M. (1987). Scientific discovery: Computational explorations of the creative processes. MIT Press.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444.

LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation, 1, 541–551.

LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2323.
Lerer, A., Gross, S., & Fergus, R. (2016). Learning Physical Intuition of Block Towers by Example. arXiv preprint. Retrieved from http://arxiv.org/abs/1603.01312
Levy, R., Reali, F., & Griffiths, T. L. (2009). Modeling the effects of memory on human online sentence processing with particle filters. In Advances in Neural Information Processing Systems (pp. 937–944).
Liao, Q., Leibo, J. Z., & Poggio, T. (2015). How important is weight symmetry in backpropagation? arXiv preprint arXiv:1510.05067 .
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74(6), 431–461.
Lillicrap, T. P., Cownden, D., Tweed, D. B., & Akerman, C. J. (2014). Random feedback weights support learning in deep neural networks. arXiv preprint arXiv:1411.0247 .
Lloyd, J., Duvenaud, D., Grosse, R., Tenenbaum, J., & Ghahramani, Z. (2014). Automatic construction and natural-language description of nonparametric regression models. In Proceedings of the National Conference on Artificial Intelligence (Vol. 2, pp. 1242–1250).

Lombrozo, T. (2009). Explanation and categorization: How "why?" informs "what?". Cognition, 110(2), 248–53.

Lopez-Paz, D., Bottou, L., Schölkopf, B., & Vapnik, V. (2016). Unifying distillation and privileged information. In International Conference on Learning Representations (ICLR).

Lopez-Paz, D., Muandet, K., Schölkopf, B., & Tolstikhin, I. (2015). Towards a learning theory of cause-effect inference. In Proceedings of the 32nd International Conference on Machine Learning (ICML).
Luong, M.-T., Le, Q. V., Sutskever, I., Vinyals, O., & Kaiser, L. (2015). Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114 .
Lupyan, G., & Bergen, B. (2016). How language programs the mind. Topics in Cognitive Science, 8(2), 408–424. Retrieved from http://doi.wiley.com/10.1111/tops.12155

Lupyan, G., & Clark, A. (2015). Words and the world: Predictive coding and the language-perception-cognition interface. Current Directions in Psychological Science, 24(4), 279–284.

Macindoe, O. (2013). Sidekick agents for sequential planning problems. Unpublished doctoral dissertation, Massachusetts Institute of Technology.
Magid, R. W., Sheskin, M., & Schulz, L. E. (2015). Imagination and the generation of new ideas. Cognitive Development, 34, 99–110.
Mansinghka, V., Selsam, D., & Perov, Y. (2014). Venture: A higher-order probabilistic program- ming platform with programmable inference. arXiv preprint arXiv:1404.0099 .
Marcus, G. (1998). Rethinking eliminative connectionism. Cognitive Psychology, 37, 243–282.
Marcus, G. (2001). The algebraic mind: Integrating connectionism and cognitive science. MIT press.
Markman, A. B., & Makin, V. S. (1998). Referential communication and category acquisition. Journal of Experimental Psychology: General, 127(4), 331–54.

Markman, A. B., & Ross, B. H. (2003). Category use and category learning. Psychological Bulletin, 129(4), 592–613.

Markman, E. M. (1989). Categorization and Naming in Children. Cambridge, MA: MIT Press.

Marr, D. C. (1982). Vision. San Francisco, CA: W.H. Freeman and Company.

Marr, D. C., & Nishihara, H. K. (1978). Representation and recognition of the spatial organization of three-dimensional shapes. Proceedings of the Royal Society of London. Series B, 200(1140), 269–94.

McClelland, J. L. (1988). Parallel distributed processing: Implications for cognition and development (Tech. Rep.). DTIC Document.

McClelland, J. L., Botvinick, M. M., Noelle, D. C., Plaut, D. C., Rogers, T. T., Seidenberg, M. S., & Smith, L. B. (2010). Letting structure emerge: connectionist and dynamical systems approaches to cognition. Trends in Cognitive Sciences, 14(8), 348–56.

McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3), 419–57.

McClelland, J. L., Rumelhart, D. E., & the PDP Research Group. (1986). Parallel Distributed Processing: Explorations in the microstructure of cognition. Volume II. Cambridge, MA: MIT Press.
Mikolov, T., Joulin, A., & Baroni, M. (2016). A Roadmap towards Machine Intelligence. arXiv preprint. Retrieved from http://arxiv.org/abs/1511.08130
Mikolov, T., Sutskever, I., & Chen, K. (2013). Distributed Representations of Words and Phrases and their Compositionality. Advances in Neural Information Processing Systems.
Miller, E. G., Matsakis, N. E., & Viola, P. A. (2000). Learning from one example through shared densities on transformations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Miller, G. A., & Johnson-Laird, P. N. (1976). Language and perception. Cambridge, MA: Belknap Press.
Minsky, M. L. (1974). A framework for representing knowledge. MIT-AI Laboratory Memo 306.

Minsky, M. L., & Papert, S. A. (1969). Perceptrons: An introduction to computational geometry. MIT Press.

Mitchell, T. M., Keller, R. R., & Kedar-Cabelli, S. T. (1986). Explanation-based generalization: A unifying view. Machine Learning, 1, 47–80.

Mnih, A., & Gregor, K. (2014). Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning (pp. 1791–1799).

Mnih, V., Heess, N., Graves, A., & Kavukcuoglu, K. (2014). Recurrent models of visual attention. In Advances in Neural Information Processing Systems 27 (pp. 1–9).

Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533.

Mohamed, S., & Rezende, D. J. (2015). Variational information maximisation for intrinsically motivated reinforcement learning. In Advances in Neural Information Processing Systems (pp. 2125–2133).

Moreno-Bote, R., Knill, D. C., & Pouget, A. (2011). Bayesian sampling in visual perception. Proceedings of the National Academy of Sciences, 108, 12491–12496.

Murphy, G. L. (1988). Comprehending complex concepts. Cognitive Science, 12(4), 529–562.

Murphy, G. L., & Medin, D. L. (1985). The role of theories in conceptual coherence. Psychological Review, 92(3), 289–316.

Murphy, G. L., & Ross, B. H. (1994). Predictions from uncertain categorizations. Cognitive Psychology, 27, 148–193.
Neisser, U. (1966). Cognitive Psychology. New York: Appleton-Century-Crofts.

Newell, A., & Simon, H. A. (1961). GPS, a program that simulates human thought. Defense Technical Information Center.
Newell, A., & Simon, H. A. (1972). Human problem solving. Prentice-Hall.
Niv, Y. (2009). Reinforcement learning in the brain. Journal of Mathematical Psychology, 53, 139–154.

O'Donnell, T. J. (2015). Productivity and Reuse in Language: A Theory of Linguistic Computation and Storage. Cambridge, MA: MIT Press.

Osherson, D. N., & Smith, E. E. (1981). On the adequacy of prototype theory as a theory of concepts. Cognition, 9(1), 35–58.
Parisotto, E., Ba, J. L., & Salakhutdinov, R. (2016). Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1511.06342
Pecevski, D., Buesing, L., & Maass, W. (2011). Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons. PLoS Computational Biology, 7 , e1002294.
Peterson, J. C., Abbott, J. T., & Griffiths, T. L. (2016). Adapting deep network features to capture psychological representations. In Proceedings of the 38th Annual Conference of the Cognitive Science Society.
Piantadosi, S. T. (2011). Learning and the language of thought. Unpublished doctoral dissertation, Massachusetts Institute of Technology.

Pinker, S. (2007). The Stuff of Thought. Penguin.

Pinker, S., & Prince, A. (1988). On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28, 73–193.

Power, J. M., Thompson, L. T., Moyer, J. R., & Disterhoft, J. F. (1997). Enhanced synaptic transmission in CA1 hippocampus after eyeblink conditioning. Journal of Neurophysiology, 78, 1184–1187.
Premack, D., & Premack, A. J. (1997). Infants Attribute Value to the Goal-Directed Actions of Self-propelled Objects (Vol. 9). doi: 10.1162/jocn.1997.9.6.848
Reed, S., & de Freitas, N. (2016). Neural Programmer-Interpreters. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1511.06279
Rehder, B. (2003). A causal-model theory of conceptual representation and categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(6), 1141–59.

Rehder, B., & Hastie, R. (2001). Causal knowledge and categories: The effects of causal beliefs on categorization, induction, and similarity. Journal of Experimental Psychology: General, 130(3), 323–360.

Rehling, J. A. (2001). Letter Spirit (Part Two): Modeling creativity in a visual domain. Unpublished doctoral dissertation, Indiana University.

Rezende, D. J., Mohamed, S., Danihelka, I., Gregor, K., & Wierstra, D. (2016). One-shot generalization in deep generative models. In International Conference on Machine Learning (ICML). Retrieved from http://arxiv.org/abs/1603.05106v1

Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning (ICML).

Rips, L. J. (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior, 14(6), 665–681.

Rips, L. J., & Hespos, S. J. (2015). Divisions of the physical world: Concepts of objects and substances. Psychological Bulletin, 141, 786–811.
Rogers, T. T., & McClelland, J. L. (2004). Semantic Cognition. Cambridge, MA: MIT Press.
Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65, 386–408.

Rougier, N. P., Noelle, D. C., Braver, T. S., Cohen, J. D., & O'Reilly, R. C. (2005). Prefrontal cortex and flexible cognitive control: Rules without symbols. Proceedings of the National Academy of Sciences (PNAS), 102(20), 7338–7343.

Rumelhart, D. E., Hinton, G., & Williams, R. (1986). Learning representations by back-propagating errors. Nature, 323(9), 533–536.

Rumelhart, D. E., & McClelland, J. L. (1986). On learning the past tenses of English verbs. In Parallel distributed processing: Explorations in the microstructure of cognition (pp. 216–271). Cambridge, MA: MIT Press.
Rumelhart, D. E., McClelland, J. L., & the PDP Research Group. (1986). Parallel Distributed Processing: Explorations in the microstructure of cognition. Volume I. Cambridge, MA: MIT Press.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., . . . Fei-Fei, L. (2015). ImageNet large scale visual recognition challenge (Tech. Rep.).
Russell, S., & Norvig, P. (2003). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall.

Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., ... Hadsell, R. (2016). Progressive neural networks. arXiv preprint. Retrieved from http://arxiv.org/abs/1606.04671

Salakhutdinov, R., Tenenbaum, J., & Torralba, A. (2012). One-shot learning with a hierarchical nonparametric Bayesian model. JMLR Workshop on Unsupervised and Transfer Learning, 27, 195–207.

Salakhutdinov, R., Tenenbaum, J. B., & Torralba, A. (2013). Learning with hierarchical-deep models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1958–71.

Salakhutdinov, R., Torralba, A., & Tenenbaum, J. (2011). Learning to share visual appearance for multiclass object detection. In Computer Vision and Pattern Recognition (CVPR).

Sanborn, A. N., Mansinghka, V. K., & Griffiths, T. L. (2013). Reconciling intuitive physics and Newtonian mechanics for colliding objects. Psychological Review, 120(2), 411.
Scellier, B., & Bengio, Y. (2016). Towards a biologically plausible backprop. arXiv preprint arXiv:1602.05179 .
Schank, R. C. (1972). Conceptual dependency: A theory of natural language understanding. Cognitive Psychology, 3, 552–631.
Schaul, T., Quan, J., Antonoglou, I., & Silver, D. (2016). Prioritized experience replay. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1511.05952
Schlottmann, A., Cole, K., Watts, R., & White, M. (2013). Domain-specific perceptual causality in children depends on the spatio-temporal configuration, not motion onset. Frontiers in Psychology, 4. doi: 10.3389/fpsyg.2013.00365

Schlottmann, A., Ray, E. D., Mitchell, A., & Demetriou, N. (2006). Perceived physical and social causality in animated motions: Spontaneous reports and ratings. Acta Psychologica, 123, 112–143. doi: 10.1016/j.actpsy.2006.05.006

Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85–117.
Scholl, B. J., & Gao, T. (2013). Perceiving animacy and intentionality: Visual processing or higher-level judgment? In Social perception: Detection and interpretation of animacy, agency, and intention.
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275, 1593–1599.

Schulz, L. (2012). The origins of inquiry: Inductive inference and exploration in early childhood. Trends in Cognitive Sciences, 16(7), 382–9.

Schulz, L. E., Gopnik, A., & Glymour, C. (2007). Preschool children learn about causal structure from conditional interventions. Developmental Science, 10, 322–332. doi: 10.1111/j.1467-7687.2007.00587.x

Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., & LeCun, Y. (2014). OverFeat: Integrated recognition, localization and detection using convolutional networks. In International Conference on Learning Representations (ICLR).

Shafto, P., Goodman, N. D., & Griffiths, T. L. (2014). A rational account of pedagogical reasoning: Teaching by, and learning from, examples. Cognitive Psychology, 71, 55–89.

Shultz, T. R. (2003). Computational developmental psychology. MIT Press.

Siegler, R. S., & Chen, Z. (1998). Developmental differences in rule learning: A microgenetic analysis. Cognitive Psychology, 36(3), 273–310.

Silver, D. (2016). Personal communication.

Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Driessche, G. V. D., ... Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7585), 484–489.

Smith, L. B., Jones, S. S., Landau, B., Gershkoff-Stowe, L., & Samuelson, L. (2002). Object name learning provides on-the-job training for attention. Psychological Science, 13(1), 13–19.

Solomon, K., Medin, D., & Lynch, E. (1999). Concepts do more than categorize. Trends in Cognitive Sciences, 3(3), 99–105.

Spelke, E. S. (1990). Principles of object perception. Cognitive Science, 14(1), 29–56.

Spelke, E. S. (2003). Core knowledge. Attention and Performance, 20.

Spelke, E. S., Gutheil, G., & Van de Walle, G. (1995). The development of object perception. In Visual cognition: An invitation to cognitive science, Vol. 2 (2nd ed.) (pp. 297–330).

Spelke, E. S., & Kinzler, K. D. (2007). Core knowledge. Developmental Science, 10(1), 89–96.

Srivastava, N., & Salakhutdinov, R. (2013). Discriminative transfer learning with tree-based priors. In Advances in Neural Information Processing Systems 26.
Stadie, B. C., Levine, S., & Abbeel, P. (2016). Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint. Retrieved from http://arxiv.org/abs/1507.00814

Stahl, A. E., & Feigenson, L. (2015). Observing the unexpected enhances infants' learning and exploration. Science, 348(6230), 91–94.

Sternberg, R. J., & Davidson, J. E. (1995). The nature of insight. The MIT Press.

Stuhlmüller, A., Taylor, J., & Goodman, N. D. (2013). Learning stochastic inverses. In Advances in Neural Information Processing Systems (pp. 3048–3056).

Sukhbaatar, S., Szlam, A., Weston, J., & Fergus, R. (2015). End-to-end memory networks. In Advances in Neural Information Processing Systems 29. Retrieved from http://arxiv.org/abs/1503.08895

Sutton, R. S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning (pp. 216–224).
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... Rabinovich, A. (2014). Going deeper with convolutions. arXiv preprint. Retrieved from http://arxiv.org/abs/1409.4842

Tauber, S., & Steyvers, M. (2011). Using inverse planning and theory of mind for social goal inference. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 2480–2485).

Téglás, E., Vul, E., Girotto, V., Gonzalez, M., Tenenbaum, J. B., & Bonatti, L. L. (2011). Pure reasoning in 12-month-old infants as probabilistic inference. Science, 332(6033), 1054–9.

Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279–85.

Tian, Y., & Zhu, Y. (2016). Better computer Go player with neural network and long-term prediction. In International Conference on Learning Representations (ICLR). Retrieved from http://arxiv.org/abs/1511.06410

Tomasello, M. (2010). Origins of human communication. MIT Press.

Torralba, A., Murphy, K. P., & Freeman, W. T. (2007). Sharing visual features for multiclass and multiview object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(5), 854–869.

Tremoulet, P. D., & Feldman, J. (2000). Perception of animacy from the motion of a single object. Perception, 29, 943–951.

Tsividis, P., Gershman, S. J., Tenenbaum, J. B., & Schulz, L. (2013). Information selection in noisy environments with large action spaces. In Proceedings of the 36th Annual Conference of the Cognitive Science Society (pp. 1622–1627).
Tsividis, P., Tenenbaum, J. B., & Schulz, L. E. (2015). Constraints on hypothesis selection in causal learning. Proceedings of the 37th Annual Cognitive Science Society.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. Retrieved from http://mind.oxfordjournals.org/content/LIX/236/433 doi: 10.1093/mind/LIX.236.433

Tversky, B., & Hemenway, K. (1984). Objects, parts, and categories. Journal of Experimental Psychology: General, 113(2), 169–191.

Ullman, S., Harari, D., & Dorfman, N. (2012). From simple innate biases to complex visual concepts. Proceedings of the National Academy of Sciences, 109(44), 18215–18220.

Ullman, T. D., Goodman, N. D., & Tenenbaum, J. B. (2012). Theory learning as stochastic search in the language of thought. Cognitive Development, 27(4), 455–480.

van den Hengel, A., Russell, C., Dick, A., Bastian, J., Pooley, D., Fleming, L., & Agapito, L. (2015). Part-based modelling of compound scenes from images. In Computer Vision and Pattern Recognition (CVPR) (pp. 878–886).

van Hasselt, H., Guez, A., & Silver, D. (2016). Deep reinforcement learning with double Q-learning. In Thirtieth Conference on Artificial Intelligence (AAAI).

Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., & Wierstra, D. (2016). Matching networks for one shot learning. arXiv preprint. Retrieved from http://arxiv.org/abs/1606.04080

Vinyals, O., Toshev, A., Bengio, S., & Erhan, D. (2014). Show and tell: A neural image caption generator. In International Conference on Machine Learning (ICML).

Vul, E., Goodman, N., Griffiths, T. L., & Tenenbaum, J. B. (2014). One and done? Optimal decisions from very few samples. Cognitive Science.
Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., & de Freitas, N. (2016). Dueling network architectures for deep reinforcement learning. arXiv preprint. Retrieved from http://arxiv.org/abs/1511.06581

Ward, T. B. (1994). Structured imagination: The role of category structure in exemplar generation. Cognitive Psychology, 27, 1–40.

Watkins, C. J., & Dayan, P. (1992). Q-learning. Machine Learning, 8, 279–292.

Wellman, H. M., & Gelman, S. A. (1992). Cognitive development: Foundational theories of core domains. Annual Review of Psychology, 43, 337–75.

Wellman, H. M., & Gelman, S. A. (1998). Knowledge acquisition in foundational domains. In The handbook of child psychology (pp. 523–573). Retrieved from http://doi.apa.org/psycinfo/2005-01927-010

Weng, C., Yu, D., Watanabe, S., & Juang, B.-H. F. (2014). Recurrent deep neural networks for robust speech recognition. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings(2), 5532–5536.
Weston, J., Chopra, S., & Bordes, A. (2015). Memory Networks. In International Conference on Learning Representations (ICLR).
Williams, J. J., & Lombrozo, T. (2010). The role of explanation in discovery and generalization: Evidence from category learning. Cognitive Science, 34(5), 776–806.

Winograd, T. (1972). Understanding natural language. Cognitive Psychology, 3, 1–191.

Winston, P. H. (1975). Learning structural descriptions from examples. In P. H. Winston (Ed.), The psychology of computer vision. New York: McGraw-Hill.

Xu, F., & Tenenbaum, J. B. (2007). Word learning as Bayesian inference. Psychological Review, 114(2), 245–272.

Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., ... Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning (ICML). Retrieved from http://arxiv.org/abs/1502.03044

Yamins, D. L. K., Hong, H., Cadieu, C. F., Solomon, E. A., Seibert, D., & DiCarlo, J. J. (2014). Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23), 8619–24.
Yildirim, I., Kulkarni, T. D., Freiwald, W. A., & Tenenbaum, J. B. (2015). Efficient analysis-by-synthesis in vision: A computational framework, behavioral tests, and comparison with neural representations. In Proceedings of the 37th Annual Conference of the Cognitive Science Society.
Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems (NIPS).
Zeiler, M. D., & Fergus, R. (2014). Visualizing and Understanding Convolutional Networks. In European Conference on Computer Vision (ECCV).
Published as a conference paper at ICLR 2017
# RECURRENT BATCH NORMALIZATION
Tim Cooijmans, Nicolas Ballas, César Laurent, Çağlar Gülçehre & Aaron Courville MILA - Université de Montréal firstname.lastname@umontreal.ca
# ABSTRACT
We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.
# 1 INTRODUCTION
Recurrent neural network architectures such as LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014) have recently exhibited state-of-the-art performance on a wide range of complex sequential problems including speech recognition (Amodei et al., 2015), machine translation (Bahdanau et al., 2015) and image and video captioning (Xu et al., 2015; Yao et al., 2015). Top-performing models, however, are based on very high-capacity networks that are computationally intensive and costly to train. Effective optimization of recurrent neural networks is thus an active area of study (Pascanu et al., 2012; Martens & Sutskever, 2011; Ollivier, 2013).
It is well-known that for deep feed-forward neural networks, covariate shift (Shimodaira, 2000; Ioffe & Szegedy, 2015) degrades the efficiency of training. Covariate shift is a change in the distribution of the inputs to a model. This occurs continuously during training of feed-forward neural networks, where changing the parameters of a layer affects the distribution of the inputs to all layers above it. As a result, the upper layers are continually adapting to the shifting input distribution and unable to learn effectively. This internal covariate shift (Ioffe & Szegedy, 2015) may play an especially important role in recurrent neural networks, which resemble very deep feed-forward networks.
Batch normalization (Ioffe & Szegedy, 2015) is a recently proposed technique for controlling the distributions of feed-forward neural network activations, thereby reducing internal covariate shift. It involves standardizing the activations going into each layer, enforcing their means and variances to be invariant to changes in the parameters of the underlying layers. This effectively decouples each layer's parameters from those of other layers, leading to a better-conditioned optimization problem. Indeed, deep neural networks trained with batch normalization converge significantly faster and generalize better.
Although batch normalization has demonstrated significant training speed-ups and generalization benefits in feed-forward networks, it has proven difficult to apply in recurrent architectures (Laurent et al., 2016; Amodei et al., 2015). It has found limited use in stacked RNNs, where the normalization is applied "vertically", i.e. to the input of each RNN, but not "horizontally" between timesteps. RNNs are deeper in the time direction, and as such batch normalization would be most beneficial when applied horizontally. However, Laurent et al. (2016) hypothesized that applying batch normalization in this way hurts training because of exploding gradients due to repeated rescaling.
Our findings run counter to this hypothesis. We show that it is both possible and highly beneficial to apply batch normalization in the hidden-to-hidden transition of recurrent models. In particular, we describe a reparameterization of LSTM (Section 3) that involves batch normalization and demonstrate that it is easier to optimize and generalizes better. In addition, we empirically analyze the
gradient backpropagation and show that proper initialization of the batch normalization parameters is crucial to avoiding vanishing gradient (Section 4). We evaluate our proposal on several sequential problems and show (Section 5) that our LSTM reparameterization consistently outperforms the LSTM baseline across tasks, in terms of both time to convergence and performance.

Liao & Poggio (2016) simultaneously investigated batch normalization in recurrent neural networks, albeit only for very short sequences (10 steps). Ba et al. (2016) independently developed a variant of batch normalization that is also applicable to recurrent neural networks and delivers similar improvements to our method.
# 2 PREREQUISITES
2.1 LSTM
Long Short-Term Memory (LSTM) networks are an instance of a more general class of recurrent neural networks (RNNs), which we review briefly in this paper. Given an input sequence $X = (x_1, x_2, \ldots, x_T)$, an RNN defines a sequence of hidden states $h_t$ according to

$$h_t = \phi(W_h h_{t-1} + W_x x_t + b) \qquad (1)$$

where $W_h \in \mathbb{R}^{d_h \times d_h}$, $W_x \in \mathbb{R}^{d_x \times d_h}$, $b \in \mathbb{R}^{d_h}$ and the initial state $h_0 \in \mathbb{R}^{d_h}$ are model parameters. A popular choice for the activation function $\phi(\cdot)$ is tanh.
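To make the recurrence concrete, here is a minimal NumPy sketch of Eq. 1 (not the authors' code; the dimensions, initialization scale, and toy inputs are illustrative assumptions):

```python
import numpy as np

d_x, d_h, T = 4, 8, 10                      # illustrative dimensions and sequence length
rng = np.random.default_rng(0)
W_h = rng.normal(scale=0.1, size=(d_h, d_h))
W_x = rng.normal(scale=0.1, size=(d_h, d_x))
b = np.zeros(d_h)

def rnn_step(h_prev, x_t):
    # Eq. 1 with phi = tanh
    return np.tanh(W_h @ h_prev + W_x @ x_t + b)

h = np.zeros(d_h)                           # initial state h_0
for x_t in rng.normal(size=(T, d_x)):       # toy input sequence
    h = rnn_step(h, x_t)
```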
RNNs are popular in sequence modeling thanks to their natural ability to process variable-length sequences. However, training RNNs using first-order stochastic gradient descent (SGD) is notoriously difficult due to the well-known problem of exploding/vanishing gradients (Bengio et al., 1994; Hochreiter, 1991; Pascanu et al., 2012). Gradient vanishing occurs when states $h_t$ are not influenced by small changes in much earlier states $h_\tau$, $\tau \ll t$, preventing learning of long-term dependencies in the input data. Although learning long-term dependencies is fundamentally difficult (Bengio et al., 1994), its effects can be mitigated through architectural variations such as LSTM (Hochreiter & Schmidhuber, 1997), GRU (Cho et al., 2014) and iRNN/uRNN (Le et al., 2015; Arjovsky et al., 2015).
In what follows, we focus on the LSTM architecture (Hochreiter & Schmidhuber, 1997) with recurrent transition given by
$$\begin{pmatrix} \tilde{f}_t \\ \tilde{i}_t \\ \tilde{o}_t \\ \tilde{g}_t \end{pmatrix} = W_h h_{t-1} + W_x x_t + b \qquad (2)$$

$$c_t = \sigma(\tilde{f}_t) \odot c_{t-1} + \sigma(\tilde{i}_t) \odot \tanh(\tilde{g}_t) \qquad (3)$$

$$h_t = \sigma(\tilde{o}_t) \odot \tanh(c_t) \qquad (4)$$

where $W_h \in \mathbb{R}^{d_h \times 4 d_h}$, $W_x \in \mathbb{R}^{d_x \times 4 d_h}$, $b \in \mathbb{R}^{4 d_h}$ and the initial states $h_0 \in \mathbb{R}^{d_h}$, $c_0 \in \mathbb{R}^{d_h}$ are model parameters. $\sigma$ is the logistic sigmoid function, and the $\odot$ operator denotes the Hadamard product.
The LSTM differs from simple RNNs in that it has an additional memory cell $c_t$ whose update is nearly linear, which allows the gradient to flow back through time more easily. In addition, unlike the RNN, which overwrites its content at each timestep, the update of the LSTM cell is regulated by a set of gates. The forget gate $f_t$ determines the extent to which information is carried over from the previous timestep, and the input gate $i_t$ controls the flow of information from the current input $x_t$. The output gate $o_t$ allows the model to read from the cell. This carefully controlled interaction with the cell is what allows the LSTM to robustly retain information for long periods of time.
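The gated recurrence of Eqs. 2–4 can be sketched in a few lines of NumPy (a sketch, not the authors' implementation; the stacked weight layout with `W_h` of shape (4*d_h, d_h) is an assumed convention):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(h_prev, c_prev, x_t, W_h, W_x, b):
    """One LSTM step. W_h: (4*d_h, d_h), W_x: (4*d_h, d_x), b: (4*d_h,)."""
    z = W_h @ h_prev + W_x @ x_t + b        # Eq. 2: stacked pre-activations
    f, i, o, g = np.split(z, 4)             # forget, input, output gates; candidate update
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)   # Eq. 3: gated, nearly linear cell update
    h = sigmoid(o) * np.tanh(c)                          # Eq. 4: gated read-out of the cell
    return h, c
```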
2.2 BATCH NORMALIZATION
Covariate shift (Shimodaira, 2000) is a phenomenon in machine learning where the features presented to a model change in distribution. In order for learning to succeed in the presence of covariate shift, the model's parameters must be adjusted not just to learn the concept at hand but also to adapt to the changing distribution of the inputs. In deep neural networks, this problem manifests as
internal covariate shift (Ioffe & Szegedy, 2015), where changing the parameters of a layer affects the distribution of the inputs to all layers above it.
Batch Normalization (Ioffe & Szegedy, 2015) is a recently proposed network reparameterization which aims to reduce internal covariate shift. It does so by standardizing the activations using empirical estimates of their means and standard deviations. However, it does not decorrelate the activations due to the computationally costly matrix inversion. The batch normalizing transform is as follows:
$$\mathrm{BN}(h; \gamma, \beta) = \beta + \gamma \odot \frac{h - \widehat{\mathbb{E}}[h]}{\sqrt{\widehat{\mathrm{Var}}[h] + \epsilon}} \qquad (5)$$

where $h \in \mathbb{R}^d$ is the vector of (pre)activations to be normalized, $\gamma \in \mathbb{R}^d$, $\beta \in \mathbb{R}^d$ are model parameters that determine the mean and standard deviation of the normalized activation, and $\epsilon \in \mathbb{R}$ is a regularization hyperparameter. The division should be understood to proceed elementwise. At training time, the statistics $\mathbb{E}[h]$ and $\mathrm{Var}[h]$ are estimated by the sample mean and sample variance of the current minibatch. This allows for backpropagation through the statistics, preserving the convergence properties of stochastic gradient descent. During inference, the statistics are typically estimated based on the entire training set, so as to produce a deterministic prediction.
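As an illustration, the training-mode transform of Eq. 5 reduces to a few lines (a sketch; the value of `eps` is illustrative):

```python
import numpy as np

def batch_norm(h, gamma, beta, eps=1e-5):
    """Eq. 5 in training mode. h: (batch, d); gamma, beta: (d,)."""
    mean = h.mean(axis=0)                   # minibatch estimate of E[h]
    var = h.var(axis=0)                     # minibatch estimate of Var[h]
    return beta + gamma * (h - mean) / np.sqrt(var + eps)
```

At inference time, `mean` and `var` would be replaced by population estimates accumulated during training.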
# 3 BATCH-NORMALIZED LSTM
This section introduces a reparameterization of LSTM that takes advantage of batch normalization. In contrast to Laurent et al. (2016) and Amodei et al. (2015), we leverage batch normalization in both the input-to-hidden and the hidden-to-hidden transformations. We introduce the batch-normalizing transform BN( · ; γ, β) into the LSTM as follows:
$$\begin{pmatrix} \tilde{f}_t \\ \tilde{i}_t \\ \tilde{o}_t \\ \tilde{g}_t \end{pmatrix} = \mathrm{BN}(W_h h_{t-1}; \gamma_h, \beta_h) + \mathrm{BN}(W_x x_t; \gamma_x, \beta_x) + b \qquad (6)$$

$$c_t = \sigma(\tilde{f}_t) \odot c_{t-1} + \sigma(\tilde{i}_t) \odot \tanh(\tilde{g}_t) \qquad (7)$$

$$h_t = \sigma(\tilde{o}_t) \odot \tanh(\mathrm{BN}(c_t; \gamma_c, \beta_c)) \qquad (8)$$
In our formulation, we normalize the recurrent term $W_h h_{t-1}$ and the input term $W_x x_t$ separately. Normalizing these terms individually gives the model better control over the relative contribution of the terms using the $\gamma_h$ and $\gamma_x$ parameters. We set $\beta_h = \beta_x = 0$ to avoid unnecessary redundancy, instead relying on the pre-existing parameter vector $b$ to account for both biases. In order to leave the LSTM dynamics intact and preserve the gradient flow through $c_t$, we do not apply batch normalization in the cell update.
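Combining the pieces, a single BN-LSTM step might be sketched as follows, reusing the `sigmoid` and `batch_norm` helpers from the sketches above (an illustrative sketch of Eqs. 6–8, not the authors' code; inputs are assumed to be minibatches of shape (batch, d)):

```python
import numpy as np

def bn_lstm_step(h_prev, c_prev, x_t, W_h, W_x, b,
                 gamma_h, gamma_x, gamma_c, beta_c, eps=1e-5):
    # Eq. 6: the recurrent and input terms are normalized separately;
    # beta_h = beta_x = 0, with the bias b accounting for both shifts.
    zeros = np.zeros_like(gamma_h)
    z = (batch_norm(h_prev @ W_h.T, gamma_h, zeros, eps)
         + batch_norm(x_t @ W_x.T, gamma_x, zeros, eps) + b)
    f, i, o, g = np.split(z, 4, axis=1)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)   # Eq. 7: cell update left unnormalized
    h = sigmoid(o) * np.tanh(batch_norm(c, gamma_c, beta_c, eps))  # Eq. 8
    return h, c
```

Keeping the cell update in Eq. 7 free of normalization is what preserves the nearly linear gradient path through $c_t$.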
The batch normalization transform relies on batch statistics to standardize the LSTM activations. It would seem natural to share the statistics that are used for normalization across time, just as recurrent neural networks share their parameters over time. However, we find that simply averaging statistics over time severely degrades performance. Although LSTM activations do converge to a stationary distribution, we observe that their statistics during the initial transient differ significantly (see Figure 5 in Appendix A). Consequently, we recommend using separate statistics for each timestep to preserve information of the initial transient phase in the activations.1
Generalizing the model to sequences longer than those seen during training is straightforward thanks to the rapid convergence of the activations to their steady-state distributions (cf. Figure 5). For our experiments we estimate the population statistics separately for each timestep 1, . . . , Tmax where
1 Note that we separate only the statistics over time and not the γ and β parameters.
Tmax is the length of the longest training sequence. When at test time we need to generalize beyond Tmax, we use the population statistic of time Tmax for all time steps beyond it.
During training we estimate the statistics across the minibatch, independently for each timestep. At test time we use estimates obtained by averaging the minibatch estimates over the training set.
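A minimal sketch of this bookkeeping is shown below; the exponential moving average is our own stand-in for averaging minibatch estimates over the training set.

```python
import numpy as np

class PerTimestepBNStats:
    """Track separate normalization statistics for each timestep t = 1..T_max.

    At test time, timesteps beyond T_max reuse the statistics of T_max.
    """
    def __init__(self, t_max, num_features, momentum=0.99):
        self.means = np.zeros((t_max, num_features))
        self.vars = np.ones((t_max, num_features))
        self.t_max = t_max
        self.momentum = momentum

    def update(self, t, batch):  # batch: (batch_size, num_features); t is 0-based
        m, v = batch.mean(0), batch.var(0)
        self.means[t] = self.momentum * self.means[t] + (1 - self.momentum) * m
        self.vars[t] = self.momentum * self.vars[t] + (1 - self.momentum) * v

    def get(self, t):
        t = min(t, self.t_max - 1)  # beyond T_max, fall back to the last timestep
        return self.means[t], self.vars[t]
```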
# 4 INITIALIZING γ FOR GRADIENT FLOW
Although batch normalization allows for easy control of the pre-activation variance through the γ parameters, common practice is to normalize to unit variance. We suspect that the previous difficulties with recurrent batch normalization reported in Laurent et al. (2016); Amodei et al. (2015) are largely due to improper initialization of the batch normalization parameters, and γ in particular. In this section we demonstrate the impact of γ on gradient flow.
[Figure 1: two panels. (a) Norm of the loss gradient with respect to the hidden state as a function of timestep t, for γ ranging from 0.10 to 1.00. (b) Expected derivative (with interquartile range) of the tanh nonlinearity as a function of input standard deviation.]
(a) We visualize the gradient flow through a batch-normalized tanh RNN as a function of γ. High variance causes vanishing gradient. (b) We show the empirical expected derivative and interquartile range of the tanh nonlinearity as a function of input variance. High variance causes saturation, which decreases the expected derivative.
Figure 1: Influence of pre-activation variance on gradient propagation.
In Figure 1(a), we show how the pre-activation variance impacts gradient propagation in a simple RNN on the sequential MNIST task described in Section 5.1. Since backpropagation operates in reverse, the plot is best read from right to left. The quantity plotted is the norm of the gradient of the loss with respect to the hidden state at different time steps. For large values of γ, the norm quickly goes to zero as gradient is propagated back in time. For small values of γ the norm is nearly constant.
To demonstrate what we think is the cause of this vanishing, we drew samples x from a set of centered Gaussian distributions with standard deviation ranging from 0 to 1, and computed the derivative $\tanh'(x) = 1 - \tanh^2(x) \in [0, 1]$ for each. Figure 1(b) shows the empirical distribution of the derivative as a function of standard deviation. When the input standard deviation is low, the input tends to be close to the origin where the derivative is close to 1. As the standard deviation increases, the expected derivative decreases as the input is more likely to be in the saturation regime. At unit standard deviation, the expected derivative is much smaller than 1.
We conjecture that this is what causes the gradient to vanish, and recommend initializing γ to a small value. In our trials we found that values of 0.01 or lower caused instabilities during training. Our choice of 0.1 seems to work well across different tasks.
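The saturation effect is easy to reproduce numerically; the short sketch below (sample size arbitrary) estimates the expected tanh derivative for inputs of varying standard deviation.

```python
import numpy as np

# Estimate E[tanh'(x)] = E[1 - tanh(x)^2] for x ~ N(0, sigma^2).
for sigma in [0.1, 0.3, 0.5, 0.7, 1.0]:
    x = np.random.randn(100_000) * sigma
    expected_deriv = np.mean(1.0 - np.tanh(x) ** 2)
    print(f"sigma={sigma:.1f}  E[tanh'(x)]={expected_deriv:.3f}")
# The expected derivative shrinks as sigma grows, so repeated multiplication
# by it during backpropagation through time drives the gradient toward zero.
```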
# 5 EXPERIMENTS
This section presents an empirical evaluation of the proposed batch-normalized LSTM on four different tasks. Note that for all the experiments, we initialize the batch normalization scale and shift parameters γ and β to 0.1 and 0 respectively.
[Figure 2: validation accuracy versus training iteration for LSTM and BN-LSTM on pixel-by-pixel MNIST (left) and pixel-by-pixel permuted MNIST (right).]
Figure 2: Accuracy on the validation set for the pixel by pixel MNIST classification tasks. The batch-normalized LSTM converges faster relative to a baseline LSTM. Batch-normalized LSTM also shows improved generalization on permuted sequential MNIST, which requires preserving long-term memory information.
5.1 SEQUENTIAL MNIST
We evaluate our batch-normalized LSTM on a sequential version of the MNIST classification task (Le et al., 2015). The model processes each image one pixel at a time and finally predicts the label. We consider both sequential MNIST tasks, MNIST and permuted MNIST (pMNIST). In MNIST, the pixels are processed in scanline order. In pMNIST the pixels are processed in a fixed random order.
Our baseline consists of an LSTM with 100 hidden units, with a softmax classifier to produce a prediction from the final hidden state. We use orthogonal initialization for all weight matrices, except for the hidden-to-hidden weight matrix which we initialize to be the identity matrix, as this yields better generalization performance on this task for both models. The model is trained using RMSProp (Tieleman & Hinton, 2012) with learning rate of 10⁻³ and 0.9 momentum. We apply gradient clipping at 1 to avoid exploding gradients.
The in-order MNIST task poses a unique problem for our model: the input for the first hundred or so timesteps is constant across examples since the upper pixels are almost always black. This causes the variance of the hidden states to be exactly zero for a long period of time. Normalizing these zero-variance activations involves dividing zero by a small number at many timesteps, which does not affect the forward-propagated activations but causes the back-propagated gradient to explode. We work around this by adding Gaussian noise to the initial hidden states. Although the normalization amplifies the noise to signal level, we find that it does not hurt performance compared to data-dependent ways of initializing the hidden states.
| Model | MNIST | pMNIST |
| --- | --- | --- |
| TANH-RNN (Le et al., 2015) | 35.0 | 35.0 |
| iRNN (Le et al., 2015) | 97.0 | 82.0 |
| uRNN (Arjovsky et al., 2015) | 95.1 | 91.4 |
| sTANH-RNN (Zhang et al., 2016) | 98.1 | 94.0 |
| LSTM (ours) | 98.9 | 90.2 |
| BN-LSTM (ours) | 99.0 | 95.4 |
Table 1: Accuracy obtained on the test set for the pixel by pixel MNIST classification tasks
In Figure 2 we show the validation accuracy while training for both LSTM and batch-normalized LSTM (BN-LSTM). BN-LSTM converges faster than LSTM on both tasks. Additionally, we observe that BN-LSTM generalizes significantly better on pMNIST. It has been highlighted in Arjovsky et al. (2015) that pMNIST contains many longer term dependencies across pixels than in the original pixel ordering, where a lot of structure is local. A recurrent network therefore needs to characterize dependencies across varying time scales in order to solve this task. Our results suggest that BN-LSTM is better able to capture these long-term dependencies.

| Model | Penn Treebank |
| --- | --- |
| LSTM (Graves, 2013) | 1.262 |
| HF-MRNN (Mikolov et al., 2012) | 1.41 |
| Norm-stabilized LSTM (Krueger & Memisevic, 2016) | 1.39 |
| ME n-gram (Mikolov et al., 2012) | 1.37 |
| LSTM (ours) | 1.38 |
| BN-LSTM (ours) | 1.32 |
| Zoneout (Krueger et al., 2016) | 1.27 |
| HM-LSTM (Chung et al., 2016) | 1.24 |
| HyperNetworks (Ha et al., 2016) | 1.22 |

Table 2: Bits-per-character on the Penn Treebank test sequence.
Table 1 reports the test set accuracy of the early-stop model for LSTM and BN-LSTM using the population statistics. Recurrent batch normalization leads to a better test score, especially for pMNIST where models have to leverage long-term temporal dependencies. In addition, Table 1 shows that our batch-normalized LSTM achieves state of the art on both MNIST and pMNIST.
5.2 CHARACTER-LEVEL PENN TREEBANK
We evaluate our model on the task of character-level language modeling on the Penn Treebank corpus (Marcus et al., 1993) according to the train/valid/test partition of Mikolov et al. (2012). For training, we segment the training sequence into examples of length 100. The training sequence does not cleanly divide by 100, so for each epoch we randomly crop a subsequence that does and segment that instead.
Our baseline is an LSTM with 1000 units, trained to predict the next character using a softmax classifier on the hidden state ht. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.002. We use orthogonal initialization for all weight matrices. The setup for the batch-normalized LSTM is the same in all respects except for the introduction of batch normalization as detailed in Section 3.
We show the learning curves in Figure 3(a). BN-LSTM converges faster and generalizes better than the LSTM baseline. Figure 3(b) shows the generalization of our model to longer sequences. We observe that using the population statistics improves generalization performance, which confirms that repeating the last population statistic (cf. Section 3) is a viable strategy. In Table 2 we report the performance of our best models (early-stopped on validation performance) on the Penn Treebank test sequence. Follow-up works have since improved the state of the art (Krueger et al., 2016; Chung et al., 2016; Ha et al., 2016).
5.3 TEXT8
We evaluate our model on a second character-level language modeling task on the much larger text8 dataset (Mahoney, 2009). This dataset is derived from Wikipedia and consists of a sequence of 100M characters including only alphabetical characters and spaces. We follow Mikolov et al. (2012); Zhang et al. (2016) and use the first 90M characters for training, the next 5M for validation and the final 5M characters for testing. We train on nonoverlapping sequences of length 180.
Both our baseline and batch-normalized models are LSTMs with 2000 units, trained to predict the next character using a softmax classifier on the hidden state ht. We use stochastic gradient descent on minibatches of size 128, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.001. All weight matrices were initialized to be orthogonal.
We early-stop on validation performance and report the test performance of the resulting model in Table 3. We observe that BN-LSTM obtains a significant performance improvement over the LSTM baseline. Chung et al. (2016) have since improved on our performance.
| Model | text8 |
| --- | --- |
| td-LSTM (Zhang et al., 2016) | 1.63 |
| HF-MRNN (Mikolov et al., 2012) | 1.54 |
| skipping RNN (Pachitariu & Sahani, 2013) | 1.48 |
| LSTM (ours) | 1.43 |
| BN-LSTM (ours) | 1.36 |
| HM-LSTM (Chung et al., 2016) | 1.29 |
Table 3: Bits-per-character on the text8 test sequence.
5.4 TEACHING MACHINES TO READ AND COMPREHEND
Recently, Hermann et al. (2015) introduced a set of challenging benchmarks for natural language processing, along with neural network architectures to address them. The tasks involve reading real news articles and answering questions about their content. Their principal model, the Attentive Reader, is a recurrent neural network that invokes an attention mechanism to locate relevant information in the document. Such models are notoriously hard to optimize and yet increasingly popular.
To demonstrate the generality and practical applicability of our proposal, we apply batch normalization in the Attentive Reader model and show that this drastically improves training.
We evaluate several variants. The first variant, referred to as BN-LSTM, consists of the vanilla Attentive Reader model with the LSTM simply replaced by our BN-LSTM reparameterization. The second variant, termed BN-everywhere, is exactly like the first, except that we also introduce batch normalization into the attention computations, normalizing each term going into the tanh nonlinearities.
[Figure 3: two panels. Left: mean bits-per-character on the Penn Treebank validation sequence versus training steps for LSTM and BN-LSTM. Right: bits-per-character versus sequence length for LSTM and for BN-LSTM under population and batch statistics.]

(a) Performance in bits-per-character on length-100 subsequences of the Penn Treebank validation sequence during training. (b) Generalization to longer subsequences of Penn Treebank using population statistics. The subsequences are taken from the test sequence.

Figure 3: Penn Treebank evaluation

[Figure 4: two panels of error rate versus training steps (thousands) for the LSTM baseline and batch-normalized variants.]

(a) Error rate on the validation set for the Attentive Reader models on a variant of the CNN QA task (Hermann et al., 2015). As detailed in Appendix C, the theoretical lower bound on the error rate on this task is 43%.

(b) Error rate on the validation set on the full CNN QA task from Hermann et al. (2015).

Figure 4: Training curves on the CNN question-answering tasks.

Our third variant, BN-e*, is like BN-everywhere, but improved to more carefully handle variable-length sequences. Throughout this experiment we followed the common practice of padding each batch of variable-length data with zeros. However, this biases the batch mean and variance of $\mathbf{x}_t$ toward zero. We address this effect using sequencewise normalization of the inputs as proposed by Laurent et al. (2016); Amodei et al. (2015). That is, we share statistics over time for normalization of the input terms $\mathbf{W}_x \mathbf{x}_t$, but not for the recurrent terms $\mathbf{W}_h \mathbf{h}_t$ or the cell output $\mathbf{c}_t$. Doing so avoids many issues involving degenerate statistics due to input sequence padding.
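A rough sketch of this sequencewise input normalization under padding is shown below; the masking convention (1 for real steps, 0 for padding) and the function name are our own.

```python
import numpy as np

def sequencewise_norm(x, mask, gamma, eps=1e-5):
    """Normalize input projections W_x x_t sharing statistics over time.

    x:    (time, batch, features) pre-activations W_x x_t
    mask: (time, batch) with 1 for real steps and 0 for padding,
          so padded positions do not bias the mean and variance.
    """
    m = mask[:, :, None]
    count = m.sum()
    mean = (x * m).sum(axis=(0, 1)) / count
    var = (((x - mean) ** 2) * m).sum(axis=(0, 1)) / count
    return gamma * (x - mean) / np.sqrt(var + eps)
```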
Our fourth and final variant BN-e** is like BN-e* but bidirectional. The main difficulty in adapting to bidirectional models also involves padding. Padding poses no problem as long as it is properly ignored (by not updating the hidden states based on padded regions of the input). However to perform the reverse application of a bidirectional model, it is common to simply reverse the padded sequences, thus moving the padding to the front. This causes similar problems as were observed on the sequential MNIST task (Section 5.1): the hidden states will not diverge during the initial timesteps and hence their variance will be severely underestimated. To get around this, we reverse only the unpadded portion of the input sequences and leave the padding in place.
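The fix amounts to a few lines; the sketch below, with our own array layout, reverses only the unpadded prefix of each sequence and leaves trailing padding in place.

```python
import numpy as np

def reverse_unpadded(x, lengths):
    """Reverse only the real tokens of each sequence; keep padding at the end.

    x:       (batch, time, features), padded at the end of each sequence
    lengths: (batch,) number of real timesteps per sequence
    """
    out = x.copy()
    for b, n in enumerate(lengths):
        out[b, :n] = x[b, :n][::-1]  # flip the unpadded prefix only
    return out
```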
See Appendix C for hyperparameters and task details.
Figure 4(a) shows the learning curves for the different variants of the attentive reader. BN-LSTM trains dramatically faster than the LSTM baseline. BN-everywhere in turn shows a significant improvement over BN-LSTM. In addition, both BN-LSTM and BN-everywhere show a generalization benefit over the baseline. The validation curves have minima of 50.3%, 49.5% and 50.0% for the baseline, BN-LSTM and BN-everywhere respectively. We emphasize that these results were obtained without any tweaking; all we did was to introduce batch normalization.
BN-e* and BN-e** converge faster yet, and reach lower minima: 47.1% and 43.9% respectively.
| Model | CNN valid | CNN test |
| --- | --- | --- |
| Attentive Reader (Hermann et al., 2015) | 38.4 | 37.0 |
| LSTM (ours) | 45.5 | 45.0 |
| BN-e** (ours) | 37.9 | 36.3 |
Table 4: Error rates on the CNN question-answering task (Hermann et al., 2015).
We train and evaluate our best model, BN-e**, on the full task from Hermann et al. (2015). On this dataset we had to reduce the number of hidden units to 120 to avoid severe overfitting. Training curves for BN-e** and a vanilla LSTM are shown in Figure 4(b). Table 4 reports performances of the early-stopped models.
# 6 CONCLUSION
Contrary to previous findings by Laurent et al. (2016); Amodei et al. (2015), we have demonstrated that batch-normalizing the hidden states of recurrent neural networks greatly improves optimization. Indeed, doing so yields benefits similar to those of batch normalization in feed-forward neural networks: our proposed BN-LSTM trains faster and generalizes better on a variety of tasks including language modeling and question-answering. We have argued that proper initialization of the batch normalization parameters is crucial, and suggest that previous difficulties (Laurent et al., 2016; Amodei et al., 2015) were due in large part to improper initialization. Finally, we have shown our model to apply to complex settings involving variable-length data, bidirectionality and highly nonlinear attention mechanisms.
# ACKNOWLEDGEMENTS
The authors would like to acknowledge the following agencies for research funding and computing support: the Nuance Foundation, Samsung, NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Experiments were carried out using the Theano (Team et al., 2016) and the Blocks and Fuel (van Merriënboer et al., 2015) libraries for scientific computing. We thank David Krueger, Saizheng Zhang, Ishmael Belghazi and Yoshua Bengio for discussions and suggestions.
# REFERENCES
D. Amodei et al. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv:1512.02595, 2015.
M. Arjovsky, A. Shah, and Y. Bengio. Unitary evolution recurrent neural networks. arXiv:1511.06464, 2015.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv:1607.06450, 2016.
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.
Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 1994.
K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural net- works. arXiv:1609.01704, 2016.
A. Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013.
David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv:1609.09106, 2016.
K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.
S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, 1991.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 1997.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. abs/1502.03167, 2015.
D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
D Krueger and R. Memisevic. Regularizing rnns by stabilizing activations. ICLR, 2016.
David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rose- mary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, and Aaron Courville. Zoneout: Regularizing rnns by randomly preserving hidden activations. arXiv:1606.01305, 2016.
C. Laurent, G. Pereyra, P. Brakel, Y. Zhang, and Y. Bengio. Batch normalized recurrent neural networks. ICASSP, 2016.
Quoc V Le, N. Jaitly, and G. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv:1504.00941, 2015.
Qianli Liao and Tomaso Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. arXiv:1604.03640, 2016.
M. Mahoney. Large text compression benchmark. 2009.
M. P. Marcus, M. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: the Penn Treebank. Comput. Linguist., 1993.
J. Martens and I. Sutskever. Learning recurrent neural networks with hessian-free optimization. In ICML, 2011.
T. Mikolov, I. Sutskever, A. Deoras, H. Le, S. Kombrink, and J. Cernocky. Subword language modeling with neural networks. preprint, 2012.
Yann Ollivier. Persistent contextual neural networks for learning symbolic data sequences. CoRR, abs/1306.0514, 2013.
Marius Pachitariu and Maneesh Sahani. Regularization and nonlinearities for neural language mod- els: when are they needed? arXiv:1301.5650, 2013.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. arXiv:1211.5063, 2012.
H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 2000.
The Theano Development Team et al. Theano: A Python framework for fast computation of mathe- matical expressions. arXiv e-prints, abs/1605.02688, May 2016.
T. Tieleman and G. Hinton. Lecture 6.5 - RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde- Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015. URL http://arxiv.org/abs/1506.00619.
K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv:1502.03044, 2015.
L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. In ICCV, 2015.
S. Zhang, Y. Wu, T. Che, Z. Lin, R. Memisevic, R. Salakhutdinov, and Y. Bengio. Architectural complexity measures of recurrent neural networks. arXiv:1602.08210, 2016.
# A CONVERGENCE OF POPULATION STATISTICS
[Figure 5: mean of the recurrent term, mean of the cell state, and variance of the recurrent term, plotted against time steps; one curve per hidden unit.]
Figure 5: Convergence of population statistics to stationary distributions on the Penn Treebank task. The horizontal axis denotes RNN time. Each curve corresponds to a single hidden unit. Only a random subset of units is shown. See Section 3 for discussion.
# B SENSITIVITY TO INITIALIZATION OF γ
In Section 4 we investigated the effect of initial γ on gradient flow. To show the practical implications of this, we performed several experiments on the pMNIST and Penn Treebank benchmarks. The resulting performances are shown in Figure 6.
The pMNIST training curves confirm that higher initial values of γ are detrimental to the optimization of the model. For the Penn Treebank task however, the effect is gone.
We believe this is explained by the difference in the nature of the two tasks. For pMNIST, the model absorbs the input sequence and only at the end of the sequence does it make a prediction on which it receives feedback. Learning from this feedback requires propagating the gradient all the way back through the sequence.
In the Penn Treebank task on the other hand, the model makes a prediction at each timestep. At each step of the backward pass, a fresh learning signal is added to the backpropagated gradient. Essentially, the model is able to get off the ground by picking up short-term dependencies. This fails on pMNIST, which is dominated by long-term dependencies (Arjovsky et al., 2015).
# C TEACHING MACHINES TO READ AND COMPREHEND: TASK SETUP
We evaluate the models on the question answering task using the CNN corpus (Hermann et al., 2015), with placeholders for the named entities. We follow a similar preprocessing pipeline as Hermann et al. (2015). During training, we randomly sample the examples with replacement and shuffle the order of the placeholders in each text inside the minibatch. We use a vocabulary of 65829 words.
We deviate from Hermann et al. (2015) in order to save computation: we use only the 4 most relevant sentences from the description, as identified by a string matching procedure. Both the training and validation sets are preprocessed in this way. Due to imprecision this heuristic sometimes strips the answers from the passage, putting an upper bound of 57% on the validation accuracy that can be achieved.

[Figure 6: four panels of cross-entropy versus training steps for initial γ values 0.10 through 1.00: permuted MNIST train and valid (top row), Penn Treebank train and valid (bottom row).]

Figure 6: Training curves on pMNIST and Penn Treebank for various initializations of γ.
For the reported performances, the first three models (LSTM, BN-LSTM and BN-everywhere) are trained using the exact same hyperparameters, which were chosen because they work well for the baseline. The hidden state is composed of 240 units. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 10 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 8 × 10⁻⁵.
For BN-e* and BN-e**, we use the same hyperparameters except that we reduce the learning rate to 8 × 10⁻⁴ and the minibatch size to 40.
# D HYPERPARAMETER SEARCHES
Table 5 reports hyperparameter values that were tried in the experiments.
(a) MNIST and pMNIST

| Hyperparameter | Values tried |
| --- | --- |
| Learning rate | 1e-2, 1e-3, 1e-4 |
| RMSProp momentum | 0.5, 0.9 |
| Hidden state size | 100, 200, 400 |
| Initial γ | 1e-1, 3e-1, 5e-1, 7e-1, 1.0 |

(b) Penn Treebank

| Hyperparameter | Values tried |
| --- | --- |
| Learning rate | 1e-1, 1e-2, 2e-2, 1e-3 |
| Hidden state size | 800, 1000, 1200, 1500, 2000 |
| Batch size | 32, 64, 100, 128 |
| Initial γ | 1e-1, 3e-1, 5e-1, 7e-1, 1.0 |

(c) Text8

| Hyperparameter | Values tried |
| --- | --- |
| Learning rate | 1e-1, 1e-2, 1e-3 |
| Hidden state size | 500, 1000, 2000, 4000 |

(d) Attentive Reader

| Hyperparameter | Values tried |
| --- | --- |
| Learning rate | 8e-3, 8e-4, 8e-5, 8e-6 |
| Hidden state size | 60, 120, 240, 280 |
Table 5: Hyperparameter values that have been explored in the experiments.
For MNIST and pMNIST, the hyperparameters were varied independently. For Penn Treebank, we performed a full grid search on learning rate and hidden state size, and later performed a sensitivity
analysis on the batch size and initial γ. For the text8 task and the experiments with the Attentive Reader, we carried out a grid search on the learning rate and hidden state size.
The same values were tried for both the baseline and our BN-LSTM. In each case, our reported results are those of the model with the best validation performance.
# Adaptive Computation Time for Recurrent Neural Networks
Alex Graves, Google DeepMind, gravesa@google.com
# Abstract
This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data.
# 1 Introduction
The amount of time required to pose a problem and the amount of thought required to solve it are notoriously unrelated. Pierre de Fermat was able to write in a margin the conjecture (if not the proof) of a theorem that took three and a half centuries and reams of mathematics to solve [35]. More mundanely, we expect the effort required to find a satisfactory route between two cities, or the number of queries needed to check a particular fact, to vary greatly, and unpredictably, from case to case. Most machine learning algorithms, however, are unable to dynamically adapt the amount of computation they employ to the complexity of the task they perform.
For artificial neural networks, where the neurons are typically arranged in densely connected layers, an obvious measure of computation time is the number of layer-to-layer transformations the network performs. In feedforward networks this is controlled by the network depth, or number of layers stacked on top of each other. For recurrent networks, the number of transformations also depends on the length of the input sequence, which can be padded or otherwise extended to allow for extra computation. The evidence that increased depth leads to more performant networks is by now inarguable [5, 4, 19, 9], and recent results show that increased sequence length can be similarly beneficial [31, 33, 25]. However it remains necessary for the experimenter to decide a priori on the amount of computation allocated to a particular input vector or sequence. One solution is to simply
make every network very deep and design its architecture in such a way as to mitigate the vanishing gradient problem [13] associated with long chains of iteration [29, 17]. However in the interests of both computational efficiency and ease of learning it seems preferable to dynamically vary the number of steps for which the network "ponders" each input before emitting an output. In this case the effective depth of the network at each step along the sequence becomes a dynamic function of the inputs received so far.
The approach pursued here is to augment the network output with a sigmoidal halting unit whose activation determines the probability that computation should continue. The resulting halting distribution is used to define a mean-field vector for both the network output and the internal network state propagated along the sequence. A stochastic alternative would be to halt or continue according to binary samples drawn from the halting distribution, a technique that has recently been applied to scene understanding with recurrent networks [7]. However the mean-field approach has the advantage of using a smooth function of the outputs and states, with no need for stochastic gradient estimates. We expect this to be particularly beneficial when long sequences of halting decisions must be made, since each decision is likely to affect all subsequent ones, and sampling noise will rapidly accumulate (as observed for policy gradient methods [36]).
A related architecture known as Self-Delimiting Neural Networks [26, 30] employs a halting neuron to end a particular update within a large, partially activated network; in this case however a simple activation threshold is used to make the decision, and no gradient with respect to halting time is propagated. More broadly, learning when to halt can be seen as a form of conditional computing, where parts of the network are selectively enabled and disabled according to a learned policy [3, 6]. We would like the network to be parsimonious in its use of computation, ideally limiting itself to the minimum number of steps necessary to solve the problem. Finding this limit in its most general form would be equivalent to determining the Kolmogorov complexity of the data (and hence solving the halting problem) [21]. We therefore take the more pragmatic approach of adding a time cost to the loss function to encourage faster solutions. The network then has to learn to trade off accuracy against speed, just as a person must when making decisions under time pressure. One weakness is that the numerical weight assigned to the time cost has to be hand-chosen, and the behaviour of the network is quite sensitive to its value.
The rest of the paper is structured as follows: the Adaptive Computation Time algorithm is presented in Section 2, experimental results on four synthetic problems and one real-world dataset are reported in Section 3, and concluding remarks are given in Section 4.
# 2 Adaptive Computation Time
Consider a recurrent neural network $\mathcal{R}$ composed of a matrix of input weights $W_x$, a parametric state transition model $\mathcal{S}$, a set of output weights $W_y$ and an output bias $b_y$. When applied to an input sequence $\mathbf{x} = (x_1, \ldots, x_T)$, $\mathcal{R}$ computes the state sequence $\mathbf{s} = (s_1, \ldots, s_T)$ and the output sequence $\mathbf{y} = (y_1, \ldots, y_T)$ by iterating the following equations from $t = 1$ to $T$:
$$s_t = \mathcal{S}(s_{t-1}, W_x x_t) \qquad (1)$$

$$y_t = W_y s_t + b_y \qquad (2)$$
The state is a fixed-size vector of real numbers containing the complete dynamic information of the network. For a standard recurrent network this is simply the vector of hidden unit activations. For a Long Short-Term Memory network (LSTM) [14], the state also contains the activations of the memory cells. For a memory augmented network such as a Neural Turing Machine (NTM) [10], the state contains both the complete state of the controller network and the complete state of the memory. In general some portions of the state (for example the NTM memory contents) will not be visible to the output units; in this case we consider the corresponding columns of $W_y$ to be fixed to 0.
Adaptive Computation Time (ACT) modifies the conventional setup by allowing $\mathcal{R}$ to perform a variable number of state transitions and compute a variable number of outputs at each input step. Let $N(t)$ be the total number of updates performed at step $t$. Then define the intermediate state sequence $(s_t^1, \ldots, s_t^{N(t)})$ and intermediate output sequence $(y_t^1, \ldots, y_t^{N(t)})$ at step $t$ as follows:
$$s_t^n = \begin{cases} \mathcal{S}(s_{t-1},\, x_t^1) & \text{if } n = 1 \\ \mathcal{S}(s_t^{n-1},\, x_t^n) & \text{otherwise} \end{cases} \qquad (3)$$

$$y_t^n = W_y s_t^n + b_y \qquad (4)$$
where $x_t^n = x_t + \delta_{n,1}$ is the input at time $t$ augmented with a binary flag that indicates whether the input step has just been incremented, allowing the network to distinguish between repeated inputs and repeated computations for the same input. Note that the same state function is used for all state transitions (intermediate or otherwise), and similarly the output weights and bias are shared for all outputs. It would also be possible to use different state and output parameters for each intermediate step; however doing so would cloud the distinction between increasing the number of parameters and increasing the number of computational steps. We leave this for future work.
To determine how many updates $\mathcal{R}$ performs at each input step an extra sigmoidal halting unit $h$ is added to the network output, with associated weight matrix $W_h$ and bias $b_h$:
$$h_t^n = \sigma\left(W_h s_t^n + b_h\right) \qquad (5)$$
As with the output weights, some columns of $W_h$ may be fixed to zero to give selective access to the network state. The activation of the halting unit is then used to determine the halting probability $p_t^n$ of the intermediate steps:
$$p_t^n = \begin{cases} R(t) & \text{if } n = N(t) \\ h_t^n & \text{otherwise} \end{cases} \qquad (6)$$
where
$$N(t) = \min\Big\{ n' : \sum_{n=1}^{n'} h_t^n \ge 1 - \epsilon \Big\} \qquad (7)$$
the remainder $R(t)$ is defined as follows
$$R(t) = 1 - \sum_{n=1}^{N(t)-1} h_t^n \qquad (8)$$
and $\epsilon$ is a small constant (0.01 for the experiments in this paper), whose purpose is to allow computation to halt after a single update if $h_t^1 \ge 1 - \epsilon$, as otherwise a minimum of two updates would be required for every input step. It follows directly from the definition that $\sum_{n=1}^{N(t)} p_t^n = 1$ and $0 \le p_t^n \le 1\ \forall n$, so this is a valid probability distribution. A similar distribution was recently used to define differentiable push and pop operations for neural stacks and queues [11].
One could sample a halting step $\hat{n}$ from this distribution and set $s_t = s_t^{\hat{n}}$, $y_t = y_t^{\hat{n}}$. However we will eschew sampling techniques and the associated problems of noisy gradients, instead using $p_t^n$ to determine mean-field updates for the states and outputs:
$$s_t = \sum_{n=1}^{N(t)} p_t^n s_t^n \qquad y_t = \sum_{n=1}^{N(t)} p_t^n y_t^n \qquad (9)$$
The implicit assumption is that the states and outputs are approximately linear, in the sense that a linear interpolation between a pair of state or output vectors will also interpolate between the properties the vectors embody. There are several reasons to believe that such an assumption is reasonable. Firstly, it has been observed that the high-dimensional representations present in neural networks naturally tend to behave in a linear way [32, 20], even remaining consistent under arithmetic operations such as addition and subtraction [22]. Secondly, neural networks have been successfully trained under a wide range of adversarial regularisation constraints, including sparse internal states [23], stochastically masked units [28] and randomly perturbed weights [1]. This leads us to believe that the relatively benign constraint of approximately linear representations will not be too damaging. Thirdly, as training converges, the tendency for both mean-field and stochastic latent variables is to concentrate all the probability mass on a single value. In this case that yields a standard RNN with each input duplicated a variable, but deterministic, number of times, rendering the linearity assumption irrelevant.

Figure 1: RNN Computation Graph. An RNN unrolled over two input steps (separated by vertical dotted lines). The input and output weights Wx, Wy, and the state transition operator S are shared over all steps.

Figure 2: RNN Computation Graph with Adaptive Computation Time. The graph is equivalent to Figure 1, only with each state and output computation expanded to a variable number of intermediate updates. Arrows touching boxes denote operations applied to all units in the box, while arrows leaving boxes denote summations over all units in the box.
A diagram of the unrolled computation graph of a standard RNN is illustrated in Figure 1, while Figure 2 provides the equivalent diagram for an RNN trained with ACT.
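To make the halting mechanism concrete, the following is a minimal NumPy sketch of one mean-field ACT input step following Equations (3)-(9), with a hard cap M on the number of updates (introduced as Equation (13) in the next subsection); the transition function S and all names and shapes are our own illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def act_step(x_t, s_prev, params, eps=0.01, M=100):
    """One ACT input step: ponder until the halting mass reaches 1 - eps."""
    S, Wy, by, Wh, bh = params  # S(state, input) -> state, plus output/halting maps
    states, outputs, halts = [], [], []
    s = s_prev
    for n in range(1, M + 1):
        flag = 1.0 if n == 1 else 0.0           # delta_{n,1}: marks a fresh input
        s = S(s, np.append(x_t, flag))          # Equation (3): intermediate state
        states.append(s)
        outputs.append(Wy @ s + by)             # Equation (4): intermediate output
        halts.append(float(sigmoid(Wh @ s + bh)))  # Equation (5): halting unit
        if sum(halts) >= 1.0 - eps:             # Equation (7): N(t) reached
            break
    R = 1.0 - sum(halts[:-1])                   # Equation (8): remainder
    p = np.array(halts[:-1] + [R])              # Equation (6): halting distribution
    s_t = sum(pi * si for pi, si in zip(p, states))   # Equation (9): mean-field state
    y_t = sum(pi * yi for pi, yi in zip(p, outputs))  # Equation (9): mean-field output
    ponder = len(p) + R    # rho_t = N(t) + R(t), used for the ponder cost (Section 2.1)
    return s_t, y_t, ponder
```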
# 2.1 Limiting Computation Time
If no constraints are placed on the number of updates $\mathcal{R}$ can take at each step it will naturally tend to "ponder" each input for as long as possible (so as to avoid making predictions and incurring errors). We therefore require a way of limiting the amount of computation the network performs. Given a length $T$ input sequence $\mathbf{x}$, define the ponder sequence $(\rho_1, \ldots, \rho_T)$ of $\mathcal{R}$ as
$$\rho_t = N(t) + R(t) \qquad (10)$$
and the ponder cost P(x) as
$$\mathcal{P}(\mathbf{x}) = \sum_{t=1}^{T} \rho_t \qquad (11)$$
Since $R(t) \in (0,1)$, $\mathcal{P}(\mathbf{x})$ is an upper bound on the (non-differentiable) property we ultimately want to reduce, namely the total computation $\sum_{t=1}^{T} N(t)$ during the sequence.
We can encourage the network to minimise P(x) by modifying the sequence loss function L(x, y) used for training:
$$\hat{\mathcal{L}}(\mathbf{x}, \mathbf{y}) = \mathcal{L}(\mathbf{x}, \mathbf{y}) + \tau \mathcal{P}(\mathbf{x}) \qquad (12)$$
where $\tau$ is a time penalty parameter that weights the relative cost of computation versus error. As we will see in the experiments section the behaviour of the network is quite sensitive to the value of $\tau$, and it is not obvious how to choose a good value. If computation time and prediction error can be meaningfully equated (for example if the relative financial cost of both were known) a more principled technique for selecting $\tau$ should be possible.
To prevent very long sequences at the beginning of training (while the network is learning how to use the halting unit) the bias term $b_h$ can be initialised to a positive value. In addition, a hard limit $M$ on the maximum allowed value of $N(t)$ can be imposed to avoid excessive space and time costs. In this case Equation (7) is modified to
$$N(t) = \min\Big\{ M,\ \min\Big\{ n' : \sum_{n=1}^{n'} h_t^n \ge 1 - \epsilon \Big\} \Big\} \qquad (13)$$
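Putting these pieces together, the penalised sequence loss of Equations (10)-(12) can be sketched as below, reusing the hypothetical act_step from the earlier example; task_loss is a stand-in for the sequence loss $\mathcal{L}(\mathbf{x}, \mathbf{y})$.

```python
# A minimal training-loss sketch, assuming act_step from the earlier example.
def act_loss(xs, targets, s0, params, task_loss, tau=1e-3):
    s, ponder_cost, outputs = s0, 0.0, []
    for x_t in xs:
        s, y_t, rho_t = act_step(x_t, s, params)  # rho_t = N(t) + R(t), Eq. (10)
        outputs.append(y_t)
        ponder_cost += rho_t                      # P(x), Equation (11)
    return task_loss(outputs, targets) + tau * ponder_cost  # Equation (12)
```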
# 2.2 Error Gradients
The ponder costs $\rho_t$ are discontinuous with respect to the halting probabilities at the points where $N(t)$ increments or decrements (that is, when the summed probability mass up to some $n$ either decreases below or increases above $1-\epsilon$). However they are continuous away from those points, as $N(t)$ remains constant and $R(t)$ is a linear function of the probabilities. In practice we simply ignore the discontinuities by treating $N(t)$ as constant and minimising $R(t)$ everywhere.
Given this approximation, the gradient of the ponder cost with respect to the halting activations is straightforward:
$$\frac{\partial \mathcal{P}(\mathbf{x})}{\partial h_t^n} = \begin{cases} 0 & \text{if } n = N(t) \\ -1 & \text{otherwise} \end{cases} \qquad (14)$$
For a stochastic ACT network, a more natural halting distribution than the one described in Equations (6) to (8) would be to simply treat $h_t^n$ as the probability of halting at step $n$, in which case $p_t^n = h_t^n \prod_{n'=1}^{n-1}(1 - h_t^{n'})$. One could then set $\rho_t = \sum_n n\, p_t^n$, i.e. the expected ponder time under the stochastic distribution. However experiments show that networks trained to minimise expected rather than total halting time learn to "cheat" in the following ingenious way: they set $h_t^1$ to a value just below the halting threshold, then keep $h_t^n = 0$ until some $N(t)$ when they set $h_t^{N(t)}$ high enough to ensure they halt. In this case $p_t^{N(t)} < p_t^1$, so the states and outputs at $n = N(t)$ have much lower weight in the mean field updates (Equation (9)) than those at $n = 1$; however by making the magnitudes of the states and output vectors much larger at $N(t)$ than $n = 1$ the network can still ensure that the update is dominated by the final vectors, despite having paid a low ponder penalty.
and hence
$$\frac{\partial \hat{\mathcal{L}}(\mathbf{x}, \mathbf{y})}{\partial h_t^n} = \frac{\partial \mathcal{L}(\mathbf{x}, \mathbf{y})}{\partial h_t^n} - \begin{cases} 0 & \text{if } n = N(t) \\ \tau & \text{otherwise} \end{cases} \qquad (15)$$
The halting activations only influence $\mathcal{L}$ via their effect on the halting probabilities, therefore
$$\frac{\partial \mathcal{L}(\mathbf{x}, \mathbf{y})}{\partial h_t^n} = \sum_{n'=1}^{N(t)} \frac{\partial \mathcal{L}(\mathbf{x}, \mathbf{y})}{\partial p_t^{n'}} \frac{\partial p_t^{n'}}{\partial h_t^n} \qquad (16)$$
Furthermore, since the halting probabilities only influence $\mathcal{L}$ via their effect on the states and outputs, it follows from Equation (9) that
$$\frac{\partial \mathcal{L}(\mathbf{x}, \mathbf{y})}{\partial p_t^n} = \frac{\partial \mathcal{L}(\mathbf{x}, \mathbf{y})}{\partial y_t} y_t^n + \frac{\partial \mathcal{L}(\mathbf{x}, \mathbf{y})}{\partial s_t} s_t^n \qquad (17)$$
while, from Equations (6) and (8)
$$\frac{\partial p_t^{n'}}{\partial h_t^n} = \begin{cases} \delta_{n,n'} & \text{if } n' < N(t) \text{ and } n < N(t) \\ -1 & \text{if } n' = N(t) \text{ and } n < N(t) \\ 0 & \text{if } n = N(t) \end{cases} \qquad (18)$$
Combining Equations (15), (17) and (18) gives, for $n < N(t)$
$$\frac{\partial \hat{\mathcal{L}}(\mathbf{x}, \mathbf{y})}{\partial h_t^n} = \frac{\partial \mathcal{L}(\mathbf{x}, \mathbf{y})}{\partial y_t}\left(y_t^n - y_t^{N(t)}\right) + \frac{\partial \mathcal{L}(\mathbf{x}, \mathbf{y})}{\partial s_t}\left(s_t^n - s_t^{N(t)}\right) - \tau \qquad (19)$$
while for $n = N(t)$
$$\frac{\partial \hat{\mathcal{L}}(\mathbf{x}, \mathbf{y})}{\partial h_t^{N(t)}} = 0 \qquad (20)$$
Thereafter the network can be differentiated as usual (e.g. with backpropagation through time [36]) and trained with gradient descent.
# 3 Experiments
We tested recurrent neural networks (RNNs) with and without ACT on four synthetic tasks and one real-world language processing task. LSTM was used as the network architecture for all experiments except one, where a simple RNN was used. However we stress that ACT is equally applicable to any recurrent architecture.
All the tasks were supervised learning problems with discrete targets and cross-entropy loss. The data for the synthetic tasks was generated online and cross-validation was therefore not needed. Similarly, the character prediction dataset was sufficiently large that the network did not overfit. The performance metric for the synthetic tasks was the sequence error rate: the fraction of examples where any mistakes were made in the complete output sequence. This metric is useful as it is trivial to evaluate without decoding. For character prediction the metric was the average log-loss of the output predictions, in units of bits per character.
Most of the training parameters were fixed for all experiments: Adam was used for optimisation with a learning rate of 10⁻⁴; the Hogwild! algorithm was used for asynchronous training with 16 threads; the initial halting unit bias $b_h$ mentioned in Equation (5) was 1; the $\epsilon$ term from Equation (7) was 0.01. The synthetic tasks were all trained for 1M iterations, where an iteration is defined as a weight update on a single thread (hence the total number of weight updates is approximately 16 times the number of iterations). The character prediction task was trained for 10K iterations. Early stopping was not used for any of the experiments.

[Figure 3: an example parity input vector and target.]

Figure 3: Parity training Example. Each sequence consists of a single input and target vector. Only 8 of the 64 input bits are shown for clarity.
A logarithmic grid search over time penalties was performed for each experiment, with 20 randomly initialised networks trained for each value of $\tau$. For the synthetic problems the range of the grid search was from $i \times 10^{-j}$ with integer $i$ in the range 1-10 and the exponent $j$ in the range 1-4. For the language modelling task, which took many days to complete, the range of $j$ was limited to 1-3 to reduce training time (lower values of $\tau$, which naturally induce more pondering, tend to give greater data efficiency but slower wall clock training time).
Unless otherwise stated the maximum computation time $M$ (Equation (13)) was set to 100. In all experiments the networks converged on learned values of $N(t)$ that were far less than $M$, which functions mainly as a safeguard against excessively long ponder times early in training.
# 3.1 Parity
Determining the parity of a sequence of binary numbers is a trivial task for a recurrent neural network [27], which simply needs to implement an internal switch that changes sign every time a one is received. For shallow feedforward networks receiving the entire sequence in one vector, however, the number of distinct input patterns, and hence difficulty of the task, grows exponentially with the number of bits. We gauged the ability of ACT to infer an inherently sequential algorithm from statically presented data by presenting large binary vectors to the network and asking it to determine the parity. By varying the number of binary bits for which parity must be calculated we were also able to assess ACT's ability to adapt the amount of computation to the difficulty of the vector.
The input vectors had 64 elements, of which a random number from 1 to 64 were randomly set to 1 or −1 and the rest were set to 0. The corresponding target was 1 if there was an odd number of ones and 0 if there was an even number of ones. Each training sequence consisted of a single input and target vector, an example of which is shown in Figure 3. The network architecture was a simple RNN with a single hidden layer containing 128 tanh units and a single sigmoidal output unit, trained with binary cross-entropy loss on minibatches of size 128. Note that without ACT the recurrent connection in the hidden layer was never used since the data had no sequential component, and the network reduced to a feedforward network with a single hidden layer.
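For reference, a generator matching this description might look as follows; the batching is our own choice and only the task definition above is taken from the text.

```python
import numpy as np

def parity_batch(batch_size=128, dim=64, rng=np.random):
    """Generate parity task examples: 1 to 64 random +/-1 entries, rest zero."""
    x = np.zeros((batch_size, dim))
    y = np.zeros(batch_size)
    for b in range(batch_size):
        k = rng.randint(1, dim + 1)                    # number of nonzero bits
        idx = rng.choice(dim, size=k, replace=False)   # their positions
        x[b, idx] = rng.choice([-1.0, 1.0], size=k)
        y[b] = (x[b] == 1.0).sum() % 2                 # 1 if odd number of ones
    return x, y
```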
Figure 4 demonstrates that the network was unable to reliably solve the problem without ACT, with a mean of almost 40% error compared to 50% for random guessing. For penalties of 0.03 and below the mean error was below 5%. Figure 5 reveals that the solutions were both more rapid and more accurate with lower time penalties. It also highlights the relationship between the time penalty, the classification error rate and the average ponder time per input. The variance in ponder time for low $\tau$ networks is very high, indicating that many correct solutions with widely varying runtime can be discovered. We speculate that progressively higher $\tau$ values lead the network to compute the parities of successively larger chunks of the input vector at each ponder step, then iteratively combine these calculations to obtain the parity of the complete vector.

[Figure 4: bar chart of sequence error rate versus time penalty, including networks without ACT.]

Figure 4: Parity Error Rates. Bar heights show the mean error rates for different time penalties at the end of training. The error bars show the standard error in the mean.

[Figure 5: left, sequence error rate versus training iterations for each time penalty; right, error rate versus mean ponder, with a dotted line marking the mean error without ACT.]

Figure 5: Parity Learning Curves and Error Rates Versus Ponder Time. Left: faint coloured curves show the errors for individual runs. Bold lines show the mean errors over all 20 runs for each $\tau$ value. 'Iterations' is the number of gradient updates per asynchronous worker. Right: small circles represent individual runs after training is complete, large circles represent the mean over 20 runs for each $\tau$ value. 'Ponder' is the mean number of computation steps per input timestep (minimum 1). The black dotted line shows the mean error for the networks without ACT. The height of the ellipses surrounding the mean values represents the standard error over error rates for that value of $\tau$, while the width shows the standard error over ponder times.
Figure 6 shows that for the networks without ACT and those with overly high time penalties, the error rate increases sharply with the difficulty of the task (where difficulty is defined as the number of bits whose parity must be determined), while the amount of ponder remains roughly constant. For the more successful networks, with intermediate $\tau$ values, ponder time appears to grow linearly with difficulty, with a slope that generally increases as $\tau$ decreases. Even for the best networks the error rate increased somewhat with difficulty. For some of the lowest $\tau$ networks there is a dramatic increase in ponder after about 32 bits, suggesting an inefficient algorithm.
# 3.2 Logic
Like parity, the logic task tests if an RNN with ACT can sequentially process a static vector. Unlike parity it also requires the network to internally transfer information across successive input timesteps, thereby testing whether ACT can propagate coherent internal states.
[Figure 6: ponder (left) and sequence error rate (right) versus difficulty, one curve per time penalty and for networks without ACT.]

Figure 6: Parity Ponder Time and Error Rate Versus Input Difficulty. Faint lines are individual runs, bold lines are means over 20 networks. 'Difficulty' is the number of bits in the parity vectors, with a mean over 1,000 random vectors used for each data-point.

Table 1: Binary Truth Tables for the Logic Task

| P | Q | NOR | Xq | ABJ | XOR | NAND | AND | XNOR | if/then | then/if | OR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| T | T | F | F | F | F | F | T | T | T | T | T |
| T | F | F | F | T | T | T | F | F | F | T | T |
| F | T | F | T | F | T | T | F | F | T | F | T |
| F | F | T | F | F | F | T | F | T | T | T | F |

Each input sequence consists of a random number from 1 to 10 of size 102 input vectors. The first two elements of each input represent a pair of binary numbers; the remainder of the vector is divided up into 10 chunks of size 10. The first B chunks, where B is a random number from 1 to 10, contain one-hot representations of randomly chosen numbers between 1 and 10; each of these numbers corresponds to an index into the subset of binary logic gates whose truth tables are listed in Table 1. The remaining $10 - B$ chunks were zeroed to indicate that no further binary operations were defined for that vector. The binary target $b_{B+1}$ for each input is the truth value yielded by recursively applying the B binary gates in the vector to the two initial bits $b_1, b_0$. That is, for $1 \le i \le B$:
$$b_{i+1} = T_i(b_i, b_{i-1}) \qquad (21)$$
where $T_i(\cdot, \cdot)$ is the truth table indexed by chunk $i$ in the input vector.
For the first vector in the sequence, the two input bits $b_0, b_1$ were randomly chosen to be false (0) or true (1) and assigned to the first two elements in the vector. For subsequent vectors, only $b_1$ was random, while $b_0$ was implicitly equal to the target bit from the previous vector (for the purposes of calculating the current target bit), but was always set to zero in the input vector. To solve the task, the network therefore had to learn both how to calculate the sequence of binary operations represented by the chunks in each vector, and how to carry the final output of that sequence over to the next timestep. An example input-target sequence pair is shown in Figure 7.
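As an illustration of the target computation in Equation (21), the sketch below applies a chain of gates to a pair of initial bits; the truth tables are encoded by us from Table 1, keyed by (P, Q), with 1 for true and 0 for false.

```python
# Truth tables from Table 1, keyed by (P, Q); encoded by us for illustration.
GATES = {
    "NOR":     {(1, 1): 0, (1, 0): 0, (0, 1): 0, (0, 0): 1},
    "Xq":      {(1, 1): 0, (1, 0): 0, (0, 1): 1, (0, 0): 0},
    "ABJ":     {(1, 1): 0, (1, 0): 1, (0, 1): 0, (0, 0): 0},
    "XOR":     {(1, 1): 0, (1, 0): 1, (0, 1): 1, (0, 0): 0},
    "NAND":    {(1, 1): 0, (1, 0): 1, (0, 1): 1, (0, 0): 1},
    "AND":     {(1, 1): 1, (1, 0): 0, (0, 1): 0, (0, 0): 0},
    "XNOR":    {(1, 1): 1, (1, 0): 0, (0, 1): 0, (0, 0): 1},
    "if/then": {(1, 1): 1, (1, 0): 0, (0, 1): 1, (0, 0): 1},
    "then/if": {(1, 1): 1, (1, 0): 1, (0, 1): 0, (0, 0): 1},
    "OR":      {(1, 1): 1, (1, 0): 1, (0, 1): 1, (0, 0): 0},
}

def logic_target(b0, b1, gate_names):
    """Apply Equation (21): b_{i+1} = T_i(b_i, b_{i-1}) for each gate in turn."""
    prev, cur = b0, b1
    for name in gate_names:
        prev, cur = cur, GATES[name][(cur, prev)]
    return cur

# Example: b2 = XOR(b1=1, b0=0) = 1, then b3 = AND(b2=1, b1=1) = 1.
assert logic_target(0, 1, ["XOR", "AND"]) == 1
```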
The network architecture was single-layer LSTM with 128 cells. The output was a single sigmoidal unit, trained with binary cross-entropy, and the minibatch size was 16.
Figure 8 shows that the network reaches a minimum sequence error rate of around 0.2 without ACT (compared to 0.5 for random guessing), and virtually zero error for all $\tau \le 0.01$. From Figure 9 it can be seen that low $\tau$ ACT networks solve the task very quickly, requiring about 10,000 training iterations. For higher $\tau$ values ponder time reduces to 1, at which point the networks trained with ACT behave identically to those without. For lower $\tau$ values, the spread of ponder values, and hence computational cost, is quite large. Again we speculate that this is due to the network learning more or less "chunked" solutions in which composite truth tables are learned for multiple successive logic operations. This is somewhat supported by the clustering of the lowest $\tau$ networks around a ponder time of 5-6, which is approximately the mean number of logic gates applied per sequence, and hence the minimum number of computations the network would need if calculating single binary operations at a time.

[Figure 7: example logic input and target sequences, showing the gate chunks and input bits.]

Figure 7: Logic training Example. Both the input and target sequences consist of 3 vectors. For simplicity only 2 of the 10 possible logic gates represented in the input are shown, and each is restricted to one of the first 3 gates in Table 1 (NOR, Xq, and ABJ). The segmentation of the input vectors is shown on the left and the recursive application of Equation (21) required to determine the targets (and subsequent b0 values) is shown in italics above the target vectors.

[Figure 8: bar chart of sequence error rate versus time penalty.]

Figure 8: Logic Error Rates.
Figure 10 shows a surprisingly high ponder time for the least difficult inputs, with some networks taking more than 10 steps to evaluate a single logic gate. From 5 to 10 logic gates, ponder gradually increases with difficulty as expected, suggesting that a qualitatively different solution is learned for the two regimes. This is supported by the error rates for the non-ACT and high $\tau$ networks, which increase abruptly after 5 gates. It may be that 5 is the upper limit on the number of successive gates the network can learn as a single composite operation, and thereafter it is forced to apply an iterative algorithm.
# 3.3 Addition
The addition task presents the network with an input sequence of 1 to 5 size 50 input vectors. Each vector represents a D digit number, where D is drawn randomly from 1 to 5, and each digit is drawn randomly from 0 to 9. The first 10D elements of the vector are a concatenation of one-hot encodings of the D digits in the number, and the remainder of the vector is set to 0. The required output is the cumulative sum of all inputs up to the current one, represented as a set of 6 simultaneous classifications for the 6 possible digits in the sum. There is no target for the first vector in the sequence, as no sums have yet been calculated. Because the previous sum must be carried over by the network, this task again requires the internal state of the network to remain coherent. Each classification is modelled by a size 11 softmax, where the first 10 classes are the digits and the 11th is a special marker used to indicate that the number is complete. An example input-target pair is shown in Figure 11.
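A sketch of how such an example could be generated is given below; the digit ordering (most significant first) and the helper names are our own assumptions on top of the task description.

```python
import numpy as np

def digits_to_vector(digits, dim=50):
    """Concatenate one-hot encodings of D digits into the first 10*D slots."""
    v = np.zeros(dim)
    for i, d in enumerate(digits):
        v[10 * i + d] = 1.0
    return v

def sum_to_target(total):
    """Encode a cumulative sum as 6 classifications over 11 classes.

    Classes 0-9 are digits; class 10 marks positions past the end of the number.
    """
    ds = [int(c) for c in str(total)]
    return np.array(ds + [10] * (6 - len(ds)))

def addition_example(rng=np.random):
    n_steps = rng.randint(1, 6)  # 1 to 5 vectors per sequence
    inputs, targets, total = [], [], 0
    for t in range(n_steps):
        number = [rng.randint(0, 10) for _ in range(rng.randint(1, 6))]  # D in 1..5
        inputs.append(digits_to_vector(number))
        total += int("".join(map(str, number)))
        if t > 0:  # no target for the first vector
            targets.append(sum_to_target(total))
    return inputs, targets
```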
The network was single-layer LSTM with 512 memory cells. The loss function was the joint cross-entropy of all 6 targets at each time-step where targets were present and the minibatch size
10
07 0.30 Time Penalty â"oooor 0.6 0.25 2 2 © 05 Ch os [aa a 8 6 e 04 i 0.15 w w 8 8 0.3 0.10 fat c 3 g a iow oO 0.2 o 0.05 n n 0.0 oO 200000 400000 600000 800000 1000000 7 o 2 4 6 8 10 12 14 â_âo1 i without Act Iterations Ponder
Figure 9: Logic Learning Curves and Error Rates Versus Ponder Time.
Figure 10: Logic Ponder Time and Error Rate Versus Input Difficulty. "Difficulty" is the number of logic gates in each input vector; all sequences were length 5.
Figure 11: Addition training Example. Each digit in the input sequence is represented by a size 10 one hot encoding. Unused input digits, marked "-", are represented by a vector of 10 zeros. The black vector at the start of the target sequence indicates that no target was required for that step. The target digits are represented as 1-of-11 classes, where the 11th class, marked "*", is used for digits beyond the end of the target number.
Figure 12: Addition Error Rates.
Figure 13: Addition Learning Curves and Error Rates Versus Ponder Time.
The minibatch size was 32. The maximum ponder M was set to 20 for this task, as it was found that some networks had very high ponder times early in training.
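A sketch of the loss described in this section: the joint cross-entropy of 6 size-11 softmaxes, with steps that have no target masked out. The normalisation by the number of unmasked steps is a choice made for this illustration.

```python
import numpy as np

def joint_xent(logits, targets, mask):
    """logits: (T, 6, 11); targets: (T, 6) int classes; mask: (T,) 1 where a target exists."""
    # Log-softmax over the 11 classes of each of the 6 digit classifiers.
    z = logits - logits.max(axis=-1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    T, K, _ = logits.shape
    picked = logp[np.arange(T)[:, None], np.arange(K)[None, :], targets]  # (T, 6)
    return -(picked.sum(axis=1) * mask).sum() / max(mask.sum(), 1)

T = 3
loss = joint_xent(np.random.randn(T, 6, 11),
                  np.random.randint(0, 11, size=(T, 6)),
                  np.array([0.0, 1.0, 1.0]))  # no target at the first step
```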
The results in Figure 12 show that the task was perfectly solved by the ACT networks for all values of τ in the grid search. Unusually, networks with higher τ solved the problem with fewer training examples. Figure 14 demonstrates that the relationship between the ponder time and the number of digits was approximately linear for most of the ACT networks, and that for the most efficient networks (with the highest τ values) the slope of the line was close to 1, which matches our expectations that an efficient long addition algorithm should need one computation step per digit. Figure 15 shows how the ponder time is distributed during individual addition sequences, providing further evidence of an approximately linear-time long addition algorithm.
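The near-unit slope claim can be checked mechanically by fitting a line to mean ponder versus digit count. The arrays below are placeholders standing in for measured values, not numbers from the paper.

```python
import numpy as np

digits = np.arange(1, 6)                            # input difficulty: number of digits
mean_ponder = np.array([1.2, 2.1, 3.0, 4.1, 5.0])   # placeholder measurements

slope, intercept = np.polyfit(digits, mean_ponder, deg=1)
print(f"ponder ~= {slope:.2f} * digits + {intercept:.2f}")  # slope near 1 expected
```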
# 3.4 Sort
The sort task requires the network to sort sequences of 2 to 15 numbers drawn from a standard normal distribution in ascending order. The experiments considered so far have been designed to favour ACT by compressing sequential information into single vectors, and thereby requiring the use of multiple computation steps to unpack them. For the sort task a more natural sequential representation was used: the random numbers were presented one at a time as inputs, and the required output was the sequence of indices into the number sequence placed in sorted order; an example is shown in Figure 16. We were particularly curious to see how the number of ponder steps scaled with the number of elements to be sorted, knowing that efficient sorting algorithms have O(N log N) computational cost.
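A minimal sketch of one sort-task example, following the description above and the layout in the Figure 16 caption. Placing the end-of-sequence flag on the last value to be sorted, and using -1 as a "no target" marker, are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sort_example(n):
    """Sort task: n Gaussian samples in, argsort indices out (after the inputs)."""
    values = rng.standard_normal(n)
    seq_len = 2 * n                       # input phase, then output phase (assumed)
    inputs = np.zeros((seq_len, 2))       # [value, end-of-sort-sequence flag]
    inputs[:n, 0] = values
    inputs[n - 1, 1] = 1.0                # flag marks the last value to sort
    order = np.argsort(values)            # indices of inputs in ascending order
    targets = np.full(seq_len, -1)        # -1: no target during the input phase
    targets[n:] = order
    return inputs, targets

x, t = sort_example(5)
```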
The network was a single-layer LSTM with 512 cells.
Figure 14: Addition Ponder Time and Error Rate Versus Input Difficulty. "Difficulty" is the number of digits in each input vector; all sequences were length 3.
Figure 15: Ponder Time During Three Addition Sequences. The input sequence is shown along the bottom x-axis and the network output sequence is shown along the top x-axis. The ponder time ρt at each input step is shown by the black lines; the actual number of computational steps taken at each point is ρt rounded up to the next integer. The grey lines show the total number of digits in the two numbers being summed at each step; this appears to give a rough lower bound on the ponder time, suggesting an internal algorithm that is approximately linear in the number of digits. All plots were created using the same network, trained with τ = 9e−4.
The output layer was a size 15 softmax, trained with cross-entropy to classify the indices of the sorted inputs. The minibatch size was 16.
Figure 17 shows that the advantage of using ACT is less dramatic for this task than the previous three, but still substantial (from around 12% error without ACT to around 6% for the best τ value). However from Figure 18 it is clear that these gains come at a heavy computational cost, with the best networks requiring roughly 9 times as much computation as those without ACT. Not surprisingly, Figure 19 shows that the error rate grew rapidly with the sequence length for all networks. It also indicates that the better networks had a sublinear growth in computations per input step with sequence length, though whether this indicates a logarithmic time algorithm is unclear. One problem with the sort task was that the Gaussian samples were sometimes very close together, making it hard for the network to determine which was greater; enforcing a minimum separation between successive values would probably be beneficial.
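The minimum-separation fix suggested above could be implemented with simple rejection sampling. This is a hedged sketch of one way to do it, not something the paper evaluated; the threshold and retry budget are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def separated_gaussians(n, min_sep=0.1, max_tries=1000):
    """Draw n standard-normal samples whose gaps in sorted order all exceed min_sep."""
    for _ in range(max_tries):
        values = rng.standard_normal(n)
        gaps = np.diff(np.sort(values))
        if n < 2 or gaps.min() > min_sep:
            return values
    raise RuntimeError("could not satisfy the separation constraint")

values = separated_gaussians(15, min_sep=0.1)
```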
Figure 20 shows the ponder time during three sort sequences of varying length. As can be seen, there is a large spike in ponder time near (though not precisely at) the end of the input sequence, presumably when the majority of the sort comparisons take place. Note that the spike is much higher for the longer two sequences than the length 5 one, again pointing to an algorithm that is nonlinear in sequence length (the average ponder per timestep is nonetheless lower for longer sequences, as little pondering is done away from the spike).
Figure 16: Sort training Example. Each size 2 input vector consists of one real number and one binary flag to indicate the end of sequence to be sorted; inputs following the sort sequence are set to zero and marked in black. No targets are present until after the sort sequence; thereafter the size 15 target vectors represent the sorted indices of the input sequence.
Figure 17: Sort Error Rates.
Figure 18: Sort Learning Curves and Error Rates Versus Ponder Time.
Figure 19: Sort Ponder Time and Error Rate Versus Input Difficulty. "Difficulty" is the length of the sequence to be sorted.
Figure 20: Ponder Time During Three Sort Sequences. The input sequences to be sorted are shown along the bottom x-axes and the network output sequences are shown along the top x-axes. All plots created using the same network, trained with τ = 10^−3.
Figure 21: Wikipedia Error Rates.
# 3.5 Wikipedia Character Prediction
The Wikipedia task is character prediction on text drawn from the Hutter prize Wikipedia dataset [15]. Following previous RNN experiments on the same data [8], the raw unicode text was used, including XML tags and markup characters, with one byte presented per input timestep and the next byte predicted as a target. No validation set was used for early stopping, as the networks were unable to overfit the data, and all error rates are recorded on the training set. Sequences of 500 consecutive bytes were randomly chosen from the training set and presented to the network, whose internal state was reset to 0 at the start of each sequence.
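A hedged sketch of this data pipeline: random 500-byte windows with next-byte targets. The file path "enwik8.raw" is a hypothetical local copy of the Hutter prize dump, used only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# "enwik8.raw" is a hypothetical local path to the Hutter prize Wikipedia dump.
data = np.frombuffer(open("enwik8.raw", "rb").read(), dtype=np.uint8)

def sample_sequence(length=500):
    """A random window of consecutive bytes; inputs and next-byte targets."""
    start = int(rng.integers(0, len(data) - length - 1))
    window = data[start : start + length + 1].astype(np.int64)
    return window[:-1], window[1:]   # network state is reset at each new window

xs, ys = sample_sequence()
```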
LSTM networks were used with a single layer of 1500 cells and a size 256 softmax classification layer. As can be seen from Figures 21 and 22, the error rates are fairly similar with and without ACT, and across values of τ (although the learning curves suggest that the ACT networks are somewhat more data efficient). Furthermore the amount of ponder per input is much lower than for the other problems, suggesting that the advantages of extra computation were slight for this task. However Figure 23 reveals an intriguing pattern of ponder allocation while processing a sequence. Character prediction networks trained with ACT consistently pause at spaces between words, and pause for longer at "boundary" characters such as commas and full stops. We speculate that the extra computation is used to make predictions about the next "chunk" in the data (word, sentence, clause), much as humans have been found to do in self-paced reading experiments [16]. This suggests that ACT could be useful for inferring implicit boundaries or transitions in sequence data. Alternative measures for inferring transitions include the next-step prediction loss and predictive entropy, both of which tend to increase during harder predictions.
Figure 22: Wikipedia Learning Curves (Zoomed) and Error Rates Versus Ponder Time.
[Figure 23 shows three aligned panels, Entropy (bits), Loss (bits) and Ponder, over the text "and the many people caught in the middle of the two. In recent history, with scientists learning".]
Figure 23: Ponder Time, Prediction Loss and Prediction Entropy During a Wikipedia Text Sequence. Plot created using a network trained with τ = 6e−3.
However, as can be seen from the figure, they are a less reliable indicator of boundaries, and are not likely to increase at points such as full stops and commas, as these are invariably followed by space characters. More generally, loss and entropy only indicate the difficulty of the current prediction, not the degree to which the current input is likely to impact future predictions.
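The three per-step signals being compared here can be computed side by side. This is a hedged sketch assuming the model's per-step softmax outputs, targets and ponder values are already available as arrays; the peak-detection threshold is an arbitrary illustration.

```python
import numpy as np

def boundary_signals(probs, targets, ponder):
    """Per-step entropy, loss and ponder; probs: (T, 256), targets: (T,), ponder: (T,)."""
    p = np.clip(probs, 1e-12, 1.0)
    entropy = -(p * np.log2(p)).sum(axis=1)                # predictive entropy (bits)
    loss = -np.log2(p[np.arange(len(targets)), targets])   # next-step prediction loss
    return entropy, loss, ponder

def ponder_peaks(ponder, k=1.5):
    """Candidate boundaries: steps whose ponder is unusually high."""
    return np.where(ponder > ponder.mean() + k * ponder.std())[0]
```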
Furthermore Figure 24 reveals that, as well as being an effective detector of non-text transition markers such as the opening brackets of XML tags, ACT does not increase computation time during random or fundamentally unpredictable sequences like the two ID numbers. This is unsurprising, as doing so will not improve its predictions. In contrast, both entropy and loss are inevitably high for unpredictable data. We are therefore hopeful that computation time will provide a better way to distinguish between structure and noise (or at least data perceived by the network as structure or noise) than existing measures of predictive difficulty.
# 4 Conclusion
This paper has introduced Adaptive Computation Time (ACT), a method that allows recurrent neural networks to learn how many updates to perform for each input they receive.
[Figure 24 shows three aligned panels, Entropy (bits), Loss (bits) and Ponder, over the sequence "» United States security treaty</title> <id>1157</id> <revision> <id>15899658</id>".]
Figure 24: Ponder Time, Prediction loss and Prediction Entropy During a Wikipedia Sequence Containing XML Tags. Created using the same network as Figure 23.
Experiments on synthetic data prove that ACT can make otherwise inaccessible problems straightforward for RNNs to learn, and that it is able to dynamically adapt the amount of computation it uses to the demands of the data. An experiment on real data suggests that the allocation of computation steps learned by ACT can yield insight into both the structure of the data and the computational demands of predicting it.
ACT promises to be particularly interesting for recurrent architectures containing soft attention modules [2, 10, 34, 12], which it could enable to dynamically adapt the number of glances or internal operations they perform at each time-step.
One weakness of the current algorithm is that it is quite sensitive to the time penalty parameter that controls the relative cost of computation time versus prediction error. An important direction for future work will be to find ways of automatically determining and adapting the trade-off between accuracy and speed.
# Acknowledgments
The author wishes to thank Ivo Danihelka, Greg Wayne, Tim Harley, Malcolm Reynolds, Jacob Menick, Oriol Vinyals, Joel Leibo, Koray Kavukcuoglu and many others on the DeepMind team for valuable comments and suggestions, as well as Albert Zeyer, Martin Abadi, Dario Amodei, Eugene Brevdo and Christopher Olah for pointing out the discontinuity in the ponder cost, which was erroneously described as smooth in an earlier version of the paper.
# References
[1] G. An. The effects of adding noise during backpropagation training on a generalization performance. Neural Computation, 8(3):643–674, 1996.

[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

[3] E. Bengio, P.-L. Bacon, J. Pineau, and D. Precup. Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015.

[4] D. C. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. arXiv preprint arXiv:1202.2745, 2012.
[5] G. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):30–42, Jan. 2012.

[6] L. Denoyer and P. Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014.

[7] S. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575, 2016.

[8] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

[9] A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645–6649. IEEE, 2013.

[10] A. Graves, G. Wayne, and I. Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.

[11] E. Grefenstette, K. M. Hermann, M. Suleyman, and P. Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1819–1827, 2015.

[12] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

[13] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.

[14] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

[15] M. Hutter. Universal artificial intelligence. Springer, 2005.

[16] M. A. Just, P. A. Carpenter, and J. D. Woolley. Paradigms and processes in reading comprehension. Journal of Experimental Psychology: General, 111(2):228, 1982.

[17] N. Kalchbrenner, I. Danihelka, and A. Graves. Grid long short-term memory. arXiv preprint arXiv:1507.01526, 2015.

[18] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.

[20] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053, 2014.

[21] M. Li and P. Vitányi. An introduction to Kolmogorov complexity and its applications. Springer Science & Business Media, 2013.

[22] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
[23] B. A. Olshausen et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.

[24] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693–701, 2011.

[25] S. Reed and N. de Freitas. Neural programmer-interpreters. Technical Report arXiv:1511.06279, 2015.

[26] J. Schmidhuber. Self-delimiting neural networks. arXiv preprint arXiv:1210.0118, 2012.

[27] J. Schmidhuber and S. Hochreiter. Guessing can outperform many long time lag algorithms. Technical report, 1996.

[28] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.

[29] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. In Advances in Neural Information Processing Systems, pages 2368–2376, 2015.

[30] R. K. Srivastava, B. R. Steunebrink, and J. Schmidhuber. First experiments with PowerPlay. Neural Networks, 41:130–136, 2013.

[31] S. Sukhbaatar, J. Weston, R. Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439, 2015.

[32] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. arXiv preprint arXiv:1409.3215, 2014.

[33] O. Vinyals, S. Bengio, and M. Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015.

[34] O. Vinyals, M. Fortunato, and N. Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pages 2674–2682, 2015.

[35] A. J. Wiles. Modular elliptic curves and Fermat's Last Theorem. Annals of Mathematics, 141:141, 1995.

[36] R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. Back-propagation: Theory, Architectures and Applications, pages 433–486, 1995.
# A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
Junyoung Chung (Université de Montréal) junyoung.chung@umontreal.ca

Kyunghyun Cho (New York University)

Yoshua Bengio (Université de Montréal, CIFAR Senior Fellow)
# Abstract
The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder-decoder with a subword-level encoder and a character-level decoder on four language pairs, En-Cs, En-De, En-Ru and En-Fi, using the parallel corpora from WMT'15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru. We find these results to be a strong evidence that neural machine translation can indeed learn to translate at the character-level and that in fact, it benefits from doing so.
# 1 Introduction

The existing machine translation systems have relied almost exclusively on word-level modelling with explicit segmentation. This is mainly due to the issue of data sparsity which becomes much more severe, especially for n-grams, when a sentence is represented as a sequence of characters rather than words, as the length of the sequence grows significantly. In addition to data sparsity, we often have a priori belief that a word, or its segmented-out lexeme, is a basic unit of meaning, making it natural to approach translation as mapping from a sequence of source-language words to a sequence of target-language words.

This has continued with the more recently proposed paradigm of neural machine translation, although neural networks do not suffer from character-level modelling and rather suffer from the issues specific to word-level modelling, such as the increased computational complexity from a very large target vocabulary (Jean et al., 2015; Luong et al., 2015b). Therefore, in this paper, we address a question of whether neural machine translation can be done directly on a sequence of characters without any explicit word segmentation.

To answer this question, we focus on representing the target side as a character sequence. We evaluate neural machine translation models with a character-level decoder on four language pairs from WMT'15 to make our evaluation as convincing as possible. We represent the source side as a sequence of subwords extracted using byte-pair encoding from Sennrich et al. (2015), and vary the target side to be either a sequence of subwords or characters. On the target side, we further design a novel recurrent neural network (RNN), called bi-scale recurrent network, that better handles multiple timescales in a sequence, and test it in addition to a naive, stacked recurrent neural network.

On all of the four language pairs, En-Cs, En-De, En-Ru and En-Fi, the models with a character-level decoder outperformed the ones with a subword-level decoder. We observed a similar trend with the ensemble of each of these configurations, outperforming both the previous best neural and non-neural translation systems on En-Cs, En-De and En-Fi, while achieving a comparable result on En-Ru.

# 2 Neural Machine Translation
Neural machine translation refers to a recently proposed approach to machine translation (Forcada and Neco, 1997; Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014). This approach aims at building an end-to-end neural network that takes as input a source sentence X = (x1, . . . , xTx) and outputs its translation Y = (y1, . . . , yTy), where xt and yt' are respectively source and target symbols. This neural network is constructed as a composite of an encoder network and a decoder network.
The encoder network encodes the input sentence X into its continuous representation. In this paper, we closely follow the neural translation model proposed in Bahdanau et al. (2015) and use a bidirectional recurrent neural network, which consists of two recurrent neural networks. The forward network reads the input sentence in a forward direction: →z_t = φ(e_x(x_t), →z_{t−1}), where e_x(x_t) is a continuous embedding of the t-th input symbol, and φ is a recurrent activation function. Similarly, the reverse network reads the sentence in a reverse direction (right to left): ←z_t = φ(e_x(x_t), ←z_{t+1}). At each location in the input sentence, we concatenate the hidden states from the forward and reverse RNNs to form a context set C = {z_1, . . . , z_Tx}, where z_t = [→z_t; ←z_t].
Then the decoder computes the conditional distribution over all possible translations based on this context set. This is done by first rewriting the conditional probability of a translation: log p(Y|X) = Σ_{t'} log p(y_{t'} | y_{<t'}, X). For each conditional term in the summation, the decoder RNN updates its hidden state by

h_{t'} = φ(e_y(y_{t'−1}), h_{t'−1}, c_{t'}),   (1)

where e_y is the continuous embedding of a target symbol. c_{t'} is a context vector computed by a soft-alignment mechanism:

c_{t'} = f_align(e_y(y_{t'−1}), h_{t'−1}, C).   (2)

The soft-alignment mechanism f_align weights each vector in the context set C according to its relevance given what has been translated. The weight of each vector z_t is computed by

α_{t,t'} = (1/Z) e^{f_score(e_y(y_{t'−1}), h_{t'−1}, z_t)},   (3)

where f_score is a parametric function returning an unnormalized score for z_t given h_{t'−1} and y_{t'−1}. We use a feedforward network with a single hidden layer in this paper.1 Z is a normalization constant: Z = Σ_{k=1}^{T_x} e^{f_score(e_y(y_{t'−1}), h_{t'−1}, z_k)}. This procedure can be understood as computing the alignment probability between the t'-th target symbol and t-th source symbol.

The hidden state h_{t'}, together with the previous target symbol y_{t'−1} and the context vector c_{t'}, is fed into a feedforward neural network to result in the conditional distribution:

p(y_{t'} | y_{<t'}, X) ∝ e^{f_out(e_y(y_{t'−1}), h_{t'}, c_{t'})}.   (4)

The whole model, consisting of the encoder, decoder and soft-alignment mechanism, is then tuned end-to-end to minimize the negative log-likelihood using stochastic gradient descent.
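A hedged numpy sketch of the soft-alignment computation in Eqs. (2)-(3): a single-hidden-layer scoring network, a softmax normalization, and a weighted sum over the context set. The parameter names (W1, b1, w2), their shapes, and the use of a max-shifted softmax are assumptions made for this illustration.

```python
import numpy as np

def soft_alignment(C, h_prev, y_emb, W1, w2, b1):
    """Eqs (2)-(3): score each context vector, softmax, return the context c_t'.

    C: (Tx, dz) context set; h_prev: (dh,); y_emb: (de,).
    W1, b1, w2 parameterize the single-hidden-layer scoring network (shapes assumed).
    """
    Tx = C.shape[0]
    inp = np.concatenate([np.tile(np.concatenate([y_emb, h_prev]), (Tx, 1)), C], axis=1)
    scores = np.tanh(inp @ W1 + b1) @ w2          # f_score, one score per z_t
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                   # division by the constant Z
    return alpha @ C                              # c_t' = sum_t alpha_t z_t

# Toy shapes: Tx=4 source positions, dz=6, dh=5, de=3, hidden=8.
rng = np.random.default_rng(0)
c = soft_alignment(rng.standard_normal((4, 6)), rng.standard_normal(5),
                   rng.standard_normal(3),
                   rng.standard_normal((3 + 5 + 6, 8)), rng.standard_normal(8),
                   rng.standard_normal(8))
```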
# 3 Towards Character-Level Translation
# 3.1 Motivation
Let us revisit how the source and target sentences (X and Y) are represented in neural machine translation. For the source side of any given training corpus, we scan through the whole corpus to build a vocabulary Vx of unique tokens to which we assign integer indices. A source sentence X is then built as a sequence of the indices of such tokens belonging to the sentence, i.e., X = (x1, . . . , xTx), where xt ∈ {1, 2, . . . , |Vx|}. The target sentence is similarly transformed into a target sequence of integer indices.
Each token, or its index, is then transformed into a so-called one-hot vector of dimensionality |Vx|. All but one elements of this vector are set to 0. The only element whose index corresponds to the token's index is set to 1. This one-hot vector is the one which any neural machine translation model sees. The embedding function, ex or ey, is simply the result of applying a linear transformation (the embedding matrix) to this one-hot vector. The important property of this approach based on one-hot vectors is that the neural network is oblivious to the underlying semantics of the tokens. To the neural network, each and every token in the vocabulary is equal distance away from every other token. The semantics of those tokens are simply learned (into the embeddings) to maximize the translation quality, or the log-likelihood of the model.
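A minimal sketch of this pipeline: building a vocabulary, indexing a sentence, and applying the embedding matrix to one-hot vectors. The example corpus and embedding dimensionality are placeholders for illustration.

```python
import numpy as np

corpus = ["two sets of lights", "the lights are close to one another"]
vocab = {tok: i for i, tok in enumerate(sorted({w for s in corpus for w in s.split()}))}

def one_hot(idx, size):
    v = np.zeros(size)
    v[idx] = 1.0
    return v

E = np.random.randn(len(vocab), 16)               # embedding matrix; dimension assumed
x = [vocab[w] for w in "two sets of lights".split()]
emb = [E.T @ one_hot(i, len(vocab)) for i in x]   # identical to the row lookup E[i]
```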
1 For other possible implementations, see (Luong et al., 2015a).
This property allows us great freedom in the choice of tokens' unit. Neural networks have been shown to work well with word tokens (Bengio et al., 2001; Schwenk, 2007; Mikolov et al., 2010) but also with finer units, such as subwords (Sennrich et al., 2015; Botha and Blunsom, 2014; Luong et al., 2013) as well as symbols resulting from compression/encoding (Chitnis and DeNero, 2015). Although there have been a number of previous studies reporting the use of neural networks with characters (see, e.g., Mikolov et al. (2012) and Santos and Zadrozny (2014)), the dominant approach has been to preprocess the text into a sequence of symbols, each associated with a sequence of characters, after which the neural network is presented with those symbols rather than with characters.
More recently in the context of neural machine translation, two research groups have proposed to directly use characters. Kim et al. (2015) proposed to represent each word not as a single integer index as before, but as a sequence of characters, and use a convolutional network followed by a highway network (Srivastava et al., 2015) to extract a continuous representation of the word. This approach, which effectively replaces the embedding function ex, was adopted by Costa-Jussà and Fonollosa (2016) for neural machine translation. Similarly, Ling et al. (2015b) use a bidirectional recurrent neural network to replace the embedding functions ex and ey to respectively encode a character sequence to and from the corresponding continuous word representation. A similar, but slightly different approach was proposed by Lee et al. (2015), where they explicitly mark each character with its relative location in a word (e.g., "B"eginning and "I"ntermediate).
Despite the fact that these recent approaches work at the level of characters, it is less satisfying that they all rely on knowing how to segment characters into words. Although it is generally easy for languages like English, this is not always the case. This word segmentation procedure can be as simple as tokenization followed by some punctuation normalization, but also can be as complicated as morpheme segmentation requiring a separate model to be trained in advance (Creutz and Lagus, 2005; Huang and Zhao, 2007). Furthermore, these segmentation2 steps are often tuned or designed separately from the ultimate objective of translation quality, potentially contributing to a suboptimal quality.
2 From here on, the term segmentation broadly refers to any method that splits a given character sequence into a sequence of subword symbols.
Based on this observation and analysis, in this paper, we ask ourselves and the readers a question which should have been asked much earlier: Is it possible to do character-level translation without any explicit segmentation?
# 3.2 Why Word-Level Translation?
(1) Word as a Basic Unit of Meaning A word can be understood in two different senses. In the abstract sense, a word is a basic unit of meaning (lexeme), and in the other sense, can be understood as a "concrete word as used in a sentence." (Booij, 2012). A word in the former sense turns into that in the latter sense via a process of morphology, including inflection, compounding and derivation. These three processes do alter the meaning of the lexeme, but often it stays close to the original meaning. Because of this view of words as basic units of meaning (either in the form of lexemes or derived form) from linguistics, much of previous work in natural language processing has focused on using words as basic units of which a sentence is encoded as a sequence. Also, the potential difficulty in finding a mapping between a word's character sequence and meaning3 has likely contributed to this trend toward word-level modelling.
(2) Data Sparsity There is a further technical reason why much of previous research on machine translation has considered words as a basic unit. This is mainly due to the fact that major components in the existing translation systems, such as language models and phrase tables, are a count-based estimator of probabilities. In other words, a probability of a subsequence of symbols, or pairs of symbols, is estimated by counting the number of its occurrences in a training corpus. This approach severely suffers from the issue of data sparsity, which is due to a large state space which grows exponentially w.r.t. the length of subsequences while growing only linearly w.r.t. the corpus size. This poses a great challenge to character-level modelling, as any subsequence will be on average 4–5 times longer when characters, instead of words, are used. Indeed, Vilar et al. (2007) reported worse performance when the character sequence was directly used by a phrase-based machine translation system. More recently, Neubig et al. (2013) proposed a method to improve character-level translation with phrase-based translation systems, however, with only a limited success.
3 For instance, "quit", "quite" and "quiet" are one edit-distance away from each other but have distinct meanings.
(3) Vanishing Gradient Specifically to neural machine translation, a major reason behind the wide adoption of word-level modelling is due to the difficulty in modelling long-term dependencies with recurrent neural networks (Bengio et al., 1994; Hochreiter, 1998). As the lengths of the sentences on both sides grow when they are represented in characters, it is easy to believe that there will be more long-term dependencies that must be captured by the recurrent neural network for successful translation.
# 3.3 Why Character-Level Translation?
Why not Word-Level Translation? The most pressing issue with word-level processing is that we do not have a perfect word segmentation algorithm for any one language. A perfect segmentation algorithm needs to be able to segment any given sentence into a sequence of lexemes and morphemes. This problem is however a difficult problem on its own and often requires decades of research (see, e.g., Creutz and Lagus (2005) for Finnish and other morphologically rich languages and Huang and Zhao (2007) for Chinese). Therefore, many opt to using either a rule-based tokenization approach or a suboptimal, but still available, learning based segmentation algorithm.
The outcome of this naive, sub-optimal segmentation is that the vocabulary is often filled with many similar words that share a lexeme but have different morphology. For instance, if we apply a simple tokenization script to an English corpus, "run", "runs", "ran" and "running" are all separate entries in the vocabulary, while they clearly share the same lexeme "run". This prevents any machine translation system, in particular neural machine translation, from modelling these morphological variants efficiently.
More specifically in the case of neural machine translation, each of these morphological variants, "run", "runs", "ran" and "running", will be assigned a d-dimensional word vector, leading to four independent vectors, while it is clear that if we can segment those variants into a lexeme and other morphemes, we can model them more efficiently. For instance, we can have a d-dimensional vector for the lexeme "run" and much smaller vectors for "s" and "ing". Each of those variants will be then a composite of the lexeme vector (shared across these variants) and morpheme vectors (shared across words sharing the same suffix, for example) (Botha and Blunsom, 2014). This makes use of distributed representation, which generally yields better generalization, but seems to require an optimal segmentation, which is unfortunately almost never available.
In addition to inefficiency in modelling, there are two additional negative consequences from using (unsegmented) words. First, the translation system cannot generalize well to novel words, which are often mapped to a token reserved for an unknown word. This effectively ignores any meaning or structure of the word to be incorporated when translating. Second, even when a lexeme is common and frequently observed in the training corpus, its morphological variant may not be. This implies that the model sees this specific, rare morphological variant much less and will not be able to translate it well. However, if this rare morphological variant shares a large part of its spelling with other more common words, it is desirable for a machine translation system to exploit those common words when translating those rare variants.
Why Character-Level Translation? All of these issues can be addressed to certain extent by directly modelling characters. Although the issue of data sparsity arises in character-level translation, it is elegantly addressed by using a parametric approach based on recurrent neural networks instead of a non-parametric count-based approach. Furthermore, in recent years, we have learned how to build and train a recurrent neural network that can well capture long-term dependencies by using more sophisticated activation functions, such as long short-term memory (LSTM) units (Hochreiter and Schmidhuber, 1997) and gated recurrent units (Cho et al., 2014).
Kim et al. (2015) and Ling et al. (2015a) recently showed that by having a neural network that converts a character sequence into a word vector, we avoid the issues from having many morphological variants appearing as separate entities in a vocabulary. This is made possible by sharing the character-to-word neural network across all the unique tokens. A similar approach was applied to machine translation by Ling et al. (2015b).
These recent approaches, however, still rely on the availability of a good, if not optimal, segmentation algorithm. Ling et al. (2015b) indeed states that "[m]uch of the prior information regarding morphology, cognates and rare word translation among others, should be incorporated".
It however becomes unnecessary to consider this prior information, if we use a neural network, be it recurrent, convolutional or their combination, directly on the unsegmented character sequence. The possibility of using a sequence of unsegmented characters has been studied over many years in the field of deep learning. For instance, Mikolov et al. (2012) and Sutskever et al. (2011) trained a recurrent neural network language model (RNN-LM) on character sequences. The latter showed that it is possible to generate sensible text sequences by simply sampling a character at a time from this model. More recently, Zhang et al. (2015) and Xiao and Cho (2016) successfully applied a convolutional net and a convolutional-recurrent net respectively to character-level document classification without any explicit segmentation. Gillick et al. (2015) further showed that it is possible to train a recurrent neural network on unicode bytes, instead of characters or words, to perform part-of-speech tagging and named entity recognition.
These previous works suggest the possibility of applying neural networks for the task of machine translation, which is often considered a substantially more difficult problem compared to document classification and language modelling.
# 3.4 Challenges and Questions
There are two overlapping sets of challenges for the source and target sides. On the source side, it is unclear how to build a neural network that learns a highly nonlinear mapping from a spelling to the meaning of a sentence.
On the target side, there are two challenges. The first challenge is the same one from the source side, as the decoder neural network needs to summarize what has been translated. In addition to this, the character-level modelling on the target side is more challenging, as the decoder network must be able to generate a long, coherent sequence of characters. This is a great challenge, as the size of the state space grows exponentially w.r.t. the number of symbols, and in the case of characters, it is often 300-1000 symbols long.
Figure 1: Bi-scale recurrent neural network. (a) Gating units. (b) One-step processing.
All these challenges should first be framed as questions; whether the current recurrent neural networks, which are already widely used in neural machine translation, are able to address these challenges as they are. In this paper, we aim at answering these questions empirically and focus on the challenges on the target side (as the target side shows both of the challenges).
# 4 Character-Level Translation
In this paper, we try to answer the questions posed earlier by testing two different types of recurrent neural networks on the target side (decoder).
First, we test an existing recurrent neural network with gated recurrent units (GRUs). We call this decoder a base decoder.
Second, we build a novel two-layer recurrent neural network, inspired by the gated-feedback network from Chung et al. (2015), called a bi-scale recurrent neural network. We design this network to facilitate capturing two timescales, motivated by the fact that characters and words may work at two separate timescales.
We choose to test these two alternatives for the following purposes. Experiments with the base decoder will clearly answer whether the existing neural network is enough to handle character-level decoding, which has not been properly answered in the context of machine translation. The alternative, the bi-scale decoder, is tested in order to see whether it is possible to design a better decoder, if the answer to the first question is positive.
# 4.1 Bi-Scale Recurrent Neural Network
In this proposed bi-scale recurrent neural network, there are two sets of hidden units, h1 and h2. They contain the same number of units, i.e., dim(h1) = dim(h2). The first set h1 models a fast-changing timescale (thereby, a faster layer), and h2 a slower timescale (thereby, a slower layer). For each hidden unit, there is an associated gating unit, to which we refer by g1 and g2. For the description below, we use y_{t'−1} and c_{t'} for the previous target symbol and the context vector (see Eq. (2)), respectively.
Let us start with the faster layer. The faster layer outputs two sets of activations, a normal output h1_{t'} and its gated version ĥ1_{t'}. The activation of the faster layer is computed by
h1_{t'} = tanh(W^{h1} [e_y(y_{t'−1}); ĥ1_{t'−1}; ĥ2_{t'−1}; c_{t'}]),
where ĥ1_{t'−1} and ĥ2_{t'−1} are the gated activations of the faster and slower layers respectively. These gated activations are computed by
ĥ1_{t'} = (1 − g1_{t'}) ⊙ h1_{t'},   ĥ2_{t'} = g1_{t'} ⊙ h2_{t'}.
In other words, the faster layer's activation is based on the adaptive combination of the faster and slower layers' activations from the previous time step. Whenever the faster layer determines that it needs to reset, i.e., g1_{t'} = 1, the next activation will be determined based more on the slower layer's activation.
The faster layer's gating unit is computed by
g1_{t'} = σ(W^{g1} [e_y(y_{t'−1}); ĥ1_{t'−1}; ĥ2_{t'−1}; c_{t'}]),
where σ is a sigmoid function.
The slower layer also outputs two sets of activations, a normal output h2_{t'} and its gated version ȟ2_{t'}. These activations are computed as follows:
h2_{t'} = (1 − g1_{t'}) ⊙ h2_{t'−1} + g1_{t'} ⊙ h̃2_{t'},   ȟ2_{t'} = (1 − g2_{t'}) ⊙ h2_{t'},
where h̃2_{t'} is a candidate activation. The slower layer's gating unit g2_{t'} is computed by
g2_{t'} = σ(W^{g2} [(g1_{t'} ⊙ h1_{t'}); h2_{t'−1}; c_{t'}]).
This adaptive leaky integration based on the gating unit from the faster layer has a consequence that the slower layer updates its activation only when the faster layer resets. This puts a soft constraint that the faster layer runs at a faster rate by preventing the slower layer from updating while the faster layer is processing a current chunk.
The candidate activation is then computed by
h̃2_{t'} = tanh(W^{h̃2} [(g1_{t'} ⊙ h1_{t'}); ȟ2_{t'−1}; c_{t'}]).   (5)
Figure 2: (left) The BLEU scores on En-Cs w.r.t. the length of source sentences. (right) The difference of word negative log-probabilities between the subword-level decoder and either of the character-level base or bi-scale decoder.
ȟ2_{t'−1} indicates the reset activation from the previous time step, similarly to what happened in the faster layer, and c_{t'} is the input from the context. According to g1_{t'} ⊙ h1_{t'} in Eq. (5), the faster layer influences the slower layer, only when the faster layer has finished processing the current chunk and is about to reset itself (g1_{t'} = 1). In other words, the slower layer does not receive any input from the faster layer, until the faster layer has quickly processed the current chunk, thereby running at a slower rate than the faster layer does.
At each time step, the final output of the proposed bi-scale recurrent neural network is the concatenation of the output vectors of the faster and slower layers, i.e., [h1; h2]. This concatenated vector is used to compute the probability distribution over all the symbols in the vocabulary, as in Eq. (4). See Fig. 1 for graphical illustration.
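A hedged numpy sketch of one bi-scale step following the equations above. The weight shapes, the omission of bias terms, and the toy dimensions are assumptions made for this illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def biscale_step(y_emb, c, h2_prev, hhat1_prev, hhat2_prev, hcheck2_prev, W):
    """One decoder step (a sketch of the bi-scale equations above).

    W maps names to weight matrices; shapes and the absence of biases
    are assumptions made for this illustration.
    """
    z = np.concatenate([y_emb, hhat1_prev, hhat2_prev, c])
    h1 = np.tanh(W["h1"] @ z)                          # faster layer activation
    g1 = sigmoid(W["g1"] @ z)                          # faster layer gate
    h2_cand = np.tanh(W["h2c"] @ np.concatenate([g1 * h1, hcheck2_prev, c]))  # Eq. (5)
    h2 = (1 - g1) * h2_prev + g1 * h2_cand             # slower layer, gated by g1
    g2 = sigmoid(W["g2"] @ np.concatenate([g1 * h1, h2_prev, c]))
    hhat1 = (1 - g1) * h1                              # gated faster output
    hhat2 = g1 * h2                                    # gated slower output
    hcheck2 = (1 - g2) * h2                            # reset activation
    return np.concatenate([h1, h2]), h2, hhat1, hhat2, hcheck2

# Toy dimensions: d = 4 hidden units per layer, de = 3, dc = 5.
d, de, dc = 4, 3, 5
rng = np.random.default_rng(0)
W = {"h1": rng.standard_normal((d, de + 2 * d + dc)),
     "g1": rng.standard_normal((d, de + 2 * d + dc)),
     "h2c": rng.standard_normal((d, 2 * d + dc)),
     "g2": rng.standard_normal((d, 2 * d + dc))}
state = [rng.standard_normal(d) for _ in range(4)]
out, *_ = biscale_step(rng.standard_normal(de), rng.standard_normal(dc), *state, W)
```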
# 5 Experiment Settings
For evaluation, we represent a source sentence as a sequence of subword symbols extracted by byte-pair encoding (BPE, Sennrich et al. (2015)) and a target sentence either as a sequence of BPE-based symbols or as a sequence of characters.
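A small sketch of the target-side character representation (the source-side BPE extraction is not reproduced here). The special tokens and id assignments are assumptions for illustration.

```python
def build_char_vocab(sentences):
    chars = sorted({ch for s in sentences for ch in s})
    return {ch: i + 2 for i, ch in enumerate(chars)}    # 0: <pad>, 1: <eos> (assumed)

def encode_target(sentence, vocab, eos=1):
    return [vocab[ch] for ch in sentence] + [eos]

vocab = build_char_vocab(["Zwei Lichtersets so nah an einander"])
ids = encode_target("Zwei Lichtersets so nah an einander", vocab)
```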
Corpora and Preprocessing We use all available parallel corpora for four language pairs from WMT'15: En-Cs, En-De, En-Ru and En-Fi. They consist of 12.1M, 4.5M, 2.3M and 2M sentence pairs, respectively. We tokenize each corpus using a tokenization script included in Moses.4 We only use the sentence pairs, when the source side is up to 50 subword symbols long and the target side is either up to 100 subword symbols or 500 characters. We do not use any monolingual corpus.
4 Although tokenization is not necessary for character-level modelling, we tokenize all the target side corpora to make comparison against word-level modelling easier.
e D - n E s C - n E u R - n E i F - n E (a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k) (l) (m) (n) (o) (p) Attention h2 h1 D D D D D D D D D D c r S Trgt 1 2 2 2 2 2 2 State-of-the-art Non-Neural Approachâ BPE E P B Char Base Base Bi-S D D Base D Base D Bi-S 2 2 2 State-of-the-art Non-Neural Approachâ BPE E P B Char D D Base D Base D Bi-S 2 2 2 State-of-the-art Non-Neural Approachâ BPE E P B Char D D Base D Base D Bi-S 2 2 2 State-of-the-art Non-Neural Approachâ BPE E P B Char Development Single Ens 20.78 21.2621.45 20.62 21.5721.88 20.88 20.31 21.2921.43 21.13 20.78 20.08 â 23.49 23.14 â 23.05 â â â 16.1216.96 15.96 17.6817.78 17.39 17.6217.93 17.43 â 18.5618.70 18.26 18.5618.87 18.39 18.3018.54 17.88 â 9.6110.02 9.24 11.1911.55 11.09 10.7311.04 10.40 â 19.21 19.52 19.83 21.17 20.53 20.53 11.92 13.72 13.39 Test1 Single 19.98 20.4720.88 19.30 21.3321.56 19.82 19.70 21.2521.47 20.62 20.19 19.39 20.60(1) 17.1617.68 16.38 19.2519.55 18.89 19.2719.53 19.15 21.00(3) 25.3025.40 24.95 26.0026.07 25.04 25.5925.76 24.57 28.70(5) â â â â Ens â 23.10 23.11 â 23.04 â â 20.79 21.95 22.15 29.26 29.37 29.26 â â â Test2 Single 21.72 22.0222.21 21.35 23.4523.91 21.72 21.30 23.0623.47 22.85 22.26 20.94 24.00(2) 14.6315.09 14.26 16.9817.17 16.81 16.8617.10 16.68 18.20(4) 19.7220.29 19.02 21.1021.24 20.14 20.7321.02 19.97 24.30(6) 8.979.17 8.88 10.9311.56 10.11 10.2410.63 9.71 12.70(7) Ens â 24.83 25.24 â 25.44 â â 17.61 18.92 18.93 22.96 23.51 23.75 11.73 13.48 13.32
Table 1: BLEU scores of the subword-level, character-level base and character-level bi-scale decoders for both single models and ensembles. The best scores among the single models per language pair are bold-faced, and those among the ensembles are underlined. When available, we report the median value, and the minimum and maximum values as a subscript and a superscript, respectively. (∗) http://matrix.statmt.org/ as of 11 March 2016 (constrained only). (1) Freitag et al. (2014). (2, 6) Williams et al. (2015). (3, 5) Durrani et al. (2014). (4) Haddow et al. (2015). (7) Rubino et al. (2015).
For all the pairs other than En-Fi, we use newstest-2013 as a development set, and newstest-2014 (Test1) and newstest-2015 (Test2) as test sets. For En-Fi, we use newsdev-2015 and newstest-2015 as development and test sets, respectively.

Models and Training We test three model settings: (1) BPE→BPE, (2) BPE→Char (base) and (3) BPE→Char (bi-scale). The latter two differ by the type of recurrent neural network we use. We use GRUs for the encoder in all the settings. We used GRUs for the decoders in the first two settings, (1) and (2), while the proposed bi-scale recurrent network was used in the last setting, (3). The encoder has 512 hidden units for each direction (forward and reverse), and the decoder has 1024 hidden units per layer.

We train each model using stochastic gradient descent with Adam (Kingma and Ba, 2014). Each update is computed using a minibatch of 128 sentence pairs. The norm of the gradient is clipped with a threshold 1 (Pascanu et al., 2013).

Decoding and Evaluation We use beamsearch to approximately find the most likely translation given a source sentence. The beam widths are 5 and 15 respectively for the subword-level and character-level decoders. They were chosen based on the translation quality on the development set. The translations are evaluated using BLEU.5

Multilayer Decoder and Soft-Alignment Mechanism When the decoder is a multilayer recurrent neural network (including a stacked network as well as the proposed bi-scale network), the decoder outputs multiple hidden vectors, {h1, . . . , hL} for L layers, at a time. This allows an extra degree of freedom in the soft-alignment mechanism (f_score in Eq. (3)). We evaluate using alternatives, including (1) using only hL (slower layer) and (2) using all of them (concatenated).

Ensembles We also evaluate an ensemble of neural machine translation models and compare its performance against the state-of-the-art phrase-based translation systems on all four language pairs. We decode from an ensemble by taking the average of the output probabilities at each step.
5We used the multi-bleu.perl script from Moses.
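The ensemble decoding rule described above, averaging the per-step output distributions, can be sketched as follows; the shapes and the toy distributions are assumptions for illustration.

```python
import numpy as np

def ensemble_step_probs(per_model_probs):
    """Average the per-step output distributions of the ensemble members.

    per_model_probs: (n_models, vocab_size) array of softmax outputs.
    """
    return per_model_probs.mean(axis=0)

# At each beamsearch step, each of the 8 models scores the same prefix and
# the beam expands on the averaged distribution (a sketch).
probs = ensemble_step_probs(np.random.dirichlet(np.ones(50), size=8))
```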
Figure 3: Alignment matrix of a test example from En-De using the BPE→Char (bi-scale) model.
# 6 Quantitative Analysis
Slower Layer for Alignment On En-De, we test which layer of the decoder should be used for computing soft-alignments. In the case of subword-level decoder, we observed no difference between choosing any of the two layers of the decoder against using the concatenation of all the layers (Table 1 (a–b)). On the other hand, with the character-level decoder, we noticed an improvement when only the slower layer (h2) was used for the soft-alignment mechanism (Table 1 (c–g)). This suggests that the soft-alignment mechanism benefits by aligning a larger chunk in the target with a subword unit in the source, and we use only the slower layer for all the other language pairs.
Single Models In Table 1, we present a comprehensive report of the translation qualities of (1) subword-level decoder, (2) character-level base decoder and (3) character-level bi-scale decoder, for all the language pairs. We see that both types of character-level decoder outperform the subword-level decoder for En-Cs and En-Fi quite significantly. On En-De, the character-level base decoder outperforms both the subword-level decoder and the character-level bi-scale decoder, validating the effectiveness of the character-level modelling. On En-Ru, among the single models, the character-level decoders outperform the subword-level decoder, but in general, we observe that all the three alternatives work comparable to each other.
These results clearly suggest that it is indeed possible to do character-level translation without explicit segmentation. In fact, what we observed is that character-level translation often surpasses the translation quality of word-level translation. Of course, we note once again that our experiment is restricted to using an unsegmented character sequence at the decoder only, and a further exploration toward replacing the source sentence with an unsegmented character sequence is needed.
Ensembles Each ensemble was built using eight independent models. The first observation we make is that in all the language pairs, neural machine translation performs comparably to, or often better than, the state-of-the-art non-neural translation system. Furthermore, the character-level decoders outperform the subword-level decoder in all the cases.
# 7 Qualitative Analysis
(1) Can the character-level decoder generate a long, coherent sentence? The translation in characters is dramatically longer than that in words, likely making it more difficult for a recurrent neural network to generate a coherent sentence in characters. This belief turned out to be false. As shown in Fig. 2 (left), there is no significant difference between the subword-level and character-level decoders, even though the lengths of the generated translations are generally 5–10 times longer in characters.
(2) Does the character-level decoder help with rare words? One advantage of character-level modelling is that it can model the composition of any character sequence, thereby better modelling rare morphological variants. We empirically confirm this by observing the growing gap in the average negative log-probability of words between the subword-level and character-level decoders as the frequency of the words decreases. This is shown in Fig. 2 (right) and explains one potential cause behind the success of character-level decoding in our experiments (we define diff(x, y) = x − y).
(3) Can the character-level decoder soft-align between a source word and a target character? In Fig. 3 (left), we show an example soft-alignment of a source sentence, "Two sets of lights so close to one another". It is clear that the character-level translation model well captured the alignment between the source subwords and target characters. We observe that the character-level decoder correctly aligns to "lights" and "sets of" when generating a German compound word "Lichtersets" (see Fig. 3 (right) for the zoomed-in version). This type of behaviour happens similarly between "one another" and "einander". Of course, this does not mean that there exists an alignment between a source word and a target character. Rather, this suggests that the internal state of the character-level decoder, the base or bi-scale, well captures the meaningful chunk of characters, allowing the model to map it to a larger chunk (subword) in the source.
(4) How fast is the decoding speed of the character-level decoder? We evaluate the decoding speed of subword-level base, character-level base and character-level bi-scale decoders on newstest-2013 corpus (En-De) with a single Titan X GPU. The subword-level base decoder generates 31.9 words per second, and the character-level base decoder and character-level bi-scale decoder generate 27.5 words per second and 25.6 words per second, respectively. Note that this is evaluated in an online setting, performing consecutive translation, where only one sentence is translated at a time. Translating in a batch setting could differ from these results.
# 8 Conclusion
In this paper, we addressed a fundamental question on whether a recently proposed neural machine translation system can directly handle translation at the level of characters without any word segmentation. We focused on the target side, in which a decoder was asked to generate one character at a time, while soft-aligning between a target character and a source subword. Our extensive experiments, on four language pairs (En-Cs, En-De, En-Ru and En-Fi) strongly suggest that it is indeed possible for neural machine translation to translate at the level of characters, and that it actually benefits from doing so.
Our result has one limitation that we used subword symbols in the source side. However, this has allowed us a more fine-grained analysis, but in the future, a setting where the source side is also represented as a character sequence must be investigated.
# Acknowledgments
The authors would like to thank the developers of Theano (Team et al., 2016). We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs, CIFAR and Samsung. KC thanks Facebook, Google (Google Faculty Award 2016) and NVIDIA (GPU Center of Excellence 2015-2016) for their support. JC thanks Orhan Firat for his constructive feedback.
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR).
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166.

Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. 2001. A neural probabilistic language model. In Advances in Neural Information Processing Systems, pages 932–938.

Geert Booij. 2012. The grammar of words: An introduction to linguistic morphology. Oxford University Press.
Jan A Botha and Phil Blunsom. 2014. Compositional morphology for word representations and language modelling. In ICML 2014.
Rohan Chitnis and John DeNero. 2015. Variable-length word encodings for neural translation models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2088–2093.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), October.

Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2015. Gated feedback recurrent neural networks. In Proceedings of the 32nd International Conference on Machine Learning.

Marta R Costa-Jussà and José AR Fonollosa. 2016. Character-based neural machine translation. arXiv preprint arXiv:1603.00810.
Mathias Creutz and Krista Lagus. 2005. Unsupervised morpheme segmentation and morphology induction from text corpora using Morfessor 1.0. Helsinki University of Technology.
Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. 2014. Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of the ACL 2014 Ninth Workshop on Statistical Machine Translation, Baltimore, MD, USA, pages 97–104.

Mikel L Forcada and Ramón P Ñeco. 1997. Recursive hetero-associative memories for translation. In International Work-Conference on Artificial Neural Networks, pages 453–462. Springer.

Markus Freitag, Stephan Peitz, Joern Wuebker, Hermann Ney, Matthias Huck, Rico Sennrich, Nadir Durrani, Maria Nadejde, Philip Williams, Philipp Koehn, et al. 2014. Eu-bridge MT: Combined machine translation.

Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language processing from bytes. arXiv preprint arXiv:1512.00103.

Barry Haddow, Matthias Huck, Alexandra Birch, Nikolay Bogoychev, and Philipp Koehn. 2015. The Edinburgh/JHU phrase-based machine translation systems for WMT 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 126–133.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Sepp Hochreiter. 1998. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02):107–116.
Changning Huang and Hai Zhao. 2007. Chinese word segmentation: A decade review. Journal of Chinese Information Processing, 21(3):8â20.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2.
Nal Kalchbrenner and Phil Blunsom. 2013. Recur- rent continuous translation models. In EMNLP, vol- ume 3, page 413.
Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M Rush. 2015. Character-aware neural lan- guage models. arXiv preprint arXiv:1508.06615.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Hyoung-Gyu Lee, JaeSong Lee, Jun-Seok Kim, and Chang-Ki Lee. 2015. Naver machine translation system for WAT 2015. In Proceedings of the 2nd Workshop on Asian Translation (WAT2015), pages 69–73.

Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015a. Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096.

Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015b. Character-based neural machine translation. arXiv preprint arXiv:1511.04586.

Thang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL, pages 104–113.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015a. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.
Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Address- ing the rare word problem in neural machine trans- lation. arXiv preprint arXiv:1410.8206.
Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, volume 2, page 3.

Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J Cernocky. 2012. Subword language modeling with neural networks. Preprint.
Graham Neubig, Taro Watanabe, Shinsuke Mori, and Tatsuya Kawahara. 2013. Substring-based machine translation. Machine translation, 27(2):139â166.
Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2013. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026.

Raphael Rubino, Tommi Pirinen, Miquel Espla-Gomis, N. Ljubešić, Sergio Ortiz Rojas, Vassilis Papavassiliou, Prokopis Prokopidis, and Antonio Toral. 2015. Abu-MaTran at WMT 2015 translation task: Morphological segmentation and web crawling. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 184–191.

Cicero D Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1818–1826.
Holger Schwenk. 2007. Continuous space language models. Computer Speech & Language, 21(3):492â 518.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Training very deep networks. In Advances in Neural Information Processing Systems, pages 2368–2376.
Ilya Sutskever, James Martens, and Geoffrey E Hin- ton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICMLâ11), pages 1017â1024.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in Neural Information Process- ing Systems, pages 3104â3112.
The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, et al. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688.

David Vilar, Jan-T Peter, and Hermann Ney. 2007. Can we translate letters? In Proceedings of the Second Workshop on Statistical Machine Translation, pages 33–39. Association for Computational Linguistics.

Philip Williams, Rico Sennrich, Maria Nadejde, Matthias Huck, and Philipp Koehn. 2015. Edinburgh's syntax-based systems at WMT 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 199–209.

Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367.

Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649–657. | {
"id": "1605.02688"
} |
1603.05279 | XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural
networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks,
the filters are approximated with binary values resulting in 32x memory saving.
In XNOR-Networks, both the filters and the input to convolutional layers are
binary. XNOR-Networks approximate convolutions using primarily binary
operations. This results in 58x faster convolutional operations and 32x memory
savings. XNOR-Nets offer the possibility of running state-of-the-art networks
on CPUs (rather than GPUs) in real-time. Our binary networks are simple,
accurate, efficient, and work on challenging visual tasks. We evaluate our
approach on the ImageNet classification task. The classification accuracy with
a Binary-Weight-Network version of AlexNet is only 2.9% less than the
full-precision AlexNet (in top-1 measure). We compare our method with recent
network binarization methods, BinaryConnect and BinaryNets, and outperform
these methods by large margins on ImageNet, more than 16% in top-1 accuracy. | http://arxiv.org/pdf/1603.05279 | Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi | cs.CV | null | null | cs.CV | 20160316 | 20160802 |

arXiv:1603.05279v4 [cs.CV] 2 Aug 2016
# XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

Mohammad Rastegari†, Vicente Ordonez†, Joseph Redmon∗, Ali Farhadi†∗

Allen Institute for AI†, University of Washington∗ {mohammadr,vicenteor}@allenai.org {pjreddie,ali}@cs.washington.edu

Abstract. We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32× memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58× faster convolutional operations (in terms of the number of high-precision operations) and 32× memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. Our code is available at: http://allenai.org/plato/xnornet.
# 1 Introduction
Deep neural networks (DNN) have shown significant improvements in several application domains including computer vision and speech recognition. In computer vision, a particular type of DNN, known as Convolutional Neural Networks (CNN), have demonstrated state-of-the-art results in object recognition [1,2,3,4] and detection [5,6,7].

Convolutional neural networks show reliable results on object recognition and detection that are useful in real-world applications. Concurrent to the recent progress in recognition, interesting advancements have been happening in virtual reality (VR by Oculus) [8], augmented reality (AR by HoloLens) [9], and smart wearable devices. Putting these two pieces together, we argue that it is the right time to equip smart portable devices with the power of state-of-the-art recognition systems. However, CNN-based recognition systems need large amounts of memory and computational power. While they perform well on expensive, GPU-based machines, they are often unsuitable for smaller devices like cell phones and embedded electronics.

For example, AlexNet [1] has 61M parameters (249MB of memory) and performs 1.5B high-precision operations to classify one image. These numbers are even higher for deeper CNNs, e.g., VGG [2] (see section 4.1). These models quickly overtax the limited storage, battery power, and compute capabilities of smaller devices like cell phones.
[Figure 1 table: standard convolution (real-value inputs and weights; operations +, −, ×) gives 1× memory saving, 1× speedup and 56.7% AlexNet accuracy; Binary-Weight-Networks (binary weights; operations +, −) give ~32× memory saving, ~2× speedup and 56.8%; XNOR-Networks (binary weights and inputs; XNOR, bitcount) give ~32× memory saving, ~58× speedup and 44.2%.]
Fig. 1: We propose two efficient variations of convolutional neural networks: Binary-Weight-Networks, where the weight filters contain binary values, and XNOR-Networks, where both the weights and the inputs have binary values. These networks are very efficient in terms of memory and computation, while being very accurate in natural image classification. This offers the possibility of using accurate vision techniques in portable devices with limited resources.
In this paper, we introduce simple, efficient, and accurate approximations to CNNs by binarizing the weights and even the intermediate representations in convolutional neural networks. Our binarization method aims at finding the best approximations of the convolutions using binary operations. We demonstrate that our way of binarizing neural networks results in ImageNet classification accuracy numbers that are comparable to standard full-precision networks while requiring significantly less memory and fewer floating point operations.

We study two approximations: neural networks with binary weights and XNOR-Networks. In Binary-Weight-Networks all the weight values are approximated with binary values. A convolutional neural network with binary weights is significantly smaller (~32×) than an equivalent network with single-precision weight values. In addition, when weight values are binary, convolutions can be estimated by only addition and subtraction (without multiplication), resulting in a ~2× speed up. Binary-weight approximations of large CNNs can fit into the memory of even small, portable devices while maintaining the same level of accuracy (see Sections 4.1 and 4.2).

To take this idea further, we introduce XNOR-Networks, where both the weights and the inputs to the convolutional and fully connected layers are approximated with binary values¹. Binary weights and binary inputs allow an efficient way of implementing convolutional operations. If all of the operands of the convolutions are binary, then the convolutions can be estimated by XNOR and bitcounting operations [11]. XNOR-Nets result in an accurate approximation of CNNs while offering a ~58× speed up on CPUs (in terms of the number of high-precision operations). This means that XNOR-Nets can enable real-time inference on devices with small memory and no GPUs (inference in XNOR-Nets can be done very efficiently on CPUs).

To the best of our knowledge this paper is the first attempt to present an evaluation of binary neural networks on large-scale datasets like ImageNet. Our experimental
1 fully connected layers can be implemented by convolution, therefore, in the rest of the paper, we refer to them also as convolutional layers [10].
results show that our proposed method for binarizing convolutional neural networks outperforms the state-of-the-art network binarization method of [11] by a large margin (16.3%) on top-1 image classification in the ImageNet challenge ILSVRC2012. Our contribution is two-fold: First, we introduce a new way of binarizing the weight values in convolutional neural networks and show the advantage of our solution compared to state-of-the-art solutions. Second, we introduce XNOR-Nets, a deep neural network model with binary weights and binary inputs, and show that XNOR-Nets can obtain similar classification accuracies compared to standard networks while being significantly more efficient. Our code is available at: http://allenai.org/plato/xnornet
# 2 Related Work
Deep neural networks often suffer from over-parametrization and large amounts of redundancy in their models. This typically results in inefficient computation and memory usage [12]. Several methods have been proposed to address efficient training and inference in deep neural networks.

Shallow networks: Estimating a deep neural network with a shallower model reduces the size of a network. Early theoretical work by Cybenko shows that a network with a large enough single hidden layer of sigmoid units can approximate any decision boundary [13]. In several areas (e.g., vision and speech), however, shallow networks cannot compete with deep models [14]. [15] trains a shallow network on SIFT features to classify the ImageNet dataset. They show it is difficult to train shallow networks with a large number of parameters. [16] provides empirical evidence on small datasets (e.g., CIFAR-10) that shallow nets are capable of learning the same functions as deep nets. In order to get similar accuracy, the number of parameters in the shallow network must be close to the number of parameters in the deep network. They do this by first training a state-of-the-art deep model, and then training a shallow model to mimic the deep model. These methods are different from our approach because we use the standard deep architectures, not the shallow estimations.

Compressing pre-trained deep networks: Pruning redundant, non-informative weights in a previously trained network reduces the size of the network at inference time. Weight decay [17] was an early method for pruning a network. Optimal Brain Damage [18] and Optimal Brain Surgeon [19] use the Hessian of the loss function to prune a network by reducing the number of connections. Recently [20] reduced the number of parameters by an order of magnitude in several state-of-the-art neural networks by pruning. [21] proposed to reduce the number of activations for compression and acceleration. Deep compression [22] reduces the storage and energy required to run inference on large networks so they can be deployed on mobile devices. They remove the redundant connections and quantize weights so that multiple connections share the same weight, and then they use Huffman coding to compress the weights. HashedNets [23] uses a hash function to reduce model size by randomly grouping the weights, such that connections in a hash bucket use a single parameter value. Matrix factorization has been used by [24,25]. We are different from these approaches because we do not use a pretrained network; we train binary networks from scratch.
Designing compact layers: Designing compact blocks at each layer of a deep network can help to save memory and computational costs. Replacing the fully connected layer with global average pooling was examined in the Network in Network architecture [26], GoogLenet [3] and Residual-Net [4], which achieved state-of-the-art results on several benchmarks. The bottleneck structure in Residual-Net [4] has been proposed to reduce the number of parameters and improve speed. Decomposing 3 × 3 convolutions into two 1 × 1 convolutions is used in [27] and resulted in state-of-the-art performance on object recognition. Replacing 3 × 3 convolutions with 1 × 1 convolutions is used in [28] to create a very compact neural network that can achieve a ~50× reduction in the number of parameters while obtaining high accuracy. Our method is different from this line of work because we use the full network (not the compact version) but with binary parameters.

Quantizing parameters: High-precision parameters are not very important in achieving high performance in deep networks. [29] proposed to quantize the weights of fully connected layers in a deep network by vector quantization techniques. They showed that just thresholding the weight values at zero decreases the top-1 accuracy on ILSVRC2012 by less than 10%. [30] proposed a provably polynomial-time algorithm for training a sparse network with +1/0/−1 weights. A fixed-point implementation of 8-bit integers was compared with 32-bit floating-point activations in [31]. Another fixed-point network with ternary weights and 3-bit activations was presented by [32]. Quantizing a network with L2 error minimization achieved better accuracy on the MNIST and CIFAR-10 datasets in [33]. [34] proposed a back-propagation process that quantizes the representations at each layer of the network. To convert some of the remaining multiplications into binary shifts, the neurons are restricted to values that are power-of-two integers. In [34] they carry the full-precision weights during the test phase, and only quantize the neurons during the back-propagation process, not during the forward-propagation. Our work is similar to these methods since we are quantizing the parameters in the network, but our quantization is the extreme scenario: +1, −1.

Network binarization: These works are the most related to our approach. Several methods attempt to binarize the weights and the activations in neural networks. The performance of highly quantized networks (e.g., binarized) was believed to be very poor due to the destructive property of binary quantization [35]. Expectation BackPropagation (EBP) in [36] showed that high performance can be achieved by a network with binary weights and binary activations. This is done by a variational Bayesian approach that infers networks with binary weights and neurons. A fully binary network at run time was presented in [37] using a similar approach to EBP, showing a significant improvement in energy efficiency. In EBP the binarized parameters were only used during inference. BinaryConnect [38] extended the probabilistic idea behind EBP. Similar to our approach, BinaryConnect uses the real-valued version of the weights as a key reference for the binarization process. The real-valued weights are updated using the back-propagated error by simply ignoring the binarization in the update. BinaryConnect achieved state-of-the-art results on small datasets (e.g., CIFAR-10, SVHN). Our experiments show that this method is not very successful on large-scale datasets (e.g., ImageNet). BinaryNet [11] proposes an extension of BinaryConnect, where both weights and activations are binarized. Our method is different from them in the binarization method and the net-
work structure. We also compare our method with BinaryNet on ImageNet, and our method outperforms BinaryNet by a large margin. [39] argued that the noise introduced by weight binarization provides a form of regularization, which could help to improve test accuracy. This method binarizes weights while maintaining full-precision activations. [40] proposed fully binary training and testing in an array of committee machines with randomized input. [41] retrains a previously trained neural network with binary weights and binary inputs.
# 3 Binary Convolutional Neural Network
We represent an L-layer CNN architecture with a triplet ⟨I, W, ∗⟩. I is a set of tensors, where each element I = I_l (l = 1, . . . , L) is the input tensor for the l-th layer of the CNN (green cubes in figure 1). W is a set of tensors, where each element W = W_{lk} (k = 1, . . . , K^l) is the k-th weight filter in the l-th layer of the CNN. K^l is the number of weight filters in the l-th layer of the CNN. ∗ represents a convolutional operation with I and W as its operands². I ∈ R^{c×w_in×h_in}, where (c, w_in, h_in) represents channels, width and height respectively. W ∈ R^{c×w×h}, where w ≤ w_in, h ≤ h_in. We propose two variations of binary CNN: Binary-weights, where the elements of W are binary tensors, and XNOR-Networks, where elements of both I and W are binary tensors.
# 3.1 Binary-Weight-Networks
In order to constrain a convolutional neural network ⟨I, W, ∗⟩ to have binary weights, we estimate the real-value weight filter W ∈ W using a binary filter B ∈ {+1, −1}^{c×w×h} and a scaling factor α ∈ R⁺ such that W ≈ αB. A convolutional operation can be approximated by:

I ∗ W ≈ (I ⊕ B) α    (1)

where ⊕ indicates a convolution without any multiplication. Since the weight values are binary, we can implement the convolution with additions and subtractions. The binary weight filters reduce memory usage by a factor of ~32× compared to single-precision filters. We represent a CNN with binary weights by ⟨I, B, A, ⊕⟩, where B is a set of binary tensors and A is a set of positive real scalars, such that B = B_{lk} is a binary filter, α = A_{lk} is a scaling factor, and W_{lk} ≈ A_{lk} B_{lk}.
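To make the ~32× figure concrete, here is a minimal NumPy sketch of the storage side of this scheme; the layer shape and the use of `np.packbits` are illustrative choices of ours, not part of the paper:

```python
import numpy as np

# 256 filters of size 3x3 on a single channel, stored in single precision
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 3, 3)).astype(np.float32)

# One bit per weight: keep only the signs, packed 8 per byte
packed = np.packbits((W > 0).reshape(-1))
# One real scaling factor per filter (a small additive overhead)
alphas = np.abs(W).reshape(256, -1).mean(axis=1).astype(np.float32)

print(W.nbytes)                          # 9216 bytes at full precision
print(packed.nbytes + alphas.nbytes)     # 288 + 1024 bytes when binarized
print(W.nbytes / packed.nbytes)          # 32.0 for the weights alone
```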
Estimating binary weights: Without loss of generality we assume W, B are vectors in Rⁿ, where n = c × w × h. To find an optimal estimation for W ≈ αB, we solve the following optimization:

α∗, B∗ = argmin_{α,B} J(B, α),  where  J(B, α) = ‖W − αB‖²    (2)

² In this paper we assume convolutional filters do not have bias terms
by expanding equation 2, we have
J(B, α) = α²BᵀB − 2αWᵀB + WᵀW    (3)

Since B ∈ {+1, −1}ⁿ, BᵀB = n is a constant. WᵀW is also a constant because W is a known variable. Let us define c = WᵀW. Now we can rewrite equation 3 as follows: J(B, α) = α²n − 2αWᵀB + c. The optimal solution for B can be achieved by maximizing the following constrained optimization (note that α is a positive value in equation 2, therefore it can be ignored in the maximization):

B∗ = argmax_B {WᵀB}   s.t.  B ∈ {+1, −1}ⁿ    (4)

This optimization can be solved by assigning Bᵢ = +1 if Wᵢ ≥ 0 and Bᵢ = −1 if Wᵢ < 0, therefore the optimal solution is B∗ = sign(W). In order to find the optimal value for the scaling factor α∗, we take the derivative of J with respect to α and set it to zero:

α∗ = WᵀB∗ / n    (5)

By replacing B∗ with sign(W),

α∗ = Wᵀsign(W) / n = Σ|Wᵢ| / n = (1/n) ‖W‖_{ℓ1}    (6)

therefore, the optimal estimation of a binary weight filter can be simply achieved by taking the sign of the weight values. The optimal scaling factor is the average of the absolute weight values.
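Equations 2–6 can be sanity-checked numerically; this small script (ours, with made-up Gaussian weights) verifies that sign(W) with the mean-absolute-value scale is not beaten by randomly sampled binary filters, each given its own optimal α from equation 5:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal(64)
n = W.size

B_star = np.sign(W)
alpha_star = np.abs(W).mean()                 # equation 6

def J(B, alpha):
    return np.sum((W - alpha * B) ** 2)       # equation 2

# Each random B gets its own optimal alpha = W^T B / n (equation 5)
best_random = min(
    J(B, W @ B / n)
    for B in (rng.choice([-1.0, 1.0], size=n) for _ in range(2000))
)
print(J(B_star, alpha_star), "<=", best_random)   # sign(W) is never beaten
```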
Training Binary-Weight-Networks: Each iteration of training a CNN involves three steps: forward pass, backward pass and parameter update. To train a CNN with binary weights (in the convolutional layers), we only binarize the weights during the forward pass and backward propagation. To compute the gradient for the sign function sign(r), we follow the same approach as [11], where ∂sign/∂r = r 1_{|r|≤1}. The gradient in the backward pass after the scaled sign function is ∂C/∂Wᵢ = (∂C/∂W̃ᵢ)(1/n + (∂sign/∂Wᵢ) α). For updating the parameters, we use the high-precision (real-value) weights, because in gradient descent the parameter changes are tiny; binarization after updating the parameters would ignore these changes and the training objective could not be improved. [11,38] also employed this strategy to train a binary network.

Algorithm 1 demonstrates our procedure for training a CNN with binary weights. First, we binarize the weight filters at each layer by computing B and A. Then we call forward propagation using the binary weights and their corresponding scaling factors, where all the convolutional operations are carried out by equation 1. Then we call backward propagation, where the gradients are computed with respect to the estimated weight filters W̃. Lastly, the parameters and the learning rate get updated by an update rule, e.g., SGD with momentum or ADAM [42].

Once training is finished, there is no need to keep the real-value weights, because at inference we only perform forward propagation with the binarized weights.
Algorithm 1 Training an L-layer CNN with binary weights:
Input: A minibatch of inputs and targets (I, Y), cost function C(Y, Ŷ), current weight Wᵗ and current learning rate ηᵗ.
Output: Updated weight Wᵗ⁺¹ and updated learning rate ηᵗ⁺¹.
1: Binarizing weight filters:
2: for l = 1 to L do
3:   for the k-th filter in the l-th layer do
4:     A_{lk} = (1/n) ‖Wᵗ_{lk}‖_{ℓ1}
5:     B_{lk} = sign(Wᵗ_{lk})
6:     W̃_{lk} = A_{lk} B_{lk}
7: Ŷ = BinaryForward(I, B, A) // standard forward propagation except that convolutions are computed using equation 1 or 11
8: ∂C/∂W̃ = BinaryBackward(∂C/∂Ŷ, W̃) // standard backward propagation except that gradients are computed using W̃ instead of Wᵗ
9: Wᵗ⁺¹ = UpdateParameters(Wᵗ, ∂C/∂W̃, ηᵗ) // any update rule (e.g., SGD or ADAM)
10: ηᵗ⁺¹ = UpdateLearningRate(ηᵗ, t) // any learning rate scheduling function
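The sketch below walks one binary-weight layer through a few iterations of Algorithm 1 on synthetic regression data. It is a deliberately stripped-down illustration of ours, not the paper's training code: a single linear layer stands in for a CNN, and the straight-through gradient is simplified to passing ∂C/∂W̃ directly to W:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((512, 32))
y = X @ rng.standard_normal(32)            # synthetic targets

W = 0.1 * rng.standard_normal(32)          # real-valued master weights
lr = 0.05
for step in range(300):
    alpha = np.abs(W).mean()               # lines 1-6: binarize
    W_tilde = alpha * np.sign(W)
    pred = X @ W_tilde                     # line 7: BinaryForward
    grad = X.T @ (pred - y) / len(X)       # line 8: gradient w.r.t. W_tilde
    W -= lr * grad                         # line 9: update the real weights

W_tilde = np.abs(W).mean() * np.sign(W)
print(np.mean((X @ W_tilde - y) ** 2))     # loss of the binarized model
```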
# 3.2 XNOR-Networks
So far, we managed to find binary weights and a scaling factor to estimate the real-value weights. The inputs to the convolutional layers are still real-value tensors. Now we explain how to binarize both weights and inputs, so convolutions can be implemented efficiently using XNOR and bitcounting operations. This is the key element of our XNOR-Networks. In order to constrain a convolutional neural network ⟨I, W, ∗⟩ to have binary weights and binary inputs, we need to enforce binary operands at each step of the convolutional operation. A convolution consists of repeating a shift operation and a dot product. The shift operation moves the weight filter over the input, and the dot product performs element-wise multiplications between the values of the weight filter and the corresponding part of the input. If we express the dot product in terms of binary operations, convolution can be approximated using binary operations. The dot product between two binary vectors can be implemented by XNOR-bitcounting operations [11]. In this section, we explain how to approximate the dot product between two vectors in Rⁿ by a dot product between two vectors in {+1, −1}ⁿ. Next, we demonstrate how to use this approximation for estimating a convolutional operation between two tensors.

Binary Dot Product: To approximate the dot product between X, W ∈ Rⁿ such that XᵀW ≈ βHᵀαB, where H, B ∈ {+1, −1}ⁿ and β, α ∈ R⁺, we solve the following optimization:

α∗, B∗, β∗, H∗ = argmin_{α,B,β,H} ‖X ⊙ W − βα H ⊙ B‖    (7)

where ⊙ indicates element-wise product. We define Y ∈ Rⁿ such that Yᵢ = XᵢWᵢ, C ∈ {+1, −1}ⁿ such that Cᵢ = HᵢBᵢ, and γ ∈ R⁺ such that γ = βα. Equation 7 can be written as:

γ∗, C∗ = argmin_{γ,C} ‖Y − γC‖    (8)
[Figure 2 panels: (1) binarizing the weights (α = ‖W‖ℓ1/n, B = sign(W)); (2) binarizing the input; (3) efficient computation of the scaling factors β via K = A ∗ k, avoiding redundant computation over overlapping sub-tensors; (4) convolution with XNOR-bitcount.]
Fig. 2: This figure illustrates the procedure explained in section 3.2 for approximating a convolution using binary operations.
the optimal solutions can be achieved from equation 2 as follows:

C∗ = sign(Y) = sign(X) ⊙ sign(W) = H∗ ⊙ B∗    (9)

Since |Xᵢ| and |Wᵢ| are independent, knowing that Yᵢ = XᵢWᵢ, then E[|Yᵢ|] = E[|Xᵢ||Wᵢ|] = E[|Xᵢ|] E[|Wᵢ|]; therefore,

γ∗ = Σ|Yᵢ| / n = Σ|Xᵢ||Wᵢ| / n ≈ ((1/n) ‖X‖_{ℓ1}) ((1/n) ‖W‖_{ℓ1}) = β∗α∗    (10)
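The separability in equation 10 can be checked numerically; this small script (ours, with made-up Gaussian data satisfying the independence assumption) compares the exact optimal scale γ∗ against the factored β∗α∗:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
X = rng.standard_normal(n)
W = rng.standard_normal(n)

gamma = np.abs(X * W).mean()    # exact optimal scale for Y = X (.) W
beta = np.abs(X).mean()         # input scale
alpha = np.abs(W).mean()        # weight scale
print(gamma, beta * alpha)      # close, since |X_i| and |W_i| are independent
```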
Binary Convolution: Convolving weight filter W ∈ R^{c×w×h} (where w_in ≫ w, h_in ≫ h) with the input tensor I ∈ R^{c×w_in×h_in} requires computing the scaling factor β for all possible sub-tensors in I with the same size as W. Two of these sub-tensors are illustrated in figure 2 (second row) by X₁ and X₂. Due to overlaps between sub-tensors, computing β for all possible sub-tensors leads to a large number of redundant computations. To overcome this redundancy, we first compute a matrix A = Σ|I_{:,:,i}| / c, which is the average over the absolute values of the elements in the input I across the channels. Then we convolve A with a 2D filter k ∈ R^{w×h}, K = A ∗ k, where ∀ij, k_{ij} = 1/(wh). K contains scaling factors β for all sub-tensors in the input I. K_{ij} corresponds to β for a sub-tensor centered at location ij (across width and height). This procedure is shown in the third row of figure 2. Once we obtain the scaling factor α for the weight and β for all sub-tensors in I (denoted by K), we can approximate the convolution between input I and weight filter W mainly using binary operations:

I ∗ W ≈ (sign(I) ⊛ sign(W)) ⊙ K α    (11)

where ⊛ indicates a convolutional operation using XNOR and bitcount operations. This is illustrated in the last row of figure 2. Note that the number of non-binary operations is very small compared to the binary operations.
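To show what "XNOR and bitcount" means operationally, here is a small self-contained sketch (our own illustration, not the paper's CPU kernel) that evaluates one output of equation 11: the ±1 dot product is computed by packing signs into bytes, XNOR-ing them, and counting bits:

```python
import numpy as np

def xnor_dot(a_sign, b_sign):
    """Dot product of two {+1,-1} vectors via XNOR + bitcount."""
    n = a_sign.size
    a = np.packbits(a_sign.ravel() > 0)    # map {+1,-1} -> bits, 8 per byte
    b = np.packbits(b_sign.ravel() > 0)
    xnor = ~(a ^ b)                        # 1 where the signs agree
    pad = (-n) % 8                         # packbits zero-pads the last byte
    if pad:
        xnor[-1] &= np.uint8((0xFF << pad) & 0xFF)
    pop = int(np.unpackbits(xnor).sum())   # bitcount
    return 2 * pop - n                     # agreements minus disagreements

rng = np.random.default_rng(3)
X = rng.standard_normal(27)                # one c*w*h = 3*3*3 sub-tensor of I
W = rng.standard_normal(27)                # the matching weight filter
sX, sW = np.sign(X), np.sign(W)
assert xnor_dot(sX, sW) == int(sX @ sW)    # binary path matches float path

beta, alpha = np.abs(X).mean(), np.abs(W).mean()
print(X @ W, xnor_dot(sX, sW) * beta * alpha)   # exact vs. equation 11
```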
Fig. 3: This figure contrasts the block structure in our XNOR-Network (right) with a typical CNN (left).
Training XNOR-Networks: A typical block in a CNN contains several different layers. Figure 3 (left) illustrates a typical block in a CNN. This block has four layers in the following order: 1-Convolution, 2-Batch Normalization, 3-Activation and 4-Pooling. The Batch Normalization layer [43] normalizes the input batch by its mean and variance. The activation is an element-wise non-linear function (e.g., Sigmoid, ReLU). The pooling layer applies any type of pooling (e.g., max, min or average) on the input batch. Applying pooling on binary input results in a significant loss of information. For example, max-pooling on binary input returns a tensor in which most elements are equal to +1. Therefore, we put the pooling layer after the convolution. To further decrease the information loss due to binarization, we normalize the input before binarization. This ensures the data has zero mean, so thresholding at zero leads to less quantization error. The order of layers in a block of a binary CNN is shown in Figure 3 (right). The binary activation layer (BinActiv) computes K and sign(I) as explained in section 3.2. In the next layer (BinConv), given K and sign(I), we compute the binary convolution by equation 11. Then at the last layer (Pool), we apply the pooling operations. We can insert a non-binary activation (e.g., ReLU) after the binary convolution. This helps when we use state-of-the-art networks (e.g., AlexNet or VGG).
Once we have the binary CNN structure, the training algorithm would be the same as algorithm 1.
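The following single-channel sketch strings these pieces together in the B-A-C-P order just described. For readability it uses ordinary multiplications where a real implementation would use the XNOR path above; everything here (function names, shapes) is our own illustration:

```python
import numpy as np

def box_filter(a, h, w):
    """K = A * k with k_ij = 1/(wh), 'valid' convolution (section 3.2)."""
    out = np.empty((a.shape[0] - h + 1, a.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = a[i:i + h, j:j + w].mean()
    return out

def xnor_block(I, W):
    """One B-A-C-P block: BatchNorm -> BinActiv -> BinConv -> Pool."""
    I = (I - I.mean()) / (I.std() + 1e-5)      # B: normalize before binarizing
    K = box_filter(np.abs(I), *W.shape)        # A: scaling factors beta
    sI, sW = np.sign(I), np.sign(W)            # A: binary activations/weights
    alpha = np.abs(W).mean()
    h, w = W.shape
    conv = np.empty_like(K)
    for i in range(conv.shape[0]):             # C: binary convolution, eq. 11
        for j in range(conv.shape[1]):
            conv[i, j] = np.sum(sI[i:i + h, j:j + w] * sW)
    out = conv * K * alpha
    H2, W2 = out.shape[0] // 2, out.shape[1] // 2
    # P: 2x2 max-pool placed *after* the convolution, as argued above
    return out[:2 * H2, :2 * W2].reshape(H2, 2, W2, 2).max(axis=(1, 3))

rng = np.random.default_rng(4)
print(xnor_block(rng.standard_normal((10, 10)),
                 rng.standard_normal((3, 3))).shape)   # (4, 4)
```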
Binary Gradient: The computational bottleneck in the backward pass at each layer is computing a convolution between the weight filters (w) and the gradients with respect to the inputs (g^in). Similar to the binarization in the forward pass, we can binarize g^in in the backward pass. This leads to a very efficient training procedure using binary operations. Note that if we used equation 6 to compute the scaling factor for g^in, the direction of maximum change for SGD would be diminished. To preserve the maximum change in all dimensions, we use max_i(|g^in_i|) as the scaling factor.

k-bit Quantization: So far, we showed 1-bit quantization of weights and inputs using the sign(x) function. One can easily extend the quantization level to k bits by using q_k(x) = 2([(2^k − 1)((x + 1)/2)] / (2^k − 1)) − 1 instead of the sign function, where [.] indicates the rounding operation and x ∈ [−1, 1].
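A direct NumPy transcription of q_k (a sketch of ours; `np.round` stands in for the [.] operation, so ties follow NumPy's banker's rounding):

```python
import numpy as np

def quantize_k(x, k):
    """k-bit quantization of x in [-1, 1]; k = 1 reduces to two levels."""
    levels = 2 ** k - 1
    return 2.0 * np.round(levels * (x + 1.0) / 2.0) / levels - 1.0

x = np.linspace(-1.0, 1.0, 9)
print(quantize_k(x, 1))   # values in {-1, +1}
print(quantize_k(x, 2))   # values in {-1, -1/3, +1/3, +1}
```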
# 4 Experiments
We evaluate our method by analyzing its efficiency and accuracy. We measure the efficiency by computing the computational speedup (in terms of the number of high-precision operations) achieved by our binary convolution vs. standard convolution.
Fig. 4: This figure shows the efficiency of binary convolutions in terms of memory (a) and computation (b-c). (a) contrasts the required memory for binary and double-precision weights in three different architectures (AlexNet, ResNet-18 and VGG-19). (b,c) show the speedup gained by binary convolution under (b) different numbers of channels and (c) different filter sizes.
To measure accuracy, we perform image classification on the large-scale ImageNet dataset. This paper is the first work that evaluates binary neural networks on the ImageNet dataset. Our binarization technique is general; we can use any CNN architecture. We evaluate AlexNet [1] and two deeper architectures in our experiments. We compare our method with two recent works on binarizing neural networks: BinaryConnect [38] and BinaryNet [11]. The classification accuracy of our binary-weight-network version of AlexNet is as accurate as the full-precision version of AlexNet. This classification accuracy outperforms competitors on binary neural networks by a large margin. We also present an ablation study, where we evaluate the key elements of our proposed method: computing scaling factors and our block structure for binary CNN. We show that our method of computing the scaling factors is important to reach high accuracy.
# 4.1 Efficiency Analysis
In a standard convolution, the total number of operations is c N_W N_I, where c is the number of channels, N_W = wh and N_I = w_in h_in. Note that some modern CPUs can fuse the multiplication and addition into a single-cycle operation. On those CPUs, Binary-Weight-Networks do not deliver a speedup. Our binary approximation of convolution (equation 11) has c N_W N_I binary operations and N_I non-binary operations. With the current generation of CPUs, we can perform 64 binary operations in one CPU clock cycle, therefore the speedup can be computed by

S = (c N_W N_I) / ((1/64) c N_W N_I + N_I) = 64 c N_W / (c N_W + 64)

The speedup depends on the channel size and filter size but not the input size. In figure 4 (b-c) we illustrate the speedup achieved by changing the number of channels and the filter size. While changing one parameter, we fix the other parameters as follows: c = 256, N_I = 14² and N_W = 3² (the majority of convolutions in the ResNet [4] architecture have this structure). Using our approximation of convolution we gain a 62.27× theoretical speedup, but in our CPU implementation with all of the overheads, we achieve a 58× speedup in one convolution (excluding the process for memory allocation and memory access). With a small channel size (c = 3) and filter size (N_W = 1 × 1) the speedup is not considerably high. This motivates us to avoid binarization at the first and last
layer of a CNN. In the first layer the channel size is 3 and in the last layer the filter size is 1 × 1. A similar strategy was used in [11]. Figure 4-a shows the required memory for three different CNN architectures (AlexNet, VGG-19, ResNet-18) with binary and double-precision weights. Binary-weight networks are so small that they can be easily fitted into portable devices. BinaryNet [11] is in the same order of memory and computation efficiency as our method. In Figure 4, we show an analysis of computation and memory cost for a binary convolution. The same analysis is valid for BinaryNet and BinaryConnect. The key difference of our method is using a scaling factor, which does not change the order of efficiency while providing a significant improvement in accuracy.
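Plugging numbers into the speedup formula above (a two-line check, with the 64-binary-ops-per-clock assumption baked in):

```python
def speedup(c, n_w):
    # S = 64 c N_W / (c N_W + 64); independent of the input size N_I
    return 64.0 * c * n_w / (c * n_w + 64)

print(speedup(256, 3 * 3))   # ~62.27x, the theoretical figure quoted above
print(speedup(3, 1 * 1))     # ~2.87x: why the first/last layers stay real-valued
```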
# 4.2 Image Classification
We evaluate the performance of our proposed approach on the task of natural image classification. So far, in the literature, binary neural network methods have presented their evaluations on either limited-domain or simplified datasets, e.g., CIFAR-10, MNIST, SVHN. To compare with state-of-the-art vision, we evaluate our method on ImageNet (ILSVRC2012). ImageNet has ~1.2M training images from 1K categories and 50K validation images. The images in this dataset are natural images with reasonably high resolution compared to the CIFAR and MNIST datasets, which have relatively small images. We report our classification performance using Top-1 and Top-5 accuracies. We adopt three different CNN architectures as our base architectures for binarization: AlexNet [1], Residual Networks (known as ResNet) [4], and a variant of GoogLenet [3]. We compare our Binary-Weight-Network (BWN) with BinaryConnect (BC) [38] and our XNOR-Network (XNOR-Net) with BinaryNeuralNet (BNN) [11]. BinaryConnect (BC) is a method for training a deep neural network with binary weights during forward and backward propagation. Similar to our approach, they keep the real-value weights during the parameter-update step. Our binarization is different from BC. The binarization in BC can be either deterministic or stochastic. We use the deterministic binarization for BC in our comparisons because the stochastic binarization is not efficient. The same evaluation settings have been used and discussed in [11]. BinaryNeuralNet (BNN) [11] is a neural network with binary weights and activations during inference and gradient computation in training. In concept, this is a similar approach to our XNOR-Network, but the binarization method and the network structure in BNN are different from ours. Their training algorithm is similar to BC, and they used deterministic binarization in their evaluations.

CIFAR-10: BC and BNN showed near state-of-the-art performance on the CIFAR-10, MNIST, and SVHN datasets. BWN and XNOR-Net on CIFAR-10, using the same network architecture as BC and BNN, achieve error rates of 9.88% and 10.17%, respectively. In this paper we explore the possibility of obtaining near state-of-the-art results on a much larger and more challenging dataset (ImageNet).

AlexNet [1] is a CNN architecture with 5 convolutional layers and two fully-connected layers. This architecture was the first CNN architecture that was shown to be successful on the ImageNet classification task. This network has 61M parameters. We use AlexNet coupled with batch normalization layers [43].

Train: In each iteration of training, images are resized to have 256 pixels at their smaller dimension and then a random crop of 224 × 224 is selected for training. We run
Fig. 5: This figure compares the ImageNet classification accuracy on Top-1 and Top-5 across training epochs. Our approaches, BWN and XNOR-Net, outperform BinaryConnect (BC) and BinaryNet (BNN) in all the epochs by a large margin (~17%).
Classification Accuracy (%)
      | Binary-Weight      | Binary-Input-Binary-Weight | Full-Precision
      | BWN     BC [38]    | XNOR-Net   BNN [11]        | AlexNet [1]
Top-1 | 56.8    35.4       | 44.2       27.9            | 56.6
Top-5 | 79.4    61.0       | 69.2       50.42           | 80.2

Table 1: This table compares the final accuracies (Top-1 / Top-5) of the full-precision network with our binary-precision networks, Binary-Weight-Network (BWN) and XNOR-Network (XNOR-Net), and the competitor methods, BinaryConnect (BC) and BinaryNet (BNN).
the training algorithm for 16 epochs with batch size equal to 512. We use the negative log-likelihood over the soft-max of the outputs as our classification loss function. In our implementation of AlexNet we do not use the Local-Response-Normalization (LRN) layer³. We use SGD with momentum = 0.9 for updating parameters in BWN and BC. For XNOR-Net and BNN we used ADAM [42]. ADAM converges faster and usually achieves better accuracy for binary inputs [11]. The learning rate starts at 0.1 and we apply a learning-rate decay of 0.01 every 4 epochs.

Test: At inference time, we use the 224 × 224 center crop for forward propagation. Figure 5 shows the classification accuracy for training and inference along the training epochs for top-1 and top-5 scores. The dashed lines represent training accuracy and the solid lines show the validation accuracy. In all of the epochs our method outperforms BC and BNN by a large margin (~17%). Table 1 compares our final accuracy with BC and BNN. We found that the scaling factors for the weights (α) are much more effective than the scaling factors for the inputs (β). Removing β reduces the accuracy by a small margin (less than 1% top-1 on AlexNet).

Binary Gradient: Using XNOR-Net with binary gradients, the top-1 accuracy drops by only 1.4%.

Residual Net: We use the ResNet-18 proposed in [4] with shortcut type B.⁴
Train: In each training iteration, images are resized randomly between 256 and 480 pixels on the smaller dimension and then a random crop of 224 × 224 is selected for training. We run the training algorithm for 58 epochs with batch size equal to 256
³ Our implementation follows https://gist.github.com/szagoruyko/dd032c529048492630fc
⁴ We used the Torch implementation in https://github.com/facebook/fb.resnet.torch
Fig. 6: This figure shows the classification accuracy, (a) Top-1 and (b) Top-5 measures, across the training epochs on the ImageNet dataset by Binary-Weight-Network and XNOR-Network using ResNet-18.
Network Variations        | ResNet-18 (top-1 / top-5) | GoogLenet (top-1 / top-5)
Binary-Weight-Network     | 60.8 / 83.0               | 65.5 / 86.1
XNOR-Network              | 51.2 / 73.2               | N/A
Full-Precision-Network    | 69.3 / 89.2               | 71.3 / 90.0

Table 2: This table compares the final classification accuracy achieved by our binary-precision networks with the full-precision network in the ResNet-18 and GoogLenet architectures.
images. The learning rate starts at 0.1 and we use a learning-rate decay of 0.01 at epochs 30 and 40.

Test: At inference time, we use the 224 × 224 center crop for forward propagation. Figure 6 shows the classification accuracy (Top-1 and Top-5) along the epochs for training and inference. The dashed lines represent training and the solid lines represent inference. Table 2 shows our final accuracy for BWN and XNOR-Net.

GoogLenet Variant: We experiment with a variant of GoogLenet [3] that uses a similar number of parameters and connections but only straightforward convolutions, no branching⁵. It has 21 convolutional layers with filter sizes alternating between 1 × 1 and 3 × 3.

Train: Images are resized randomly between 256 and 320 pixels on the smaller dimension and then a random crop of 224 × 224 is selected for training. We run the training algorithm for 80 epochs with a batch size of 128. The learning rate starts at 0.1 and we use polynomial rate decay, β = 4.

Test: At inference time, we use a center crop of 224 × 224.
# 4.3 Ablation Studies
There are two key differences between our method and the previous network binarization methods: the binarization technique and the block structure in our binary CNN.
5 We used the Darknet [44] implementation: http://pjreddie.com/darknet/imagenet/#extraction
(a) Binary-Weight-Network
Strategy for computing α  | top-1 | top-5
Using equation 6          | 56.8  | 79.4
Using a separate layer    | 46.2  | 69.5

(b) XNOR-Network
Block Structure           | top-1 | top-5
C-B-A-P                   | 30.3  | 57.5
B-A-C-P                   | 44.2  | 69.2

Table 3: In this table, we evaluate two key elements of our approach: computing the optimal scaling factors and specifying the right order for layers in a block of a CNN with binary input. (a) demonstrates the importance of the scaling factor in training binary-weight networks and (b) shows that our way of ordering the layers in a block of a CNN is crucial for training XNOR-Networks. C, B, A, P stand for Convolution, BatchNormalization, Activation (here binary activation), and Pooling, respectively.
For binarization, we find the optimal scaling factors at each iteration of training. For the block structure, we order the layers in a block in a way that decreases the quantization loss for training XNOR-Net. Here, we evaluate the effect of each of these elements on the performance of the binary networks. Instead of computing the scaling factor α using equation 6, one can consider α as a network parameter. In other words, a layer after the binary convolution multiplies the output of the convolution by a scalar parameter for each filter. This is similar to computing the affine parameters in batch normalization. Table 3-a compares the performance of a binary network with the two ways of computing the scaling factors. As we mentioned in section 3.2, the typical block structure in a CNN is not suitable for binarization. Table 3-b compares the standard block structure C-B-A-P (Convolution, Batch Normalization, Activation, Pooling) with our structure B-A-C-P (A is binary activation).
# 5 Conclusion
We introduce simple, efficient, and accurate binary approximations for neural networks. We train a neural network that learns to find binary values for weights, which reduces the size of the network by ~32× and provides the possibility of loading very deep neural networks into portable devices with limited memory. We also propose an architecture, XNOR-Net, that uses mostly bitwise operations to approximate convolutions. This provides a ~58× speed up and enables the possibility of running inference of state-of-the-art deep neural networks on a CPU (rather than a GPU) in real-time.
# Acknowledgements
This work is in part supported by ONR N00014-13-1-0720, NSF IIS-1338054, the Allen Distinguished Investigator Award, and the Allen Institute for Artificial Intelligence.
# References
1. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classiï¬cation with deep convolutional neural networks. In: Advances in neural information processing systems. (2012) 1097â1105 1, 10, 11, 12
2. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recog- nition. arXiv preprint arXiv:1409.1556 (2014) 1
3. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 1â9 1, 4, 11, 13
4. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. CoRR (2015) 1, 4, 10, 11, 12
5. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2014) 580â587 1
6. Girshick, R.: Fast r-cnn. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 1440â1448 1
7. Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems. (2015) 91â99 1
8. Oculus, V.: Oculus rift-virtual reality headset for 3d gaming. URL: http://www. oculusvr. com (2012) 1
9. Gottmer, M.: Merging reality and virtuality with microsoft hololens. (2015) 1
10. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 3431–3440 2
11. Courbariaux, M., Bengio, Y.: Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR (2016) 2, 3, 4, 6, 7, 10, 11, 12
12. Denil, M., Shakibi, B., Dinh, L., de Freitas, N., et al.: Predicting parameters in deep learning. In: Advances in Neural Information Processing Systems. (2013) 2148â2156 3
13. Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4) (1989) 303â314 3
14. Seide, F., Li, G., Yu, D.: Conversational speech transcription using context-dependent deep neural networks. In: Interspeech. (2011) 437â440 3
15. Dauphin, Y.N., Bengio, Y.: arXiv:1301.3583 (2013) 3 Big neural networks waste capacity. arXiv preprint
16. Ba, J., Caruana, R.: Do deep nets really need to be deep? In: Advances in neural information processing systems. (2014) 2654â2662 3
17. Hanson, S.J., Pratt, L.Y.: Comparing biases for minimal network construction with back-propagation. In: Advances in Neural Information Processing Systems. (1989) 177–185 3
18. LeCun, Y., Denker, J.S., Solla, S.A., Howard, R.E., Jackel, L.D.: Optimal brain damage. In: NIPs. Volume 89. (1989) 3
19. Hassibi, B., Stork, D.G.: Second order derivatives for network pruning: Optimal brain sur- geon. Morgan Kaufmann (1993) 3
20. Han, S., Pool, J., Tran, J., Dally, W.: Learning both weights and connections for efï¬cient neural network. In: Advances in Neural Information Processing Systems. (2015) 1135â1143 3
21. Van Nguyen, H., Zhou, K., Vemulapalli, R.: Cross-domain synthesis of medical images using efficient location-sensitive deep network. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015. Springer (2015) 677–684 3
22. Han, S., Mao, H., Dally, W.J.: Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149 (2015) 3
23. Chen, W., Wilson, J.T., Tyree, S., Weinberger, K.Q., Chen, Y.: Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788 (2015) 3
24. Denton, E.L., Zaremba, W., Bruna, J., LeCun, Y., Fergus, R.: Exploiting linear structure within convolutional networks for efï¬cient evaluation. In: Advances in Neural Information Processing Systems. (2014) 1269â1277 3
25. Jaderberg, M., Vedaldi, A., Zisserman, A.: Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866 (2014) 3
26. Lin, M., Chen, Q., Yan, S.: Network in network. arXiv preprint arXiv:1312.4400 (2013) 4
27. Szegedy, C., Ioffe, S., Vanhoucke, V.: Inception-v4, inception-resnet and the impact of residual connections on learning. CoRR (2016) 4
28. Iandola, F.N., Moskewicz, M.W., Ashraf, K., Han, S., Dally, W.J., Keutzer, K.: Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360 (2016) 4
29. Gong, Y., Liu, L., Yang, M., Bourdev, L.: Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115 (2014) 4
30. Arora, S., Bhaskara, A., Ge, R., Ma, T.: Provable bounds for learning some deep representa- tions. arXiv preprint arXiv:1310.6343 (2013) 4
31. Vanhoucke, V., Senior, A., Mao, M.Z.: Improving the speed of neural networks on cpus. In: Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop. Volume 1. (2011) 4
32. Hwang, K., Sung, W.: Fixed-point feedforward deep neural network design using weights +1, 0, and −1. In: Signal Processing Systems (SiPS), 2014 IEEE Workshop on, IEEE (2014) 1–6 4
33. Anwar, S., Hwang, K., Sung, W.: Fixed point optimization of deep convolutional neural networks for object recognition. In: Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, IEEE (2015) 1131–1135 4
34. Lin, Z., Courbariaux, M., Memisevic, R., Bengio, Y.: Neural networks with few multiplica- tions. arXiv preprint arXiv:1510.03009 (2015) 4
35. Courbariaux, M., Bengio, Y., David, J.P.: Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024 (2014) 4
36. Soudry, D., Hubara, I., Meir, R.: Expectation backpropagation: parameter-free training of multilayer neural networks with continuous or discrete weights. In: Advances in Neural Information Processing Systems. (2014) 963–971 4
37. Esser, S.K., Appuswamy, R., Merolla, P., Arthur, J.V., Modha, D.S.: Backpropagation for energy-efï¬cient neuromorphic computing. In: Advances in Neural Information Processing Systems. (2015) 1117â1125 4
38. Courbariaux, M., Bengio, Y., David, J.P.: Binaryconnect: Training deep neural networks with binary weights during propagations. In: Advances in Neural Information Processing Systems. (2015) 3105â3113 4, 6, 10, 11
39. Wan, L., Zeiler, M., Zhang, S., Cun, Y.L., Fergus, R.: Regularization of neural networks us- ing dropconnect. In: Proceedings of the 30th International Conference on Machine Learning (ICML-13). (2013) 1058â1066 5
40. Baldassi, C., Ingrosso, A., Lucibello, C., Saglietti, L., Zecchina, R.: Subdominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses. Physical review letters 115(12) (2015) 128101 5
41. Kim, M., Smaragdis, P.: Bitwise neural networks. arXiv preprint arXiv:1601.06071 (2016) 5
42. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) 6, 12
43. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015) 9, 11
44. Redmon, J.: Darknet: Open source neural networks in C. http://pjreddie.com/darknet/ (2013–2016) 13
17 | {
"id": "1602.07360"
} |
1603.05027 | Identity Mappings in Deep Residual Networks | Deep residual networks have emerged as a family of extremely deep
architectures showing compelling accuracy and nice convergence behaviors. In
this paper, we analyze the propagation formulations behind the residual
building blocks, which suggest that the forward and backward signals can be
directly propagated from one block to any other block, when using identity
mappings as the skip connections and after-addition activation. A series of
ablation experiments support the importance of these identity mappings. This
motivates us to propose a new residual unit, which makes training easier and
improves generalization. We report improved results using a 1001-layer ResNet
on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet.
Code is available at: https://github.com/KaimingHe/resnet-1k-layers | http://arxiv.org/pdf/1603.05027 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV, cs.LG | ECCV 2016 camera-ready | null | cs.CV | 20160316 | 20160725 | # Identity Mappings in Deep Residual Networks
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun
Microsoft Research
Abstract. Deep residual networks [1] have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https://github.com/KaimingHe/resnet-1k-layers.
# 1 Introduction
Deep residual networks (ResNets) [1] consist of many stacked "Residual Units". Each unit (Fig. 1 (a)) can be expressed in a general form:
$$y_l = h(x_l) + \mathcal{F}(x_l, \mathcal{W}_l), \qquad x_{l+1} = f(y_l),$$

where $x_l$ and $x_{l+1}$ are input and output of the $l$-th unit, and $\mathcal{F}$ is a residual function. In [1], $h(x_l) = x_l$ is an identity mapping and $f$ is a ReLU [2] function. ResNets that are over 100-layer deep have shown state-of-the-art accuracy for several challenging recognition tasks on ImageNet [3] and MS COCO [4] competitions. The central idea of ResNets is to learn the additive residual function $\mathcal{F}$ with respect to $h(x_l)$, with a key choice of using an identity mapping $h(x_l) = x_l$. This is realized by attaching an identity skip connection ("shortcut").
In this paper, we analyze deep residual networks by focusing on creating a "direct" path for propagating information — not only within a residual unit, but through the entire network. Our derivations reveal that if both $h(x_l)$ and $f(y_l)$ are identity mappings, the signal could be directly propagated from one unit to any other units, in both forward and backward passes. Our experiments empirically show that training in general becomes easier when the architecture is closer to the above two conditions.
To understand the role of skip connections, we analyze and compare various types of $h(x_l)$. We find that the identity mapping $h(x_l) = x_l$ chosen in [1]
Figure 1. Left: (a) original Residual Unit in [1]; (b) proposed Residual Unit. The grey arrows indicate the easiest paths for the information to propagate, corresponding to the additive term "$x_l$" in Eqn.(4) (forward propagation) and the additive term "1" in Eqn.(5) (backward propagation). Right: training curves on CIFAR-10 of 1001-layer ResNets. Solid lines denote test error (y-axis on the right), and dashed lines denote training loss (y-axis on the left). The proposed unit makes ResNet-1001 easier to train.
achieves the fastest error reduction and lowest training loss among all variants we investigated, whereas skip connections of scaling, gating [5,6,7], and 1×1 convolutions all lead to higher training loss and error. These experiments suggest that keeping a "clean" information path (indicated by the grey arrows in Fig. 1, 2, and 4) is helpful for easing optimization.
To construct an identity mapping $f(y_l) = y_l$, we view the activation functions (ReLU and BN [8]) as "pre-activation" of the weight layers, in contrast to conventional wisdom of "post-activation". This point of view leads to a new residual unit design, shown in (Fig. 1(b)). Based on this unit, we present competitive results on CIFAR-10/100 with a 1001-layer ResNet, which is much easier to train and generalizes better than the original ResNet in [1]. We further report improved results on ImageNet using a 200-layer ResNet, for which the counterpart of [1] starts to overfit. These results suggest that there is much room to exploit the dimension of network depth, a key to the success of modern deep learning.
# 2 Analysis of Deep Residual Networks
The ResNets developed in [1] are modularized architectures that stack building blocks of the same connecting shape. In this paper we call these blocks "Residual Units". The original Residual Unit in [1] performs the following computation:
$$y_l = h(x_l) + \mathcal{F}(x_l, \mathcal{W}_l), \quad (1)$$
$$x_{l+1} = f(y_l). \quad (2)$$
Here $x_l$ is the input feature to the $l$-th Residual Unit. $\mathcal{W}_l = \{W_{l,k} \,|\, 1 \le k \le K\}$ is a set of weights (and biases) associated with the $l$-th Residual Unit, and $K$ is the number of layers in a Residual Unit ($K$ is 2 or 3 in [1]). $\mathcal{F}$ denotes the residual function, e.g., a stack of two 3×3 convolutional layers in [1]. The function $f$ is the operation after element-wise addition, and in [1] $f$ is ReLU. The function $h$ is set as an identity mapping: $h(x_l) = x_l$.¹
If $f$ is also an identity mapping: $x_{l+1} \equiv y_l$, we can put Eqn.(2) into Eqn.(1) and obtain:
$$x_{l+1} = x_l + \mathcal{F}(x_l, \mathcal{W}_l). \quad (3)$$
Recursively ($x_{l+2} = x_{l+1} + \mathcal{F}(x_{l+1}, \mathcal{W}_{l+1}) = x_l + \mathcal{F}(x_l, \mathcal{W}_l) + \mathcal{F}(x_{l+1}, \mathcal{W}_{l+1})$, etc.) we will have:
$$x_L = x_l + \sum_{i=l}^{L-1} \mathcal{F}(x_i, \mathcal{W}_i), \quad (4)$$
for any deeper unit $L$ and any shallower unit $l$. Eqn.(4) exhibits some nice properties. (i) The feature $x_L$ of any deeper unit $L$ can be represented as the feature $x_l$ of any shallower unit $l$ plus a residual function in a form of $\sum_{i=l}^{L-1}\mathcal{F}$, indicating that the model is in a residual fashion between any units $L$ and $l$. (ii) The feature $x_L = x_0 + \sum_{i=0}^{L-1}\mathcal{F}(x_i, \mathcal{W}_i)$, of any deep unit $L$, is the summation of the outputs of all preceding residual functions (plus $x_0$). This is in contrast to a "plain network" where a feature $x_L$ is a series of matrix-vector products, say, $\prod_{i=0}^{L-1} W_i x_0$ (ignoring BN and ReLU).

Eqn.(4) also leads to nice backward propagation properties. Denoting the loss function as $\mathcal{E}$, from the chain rule of backpropagation [9] we have:
$$\frac{\partial \mathcal{E}}{\partial x_l} = \frac{\partial \mathcal{E}}{\partial x_L}\frac{\partial x_L}{\partial x_l} = \frac{\partial \mathcal{E}}{\partial x_L}\left(1 + \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1}\mathcal{F}(x_i, \mathcal{W}_i)\right) \quad (5)$$
Eqn.(5) indicates that the gradient $\frac{\partial \mathcal{E}}{\partial x_l}$ can be decomposed into two additive terms: a term of $\frac{\partial \mathcal{E}}{\partial x_L}$ that propagates information directly without concerning any weight layers, and another term of $\frac{\partial \mathcal{E}}{\partial x_L}\left(\frac{\partial}{\partial x_l}\sum_{i=l}^{L-1}\mathcal{F}\right)$ that propagates through the weight layers. The additive term of $\frac{\partial \mathcal{E}}{\partial x_L}$ ensures that information is directly propagated back to any shallower unit $l$. Eqn.(5) also suggests that it
¹ It is noteworthy that there are Residual Units for increasing dimensions and reducing feature map sizes [1] in which $h$ is not identity. In this case the following derivations do not hold strictly. But as there are only a very few such units (two on CIFAR and three on ImageNet, depending on image sizes [1]), we expect that they do not have the exponential impact as we present in Sec. 3. One may also think of our derivations as applied to all Residual Units within the same feature map size.
is unlikely for the gradient $\frac{\partial \mathcal{E}}{\partial x_l}$ to be canceled out for a mini-batch, because in general the term $\frac{\partial}{\partial x_l}\sum_{i=l}^{L-1}\mathcal{F}$ cannot be always $-1$ for all samples in a mini-batch. This implies that the gradient of a layer does not vanish even when the weights are arbitrarily small.
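To make Eqn.(4) concrete, the short numpy sketch below (our illustration, not the paper's code; `residual_fn` is an arbitrary stand-in for $\mathcal{F}$) builds a chain of identity-skip units and checks that the deepest feature equals the input plus the sum of all residual outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 5, 8
weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(L)]

def residual_fn(x, W):
    # Stand-in for F(x, W); any function works once h and f are identities.
    return np.tanh(x @ W)

x0 = rng.normal(size=d)
outputs, h = [], x0
for W in weights:
    f = residual_fn(h, W)
    outputs.append(f)
    h = h + f  # identity skip; the after-addition activation is identity

# Eqn.(4): x_L = x_0 + sum_i F(x_i, W_i)
assert np.allclose(h, x0 + np.sum(outputs, axis=0))
```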
# Discussions
Eqn.(4) and Eqn.(5) suggest that the signal can be directly propagated from any unit to another, both forward and backward. The foundation of Eqn.(4) is two identity mappings: (i) the identity skip connection h(xl) = xl, and (ii) the condition that f is an identity mapping.
These directly propagated information flows are represented by the grey arrows in Fig. 1, 2, and 4. And the above two conditions are true when these grey arrows cover no operations (except addition) and thus are "clean". In the following two sections we separately investigate the impacts of the two conditions.
# 3 On the Importance of Identity Skip Connections
Let's consider a simple modification, $h(x_l) = \lambda_l x_l$, to break the identity shortcut:
$$x_{l+1} = \lambda_l x_l + \mathcal{F}(x_l, \mathcal{W}_l), \quad (6)$$
where $\lambda_l$ is a modulating scalar (for simplicity we still assume $f$ is identity). Recursively applying this formulation we obtain an equation similar to Eqn.(4): $x_L = \left(\prod_{i=l}^{L-1}\lambda_i\right)x_l + \sum_{i=l}^{L-1}\left(\prod_{j=i+1}^{L-1}\lambda_j\right)\mathcal{F}(x_i, \mathcal{W}_i)$, or simply:
$$x_L = \left(\prod_{i=l}^{L-1}\lambda_i\right)x_l + \sum_{i=l}^{L-1}\hat{\mathcal{F}}(x_i, \mathcal{W}_i), \quad (7)$$
where the notation $\hat{\mathcal{F}}$ absorbs the scalars into the residual functions. Similar to Eqn.(5), we have backpropagation of the following form:
$$\frac{\partial \mathcal{E}}{\partial x_l} = \frac{\partial \mathcal{E}}{\partial x_L}\left(\left(\prod_{i=l}^{L-1}\lambda_i\right) + \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1}\hat{\mathcal{F}}(x_i, \mathcal{W}_i)\right) \quad (8)$$
Unlike Eqn.(5), in Eqn.(8) the first additive term is modulated by a factor $\prod_{i=l}^{L-1}\lambda_i$. For an extremely deep network ($L$ is large), if $\lambda_i > 1$ for all $i$, this factor can be exponentially large; if $\lambda_i < 1$ for all $i$, this factor can be exponentially small and vanish, which blocks the backpropagated signal from the shortcut and forces it to flow through the weight layers. This results in optimization difficulties as we show by experiments.
In the above analysis, the original identity skip connection in Eqn.(3) is replaced with a simple scaling $h(x_l) = \lambda_l x_l$. If the skip connection $h(x_l)$ represents more complicated transforms (such as gating and 1×1 convolutions), in Eqn.(8) the first term becomes $\prod_{i=l}^{L-1} h'_i$, where $h'$ is the derivative of $h$. This product may also impede information propagation and hamper the training procedure as witnessed in the following experiments.
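A two-line numeric illustration (ours, not from the paper) of the modulating factor $\prod_{i=l}^{L-1}\lambda_i$ in Eqn.(8), for a hypothetical 100-unit network with a constant scaling $\lambda$:

```python
L = 100
for lam in (0.9, 1.0, 1.1):
    # prod of identical lambdas over L units
    print(f"lambda={lam}: shortcut gradient factor = {lam ** L:.3e}")
# lambda=0.9 -> ~2.7e-05 (vanishes), lambda=1.0 -> 1, lambda=1.1 -> ~1.4e+04
```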
[Figure 2 diagrams: (a) original; (b) constant scaling; (c) exclusive gating; (d) shortcut-only gating; (e) 1×1 conv shortcut; (f) dropout shortcut]
Figure 2. Various types of shortcut connections used in Table 1. The grey arrows indicate the easiest paths for the information to propagate. The shortcut connections in (b-f) are impeded by different components. For simplifying illustrations we do not display the BN layers, which are adopted right after the weight layers for all units here.
# 3.1 Experiments on Skip Connections
We experiment with the 110-layer ResNet as presented in [1] on CIFAR-10 [10]. This extremely deep ResNet-110 has 54 two-layer Residual Units (consisting of 3×3 convolutional layers) and is challenging for optimization. Our implementation details (see appendix) are the same as [1]. Throughout this paper we report the median accuracy of 5 runs for each architecture on CIFAR, reducing the impacts of random variations.
Though our above analysis is driven by identity $f$, the experiments in this section are all based on $f$ = ReLU as in [1]; we address identity $f$ in the next section. Our baseline ResNet-110 has 6.61% error on the test set. The comparisons of other variants (Fig. 2 and Table 1) are summarized as follows:
Constant scaling. We set $\lambda = 0.5$ for all shortcuts (Fig. 2(b)). We further study two cases of scaling $\mathcal{F}$: (i) $\mathcal{F}$ is not scaled; or (ii) $\mathcal{F}$ is scaled by a constant scalar of $1 - \lambda = 0.5$, which is similar to the highway gating [6,7] but with frozen gates. The former case does not converge well; the latter is able to converge, but the test error (Table 1, 12.35%) is substantially higher than the original ResNet-110. Fig 3(a) shows that the training error is higher than that of the original ResNet-110, suggesting that the optimization has difficulties when the shortcut signal is scaled down.
Table 1. Classification error on the CIFAR-10 test set using ResNet-110 [1], with different types of shortcut connections applied to all Residual Units. We report "fail" when the test error is higher than 20%.
| case | Fig. | on shortcut | on $\mathcal{F}$ | error (%) | remark |
|------|------|-------------|------------------|-----------|--------|
| original [1] | Fig. 2(a) | 1 | 1 | 6.61 | |
| constant scaling | Fig. 2(b) | 0 | 1 | fail | This is a plain net |
| | | 0.5 | 1 | fail | |
| | | 0.5 | 0.5 | 12.35 | frozen gating |
| exclusive gating | Fig. 2(c) | $1-g(x)$ | $g(x)$ | fail | init $b_g$ = 0 to −5 |
| | | $1-g(x)$ | $g(x)$ | 8.70 | init $b_g$ = −6 |
| | | $1-g(x)$ | $g(x)$ | 9.81 | init $b_g$ = −7 |
| shortcut-only gating | Fig. 2(d) | $1-g(x)$ | 1 | 12.86 | init $b_g$ = 0 |
| | | $1-g(x)$ | 1 | 6.91 | init $b_g$ = −6 |
| 1×1 conv shortcut | Fig. 2(e) | 1×1 conv | 1 | 12.22 | |
| dropout shortcut | Fig. 2(f) | dropout 0.5 | 1 | fail | |
Exclusive gating. Following the Highway Networks [6,7] that adopt a gating mechanism [5], we consider a gating function $g(x) = \sigma(W_g x + b_g)$ where a transform is represented by weights $W_g$ and biases $b_g$ followed by the sigmoid function $\sigma(x) = \frac{1}{1+e^{-x}}$. In a convolutional network $g(x)$ is realized by a 1×1 convolutional layer. The gating function modulates the signal by element-wise multiplication.
We investigate the "exclusive" gates as used in [6,7] — the $\mathcal{F}$ path is scaled by $g(x)$ and the shortcut path is scaled by $1-g(x)$. See Fig 2(c). We find that the initialization of the biases $b_g$ is critical for training gated models, and following the guidelines² in [6,7], we conduct hyper-parameter search on the initial value of $b_g$ in the range of 0 to −10 with a decrement step of −1 on the training set by cross-validation. The best value (−6 here) is then used for training on the training set, leading to a test result of 8.70% (Table 1), which still lags far behind the ResNet-110 baseline. Fig 3(b) shows the training curves. Table 1 also reports the results of using other initialized values, noting that the exclusive gating network does not converge to a good solution when $b_g$ is not appropriately initialized.
The impact of the exclusive gating mechanism is two-fold. When $1-g(x)$ approaches 1, the gated shortcut connections are closer to identity which helps information propagation; but in this case $g(x)$ approaches 0 and suppresses the function $\mathcal{F}$. To isolate the effects of the gating functions on the shortcut path alone, we investigate a non-exclusive gating mechanism next.
Shortcut-only gating. In this case the function $\mathcal{F}$ is not scaled; only the shortcut path is gated by $1-g(x)$. See Fig 2(d). The initialized value of $b_g$ is still essential in this case. When the initialized $b_g$ is 0 (so initially the expectation of $1-g(x)$ is 0.5), the network converges to a poor result of 12.86% (Table 1). This is also caused by higher training error (Fig 3(c)).
² See also: people.idsia.ch/~rupesh/very_deep_learning/ by [6,7].
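The sketch below (our toy rendering, with the 1×1 convolutional gate reduced to a dense layer and tanh standing in for $\mathcal{F}$) shows how the initialization of $b_g$ controls the exclusive gate of Fig. 2(c):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=d)
Wg = rng.normal(scale=0.1, size=(d, d))  # gate weights
W = rng.normal(scale=0.1, size=(d, d))   # residual-branch weights

for bg in (0.0, -6.0):
    g = sigmoid(x @ Wg + bg)                  # g(x) = sigma(Wg x + bg)
    out = (1.0 - g) * x + g * np.tanh(x @ W)  # exclusive gating, Fig. 2(c)
    print(f"bg={bg}: mean gate = {g.mean():.3f}")
# bg=-6 drives g(x) toward 0, so the shortcut is nearly an identity mapping.
```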
[Figure 3 plots, panels (a)-(d): training loss and test error of ResNet-110 with the original shortcut vs. constant scaling (0.5, 0.5), shortcut-only gating (init b=0), and 1×1 conv shortcuts]
Figure 3. Training curves on CIFAR-10 of various shortcuts. Solid lines denote test error (y-axis on the right), and dashed lines denote training loss (y-axis on the left).
When the initialized $b_g$ is very negatively biased (e.g., −6), the value of $1-g(x)$ is closer to 1 and the shortcut connection is nearly an identity mapping. Therefore, the result (6.91%, Table 1) is much closer to the ResNet-110 baseline.

1×1 convolutional shortcut. Next we experiment with 1×1 convolutional shortcut connections that replace the identity. This option has been investigated in [1] (known as option C) on a 34-layer ResNet (16 Residual Units) and shows good results, suggesting that 1×1 shortcut connections could be useful. But we find that this is not the case when there are many Residual Units. The 110-layer ResNet has a poorer result (12.22%, Table 1) when using 1×1 convolutional shortcuts. Again, the training error becomes higher (Fig 3(d)). When stacking so many Residual Units (54 for ResNet-110), even the shortest path may still impede signal propagation. We witnessed similar phenomena on ImageNet with ResNet-101 when using 1×1 convolutional shortcuts.
Dropout shortcut. Last we experiment with dropout [11] (at a ratio of 0.5) which we adopt on the output of the identity shortcut (Fig. 2(f)). The network fails to converge to a good solution. Dropout statistically imposes a scale of λ with an expectation of 0.5 on the shortcut, and similar to constant scaling by 0.5, it impedes signal propagation.
Table 2. Classification error (%) on the CIFAR-10 test set using different activation functions.
| case | Fig. | ResNet-110 | ResNet-164 |
|------|------|------------|------------|
| original Residual Unit [1] | Fig. 4(a) | 6.61 | 5.93 |
| BN after addition | Fig. 4(b) | 8.17 | 6.50 |
| ReLU before addition | Fig. 4(c) | 7.84 | 6.14 |
| ReLU-only pre-activation | Fig. 4(d) | 6.71 | 5.91 |
| full pre-activation | Fig. 4(e) | 6.37 | 5.46 |
Figure 4. Various usages of activation in Table 2. All these units consist of the same components — only the orders are different.
# 3.2 Discussions
As indicated by the grey arrows in Fig. 2, the shortcut connections are the most direct paths for the information to propagate. Multiplicative manipulations (scaling, gating, 1×1 convolutions, and dropout) on the shortcuts can hamper information propagation and lead to optimization problems.
It is noteworthy that the gating and 1×1 convolutional shortcuts introduce more parameters, and should have stronger representational abilities than identity shortcuts. In fact, the shortcut-only gating and 1×1 convolution cover the solution space of identity shortcuts (i.e., they could be optimized as identity shortcuts). However, their training error is higher than that of identity shortcuts, indicating that the degradation of these models is caused by optimization issues, instead of representational abilities.
# 4 On the Usage of Activation Functions
Experiments in the above section support the analysis in Eqn.(5) and Eqn.(8), both being derived under the assumption that the after-addition activation $f$ is the identity mapping. But in the above experiments $f$ is ReLU as designed in [1], so Eqn.(5) and (8) are approximate there. Next we investigate the impact of $f$.
We want to make $f$ an identity mapping, which is done by re-arranging the activation functions (ReLU and/or BN). The original Residual Unit in [1] has a shape in Fig. 4(a) — BN is used after each weight layer, and ReLU is adopted after BN except that the last ReLU in a Residual Unit is after element-wise addition ($f$ = ReLU). Fig. 4(b-e) show the alternatives we investigated, explained in the following.
# 4.1 Experiments on Activation
In this section we experiment with ResNet-110 and a 164-layer Bottleneck [1] architecture (denoted as ResNet-164). A bottleneck Residual Unit consists of a 1×1 layer for reducing dimension, a 3×3 layer, and a 1×1 layer for restoring dimension. As designed in [1], its computational complexity is similar to the two-3×3 Residual Unit. More details are in the appendix. The baseline ResNet-164 has a competitive result of 5.93% on CIFAR-10 (Table 2).
BN after addition. Before turning $f$ into an identity mapping, we go the opposite way by adopting BN after addition (Fig. 4(b)). In this case $f$ involves BN and ReLU. The results become considerably worse than the baseline (Table 2). Unlike the original design, now the BN layer alters the signal that passes through the shortcut and impedes information propagation, as reflected by the difficulties in reducing training loss at the beginning of training (Fig. 6 left).
ReLU before addition. A naïve choice of making $f$ into an identity mapping is to move the ReLU before addition (Fig. 4(c)). However, this leads to a non-negative output from the transform $\mathcal{F}$, while intuitively a "residual" function should take values in $(-\infty, +\infty)$. As a result, the forward propagated signal is monotonically increasing. This may impact the representational ability, and the result is worse (7.84%, Table 2) than the baseline. We expect to have a residual function taking values in $(-\infty, +\infty)$. This condition is satisfied by other Residual Units including the following ones.
Post-activation or pre-activation? In the original design (Eqn.(1) and Eqn.(2)), the activation $x_{l+1} = f(y_l)$ affects both paths in the next Residual Unit: $y_{l+1} = f(y_l) + \mathcal{F}(f(y_l), \mathcal{W}_{l+1})$. Next we develop an asymmetric form where an activation $\hat{f}$ only affects the $\mathcal{F}$ path: $y_{l+1} = y_l + \mathcal{F}(\hat{f}(y_l), \mathcal{W}_{l+1})$, for any $l$ (Fig. 5 (a) to (b)). By renaming the notations, we have the following form:
$$x_{l+1} = x_l + \mathcal{F}(\hat{f}(x_l), \mathcal{W}_l). \quad (9)$$
It is easy to see that Eqn.(9) is similar to Eqn.(4), and can enable a backward formulation similar to Eqn.(5). For this new Residual Unit as in Eqn.(9), the new after-addition activation becomes an identity mapping. This design means that if a new after-addition activation $\hat{f}$ is asymmetrically adopted, it is equivalent to recasting $\hat{f}$ as the pre-activation of the next Residual Unit. This is illustrated in Fig. 5.
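As a concrete rendering of the full pre-activation design of Fig. 4(e), here is a minimal Residual Unit sketched in PyTorch (our illustration, not the authors' released code, which is in Torch; the sketch omits units that change dimensions):

```python
import torch.nn as nn

class PreActUnit(nn.Module):
    """Full pre-activation Residual Unit: BN and ReLU precede each conv."""
    def __init__(self, channels):
        super().__init__()
        self.branch = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        )

    def forward(self, x):
        # Nothing follows the addition, so f is the identity mapping.
        return x + self.branch(x)
```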
Figure 5. Using asymmetric after-addition activation is equivalent to constructing a pre-activation Residual Unit.
Table 3. Classification error (%) on the CIFAR-10/100 test set using the original Residual Units and our pre-activation Residual Units.
| dataset | network | original unit | pre-activation unit |
|---------|---------|---------------|---------------------|
| CIFAR-10 | ResNet-110 (1layer skip) | 9.90 | 8.91 |
| CIFAR-10 | ResNet-110 | 6.61 | 6.37 |
| CIFAR-10 | ResNet-164 | 5.93 | 5.46 |
| CIFAR-10 | ResNet-1001 | 7.61 | 4.92 |
| CIFAR-100 | ResNet-164 | 25.16 | 24.33 |
| CIFAR-100 | ResNet-1001 | 27.82 | 22.71 |
The distinction between post-activation/pre-activation is caused by the presence of the element-wise addition. For a plain network that has $N$ layers, there are $N-1$ activations (BN/ReLU), and it does not matter whether we think of them as post- or pre-activations. But for branched layers merged by addition, the position of activation matters.
We experiment with two such designs: (i) ReLU-only pre-activation (Fig. 4(d)) where only ReLU is adopted before the weight layers, and (ii) full pre-activation (Fig. 4(e)) where BN and ReLU are both adopted before weight layers. Table 2 shows that the ReLU-only pre-activation performs very similarly to the baseline on ResNet-110/164. This ReLU layer is not used in conjunction with a BN layer, and may not enjoy the benefits of BN [8].
Somewhat surprisingly, when BN and ReLU are both used as pre-activation, the results are improved by healthy margins (Table 2 and Table 3). In Table 3 we report results using various architectures: (i) ResNet-110, (ii) ResNet-164, (iii) a 110-layer ResNet architecture in which each shortcut skips only 1 layer (i.e.,
Figure 6. Training curves on CIFAR-10. Left: BN after addition (Fig. 4(b)) using ResNet-110. Right: pre-activation unit (Fig. 4(e)) on ResNet-164. Solid lines denote test error, and dashed lines denote training loss.
a Residual Unit has only 1 layer), denoted as "ResNet-110(1layer)", and (iv) a 1001-layer bottleneck architecture that has 333 Residual Units (111 on each feature map size), denoted as "ResNet-1001". We also experiment on CIFAR-100. Table 3 shows that our "pre-activation" models are consistently better than the baseline counterparts. We analyze these results in the following.
# 4.2 Analysis
We find the impact of pre-activation is twofold. First, the optimization is further eased (comparing with the baseline ResNet) because $f$ is an identity mapping. Second, using BN as pre-activation improves regularization of the models.
Ease of optimization. This effect is particularly obvious when training the 1001-layer ResNet. Fig. 1 shows the curves. Using the original design in [1], the training error is reduced very slowly at the beginning of training. For $f$ = ReLU, the signal is impacted if it is negative, and when there are many Residual Units, this effect becomes prominent and Eqn.(3) (so Eqn.(5)) is not a good approximation. On the other hand, when $f$ is an identity mapping, the signal can be propagated directly between any two units. Our 1001-layer network reduces the training loss very quickly (Fig. 1). It also achieves the lowest loss among all models we investigated, suggesting the success of optimization.
We also find that the impact of $f$ = ReLU is not severe when the ResNet has fewer layers (e.g., 164 in Fig. 6(right)). The training curve seems to suffer a little bit at the beginning of training, but goes into a healthy status soon. By monitoring the responses we observe that this is because after some training, the weights are adjusted into a status such that $y_l$ in Eqn.(1) is more frequently above zero and $f$ does not truncate it ($x_l$ is always non-negative due to the previous ReLU, so $y_l$ is below zero only when the magnitude of $\mathcal{F}$ is very negative). The truncation, however, is more frequent when there are 1000 layers.
Table 4. Comparisons with state-of-the-art methods on CIFAR-10 and CIFAR-100 using "moderate data augmentation" (flip/translation), except for ELU [12] with no augmentation. Better results of [13,14] have been reported using stronger data augmentation and ensembling. For the ResNets we also report the number of parameters. Our results are the median of 5 runs with mean±std in the brackets. All ResNets results are obtained with a mini-batch size of 128 except † with a mini-batch size of 64 (code available at https://github.com/KaimingHe/resnet-1k-layers).
| CIFAR-10 | error (%) | CIFAR-100 | error (%) |
|----------|-----------|-----------|-----------|
| NIN [15] | 8.81 | NIN [15] | 35.68 |
| DSN [16] | 8.22 | DSN [16] | 34.57 |
| FitNet [17] | 8.39 | FitNet [17] | 35.04 |
| Highway [7] | 7.72 | Highway [7] | 32.39 |
| All-CNN [14] | 7.25 | All-CNN [14] | 33.71 |
| ELU [12] | 6.55 | ELU [12] | 24.28 |
| FitResNet, LSUV [18] | 5.84 | FitNet, LSUV [18] | 27.66 |
| ResNet-110 [1] (1.7M) | 6.61 | ResNet-164 [1] (1.7M) | 25.16 |
| ResNet-1202 [1] (19.4M) | 7.93 | ResNet-1001 [1] (10.2M) | 27.82 |
| ResNet-164 [ours] (1.7M) | 5.46 | ResNet-164 [ours] (1.7M) | 24.33 |
| ResNet-1001 [ours] (10.2M) | 4.92 (4.89±0.14) | ResNet-1001 [ours] (10.2M) | 22.71 (22.68±0.22) |
| ResNet-1001 [ours] (10.2M)† | 4.62 (4.69±0.20) | | |
Reducing overfitting. Another impact of using the proposed pre-activation unit is on regularization, as shown in Fig. 6 (right). The pre-activation version reaches slightly higher training loss at convergence, but produces lower test error. This phenomenon is observed on ResNet-110, ResNet-110(1-layer), and ResNet-164 on both CIFAR-10 and 100. This is presumably caused by BN's regularization effect [8]. In the original Residual Unit (Fig. 4(a)), although the BN normalizes the signal, this is soon added to the shortcut and thus the merged signal is not normalized. This unnormalized signal is then used as the input of the next weight layer. On the contrary, in our pre-activation version, the inputs to all weight layers have been normalized.
# 5 Results
Comparisons on CIFAR-10/100. Table 4 compares the state-of-the-art methods on CIFAR-10/100, where we achieve competitive results. We note that we do not specially tailor the network width or filter sizes, nor use regularization techniques (such as dropout) which are very effective for these small datasets. We obtain these results via a simple but essential concept — going deeper. These results demonstrate the potential of pushing the limits of depth.
Comparisons on ImageNet. Next we report experimental results on the 1000-class ImageNet dataset [3]. We have done preliminary experiments using the skip connections studied in Fig. 2 & 3 on ImageNet with ResNet-101 [1], and observed similar optimization difficulties. The training error of these non-identity shortcut networks is obviously higher than the original ResNet at the first learning rate
Table 5. Comparisons of single-crop error on the ILSVRC 2012 validation set. All ResNets are trained using the same hyper-parameters and implementations as [1]. Our Residual Units are the full pre-activation version (Fig. 4(e)). †: code/model available at https://github.com/facebook/fb.resnet.torch/tree/master/pretrained, using scale and aspect ratio augmentation in [20].
| method | augmentation | train crop | test crop | top-1 | top-5 |
|--------|--------------|------------|-----------|-------|-------|
| ResNet-152, original Residual Unit [1] | scale | 224×224 | 224×224 | 23.0 | 6.7 |
| ResNet-152, original Residual Unit [1] | scale | 224×224 | 320×320 | 21.3 | 5.5 |
| ResNet-152, pre-act Residual Unit | scale | 224×224 | 320×320 | 21.1 | 5.5 |
| ResNet-200, original Residual Unit [1] | scale | 224×224 | 320×320 | 21.8 | 6.0 |
| ResNet-200, pre-act Residual Unit | scale | 224×224 | 320×320 | 20.7 | 5.3 |
| ResNet-200, pre-act Residual Unit | scale+asp ratio | 224×224 | 320×320 | 20.1† | 4.8† |
| Inception v3 [19] | scale+asp ratio | 299×299 | 299×299 | 21.2 | 5.6 |
(similar to Fig. 3), and we decided to halt training due to limited resources. But we did finish a "BN after addition" version (Fig. 4(b)) of ResNet-101 on ImageNet and observed higher training loss and validation error. This model's single-crop (224×224) validation error is 24.6%/7.5%, vs. the original ResNet-101's 23.6%/7.1%. This is in line with the results on CIFAR in Fig. 6 (left).
Table 5 shows the results of ResNet-152 [1] and ResNet-200³, all trained from scratch. We notice that the original ResNet paper [1] trained the models using scale jittering with shorter side $s \in [256, 480]$, and so the test of a 224×224 crop on $s = 256$ (as done in [1]) is negatively biased. Instead, we test a single 320×320 crop from $s = 320$, for all original and our ResNets. Even though the ResNets are trained on smaller crops, they can be easily tested on larger crops because the ResNets are fully convolutional by design. This size is also close to 299×299 used by Inception v3 [19], allowing a fairer comparison.
The original ResNet-152 [1] has a top-1 error of 21.3% on a 320×320 crop, and our pre-activation counterpart has 21.1%. The gain is not big on ResNet-152 because this model has not shown severe generalization difficulties. However, the original ResNet-200 has an error rate of 21.8%, higher than the baseline ResNet-152. But we find that the original ResNet-200 has lower training error than ResNet-152, suggesting that it suffers from overfitting.
Our pre-activation ResNet-200 has an error rate of 20.7%, which is 1.1% lower than the baseline ResNet-200 and also lower than the two versions of ResNet-152. When using the scale and aspect ratio augmentation of [20,19], our ResNet-200 has a result better than Inception v3 [19] (Table 5). Concurrent with our work, an Inception-ResNet-v2 model [21] achieves a single-crop result of 19.9%/4.9%. We expect our observations and the proposed Residual Unit will help this type and generally other types of ResNets.
Computational Cost. Our models' computational complexity is linear in
³ The ResNet-200 has 16 more 3-layer bottleneck Residual Units than ResNet-152, which are added on the feature map of 28×28.
depth (so a 1001-layer net is ∼10× as complex as a 100-layer net). On CIFAR, ResNet-1001 takes about 27 hours to train on 2 GPUs; on ImageNet, ResNet-200 takes about 3 weeks to train on 8 GPUs (on par with VGG nets [22]).
# 6 Conclusions
This paper investigates the propagation formulations behind the connection mechanisms of deep residual networks. Our derivations imply that identity shortcut connections and identity after-addition activation are essential for making information propagation smooth. Ablation experiments demonstrate phenomena that are consistent with our derivations. We also present 1000-layer deep networks that can be easily trained and achieve improved accuracy.
# Appendix: Implementation Details

The implementation details and hyper-parameters are the same as those in [1]. On CIFAR we use only the translation and flipping augmentation in [1] for training. The learning rate starts from 0.1, and is divided by 10 at 32k and 48k iterations. Following [1], for all CIFAR experiments we warm up the training by using a smaller learning rate of 0.01 at the beginning 400 iterations and go back to 0.1 after that, although we remark that this is not necessary for our proposed Residual Unit. The mini-batch size is 128 on 2 GPUs (64 each), the weight decay is 0.0001, the momentum is 0.9, and the weights are initialized as in [23].
On ImageNet, we train the models using the same data augmentation as in [1]. The learning rate starts from 0.1 (no warming up), and is divided by 10 at 30 and 60 epochs. The mini-batch size is 256 on 8 GPUs (32 each). The weight decay, momentum, and weight initialization are the same as above.
When using the pre-activation Residual Units (Fig. 4(d)(e) and Fig. 5), we pay special attention to the first and the last Residual Units of the entire network. For the first Residual Unit (that follows a stand-alone convolutional layer, conv1), we adopt the first activation right after conv1 and before splitting into two paths; for the last Residual Unit (followed by average pooling and a fully-connected classifier), we adopt an extra activation right after its element-wise addition. These two special cases are the natural outcome when we obtain the pre-activation network via the modification procedure as shown in Fig. 5.
The bottleneck Residual Units (for ResNet-164/1001 on CIFAR) are constructed following [1]. For example, a $\begin{bmatrix} 3{\times}3, 16 \\ 3{\times}3, 16 \end{bmatrix}$ unit in ResNet-110 is replaced with a $\begin{bmatrix} 1{\times}1, 16 \\ 3{\times}3, 16 \\ 1{\times}1, 64 \end{bmatrix}$ unit in ResNet-164, both of which have roughly the same number of parameters. For the bottleneck ResNets, when reducing the feature map size we use projection shortcuts [1] for increasing dimensions, and when pre-activation is used, these projection shortcuts are also with pre-activation.
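For illustration, a pre-activation bottleneck unit matching the $[1{\times}1, 16\,/\,3{\times}3, 16\,/\,1{\times}1, 64]$ example above might look as follows in PyTorch (a sketch under the same caveats as before; it ignores the projection-shortcut cases just described):

```python
import torch.nn as nn

class PreActBottleneck(nn.Module):
    def __init__(self, in_ch=64, mid_ch=16):
        super().__init__()
        self.branch = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),             # reduce dim
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, in_ch, 1, bias=False),             # restore dim
        )

    def forward(self, x):
        return x + self.branch(x)  # clean identity shortcut
```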
# References
1. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR. (2016)
2. Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: ICML. (2010)
3. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. IJCV (2015)
4. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV. (2014)
5. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural computation (1997)
6. Srivastava, R.K., Greff, K., Schmidhuber, J.: Highway networks. In: ICML workshop. (2015)
7. Srivastava, R.K., Greff, K., Schmidhuber, J.: Training very deep networks. In: NIPS. (2015)
8. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: ICML. (2015)
9. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural computation (1989)
10. Krizhevsky, A.: Learning multiple layers of features from tiny images. Tech Report (2009)
11. Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580 (2012)
12. Clevert, D.A., Unterthiner, T., Hochreiter, S.: Fast and accurate deep network learning by exponential linear units (ELUs). In: ICLR. (2016)
13. Graham, B.: Fractional max-pooling. arXiv:1412.6071 (2014)
14. Springenberg, J.T., Dosovitskiy, A., Brox, T., Riedmiller, M.: Striving for simplicity: The all convolutional net. arXiv:1412.6806 (2014)
15. Lin, M., Chen, Q., Yan, S.: Network in network. In: ICLR. (2014)
16. Lee, C.Y., Xie, S., Gallagher, P., Zhang, Z., Tu, Z.: Deeply-supervised nets. In: AISTATS. (2015)
17. Romero, A., Ballas, N., Kahou, S.E., Chassang, A., Gatta, C., Bengio, Y.: Fitnets: Hints for thin deep nets. In: ICLR. (2015)
18. Mishkin, D., Matas, J.: All you need is a good init. In: ICLR. (2016)
19. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: CVPR. (2016)
20. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR. (2015)
21. Szegedy, C., Ioffe, S., Vanhoucke, V.: Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv:1602.07261 (2016)
22. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: ICLR. (2015)
23. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In: ICCV. (2015)
| {
"id": "1602.07261"
} |
1603.04779 | Revisiting Batch Normalization For Practical Domain Adaptation | Deep neural networks (DNN) have shown unprecedented success in various
computer vision applications such as image classification and object detection.
However, it is still a common annoyance during the training phase, that one has
to prepare at least thousands of labeled images to fine-tune a network to a
specific domain. Recent study (Tommasi et al. 2015) shows that a DNN has strong
dependency towards the training dataset, and the learned features cannot be
easily transferred to a different but relevant task without fine-tuning. In
this paper, we propose a simple yet powerful remedy, called Adaptive Batch
Normalization (AdaBN) to increase the generalization ability of a DNN. By
modulating the statistics in all Batch Normalization layers across the network,
our approach achieves deep adaptation effect for domain adaptation tasks. In
contrast to other deep learning domain adaptation methods, our method does not
require additional components, and is parameter-free. It achieves
state-of-the-art performance despite its surprising simplicity. Furthermore, we
demonstrate that our method is complementary with other existing methods.
Combining AdaBN with existing domain adaptation treatments may further improve
model performance. | http://arxiv.org/pdf/1603.04779 | Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, Xiaodi Hou | cs.CV, cs.LG | null | null | cs.CV | 20160315 | 20161108 |
# Under review as a conference paper at ICLR 2017
# REVISITING BATCH NORMALIZATION FOR PRACTICAL DOMAIN ADAPTATION
Yanghao Li†, Naiyan Wang‡, Jianping Shi§, Jiaying Liu†, Xiaodi Hou‡
†Institute of Computer Science and Technology, Peking University  ‡TuSimple  §SenseTime
lyttonhao@pku.edu.cn  winsty@gmail.com  shijianping5000@gmail.com  liujiaying@pku.edu.cn  xiaodi.hou@gmail.com
# ABSTRACT
Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent study (Tommasi et al., 2015) shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN. By modulating the statistics in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance.
# 1 INTRODUCTION
Training a DNN for a new image recognition task is expensive. It requires a large amount of labeled training images that are not easy to obtain. One common practice is to use labeled data from other related sources, such as a different public dataset, or harvesting images by keywords from a search engine. Because 1) the distributions of the source domains (third party datasets or Internet images) are often different from the target domain (testing images); and 2) DNN is particularly good at capturing dataset bias in its internal representation (Torralba & Efros, 2011), which eventually leads to overfitting, imperfectly paired training and testing sets usually lead to inferior performance.
Known as domain adaptation, the effort to bridge the gap between training and testing data distributions has been discussed several times under the context of deep learning (Tzeng et al., 2014; Long et al., 2015; Tzeng et al., 2015; Ganin & Lempitsky, 2015). To make the connection between the domain of training and the domain of testing, most of these methods require additional optimization steps and extra parameters. Such additional computational burden could greatly complicate the training of a DNN which is already intimidating enough for most people.
In this paper, we propose a simple yet effective approach called AdaBN for batch normalized DNN domain adaptation. We hypothesize that the label related knowledge is stored in the weight matrix of each layer, whereas domain related knowledge is represented by the statistics of the Batch Normalization (BN) (Ioffe & Szegedy, 2015) layer. Therefore, we can easily transfer the trained model to a new domain by modulating the statistics in the BN layer. This approach is straightforward to implement, has zero parameters to tune, and requires minimal computational resources. Moreover, our AdaBN is ready to be extended to more sophisticated scenarios such as multi-source domain adaptation and semi-supervised settings. Fig. 1 illustrates the flowchart of AdaBN. To summarize, our contributions are as follows:
Figure 1: Illustration of the proposed method. For each convolutional or fully connected layer, we use different bias/variance terms to perform batch normalization for the training domain and the test domain. The domain-specific normalization mitigates the domain shift issue.
1. We propose a novel domain adaptation technique called Adaptive Batch Normalization (AdaBN). We show that AdaBN can naturally dissociate bias and variance of a dataset, which is ideal for domain adaptation tasks.
2. We validate the effectiveness of our approach on standard benchmarks for both single source and multi-source domain adaptation. Our method outperforms the state-of-the-art methods.
3. We conduct experiments on the cloud detection for remote sensing images to further demonstrate the effectiveness of our approach in practical use.
# 2 RELATED WORK
Domain transfer in visual recognition tasks has gained increasing attention in recent literature (Beijbom, 2012; Patel et al., 2015). Often referred to as covariate shift (Shimodaira, 2000) or dataset bias (Torralba & Efros, 2011), this problem poses a great challenge to the generalization ability of a learned model. One key component of domain transfer is to model the difference between source and target distributions. In Khosla et al. (2012), the authors assign each dataset with an explicit bias vector, and train one discriminative model to handle multiple classification problems with different bias terms. A more explicit way to compute dataset difference is based on Maximum Mean Discrepancy (MMD) (Gretton et al., 2012). This approach projects each data sample into a Reproducing Kernel Hilbert Space, and then computes the difference of sample means. To reduce dataset discrepancies, many methods are proposed, including sample selections (Huang et al., 2006; Gong et al., 2013), explicit projection learning (Pan et al., 2011; Gopalan et al., 2011; Baktashmotlagh et al., 2013) and principal axes alignment (Fernando et al., 2013; Gong et al., 2012; Aljundi et al., 2015).
All of these methods face the same challenge of constructing the domain transfer function — a high-dimensional non-linear function. Due to computational constraints, most of the proposed transfer functions are in the category of simple shallow projections, which are typically composed of kernel transformations and linear mapping functions.
In the field of deep learning, feature transferability across different domains is a tantalizing yet generally unsolved topic (Yosinski et al., 2014; Tommasi et al., 2015). To transfer the learned representations to a new dataset, pre-training plus fine-tuning (Donahue et al., 2014) have become de facto procedures. However, adaptation by fine-tuning is far from perfect. It requires a considerable amount of labeled data from the target domain, and non-negligible computational resources to re-train the whole network.
A series of advances has been made in DNNs to facilitate domain transfer. Early works of domain adaptation either focus on reordering fine-tuning samples (Chopra et al., 2013), or regularizing MMD (Gretton et al., 2012) in a shallow network (Ghifary et al., 2014). Only recently has the problem been directly attacked under the setting of classification of an unlabeled target domain using a modern convolutional neural network (CNN) architecture. DDC (Tzeng et al., 2014) used the classical MMD loss to regularize the representation in the last layer of CNN. DAN (Long et al., 2015) further extended the method to multiple kernel MMD and multiple layer adaptation. Besides adapting features using MMD, RTN (Long et al., 2016) also added a gated residual layer for classifier adaptation. RevGrad (Ganin & Lempitsky, 2015) devised a gradient reversal layer to compensate the back-propagated gradients that are domain specific. Recently, by explicitly modeling both private and shared components of the domain representations in the network, Bousmalis et al. (2016) proposed a Domain Separation Network to extract better domain-invariant features.
Another related work is CORAL (Sun et al., 2016). This model focuses on the last layer of CNN. CORAL whitens the data in the source domain, and then re-correlates the source domain features to the target domain. This operation aligns the second order statistics of the source domain and target domain distributions. Surprisingly, such a simple approach yields state-of-the-art results in various text classification and visual recognition tasks. Recently, Deep CORAL (Sun & Saenko, 2016) also extends the method into DNN by incorporating a CORAL loss.
# 2.1 BATCH NORMALIZATION
In this section, we briefly review Batch Normalization (BN) (Ioffe & Szegedy, 2015) which is closely related to our AdaBN. The BN layer is originally designed to alleviate the issue of internal covariate shift — a common problem while training a very deep neural network. It first standardizes each feature in a mini-batch, and then learns a common slope and bias for each mini-batch. Formally, given the input to a BN layer $X \in \mathbb{R}^{n \times p}$, where $n$ denotes the batch size, and $p$ is the feature dimension, the BN layer transforms a feature $j \in \{1 \ldots p\}$ into:
$$\hat{x}_j = \frac{x_j - E[X_{\cdot j}]}{\sqrt{Var[X_{\cdot j}]}}, \qquad y_j = \gamma_j \hat{x}_j + \beta_j, \quad (1)$$

where $x_j$ and $y_j$ are the input/output scalars of one neuron response in one data sample; $X_{\cdot j}$ denotes the $j$-th column of the input data; and $\gamma_j$ and $\beta_j$ are parameters to be learned. This transformation guarantees that the input distribution of each layer remains unchanged across different mini-batches. For Stochastic Gradient Descent (SGD) optimization, a stable input distribution could greatly facilitate model convergence, leading to much faster training speed for CNN. Moreover, if training data are shuffled at each epoch, the same training sample will be applied with different transformations, or in other words, more comprehensively augmented throughout the training. During the testing phase, the global statistics of all training samples is used to normalize every mini-batch of test data.
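A small numpy sketch of the transform in Eqn.(1) (our illustration; `eps` is the usual numerical-stability constant not shown in the equation):

```python
import numpy as np

def batch_norm(X, gamma, beta, eps=1e-5):
    # X: mini-batch of shape (n, p); statistics are taken per feature j.
    mean = X.mean(axis=0)                    # E[X_.j]
    var = X.var(axis=0)                      # Var[X_.j]
    X_hat = (X - mean) / np.sqrt(var + eps)
    return gamma * X_hat + beta              # y_j = gamma_j * x_hat_j + beta_j

X = np.random.randn(64, 10) * 3.0 + 1.0
y = batch_norm(X, gamma=np.ones(10), beta=np.zeros(10))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))  # ~0 and ~1 per feature
```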
Extensive experiments have shown that Batch Normalization significantly reduces the number of iterations to converge, and improves the final performance at the same time. The BN layer has become a standard component in recent top-performing CNN architectures, such as deep residual network (He et al., 2016), and Inception V3 (Szegedy et al., 2015).
# 3 THE MODEL
In Sec. 3.1, we first analyze the domain shift in deep neural networks, and reveal two key observations. Then in Sec. 3.2, we introduce our Adaptive Batch Normalization (AdaBN) method based on these observations. Finally, we analyze our method in depth in Sec. 3.3.
# 3.1 A PILOT EXPERIMENT
Although the Batch Normalization (BN) technique is originally proposed to help SGD optimization, its core idea is to align the distribution of training data. From this perspective, it is interesting to examine the BN parameters (batch-wise mean and variance) over different datasets at different layers of the network.
In this pilot experiment, we use the MXNet implementation (Chen et al., 2016b) of the Inception-BN model (Ioffe & Szegedy, 2015) pre-trained on the ImageNet classification task (Russakovsky et al., 2015) as our baseline DNN model. Our image data are drawn from (Bergamo & Torresani, 2010), which contains the same classes of images from both the Caltech-256 dataset (Griffin et al., 2007) and Bing image search results. For each mini-batch sampled from one dataset, we concatenate the mean and variance of all neurons from one layer to form a feature vector. Using a linear SVM, we can almost perfectly classify whether the mini-batch feature vector is from the Caltech-256 or Bing dataset. Fig. 2 visualizes the distributions of mini-batch feature vectors from the two datasets in 2D. It is clear that BN statistics from different domains are separated into clusters.
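The mini-batch feature used in this pilot experiment can be sketched as follows (our paraphrase of the described procedure; `activations` stands for one layer's responses on one mini-batch):

```python
import numpy as np

def bn_stat_vector(activations):
    # activations: (batch_size, num_neurons) responses of one layer.
    # Concatenate per-neuron means and variances into one feature vector.
    return np.concatenate([activations.mean(axis=0), activations.var(axis=0)])

batch = np.random.randn(64, 128)  # stand-in for a mini-batch of responses
feat = bn_stat_vector(batch)      # 256-d vector, then fed to a linear SVM
print(feat.shape)                 # (256,)
```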
(a) Shallow layer distributions (b) Deep layer distributions
Figure 2: t-SNE (Van der Maaten & Hinton, 2008) visualization of the mini-batch BN feature vector distributions in both shallow and deep layers, across different datasets. Each point represents the BN statistics in one mini-batch. Red dots come from Bing domain, while the blue ones are from Caltech-256 domain. The size of each mini-batch is 64.
This pilot experiment suggests:
1. Both shallow layers and deep layers of the DNN are influenced by domain shift. Domain adaptation by manipulating the output layer alone is not enough.
2. The statistics of BN layer contain the traits of the data domain.
Both observations motivate us to adapt the representation across different domains by BN layer.
# 3.2 ADAPTIVE BATCH NORMALIZATION
Given the pre-trained DNN model and a target domain, our Adaptive Batch Normalization algorithm is as follows¹:
# Algorithm 1 Adaptive Batch Normalization (AdaBN)
for neuron $j$ in DNN do
    Concatenate neuron responses on all images of target domain $t$: $\mathbf{x}_j = [\ldots, x_j(m), \ldots]$
    Compute the mean and variance of the target domain: $\mu_j^t = \mathbb{E}(\mathbf{x}_j^t)$, $\sigma_j^t = \sqrt{\mathrm{Var}(\mathbf{x}_j^t)}$
end for
for neuron $j$ in DNN, testing image $m$ in target domain do
    Compute BN output $y_j(m) := \gamma_j \frac{x_j(m) - \mu_j^t}{\sigma_j^t} + \beta_j$
end for
¹ In practice we adopt an online algorithm (Donald, 1999) to efficiently estimate the mean and variance.
The intuition behind our method is straightforward: the standardization of each layer by domain ensures that each layer receives data from a similar distribution, no matter whether it comes from the source domain or the target domain.
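A hedged PyTorch sketch of Algorithm 1 purely for illustration (the paper's own experiments use MXNet; `target_batches` is a hypothetical iterable of unlabeled target-domain image tensors): the learned weights, $\gamma$ and $\beta$ stay fixed, and only the BN statistics are re-estimated.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn(model, target_batches):
    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    for m in model.modules():
        if isinstance(m, bn_types):
            m.reset_running_stats()
            m.momentum = None  # cumulative average over all target batches
    model.train()              # BN updates running statistics in train mode
    for images in target_batches:
        model(images)          # forward passes only; no parameter updates
    model.eval()
    return model
```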
For K domain adaptation where K > 2, we standardize each sample by the statistics in its own domain. During training, the statistics are calculated for every mini-batch, the only thing that we need to make sure is that the samples in every mini-batch are from the same domain.
For (semi-)supervised domain adaptation, we may use the labeled data to fine-tune the weights as well. As a result, our method could fit in all different settings of domain adaptation with minimal effort.
# 3.3 FURTHER THOUGHTS ABOUT ADABN
The simplicity of AdaBN is in sharp contrast to the complication of the domain shift problem. One natural question to ask is whether such simple translation and scaling operations could approximate the intrinsically non-linear domain transfer function. Consider a simple neural network with input $x \in \mathbb{R}^{p_1 \times 1}$. It has one BN layer with mean and variance of each feature being $\mu_i$ and $\sigma_i^2$ ($i \in \{1 \ldots p_1\}$), one fully connected layer with weight matrix $W \in \mathbb{R}^{p_1 \times p_2}$ and bias $b \in \mathbb{R}^{p_2 \times 1}$, and a non-linear transformation layer $f(\cdot)$, where $p_1$ and $p_2$ correspond to the input and output feature size. The output of this network is $f(W_a x + b_a)$, where
$$W_a = W^T \Sigma^{-1}, \quad b_a = -W^T \Sigma^{-1} \mu + b, \quad \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_{p_1}), \quad \mu = (\mu_1, \ldots, \mu_{p_1}). \quad (2)$$
The output without BN is simply $f(W^T x + b)$. We can see that the transformation is highly non-linear even for a simple network with one computation layer. As the CNN architecture goes deeper, it will gain increasing power to represent more complicated transformations.
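The folding in Eqn.(2) is easy to verify numerically; the following sketch (ours) checks that BN followed by a fully connected layer equals a single affine map with $W_a$ and $b_a$:

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2 = 4, 3
x = rng.normal(size=(p1, 1))
W = rng.normal(size=(p1, p2)); b = rng.normal(size=(p2, 1))
mu = rng.normal(size=(p1, 1))                # per-feature BN means
sigma = rng.uniform(0.5, 2.0, size=(p1, 1))  # per-feature BN std devs

bn_then_fc = W.T @ ((x - mu) / sigma) + b    # BN layer, then FC layer

Sigma_inv = np.diag(1.0 / sigma.ravel())
W_a = W.T @ Sigma_inv                        # Eqn.(2)
b_a = -W.T @ Sigma_inv @ mu + b
assert np.allclose(bn_then_fc, W_a @ x + b_a)
```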
Another question is why we transform the neuron responses independently, rather than decorrelating and then re-correlating the responses as suggested in Sun et al. (2016). Under certain conditions, decorrelation could improve the performance. However, in CNN, the mini-batch size is usually smaller than the feature dimension, so the covariance matrix is singular and hard to invert. In addition, decorrelation requires computing the inverse of the covariance matrix, which is computationally intensive, especially if we plan to apply AdaBN to all layers of the network.
# 4 EXPERIMENTS
In this section, we demonstrate the effectiveness of AdaBN on standard domain adaptation datasets, and empirically analyze the adapted features. We also evaluate our method on a practical application with remote sensing images.
# 4.1 EXPERIMENTAL SETTINGS
We first introduce our experiments on two standard datasets: Office (Saenko et al., 2010) and Caltech-Bing (Bergamo & Torresani, 2010).
Office (Saenko et al., 2010) is a standard benchmark for domain adaptation, which is a collection of 4652 images in 31 classes from three different domains: Amazon(A), DSLR(D) and Webcam(W). Similar to (Tzeng et al., 2014; Sun et al., 2016; Long et al., 2015), we evaluate the pairwise domain adaptation performance of AdaBN on all six pairs of domains. For the multi-source setting, we evaluate our method on three transfer tasks {A, W} → D, {A, D} → W, {D, W} → A.
Caltech-Bing (Bergamo & Torresani, 2010) is a much larger domain adaptation dataset, which contains 30,607 and 121,730 images in 256 categories from two domains Caltech-256(C) and Bing(B). The images in the Bing set are collected from the Bing image search engine by keyword search. Apparently Bing data contains noise, and its data distribution is dramatically different from that of Caltech-256.
| Method | A→W | D→W | W→D | A→D | D→A | W→A | Avg |
|--------|-----|-----|-----|-----|-----|-----|-----|
| AlexNet (Krizhevsky et al., 2012) | 61.6 | 95.4 | 99.0 | 63.8 | 51.1 | 49.8 | 70.1 |
| DDC (Tzeng et al., 2014) | 61.8 | 95.0 | 98.5 | 64.4 | 52.1 | 52.2 | 70.6 |
| DAN (Long et al., 2015) | 68.5 | 96.0 | 99.0 | 67.0 | 54.0 | 53.1 | 72.9 |
| Deep CORAL (Sun & Saenko, 2016) | 66.4 | 95.7 | 99.2 | 66.8 | 52.8 | 51.5 | 72.1 |
| RevGrad (Ganin & Lempitsky, 2015) | 67.3 | 94.0 | 93.7 | - | - | - | - |
| Inception BN (Ioffe & Szegedy, 2015) | 70.3 | 94.3 | 100 | 70.5 | 60.1 | 57.9 | 75.5 |
| SA (Fernando et al., 2013) | 69.8 | 95.5 | 99.0 | 71.3 | 59.4 | 56.9 | 75.3 |
| GFK (Gong et al., 2012) | 66.7 | 97.0 | 99.4 | 70.1 | 58.0 | 56.9 | 74.7 |
| LSSA (Aljundi et al., 2015) | 67.7 | 96.1 | 98.4 | 71.3 | 57.8 | 57.8 | 74.9 |
| CORAL (Sun et al., 2016) | 70.9 | 95.7 | 99.8 | 71.9 | 59.0 | 60.2 | 76.3 |
| AdaBN | 74.2 | 95.7 | 99.8 | 73.1 | 59.8 | 57.4 | 76.7 |
| AdaBN + CORAL | 75.4 | 96.2 | 99.6 | 72.7 | 59.0 | 60.5 | 77.2 |
Table 1: Single source domain adaptation results on Ofï¬ce-31 (Saenko et al., 2010) dataset with standard unsupervised adaptation protocol.
We compare our approach with a variety of methods, including four shallow methods: SA (Fernando et al., 2013), LSSA (Aljundi et al., 2015), GFK (Gong et al., 2012), CORAL (Sun et al., 2016), and four deep methods: DDC (Tzeng et al., 2014), DAN (Long et al., 2015), RevGrad (Ganin & Lempitsky, 2015), Deep CORAL (Sun & Saenko, 2016). Specifically, GFK models domain shift by integrating an infinite number of subspaces that characterize changes in statistical properties from the source to the target domain. SA, LSSA and CORAL align the source and target subspaces by explicit feature space transformations that map the source distribution into the target one. DDC and DAN are deep learning based methods which maximize domain invariance by adding to AlexNet one or several adaptation layers using MMD. RevGrad incorporates a gradient reversal layer in the deep model to encourage learning domain-invariant features. Deep CORAL extends CORAL to perform end-to-end adaptation in a DNN. It should be noted that these deep learning methods place the adaptation layers on top of the output layers of DNNs, in sharp contrast to our method, which, with the help of BN layers, also adapts the early convolution layers.
We follow the full protocol (Donahue et al., 2014) for the single source setting; for the multiple sources setting, we use all the samples in the source domains as training data, and all the samples in the target domain as testing data. We fine-tune the Inception-BN (Ioffe & Szegedy, 2015) model on the source domain in each task for 100 epochs. The learning rate is set to 0.01 initially, and is then dropped by a factor of 0.1 every 40 epochs. Since the Office dataset is quite small, following the best practice in Long et al. (2015), we freeze the first three groups of Inception modules, and set the learning rate of the fourth and fifth groups to one tenth of the base learning rate to avoid overfitting. For the Caltech-Bing dataset, we fine-tune the whole model with the same base learning rate.
4.2 RESULTS
4.2.1 OFFICE DATASET
Our results on the Office dataset are reported in Table 1 and Table 2 for single and multiple source(s), respectively. Note that the first 5 models in Table 1 are pre-trained on AlexNet (Krizhevsky et al., 2012) instead of the Inception-BN (Ioffe & Szegedy, 2015) model, due to the lack of a publicly available pre-trained Inception BN model in Caffe (Jia et al., 2014). Thus, the relative improvements over the baseline (AlexNet/Inception BN) are more meaningful than the absolute numbers of each algorithm.
From Table 1, we first notice that Inception-BN indeed improves over AlexNet on average, which means that the CNN pre-trained on ImageNet has learned general features, and these improvements transfer to new tasks. Among the methods based on Inception-BN features, our method improves the most over the baseline. Moreover, since our method is complementary to other methods, we can simply apply CORAL on top of AdaBN. Not surprisingly, this simple combination exhibits a 0.5% increase in performance. This preliminary test reveals further potential of AdaBN if combined with other advanced domain adaptation methods. Finally, we improve 1.7% over the baseline, and advance the state-of-the-art results for this dataset.
None of the compared methods has reported performance on multi-source domain adaptation. To demonstrate the capacity of AdaBN under multi-domain settings, we compare it against CORAL, which is the best performing algorithm in the single source setting. The results are reported in Table 2. We find that simply combining two domains does not lead to better performance; the result is generally worse than the better of the two single-domain results. This phenomenon suggests that if we cannot properly cope with domain bias, increasing the number of training samples may even hurt testing performance. This result confirms the necessity of domain adaptation. In this more challenging setting, AdaBN still outperforms the baseline and CORAL on average. Again, when combined with CORAL, our method demonstrates further improvements. In the end, our method achieves a 2.3% gain over the baseline.
| Method | A, D → W | A, W → D | D, W → A | Avg |
| Inception BN (Ioffe & Szegedy, 2015) | 90.8 | 95.4 | 60.2 | 82.1 |
| CORAL (Sun et al., 2016) | 92.1 | 96.4 | 61.4 | 83.3 |
| AdaBN | 94.2 | 97.2 | 59.3 | 83.6 |
| AdaBN + CORAL | 95.0 | 97.8 | 60.5 | 84.4 |

Table 2: Multi-source domain adaptation results on Office-31 (Saenko et al., 2010) dataset with standard unsupervised adaptation protocol.
# 4.2.2 CALTECH-BING DATASET
To further evaluate our method on a large-scale dataset, we report our results on the Caltech-Bing dataset in Table 3. Compared with CORAL, AdaBN achieves better performance, improving 1.8% over the baseline. Note that all the domain adaptation methods show only minor improvements over the baseline in the task C → B. One hypothesis for this relatively small improvement is that the images in the Bing dataset are collected from the Internet, and are more diverse and noisier (Bergamo & Torresani, 2010). Thus, it is not easy to adapt to the Bing dataset from the relatively clean Caltech-256 dataset. Combining CORAL with our method does not offer further improvements. This might be explained by the noise of the Bing dataset and the imbalance of the number of images in the two domains.
| Method | C → B | B → C | Avg |
| Inception BN (Ioffe & Szegedy, 2015) | 35.1 | 64.6 | 49.9 |
| CORAL (Sun et al., 2016) | 35.3 | 67.2 | 51.3 |
| AdaBN | 35.2 | 68.1 | 51.7 |
| AdaBN + CORAL | 35.0 | 67.5 | 51.2 |

Table 3: Single source domain adaptation results on Caltech-Bing (Bergamo & Torresani, 2010) dataset.
4.3 EMPIRICAL ANALYSIS
In this section, we empirically analyze the features adapted by our method and investigate the influence of the number of target domain samples on performance.
# 4.3.1 ANALYSIS OF FEATURE DIVERGENCE.
In this experiment, we analyze the statistics of the output of one shallow layer (the output of the second convolution layer) and one deep layer (the output of the last Inception module before ReLU) in the network. In particular, we compute the distance between the source domain distribution and the target domain distribution before and after adaptation. We denote each feature i as $F_i$, and assume that the output of each feature generally follows a Gaussian distribution with mean $\mu_i$ and variance $\sigma_i^2$. Then we use the symmetric KL divergence as our metric:
$$D(F_i \,\|\, F_j) = KL(F_i \,\|\, F_j) + KL(F_j \,\|\, F_i), \qquad KL(F_i \,\|\, F_j) = \log \frac{\sigma_j}{\sigma_i} + \frac{\sigma_i^2 + (\mu_i - \mu_j)^2}{2 \sigma_j^2} - \frac{1}{2}. \qquad (3)$$
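For concreteness, a small sketch (ours, in Python/NumPy; function names are illustrative) of the metric in Eq. (3):

import numpy as np

def kl_gauss(mu_i, var_i, mu_j, var_j):
    # KL(F_i || F_j) between two univariate Gaussians, per Eq. (3).
    return (np.log(np.sqrt(var_j / var_i))
            + (var_i + (mu_i - mu_j) ** 2) / (2.0 * var_j)
            - 0.5)

def sym_kl(mu_i, var_i, mu_j, var_j):
    # Symmetric divergence D(F_i || F_j) used as the feature distance.
    return (kl_gauss(mu_i, var_i, mu_j, var_j)
            + kl_gauss(mu_j, var_j, mu_i, var_i))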
We plot the distribution of the distances in Fig. 3. Our method reduces the domain discrepancy in both the shallow layer and the deep layer. We also report the quantitative results in Table 4. This experiment once again verifies the effectiveness of the proposed method.
Figure 3: Distribution of the symmetric KL divergence of the outputs in the shallow layer and the deep layer, before and after adaptation. Panels: (a) A → W, shallow layer; (b) A → W, deep layer; (c) A → D, shallow layer; (d) A → D, deep layer. Best viewed in color.
| | A → W, shallow | A → W, deep | A → D, shallow | A → D, deep |
| Before Adapt | 0.0716 | 0.0614 | 0.0502 | 0.2307 |
| After Adapt | 0.0227 | 0.0134 | 0.0140 | 0.0266 |

Table 4: The average symmetric KL divergence of the outputs in the shallow layer and the deep layer, respectively.
4.3.2 SENSITIVITY TO TARGET DOMAIN SIZE.
Since the key of our method is to calculate the mean and variance of the target domain at different BN layers, it is natural to ask how many target images are necessary to obtain stable statistics. In this experiment, we randomly select a subset of images in the target domain to calculate the statistics, and then evaluate the performance on the whole target set. Fig. 4 illustrates the effect of using different numbers of batches. The results demonstrate that our method can obtain good results when using only a small part of the target examples. It should also be noted that in the extreme case of a single batch of target images, our method still achieves better results than the baseline. This is valuable in practice, since a large number of target images is often not available.
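To make the adaptation step concrete, a minimal sketch (ours, in Python/NumPy; the interface is illustrative, not an actual framework API) of how the target-domain statistics of one BN layer could be estimated from a few batches and swapped in:

import numpy as np

def adabn_update(bn_layer, target_batches):
    # target_batches: iterable of (batch_size, num_features) activation
    # arrays observed at this BN layer on the target domain.
    acts = np.concatenate(list(target_batches), axis=0)
    # Replace the source-domain statistics with target-domain estimates;
    # the learned scale/shift parameters of the layer are left untouched.
    bn_layer.mean = acts.mean(axis=0)
    bn_layer.var = acts.var(axis=0)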
Figure 4: Accuracy when varying the number of mini-batches used for calculating the statistics of BN layers in (a) A → W and (b) B → C, respectively. For B → C, we only show the results of using fewer than 100 batches, since the results are very stable when adding more examples. The batch size is 64 in this experiment.
4.4 PRACTICAL APPLICATION FOR CLOUD DETECTION IN REMOTE SENSING IMAGES
In this section, we further demonstrate the effectiveness of AdaBN on a practical problem: cloud detection in remote sensing images. Since remote sensing images are taken by different satellites with different sensors and resolutions, the captured images are visually different in texture, color, and value range distributions, as shown in Fig. 5. How to adapt a model trained on images from one satellite to images from another satellite is naturally a domain adaptation problem.
Our task here is to identify cloud in remote sensing images, which can be regarded as a semantic segmentation task. The experiment is conducted on a self-collected dataset, which includes three image sets from the GF2, GF1 and Tianhui satellites, containing 635, 324 and 113 images respectively, each with resolution over 6000×6000 pixels. We name the three datasets after the satellites. The GF2 dataset is used as the training dataset, while the GF1 and Tianhui datasets are for testing. We use a state-of-the-art semantic segmentation method (Chen et al., 2016a) as our baseline model.
| Method | Tianhui | GF1 |
| Baseline | 38.95% | 14.54% |
| AdaBN | 64.50% | 29.66% |

Table 5: Domain adaptation results (mIOU) on the GF1 and Tianhui datasets, training on the GF2 dataset.
The results on the GF1 and Tianhui datasets are shown in Table 5. The relatively low results of the baseline method indicate that there exists a large distribution disparity among images from different satellites. Thus, the significant improvement after applying AdaBN reveals the effectiveness of our method. Some of the visual results are shown in Fig. 6. Since other domain adaptation methods require either additional optimization steps and extra components (e.g. MMD) or post-processing distribution alignment (like CORAL), it is very hard to apply these methods from image classification to this large-size (6000×6000) segmentation problem. In contrast, besides its strong performance, our method needs no extra parameters and adds very little computation over the whole adaptation process.
(a) GF1 image (b) GF2 image (c) Tianhui image
Figure 5: Remote sensing images in different domains.
# 5 CONCLUSION AND FUTURE WORK
In this paper, we have introduced a simple yet effective approach for domain adaptation on batch normalized neural networks. Beyond its original uses, we have exploited another functionality of the Batch Normalization (BN) layer: domain adaptation. The main idea is to replace the statistics of each BN layer in the source domain with those in the target domain. The proposed method is easy to implement and parameter-free, and it takes almost no effort to extend to multiple source domains and semi-supervised settings. Our method established new state-of-the-art results on both single and multiple source(s) domain adaptation settings on standard benchmarks. Finally, the experiments on
(a) Original image (b) Without AdaBN (c) AdaBN
Figure 6: Visual cloud detection results on the GF1 dataset. White pixels in (b) and (c) represent the detected cloud regions.
cloud detection for large-size remote sensing images further demonstrate the effectiveness of our method in practical use. We believe our method opens up a new direction for domain adaptation.
In contrast to other methods that use Maximum Mean Discrepancy (MMD) or a domain confusion loss to update the weights of a CNN for domain adaptation, our method only modifies the statistics of the BN layers. Therefore, our method is fully complementary to other existing deep learning based methods. It is interesting to see how these different methods can be unified under one framework.
# REFERENCES
Rahaf Aljundi, Rémi Emonet, Damien Muselet, and Marc Sebban. Landmarks-based kernelized subspace alignment for unsupervised domain adaptation. In CVPR, 2015.

Mahsa Baktashmotlagh, Mehrtash Harandi, Brian Lovell, and Mathieu Salzmann. Unsupervised domain adaptation by domain invariant projection. In ICCV, pp. 769–776, 2013.

Oscar Beijbom. Domain adaptations for computer vision applications. arXiv preprint arXiv:1211.4860, 2012.

Alessandro Bergamo and Lorenzo Torresani. Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. In NIPS, pp. 181–189, 2010.

Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. Domain separation networks. NIPS, 2016.

Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv preprint arXiv:1606.00915, 2016a.
Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. NIPS Workshop on Machine Learning Systems, 2016b.

Sumit Chopra, Suhrid Balakrishnan, and Raghuraman Gopalan. DLID: Deep learning for domain adaptation by interpolating between domains. In ICML Workshop on Challenges in Representation Learning, volume 2, 2013.

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, pp. 647–655, 2014.

Donald E. Knuth. The art of computer programming. Sorting and searching, 3:426–458, 1999.

Basura Fernando, Amaury Habrard, Marc Sebban, and Tinne Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In ICCV, pp. 2960–2967, 2013.

Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, pp. 1180–1189, 2015.

Muhammad Ghifary, W Bastiaan Kleijn, and Mengjie Zhang. Domain adaptive neural networks for object recognition. In PRICAI: Trends in Artificial Intelligence, pp. 898–904. 2014.

Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, pp. 2066–2073, 2012.

Boqing Gong, Kristen Grauman, and Fei Sha. Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation. In ICML, pp. 222–230, 2013.

Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, pp. 999–1006, 2011.

Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012.

Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. 2007.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, 2016.

Jiayuan Huang, Arthur Gretton, Karsten M Borgwardt, Bernhard Schölkopf, and Alex J Smola. Correcting sample selection bias by unlabeled data. In NIPS, pp. 601–608, 2006.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448–456, 2015.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM MM, pp. 675–678, 2014.

Aditya Khosla, Tinghui Zhou, Tomasz Malisiewicz, Alexei A Efros, and Antonio Torralba. Undoing the damage of dataset bias. In ECCV, pp. 158–171. 2012.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105, 2012.

Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In ICML, pp. 97–105, 2015.

Mingsheng Long, Jianmin Wang, and Michael I Jordan. Unsupervised domain adaptation with residual transfer networks. In NIPS, 2016.

Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199–210, 2011.
Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53–69, 2015.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In ECCV, pp. 213–226. 2010.

Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.

Baochen Sun and Kate Saenko. Deep CORAL: Correlation alignment for deep domain adaptation. arXiv preprint arXiv:1607.01719, 2016.

Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. AAAI, 2016.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.

Tatiana Tommasi, Novi Patricia, Barbara Caputo, and Tinne Tuytelaars. A deeper look at dataset bias. German Conference on Pattern Recognition, 2015.

Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In CVPR, pp. 1521–1528, 2011.

Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.

Eric Tzeng, Judy Hoffman, Trevor Darrell, and Kate Saenko. Simultaneous deep transfer across domains and tasks. In ICCV, pp. 4068–4076, 2015.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NIPS, pp. 3320–3328, 2014.
| {
"id": "1607.01719"
} |
1603.04467 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 |
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems (Preliminary White Paper, November 9, 2015)
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng
# Google Research*

*Corresponding authors: Jeffrey Dean and Rajat Monga: {jeff,rajatmonga}@google.com
# Abstract
TensorFlow [1] is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org.
# 1 Introduction
The Google Brain project started in 2011 to explore the use of very-large-scale deep neural networks, both for research and for use in Google's products. As part of the early work in this project, we built DistBelief, our first-generation scalable distributed training and inference system [14], and this system has served us well. We and others at Google have performed a wide variety of research using DistBelief including work on unsupervised learning [31], language representation [35, 52], models for image classification and object detection [16, 48], video classification [27], speech recognition [56, 21, 20], sequence prediction [47], move selection for Go [34], pedestrian detection [2], reinforcement learning [38], and other areas [17, 5]. In addition, often in close collaboration with the Google Brain team, more than 50 teams at Google and other Alphabet companies have deployed deep neural networks using DistBelief in a wide variety of products, including Google Search [11], our advertising products, our speech recognition systems [50, 6, 46], Google Photos [43], Google Maps and StreetView [19], Google Translate [18], YouTube, and many others.

Based on our experience with DistBelief and a more complete understanding of the desirable system properties and requirements for training and using neural networks, we have built TensorFlow, our second-generation system for the implementation and deployment of large-scale machine learning models. TensorFlow takes computations described using a dataflow-like model and maps them onto a wide variety of different hardware platforms, ranging from running inference on mobile device platforms such as Android and iOS to modest-sized training and inference systems using single machines containing one or many GPU cards to large-scale training systems running on hundreds of specialized machines with thousands of GPUs. Having a single system that can span such a broad range of platforms significantly simplifies the real-world use of machine learning systems, as we have found that having separate systems for large-scale training and small-scale deployment leads to significant maintenance burdens and leaky abstractions. TensorFlow computations are expressed as stateful dataflow graphs (described in more detail in Section 2), and we have focused on making the system both flexible enough for quickly experimenting with new models for research purposes and sufficiently high performance and robust for production training and deployment of machine learning models. For scaling neural network training to larger deployments, TensorFlow allows clients to easily express various kinds of parallelism through replication and parallel execution of a core model dataflow
graph, with many different computational devices all collaborating to update a set of shared parameters or other state. Modest changes in the description of the computation allow a wide variety of different approaches to parallelism to be achieved and tried with low effort [14, 29, 42]. Some TensorFlow uses allow some flexibility in terms of the consistency of parameter updates, and we can easily express and take advantage of these relaxed synchronization requirements in some of our larger deployments. Compared to DistBelief, TensorFlow's programming model is more flexible, its performance is significantly better, and it supports training and using a broader range of models on a wider variety of heterogeneous hardware platforms.

Dozens of our internal clients of DistBelief have already switched to TensorFlow. These clients rely on TensorFlow for research and production, with tasks as diverse as running inference for computer vision models on mobile phones to large-scale training of deep neural networks with hundreds of billions of parameters on hundreds of billions of example records using many hundreds of machines [11, 47, 48, 18, 53, 41]. Although these applications have concentrated on machine learning and deep neural networks in particular, we expect that TensorFlow's abstractions will be useful in a variety of other domains, including other kinds of machine learning algorithms, and possibly other kinds of numerical computations. We have open-sourced the TensorFlow API and a reference implementation under the Apache 2.0 license in November, 2015, available at www.tensorflow.org.

The rest of this paper describes TensorFlow in more detail. Section 2 describes the programming model and basic concepts of the TensorFlow interface, and Section 3 describes both our single machine and distributed implementations. Section 4 describes several extensions to the basic programming model, and Section 5 describes several optimizations to the basic implementations. Section 6 describes some of our experiences in using TensorFlow, Section 7 describes several programming idioms we have found helpful when using TensorFlow, and Section 9 describes several auxiliary tools we have built around the core TensorFlow system. Sections 10 and 11 discuss future and related work, respectively, and Section 12 offers concluding thoughts.
# 2 Programming Model and Basic Concepts
A TensorFlow computation is described by a directed graph, which is composed of a set of nodes. The graph represents a dataflow computation, with extensions for allowing some kinds of nodes to maintain and update persistent state and for branching and looping control
structures within the graph in a manner similar to Naiad [36]. Clients typically construct a computational graph using one of the supported frontend languages (C++ or Python). An example fragment to construct and then execute a TensorFlow graph using the Python front end is shown in Figure 1, and the resulting computation graph in Figure 2.

In a TensorFlow graph, each node has zero or more inputs and zero or more outputs, and represents the instantiation of an operation. Values that flow along normal edges in the graph (from outputs to inputs) are tensors, arbitrary dimensionality arrays where the underlying element type is specified or inferred at graph-construction time. Special edges, called control dependencies, can also exist in the graph: no data flows along such edges, but they indicate that the source node for the control dependence must finish executing before the destination node for the control dependence starts executing. Since our model includes mutable state, control dependencies can be used directly by clients to enforce happens-before relationships. Our implementation also sometimes inserts control dependencies to enforce orderings between otherwise independent operations as a way of, for example, controlling the peak memory usage.
# Operations and Kernels
An operation has a name and represents an abstract computation (e.g., "matrix multiply", or "add"). An operation can have attributes, and all attributes must be provided or inferred at graph-construction time in order to instantiate a node to perform the operation. One common use of attributes is to make operations polymorphic over different tensor element types (e.g., add of two tensors of type float versus add of two tensors of type int32). A kernel is a particular implementation of an operation that can be run on a particular type of device (e.g., CPU or GPU). A TensorFlow binary defines the sets of operations and kernels available via a registration mechanism, and this set can be extended by linking in additional operation and/or kernel definitions/registrations. Table 1 shows some of the kinds of operations built into the core TensorFlow library.
# Sessions
Client programs interact with the TensorFlow system by creating a Session. To create a computation graph, the Session interface supports an Extend method to augment the current graph managed by the session with additional nodes and edges (the initial graph when a session is created is empty). The other primary operation supported
import tensorflow as tf

b = tf.Variable(tf.zeros([100]))                    # 100-d vector, init to zeroes
W = tf.Variable(tf.random_uniform([784,100],-1,1))  # 784x100 matrix w/rnd vals
x = tf.placeholder(name="x")                        # Placeholder for input
relu = tf.nn.relu(tf.matmul(W, x) + b)              # Relu(Wx+b)
C = [...]                                           # Cost computed as a function of Relu

s = tf.Session()
for step in xrange(0, 10):
    input = ...construct 100-D input array ...      # Create 100-d vector for input
    result = s.run(C, feed_dict={x: input})         # Fetch cost, feeding x=input
    print step, result
Figure 1: Example TensorFlow code fragment
Figure 2: Corresponding computation graph for Figure 1
| Category | Examples |
| Element-wise mathematical operations | Add, Sub, Mul, Div, Exp, Log, Greater, Less, Equal, ... |
| Array operations | Concat, Slice, Split, Constant, Rank, Shape, Shuffle, ... |
| Matrix operations | MatMul, MatrixInverse, MatrixDeterminant, ... |
| Stateful operations | Variable, Assign, AssignAdd, ... |
| Neural-net building blocks | SoftMax, Sigmoid, ReLU, Convolution2D, MaxPool, ... |
| Checkpointing operations | Save, Restore |
| Queue and synchronization operations | Enqueue, Dequeue, MutexAcquire, MutexRelease, ... |
| Control flow operations | Merge, Switch, Enter, Leave, NextIteration |

Table 1: Example TensorFlow operation types
by the session interface is Run, which takes a set of output names that need to be computed, as well as an optional set of tensors to be fed into the graph in place of certain outputs of nodes. Using the arguments to Run, the TensorFlow implementation can compute the transitive closure of all nodes that must be executed in order to compute the outputs that were requested, and can then arrange to execute the appropriate nodes in an order that respects their dependencies (as described in more detail in Section 3.1). Most of our uses of TensorFlow set up a Session with a graph once, and then execute the full graph or a few distinct subgraphs thousands or millions of times via Run calls.
# Variables
In most computations a graph is executed multiple times. Most tensors do not survive past a single execution of the graph. However, a Variable is a special kind of operation that returns a handle to a persistent mutable tensor that survives across executions of a graph. Handles to these persistent mutable tensors can be passed to a handful of special operations, such as Assign and AssignAdd (equivalent to +=) that mutate the referenced tensor. For machine learning applications of TensorFlow, the parameters of the model are typically stored in tensors held in variables, and are updated as part of the Run of the training graph for the model.
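As an illustration, a minimal sketch (ours, in the style of the TF 1.x Python API, not taken from the paper) of a Variable mutated across Run calls via AssignAdd:

import tensorflow as tf

counter = tf.Variable(0, name="counter")
increment = tf.assign_add(counter, 1)   # AssignAdd: counter += 1

s = tf.Session()
s.run(tf.global_variables_initializer())
for _ in range(3):
    print(s.run(increment))             # prints 1, 2, 3: state persists across Runs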
# 3 Implementation
The main components in a TensorFlow system are the client, which uses the Session interface to communicate with the master, and one or more worker processes, with each worker process responsible for arbitrating access to one or more computational devices (such as CPU cores or GPU cards) and for executing graph nodes on those devices as instructed by the master. We have both local and distributed implementations of the TensorFlow interface. The local implementation is used when the client, the master, and the worker all run on a single machine in the context of a single operating system process (possibly with multiple devices, if for example, the machine has many GPU cards installed). The distributed implementation shares most of the code with the local implementation, but extends it with support for an environment where the client, the master, and the workers can all be in different processes on different machines. In our distributed environment, these different tasks are containers in jobs managed by a cluster scheduling system [51]. These two different modes are illustrated in Figure 3. Most of the rest of this section discusses issues that are common to both implementations, while Section 3.3 discusses some issues that are particular to the distributed implementation.
# Devices
Devices are the computational heart of TensorFlow. Each worker is responsible for one or more devices, and each device has a device type and a name. Device names are composed of pieces that identify the device's type, the device's index within the worker, and, in our distributed setting, an identification of the job and task of the worker (or localhost for the case where the devices are local to the process). Example device names are "/job:localhost/device:cpu:0" or "/job:worker/task:17/device:gpu:3". We
have implementations of our Device interface for CPUs and GPUs, and new device implementations for other device types can be provided via a registration mechanism. Each device object is responsible for managing allocation and deallocation of device memory, and for arranging for the execution of any kernels that are requested by higher levels in the TensorFlow implementation.
# Tensors
A tensor in our implementation is a typed, multi-dimensional array. We support a variety of tensor element types, including signed and unsigned integers ranging in size from 8 bits to 64 bits, IEEE float and double types, a complex number type, and a string type (an arbitrary byte array). Backing store of the appropriate size is managed by an allocator that is specific to the device on which the tensor resides. Tensor backing store buffers are reference counted and are deallocated when no references remain.
# 3.1 Single-Device Execution
Let's first consider the simplest execution scenario: a single worker process with a single device. The nodes of the graph are executed in an order that respects the dependencies between nodes. In particular, we keep track of a count per node of the number of dependencies of that node that have not yet been executed. Once this count drops to zero, the node is eligible for execution and is added to a ready queue. The ready queue is processed in some unspecified order, delegating execution of the kernel for a node to the device object. When a node has finished executing, the counts of all nodes that depend on the completed node are decremented.
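The counting scheme is the classic ready-queue form of topological execution; a toy sketch (ours, in plain Python, independent of the actual TensorFlow runtime) of the idea:

from collections import deque

def execute(consumers):
    # consumers: dict mapping each node to the list of nodes that use its output.
    pending = {n: 0 for n in consumers}       # unexecuted-dependency counts
    for outs in consumers.values():
        for c in outs:
            pending[c] += 1
    ready = deque(n for n, k in pending.items() if k == 0)
    while ready:
        node = ready.popleft()
        print("running", node)                # stand-in for kernel dispatch
        for c in consumers[node]:             # node done: decrement dependents
            pending[c] -= 1
            if pending[c] == 0:
                ready.append(c)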
# 3.2 Multi-Device Execution
Once a system has multiple devices, there are two main complications: deciding which device to place the computation for each node in the graph, and then managing the required communication of data across device boundaries implied by these placement decisions. This subsection discusses these two issues.
# 3.2.1 Node Placement
Given a computation graph, one of the main responsibilities of the TensorFlow implementation is to map the computation onto the set of available devices. A simplified version of this algorithm is presented here. See Section 4.3 for extensions supported by this algorithm.
One input to the placement algorithm is a cost model, which contains estimates of the sizes (in bytes) of the input and output tensors for each graph node, along with estimates of the computation time required for each node when presented with its input tensors. This cost model is either statically estimated based on heuristics associated with different operation types, or is measured based on an actual set of placement decisions for earlier executions of the graph.

Figure 3: Single machine and distributed system structure
The placement algorithm first runs a simulated execution of the graph. The simulation is described below and ends up picking a device for each node in the graph using greedy heuristics. The node-to-device placement generated by this simulation is also used as the placement for the real execution.

The placement algorithm starts with the sources of the computation graph, and simulates the activity on each device in the system as it progresses. For each node that is reached in this traversal, the set of feasible devices is considered (a device may not be feasible if the device does not provide a kernel that implements the particular operation). For nodes with multiple feasible devices, the placement algorithm uses a greedy heuristic that examines the effects on the completion time of the node of placing the node on each possible device. This heuristic takes into account the estimated or measured execution time of the operation on that kind of device from the cost model, and also includes the costs of any communication that would be introduced in order to transmit inputs to this node from other devices to the considered device. The device where the node's operation would finish the soonest is selected as the device for that operation, and the placement process then continues onwards to make placement decisions for other nodes in the graph, including downstream nodes that are now ready for their own simulated execution. Section 4.3 describes some extensions that allow users to provide hints and partial constraints to guide the placement algorithm. The placement algorithm is an area of ongoing development within the system.
# 3.2.2 Cross-Device Communication
Once the node placement has been computed, the graph is partitioned into a set of subgraphs, one per device. Any cross-device edge from x to y is removed and replaced by an edge from x to a new Send node in x's subgraph and an edge from a corresponding Receive node to y in y's subgraph. See Figure 4 for an example of this graph transformation.
Figure 4: Before & after insertion of Send/Receive nodes
At runtime, the implementations of the Send and Receive nodes coordinate to transfer data across devices. This allows us to isolate all communication inside Send and Receive implementations, which simplifies the rest of the runtime.

When we insert Send and Receive nodes, we canonicalize all users of a particular tensor on a particular device to use a single Receive node, rather than one Receive node per downstream user on a particular device. This ensures that the data for the needed tensor is only transmitted once between a source device → destination device pair, and that memory for the tensor on the destination device is only allocated once, rather than multiple times (e.g., see nodes b and c in Figure 4).

By handling communication in this manner, we also allow the scheduling of individual nodes of the graph on different devices to be decentralized into the workers: the Send and Receive nodes impart the necessary
synchronization between different workers and devices, and the master only needs to issue a single Run request per graph execution to each worker that has any nodes for the graph, rather than being involved in the scheduling of every node or every cross-device communication. This makes the system much more scalable and allows much finer-granularity node executions than if the scheduling were forced to be done by the master.
# 3.3 Distributed Execution
Distributed execution of a graph is very similar to multi-device execution. After device placement, a subgraph is created per device. Send/Receive node pairs that communicate across worker processes use remote communication mechanisms such as TCP or RDMA to move data across machine boundaries.
# Fault Tolerance
Failures in a distributed execution can be detected in a variety of places. The main ones we rely on are (a) an error in a communication between a Send and Receive node pair, and (b) periodic health-checks from the master process to every worker process.

When a failure is detected, the entire graph execution is aborted and restarted from scratch. Recall however that Variable nodes refer to tensors that persist across executions of the graph. We support consistent checkpointing and recovery of this state on a restart. In particular, each Variable node is connected to a Save node. These Save nodes are executed periodically, say once every N iterations, or once every N seconds. When they execute, the contents of the variables are written to persistent storage, e.g., a distributed file system. Similarly each Variable is connected to a Restore node that is only enabled in the first iteration after a restart. See Section 4.2 for details on how some nodes can only be enabled on some executions of the graph.
# 4 Extensions
In this section we describe several more advanced features of the basic programming model that was introduced in Section 2.
# 4.1 Gradient Computation
Many optimization algorithms, including common machine learning training algorithms like stochastic gradient descent [45], compute the gradient of a cost function with respect to a set of inputs. Because this is such a common need, TensorFlow has built-in support for automatic gradient computation. If a tensor C in a TensorFlow graph depends, perhaps through a complex subgraph of operations, on some set of tensors {Xk}, then there is a built-in function that will return the tensors {dC/dXk}. Gradient tensors are computed, like other tensors, by extending the TensorFlow graph, using the following procedure.

Figure 5: Gradients computed for graph in Figure 2
When TensorFlow needs to compute the gradient of a tensor C with respect to some tensor I on which C depends, it first finds the path in the computation graph from I to C. Then it backtracks from C to I, and for each operation on the backward path it adds a node to the TensorFlow graph, composing the partial gradients along the backwards path using the chain rule. The newly added node computes the "gradient function" for the corresponding operation in the forward path. A gradient function may be registered by any operation. This function takes as input not only the partial gradients computed already along the backward path, but also, optionally, the inputs and outputs of the forward operation. Figure 5 shows gradients for a cost computed from the example of Figure 2. Grey arrows show potential inputs to gradient functions that are not used for the particular operations shown. The addition needed to Figure 1 to compute these gradients is:
[db,dW,dx] = tf.gradients(C, [b,W,x])
In general an operation may have multiple outputs, and C may only depend on some of them. If, for example, operation O has two outputs y1 and y2, and C only depends on y2, then the first input to O's gradient function is set to 0 since dC/dy1 = 0.

Automatic gradient computation complicates optimization, particularly of memory usage. When executing "forward" computation subgraphs, i.e., those that are explicitly constructed by the user, a sensible heuristic breaks ties when deciding which node to execute next by observing the order in which the graph was constructed. This generally means that temporary outputs are consumed soon after being constructed, so their memory can be reused quickly. When the heuristic is ineffective, the user can change the order of graph construction, or add control dependencies as described in Section 5. When gradient nodes are automatically added to the graph, the user has less control, and the heuristics may break down. In particular, because gradients reverse the forward computation order, tensors that are used early in a graph's execution are frequently needed again near the end of a gradient computation. Such tensors can hold on to a lot of scarce GPU memory and unnecessarily limit the size of computations. We are actively working on improvements to memory management to deal better with such cases. Options include using more sophisticated heuristics to determine the order of graph execution, recomputing tensors instead of retaining them in memory, and swapping out long-lived tensors from GPU memory to more plentiful host CPU memory.
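To connect the gradient machinery to training, a hedged sketch (ours; it reuses the names W, b, x, and C from Figure 1 and assumes TF 1.x-style ops) of a manual gradient-descent step built from tf.gradients:

lr = 0.01
dC_db, dC_dW = tf.gradients(C, [b, W])              # extend graph with gradient nodes
train_step = tf.group(tf.assign_sub(b, lr * dC_db), # b -= lr * dC/db
                      tf.assign_sub(W, lr * dC_dW)) # W -= lr * dC/dW
# Each s.run(train_step, feed_dict={x: input}) then applies one update.

Optimizer helpers such as tf.train.GradientDescentOptimizer wrap exactly this pattern of gradient construction followed by Assign-style updates.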
# 4.2 Partial Execution
Often a client wants to execute just a subgraph of the entire execution graph. To support this, once the client has set up a computation graph in a Session, our Run method allows them to execute an arbitrary subgraph of the whole graph, and to inject arbitrary data along any edge in the graph, and to retrieve data flowing along any edge in the graph.

Each node in the graph has a name, and each output of a node is identified by the source node name and the output port from the node, numbered from 0 (e.g., "bar:0" refers to the 1st output of the "bar" node, while "bar:1" refers to the 2nd output).

Two arguments to the Run call help define the exact subgraph of the computation graph that will be executed. First, the Run call accepts inputs, an optional mapping of name:port names to "fed" tensor values. Second, the Run call accepts output names, a list of output name[:port] specifications indicating which nodes should be executed, and, if the port portion is present in a name, that that particular output tensor value for the node should be returned to the client if the Run call completes successfully.
The graph is transformed based on the values of inputs and outputs. Each node:port specified in inputs is replaced with a feed node, which will pick up the provided input tensor from specially-initialized entries in a Rendezvous object used for the Run call. Similarly, each output name with a port is connected to a special fetch node that arranges to save the output tensor and return it to the client when the Run call is complete. Finally, once the graph has been rewritten with the insertion of these special feed and fetch nodes, the set of nodes to execute can be determined by starting at each of the nodes named by any output and working backwards in the graph using the graph dependencies to determine the full set of nodes that must be executed in the rewritten graph in order to compute the outputs. Figure 6 shows an original graph on the left, and the transformed graph that results when Run is invoked with inputs=={b} and outputs=={f:0}. Since we only need to compute the output of node f, we will not execute nodes d and e, since they have no contribution to the output of f.

Figure 6: Before and after graph transformation for partial execution
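A hedged sketch (ours, TF 1.x-style; the toy graph and names mirror Figure 6, with a placeholder standing in for feeding an arbitrary edge) of feeding and fetching by name:port strings:

import tensorflow as tf

a = tf.constant(2.0, name="a")
b = tf.placeholder(tf.float32, name="b")   # value supplied at Run time
c = tf.add(a, b, name="c")
f = tf.multiply(c, 3.0, name="f")
d = tf.negative(c, name="d")               # not needed for f; pruned by Run
e = tf.abs(d, name="e")

s = tf.Session()
print(s.run("f:0", feed_dict={"b:0": 4.0}))   # (2 + 4) * 3 = 18.0; d, e not run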
# 4.3 Device Constraints
TensorFlow clients can control the placement of nodes on devices by providing partial constraints for a node about which devices it can execute on. For example, "only place this node on a device of type GPU", or "this node can be placed on any device in /job:worker/task:17", or "Colocate this node with the node named variable13". Within the confines of these constraints, the placement algorithm is responsible for choosing an assignment of nodes to devices that provides fast execution of the computation and also satisfies various constraints imposed by the devices themselves, such as limiting the total amount of memory needed on a device in order to execute its subset of graph nodes.

Supporting such constraints requires changes to the placement algorithm described in Section 3.2.1. We first compute the feasible set of devices for each node, and then use union-find on the graph of colocation constraints to compute the graph components that must be placed together. For each such component, we compute the intersection of the feasible device sets. The computed feasible device set per node fits easily into the placement algorithm's simulator.
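In the Python front end such constraints are typically expressed with device scopes; a minimal sketch (ours, TF 1.x-style):

import tensorflow as tf

# Full constraint: pin this variable to a specific task and GPU.
with tf.device("/job:worker/task:17/device:gpu:3"):
    variable13 = tf.Variable(tf.zeros([1024]), name="variable13")

# Partial constraint: any device of the given task; the placer picks one.
with tf.device("/job:worker/task:17"):
    update = tf.assign_add(variable13, tf.ones([1024]))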
# 4.4 Control Flow
Although dataflow graphs without any explicit control flow are quite expressive, we have observed a number of cases where supporting conditionals and loops can lead to more concise and efficient representations of machine learning algorithms.

Much as in the dataflow-machine approach described by Arvind [3], we introduce a small set of primitive control flow operators into TensorFlow and generalize TensorFlow to handle cyclic dataflow graphs. The Switch and Merge operators allow us to skip the execution of an entire subgraph based on the value of a boolean tensor. The Enter, Leave, and NextIteration operators allow us to express iteration. High-level programming constructs such as if-conditionals and while-loops can be easily compiled into dataflow graphs with these control flow operators.
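For instance, a sketch (ours, TF 1.x-style) of the high-level constructs that compile down to these Switch/Merge and Enter/Leave/NextIteration primitives:

import tensorflow as tf

x = tf.placeholder(tf.float32)
# if-conditional: lowered to Switch/Merge nodes.
y = tf.cond(x > 0.0, lambda: x * 2.0, lambda: -x)

# while-loop: lowered to Enter/Leave/NextIteration nodes.
i = tf.constant(0)
final_i = tf.while_loop(lambda i: i < 10, lambda i: i + 1, [i])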
The TensorFlow runtime implements a notion of tags and frames conceptually similar to the MIT Tagged-Token machine [4]. Each iteration of a loop is uniquely identified by a tag, and its execution state is represented by a frame. An input can enter an iteration whenever it becomes available; thus, multiple iterations can be executed concurrently.

TensorFlow uses a distributed coordination mechanism to execute graphs with control flow. In general, a loop can contain nodes that are assigned to many different devices. Therefore, managing the state of a loop becomes a problem of distributed termination detection. TensorFlow's solution is based on graph rewriting. During the graph partitioning, we automatically add control nodes to each partition. These nodes implement a small state machine that orchestrates the start and termination of each iteration, and decides the termination of the loop. For each iteration, the device that owns the loop termination predicate sends a tiny control message to every participating device.

As explained above, we often train machine learning models by gradient descent, and represent gradient computations as part of dataflow graphs. When a model includes control-flow operations, we must account for them in the corresponding gradient computation. For example, the gradient computation for a model with an if-conditional will need to know which branch of the conditional was taken, then apply the gradient logic to this branch. Similarly, the gradient computation for a model with a while-loop will need to know how many iterations were taken, and will also rely on the intermediate values computed during those iterations. The basic technique is to rewrite the graph so as to memorize the values needed for the gradient computation. We omit the somewhat intricate details of this encoding.
# 4.5 Input Operations
Although input data can be provided to a computation via feed nodes, another common mechanism used for training large-scale machine learning models is to have special input operation nodes in the graph, which are typically configured with a set of filenames and which yield a tensor containing one or more examples from the data stored in that set of files each time they are executed. This allows data to be read directly from the underlying storage system into the memory of the machine that will perform subsequent processing on the data. In configurations where the client process is separate from the worker process, feeding the data would typically require an extra network hop (from the storage system to the client and then from the client to the worker, vs. directly from the storage system to the worker when using an input node).
# 4.6 Queues
Queues are a useful feature that we have added to TensorFlow. They allow different portions of the graph to execute asynchronously, possibly at different cadences, and to hand off data through Enqueue and Dequeue operations. Enqueue operations can block until space becomes available in the queue, and Dequeue operations can block until a desired minimum number of elements are available in the queue. One use of queues is to allow input data to be prefetched from disk files while a previous batch of data is still being processed by the computational portion of a machine learning model. They can also be used for other kinds of grouping, including accumulating many gradients in order to compute some more complex combination of gradients over a larger batch, or to group different input sentences for recurrent language models into bins of sentences that are approximately the same length, which can then be processed more efficiently.
In addition to normal FIFO queues, we have also implemented a shuffling queue, which randomly shuffles its elements within a large in-memory buffer. This shuffling functionality is useful for machine learning algorithms that want to randomize the order in which they process examples, for example.
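A small sketch (ours, TF 1.x-style) of the two queue flavors and the blocking Enqueue/Dequeue ops:

import tensorflow as tf

# FIFO queue for prefetching: an input thread runs `enqueue` while the
# training loop runs `example`.
fifo = tf.FIFOQueue(capacity=100, dtypes=[tf.float32])
enqueue = fifo.enqueue([tf.random_normal([128])])
example = fifo.dequeue()        # blocks until an element is available

# Shuffling queue: dequeues in randomized order from an in-memory buffer.
shuffled = tf.RandomShuffleQueue(capacity=10000, min_after_dequeue=1000,
                                 dtypes=[tf.float32])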
# 4.7 Containers
A Container is the mechanism within TensorFlow for managing longer-lived mutable state. The backing store for a Variable lives in a container. The default container is one that persists until the process terminates, but we also allow other named containers. A container can be reset by clearing it of its contents entirely. Using containers, it is possible to share state even across completely disjoint computation graphs associated with different Sessions.
# 5 Optimizations
In this section, we describe some of the optimizations in the TensorFlow implementation that improve performance or resource usage of the system.
# 5.1 Common Subexpression Elimination
Since the construction of computation graphs is often done by many different layers of abstractions in the client code, computation graphs can easily end up with redundant copies of the same computation. To handle this, we have implemented a common subexpression pass similar to the algorithm described by Click [12] that runs over the computation graph and canonicalizes multiple copies of operations with identical inputs and operation types to just a single one of these nodes, and redirects graph edges appropriately to reflect this canonicalization.
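The idea is classic hash-consing; a toy sketch (ours, in plain Python, independent of the actual TensorFlow pass) over nodes visited in topological order:

def eliminate_common_subexpressions(nodes):
    # nodes: list of (node_id, op_type, input_ids) in topological order.
    canonical = {}     # (op_type, canonical input ids) -> surviving node id
    replacement = {}   # eliminated node id -> surviving node id
    for node_id, op_type, input_ids in nodes:
        key = (op_type, tuple(replacement.get(i, i) for i in input_ids))
        if key in canonical:
            replacement[node_id] = canonical[key]   # redirect edges here
        else:
            canonical[key] = node_id
    return replacement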
# 5.2 Controlling Data Communication and Memory Usage
Careful scheduling of TensorFlow operations can result in better performance of the system, in particular with respect to data transfers and memory usage. Specifically, scheduling can reduce the time window during which intermediate results need to be kept in memory between operations, and hence the peak memory consumption. This reduction is particularly important for GPU devices where memory is scarce. Furthermore, orchestrating the communication of data across devices can reduce contention for network resources.
While there are many opportunities for scheduling op- timizations, here we focus on one that we found partic- ularly necessary and effective. It concerns the schedul- ing of Receive nodes for reading remote values. If no precautions are taken, these nodes may start much ear- lier than necessary, possibly all at once when execution starts. By performing an as-soon-as-possible/as-late-as- possible (ASAP/ALAP) calculation, of the kind common in operations research, we analyze the critical paths of graphs, in order to estimate when to start the Receive nodes. We then insert control edges with the aim of de- laying the start of these nodes until just before their re- sults are needed.
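A simplified sketch of the ALAP half of that calculation, with hypothetical per-node duration estimates and a `successors` helper over a topologically ordered node list:

```python
def alap_start_times(nodes, successors, durations, makespan):
    """As-late-as-possible start time for each node.

    `nodes` is in topological order; `successors(n)` returns n's consumers.
    """
    alap = {}
    for node in reversed(nodes):
        succ = successors(node)
        if not succ:
            alap[node] = makespan - durations[node]
        else:
            alap[node] = min(alap[s] for s in succ) - durations[node]
    return alap
```

A Receive node whose ALAP time is much later than its ASAP time is a natural candidate for a delaying control edge.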
# 5.3 Asynchronous Kernels
In addition to normal synchronous kernels that complete their execution at the end of the Compute method, our framework also supports non-blocking kernels. Such non-blocking kernels use a slightly different interface whereby the Compute method is passed a continuation that should be invoked when the kernel's execution is complete. This is an optimization for environments where having many active threads is relatively expensive in terms of memory usage or other resources, and allows us to avoid tying up an execution thread for unbounded periods of time while waiting for I/O or other events to occur. Examples of asynchronous kernels include the Receive kernel, and the Enqueue and Dequeue kernels (which might need to block if queue space is not available or if no data is available to be read, respectively).
# 5.4 Optimized Libraries for Kernel Imple- mentations
We often make use of pre-existing highly-optimized nu- merical libraries to implement kernels for some opera- tions. For example, there are a number of optimized li- braries for performing matrix multiplies on different de- vices, including BLAS [15] and cuBLAS [39], or GPU libraries for convolutional kernels for deep neural nets such as cuda-convnet [28] and cuDNN [9]. Many of our kernel implementations are relatively thin wrappers around such optimized libraries.
We make fairly extensive use of the open-source Eigen linear algebra library [25] for many of the kernel imple- mentations in the system. As one part of the develop- ment of TensorFlow, our team (primarily Benoit Steiner) has extended the open source Eigen library with support for arbitrary dimensionality tensor operations.
# 5.5 Lossy Compression
Some machine learning algorithms, including those typically used for training neural networks, are tolerant of noise and reduced-precision arithmetic. In a manner similar to the DistBelief system [14], we often use lossy compression of higher-precision internal representations when sending data between devices (sometimes within the same machine but especially across machine boundaries). For example, we often insert special conversion nodes that convert 32-bit floating point representations into a 16-bit floating point representation (not the proposed IEEE 16-bit floating point standard, but rather just a 32-bit IEEE 754 float format with 16 bits less precision in the mantissa), and then convert back to a 32-bit representation on the other side of the communication channel (by just filling in zeroes for the lost portion of the mantissa, since that's less computationally expensive than doing the mathematically correct probabilistic rounding for this 32 → 16 → 32-bit conversion).
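The truncation itself is easy to sketch with NumPy. This mirrors the format described above (keep the sign, exponent, and top 7 mantissa bits of an IEEE 754 float32; zero-fill on decode) but is not the exact TensorFlow kernel:

```python
import numpy as np

def to_16bit(x):
    # Keep the high 16 bits of each float32 (sign, 8-bit exponent, 7 mantissa bits).
    return (x.view(np.uint32) >> 16).astype(np.uint16)

def from_16bit(h):
    # Zero-fill the lost low mantissa bits; cheaper than probabilistic rounding.
    return (h.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159, -0.001, 1e8], dtype=np.float32)
print(from_16bit(to_16bit(x)))  # approximately recovers the originals
```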
# 6 Status and Experience
The TensorFlow interface and a reference implementation have been open sourced under an Apache 2.0 license, and the system is available for download at www.tensorflow.org. The system includes detailed documentation, a number of tutorials, and a number of examples demonstrating how to use the system for a variety of different machine learning tasks. The examples include models for classifying hand-written digits from the MNIST dataset (the "hello world" of machine learning algorithms) [32], classifying images from the CIFAR-10 dataset [30], doing language modeling using a recurrent LSTM [22] network, training word embedding vectors [35], and more.
The system includes front-ends for specifying Tensor- Flow computations in Python and C++, and we expect other front-ends to be added over time in response to the desires of both internal Google users and the broader open-source community.
We have quite a few machine learning models in our previous DistBelief system [14] that we have migrated over to TensorFlow. The rest of this section discusses some lessons we have learned that are generalizable for any such migration of machine learning models from one system to another, and therefore may be valuable to oth- ers.
In particular, we focus on our lessons from porting a state-of-the-art convolutional neural network for image recognition termed Inception [23]. This image recognition system classifies 224 × 224 pixel images into one of 1000 labels (e.g., "cheetah", "garbage truck", etc.). Such a model comprises 13.6 million learnable parameters and 36,000 operations when expressed as a TensorFlow graph. Running inference on a single image requires 2 billion multiply-add operations.
After building all necessary mathematical operations in TensorFlow, assembling and debugging all 36,000 operations into the correct graph structure proved challenging. Validating correctness is a difficult enterprise because the system is inherently stochastic and only intended to behave in a certain way in expectation, potentially after hours of computation. Given these circumstances, we found the following strategies critical for porting the Inception model to TensorFlow:
1. Build tools to gain insight into the exact number of parameters in a given model. Such tools demonstrated subtle flaws in a complex network architecture specification. In particular, we were able to identify operations and variables instantiated incorrectly due to automatic broadcasting in a mathematical operation across a dimension.
2. Start small and scale up. The first convolutional neural network that we ported from our previous system was a small network employed on the CIFAR-10 data set [30]. Debugging such a network elucidated subtle edge cases in individual operations (e.g., max-pooling) within the machine learning system that would have been practically indecipherable in more complex models.
3. Always ensure that the objective (loss function) matches between machine learning systems when learning is turned off. Setting the learning rate to be zero helped us identify unexpected behavior in how we had randomly initialized variables in a model. Such an error would have been difficult to identify in a dynamic, training network.
4. Make a single-machine implementation match before debugging a distributed implementation. This strategy helped us delineate and debug discrepancies in training performance between machine learning systems. In particular, we identified bugs due to race conditions and non-atomic operations incorrectly assumed to be atomic.
5. Guard against numerical errors. Numerical libraries are inconsistent in how they handle non-finite floating point values. Convolutional neural networks are particularly susceptible to numerical instability and will tend to diverge quite regularly during experimentation and debugging phases. Guarding against this behavior by checking for non-finite floating point values allows one to detect errors in real time, as opposed to identifying divergent behavior post-hoc (see the sketch after this list).
6. Analyze pieces of a network and understand the magnitude of numerical error. Running subsections of a neural network in parallel on two machine learning systems provides a precise method to ensure that a numerical algorithm is identical across two systems. Given that such algorithms run with floating point precision, it is important to predict and understand the magnitude of expected numerical error in order to judge whether a given component is correctly implemented (e.g., distinguishing between "within 1e-2, great!" and "within 1e-2: why is it so incorrect?!").
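A small illustration of strategies 3 and 5 using standard TF-1.x-era ops (`build_model_loss` is a hypothetical stand-in for the objective of the model under test):

```python
import tensorflow as tf

loss = build_model_loss()  # hypothetical: the objective of the model under test

# Strategy 5: fail fast, in real time, if the loss ever becomes NaN or Inf.
loss = tf.check_numerics(loss, "loss became non-finite")

# Strategy 3: with the learning rate pinned to zero, two systems should report
# identical objectives (up to initialization), isolating initialization bugs.
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.0).minimize(loss)
```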
Figure 7: Synchronous and asynchronous data parallel training
Validating complex mathematical operations in the presence of an inherently stochastic system is quite challenging. The strategies outlined above proved invaluable in gaining confidence in the system and ultimately in instantiating the Inception model in TensorFlow. These efforts resulted in a 6-fold improvement in training speed over our existing DistBelief implementation of the model, and such gains proved indispensable in training a new class of larger-scale image recognition models.
# 7 Common Programming Idioms
TensorFlow's basic dataflow graph model can be used in a variety of ways for machine learning applications. One domain we care about is speeding up training of computationally intensive neural network models on large datasets. This section describes several techniques that we and others have developed in order to accomplish this, and illustrates how to use TensorFlow to realize these various approaches.
The approaches in this subsection assume that the model is being trained using stochastic gradient descent (SGD) with relatively modest-sized mini-batches of 100 to 1000 examples.
# Data Parallel Training
One simple technique for speeding up SGD is to paral- lelize the computation of the gradient for a mini-batch across mini-batch elements. For example, if we are us- ing a mini-batch size of 1000 elements, we can use 10 replicas of the model to each compute the gradient for 100 elements, and then combine the gradients and apply updates to the parameters synchronously, in order to be- have exactly as if we were running the sequential SGD algorithm with a batch size of 1000 elements. In this case, the TensorFlow graph simply has many replicas of the portion of the graph that does the bulk of the model computation, and a single client thread drives the entire training loop for this large graph. This is illustrated in the top portion of Figure 7.
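A sketch of this synchronous, in-graph-replicated variant (`build_model` and `next_batch` are hypothetical helpers; the gradient-averaging pattern itself is standard TF-1.x):

```python
import tensorflow as tf

opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
tower_grads = []
for i in range(10):  # 10 replicas of 100 examples each = effective batch of 1000
    with tf.device("/gpu:%d" % i):
        loss = build_model(next_batch(100))        # hypothetical helpers
        tower_grads.append(opt.compute_gradients(loss))

# Average each variable's gradient across the replicas, then apply once.
averaged = []
for grads_and_vars in zip(*tower_grads):
    grads = [g for g, _ in grads_and_vars]
    averaged.append((tf.add_n(grads) / len(grads), grads_and_vars[0][1]))
train_op = opt.apply_gradients(averaged)
```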
This approach can also be made asynchronous, where the TensorFlow graph has many replicas of the portion of the graph that does the bulk of the model computation, and each one of these replicas also applies the parameter updates to the model parameters asynchronously. In this configuration, there is one client thread for each of the graph replicas. This is illustrated in the bottom portion of Figure 7. This asynchronous approach was also described in [14].
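The asynchronous variant is often run with between-graph replication, where each worker process builds its own copy of the following graph (sketch; `build_model` and `next_batch` remain hypothetical):

```python
import tensorflow as tf

# Parameters are placed round-robin on the parameter-server tasks; each worker
# applies its gradients as soon as they are computed, without synchronization.
with tf.device(tf.train.replica_device_setter(ps_tasks=2)):
    loss = build_model(next_batch(100))
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
```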
Figure 8: Model parallel training
Figure 9: Concurrent steps
# Model Parallel Training
Model parallel training, where different portions of the model computation are done on different computational devices simultaneously for the same batch of examples, is also easy to express in TensorFlow. Figure 8 shows an example of a recurrent, deep LSTM model used for sequence to sequence learning (see [47]), parallelized across three different devices.
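Expressing this placement only takes a few device annotations (a sketch with hypothetical per-layer builder functions):

```python
import tensorflow as tf

# `inputs` and the layer functions are hypothetical; each LSTM layer of the
# sequence-to-sequence model is pinned to a different device.
with tf.device("/gpu:0"):
    h1 = lstm_layer_1(inputs)
with tf.device("/gpu:1"):
    h2 = lstm_layer_2(h1)
with tf.device("/gpu:2"):
    logits = output_layer(h2)
```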
# Concurrent Steps for Model Computation Pipelining
Another common way to get better utilization for training deep neural networks is to pipeline the computation of the model within the same devices, by running a small number of concurrent steps within the same set of devices. This is shown in Figure 9. It is somewhat similar to asynchronous data parallelism, except that the parallelism occurs within the same device(s), rather than replicating the computation graph on different devices. This allows "filling in the gaps" where computation of a single batch of examples might not be able to fully utilize the full parallelism on all devices at all times during a single step.
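From the client side, one way to sketch this is simply to drive several training steps from concurrent Python threads over the same graph (the `sess`, `train_op`, and `coord` objects are assumed to exist; `coord` would be a `tf.train.Coordinator`):

```python
import threading

def run_steps():
    while not coord.should_stop():   # coord: a tf.train.Coordinator
        sess.run(train_op)           # each call is one concurrent step

threads = [threading.Thread(target=run_steps) for _ in range(4)]
for t in threads:
    t.start()
```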
# 8 Performance
A future version of this white paper will have a compre- hensive performance evaluation section of both the sin- gle machine and distributed implementations.
# 9 Tools
This section describes some tools we have developed that sit alongside the core TensorFlow graph execution en- gine.
# 9.1 TensorBoard: Visualization of graph structures and summary statistics
In order to help users understand the structure of their computation graphs and also to understand the overall behavior of machine learning models, we have built Ten- sorBoard, a companion visualization tool for TensorFlow that is included in the open source release.
# Visualization of Computation Graphs
Many of the computation graphs for deep neural networks can be quite complex. For example, the computation graph for training a model similar to Google's Inception model [48], a deep convolutional neural net that had the best classification performance in the ImageNet 2014 contest, has over 36,000 nodes in its TensorFlow computation graph, and some deep recurrent LSTM models for language modeling have more than 15,000 nodes.
Due to the size and topology of these graphs, naive vi- sualization techniques often produce cluttered and over- whelming diagrams. To help users see the underlying organization of the graphs, the algorithms in Tensor- Board collapse nodes into high-level blocks, highlighting groups with identical structures. The system also sep- arates out high-degree nodes, which often serve book- keeping functions, into a separate area of the screen. Do- ing so reduces visual clutter and focuses attention on the core sections of the computation graph.
The entire visualization is interactive: users can pan, zoom, and expand grouped nodes to drill down for de- tails. An example of the visualization for the graph of a deep convolutional image model is shown in Figure 10.
# Visualization of Summary Data
When training machine learning models, users often want to be able to examine the state of various aspects of the model, and how this state changes over time. To this end, TensorFlow supports a collection of different Summary operations that can be inserted into the graph,
Figure 10: TensorBoard graph visualization of a convolutional neural network model
Figure 11: TensorBoard graphical display of model summary statistics time series data
including scalar summaries (e.g., for examining overall properties of the model, such as the value of the loss function averaged across a collection of examples, or the time taken to execute the computation graph), histogram-based summaries (e.g., the distribution of weight values in a neural network layer), or image-based summaries (e.g., a visualization of the filter weights learned in a convolutional neural network). Typically computation graphs are set up so that Summary nodes monitor various interesting values, and every so often during execution of the training graph the summary nodes are executed in addition to the normal set of nodes, with the client driver program writing the summary data to a log file associated with the model training. The TensorBoard program is then configured to watch this log file for new summary records, and can display this summary information and how it changes over time (with the ability to select the measurement of "time" to be relative wall time since the beginning of the execution of the TensorFlow program, absolute time, or "steps", a numeric measure of the number of graph executions that have occurred since the beginning of execution of the TensorFlow program). A screen shot of the visualization of summary values in TensorBoard is shown in Figure 11.
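With the 2015-era API names, the summary machinery looks roughly like this (the `loss`, `weights`, `sess`, and `step` objects are assumed to come from a model and training loop):

```python
import tensorflow as tf

tf.scalar_summary("loss", loss)
tf.histogram_summary("layer1/weights", weights)
merged = tf.merge_all_summaries()
writer = tf.train.SummaryWriter("/tmp/logdir", sess.graph)

# Every so often during training, run the summary nodes and append to the log.
summary_str = sess.run(merged)
writer.add_summary(summary_str, global_step=step)
```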
# 9.2 Performance Tracing
We also have an internal tool called EEG (not included in the initial open source release in November 2015) that we use to collect and visualize very fine-grained information about the exact ordering and performance characteristics of the execution of TensorFlow graphs. This tool works in both our single machine and distributed implementations, and is very useful for understanding the bottlenecks in the computation and communication patterns of a TensorFlow program.
Traces are collected simultaneously on each machine in the system from a variety of sources, including Linux kernel ftrace, our own lightweight thread tracing tools, and the CUDA Profiling Tools Interface (CUPTI). With these logs we can reconstruct the execution of a distributed training step with microsecond-level details of every thread switch, CUDA kernel launch, and DMA operation.
Traces are combined in a visualization server which is designed to rapidly extract events in a specified time range and summarize them at an appropriate level of detail for the user-interface resolution. Any significant delays due to communication, synchronization, or DMA-related stalls are identified and highlighted using arrows in the visualization. Initially the UI provides an overview of the entire trace, with only the most significant performance artifacts highlighted. As the user progressively zooms in, increasingly fine-resolution details are rendered.
Figure 12 shows an example EEG visualization of a model being trained on a multi-core CPU platform. The top third of the screenshot shows TensorFlow operations being dispatched in parallel, according to the dataflow constraints. The bottom section of the trace shows how most operations are decomposed into multiple work-items which are executed concurrently in a thread pool. The diagonal arrows on the right-hand side show where queueing delay is building up in the thread pool. Figure 13 shows another EEG visualization with computation mainly happening on the GPU. Host threads can be seen enqueuing TensorFlow GPU operations as they become runnable (the light blue thread pool), and background housekeeping threads can be seen in other colors being migrated across processor cores. Once again, arrows show where threads are stalled on GPU-to-CPU transfers, or where ops experience significant queueing delay.
Finally, Figure 14 shows a more detailed view which allows us to examine how TensorFlow GPU operators are assigned to multiple GPU streams. Whenever the dataflow graph allows parallel execution or data transfer, we endeavour to expose the ordering constraints to the GPU device using streams and stream dependency primitives.
# 10 Future Work
We have several different directions for future work. We will continue to use TensorFlow to develop new and interesting machine learning models for artificial intelligence, and in the course of doing this, we may discover ways in which we will need to extend the basic TensorFlow system. The open source community may also come up with new and interesting directions for the TensorFlow implementation.
One extension to the basic programming model that we are considering is a function mechanism, whereby a user can specify an entire subgraph of a TensorFlow computation to be a reusable component. In the implementation we have designed, these functions can become reusable components even across different front-end languages for TensorFlow, so that a user could define a function using the Python front end, but then use that function as a basic building block from within the C++ front end. We are hopeful that this cross-language reusability will bootstrap a vibrant community of machine learning researchers publishing not just whole examples of their research, but also small reusable components from their work that can be reused in other contexts.
We also have a number of concrete directions to improve the performance of TensorFlow. One such direction is our initial work on a just-in-time compiler that can take a subgraph of a TensorFlow execution, perhaps with some runtime profiling information about the typical sizes and shapes of tensors, and can generate an optimized routine for this subgraph. This compiler will understand the semantics of the subgraph and perform a number of optimizations such as loop fusion, blocking and tiling for locality, specialization for particular shapes and sizes, etc.
We also imagine that a significant area for future work will be in improving the placement and node scheduling algorithms used to decide where different nodes will execute, and when they should start executing. We have currently implemented a number of heuristics in these subsystems, and we'd like to have the system instead learn to make good placement decisions (perhaps using a deep neural network, combined with a reinforcement learning objective function).
# 11 Related Work
There are many other systems that are comparable in various ways with TensorFlow. Theano [7], Torch [13], Caffe [26], Chainer [49] and the Computational Network Toolkit [54] are a few systems designed primarily for the training of neural networks. Each of these systems maps the computation onto a single machine, unlike the dis- tributed TensorFlow implementation. Like Theano and Chainer, TensorFlow supports symbolic differentiation, thus making it easier to deï¬ne and work with gradient- based optimization algorithms. Like Caffe, TensorFlow has a core written in C++, simplifying the deployment
Figure 12: EEG visualization of multi-threaded CPU operations (x-axis is time in µs).
Figure 13: EEG visualization of Inception training showing CPU and GPU activity.
of trained models in a wide variety of production set- tings, including memory- and computation-constrained environments such as mobile devices.
The TensorFlow system shares some design charac- teristics with its predecessor system, DistBelief [14], and with later systems with similar designs like Project Adam [10] and the Parameter Server project [33]. Like DistBelief and Project Adam, TensorFlow allows com- putations to be spread out across many computational de- vices across many machines, and allows users to specify
machine learning models using relatively high-level descriptions. Unlike DistBelief and Project Adam, though, the general-purpose dataflow graph model in TensorFlow is more flexible and more amenable to expressing a wider variety of machine learning models and optimization algorithms. It also permits a significant simplification by allowing the expression of stateful parameter nodes as variables, and variable update operations that are just additional nodes in the graph; in contrast, DistBelief, Project Adam, and the Parameter Server systems all have
Figure 14: Timeline of multi-stream GPU execution.
whole separate parameter server subsystems devoted to communicating and updating parameter values.
The Halide system [40] for expressing image processing pipelines uses a similar intermediate representation to the TensorFlow dataflow graph. Unlike TensorFlow, though, the Halide system actually has higher-level knowledge of the semantics of its operations and uses this knowledge to generate highly optimized pieces of code that combine multiple operations, taking into account parallelism and locality. Halide runs the resulting computations only on a single machine, and not in a distributed setting. In future work we are hoping to extend TensorFlow with a similar cross-operation dynamic compilation framework.
Like TensorFlow, several other distributed systems have been developed for executing dataflow graphs across a cluster. Dryad [24] and Flume [8] demonstrate how a complex workflow can be represented as a dataflow graph. CIEL [37] and Naiad [36] introduce generic support for data-dependent control flow: CIEL represents iteration as a DAG that dynamically unfolds, whereas Naiad uses a static graph with cycles to support lower-latency iteration. Spark [55] is optimized for computations that access the same data repeatedly, using "resilient distributed datasets" (RDDs), which are soft-state cached outputs of earlier computations. Dandelion [44] executes dataflow graphs across a cluster of heterogeneous devices, including GPUs. TensorFlow uses a hybrid dataflow model that borrows elements from each of these systems. Its dataflow scheduler, which is the component that chooses the next node to execute, uses the same basic algorithm as Dryad, Flume, CIEL, and Spark. Its distributed architecture is closest to Naiad, in that the system uses a single, optimized dataflow graph to represent the entire computation, and caches information about that graph on each device to minimize coordination overhead. Like Spark and Naiad, TensorFlow works best when there is sufficient RAM in the cluster to hold the working set of the computation. Iteration in TensorFlow uses a hybrid approach: multiple replicas of the same dataflow graph may be executing at once, while sharing the same set of variables. Replicas can share data asynchronously through the variables, or use synchronization mechanisms in the graph, such as queues, to operate synchronously. TensorFlow also supports iteration within a graph, which is a hybrid of CIEL and Naiad: for simplicity, each node fires only when all of its inputs are ready (like CIEL); but for efficiency the graph is represented as a static, cyclic dataflow (like Naiad).
# 12 Conclusions
We have described TensorFlow, a flexible dataflow-based programming model, as well as single machine and distributed implementations of this programming model. The system is borne from real-world experience in conducting research and deploying more than one hundred machine learning projects throughout a wide range of Google products and services. We have open sourced a version of TensorFlow, and hope that a vibrant shared community develops around the use of TensorFlow. We are excited to see how others outside of Google make use of TensorFlow in their own work.
# Acknowledgements
The development of TensorFlow has benefitted enormously from the large and broad machine learning community at Google, and in particular from the suggestions and contributions from the rest of the Google Brain team and also from the hundreds of DistBelief and TensorFlow users within Google. Without a doubt, the usability and functionality of TensorFlow has been greatly expanded by listening to their feedback.
Many individuals have contributed to TensorFlow and to its open source release, including John Giannandrea (for creating a supportive research environment), Irina Kofman and Phing Turner (project management), Bill Gruber and David Westbrook (technical writing), Dave Andersen, Anelia Angelova, Yaroslav Bulatov, Jianmin Chen, Jerjou Cheng, George Dahl, Andrew Dai, Lucy Gao, mig Gerard, Stephan Gouws, Naveen Kumar, Geoffrey Hinton, Mrinal Kalakrishnan, Anjuli Kannan, Yutaka Leon-Suematsu, Frank Li, Peter Liu, Xiaobing Liu, Nishant Patil, Pierre Sermanet, Noam Shazeer, Jascha Sohl-Dickstein, Philip Tucker, Yonghui Wu, Ke Yang, and Cliff Young (general contributions), Doug Fritz, Patrick Hurst, Dilip Krishnan, Daniel Smilkov, James Wexler, Jimbo Wilson, Kanit Ham Wongsuphasawat, Cassandra Xia, and the Big Picture team (graph visualization), Chris Leary, Robert Springer and the Stream Executor team, Kayur Patel, Michael Piatek, and the coLab team, and the many others who have contributed to the TensorFlow design and code base.
# References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] Anelia Angelova, Alex Krizhevsky, and Vincent Van- houcke. Pedestrian detection with a large-ï¬eld-of-view deep network. In Robotics and Automation (ICRA), 2015 IEEE International Conference on, pages 704â711. IEEE, 2015. CalTech PDF.
[3] Arvind and David E. Culler. Annual review of computer science vol. 1, 1986, chapter Dataflow Architectures, pages 225–253. www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA166235.

[4] Arvind and Rishiyur S. Nikhil. Executing a program on the MIT tagged-token dataflow architecture. IEEE Trans. Comput., 39(3):300–318, 1990. dl.acm.org/citation.cfm?id=78583.
[5] Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014. arxiv.org/abs/1412.7755.
[6] Françoise Beaufays. The neural networks behind Google Voice transcription, 2015. googleresearch.blogspot.com/2015/08/the-neural-networks-behind-google-voice.html.
[7] James Bergstra, Olivier Breuleux, Fr´ed´eric Bastien, Pas- cal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: A CPU and GPU math expression compiler. In Proceedings of the Python for scientiï¬c computing con- ference (SciPy), volume 4, page 3. Austin, TX, 2010. UMontreal PDF.
[8] Craig Chambers, Ashish Raniwala, Frances Perry, Stephen Adams, Robert R Henry, Robert Bradshaw, and Nathan Weizenbaum. FlumeJava: easy, efficient data-parallel pipelines. In ACM Sigplan Notices, volume 45, pages 363–375. ACM, 2010. research.google.com/pubs/archive/35650.pdf.
[9] Sharan Chetlur, Cliff Woolley, Philippe Vandermer- sch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efï¬cient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014. arxiv.org/abs/1410.0759.
[10] Trishul Chilimbi, Yutaka Suzue, Johnson Apacible, and Project Adam: Building an Karthik Kalyanaraman. efï¬cient and scalable deep learning training system. In 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pages 571â582, 2014. www.usenix.org/system/ï¬les/conference/osdi14/osdi14- paper-chilimbi.pdf.
[11] Jack Clark. Google turning its lucrative web search over to AI machines, 2015. Bloomberg Technology. www.bloomberg.com/news/articles/2015-10-26/google-turning-its-lucrative-web-search-over-to-ai-machines.
[12] Cliff Click. Global code motion/global value number- ing. In ACM SIGPLAN Notices, volume 30, pages 246â 257. ACM, 1995. courses.cs.washington.edu/courses/ cse501/06wi/reading/click-pldi95.pdf.
[13] Ronan Collobert, Samy Bengio, and Johnny Mariéthoz. Torch: A modular machine learning software library. Technical report, IDIAP, 2002. infoscience.epfl.ch/record/82802/files/rr02-46.pdf.
[14] Jeffrey Dean, Gregory S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, MarcâAurelio Ranzato, Andrew Senior, Paul Tucker,
Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, 2012. Google Research PDF.
[15] Jack J Dongarra, Jeremy Du Croz, Sven Hammarling, and Iain S Duff. A set of level 3 basic linear algebra subprograms. ACM Transactions on Mathematical Software (TOMS), 16(1):1–17, 1990. www.maths.manchester.ac.uk/~sven/pubs/Level3BLAS-1-TOMS16-90.pdf.

[16] Andrea Frome, Greg S Corrado, Jonathon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. DeVISE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, pages 2121–2129, 2013. research.google.com/pubs/archive/41473.pdf.
[17] Javier Gonzalez-Dominguez, Ignacio Lopez-Moreno, Pe- dro J Moreno, and Joaquin Gonzalez-Rodriguez. Frame- by-frame language identiï¬cation in short utterances using deep neural networks. Neural Networks, 64:49â58, 2015.
[18] Otavio Good. How Google Translate squeezes deep learning onto a phone, 2015. googleresearch.blogspot.com/2015/07/how-google-translate-squeezes-deep.html.
[19] Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay Shet. Multi-digit number recognition from Street View imagery using deep convolutional neu- In International Conference on Learning ral networks. Representations, 2014. arxiv.org/pdf/1312.6082.
[20] Georg Heigold, Vincent Vanhoucke, Alan Senior, Patrick Nguyen, Marc'Aurelio Ranzato, Matthieu Devin, and Jeffrey Dean. Multilingual acoustic models using distributed deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8619–8623. IEEE, 2013. research.google.com/pubs/archive/40807.pdf.

[21] Geoffrey E. Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82–97, 2012. www.cs.toronto.edu/~gdahl/papers/deepSpeechReviewSPM2012.pdf.
[22] Sepp Hochreiter and J¨urgen Schmidhuber. Long short- term memory. Neural computation, 9(8):1735â1780, 1997. ftp.idsia.ch/pub/juergen/lstm.pdf.
[23] Sergey Ioffe and Christian Szegedy. Batch normaliza- tion: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. arxiv.org/abs/1502.03167.
[24] Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, and Dennis Fetterly. Dryad: distributed data-parallel programs from sequential building blocks. In ACM SIGOPS Operating Systems Review, volume 41, pages 59–72. ACM, 2007. www.michaelisard.com/pubs/eurosys07.pdf.

[25] Benoît Jacob, Gaël Guennebaud, et al. Eigen library for linear algebra. eigen.tuxfamily.org.

[26] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, pages 675–678. ACM, 2014. arxiv.org/pdf/1408.5093.

[27] Andrej Karpathy, George Toderici, Sachin Shetty, Tommy Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1725–1732. research.google.com/pubs/archive/42455.pdf.
[28] A Krizhevsky. Cuda-convnet, 2014. code.google.com/p/cuda-convnet/.
[29] Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014. arxiv.org/abs/1404.5997.
[30] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset. www.cs.toronto.edu/Ëkriz/cifar.html.
[31] Quoc Le, MarcâAurelio Ranzato, Rajat Monga, Matthieu Devin, Greg Corrado, Kai Chen, Jeff Dean, and Andrew Ng. Building high-level features using large scale unsu- pervised learning. In ICMLâ2012, 2012. Google Research PDF.
[32] Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten digits, 1998. yann.lecun.com/exdb/mnist/.
[33] Mu Li, Dave Andersen, and Alex Smola. Parameter server. parameterserver.org.
[34] Chris J Maddison, Aja Huang, Ilya Sutskever, and David Silver. Move evaluation in Go using deep convolutional neural networks. arXiv preprint arXiv:1412.6564, 2014. arxiv.org/abs/1412.6564.
[35] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations: Workshops Track, 2013. arxiv.org/abs/1301.3781.

[36] Derek G Murray, Frank McSherry, Rebecca Isaacs, Michael Isard, Paul Barham, and Martín Abadi. Naiad: a timely dataflow system. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, pages 439–455. ACM, 2013. Microsoft Research PDF.

[37] Derek G. Murray, Malte Schwarzkopf, Christopher Smowton, Steven Smith, Anil Madhavapeddy, and Steven Hand. CIEL: a universal execution engine for distributed data-flow computing. In Proceedings of the Ninth USENIX Symposium on Networked Systems Design and Implementation, 2011. Usenix PDF.

[38] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015. arxiv.org/abs/1507.04296.

[39] CUDA Nvidia. CUBLAS library. NVIDIA Corporation, Santa Clara, California, 15, 2008. developer.nvidia.com/cublas.
[40] Jonathan Ragan-Kelley, Connelly Barnes, Andrew Adams, Sylvain Paris, Frédo Durand, and Saman Amarasinghe. Halide: A language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. ACM SIGPLAN Notices, 48(6):519–530, 2013. people.csail.mit.edu/fredo/tmp/Halide-5min.pdf.
[41] Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, and Vijay Pande. Massively multitask networks for drug discovery. arXiv preprint arXiv:1502.02072, 2015. arxiv.org/abs/1502.02072.
[42] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693–701, 2011. papers.nips.cc/paper/4390-hogwild-a-lock-free-approach-to-parallelizing-stochastic-gradient-descent.

[43] Chuck Rosenberg. Improving Photo Search: A step across the semantic gap, 2013. googleresearch.blogspot.com/2013/06/improving-photo-search-step-across.html.
[44] Christopher J Rossbach, Yuan Yu, Jon Currey, Jean- Philippe Martin, and Dennis Fetterly. Dandelion: a compiler and runtime for heterogeneous systems. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, pages 49â68. ACM, 2013. research-srv.microsoft.com/pubs/201110/sosp13- dandelion-ï¬nal.pdf.
[45] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Cognitive modeling, 5:3, 1988. www.cs.toronto.edu/~hinton/absps/naturebp.pdf.

[46] Haşim Sak, Andrew Senior, Kanishka Rao, Françoise Beaufays, and Johan Schalkwyk. Google Voice Search: faster and more accurate, 2015. googleresearch.blogspot.com/2015/09/google-voice-search-faster-and-more.html.
[47] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence In NIPS, to sequence learning with neural networks. 2014. papers.nips.cc/paper/5346-sequence-to-sequence- learning-with-neural.
[48] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR'2015, 2015. arxiv.org/abs/1409.4842.

[49] Seiya Tokui. Chainer: A powerful, flexible and intuitive framework of neural networks. chainer.org.
[50] Vincent Vanhoucke. Speech recognition and deep learn- ing, 2015. googleresearch.blogspot.com/2012/08/speech- recognition-and-deep-learning.html.
[51] Abhishek Verma, Luis Pedrosa, Madhukar Korupolu, David Oppenheimer, Eric Tune, and John Wilkes. Large-scale cluster management at Google with Borg. the Tenth European Conference In Proceedings of on Computer Systems, page 18. ACM, 2015. re- search.google.com/pubs/archive/43438.pdf.
[52] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign language. Technical report, arXiv:1412.7449, 2014. arxiv.org/abs/1412.7449.
[53] Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, 2015. arxiv.org/abs/1506.03134.

[54] Dong Yu, Adam Eversole, Mike Seltzer, Kaisheng Yao, Zhiheng Huang, Brian Guenter, Oleksii Kuchaiev, Yu Zhang, Frank Seide, Huaming Wang, et al. An introduction to computational networks and the computational network toolkit. Technical report, Tech. Rep. MSR, Microsoft Research, 2014. research.microsoft.com/apps/pubs/?id=226641.
[55] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J Franklin, Scott Shenker, and Ion Stoica. Resilient distributed datasets: A fault-tolerant abstraction for In Proceedings of the in-memory cluster computing. 9th USENIX conference on Networked Systems De- sign and Implementation. USENIX Association, 2012. www.usenix.org/system/ï¬les/conference/nsdi12/nsdi12- ï¬nal138.pdf.
[56] Matthew D. Zeiler, Marc'Aurelio Ranzato, Rajat Monga, Mark Mao, Ke Yang, Quoc Le, Patrick Nguyen, Andrew Senior, Vincent Vanhoucke, Jeff Dean, and Geoffrey E. Hinton. On rectified linear units for speech processing. In ICASSP, 2013. research.google.com/pubs/archive/40811.pdf.
"id": "1502.02072"
} |
1603.01360 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | 6 1 0 2
r p A 7 ] L C . s c [
3 v 0 6 3 1 0 . 3 0 6 1 : v i X r a
# Neural Architectures for Named Entity Recognition

Guillaume Lample♠ Miguel Ballesteros♣♠ Sandeep Subramanian♠ Kazuya Kawakami♠ Chris Dyer♠
♠Carnegie Mellon University ♣NLP Group, Pompeu Fabra University
{glample,sandeeps,kkawakam,cdyer}@cs.cmu.edu, miguel.ballesteros@upf.edu
# Abstract
State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures: one based on bidirectional LSTMs and conditional random fields, and the other that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.
# 1 Introduction
Named entity recognition (NER) is a challenging learning problem. On the one hand, in most languages and domains, there is only a very small amount of supervised training data available. On the other, there are few constraints on the kinds of words that can be names, so generalizing from this small sample of data is difficult. As a result, carefully constructed orthographic features and language-specific knowledge resources, such as gazetteers, are widely used for solving this task. Unfortunately, language-specific resources and features are costly to develop in new languages and new domains, making NER a challenge to adapt. Unsupervised learning
from unannotated corpora offers an alternative strat- egy for obtaining better generalization from small amounts of supervision. However, even systems that have relied extensively on unsupervised fea- tures (Collobert et al., 2011; Turian et al., 2010; Lin and Wu, 2009; Ando and Zhang, 2005b, in- ter alia) have used these to augment, rather than replace, hand-engineered features (e.g., knowledge about capitalization patterns and character classes in a particular language) and specialized knowledge re- sources (e.g., gazetteers).
In this paper, we present neural architectures for NER that use no language-specific resources or features beyond a small amount of supervised training data and unlabeled corpora. Our models are designed to capture two intuitions. First, since names often consist of multiple tokens, reasoning jointly over tagging decisions for each token is important. We compare two models here: (i) a bidirectional LSTM with a sequential conditional random layer above it (LSTM-CRF; §2), and (ii) a new model that constructs and labels chunks of input sentences using an algorithm inspired by transition-based parsing with states represented by stack LSTMs (S-LSTM; §3). Second, token-level evidence for "being a name" includes both orthographic evidence (what does the word being tagged as a name look like?) and distributional evidence (where does the word being tagged tend to occur in a corpus?). To capture orthographic sensitivity, we use a character-based word representation model (Ling et al., 2015b); to capture distributional sensitivity, we combine these representations with distributional representations (Mikolov et al., 2013b). Our word representations combine both of these, and dropout training is used to encourage the model to learn to trust both sources of evidence (§4).
1The code of the LSTM-CRF and Stack-LSTM NER systems is available at https://github.com/glample/tagger and https://github.com/clab/stack-lstm-ner.
Experiments in English, Dutch, German, and Spanish show that we are able to obtain state-
of-the-art NER performance with the LSTM-CRF model in Dutch, German, and Spanish, and very near the state-of-the-art in English without any hand-engineered features or gazetteers (§5). The transition-based algorithm likewise surpasses the best previously published results in several lan- guages, although it performs less well than the LSTM-CRF model.
# 2 LSTM-CRF Model
We provide a brief description of LSTMs and CRFs, and present a hybrid tagging architecture. This ar- chitecture is similar to the ones presented by Col- lobert et al. (2011) and Huang et al. (2015).
# 2.1 LSTM
Recurrent neural networks (RNNs) are a family of neural networks that operate on sequential data. They take as input a sequence of vectors (x1, x2, . . . , xn) and return another sequence (h1, h2, . . . , hn) that represents some information about the sequence at every step in the input. Although RNNs can, in theory, learn long dependencies, in practice they fail to do so and tend to be biased towards their most recent inputs in the sequence (Bengio et al., 1994). Long Short-term Memory Networks (LSTMs) have been designed to combat this issue by incorporating a memory cell, and have been shown to capture long-range dependencies. They do so using several gates that control the proportion of the input to give to the memory cell, and the proportion from the previous state to forget (Hochreiter and Schmidhuber, 1997). We use the following implementation:
$$\begin{aligned}
i_t &= \sigma(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i) \\
c_t &= (1 - i_t) \odot c_{t-1} + i_t \odot \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c) \\
o_t &= \sigma(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o) \\
h_t &= o_t \odot \tanh(c_t),
\end{aligned}$$

where $\sigma$ is the element-wise sigmoid function, and $\odot$ is the element-wise product.
For a given sentence (x1, x2, . . . , xn) containing n words, each represented as a d-dimensional vector, an LSTM computes a representation $\overrightarrow{h_t}$ of the left context of the sentence at every word t. Naturally, generating a representation $\overleftarrow{h_t}$ of the right context as well should add useful information. This can be achieved using a second LSTM that reads the same sequence in reverse. We will refer to the former as the forward LSTM and the latter as the backward LSTM. These are two distinct networks with different parameters. This forward and backward LSTM pair is referred to as a bidirectional LSTM (Graves and Schmidhuber, 2005).

The representation of a word using this model is obtained by concatenating its left and right context representations, $h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}]$. These representations effectively include a representation of a word in context, which is useful for numerous tagging applications.
# 2.2 CRF Tagging Models
A very simple, but surprisingly effective, tagging model is to use the ht's as features to make independent tagging decisions for each output yt (Ling et al., 2015b). Despite this model's success in simple problems like POS tagging, its independent classification decisions are limiting when there are strong dependencies across output labels. NER is one such task, since the "grammar" that characterizes interpretable sequences of tags imposes several hard constraints (e.g., I-PER cannot follow B-LOC; see §2.4 for details) that would be impossible to model with independence assumptions.
Therefore, instead of modeling tagging decisions independently, we model them jointly using a conditional random field (Lafferty et al., 2001). For an input sentence
X = (x1, x2, . . . , xn),
we consider P to be the matrix of scores output by the bidirectional LSTM network. P is of size n à k, where k is the number of distinct tags, and Pi,j cor- responds to the score of the jth tag of the ith word in a sentence. For a sequence of predictions
y = (y1, y2, . . . , yn),
we define its score to be

$$s(X, y) = \sum_{i=0}^{n} A_{y_i, y_{i+1}} + \sum_{i=1}^{n} P_{i, y_i},$$
where A is a matrix of transition scores such that Ai,j represents the score of a transition from the tag i to tag j. y0 and yn are the start and end tags of a sentence, that we add to the set of possi- ble tags. A is therefore a square matrix of size k +2.
A softmax over all possible tag sequences yields a probability for the sequence y:
$$p(y|X) = \frac{e^{s(X, y)}}{\sum_{\widetilde{y} \in Y_X} e^{s(X, \widetilde{y})}}.$$
During training, we maximize the log-probability of the correct tag sequence:
$$\log(p(y|X)) = s(X, y) - \log\Bigg(\sum_{\widetilde{y} \in Y_X} e^{s(X, \widetilde{y})}\Bigg) = s(X, y) - \operatorname*{logadd}_{\widetilde{y} \in Y_X} s(X, \widetilde{y}), \qquad (1)$$
where YX represents all possible tag sequences (even those that do not verify the IOB format) for a sentence X. From the formulation above, it is ev- ident that we encourage our network to produce a valid sequence of output labels. While decoding, we predict the output sequence that obtains the maxi- mum score given by:
$$y^{*} = \operatorname*{argmax}_{\widetilde{y} \in Y_X} s(X, \widetilde{y}). \qquad (2)$$
Since we are only modeling bigram interactions between outputs, both the summation in Eq. 1 and the maximum a posteriori sequence y* in Eq. 2 can be computed using dynamic programming.
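For concreteness, the decoding in Eq. 2 is just Viterbi. A NumPy sketch under the conventions above (tags k and k+1 as the start and end tags) could look like:

```python
import numpy as np

def viterbi_decode(P, A):
    """Max-scoring tag sequence for emission scores P (n x k) and
    transition scores A ((k+2) x (k+2)); start tag is k, end tag is k+1."""
    n, k = P.shape
    start, end = k, k + 1
    dp = A[start, :k] + P[0]                 # best score ending in each tag, word 1
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        scores = dp[:, None] + A[:k, :k] + P[i][None, :]
        back[i] = scores.argmax(axis=0)
        dp = scores.max(axis=0)
    dp = dp + A[:k, end]                     # transition into the end tag
    tags = [int(dp.argmax())]
    for i in range(n - 1, 0, -1):            # follow back-pointers
        tags.append(int(back[i, tags[-1]]))
    return tags[::-1]
```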
# 2.3 Parameterization and Training
The scores associated with each tagging decision for each token (i.e., the $P_{i,y}$'s) are defined to be the dot product between the embedding of a word-in-context computed with a bidirectional LSTM (exactly the same as the POS tagging model of Ling et al. (2015b)), and these are combined with bigram compatibility scores (i.e., the $A_{y,y'}$'s). This architecture is shown in Figure 1. Circles represent observed variables, diamonds are deterministic functions of their parents, and double circles are random variables.
Figure 1: Main architecture of the network. Word embeddings are given to a bidirectional LSTM. li represents the word i and its left context, ri represents the word i and its right context. Concatenating these two vectors yields a representation of the word i in its context, ci.
The parameters of this model are thus the matrix of bigram compatibility scores A, and the parame- ters that give rise to the matrix P, namely the param- eters of the bidirectional LSTM, the linear feature weights, and the word embeddings. As in part 2.2, let xi denote the sequence of word embeddings for every word in a sentence, and yi be their associated tags. We return to a discussion of how the embed- dings xi are modeled in Section 4. The sequence of word embeddings is given as input to a bidirectional LSTM, which returns a representation of the left and right context for each word as explained in 2.1.
These representations are concatenated (ci) and linearly projected onto a layer whose size is equal to the number of distinct tags. Instead of using the softmax output from this layer, we use a CRF as pre- viously described to take into account neighboring tags, yielding the ï¬nal predictions for every word yi. Additionally, we observed that adding a hidden layer between ci and the CRF layer marginally im- proved our results. All results reported with this model incorporate this extra-layer. The parameters are trained to maximize Eq. 1 of observed sequences of NER tags in an annotated corpus, given the ob- served words.
# 2.4 Tagging Schemes
The task of named entity recognition is to assign a named entity label to every word in a sentence. A single named entity could span several tokens within a sentence. Sentences are usually represented in the IOB format (Inside, Outside, Beginning), where every token is labeled as B-label if the token is the beginning of a named entity, I-label if it is inside a named entity but not the first token within the named entity, or O otherwise. However, we decided to use the IOBES tagging scheme, a variant of IOB commonly used for named entity recognition, which encodes information about singleton entities (S) and explicitly marks the end of named entities (E). Using this scheme, tagging a word as I-label with high confidence narrows down the choices for the subsequent word to I-label or E-label; the IOB scheme, however, is only capable of determining that the subsequent word cannot be the interior of another label. Ratinov and Roth (2009) and Dai et al. (2015) showed that using a more expressive tagging scheme like IOBES improves model performance marginally. However, we did not observe a significant improvement over the IOB tagging scheme.
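The conversion between the two schemes is mechanical; a small sketch:

```python
def iob_to_iobes(tags):
    """Convert IOB tags (e.g. B-PER, I-PER, O) to IOBES."""
    out = []
    for i, tag in enumerate(tags):
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        if tag.startswith("B-"):
            out.append(tag if nxt == "I-" + tag[2:] else "S-" + tag[2:])
        elif tag.startswith("I-"):
            out.append(tag if nxt == "I-" + tag[2:] else "E-" + tag[2:])
        else:
            out.append(tag)
    return out

# iob_to_iobes(["B-PER", "I-PER", "O", "B-LOC"]) -> ["B-PER", "E-PER", "O", "S-LOC"]
```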
# 3 Transition-Based Chunking Model
As an alternative to the LSTM-CRF discussed in the previous section, we explore a new architecture that chunks and labels a sequence of inputs using an algorithm similar to transition-based dependency parsing. This model directly constructs representa- tions of the multi-token names (e.g., the name Mark Watney is composed into a single representation).
This model relies on a stack data structure to in- crementally construct chunks of the input. To ob- tain representations of this stack used for predict- ing subsequent actions, we use the Stack-LSTM pre- sented by Dyer et al. (2015), in which the LSTM is augmented with a âstack pointer.â While sequen- tial LSTMs model sequences from left to right, stack LSTMs permit embedding of a stack of objects that are both added to (using a push operation) and re- moved from (using a pop operation). This allows the Stack-LSTM to work like a stack that maintains a âsummary embeddingâ of its contents. We refer to this model as Stack-LSTM or S-LSTM model for simplicity.
Finally, we refer interested readers to the original paper (Dyer et al., 2015) for details about the Stack- LSTM model since in this paper we merely use the same architecture through a new transition-based al- gorithm presented in the following Section.
# 3.1 Chunking Algorithm
We designed a transition inventory which is given in Figure 2 that is inspired by transition-based parsers, in particular the arc-standard parser of Nivre (2004). In this algorithm, we make use of two stacks (designated output and stack representing, respectively, completed chunks and scratch space) and a buffer that contains the words that have yet to be processed. The transition inventory contains the following transitions: The SHIFT transition moves a word from the buffer to the stack, the OUT transition moves a word from the buffer directly into the output stack, while the REDUCE(y) transition pops all items from the top of the stack creating a "chunk," labels this with label y, and pushes a representation of this chunk onto the output stack. The algorithm completes when the stack and buffer are both empty. The algorithm is depicted in Figure 2, which shows the sequence of operations required to process the sentence Mark Watney visited Mars.

The model is parameterized by defining a probability distribution over actions at each time step, given the current contents of the stack, buffer, and output, as well as the history of actions taken. Following Dyer et al. (2015), we use stack LSTMs to compute a fixed dimensional embedding of each of these, and take a concatenation of these to obtain the full algorithm state. This representation is used to define a distribution over the possible actions that can be taken at each time step. The model is trained to maximize the conditional probability of sequences of reference actions (extracted from a labeled training corpus) given the input sentences. To label a new input sequence at test time, the maximum probability action is chosen greedily until the algorithm reaches a termination state. Although this is not guaranteed to find a global optimum, it is effective in practice. Since each token is either moved directly to the output (1 action) or first to the stack and then the output (2 actions), the total number of actions for a sequence of length n is maximally 2n. It is worth noting that the nature of this algorithm
| Out_t | Stack_t | Buffer_t | Action | Out_{t+1} | Stack_{t+1} | Buffer_{t+1} | Segments |
|---|---|---|---|---|---|---|---|
| O | S | (u, u), B | SHIFT | O | (u, u), S | B | - |
| O | (u, u), ..., (v, v), S | B | REDUCE(y) | g(u, ..., v, r_y), O | S | B | (u ... v, y) |
| O | S | (u, u), B | OUT | g(u, r_∅), O | S | B | - |
Figure 2: Transitions of the Stack-LSTM model indicating the action applied and the resulting state. Bold symbols indicate (learned) embeddings of words and relations, script symbols indicate the corresponding words and relations.
| Transition | Output | Stack | Buffer | Segment |
|---|---|---|---|---|
| | [] | [] | [Mark, Watney, visited, Mars] | |
| SHIFT | [] | [Mark] | [Watney, visited, Mars] | |
| SHIFT | [] | [Mark, Watney] | [visited, Mars] | |
| REDUCE(PER) | [(Mark Watney)-PER] | [] | [visited, Mars] | (Mark Watney)-PER |
| OUT | [(Mark Watney)-PER, visited] | [] | [Mars] | |
| SHIFT | [(Mark Watney)-PER, visited] | [Mars] | [] | |
| REDUCE(LOC) | [(Mark Watney)-PER, visited, (Mars)-LOC] | [] | [] | (Mars)-LOC |
Figure 3: Transition sequence for Mark Watney visited Mars with the Stack-LSTM model.
makes it agnostic to the tagging scheme used since it directly predicts labeled chunks.
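To make the transition semantics concrete, the following sketch (our own minimal Python, with gold actions supplied rather than predicted) replays the sequence of Figure 3:

```python
def run_transitions(words, actions):
    """Replay a transition sequence (cf. Figure 3).
    REDUCE pops the whole stack into one labeled chunk."""
    buffer, stack, output = list(words), [], []
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act == "OUT":
            output.append(buffer.pop(0))
        else:  # act is a ("REDUCE", label) pair
            chunk, stack = stack, []
            output.append("(" + " ".join(chunk) + ")-" + act[1])
    return output

print(run_transitions(
    ["Mark", "Watney", "visited", "Mars"],
    ["SHIFT", "SHIFT", ("REDUCE", "PER"), "OUT", "SHIFT", ("REDUCE", "LOC")]))
# ['(Mark Watney)-PER', 'visited', '(Mars)-LOC']
```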
# 3.2 Representing Labeled Chunks
When the REDUCE(y) operation is executed, the algorithm shifts a sequence of tokens (together with their vector embeddings) from the stack to the output buffer as a single completed chunk. To compute an embedding of this sequence, we run a bidirectional LSTM over the embeddings of its constituent tokens together with a token representing the type of the chunk being identified (i.e., y). This function is given as g(u, ..., v, r_y), where r_y is a learned embedding of a label type. Thus, the output buffer contains a single vector representation for each labeled chunk that is generated, regardless of its length.
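A minimal sketch of how g(u, ..., v, r_y) could be realized, assuming PyTorch; the class name and dimensions are ours, not the paper's:

```python
import torch
import torch.nn as nn

class ChunkComposer(nn.Module):
    """Compose a labeled chunk: biLSTM over the chunk's token embeddings
    plus a learned label embedding r_y, returning a fixed-size vector."""
    def __init__(self, dim, n_labels):
        super().__init__()
        self.label_emb = nn.Embedding(n_labels, dim)
        self.lstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)

    def forward(self, token_embs, label_id):       # token_embs: (1, chunk_len, dim)
        r_y = self.label_emb(label_id).view(1, 1, -1)
        seq = torch.cat([token_embs, r_y], dim=1)  # append the label token
        _, (h, _) = self.lstm(seq)
        return torch.cat([h[0, 0], h[1, 0]])       # concat final fwd/bwd states

composer = ChunkComposer(dim=16, n_labels=4)
chunk = composer(torch.randn(1, 2, 16), torch.tensor(1))  # e.g. "Mark Watney"-PER
print(chunk.shape)  # torch.Size([32])
```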
# 4 Input Word Embeddings
The input layers to both of our models are vector representations of individual words. Learning independent representations for word types from the limited NER training data is a difficult problem: there are simply too many parameters to reliably estimate. Since many languages have orthographic or morphological evidence that something is a name (or not a name), we want representations that are sensitive to the spelling of words. We therefore use a model that constructs representations of words from representations of the characters they are composed of (4.1). Our second intuition is that names, which may individually be quite varied, appear in regular contexts in large corpora. Therefore we use embeddings learned from a large corpus that are sensitive to word order (4.2). Finally, to prevent the models from depending on one representation or the other too strongly, we use dropout training and find this is crucial for good generalization performance (4.3).

Figure 4: The character embeddings of the word "Mars" are given to bidirectional LSTMs. We concatenate their last outputs to an embedding from a lookup table to obtain a representation for this word.
# 4.1 Character-based models of words
An important distinction of our work from most previous approaches is that we learn character-level features while training instead of hand-engineering prefix and suffix information about words. Learning character-level embeddings has the advantage of learning representations specific to the task and domain at hand. They have been found useful for morphologically rich languages and to handle the out-of-vocabulary problem for tasks like part-of-speech tagging and language modeling (Ling et al., 2015b) or dependency parsing (Ballesteros et al., 2015).
Figure 4 describes our architecture to generate a word embedding for a word from its characters. A character lookup table initialized at random contains an embedding for every character. The character embeddings corresponding to every character in a word are given in direct and reverse order to a forward and a backward LSTM. The embedding for a word derived from its characters is the concatenation of its forward and backward representations from the bidirectional LSTM. This character-level representation is then concatenated with a word-level representation from a word lookup-table. During testing, words that do not have an embedding in the lookup table are mapped to a UNK embedding. To train the UNK embedding, we replace singletons with the UNK embedding with a probability 0.5. In all our experiments, the hidden dimension of the forward and backward character LSTMs are 25 each, which results in our character-based representation of words being of dimension 50.
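A sketch of this character-level component, assuming PyTorch (the class name is ours; dimensions follow the paper: 25 hidden units per direction, giving a 50-dimensional character representation):

```python
import torch
import torch.nn as nn

class CharWordEmbedder(nn.Module):
    """Character-based word embedding (Figure 4): biLSTM over characters,
    final forward and backward states concatenated."""
    def __init__(self, n_chars, char_dim=25, hidden=25):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, char_ids):          # char_ids: (1, word_len)
        x = self.char_emb(char_ids)       # (1, word_len, char_dim)
        _, (h, _) = self.lstm(x)          # h: (2, 1, hidden) final states
        return torch.cat([h[0, 0], h[1, 0]])  # (50,)

emb = CharWordEmbedder(n_chars=80)
print(emb(torch.tensor([[4, 11, 27, 9]])).shape)  # torch.Size([50])
```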
Recurrent models like RNNs and LSTMs are capable of encoding very long sequences; however, they have a representation biased towards their most recent inputs. As a result, we expect the final representation of the forward LSTM to be an accurate representation of the suffix of the word, and the final state of the backward LSTM to be a better representation of its prefix. Alternative approaches, most notably convolutional networks, have been proposed to learn representations of words from their characters (Zhang et al., 2015; Kim et al., 2015). However, convnets are designed to discover position-invariant features of their inputs. While this is appropriate for many problems, e.g., image recognition (a cat can appear anywhere in a picture), we argue that important information is position dependent (e.g., prefixes and suffixes encode different information than stems), making LSTMs an a priori better function class for modeling the relationship between words and their characters.
# 4.2 Pretrained embeddings
As in Collobert et al. (2011), we use pretrained word embeddings to initialize our lookup table. We observe significant improvements using pretrained word embeddings over randomly initialized ones. Embeddings are pretrained using skip-n-gram (Ling et al., 2015a), a variation of word2vec (Mikolov et al., 2013a) that accounts for word order. These embeddings are fine-tuned during training.

Word embeddings for Spanish, Dutch, German and English are trained using the Spanish Gigaword version 3, the Leipzig corpora collection, the German monolingual training data from the 2010 Machine Translation Workshop and the English Gigaword version 4 (with the LA Times and NY Times portions removed) respectively.2 We use an embedding dimension of 100 for English, 64 for other languages, a minimum word frequency cutoff of 4, and a window size of 8.
# 4.3 Dropout training
Initial experiments showed that character-level embeddings did not improve our overall performance when used in conjunction with pretrained word representations. To encourage the model to depend on both representations, we use dropout training (Hinton et al., 2012), applying a dropout mask to the final embedding layer just before the input to the bidirectional LSTM in Figure 1. We observe a significant improvement in our model's performance after using dropout (see table 5).
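Concretely, the mask is applied once to the concatenated embedding; a hedged PyTorch-style fragment (identifiers and sizes are ours):

```python
import torch
import torch.nn.functional as F

word_emb = torch.randn(100)   # pretrained lookup embedding (hypothetical size)
char_emb = torch.randn(50)    # character-based representation from Section 4.1

embedding = torch.cat([word_emb, char_emb], dim=-1)
embedding = F.dropout(embedding, p=0.5, training=True)  # mask applied before the biLSTM
```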
# 5 Experiments
This section presents the methods we use to train our models, the results we obtained on various tasks and the impact of our networks' configuration on model performance.
# 5.1 Training
For both models presented, we train our networks using the back-propagation algorithm updating our parameters on every training example, one at a time, using stochastic gradient descent (SGD) with a learning rate of 0.01 and a gradient clipping of 5.0. Several methods have been proposed to enhance the performance of SGD, such as Adadelta (Zeiler, 2012) or Adam (Kingma and Ba, 2014). Although we observe faster convergence using these methods, none of them perform as well as SGD with gradient clipping.

2 (Graff, 2011; Biemann et al., 2007; Callison-Burch et al., 2010; Parker et al., 2009)
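A minimal sketch of this training scheme, assuming PyTorch; the stand-in model and data are ours (the real objective is the CRF negative log-likelihood):

```python
import torch

model = torch.nn.Linear(10, 5)                 # stand-in for the LSTM-CRF
data = [(torch.randn(10), torch.tensor(2)) for _ in range(8)]
loss_fn = torch.nn.CrossEntropyLoss()          # stand-in for the CRF objective

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for x, y in data:                              # parameter update per example
    optimizer.zero_grad()
    loss = loss_fn(model(x).unsqueeze(0), y.unsqueeze(0))
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)  # clip gradients at 5.0
    optimizer.step()
```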
Our LSTM-CRF model uses a single layer for the forward and backward LSTMs whose dimensions are set to 100. Tuning this dimension did not significantly impact model performance. We set the dropout rate to 0.5. Using higher rates negatively impacted our results, while smaller rates led to longer training time.

The stack-LSTM model uses two layers each of dimension 100 for each stack. The embeddings of the actions used in the composition functions have 16 dimensions each, and the output embedding is of dimension 20. We experimented with different dropout rates and reported the scores using the best dropout rate for each language.3 It is a greedy model that applies locally optimal actions until the entire sentence is processed; further improvements might be obtained with beam search (Zhang and Clark, 2011) or training with exploration (Ballesteros et al., 2016).
# 5.2 Data Sets
We test our model on different datasets for named entity recognition. To demonstrate our model's ability to generalize to different languages, we present results on the CoNLL-2002 and CoNLL-2003 datasets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) that contain independent named entity labels for English, Spanish, German and Dutch. All datasets contain four different types of named entities: locations, persons, organizations, and miscellaneous entities that do not belong in any of the three previous categories. Although POS tags were made available for all datasets, we did not include them in our models. We did not perform any dataset preprocessing, apart from replacing every digit with a zero in the English NER dataset.

3 English (D=0.2), German, Spanish and Dutch (D=0.3)
# 5.3 Results
Table 1 presents our comparisons with other models for named entity recognition in English. To make the comparison between our model and others fair, we report the scores of other models with and without the use of external labeled data such as gazetteers and knowledge bases. Our models do not use gazetteers or any external labeled resources. The best score reported on this task is by Luo et al. (2015). They obtained a F1 of 91.2 by jointly modeling the NER and entity linking tasks (Hoffart et al., 2011). Their model uses a lot of hand-engineered features including spelling features, WordNet clusters, Brown clusters, POS tags, chunks tags, as well as stemming and external knowledge bases like Freebase and Wikipedia. Our LSTM-CRF model outperforms all other systems, including the ones using external labeled data like gazetteers. Our Stack-LSTM model also outperforms all previous models that do not incorporate external features, apart from the one presented by Chiu and Nichols (2015).

Tables 2, 3 and 4 present our results on NER for German, Dutch and Spanish respectively in comparison to other models. On these three languages, the LSTM-CRF model significantly outperforms all previous methods, including the ones using external labeled data. The only exception is Dutch, where the model of Gillick et al. (2015) can perform better by leveraging the information from other NER datasets. The Stack-LSTM also consistently presents state-of-the-art (or close to) results compared to systems that do not use external data.

As we can see in the tables, the Stack-LSTM model is more dependent on character-based representations to achieve competitive performance; we hypothesize that the LSTM-CRF model requires less orthographic information since it gets more contextual information out of the bidirectional LSTMs; however, the Stack-LSTM model consumes the words one by one and it just relies on the word representations when it chunks words.
# 5.4 Network architectures
Our models had several components that we could tweak to understand their impact on the overall performance. We explored the impact that the CRF, the character-level representations, pretraining of our
| Model | F1 |
|---|---|
| Collobert et al. (2011)* | 89.59 |
| Lin and Wu (2009) | 83.78 |
| Lin and Wu (2009)* | 90.90 |
| Huang et al. (2015)* | 90.10 |
| Passos et al. (2014) | 90.05 |
| Passos et al. (2014)* | 90.90 |
| Luo et al. (2015)* + gaz | 89.9 |
| Luo et al. (2015)* + gaz + linking | 91.2 |
| Chiu and Nichols (2015) | 90.69 |
| Chiu and Nichols (2015)* | 90.77 |
| LSTM-CRF (no char) | 90.20 |
| LSTM-CRF | 90.94 |
| S-LSTM (no char) | 87.96 |
| S-LSTM | 90.33 |
Table 1: English NER results (CoNLL-2003 test set). * indicates models trained with the use of external labeled data
| Model | F1 |
|---|---|
| Florian et al. (2003)* | 72.41 |
| Ando and Zhang (2005a) | 75.27 |
| Qi et al. (2009) | 75.72 |
| Gillick et al. (2015) | 72.08 |
| Gillick et al. (2015)* | 76.22 |
| LSTM-CRF (no char) | 75.06 |
| LSTM-CRF | 78.76 |
| S-LSTM (no char) | 65.87 |
| S-LSTM | 75.66 |

Table 2: German NER results (CoNLL-2003 test set). * indicates models trained with the use of external labeled data
| Model | F1 |
|---|---|
| Carreras et al. (2002) | 77.05 |
| Nothman et al. (2013) | 78.6 |
| Gillick et al. (2015) | 78.08 |
| Gillick et al. (2015)* | 82.84 |
| LSTM-CRF (no char) | 73.14 |
| LSTM-CRF | 81.74 |
| S-LSTM (no char) | 69.90 |
| S-LSTM | 79.88 |

Table 3: Dutch NER (CoNLL-2002 test set). * indicates models trained with the use of external labeled data
| Model | F1 |
|---|---|
| Carreras et al. (2002)* | 81.39 |
| Santos and Guimarães (2015) | 82.21 |
| Gillick et al. (2015) | 81.83 |
| Gillick et al. (2015)* | 82.95 |
| LSTM-CRF (no char) | 83.44 |
| LSTM-CRF | 85.75 |
| S-LSTM (no char) | 79.46 |
| S-LSTM | 83.93 |

Table 4: Spanish NER (CoNLL-2002 test set). * indicates models trained with the use of external labeled data
word embeddings and dropout had on our LSTM- CRF model. We observed that pretraining our word embeddings gave us the biggest improvement in overall performance of +7.31 in F1. The CRF layer gave us an increase of +1.79, while using dropout resulted in a difference of +1.17 and ï¬nally learn-
ing character-level word embeddings resulted in an increase of about +0.74. For the Stack-LSTM we performed a similar set of experiments. Results with different architectures are given in table 5.
| Model | Variant | F1 |
|---|---|---|
| LSTM | char + dropout + pretrain | 89.15 |
| LSTM-CRF | char + dropout | 83.63 |
| LSTM-CRF | pretrain | 88.39 |
| LSTM-CRF | pretrain + char | 89.77 |
| LSTM-CRF | pretrain + dropout | 90.20 |
| LSTM-CRF | pretrain + dropout + char | 90.94 |
| S-LSTM | char + dropout | 80.88 |
| S-LSTM | pretrain | 86.67 |
| S-LSTM | pretrain + char | 89.32 |
| S-LSTM | pretrain + dropout | 87.96 |
| S-LSTM | pretrain + dropout + char | 90.33 |

Table 5: English NER results with our models, using different configurations. "pretrain" refers to models that include pretrained word embeddings, "char" refers to models that include character-based modeling of words, "dropout" refers to models that include dropout rate.
# 6 Related Work
In the CoNLL-2002 shared task, Carreras et al. (2002) obtained among the best results on both Dutch and Spanish by combining several small fixed-depth decision trees. The next year, in the CoNLL-2003 Shared Task, Florian et al. (2003) obtained the best score on German by combining the output of four diverse classifiers. Qi et al. (2009) later improved on this with a neural network by doing unsupervised learning on a massive unlabeled corpus.

Several other neural architectures have previously been proposed for NER. For instance, Collobert et al. (2011) uses a CNN over a sequence of word embeddings with a CRF layer on top. This can be thought of as our first model without character-level embeddings and with the bidirectional LSTM being replaced by a CNN. More recently, Huang et al. (2015) presented a model similar to our LSTM-CRF, but using hand-crafted spelling features. Zhou and Xu (2015) also used a similar model and adapted it to the semantic role labeling task. Lin and Wu (2009) used a linear chain CRF with L2 regularization; they added phrase cluster features extracted from the web data and spelling features. Passos et al. (2014) also used a linear chain CRF with spelling features and gazetteers.
Language independent NER models like ours have also been proposed in the past. Cucerzan and Yarowsky (1999; 2002) present semi-supervised bootstrapping algorithms for named entity recognition by co-training character-level (word-internal) and token-level (context) features. Eisenstein et al. (2011) use Bayesian nonparametrics to construct a database of named entities in an almost unsupervised setting. Ratinov and Roth (2009) quantitatively compare several approaches for NER and build their own supervised model using a regularized average perceptron and aggregating context information.

Finally, there is currently a lot of interest in models for NER that use letter-based representations. Gillick et al. (2015) model the task of sequence labeling as a sequence-to-sequence learning problem and incorporate character-based representations into their encoder model. Chiu and Nichols (2015) employ an architecture similar to ours, but instead use CNNs to learn character-level features, in a way similar to the work by Santos and Guimarães (2015).
# 7 Conclusion
This paper presents two neural architectures for sequence labeling that provide the best NER results ever reported in standard evaluation settings, even compared with models that use external resources, such as gazetteers.

A key aspect of our models is that they model output label dependencies, either via a simple CRF architecture, or using a transition-based algorithm to explicitly construct and label chunks of the input. Word representations are also crucially important for success: we use both pre-trained word representations and "character-based" representations that capture morphological and orthographic information. To prevent the learner from depending too heavily on one representation class, dropout is used.
# Acknowledgments
This work was sponsored in part by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O) under the Low Resource Languages for Emergent Incidents (LORELEI) program issued by DARPA/I2O under Contract No. HR0011-15-C-0114. Miguel Ballesteros is supported by the European Commission under the contract numbers FP7-ICT-610411 (project MULTISENSOR) and H2020-RIA-645012 (project KRISTINA).
# References
[Ando and Zhang2005a] Rie Kubota Ando and Tong Zhang. 2005a. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817–1853.

[Ando and Zhang2005b] Rie Kubota Ando and Tong Zhang. 2005b. Learning predictive structures. JMLR, 6:1817–1853.

[Ballesteros et al.2015] Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based dependency parsing by modeling characters instead of words with LSTMs. In Proceedings of EMNLP.

[Ballesteros et al.2016] Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. 2016. Training with Exploration Improves a Greedy Stack-LSTM Parser. In arXiv:1603.03793.

[Bengio et al.1994] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166.

[Biemann et al.2007] Chris Biemann, Gerhard Heyer, Uwe Quasthoff, and Matthias Richter. 2007. The Leipzig corpora collection - monolingual corpora of standard size. In Proceedings of Corpus Linguistics.

[Callison-Burch et al.2010] Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar F Zaidan. 2010. Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 17–53. Association for Computational Linguistics.

[Carreras et al.2002] Xavier Carreras, Lluís Màrquez, and Lluís Padró. 2002. Named entity extraction using adaboost. In Proceedings of the 6th Conference on Natural Language Learning, pages 1–4.
[Chiu and Nichols2015] Jason PC Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional lstm-cnns. arXiv preprint arXiv:1511.08308.
[Collobert et al.2011] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537.

[Cucerzan and Yarowsky1999] Silviu Cucerzan and David Yarowsky. 1999. Language independent named entity recognition combining morphological and contextual evidence. In Proceedings of the 1999 Joint SIGDAT Conference on EMNLP and VLC, pages 90–99.

[Cucerzan and Yarowsky2002] Silviu Cucerzan and David Yarowsky. 2002. Language independent NER using a unified model of internal and contextual evidence. In Proceedings of the 6th Conference on Natural Language Learning - Volume 20, pages 1–4. Association for Computational Linguistics.

[Dai et al.2015] Hong-Jie Dai, Po-Ting Lai, Yung-Chun Chang, and Richard Tzong-Han Tsai. 2015. Enhancing of chemical compound and drug name recognition using representative tag scheme and fine-grained tokenization. Journal of Cheminformatics, 7(Suppl 1):S14.

[Dyer et al.2015] Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proc. ACL.

[Eisenstein et al.2011] Jacob Eisenstein, Tae Yano, William W Cohen, Noah A Smith, and Eric P Xing. 2011. Structured databases of named entities from bayesian nonparametrics. In Proceedings of the First Workshop on Unsupervised Learning in NLP, pages 2–12. Association for Computational Linguistics.

[Florian et al.2003] Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recognition through classifier combination. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, pages 168–171. Association for Computational Linguistics.
[Gillick et al.2015] Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language processing from bytes. arXiv preprint arXiv:1512.00103.

[Graff2011] David Graff. 2011. Spanish gigaword third edition (ldc2011t12). Linguistic Data Consortium, University of Pennsylvania, Philadelphia, PA.

[Graves and Schmidhuber2005] Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM networks. In Proc. IJCNN.

[Hinton et al.2012] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.

[Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

[Hoffart et al.2011] Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 782–792. Association for Computational Linguistics.
[Huang et al.2015] Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991.
[Kim et al.2015] Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2015. Character-aware neural language models. CoRR, abs/1508.06615.

[Kingma and Ba2014] Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

[Lafferty et al.2001] John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML.

[Lin and Wu2009] Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2, pages 1030–1038. Association for Computational Linguistics.

[Ling et al.2015a] Wang Ling, Lin Chu-Cheng, Yulia Tsvetkov, Silvio Amir, Ramón Fernandez Astudillo, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015a. Not all contexts are created equal: Better word representations with variable attention. In Proc. EMNLP.

[Ling et al.2015b] Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015b. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).

[Luo et al.2015] Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint named entity recognition and disambiguation. In Proc. EMNLP.

[Mikolov et al.2013a] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
[Mikolov et al.2013b] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proc. NIPS.

[Nivre2004] Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together.

[Nothman et al.2013] Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R Curran. 2013. Learning multilingual named entity recognition from wikipedia. Artificial Intelligence, 194:151–175.

[Parker et al.2009] Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2009. English gigaword fourth edition (ldc2009t13). Linguistic Data Consortium, University of Pennsylvania, Philadelphia, PA.

[Passos et al.2014] Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. arXiv preprint arXiv:1404.5367.

[Qi et al.2009] Yanjun Qi, Ronan Collobert, Pavel Kuksa, Koray Kavukcuoglu, and Jason Weston. 2009. Combining labeled and unlabeled data with word-class distribution learning. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, pages 1737–1740. ACM.

[Ratinov and Roth2009] Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pages 147–155. Association for Computational Linguistics.

[Santos and Guimarães2015] Cicero Nogueira dos Santos and Victor Guimarães. 2015. Boosting named entity recognition with neural character embeddings. arXiv preprint arXiv:1505.05008.
[Tjong Kim Sang and De Meulder2003] Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proc. CoNLL.
[Tjong Kim Sang2002] Erik F. Tjong Kim Sang. 2002. Introduction to the conll-2002 shared task: Language-independent named entity recognition. In Proc. CoNLL.

[Turian et al.2010] Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proc. ACL.

[Zeiler2012] Matthew D Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.

[Zhang and Clark2011] Yue Zhang and Stephen Clark. 2011. Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1).

[Zhang et al.2015] Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649–657.
[Zhou and Xu2015] Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. | { "id": "1603.03793" }
1603.01025 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well-known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To perform this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot-products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, we propose an end-to-end training procedure that uses log representation at 5-bits, which achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317
# Convolutional Neural Networks using Logarithmic Data Representation
# Daisuke Miyashita Stanford University, Stanford, CA 94305 USA Toshiba, Kawasaki, Japan
DAISUKEM@STANFORD.EDU
# Edward H. Lee Stanford University, Stanford, CA 94305 USA
# EDHLEE@STANFORD.EDU
# Boris Murmann Stanford University, Stanford, CA 94305 USA
# MURMANN@STANFORD.EDU
# Abstract
Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well-known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To perform this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot-products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, we propose an end-to-end training procedure that uses log representation at 5-bits, which achieves higher final test accuracy than linear at 5-bits.
# 1. Introduction
Deep convolutional neural networks (CNN) have demonstrated state-of-the-art performance in image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015) but have steadily grown in computational complexity. For example, the Deep Residual Learning (He et al., 2015) set a new record in image classification accuracy at the expense of 11.3 billion floating-point multiply-and-add operations per forward-pass of an image and 230 MB of memory to store the weights in its 152-layer network.

In order for these large networks to run in real-time applications such as for mobile or embedded platforms, it is often necessary to use low-precision arithmetic and apply compression techniques. Recently, many researchers have successfully deployed networks that compute using 8-bit fixed-point representation (Vanhoucke et al., 2011; Abadi et al., 2015) and have successfully trained networks with 16-bit fixed point (Gupta et al., 2015). This work in particular is built upon the idea that algorithm-level noise tolerance of the network can motivate simplifications in hardware complexity.

Interesting directions point towards matrix factorization (Denton et al., 2014) and tensorification (Novikov et al., 2015) by leveraging structure of the fully-connected (FC) layers. Another promising area is to prune the FC layer before mapping this to sparse matrix-matrix routines in GPUs (Han et al., 2015b). However, many of these inventions aim at systems that meet some required and specific criteria such as networks that have many, large FC layers or accelerators that handle efficient sparse matrix-matrix arithmetic. And with network architectures currently pushing towards increasing the depth of convolutional layers by settling for fewer dense FC layers (He et al., 2015; Szegedy et al., 2015), there are potential problems in motivating a one-size-fits-all solution to handle these computational and memory demands.
We propose a general method of representing and computing the dot products in a network that can allow networks with minimal constraint on the layer properties to run more efficiently in digital hardware. In this paper we explore the use of communicating activations, storing weights, and computing the atomic dot-products in the binary logarithmic (base-2 logarithmic) domain for both inference and training. The motivations for moving to this domain are the following:
• Training networks with weight decay leads to final weights that are distributed non-uniformly around 0.

• Similarly, activations are also highly concentrated near 0. Our work uses rectified Linear Units (ReLU) as the non-linearity.

• Logarithmic representations can encode data with very large dynamic range in fewer bits than can fixed-point representation (Gautschi et al., 2016).

• Data representation in log-domain is naturally encoded in digital hardware (as shown in Section 4.3).
Our contributions are listed:

• we show that networks obtain higher classification accuracies with logarithmic quantization than linear quantization using traditional fixed-point at equivalent resolutions.

• we show that activations are more robust to quantization than weights. This is because the number of activations tend to be larger than the number of weights which are reused during convolutions.

• we apply our logarithmic data representation on state-of-the-art networks, allowing activations and weights to use only 3b with almost no loss in classification performance.
• we generalize base-2 arithmetic to handle different bases; in particular, we show that a base-√2 enables the ability to capture large dynamic ranges of weights and activations but also finer precisions across the encoded range of values as well.

• we develop logarithmic backpropagation for efficient training.

# 3. Concept and Motivation
Each convolutional and fully-connected layer of a network performs matrix operations that distills down to dot products y = w^T x, where x ∈ R^n is the input, w ∈ R^n the weights, and y the activations before being transformed by the non-linearity (e.g. ReLU). Using conventional digital hardware, this operation is performed using n multiply-and-add operations using floating or fixed point representation as shown in Figure 1(a). However, this dot product can also be computed in the log-domain as shown in Figure 1(b,c).
# 2. Related work
Reduced-precision computation. (Shin et al., 2016; Sung et al., 2015; Vanhoucke et al., 2011; Han et al., 2015a) analyzed the effects of quantizing the trained weights for inference. For example, (Han et al., 2015b) shows that convolutional layers in AlexNet (Krizhevsky et al., 2012) can be encoded to as little as 5 bits without a significant accuracy penalty. There has also been recent work in training using low precision arithmetic. (Gupta et al., 2015) propose a stochastic rounding scheme to help train networks using 16-bit fixed-point. (Lin et al., 2015) propose quantized back-propagation and ternary connect. This method reduces the number of floating-point multiplications by casting these operations into powers-of-two multiplies, which are easily realized with bitshifts in digital hardware. They apply this technique on MNIST and CIFAR10 with little loss in performance. However, their method does not completely eliminate all multiplications end-to-end. During test-time the network uses the learned full resolution weights for forward propagation. Training with reduced precision is motivated by the idea that high-precision gradient updates is unnecessary for the stochastic optimization of networks (Bottou & Bousquet, 2007; Bishop, 1995; Audhkhasi et al., 2013). In fact, there are some studies that show that gradient noise helps convergence. For example, (Neelakantan et al., 2015) empirically finds that gradient noise can also encourage faster exploration and annealing of optimization space, which can help network generalization performance.

Hardware implementations. There have been a few but significant advances in the development of specialized hardware of large networks. For example (Farabet et al., 2010) developed Field-Programmable Gate Arrays (FPGA) to perform real-time forward propagation. These groups have also performed a comprehensive study of classification performance and energy efficiency as function of resolution. (Zhang et al., 2015) have also explored the design of convolutions in the context of memory versus compute management under the RoofLine model. Other works focus on specialized, optimized kernels for general purpose GPUs (Chetlur et al., 2014).
# 3.1. Proposed Method 1.
The first proposed method as shown in Figure 1(b) is to transform one operand to its log representation, convert the resulting transformation back to the linear domain, and
multiply this by the other operand. This is simply

w^T x ≈ \sum_{i=1}^{n} w_i × 2^{\tilde{x}_i} = \sum_{i=1}^{n} \mathrm{Bitshift}(w_i, \tilde{x}_i),   (1)

where \tilde{x}_i = Quantize(log_2(x_i)), Quantize(·) quantizes · to an integer, and Bitshift(a, b) is the function that bitshifts a value a by an integer b in fixed-point arithmetic. In floating-point, this operation is simply an addition of b with the exponent part of a. Taking advantage of the Bitshift(a, b) operator to perform multiplication obviates the need for expensive digital multipliers.

Quantizing the activations and weights in the log-domain (log_2(x) and log_2(w)) instead of x and w is also motivated by leveraging structure of the non-uniform distributions of x and w. A detailed treatment is shown in the next section. In order to quantize, we propose two hardware-friendly flavors. The first option is to simply floor the input. This method computes ⌊log_2(w)⌋ by returning the position of the first 1 bit seen from the most significant bit (MSB). The second option is to round to the nearest integer, which is more precise than the first option. With the latter option, after computing the integer part, the fractional part is computed in order to assert the rounding direction. This method of rounding is summarized as follows. Pick m bits followed by the leftmost 1 and consider it as a fixed point number F with 0 integer bits and m fractional bits. Then, if F ≥ √2 − 1, round F up to the nearest integer and otherwise round it down to the nearest integer.

[Figure 1 panels: (a) Conventional: 32b float multiply-accumulate, large memory bandwidth; (b) Proposed 1: log-domain activations with bitshift-accumulate, small bandwidth; (c) Proposed 2: both operands in 4b fixed log-domain, accumulated via Eq. (3),(4), small bandwidth.]

Figure 1. Concept and motivation of this study.

# 3.2. Proposed Method 2.

The second proposed method as shown in Figure 1(c) is to extend the first method to compute dot products in the log-domain for both operands. Additions in linear-domain map to sums of exponentials in the log-domain and multiplications in linear become log-addition. The resulting dot-product is

w^T x ≈ \sum_{i=1}^{n} 2^{Quantize(log_2(w_i)) + Quantize(log_2(x_i))} = \sum_{i=1}^{n} \mathrm{Bitshift}(1, \tilde{w}_i + \tilde{x}_i),   (2)

where the log-domain weights are \tilde{w}_i = Quantize(log_2(w_i)) and the log-domain inputs are \tilde{x}_i = Quantize(log_2(x_i)).

By transforming both the weights and inputs, we compute the original dot product by bitshifting 1 by an integer result \tilde{w}_i + \tilde{x}_i and summing over all i.
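As a rough numerical illustration of Eq. (2) (our own numpy sketch; sign handling and zero inputs are ignored for brevity), the multiplies reduce to adding integer exponents:

```python
import numpy as np

def quantize_log2(v):
    """Round |v| to an integer base-2 exponent (the 'round' flavor above)."""
    return np.round(np.log2(np.abs(v))).astype(int)

def log_dot(w, x):
    """Method 2 (Eq. 2): dot product via exponent addition, i.e.
    Bitshift(1, w~_i + x~_i) summed in the linear domain."""
    return np.sum(2.0 ** (quantize_log2(w) + quantize_log2(x)))

w = np.array([0.9, 0.24, 1.1])
x = np.array([0.5, 2.1, 0.26])
print(np.dot(w, x), log_dot(w, x))  # exact vs. log-approximated dot product
```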
# 3.3. Accumulation in log domain
Although Fig. 1(b,c) indicates a logarithm-to-linear converter between layers where the actual accumulation is performed in the linear domain, this accumulation is able to be performed in the log-domain using the approximation log_2(1 + x) ≈ x for 0 ≤ x < 1. For example, let s_n = w_1 x_1 + ... + w_n x_n, \tilde{s}_n = log_2(s_n), and p_i = \tilde{w}_i + \tilde{x}_i. When n = 2,

\tilde{s}_2 = \log_2\left( \sum_{i=1}^{2} \mathrm{Bitshift}(1, p_i) \right) ≈ \max(p_1, p_2) + \mathrm{Bitshift}(1, −|p_1 − p_2|),   (3)

and for n in general,

\tilde{s}_n ≈ \max(\tilde{s}_{n−1}, p_n) + \mathrm{Bitshift}(1, −|⌊\tilde{s}_{n−1}⌋ − p_n|).   (4)
Note that \tilde{s}_i preserves the fractional part of the word during accumulation. Both accumulation in linear domain and accumulation in log domain have their pros and cons. Accumulation in linear domain is simpler but requires larger bit widths to accommodate large dynamic range numbers. Accumulation in log in (3) and (4) appears to be more complicated, but is in fact simply computed using bit-wise operations in digital hardware.
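A hedged sketch of Eqs. (3)-(4) in plain Python (floating point stands in for the fixed-point hardware; Bitshift(1, −d) is written as 2**(−d)):

```python
import math

def log_accumulate(p):
    """Running log-domain sum of 2^{p_i} using the max-plus-correction
    update of Eq. (4); returns an approximation of log2(sum_i 2^{p_i})."""
    s = float(p[0])
    for pn in p[1:]:
        s = max(s, pn) + 2.0 ** (-abs(math.floor(s) - pn))
    return s

p = [-1, -1, -2]  # exponents p_i = w~_i + x~_i
exact = math.log2(sum(2.0 ** pi for pi in p))
print(exact, log_accumulate(p))  # 0.3219... vs. 0.25
```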
# 4. Experiments of Proposed Methods
Here we evaluate our methods as detailed in Sections 3.1 and 3.2 on the classification task of ILSVRC-2012 (Deng et al., 2009) using Chainer (Tokui et al., 2015). We evaluate method 1 (Section 3.1) on inference (forward pass) in Section 4.1. Similarly, we evaluate method 2 (Section 3.2) on inference in Sections 4.2 and 4.3. For those experiments, we use published models (AlexNet (Krizhevsky et al., 2012), VGG16 (Simonyan & Zisserman, 2014)) from the caffe model zoo (Jia et al., 2014) without any fine tuning (or extra retraining). Finally, we evaluate method 2 on training in Section 4.4.
Table 1. Structure of AlexNet (Krizhevsky et al., 2012) with quantization
Table 2. Structure of VGG16(Simonyan & Zisserman, 2014) with quantization
| layer | # Weight | # Input | FSR |
|---|---|---|---|
| ReLU(Conv1) | 96·3·11² | 3·227² | - |
| LogQuant1 | - | 96·55² | fsr + 3 |
| LRN1 | - | - | - |
| Pool1 | - | 96·55² | - |
| ReLU(Conv2) | 256·96·5² | 96·27² | - |
| LogQuant2 | - | 256·27² | fsr + 3 |
| LRN2 | - | - | - |
| Pool2 | - | 256·27² | - |
| ReLU(Conv3) | 384·256·3² | 256·13² | - |
| LogQuant3 | - | 384·13² | fsr + 4 |
| ReLU(Conv4) | 384·384·3² | 384·13² | - |
| LogQuant4 | - | 384·13² | fsr + 3 |
| ReLU(Conv5) | 256·384·3² | 384·13² | - |
| LogQuant5 | - | 256·13² | fsr + 3 |
| Pool5 | - | 256·13² | - |
| ReLU(FC6) | 4096·256·6² | 256·6² | - |
| LogQuant6 | - | 4096 | fsr + 1 |
| ReLU(FC7) | 4096·4096 | 4096 | - |
| LogQuant7 | - | 4096 | fsr |
| FC8 | 1000·4096 | 4096 | - |
layer # Weight # Input FSR
# 4.1. Logarithmic Representation of Activations
This experiment evaluates the classification accuracy using logarithmic activations and floating point 32b for the weights. In similar spirit to that of (Gupta et al., 2015), we describe the logarithmic quantization layer LogQuant that performs the element-wise operation as follows:
LogQuant(x, bitwidth, FSR) = { 0, if x = 0; 2^{\tilde{x}}, otherwise },   (5)

where

\tilde{x} = Clip(Round(log_2(|x|)), FSR − 2^{bitwidth}, FSR),   (6)

Clip(x, min, max) = { 0, if x ≤ min; max − 1, if x ≥ max; x, otherwise }.   (7)

These layers perform the logarithmic quantization and computation as detailed in Section 3.1. Tables 1 and 2 illustrate the addition of these layers to the models. The quantizer has a specified full scale range, and this range in linear scale is 2^{FSR}, where we express this as simply FSR throughout this paper for notational convenience. The FSR values for each layer are shown in Tables 1 and 2; they show fsr added by an offset parameter. This offset parameter is chosen to properly handle the variation of activation ranges from layer to layer using 100 images from the training set. The fsr is a parameter which is global to the network and is tuned to perform the experiments to measure the effect of FSR on classification accuracy. The bitwidth is the number of bits required to represent a number after quantization. Note that since we assume applying quantization after the ReLU function, x is 0 or positive and then we use unsigned format without sign bit for activations.
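A minimal numpy sketch of the LogQuant layer of Eqs. (5)-(7), assuming post-ReLU inputs (non-negative, no sign bit); mapping under-range codes to an output of 0 is our reading of the Clip definition:

```python
import numpy as np

def log_quant(x, bitwidth=4, fsr=5):
    """Round non-negative activations to powers of two inside the full scale
    range; under-range values collapse to 0, over-range saturate at 2^(FSR-1)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    nz = x > 0
    e = np.round(np.log2(x[nz]))                       # Round(log2|x|)
    lo, hi = fsr - 2 ** bitwidth, fsr - 1
    q = np.where(e <= lo, -np.inf, np.minimum(e, hi))  # -inf encodes output 0
    out[nz] = 2.0 ** q
    return out

print(log_quant([0.0, 0.3, 1.7, 100.0]))  # [ 0.    0.25  2.   16.  ]
```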
In order to evaluate our logarithmic representation, we detail an equivalent linear quantization layer described as
LinearQuant(x, bitwidth, FSR) = Clip(Round(x / step) × step, 0, 2^{FSR}),   (8)

where

step = 2^{FSR − bitwidth}.   (9)
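And the equivalent linear quantizer of Eqs. (8)-(9), sketched the same way (function name is ours):

```python
import numpy as np

def linear_quant(x, bitwidth=4, fsr=5):
    """Uniform quantizer over the same full scale range as LogQuant."""
    step = 2.0 ** (fsr - bitwidth)
    return np.clip(np.round(np.asarray(x, dtype=float) / step) * step, 0.0, 2.0 ** fsr)

print(linear_quant([0.3, 1.7, 100.0]))  # [ 0.  2. 32.]
```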
Figure 2 illustrates the effect of the quantizer on activations following the conv2_2 layer used in VGG16. The pre-quantized distribution tends to 0 exponentially, and the log-quantized distribution illustrates how the log-encoded activations are uniformly equalized across many output bins which is not prevalent in the linear case. Many smaller activation values are more finely represented by log quantization compared to linear quantization. The total quantization error (1/N)||Quantize(x) − x||_1, where Quantize(·) is LogQuant(·) or LinearQuant(·) and x is the vectorized activations of size N, is less for the log-quantized case than for linear. This result is illustrated in Figure 3. Using linear quantization with step size of 1024, we obtain a distribution of quantization errors that are highly concentrated in the region where |LinearQuant(x) − x| < 512. However, log quantization with the same bitwidth as linear results in a significantly lower number of quantization errors in the region 128 < |LogQuant(x) − x| < 512. This comes at the expense of a slight increase in errors in the region 512 < |LogQuant(x) − x|. Nonetheless, the quantization errors are (1/N)||LogQuant(x) − x||_1 = 34.19 for log and (1/N)||LinearQuant(x) − x||_1 = 102.89 for linear. We run the models as described in Tables 1 and 2 and test on the validation set without data augmentation. We evaluate it with variable bitwidths and FSRs for both quantizer layers.
Figure 2. Distribution of activations of conv2_2 layer in VGG16 before and after log and linear quantization. The order (from top to bottom) is: before log-quantization, after log-quantization, before linear quantization, and after linear quantization. The color highlights the binning process of these two quantizers.
Table 3. Top-5 accuracies with quantized activations at optimal FSRs
| Model | Float 32b | Log. 3b | Log. 4b | Linear 3b | Linear 4b |
|---|---|---|---|---|---|
| AlexNet | 78.3% | 76.9% (fsr = 7) | 76.9% (fsr = 15) | 77.1% (fsr = 5) | 77.6% (fsr = 5) |
| VGG16 | 89.8% | 89.2% (fsr = 6) | 89.8% (fsr = 11) | 83.0% (fsr = 3) | 89.4% (fsr = 4) |
Figure 4 illustrates the results of AlexNet. Using only 3 bits to represent the activations for both logarithmic and linear quantizations, the top-5 accuracy is still very close to that of the original, unquantized model encoded at floating-point 32b. However, logarithmic representations tolerate a large dynamic range of FSRs. For example, using 4b log, we can obtain 3 orders of magnitude variation in the full scale without a significant loss of top-5 accuracy. We see similar results for VGG16 as shown in Figure 5. Table 3 lists the classification accuracies with the optimal FSRs for each case. There are some interesting observations. First, 3b log performs 0.2% worse than 3b linear for AlexNet but 6.2% better for VGG16, which is a higher capacity network than AlexNet. Second, by encoding the activations in 3b log, we achieve the same top-5 accuracy compared to that achieved by 4b linear for VGG16. Third, with 4b log, there is no loss in top-5 accuracy from the original float32 representation.
# 4.2. Logarithmic Representation of Weights of Fully Connected Layers
The FC weights are quantized using the same strategies as those in Section 4.1, except that they have a sign bit. We evaluate the classification performance using log data representation for both FC weights and activations jointly using method 2 in Section 3.2. For comparison, we use linear for FC weights and log for activations as reference. For both methods, we use optimal 4b log for activations that were computed in Section 4.1.

Table 4 compares the mentioned approaches along with floating point. We observe a small 0.4% win for log over linear for AlexNet but a 0.2% decrease for VGG16. Nonetheless, log computation is performed without the use of multipliers.
Figure 3. Comparison of the quantization error distribution between logarithmic quantization and linear quantization
Figure 5. Top5 Accuracy vs Full scale range: VGG16
Figure 4. Top5 Accuracy vs Full scale range: AlexNet
# 4.3. Logarithmic Representation of Weights of Convolutional Layers
We now represent the convolutional layers using the same procedure. We keep the representation of activations at 4b log and the representation of weights of FC layers at 4b log, and compare our log method with the linear reference and ideal floating point. We also perform the dot products using two different bases: 2 and √2. Note that there is no additional overhead for log base-√2 as it is computed with the same equation shown in Equation 4.

Table 5 shows the classification results. The results illustrate an approximate 6% drop in performance from floating point down to 5b base-2 but a relatively minor 1.7% drop for 5b base-√2. These bit widths include the sign bit. There are also some important observations here.
Table 4. Top-5 accuracy after applying quantization to weights of FC layers

| Model | Float 32b | Log. 4b | Linear 4b |
|---|---|---|---|
| AlexNet | 76.9% | 76.8% | 76.4% |
| VGG16 | 89.8% | 89.5% | 89.7% |

Table 5. Top-5 accuracy after applying quantization to weights of convolutional layers

| Model | Float 32b | Linear 5b | Base-2 Log 5b | Base-√2 Log 5b |
|---|---|---|---|---|
| AlexNet | 76.8% | 73.6% | 70.6% | 75.1% |
| VGG16 | 89.5% | 85.1% | 83.4% | 89.0% |
An added benefit to quantization is a reduction of the model size. By quantizing down to 4b log including sign bit, we compress the FC weights for free significantly from 1.9 Gb to 0.27 Gb for AlexNet and 4.4 Gb to 0.97 Gb for VGG16. This is because the dense FC layers occupy 98.2% and 89.4% of the total model size for AlexNet and VGG16 respectively.

We first observe that the weights of the convolutional layers for AlexNet and VGG16 are more sensitive to quantization than are FC weights. Each FC weight is used only once per image (batch size of 1) whereas convolutional weights are reused many times across the layer's input activation map. Because of this, the quantization error of each weight now influences the dot products across the entire activation volume. Second, we observe that by moving from 5b base-2 to a finer granularity such as 5b base-√2, we allow the
network to 1) be robust to quantization errors and degradation in classification performance and 2) retain the practical features of log-domain arithmetic.
[Figure 6 legend: base = 2, ||LogQuant(x) − x||_1 / N = 21.39; base = √2, ||LogQuant(x) − x||_1 / N = 10.39.]
Figure 6. Distribution of quantization errors for weights under base 2 and base √2.

The distributions of quantization errors for both 5b base-2 and 5b base-√2 are shown in Figure 6. The total quantization error on the weights, (1/N)||Quantize(x) − x||_1, where x is the vectorized weights of size N, is 2× smaller for base-√2.
Algorithm 1 Training a CNN with base-2 logarithmic representation. C is the softmax loss for each minibatch. LogQuant(x) quantizes x in base-2 log-domain. The optimization step Update(W_k, g_{W_k}) updates the weights W_k based on backpropagated gradients g_{W_k}. We use the SGD with momentum and Adam rule.

Require: a minibatch of inputs and targets (a_0, a*), previous weights W.
Ensure: updated weights W^{t+1}
{1. Computing the parameters' gradient:}
{1.1. Forward propagation:}
for k = 1 to L do
  W_k^q ← LogQuant(W_k)
  a_k ← ReLU(a_{k−1}^q W_k^q)
  a_k^q ← LogQuant(a_k)
end for
{1.2. Backward propagation:}
Compute g_{a_L} = ∂C/∂a_L knowing a_L and a*
for k = L to 1 do
  g_{a_k}^q ← LogQuant(g_{a_k})
  g_{a_{k−1}} ← g_{a_k}^q W_k^q
  g_{W_k} ← g_{a_k}^{q⊤} a_{k−1}^q
end for
{2. Accumulating the parameters' gradient:}
for k = 1 to L do
  W_k^{t+1} ← Update(W_k, g_{W_k})
end for
# 4.4. Training with Logarithmic Representation
We incorporate log representation during the training phase. This entire algorithm can be computed using Method 2 in Section 3.2. Table 6 illustrates the networks that we compare. The proposed log and linear networks are trained at the same resolution using 4-bit unsigned activations and 5-bit signed weights and gradients using Algorithm 1 on the CIFAR10 dataset with simple data augmentation described in (He et al., 2015). Note that unlike BinaryNet (Courbariaux & Bengio, 2016), we quantize the backpropagated gradients to train log-net. This enables end-to-end training using logarithmic representation at the 5-bit level. For linear quantization however, we found it necessary to keep the gradients in its unquantized floating-point precision form in order to achieve good convergence. Furthermore, we include the training curve for BinaryNet, which uses unquantized gradients.
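For concreteness, a toy sketch (ours) of Algorithm 1's forward pass, with a simplified LogQuant that ignores FSR clipping but keeps the sign bit of weights:

```python
import numpy as np

def log_quant(x):
    """Toy stand-in for LogQuant: round magnitudes to powers of two, keep sign."""
    return np.where(x == 0, 0.0,
                    np.sign(x) * 2.0 ** np.round(np.log2(np.abs(x) + 1e-12)))

def forward(a0, weights):
    """Forward pass of Algorithm 1: weights and activations are
    log-quantized layer by layer (ReLU, then LogQuant)."""
    a = a0
    for W in weights:
        a = log_quant(np.maximum(a @ log_quant(W), 0.0))
    return a

a0 = np.random.rand(1, 8)
weights = [np.random.randn(8, 8), np.random.randn(8, 4)]
print(forward(a0, weights))
```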
Fig. 7 illustrates the training results of log, linear, and BinaryNet. Final test accuracies for log-5b, linear-5b, and BinaryNet are 0.9379, 0.9253, 0.8862 respectively, where linear-5b and BinaryNet use unquantized gradients. The test results indicate that even with quantized gradients, our proposed network with log representation still outperforms the others that use unquantized gradients.
[Figure 7 panels: training loss and test accuracy vs. epoch for float 32b (0.941), log-5b (0.9379), linear-5b (0.2909), linear-5b with unquantized gradients (0.9253), and BinaryNet with unquantized gradients (0.8862).]
Figure 7. Loss curves and test accuracies
# 5. Conclusion
Table 6. Structure of VGG-like network for CIFAR10
In this paper, we describe a method to represent the weights and activations with low resolution in the log-domain, which eliminates bulky digital multipliers. This method is also motivated by the non-uniform distributions of weights and activations, making log representation more robust to quantization as compared to linear. We evaluate our methods on the classification task of ILSVRC-2012 using pretrained models (AlexNet and VGG16). We also offer extensions that incorporate end-to-end training using log representation including gradients.
log quantization:
Conv 64·3·3², BatchNorm, ReLU, LogQuant
Conv 64·64·3², BatchNorm, ReLU, LogQuant
MaxPool 2×2
Conv 128·64·3², BatchNorm, ReLU, LogQuant
Conv 128·128·3², BatchNorm, ReLU, LogQuant
MaxPool 2×2
Conv 256·128·3², BatchNorm, ReLU, LogQuant
Conv 256·256·3², BatchNorm, ReLU, LogQuant
Conv 256·256·3², BatchNorm, ReLU, LogQuant
Conv 256·256·3², BatchNorm, ReLU, LogQuant
MaxPool 2×2
FC 1024·256·4², BatchNorm, ReLU, LogQuant
FC 1024·1024, BatchNorm, ReLU, LogQuant
FC 10·1024

linear quantization:
identical layer structure, with LinearQuant in place of LogQuant

BinaryNet:
(Courbariaux & Bengio, 2016)
# References
Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow, Ian, Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mané, Dan, Monga, Rajat, Moore, Sherry, Murray, Derek, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul, Vanhoucke, Vincent, Vasudevan, Vijay, Viégas, Fernanda, Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.
Gupta, Suyog, Agrawal, Ankur, Gopalakrishnan, Kailash, and Narayanan, Pritish. Deep learning with limited numerical precision. In Proceedings of The 32nd International Conference on Machine Learning (ICML2015), pp. 1737–1746, 2015.
Audhkhasi, Kartik, Osoba, Osonde, and Kosko, Bart. Noise benefits in backpropagation and deep bidirectional pre-training. In Proceedings of The 2013 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, 2013.

Han, Song, Mao, Huizi, and Dally, William J. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015a.

Bishop, Christopher M. Training with noise is equivalent to Tikhonov regularization. In Neural Computation, pp. 108–116, 1995.

Bottou, Léon and Bousquet, Olivier. The tradeoffs of large scale learning. In Platt, J.C., Koller, D., Singer, Y., and Roweis, S.T. (eds.), Advances in Neural Information Processing Systems 20, pp. 161–168. Curran Associates, Inc., 2007.

Han, Song, Pool, Jeff, Tran, John, and Dally, William. Learning both weights and connections for efficient neural network. In Proceedings of Advances in Neural Information Processing Systems 28 (NIPS2015), pp. 1135–1143, 2015b.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Chetlur, Sharan, Woolley, Cliff, Vandermersch, Philippe, Cohen, Jonathan, Tran, John, Catanzaro, Bryan, and Shelhamer, Evan. cuDNN: Efficient primitives for deep learning. In Proceedings of Deep Learning and Representation Learning Workshop: NIPS 2014, 2014.

Jia, Yangqing, Shelhamer, Evan, Donahue, Jeff, Karayev, Sergey, Long, Jonathan, Girshick, Ross, Guadarrama, Sergio, and Darrell, Trevor. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675–678. ACM, 2014.

Courbariaux, Matthieu and Bengio, Yoshua. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.

Denton, Emily, Zaremba, Wojciech, Bruna, Joan, LeCun, Yann, and Fergus, Rob. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems 27 (NIPS2014), pp. 1269–1277, 2014.

Farabet, Clément, Martini, Berin, Akselrod, Polina, Talay, Selçuk, LeCun, Yann, and Culurciello, Eugenio. Hardware accelerated convolutional neural networks for synthetic vision systems. In Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 257–260. IEEE, 2010.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Pereira, F., Burges, C.J.C., Bottou, L., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 25, pp. 1097–1105, 2012.

Lin, Zhouhan, Courbariaux, Matthieu, Memisevic, Roland, and Bengio, Yoshua. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.

Neelakantan, Arvind, Vilnis, Luke, Le, Quoc V., Sutskever, Ilya, Kaiser, Lukasz, Kurach, Karol, and Martens, James. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807, 2015.

Novikov, Alexander, Podoprikhin, Dmitry, Osokin, Anton, and Vetrov, Dmitry. Tensorizing neural networks. In Advances in Neural Information Processing Systems 28 (NIPS2015), pp. 442–450, 2015.

Gautschi, Michael, Schaffner, Michael, Gurkaynak, Frank K., and Benini, Luca. A 65nm CMOS 6.4-to-29.2pJ/FLOP at 0.8V shared logarithmic floating point unit for acceleration of nonlinear function kernels in a tightly coupled processor cluster. In Proceedings of Solid-State Circuits Conference (ISSCC), 2016 IEEE International. IEEE, 2016.

Shin, Sungho, Hwang, Kyuyeon, and Sung, Wonyong. Fixed point performance analysis of recurrent neural networks. In Proceedings of The 41st IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP2016). IEEE, 2016.
Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Sung, Wonyong, Shin, Sungho, and Hwang, Kyuyeon. Resiliency of deep neural networks under quantization. arXiv preprint arXiv:1511.06488, 2015.
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. In CVPR 2015, 2015.

Tokui, Seiya, Oono, Kenta, Hido, Shohei, and Clayton, Justin. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS), 2015.

Vanhoucke, Vincent, Senior, Andrew, and Mao, Mark Z. Improving the speed of neural networks on CPUs. In Proceedings of Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011.

Zhang, Chen, Li, Peng, Sun, Guangyu, Guan, Yijin, Xiao, Bingjun, and Cong, Jason. Optimizing FPGA-based accelerator design for deep convolutional neural networks. In Proceedings of 23rd International Symposium on Field-Programmable Gate Arrays (FPGA2015), 2015. | {
"id": "1510.03009"
} |
1602.07868 | Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks | We present weight normalization: a reparameterization of the weight vectors
in a neural network that decouples the length of those weight vectors from
their direction. By reparameterizing the weights in this way we improve the
conditioning of the optimization problem and we speed up convergence of
stochastic gradient descent. Our reparameterization is inspired by batch
normalization but does not introduce any dependencies between the examples in a
minibatch. This means that our method can also be applied successfully to
recurrent models such as LSTMs and to noise-sensitive applications such as deep
reinforcement learning or generative models, for which batch normalization is
less well suited. Although our method is much simpler, it still provides much
of the speed-up of full batch normalization. In addition, the computational
overhead of our method is lower, permitting more optimization steps to be taken
in the same amount of time. We demonstrate the usefulness of our method on
applications in supervised image recognition, generative modelling, and deep
reinforcement learning. | http://arxiv.org/pdf/1602.07868 | Tim Salimans, Diederik P. Kingma | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20160225 | 20160604 | arXiv:1602.07868v3 [cs.LG] 4 Jun 2016
# Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks
Tim Salimans OpenAI tim@openai.com Diederik P. Kingma OpenAI dpkingma@openai.com
# Abstract
We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.
# 1 Introduction
Recent successes in deep learning have shown that neural networks trained by first-order gradient based optimization are capable of achieving amazing results in diverse domains like computer vision, speech recognition, and language modelling [5]. However, it is also well known that the practical success of first-order gradient based optimization is highly dependent on the curvature of the objective that is optimized. If the condition number of the Hessian matrix of the objective at the optimum is high, the problem is said to exhibit pathological curvature, and first-order gradient descent will have trouble making progress [18, 28]. The amount of curvature, and thus the success of our optimization, is not invariant to reparameterization [1]: there may be multiple equivalent ways of parameterizing the same model, some of which are much easier to optimize than others. Finding good ways of parameterizing neural networks is thus an important problem in deep learning.
While the architectures of neural networks differ widely across applications, they are typically mostly composed of conceptually simple computational building blocks sometimes called neurons: each such neuron computes a weighted sum over its inputs and adds a bias term, followed by the application of an elementwise nonlinear transformation. Improving the general optimizability of deep networks is a challenging task [4], but since many neural architectures share these basic building blocks, improving these building blocks improves the performance of a very wide range of model architectures and could thus be very useful.
Several authors have recently developed methods to improve the conditioning of the cost gradient for general neural network architectures. One approach is to explicitly left multiply the cost gradient with an approximate inverse of the Fisher information matrix, thereby obtaining an approximately whitened natural gradient. Such an approximate inverse can for example be obtained by using a Kronecker factored approximation to the Fisher matrix and inverting it (KFAC, [19]), by using an
approximate Cholesky factorization of the inverse Fisher matrix (FANG, [8]), or by whitening the input of each layer in the neural network (PRONG, [3]).
Alternatively, we can use standard first order gradient descent without preconditioning, but change the parameterization of our model to give gradients that are more like the whitened natural gradients of these methods. For example, Raiko et al. [23] propose to transform the outputs of each neuron to have zero output and zero slope on average. They show that this transformation approximately diagonalizes the Fisher information matrix, thereby whitening the gradient, and that this leads to improved optimization performance. Another approach in this direction is batch normalization [11], a method where the output of each neuron (before application of the nonlinearity) is normalized by the mean and standard deviation of the outputs calculated over the examples in the minibatch. This reduces covariate shift of the neuron outputs and the authors suggest it also brings the Fisher matrix closer to the identity matrix.
Following this second approach to approximate natural gradient optimization, we propose a simple but general method, called weight normalization, for improving the optimizability of the weights of neural network models. The method is inspired by batch normalization, but it is a deterministic method that does not share batch normalization's property of adding noise to the gradients. In addition, the overhead imposed by our method is lower: no additional memory is required and the additional computation is negligible. The method shows encouraging results on a wide range of deep learning applications.
# 2 Weight Normalization
We consider standard artificial neural networks where the computation of each neuron consists of taking a weighted sum of input features, followed by an elementwise nonlinearity:
y = φ(w · x + b),    (1)

where w is a k-dimensional weight vector, b is a scalar bias term, x is a k-dimensional vector of input features, φ(·) denotes an elementwise nonlinearity such as the rectifier max(·, 0), and y denotes the scalar output of the neuron.
After associating a loss function to one or more neuron outputs, such a neural network is commonly trained by stochastic gradient descent in the parameters w, b of each neuron. In an effort to speed up the convergence of this optimization procedure, we propose to reparameterize each weight vector w in terms of a parameter vector v and a scalar parameter g and to perform stochastic gradient descent with respect to those parameters instead. We do so by expressing the weight vectors in terms of the new parameters using
w = (g / ||v||) v    (2)

where v is a k-dimensional vector, g is a scalar, and ||v|| denotes the Euclidean norm of v. This reparameterization has the effect of fixing the Euclidean norm of the weight vector w: we now have ||w|| = g, independent of the parameters v. We therefore call this reparameterization weight normalization.
The idea of normalizing the weight vector has been proposed before (e.g. [27]) but earlier work typically still performed optimization in the w-parameterization, only applying the normalization after each step of stochastic gradient descent. This is fundamentally different from our approach: we propose to explicitly reparameterize the model and to perform stochastic gradient descent in the new parameters v, g directly. Doing so improves the conditioning of the gradient and leads to improved convergence of the optimization procedure: By decoupling the norm of the weight vector (g) from the direction of the weight vector (v/||v||), we speed up convergence of our stochastic gradient descent optimization, as we show experimentally in section 5.
Instead of working with g directly, we may also use an exponential parameterization for the scale, i.e. g = es, where s is a log-scale parameter to learn by stochastic gradient descent. Parameterizing the g parameter in the log-scale is more intuitive and more easily allows g to span a wide range of different magnitudes. Empirically, however, we did not find this to be an advantage. In our experiments, the eventual test-set performance was not significantly better or worse than the results with directly learning g in its original parameterization, and optimization was slightly slower.
# 2.1 Gradients
Training a neural network in the new parameterization is done using standard stochastic gradient descent methods. Here we differentiate through (2) to obtain the gradient of a loss function L with respect to the new parameters v, g. Doing so gives
∇g L = (∇w L · v) / ||v||,    ∇v L = (g / ||v||) ∇w L − (g ∇g L / ||v||²) v,    (3)
where ∇w L is the gradient with respect to the weights w as used normally.
Backpropagation using weight normalization thus only requires a minor modification to the usual backpropagation equations, and is easily implemented using standard neural network software. We provide reference implementations for Theano at https://github.com/TimSalimans/weight_norm. Unlike with batch normalization, the expressions above are independent of the minibatch size and thus cause only minimal computational overhead.
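As a concrete illustration, a minimal NumPy sketch (ours, for a single neuron; parameter shapes and names are our own) of the reparameterization in Eq. (2) and the gradient map in Eq. (3):

```python
import numpy as np

def weightnorm_forward(v, g, b, x, phi=lambda z: np.maximum(z, 0.0)):
    w = g * v / np.linalg.norm(v)          # Eq. (2)
    return phi(np.dot(w, x) + b)           # Eq. (1)

def weightnorm_grads(v, g, grad_w):
    # Map the ordinary weight gradient grad_w to (v, g) space, Eq. (3).
    vnorm = np.linalg.norm(v)
    grad_g = np.dot(grad_w, v) / vnorm
    grad_v = (g / vnorm) * grad_w - (g * grad_g / vnorm**2) * v
    return grad_v, grad_g

v, g, b = np.random.randn(5), 1.5, 0.1
grad_v, grad_g = weightnorm_grads(v, g, np.random.randn(5))
print(np.dot(grad_v, v))                   # ~0: grad_v is orthogonal to v
```

The printed inner product being numerically zero reflects the projection property discussed next.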
An alternative way to write the gradient is
∇v L = (g / ||v||) Mw ∇w L,  with  Mw = I − w w⊤ / ||w||²,    (4)
where Mw is a projection matrix that projects onto the complement of the w vector. This shows that weight normalization accomplishes two things: it scales the weight gradient by g/||v||, and it projects the gradient away from the current weight vector. Both effects help to bring the covariance matrix of the gradient closer to identity and benefit optimization, as we explain below.
Due to projecting away from w, the norm of v grows monotonically with the number of weight updates when learning a neural network with weight normalization using standard gradient descent without momentum: Let v′ = v + Δv denote our parameter update, with Δv ∝ ∇v L (steepest ascent/descent); then Δv is necessarily orthogonal to the current weight vector w, since we project away from it when calculating ∇v L (equation 4). Since v is proportional to w, the update is thus also orthogonal to v and increases its norm by the Pythagorean theorem. Specifically, if ||Δv||/||v|| = c, the new weight vector will have norm ||v′|| = √(||v||² + c²||v||²) = √(1 + c²) ||v|| ≥ ||v||. The rate of increase will depend on the variance of the weight gradient. If our gradients are noisy, c will be high and the norm of v will quickly increase, which in turn will decrease the scaling factor g/||v||. If the norm of the gradients is small, we get √(1 + c²) ≈ 1, and the norm of v will stop increasing. Using this mechanism, the scaled gradient self-stabilizes its norm. This property does not strictly hold for optimizers that use separate learning rates for individual parameters, like Adam [12] which we use in experiments, or when using momentum. However, qualitatively we still find the same effect to hold.
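A quick numeric check (ours) of the Pythagorean argument above:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=50)
grad = rng.normal(size=50)
dv = grad - (grad @ v) / (v @ v) * v       # update projected away from v
c = np.linalg.norm(dv) / np.linalg.norm(v)
# ||v + dv|| equals sqrt(1 + c^2) * ||v|| because dv is orthogonal to v
print(np.linalg.norm(v + dv), np.sqrt(1 + c**2) * np.linalg.norm(v))
```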
Empirically, we find that the ability to grow the norm ||v|| makes optimization of neural networks with weight normalization very robust to the value of the learning rate: If the learning rate is too large, the norm of the unnormalized weights grows quickly until an appropriate effective learning rate is reached. Once the norm of the weights has grown large with respect to the norm of the updates, the effective learning rate stabilizes. Neural networks with weight normalization therefore work well with a much wider range of learning rates than when using the normal parameterization. It has been observed that neural networks with batch normalization also have this property [11], which can also be explained by this analysis.
By projecting the gradient away from the weight vector w, we also eliminate the noise in that direction. If the covariance matrix of the gradient with respect to w is given by C, the covariance matrix of the gradient in v is given by D = (g²/||v||²) Mw C Mw. Empirically, we find that w is often (close to) a dominant eigenvector of the covariance matrix C: removing that eigenvector then gives a new covariance matrix D that is closer to the identity matrix, which may further speed up learning.
# 2.2 Relation to batch normalization
An important source of inspiration for this reparameterization is batch normalization [11], which normalizes the statistics of the pre-activation t for each minibatch as
t′ = (t − μ[t]) / σ[t],
with μ[t], σ[t] the mean and standard deviation of the pre-activations t = v · x. For the special case where our network only has a single layer, and the input features x for that layer are whitened (independently distributed with zero mean and unit variance), these statistics are given by μ[t] = 0 and σ[t] = ||v||. In that case, normalizing the pre-activations using batch normalization is equivalent to normalizing the weights using weight normalization.
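This single-layer equivalence is easy to verify numerically; a quick check (ours) that for whitened inputs the minibatch statistics reduce to μ[t] ≈ 0 and σ[t] ≈ ||v||:

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=20)
X = rng.normal(size=(200000, 20))          # whitened inputs: zero mean, unit variance
t = X @ v                                  # pre-activations for one neuron
print(t.mean(), t.std(), np.linalg.norm(v))  # mu[t] ~ 0, sigma[t] ~ ||v||
```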
Convolutional neural networks usually have much fewer weights than pre-activations, so normalizing the weights is often much cheaper computationally. In addition, the norm of v is non-stochastic, while the minibatch mean μ[t] and variance σ²[t] can in general have high variance for small minibatch size. Weight normalization can thus be viewed as a cheaper and less noisy approximation to batch normalization. Although exact equivalence does not usually hold for deeper architectures, we still find that our weight normalization method provides much of the speed-up of full batch normalization. In addition, its deterministic nature and independence on the minibatch input also means that our method can be applied more easily to models like RNNs and LSTMs, as well as noise-sensitive applications like reinforcement learning.
# 3 Data-Dependent Initialization of Parameters
Besides a reparameterization effect, batch normalization also has the benefit of fixing the scale of the features generated by each layer of the neural network. This makes the optimization robust against parameter initializations for which these scales vary across layers. Since weight normalization lacks this property, we find it is important to properly initialize our parameters. We propose to sample the elements of v from a simple distribution with a fixed scale, which is in our experiments a normal distribution with mean zero and standard deviation 0.05. Before starting training, we then initialize the b and g parameters to fix the minibatch statistics of all pre-activations in our network, just like in batch normalization, but only for a single minibatch of data and only during initialization. This can be done efficiently by performing an initial feedforward pass through our network for a single minibatch of data X, using the following computation at each neuron:
t = (v · x) / ||v||  and  y = φ((t − μ[t]) / σ[t]),    (5)
where μ[t] and σ[t] are the mean and standard deviation of the pre-activation t over the examples in the minibatch. We can then initialize the neuron's bias b and scale g as
g ← 1 / σ[t],    b ← −μ[t] / σ[t],    (6)
so that y = φ(w · x + b). Like batch normalization, this method ensures that all features initially have zero mean and unit variance before application of the nonlinearity. With our method this only holds for the minibatch we use for initialization, and subsequent minibatches may have slightly different statistics, but experimentally we find this initialization method to work well. The method can also be applied to networks without weight normalization, simply by doing stochastic gradient optimization on the parameters w directly, after initialization in terms of v and g: this is what we compare to in section 5. Independently from our work, this type of initialization was recently proposed by different authors [20, 14] who found such data-based initialization to work well for use with the standard parameterization in terms of w.
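For concreteness, a minimal NumPy sketch (ours) of this initialization for one fully connected weight-normalized layer, following Eqs. (5) and (6); the layer sizes and input minibatch are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_out = 64, 32
V = rng.normal(scale=0.05, size=(k, n_out))   # elements of v with fixed scale 0.05
X = rng.normal(size=(128, k)) * 3.0 + 1.0     # one minibatch with arbitrary scale

t = X @ (V / np.linalg.norm(V, axis=0))       # pre-activations with g = 1, b = 0
mu, sigma = t.mean(axis=0), t.std(axis=0)
g, b = 1.0 / sigma, -mu / sigma               # Eq. (6)
y = np.maximum(g * t + b, 0.0)                # Eq. (5): standardized pre-ReLU
print(np.abs((g * t + b).mean(axis=0)).max(), (g * t + b).std(axis=0).mean())
```

The printed statistics come out as approximately 0 and 1, i.e. zero mean and unit variance on the initialization minibatch.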
The downside of this initialization method is that it can only be applied in similar cases as where batch normalization is applicable. For models with recursion, such as RNNs and LSTMs, we will have to resort to standard initialization methods.
# 4 Mean-only Batch Normalization
Weight normalization, as introduced in section 2, makes the scale of neuron activations approximately independent of the parameters v. Unlike with batch normalization, however, the means of the neuron activations still depend on v. We therefore also explore the idea of combining weight normalization with a special version of batch normalization, which we call mean-only batch normalization: With this normalization method, we subtract out the minibatch means like with full batch normalization,
but we do not divide by the minibatch standard deviations. That is, we compute neuron activations using
t̃ = t − μ[t] + b,    (7)

where w is the weight vector, parameterized using weight normalization, and μ[t] is the minibatch mean of the pre-activation t. During training, we keep a running average of the minibatch mean which we substitute in for μ[t] at test time.
The gradient of the loss with respect to the pre-activation t is calculated as
∇t L = ∇t̃ L − μ[∇t̃ L],

where μ[·] denotes once again the operation of taking the minibatch mean. Mean-only batch normalization thus has the effect of centering the gradients that are backpropagated. This is a comparatively cheap operation, and the computational overhead of mean-only batch normalization is thus lower than for full batch normalization. In addition, this method causes less noise during training, and the noise that is caused is more gentle as the law of large numbers ensures that μ[t] and μ[∇t̃ L] are approximately normally distributed. Thus, the added noise has much lighter tails than the highly kurtotic noise caused by the minibatch estimate of the variance used in full batch normalization. As we show in section 5.1, this leads to improved accuracy at test time.
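A minimal sketch (ours) of the two rules above, with pre-activations t of shape (minibatch, features); at test time the stored running average would be passed in place of the minibatch mean:

```python
import numpy as np

def meanonly_bn_forward(t, b, mu=None):
    # Eq. (7): subtract the minibatch mean (or a running average at test
    # time) from the pre-activation; no division by the standard deviation.
    mu = t.mean(axis=0) if mu is None else mu
    return t - mu + b

def meanonly_bn_backward(grad_ttilde):
    # Backward rule: center the backpropagated gradient over the minibatch.
    return grad_ttilde - grad_ttilde.mean(axis=0)
```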
# 5 Experiments
We experimentally validate the usefulness of our method using four different models for varied applications in supervised image recognition, generative modelling, and deep reinforcement learning.
# 5.1 Supervised Classification: CIFAR-10
To test our reparameterization method for the application of supervised classification, we consider the CIFAR-10 data set of natural images [15]. The model we are using is based on the ConvPool-CNN-C architecture of [26], with some small modifications: we replace the first dropout layer by a layer that adds Gaussian noise, we expand the last hidden layer from 10 units to 192 units, and we use 2 × 2 max-pooling, rather than 3 × 3. The only hyperparameter that we actively optimized (the standard deviation of the Gaussian noise) was chosen to maximize the performance of the network on a holdout set of 10000 examples, using the standard parameterization (no weight normalization or batch normalization). A full description of the resulting architecture is given in table A in the supplementary material.
We train our network for CIFAR-10 using Adam [12] for 200 epochs, with a fixed learning rate and momentum of 0.9 for the first 100 epochs. For the last 100 epochs we set the momentum to 0.5 and linearly decay the learning rate to zero. We use a minibatch size of 100. We evaluate 5 different parameterizations of the network: 1) the standard parameterization, 2) using batch normalization, 3) using weight normalization, 4) using weight normalization combined with mean-only batch normalization, 5) using mean-only batch normalization with the normal parameterization. The network parameters are initialized using the scheme of section 3 such that all four cases have identical parameters starting out. For each case we pick the optimal learning rate in {0.0003, 0.001, 0.003, 0.01}. The resulting error curves during training can be found in figure 1: both weight normalization and batch normalization provide a significant speed-up over the standard parameterization. Batch normalization makes slightly more progress per epoch than weight normalization early on, although this is partly offset by the higher computational cost: with our implementation, training with batch normalization was about 16% slower compared to the standard parameterization. In contrast, weight normalization was not noticeably slower. During the later stage of training, weight normalization and batch normalization seem to optimize at about the same speed, with the normal parameterization (with or without mean-only batch normalization) still lagging behind.
After optimizing the network for 200 epochs using the different parameterizations, we evaluate their performance on the CIFAR-10 test set. The results are summarized in table 2: weight normalization, the normal parameterization, and mean-only batch normalization have similar test accuracy (≈ 8.5% error). Batch normalization does significantly better at 8.05% error. Mean-only batch normalization combined with weight normalization has the best performance at 7.31% test error, and interestingly does much better than mean-only batch normalization combined with the normal parameterization: This suggests that the noise added by batch normalization can be useful for regularizing the network,
| Model | Test Error |
|---|---|
| Maxout [6] | 11.68% |
| Network in Network [17] | 10.41% |
| Deeply Supervised [16] | 9.6% |
| ConvPool-CNN-C [26] | 9.31% |
| ALL-CNN-C [26] | 9.08% |
| our CNN, mean-only B.N. | 8.52% |
| our CNN, weight norm. | 8.46% |
| our CNN, normal param. | 8.43% |
| our CNN, batch norm. | 8.05% |
| ours, W.N. + mean-only B.N. | 7.31% |
Figure 1: Training error for CIFAR-10 using different network parameterizations. For weight normalization, batch normalization, and mean-only batch normalization we show results using Adam with a learning rate of 0.003. For the normal parameterization we instead use 0.0003 which works best in this case. For the last 100 epochs the learning rate is linearly decayed to zero.
Figure 2: Classification results on CIFAR-10 without data augmentation.
but that the reparameterization provided by weight normalization or full batch normalization is also needed for optimal results. We hypothesize that the substantial improvement by mean-only B.N. with weight normalization over regular batch normalization is due to the distribution of the noise caused by the normalization method during training: for mean-only batch normalization the minibatch mean has a distribution that is approximately Gaussian, while the noise added by full batch normalization during training has much higher kurtosis. As far as we are aware, the result with mean-only batch normalization combined with weight normalization represents the state-of-the-art for CIFAR-10 among methods that do not use data augmentation.
# 5.2 Generative Modelling: Convolutional VAE
Next, we test the effect of weight normalization applied to deep convolutional variational auto-encoders (CVAEs) [13, 24, 25], trained on the MNIST data set of images of handwritten digits and the CIFAR-10 data set of small natural images.

Variational auto-encoders are generative models that explain the data vector x as arising from a set of latent variables z, through a joint distribution of the form p(z, x) = p(z)p(x|z), where the decoder p(x|z) is specified using a neural network. A lower bound on the log marginal likelihood log p(x) can be obtained by approximately inferring the latent variables z from the observed data x using an encoder distribution q(z|x) that is also specified as a neural network. This lower bound is then optimized to fit the model to the data.

We follow a similar implementation of the CVAE as in [25] with some modifications, mainly that the encoder and decoder are parameterized with ResNet [9] blocks, and that the diagonal posterior is replaced with auto-regressive variational inference1. For MNIST, the encoder consists of 3 sequences of two ResNet blocks each, the first sequence acting on 16 feature maps, the others on 32 feature maps. The first two sequences are followed by a 2-times subsampling operation implemented using 2 × 2 stride, while the third sequence is followed by a fully connected layer with 450 units. The decoder has a similar architecture, but with reversed direction. For CIFAR-10, we used a neural architecture with ResNet units and multiple intermediate stochastic layers1. We used Adamax [12] with α = 0.002 for optimization, in combination with Polyak averaging [22] in the form of an exponential moving average that averages parameters over approximately 10 epochs.

In figure 3, we plot the test-set lower bound as a function of number of training epochs, including error bars based on multiple different random seeds for initializing parameters. As can be seen, the parameterization with weight normalization has lower variance and converges to a better optimum. We observe similar results across different hyper-parameter settings.
1 Manuscript in preparation
Figure 3: Marginal log likelihood lower bound on the MNIST (top) and CIFAR-10 (bottom) test sets for a convolutional VAE during training, for both the standard implementation as well as our modification with weight normalization. For MNIST, we provide standard error bars to indicate variance based on different initial random seeds.
# 5.3 Generative Modelling: DRAW
Next, we consider DRAW, a recurrent generative model by [7]. DRAW is a variational auto-encoder with generative model p(z)p(x|z) and encoder q(z|x), similar to the model in section 5.2, but with both the encoder and decoder consisting of a recurrent neural network comprised of Long Short-Term Memory (LSTM) [10] units. LSTM units consist of a memory cell with additive dynamics, combined with input, forget, and output gates that determine which information flows in and out of the memory. The additive dynamics enables learning of long-range dependencies in the data.

At each time step of the model, DRAW uses the same set of weight vectors to update the cell states of the LSTM units in its encoder and decoder. Because of the recurrent nature of this process it is not clear how batch normalization could be applied to this model: Normalizing the cell states diminishes their ability to pass through information. Fortunately, weight normalization can be applied trivially to the weight vectors of each LSTM unit, and we find this to work well empirically.

We take the Theano implementation of DRAW provided at https://github.com/jbornschein/draw and use it to model the MNIST data set of handwritten digits. We then make a single modification to the model: we apply weight normalization to all weight vectors. As can be seen in figure 4, this significantly speeds up convergence of the optimization procedure, even without modifying the initialization method and learning rate that were tuned for use with the normal parameterization.
Figure 4: Marginal log likelihood lower bound on the MNIST test set for DRAW during training, for both the standard implementation as well as our modification with weight normalization. 100 epochs is not sufficient for convergence for this model, but the implementation using weight normalization clearly makes progress much more quickly than with the standard parameterization.
# 5.4 Reinforcement Learning: DQN
Next we apply weight normalization to the problem of Reinforcement Learning for playing games on the Atari Learning Environment [2]. The approach we use is the Deep Q-Network (DQN) proposed by [21]. This is an application for which batch normalization is not well suited: the noise introduced by estimating the minibatch statistics destabilizes the learning process. We were not able to get batch normalization to work for DQN without using an impractically large minibatch size. In contrast, weight normalization is easy to apply in this context, as is the initialization method of section 3. Stochastic gradient learning is performed using Adamax [12] with momentum of 0.5. We search for optimal learning rates in {0.0001, 0.0003, 0.001, 0.003}, generally finding 0.0003 to work well with weight normalization and 0.0001 to work well for the normal parameterization. We also use a larger minibatch size (64) which we found to be more efficient on our hardware (Amazon Elastic Compute Cloud g2.2xlarge GPU instance). Apart from these changes we follow [21] as closely as possible in terms of parameter settings and evaluation methods. However, we use a Python/Theano/Lasagne reimplementation of their work, adapted from the implementation available at https://github.com/spragunr/deep_q_rl, so there may be small additional differences in implementation.

Figure 5 shows the training curves obtained using DQN with the standard parameterization and with weight normalization on Space Invaders. Using weight normalization the algorithm progresses more quickly and reaches a better final result. Table 6 shows the final evaluation scores obtained by DQN with weight normalization for four games: on average weight normalization improves the performance of DQN.
Figure 5: Evaluation scores for Space Invaders obtained by DQN after each epoch of training, for both the standard parameterization and using weight normalization. Learning rates for both cases were selected to maximize the highest achieved test score.

| Game | normal | weightnorm | Mnih |
|---|---|---|---|
| Breakout | 410 | 403 | 401 |
| Enduro | 1,250 | 1,448 | 302 |
| Seaquest | 7,188 | 7,375 | 5,286 |
| Space Invaders | 1,779 | 2,179 | 1,975 |

Figure 6: Maximum evaluation scores obtained by DQN, using either the normal parameterization or using weight normalization. The scores indicated by Mnih et al. are those reported by [21]: Our normal parameterization is approximately equivalent to their method. Differences in scores may be caused by small differences in our implementation. Specifically, the difference in our score on Enduro and that reported by [21] might be due to us not using a play-time limit during evaluation.
# 6 Conclusion
We have presented weight normalization, a simple reparameterization of the weight vectors in a neural network that accelerates the convergence of stochastic gradient descent optimization. Weight normalization was applied to four different models in supervised image recognition, generative modelling, and deep reinforcement learning, showing a consistent advantage across applications. The reparameterization method is easy to apply, has low computational overhead, and does not introduce dependencies between the examples in a minibatch, making it our default choice in the development of new deep learning architectures.
# Acknowledgments
We thank John Schulman for helpful comments on an earlier draft of this paper.
# References
[1] S. Amari. Neural learning in structured parameter spaces - natural Riemannian gradient. In Advances in Neural Information Processing Systems, pages 127–133. MIT Press, 1997.

[2] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 06 2013.

[3] G. Desjardins, K. Simonyan, R. Pascanu, et al. Natural neural networks. In Advances in Neural Information Processing Systems, pages 2062–2070, 2015.

[4] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In International conference on artificial intelligence and statistics, pages 249–256, 2010.
[5] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. Book in preparation for MIT Press, 2016.
[6] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML, 2013.
[7] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
[8] R. Grosse and R. Salakhudinov. Scaling up natural gradient by sparsely factorizing the inverse Fisher matrix. In ICML, pages 2304–2313, 2015.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[10] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
[11] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[12] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[13] D. P. Kingma and M. Welling. Auto-Encoding Variational Bayes. Proceedings of the 2nd International Conference on Learning Representations, 2013.
[14] P. Krähenbühl, C. Doersch, J. Donahue, and T. Darrell. Data-dependent initializations of convolutional neural networks. arXiv preprint arXiv:1511.06856, 2015.
[15] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images, 2009.
[16] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. In Deep Learning and Representation Learning Workshop, NIPS, 2014.
[17] M. Lin, C. Qiang, and S. Yan. Network in network. In ICLR: Conference Track, 2014.
[18] J. Martens. Deep learning via hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 735–742, 2010.
[19] J. Martens and R. Grosse. Optimizing neural networks with kronecker-factored approximate curvature. arXiv preprint arXiv:1503.05671, 2015.
[20] D. Mishkin and J. Matas. All you need is a good init. arXiv preprint arXiv:1511.06422, 2015.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

[22] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992.

[23] T. Raiko, H. Valpola, and Y. LeCun. Deep learning made easier by linear transformations in perceptrons. In International Conference on Artificial Intelligence and Statistics, pages 924–932, 2012.

[24] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286, 2014.
[25] T. Salimans, D. P. Kingma, and M. Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In ICML, 2015.
[26] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In ICLR Workshop Track, 2015.

[27] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, pages 545–560, 2005.

[28] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML, pages 1139–1147, 2013.
# A Neural network architecture for CIFAR-10 experiments
| Layer type | # channels | x, y dimension |
|---|---|---|
| raw RGB input | 3 | 32 |
| ZCA whitening | 3 | 32 |
| Gaussian noise σ = 0.15 | 3 | 32 |
| 3 × 3 conv leaky ReLU | 96 | 32 |
| 3 × 3 conv leaky ReLU | 96 | 32 |
| 3 × 3 conv leaky ReLU | 96 | 32 |
| 2 × 2 max pool, str. 2 | 96 | 16 |
| dropout with p = 0.5 | 96 | 16 |
| 3 × 3 conv leaky ReLU | 192 | 16 |
| 3 × 3 conv leaky ReLU | 192 | 16 |
| 3 × 3 conv leaky ReLU | 192 | 16 |
| 2 × 2 max pool, str. 2 | 192 | 8 |
| dropout with p = 0.5 | 192 | 8 |
| 3 × 3 conv leaky ReLU | 192 | 6 |
| 1 × 1 conv leaky ReLU | 192 | 6 |
| 1 × 1 conv leaky ReLU | 192 | 6 |
| global average pool | 192 | 1 |
| softmax output | 10 | 1 |
Table 1: Neural network architecture for CIFAR-10.
| {
"id": "1512.03385"
} |
1602.07360 | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Recent research on deep neural networks has focused primarily on improving
accuracy. For a given accuracy level, it is typically possible to identify
multiple DNN architectures that achieve that accuracy level. With equivalent
accuracy, smaller DNN architectures offer at least three advantages: (1)
Smaller DNNs require less communication across servers during distributed
training. (2) Smaller DNNs require less bandwidth to export a new model from
the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on
FPGAs and other hardware with limited memory. To provide all of these
advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet
achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters.
Additionally, with model compression techniques we are able to compress
SqueezeNet to less than 0.5MB (510x smaller than AlexNet).
The SqueezeNet architecture is available for download here:
https://github.com/DeepScale/SqueezeNet | http://arxiv.org/pdf/1602.07360 | Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer | cs.CV, cs.AI | In ICLR Format | null | cs.CV | 20160224 | 20161104 | arXiv:1602.07360v4 [cs.CV] 4 Nov 2016
# Under review as a conference paper at ICLR 2017
SQUEEZENET: ALEXNET-LEVEL ACCURACY WITH 50X FEWER PARAMETERS AND <0.5MB MODEL SIZE
Forrest N. Iandola1, Song Han2, Matthew W. Moskewicz1, Khalid Ashraf1, William J. Dally2, Kurt Keutzer1
1DeepScale* & UC Berkeley, 2Stanford University
{forresti, moskewcz, kashraf, keutzer}@eecs.berkeley.edu
{songhan, dally}@stanford.edu
# ABSTRACT
Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet
# 1 INTRODUCTION AND MOTIVATION

Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accuracy, a CNN architecture with fewer parameters has several advantages:
• More efficient distributed training. Communication among servers is the limiting factor to the scalability of distributed CNN training. For distributed data-parallel training, communication overhead is directly proportional to the number of parameters in the model (Iandola et al., 2016). In short, small models train faster due to requiring less communication.

• Less overhead when exporting new models to clients. For autonomous driving, companies such as Tesla periodically copy new models from their servers to customers' cars. This practice is often referred to as an over-the-air update. Consumer Reports has found that the safety of Tesla's Autopilot semi-autonomous driving functionality has incrementally improved with recent over-the-air updates (Consumer Reports, 2016). However, over-the-air updates of today's typical CNN/DNN models can require large data transfers. With AlexNet, this would require 240MB of communication from the server to the car. Smaller models require less communication, making frequent updates more feasible.
• Feasible FPGA and embedded deployment. FPGAs often have less than 10MB1 of on-chip memory and no off-chip memory or storage. For inference, a sufficiently small model could be stored directly on the FPGA instead of being bottlenecked by memory bandwidth (Qiu et al., 2016), while video frames stream through the FPGA in real time. Further, when deploying CNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model could be stored directly on-chip, and smaller models may enable the ASIC to fit on a smaller die.
*http://deepscale.ai
1 For example, the Xilinx Vertex-7 FPGA has a maximum of 8.5 MBytes (i.e. 68 Mbits) of on-chip memory
and does not provide off-chip memory.
As you can see, there are several advantages of smaller CNN architectures. With this in mind, we focus directly on the problem of identifying a CNN architecture with fewer parameters but equivalent accuracy compared to a well-known model. We have discovered such an architecture, which we call SqueezeNet. In addition, we present our attempt at a more disciplined approach to searching the design space for novel CNN architectures.
The rest of the paper is organized as follows. In Section 2 we review the related work. Then, in Sections 3 and 4 we describe and evaluate the SqueezeNet architecture. After that, we turn our attention to understanding how CNN architectural design choices impact model size and accuracy. We gain this understanding by exploring the design space of SqueezeNet-like architectures. In Section 5, we do design space exploration on the CNN microarchitecture, which we define as the organization and dimensionality of individual layers and modules. In Section 6, we do design space exploration on the CNN macroarchitecture, which we define as high-level organization of layers in a CNN. Finally, we conclude in Section 7. In short, Sections 3 and 4 are useful for CNN researchers as well as practitioners who simply want to apply SqueezeNet to a new application. The remaining sections are aimed at advanced researchers who intend to design their own CNN architectures.
# 2 RELATED WORK

# 2.1 MODEL COMPRESSION

The overarching goal of our work is to identify a model that has very few parameters while preserving accuracy. To address this problem, a sensible approach is to take an existing CNN model and compress it in a lossy fashion. In fact, a research community has emerged around the topic of model compression, and several approaches have been reported. A fairly straightforward approach by Denton et al. is to apply singular value decomposition (SVD) to a pretrained CNN model (Denton et al., 2014). Han et al. developed Network Pruning, which begins with a pretrained model, then replaces parameters that are below a certain threshold with zeros to form a sparse matrix, and finally performs a few iterations of training on the sparse CNN (Han et al., 2015b). Recently, Han et al. extended their work by combining Network Pruning with quantization (to 8 bits or less) and Huffman encoding to create an approach called Deep Compression (Han et al., 2015a), and further designed a hardware accelerator called EIE (Han et al., 2016a) that operates directly on the compressed model, achieving substantial speedups and energy savings.
# 2.2 CNN MICROARCHITECTURE

Convolutions have been used in artificial neural networks for at least 25 years; LeCun et al. helped to popularize CNNs for digit recognition applications in the late 1980s (LeCun et al., 1989). In neural networks, convolution filters are typically 3D, with height, width, and channels as the key dimensions. When applied to images, CNN filters typically have 3 channels in their first layer (i.e. RGB), and in each subsequent layer Li the filters have the same number of channels as Li−1 has filters. The early work by LeCun et al. (LeCun et al., 1989) uses 5x5xChannels2 filters, and the recent VGG (Simonyan & Zisserman, 2014) architectures extensively use 3x3 filters. Models such as Network-in-Network (Lin et al., 2013) and the GoogLeNet family of architectures (Szegedy et al., 2014; Ioffe & Szegedy, 2015; Szegedy et al., 2015; 2016) use 1x1 filters in some layers.

With the trend of designing very deep CNNs, it becomes cumbersome to manually select filter dimensions for each layer. To address this, various higher level building blocks, or modules, comprised of multiple convolution layers with a specific fixed organization have been proposed. For example, the GoogLeNet papers propose Inception modules, which are comprised of a number of different dimensionalities of filters, usually including 1x1 and 3x3, plus sometimes 5x5 (Szegedy et al., 2014) and sometimes 1x3 and 3x1 (Szegedy et al., 2015). Many such modules are then combined, perhaps with additional ad-hoc layers, to form a complete network. We use the term CNN microarchitecture to refer to the particular organization and dimensions of the individual modules.

# 2.3 CNN MACROARCHITECTURE

While the CNN microarchitecture refers to individual layers and modules, we define the CNN macroarchitecture as the system-level organization of multiple modules into an end-to-end CNN architecture.
2From now on, we will simply abbreviate HxWxChannels to HxW.
Perhaps the most widely studied CNN macroarchitecture topic in the recent literature is the impact of depth (i.e. number of layers) in networks. Simonyan and Zisserman proposed the VGG (Simonyan & Zisserman, 2014) family of CNNs with 12 to 19 layers and reported that deeper networks produce higher accuracy on the ImageNet-1k dataset (Deng et al., 2009). K. He et al. proposed deeper CNNs with up to 30 layers that deliver even higher ImageNet accuracy (He et al., 2015a).

The choice of connections across multiple layers or modules is an emerging area of CNN macroarchitectural research. Residual Networks (ResNet) (He et al., 2015b) and Highway Networks (Srivastava et al., 2015) each propose the use of connections that skip over multiple layers, for example additively connecting the activations from layer 3 to the activations from layer 6. We refer to these connections as bypass connections. The authors of ResNet provide an A/B comparison of a 34-layer CNN with and without bypass connections; adding bypass connections delivers a 2 percentage-point improvement on Top-5 ImageNet accuracy.

# 2.4 NEURAL NETWORK DESIGN SPACE EXPLORATION

Neural networks (including deep and convolutional NNs) have a large design space, with numerous options for microarchitectures, macroarchitectures, solvers, and other hyperparameters. It seems natural that the community would want to gain intuition about how these factors impact a NN's accuracy (i.e. the shape of the design space). Much of the work on design space exploration (DSE) of NNs has focused on developing automated approaches for finding NN architectures that deliver higher accuracy. These automated DSE approaches include bayesian optimization (Snoek et al., 2012), simulated annealing (Ludermir et al., 2006), randomized search (Bergstra & Bengio, 2012), and genetic algorithms (Stanley & Miikkulainen, 2002). To their credit, each of these papers provides a case in which the proposed DSE approach produces a NN architecture that achieves higher accuracy compared to a representative baseline. However, these papers make no attempt to provide intuition about the shape of the NN design space. Later in this paper, we eschew automated approaches; instead, we refactor CNNs in such a way that we can do principled A/B comparisons to investigate how CNN architectural decisions influence model size and accuracy.
In the following sections, we first propose and evaluate the SqueezeNet architecture with and without model compression. Then, we explore the impact of design choices in microarchitecture and macroarchitecture for SqueezeNet-like CNN architectures.
# 3 SQUEEZENET: PRESERVING ACCURACY WITH FEW PARAMETERS

In this section, we begin by outlining our design strategies for CNN architectures with few parameters. Then, we introduce the Fire module, our new building block out of which to build CNN architectures. Finally, we use our design strategies to construct SqueezeNet, which is comprised mainly of Fire modules.

# 3.1 ARCHITECTURAL DESIGN STRATEGIES

Our overarching objective in this paper is to identify CNN architectures that have few parameters while maintaining competitive accuracy. To achieve this, we employ three main strategies when designing CNN architectures:
Strategy 1. Replace 3x3 ï¬lters with 1x1 ï¬lters. Given a budget of a certain number of convolution ï¬lters, we will choose to make the majority of these ï¬lters 1x1, since a 1x1 ï¬lter has 9X fewer parameters than a 3x3 ï¬lter.
Strategy 2. Decrease the number of input channels to 3x3 filters. Consider a convolution layer that is comprised entirely of 3x3 filters. The total quantity of parameters in this layer is (number of input channels) * (number of filters) * (3*3). So, to maintain a small total number of parameters in a CNN, it is important not only to decrease the number of 3x3 filters (see Strategy 1 above), but also to decrease the number of input channels to the 3x3 filters. We decrease the number of input channels to 3x3 filters using squeeze layers, which we describe in the next section.
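To make the parameter arithmetic behind Strategies 1 and 2 concrete, here is a minimal Python sketch; the layer sizes are illustrative and not taken from the paper:

```python
# Parameter count of a conv layer, ignoring biases:
# (input channels) * (number of filters) * (filter height * filter width)
def conv_params(in_channels, num_filters, k):
    return in_channels * num_filters * k * k

# Strategy 1: a 1x1 filter has 9X fewer parameters than a 3x3 filter.
print(conv_params(64, 64, 3))  # 36,864 parameters
print(conv_params(64, 64, 1))  # 4,096 parameters (9X fewer)

# Strategy 2: squeezing the input channels from 64 to 16 before a 3x3
# layer cuts that layer's parameters by 4x.
print(conv_params(16, 64, 3))  # 9,216 parameters
```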
Strategy 3. Downsample late in the network so that convolution layers have large activation maps. In a convolutional network, each convolution layer produces an output activation map with a spatial resolution that is at least 1x1 and often much larger than 1x1. The height and width of these activation maps are controlled by: (1) the size of the input data (e.g. 256x256 images) and (2) the choice of layers in which to downsample in the CNN architecture.
Figure 1: Microarchitectural view: Organization of convolution filters in the Fire module. In this example, s1x1 = 3, e1x1 = 4, and e3x3 = 4. We illustrate the convolution filters but not the activations.
Most commonly, downsampling is engineered into CNN architectures by setting the (stride > 1) in some of the convolution or pooling layers (e.g. (Szegedy et al., 2014; Simonyan & Zisserman, 2014; Krizhevsky et al., 2012)). If early3 layers in the network have large strides, then most layers will have small activation maps. Conversely, if most layers in the network have a stride of 1, and the strides greater than 1 are concentrated toward the end4 of the network, then many layers in the network will have large activation maps. Our intuition is that large activation maps (due to delayed downsampling) can lead to higher classification accuracy, with all else held equal. Indeed, K. He and H. Sun applied delayed downsampling to four different CNN architectures, and in each case delayed downsampling led to higher classification accuracy (He & Sun, 2015).
Strategies 1 and 2 are about judiciously decreasing the quantity of parameters in a CNN while attempting to preserve accuracy. Strategy 3 is about maximizing accuracy on a limited budget of parameters. Next, we describe the Fire module, which is our building block for CNN architectures that enables us to successfully employ Strategies 1, 2, and 3.
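To see how the placement of stride-2 layers controls activation map sizes, consider this minimal sketch (assuming square inputs and padding such that only strides shrink the maps):

```python
# Each stride-s layer divides the activation map's height and width by s.
def map_sizes(input_size, strides):
    sizes, size = [], input_size
    for s in strides:
        size = size // s
        sizes.append(size)
    return sizes

# Early downsampling: most layers see small activation maps.
print(map_sizes(224, [2, 2, 2, 1, 1, 1]))  # [112, 56, 28, 28, 28, 28]
# Delayed downsampling (Strategy 3): most layers see large maps.
print(map_sizes(224, [1, 1, 1, 2, 2, 2]))  # [224, 224, 224, 112, 56, 28]
```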
# 3.2 THE FIRE MODULE

We define the Fire module as follows. A Fire module is comprised of: a squeeze convolution layer (which has only 1x1 filters), feeding into an expand layer that has a mix of 1x1 and 3x3 convolution filters; we illustrate this in Figure 1. The liberal use of 1x1 filters in Fire modules is an application of Strategy 1 from Section 3.1. We expose three tunable dimensions (hyperparameters) in a Fire module: s1x1, e1x1, and e3x3. In a Fire module, s1x1 is the number of filters in the squeeze layer (all 1x1), e1x1 is the number of 1x1 filters in the expand layer, and e3x3 is the number of 3x3 filters in the expand layer. When we use Fire modules we set s1x1 to be less than (e1x1 + e3x3), so the squeeze layer helps to limit the number of input channels to the 3x3 filters, as per Strategy 2 from Section 3.1.
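The Fire module translates directly into code. Below is a minimal PyTorch re-implementation for illustration; the released models are Caffe configuration files:

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_channels, s1x1, e1x1, e3x3):
        super().__init__()
        # squeeze layer: 1x1 filters only; s1x1 < (e1x1 + e3x3) per Strategy 2
        self.squeeze = nn.Conv2d(in_channels, s1x1, kernel_size=1)
        # expand layer: a mix of 1x1 and 3x3 filters (the 3x3 branch is
        # zero-padded by 1 pixel so both branches keep the same height/width)
        self.expand1x1 = nn.Conv2d(s1x1, e1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(s1x1, e3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # concatenate the two expand branches along the channel dimension
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# fire2 from Table 1: s1x1=16, e1x1=64, e3x3=64 -> 128 output channels
fire2 = Fire(96, 16, 64, 64)
print(fire2(torch.randn(1, 96, 55, 55)).shape)  # torch.Size([1, 128, 55, 55])
```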
# 3.3 THE SQUEEZENET ARCHITECTURE

We now describe the SqueezeNet CNN architecture. We illustrate in Figure 2 that SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), ending with a final conv layer (conv10). We gradually increase the number of filters per fire module from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10; these relatively late placements of pooling are per Strategy 3 from Section 3.1. We present the full SqueezeNet architecture in Table 1.
3 In our terminology, an "early" layer is close to the input data.
4 In our terminology, the "end" of the network is the classifier.
Figure 2: Macroarchitectural view of our SqueezeNet architecture. Left: SqueezeNet (Section 3.3); Middle: SqueezeNet with simple bypass (Section 6); Right: SqueezeNet with complex bypass (Section 6).
# 3.3.1 OTHER SQUEEZENET DETAILS

For brevity, we have omitted a number of details and design choices about SqueezeNet from Table 1 and Figure 2. We provide these design choices in the following. The intuition behind these choices may be found in the papers cited below.
• So that the output activations from 1x1 and 3x3 filters have the same height and width, we add a 1-pixel border of zero-padding in the input data to 3x3 filters of expand modules.
• ReLU (Nair & Hinton, 2010) is applied to activations from squeeze and expand layers.
• Dropout (Srivastava et al., 2014) with a ratio of 50% is applied after the fire9 module.
• Note the lack of fully-connected layers in SqueezeNet; this design choice was inspired by the NiN (Lin et al., 2013) architecture.
• When training SqueezeNet, we begin with a learning rate of 0.04, and we linearly decrease the learning rate throughout training, as described in (Mishkin et al., 2016). For details on the training protocol (e.g. batch size, learning rate, parameter initialization), please refer to our Caffe-compatible configuration files located here: https://github.com/DeepScale/SqueezeNet.
• The Caffe framework does not natively support a convolution layer that contains multiple filter resolutions (e.g. 1x1 and 3x3) (Jia et al., 2014). To get around this, we implement our expand layer with two separate convolution layers: a layer with 1x1 filters, and a layer with 3x3 filters. Then, we concatenate the outputs of these layers together in the channel dimension. This is numerically equivalent to implementing one layer that contains both 1x1 and 3x3 filters; a minimal sketch of this equivalence follows this list.
We released the SqueezeNet configuration files in the format defined by the Caffe CNN framework. However, in addition to Caffe, several other CNN frameworks have emerged, including MXNet (Chen et al., 2015a), Chainer (Tokui et al., 2015), Keras (Chollet, 2016), and Torch (Collobert et al., 2011). Each of these has its own native format for representing a CNN architecture. That said, most of these libraries use the same underlying computational back-ends such as cuDNN (Chetlur et al., 2014) and MKL-DNN (Das et al., 2016). The research community has
ported the SqueezeNet CNN architecture for compatibility with a number of other CNN software frameworks:

• MXNet (Chen et al., 2015a) port of SqueezeNet: (Haria, 2016)
• Chainer (Tokui et al., 2015) port of SqueezeNet: (Bell, 2016)
• Keras (Chollet, 2016) port of SqueezeNet: (DT42, 2016)
• Torch (Collobert et al., 2011) port of SqueezeNet's Fire Modules: (Waghmare, 2016)

# 4 EVALUATION OF SQUEEZENET

We now turn our attention to evaluating SqueezeNet. In each of the CNN model compression papers reviewed in Section 2.1, the goal was to compress an AlexNet (Krizhevsky et al., 2012) model that was trained to classify images using the ImageNet (Deng et al., 2009) (ILSVRC 2012) dataset. Therefore, we use AlexNet5 and the associated model compression results as a basis for comparison when evaluating SqueezeNet.

Table 1: SqueezeNet architectural dimensions. (The formatting of this table was inspired by the Inception2 paper (Ioffe & Szegedy, 2015).)
layer name/type | output size | filter size / stride | depth | s1x1 | e1x1 | e3x3 | sparsity (s1x1 / e1x1 / e3x3) | # bits | # params before pruning | # params after pruning
input image | 224x224x3 | | | | | | | | |
conv1 | 111x111x96 | 7x7/2 (x96) | 1 | | | | 100% (7x7) | 6bit | 14,208 | 14,208
maxpool1 | 55x55x96 | 3x3/2 | 0 | | | | | | |
fire2 | 55x55x128 | | 2 | 16 | 64 | 64 | 100% / 100% / 33% | 6bit | 11,920 | 5,746
fire3 | 55x55x128 | | 2 | 16 | 64 | 64 | 100% / 100% / 33% | 6bit | 12,432 | 6,258
fire4 | 55x55x256 | | 2 | 32 | 128 | 128 | 100% / 100% / 33% | 6bit | 45,344 | 20,646
maxpool4 | 27x27x256 | 3x3/2 | 0 | | | | | | |
fire5 | 27x27x256 | | 2 | 32 | 128 | 128 | 100% / 100% / 33% | 6bit | 49,440 | 24,742
fire6 | 27x27x384 | | 2 | 48 | 192 | 192 | 100% / 50% / 33% | 6bit | 104,880 | 44,700
fire7 | 27x27x384 | | 2 | 48 | 192 | 192 | 50% / 100% / 33% | 6bit | 111,024 | 46,236
fire8 | 27x27x512 | | 2 | 64 | 256 | 256 | 100% / 50% / 33% | 6bit | 188,992 | 77,581
maxpool8 | 13x13x512 | 3x3/2 | 0 | | | | | | |
fire9 | 13x13x512 | | 2 | 64 | 256 | 256 | 50% / 100% / 30% | 6bit | 197,184 | 77,581
conv10 | 13x13x1000 | 1x1/1 (x1000) | 1 | | | | 20% (3x3) | 6bit | 513,000 | 103,400
avgpool10 | 1x1x1000 | 13x13/1 | 0 | | | | | | |
total | | | | | | | | | 1,248,424 | 421,098
In Table 2, we review SqueezeNet in the context of recent model compression results. The SVD-based approach is able to compress a pretrained AlexNet model by a factor of 5x, while diminishing top-1 accuracy to 56.0% (Denton et al., 2014). Network Pruning achieves a 9x reduction in model size while maintaining the baseline of 57.2% top-1 and 80.3% top-5 accuracy on ImageNet (Han et al., 2015b). Deep Compression achieves a 35x reduction in model size while still maintaining the baseline accuracy level (Han et al., 2015a). Now, with SqueezeNet, we achieve a 50X reduction in model size compared to AlexNet, while meeting or exceeding the top-1 and top-5 accuracy of AlexNet. We summarize all of the aforementioned results in Table 2.
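As a sanity check on these numbers, here is a minimal sketch connecting Table 1's parameter total to the uncompressed model size quoted in Table 2 (assuming 4 bytes per 32-bit parameter and binary megabytes):

```python
# model size in bytes = params * bits / 8; report in binary MB (2**20 bytes)
def model_size_mb(num_params, bits_per_param=32):
    return num_params * bits_per_param / 8 / 2**20

# Table 1's total parameter count for uncompressed SqueezeNet:
print(model_size_mb(1_248_424))  # ~4.8 MB, matching Table 2
# Note: the compressed entries in Table 2 cannot be reproduced this way,
# since Deep Compression also stores sparse-matrix indices and codebooks.
```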
It appears that we have surpassed the state-of-the-art results from the model compression community: even when using uncompressed 32-bit values to represent the model, SqueezeNet has a 1.4x smaller model size than the best efforts from the model compression community while maintaining or exceeding the baseline accuracy. Until now, an open question has been: are small models amenable to compression, or do small models "need" all of the representational power afforded by dense floating-point values?
5 Our baseline is bvlc alexnet from the Caffe codebase (Jia et al., 2014).
Table 2: Comparing SqueezeNet to model compression approaches. By model size, we mean the number of bytes required to store all of the parameters in the trained model.

CNN architecture | Compression Approach | Data Type | Original → Compressed Model Size | Reduction in Model Size vs. AlexNet | Top-1 ImageNet Accuracy | Top-5 ImageNet Accuracy
AlexNet | None (baseline) | 32 bit | 240MB | 1x | 57.2% | 80.3%
AlexNet | SVD (Denton et al., 2014) | 32 bit | 240MB → 48MB | 5x | 56.0% | 79.4%
AlexNet | Network Pruning (Han et al., 2015b) | 32 bit | 240MB → 27MB | 9x | 57.2% | 80.3%
AlexNet | Deep Compression (Han et al., 2015a) | 5-8 bit | 240MB → 6.9MB | 35x | 57.2% | 80.3%
SqueezeNet (ours) | None | 32 bit | 4.8MB | 50x | 57.5% | 80.3%
SqueezeNet (ours) | Deep Compression | 8 bit | 4.8MB → 0.66MB | 363x | 57.5% | 80.3%
SqueezeNet (ours) | Deep Compression | 6 bit | 4.8MB → 0.47MB | 510x | 57.5% | 80.3%
To find out, we applied Deep Compression (Han et al., 2015a) to SqueezeNet, using 33% sparsity6 and 8-bit quantization. This yields a 0.66 MB model (363x smaller than 32-bit AlexNet) with equivalent accuracy to AlexNet. Further, applying Deep Compression with 6-bit quantization and 33% sparsity on SqueezeNet, we produce a 0.47MB model (510x smaller than 32-bit AlexNet) with equivalent accuracy. Our small model is indeed amenable to compression.
In addition, these results demonstrate that Deep Compression (Han et al., 2015a) not only works well on CNN architectures with many parameters (e.g. AlexNet and VGG), but it is also able to compress the already compact, fully convolutional SqueezeNet architecture. Deep Compression compressed SqueezeNet by 10x while preserving the baseline accuracy. In summary: by combining CNN architectural innovation (SqueezeNet) with state-of-the-art compression techniques (Deep Compression), we achieved a 510x reduction in model size with no decrease in accuracy compared to the baseline.
Finally, note that Deep Compression (Han et al., 2015b) uses a codebook as part of its scheme for quantizing CNN parameters to 6- or 8-bits of precision. Therefore, on most commodity processors, it is not trivial to achieve a speedup of 32/6 = 5.3x with 6-bit quantization using the scheme developed in Deep Compression. However, Han et al. developed custom hardware -- Efficient Inference Engine (EIE) -- that can compute codebook-quantized CNNs more efficiently (Han et al., 2016a). In addition, in the months since we released SqueezeNet, P. Gysel developed a strategy called Ristretto for linearly quantizing SqueezeNet to 8 bits (Gysel, 2016). Specifically, Ristretto does computation in 8 bits, and it stores parameters and activations in 8-bit data types. Using the Ristretto strategy for 8-bit computation in SqueezeNet inference, Gysel observed less than 1 percentage-point of drop in accuracy when using 8-bit instead of 32-bit data types.
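To illustrate the kind of linear 8-bit quantization that Ristretto performs, here is a minimal NumPy sketch; it shows the idea only and is not Gysel's implementation:

```python
import numpy as np

def quantize_linear_8bit(w):
    # map the symmetric weight range onto int8 with a single scale factor
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 16, 3, 3).astype(np.float32)  # a 3x3 conv's weights
q, scale = quantize_linear_8bit(w)
print(np.abs(w - dequantize(q, scale)).max())  # small reconstruction error
```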
# 5 CNN MICROARCHITECTURE DESIGN SPACE EXPLORATION
So far, we have proposed architectural design strategies for small models, followed these principles to create SqueezeNet, and discovered that SqueezeNet is 50x smaller than AlexNet with equivalent accuracy. However, SqueezeNet and other models reside in a broad and largely unexplored design space of CNN architectures. Now, in Sections 5 and 6, we explore several aspects of the design space. We divide this architectural exploration into two main topics: microarchitectural exploration (per-module layer dimensions and conï¬gurations) and macroarchitectural exploration (high-level end-to-end organization of modules and other layers).
In this section, we design and execute experiments with the goal of providing intuition about the shape of the microarchitectural design space with respect to the design strategies that we proposed in Section 3.1. Note that our goal here is not to maximize accuracy in every experiment, but rather to understand the impact of CNN architectural choices on model size and accuracy.
6 Note that, due to the storage overhead of storing sparse matrix indices, 33% sparsity leads to somewhat less than a 3x decrease in model size.
Figure 3: Microarchitectural design space exploration. (a) Exploring the impact of the squeeze ratio (SR) on model size and accuracy. (b) Exploring the impact of the ratio of 3x3 filters in expand layers (pct3x3) on model size and accuracy.
# 5.1 CNN MICROARCHITECTURE METAPARAMETERS

In SqueezeNet, each Fire module has three dimensional hyperparameters that we defined in Section 3.2: s1x1, e1x1, and e3x3. SqueezeNet has 8 Fire modules with a total of 24 dimensional hyperparameters. To do broad sweeps of the design space of SqueezeNet-like architectures, we define the following set of higher level metaparameters which control the dimensions of all Fire modules in a CNN. We define basee as the number of expand filters in the first Fire module in a CNN. After every freq Fire modules, we increase the number of expand filters by incre. In other words, for Fire module i, the number of expand filters is ei = basee + (incre × ⌊i/freq⌋). In the expand layer of a Fire module, some filters are 1x1 and some are 3x3; we define ei = ei,1x1 + ei,3x3 with pct3x3 (in the range [0, 1], shared over all Fire modules) as the percentage of expand filters that are 3x3. In other words, ei,3x3 = ei × pct3x3, and ei,1x1 = ei × (1 − pct3x3). Finally, we define the number of filters in the squeeze layer of a Fire module using a metaparameter called the squeeze ratio (SR) (again, in the range [0, 1], shared by all Fire modules): si,1x1 = SR × ei (or equivalently si,1x1 = SR × (ei,1x1 + ei,3x3)). SqueezeNet (Table 1) is an example architecture that we generated with the aforementioned set of metaparameters. Specifically, SqueezeNet has the following metaparameters: basee = 128, incre = 128, pct3x3 = 0.5, freq = 2, and SR = 0.125.
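The metaparameter scheme is easy to check in code. The following minimal sketch reproduces the fire2 through fire9 dimensions of SqueezeNet itself (compare with Table 1):

```python
import math

def fire_dims(base_e=128, incr_e=128, freq=2, pct3x3=0.5, sr=0.125, modules=8):
    dims = []
    for i in range(modules):
        e = base_e + incr_e * math.floor(i / freq)  # ei = basee + incre * floor(i/freq)
        e3x3 = int(e * pct3x3)                      # ei,3x3 = ei * pct3x3
        e1x1 = e - e3x3                             # ei,1x1 = ei * (1 - pct3x3)
        s1x1 = int(sr * e)                          # si,1x1 = SR * ei
        dims.append((s1x1, e1x1, e3x3))
    return dims

for name, d in zip(["fire%d" % i for i in range(2, 10)], fire_dims()):
    print(name, d)
# fire2 (16, 64, 64) ... fire9 (64, 256, 256), matching Table 1
```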
# 5.2 SQUEEZE RATIO

In Section 3.1, we proposed decreasing the number of parameters by using squeeze layers to decrease the number of input channels seen by 3x3 filters. We defined the squeeze ratio (SR) as the ratio between the number of filters in squeeze layers and the number of filters in expand layers. We now design an experiment to investigate the effect of the squeeze ratio on model size and accuracy.
In these experiments, we use SqueezeNet (Figure 2) as a starting point. As in SqueezeNet, these experiments use the following metaparameters: basee = 128, incre = 128, pct3x3 = 0.5, and freq = 2. We train multiple models, where each model has a different squeeze ratio (SR)7 in the range [0.125, 1.0]. In Figure 3(a), we show the results of this experiment, where each point on the graph is an independent model that was trained from scratch. SqueezeNet is the SR=0.125 point in this figure.8 From this figure, we learn that increasing SR beyond 0.125 can further increase ImageNet top-5 accuracy from 80.3% (i.e. AlexNet-level) with a 4.8MB model to 86.0% with a 19MB model. Accuracy plateaus at 86.0% with SR=0.75 (a 19MB model), and setting SR=1.0 further increases model size without improving accuracy.
# 5.3 TRADING OFF 1X1 AND 3X3 FILTERS

In Section 3.1, we proposed decreasing the number of parameters in a CNN by replacing some 3x3 filters with 1x1 filters. An open question is, how important is spatial resolution in CNN filters?
7 Note that, for a given model, all Fire layers share the same squeeze ratio.
8 Note that we named it SqueezeNet because it has a low squeeze ratio (SR). That is, the squeeze layers in SqueezeNet have 0.125x the number of filters as the expand layers.
The VGG (Simonyan & Zisserman, 2014) architectures have 3x3 spatial resolution in most layers' filters; GoogLeNet (Szegedy et al., 2014) and Network-in-Network (NiN) (Lin et al., 2013) have 1x1 filters in some layers. In GoogLeNet and NiN, the authors simply propose a specific quantity of 1x1 and 3x3 filters without further analysis.9 Here, we attempt to shed light on how the proportion of 1x1 and 3x3 filters affects model size and accuracy.
We use the following metaparameters in this experiment: basee = incre = 128, freq = 2, SR = 0.500, and we vary pct3x3 from 1% to 99%. In other words, each Fire module's expand layer has a predefined number of filters partitioned between 1x1 and 3x3, and here we turn the knob on these filters from "mostly 1x1" to "mostly 3x3". As in the previous experiment, these models have 8 Fire modules, following the same organization of layers as in Figure 2. We show the results of this experiment in Figure 3(b). Note that the 13MB models in Figure 3(a) and Figure 3(b) are the same architecture: SR = 0.500 and pct3x3 = 50%. We see in Figure 3(b) that the top-5 accuracy plateaus at 85.6% using 50% 3x3 filters, and further increasing the percentage of 3x3 filters leads to a larger model size but provides no improvement in accuracy on ImageNet.
# 6 CNN MACROARCHITECTURE DESIGN SPACE EXPLORATION

So far we have explored the design space at the microarchitecture level, i.e. the contents of individual modules of the CNN. Now, we explore design decisions at the macroarchitecture level concerning the high-level connections among Fire modules. Inspired by ResNet (He et al., 2015b), we explored three different architectures:
• Vanilla SqueezeNet (as per the prior sections).
• SqueezeNet with simple bypass connections between some Fire modules. (Inspired by (Srivastava et al., 2015; He et al., 2015b).)
• SqueezeNet with complex bypass connections between the remaining Fire modules.
We illustrate these three variants of SqueezeNet in Figure 2.
Our simple bypass architecture adds bypass connections around Fire modules 3, 5, 7, and 9, requiring these modules to learn a residual function between input and output. As in ResNet, to implement a bypass connection around Fire3, we set the input to Fire4 equal to (output of Fire2 + output of Fire3), where the + operator is elementwise addition. This changes the regularization applied to the parameters of these Fire modules, and, as per ResNet, can improve the final accuracy and/or ability to train the full model.

One limitation is that, in the straightforward case, the number of input channels and the number of output channels have to be the same; as a result, only half of the Fire modules can have simple bypass connections, as shown in the middle diagram of Fig 2. When the "same number of channels" requirement can't be met, we use a complex bypass connection, as illustrated on the right of Figure 2. While a simple bypass is "just a wire," we define a complex bypass as a bypass that includes a 1x1 convolution layer with the number of filters set equal to the number of output channels that are needed. Note that complex bypass connections add extra parameters to the model, while simple bypass connections do not.
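A minimal PyTorch sketch of the two bypass variants follows; for brevity, each Fire module is stood in for by a single convolution with the same input and output channel counts, since only the skip wiring matters here:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 128, 55, 55)  # e.g. the output of fire2

# Simple bypass: "just a wire" -- elementwise addition, so the module's input
# and output channel counts must match (fire3 maps 128 -> 128 channels).
fire3 = nn.Conv2d(128, 128, kernel_size=3, padding=1)  # stand-in for a Fire module
out3 = fire3(x) + x  # input to fire4 = (output of fire2) + (output of fire3)

# Complex bypass: a 1x1 convolution on the skip path resizes the channel count
# when the two sides differ (fire4 maps 128 -> 256), adding extra parameters.
fire4 = nn.Conv2d(128, 256, kernel_size=3, padding=1)  # stand-in for a Fire module
skip = nn.Conv2d(128, 256, kernel_size=1)
out4 = fire4(out3) + skip(out3)
print(out3.shape, out4.shape)  # [1, 128, 55, 55] and [1, 256, 55, 55]
```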
In addition to changing the regularization, it is intuitive to us that adding bypass connections would help to alleviate the representational bottleneck introduced by squeeze layers. In SqueezeNet, the squeeze ratio (SR) is 0.125, meaning that every squeeze layer has 8x fewer output channels than the accompanying expand layer. Due to this severe dimensionality reduction, a limited amount of information can pass through squeeze layers. However, by adding bypass connections to SqueezeNet, we open up avenues for information to flow around the squeeze layers.

We trained SqueezeNet with the three macroarchitectures in Figure 2 and compared the accuracy and model size in Table 3. We fixed the microarchitecture to match SqueezeNet as described in Table 1 throughout the macroarchitecture exploration. Complex and simple bypass connections both yielded an accuracy improvement over the vanilla SqueezeNet architecture. Interestingly, the simple bypass enabled a higher accuracy improvement than the complex bypass. Adding the
9 To be clear, each filter is 1x1xChannels or 3x3xChannels, which we abbreviate to 1x1 and 3x3.
Table 3: SqueezeNet accuracy and model size using different macroarchitecture configurations

Architecture | Top-1 Accuracy | Top-5 Accuracy | Model Size
Vanilla SqueezeNet | 57.5% | 80.3% | 4.8MB
SqueezeNet + Simple Bypass | 60.4% | 82.5% | 4.8MB
SqueezeNet + Complex Bypass | 58.8% | 82.0% | 7.7MB
simple bypass connections yielded an increase of 2.9 percentage-points in top-1 accuracy and 2.2 percentage-points in top-5 accuracy without increasing model size.
# 7 CONCLUSIONS

In this paper, we have proposed steps toward a more disciplined approach to the design-space exploration of convolutional neural networks. Toward this goal we have presented SqueezeNet, a CNN architecture that has 50x fewer parameters than AlexNet and maintains AlexNet-level accuracy on ImageNet. We also compressed SqueezeNet to less than 0.5MB, or 510x smaller than AlexNet without compression. Since we released this paper as a technical report in 2016, Song Han and his collaborators have experimented further with SqueezeNet and model compression. Using a new approach called Dense-Sparse-Dense (DSD) (Han et al., 2016b), Han et al. use model compression during training as a regularizer to further improve accuracy, producing a compressed set of SqueezeNet parameters that is 1.2 percentage-points more accurate on ImageNet-1k, and also producing an uncompressed set of SqueezeNet parameters that is 4.3 percentage-points more accurate, compared to our results in Table 2.
We mentioned near the beginning of this paper that small models are more amenable to on-chip implementations on FPGAs. Since we released the SqueezeNet model, Gschwend has developed a variant of SqueezeNet and implemented it on an FPGA (Gschwend, 2016). As we anticipated, Gschwend was able to store the parameters of a SqueezeNet-like model entirely within the FPGA and eliminate the need for off-chip memory accesses to load model parameters.

In the context of this paper, we focused on ImageNet as a target dataset. However, it has become common practice to apply ImageNet-trained CNN representations to a variety of applications such as fine-grained object recognition (Zhang et al., 2013; Donahue et al., 2013), logo identification in images (Iandola et al., 2015), and generating sentences about images (Fang et al., 2015). ImageNet-trained CNNs have also been applied to a number of applications pertaining to autonomous driving, including pedestrian and vehicle detection in images (Iandola et al., 2014; Girshick et al., 2015; Ashraf et al., 2016) and videos (Chen et al., 2015b), as well as segmenting the shape of the road (Badrinarayanan et al., 2015). We think SqueezeNet will be a good candidate CNN architecture for a variety of applications, especially those in which small model size is of importance.

SqueezeNet is one of several new CNNs that we have discovered while broadly exploring the design space of CNN architectures. We hope that SqueezeNet will inspire the reader to consider and explore the broad range of possibilities in the design space of CNN architectures and to perform that exploration in a more systematic manner.
# REFERENCES
Khalid Ashraf, Bichen Wu, Forrest N. Iandola, Matthew W. Moskewicz, and Kurt Keutzer. Shallow networks for high-accuracy road object-detection. arXiv:1606.01561, 2016.
Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv:1511.00561, 2015.

Eddie Bell. An implementation of SqueezeNet in Chainer. https://github.com/ejlb/squeezenet-chainer, 2016.

J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. JMLR, 2012.
Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. Mxnet: A ï¬exible and efï¬cient machine learning library for heterogeneous distributed systems. arXiv:1512.01274, 2015a.
Xiaozhi Chen, Kaustav Kundu, Yukun Zhu, Andrew G Berneshawi, Huimin Ma, Sanja Fidler, and Raquel Urtasun. 3d object proposals for accurate object class detection. In NIPS, 2015b.
Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: efficient primitives for deep learning. arXiv:1410.0759, 2014.
Francois Chollet. Keras: Deep learning library for theano and tensorï¬ow. https://keras.io, 2016.
Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environment for machine learning. In NIPS BigLearn Workshop, 2011.
Consumer Reports. Tesla's new autopilot: Better but still needs improvement. http://www.consumerreports.org/tesla/tesla-new-autopilot-better-but-needs-improvement, 2016.

Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, Karthikeyan Vaidyanathan, Srinivas Sridharan, Dhiraj D. Kalamkar, Bharat Kaul, and Pradeep Dubey. Distributed deep learning using synchronous stochastic gradient descent. arXiv:1602.06709, 2016.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
E.L Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, 2014.

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. arXiv:1310.1531, 2013.

DT42. SqueezeNet Keras implementation. https://github.com/DT42/squeezenet_demo, 2016.
Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. From captions to visual concepts and back. In CVPR, 2015.
Ross B. Girshick, Forrest N. Iandola, Trevor Darrell, and Jitendra Malik. Deformable part models are convolutional neural networks. In CVPR, 2015.
David Gschwend. Zynqnet: An fpga-accelerated embedded convolutional neural network. Masterâs thesis, Swiss Federal Institute of Technology Zurich (ETH-Zurich), 2016.
Philipp Gysel. Ristretto: Hardware-oriented approximation of convolutional neural networks. arXiv:1605.06402, 2016.
S. Han, H. Mao, and W. Dally. Deep compression: Compressing DNNs with pruning, trained quantization and huffman coding. arxiv:1510.00149v3, 2015a.
S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efï¬cient neural networks. In NIPS, 2015b.
Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. EIE: Efficient inference engine on compressed deep neural network. International Symposium on Computer Architecture (ISCA), 2016a.
Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Shijian Tang, Erich Elsen, Bryan Catanzaro, John Tran, and William J. Dally. Dsd: Regularizing deep neural networks with dense-sparse-dense training ï¬ow. arXiv:1607.04381, 2016b.
Guo Haria. Convert SqueezeNet to MXNet. https://github.com/haria/SqueezeNet/commit/0cf57539375fd5429275af36fc94c774503427c3, 2016.
K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In ICCV, 2015a.
Kaiming He and Jian Sun. Convolutional neural networks at constrained time cost. In CVPR, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv:1512.03385, 2015b.
Forrest N. Iandola, Matthew W. Moskewicz, Sergey Karayev, Ross B. Girshick, Trevor Darrell, and Kurt Keutzer. Densenet: Implementing efï¬cient convnet descriptor pyramids. arXiv:1404.1869, 2014.
Forrest N. Iandola, Anting Shen, Peter Gao, and Kurt Keutzer. DeepLogo: Hitting logo recognition with the deep neural network hammer. arXiv:1510.02131, 2015.
Forrest N. Iandola, Khalid Ashraf, Matthew W. Moskewicz, and Kurt Keutzer. FireCaffe: near-linear acceleration of deep neural network training on compute clusters. In CVPR, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. JMLR, 2015.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012.

Y. LeCun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, and L.D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv:1312.4400, 2013.
T.B. Ludermir, A. Yamazaki, and C. Zanchettin. An optimization methodology for neural network weights and architectures. IEEE Trans. Neural Networks, 2006.
Dmytro Mishkin, Nikolay Sergievskiy, and Jiri Matas. Systematic evaluation of cnn advances on the imagenet. arXiv:1606.02228, 2016.
Vinod Nair and Geoffrey E. Hinton. Rectiï¬ed linear units improve restricted boltzmann machines. In ICML, 2010.
Jiantao Qiu, Jie Wang, Song Yao, Kaiyuan Guo, Boxun Li, Erjin Zhou, Jincheng Yu, Tianqi Tang, Ningyi Xu, Sen Song, Yu Wang, and Huazhong Yang. Going deeper with embedded fpga platform for convolutional neural network. In ACM International Symposium on FPGA, 2016.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
J. Snoek, H. Larochelle, and R.P. Adams. Practical bayesian optimization of machine learning algorithms. In NIPS, 2012.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overï¬tting. JMLR, 2014.
R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. In ICML Deep Learning Workshop, 2015.
K.O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Neurocomputing, 2002.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv:1409.4842, 2014.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv:1512.00567, 2015.
Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv:1602.07261, 2016.
S. Tokui, K. Oono, S. Hido, and J. Clayton. Chainer: a next-generation open source framework for deep learning. In NIPS Workshop on Machine Learning Systems (LearningSys), 2015.
Sagar M Waghmare. FireModule.lua. https://github.com/Element-Research/dpnn/blob/master/FireModule.lua, 2016.
Ning Zhang, Ryan Farrell, Forrest Iandola, and Trevor Darrell. Deformable part descriptors for ï¬ne-grained recognition and attribute prediction. In ICCV, 2013.
| {
"id": "1512.00567"
} |
1602.07261 | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Very deep convolutional networks have been central to the largest advances in
image recognition performance in recent years. One example is the Inception
architecture that has been shown to achieve very good performance at relatively
low computational cost. Recently, the introduction of residual connections in
conjunction with a more traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar to the
latest generation Inception-v3 network. This raises the question of whether
there are any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that training with residual
connections accelerates the training of Inception networks significantly. There
is also some evidence of residual Inception networks outperforming similarly
expensive Inception networks without residual connections by a thin margin. We
also present several new streamlined architectures for both residual and
non-residual Inception networks. These variations improve the single-frame
recognition performance on the ILSVRC 2012 classification task significantly.
We further demonstrate how proper activation scaling stabilizes the training of
very wide residual Inception networks. With an ensemble of three residual and
one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the
ImageNet classification (CLS) challenge | http://arxiv.org/pdf/1602.07261 | Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi | cs.CV | null | null | cs.CV | 20160223 | 20160823 |
# Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

Christian Szegedy (szegedy@google.com), Sergey Ioffe (sioffe@google.com), Vincent Vanhoucke (vanhoucke@google.com), Alex Alemi (alemi@google.com)
Google Inc., 1600 Amphitheatre Pkwy, Mountain View, CA
# Abstract
Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there is any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08% top-5 error on the test set of the ImageNet classification (CLS) challenge.
# 1. Introduction

Since the 2012 ImageNet competition [11] winning entry by Krizhevsky et al [8], their network "AlexNet" has been successfully applied to a larger variety of computer vision tasks, for example to object-detection [4], segmentation [10], human pose estimation [17], video classification [7], object tracking [18], and superresolution [3]. These examples are but a few of all the applications to which deep convolutional networks have been very successfully applied ever since.

In this work we study the combination of the two most recent ideas: Residual connections introduced by He et al. in [5] and the latest revised version of the Inception architecture [15]. In [5], it is argued that residual connections are of inherent importance for training very deep architectures. Since Inception networks tend to be very deep, it is natural to replace the filter concatenation stage of the Inception architecture with residual connections. This would allow Inception to reap all the benefits of the residual approach while retaining its computational efficiency.

Besides a straightforward integration, we have also studied whether Inception itself can be made more efficient by making it deeper and wider. For that purpose, we designed a new version named Inception-v4 which has a more uniform simplified architecture and more inception modules than Inception-v3. Historically, Inception-v3 had inherited a lot of the baggage of the earlier incarnations. The technical constraints chiefly came from the need for partitioning the model for distributed training using DistBelief [2]. Now, after migrating our training setup to TensorFlow [1] these constraints have been lifted, which allowed us to simplify the architecture significantly. The details of that simplified architecture are described in Section 3.
In this report, we will compare the two pure Inception variants, Inception-v3 and v4, with similarly expensive hybrid Inception-ResNet versions. Admittedly, those models were picked in a somewhat ad hoc manner with the main constraint being that the parameters and computational complexity of the models should be somewhat similar to the cost of the non-residual models. In fact we have tested bigger and wider Inception-ResNet variants and they performed very similarly on the ImageNet classification challenge [11] dataset.
The last experiment reported here is an evaluation of an ensemble of all the best performing models presented here. As it was apparent that both Inception-v4 and Inception-ResNet-v2 performed similarly well, exceeding state-of-the-art single frame performance on the ImageNet validation dataset, we wanted to see how a combination of those pushes the state of the art on this well studied dataset. Surprisingly, we found that gains on the single-frame performance do not translate into similarly large gains on ensembled performance. Nonetheless, it still allows us to report 3.1% top-5 error on the validation set with four models ensembled setting a new state of the art, to our best knowledge.

In the last section, we study some of the classification failures and conclude that the ensemble still has not reached the label noise of the annotations on this dataset and there is still room for improvement for the predictions.
# 2. Related Work
Convolutional networks have become popular in large scale image recognition tasks after Krizhevsky et al. [8]. Some of the next important milestones were Network-in-network [9] by Lin et al., VGGNet [12] by Simonyan et al. and GoogLeNet (Inception-v1) [14] by Szegedy et al.

Residual connections were introduced by He et al. in [5] in which they give convincing theoretical and practical evidence for the advantages of utilizing additive merging of signals both for image recognition, and especially for object detection. The authors argue that residual connections are inherently necessary for training very deep convolutional models. Our findings do not seem to support this view, at least for image recognition. However it might require more measurement points with deeper architectures to understand the true extent of beneficial aspects offered by residual connections. In the experimental section we demonstrate that it is not very difficult to train competitive very deep networks without utilizing residual connections. However the use of residual connections seems to improve the training speed greatly, which is alone a great argument for their use.

The Inception deep convolutional architecture was introduced in [14] and was called GoogLeNet or Inception-v1 in our exposition. Later the Inception architecture was refined in various ways, first by the introduction of batch normalization [6] (Inception-v2) by Ioffe et al. Later the architecture was improved by additional factorization ideas in the third iteration [15] which will be referred to as Inception-v3 in this report.
Figure 1. Residual connections as introduced in He et al. [5].
Figure 2. Optimized version of ResNet connections by [5] to shield computation.
# 3. Architectural Choices
# 3.1. Pure Inception blocks
Our older Inception models used to be trained in a partitioned manner, where each replica was partitioned into multiple sub-networks in order to be able to fit the whole model in memory. However, the Inception architecture is highly tunable, meaning that there are a lot of possible changes to the number of filters in the various layers that do not affect the quality of the fully trained network. In order to optimize the training speed, we used to tune the layer sizes carefully in order to balance the computation between the various model sub-networks. In contrast, with the introduction of TensorFlow our most recent models can be trained without partitioning the replicas. This is enabled in part by recent optimizations of memory used by backpropagation, achieved by carefully considering what tensors are needed for gradient computation and structuring the computation to reduce the number of such tensors.

Historically, we have been relatively conservative about changing the architectural choices and restricted our experiments to varying isolated network components while keeping the rest of the network stable. Not simplifying earlier choices resulted in networks that looked more complicated than they needed to be. In our newer experiments, for Inception-v4 we decided to shed this unnecessary baggage and made uniform choices for the Inception blocks for each grid size. Please refer to Figure 9 for the large scale structure of the Inception-v4 network and Figures 3, 4, 5, 6, 7 and 8 for the detailed structure of its components. All the convolutions not marked with "V" in the figures are same-padded, meaning that their output grid matches the size of their input. Convolutions marked with "V" are valid padded, meaning that the input patch of each unit is fully contained in the previous layer and the grid size of the output activation map is reduced accordingly.
# 3.2. Residual Inception Blocks
For the residual versions of the Inception networks, we use cheaper Inception blocks than the original Inception. Each Inception block is followed by a filter-expansion layer (1 × 1 convolution without activation) which is used for scaling up the dimensionality of the filter bank before the addition to match the depth of the input. This is needed to compensate for the dimensionality reduction induced by the Inception block.

We tried several versions of the residual version of Inception. Only two of them are detailed here. The first one, "Inception-ResNet-v1", roughly matches the computational cost of Inception-v3, while "Inception-ResNet-v2" matches the raw cost of the newly introduced Inception-v4 network. See Figure 15 for the large scale structure of both variants. (However, the step time of Inception-v4 proved to be significantly slower in practice, probably due to the larger number of layers.)

Another small technical difference between our residual and non-residual Inception variants is that in the case of Inception-ResNet, we used batch-normalization only on top of the traditional layers, but not on top of the summations. It is reasonable to expect that a thorough use of batch-normalization should be advantageous, but we wanted to keep each model replica trainable on a single GPU. It turned out that the memory footprint of layers with large activation size was consuming a disproportionate amount of GPU memory. By omitting the batch-normalization on top of those layers, we were able to increase the overall number of Inception blocks substantially. We hope that with better utilization of computing resources, making this trade-off will become unnecessary.
Figure 3. The schema for the stem of the pure Inception-v4 and Inception-ResNet-v2 networks. This is the input part of those networks. Cf. Figures 9 and 15.
Figure 4. The schema for 35 × 35 grid modules of the pure Inception-v4 network. This is the Inception-A block of Figure 9.
Figure 5. The schema for 17 × 17 grid modules of the pure Inception-v4 network. This is the Inception-B block of Figure 9.
Figure 6. The schema for 8 × 8 grid modules of the pure Inception-v4 network. This is the Inception-C block of Figure 9.
Figure 7. The schema for the 35 × 35 to 17 × 17 reduction module. Different variants of this block (with various numbers of filters) are used in Figures 9 and 15, in each of the new Inception(-v4, -ResNet-v1, -ResNet-v2) variants presented in this paper. The k, l, m, n numbers represent filter bank sizes which can be looked up in Table 1.
Figure 8. The schema for the 17 × 17 to 8 × 8 grid-reduction module. This is the reduction module used by the pure Inception-v4 network in Figure 9.
Figure 9. The overall schema of the Inception-v4 network. For the detailed modules, please refer to Figures 3, 4, 5, 6, 7 and 8 for the detailed structure of the various components.
Figure 10. The schema for 35 × 35 grid (Inception-ResNet-A) module of the Inception-ResNet-v1 network.
Figure 11. The schema for 17 × 17 grid (Inception-ResNet-B) module of the Inception-ResNet-v1 network.
Figure 12. "Reduction-B" 17 × 17 to 8 × 8 grid-reduction module. This module is used by the smaller Inception-ResNet-v1 network in Figure 15.
Figure 13. The schema for 8 × 8 grid (Inception-ResNet-C) module of the Inception-ResNet-v1 network.
Figure 14. The stem of the Inception-ResNet-v1 network.
Figure 15. Schema for Inception-ResNet-v1 and Inception-ResNet-v2 networks. This schema applies to both networks but the underlying components differ. Inception-ResNet-v1 uses the blocks as described in Figures 14, 10, 7, 11, 12 and 13. Inception-ResNet-v2 uses the blocks as described in Figures 3, 16, 7, 17, 18 and 19. The output sizes in the diagram refer to the activation vector tensor shapes of Inception-ResNet-v1.
Figure 16. The schema for 35 × 35 grid (Inception-ResNet-A) module of the Inception-ResNet-v2 network.
Figure 17. The schema for 17 × 17 grid (Inception-ResNet-B) module of the Inception-ResNet-v2 network.
Figure 18. The schema for the 17 × 17 to 8 × 8 grid-reduction module. This is the Reduction-B module used by the wider Inception-ResNet-v2 network in Figure 15.
Figure 19. The schema for 8 × 8 grid (Inception-ResNet-C) module of the Inception-ResNet-v2 network.
Network | k | l | m | n
Inception-v4 | 192 | 224 | 256 | 384
Inception-ResNet-v1 | 192 | 192 | 256 | 384
Inception-ResNet-v2 | 256 | 256 | 384 | 384

Table 1. The number of filters of the Reduction-A module for the three Inception variants presented in this paper. The four numbers in the columns of the table parametrize the four convolutions of Figure 7.
Figure 20. The general schema for scaling combined Inception-ResNet modules. We expect that the same idea is useful in the general ResNet case, where instead of the Inception block an arbitrary subnetwork is used. The scaling block just scales the last linear activations by a suitable constant, typically around 0.1.
# 3.3. Scaling of the Residuals
Also we found that if the number of filters exceeded 1000, the residual variants started to exhibit instabilities and the network just "died" early in the training, meaning that the last layer before the average pooling started to produce only zeros after a few tens of thousands of iterations. This could not be prevented, neither by lowering the learning rate, nor by adding an extra batch-normalization to this layer.
We found that scaling down the residuals before adding them to the previous layer activation seemed to stabilize the training. In general we picked some scaling factors between 0.1 and 0.3 to scale the residuals before their being added to the accumulated layer activations (cf. Figure 20).
A similar instability was observed by He et al. in [5] in the case of very deep residual networks and they suggested a two-phase training where the first "warm-up" phase is done with very low learning rate, followed by a second phase with high learning rate. We found that if the number of filters is very high, then even a very low (0.00001) learning rate is not sufficient to cope with the instabilities and the training with high learning rate had a chance to destroy its effects. We found it much more reliable to just scale the residuals.
Even where the scaling was not strictly necessary, it never seemed to harm the ï¬nal accuracy, but it helped to stabilize the training.
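Here is a minimal PyTorch sketch of the scaling scheme of Figure 20, with a plain convolution standing in for the Inception block:

```python
import torch
import torch.nn as nn

class ScaledResidual(nn.Module):
    def __init__(self, channels, scale=0.1):  # scaling factor typically 0.1-0.3
        super().__init__()
        # stand-in for an Inception block (or any residual subnetwork)
        self.branch = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.scale = scale
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # scale down the residual before adding it to the accumulated activations
        return self.relu(x + self.scale * self.branch(x))

block = ScaledResidual(256)
print(block(torch.randn(1, 256, 17, 17)).shape)  # torch.Size([1, 256, 17, 17])
```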
# 4. Training Methodology
We have trained our networks with stochastic gradient descent utilizing the TensorFlow [1] distributed machine learning system using 20 replicas running each on a NVidia Kepler GPU. Our earlier experiments used momentum [13] with a decay of 0.9, while our best models were achieved using RMSProp with decay of 0.9 and ε = 1.0. We used a learning rate of 0.045, decayed every two epochs using an exponential rate of 0.94. Model evaluations are performed using a running average of the parameters computed over time.

Figure 21. Top-1 error evolution during training of pure Inception-v3 vs a residual network of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual model was training much faster, but reached slightly worse final accuracy than the traditional Inception-v3.
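The optimizer settings above translate into the following minimal sketch; the paper trained with TensorFlow across 20 GPU replicas, so this single-process PyTorch setup only illustrates the hyperparameters:

```python
import torch

model = torch.nn.Conv2d(3, 32, kernel_size=3)  # stand-in network
# RMSProp with decay (alpha) 0.9 and epsilon 1.0, learning rate 0.045
opt = torch.optim.RMSprop(model.parameters(), lr=0.045, alpha=0.9, eps=1.0)
# decay the learning rate every two epochs by an exponential rate of 0.94
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=2, gamma=0.94)
for epoch in range(6):
    # ... training steps (forward, backward, opt.step()) would go here ...
    sched.step()
    print(epoch, sched.get_last_lr())
```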
# 5. Experimental Results
First we observe the top-1 and top-5 validation-error evo- lution of the four variants during training. After the exper- iment was conducted, we have found that our continuous evaluation was conducted on a subset of the validation set which omitted about 1700 blacklisted entities due to poor bounding boxes. It turned out that the omission should have been only performed for the CLSLOC benchmark, but yields somewhat incomparable (more optimistic) numbers when compared to other reports including some earlier re- ports by our team. The difference is about 0.3% for top-1 error and about 0.15% for the top-5 error. However, since the differences are consistent, we think the comparison be- tween the curves is a fair one.
On the other hand, we have rerun our multi-crop and ensemble results on the complete validation set consisting of 50000 images. The final ensemble result was also evaluated on the test set and sent to the ILSVRC test server for validation to verify that our tuning did not result in over-fitting. We would like to stress that this final validation was done only once, and we have submitted our results only twice in the last year: once for the BN-Inception paper and later during the ILSVRC-2015 CLSLOC competition, so we believe that the test set numbers constitute a true estimate of the generalization capabilities of our model.
Finally, we present some comparisons between various versions of Inception and Inception-ResNet. The models Inception-v3 and Inception-v4 are deep convolutional networks
Figure 22. Top-5 error evolution during training of pure Inception-v3 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual version trained much faster and reached slightly better final recall on the validation set.
Figure 23. Top-1 error evolution during training of pure Inception-v4 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual version trained much faster and reached slightly better final accuracy than the traditional Inception-v4.
Network | Top-1 Error | Top-5 Error
BN-Inception [6] | 25.2% | 7.8%
Inception-v3 [15] | 21.2% | 5.6%
Inception-ResNet-v1 | 21.3% | 5.5%
Inception-v4 | 20.0% | 5.0%
Inception-ResNet-v2 | 19.9% | 4.9%
Table 2. Single crop - single model experimental results. Reported on the non-blacklisted subset of the validation set of ILSVRC 2012.
not utilizing residual connections, while Inception-ResNet-v1 and Inception-ResNet-v2 are Inception-style networks that utilize residual connections instead of filter concatenation.
Table 2 shows the single-model, single crop top-1 and top-5 error of the various architectures on the validation set.
Figure 24. Top-5 error evolution during training of pure Inception-v4 vs a residual Inception of similar computational cost. The evaluation is measured on a single crop on the non-blacklist images of the ILSVRC-2012 validation set. The residual version trained faster and reached slightly better final recall on the validation set.
Figure 25. Top-5 error evolution of all four models (single model, single crop), showing the improvement due to larger model size. Although the residual version converges faster, the final accuracy seems to depend mainly on the model size.
Figure 26. Top-1 error evolution of all four models (single model, single crop). This paints a similar picture to the top-5 evaluation.
Table 3 shows the performance of the various models with a small number of crops: 10 crops for ResNet, as reported in [5]; for the Inception variants, we have used the 12-crop evaluation as described in [14].
Network | Crops | Top-1 Error | Top-5 Error
ResNet-151 [5] | 10 | 21.4% | 5.7%
Inception-v3 [15] | 12 | 19.8% | 4.6%
Inception-ResNet-v1 | 12 | 19.8% | 4.6%
Inception-v4 | 12 | 18.7% | 4.2%
Inception-ResNet-v2 | 12 | 18.7% | 4.1%
Table 3. 10/12 crops evaluations - single model experimental results. Reported on all 50000 images of the validation set of ILSVRC 2012.
Network | Crops | Top-1 Error | Top-5 Error
ResNet-151 [5] | dense | 19.4% | 4.5%
Inception-v3 [15] | 144 | 18.9% | 4.3%
Inception-ResNet-v1 | 144 | 18.8% | 4.3%
Inception-v4 | 144 | 17.7% | 3.8%
Inception-ResNet-v2 | 144 | 17.8% | 3.7%
Table 4. 144 crops evaluations - single model experimental results. Reported on all 50000 images of the validation set of ILSVRC 2012.
Network | Models | Top-1 Error | Top-5 Error
ResNet-151 [5] | 6 | – | 3.6%
Inception-v3 [15] | 4 | 17.3% | 3.6%
Inception-v4 + 3× Inception-ResNet-v2 | 4 | 16.5% | 3.1%
Table 5. Ensemble results with 144 crops/dense evaluation. Reported on all 50000 images of the validation set of ILSVRC 2012. For Inception-v4(+Residual), the ensemble consists of one pure Inception-v4 and three Inception-ResNet-v2 models and was evaluated both on the validation and on the test set. The test-set performance was 3.08% top-5 error, verifying that we don't overfit on the validation set.
Table 4 shows the single-model performance of the various models using a larger number of crops. For the residual network, the dense evaluation result is reported from [5]. For the Inception networks, the 144-crop strategy was used as described in [14].
Table 5 compares ensemble results. For the pure residual network, the 6-model dense evaluation result is reported from [5]. For the Inception networks, 4 models were ensembled using the 144-crop strategy as described in [14].
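The multi-crop and ensemble numbers are obtained by averaging predictions; below is a minimal sketch, assuming each model maps a crop to a vector of class probabilities (averaging the softmax outputs is our assumption of the standard procedure, not code from the paper):

```python
import numpy as np

def ensemble_predict(models, crops):
    # Average class probabilities over every (model, crop) pair,
    # e.g. 4 models x 144 crops for the best ensemble in Table 5.
    probs = [model(crop) for model in models for crop in crops]
    return np.mean(probs, axis=0)
```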
# 6. Conclusions
We have presented three new network architectures in detail:
• Inception-ResNet-v1: a hybrid Inception version with a computational cost similar to that of Inception-v3 from [15].
• Inception-ResNet-v2: a costlier hybrid Inception version with significantly improved recognition performance.
• Inception-v4: a pure Inception variant without residual connections, with roughly the same recognition performance as Inception-ResNet-v2.
We studied how the introduction of residual connections leads to dramatically improved training speed for the Inception architecture. Also, our latest models (with and without residual connections) outperform all our previous networks, just by virtue of the increased model size.
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223–1231, 2012.
[3] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In Computer Vision–ECCV 2014, pages 184–199. Springer, 2014.
[4] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[5] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[6] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448–456, 2015.
[7] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1725–1732. IEEE, 2014.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[9] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
[10] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
[11] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. 2014.
[12] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[13] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1139–1147. JMLR Workshop and Conference Proceedings, May 2013.
[14] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
[15] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
[16] T. Tieleman and G. Hinton. Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. Accessed: 2015-11-05.
[17] A. Toshev and C. Szegedy. Deeppose: Human pose estimation via deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1653–1660. IEEE, 2014.
[18] N. Wang and D.-Y. Yeung. Learning a deep compact image representation for visual tracking. In Advances in Neural Information Processing Systems, pages 809–817, 2013. | {
"id": "1512.00567"
} |
1602.02867 | Value Iteration Networks | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains. | http://arxiv.org/pdf/1602.02867 | Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | cs.AI, cs.LG, cs.NE, stat.ML | Fixed missing table values | Advances in Neural Information Processing Systems 29 pages
2154--2162, 2016 | cs.AI | 20160209 | 20170320

arXiv:1602.02867v4 [cs.AI] 20 Mar 2017
# Value Iteration Networks
Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel
Dept. of Electrical Engineering and Computer Sciences, UC Berkeley
# Abstract
We introduce the value iteration network (VIN): a fully differentiable neural network with a "planning module" embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.
# 1 Introduction
Over the last decade, deep convolutional neural networks (CNNs) have revolutionized supervised learning for tasks such as object recognition, action recognition, and semantic segmentation [3, 15, 6, 19]. Recently, CNNs have been applied to reinforcement learning (RL) tasks with visual observations such as Atari games [21], robotic manipulation [18], and imitation learning (IL) [9]. In these tasks, a neural network (NN) is trained to represent a policy: a mapping from an observation of the system's state to an action, with the goal of representing a control strategy that has good long-term behavior, typically quantified as the minimization of a sequence of time-dependent costs.
The sequential nature of decision making in RL is inherently different than the one-step decisions in supervised learning, and in general requires some form of planning [2]. However, most recent deep RL works [21, 18, 9] employed NN architectures that are very similar to the standard networks used in supervised learning tasks, which typically consist of CNNs for feature extraction, and fully connected layers that map the features to a probability distribution over actions. Such networks are inherently reactive, and in particular, lack explicit planning computation. The success of reactive policies in sequential problems is due to the learning algorithm, which essentially trains a reactive policy to select actions that have good long-term consequences in its training domain.
To understand why planning can nevertheless be an important ingredient in a policy, consider the grid-world navigation task depicted in Figure 1 (left), in which the agent can observe a map of its domain, and is required to navigate between some obstacles to a target position. One hopes that after training a policy to solve several instances of this problem with different obstacle configurations, the policy would generalize to solve a different, unseen domain, as in Figure 1 (right). However, as we show in our experiments, while standard CNN-based networks can be easily trained to solve a set of such maps, they do not generalize well to new tasks outside this set, because they do not understand the goal-directed nature of the behavior. This observation suggests that the computation learned by reactive policies is different from planning, which is required to solve a new task1.
1In principle, with enough training data that covers all possible task configurations, and a rich enough policy representation, a reactive policy can learn to map each task to its optimal policy. In practice, this is often too expensive, and we offer a more data-efficient approach by exploiting a flexible prior about the planning computation underlying the behavior.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In this work, we propose a NN-based policy that can effectively learn to plan. Our model, termed a value-iteration network (VIN), has a differentiable "planning program" embedded within the NN structure.
Figure 1: Two instances of a grid-world domain. The task is to move to the goal between the obstacles.

The key to our approach is an observation that the classic value-iteration (VI) planning algorithm [1, 2] may be represented by a specific type of CNN. By embedding such a VI network module inside a standard feed-forward classification network, we obtain a NN model that can learn the parameters of a planning computation that yields useful predictions. The VI block is differentiable, and the whole network can be trained using standard backpropagation. This makes our policy simple to train using standard RL and IL algorithms, and straightforward to integrate with NNs for perception and control.
Connections between planning algorithms and recurrent NNs were previously explored by Ilin et al. [12]. Our work builds on related ideas, but results in a more broadly applicable policy representation. Our approach is different from model-based RL [25, 4], which requires system identification to map the observations to a dynamics model, which is then solved for a policy. In many applications, including robotic manipulation and locomotion, accurate system identification is difficult, and modelling errors can severely degrade the policy performance. In such domains, a model-free approach is often preferred [18]. Since a VIN is just a NN policy, it can be trained model-free, without requiring explicit system identification. In addition, the effects of modelling errors in VINs can be mitigated by training the network end-to-end, similarly to the methods in [13, 11].
We demonstrate the effectiveness of VINs within standard RL and IL algorithms on various problems, including tasks that require visual perception, continuous control, and natural-language based decision making in the WebNav challenge [23]. After training, the policy learns to map an observation to a planning computation relevant for the task, and to generate action predictions based on the resulting plan. As we demonstrate, this leads to policies that generalize better to new, unseen task instances.

# 2 Background
In this section we provide background on planning, value iteration, CNNs, and policy representations for RL and IL. In the sequel, we shall show that CNNs can implement a particular form of planning computation similar to the value iteration algorithm, which can then be used as a policy for RL or IL.
Value Iteration: A standard model for sequential decision making and planning is the Markov decision process (MDP) [1, 2]. An MDP $M$ consists of states $s \in S$, actions $a \in A$, a reward function $R(s,a)$, and a transition kernel $P(s'|s,a)$ that encodes the probability of the next state given the current state and action. A policy $\pi(a|s)$ prescribes an action distribution for each state. The goal in an MDP is to find a policy that obtains high rewards in the long term. Formally, the value $V^\pi(s)$ of a state under policy $\pi$ is the expected discounted sum of rewards when starting from that state and executing policy $\pi$: $V^\pi(s) = \mathbb{E}^\pi\left[\sum_{t=0}^{\infty}\gamma^t r(s_t,a_t) \mid s_0 = s\right]$, where $\gamma \in (0,1)$ is a discount factor, and $\mathbb{E}^\pi$ denotes an expectation over trajectories of states and actions $(s_0,a_0,s_1,a_1,\dots)$, in which actions are selected according to $\pi$, and states evolve according to the transition kernel $P(s'|s,a)$. The optimal value function $V^*(s) = \max_\pi V^\pi(s)$ is the maximal long-term return possible from a state. A policy $\pi^*$ is said to be optimal if $V^{\pi^*}(s) = V^*(s)\ \forall s$. A popular algorithm for calculating $V^*$ and $\pi^*$ is value iteration (VI):
$$V_{n+1}(s) = \max_a Q_n(s,a)\ \ \forall s, \quad \text{where } Q_n(s,a) = R(s,a) + \gamma \sum_{s'} P(s'|s,a)\,V_n(s'). \qquad (1)$$

It is well known that the value function $V_n$ in VI converges as $n \to \infty$ to $V^*$, from which an optimal policy may be derived as $\pi^*(s) = \arg\max_a Q_\infty(s,a)$.

Convolutional Neural Networks (CNNs) are NNs with a particular architecture that has proved useful for computer vision, among other domains [15]. A CNN is comprised of stacked convolution and max-pooling layers. The input to each convolution layer is a 3-dimensional signal $X$, typically an image with $l$ channels, $m$ horizontal pixels, and $n$ vertical pixels, and its output $h$ is an $l'$-channel convolution of the image with kernels $W^1,\dots,W^{l'}$: $h_{l',i,j} = \sigma\left(\sum_{l,i',j'} W^{l'}_{l,i',j'} X_{l,i-i',j-j'}\right)$, where $\sigma$ is some scalar activation function. A max-pooling layer selects, for each channel $l$ and pixel $i,j$ in $h$, the maximum value among its neighbors $N(i,j)$: $h^{maxpool}_{l,i,j} = \max_{i',j' \in N(i,j)} h_{l,i',j'}$. Typically, the neighbors $N(i,j)$ are chosen as a $k \times k$ image
patch around pixel $i,j$. After max-pooling, the image is down-sampled by a constant factor $d$, commonly 2 or 4, resulting in an output signal with $l'$ channels, $m/d$ horizontal pixels, and $n/d$ vertical pixels. CNNs are typically trained using stochastic gradient descent (SGD), with backpropagation for computing gradients.

Reinforcement Learning and Imitation Learning: In MDPs where the state space is very large or continuous, or when the MDP transitions or rewards are not known in advance, planning algorithms cannot be applied. In these cases, a policy can be learned either from expert supervision (IL) or by trial and error (RL). While the learning algorithms in the two cases are different, the policy representations, which are the focus of this work, are similar. Additionally, most state-of-the-art algorithms are agnostic to the policy representation, and only require it to be differentiable, for performing gradient descent on some algorithm-specific loss function. Therefore, in this paper we do not commit to a specific learning algorithm, and only consider the policy. Let $\phi(s)$ denote an observation for state $s$. The policy is specified as a parametrized function $\pi_\theta(a|\phi(s))$ mapping observations to a probability over actions, where $\theta$ are the policy parameters. For example, the policy could be represented as a neural network, with $\theta$ denoting the network weights. The goal is to tune the parameters such that the policy behaves well, in the sense that $\pi_\theta(a|\phi(s)) \approx \pi^*(a|\phi(s))$, where $\pi^*$ is the optimal policy for the MDP, as defined in Section 2. In IL, a dataset of $N$ state observations and corresponding optimal actions $\{\phi(s^i),\, a^i \sim \pi^*(\phi(s^i))\}_{i=1}^{N}$ is generated by an expert, and learning a policy then becomes an instance of supervised learning [24, 9]. In RL, the optimal action is not available; instead, the agent can act in the world and observe the rewards and state transitions its actions effect. RL algorithms use these observations to improve the value of the policy.

# 3 The Value Iteration Network Model

In this section we introduce a general policy representation that embeds an explicit planning module. As stated earlier, the motivation for such a representation is that a natural solution to many tasks, such as the path planning described above, involves planning on some model of the domain.
Let $M$ denote the MDP of the domain for which we design our policy $\pi$. We assume that there is some unknown MDP $\bar{M}$ such that the optimal plan in $\bar{M}$ contains useful information about the optimal policy in the original task $M$. However, we emphasize that we do not assume to know $\bar{M}$ in advance. Our idea is to equip the policy with the ability to learn and solve $\bar{M}$, and to add the solution of $\bar{M}$ as an element in the policy $\pi$. We hypothesize that this will lead to a policy that automatically learns a useful $\bar{M}$ to plan on. We denote by $\bar{s} \in \bar{S}$, $\bar{a} \in \bar{A}$, $\bar{R}(\bar{s},\bar{a})$, and $\bar{P}(\bar{s}'|\bar{s},\bar{a})$ the states, actions, rewards, and transitions in $\bar{M}$. To facilitate a connection between $M$ and $\bar{M}$, we let $\bar{R}$ and $\bar{P}$ depend on the observation in $M$, namely, $\bar{R} = f_R(\phi(s))$ and $\bar{P} = f_P(\phi(s))$, and we will later learn the functions $f_R$ and $f_P$ as a part of the policy learning process.

For example, in the grid-world domain described above, we can let $\bar{M}$ have the same state and action spaces as the true grid-world $M$. The reward function $f_R$ can map an image of the domain to a high reward at the goal, and negative reward near an obstacle, while $f_P$ can encode deterministic movements in the grid-world that do not depend on the observation. While these rewards and transitions are not necessarily the true rewards and transitions in the task, an optimal plan in $\bar{M}$ will still follow a trajectory that avoids obstacles and reaches the goal, similarly to the optimal plan in $M$.

Once an MDP $\bar{M}$ has been specified, any standard planning algorithm can be used to obtain the value function $\bar{V}^*$. In the next section, we shall show that using a particular implementation of VI for planning has the advantage of being differentiable, and simple to implement within a NN framework. In this section, however, we focus on how to use the planning result $\bar{V}^*$ within the NN policy $\pi$. Our approach is based on two important observations. The first is that the vector of values $\bar{V}^*(\bar{s})\ \forall \bar{s}$ encodes all the information about the optimal plan in $\bar{M}$. Thus, adding the vector $\bar{V}^*$ as additional features to the policy $\pi$ is sufficient for extracting information about the optimal plan in $\bar{M}$. However, an additional property of $\bar{V}^*$ is that the optimal decision $\bar{\pi}^*(\bar{s})$ at a state $\bar{s}$ can depend only on a subset of the values of $\bar{V}^*$, since $\bar{\pi}^*(\bar{s}) = \arg\max_{\bar{a}} \bar{R}(\bar{s},\bar{a}) + \gamma \sum_{\bar{s}'} \bar{P}(\bar{s}'|\bar{s},\bar{a})\,\bar{V}^*(\bar{s}')$. Therefore, if the MDP has a local connectivity structure, such as in the grid-world example above, the states $\bar{s}'$ for which $\bar{P}(\bar{s}'|\bar{s},\bar{a}) > 0$ constitute a small subset of $\bar{S}$. In NN terminology, this is a form of attention [32], in the sense that for a given label prediction (action), only a subset of the input features (value function) is relevant. Attention is known to improve learning performance by reducing the effective number of network parameters during learning. Therefore, the second element in our network is an attention module that outputs a vector of
(attention-modulated) values $\psi(s)$. Finally, the vector $\psi(s)$ is added as additional features to a reactive policy $\pi_{re}(a|\phi(s),\psi(s))$. The full network architecture is depicted in Figure 2 (left). Returning to our grid-world example, at a particular state $s$, the reactive policy only needs to query the values of the states neighboring $s$ in order to select the correct action. Thus, the attention module in this case could return a $\psi(s)$ vector with a subset of $\bar{V}^*$ for these neighboring states.
Figure 2: Planning-based NN models. Left: a general policy representation that adds value function features from a planner to a reactive policy. Right: the VI module, a CNN representation of the VI algorithm.
Let $\theta$ denote all the parameters of the policy, namely, the parameters of $f_R$, $f_P$, and $\pi_{re}$, and note that $\psi(s)$ is in fact a function of $\phi(s)$. Therefore, the policy can be written in the form $\pi_\theta(a|\phi(s))$, similarly to the standard policy form (cf. Section 2). If we could back-propagate through this function, then potentially we could train the policy using standard RL and IL algorithms, just like any other standard policy representation. While it is easy to design functions $f_R$ and $f_P$ that are differentiable (and we provide several examples in our experiments), back-propagating the gradient through the planning algorithm is not trivial. In the following, we propose a novel interpretation of an approximate VI algorithm as a particular form of a CNN. This allows us to conveniently treat the planning module as just another NN, and by back-propagating through it, we can train the whole policy end-to-end.
# 3.1 The VI Module
We now introduce the VI module: a NN that encodes a differentiable planning computation. Our starting point is the VI algorithm (1). Our main observation is that each iteration of VI may be seen as passing the previous value function $V_n$ and reward function $R$ through a convolution layer and max-pooling layer. In this analogy, each channel in the convolution layer corresponds to the Q-function for a specific action, and the convolution kernel weights correspond to the discounted transition probabilities. Thus, by recurrently applying a convolution layer $K$ times, $K$ iterations of VI are effectively performed.
Following this idea, we propose the VI network module, as depicted in Figure 2 (right). The input to the VI module is a "reward image" $\bar{R}$ of dimensions $l, m, n$, where here, for the purpose of clarity, we follow the CNN formulation and explicitly assume that the state space $\bar{S}$ maps to a 2-dimensional grid. However, our approach can be extended to general discrete state spaces, for example a graph, as we report in the WikiNav experiment in Section 4.4. The reward is fed into a convolutional layer $\bar{Q}$ with $\bar{A}$ channels and a linear activation function, $\bar{Q}_{\bar{a},i',j'} = \sum_{l,i,j} W^{\bar{a}}_{l,i,j}\,\bar{R}_{l,i'-i,j'-j}$. Each channel in this layer corresponds to $\bar{Q}(\bar{s},\bar{a})$ for a particular action $\bar{a}$. This layer is then max-pooled along the actions channel to produce the next-iteration value function layer $\bar{V}$, $\bar{V}_{i,j} = \max_{\bar{a}} \bar{Q}(\bar{a},i,j)$. The next-iteration value function layer $\bar{V}$ is then stacked with the reward $\bar{R}$, and fed back into the convolutional layer and max-pooling layer $K$ times, to perform $K$ iterations of value iteration.
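The recurrence is straightforward to express in code. Below is a minimal numpy sketch of the VI module (ours; the original experiments used Theano), assuming a single-channel reward image and 3 × 3 kernels, where `W_R` and `W_V` are the kernel slices applied to the reward and value channels:

```python
import numpy as np
from scipy.signal import convolve2d

def vi_module(R, W_R, W_V, K):
    # R:        [m, n] reward image (single channel for clarity).
    # W_R, W_V: [A, 3, 3] convolution kernels; convolving the stacked
    #           [R; V] signal with a 2-channel kernel is equivalent to
    #           summing the two single-channel convolutions below.
    V = np.zeros_like(R)
    for _ in range(K):                           # K iterations of VI
        Q = np.stack([convolve2d(R, W_R[a], mode='same')
                      + convolve2d(V, W_V[a], mode='same')
                      for a in range(len(W_R))])  # one Q channel per action
        V = Q.max(axis=0)                         # max-pool over the action channel
    return Q, V                                   # Q: [A, m, n], V: [m, n]
```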
The VI module is simply a NN architecture that has the capability of performing an approximate VI computation. Nevertheless, representing VI in this form makes learning the MDP parameters and reward function natural, by backpropagating through the network, similarly to a standard CNN. VI modules can also be composed hierarchically, by treating the value of one VI module as additional input to another VI module. We further report on this idea in the supplementary material.
# 3.2 Value Iteration Networks
We now have all the ingredients for a differentiable planning-based policy, which we term a value iteration network (VIN). The VIN is based on the general planning-based policy defined above, with the VI module as the planning algorithm. In order to implement a VIN, one has to specify the state
and action spaces for the planning module $\bar{S}$ and $\bar{A}$, the reward and transition functions $f_R$ and $f_P$, and the attention function; we refer to this as the VIN design. For some tasks, as we show in our experiments, it is relatively straightforward to select a suitable design, while other tasks may require more thought. However, we emphasize an important point: the reward, transitions, and attention can be defined by parametric functions, and trained with the whole policy2. Thus, a rough design can be specified, and then fine-tuned by end-to-end training.
Once a VIN design is chosen, implementing the VIN is straightforward, as it is simply a form of a CNN. The networks in our experiments all required only several lines of Theano [28] code. In the next section, we evaluate VIN policies on various domains, showing that by learning to plan, they achieve a better generalization capability.
# 4 Experiments

In this section we evaluate VINs as policy representations on various domains. Additional experiments investigating RL and hierarchical VINs, as well as technical implementation details, are discussed in the supplementary material. Source code is available at https://github.com/avivt/VIN. Our goal in these experiments is to investigate the following questions:
1. Can VINs effectively learn a planning computation using standard RL and IL algorithms?
2. Does the planning computation learned by VINs make them better than reactive policies at generalizing to new domains?
An additional goal is to point out several ideas for designing VINs for various tasks. While this is not an exhaustive list that fits all domains, we hope that it will motivate creative designs in future work.
# 4.1 Grid-World Domain

Our first experiment domain is a synthetic grid-world with randomly placed obstacles, in which the observation includes the position of the agent, and also an image of the map of obstacles and goal position. Figure 3 shows two random instances of such a grid-world of size 16 × 16. We conjecture that by learning the optimal policy for several instances of this domain, a VIN policy would learn the planning computation required to solve a new, unseen task. In such a simple domain, an optimal policy can easily be calculated using exact VI. Note, however, that here we are interested in evaluating whether a NN policy, trained using RL or IL, can learn to plan. In the following results, policies were trained using IL, by standard supervised learning from demonstrations of the optimal policy. In the supplementary material, we report additional RL experiments that show similar findings.

We design a VIN for this task following the guidelines described above, where the planning MDP $\bar{M}$ is a grid-world, similar to the true MDP. The reward mapping $f_R$ is a CNN mapping the image input to a reward map in the grid-world. Thus, $f_R$ should potentially learn to discriminate between obstacles, non-obstacles, and the goal, and assign a suitable reward to each. The transitions $\bar{P}$ were defined as 3 × 3 convolution kernels in the VI block, exploiting the fact that transitions in the grid-world are local3. The recurrence $K$ was chosen in proportion to the grid-world size, to ensure that information can flow from the goal state to any other state. For the attention module, we chose a trivial approach that selects the $\bar{Q}$ values in the VI block for the current state, i.e., $\psi(s) = \bar{Q}(s,\cdot)$. The final reactive policy is a fully connected network that maps $\psi(s)$ to a probability over actions.

We compare VINs to the following NN reactive policies:

CNN network: We devised a CNN-based reactive policy inspired by the recent impressive results of DQN [21], with 5 convolution layers, and a fully connected output. While the network in [21] was trained to predict Q values, our network outputs a probability over actions. These terms are related, since $\pi^*(s) = \arg\max_a Q(s,a)$.

Fully Convolutional Network (FCN): The problem setting for this domain is similar to semantic segmentation [19], in which each pixel in the image is assigned a semantic label (the action in our case). We therefore devised an FCN inspired by a state-of-the-art semantic segmentation algorithm [19], with 3 convolution layers, where the first layer has a filter that spans the whole image, to properly convey information from the goal to every other state.

In Table 1 we present the average 0–1 prediction loss of each model, evaluated on a held-out test set of maps with random obstacles, goals, and initial states, for different problem sizes. In addition, for each map, a full trajectory from the initial state was predicted, by iteratively rolling out the next states
2VINs are fundamentally different than inverse RL methods [22], where transitions are required to be known.
3Note that the transitions defined this way do not depend on the state $\bar{s}$. Interestingly, we shall see that the network learned to plan successful trajectories nevertheless, by appropriately shaping the reward.
Domain 8 Ã 8 16 Ã 16 28 Ã 28 Prediction loss 0.004 0.05 0.11 VIN Success rate Traj. diff. 99.6% 0.001 99.3% 0.089 97% 0.086 Pred. loss 0.02 0.10 0.13 CNN Succ. rate Traj. diff. 97.9% 0.006 87.6% 0.06 74.2% 0.078 Pred. loss 0.01 0.07 0.09 FCN Succ. rate Traj. diff. 97.3% 0.004 88.3% 0.05 76.6% 0.08
Table 1: Performance on grid-world domain. Top: comparison with reactive policies. For all domain sizes, VIN networks signiï¬cantly outperform standard reactive networks. Note that the performance gap increases dramatically with problem size.
predicted by the network. A trajectory was said to succeed if it reached the goal without hitting obstacles. For each trajectory that succeeded, we also measured its difference in length from the optimal trajectory. The average difference and the average success rate are reported in Table 1. Clearly, VIN policies generalize to domains outside the training set. A visualization of the reward mapping fR (see supplementary material) shows that it is negative at obstacles, positive at the goal, and a small negative constant otherwise. The resulting value function has a gradient pointing towards a direction to the goal around obstacles, thus a useful planning computation was learned. VINs also signiï¬cantly outperform the reactive networks, and the performance gap increases dramatically with the problem size. Importantly, note that the prediction loss for the reactive policies is comparable to the VINs, although their success rate is signiï¬cantly worse. This shows that this is not a standard case of overï¬tting/underï¬tting of the reactive policies. Rather, VIN policies, by their VI structure, focus prediction errors on less important parts of the trajectory, while reactive policies do not make this distinction, and learn the easily predictable parts of the trajectory yet fail on the complete task. The VINs have an effective depth of K, which is larger than the depth of the reactive policies. One may wonder, whether any deep enough network would learn to plan. In principle, a CNN or FCN of depth K has the potential to perform the same computation as a VIN. However, it has much more parameters, requiring much more training data. We evaluate this by untying the weights in the K recurrent layers in the VIN. Our results, reported in the supplementary material, show that untying the weights degrades performance, with a stronger effect for smaller sizes of training data.
4.2 Mars Rover Navigation In this experiment we show that VINs can learn to plan from natural image input. We demonstrate this on path-planning from overhead terrain images of a Mars landscape. Each domain is represented by a 128 à 128 image patch, on which we deï¬ned a 16 à 16 grid-world, where each state was considered an obstacle if the terrain in its corresponding 8 à 8 image patch contained an elevation angle of 10 degrees or more, evaluated using an external elevation data base. An example of the domain and terrain image is depicted in Figure 3. The MDP for shortest-path planning in this case is similar to the grid-world domain of Section 4.1, and the VIN design was similar, only with a deeper CNN in the reward mapping fR for processing the image. The policy was trained to predict the shortest-path directly from the terrain image. We emphasize that the elevation data is not part of the input, and must be inferred (if needed) from the terrain image.
After training, VIN achieved a success rate of 84.8%. To put this rate in context, we compare with the best performance achievable without access to the elevation data, which is 90.3%. To make this comparison, we trained a CNN to classify whether an 8 × 8 patch is an obstacle or not. This classifier was trained using the same image data as the VIN network, but its labels were the true obstacle classifications from the elevation map (we reiterate that the VIN did not have access to these ground-truth obstacle labels during training or testing). The success rate of a planner that uses the obstacle map generated by this classifier from the raw image is 90.3%, showing that obstacle identification from the raw image is indeed challenging. Thus, the success rate of the VIN, which was trained without any obstacle labels, and had to "figure out" the planning process, is quite remarkable.
# 4.3 Continuous Control

We now consider a 2D path-planning domain with continuous states and continuous actions, which cannot be solved using VI, and therefore a VIN cannot be naively applied. Instead, we construct the VIN to perform "high-level" planning on a discrete, coarse, grid-world representation of the continuous domain. We shall show that a VIN can learn to plan such a "high-level" plan, and also exploit that plan within its "low-level" continuous control policy. Moreover, the VIN policy results in better generalization than a reactive policy.

Consider the domain in Figure 4. A red-colored particle needs to be navigated to a green goal using horizontal and vertical forces. Gray-colored obstacles are randomly positioned in the domain, and apply an elastic force and friction when contacted. This domain presents a non-trivial control problem, as the agent needs to both plan a feasible trajectory between the obstacles (or use them to bounce off), but also control the particle (which has mass and inertia) to follow it. The state observation consists of the particle's continuous position and velocity, and a static 16 × 16 downscaled image of the obstacles and goal position in the domain. In principle, such an observation is sufficient to devise a "rough plan" for the particle to follow. As in our previous experiments, we investigate whether a policy trained on several instances of this domain with different start state, goal, and obstacle positions would generalize to an unseen domain.

For training we chose the guided policy search (GPS) algorithm with unknown dynamics [17], which is suitable for learning policies for continuous dynamics with contacts; we used the publicly available GPS code [7], and Mujoco [30] for physical simulation. We generated 200 random training instances, and evaluate our performance on 40 different test instances from the same distribution. Our VIN design is similar to the grid-world cases, with some important modifications: the attention module selects a 5 × 5 patch of the value $\bar{V}$, centered around the current (discretized) position in the map. The final reactive policy is a 3-layer fully connected network, with a 2-dimensional continuous output for the controls. In addition, due to the limited number of training domains, we pre-trained the VIN with transition weights that correspond to discounted grid-world transitions. This is a reasonable prior for the weights in a 2D task, and we emphasize that even with this initialization, the initial value function is meaningless, since the reward map $f_R$ is not yet learned. We compare with a CNN-based reactive policy inspired by the state-of-the-art results in [21, 20], with 2 CNN layers for image processing, followed by a 3-layer fully connected network similar to the VIN reactive policy.

Figure 4 shows the performance of the trained policies, measured as the final distance to the target. The VIN clearly outperforms the CNN on test domains. We also plot several trajectories of both policies on test domains, showing that VIN learned a more sensible generalization of the task.
# 4.4 WebNav Challenge

In the previous experiments, the planning aspect of the task corresponded to 2D navigation. We now consider a more general domain: WebNav [23], a language-based search task on a graph. In WebNav [23], the agent needs to navigate the links of a website towards a goal web page, specified by a short 4-sentence query. At each state $s$ (web page), the agent can observe average word-embedding features of the state $\phi(s)$ and possible next states $\phi(s')$ (linked pages), and the features of the query $\phi(q)$, and based on that has to select which link to follow. In [23], the search was performed
on the Wikipedia website. Here, we report experiments on the "Wikipedia for Schools" website, a simplified Wikipedia designed for children, with over 6000 pages and at most 292 links per page.

In [23], a NN-based policy was proposed, which first learns a NN mapping from $(\phi(s),\phi(q))$ to a hidden state vector $h$. The action is then selected according to $\pi(s'|\phi(s),\phi(q)) \propto \exp\left(h^\top \phi(s')\right)$. In essence, this policy is reactive, and relies on the word-embedding features at each state to contain meaningful information about the path to the goal. Indeed, this property naturally holds for an encyclopedic website that is structured as a tree of categories, sub-categories, sub-sub-categories, etc.

We sought to explore whether planning, based on a VIN, can lead to better performance in this task, with the intuition that a plan on a simplified model of the website can help guide the reactive policy in difficult queries. Therefore, we designed a VIN that plans on a small subset of the graph that contains only the 1st- and 2nd-level categories (< 3% of the graph), and their word-embedding features. Designing this VIN requires a different approach from the grid-world VINs described earlier, where the most challenging aspect is to define a meaningful mapping between nodes in the true graph and nodes in the smaller VIN graph. For the reward mapping $f_R$, we chose a weighted similarity measure between the query features $\phi(q)$ and the features of nodes in the small graph $\phi(\bar{s})$. Thus, intuitively, nodes that are similar to the query should have high reward. The transitions were fixed based on the graph connectivity of the smaller VIN graph, which is known, though different from the true graph. The attention module was also based on a weighted similarity measure between the features of the possible next states $\phi(s')$ and the features of each node in the simplified graph $\phi(\bar{s})$. The reactive policy part of the VIN was similar to the policy of [23] described above. Note that by training such a VIN end-to-end, we are effectively learning how to exploit the small graph for doing better planning on the true, large graph.

Both the VIN policy and the baseline reactive policy were trained by supervised learning, on random trajectories that start from the root node of the graph. Similarly to [23], a policy is said to succeed on a query if all the correct predictions along the path are within its top-4 predictions. After training, the VIN policy performed mildly better than the baseline on 2000 held-out test queries when starting from the root node, achieving 1030 successful runs vs. 1025 for the baseline. However, when we tested the policies on a harder task of starting from a random position in the graph, VINs significantly outperformed the baseline, achieving 346 successful runs vs. 304 for the baseline, out of 4000 test queries.

These results confirm that indeed, when navigating a tree of categories from the root up, the features at each state contain meaningful information about the path to the goal, making a reactive policy sufficient. However, when starting the navigation from a different state, a reactive policy may fail to understand that it needs to first go back to the root and switch to a different branch in the tree. Our results indicate such a strategy can be better represented by a VIN. We remark that there is still room for further improvement of the WebNav results, e.g., by better models for the reward and attention functions, and better word-embedding representations of text.
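To illustrate the design, the weighted-similarity reward mapping can be sketched as below; the array names and the elementwise weighting are our illustrative assumptions, not the trained model:

```python
import numpy as np

def webnav_reward(phi_query, phi_nodes, w):
    # phi_query: [d] average word-embedding of the query.
    # phi_nodes: [n_nodes, d] embeddings of the 1st/2nd-level categories.
    # w:         [d] similarity weights, learned end-to-end with the policy.
    # Returns one reward per node of the simplified VIN graph, so nodes
    # similar to the query receive high reward.
    return phi_nodes @ (w * phi_query)
```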
# 5 Conclusion and Outlook

The introduction of powerful and scalable RL methods has opened up a range of new problems for deep learning. However, few recent works investigate policy architectures that are specifically tailored for planning under uncertainty, and current RL theory and benchmarks rarely investigate the generalization properties of a trained policy [27, 21, 5]. This work takes a step in this direction, by exploring better-generalizing policy representations. Our VIN policies learn an approximate planning computation relevant for solving the task, and we have shown that such a computation leads to better generalization in a diverse set of tasks, ranging from simple gridworlds that are amenable to value iteration, to continuous control, and even to navigation of Wikipedia links. In future work we intend to learn different planning computations, based on simulation [10], or optimal linear control [31], and combine them with reactive policies, to potentially develop RL solutions for task and motion planning [14].
# Acknowledgments
This research was funded in part by Siemens, by ONR through a PECASE award, by the Army Research Office through the MAST program, and by an NSF CAREER award (#1351028). A. T. was partially funded by the Viterbi Scholarship, Technion. Y. W. was partially funded by a DARPA PPAML program, contract FA8750-14-C-0011.
# References
[1] R. Bellman. Dynamic Programming. Princeton University Press, 1957.
[2] D. Bertsekas. Dynamic Programming and Optimal Control, Vol II. Athena Scientific, 4th edition, 2012.
[3] D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition, pages 3642–3649, 2012.
[4] M. Deisenroth and C. E. Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In ICML, 2011.
[5] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv:1604.06778, 2016.
[6] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1915–1929, 2013.
[7] C. Finn, M. Zhang, J. Fu, X. Tan, Z. McCarthy, E. Scharff, and S. Levine. Guided policy search code implementation, 2016. Software available from rll.berkeley.edu/gps.
[8] K. Fukushima. Neural network model for a mechanism of pattern recognition unaffected by shift in position – neocognitron. Transactions of the IECE, J62-A(10):658–665, 1979.
[9] A. Giusti et al. A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters, 2016.
[10] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In NIPS, 2014.
[11] X. Guo, S. Singh, R. Lewis, and H. Lee. Deep learning for reward design to improve monte carlo tree search in atari games. arXiv:1604.07095, 2016.
[12] R. Ilin, R. Kozma, and P. J. Werbos. Efficient learning in cellular simultaneous recurrent neural networks – the case of maze navigation problem. In ADPRL, 2007.
[13] J. Joseph, A. Geramifard, J. W. Roberts, J. P. How, and N. Roy. Reinforcement learning with misspecified model classes. In ICRA, 2013.
[14] L. P. Kaelbling and T. Lozano-Pérez. Hierarchical task and motion planning in the now. International Conference on Robotics and Automation (ICRA), pages 1470–1477, 2011.
[15] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[17] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In NIPS, 2014.
[18] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. JMLR, 17, 2016.
[19] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
[20] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[22] G. Neu and C. Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In UAI, 2007.
[23] R. Nogueira and K. Cho. Webnav: A new large-scale task for natural language based sequential decision making. arXiv preprint arXiv:1602.02261, 2016.
[24] S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
[25] J. Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In International Joint Conference on Neural Networks. IEEE, 1990.
[26] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015.
[27] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. MIT press, 1998.
[28] Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.
[29] T. Tieleman and G. Hinton. Lecture 6.5. COURSERA: Neural Networks for Machine Learning, 2012.
[30] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012.
[31] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In NIPS, 2015.
[32] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
# A Visualization of Learned Reward and Value
In Figure 5 we plot the learned reward and value function for the gridworld task. The learned reward is very negative at obstacles, very positive at goal, and a slightly negative constant otherwise. The resulting value function has a peak at the goal, and a gradient pointing towards a direction to the goal around obstacles. This plot clearly shows that the VI block learned a useful planning computation.
Figure 5: Visualization of learned reward and value function. Left: a sample domain. Center: learned reward fR for this domain. Right: resulting value function (in VI block) for this domain.
# B Weight Sharing
The VINs have an effective depth of $K$, which is larger than the depth of the reactive policies. One may wonder whether any deep enough network would learn to plan. In principle, a CNN or FCN of depth $K$ has the potential to perform the same computation as a VIN. However, it has many more parameters, requiring much more training data. We evaluate this by untying the weights in the $K$ recurrent layers in the VIN. Our results in Table 2 show that untying the weights degrades performance, with a stronger effect for smaller sizes of training data.
Training data | VIN Pred. loss | VIN Succ. rate | VIN Traj. diff. | Untied Pred. loss | Untied Succ. rate | Untied Traj. diff.
20% | 0.06 | 98.2% | 0.106 | 0.09 | 91.9% | 0.094
50% | 0.05 | 99.4% | 0.018 | 0.07 | 95.2% | 0.078
100% | 0.05 | 99.3% | 0.089 | 0.05 | 95.6% | 0.068
Table 2: Performance on the 16 × 16 grid-world domain. Evaluation of the effect of VI-module weight sharing relative to data size.
# C Gridworld with Reinforcement Learning
We demonstrate that the value iteration network can be trained using reinforcement learning methods and achieves favorable generalization properties as compared to standard convolutional neural networks (CNNs). The overall setup of the experiment is as follows: we train policies parameterized by VINs and policies parameterized by convolutional networks on the same set of randomly generated gridworld maps in the same way (described below), and then test their performance on a held-out set of test maps, which was generated in the same way as the set of training maps but is disjoint from the training set.

The MDP is what one would expect of a gridworld environment: the states are the positions on the map; the actions are movements up, down, left, and right; the rewards are +1 for reaching the goal, −1 for falling into a hole, and −0.01 otherwise (to encourage the policy to find the shortest path); the transitions are deterministic.

Structure of the networks. The VINs used are similar to those described in the main body of the paper. After $K$ value-iteration recurrences, we have approximate Q values for every state and action in the map. The attention selects only those for the current state, and these are converted to a
Network | 8 × 8 | 16 × 16
VIN | 90.9% | 82.5%
CNN | 86.9% | 33.1%
Table 3: RL results – performance on test maps.
probability distribution over actions using the softmax function. We use $K = 10$ for the 8 × 8 maps and $K = 20$ for the 16 × 16 maps.

The convolutional networks' structure was adapted to accommodate the size of the maps. For the 8 × 8 maps, we use 50 filters in the first layer and then 100 filters in the second layer, all of size 3 × 3. Each of these layers is followed by a 2 × 2 max-pool. At the end we have a fully connected hidden layer with 100 hidden units, followed by a fully connected layer to the 4 outputs, which are converted to probabilities using the softmax function. The network for the 16 × 16 maps is similar but uses three convolutional layers (with 50, 100, and 100 filters respectively), the first two of which are 2 × 2 max-pooled, followed by two fully connected hidden layers (200 and 100 hidden units respectively) before connecting to the outputs and performing softmax.

Training with a curriculum. To ensure that the policies are not simply memorizing specific maps, we randomly select a map before each episode. But some maps are far more difficult than others, and the agent learns best when it stands a reasonable chance of reaching the goal. Thus we found it beneficial to begin training on the easiest maps and then gradually progress to more difficult maps. This is the idea of curriculum training. We consider curriculum training as a way to address the exploration problem. If a completely untrained agent is dropped into a very challenging map, it moves randomly and stands approximately zero chance of reaching the goal (and thus learning a useful reward). But even a random policy can consistently reach nearby goals and learn something useful in the process, e.g. to move toward the goal. Once the policy knows how to solve tasks of difficulty $n$, it can more easily learn to solve tasks of difficulty $n+1$, as compared to a completely untrained policy. This strategy is well aligned with how formal education is structured; you can't effectively learn calculus without knowing basic algebra.

Not all environments have an obvious difficulty metric, but fortunately the gridworld task does. We define the difficulty of a map as the length of the shortest path from the start state to the goal state. It is natural to start with difficulty 1 (the start state and goal state are adjacent) and ramp up the difficulty by one level once a certain threshold of "success" is reached. In our experiments we use the average discounted return to assess progress, and increase the difficulty level from $n$ to $n+1$ when the average discounted return for an iteration exceeds $1 - \frac{n}{35}$; a minimal sketch of this rule appears below. This rule was chosen empirically and takes into account the fact that higher difficulty levels are more difficult to learn. All networks were trained using the trust region policy optimization (TRPO) [26] algorithm, using publicly available code in the RLLab benchmark [5].

Testing. When testing, we ignore the exact rewards and measure simply whether or not the agent reaches the goal. For each map in the test set, we run an episode, noting if the policy succeeds in reaching the goal. The proportion of successful trials out of all the trials is reported for each network (see Table 3). On the 8 × 8 maps, we used the same number of training iterations on both types of networks to make the comparison as fair as possible.
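A minimal sketch of the difficulty-advancement rule referenced above (the $1 - n/35$ threshold is the empirical rule from the text; everything else is illustrative):

```python
def advance_difficulty(difficulty, avg_discounted_return):
    # Move from difficulty n to n + 1 once the average discounted
    # return for an iteration exceeds 1 - n / 35.
    if avg_discounted_return > 1.0 - difficulty / 35.0:
        return difficulty + 1
    return difficulty
```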
On the 16 × 16 maps, it became clear that the convolutional network was struggling, so we allowed it twice as many training iterations as the VIN, yet it still failed to achieve even a remotely similar level of performance on the test maps (see left image of Figure 6). We posit that this is because the VIN learns to plan, while the CNN simply follows a reactive policy. Though the CNN policy performs reasonably well on the smaller domains, it does not scale to larger domains, while the VIN does (see right image of Figure 6).
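The difficulty-ramping rule above is simple enough to state in code. Below is a minimal sketch of the schedule, assuming an average discounted return statistic computed per training iteration; the names are our own illustration, not taken from the released implementation.

```python
def next_difficulty(n, avg_discounted_return):
    """Raise the curriculum level when the empirical threshold is passed.

    n is the current difficulty (shortest-path length from start to goal);
    the level increases when the average discounted return for an iteration
    exceeds 1 - n/35, as described above.
    """
    threshold = 1.0 - n / 35.0
    return n + 1 if avg_discounted_return > threshold else n

# Example: at difficulty 7 the threshold is 1 - 7/35 = 0.8.
assert next_difficulty(7, 0.85) == 8
assert next_difficulty(7, 0.75) == 7
```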
# D Technical Details for Experiments
We report the full technical details used for training our networks.
Figure 6: RL results - performance of VIN and CNN on 16 × 16 test maps. Left: Performance on all maps as a function of amount of training. Right: Success rate on test maps of increasing difficulty.
# D.1 Grid-world Domain
Our training set consists of Ni = 5000 random grid-world instances, with Nt = 7 shortest-path trajectories (calculated using an optimal planning algorithm) from a random start-state to a random goal-state for each instance; a total of Ni × Nt trajectories. For each state s = (i, j) in each trajectory, we produce a (2 × m × n)-sized observation image simage. The first channel of simage encodes the obstacle presence (1 for obstacle, 0 otherwise), while the second channel encodes the goal position (1 at the goal, 0 otherwise). The full observation vector is φ(s) = [s, simage]. In addition, for each state we produce a label a that encodes the action (one of 8 directions) that an optimal shortest-path policy would take in that state. We design a VIN for this task as follows. The state space ¯S was chosen to be an m × n grid-world, similar to the true state space S (see footnote 4). The reward ¯R in this space can be represented by an m × n map, and we chose the reward mapping fR to be a CNN with simage as its input, one layer with 150 kernels of size 3 × 3, and a second layer with one 3 × 3 filter to output ¯R. Thus, fR maps the image of obstacles and goal to a "reward image". The transitions ¯P were defined as 3 × 3 convolution kernels in the VI block, exploiting the fact that transitions in the grid-world are local (a minimal sketch of this computation is given at the end of this subsection). Note that the transitions defined this way do not depend on the state s. Interestingly, we shall see that the network learned rewards and transitions that nevertheless enable it to successfully plan in this task. For the attention module, since there is a one-to-one mapping between the agent position in S and in ¯S, we chose a trivial approach that selects the ¯Q values in the VI block for the state in the real MDP s, i.e., ψ(s) = ¯Q(s, ·). The final reactive policy is a fully connected softmax output layer with weights W: πre(· | ψ(s)) ∝ exp(W⊤ψ(s)). We trained several neural-network policies based on a multi-class logistic regression loss function using stochastic gradient descent, with an RMSProp step size [29], implemented in the Theano [28] library. We compare the policies:
VIN network: We used the VIN model of Section 3 as described above, with 10 channels for the q layer in the VI block. The recurrence K was set relative to the problem size: K = 10 for 8 × 8 domains, K = 20 for 16 × 16 domains, and K = 36 for 28 × 28 domains. The guideline for choosing these values was to keep the network small while guaranteeing that goal information can flow to every state in the map.

CNN network: We devised a CNN-based reactive policy inspired by the recent impressive results of DQN [21], with 5 convolution layers with [50, 50, 100, 100, 100] kernels of size 3 × 3, and 2 × 2 max-pooling after the first and third layers. The final layer is fully connected, and maps to a softmax over actions. To represent the current state, we added to simage a channel that encodes the current position (1 at the current state, 0 otherwise).
4 For a particular configuration of obstacles, the true grid-world domain can be captured by an m × n state space with the obstacles encoded in the MDP transitions, as in our notation. For a general obstacle configuration, the obstacle positions have to also be encoded in the state. The VIN was able to learn a policy for a general obstacle configuration by planning in an m × n state space by also taking into account the observation of the map.
Fully Convolutional Network (FCN): The problem setting for this domain is similar to semantic segmentation [19], in which each pixel in the image is assigned a semantic label (the action in our case). We therefore devised an FCN inspired by a state-of-the-art semantic segmentation algorithm [19], with 3 convolution layers, where the first layer has a filter that spans the whole image, to properly convey information from the goal to every other state. The first convolution layer has 150 filters of size (2m - 1) × (2n - 1), which span the whole image and can convey information about the goal to every pixel. The second layer has 150 filters of size 1 × 1, and the third layer has 10 filters of size 1 × 1, to produce an output sized 10 × m × n, similarly to the ¯Q layer in our VIN. Similarly to the attention mechanism in the VIN, the values that correspond to the current state (pixel) are passed to a fully connected softmax output layer.
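To make the VI block itself concrete, the following NumPy sketch implements the computation described above: the learned reward map is convolved with 3 × 3 kernels that play the role of the transitions ¯P, producing ¯Q channels; the maximum over channels gives ¯V; and the procedure recurs K times. Shapes, channel counts, and names are illustrative assumptions; the actual block is trained end-to-end in Theano.

```python
import numpy as np

def conv2d_same(x, kernels):
    """Stride-1, zero-padded 3x3 convolution: (c_in, m, n) -> (c_out, m, n)."""
    c_out, c_in = kernels.shape[:2]
    m, n = x.shape[1:]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, m, n))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(3):
                for dj in range(3):
                    out[o] += kernels[o, i, di, dj] * xp[i, di:di + m, dj:dj + n]
    return out

def vi_block(r_bar, w_r, w_v, k=20):
    """K recurrences of approximate value iteration on a reward map.

    r_bar: (1, m, n) reward image produced by f_R.
    w_r, w_v: (10, 1, 3, 3) kernels acting as the learned transition model.
    Returns the final Q-bar (10, m, n) and V-bar (1, m, n) maps.
    """
    v_bar = np.zeros_like(r_bar)
    for _ in range(k):
        q_bar = conv2d_same(r_bar, w_r) + conv2d_same(v_bar, w_v)
        v_bar = q_bar.max(axis=0, keepdims=True)  # max over action channels
    return q_bar, v_bar
```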
# D.2 Mars Domain
We consider the problem of autonomously navigating the surface of Mars by a rover such as the Mars Science Laboratory (MSL) (Lockwood, 2006) over long-distance trajectories. The MSL has a limited ability for climbing high-degree slopes, and its path-planning algorithm should therefore avoid navigating into high-slope areas. In our experiment, we plan trajectories that avoid slopes of 10 degrees or more, using overhead terrain images from the High Resolution Imaging Science Experiment (HiRISE) (McEwen et al., 2007). The HiRISE data consists of grayscale images of the Mars terrain, and matching elevation data, accurate to tens of centimeters. We used an image of a 33.3km by 6.3km area at 49.96 degrees latitude and 219.2 degrees longitude, with a 10.5 sq. meters / pixel resolution. Each domain is a 128 × 128 image patch, on which we defined a 16 × 16 grid-world, where each state was considered an obstacle if its corresponding 8 × 8 image patch contained an angle of 10 degrees or more, evaluated using additional elevation data. An example of the domain and terrain image is depicted in Figure 3. The MDP for shortest-path planning in this case is similar to the grid-world domain of Section 4.1, and the VIN design was similar, only with a deeper CNN in the reward mapping fR for processing the image.

Our goal is to train a network that predicts the shortest-path trajectory directly from the terrain image data. We emphasize that the ground-truth elevation data is not part of the input, and the elevation therefore must be inferred (if needed) from the terrain image itself. Our VIN design follows the model of Section 4.1. In this case, however, instead of feeding in the obstacle map, we feed in the raw terrain image, and accordingly modify the reward mapping fR with 2 additional CNN layers for processing the image: the first with 6 kernels of size 5 × 5 and 4 × 4 max-pooling, and the second with 12 kernels of size 3 × 3 and 2 × 2 max-pooling. The resulting 12 × m × n tensor is concatenated with the goal image, and passed to a third layer with 150 kernels of size 3 × 3 and a fourth layer with one 3 × 3 filter to output ¯R. The state inputs and output labels remain as in the grid-world experiments. We emphasize that the whole network is trained end-to-end, without pre-training the input filters. In Table 4 we present our results for training an m = n = 16 map from a 10K image-patch dataset, with 7 random trajectories per patch, evaluated on a held-out test set of 1K patches. Figure 3 shows an instance of the input image, the obstacles, the shortest-path trajectory, and the trajectory predicted by our method.

To put the 84.8% success rate in context, we compare with the best performance achievable without access to the elevation data. To make this comparison, we trained a CNN to classify whether an 8 × 8 patch is an obstacle or not. This classifier was trained using the same image data as the VIN network, but its labels were the true obstacle classifications from the elevation map (we reiterate that the VIN network did not have access to these ground-truth obstacle classification labels during training or testing). Training this classifier is a standard binary classification problem, and its performance represents the best obstacle identification possible with our CNN in this domain. The best-achievable shortest-path prediction is then defined as the shortest path in an obstacle map generated by this classifier from the raw image. The results of this optimal predictor are reported in Table 4.
The 90.3% success rate shows that obstacle identification from the raw image is indeed challenging. Thus, the success rate of the VIN network, which was trained without any obstacle labels and had to "figure out" the planning process, is quite remarkable.
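For reference, the ground-truth obstacle labeling can be sketched as below. The finite-difference slope estimate is our assumption; the text above only specifies that a grid cell is an obstacle when its 8 × 8 elevation patch contains a slope of 10 degrees or more.

```python
import numpy as np

def obstacle_grid(elevation, cell=8, max_slope_deg=10.0, meters_per_pixel=10.5):
    """Label each 8x8 cell of a 128x128 elevation patch as obstacle/free."""
    gy, gx = np.gradient(elevation.astype(float), meters_per_pixel)
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))  # per-pixel slope angle
    m, n = slope_deg.shape
    m, n = m - m % cell, n - n % cell                    # crop to cell multiples
    blocks = slope_deg[:m, :n].reshape(m // cell, cell, n // cell, cell)
    return blocks.max(axis=(1, 3)) >= max_slope_deg     # 16x16 obstacle map
```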
# D.3 Continuous Control
For training we chose the guided policy search (GPS) algorithm with unknown dynamics [17], which is suitable for learning policies for continuous dynamics with contacts, and we used the publicly available GPS code [7], and Mujoco [30] for physical simulation. GPS works by learning time-varying iLQG controllers for each domain, and then fitting the controllers to a single NN policy using
Network | Pred. loss | Succ. rate | Traj. diff.
VIN | 0.089 | 84.8% | 0.016
Best achievable | - | 90.3% | 0.0089
Table 4: Performance of VINs on the Mars domain. For comparison, the performance of a planner that used obstacle predictions trained from labeled obstacle data is shown. This upper bound on performance demonstrates the difficulty in identifying obstacles from the raw image data. Remarkably, the VIN achieved close performance without access to any labeled data about the obstacles.
supervised learning. This process is repeated for several iterations, and a special cost function is used to enforce an agreement between the trajectory distribution of the iLQG and NN controllers. We refer to [17, 7] for the full algorithm details. For our task, we ran 10 iterations of iLQG, with the cost being a quadratic distance to the goal, followed by one iteration of NN policy fitting. This allows us to cleanly compare VINs to other policies without GPS-specific effects. Our VIN design is similar to the grid-world cases: the state space ¯S is a 16 × 16 grid-world, and the transitions ¯P are 3 × 3 convolution kernels in the VI block, similar to the grid-world of Section 4.1. However, we made some important modifications: the attention module selects a 5 × 5 patch of the value ¯V, centered around the current (discretized) position in the map. The final reactive policy is a 3-layer fully connected network, with a 2-dimensional continuous output for the controls. In addition, due to the limited number of training domains, we pre-trained the VIN with transition weights that correspond to discounted grid-world transitions (for example, the transitions for an action to go north-west would be γ in the top left corner and zeros otherwise), before training end-to-end. This is a reasonable prior for the weights in a 2-d task, and we emphasize that even with this initialization, the initial value function is meaningless, since the reward map fR is not yet learned. The reward mapping fR is a CNN with simage as its input, one layer with 150 kernels of size 3 × 3, and a second layer with one 3 × 3 filter to output ¯R.
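The transition-weight pre-training mentioned above amounts to a simple initialization, sketched below; the ordering of the eight direction offsets is an illustrative assumption.

```python
import numpy as np

def discounted_transition_kernels(gamma=0.99):
    """3x3 kernels with gamma at each action's neighbor offset, zeros elsewhere.

    For example, the kernel of the north-west action has gamma in the top-left
    corner, matching the discounted grid-world transitions described above.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    kernels = np.zeros((len(offsets), 1, 3, 3))
    for a, (di, dj) in enumerate(offsets):
        kernels[a, 0, 1 + di, 1 + dj] = gamma
    return kernels
```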
# D.4 WebNav
"WebNav" [23] is a recently proposed goal-driven web navigation benchmark. In WebNav, web pages and links from some website form a directed graph G(S, E). The agent is presented with a query text, which consists of Nq sentences from a target page at most Nh hops away from the starting page. The goal for the agent is to navigate to that target page from the starting page by clicking at most Na links per page. Here, we choose Nq = Nh = Na = 4. In [23], the agent receives a reward of 1 when reaching the target page via any path no longer than 10 hops. For evaluation convenience, in our experiment, the agent can receive a reward only if it reaches the destination via the shortest path, which makes the task much harder. We measure the top-1 and top-4 prediction accuracy as well as the average reward for the baseline and our VIN model. For every page s, the valid transitions are As = {s′ : (s, s′) ∈ E}. For every web page s and every query text q, we utilize the bag-of-words model with pretrained word embeddings provided by [23] to produce feature vectors φ(s) and φ(q). The agent should choose at most Na valid actions from As = {s′ : (s, s′) ∈ E} based on the current s and q. The baseline method of [23] uses a single tanh-layer neural net parametrized by W to compute
a hidden vector h: h(s, q) = tanh(W [φ(s); φ(q)]). The final baseline policy is computed via πbsl(s′ | s, q) ∝ exp(h(s, q)⊤φ(s′)) for s′ ∈ As. We design a VIN for this task as follows. We first selected a smaller website as the approximate graph ¯G( ¯S, ¯E), and choose ¯S as the states in VI. For query q and a page ¯s in ¯S, we compute the reward ¯R(¯s) by fR(¯s | q) = tanh((WR φ(q) + bR)⊤φ(¯s)) with parameters WR (a diagonal matrix) and bR (a vector). For the transitions, since the graph remains unchanged, ¯P is fixed. For the attention module Π( ¯V ∗, s), we compute it by Π( ¯V ∗, s) = Σ¯s∈ ¯S sigmoid((WΠ φ(s) + bΠ)⊤φ(¯s)) ¯V ∗(¯s), where WΠ and bΠ are parameters and WΠ is diagonal. Moreover, we compute the coefficient γ based on the query q and the state s using a tanh-layer neural net parametrized by Wγ: γ(s, q) = tanh(Wγ [φ(s); φ(q)]).
Network | Top-1 Test Err. | Top-4 Test Err. | Avg. Reward
BSL | 52.019% | 24.424% | 0.27779
VIN | 50.562% | 26.055% | 0.30389
# Table 5: Performance on the full Wikipedia dataset.
Finally, we combine the VI module and the baseline method as our VIN
model by simply adding the outputs from these two networks together. In addition to the experiments reported in the main text, we performed experiments on the full Wikipedia, using "wikipedia for schools" as the graph for VIN planning. We report our preliminary results here. Full Wikipedia website: The full Wikipedia dataset consists of 779169 training queries (3 million training samples) and 20004 testing queries (76664 testing samples) over 4.8 million pages with a maximum of 300 links per page. We use the whole WikiSchool website as our approximate graph and set K = 4. In VIN, to accelerate training, we first train only the VI module with K = 0. Then, we fix ¯R obtained in the K = 0 case and jointly train the whole model with K = 4. The results are shown in Table 5. VIN achieves 1.5% better prediction accuracy than the baseline. Interestingly, with only a 1.5% prediction accuracy enhancement, VIN achieves a 2.5% better success rate than the baseline; note that the agent can only succeed when making 4 consecutive correct predictions. This indicates that the VI module does provide useful high-level planning information.
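The reward and attention modules above reduce to a few vectorized operations. The sketch below assumes a feature matrix phi_pages holding the bag-of-words vector of every page in ¯S; since WR and WΠ are diagonal, they are stored as vectors here. All names are our own illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reward_map(phi_q, phi_pages, w_r, b_r):
    """f_R(s_bar | q) = tanh((W_R phi(q) + b_R)^T phi(s_bar)) for every page."""
    proj = w_r * phi_q + b_r          # diagonal W_R applied to the query features
    return np.tanh(phi_pages @ proj)  # (num_pages,) reward over approximate states

def attention(phi_s, phi_pages, v_star, w_pi, b_pi):
    """Pi(V*, s): a sigmoid-weighted sum of the approximate value map."""
    weights = sigmoid(phi_pages @ (w_pi * phi_s + b_pi))  # (num_pages,)
    return weights @ v_star
```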
# D.5 Additional Technical Comments
Runtime: For the 2D domains, different samples from the same domain share the same VI computation, since they have the same observation. Therefore, a single VI computation is required for samples from the same domain. Using this, and GPU code (Theano), VINs are not much slower than the baselines. For the language task, however, since Theano supports neither convolutions on graphs nor sparse operations on GPU, VINs were considerably slower in our implementation.
# E Hierarchical VI Modules
The number of VI iterations K required in the VIN depends on the problem size. Consider, for example, a grid-world in which the goal is located L steps away from some state s. Then, at least L iterations of VI are required to convey the reward information from the goal to state s, and clearly, any action prediction obtained with less than L VI iterations at state s is unaware of the goal location, and therefore unacceptable. To convey reward information faster in VI, and reduce the effective K, we propose to perform VI at multiple levels of resolution. We term this model a hierarchical VI Network (HVIN), due to its similarity with hierarchical planning algorithms. In a HVIN, a copy of the input down-sampled by a factor of d is first fed into a VI module termed the high-level VI module. The down-sampling offers a d× speedup of information transmission in the map, at the price of reduced accuracy. The value layer of the high-level VI module is then up-sampled, and added as an additional input channel to the input of the standard VI module. Thus, the high-level VI module learns a mapping from down-sampled image features to a suitable reward-shaping for the nominal VI module. The full HVIN model is depicted in Figure 7. This model can easily be extended to include multiple levels of hierarchy. Table 6 shows the performance of the HVIN module in the grid-world task, compared to the VIN results reported in the main text. We used a 2 × 2 down-sampling layer. Similarly to the standard VIN, 3 × 3 convolution kernels, 150 channels for each hidden layer H (for both the down-sampled image, and standard image), and 10 channels for the q layer in each VI block. Similarly to the VIN networks, the recurrence K was set relative to the problem size, taking into account the down-sampling factor: K = 4 for 8 × 8 domains, K = 10 for 16 × 16 domains, and K = 16 for 28 × 28 domains (in comparison, the respective K values for standard VINs were 10, 20, and 36). The HVINs demonstrated better performance for the larger 28 × 28 map, which we attribute to the improved information transmission in the hierarchical VI module.
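A minimal sketch of the hierarchical module follows: the reward map is down-sampled, a coarse value map is computed by VI, and the up-sampled value is appended as an extra reward channel for the fine VI module. The average pooling and nearest-neighbor up-sampling are our assumptions; the paper only specifies a 2 × 2 down-sampling layer.

```python
import numpy as np

def downsample2x(x):  # 2x2 average pooling over the spatial dimensions
    return 0.25 * (x[:, ::2, ::2] + x[:, 1::2, ::2] + x[:, ::2, 1::2] + x[:, 1::2, 1::2])

def upsample2x(x):    # nearest-neighbor up-sampling
    return x.repeat(2, axis=1).repeat(2, axis=2)

def hierarchical_reward(r_bar, value_iteration, k_high=8):
    """Shape the fine reward with a coarse value map from the high-level VI.

    value_iteration is any callable mapping a (1, m/2, n/2) reward map to a
    (1, m/2, n/2) value map, e.g. a wrapper around the vi_block sketch in D.1.
    """
    v_high = value_iteration(downsample2x(r_bar), k=k_high)
    return np.concatenate([r_bar, upsample2x(v_high)], axis=0)
```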
Figure 7: Hierarchical VI network. A copy of the input is first fed into a convolution layer and then downsampled. This signal is then fed into a VI module to produce a coarse value function, corresponding to the upper level in the hierarchy. This value function is then up-sampled, and added as an additional channel in the reward layer of a standard VI module (lower level of the hierarchy).
Domain | VIN Prediction loss | VIN Success rate | VIN Trajectory diff. | HVIN Prediction loss | HVIN Success rate | HVIN Trajectory diff.
8 × 8 | 0.004 | 99.6% | 0.001 | 0.005 | 99.3% | 0.0
16 × 16 | 0.05 | 99.3% | 0.089 | 0.03 | 99% | 0.007
28 × 28 | 0.11 | 97% | 0.086 | 0.05 | 98.1% | 0.037
Table 6: HVIN performance on grid-world domain.
1602.02410 | Exploring the Limits of Language Modeling | In this work we explore recent advances in Recurrent Neural Networks for
large scale Language Modeling, a task central to language understanding. We
extend current models to deal with two key challenges present in this task:
corpora and vocabulary sizes, and complex, long term structure of language. We
perform an exhaustive study on techniques such as character Convolutional
Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark.
Our best single model significantly improves state-of-the-art perplexity from
51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20),
while an ensemble of models sets a new record by improving perplexity from 41.0
down to 23.7. We also release these models for the NLP and ML community to
study and improve upon. | http://arxiv.org/pdf/1602.02410 | Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu | cs.CL | null | null | cs.CL | 20160207 | 20160211
# Exploring the Limits of Language Modeling
Rafal Jozefowicz Oriol Vinyals Mike Schuster Noam Shazeer Yonghui Wu
RAFALJ@GOOGLE.COM VINYALS@GOOGLE.COM SCHUSTER@GOOGLE.COM NOAM@GOOGLE.COM YONGHUI@GOOGLE.COM
Google Brain
Abstract

In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
# 1. Introduction
Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amount of information about the knowledge that a corpus may contain. Indeed, models that are able to assign a low probability to sentences that are grammatically correct but unlikely may help other tasks in fundamental language understanding like question answering, machine translation, or text summarization.
LMs have played a key role in traditional NLP tasks such as speech recognition (Mikolov et al., 2010; Arisoy et al., 2012), machine translation (Schwenk et al., 2012; Vaswani et al.), or text summarization (Rush et al., 2015; Filippova et al., 2015). Often (although not always), training better
language models improves the underlying metrics of the downstream task (such as word error rate for speech recognition, or BLEU score for translation), which makes the task of training better LMs valuable by itself.
Further, when trained on vast amounts of data, language models compactly extract knowledge encoded in the training data. For example, when trained on movie subtitles (Serban et al., 2015; Vinyals & Le, 2015), these language models are able to generate basic answers to questions about object colors, facts about people, etc. Lastly, recently proposed sequence-to-sequence models employ conditional language models (Mikolov & Zweig, 2012) as their key component to solve diverse tasks like machine translation (Sutskever et al., 2014; Cho et al., 2014; Kalchbrenner et al., 2014) or video generation (Srivastava et al., 2015a).
Deep Learning and Recurrent Neural Networks (RNNs) have fueled language modeling research in the past years as they allowed researchers to explore many tasks for which the strong conditional independence assumptions are unrealistic. Despite the fact that simpler models, such as N-grams, only use a short history of previous words to predict the next word, they are still a key component to high quality, low perplexity LMs. Indeed, most recent work on large scale LM has shown that RNNs are great in combination with N-grams, as they may have different strengths that complement N-gram models, but worse when considered in isolation (Mikolov et al., 2011; Mikolov, 2012; Chelba et al., 2013; Williams et al., 2015; Ji et al., 2015a; Shazeer et al., 2015).

We believe that, despite much work being devoted to small data sets like the Penn Tree Bank (PTB) (Marcus et al., 1993), research on larger tasks is very relevant as overfitting is not the main limitation in current language modeling, but is the main characteristic of the PTB task. Results on larger corpora usually show better what matters, as many ideas work well on small data sets but fail to improve on larger data sets. Further, given current hardware trends and vast amounts of text available on the Web, it is much more straightforward to tackle large scale modeling than it used to be. Thus, we hope that our work will help and motivate researchers to work on traditional LM beyond PTB; for this purpose, we will open-source our models and training recipes.

We focused on a well known, large scale LM benchmark: the One Billion Word Benchmark data set (Chelba et al., 2013). This data set is much larger than PTB (one thousand fold, 800k word vocabulary and 1B words training data) and far more challenging. Similar to Imagenet (Deng et al., 2009), which helped advance computer vision, we believe that releasing and working on large data sets and models with clear benchmarks will help advance Language Modeling.

The contributions of our work are as follows:

• We explored, extended and tried to unify some of the current research on large scale LM.

• Specifically, we designed a Softmax loss which is based on character level CNNs, is efficient to train, and is as precise as a full Softmax which has orders of magnitude more parameters.

• Our study yielded significant improvements to the state-of-the-art on a well known, large scale LM task: from 51.3 down to 30.0 perplexity for single models whilst reducing the number of parameters by a factor of 20.

• We show that an ensemble of a number of different models can bring down perplexity on this task to 23.7, a large improvement compared to current state-of-the-art.

• We share the model and recipes in order to help and motivate further research in this area.

In Section 2 we review important concepts and previous work on language modeling. Section 3 presents our contributions to the field of neural language modeling, emphasizing large scale recurrent neural network training. Sections 4 and 5 aim at exhaustively describing our experience and understanding throughout the project, as well as emplacing our work relative to other known approaches.
# 2. Related Work
Figure 1. A high-level diagram of the models presented in this paper. (a) is a standard LSTM LM. (b) represents an LM where both input and Softmax embeddings have been replaced by a character CNN. In (c) we replace the Softmax by a next character prediction LSTM network.
In this section we describe previous work relevant to the approaches discussed in this paper. A more detailed discussion on language modeling research is provided in (Mikolov, 2012).
# 2.1. Language Models
Language Modeling (LM) has been a central task in NLP. The goal of LM is to learn a probability distribution over sequences of symbols pertaining to a language. Much work has been done on both parametric (e.g., log-linear models) and non-parametric approaches (e.g., count-based LMs). Count-based approaches (based on statistics of N-grams) typically add smoothing which accounts for unseen (yet possible) sequences, and have been quite successful. To this extent, Kneser-Ney smoothed 5-gram models (Kneser & Ney, 1995) are a fairly strong baseline which, for large amounts of training data, have challenged other parametric approaches based on Neural Networks (Bengio et al., 2006).

Most of our work is based on Recurrent Neural Networks (RNN) models which retain long term dependencies. To this extent, we used the Long-Short Term Memory model (Hochreiter & Schmidhuber, 1997) which uses a gating mechanism (Gers et al., 2000) to ensure proper propagation of information through many time steps. Much work has been done on small and large scale RNN-based LMs (Mikolov et al., 2010; Mikolov, 2012; Chelba et al., 2013; Zaremba et al., 2014; Williams et al., 2015; Ji et al., 2015a; Wang & Cho, 2015; Ji et al., 2015b). The architectures that we considered in this paper are represented in Figure 1.

In our work, we train models on the popular One Billion Word Benchmark, which can be considered to be a medium-sized data set for count-based LMs but a very large data set for NN-based LMs. This regime is most interesting to us as we believe learning a very good model of human language is a complex task which will require large models,
and thus large amounts of data. Further advances in data availability and computational resources helped our study. We argue this leap in scale enabled tremendous advances in deep learning. A clear example found in computer vision is Imagenet (Deng et al., 2009), which enabled learning complex vision models from large amounts of data (Krizhevsky et al., 2012).

A crucial aspect which we discuss in detail in later sections is the size of our models. Despite the large number of parameters, we try to minimize computation as much as possible by adopting a strategy proposed in (Sak et al., 2014) of projecting a relatively big recurrent state space down so that the matrices involved remain relatively small, yet the model has large memory capacity.
# 2.2. Convolutional Embedding Models
There is an increased interest in incorporating character-level inputs to build word embeddings for various NLP problems, including part-of-speech tagging, parsing and language modeling (Ling et al., 2015; Kim et al., 2015; Ballesteros et al., 2015). The additional character information has been shown useful on relatively small benchmark data sets.
The approach proposed in (Ling et al., 2015) builds word embeddings using bidirectional LSTMs (Schuster & Paliwal, 1997; Graves & Schmidhuber, 2005) over the characters. The recurrent networks process sequences of characters from both sides and their final state vectors are concatenated. The resulting representation is then fed to a Neural Network. This model achieved very good results on a part-of-speech tagging task.
In (Kim et al., 2015), the word's characters are processed by a 1-d CNN (Le Cun et al., 1990) with max-pooling across the sequence for each convolutional feature. The resulting features are fed to a 2-layer highway network (Srivastava et al., 2015b), which allows the embedding to learn semantic representations. The model was evaluated on small-scale language modeling experiments for various languages and matched the best results on the PTB data set despite having 60% fewer parameters.
# 2.3. Softmax Over Large Vocabularies
Assigning probability distributions over large vocabularies is computationally challenging. For modeling language, maximizing log-likelihood of a given word sequence leads to optimizing cross-entropy between the target probability distribution (e.g., the target word we should be predicting), and our model predictions p. Generally, predictions come from a linear layer followed by a Softmax non-linearity: p(w) = exp(zw) / Σw′∈V exp(zw′), where zw is the logit corresponding to a word w, computed as an inner product zw = hT ew where h is a context vector and ew is a "word embedding" for w.

The main challenge when |V| is very large (in the order of one million in this paper) is the fact that computing all inner products between h and all embeddings becomes prohibitively slow during training (even when exploiting matrix-matrix multiplications and modern GPUs). Several approaches have been proposed to cope with the scaling issue: importance sampling (Bengio et al., 2003; Bengio & Senécal, 2008), Noise Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013), self normalizing partition functions (Vincent et al., 2015) or Hierarchical Softmax (Morin & Bengio, 2005; Mnih & Hinton, 2009); they all offer good solutions to this problem. We found importance sampling to be quite effective on this task, and explain the connection between it and NCE in the following section, as they are closely related.

# 3. Language Modeling Improvements

Recurrent Neural Networks based LMs employ the chain rule to model joint probabilities over word sequences:

p(w1, . . . , wN) = ∏_{i=1}^{N} p(wi | w1, . . . , wi−1)

where the context of all previous words is encoded with an LSTM, and the probability over words uses a Softmax (see Figure 1(a)).

# 3.1. Relationship between Noise Contrastive Estimation and Importance Sampling

As discussed in Section 2.3, a large scale Softmax is necessary for training good LMs because of the vocabulary size. A Hierarchical Softmax (Mnih & Hinton, 2009) employs a tree in which the probability distribution over words is decomposed into a product of two probabilities for each word, greatly reducing training and inference time as only the path specified by the hierarchy needs to be computed and updated. Choosing a good hierarchy is important for obtaining good results and we did not explore this approach further for this paper as sampling methods worked well for our setup.
Sampling approaches are only useful during training, as they propose an approximation to the loss which is cheap to compute (also in a distributed setting); however, at inference time one still has to compute the normalization term over all words. Noise Contrastive Estimation (NCE) proposes to consider a surrogate binary classification task in which a classifier is trained to discriminate between true data, or samples coming from some arbitrary distribution. If both the noise and data distributions were known, the
optimal classifier would be:
p(Y = true | w) = pd(w) / (pd(w) + k pn(w))
where Y is the binary random variable indicating whether w comes from the true data distribution, k is the number of negative samples per positive word, and pd and pn are the data and noise distribution respectively (we dropped any dependency on previous words for notational simplicity).
It is easy to show that if we train a logistic classifier pθ(Y = true | w) = σ(sθ(w, h) − log k pn(w)), where σ is the logistic function, then pθ(w) = softmax(sθ(w, h)) is a good approximation of pd(w) (sθ is a logit which, e.g., an LSTM LM computes).
The other technique, which is based on importance sampling (IS), proposes to directly approximate the partition function (which comprises a sum over all words) with an estimate of it through importance sampling. Though the methods look superficially similar, we will derive a similar surrogate classification task akin to NCE which arrives at IS, showing a strong connection between the two.
Suppose that, instead of having a binary task to decide if a word comes from the data or from the noise distribution, we want to identify the words coming from the true data distribution in a set W = {w1, . . . , wk+1}, comprised of k noise samples and one data distribution sample. Thus, we can train a multiclass loss over a multinomial random variable Y which maximizes log p(Y = 1 | W), assuming w.l.o.g. that w1 ∈ W is always the word coming from true data. By Bayes rule, and ignoring terms that are constant with respect to Y, we can write:

p(Y = k | W) ∝ pd(wk) / pn(wk)

and, following a similar argument as for NCE, if we define p(Y = k | W) = softmax(sθ(wk) − log pn(wk)), then p(w) = softmax(sθ(w, h)) is a good approximation of pd(w). Note that the only difference between NCE and IS is that, in NCE, we define a binary classification task between true or noise words with a logistic loss, whereas in IS we define a multiclass classification problem with a Softmax and cross entropy loss. We hope that our derivation helps clarify the similarities and differences between the two. In particular, we observe that IS, as it optimizes a multiclass classification task (in contrast to solving a binary task), may be a better choice. Indeed, the updates to the logits with IS are tied whereas in NCE they are independent.
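The derivation can be summarized in a few lines of code. The sketch below contrasts the two surrogate losses for a single context, with the true word at index 0 of the sampled set; it illustrates the math above and is not the production training loss.

```python
import numpy as np

def is_loss(s_theta, p_n):
    """Importance sampling: (k+1)-way softmax with logits s - log p_n."""
    adjusted = s_theta - np.log(p_n)
    adjusted -= adjusted.max()                      # numerical stability
    log_softmax = adjusted - np.log(np.exp(adjusted).sum())
    return -log_softmax[0]                          # true word at index 0

def nce_loss(s_theta, p_n, k):
    """NCE: k+1 independent binary problems with logits s - log(k p_n)."""
    logits = s_theta - np.log(k * p_n)
    y = np.zeros_like(logits)
    y[0] = 1.0                                      # only w_1 is true data
    p = 1.0 / (1.0 + np.exp(-logits))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()
```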
# 3.2. CNN Softmax

The character-level features allow for a smoother and compact parametrization of the word embeddings. Recent efforts on small scale language modeling have used CNN character embeddings for the input embeddings (Kim et al., 2015). Although not as straightforward, we propose an extension to this idea to also reduce the number of parameters of the Softmax layer. Recall from Section 2.3 that the Softmax computes a logit as zw = hT ew where h is a context vector and ew the word embedding. Instead of building a matrix of |V| × |h| (whose rows correspond to ew), we produce ew with a CNN over the characters of w as ew = CNN(charsw); we call this a CNN Softmax. We used the same network architecture to dynamically generate the Softmax word embeddings without sharing the parameters with the input word-embedding sub-network. For inference, the vectors ew can be precomputed, so there is no computational complexity increase w.r.t. the regular Softmax.

We note that, when using an importance sampling loss such as the one described in Section 3.1, only a few logits have non-zero gradient (those corresponding to the true and sampled words). With a Softmax where ew are independently learned word embeddings, this is not a problem. But we observed that, when using a CNN, all the logits become tied as the function mapping from w to ew is quite smooth. As a result, a much smaller learning rate had to be used. Even with this, the model lacks capacity to differentiate between words that have very different meanings but that are spelled similarly. Thus, a reasonable compromise was to add a small correction factor which is learned per word, such that:

zw = hT CNN(charsw) + hT M corrw

where M is a matrix projecting a low-dimensional embedding vector corrw back up to the dimensionality of the projected LSTM hidden state of h. This amounts to adding a bottleneck linear layer, and brings the CNN Softmax much closer to our best result, as can be seen in Table 1, where adding a 128-dim correction halves the gap between regular and the CNN Softmax.
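A minimal sketch of the corrected logit follows; char_cnn stands in for the character CNN sub-network and is passed as a callable, and all names are our own illustration.

```python
import numpy as np

def cnn_softmax_logit(h, char_cnn, chars_w, M, corr_w):
    """z_w = h^T CNN(chars_w) + h^T M corr_w, as defined above.

    h: projected LSTM state; chars_w: character IDs of word w;
    corr_w: the learned low-dimensional (e.g. 128-dim) correction for w;
    M: projection from the correction space up to the dimensionality of h.
    """
    e_w = char_cnn(chars_w)       # word embedding generated from characters
    return h @ e_w + h @ (M @ corr_w)
```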
Aside from a big reduction in the number of parameters and incorporating morphological knowledge from words, the other benefit of this approach is that out-of-vocabulary (OOV) words can easily be scored. This may be useful for other problems such as Machine Translation where handling out-of-vocabulary words is very important (Luong et al., 2014). This approach also allows parallel training over various data sets since the model is no longer explicitly parametrized by the vocabulary size, or the language. This has shown to help when using byte-level input embeddings for named entity recognition (Gillick et al., 2015),
and we hope it will enable similar gains when used to map onto words.
# 3.3. Char LSTM Predictions
The CNN Softmax layer can handle arbitrary words and is much more efficient in terms of number of parameters than the full Softmax matrix. It is, though, still considerably slow, as to evaluate perplexities we need to compute the partition function. A class of models that solve this problem more efficiently are character-level LSTMs (Sutskever et al., 2011; Graves, 2013). They make predictions one character at a time, thus allowing to compute probabilities over a much smaller vocabulary. On the other hand, these models are more difficult to train and seem to perform worse even in small tasks like PTB (Graves, 2013). Most likely this is due to the sequences becoming much longer on average as the LSTM reads the input character by character instead of word by word.

Thus, we combine the word and character-level models by feeding a word-level LSTM hidden state h into a small LSTM that predicts the target word one character at a time (see Figure 1(c)). In order to make the whole process reasonably efficient, we train the standard LSTM model until convergence, freeze its weights, and replace the standard word-level Softmax layer with the aforementioned character-level LSTM.

The resulting model scales independently of vocabulary size, both for training and inference. However, it does seem to be worse than the regular and CNN Softmax; we are hopeful that further research will enable these models to replace fixed vocabulary models whilst being computationally attractive.

# 4. Experiments

All experiments were run using the TensorFlow system (Abadi et al., 2015), with the exception of some older models which were used in the ensemble.

# 4.1. Data Set

The experiments are performed on the 1B Word Benchmark data set introduced by (Chelba et al., 2013), which is a publicly available benchmark for measuring progress of statistical language modeling. The data set contains about 0.8B words with a vocabulary of 793471 words, including sentence boundary markers. All the sentences are shuffled and the duplicates are removed. The words that are out of vocabulary (OOV) are marked with a special UNK token (there are approximately 0.3% such words).
# 4.2. Model Setup

The typical measure used for reporting progress in language modeling is perplexity, computed from the average per-word log-probability on the holdout data set: exp(−(1/N) Σi ln p(wi)). We follow the standard procedure and sum over all the words (including the end of sentence symbol).
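For concreteness, a small NumPy sketch of this metric (our own illustration):

```python
import numpy as np

def perplexity(log_probs):
    """exp of the negative average per-word natural-log probability."""
    log_probs = np.asarray(log_probs)
    return np.exp(-log_probs.mean())

# A uniform model over a 10-word vocabulary has perplexity 10.
assert np.isclose(perplexity(np.log(np.full(100, 0.1))), 10.0)
```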
We used the 1B Word Benchmark data set without any preprocessing. Given the shuffled sentences, they are input to the network as a batch of independent streams of words. Whenever a sentence ends, a new one starts without any padding (thus maximizing the occupancy per batch).

For the models that consume characters as inputs or as targets, each word is fed to the model as a sequence of character IDs of prespecified length (see Figure 1(b)). The words were processed to include special begin and end of word tokens and were padded to reach the expected length. I.e. if the maximum word length was 10, the word "cat" would be transformed to "$cat^" plus padding due to the CNN model.
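A minimal sketch of this preprocessing; the exact marker and padding IDs are our assumptions (the text above specifies a 256-symbol byte vocabulary and begin/end-of-word tokens, but not the IDs themselves).

```python
def word_to_char_ids(word, max_len=50, begin=ord('$'), end=ord('^'), pad=0):
    """Encode a word as begin token, UTF-8 bytes, end token, then padding."""
    ids = [begin] + list(word.encode('utf-8')) + [end]  # non-ascii -> bytes
    ids = ids[:max_len]
    return ids + [pad] * (max_len - len(ids))

# 'cat' becomes '$cat^' plus padding, as in the example above.
assert word_to_char_ids('cat', max_len=8)[:5] == [36, 99, 97, 116, 94]
```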
In our experiments we found that limiting the maximum word length in training to 50 was sufficient to reach very good results, while 32 was clearly insufficient. We used 256 characters in our vocabulary and the non-ascii symbols were represented as a sequence of bytes.
# 4.3. Model Architecture
We evaluated many variations of RNN LM architectures. These include the dimensionalities of the embedding layers, the state, projection sizes, and number of LSTM layers to use. Exhaustively trying all combinations would be extremely time consuming for such a large data set, but our findings suggest that LSTMs with a projection layer (i.e., a bottleneck between hidden states as in (Sak et al., 2014)) trained with truncated BPTT (Williams & Peng, 1990) for 20 steps performed well.
Following (Zaremba et al., 2014) we use dropout (Srivastava, 2013) before and after every LSTM layer. The biases of the LSTM forget gate were initialized to 1.0 (Jozefowicz et al., 2015). The size of the models will be described in more detail in the following sections, and the choices of hyper-parameters will be released as open source upon publication.
For any model using character embedding CNNs, we closely follow the architecture from (Kim et al., 2015). The only important difference is that we use a larger number of convolutional features of 4096 to give enough capacity to the model. The resulting embedding is then linearly transformed to match the LSTM projection sizes. This allows it to match the performance of regular word embeddings but only uses a small fraction of parameters.
Table 1. Best results of single models on the 1B word benchmark. Our results are shown below previous work.
MODEL | TEST PERPLEXITY | NUMBER OF PARAMS [BILLIONS]
SIGMOID-RNN-2048 (JI ET AL., 2015A) | 68.3 | 4.1
INTERPOLATED KN 5-GRAM, 1.1B N-GRAMS (CHELBA ET AL., 2013) | 67.6 | 1.76
SPARSE NON-NEGATIVE MATRIX LM (SHAZEER ET AL., 2015) | 52.9 | 33
RNN-1024 + MAXENT 9-GRAM FEATURES (CHELBA ET AL., 2013) | 51.3 | 20
LSTM-512-512 | 54.1 | 0.82
LSTM-1024-512 | 48.2 | 0.82
LSTM-2048-512 | 43.7 | 0.83
LSTM-8192-2048 (NO DROPOUT) | 37.9 | 3.3
LSTM-8192-2048 (50% DROPOUT) | 32.2 | 3.3
2-LAYER LSTM-8192-1024 (BIG LSTM) | 30.6 | 1.8
BIG LSTM+CNN INPUTS | 30.0 | 1.04
BIG LSTM+CNN INPUTS + CNN SOFTMAX | 39.8 | 0.29
BIG LSTM+CNN INPUTS + CNN SOFTMAX + 128-DIM CORRECTION | 35.8 | 0.39
BIG LSTM+CNN INPUTS + CHAR LSTM PREDICTIONS | 47.9 | 0.23
Table 2. Best results of ensembles on the 1B Word Benchmark.
MODEL | TEST PERPLEXITY
LARGE ENSEMBLE (CHELBA ET AL., 2013) | 43.8
RNN+KN-5 (WILLIAMS ET AL., 2015) | 42.4
RNN+KN-5 (JI ET AL., 2015A) | 42.0
RNN+SNM10-SKIP (SHAZEER ET AL., 2015) | 41.3
LARGE ENSEMBLE (SHAZEER ET AL., 2015) | 41.0
OUR 10 BEST LSTM MODELS (EQUAL WEIGHTS) | 26.3
OUR 10 BEST LSTM MODELS (OPTIMAL WEIGHTS) | 26.1
10 LSTMS + KN-5 (EQUAL WEIGHTS) | 25.3
10 LSTMS + KN-5 (OPTIMAL WEIGHTS) | 25.1
10 LSTMS + SNM10-SKIP (SHAZEER ET AL., 2015) | 23.7
# 4.4. Training Procedure
The models were trained until convergence with an AdaGrad optimizer using a learning rate of 0.2. In all the experiments the RNNs were unrolled for 20 steps without ever resetting the LSTM states. We used a batch size of 128. We clip the gradients of the LSTM weights such that their norm is bounded by 1.0 (Pascanu et al., 2012).
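The clipping step can be sketched as follows; this is a plain NumPy illustration of clipping by global norm, not the distributed TensorFlow implementation.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=1.0):
    """Rescale a list of gradient arrays so their joint L2 norm is <= max_norm."""
    norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads))
    if norm > max_norm:
        grads = [g * (max_norm / norm) for g in grads]
    return grads
```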
Using these hyper-parameters we found large LSTMs to be relatively easy to train. The same learning rate was used in almost all of the experiments. In a few cases we had to reduce it by an order of magnitude. Unless otherwise stated, the experiments were performed with 32 GPU workers and asynchronous gradient updates. Further details will be fully specified with the code upon publication.

Training a model for such a large target vocabulary (793471 words) required care with some details of the approximation to the full Softmax using importance sampling. We used a large number of negative (or noise) samples: 8192 such samples were drawn per step, but were shared across all the target words in the batch (2560 total, i.e. 128 times 20 unrolled steps). This results in multiplying (2560 x 1024) times (1024 x (8192+1)) (instead of (2560 x 1024) times (1024 x 793471)), i.e. about 100-fold less computation.
# 5. Results and Analysis
In this section we summarize the results of our experiments and do an in-depth analysis. Table 1 contains all results for our models compared to previously published work. Table 2 shows previous and our own work on ensembles of models. We hope that our encouraging results, which improved the best perplexity of a single model from 51.3 to 30.0 (whilst reducing the model size considerably), and set a new record with ensembles at 23.7, will enable rapid research and progress to advance Language Modeling. For
this purpose, we will release the model weights and recipes upon publication.
# 5.1. Size Matters
Unsurprisingly, size matters: when training on a very large and complex data set, fitting the training data with an LSTM is fairly challenging. Thus, the size of the LSTM layer is a very important factor that influences the results, as seen in Table 1. The best models are the largest we were able to fit into a GPU memory. Our largest model was a 2-layer LSTM with 8192+1024 dimensional recurrent state in each of the layers. Increasing the embedding and projection size also helps but causes a large increase in the number of parameters, which is less desirable. Lastly, training an RNN instead of an LSTM yields poorer results (about 5 perplexity worse) for a comparable model size.

Table 3. The test perplexities of an LSTM-2048-512 trained with different losses versus number of epochs. The model needs about 40 minutes per epoch. The first epoch is a bit slower because we slowly increase the number of workers.

EPOCHS | NCE | IS | TRAINING TIME [HOURS]
1 | 97 | 60 | 1
5 | 58 | 47.5 | 4
10 | 53 | 45 | 8
20 | 49 | 44 | 14
50 | 46.1 | 43.7 | 34
# 5.2. Regularization Importance

As shown in Table 1, using dropout improves the results. To our surprise, even relatively small models (e.g., a single layer LSTM with 2048 units projected to 512 dimensional outputs) can over-fit the training set if trained long enough, eventually yielding holdout set degradation.

Using dropout on non-recurrent connections largely mitigates these issues. While over-fitting still occurs, there is no more need for early stopping. For models that had 4096 or less units in the LSTM layer, we used 10% dropout probability. For larger models, 25% was significantly better. Even with such regularization, perplexities on the training set can be as much as 6 points below test.

In one experiment we tried to use a smaller vocabulary comprising the 100,000 most frequent words and found the difference between train and test to be smaller, which suggests that too much capacity is given to rare words. This is less of an issue with character CNN embedding models as the embeddings are shared across all words.

Table 4. Nearest neighbors in the character CNN embedding space of a few out-of-vocabulary words. Even for words that the model has never seen, the model usually still finds reasonable neighbors.

WORD | TOP-1 | TOP-2 | TOP-3
INCERDIBLE | INCREDIBLE | NONEDIBLE | EXTENDIBLE
WWW.A.COM | WWW.AA.COM | WWW.AAA.COM | WWW.CA.COM
7546 | 7646 | 7534 | 8566
TOWNHAL1 | TOWNHALL | DJC2 | MOODSWING360
KOMARSKI | KOHARSKI | KONARSKI | KOMANSKI
# 5.3. Importance Sampling is Data Efficient
Table 3 shows the test perplexities of NCE vs IS loss after a few epochs of a 2048 unit LSTM with 512 projection. The IS objective significantly improves the speed and the overall performance of the model when compared to NCE.

# 5.4. Word Embeddings vs Character CNN

Replacing the embedding layer with a parametrized neural network that processes the characters of a given word allows the model to consume arbitrary words and is not restricted to a fixed vocabulary. This property is useful for data sets with conversational or informal text as well as for morphologically rich languages. Our experiments show that using character-level embeddings is feasible and does not degrade performance; in fact, our best single model uses a Character CNN embedding.

An additional advantage is that the number of parameters of the input layer is reduced by a factor of 11 (though training speed is slightly worse). For inference, the embeddings can be precomputed so there is no speed penalty. Overall, the embedding of the best model is parametrized by 72M weights (down from 820M weights).

Table 4 shows a few examples of nearest neighbor embeddings for some out-of-vocabulary words when character CNNs are used.

# 5.5. Smaller Models with CNN Softmax

Even with character-level embeddings, the model is still fairly large (though much smaller than the best competing models from previous work). Most of the parameters are in the linear layer before the Softmax: 820M versus a total of 1.04B parameters.
In one of the experiments we froze the word-LSTM after convergence and replaced the Softmax layer with the CNN Softmax sub-network. Without any fine-tuning that model was able to reach 39.8 perplexity with only 293M weights (as seen in Table 1).
As described in Section 3.2, adding a "correction" word embedding term alleviates the gap between regular and CNN Softmax. Indeed, we can trade off model size versus perplexity. For instance, by adding 100M weights (through a 128 dimensional bottleneck embedding) we achieve 35.8 perplexity (see Table 1).

To contrast with the CNN Softmax, we also evaluated a model that replaces the Softmax layer with a smaller LSTM that predicts one character at a time (see Section 3.3). Such a model does not have to learn long dependencies because the base LSTM still operates at the word-level (see Figure 1(c)). With a single-layer LSTM of 1024 units we reached 49.0 test perplexity, far from the best model. In order to make the comparisons more fair, we performed a very expensive marginalization over the words in the vocabulary (to rule out words not in the dictionary which the character LSTM would assign some probability). When doing this marginalization, the perplexity improved a bit, down to 47.9.
# 5.6. Training Speed

We used 32 Tesla K40 GPUs to train our models. The smaller version of the LSTM model with 2048 units and 512 projections needs less than 10 hours to reach below 45 perplexity, and after only 2 hours of training the model beats the previous state-of-the-art on this data set. The best model needs about 5 days to get to 35 perplexity and 10 days to 32.5. The best results were achieved after 3 weeks of training. See Table 3 for more details.

# 5.7. Ensembles

We averaged several of our best models and we were able to reach 23.7 test perplexity (more details and results can be seen in Table 2), which is more than 40% improvement over previous work. Interestingly, including the best N-gram model reduces the perplexity by 1.2 points even though the model is rather weak on its own (67.6 perplexity). Most previous work had to either ensemble with the best N-gram model (as their RNN only used a limited output vocabulary of a few thousand words), or use N-gram features as additional input to the RNN. Our results, on the contrary, suggest that N-grams are of limited benefit, and that a carefully trained LSTM LM is the most competitive model.

# 5.8. LSTMs are best on the tail words

Figure 2 shows the difference in log probabilities between our best model (at 30.0 perplexity) and the KN-5. As can be seen from the plot, the LSTM is better across all the buckets and significantly outperforms KN-5 on the rare words. This is encouraging as it seems to suggest that LSTM LMs may fare even better, relative to traditional N-gram models, for languages or data sets where the number of rare words is larger.

Figure 2. The difference in log probabilities between the best LSTM and KN-5 (higher is better). The words from the holdout set are grouped into 25 buckets of equal size based on their frequencies.

# 5.9. Samples from the model

To qualitatively evaluate the model, we sampled many sentences. We discarded short and politically incorrect ones, but the sample shown below is otherwise "raw" (i.e., not hand picked). The samples are of high quality, which is not a surprise given the perplexities attained, but there are still some occasional mistakes.

Sentences generated by the ensemble (about 26 perplexity):

< S > With even more new technologies coming onto the market quickly during the past three years, an increasing number of companies now must tackle the ever-changing and ever-changing environmental challenges online. < S > Check back for updates on this breaking news story. < S > About 800 people gathered at Hever Castle on Long Beach from noon to 2pm, three to four times that of the funeral cortège. < S > We are aware of written instructions from the copyright holder not to, in any way, mention Rosenberg's negative comments if they are relevant as indicated in the documents," eBay said in a statement. < S > It is now known that coffee and cacao products can do no harm on the body. < S > Yuri Zhirkov was in attendance at the Stamford Bridge at the start of the second half but neither Drogba nor Malouda was able to push on through the Barcelona defence.

# 6. Discussion and Conclusions

In this paper we have shown that RNN LMs can be trained on large amounts of data, and outperform competing models including carefully tuned N-grams. The reduction in perplexity from 51.3 to 30.0 is due to several key components which we studied in this paper. Thus, a large, regularized LSTM LM, with projection layers and trained with an approximation to the true Softmax with importance sampling, performs much better than N-grams. Unlike previous work, we do not need to interpolate both the RNN LM and the N-gram, and the gains of doing so are rather marginal.
By exploring recent advances in model architectures (e.g. LSTMs), exploiting small character CNNs, and by sharing our findings in this paper and accompanying code and models (to be released upon publication), we hope to inspire research on large scale Language Modeling, a problem we consider crucial towards language understanding. We hope for future research to focus on reasonably sized datasets taking inspiration from recent advances seen in the computer vision community thanks to efforts such as Imagenet (Deng et al., 2009).
# Acknowledgements

We thank Ciprian Chelba, Ilya Sutskever, and the Google Brain Team for their help and discussions. We also thank Koray Kavukcuoglu for his help with the manuscript.

# References

Bengio, Yoshua, Schwenk, Holger, Senécal, Jean-Sébastien, Morin, Fréderic, and Gauvain, Jean-Luc. Neural probabilistic language models. In Innovations in Machine Learning, pp. 137-186. Springer, 2006.

Chelba, Ciprian, Mikolov, Tomas, Schuster, Mike, Ge, Qi, Brants, Thorsten, Koehn, Phillipp, and Robinson, Tony. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.

Cho, Kyunghyun, Van Merriënboer, Bart, Gulcehre, Caglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Abadi, Mart´ın, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghe- mawat, Sanjay, Goodfellow, Ian, Harp, Andrew, Irv- ing, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Man´e, Dan, Monga, Rajat, Moore, Sherry, Murray, Derek, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul, Vanhoucke, Vincent, Vasudevan, Vijay, Vi´egas, Fernanda, Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang. TensorFlow: Large-scale machine learning on heteroge- neous systems, 2015. URL http://tensorflow. org/. Software available from tensorï¬ow.org.
Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.
Filippova, Katja, Alfonseca, Enrique, Colmenares, Car- los A, Kaiser, Lukasz, and Vinyals, Oriol. Sentence com- pression by deletion with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pp. 360â368, 2015.
Gers, Felix A, Schmidhuber, Jürgen, and Cummins, Fred. Learning to forget: Continual prediction with LSTM. Neural computation, 12(10):2451–2471, 2000.
Gillick, Dan, Brunk, Cliff, Vinyals, Oriol, and Subra- manya, Amarnag. Multilingual language processing from bytes. arXiv preprint arXiv:1512.00103, 2015.
Arisoy, Ebru, Sainath, Tara N, Kingsbury, Brian, and Ram- abhadran, Bhuvana. Deep neural network language mod- els. In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, pp. 20â28. As- sociation for Computational Linguistics, 2012.
Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
Graves, Alex and Schmidhuber, Jürgen. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602–610, 2005.
Ballesteros, Miguel, Dyer, Chris, and Smith, Noah A. Improved transition-based parsing by modeling characters instead of words with LSTMs. arXiv preprint arXiv:1508.00657, 2015.
Gutmann, Michael and Hyvärinen, Aapo. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In International Conference on Artificial Intelligence and Statistics, pp. 297–304, 2010.
Bengio, Yoshua and Senécal, Jean-Sébastien. Adaptive importance sampling to accelerate training of a neural probabilistic language model. Neural Networks, IEEE Transactions on, 19(4):713–722, 2008.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
Bengio, Yoshua, Senécal, Jean-Sébastien, et al. Quick training of probabilistic neural nets by importance sampling. In AISTATS, 2003.
Ji, Shihao, Vishwanathan, S. V. N., Satish, Nadathur, Anderson, Michael J., and Dubey, Pradeep. Blackout: Speeding up recurrent neural network language models with very large vocabularies. CoRR, abs/1511.06909, 2015a. URL http://arxiv.org/abs/1511.06909.
Mikolov, Tomas and Zweig, Geoffrey. Context dependent recurrent neural network language model. In SLT, pp. 234–239, 2012.
Ji, Yangfeng, Cohn, Trevor, Kong, Lingpeng, Dyer, Chris, and Eisenstein, Jacob. Document context language models. arXiv preprint arXiv:1511.03962, 2015b.
Jozefowicz, Rafal, Zaremba, Wojciech, and Sutskever, Ilya. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 2342–2350, 2015.
Mikolov, Tomas, Karafiát, Martin, Burget, Lukas, Černocký, Jan, and Khudanpur, Sanjeev. Recurrent neural network based language model. In INTERSPEECH, volume 2, pp. 3, 2010.
Mikolov, Tomas, Deoras, Anoop, Kombrink, Stefan, Burget, Lukas, and Černocký, Jan. Empirical evaluation and combination of advanced language modeling techniques. In INTERSPEECH, pp. 605–608, 2011.
Kalchbrenner, Nal, Grefenstette, Edward, and Blunsom, Phil. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188, 2014.
Mnih, Andriy and Hinton, Geoffrey E. A scalable hierarchical distributed language model. In Advances in neural information processing systems, pp. 1081–1088, 2009.
Kim, Yoon, Jernite, Yacine, Sontag, David, and Rush, Alexander M. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015.
Kneser, Reinhard and Ney, Hermann. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., 1995 International Conference on, volume 1, pp. 181–184. IEEE, 1995.
Mnih, Andriy and Kavukcuoglu, Koray. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems, pp. 2265–2273, 2013.
Morin, Frederic and Bengio, Yoshua. Hierarchical probabilistic neural network language model. In Aistats, volume 5, pp. 246–252. Citeseer, 2005.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063, 2012.
LeCun, Yann, Boser, Bernhard, Denker, John S, Henderson, D, Howard, Richard E, Hubbard, W, and Jackel, Lawrence D. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems. Citeseer, 1990.
Ling, Wang, Luís, Tiago, Marujo, Luís, Astudillo, Ramón Fernandez, Amir, Silvio, Dyer, Chris, Black, Alan W, and Trancoso, Isabel. Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096, 2015.
Rush, Alexander M, Chopra, Sumit, and Weston, Jason. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.
Sak, Hasim, Senior, Andrew W, and Beaufays, Françoise. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338–342, 2014.
Schuster, Mike and Paliwal, Kuldip K. Bidirectional recur- rent neural networks. Signal Processing, IEEE Transac- tions on, 45(11):2673â2681, 1997.
Luong, Minh-Thang, Sutskever, Ilya, Le, Quoc V, Vinyals, Oriol, and Zaremba, Wojciech. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014.
Marcus, Mitchell P, Marcinkiewicz, Mary Ann, and San- torini, Beatrice. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313â330, 1993.
Mikolov, Tomáš. Statistical language models based on neural networks. Presentation at Google, Mountain View, 2nd April, 2012.
Schwenk, Holger, Rousseau, Anthony, and Attik, Mo- hammed. Large, pruned or continuous space language models on a gpu for statistical machine translation. In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Fu- ture of Language Modeling for HLT, pp. 11â19. Associ- ation for Computational Linguistics, 2012.
Serban, Iulian Vlad, Sordoni, Alessandro, Bengio, Yoshua, Courville, Aaron C., and Pineau, Joelle. Hierarchical neural network generative models for movie dialogues. CoRR, abs/1507.04808, 2015. URL http://arxiv. org/abs/1507.04808.
Exploring the Limits of Language Modeling
Shazeer, Noam, Pelemans, Joris, and Chelba, Ciprian. Sparse non-negative matrix language modeling for skip- grams. Proceedings of Interspeech, pp. 1428â1432, 2015.
Srivastava, Nitish. Improving neural networks with dropout. PhD thesis, University of Toronto, 2013.
Srivastava, Nitish, Mansimov, Elman, and Salakhutdinov, Ruslan. Unsupervised learning of video representations using lstms. arXiv preprint arXiv:1502.04681, 2015a.
Srivastava, Rupesh K, Greff, Klaus, and Schmidhuber, Jürgen. Training very deep networks. In Advances in Neural Information Processing Systems, pp. 2368–2376, 2015b.
Sutskever, Ilya, Martens, James, and Hinton, Geoffrey E. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1017–1024, 2011.
Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112, 2014.
Vaswani, Ashish, Zhao, Yinggong, Fossum, Victoria, and Chiang, David. Decoding with large-scale neural language models improves translation. In EMNLP, 2013. Citeseer.
Vincent, Pascal, de Brébisson, Alexandre, and Bouthillier, Xavier. Efficient exact gradient update for training deep networks with very large sparse targets. In Advances in Neural Information Processing Systems, pp. 1108–1116, 2015.
Vinyals, Oriol and Le, Quoc. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
Wang, Tian and Cho, Kyunghyun. Larger-context language modelling. arXiv preprint arXiv:1511.03729, 2015.
Williams, Ronald J and Peng, Jing. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural computation, 2(4):490–501, 1990.
Williams, Will, Prasad, Niranjani, Mrva, David, Ash, Tom, and Robinson, Tony. Scaling recurrent neural network language models. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 5391–5395. IEEE, 2015.
Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
1602.01783 | Asynchronous Methods for Deep Reinforcement Learning | We propose a conceptually simple and lightweight framework for deep
reinforcement learning that uses asynchronous gradient descent for optimization
of deep neural network controllers. We present asynchronous variants of four
standard reinforcement learning algorithms and show that parallel
actor-learners have a stabilizing effect on training allowing all four methods
to successfully train neural network controllers. The best performing method,
an asynchronous variant of actor-critic, surpasses the current state-of-the-art
on the Atari domain while training for half the time on a single multi-core CPU
instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds
on a wide variety of continuous motor control problems as well as on a new task
of navigating random 3D mazes using a visual input. | http://arxiv.org/pdf/1602.01783 | Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu | cs.LG | null | ICML 2016 | cs.LG | 20160204 | 20160616
# Asynchronous Methods for Deep Reinforcement Learning
Volodymyr Mnih1 (vmnih@google.com), Adrià Puigdomènech Badia1 (adriap@google.com), Mehdi Mirza1,2 (mirzamom@iro.umontreal.ca), Alex Graves1 (gravesa@google.com), Tim Harley1 (tharley@google.com), Timothy P. Lillicrap1 (countzero@google.com), David Silver1 (davidsilver@google.com), Koray Kavukcuoglu1 (korayk@google.com)
1 Google DeepMind
2 Montreal Institute for Learning Algorithms (MILA), University of Montreal
# Abstract
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training, allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
# 1. Introduction
Deep neural networks provide rich representations that can enable reinforcement learning (RL) algorithms to perform effectively. However, it was previously thought that the combination of simple online RL algorithms with deep neural networks was fundamentally unstable. Instead, a variety of solutions have been proposed to stabilize the algorithm (Riedmiller, 2005; Mnih et al., 2013; 2015; Van Hasselt et al., 2015; Schulman et al., 2015a). These approaches share a common idea: the sequence of observed data encountered by an online RL agent is non-stationary, and online
RL updates are strongly correlated. By storing the agent's data in an experience replay memory, the data can be batched (Riedmiller, 2005; Schulman et al., 2015a) or randomly sampled (Mnih et al., 2013; 2015; Van Hasselt et al., 2015) from different time-steps. Aggregating over memory in this way reduces non-stationarity and decorrelates updates, but at the same time limits the methods to off-policy reinforcement learning algorithms.
Deep RL algorithms based on experience replay have achieved unprecedented success in challenging domains such as Atari 2600. However, experience replay has several drawbacks: it uses more memory and computation per real interaction; and it requires off-policy learning algorithms that can update from data generated by an older policy.
In this paper we provide a very different paradigm for deep reinforcement learning. Instead of experience replay, we asynchronously execute multiple agents in parallel, on multiple instances of the environment. This parallelism also decorrelates the agents' data into a more stationary process, since at any given time-step the parallel agents will be experiencing a variety of different states. This simple idea enables a much larger spectrum of fundamental on-policy RL algorithms, such as Sarsa, n-step methods, and actor-critic methods, as well as off-policy RL algorithms such as Q-learning, to be applied robustly and effectively using deep neural networks.
Our parallel reinforcement learning paradigm also offers practical benefits. Whereas previous approaches to deep reinforcement learning rely heavily on specialized hardware such as GPUs (Mnih et al., 2015; Van Hasselt et al., 2015; Schaul et al., 2015) or massively distributed architectures (Nair et al., 2015), our experiments run on a single machine with a standard multi-core CPU. When applied to a variety of Atari 2600 domains, on many games asynchronous reinforcement learning achieves better results, in far less
time than previous GPU-based algorithms, using far less resource than massively distributed approaches. The best of the proposed methods, asynchronous advantage actor-critic (A3C), also mastered a variety of continuous motor control tasks as well as learned general strategies for exploring 3D mazes purely from visual inputs. We believe that the success of A3C on both 2D and 3D games, discrete and continuous action spaces, as well as its ability to train feedforward and recurrent agents, makes it the most general and successful reinforcement learning agent to date.
# 2. Related Work
The General Reinforcement Learning Architecture (Gorila) of (Nair et al., 2015) performs asynchronous training of reinforcement learning agents in a distributed setting. In Gorila, each process contains an actor that acts in its own copy of the environment, a separate replay memory, and a learner that samples data from the replay memory and computes gradients of the DQN loss (Mnih et al., 2015) with respect to the policy parameters. The gradients are asynchronously sent to a central parameter server which updates a central copy of the model. The updated policy parameters are sent to the actor-learners at fixed intervals. By using 100 separate actor-learner processes and 30 parameter server instances, a total of 130 machines, Gorila was able to significantly outperform DQN over 49 Atari games. On many games Gorila reached the score achieved by DQN over 20 times faster than DQN. We also note that a similar way of parallelizing DQN was proposed by (Chavez et al., 2015).
In earlier work, (Li & Schuurmans, 2011) applied the Map Reduce framework to parallelizing batch reinforcement learning methods with linear function approximation. Parallelism was used to speed up large matrix operations but not to parallelize the collection of experience or stabilize learning. (Grounds & Kudenko, 2008) proposed a parallel version of the Sarsa algorithm that uses multiple separate actor-learners to accelerate training. Each actor-learner learns separately and periodically sends updates to weights that have changed significantly to the other learners using peer-to-peer communication.
(Tsitsiklis, 1994) studied convergence properties of Q-learning in the asynchronous optimization setting. These results show that Q-learning is still guaranteed to converge when some of the information is outdated, as long as outdated information is always eventually discarded and several other technical assumptions are satisfied. Even earlier, (Bertsekas, 1982) studied the related problem of distributed dynamic programming.
Another related area of work is in evolutionary methods, which are often straightforward to parallelize by distributing fitness evaluations over multiple machines or threads (Tomassini, 1999). Such parallel evolutionary approaches have recently been applied to some visual reinforcement learning tasks. In one example, (Koutník et al., 2014) evolved convolutional neural network controllers for the TORCS driving simulator by performing fitness evaluations on 8 CPU cores in parallel.

# 3. Reinforcement Learning Background

We consider the standard reinforcement learning setting where an agent interacts with an environment $\mathcal{E}$ over a number of discrete time steps. At each time step $t$, the agent receives a state $s_t$ and selects an action $a_t$ from some set of possible actions $\mathcal{A}$ according to its policy $\pi$, where $\pi$ is a mapping from states $s_t$ to actions $a_t$. In return, the agent receives the next state $s_{t+1}$ and a scalar reward $r_t$. The process continues until the agent reaches a terminal state, after which the process restarts. The return $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k}$ is the total accumulated reward from time step $t$ with discount factor $\gamma \in (0, 1]$. The goal of the agent is to maximize the expected return from each state $s_t$. The action value $Q^\pi(s, a) = \mathbb{E}[R_t \mid s_t = s, a_t = a]$ is the expected return for selecting action $a$ in state $s$ and following policy $\pi$. The optimal value function $Q^*(s, a) = \max_\pi Q^\pi(s, a)$ gives the maximum action value for state $s$ and action $a$ achievable by any policy. Similarly, the value of state $s$ under policy $\pi$ is defined as $V^\pi(s) = \mathbb{E}[R_t \mid s_t = s]$ and is simply the expected return for following policy $\pi$ from state $s$.

In value-based model-free reinforcement learning methods, the action value function is represented using a function approximator, such as a neural network. Let $Q(s, a; \theta)$ be an approximate action-value function with parameters $\theta$. The updates to $\theta$ can be derived from a variety of reinforcement learning algorithms. One example of such an algorithm is Q-learning, which aims to directly approximate the optimal action value function: $Q^*(s, a) \approx Q(s, a; \theta)$. In one-step Q-learning, the parameters $\theta$ of the action value function $Q(s, a; \theta)$ are learned by iteratively minimizing a sequence of loss functions, where the $i$th loss function is defined as
$$L_i(\theta_i) = \mathbb{E}\left[ \left( r + \gamma \max_{a'} Q(s', a'; \theta_{i-1}) - Q(s, a; \theta_i) \right)^2 \right]$$

where $s'$ is the state encountered after state $s$.
We refer to the above method as one-step Q-learning because it updates the action value $Q(s, a)$ toward the one-step return $r + \gamma \max_{a'} Q(s', a'; \theta)$. One drawback of using one-step methods is that obtaining a reward $r$ only directly affects the value of the state-action pair $s, a$ that led to the reward. The values of other state-action pairs are affected only indirectly through the updated value $Q(s, a)$. This can make the learning process slow, since many updates are required to propagate a reward to the relevant preceding states and actions.
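To make the one-step update concrete, here is a minimal sketch of the loss above; it assumes PyTorch, and `q_net`/`target_net` are hypothetical networks standing in for $Q(\cdot; \theta)$ and the older parameters $\theta_{i-1}$:

```python
import torch

def one_step_q_loss(q_net, target_net, s, a, r, s_next, done, gamma=0.99):
    # s, s_next: batched states; a: int64 actions; r, done: float tensors.
    # Q(s, a; theta) for the actions that were actually taken.
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    # Target y = r + gamma * max_a' Q(s', a'; theta^-); no gradient flows
    # through the target, mirroring the fixed parameters theta_{i-1}.
    with torch.no_grad():
        y = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    # Squared error between the bootstrapped target and the current estimate.
    return ((y - q_sa) ** 2).mean()
```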
One way of propagating rewards faster is by using n-step returns (Watkins, 1989; Peng & Williams, 1996). In n-step Q-learning, $Q(s, a)$ is updated toward the n-step return defined as $r_t + \gamma r_{t+1} + \cdots + \gamma^{n-1} r_{t+n-1} + \gamma^n \max_a Q(s_{t+n}, a)$. This results in a single reward $r$ directly affecting the values of $n$ preceding state-action pairs. This makes the process of propagating rewards to relevant state-action pairs potentially much more efficient.
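A short sketch of how such n-step targets can be computed for a rollout (plain Python; `bootstrap_value` is a stand-in for $\max_a Q(s_{t+n}, a)$, or 0 if the rollout ended at a terminal state):

```python
def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    # Walk the rollout backwards so each R_t reuses R_{t+1}:
    # R_t = r_t + gamma * R_{t+1}, seeded with the bootstrap value.
    returns = []
    R = bootstrap_value
    for r in reversed(rewards):
        R = r + gamma * R
        returns.append(R)
    returns.reverse()
    return returns  # returns[t] is the n-step target for step t
```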
In contrast to value-based methods, policy-based model-free methods directly parameterize the policy $\pi(a|s;\theta)$ and update the parameters $\theta$ by performing, typically approximate, gradient ascent on $\mathbb{E}[R_t]$. One example of such a method is the REINFORCE family of algorithms due to Williams (1992). Standard REINFORCE updates the policy parameters $\theta$ in the direction $\nabla_\theta \log \pi(a_t|s_t;\theta) R_t$, which is an unbiased estimate of $\nabla_\theta \mathbb{E}[R_t]$. It is possible to reduce the variance of this estimate while keeping it unbiased by subtracting a learned function of the state $b_t(s_t)$, known as a baseline (Williams, 1992), from the return. The resulting gradient is $\nabla_\theta \log \pi(a_t|s_t;\theta)\,(R_t - b_t(s_t))$.
Algorithm 1 Asynchronous one-step Q-learning - pseudocode for each actor-learner thread.
// Assume global shared θ, θ⁻, and counter T = 0.
Initialize thread step counter t ← 0
Initialize target network weights θ⁻ ← θ
Initialize network gradients dθ ← 0
Get initial state s
repeat
    Take action a with ε-greedy policy based on Q(s, a; θ)
    Receive new state s′ and reward r
    y = r for terminal s′; y = r + γ max_{a′} Q(s′, a′; θ⁻) for non-terminal s′
    Accumulate gradients wrt θ: dθ ← dθ + ∂(y − Q(s, a; θ))² / ∂θ
    s = s′
    T ← T + 1 and t ← t + 1
    if T mod I_target == 0 then
        Update the target network θ⁻ ← θ
    end if
    if t mod I_AsyncUpdate == 0 or s is terminal then
        Perform asynchronous update of θ using dθ.
        Clear gradients dθ ← 0.
    end if
until T > T_max
A learned estimate of the value function is commonly used as the baseline $b_t(s_t) \approx V^\pi(s_t)$, leading to a much lower variance estimate of the policy gradient. When an approximate value function is used as the baseline, the quantity $R_t - b_t$ used to scale the policy gradient can be seen as an estimate of the advantage of action $a_t$ in state $s_t$, or $A(a_t, s_t) = Q(a_t, s_t) - V(s_t)$, because $R_t$ is an estimate of $Q^\pi(a_t, s_t)$ and $b_t$ is an estimate of $V^\pi(s_t)$. This approach can be viewed as an actor-critic architecture where the policy $\pi$ is the actor and the baseline $b_t$ is the critic (Sutton & Barto, 1998; Degris et al., 2012).
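As an illustration, the following sketch (assuming PyTorch; `policy_net` and `value_net` are hypothetical modules) computes the gradient estimate above by scaling the score function with the advantage $R_t - b_t(s_t)$:

```python
import torch
import torch.nn.functional as F

def reinforce_with_baseline_loss(policy_net, value_net, states, actions, returns):
    log_probs = F.log_softmax(policy_net(states), dim=1)
    log_pi_a = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    # Advantage estimate R_t - V(s_t), detached: the baseline only reduces
    # variance of the policy gradient and must not receive its gradient here.
    advantages = (returns - value_net(states).squeeze(1)).detach()
    # Minimizing this loss performs ascent on E[log pi(a_t|s_t) (R_t - b_t)].
    return -(log_pi_a * advantages).mean()
```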
# 4. Asynchronous RL Framework
We now present multi-threaded asynchronous variants of one-step Sarsa, one-step Q-learning, n-step Q-learning, and advantage actor-critic. The aim in designing these methods was to find RL algorithms that can train deep neural network policies reliably and without large resource requirements. While the underlying RL methods are quite different, with actor-critic being an on-policy policy search method and Q-learning being an off-policy value-based method, we use two main ideas to make all four algorithms practical given our design goal.
First, we use asynchronous actor-learners, similarly to the Gorila framework (Nair et al., 2015), but instead of using separate machines and a parameter server, we use multiple CPU threads on a single machine. Keeping the learners on a single machine removes the communication costs of sending gradients and parameters and enables us to use Hogwild! (Recht et al., 2011) style updates for training.
Second, we make the observation that multiple actor-learners running in parallel are likely to be exploring different parts of the environment. Moreover, one can explicitly use different exploration policies in each actor-learner to maximize this diversity. By running different exploration policies in different threads, the overall changes being made to the parameters by multiple actor-learners applying online updates in parallel are likely to be less correlated in time than a single agent applying online updates. Hence, we do not use a replay memory and rely on parallel actors employing different exploration policies to perform the stabilizing role undertaken by experience replay in the DQN training algorithm.

In addition to stabilizing learning, using multiple parallel actor-learners has multiple practical benefits. First, we obtain a reduction in training time that is roughly linear in the number of parallel actor-learners. Second, since we no longer rely on experience replay for stabilizing learning, we are able to use on-policy reinforcement learning methods such as Sarsa and actor-critic to train neural networks in a stable way. We now describe our variants of one-step Q-learning, one-step Sarsa, n-step Q-learning and advantage actor-critic.
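The overall training setup can be sketched as follows (plain Python with NumPy; `fake_gradient` is a stand-in for the gradient computed by any of the four methods). Threads share one parameter vector and apply Hogwild!-style lock-free updates:

```python
import threading
import numpy as np

theta = np.zeros(100)  # parameter vector shared by all actor-learner threads

def fake_gradient(params, rng):
    # Stand-in for dtheta accumulated from agent-environment interaction.
    return 0.01 * params + rng.normal(size=params.shape)

def actor_learner(thread_id, n_updates=1000):
    rng = np.random.default_rng(thread_id)  # per-thread exploration noise
    for _ in range(n_updates):
        dtheta = fake_gradient(theta, rng)
        theta[:] -= 1e-3 * dtheta  # lock-free, Hogwild!-style in-place update

threads = [threading.Thread(target=actor_learner, args=(i,)) for i in range(16)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```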
Asynchronous one-step Q-learning: Pseudocode for our variant of Q-learning, which we call asynchronous one-step Q-learning, is shown in Algorithm 1. Each thread interacts with its own copy of the environment and at each step computes a gradient of the Q-learning loss. We use a shared and slowly changing target network in computing the Q-learning loss, as was proposed in the DQN training method. We also accumulate gradients over multiple timesteps before they are applied, which is similar to using minibatches.
This reduces the chances of multiple actor-learners overwriting each other's updates. Accumulating updates over several steps also provides some ability to trade off computational efficiency for data efficiency.
Finally, we found that giving each thread a different exploration policy helps improve robustness. Adding diversity to exploration in this manner also generally improves performance through better exploration. While there are many possible ways of making the exploration policies differ, we experiment with using $\epsilon$-greedy exploration with $\epsilon$ periodically sampled from some distribution by each thread.
Asynchronous one-step Sarsa: The asynchronous one-step Sarsa algorithm is the same as asynchronous one-step Q-learning as given in Algorithm 1, except that it uses a different target value for $Q(s, a)$. The target value used by one-step Sarsa is $r + \gamma Q(s', a'; \theta^-)$ where $a'$ is the action taken in state $s'$ (Rummery & Niranjan, 1994; Sutton & Barto, 1998). We again use a target network and updates accumulated over multiple timesteps to stabilize learning.
Asynchronous n-step Q-learning: Pseudocode for our variant of multi-step Q-learning is shown in Supplementary Algorithm S2. The algorithm is somewhat unusual because it operates in the forward view by explicitly computing n-step returns, as opposed to the more common backward view used by techniques like eligibility traces (Sutton & Barto, 1998). We found that using the forward view is easier when training neural networks with momentum-based methods and backpropagation through time. In order to compute a single update, the algorithm first selects actions using its exploration policy for up to $t_{max}$ steps or until a terminal state is reached. This process results in the agent receiving up to $t_{max}$ rewards from the environment since its last update. The algorithm then computes gradients for n-step Q-learning updates for each of the state-action pairs encountered since the last update. Each n-step update uses the longest possible n-step return, resulting in a one-step update for the last state, a two-step update for the second last state, and so on for a total of up to $t_{max}$ updates. The accumulated updates are applied in a single gradient step.
Asynchronous advantage actor-critic: The algorithm, which we call asynchronous advantage actor-critic (A3C), maintains a policy $\pi(a_t|s_t;\theta)$ and an estimate of the value function $V(s_t;\theta_v)$. Like our variant of n-step Q-learning, our variant of actor-critic also operates in the forward view and uses the same mix of n-step returns to update both the policy and the value function. The policy and the value function are updated after every $t_{max}$ actions or when a terminal state is reached. The update performed by the algorithm can be seen as $\nabla_{\theta'} \log \pi(a_t|s_t;\theta')\,A(s_t, a_t; \theta, \theta_v)$, where $A(s_t, a_t; \theta, \theta_v)$ is an estimate of the advantage function given by $\sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V(s_{t+k};\theta_v) - V(s_t;\theta_v)$, where $k$ can vary from state to state and is upper-bounded by $t_{max}$. The pseudocode for the algorithm is presented in Supplementary Algorithm S3.

As with the value-based methods we rely on parallel actor-learners and accumulated updates for improving training stability. Note that while the parameters $\theta$ of the policy and $\theta_v$ of the value function are shown as being separate for generality, we always share some of the parameters in practice. We typically use a convolutional neural network that has one softmax output for the policy $\pi(a_t|s_t;\theta)$ and one linear output for the value function $V(s_t;\theta_v)$, with all non-output layers shared.

We also found that adding the entropy of the policy $\pi$ to the objective function improved exploration by discouraging premature convergence to suboptimal deterministic policies. This technique was originally proposed by (Williams & Peng, 1991), who found that it was particularly helpful on tasks requiring hierarchical behavior. The gradient of the full objective function including the entropy regularization term with respect to the policy parameters takes the form $\nabla_{\theta'} \log \pi(a_t|s_t;\theta')(R_t - V(s_t;\theta_v)) + \beta \nabla_{\theta'} H(\pi(s_t;\theta'))$, where $H$ is the entropy. The hyperparameter $\beta$ controls the strength of the entropy regularization term.
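Putting the pieces together, here is a minimal sketch of the resulting A3C objective for one rollout (assuming PyTorch; the tensor shapes and the 0.5 weight on the value loss are illustrative choices, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def a3c_loss(logits, values, returns, actions, beta=0.01):
    # logits, values: outputs of the shared-trunk policy and value heads;
    # returns: n-step returns R_t for the rollout; actions: taken actions.
    log_probs = F.log_softmax(logits, dim=1)
    probs = F.softmax(logits, dim=1)
    advantages = returns - values.squeeze(1)
    # Policy term: -log pi(a_t|s_t) * advantage, with the advantage detached
    # so the critic is trained only by the value regression below.
    log_pi_a = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(log_pi_a * advantages.detach()).mean()
    # Value regression toward the n-step return.
    value_loss = advantages.pow(2).mean()
    # Entropy bonus H(pi) discourages premature deterministic policies.
    entropy = -(probs * log_probs).sum(dim=1).mean()
    return policy_loss + 0.5 * value_loss - beta * entropy
```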
Optimization: We investigated three different optimization algorithms in our asynchronous framework: SGD with momentum, RMSProp (Tieleman & Hinton, 2012) without shared statistics, and RMSProp with shared statistics. We used the standard non-centered RMSProp update given by
$$g = \alpha g + (1 - \alpha)\Delta\theta^2 \quad \text{and} \quad \theta \leftarrow \theta - \eta \frac{\Delta\theta}{\sqrt{g + \epsilon}} \qquad (1)$$
where all operations are performed elementwise. A comparison on a subset of Atari 2600 games showed that a variant of RMSProp where statistics $g$ are shared across threads is considerably more robust than the other two methods. Full details of the methods and comparisons are included in Supplementary Section 7.
# 5. Experiments
We use four different platforms for assessing the properties of the proposed framework. We perform most of our experiments using the Arcade Learning Environment (Bellemare et al., 2012), which provides a simulator for Atari 2600 games. This is one of the most commonly used benchmark environments for RL algorithms. We use the Atari domain to compare against state of the art results (Van Hasselt et al., 2015; Wang et al., 2015; Schaul et al., 2015; Nair et al., 2015; Mnih et al., 2015), as well as to carry out a detailed stability and scalability analysis of the proposed methods. We performed further comparisons using the TORCS 3D car racing simulator (Wymann et al., 2013). We also use two additional domains to evaluate only the A3C algorithm, MuJoCo and Labyrinth. MuJoCo (Todorov, 2015) is a physics simulator for evaluating agents on continuous motor control tasks with contact dynamics. Labyrinth is a new 3D environment where the agent must learn to find rewards in randomly generated mazes from a visual input. The precise details of our experimental setup can be found in Supplementary Section 8.
Figure 1. Learning speed comparison for DQN and the new asynchronous algorithms on five Atari 2600 games. DQN was trained on a single Nvidia K40 GPU while the asynchronous methods were trained using 16 CPU cores. The plots are averaged over 5 runs. In the case of DQN the runs were for different seeds with fixed hyperparameters. For asynchronous methods we average over the best 5 models from 50 experiments with learning rates sampled from $LogUniform(10^{-4}, 10^{-2})$ and all other hyperparameters fixed.
Method            Training Time           Mean     Median
DQN               8 days on GPU           121.9%   47.5%
Gorila            4 days, 100 machines    215.2%   71.3%
D-DQN             8 days on GPU           332.9%   110.9%
Dueling D-DQN     8 days on GPU           343.8%   117.1%
Prioritized DQN   8 days on GPU           463.6%   127.6%
A3C, FF           1 day on CPU            344.1%   68.2%
A3C, FF           4 days on CPU           496.8%   116.6%
A3C, LSTM         4 days on CPU           623.0%   112.6%
# 5.1. Atari 2600 Games
We first present results on a subset of Atari 2600 games to demonstrate the training speed of the new methods. Figure 1 compares the learning speed of the DQN algorithm trained on an Nvidia K40 GPU with the asynchronous methods trained using 16 CPU cores on five Atari 2600 games. The results show that all four asynchronous methods we presented can successfully train neural network controllers on the Atari domain. The asynchronous methods tend to learn faster than DQN, with significantly faster learning on some games, while training on only 16 CPU cores. Additionally, the results suggest that n-step methods learn faster than one-step methods on some games. Overall, the policy-based advantage actor-critic method significantly outperforms all three value-based methods.
We then evaluated asynchronous advantage actor-critic on 57 Atari games. In order to compare with the state of the art in Atari game playing, we largely followed the training and evaluation protocol of (Van Hasselt et al., 2015). Specifically, we tuned hyperparameters (learning rate and amount of gradient norm clipping) using a search on six Atari games (Beamrider, Breakout, Pong, Q*bert, Seaquest and Space Invaders) and then fixed all hyperparameters for all 57 games. We trained both a feedforward agent with the same architecture as (Mnih et al., 2015; Nair et al., 2015; Van Hasselt et al., 2015) as well as a recurrent agent with an additional 256 LSTM cells after the final hidden layer. We additionally used the final network weights for evaluation to make the results more comparable to the original results
Table 1. Mean and median human-normalized scores on 57 Atari games using the human starts evaluation metric. Supplementary Table S3 shows the raw scores for all games.
from (Bellemare et al., 2012). We trained our agents for four days using 16 CPU cores, while the other agents were trained for 8 to 10 days on Nvidia K40 GPUs. Table 1 shows the average and median human-normalized scores obtained by our agents trained by asynchronous advantage actor-critic (A3C) as well as the current state-of-the art. Supplementary Table S3 shows the scores on all games. A3C significantly improves on the state-of-the-art average score over 57 games in half the training time of the other methods while using only 16 CPU cores and no GPU. Furthermore, after just one day of training, A3C matches the average human normalized score of Dueling Double DQN and almost reaches the median human normalized score of Gorila. We note that many of the improvements that are presented in Double DQN (Van Hasselt et al., 2015) and Dueling Double DQN (Wang et al., 2015) can be incorporated into the 1-step Q and n-step Q methods presented in this work with similar potential improvements.
# 5.2. TORCS Car Racing Simulator
We also compared the four asynchronous methods on the TORCS 3D car racing game (Wymann et al., 2013). TORCS not only has more realistic graphics than Atari 2600 games, but also requires the agent to learn the dynamics of the car it is controlling. At each step, an agent received only a visual input in the form of an RGB image
of the current frame as well as a reward proportional to the agent's velocity along the center of the track at the agent's current position. We used the same neural network architecture as the one used in the Atari experiments specified in Supplementary Section 8. We performed experiments using four different settings: the agent controlling a slow car with and without opponent bots, and the agent controlling a fast car with and without opponent bots. Full results can be found in Supplementary Figure S6. A3C was the best performing agent, reaching between roughly 75% and 90% of the score obtained by a human tester on all four game configurations in about 12 hours of training. A video showing the learned driving behavior of the A3C agent can be found at https://youtu.be/0xo1Ldx3L5Q.
Method         Number of threads
               1     2     4     8      16
1-step Q       1.0   3.0   6.3   13.3   24.1
1-step SARSA   1.0   2.8   5.9   13.1   22.1
n-step Q       1.0   2.7   5.9   10.7   17.2
A3C            1.0   2.1   3.7   6.9    12.5
Table 2. The average training speedup for each method and number of threads, averaged over seven Atari games. To compute the training speed-up on a single game, we measured the time required to reach a fixed reference score using each method and number of threads. The speedup from using n threads on a game was defined as the time required to reach a fixed reference score using one thread divided by the time required to reach the reference score using n threads. The table shows the speedups averaged over seven Atari games (Beamrider, Breakout, Enduro, Pong, Q*bert, Seaquest, and Space Invaders).
# 5.3. Continuous Action Control Using the MuJoCo Physics Simulator
We also examined a set of tasks where the action space is continuous. In particular, we looked at a set of rigid body physics domains with contact dynamics where the tasks include many examples of manipulation and locomotion. These tasks were simulated using the MuJoCo physics engine. We evaluated only the asynchronous advantage actor-critic algorithm since, unlike the value-based methods, it is easily extended to continuous actions. In all problems, using either the physical state or pixels as input, asynchronous advantage actor-critic found good solutions in less than 24 hours of training and typically in under a few hours. Some successful policies learned by our agent can be seen in the following video https://youtu.be/Ajjc08-iPx8. Further details about this experiment can be found in Supplementary Section 9.
# 5.4. Labyrinth
We performed an additional set of experiments with A3C on a new 3D environment called Labyrinth. The specific task we considered involved the agent learning to find rewards in randomly generated mazes. At the beginning of each episode the agent was placed in a new randomly generated maze consisting of rooms and corridors. Each maze contained two types of objects that the agent was rewarded for finding: apples and portals. Picking up an apple led to a reward of 1. Entering a portal led to a reward of 10, after which the agent was respawned in a new random location in the maze and all previously collected apples were regenerated. An episode terminated after 60 seconds, after which a new episode would begin. The aim of the agent is to collect as many points as possible in the time limit, and the optimal strategy involves first finding the portal and then repeatedly going back to it after each respawn. This task is much more challenging than the TORCS driving domain because the agent is faced with a new maze in each episode and must learn a general strategy for exploring random mazes.
We trained an A3C LSTM agent on this task using only 84 × 84 RGB images as input. The final average score of around 50 indicates that the agent learned a reasonable strategy for exploring random 3D mazes using only a visual input. A video showing one of the agents exploring previously unseen mazes is included at https://youtu.be/nMR5mjCFZCw.
# 5.5. Scalability and Data Efficiency
We analyzed the effectiveness of our proposed framework by looking at how the training time and data efficiency change with the number of parallel actor-learners. When using multiple workers in parallel and updating a shared model, one would expect that in an ideal case, for a given task and algorithm, the number of training steps to achieve a certain score would remain the same with varying numbers of workers. Therefore, the advantage would be solely due to the ability of the system to consume more data in the same amount of wall clock time and possibly improved exploration. Table 2 shows the training speed-up achieved by using increasing numbers of parallel actor-learners averaged over seven Atari games. These results show that all four methods achieve substantial speedups from using multiple worker threads, with 16 threads leading to at least an order of magnitude speedup. This confirms that our proposed framework scales well with the number of parallel workers, making efficient use of resources.
Somewhat surprisingly, the asynchronous one-step Q-learning and Sarsa algorithms exhibit superlinear speedups that cannot be explained by purely computational gains. We observe that one-step methods (one-step Q and one-step Sarsa) often require less data to achieve a particular score when using more parallel actor-learners. We believe this is due to the positive effect of multiple threads in reducing the bias of one-step methods. These effects are shown more clearly in Figure 3, which shows plots of the average score against the total number of training frames for different numbers of actor-learners and training methods on five Atari games, and Figure 4, which shows plots of the average score against wall-clock time.
Figure 2. Scatter plots of scores obtained by asynchronous advantage actor-critic on five games (Beamrider, Breakout, Pong, Q*bert, Space Invaders) for 50 different learning rates and random initializations. On each game, there is a wide range of learning rates for which all random initializations achieve good scores. This shows that A3C is quite robust to learning rates and initial random weights.
# 5.6. Robustness and Stability
Finally, we analyzed the stability and robustness of the four proposed asynchronous algorithms. For each of the four algorithms we trained models on five games (Breakout, Beamrider, Pong, Q*bert, Space Invaders) using 50 different learning rates and random initializations. Figure 2 shows scatter plots of the resulting scores for A3C, while Supplementary Figure S11 shows plots for the other three methods. There is usually a range of learning rates for each method and game combination that leads to good scores, indicating that all methods are quite robust to the choice of learning rate and random initialization. The fact that there are virtually no points with scores of 0 in regions with good learning rates indicates that the methods are stable and do not collapse or diverge once they are learning.
# 6. Conclusions and Discussion
We have presented asynchronous versions of four standard reinforcement learning algorithms and showed that they are able to train neural network controllers on a variety of domains in a stable manner. Our results show that in our proposed framework stable training of neural networks through reinforcement learning is possible with both value-based and policy-based methods, off-policy as well as on-policy methods, and in discrete as well as continuous domains. When trained on the Atari domain using 16 CPU cores, the proposed asynchronous algorithms train faster than DQN trained on an Nvidia K40 GPU, with A3C surpassing the current state-of-the-art in half the training time.
Combining other existing reinforcement learning methods or recent advances in deep reinforcement learning with our asynchronous framework presents many possibilities for immediate improvements to the methods we presented. While our n-step methods operate in the forward view (Sutton & Barto, 1998) by using corrected n-step returns directly as targets, it has been more common to use the backward view to implicitly combine different returns through eligibility traces (Watkins, 1989; Sutton & Barto, 1998; Peng & Williams, 1996). The asynchronous advantage actor-critic method could potentially be improved by using other ways of estimating the advantage function, such as the generalized advantage estimation of (Schulman et al., 2015b). All of the value-based methods we investigated could benefit from different ways of reducing the overestimation bias of Q-values (Van Hasselt et al., 2015; Bellemare et al., 2016). Yet another, more speculative, direction is to try to combine the recent work on true online temporal difference methods (van Seijen et al., 2015) with nonlinear function approximation.
In addition to these algorithmic improvements, a number of complementary improvements to the neural network architecture are possible. The dueling architecture of (Wang et al., 2015) has been shown to produce more accurate estimates of Q-values by including separate streams for the state value and advantage in the network. The spatial softmax proposed by (Levine et al., 2015) could improve both value-based and policy-based methods by making it easier for the network to represent feature coordinates.
One of our main findings is that using parallel actor-learners to update a shared model had a stabilizing effect on the learning process of the three value-based methods we considered. While this shows that stable online Q-learning is possible without experience replay, which was used for this purpose in DQN, it does not mean that experience replay is not useful. Incorporating experience replay into the asynchronous reinforcement learning framework could substantially improve the data efficiency of these methods by reusing old data. This could in turn lead to much faster training times in domains like TORCS where interacting with the environment is more expensive than updating the model for the architecture we used.
# ACKNOWLEDGMENTS
We thank Thomas Degris, Remi Munos, Marc Lanctot, Sasha Vezhnevets and Joseph Modayil for many helpful discussions, suggestions and comments on the paper. We also thank the DeepMind evaluation team for setting up the environments used to evaluate the agents in the paper.
Figure 3. Data efficiency comparison of different numbers of actor-learners for three asynchronous methods on five Atari games. The x-axis shows the total number of training epochs, where an epoch corresponds to four million frames (across all threads). The y-axis shows the average score. Each curve shows the average over the three best learning rates. Single step methods show increased data efficiency from more parallel workers. Results for Sarsa are shown in Supplementary Figure S9.
Figure 4. Training speed comparison of different numbers of actor-learners on five Atari games. The x-axis shows training time in hours while the y-axis shows the average score. Each curve shows the average over the three best learning rates. All asynchronous methods show significant speedups from using greater numbers of parallel actor-learners. Results for Sarsa are shown in Supplementary Figure S10.
# References
Bellemare, Marc G, Naddaf, Yavar, Veness, Joel, and Bowling, Michael. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.

Bellemare, Marc G., Ostrovski, Georg, Guez, Arthur, Thomas, Philip S., and Munos, Rémi. Increasing the action gap: New operators for reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2016.
Bertsekas, Dimitri P. Distributed dynamic programming. Automatic Control, IEEE Transactions on, 27(3):610â 616, 1982.
Chavez, Kevin, Ong, Hao Yi, and Hong, Augustus. Dis- tributed deep q-learning. Technical report, Stanford Uni- versity, June 2015.
Degris, Thomas, Pilarski, Patrick M, and Sutton, Richard S. Model-free reinforcement learning with continuous ac- tion in practice. In American Control Conference (ACC), 2012, pp. 2177â2182. IEEE, 2012.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A., Veness, Joel, Bellemare, Marc G., Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K., Ostrovski, Georg, Petersen, Stig, Beattie, Charles, Sadik, Amir, Antonoglou, Ioannis, King, Helen, Kumaran, Dharshan, Wierstra, Daan, Legg, Shane, and Hassabis, Demis. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 02 2015. URL http://dx.doi.org/10.1038/nature14236.
Nair, Arun, Srinivasan, Praveen, Blackwell, Sam, Alci- cek, Cagdas, Fearon, Rory, Maria, Alessandro De, Pan- neershelvam, Vedavyas, Suleyman, Mustafa, Beattie, Charles, Petersen, Stig, Legg, Shane, Mnih, Volodymyr, Kavukcuoglu, Koray, and Silver, David. Massively par- allel methods for deep reinforcement learning. In ICML Deep Learning Workshop. 2015.
Peng, Jing and Williams, Ronald J. Incremental multi-step q-learning. Machine Learning, 22(1-3):283â290, 1996.
Recht, Benjamin, Re, Christopher, Wright, Stephen, and Niu, Feng. Hogwild: A lock-free approach to paralleliz- ing stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 693â701, 2011.
Grounds, Matthew and Kudenko, Daniel. Parallel rein- forcement learning with linear function approximation. In Proceedings of the 5th, 6th and 7th European Confer- ence on Adaptive and Learning Agents and Multi-agent Systems: Adaptation and Multi-agent Learning, pp. 60â 74. Springer-Verlag, 2008.
Riedmiller, Martin. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In Machine Learning: ECML 2005, pp. 317–328. Springer Berlin Heidelberg, 2005.
Koutník, Jan, Schmidhuber, Jürgen, and Gomez, Faustino. Evolving deep unsupervised convolutional networks for vision-based reinforcement learning. In Proceedings of the 2014 conference on Genetic and evolutionary computation, pp. 541–548. ACM, 2014.
Rummery, Gavin A and Niranjan, Mahesan. On-line q- learning using connectionist systems. 1994.
Schaul, Tom, Quan, John, Antonoglou, Ioannis, and Sil- ver, David. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I, and Abbeel, Pieter. Trust region policy op- In International Conference on Machine timization. Learning (ICML), 2015a.
Li, Yuxi and Schuurmans, Dale. Mapreduce for parallel re- inforcement learning. In Recent Advances in Reinforce- ment Learning - 9th European Workshop, EWRL 2011, Athens, Greece, September 9-11, 2011, Revised Selected Papers, pp. 309â320, 2011.
Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. High-dimensional con- tinuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015b.
Lillicrap, Timothy P, Hunt, Jonathan J, Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep re- inforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Sutton, R. and Barto, A. Reinforcement Learning: an In- troduction. MIT Press, 1998.
Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5- rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis, Wierstra, Daan, and Riedmiller, Martin. Playing atari with deep reinforce- ment learning. In NIPS Deep Learning Workshop. 2013.
Todorov, E. MuJoCo: Modeling, Simulation and Visual- ization of Multi-Joint Dynamics with Contact (ed 1.0). Roboti Publishing, 2015.
Tomassini, Marco. Parallel and distributed evolutionary al- gorithms: A review. Technical report, 1999.
Tsitsiklis, John N. Asynchronous stochastic approxima- tion and q-learning. Machine Learning, 16(3):185â202, 1994.
Van Hasselt, Hado, Guez, Arthur, and Silver, David. Deep reinforcement learning with double q-learning. arXiv preprint arXiv:1509.06461, 2015.
van Seijen, H., Rupam Mahmood, A., Pilarski, P. M., Machado, M. C., and Sutton, R. S. True Online Temporal-Difference Learning. ArXiv e-prints, Decem- ber 2015.
Wang, Z., de Freitas, N., and Lanctot, M. Dueling Network Architectures for Deep Reinforcement Learning. ArXiv e-prints, November 2015.
Watkins, Christopher John Cornish Hellaby. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989.
Williams, R.J. Simple statistical gradient-following algo- rithms for connectionist reinforcement learning. Ma- chine Learning, 8(3):229â256, 1992.
Williams, Ronald J and Peng, Jing. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241â268, 1991.
Wymann, B., Espié, E., Guionneau, C., Dimitrakakis, C., Coulom, R., and Sumner, A. Torcs: The open racing car simulator, v1.3.5, 2013.
# Supplementary Material for "Asynchronous Methods for Deep Reinforcement Learning"
# 7. Optimization Details
We investigated two different optimization algorithms with our asynchronous framework: stochastic gradient descent and RMSProp. Our implementations of these algorithms do not use any locking in order to maximize throughput when using a large number of threads.
Momentum SGD: The implementation of SGD in an asynchronous setting is relatively straightforward and well studied (Recht et al., 2011). Let $\theta$ be the parameter vector that is shared across all threads and let $\Delta\theta_i$ be the accumulated gradients of the loss with respect to parameters $\theta$ computed by thread number $i$. Each thread $i$ independently applies the standard momentum SGD update $m_i = \alpha m_i + (1 - \alpha)\Delta\theta_i$ followed by $\theta \leftarrow \theta - \eta m_i$ with learning rate $\eta$, momentum $\alpha$, and without any locks. Note that in this setting, each thread maintains its own separate gradient and momentum vector.
RMSProp: While RMSProp (Tieleman & Hinton, 2012) has been widely used in the deep learning literature, it has not been extensively studied in the asynchronous optimization setting. The standard non-centered RMSProp update is given by
$$g = \alpha g + (1 - \alpha)\Delta\theta^2 \qquad \text{(S2)}$$

$$\theta \leftarrow \theta - \eta \frac{\Delta\theta}{\sqrt{g + \epsilon}} \qquad \text{(S3)}$$
where all operations are performed elementwise. In order to apply RMSProp in the asynchronous optimization setting one must decide whether the moving average of elementwise squared gradients $g$ is shared or per-thread. We experimented with two versions of the algorithm. In one version, which we refer to as RMSProp, each thread maintains its own $g$ shown in Equation S2. In the other version, which we call Shared RMSProp, the vector $g$ is shared among threads and is updated asynchronously and without locking. Sharing statistics among threads also reduces memory requirements by using one fewer copy of the parameter vector per thread.
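A minimal sketch of Shared RMSProp (NumPy; the class name and hyperparameter values are illustrative assumptions). The single statistics vector g is read and written by all worker threads without locks:

```python
import numpy as np

class SharedRMSProp:
    """Non-centered RMSProp whose statistics vector g is shared across
    threads and updated without locks, as described above."""
    def __init__(self, theta, lr=7e-4, alpha=0.99, eps=0.1):
        self.theta, self.lr, self.alpha, self.eps = theta, lr, alpha, eps
        self.g = np.zeros_like(theta)  # one copy, shared by all workers

    def step(self, dtheta):
        # g <- alpha * g + (1 - alpha) * dtheta^2   (Equation S2)
        self.g[:] = self.alpha * self.g + (1 - self.alpha) * dtheta ** 2
        # theta <- theta - eta * dtheta / sqrt(g + eps)   (Equation S3)
        self.theta[:] -= self.lr * dtheta / np.sqrt(self.g + self.eps)
```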
We compared these three asynchronous optimization algorithms in terms of their sensitivity to different learning rates and random network initializations. Figure S5 shows a comparison of the methods for two different reinforcement learning methods (Async n-step Q and Async Advantage Actor-Critic) on four different games (Breakout, Beamrider, Seaquest and Space Invaders). Each curve shows the scores for 50 experiments that correspond to 50 different random learning rates and initializations. The x-axis shows the rank of the model after sorting in descending order by final average score and the y-axis shows the final average score achieved by the corresponding model. In this representation, the algorithm that performs better would achieve higher maximum rewards on the y-axis and the algorithm that is most robust would have its slope closest to horizontal, thus maximizing the area under the curve. RMSProp with shared statistics tends to be more robust than RMSProp with per-thread statistics, which is in turn more robust than Momentum SGD.
# 8. Experimental Setup
The experiments performed on a subset of Atari games (Figures 1, 3, 4 and Table 2) as well as the TORCS experiments (Figure S6) used the following setup. Each experiment used 16 actor-learner threads running on a single machine and no GPUs. All methods performed updates after every 5 actions (t_max = 5 and I_Update = 5) and shared RMSProp was used for optimization. The three asynchronous value-based methods used a shared target network that was updated every 40000 frames. The Atari experiments used the same input preprocessing as (Mnih et al., 2015) and an action repeat of 4. The agents used the network architecture from (Mnih et al., 2013). The network used a convolutional layer with 16 filters of size 8 × 8 with stride 4, followed by a convolutional layer with 32 filters of size 4 × 4 with stride 2, followed by a fully connected layer with 256 hidden units. All three hidden layers were followed by a rectifier nonlinearity. The value-based methods had a single linear output unit for each action representing the action-value. The model used by actor-critic agents had two sets of outputs: a softmax output with one entry per action representing the probability of selecting the action, and a single linear output representing the value function. All experiments used a discount of γ = 0.99 and an RMSProp decay factor of α = 0.99.
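The following is a minimal PyTorch sketch of the architecture just described with the two actor-critic heads; the class name and the assumption of 84 × 84 inputs with 4 stacked frames are ours, not stated in this paragraph.

```python
import torch
import torch.nn as nn

class AtariActorCritic(nn.Module):
    def __init__(self, num_actions, in_channels=4):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 16, kernel_size=8, stride=4)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=4, stride=2)
        self.fc = nn.Linear(32 * 9 * 9, 256)       # 84x84 inputs give 9x9 feature maps
        self.policy = nn.Linear(256, num_actions)  # softmax head (returned as log-probs)
        self.value = nn.Linear(256, 1)             # single linear output for V(s)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        x = torch.relu(self.fc(x.flatten(start_dim=1)))
        return torch.log_softmax(self.policy(x), dim=-1), self.value(x)
```

The value-based variants would instead end in a single linear layer with one output per action.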
The value based methods sampled the exploration rate ε from a distribution taking three values ε₁, ε₂, ε₃ with probabilities 0.4, 0.3, 0.3. The values of ε₁, ε₂, ε₃ were annealed from 1 to 0.1, 0.01, 0.5 respectively over the first four million frames. Advantage actor-critic used entropy regularization with a weight β = 0.01 for all Atari and TORCS experiments. We performed a set of 50 experiments for five Atari games and every TORCS level, each using a different random initialization and initial learning rate. The initial learning rate was sampled from a LogUniform(10⁻⁴, 10⁻²) distribution and annealed to 0 over the course of training. Note that in comparisons to prior work (Tables 1 and S3) we followed standard evaluation protocol and used fixed hyperparameters.
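A small sketch of this sampling scheme (the helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_final_epsilon():
    # final epsilon drawn from {0.1, 0.01, 0.5} with probabilities 0.4, 0.3, 0.3
    return rng.choice([0.1, 0.01, 0.5], p=[0.4, 0.3, 0.3])

def sample_learning_rate():
    # LogUniform(10^-4, 10^-2)
    return float(np.exp(rng.uniform(np.log(1e-4), np.log(1e-2))))

def anneal(start, end, step, anneal_steps=4_000_000):
    # linear annealing used for both epsilon and the learning rate
    frac = min(step / anneal_steps, 1.0)
    return start + frac * (end - start)
```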
# 9. Continuous Action Control Using the MuJoCo Physics Simulator
To apply the asynchronous advantage actor-critic algorithm to the MuJoCo tasks the necessary setup is nearly identical to that used in the discrete action domains, so here we enumerate only the differences required for the continuous action domains. The essential elements for many of the tasks (i.e. the physics models and task objectives) are nearly identical to the tasks examined in (Lillicrap et al., 2015). However, the rewards and thus performance are not comparable for most of the tasks due to changes made by the developers of MuJoCo which altered the contact model.
For all the domains we attempted to learn the task using the physical state as input. The physical state consisted of the joint positions and velocities as well as the target position if the task required a target. In addition, for three of the tasks (pendulum, pointmass2D, and gripper) we also examined training directly from RGB pixel inputs. In the low dimensional physical state case, the inputs are mapped to a hidden state using one hidden layer with 200 ReLU units. In the cases where we used pixels, the input was passed through two layers of spatial convolutions without any non-linearity or pooling. In either case, the output of the encoder layers was fed to a single layer of 128 LSTM cells. The most important difference in the architecture is in the output layer of the policy network. Unlike the discrete action domain where the action output is a softmax, here the two outputs of the policy network are a real-valued mean vector µ and a scalar variance σ², which parameterize a multidimensional normal distribution with a spherical covariance. To act, the input is passed through the model to the output layer where we sample from the normal distribution determined by µ and σ². In practice, µ is modeled by a linear layer and σ² by a SoftPlus operation, log(1 + exp(x)), as the activation computed as a function of the output of a linear layer. In our experiments with continuous control problems the networks for the policy and the value function do not share any parameters, though this detail is unlikely to be crucial. Finally, since the episodes were typically at most several hundred time steps long, we did not use any bootstrapping in the policy or value function updates and batched each episode into a single update.
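A minimal sketch of this policy head, assuming the rest of the encoder (the 200-unit layer, or the convolutions plus the 128-cell LSTM) produces a hidden vector h:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianPolicyHead(nn.Module):
    def __init__(self, hidden_size, action_dim):
        super().__init__()
        self.mu = nn.Linear(hidden_size, action_dim)   # mean vector
        self.sigma_sq = nn.Linear(hidden_size, 1)      # scalar variance

    def forward(self, h):
        mu = self.mu(h)
        sigma_sq = F.softplus(self.sigma_sq(h))        # log(1 + exp(x)) keeps it positive
        return mu, sigma_sq

    def sample(self, h):
        mu, sigma_sq = self.forward(h)
        return mu + torch.sqrt(sigma_sq) * torch.randn_like(mu)  # spherical Gaussian
```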
As in the discrete action case, we included an entropy cost which encouraged exploration.
In the continuous case we used a cost on the differential entropy of the normal distribution defined by the output of the actor network, −½(log(2πσ²) + 1), with a constant multiplier of 10⁻⁴ for this cost across all of the tasks examined. The asynchronous advantage actor-critic algorithm finds solutions for all the domains. Figure S8 shows learning curves against wall-clock time, and demonstrates that most of the domains from states can be solved within a few hours. All of the experiments, including those done from pixel based observations, were run on CPU. Even in the case of solving the domains directly from pixel inputs we found that it was possible to reliably discover solutions within 24 hours. Figure S7 shows scatter plots of the top scores against the sampled learning rates. In most of the domains there is a large range of learning rates that consistently achieve good performance on the task.
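For reference, a small sketch of this entropy term as it might enter the objective:

```python
import math
import torch

def entropy_term(sigma_sq, weight=1e-4):
    # Differential entropy of N(mu, sigma^2 I) per dimension: 0.5 * (log(2*pi*sigma^2) + 1)
    entropy = 0.5 * (torch.log(2 * math.pi * sigma_sq) + 1.0)
    return weight * entropy  # added to the actor objective to encourage exploration
```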
# Algorithm S2 Asynchronous n-step Q-learning - pseudocode for each actor-learner thread.
// Assume global shared parameter vector θ.
// Assume global shared target parameter vector θ⁻.
// Assume global shared counter T = 0.
Initialize thread step counter t ← 1
Initialize target network parameters θ⁻ ← θ
Initialize thread-specific parameters θ' = θ
Initialize network gradients dθ ← 0
repeat
    Clear gradients dθ ← 0
    Synchronize thread-specific parameters θ' = θ
    t_start = t
    Get state s_t
    repeat
        Take action a_t according to the ε-greedy policy based on Q(s_t, a; θ')
        Receive reward r_t and new state s_{t+1}
        t ← t + 1
        T ← T + 1
    until terminal s_t or t − t_start == t_max
    R = 0 for terminal s_t
    R = max_a Q(s_t, a; θ⁻) for non-terminal s_t
    for i ∈ {t − 1, ..., t_start} do
        R ← r_i + γR
        Accumulate gradients wrt θ': dθ ← dθ + ∂(R − Q(s_i, a_i; θ'))² / ∂θ'
    end for
    Perform asynchronous update of θ using dθ.
    if T mod I_target == 0 then
        θ⁻ ← θ
    end if
until T > T_max
# Algorithm S3 Asynchronous advantage actor-critic - pseudocode for each actor-learner thread.
// Assume global shared parameter vectors θ and θ_v and global shared counter T = 0
// Assume thread-specific parameter vectors θ' and θ'_v
Initialize thread step counter t ← 1
repeat
    Reset gradients: dθ ← 0 and dθ_v ← 0.
    Synchronize thread-specific parameters θ' = θ and θ'_v = θ_v
    t_start = t
    Get state s_t
    repeat
        Perform a_t according to policy π(a_t|s_t; θ')
        Receive reward r_t and new state s_{t+1}
        t ← t + 1
        T ← T + 1
    until terminal s_t or t − t_start == t_max
    R = 0 for terminal s_t
    R = V(s_t; θ'_v) for non-terminal s_t    // Bootstrap from last state
    for i ∈ {t − 1, ..., t_start} do
        R ← r_i + γR
        Accumulate gradients wrt θ': dθ ← dθ + ∇_θ' log π(a_i|s_i; θ')(R − V(s_i; θ'_v))
        Accumulate gradients wrt θ'_v: dθ_v ← dθ_v + ∂(R − V(s_i; θ'_v))² / ∂θ'_v
    end for
    Perform asynchronous update of θ using dθ and of θ_v using dθ_v.
until T > T_max
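Read side by side with the pseudocode above, the following is a compact, hypothetical Python transcription for one actor-learner thread; `env` (with reset()/step()), the actor-critic modules (returning log-probabilities and a value) and `optimizer` are assumed interfaces, not part of the paper.

```python
import torch

def a3c_worker(env, local_model, shared_model, optimizer,
               t_max=5, gamma=0.99, total_steps=1_000_000):
    state, done, T = env.reset(), False, 0
    while T < total_steps:
        local_model.load_state_dict(shared_model.state_dict())  # synchronize theta' = theta
        log_probs, values, rewards = [], [], []
        for _ in range(t_max):
            logp, value = local_model(state.unsqueeze(0))
            action = torch.multinomial(logp.exp(), 1).item()
            state, reward, done = env.step(action)
            log_probs.append(logp[0, action])
            values.append(value[0, 0])
            rewards.append(reward)
            T += 1
            if done:
                state = env.reset()
                break
        # Bootstrap from the last state unless the episode terminated.
        R = 0.0 if done else local_model(state.unsqueeze(0))[1].item()
        loss = 0.0
        for logp, v, r in zip(reversed(log_probs), reversed(values), reversed(rewards)):
            R = r + gamma * R
            loss = loss - logp * (R - v.detach()) + (R - v) ** 2  # policy term + value term
        optimizer.zero_grad()
        loss.backward()
        # Apply the local gradients to the shared parameters without locking.
        for p_local, p_shared in zip(local_model.parameters(), shared_model.parameters()):
            p_shared.grad = p_local.grad
        optimizer.step()
```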
Figure S5. Comparison of three different optimization methods (Momentum SGD, RMSProp, Shared RMSProp) tested using two different algorithms (Async n-step Q and Async Advantage Actor-Critic) on four different Atari games (Breakout, Beamrider, Seaquest and Space Invaders). Each curve shows the final scores for 50 experiments sorted in descending order that covers a search over 50 random initializations and learning rates. The top row shows results using the Async n-step Q algorithm and the bottom row shows results with Async Advantage Actor-Critic. Each individual graph shows results for one of the four games and three different optimization methods. Shared RMSProp tends to be more robust to different learning rates and random initializations than Momentum SGD and RMSProp without sharing.
Figure S6. Comparison of algorithms on the TORCS car racing simulator. Four different configurations of car speed and opponent presence or absence are shown. In each plot, all four algorithms (one-step Q, one-step Sarsa, n-step Q and Advantage Actor-Critic) are compared on score vs training time in wall clock hours. Multi-step algorithms achieve better policies much faster than one-step algorithms on all four levels. The curves show averages over the 5 best runs from 50 experiments with learning rates sampled from LogUniform(10⁻⁴, 10⁻²) and all other hyperparameters fixed.
Figure S7. Performance for the MuJoCo continuous action domains. Scatter plot of the best score obtained against learning rates sampled from LogUniform(10⁻⁵, 10⁻¹). For nearly all of the tasks there is a wide range of learning rates that lead to good performance on the task.
Figure S8. Score per episode vs wall-clock time plots for the MuJoCo domains. Each plot shows error bars for the top 5 experiments.
Figure S9. Data efficiency comparison of different numbers of actor-learners for one-step Sarsa on five Atari games. The x-axis shows the total number of training epochs where an epoch corresponds to four million frames (across all threads). The y-axis shows the average score. Each curve shows the average of the three best performing agents from a search over 50 random learning rates. Sarsa shows increased data efficiency with increased numbers of parallel workers.
Figure S10. Training speed comparison of different numbers of actor-learners for one-step Sarsa on five Atari games. The x-axis shows training time in hours while the y-axis shows the average score. Each curve shows the average of the three best performing agents from a search over 50 random learning rates. Sarsa shows significant speedups from using greater numbers of parallel actor-learners.
Figure S11. Scatter plots of scores obtained by one-step Q, one-step Sarsa, and n-step Q on five games (Beamrider, Breakout, Pong, Q*bert, Space Invaders) for 50 different learning rates and random initializations. All algorithms exhibit some level of robustness to the choice of learning rate.
Table S3 covers the following 57 games: Alien, Amidar, Assault, Asterix, Asteroids, Atlantis, Bank Heist, Battle Zone, Beam Rider, Berzerk, Bowling, Boxing, Breakout, Centipede, Chopper Command, Crazy Climber, Defender, Demon Attack, Double Dunk, Enduro, Fishing Derby, Freeway, Frostbite, Gopher, Gravitar, H.E.R.O., Ice Hockey, James Bond, Kangaroo, Krull, Kung-Fu Master, Montezuma's Revenge, Ms. Pacman, Name This Game, Phoenix, Pitfall, Pong, Private Eye, Q*Bert, River Raid, Road Runner, Robotank, Seaquest, Skiing, Solaris, Space Invaders, Star Gunner, Surround, Tennis, Time Pilot, Tutankham, Up and Down, Venture, Video Pinball, Wizard of Wor, Yars Revenge, Zaxxon.

[The per-game raw score columns for DQN, Gorila, Double DQN, Dueling, Prioritized, A3C FF (1 day), A3C FF and A3C LSTM were flattened beyond reliable realignment in the source extraction and are omitted here; see Table S3 of the original paper for the full numbers.]
Table S3. Raw scores for the human start condition (30 minutes emulator time). DQN scores taken from (Nair et al., 2015). Double DQN scores taken from (Van Hasselt et al., 2015), Dueling scores from (Wang et al., 2015) and Prioritized scores taken from (Schaul et al., 2015) | {
"id": "1509.02971"
} |
1602.01137 | A Dual Embedding Space Model for Document Ranking | A fundamental goal of search engines is to identify, given a query, documents
that have relevant text. This is intrinsically difficult because the query and
the document may use different vocabulary, or the document may contain query
words without being relevant. We investigate neural word embeddings as a source
of evidence in document ranking. We train a word2vec embedding model on a large
unlabelled query corpus, but in contrast to how the model is commonly used, we
retain both the input and the output projections, allowing us to leverage both
the embedding spaces to derive richer distributional relationships. During
ranking we map the query words into the input space and the document words into
the output space, and compute a query-document relevance score by aggregating
the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures
evidence on whether a document is about a query term in addition to what is
modelled by traditional term-frequency based approaches. Our experiments show
that the DESM can re-rank top documents returned by a commercial Web search
engine, like Bing, better than a term-matching based signal like TF-IDF.
However, when ranking a larger set of candidate documents, we find the
embeddings-based approach is prone to false positives, retrieving documents
that are only loosely related to the query. We demonstrate that this problem
can be solved effectively by ranking based on a linear mixture of the DESM and
the word counting features. | http://arxiv.org/pdf/1602.01137 | Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana | cs.IR | This paper is an extended evaluation and analysis of the model
proposed in a poster to appear in WWW'16, April 11 - 15, 2016, Montreal,
Canada | null | cs.IR | 20160202 | 20160202 | arXiv:1602.01137v1 [cs.IR] 2 Feb 2016
# A Dual Embedding Space Model for Document Ranking
Bhaskar Mitra Microsoft Cambridge, UK bmitra@microsoft.com
Eric Nalisnick University of California Irvine, USA enalisni@uci.edu
Nick Craswell, Rich Caruana Microsoft Redmond, USA nickcr, rcaruana@microsoft.com
ABSTRACT

A fundamental goal of search engines is to identify, given a query, documents that have relevant text. This is intrinsically difficult because the query and the document may use different vocabulary, or the document may contain query words without being relevant. We investigate neural word embeddings as a source of evidence in document ranking. We train a word2vec embedding model on a large unlabelled query corpus, but in contrast to how the model is commonly used, we retain both the input and the output projections, allowing us to leverage both the embedding spaces to derive richer distributional relationships. During ranking we map the query words into the input space and the document words into the output space, and compute a query-document relevance score by aggregating the cosine similarities across all the query-document word pairs.
We postulate that the proposed Dual Embedding Space Model (DESM) captures evidence on whether a document is about a query term in addition to what is modelled by traditional term-frequency based approaches. Our experiments show that the DESM can re-rank top documents returned by a commercial Web search engine, like Bing, better than a term-matching based signal like TF-IDF. However, when ranking a larger set of candidate documents, we find the embeddings-based approach is prone to false positives, retrieving documents that are only loosely related to the query. We demonstrate that this problem can be solved effectively by ranking based on a linear mixture of the DESM and the word counting features.

Categories and Subject Descriptors: H.3 [Information Storage and Retrieval]: H.3.3 Information Search and Retrieval

Keywords: Document ranking; Word embeddings; Word2vec
Figure 1: A two dimensional PCA projection of the 200-dimensional embeddings. Relevant documents are yellow, irrelevant documents are grey, and the query is blue. To visualize the results of multiple queries at once, before dimensionality reduction we centre query vectors at the origin and represent documents as the difference between the document vector and its query vector. (a) uses IN word vector centroids to represent both the query and the documents. (b) uses IN for the queries and OUT for the documents, and seems to have a higher density of relevant documents near the query.
# 1. INTRODUCTION
Identifying relevant documents for a given query is a core challenge for Web search. For large-scale search engines, it is possible to identify a very small set of pages that can answer a good proportion of queries [2]. For such popular pages, clicks and hyperlinks may provide sufficient ranking evidence and it may not be important to match the query against the body text. However, in many Web search scenarios such query-content matching is crucial. If new content is available, the new and updated documents may not have click evidence or may have evidence that is out of date. For new or tail queries, there may be no memorized connections between the queries and the documents. Furthermore, many search engines and apps have a relatively smaller number of users, which limits their ability to answer queries based on memorized clicks.
This paper is an extended evaluation and analysis of the model proposed by Nalisnick et al. [32] to appear in WWW'16, April 11 - 15, 2016, Montreal, Canada. Copyright 2016 by the author(s).
There may even be insufficient behaviour data to learn a click-based embedding [18] or a translation model [10, 19]. In these cases it is crucial to model the relationship between the query and the document content, without click data.
When considering the relevance of document body text to a query, the traditional approach is to count repetitions of query terms in the document. Different transformation and weighting schemes for those counts lead to a variety of possible TF-IDF ranking features. One theoretical basis for such features is the probabilistic model of information retrieval, which has yielded the very successful TF-IDF formulation BM25 [35]. As noted by Robertson [34], the probabilistic approach can be restricted to consider only the original query terms or it can automatically identify additional terms that are correlated with relevance. However, the basic commonly-used form of BM25 considers query terms only, under the assumption that non-query terms are less useful for document ranking.
Table 1: The nearest neighbours for the words "yale", "seahawks" and "eminem" according to the cosine similarity based on the IN-IN, OUT-OUT and IN-OUT vector comparisons for the different words in the vocabulary. These examples show that IN-IN and OUT-OUT cosine similarities are high for words that are similar by function or type (typical), and the IN-OUT cosine similarities are high between words that often co-occur in the same query or document (topical). The word2vec model used here was trained on a query corpus with a vocabulary of 2,748,230 words.
| Query word | IN-IN | OUT-OUT | IN-OUT |
|---|---|---|---|
| yale | yale, harvard, nyu, cornell, tulane, tufts | yale, uconn, harvard, tulane, nyu, tufts | yale, faculty, alumni, orientation, haven, graduate |
| seahawks | seahawks, 49ers, broncos, packers, nfl, steelers | seahawks, broncos, 49ers, nfl, packers, steelers | seahawks, highlights, jerseys, tshirts, seattle, hats |
| eminem | eminem, rihanna, ludacris, kanye, beyonce, 2pac | eminem, rihanna, dre, kanye, beyonce, tupac | eminem, rap, featuring, tracklist, diss, performs |
In the probabilistic approach, the 2-Poisson model forms the basis for counting term frequency [6, 15, 36]. The stated goal is to distinguish between a document that is about a term and a document that merely mentions that term. These two types of documents have term frequencies from two different Poisson distributions, such that documents about the term tend to have higher term frequency than those that merely mention it. This explanation for the relationship between term frequency and aboutness is the basis for the TF function in BM25 [36].
The new approach in this paper uses word occurrences as evidence of aboutness, as in the probabilistic approach. However, instead of considering term repetition as evidence of aboutness it considers the relationship between the query terms and all the terms in the document. For example, given a query term "yale", in addition to considering the number of times Yale is mentioned in the document, we look at whether related terms occur in the document, such as "faculty" and "alumni". Similarly, in a document about the Seahawks sports team one may expect to see the terms "highlights" and "jerseys". The occurrence of these related terms in sufficient numbers is a way to distinguish between documents that merely mention Yale or Seahawks and the documents that are about the university or about the sports team.
With this motivation, in Section 2 we describe how the input and the output embedding spaces learned by a word2vec model may be used jointly, and why the combination is particularly attractive for modelling the aboutness aspect of document ranking. Table 1 gives some anecdotal evidence of why this is true. If we look in the neighbourhood of the IN vector of the word "yale" then the other IN vectors that are close correspond to words that are functionally similar or of the same type, e.g., "harvard" and "nyu". A similar pattern emerges if we look at the OUT vectors in the neighbourhood of the OUT vector of "yale". On the other hand, if we look at the OUT vectors that are closest to the IN vector of "yale" we find words like "faculty" and "alumni". We use this property of the IN-OUT embeddings to propose a novel Dual Embedding Space Model (DESM) for document ranking. Figure 1 further illustrates how in this Dual Embedding Space Model, using the IN embeddings for the query words and the OUT embeddings for the document words, we get a much more useful similarity definition between the query and the relevant document centroids.

The main contributions of this paper are:

• A novel Dual Embedding Space Model, with one embedding for query words and a separate embedding for document words, learned jointly based on an unlabelled text corpus.

• We propose a document ranking feature based on comparing all the query words with all the document words, which is equivalent to comparing each query word to a centroid of the document word embeddings.

• We analyse the positive aspects of the new feature, preferring documents that contain many words related to the query words, but also note the potential of the feature to have false positive matches.

• We empirically compare the new approach to a single embedding and the traditional word counting features. The new approach works well on its own in a telescoping setting, re-ranking the top documents returned by a commercial Web search engine, and in combination with word counting for a more general document retrieval task.

# 2. DISTRIBUTIONAL SEMANTICS FOR IR

In this section we first introduce the Continuous Bag-of-Words (CBOW) model made popular by the software Word2Vec [28, 29]. Then, inspired by our findings that distinctly different topic-based relationships can be found by using both the input and the output embeddings jointly (the latter of which is usually discarded after training), we propose the Dual Embedding Space Model (DESM) for document ranking.

# 2.1 Continuous Bag-of-Words

While many word embedding models have been proposed recently, the Continuous Bag-of-Words (CBOW) and the Skip-Gram (SG) architectures proposed by Mikolov et al. [29] are arguably the most popular (perhaps due to the popularity of the software Word2Vec¹, which implements both). Although here we will concentrate exclusively on the CBOW model, our proposed IR ranking methodology is just as applicable to vectors produced by SG, as both models produce qualitatively and quantitatively similar embeddings. The CBOW model learns a word's embedding via maximizing the log conditional probability of the word given the context words occurring within a fixed-sized window around that word. That is, the words in the context window serve as input, and from them, the model attempts to predict the center (missing) word. For a formal definition, let c_k ∈ R^d be a d-dimensional, real-valued vector representing the kth context word c_k appearing in a (K−1)-sized window around an instance of word w_i, which is represented by a vector w_i ∈ R^d. The model "predicts" word w_i by adapting its representation vector such that it has a large inner product with the mean of the context word vectors.
¹https://code.google.com/p/word2vec/
Figure 2: The architecture of a word2vec (CBOW) model considering a single context word. W_IN and W_OUT are the two weight matrices learnt during training and correspond to the IN and the OUT word embedding spaces of the model.
Training CBOW requires minimization of the following objective
L_CBOW = −Σ_{i=1}^{|D|} log p(w_i | C_K) = −Σ_{i=1}^{|D|} log [ exp(c̄_K^T w_i) / Σ_{v=1}^{|V|} exp(c̄_K^T w_v) ]    (1)
where
c̄_K = (1 / (K − 1)) Σ_{i−K ≤ k ≤ i+K, k ≠ i} c_k    (2)
and D represents the training corpus. Notice that the probability is normalized by summing over all the vocabulary, which is quite costly when training on web-scale data. To make CBOW scalable, Mikolov et al. [29] proposed the following slightly altered negative sampling objective:
−log p(w_i | C_K) ≈ −log σ(c̄_K^T w_i) − Σ_{n=1}^{N} log σ(−c̄_K^T ŵ_n)    (3)
where σ is the sigmoid function and N is the number of negative sample words drawn either from the uniform or empirical distribution over the vocabulary. All our experiments were performed with the negative sampling objective.
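A minimal NumPy sketch of this objective for a single training example follows; the function names and the convention that words are row indices into the IN and OUT matrices are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_negative_sampling_loss(W_in, W_out, context_ids, target_id, negative_ids):
    c_bar = W_in[context_ids].mean(axis=0)                      # context centroid (Equation 2)
    pos = -np.log(sigmoid(c_bar @ W_out[target_id]))            # observed centre word
    neg = -np.log(sigmoid(-W_out[negative_ids] @ c_bar)).sum()  # N negative samples
    return pos + neg                                            # Equation 3
```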
A crucial detail often overlooked when using Word2Vec is that there are two different sets of vectors (represented above by c and w respectively and henceforth referred to as the IN and OUT embedding spaces), which correspond to the W_IN and W_OUT weight matrices in Figure 2. By default, Word2Vec discards W_OUT at the end of training and outputs only W_IN. Subsequent tasks determine word-to-word semantic relatedness by computing the cosine similarity:
sim(c_i, c_j) = cos(c_i, c_j) = (c_i^T c_j) / (‖c_i‖ ‖c_j‖)    (4)
# 2.2 Dual Embedding Space Model
A key challenge for term-matching based retrieval is to distinguish whether a document merely references a term or is about that entity. See Figure 3 for a concrete example of two passages that contain the term "Albuquerque" an equal number of times although only one of the passages is about that entity. The presence of words like "population" and "metropolitan" indicates that the left passage is about Albuquerque, whereas the passage on the right just mentions it. However, these passages would be indistinguishable under term counting. The semantic similarity of non-matched terms (i.e. the words a TF feature would overlook) is crucial for inferring a document's topic of focus, its aboutness.
Due to its ability to capture word co-occurrence (i.e. perform missing word prediction), CBOW is a natural fit for modelling the aboutness of a document. The learnt embedding spaces contain useful knowledge about the distributional properties of words, allowing, in the case of Figure 3, an IR system to recognize the city-related terms in the left document. With this motivation, we define a simple yet, as we will demonstrate, effective ranking function we call the Dual Embedding Space Model:
DESM(Q, D) = (1 / |Q|) Σ_{q_i ∈ Q} (q_i^T D̄) / (‖q_i‖ ‖D̄‖)    (5)
where
D̄ = (1 / |D|) Σ_{d_j ∈ D} d_j / ‖d_j‖    (6)

Here D̄ is the centroid of all the normalized vectors for the words in the document, serving as a single embedding for the whole document. In this formulation of the DESM, the document embeddings can be pre-computed, and at the time of ranking, we only need to sum the score contributions across the query terms. We expect that the ability to pre-compute a single document embedding is a very useful property when considering runtime efficiency.
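A short NumPy sketch of this scoring scheme, again treating words as row indices into the embedding matrices (an assumption of ours); for the IN-OUT variant introduced below, W_query is the IN matrix and W_doc the OUT matrix:

```python
import numpy as np

def doc_centroid(W_doc, doc_ids):
    vecs = W_doc[doc_ids]
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # Equation 6
    return vecs.mean(axis=0)  # can be precomputed and stored per document

def desm_score(W_query, query_ids, d_bar):
    q = W_query[query_ids]
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    sims = q @ (d_bar / np.linalg.norm(d_bar))  # cosine of each query word vs the centroid
    return float(sims.mean())                   # Equation 5
```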
IN-IN vs. IN-OUT. Hill et al. [16] noted, "Not all neural embeddings are born equal". As previously mentioned, the CBOW (and SG) model contains two separate embedding spaces (IN and OUT) whose interactions capture additional distributional semantics of words that are not observable by considering any of the two embedding spaces in isolation. Table 1 illustrates clearly how the CBOW model "pushes" the IN vectors of words closer to the OUT vectors of other words that they commonly co-occur with. In doing so, words that appear in similar contexts get pushed closer to each other within the IN embedding space (and also within the OUT embedding space). Therefore the IN-IN (or the OUT-OUT) cosine similarities are higher for words that are typically (by type or by function) similar, whereas the IN-OUT cosine similarities are higher for words that co-occur often in the training corpus (topically similar). This gives us at least two variants of the DESM, corresponding to retrieval in the IN-OUT space or the IN-IN space².
DESM_IN-OUT(Q, D) = (1 / |Q|) Σ_{q_i ∈ Q} (q_IN,i^T D̄_OUT) / (‖q_IN,i‖ ‖D̄_OUT‖)    (7)

DESM_IN-IN(Q, D) = (1 / |Q|) Σ_{q_i ∈ Q} (q_IN,i^T D̄_IN) / (‖q_IN,i‖ ‖D̄_IN‖)    (8)
²It is also possible to define DESM_OUT-OUT and DESM_OUT-IN, but based on limited experimentation we expect them to behave similarly to DESM_IN-IN and DESM_IN-OUT, respectively.
Albuquerque is the most populous city in the U.S. state of New Mexico. The high-altitude city serves as the county seat of Bernalillo County, and it is situated in the central part of the state, straddling the Rio Grande. The city population is 557,169 as of the July 1, 2014, population estimate from the United States Census Bureau, and ranks as the 32nd-largest city in the U.S. The Metropolitan Statistical Area (or MSA) has a population of 902,797 according to the United States Census Bureau's most recently available estimate for July 1, 2013. (a) Allen suggested that they could program a BASIC interpreter for the device; after a call from Gates claiming to have a working interpreter, MITS requested a demonstration. Since they didn't actually have one, Allen worked on a simulator for the Altair while Gates developed the interpreter. Although they developed the interpreter on a simulator and not the actual device, the interpreter worked flawlessly when they demonstrated the interpreter to MITS in Albuquerque, New Mexico in March 1975; MITS agreed to distribute it, marketing it as Altair BASIC. (b)
Figure 3: Two different passages from Wikipedia that mention "Albuquerque" (highlighted in orange) exactly once. Highlighted in green are all the words that have an IN-OUT similarity score with the word "Albuquerque" above a fixed threshold (we choose -0.03 for this visualization) and can be considered as providing supporting evidence that (a) is about Albuquerque, whereas (b) happens to only mention the city.
In Section 4, we show that the DESM_IN-OUT is a better indication of aboutness than BM25, because of its knowledge of the word distributional properties, and than DESM_IN-IN, since topical similarity is a better indicator of aboutness than typical similarity.
Modelling document aboutness. We perform a simple word perturbation analysis to illustrate how the DESM can collect evidence on document aboutness from both matched and non-matched terms in the document. In Table 2, we consider five small passages of text. The first three passages are about Cambridge, Oxford and giraffes respectively. The next two passages are generated by replacing the word "giraffe" by the word "Cambridge" in the passage about giraffes, and vice versa.
We compute the DESM_IN-OUT and the DESM_IN-IN scores along with the term frequencies for each of these passages for the query term "cambridge". As expected, all three models score the passage about Cambridge highly. However, unlike the term frequency feature, the DESMs seem robust towards keyword stuffing³, at least in this specific example where we replace the word "giraffe" with "cambridge" in the passage about giraffes, but the DESMs still score the passage relatively low. This is exactly the kind of evidence that we expect the DESM to capture that may not be possible by simple term counting.
On the other hand, both the DESMs score the passage about Oxford very highly. This is expected because both these passages contain many words that are likely to co-occur with the word "cambridge" in the training corpus. This implies that the DESM features are very susceptible to false positive matches and can only be used either in conjunction with other document ranking features, such as TF-IDF, or for re-ranking a smaller set of candidate documents already deemed at least somewhat relevant. This is similar to the telescoping evaluation setup described by Matveeva et al. [27], where multiple nested rankers are used to achieve better retrieval performance over a single ranker. At each stage of telescoping, a ranker is used to reduce the set of candidate documents that is passed on to the next. Improved performance is possible because the ranker that sees only top-scoring documents can specialize in handling such documents, for example by using different feature weights. In our experiments, we will see that the DESM is a poor standalone ranking signal on a larger set of documents, but that it performs significantly better against the BM25 and the LSA baselines once we reach a small high-quality candidate document set. This evaluation strategy of focusing on ranking at top positions is in fact quite common and has been used by many recent studies (e.g., [10, 18]).

Dot product vs. cosine similarity. In the DESM formulation (Equation 5) we compute the cosine similarity between every query word and the normalized document centroid. The use of cosine similarity (as opposed to, say, dot-product) is motivated by several factors. Firstly, much of the existing literature [28, 29] on CBOW and SG uses cosine similarity and normalized unit vectors (for performing vector algebra for word analogies). As the cosine similarity has been shown to perform well in practice in these embedding spaces we adopt the same strategy here.

A secondary justification can be drawn based on the observations made by Wilson and Schakel [48] that the length of the non-normalized word vectors has a direct relation to the frequency of the word. In information retrieval (IR), it is well known that frequently occurring words are ineffective features for distinguishing relevant documents from irrelevant ones. The inverse document frequency weighting is often used in IR to capture this effect. By normalizing the word vectors in the document before computing the document centroids, we are counteracting the extra influence frequent words would have on the sum.
Training corpus. Our CBOW model is trained on a query corpus⁴ consisting of 618,644,170 queries and a vocabulary size of 2,748,230 words. The queries are sampled from Bing's large scale search logs from the period of August 19, 2014 to August 25, 2014. We repeat all our experiments using another CBOW model trained on a corpus of document body text with 341,787,174 distinct sentences sampled from the Bing search index and a corresponding vocabulary size of 5,108,278 words. Empirical results on the performance of both the models are presented in Section 4.
Out-of-vocabulary (OOV) words. One of the challenges of the embedding models is that they can only be applied to a fixed size vocabulary. It is possible to explore different strategies to deal with out-of-vocabulary (OOV) words in Equation 5⁵. But we leave this for future investigation and instead, in this paper, all the OOV words are ignored for computing the DESM score, but not for computing the TF-IDF feature, a potential advantage for the latter.
³https://en.wikipedia.org/wiki/Keyword_stuffing
⁴We provide the IN and OUT word embeddings trained using word2vec on the Bing query corpus at http://research.microsoft.com/projects/DESM.
⁵In machine translation there are examples of interesting strategies to handle out-of-vocabulary words (e.g., [25]).
Table 2: A word perturbation analysis to show how the DESM collects evidence on the aboutness of a document. The DESM models are more robust to irrelevant terms. For example, when the word "giraffe" is replaced by the word "cambridge", the passage on giraffes is still scored low by the DESM for the query "cambridge" because it finds low supporting evidence from the other words in the passage. However, the DESM mistakenly considers the passage about Oxford relevant for the query "cambridge" because it detects a high number of similar words in the passage that frequently co-occur with the word "Cambridge".
Query: "cambridge" Passage type Passage about Cambridge Passage about Oxford Passage about giraffes Passage about giraffes, but the word "giraffe" is replaced by the word "Cam- bridge" Passage about Cambridge, but the word "Cam- bridge" is re- placed by the word "giraffe" Passage text The city of Cambridge is a university city and the county town of Cambridgeshire, England. It lies in East Anglia, on the River Cam, about 50 miles (80 km) north of London. According to the United Kingdom Census 2011, its population was 123,867 (including 24,488 students). This makes Cambridge the second largest city in Cambridgeshire after Peterborough, and the 54th largest in the United Kingdom. There is archaeological evidence of settlement in the area during the Bronze Age and Roman times; under Viking rule Cambridge became an important trading centre. The ï¬rst town charters were granted in the 12th century, although city status was not conferred until 1951. Oxford is a city in the South East region of England and the county town of Oxfordshire. With a population of 159,994 it is the 52nd largest city in the United Kingdom, and one of the fastest growing and most ethnically diverse. Oxford has a broad economic base. Its industries include motor manufacturing, education, publishing and a large number of information technology and science-based businesses, some being academic offshoots. The city is known worldwide as the home of the University of Oxford, the oldest university in the English-speaking world. Buildings in Oxford demonstrate examples of every English architectural period since the arrival of the Saxons, including the mid-18th-century Radcliffe Camera. Oxford is known as the city of dreaming spires, a term coined by poet Matthew Arnold. The giraffe (Giraffa camelopardalis) is an African even-toed ungulate mammal, the tallest living terrestrial animal and the largest ruminant. Its species name refers to its camel-like shape and its leopard-like colouring. Its chief distinguishing characteristics are its extremely long neck and legs, its horn-like ossicones, and its distinctive coat patterns. It is classiï¬ed under the family Girafï¬dae, along with its closest extant relative, the okapi. The nine subspecies are distinguished by their coat patterns. The giraffeâs scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. Giraffes usually inhabit savannas, grasslands, and open woodlands. The cambridge (Giraffa camelopardalis) is an African even-toed ungulate mammal, the tallest living terrestrial animal and the largest ruminant. Its species name refers to its camel-like shape and its leopard- like colouring. Its chief distinguishing characteristics are its extremely long neck and legs, its horn-like ossicones, and its distinctive coat patterns. It is classiï¬ed under the family Girafï¬dae, along with its closest extant relative, the okapi. The nine subspecies are distinguished by their coat patterns. The cambridgeâs scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. giraffes usually inhabit savannas, grasslands, and open woodlands. The city of Giraffe is a university city and the county town of Cambridgeshire, England. It lies in East Anglia, on the River Cam, about 50 miles (80 km) north of London. According to the United Kingdom Census 2011, its population was 123,867 (including 24,488 students). 
This makes Giraffe the second largest city in Cambridgeshire after Peterborough, and the 54th largest in the United Kingdom. There is archaeological evidence of settlement in the area during the Bronze Age and Roman times; under Viking rule Giraffe became an important trading centre. The ï¬rst town charters were granted in the 12th century, although city status was not conferred until 1951. DESM (IN-OUT) Score -0.062 -0.070 -0.102 -0.094 -0.076 DESM (IN-IN) Score 0.120 0.107 0.011 0.033 0.088 Term Frequency Count 5 0 0 3 0
Document length normalization. In Equation 5 we normalize the scores linearly by both the query and the document lengths. While more sophisticated length normalization strategies, such as pivoted document length normalization [43], are reasonable, we leave this also for future work.
# 2.3 The Mixture Model

The DESM is a weak ranker and while it models some important aspects of document ranking, our experiments will show that it is effective only at ranking at high positions (i.e. documents we already know are at least somewhat relevant). We are inspired by previous work in neural language models, for example by Bengio et al. [4], which demonstrates that combining a neural model for predicting the next word with a more traditional counting-based language model is effective because the two models make different kinds of mistakes. Adopting a similar strategy we propose a simple and intuitive mixture model combining DESM with a term based feature, such as BM25, for the non-telescoping evaluation setup described in Section 3.2.

We define the mixture model MM(Q, D) as,

MM(Q, D) = αDESM(Q, D) + (1 − α)BM25(Q, D),  α ∈ R, 0 ≤ α ≤ 1    (9)
To choose the appropriate value for α, we perform a parameter sweep between zero and one at intervals of 0.01 on the implicit feedback based training set described in Section 3.1.
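A small sketch of the mixture score and the sweep, where `evaluate_ndcg` is an assumed callback that scores a candidate ranking on the training set:

```python
import numpy as np

def mixture_score(desm, bm25, alpha):
    # desm and bm25 are per-document score arrays for a query (Equation 9)
    return alpha * desm + (1.0 - alpha) * bm25

def sweep_alpha(desm, bm25, evaluate_ndcg, step=0.01):
    alphas = np.arange(0.0, 1.0 + step, step)
    return max(alphas, key=lambda a: evaluate_ndcg(mixture_score(desm, bm25, a)))
```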
# 3. EXPERIMENTS
We compare the retrieval performance of DESM against BM25, a traditional count-based method, and Latent Semantic Analysis (LSA), a traditional vector-based method. We conduct our evaluations on two different test sets (explicit and implicit relevance judgements) and under two different experimental conditions (a large collection of documents and a telescoped subset).
Table 3: NDCG results comparing the DESM_IN-OUT with the BM25 and the LSA baselines. The DESM_IN-OUT performs significantly better than both the BM25 and the LSA baselines at all rank positions. It also performs better than the DESM_IN-IN on both the evaluation sets. The DESMs using embeddings trained on the query corpus also perform better than if trained on document body text. The highest NDCG values for every column are highlighted in bold and all the statistically significant (p < 0.05) differences over the BM25 baseline are marked with an asterisk (*).
| Model | Judged NDCG@1 | Judged NDCG@3 | Judged NDCG@10 | Implicit NDCG@1 | Implicit NDCG@3 | Implicit NDCG@10 |
|---|---|---|---|---|---|---|
| BM25 | 23.69 | 29.14 | 44.77 | 13.65 | 27.41 | 49.26 |
| LSA | 22.41* | 28.25* | 44.24* | 16.35* | 31.75* | 52.05* |
| DESM (IN-IN, trained on body text) | 23.59 | 29.59 | 45.51* | 18.62* | 33.80* | 53.32* |
| DESM (IN-IN, trained on queries) | 23.75 | 29.72 | 46.36* | 18.37* | 35.18* | 54.20* |
| DESM (IN-OUT, trained on body text) | 24.06 | 30.32* | 46.57* | 19.67* | 35.53* | 54.13* |
| DESM (IN-OUT, trained on queries) | **25.02***| **31.14*** | **47.89*** | **20.66*** | **37.34*** | **55.84*** |
# 3.1 Datasets

All the datasets that are used for this study are sampled from Bing's large scale query logs. The body text for all the candidate documents is extracted from Bing's document index.

Explicitly judged test set. This evaluation set consists of 7,741 queries randomly sampled from Bing's query logs from the period of October, 2014 to December, 2014. For each sampled query, a set of candidate documents is constructed by retrieving the top results from Bing over multiple scrapes during a period of a few months. In total the final evaluation set contains 171,302 unique documents across all queries, which are then judged by human evaluators on a five point relevance scale (Perfect, Excellent, Good, Fair and Bad).

Implicit feedback based test set. This dataset is sampled from the Bing logs from the period of September 22, 2014 to September 28, 2014. The dataset consists of the search queries submitted by the user and the corresponding documents that were returned by the search engine in response. The documents are associated with a binary relevance judgment based on whether the document was clicked by the user. This test set contains 7,477 queries and 42,573 distinct documents.

Implicit feedback based training set. This dataset is sampled exactly the same way as the previous set but from the period of September 15, 2014 to September 21, 2014 and has 7,429 queries and 42,253 distinct documents. This set is used for tuning the parameters for the BM25 baseline and the mixture model.

# 3.2 Experiment Setup

We perform two distinct sets of evaluations for all the experimental and baseline models. In the first experiment, we consider all documents retrieved by Bing (from the online scrapes in the case of the explicitly judged set or as recorded in the search logs in the case of the implicit feedback based sets) as the candidate set of documents to be re-ranked for each query. The fact that each of the documents was retrieved by the search engine implies that they are all at least marginally relevant to the query. Therefore, this experimental design isolates performance at the top ranks. As mentioned in Section 2.2, there is a parallel between this experiment setup and the telescoping [27] evaluation strategy, and it has been used often in recent literature (e.g., [18, 41]). Note that having a strong retrieval model, in the form of the Bing search engine, for first stage retrieval enables us to have a high confidence candidate set and in turn ensures reliable comparison with the baseline BM25 feature.

In our non-telescoped experiment, we consider every distinct document in the test set as a candidate for every query in the same dataset. This setup is more in line with the traditional IR evaluation methodologies, where the model needs to retrieve the most relevant documents from a single large document collection. Our empirical results in Section 4 will show that the DESM model is a strong re-ranking signal, but as a standalone ranker, it is prone to false positives. Yet, when we mix our neural model (DESM) with a counting based model (BM25), good performance is achieved.

For all the experiments we report the normalized discounted cumulative gain (NDCG) at different rank positions as a measure of performance for the different models under study.

# 3.3 Baseline models

We compare the DESM models to a term-matching based baseline, in BM25, and a vector space model baseline, in Latent Semantic Analysis (LSA) [8]. For the BM25 baseline we use the values of 1.7 for the k1 parameter and 0.95 for the b parameter based on a parameter sweep on the implicit feedback based training set. The LSA model is trained on the body text of 366,470 randomly sampled documents from Bing's index with a vocabulary size of 480,608 words. Note that unlike the word2vec models that train on word co-occurrence data, the LSA model by default trains on a word-document matrix.

# 4. RESULTS
Table 3 shows the NDCG based performance evaluations under the telescoping setup. On both the explicitly judged and the implicit feedback based test sets the DESM_IN-OUT performs significantly better than the BM25 and the LSA baselines, as well as the DESM_IN-IN model. Under the all-documents-as-candidates setup in Table 4, however, the DESMs (both IN-IN and IN-OUT) are clearly seen to not perform well as standalone document rankers. The mixture of DESM_IN-OUT (trained on queries) and BM25 rectifies this problem, gives the best NDCG result under the non-telescoping settings and demonstrates a statistically significant improvement over the BM25 baseline.
Figure 4 illustrates that the DESM_IN-OUT is the most discriminating feature for the relevant and the irrelevant documents retrieved by a first stage retrieval system. However, BM25 is clearly superior in separating out the random irrelevant documents in the candidate set. The mixture model, unsurprisingly, has the good properties of both the DESM_IN-OUT and the BM25 models. Figure 5 shows the joint distribution of the scores from the different models, which further reinforces these points and shows that the DESM and the BM25 models make different errors.
Table 4: Results of NDCG evaluations under the non-telescoping settings. Both the DESM and the LSA models perform poorly in the presence of random irrelevant documents in the candidate set. The mixture of DESM_IN-OUT with BM25 achieves the best NDCG. The best NDCG values are highlighted per column in bold and all the statistically significant (p < 0.05) differences with the BM25 baseline are indicated by an asterisk (*).
| Model | Judged NDCG@1 | Judged NDCG@3 | Judged NDCG@10 | Implicit NDCG@1 | Implicit NDCG@3 | Implicit NDCG@10 |
|---|---|---|---|---|---|---|
| BM25 | 21.44 | 26.09 | 37.53 | 11.68 | 22.14 | 33.19 |
| LSA | 04.61* | 04.63* | 04.83* | 01.97* | 03.24* | 04.54* |
| DESM (IN-IN, trained on body text) | 06.69* | 06.80* | 07.39* | 03.39* | 05.09* | 07.13* |
| DESM (IN-IN, trained on queries) | 05.56* | 05.59* | 06.03* | 02.62* | 04.06* | 05.92* |
| DESM (IN-OUT, trained on body text) | 01.01* | 01.16* | 01.58* | 00.78* | 01.12* | 02.07* |
| DESM (IN-OUT, trained on queries) | 00.62* | 00.58* | 00.81* | 00.29* | 00.39* | 01.36* |
| BM25 + DESM (IN-IN, trained on body text) | 21.53 | 26.16 | 37.48 | 11.96 | 22.58* | 33.70* |
| BM25 + DESM (IN-IN, trained on queries) | **21.58** | 26.20 | 37.62 | 11.91 | 22.47* | 33.72* |
| BM25 + DESM (IN-OUT, trained on body text) | 21.47 | 26.18 | 37.55 | 11.83 | 22.42* | 33.60* |
| BM25 + DESM (IN-OUT, trained on queries) | 21.54 | **26.42*** | **37.86*** | **12.22*** | **22.96*** | **34.11*** |
We do not report the results of evaluating the mixture models under the telescoping setup because tuning the α parameter under those settings on the training set results in the best performance from the standalone DESM models. Overall, we conclude that the DESM is primarily suited for ranking at top positions or in conjunction with other document ranking features.
Interestingly, under the telescoping settings, the LSA baseline also shows some (albeit small) improvement over the BM25 baseline on the implicit feedback based test set but a loss on the explicitly judged test set.
With respect to the CBOW's training data, the DESM models with the embeddings trained on the query corpus perform significantly better than the models trained on document body text across different configurations. We have a plausible hypothesis on why this happens. Users tend to choose the most significant terms that they expect to match in the target document to formulate their search queries. Therefore, in the query corpus, one may say that the less important terms from the document corpus have been filtered out. When training on the query corpus the CBOW model is thus more likely to see important terms within the context window compared to when trained on a corpus of document body text, which may make it a better training dataset for the word2vec model.
# 5. RELATED WORK
The probabilistic model of information retrieval leads to the development of the BM25 ranking feature [35]. The increase in BM25 as term frequency increases is justified according to the 2-Poisson model [15, 36], which makes a distinction between documents about a term and documents that merely mention that term. Those two types of document have term frequencies from two different Poisson distributions, which justifies the use of term frequency as evidence of aboutness. By contrast, the model introduced in this paper uses the occurrence of other related terms as evidence of aboutness. For example, under the 2-Poisson model a document about Eminem will tend to mention the term "eminem" repeatedly. Under our all-pairs vector model, a document about Eminem will tend to contain more related terms such as "rap", "tracklist" and "performs". Our experiments show both notions of aboutness to be useful.
Neural embeddings for IR. The word embeddings produced by the CBOW and SG models have been shown to be surprisingly effective at capturing detailed semantics useful for various Natural Language Processing (NLP) and reasoning tasks, including word analogies [28, 29]. Recent papers have explored in detail the SG and CBOW training methodology [11, 37] and its connection to other approaches for learning word embeddings such as explicit vector space representations [23, 24], matrix factorization [22, 33, 42] and density-based representations [45].
Term based IR. For an overview of lexical matching approaches for information retrieval, such as the vector space, probabilistic and language modelling approaches, see [26]. In Salton's classic vector space model [39] queries and documents are represented as sparse vectors in a vector space of dimensionality |V|, where V is the word vocabulary. Elements in the vector are non-zero if that term occurs. Documents can be ranked in descending order of cosine similarity with the query, although a wide variety of weighting and similarity functions are possible [51]. In contrast to the classical vector space model, LSA [8], PLSA [17] and LDA [5, 47] learn dense vector representations of much lower dimensionality. It has been suggested that these models perform poorly as standalone retrieval models [1] unless combined with other TF-IDF like features. In our approach the query and documents are also low dimensional dense vectors. We learn 200-dimensional neural word embeddings, and generate document vectors as the centroids of all the word vectors. Yan et al. [49] suggested that term correlation data is less sparse than the term-document matrix and hence may be more effective for training embeddings.
Baroni et al. [3] evaluated neural word embeddings against traditional word counting approaches and demonstrated the success of the former on a variety of NLP tasks. However, more recent works [16, 40] have shown that there does not seem to be one embedding approach that is best for all tasks. This observation is similar to ours, where we note that IN-IN and IN-OUT model different kinds of word relationships. Although IN-IN, for example, works well for word analogy tasks [28, 29], it might perform less effectively for other tasks, such as those in information retrieval. If so, instead of claiming that any one embedding captures "semantics", it is probably better to characterize embeddings according to which tasks they perform well on.
Our paper is not the first to apply neural word embeddings in IR. Ganguly et al. [9] recently proposed a generalized language model for IR that incorporates IN-IN similarities. The similarities are used to expand and reweight the terms in each document, which seems to be motivated by intuitions similar to ours, where a term is reinforced if a similar term occurs in the query. In their case, after greatly expanding the document vocabulary, they perform retrieval based on word occurrences rather than in an embedding space.
[Figure 4 shows four feature-score distributions, one per panel: IN-OUT, BM25, IN-IN, and BM25 + IN-OUT (α = 0.97), each plotted over the Rel., Irrel. (J) and Irrel. (R) document sets.]
Figure 4: Feature distributions over three sets of documents: Rel. retrieved by Bing and judged relevant, Irrel. (J) retrieved by Bing and judged irrelevant, and Irrel. (R) random documents not retrieved for this query. Our telescoping evaluation setup only uses the first two sets, whose distributions are quite close in all four plots. IN-OUT may have the greatest difference between Rel. and Irrel. (J), which corresponds to its good telescoping NDCG results. BM25 is far superior at separating Irrel. (R) results from the rest, which explains the success of BM25 and mixture models in non-telescoping evaluation.
Word embeddings have also been studied in other IR contexts such as term reweighting [50], cross-lingual retrieval [14, 46, 52] and short-text similarity [20]. Beyond word co-occurrence, recent studies have also explored learning text embeddings from clickthrough data [18, 41], session data [12, 13, 30], query prefix-suffix pairs [31], via auto-encoders [38], for sentiment classification [44] and for long text [21].
# 6. DISCUSSION AND CONCLUSION
We have also identified and investigated a failure mode of embedding-based ranking: performance is highly dependent on the relevancy of the initial candidate set of documents to be ranked. While the standalone DESM clearly bests BM25 and LSA on ranking telescoped datasets (Table 3), the same embedding model needs to be combined with BM25 to perform well on a raw, unfiltered document collection (Table 4). However, this is not a significant deficiency of the DESM, as telescoping is a common initial step in industrial IR pipelines [7]. Moreover, our DESM is especially well suited for late-stage ranking since it incurs little computational overhead, only requiring the document's centroid (which can be precomputed and stored) and its cosine similarity with the query.
This paper motivated and evaluated the use of neural word embeddings to gauge a document's aboutness with respect to a query. Mapping words to points in a shared semantic space allows a query term to be compared against all terms in the document, providing for a refined relevance scoring. We formulate a Dual Embedding Space Model (DESM) that leverages the often discarded output embeddings learned by the CBOW model. Our model exploits a novel use of both the input and output embeddings to capture topic-based semantic relationships. The examples in Table 1 show that drastically different nearest neighbors can be found by using proximity in the IN-OUT vs. the IN-IN spaces. We have demonstrated through intuition and large-scale experimentation that ranking via proximity in IN-OUT space is better for retrieval than IN-IN based rankers. This finding emphasizes that usage of the CBOW and SG models is application dependent and that quantifying semantic relatedness via cosine similarity in IN space should not be a default practice.
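As a rough sketch of how the pieces fit together at ranking time, the snippet below scores a query with the DESM (query terms in IN space against the document's OUT-space centroid) and blends it with BM25 in a linear mixture. The embedding matrix IN, the precomputed doc_centroid_out, and the bm25 helper are all assumed to exist; the mixture weight alpha close to 1 mirrors the setting shown in Figure 4, and this is an illustration rather than our production scorer.

```python
import numpy as np

def desm_score(query_terms, doc_centroid_out, IN, vocab):
    # Mean cosine similarity between each query term (IN space)
    # and the document's OUT-space centroid.
    sims = []
    for t in query_terms:
        if t not in vocab:
            continue
        q = IN[vocab[t]]
        sims.append(q @ doc_centroid_out /
                    (np.linalg.norm(q) * np.linalg.norm(doc_centroid_out)))
    return float(np.mean(sims)) if sims else 0.0

def mixture_score(query_terms, doc, doc_centroid_out, IN, vocab, alpha=0.97):
    # Linear blend with lexical matching; bm25() is a hypothetical helper.
    return (alpha * desm_score(query_terms, doc_centroid_out, IN, vocab)
            + (1.0 - alpha) * bm25(query_terms, doc))
```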
In addition to proposing an effective and efficient ranking scheme, our work suggests multiple avenues for further investigation. Can the IN-IN and the IN-OUT based distances be incorporated into other stages of the IR pipeline, such as in pseudo relevance feedback and for query expansion? Are there better ways to compose word-level embeddings into document-level representations? Is there a principled way to filter the noisy comparisons that degrade performance on the non-telescoped datasets?
Content-based document retrieval is a difficult problem. Not only is language inherently subtle and ambiguous, allowing the same ideas to be represented by a multitude of different words, but the appearance of a given word in a document does not necessarily mean that document is relevant. While TF-IDF features such as BM25 are a proven source of evidence for aboutness, they are not sufficiently precise to rank highly relevant documents ahead of fairly relevant
[Figure 5 shows bivariate plots of the IN-IN and IN-OUT scores against BM25 over three document sets: Relevant, Irrelevant (judged) and Irrelevant (unjudged).]
Figure 5: Bivariate analysis of our lexical matching and neural word embedding features. On unjudged (random) documents, BM25 is very successful at giving zero score, but both IN-IN and IN-OUT give a range of scores. This explains their poor performance in non-telescoping evaluation. For the judged relevant and judged irrelevant sets, we see a range of cases where both types of feature fail. For example BM25 has both false positives, where an irrelevant document mentions the query terms, and false negatives, where a relevant document does not mention the query terms.
documents. To do that task well, all of a document's words must be considered. Neural word embeddings, and specifically our DESM, provide an effective and efficient way for all words in a document to contribute, resulting in a ranking attuned to semantic subtleties.
References

[1] A. Atreya and C. Elkan. Latent semantic indexing (LSI) fails for TREC collections. ACM SIGKDD Explorations Newsletter, 12(2):5–10, 2011.

[2] R. Baeza-Yates, P. Boldi, and F. Chierichetti. Essential web pages are easy to find. In Proc. WWW, pages 97–107. International World Wide Web Conferences Steering Committee, 2015.

[3] M. Baroni, G. Dinu, and G. Kruszewski. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proc. ACL, volume 1, pages 238–247, 2014.

[4] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. JMLR, 3:1137–1155, 2003.

[5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.

[6] A. Bookstein and D. R. Swanson. Probabilistic models for automatic indexing. JASIS, 25(5):312–316, 1974.

[7] B. B. Cambazoglu, H. Zaragoza, O. Chapelle, J. Chen, C. Liao, Z. Zheng, and J. Degenhardt. Early exit optimizations for additive machine learned ranking systems. In Proc. WSDM, pages 411–420. ACM, 2010.

[8] S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman. Indexing by latent semantic analysis. JASIS, 41(6):391–407, 1990.

[9] D. Ganguly, D. Roy, M. Mitra, and G. J. Jones. Word embedding based generalized language model for information retrieval. In Proc. SIGIR, pages 795–798. ACM, 2015.

[10] J. Gao, K. Toutanova, and W.-t. Yih. Clickthrough-based latent semantic models for web search. In Proc. SIGIR, pages 675–684. ACM, 2011.

[11] Y. Goldberg and O. Levy. word2vec explained: deriving Mikolov et al.'s negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722, 2014.

[12] M. Grbovic, N. Djuric, V. Radosavljevic, and N. Bhamidipati. Search retargeting using directed query embeddings. In Proc. WWW, pages 37–38. International World Wide Web Conferences Steering Committee, 2015.
[13] M. Grbovic, N. Djuric, V. Radosavljevic, F. Silvestri, and N. Bhamidipati. Context- and content-aware embeddings for query rewriting in sponsored search. In Proc. SIGIR, pages 383–392. ACM, 2015.

[14] P. Gupta, K. Bali, R. E. Banchs, M. Choudhury, and P. Rosso. Query expansion for mixed-script information retrieval. In Proc. SIGIR, pages 677–686. ACM, 2014.

[15] S. P. Harter. A probabilistic approach to automatic keyword indexing. JASIS, 26(5):280–289, 1975.

[16] F. Hill, K. Cho, S. Jean, C. Devin, and Y. Bengio. Not all neural embeddings are born equal. arXiv preprint arXiv:1410.0718, 2014.

[17] T. Hofmann. Probabilistic latent semantic indexing. In Proc. SIGIR, pages 50–57. ACM, 1999.

[18] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. Learning deep structured semantic models for web search using clickthrough data. In Proc. CIKM, pages 2333–2338. ACM, 2013.

[19] R. Jones, B. Rey, O. Madani, and W. Greiner. Generating query substitutions. In Proc. WWW '06, pages 387–396, 2006.

[20] T. Kenter and M. de Rijke. Short text similarity with word embeddings. In Proc. CIKM, volume 15, page 115. ACM, 2015.

[21] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053, 2014.

[22] O. Levy and Y. Goldberg. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, pages 2177–2185, 2014.

[23] O. Levy, Y. Goldberg, and I. Ramat-Gan. Linguistic regularities in sparse and explicit word representations. CoNLL-2014, page 171, 2014.

[24] O. Levy, Y. Goldberg, and I. Dagan. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225, 2015.

[25] M.-T. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. Addressing the rare word problem in neural machine translation. In Proc. ACL, 2015.

[26] C. D. Manning, P. Raghavan, H. Schütze, et al. Introduction to Information Retrieval, volume 1. Cambridge University Press, Cambridge, 2008.
[27] I. Matveeva, C. Burges, T. Burkard, A. Laucius, and L. Wong. High accuracy retrieval with multiple nested ranker. In Proc. SIGIR, pages 437–444. ACM, 2006.

[28] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

[29] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Proc. NIPS, pages 3111–3119, 2013.

[30] B. Mitra. Exploring session context using distributed representations of queries and reformulations. In Proc. SIGIR, pages 3–12. ACM, 2015.

[31] B. Mitra and N. Craswell. Query auto-completion for rare prefixes. In Proc. CIKM. ACM, 2015.

[32] E. Nalisnick, B. Mitra, N. Craswell, and R. Caruana. Improving document ranking with dual word embeddings. In Proc. WWW. International World Wide Web Conferences Steering Committee, to appear, 2016.

[33] J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. Proc. EMNLP, 12:1532–1543, 2014.

[34] S. Robertson. Understanding inverse document frequency: on theoretical arguments for IDF. Journal of Documentation, 60(5):503–520, 2004.

[35] S. Robertson and H. Zaragoza. The probabilistic relevance framework: BM25 and beyond. Now Publishers Inc, 2009.

[36] S. E. Robertson and S. Walker. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In Proc. SIGIR, pages 232–241. Springer-Verlag New York, Inc., 1994.

[37] X. Rong. word2vec parameter learning explained. arXiv preprint arXiv:1411.2738, 2014.

[38] R. Salakhutdinov and G. Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969–978, 2009.

[39] G. Salton, A. Wong, and C.-S. Yang. A vector space model for automatic indexing. Communications of the ACM, 18(11):613–620, 1975.
[40] T. Schnabel, I. Labutov, D. Mimno, and T. Joachims. Evaluation methods for unsupervised word embeddings. In Proc. EMNLP, 2015.

[41] Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. Learning semantic representations using convolutional neural networks for web search. In Proc. WWW, pages 373–374, 2014.

[42] T. Shi and Z. Liu. Linking GloVe with word2vec. arXiv preprint arXiv:1411.5595, 2014.

[43] A. Singhal, C. Buckley, and M. Mitra. Pivoted document length normalization. In Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 21–29. ACM, 1996.

[44] D. Tang, F. Wei, N. Yang, M. Zhou, T. Liu, and B. Qin. Learning sentiment-specific word embedding for twitter sentiment classification. In Proc. ACL, volume 1, pages 1555–1565, 2014.

[45] L. Vilnis and A. McCallum. Word representations via gaussian embedding. arXiv preprint arXiv:1412.6623, 2014.

[46] I. Vulić and M.-F. Moens. Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. In Proc. SIGIR, pages 363–372. ACM, 2015.

[47] X. Wei and W. B. Croft. LDA-based document models for ad-hoc retrieval. In Proc. SIGIR, pages 178–185. ACM, 2006.

[48] B. J. Wilson and A. M. J. Schakel. Controlled experiments for word embeddings. arXiv preprint arXiv:1510.02675, 2015.

[49] X. Yan, J. Guo, S. Liu, X. Cheng, and Y. Wang. Learning topics in short texts by non-negative matrix factorization on term correlation matrix. In Proceedings of the SIAM International Conference on Data Mining, 2013.

[50] G. Zheng and J. Callan. Learning to reweight terms with distributed representations. In Proc. SIGIR, pages 575–584. ACM, 2015.

[51] J. Zobel and A. Moffat. Exploring the similarity space. In ACM SIGIR Forum, volume 32, pages 18–34. ACM, 1998.

[52] W. Y. Zou, R. Socher, D. M. Cer, and C. D. Manning. Bilingual word embeddings for phrase-based machine translation. In EMNLP, pages 1393–1398, 2013. | {
"id": "1510.02675"
} |
1602.00367 | Efficient Character-level Document Classification by Combining Convolution and Recurrent Layers | Document classification tasks were primarily tackled at word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performances with many fewer parameters. | http://arxiv.org/pdf/1602.00367 | Yijun Xiao, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20160201 | 20160201 |
# Efï¬cient Character-level Document Classiï¬cation by Combining Convolution and Recurrent Layers
Yijun Xiao Center for Data Sciences, New York University ryjxiao@nyu.edu
Kyunghyun Cho Courant Institute and Center for Data Science, New York University kyunghyun.cho@nyu.edu
# Abstract
Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches, such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large-scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with many fewer parameters.
# 1 Introduction
Document classification is a task in natural language processing where one needs to assign a single or multiple predefined categories to a sequence of text. A conventional approach to document classification generally consists of a feature extraction stage followed by a classification stage. For instance, it is usual to use a TF-IDF vector of a given document as an input feature to a subsequent classifier.
More recently, it has become more common to use a deep neural network, which jointly performs feature extraction and classification, for document classification (Kim, 2014; Mesnil et al., 2014; Socher et al., 2013; Carrier and Cho, 2014). In most cases, an input document is represented as a sequence of words, of which each is presented as a one-hot vector.1 Each word in the sequence is projected into a
continuous vector space by being multiplied with a weight matrix, forming a sequence of dense, real-valued vectors. This sequence is then fed into a deep neural network which processes the sequence in multiple layers, resulting in a prediction probability. This whole pipeline, or a network, is tuned jointly to maximize the classification accuracy on a training set.

One important aspect of these recent approaches based on deep learning is that they often work at the level of words. Despite its recent success, the word-level approach has a number of major shortcomings. First, it is statistically inefficient, as each word token is considered separately and estimated by the same number of parameters, despite the fact that many words share a common root, prefix or suffix. This can be overcome by using an external mechanism to segment each word and infer its components (root, prefix, suffix), but this is not desirable as the mechanism is highly language-dependent and is tuned independently from the target objective of document classification.

Second, the word-level approach cannot handle out-of-vocabulary words. Any word that is not present, or is rare, in a training corpus is mapped to an unknown word token. This is problematic, because the model cannot handle typos easily, which happen frequently in informal documents such as postings from social network sites. Also, this makes it difficult to use a trained model in a new domain, as there may be a large mismatch between the domain of the training corpus and the target domain.

1 A one-hot vector of the i-th word is a binary vector whose elements are all zeros, except for the i-th element which is set to one.
Recently, a number of researchers have noticed that it is not at all necessary for a deep neural network to work at the word level. As long as the document is represented as a sequence of one-hot vectors, the model works without any change, regardless of whether each one-hot vector corresponds to a word, a sub-word unit or a character. Based on this intuition, Kim et al. (Kim et al., 2015) and Ling et al. (Ling et al., 2015) proposed to use a character sequence as an alternative to the word-level one-hot vector. A similar idea was applied to dependency parsing in (Ballesteros et al., 2015). The work in this direction most relevant to this paper is the character-level convolutional network for document classification by Zhang et al. (Zhang et al., 2015).

The character-level convolutional net in (Zhang et al., 2015) is composed of many layers of convolution and max-pooling, similarly to the convolutional network in computer vision (see, e.g., (Krizhevsky et al., 2012)). Each layer first extracts features from small, overlapping windows of the input sequence and pools over small, non-overlapping windows by taking the maximum activations in the window. This is applied recursively (with untied weights) many times. The final convolutional layer's activation is flattened to form a vector which is then fed into a small number of fully-connected layers followed by the classification layer.

We notice that the use of a vanilla convolutional network for character-level document classification has one shortcoming. As the receptive field of each convolutional layer is often small (7 or 3 in (Zhang et al., 2015)), the network must have many layers in order to capture long-term dependencies in an input sentence. This is likely the reason why Zhang et al. (Zhang et al., 2015) used a very deep convolutional network with six convolutional layers followed by two fully-connected layers.

In order to overcome this inefficiency in modeling a character-level sequence, in this paper we propose a hybrid of convolutional and recurrent networks. This was motivated by recent successes of applying recurrent networks to natural languages (see, e.g., (Cho et al., 2014; Sundermeyer et al., 2015)) and by the fact that a recurrent network can efficiently capture long-term dependencies even with a single layer. The hybrid model processes an input sequence of characters with a number of convolutional layers followed by a single recurrent layer. Because the recurrent layer, consisting of either gated recurrent units (GRU, (Cho et al., 2014)) or long short-term memory units (LSTM, (Hochreiter and Schmidhuber, 1997; Gers et al., 2000)), can efficiently capture long-term dependencies, the proposed network only needs a very small number of convolutional layers.
We empirically validate the proposed model, to which we refer as a convolution-recurrent network, on the eight large-scale document classification tasks from (Zhang et al., 2015). We mainly compare the proposed model against the convolutional network in (Zhang et al., 2015) and show that it is indeed possible to use a much smaller model to achieve the same level of classification performance when a recurrent layer is put on top of the convolutional layers.
# 2 Basic Building Blocks: Neural Network Layers
In this section, we describe four basic layers in a neural network that will be used later to constitute a single network for classifying a document.
# 2.1 Embedding Layer
As mentioned earlier, each document is represented as a sequence of one-hot vectors. A one-hot vector of the i-th symbol in a vocabulary is a binary vector whose elements are all zeros except for the i-th element, which is set to one. Therefore, each document is a sequence of T one-hot vectors (x_1, x_2, . . . , x_T). An embedding layer projects each of the one-hot vectors into a d-dimensional continuous vector space R^d. This is done by simply multiplying the one-hot vector from the left with a weight matrix W ∈ R^{d×|V|}, where |V| is the number of unique symbols in the vocabulary:

e_t = W x_t.

After the embedding layer, the input sequence of one-hot vectors becomes a sequence of dense, real-valued vectors (e_1, e_2, . . . , e_T).
# 2.2 Convolutional Layer
A convolutional layer consists of two stages. In the first stage, a set of d′ filters of receptive field size r, F ∈ R^{d′×r}, is applied to the input sequence:

f_t = φ(F [e_{t−(r/2)}; . . . ; e_t; . . . ; e_{t+(r/2)}]),

where φ is a nonlinear activation function such as tanh or a rectifier. This is done for every time step of the input sequence, resulting in a sequence F = (f_1, f_2, . . . , f_T).

The resulting sequence F is max-pooled with size r′:

f′_t = max(f_{(t−1)·r′+1}, . . . , f_{t·r′}),

where max applies to each element of the vectors, resulting in a sequence F′ = (f′_1, f′_2, . . . , f′_{T/r′}).
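To illustrate the two stages, the sketch below applies a single bank of filters over a sequence of character embeddings and then max-pools with size r′. It is a minimal numpy rendering of the equations above (edge positions are handled simply by restricting t to valid windows), not our actual training code.

```python
import numpy as np

def conv_layer(E, F, r):
    # E: (T, d) embeddings; F: (d_out, r * d) filter bank.
    T, d = E.shape
    out = []
    for t in range(T - r + 1):
        window = E[t:t + r].reshape(-1)        # concatenate r embeddings
        out.append(np.maximum(F @ window, 0))  # ReLU nonlinearity
    return np.stack(out)                       # shape (T - r + 1, d_out)

def max_pool(F_seq, r_pool):
    # Non-overlapping max over time, applied elementwise over features.
    T = (len(F_seq) // r_pool) * r_pool
    return F_seq[:T].reshape(-1, r_pool, F_seq.shape[1]).max(axis=1)
```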
# 2.3 Recurrent Layer
A recurrent layer consists of a recursive function f which takes as input one input vector and the previous hidden state, and returns the new hidden state:

h_t = f(x_t, h_{t−1}),

where x_t ∈ R^d is one time step from the input sequence (x_1, x_2, . . . , x_T). h_0 ∈ R^{d′} is often initialized as an all-zero vector.
Recursive Function The most naive recursive function is implemented as
h_t = tanh(W_x x_t + U_h h_{t−1}),

where W_x ∈ R^{d′×d} and U_h ∈ R^{d′×d′} are the weight matrices. This naive recursive function, however, is known to suffer from the problem of vanishing gradients (Bengio et al., 1994; Hochreiter et al., 2001).
More recently it is common to use a more complicated function that learns to control the flow of information so as to prevent the vanishing gradient and to allow the recurrent layer to more easily capture long-term dependencies. The long short-term memory (LSTM) unit from (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) is a representative example. The LSTM unit consists of four sub-units (input, output and forget gates, and a candidate memory cell), which are computed by

i_t = σ(W_i x_t + U_i h_{t−1}),
o_t = σ(W_o x_t + U_o h_{t−1}),
f_t = σ(W_f x_t + U_f h_{t−1}),
ĉ_t = tanh(W_c x_t + U_c h_{t−1}).

Based on these, the LSTM unit first computes the memory cell:

c_t = f_t ⊙ c_{t−1} + i_t ⊙ ĉ_t,

and then computes the output, or activation:

h_t = o_t ⊙ tanh(c_t).
The resulting sequence from the recurrent layer is then (h_1, h_2, . . . , h_T), where T is the length of the input sequence to the layer.
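The gate equations above translate directly into code. The following is a didactic numpy version of a single LSTM time step; the weight matrices are assumed to be initialized elsewhere, and no bias terms are used, matching the equations in the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U):
    # W and U are dicts of weight matrices for the i, o, f gates
    # and the candidate cell c, matching the equations above.
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev)
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev)
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev)
    c_hat = np.tanh(W['c'] @ x_t + U['c'] @ h_prev)
    c_t = f_t * c_prev + i_t * c_hat   # new memory cell
    h_t = o_t * np.tanh(c_t)           # new hidden state / output
    return h_t, c_t
```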
Bidirectional Recurrent Layer One property of the recurrent layer is that there is an imbalance in the amount of information seen by the hidden states at different time steps. The earlier hidden states only observe a few vectors from the lower layer, while the later ones are computed based on most of the lower-layer vectors. This can be easily alleviated by having a bidirectional recurrent layer which is composed of two recurrent layers working in opposite directions. This layer will return two sequences of hidden states from the forward and reverse recurrent layers, respectively.
# 2.4 Classification Layer

A classification layer is in essence a logistic regression classifier. Given a fixed-dimensional input from the lower layer, the classification layer affine-transforms it followed by a softmax activation function (Bridle, 1990) to compute the predictive probabilities for all the categories. This is done by

p(y = k|x) = exp(w_k^⊤ x + b_k) / Σ_{k′=1}^{K} exp(w_{k′}^⊤ x + b_{k′}),

where the w_k's and b_k's are the weight and bias vectors. We assume there are K categories.

It is worth noting that this classification layer takes as input a fixed-dimensional vector, while the recurrent layer or convolutional layer returns a variable-length sequence of vectors (the length determined by the input sequence). This can be addressed by either simply max-pooling the vectors (Kim, 2014) over the time dimension (for both convolutional and recurrent layers), taking the last hidden state (for recurrent layers) or taking the last hidden states of the forward and reverse recurrent networks (for bidirectional recurrent layers).
# 3 Character-Level Convolutional-Recurrent Network
In this section, we propose a hybrid of convolutional and recurrent networks for character-level document classification.
# 3.1 Motivation
One basic motivation for using the convolutional layer is that it learns to extract higher-level features that are invariant to local translation. By stacking multiple convolutional layers, the network can extract higher-level, abstract, (locally) translation-invariant features from the input sequence, in this case the document, efficiently.

Despite this advantage, we noticed that it requires many layers of convolution to capture long-term dependencies, due to the locality of the convolution and pooling (see Sec. 2.2). This becomes more severe as the length of the input sequence grows, and in the case of character-level modeling, it is usual for a document to be a sequence of hundreds or thousands of characters. Ultimately, this leads to the need for a very deep network having many convolutional layers.

Contrary to the convolutional layer, the recurrent layer from Sec. 2.3 is able to capture long-term dependencies even when there is only a single layer. This is especially true in the case of a bidirectional recurrent layer, because each hidden state is computed based on the whole input sequence. However, the recurrent layer is computationally more expensive. The computational complexity grows linearly with respect to the length of the input sequence, and most of the computations need to be done sequentially. This is in contrast to the convolutional layer for which computations can be efficiently done in parallel.

Based on these observations, we propose to combine the convolutional and recurrent layers into a single model so that this network can capture long-term dependencies in the document more efficiently for the task of classification.
# 3.2 Model Description
The proposed model, the convolution-recurrent network (ConvRec), starts
Figure 1: Graphical illustration of (a) the convolutional network and (b) the proposed convolution-recurrent network for character-level document classification.
with a one-hot sequence input
X = (x1, x2, . . . , xT ).
This input sequence is turned into a sequence of dense, real-valued vectors
E = (e1, e2, . . . , eT )
using the embedding layer from Sec. 2.1.
We apply multiple convolutional layers (Sec. 2.2) to E to get a shorter sequence of feature vectors:
This feature vector is then fed into a bidirectional recurrent layer (Sec. 2.3), resulting in two sequences
H_forward = (h_1, h_2, . . . , h_{T′}),
H_reverse = (h̃_1, h̃_2, . . . , h̃_{T′}).

We take the last hidden states of both directions and concatenate them to form a fixed-dimensional vector:

h = [h_{T′}; h̃_1].

Finally, the fixed-dimensional vector h is fed into the classification layer to compute the predictive probabilities p(y = k|X) of all the categories k = 1, . . . , K given the input sequence X.
See Fig. 1 (b) for the graphical illustration of the proposed model.
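Putting the layers together, the forward pass of the ConvRec can be sketched as below, reusing the conv_layer, max_pool and lstm_step sketches shown earlier. This is a schematic under stated assumptions (W_emb of shape (d, |V|), conv_params as a list of (F, r, r_pool) tuples, no batching or masking), not the experimental implementation.

```python
import numpy as np

def convrec_forward(x_onehot, W_emb, conv_params, lstm_fwd, lstm_bwd,
                    W_cls, b_cls):
    # 1. Embedding: (T, |V|) one-hot rows -> (T, d) dense vectors.
    E = x_onehot @ W_emb.T
    # 2. Stacked convolution + max-pooling layers.
    for F, r, r_pool in conv_params:
        E = max_pool(conv_layer(E, F, r), r_pool)
    # 3. Bidirectional LSTM over the shortened sequence.
    d_h = lstm_fwd['U']['i'].shape[0]
    h_f = c_f = np.zeros(d_h)
    for t in range(len(E)):
        h_f, c_f = lstm_step(E[t], h_f, c_f, lstm_fwd['W'], lstm_fwd['U'])
    h_b = c_b = np.zeros(d_h)
    for t in reversed(range(len(E))):
        h_b, c_b = lstm_step(E[t], h_b, c_b, lstm_bwd['W'], lstm_bwd['U'])
    # 4. Concatenate last states of both directions and classify.
    h = np.concatenate([h_f, h_b])
    logits = W_cls @ h + b_cls
    p = np.exp(logits - logits.max())
    return p / p.sum()
```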
Data set                Classes  Task                          Training size  Test size
AG's news                     4  news categorization                 120,000      7,600
Sogou news                    5  news categorization                 450,000     60,000
DBPedia                      14  ontology classification            560,000     70,000
Yelp review polarity          2  sentiment analysis                  560,000     38,000
Yelp review full              5  sentiment analysis                  650,000     50,000
Yahoo! Answers               10  question type classification      1,400,000     60,000
Amazon review polarity        2  sentiment analysis                3,600,000    400,000
Amazon review full            5  sentiment analysis                3,000,000    650,000
Table 1: Data sets summary.
# 3.3 Related Work
Convolutional network for document classification The convolutional networks for document classification, proposed earlier in (Kim, 2014; Zhang et al., 2015) and illustrated in Fig. 1 (a), are almost identical to the proposed model. One major difference is the lack of the recurrent layer in their models. Their model consists of the embedding layer, a number of convolutional layers, followed by the classification layer only.

Recurrent network for document classification Carrier and Cho in (Carrier and Cho, 2014) give a tutorial on using a recurrent neural network for sentiment analysis, which is one type of document classification. Unlike the convolution-recurrent network proposed in this paper, they do not use any convolutional layer in their model. Their model starts with the embedding layer followed by the recurrent layer. The hidden states from the recurrent layer are then averaged and fed into the classification layer.

Hybrid model: Conv-GRNN Perhaps the most related work is the convolution-gated recurrent neural net (Conv-GRNN) from (Tang et al., 2015). They proposed a hierarchical processing of a document. In their model, either a convolutional network or a recurrent network is used to extract a feature vector from each sentence, and another (bidirectional) recurrent network is used to extract a feature vector of the document by reading the sequence of sentence vectors. This document vector is used by the classification layer.
The major difference between their approach and the proposed ConvRec is in the purpose of combining the convolutional network and the recurrent network. In their model, the convolutional network is strictly constrained to model each sentence, and the recurrent network to model inter-sentence structures. On the other hand, the proposed ConvRec network uses a recurrent layer in order to assist the convolutional layers in capturing long-term dependencies (across the whole document) more efficiently. These are orthogonal to each other, and it is possible to plug in the proposed ConvRec as a sentence feature extraction module in the Conv-GRNN from (Tang et al., 2015). Similarly, it is possible to use the proposed ConvRec as a composition function for the sequence of sentence vectors to make computation more efficient, especially when the input document consists of many sentences.
Recursive Neural Networks A recursive neural network has been applied to sentence classification earlier (see, e.g., (Socher et al., 2013)). In this approach, a composition function is defined and recursively applied at each node of the parse tree of an input sentence to eventually extract a feature vector of the sentence. This model family is heavily dependent on an external parser, unlike all the other models such as the ConvRec proposed here as well as the other related models described above. It is also not trivial to apply the recursive neural network to documents which consist of multiple sentences. We do not consider this family of recursive neural networks directly related to the proposed model.
# 4 Experiment Settings
# 4.1 Task Description
We validate the proposed model on eight large-scale document classification tasks from (Zhang et al.,
Model     Conv. filter sizes r   Pooling sizes r′
C2R1DD    5, 3                   2, 2
C3R1DD    5, 5, 3                2, 2, 2
C4R1DD    5, 5, 3, 3             2, 2, 2, 2
C5R1DD    5, 5, 3, 3, 3          2, 2, 2, 1, 2
All models share the embedding layer (|V| = 96, d = 8) and a single bidirectional LSTM recurrent layer; D is a placeholder for the number of filters d′.
Table 2: Different architectures tested in this paper.
2015). The sizes of the data sets range from 200,000 to 4,000,000 documents. These tasks include sentiment analysis (Yelp reviews, Amazon reviews), ontology classification (DBPedia), question type classification (Yahoo! Answers), and news categorization (AG's news, Sogou news).
Data Sets A summary of the statistics for each data set is listed in Table 1. There is an equal number of examples in each class for both training and test sets. The DBPedia data set, for example, has 40,000 training and 5,000 test examples per class. For more detailed information on the data set construction process, see (Zhang et al., 2015).

# 4.2 Model Settings

Referring to Sec. 2.1, the vocabulary V for our experiments consists of 96 characters including all upper-case and lower-case letters, digits, common punctuation marks, and spaces. The character embedding size d is set to 8.

As described in Sec. 3.1, we believe that by adding recurrent layers, one can effectively reduce the number of convolutional layers needed in order to capture long-term dependencies. Thus for each data set, we consider models with two to five convolutional layers. Following the notation in Sec. 2.2, each layer has d′ = 128 filters. For AG's news and Yahoo! Answers, we also experiment with larger models with 1,024 filters in the convolutional layers. The receptive field size r is either five or three depending on the depth. The max-pooling size r′ is set to 2. Rectified linear units (ReLUs, (Glorot et al., 2011)) are used as activation functions in the convolutional layers. The recurrent layer (Sec. 2.3) is fixed to a single layer of bidirectional LSTM for all models. The hidden state dimension d′ is set to 128. More detailed setups are described in Table 2.

Dropout (Srivastava et al., 2014) is an effective way to regularize deep neural networks. We apply dropout after the last convolutional layer as well as after the recurrent layer. Without dropout, the inputs to the recurrent layer x_t's are

x_t = f′_t,

where f′_t is the t-th output from the last convolutional layer defined in Sec. 2.2. After adding dropout, we have

r_t^i ∼ Bernoulli(p),
x_t = r_t ⊙ f′_t,

where p is the dropout probability, which we set to 0.5; r_t^i is the i-th component of the binary vector r_t ∈ R^{d′}.
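For concreteness, a minimal numpy version of this masking is shown below. It implements classic (non-inverted) dropout under the convention that a unit is kept with probability 1 − p, and it simply passes activations through at test time; the training loop and weight rescaling are out of scope here.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(f, p=0.5, train=True):
    # f: (T, d') outputs of the last convolutional layer.
    if not train:
        return f                     # at test time, keep all units
    r = (rng.random(f.shape) >= p).astype(f.dtype)  # binary Bernoulli mask
    return r * f                     # elementwise masking, as in the text
```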
# 4.3 Training and Validation
For each of the data sets, we randomly split the full training examples into training and validation. The validation size is the same as the corresponding test size and is balanced in each class.
The models are trained by minimizing the following regularized negative log-likelihood, or cross-entropy, loss, where the X's and y's are document character sequences and their corresponding observed class assignments in the training set D, and w is the collection of model weights. Weight decay is applied with λ = 5 × 10^{−4}.
l = − Σ_{(X,y)∈D} log(p(y|X)) + (λ/2) ‖w‖²
We train our models using AdaDelta (Zeiler, 2012) with ρ = 0.95, ε = 10^{−5} and a batch size of 128. Examples are padded to the longest sequence in each batch and masks are generated to help identify the padded region. The corresponding masks of
Data set    #Ex.  #Cl.  Network     #Params  Error (%)    Network      #Params  Error (%)
AG          120k     4  C2R1D1024       20M  8.39/8.64    C6F2D1024        27M  -/9.85
Sogou       450k     5  C3R1D128         4M  4.82/4.83    C6F2D1024*       27M  -/4.88
DBPedia     560k    14  C2R1D128         3M  1.46/1.43    C6F2D1024        27M  -/1.66
Yelp P.     560k     2  C2R1D128         3M  5.50/5.51    C6F2D1024        27M  -/5.25
Yelp F.     650k     5  C2R1D128         3M  38.00/38.18  C6F2D1024        27M  -/38.40
Yahoo A.    1.4M    10  C2R1D1024       20M  28.62/28.26  C6F2D1024*       27M  -/29.55
Amazon P.   3.6M     2  C3R1D128         4M  5.64/5.87    C6F2D256*       2.7M  -/5.50
Amazon F.   3.0M     5  C3R1D128         4M  40.30/40.77  C6F2D256*       2.7M  -/40.53
Table 3: Results on character-level document classification. CcRrFfDd refers to a network with c convolutional layers, r recurrent layers, f fully-connected layers and d-dimensional feature vectors. * denotes a model which does not distinguish between lower-case and upper-case letters. We only considered the character-level models without using Thesaurus-based data augmentation. We report both the validation and test errors. In our case, the network architecture for each dataset was selected based on the validation errors. The numbers of parameters are approximate.
the outputs from convolutional layers can be computed analytically and are used by the recurrent layer to properly ignore padded inputs. The gradient of the cost function is computed with backpropagation through time (BPTT, (Werbos, 1990)). If the gradient has an L2 norm larger than 5, we rescale it as

g_c = g · min(1, 5 / ‖g‖₂),

where g = dl/dw and g_c is the clipped gradient. An early stopping strategy is employed to prevent overfitting. Before training, we set an initial patience value. At each epoch, we calculate and record the validation loss. If it is lower than the current lowest validation loss by 0.5%, we extend patience by two. Training stops when the number of epochs is larger than the patience. We report the test error rate evaluated using the model with the lowest validation error.
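Both tricks are easy to state in code. The sketch below clips the flattened gradient to an L2 norm of at most 5 and implements the patience rule just described; step() and validation_loss() are placeholders for one epoch of AdaDelta updates and a validation pass, so this is an outline rather than our training script.

```python
import numpy as np

def clip_gradient(g, threshold=5.0):
    # g_c = g * min(1, threshold / ||g||_2)
    norm = np.linalg.norm(g)
    return g * min(1.0, threshold / norm) if norm > 0 else g

def train_with_early_stopping(step, validation_loss, patience=10):
    best, epoch = float('inf'), 0
    while epoch < patience:
        step()                           # one epoch of updates
        loss = validation_loss()
        if loss < best * (1.0 - 0.005):  # improved by at least 0.5%
            best = loss
            patience += 2                # extend patience by two
        epoch += 1
    return best
```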
# 5 Results and Analysis
Experimental results are listed in Table 3. We compare to the best character-level convolutional model without data augmentation from (Zhang et al., 2015) on each data set. Our model achieves comparable performances for all eight data sets with significantly fewer parameters. Specifically, it performs better on the AG's news, Sogou news, DBPedia, Yelp review full, and Yahoo! Answers data sets.
Number of classes Fig. 2 (a) shows how the relative performance of our model changes with respect to the number of classes. It is worth noting that as the number of classes increases, our model achieves better results compared to convolution-only models. For example, our model has a much lower test error on DBPedia, which has 14 classes, but it scores worse on Yelp review polarity and Amazon review polarity, both of which have only two classes. Our conjecture is that more detailed and complete information needs to be preserved from the input text for the model to assign one of many classes to it. The convolution-only model likely loses detailed local features because it has more pooling layers. On the other hand, the proposed model, with fewer pooling layers, can better maintain the detailed information and hence performs better when such needs exist.
Number of training examples Although the effect is less significant, Fig. 2 (b) shows that the proposed model generally works better compared to the convolution-only model when the data size is small. Considering the difference in the number of parameters, we suspect that because the proposed model is more compact, it is less prone to overfitting. Therefore it generalizes better when the training size is limited.
Number of convolutional layers An interesting observation from our experiments is that the model accuracy does not always increase with the number of convolutional layers. Performances peak at two or three convolutional layers and decrease if we add
Figure 2: Relative test performance of the proposed model compared to the convolution-only model w.r.t. (a) the number of classes and (b) the size of training set. Lower is better.
more to the model. As more convolutional layers produce longer character n-grams, this indicates that there is an optimal level of local features to be fed into the recurrent layer. Also, as discussed above, more pooling layers likely lead to the loss of detailed information, which in turn affects the ability of the recurrent layer to capture long-term dependencies.
Number of filters We experiment with large models with 1,024 filters on the AG's news and Yahoo! Answers data sets. Although adding more filters in the convolutional layers does help with the model performances on these two data sets, the gains are limited compared to the increased number of parameters. Validation error improves from 8.75% to 8.39% for AG's news and from 29.48% to 28.62% for Yahoo! Answers, at the cost of a 70-fold increase in the number of model parameters.

Note that in our model we set the number of filters in the convolutional layers to be the same as the dimension of the hidden states in the recurrent layer. It is possible to use more filters in the convolutional layers while keeping the recurrent layer dimension the same, to potentially get better performances at a smaller cost in the number of parameters.
# 6 Conclusion

In this paper, we proposed a hybrid model that processes an input sequence of characters with a number of convolutional layers followed by a single recurrent layer. The proposed model is able to encode documents from the character level, capturing sub-word information.

We validated the proposed model on eight large-scale document classification tasks. The model achieved comparable results with far fewer convolutional layers compared to the convolution-only architecture. We further discussed several aspects that affect the model performance. The proposed model generally performs better when the number of classes is large, the training size is small, and when the number of convolutional layers is set to two or three.

The proposed model is a general encoding architecture that is not limited to document classification tasks or natural language inputs. For example, (Chen et al., 2015; Visin et al., 2015) combined convolution and recurrent layers to tackle image segmentation tasks; (Sainath et al., 2015) applied a similar model to do speech recognition. It will be interesting to see future research on applying the architecture to other applications such as machine translation and music information retrieval. Using recurrent layers as substitutes for pooling layers, to potentially reduce the loss of detailed local information, is also a direction worth exploring.

# Acknowledgments

This work is done as a part of the course DS-GA 1010-001 Independent Study in Data Science at the Center for Data Science, New York University.
# References
[Ballesteros et al.2015] Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2015. Improved transition-based parsing by modeling characters instead of words with lstms. arXiv preprint arXiv:1508.00657.
[Bengio et al.1994] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166.

[Bridle1990] John S Bridle. 1990. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neurocomputing, pages 227–236. Springer.
[Carrier and Cho2014] Pierre Luc Carrier and Kyunghyun Cho. 2014. LSTM networks for sentiment analysis. Deep Learning Tutorials.
[Chen et al.2015] Liang-Chieh Chen, Jonathan T. Barron, George Papandreou, Kevin Murphy, and Alan L. Yuille. 2015. Semantic image segmentation with task-specific edge detection using CNNs and a discriminatively trained domain transform. CoRR, abs/1511.03328.
[Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).

[Gers et al.2000] Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451–2471.
[Glorot et al.2011] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Geoffrey J. Gordon and David B. Dunson, editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS-11), volume 15, pages 315–323. Journal of Machine Learning Research - Workshop and Conference Proceedings.

[Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

[Hochreiter et al.2001] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. 2001. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, volume 1. IEEE.
[Kim et al.2015] Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2015. Character-aware neural language models. arXiv preprint arXiv:1508.06615.

[Kim2014] Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.
[Krizhevsky et al.2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc.

[Ling et al.2015] Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096.

[Mesnil et al.2014] Grégoire Mesnil, Marc'Aurelio Ranzato, Tomas Mikolov, and Yoshua Bengio. 2014. Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews. arXiv preprint arXiv:1412.5335.

[Sainath et al.2015] T.N. Sainath, O. Vinyals, A. Senior, and H. Sak. 2015. Convolutional, long short-term memory, fully connected deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 4580–4584, April.

[Socher et al.2013] Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP.

[Srivastava et al.2014] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958.

[Sundermeyer et al.2015] Martin Sundermeyer, Hermann Ney, and Ralf Schlüter. 2015. From feedforward to recurrent LSTM neural networks for language modeling. Audio, Speech, and Language Processing, IEEE/ACM Transactions on, 23(3):517–529.

[Tang et al.2015] Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1422–1432.
[Visin et al.2015] Francesco Visin, Kyle Kastner, Aaron C. Courville, Yoshua Bengio, Matteo Matteucci, and KyungHyun Cho. 2015. ReSeg: A recurrent neural network for object segmentation. CoRR, abs/1511.07053.

[Werbos1990] P. Werbos. 1990. Backpropagation through time: what does it do and how to do it. In Proceedings of the IEEE, volume 78, pages 1550–1560.

[Zeiler2012] Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701.
[Zhang et al.2015] Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems (NIPS 2015), volume 28. | {
"id": "1508.06615"
} |
1601.06759 | Pixel Recurrent Neural Networks | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent. | http://arxiv.org/pdf/1601.06759 | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20160125 | 20160819 |
# Pixel Recurrent Neural Networks
# Aäron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu
AVDNOORD@GOOGLE.COM NALK@GOOGLE.COM KORAYK@GOOGLE.COM
Google DeepMind
# Abstract
[Figure 1 shows, from left to right: occluded images, sampled completions, and the original images.]
Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.
Figure 1. Image completions sampled from a PixelRNN.
# 1. Introduction
Generative image modeling is a central problem in unsupervised learning. Probabilistic density models can be used for a wide variety of tasks that range from image compression and forms of reconstruction such as image inpainting (e.g., see Figure 1) and deblurring, to generation of new images. When the model is conditioned on external information, possible applications also include creating images based on text descriptions or simulating future frames in a planning task. One of the great advantages in generative modeling is that there are practically endless amounts of image data available to learn from. However, because images are high dimensional and highly structured, estimating the distribution of natural images is extremely challenging.
One of the most important obstacles in generative modeling is building complex and expressive models that are also tractable and scalable. This trade-off has resulted in a large variety of generative models, each having their advantages. Most work focuses on stochastic latent variable models such as VAEs (Rezende et al., 2014; Kingma & Welling, 2013) that aim to extract meaningful representations, but these often come with an intractable inference step that can hinder their performance.
One effective approach to tractably model a joint distribution of the pixels in the image is to cast it as a product of conditional distributions; this approach has been adopted in autoregressive models such as NADE (Larochelle & Murray, 2011) and fully visible neural networks (Neal, 1992; Bengio & Bengio, 2000). The factorization turns the joint modeling problem into a sequence problem, where one learns to predict the next pixel given all the previously generated pixels. But to model the highly nonlinear and long-range correlations between pixels and the complex conditional distributions that result, a highly expressive sequence model is necessary.
Recurrent Neural Networks (RNN) are powerful models that offer a compact, shared parametrization of a series of conditional distributions. RNNs have been shown to excel at hard sequence problems ranging from handwriting generation (Graves, 2013), to character prediction (Sutskever et al., 2011) and to machine translation (Kalchbrenner & Blunsom, 2013). A two-dimensional RNN has produced very promising results in modeling grayscale images and textures (Theis & Bethge, 2015).
In this paper we advance two-dimensional RNNs and apply them to large-scale modeling of natural images.
Figure 2. Left: To generate pixel xi one conditions on all the previously generated pixels left and above of xi. Center: To generate a pixel in the multi-scale case we can also condition on the subsampled image pixels (in light blue). Right: Diagram of the connectivity inside a masked convolution. In the first layer, each of the RGB channels is connected to previous channels and to the context, but is not connected to itself. In subsequent layers, the channels are also connected to themselves.
The contributions of the paper are as follows. In Section 3 we design two types of PixelRNNs corresponding to the two types of LSTM layers; we describe the purely convolutional PixelCNN that is our fastest architecture; and we design a Multi-Scale version of the PixelRNN. In Section 5 we show the relative benefits of using the discrete softmax distribution in our models and of adopting residual connections for the LSTM layers. Next we test the models on MNIST and on CIFAR-10 and show that they obtain log-likelihood scores that are considerably better than previous results. We also provide results for the large-scale ImageNet dataset resized to both 32 × 32 and 64 × 64 pixels; to our knowledge likelihood values from generative models have not previously been reported on this dataset. Finally, we give a qualitative evaluation of the samples generated from the PixelRNNs.
The resulting PixelRNNs are composed of up to twelve fast two-dimensional Long Short-Term Memory (LSTM) layers. These layers use LSTM units in their state (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2009) and adopt a convolution to compute at once all the states along one of the spatial dimensions of the data. We design two types of these layers. The first type is the Row LSTM layer, where the convolution is applied along each row; a similar technique is described in (Stollenga et al., 2015). The second type is the Diagonal BiLSTM layer, where the convolution is applied in a novel fashion along the diagonals of the image. The networks also incorporate residual connections (He et al., 2015) around LSTM layers; we observe that this helps with training of the PixelRNN for up to twelve layers of depth.
We also consider a second, simplified architecture which shares the same core components as the PixelRNN. We observe that Convolutional Neural Networks (CNN) can also be used as a sequence model with a fixed dependency range, by using masked convolutions. The PixelCNN architecture is a fully convolutional network of fifteen layers that preserves the spatial resolution of its input throughout the layers and outputs a conditional distribution at each location.
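To make the masking concrete, the snippet below builds mask A / mask B weight masks for a convolution over RGB feature maps, in the spirit of Figure 2 (Right). It is a schematic numpy construction under the usual assumption that feature channels are split into R, G, B groups in thirds; it is not the exact code of our PixelCNN.

```python
import numpy as np

def conv_mask(kh, kw, c_in, c_out, mask_type='B'):
    # Mask over a (c_out, c_in, kh, kw) kernel: zero out future pixels,
    # and within the center pixel zero out forbidden colour connections.
    mask = np.ones((c_out, c_in, kh, kw), dtype=np.float32)
    yc, xc = kh // 2, kw // 2
    mask[:, :, yc, xc + 1:] = 0.0   # right of center, same row
    mask[:, :, yc + 1:, :] = 0.0    # rows below center

    def colour(idx, c):             # map feature index -> R/G/B group
        return idx * 3 // c

    for o in range(c_out):
        for i in range(c_in):
            if mask_type == 'A' and colour(i, c_in) >= colour(o, c_out):
                mask[o, i, yc, xc] = 0.0  # mask A: no same-colour link
            if mask_type == 'B' and colour(i, c_in) > colour(o, c_out):
                mask[o, i, yc, xc] = 0.0  # mask B: same-colour link allowed
    return mask
```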
# 2. Model
Our aim is to estimate a distribution over natural images that can be used to tractably compute the likelihood of images and to generate new ones. The network scans the image one row at a time and one pixel at a time within each row. For each pixel it predicts the conditional distribution over the possible pixel values given the scanned context. Figure 2 illustrates this process. The joint distribution over the image pixels is factorized into a product of conditional distributions. The parameters used in the predictions are shared across all pixel positions in the image.

To capture the generation process, Theis & Bethge (2015) propose to use a two-dimensional LSTM network (Graves & Schmidhuber, 2009) that starts at the top left pixel and proceeds towards the bottom right pixel. The advantage of the LSTM network is that it effectively handles long-range dependencies that are central to object and scene understanding. The two-dimensional structure ensures that the signals are well propagated both in the left-to-right and top-to-bottom directions.

In this section we first focus on the form of the distribution, whereas the next section will be devoted to describing the architectural innovations inside PixelRNN.

Both PixelRNN and PixelCNN capture the full generality of pixel inter-dependencies without introducing independence assumptions as in e.g., latent variable models. The dependencies are also maintained between the RGB color values within each individual pixel. Furthermore, in contrast to previous approaches that model the pixels as continuous values (e.g., Theis & Bethge (2015); Gregor et al. (2014)), we model the pixels as discrete values using a multinomial distribution implemented with a simple softmax layer. We observe that this approach gives both representational and training advantages for our models.
# 2.1. Generating an Image Pixel by Pixel
The goal is to assign a probability p(x) to each image x formed of n × n pixels. We can write the image x as a one-dimensional sequence x_1, . . . , x_{n^2} where pixels are taken from the image row by row. To estimate the joint distribution p(x) we write it as the product of the conditional distributions over the pixels:

p(x) = \prod_{i=1}^{n^2} p(x_i \mid x_1, \ldots, x_{i-1})    (1)

The value p(x_i | x_1, . . . , x_{i-1}) is the probability of the i-th pixel x_i given all the previous pixels x_1, . . . , x_{i-1}. The generation proceeds row by row and pixel by pixel. Figure 2 (Left) illustrates the conditioning scheme.

Each pixel x_i is in turn jointly determined by three values, one for each of the color channels Red, Green and Blue (RGB). We rewrite the distribution p(x_i | x_{<i}) as the following product:

p(x_{i,R} \mid x_{<i}) \, p(x_{i,G} \mid x_{<i}, x_{i,R}) \, p(x_{i,B} \mid x_{<i}, x_{i,R}, x_{i,G})    (2)
Each of the colors is thus conditioned on the other channels as well as on all the previously generated pixels.
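As a minimal sketch of what this factorization means at training time, the snippet below scores a known image by summing the log-probabilities of each channel value under precomputed conditionals; the `probs` tensor layout and the existence of a single parallel forward pass producing it are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def image_log_likelihood(probs, image):
    """Sum of per-channel conditional log-probabilities (Eqs. 1-2).

    probs: (n, n, 3, 256) array of the model's conditional distributions,
           computable in one parallel pass since the whole image is known.
    image: (n, n, 3) uint8 array with values in [0, 255].
    """
    n = image.shape[0]
    ll = 0.0
    for r in range(n):
        for c in range(n):
            for ch in range(3):                      # R, then G, then B
                ll += np.log(probs[r, c, ch, image[r, c, ch]])
    return ll
```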
Figure 3. In the Diagonal BiLSTM, to allow for parallelization along the diagonals, the input map is skewed by offsetting each row by one position with respect to the previous row. When the spatial layer is computed left to right and column by column, the output map is shifted back into the original size. The convolution uses a kernel of size 2 × 1.
Note that during training and evaluation the distributions over the pixel values are computed in parallel, while the generation of an image is sequential.
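The sequential half of this contrast can be made concrete with a small sampling loop. The `model` callable below is a hypothetical stand-in for a trained PixelRNN/PixelCNN; only the raster-scan, channel-by-channel ordering is taken from the paper.

```python
import numpy as np

def sample_image(model, n, rng=np.random.default_rng(0)):
    """Sample an n x n RGB image pixel by pixel in raster order.

    `model(image, r, c, ch)` is an assumed callable returning the 256-way
    conditional distribution for channel `ch` of pixel (r, c), given the
    already-filled entries of `image`.
    """
    image = np.zeros((n, n, 3), dtype=np.uint8)
    for r in range(n):
        for c in range(n):
            for ch in range(3):                      # R, G, B in turn
                p = model(image, r, c, ch)           # shape (256,)
                image[r, c, ch] = rng.choice(256, p=p)
    return image
```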
# 2.2. Pixels as Discrete Variables
Previous approaches use a continuous distribution for the values of the pixels in the image (e.g. Theis & Bethge (2015); Uria et al. (2014)). By contrast we model p(x) as a discrete distribution, with every conditional distribution in Equation 2 being a multinomial that is modeled with a softmax layer. Each channel variable x_{i,*} simply takes one of 256 distinct values. The discrete distribution is representationally simple and has the advantage of being arbitrarily multimodal without prior on the shape (see Fig. 6). Experimentally we also find the discrete distribution to be easy to learn and to produce better performance compared to a continuous distribution (Section 5).
# 3. Pixel Recurrent Neural Networks
In this section we describe the architectural components that compose the PixelRNN. In Sections 3.1 and 3.2, we describe the two types of LSTM layers that use convolutions to compute at once the states along one of the spatial dimensions. In Section 3.3 we describe how to incorporate residual connections to improve the training of a PixelRNN with many LSTM layers. In Section 3.4 we describe the softmax layer that computes the discrete joint distribution of the colors and the masking technique that ensures the proper conditioning scheme. In Section 3.5 we describe the PixelCNN architecture. Finally in Section 3.6 we describe the multi-scale architecture.
# 3.1. Row LSTM
The Row LSTM is a unidirectional layer that processes the image row by row from top to bottom, computing features for a whole row at once; the computation is performed with a one-dimensional convolution. For a pixel x_i the layer captures a roughly triangular context above the pixel, as shown in Figure 4 (center). The kernel of the one-dimensional convolution has size k × 1 where k ≥ 3; the larger the value of k, the broader the context that is captured. The weight sharing in the convolution ensures translation invariance of the computed features along each row.

The computation proceeds as follows. An LSTM layer has an input-to-state component and a recurrent state-to-state component that together determine the four gates inside the LSTM core. To enhance parallelization in the Row LSTM the input-to-state component is first computed for the entire two-dimensional input map; for this a k × 1 convolution is used to follow the row-wise orientation of the LSTM itself. The convolution is masked to include only the valid context (see Section 3.4) and produces a tensor of size 4h × n × n, representing the four gate vectors for each position in the input map, where h is the number of output feature maps.

To compute one step of the state-to-state component of the LSTM layer, one is given the previous hidden and cell states h_{i-1} and c_{i-1}, each of size h × n × 1. The new hidden and cell states h_i, c_i are obtained as follows:
[o_i, f_i, i_i, g_i] = \sigma(K^{ss} \circledast h_{i-1} + K^{is} \circledast x_i)
c_i = f_i \odot c_{i-1} + i_i \odot g_i                                        (3)
h_i = o_i \odot \tanh(c_i)

where x_i of size h × n × 1 is row i of the input map, ⊛ represents the convolution operation and ⊙ the elementwise multiplication. The weights K^{ss} and K^{is} are the kernel weights for the state-to-state and the input-to-state components, where the latter is precomputed as described above. In the case of the output, forget and input gates o_i, f_i and i_i, the activation σ is the logistic sigmoid function, whereas for the content gate g_i, σ is the tanh function. Each step computes at once the new state for an entire row of the input map. Because the Row LSTM has a triangular receptive field (Figure 4), it is unable to capture the entire available context.
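The following NumPy sketch walks through one state-to-state step of Equation 3 for a single row; kernel shapes, zero padding, and the gate ordering within the 4h map are assumptions made for illustration, and the masked input-to-state contribution is taken as precomputed.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def row_lstm_step(K_ss, gates_is, h_prev, c_prev):
    """One Row LSTM step (Eq. 3) for a single row -- a sketch.

    K_ss:     (4h, h, k) state-to-state kernel, convolved along the row.
    gates_is: (4h, n) precomputed input-to-state contribution for row i.
    h_prev:   (h, n) hidden state of the previous row.
    c_prev:   (h, n) cell state of the previous row.
    """
    fourh, h, k = K_ss.shape
    n = h_prev.shape[1]
    pad = k // 2
    hp = np.pad(h_prev, ((0, 0), (pad, pad)))
    # 1-D convolution of the previous hidden row with the k x 1 kernel.
    gates_ss = np.zeros((fourh, n))
    for j in range(n):
        gates_ss[:, j] = np.tensordot(K_ss, hp[:, j:j + k],
                                      axes=([1, 2], [0, 1]))
    g = gates_is + gates_ss
    o, f, i = sigmoid(g[:h]), sigmoid(g[h:2 * h]), sigmoid(g[2 * h:3 * h])
    content = np.tanh(g[3 * h:])                 # the content gate g_i
    c = f * c_prev + i * content
    return o * np.tanh(c), c                      # new h_i, c_i
```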
Figure 4. Visualization of the input-to-state and state-to-state mappings for the three proposed architectures.
# 3.2. Diagonal BiLSTM
The Diagonal BiLSTM is designed to both parallelize the computation and to capture the entire available context for any image size. Each of the two directions of the layer scans the image in a diagonal fashion starting from a corner at the top and reaching the opposite corner at the bottom. Each step in the computation computes at once the LSTM state along a diagonal in the image. Figure 4 (right) illustrates the computation and the resulting receptive field.

The diagonal computation proceeds as follows. We first skew the input map into a space that makes it easy to apply convolutions along diagonals. The skewing operation offsets each row of the input map by one position with respect to the previous row, as illustrated in Figure 3; this results in a map of size n × (2n − 1). At this point we can compute the input-to-state and state-to-state components of the Diagonal BiLSTM. For each of the two directions, the input-to-state component is simply a 1 × 1 convolution K^{is} that contributes to the four gates in the LSTM core; the operation generates a 4h × n × n tensor. The state-to-state recurrent component is then computed with a column-wise convolution K^{ss} that has a kernel of size 2 × 1. The step takes the previous hidden and cell states, combines the contribution of the input-to-state component and produces the next hidden and cell states, as defined in Equation 3. The output feature map is then skewed back into an n × n map by removing the offset positions. This computation is repeated for each of the two directions. Given the two output maps, to prevent the layer from seeing future pixels, the right output map is then shifted down by one row and added to the left output map.

Besides reaching the full dependency field, the Diagonal BiLSTM has the additional advantage that it uses a convolutional kernel of size 2 × 1 that processes a minimal amount of information at each step, yielding a highly nonlinear computation. Kernel sizes larger than 2 × 1 are not particularly useful as they do not broaden the already global receptive field of the Diagonal BiLSTM.
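The skewing operation itself is simple to write down. Below is a small NumPy sketch of skew and its inverse; the round trip at the end checks that unskewing recovers the original map.

```python
import numpy as np

def skew(x):
    """Offset row r of an (n, n) map by r positions, giving (n, 2n-1)."""
    n = x.shape[0]
    out = np.zeros((n, 2 * n - 1), dtype=x.dtype)
    for r in range(n):
        out[r, r:r + n] = x[r]
    return out

def unskew(x):
    """Inverse of skew: recover the (n, n) map from an (n, 2n-1) map."""
    n = x.shape[0]
    return np.stack([x[r, r:r + n] for r in range(n)])

x = np.arange(9).reshape(3, 3)
assert np.array_equal(unskew(skew(x)), x)   # round trip is lossless
```

After skewing, each column of the wider map corresponds to one diagonal of the original image, which is what lets the 2 × 1 column-wise convolution advance the recurrence one diagonal at a time.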
# 3.3. Residual Connections
We train PixelRNNs of up to twelve layers of depth. As a means to both increase convergence speed and propagate signals more directly through the network, we deploy residual connections (He et al., 2015) from one LSTM layer to the next. Figure 5 shows a diagram of the residual blocks. The input map to the PixelRNN LSTM layer has 2h features. The input-to-state component reduces the number of features by producing h features per gate. After applying the recurrent layer, the output map is upsampled back to 2h features per position via a 1 × 1 convolution and the input map is added to the output map. This method is related to previous approaches that use gating along the depth of the recurrent network (Kalchbrenner et al., 2015; Zhang et al., 2016), but has the advantage of not requiring additional gates. Apart from residual connections, one can also use learnable skip connections from each layer to the output. In the experiments we evaluate the relative effectiveness of residual and layer-to-output skip connections.
Figure 5. Residual blocks for a PixelCNN (left) and PixelRNNs.
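A hedged PyTorch sketch of the PixelRNN residual block on the right of Figure 5 might look as follows; the `lstm_layer` argument is a placeholder for either recurrent layer type and is assumed to map 2h input features to h output features.

```python
import torch
import torch.nn as nn

class ResidualLSTMBlock(nn.Module):
    """Residual wrapper around an LSTM layer (Fig. 5, right) -- a sketch.

    `lstm_layer` is assumed to map a (B, 2h, n, n) feature map to a
    (B, h, n, n) map (its input-to-state convolution produces h features
    per gate, as described above).
    """
    def __init__(self, lstm_layer, h):
        super().__init__()
        self.lstm_layer = lstm_layer
        self.upsample = nn.Conv2d(h, 2 * h, kernel_size=1)  # back to 2h

    def forward(self, x):
        # Add the 2h-feature input map to the upsampled layer output.
        return x + self.upsample(self.lstm_layer(x))
```

Note that, unlike the gated depth connections it is compared against, this block introduces no extra gates, only the 1 × 1 upsampling convolution.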
# 3.4. Masked Convolution
The h features for each input position at every layer in the network are split into three parts, each corresponding to one of the RGB channels. When predicting the R channel for the current pixel x_i, only the generated pixels left and above of x_i can be used as context. When predicting the G channel, the value of the R channel can also be used as context in addition to the previously generated pixels. Likewise, for the B channel, the values of both the R and G channels can be used. To restrict connections in the network to these dependencies, we apply a mask to the input-to-state convolutions and to other purely convolutional layers in a PixelRNN.

We use two types of masks that we indicate with mask A and mask B, as shown in Figure 2 (Right). Mask A is applied only to the first convolutional layer in a PixelRNN and restricts the connections to those neighboring pixels and to those colors in the current pixels that have already been predicted. On the other hand, mask B is applied to all the subsequent input-to-state convolutional transitions and relaxes the restrictions of mask A by also allowing the connection from a color to itself. The masks can be easily implemented by zeroing out the corresponding weights in the input-to-state convolutions after each update. Similar masks have also been used in variational autoencoders (Gregor et al., 2014; Germain et al., 2015).
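One way to realize these masks, sketched below in NumPy under the assumption that feature maps are split into consecutive R, G, B groups along the channel dimension, is to build a binary tensor that is multiplied into the convolution weights after each update.

```python
import numpy as np

def make_mask(kernel_size, in_ch, out_ch, mask_type):
    """Binary mask for an input-to-state convolution -- a sketch.

    Positions strictly below the kernel centre, or to its right on the
    same row, are zeroed. At the centre, connectivity between the RGB
    groups follows mask A (no connection from a channel group to itself)
    or mask B (self-connection allowed).
    """
    k = kernel_size
    mask = np.ones((out_ch, in_ch, k, k), dtype=np.float32)
    mask[:, :, k // 2 + 1:, :] = 0.0           # rows below the centre
    mask[:, :, k // 2, k // 2 + 1:] = 0.0      # right of the centre

    def group(c, n):                            # 0 = R, 1 = G, 2 = B
        return 3 * c // n

    for o in range(out_ch):
        for i in range(in_ch):
            allowed = group(i, in_ch) < group(o, out_ch)
            if mask_type == 'B':
                allowed = allowed or group(i, in_ch) == group(o, out_ch)
            if not allowed:
                mask[o, i, k // 2, k // 2] = 0.0
    return mask
```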
# 3.5. PixelCNN
The Row and Diagonal LSTM layers have a potentially unbounded dependency range within their receptive field. This comes with a computational cost as each state needs to be computed sequentially. One simple workaround is to make the receptive field large, but not unbounded. We can use standard convolutional layers to capture a bounded receptive field and compute features for all pixel positions at once. The PixelCNN uses multiple convolutional layers that preserve the spatial resolution; pooling layers are not used. Masks are adopted in the convolutions to avoid seeing the future context; masks have previously also been used in non-convolutional models such as MADE (Germain et al., 2015). Note that the advantage of parallelization of the PixelCNN over the PixelRNN is only available during training or during evaluation of test images. The image generation process is sequential for both kinds of networks, as each sampled pixel needs to be given as input back into the network.
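A miniature, single-channel PixelCNN illustrating this idea is sketched below in PyTorch; the RGB channel masking of Section 3.4 is omitted for brevity, so only the spatial part of the masks is shown, and the layer sizes are illustrative rather than those of Table 1.

```python
import torch
import torch.nn as nn

def spatial_mask(k, keep_centre):
    """k x k mask keeping positions above/left of the centre. Channel
    masking (Section 3.4) is omitted here for brevity."""
    m = torch.ones(k, k)
    m[k // 2, k // 2 + keep_centre:] = 0       # centre row, centre onwards
    m[k // 2 + 1:, :] = 0                      # rows below the centre
    return m

class MaskedConv2d(nn.Conv2d):
    def __init__(self, mask_type, in_ch, out_ch, k):
        super().__init__(in_ch, out_ch, k, padding=k // 2)
        self.register_buffer('mask', spatial_mask(k, mask_type == 'B'))

    def forward(self, x):
        # Multiply the fixed mask into the weights at every call.
        return nn.functional.conv2d(x, self.weight * self.mask,
                                    self.bias, padding=self.padding)

# A toy stack: 7x7 mask-A layer, two 3x3 mask-B layers, 1x1 output layer
# producing 256 logits per pixel for one input channel (e.g. grayscale).
h = 32
net = nn.Sequential(
    MaskedConv2d('A', 1, h, 7), nn.ReLU(),
    MaskedConv2d('B', h, h, 3), nn.ReLU(),
    MaskedConv2d('B', h, h, 3), nn.ReLU(),
    nn.Conv2d(h, 256, 1),
)
logits = net(torch.zeros(1, 1, 8, 8))          # -> shape (1, 256, 8, 8)
```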
# 3.6. Multi-Scale PixelRNN
The Multi-Scale PixelRNN is composed of an unconditional PixelRNN and one or more conditional PixelRNNs. The unconditional network first generates in the standard way a smaller s × s image that is subsampled from the original image. The conditional network then takes the s × s image as an additional input and generates a larger n × n image, as shown in Figure 2 (Middle).

The conditional network is similar to a standard PixelRNN, but each of its layers is biased with an upsampled version of the small s × s image. The upsampling and biasing processes are defined as follows. In the upsampling process, one uses a convolutional network with deconvolutional layers to construct an enlarged feature map of size c × n × n, where c is the number of features in the output map of the upsampling network. Then, in the biasing process, for each layer in the conditional PixelRNN, one simply maps the c × n × n conditioning map into a 4h × n × n map that is added to the input-to-state map of the corresponding layer; this is performed using a 1 × 1 unmasked convolution. The larger n × n image is then generated as usual.
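A possible PyTorch reading of this upsampling-and-biasing step is sketched below; the choice of a single deconvolution, one conditioning bias per layer, and all sizes are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class ConditioningBias(nn.Module):
    """Bias one layer of a conditional PixelRNN on a small image -- a sketch.

    The deconvolution enlarges the s x s conditioning image to n x n with
    c feature maps; the unmasked 1 x 1 convolution then maps the c x n x n
    map to 4h x n x n so it can be added to the layer's input-to-state
    gate map.
    """
    def __init__(self, c, h, scale):
        super().__init__()
        self.upsample = nn.ConvTranspose2d(3, c, kernel_size=scale,
                                           stride=scale)
        self.to_gates = nn.Conv2d(c, 4 * h, kernel_size=1)  # unmasked 1x1

    def forward(self, small_image, gates_is):
        return gates_is + self.to_gates(self.upsample(small_image))

bias = ConditioningBias(c=16, h=8, scale=2)
small = torch.zeros(1, 3, 16, 16)          # the s x s image, s = 16
gates = torch.zeros(1, 32, 32, 32)         # the 4h x n x n gate map, n = 32
out = bias(small, gates)                   # same shape as `gates`
```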
# 4. Specifications of Models

In this section we give the specifications of the PixelRNNs used in the experiments. We have four types of networks: the PixelRNN based on Row LSTM, the one based on Diagonal BiLSTM, the fully convolutional one and the Multi-Scale one.

Table 1 specifies each layer in the single-scale networks. The first layer is a 7 × 7 convolution that uses the mask of type A. The two types of LSTM networks then use a variable number of recurrent layers. The input-to-state convolution in this layer uses a mask of type B, whereas the state-to-state convolution is not masked. The PixelCNN uses convolutions of size 3 × 3 with a mask of type B. The top feature map is then passed through a couple of layers consisting of a Rectified Linear Unit (ReLU) and a 1 × 1 convolution. For the CIFAR-10 and ImageNet experiments, these layers have 1024 feature maps; for the MNIST experiment, the layers have 32 feature maps. Residual and layer-to-output connections are used across the layers of all three networks.

| | PixelCNN | Row LSTM | Diagonal BiLSTM |
|---|---|---|---|
| First layer (all) | 7 × 7 conv, mask A | 7 × 7 conv, mask A | 7 × 7 conv, mask A |
| Multiple residual blocks (see Fig. 5) | Conv 3 × 3, mask B | i-s: 3 × 1, mask B; s-s: 3 × 1, no mask | i-s: 1 × 1, mask B; s-s: 1 × 2, no mask |
| Output layers (all) | ReLU followed by 1 × 1 conv, mask B (2 layers) | | |
| Final layer (all) | 256-way Softmax for each RGB color (natural images) or Sigmoid (MNIST) | | |

Table 1. Details of the architectures. In the LSTM architectures i-s and s-s stand for input-to-state and state-to-state convolutions.

The networks used in the experiments have the following hyperparameters. For MNIST we use a Diagonal BiLSTM with 7 layers and a value of h = 16 (Section 3.3 and Figure 5 right). For CIFAR-10 the Row and Diagonal BiLSTMs have 12 layers and a number of h = 128 units. The PixelCNN has 15 layers and h = 128. For 32 × 32 ImageNet we adopt a 12 layer Row LSTM with h = 384 units and for 64 × 64 ImageNet we use a 4 layer Row LSTM with h = 512 units; the latter model does not use residual connections.

# 5. Experiments

In this section we describe our experiments and results. We begin by describing the way we evaluate and compare our results. In Section 5.2 we give details about the training. Then we give results on the relative effectiveness of architectural components and our best results on the MNIST, CIFAR-10 and ImageNet datasets.
# 5.1. Evaluation
All our models are trained and evaluated on the log-likelihood loss function coming from a discrete distribution. Although natural image data is usually modeled with continuous distributions using density functions, we can compare our results with previous art in the following way.
In the literature it is currently best practice to add real-valued noise to the pixel values to dequantize the data when using density functions (Uria et al., 2013). When uniform noise is added (with values in the interval [0, 1]), then the log-likelihoods of continuous and discrete models are directly comparable (Theis et al., 2015). In our case, we can use the values from the discrete distribution as a piecewise-uniform continuous function that has a constant value for every interval [i, i + 1], i = 1, 2, . . . , 256. This corresponding distribution will have the same log-likelihood (on data with added noise) as the original discrete distribution (on discrete data).
For MNIST we report the negative log-likelihood in nats, as is common practice in the literature. For CIFAR-10 and ImageNet we report negative log-likelihoods in bits per dimension. The total discrete log-likelihood is normalized by the dimensionality of the images (e.g., 32 × 32 × 3 = 3072 for CIFAR-10). These numbers are interpretable as the number of bits that a compression scheme based on this model would need to compress every RGB color value (van den Oord & Schrauwen, 2014b; Theis et al., 2015); in practice there is also a small overhead due to arithmetic coding.
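The normalization is plain arithmetic, as the short sketch below shows for CIFAR-10-sized images; the per-image log-likelihood value is hypothetical.

```python
import numpy as np

# Converting a total discrete negative log-likelihood (in nats) to
# bits per dimension, with CIFAR-10 sizes; the NLL value is assumed.
dims = 32 * 32 * 3                          # 3072 dimensions per image
nll_nats_per_image = 6530.0                 # hypothetical model output
bits_per_dim = nll_nats_per_image / (np.log(2) * dims)
print(round(bits_per_dim, 2))               # -> 3.07
```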
# 5.2. Training Details
Our models are trained on GPUs using the Torch toolbox. From the different parameter update rules tried, RMSProp gives best convergence performance and is used for all experiments. The learning rate schedules were manually set for every dataset to the highest values that allowed fast convergence. The batch sizes also vary for different datasets. For smaller datasets such as MNIST and CIFAR-10 we use smaller batch sizes of 16 images as this seems to regularize the models. For ImageNet we use as large a batch size as allowed by the GPU memory; this corresponds to 64 images/batch for 32 × 32 ImageNet, and 32 images/batch for 64 × 64 ImageNet. Apart from scaling and centering the images at the input of the network, we don't use any other preprocessing or augmentation. For the multinomial loss function we use the raw pixel color values as categories. For all the PixelRNN models, we learn the initial recurrent state of the network.
# 5.3. Discrete Softmax Distribution

Apart from being intuitive and easy to implement, we find that using a softmax on discrete pixel values instead of a mixture density approach on continuous pixel values gives better results. For the Row LSTM model with a softmax output distribution we obtain 3.06 bits/dim on the CIFAR-10 validation set. For the same model with a Mixture of Conditional Gaussian Scale Mixtures (MCGSM) (Theis & Bethge, 2015) we obtain 3.22 bits/dim.

In Figure 6 we show a few softmax activations from the model. Although we don't embed prior information about the meaning or relations of the 256 color categories, e.g. that pixel values 51 and 52 are neighbors, the distributions predicted by the model are meaningful and can be multimodal, skewed, peaked or long tailed. Also note that values 0 and 255 often get a much higher probability as they are more frequent. Another advantage of the discrete distribution is that we do not worry about parts of the distribution mass lying outside the interval [0, 255], which is something that typically happens with continuous distributions.

Figure 6. Example softmax activations from the model. The top left shows the distribution of the first pixel red value (first value to sample).
# 5.4. Residual Connections

Another core component of the networks is residual connections. In Table 2 we show the results of having residual connections, having standard skip connections or having both, in the 12-layer CIFAR-10 Row LSTM model. We see that using residual connections is as effective as using skip connections; using both is also effective and preserves the advantage.
| | No skip | Skip |
|---|---|---|
| No residual | 3.22 | 3.09 |
| Residual | 3.07 | 3.06 |

Table 2. Effect of residual and skip connections in the Row LSTM network evaluated on the CIFAR-10 validation set in bits/dim.
When using both the residual and skip connections, we see in Table 3 that performance of the Row LSTM improves with increased depth. This holds for up to the 12 LSTM layers that we tried.
Figure 7. Samples from models trained on CIFAR-10 (left) and ImageNet 32x32 (right) images. In general we can see that the models capture local spatial dependencies relatively well. The ImageNet model seems to be better at capturing more global structures than the CIFAR-10 model. The ImageNet model was larger and trained on much more data, which explains the qualitative difference in samples.
| # layers | 1 | 2 | 3 | 6 | 9 | 12 |
|---|---|---|---|---|---|---|
| NLL | 3.30 | 3.20 | 3.17 | 3.09 | 3.08 | 3.06 |

Table 3. Effect of the number of layers on the negative log-likelihood evaluated on the CIFAR-10 validation set (bits/dim).
# 5.5. MNIST
Although the goal of our work was to model natural images on a large scale, we also tried our model on the binary version (Salakhutdinov & Murray, 2008) of MNIST (LeCun et al., 1998) as it is a good sanity check and there is a lot of previous art on this dataset to compare with. In Table 4 we report the performance of the Diagonal BiLSTM model and that of previous published results. To our knowledge this is the best reported result on MNIST so far.
| Model | NLL Test |
|---|---|
| DBM 2hl [1] | ≈ 84.62 |
| DBN 2hl [2] | ≈ 84.55 |
| NADE [3] | 88.33 |
| EoNADE 2hl (128 orderings) [3] | 85.10 |
| EoNADE-5 2hl (128 orderings) [4] | 84.68 |
| DLGM [5] | ≈ 86.60 |
| DLGM 8 leapfrog steps [6] | ≈ 85.51 |
| DARN 1hl [7] | ≈ 84.13 |
| MADE 2hl (32 masks) [8] | 86.64 |
| DRAW [9] | ≤ 80.97 |
| PixelCNN | 81.30 |
| Row LSTM | 80.54 |
| Diagonal BiLSTM (1 layer, h = 32) | 80.75 |
| Diagonal BiLSTM (7 layers, h = 16) | 79.20 |

Table 4. Test set performance of different models on MNIST in nats (negative log-likelihood). Prior results taken from [1] (Salakhutdinov & Hinton, 2009), [2] (Murray & Salakhutdinov, 2009), [3] (Uria et al., 2014), [4] (Raiko et al., 2014), [5] (Rezende et al., 2014), [6] (Salimans et al., 2015), [7] (Gregor et al., 2014), [8] (Germain et al., 2015), [9] (Gregor et al., 2015).
# 5.6. CIFAR-10
Next we test our models on the CIFAR-10 dataset (Krizhevsky, 2009). Table 5 lists the results of our models and that of previously published approaches. All our results were obtained without data augmentation. For the proposed networks, the Diagonal BiLSTM has the best performance, followed by the Row LSTM and the PixelCNN. This coincides with the size of the respective receptive fields: the Diagonal BiLSTM has a global view, the Row LSTM has a partially occluded view and the PixelCNN sees the fewest pixels in the context. This suggests that effectively capturing a large receptive field is important. Figure 7 (left) shows CIFAR-10 samples generated
from the Diagonal BiLSTM.
# 5.7. ImageNet
Although to our knowledge there are no published results on the ILSVRC ImageNet dataset (Russakovsky et al., 2015) that we can compare our models with, we give our ImageNet log-likelihood performance in Table 6 (without data augmentation).
Figure 8. Samples from models trained on ImageNet 64x64 images. Left: normal model, right: multi-scale model. The single-scale model trained on 64x64 images is less able to capture global structure than the 32x32 model. The multi-scale model seems to resolve this problem. Although these models get similar performance in log-likelihood, the samples on the right do seem globally more coherent.
| Model | NLL Test (Train) |
|---|---|
| Uniform Distribution | 8.00 |
| Multivariate Gaussian | 4.70 |
| NICE [1] | 4.48 |
| Deep Diffusion [2] | 4.20 |
| Deep GMMs [3] | 4.00 |
| RIDE [4] | 3.47 |
| PixelCNN | 3.14 (3.08) |
| Row LSTM | 3.07 (3.00) |
| Diagonal BiLSTM | 3.00 (2.93) |
Table 5. Test set performance of different models on CIFAR-10 in bits/dim. For our models we give training performance in brackets. [1] (Dinh et al., 2014), [2] (Sohl-Dickstein et al., 2015), [3] (van den Oord & Schrauwen, 2014a), [4] personal communication (Theis & Bethge, 2015).
| Image size | NLL Validation (Train) |
|---|---|
| 32 × 32 | 3.86 (3.83) |
| 64 × 64 | 3.63 (3.57) |

Table 6. Negative log-likelihood performance on 32 × 32 and 64 × 64 ImageNet in bits/dim.

Figure 9. Image completions sampled from a model that was trained on 32x32 ImageNet images. Note that diversity of the completions is high, which can be attributed to the log-likelihood loss function used in this generative model, as it encourages models with high entropy. As these are sampled from the model, we can easily generate millions of different completions. It is also interesting to see that textures such as water, wood and shrubbery are also inpainted relatively well (see Figure 1).
On ImageNet the current PixelRNNs do not appear to overfit, as we saw that their validation performance improved with size and depth. The main constraints on model size are currently computation time and GPU memory.
Note that the ImageNet models are in general less compressible than the CIFAR-10 images. ImageNet has a greater variety of images, and the CIFAR-10 images were most likely resized with a different algorithm than the one we used for ImageNet images. The ImageNet images are less blurry, which means neighboring pixels are less correlated to each other and thus less predictable. Because the downsampling method can influence the compression performance, we have made the used downsampled images available1.
Figure 7 (right) shows 32 × 32 samples drawn from our model trained on ImageNet. Figure 8 shows 64 × 64 samples from the same model with and without multi-scale conditioning. Finally, we also show image completions sampled from the model in Figure 9.
1http://image-net.org/small/download.php
# 6. Conclusion

In this paper we significantly improve and build upon deep recurrent neural networks as generative models for natural images. We have described novel two-dimensional LSTM layers: the Row LSTM and the Diagonal BiLSTM, that scale more easily to larger datasets. The models were trained to model the raw RGB pixel values. We treated the pixel values as discrete random variables by using a softmax layer in the conditional distributions. We employed masked convolutions to allow PixelRNNs to model full dependencies between the color channels. We proposed and evaluated architectural improvements in these models resulting in PixelRNNs with up to 12 LSTM layers.

We have shown that the PixelRNNs significantly improve the state of the art on the MNIST and CIFAR-10 datasets. We also provide new benchmarks for generative image modeling on the ImageNet dataset. Based on the samples and completions drawn from the models we can conclude that the PixelRNNs are able to model both spatially local and long-range correlations and are able to produce images that are sharp and coherent. Given that these models improve as we make them larger and that there is practically unlimited data available to train on, more computation and larger models are likely to further improve the results.

# Acknowledgements

The authors would like to thank Shakir Mohamed and Guillaume Desjardins for helpful input on this paper and Lucas Theis, Alex Graves, Karen Simonyan, Lasse Espeholt, Danilo Rezende, Karol Gregor and Ivo Danihelka for insightful discussions.

# References

Bengio, Yoshua and Bengio, Samy. Modeling high-dimensional discrete data with multi-layer neural networks. In Advances in Neural Information Processing Systems, pp. 400–406. MIT Press, 2000.

Dinh, Laurent, Krueger, David, and Bengio, Yoshua. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.

Germain, Mathieu, Gregor, Karol, Murray, Iain, and Larochelle, Hugo. MADE: Masked autoencoder for distribution estimation. arXiv preprint arXiv:1502.03509, 2015.

Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

Graves, Alex and Schmidhuber, Jürgen. Offline handwriting recognition with multidimensional recurrent neural networks. In Advances in Neural Information Processing Systems, 2009.

Gregor, Karol, Danihelka, Ivo, Mnih, Andriy, Blundell, Charles, and Wierstra, Daan. Deep autoregressive networks. In Proceedings of the 31st International Conference on Machine Learning, 2014.

Gregor, Karol, Danihelka, Ivo, Graves, Alex, and Wierstra, Daan. DRAW: A recurrent neural network for image generation. Proceedings of the 32nd International Conference on Machine Learning, 2015.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 1997.

Kalchbrenner, Nal and Blunsom, Phil. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 2013.

Kalchbrenner, Nal, Danihelka, Ivo, and Graves, Alex. Grid long short-term memory. arXiv preprint arXiv:1507.01526, 2015.

Kingma, Diederik P and Welling, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Krizhevsky, Alex. Learning multiple layers of features from tiny images. 2009.

Larochelle, Hugo and Murray, Iain. The neural autoregressive distribution estimator. The Journal of Machine Learning Research, 2011.

LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.

Murray, Iain and Salakhutdinov, Ruslan. Evaluating probabilities under high-dimensional latent variable models. In Advances in Neural Information Processing Systems, 2009.

Neal, Radford M. Connectionist learning of belief networks. Artificial Intelligence, 1992.

Raiko, Tapani, Li, Yao, Cho, Kyunghyun, and Bengio, Yoshua. Iterative neural autoregressive distribution estimator NADE-k. In Advances in Neural Information Processing Systems, 2014.

Rezende, Danilo J, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, 2014.

Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015.

Salakhutdinov, Ruslan and Hinton, Geoffrey E. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, 2009.

Salakhutdinov, Ruslan and Murray, Iain. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, 2008.

Salimans, Tim, Kingma, Diederik P, and Welling, Max. Markov chain Monte Carlo and variational inference: Bridging the gap. Proceedings of the 32nd International Conference on Machine Learning, 2015.

Sohl-Dickstein, Jascha, Weiss, Eric A., Maheswaranathan, Niru, and Ganguli, Surya. Deep unsupervised learning using nonequilibrium thermodynamics. Proceedings of the 32nd International Conference on Machine Learning, 2015.

Stollenga, Marijn F, Byeon, Wonmin, Liwicki, Marcus, and Schmidhuber, Juergen. Parallel multi-dimensional LSTM, with application to fast biomedical volumetric image segmentation. In Advances in Neural Information Processing Systems 28, 2015.

Sutskever, Ilya, Martens, James, and Hinton, Geoffrey E. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning, 2011.

Theis, Lucas and Bethge, Matthias. Generative image modeling using spatial LSTMs. In Advances in Neural Information Processing Systems, 2015.

Theis, Lucas, van den Oord, Aäron, and Bethge, Matthias. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.

Uria, Benigno, Murray, Iain, and Larochelle, Hugo. RNADE: The real-valued neural autoregressive density-estimator. In Advances in Neural Information Processing Systems, 2013.

Uria, Benigno, Murray, Iain, and Larochelle, Hugo. A deep and tractable density estimator. In Proceedings of the 31st International Conference on Machine Learning, 2014.

van den Oord, Aäron and Schrauwen, Benjamin. Factoring variations in natural images with deep Gaussian mixture models. In Advances in Neural Information Processing Systems, 2014a.

van den Oord, Aäron and Schrauwen, Benjamin. The Student-t mixture as a natural image patch prior with application to image compression. The Journal of Machine Learning Research, 2014b.

Zhang, Yu, Chen, Guoguo, Yu, Dong, Yao, Kaisheng, Khudanpur, Sanjeev, and Glass, James. Highway long short-term memory RNNs for distant speech recognition. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, 2016.
Figure 10. Additional samples from a model trained on ImageNet 32x32 images. | {
"id": "1511.01844"
} |
1601.01705 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural module
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains. | http://arxiv.org/pdf/1601.01705 | Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein | cs.CL, cs.CV, cs.NE | null | null | cs.CL | 20160107 | 20160607 |
# Learning to Compose Neural Networks for Question Answering
Jacob Andreas and Marcus Rohrbach and Trevor Darrell and Dan Klein Department of Electrical Engineering and Computer Sciences University of California, Berkeley {jda,rohrbach,trevor,klein}@eecs.berkeley.edu
# Abstract
We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural module network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains.
Figure 1: A learned syntactic analysis (a) is used to assemble a collection of neural modules (b) into a deep neural network (c), and applied to a world representation (d) to produce an answer.
# 1 Introduction

This paper presents a compositional, attentional model for answering questions about a variety of world representations, including images and structured knowledge bases. The model translates from questions to dynamically assembled neural networks, then applies these networks to world representations (images or knowledge bases) to produce answers. We take advantage of two largely independent lines of work: on one hand, an extensive literature on answering questions by mapping from strings to logical representations of meaning; on the other, a series of recent successes in deep neural models for image recognition and captioning. By constructing neural networks instead of logical forms, our model leverages the best aspects of both linguistic compositionality and continuous representations.
Our model has two components, trained jointly: first, a collection of neural "modules" that can be freely composed (Figure 1a); second, a network layout predictor that assembles modules into complete deep networks tailored to each question (Figure 1b).

Previous work has used manually-specified modular structures for visual learning (Andreas et al., 2016). Here we:
⢠learn a network structure predictor jointly with module parameters themselves
⢠extend visual primitives from previous work to reason over structured world representations
Training data consists of (world, question, answer) triples: our approach requires no supervision of network layouts. We achieve state-of-the-art performance on two markedly different question answering tasks: one with questions about natural images, and another with more compositional questions about United States geography.1
# 2 Deep networks as functional programs
We begin with a high-level discussion of the kinds of composed networks we would like to learn.
1We have released our code at http://github.com/jacobandreas/nmn2
Andreas et al. (2016) describe a heuristic approach for decomposing visual question answering tasks into a sequence of modular sub-problems. For example, the question What color is the bird? might be answered in two steps: first, "where is the bird?" (Figure 2a); second, "what color is that part of the image?" (Figure 2c). This first step, a generic module called find, can be expressed as a fragment of a neural network that maps from image features and a lexical item (here bird) to a distribution over pixels. This operation is commonly referred to as the attention mechanism, and is a standard tool for manipulating images (Xu et al., 2015) and text representations (Hermann et al., 2015).

The first contribution of this paper is an extension and generalization of this mechanism to enable fully-differentiable reasoning about more structured semantic representations. Figure 2b shows how the same module can be used to focus on the entity Georgia in a non-visual grounding domain; more generally, by representing every entity in the universe of discourse as a feature vector, we can obtain a distribution over entities that corresponds roughly to a logical set-valued denotation.

Having obtained such a distribution, existing neural approaches use it to immediately compute a weighted average of image features and project back into a labeling decision: a describe module (Figure 2c). But the logical perspective suggests a number of novel modules that might operate on attentions: e.g. combining them (by analogy to conjunction or disjunction) or inspecting them directly without a return to feature space (by analogy to quantification, Figure 2d). These modules are discussed in detail in Section 4. Unlike their formal counterparts, they are differentiable end-to-end, facilitating their integration into learned models. Building on previous work, we learn behavior for a collection of heterogeneous modules from (world, question, answer) triples.

The second contribution of this paper is a model for learning to assemble such modules compositionally. Isolated modules are of limited use: to obtain expressive power comparable to either formal approaches or monolithic deep networks, they must be composed into larger structures. Figure 2 shows simple examples of composed structures, but for realistic question-answering tasks, even larger networks are required.
Figure 2: Simple neural module networks, corresponding to the questions What color is the bird? and Are there any states? (a) A neural find module for computing an attention over pixels. (b) The same operation applied to a knowledge base. (c) Using an attention produced by a lower module to identify the color of the region of the image attended to. (d) Performing quantification by evaluating an attention directly.
Thus our goal is to automatically induce variable-free, tree-structured computation descriptors. We can use a familiar functional notation from formal semantics (e.g. Liang et al., 2011) to represent these computations.2 We write the two examples in Figure 2 as
(describe[color] find[bird])
and
(exists find[state])
respectively. These are network layouts: they spec- ify a structure for arranging modules (and their lex- ical parameters) into a complete network. Andreas et al. (2016) use hand-written rules to deterministi- cally transform dependency trees into layouts, and are restricted to producing simple structures like the above for non-synthetic data. For full generality, we will need to solve harder problems, like transform- ing What cities are in Georgia? (Figure 1) into
(and
find[city] (relate[in] lookup[Georgia]))
In this paper, we present a model for learning to se- lect such structures from a set of automatically gen- erated candidates. We call this model a dynamic neural module network.
2But note that unlike formal semantics, the behavior of the primitive functions here is itself unknown.
# 3 Related work
There is an extensive literature on database ques- tion answering, in which strings are mapped to log- ical forms, then evaluated by a black-box execu- tion model to produce answers. Supervision may be provided either by annotated logical forms (Wong and Mooney, 2007; Kwiatkowski et al., 2010; An- dreas et al., 2013) or from (world, question, answer) triples alone (Liang et al., 2011; Pasupat and Liang, 2015). In general the set of primitive functions from which these logical forms can be assembled is ï¬xed, but one recent line of work focuses on induc- ing new predicates functions automatically, either from perceptual features (Krishnamurthy and Kol- lar, 2013) or the underlying schema (Kwiatkowski et al., 2013). The model we describe in this paper has a uniï¬ed framework for handling both the per- ceptual and schema cases, and differs from existing work primarily in learning a differentiable execution model with continuous evaluation results.
Neural models for question answering are also a subject of current interest. These include approaches that model the task directly as a multiclass classiï¬- cation problem (Iyyer et al., 2014), models that at- tempt to embed questions and answers in a shared vector space (Bordes et al., 2014) and attentional models that select words from documents sources (Hermann et al., 2015). Such approaches generally require that answers can be retrieved directly based on surface linguistic features, without requiring in- termediate computation. A more structured ap- proach described by Yin et al. (2015) learns a query execution model for database tables without any nat- ural language component. Previous efforts toward unifying formal logic and representation learning in- clude those of Grefenstette (2013), Krishnamurthy and Mitchell (2013), Lewis and Steedman (2013), and Beltagy et al. (2013).
The visually-grounded component of this work relies on recent advances in convolutional net- works for computer vision (Simonyan and Zisser- man, 2014), and in particular the fact that late convo- lutional layers in networks trained for image recog- nition contain rich features useful for other vision tasks while preserving spatial information. These features have been used for both image captioning (Xu et al., 2015) and visual QA (Yang et al., 2015).
Most previous approaches to visual question an- swering either apply a recurrent model to deep rep- resentations of both the image and the question (Ren et al., 2015; Malinowski et al., 2015), or use the question to compute an attention over the input im- age, and then answer based on both the question and the image features attended to (Yang et al., 2015; Xu and Saenko, 2015). Other approaches include the simple classiï¬cation model described by Zhou et al. (2015) and the dynamic parameter prediction network described by Noh et al. (2015). All of these models assume that a ï¬xed computation can be performed on the image and question to compute the answer, rather than adapting the structure of the computation to the question.
As noted, Andreas et al. (2016) previously con- sidered a simple generalization of these attentional approaches in which small variations in the net- work structure per-question were permitted, with the structure chosen by (deterministic) syntactic pro- cessing of questions. Other approaches in this gen- eral family include the âuniversal parserâ sketched by Bottou (2014), the graph transformer networks of Bottou et al. (1997), the knowledge-based neu- ral networks of Towell and Shavlik (1994) and the recursive neural networks of Socher et al. (2013), which use a ï¬xed tree structure to perform further linguistic analysis without any external world rep- resentation. We are unaware of previous work that simultaneously learns both parameters for and struc- tures of instance-speciï¬c networks.
# 4 Model
Recall that our goal is to map from questions and world representations to answers. This process in- volves the following variables:
1. w a world representation 2. x a question 3. y an answer 4. z a network layout 5. θ a collection of model parameters
Our model is built around two distributions: a layout model p(z | x; θℓ) which chooses a layout for a sentence, and an execution model p_z(y | w; θe) which applies the network specified by z to w.
For ease of presentation, we introduce these models in reverse order. We first imagine that z is always observed, and in Section 4.1 describe how to evaluate and learn modules parameterized by θe within fixed structures. In Section 4.2 we move to the real scenario, where z is unknown. We describe how to predict layouts from questions and learn θℓ and θe jointly without layout supervision.
# 4.1 Evaluating modules
Given a layout z, we assemble the corresponding modules into a full neural network (Figure 1c), and apply it to the knowledge representation. Intermediate results flow between modules until an answer is produced at the root. We denote the output of the network with layout z on input world w as ⟦z⟧_w; when explicitly referencing the substructure of z, we can alternatively write ⟦m(h^1, h^2)⟧ for a top-level module m with submodule outputs h^1 and h^2. We then define the execution model:

p_z(y | w) = (⟦z⟧_w)_y    (1)
(This assumes that the root module of z produces a distribution over labels y.) The set of possible layouts z is restricted by module type constraints: some modules (like find above) operate directly on the input representation, while others (like describe above) also depend on input from specific earlier modules. The two base types considered in this paper are Attention (a distribution over pixels or entities) and Labels (a distribution over answers).

Parameters are tied across multiple instances of the same module, so different instantiated networks may share some parameters but not others. Modules have both parameter arguments (shown in square brackets) and ordinary inputs (shown in parentheses). Parameter arguments, like the running bird example above, are provided by the layout, and are used to specialize module behavior for particular lexical items. Ordinary inputs are the result of computation lower in the network. In addition to parameter-specific weights, modules have global weights shared across all instances of the module (but not shared with other modules). We write A, a, B, b, . . . for global weights and u^i, v^i for weights associated with the parameter argument i. ⊕ and ⊙ denote (possibly broadcasted) elementwise addition and multiplication respectively. The complete set of global weights and parameter-specific weights constitutes θe. Every module has access to the world representation, represented as a collection of vectors w^1, w^2, . . . (or W expressed as a matrix). The nonlinearity σ denotes a rectified linear unit.
The modules used in this paper are shown below, with names and type constraints in the first row and a description of the module's computation following.
Lookup (→ Attention)
lookup[i] produces an attention focused entirely at the index f(i), where the relationship f between words and positions in the input map is known ahead of time (e.g. string matches on database fields).

⟦lookup[i]⟧ = e_{f(i)}    (2)

where e_i is the basis vector that is 1 in the i-th position and 0 elsewhere.
Find (→ Attention)
find[i] computes a distribution over indices by concatenating the parameter argument with each position of the input feature map, and passing the concatenated vector through a MLP:

⟦find[i]⟧ = softmax(a ⊙ σ(B u^i ⊕ C W ⊕ d))    (3)
Relate (Attention → Attention)
relate directs focus from one region of the input to another. It behaves much like the find module, but also conditions its behavior on the current region of attention h. Let w̄(h) = Σ_k h_k w^k, where h_k is the k-th element of h. Then,

⟦relate[i](h)⟧ = softmax(a ⊙ σ(B u^i ⊕ C W ⊕ D w̄(h) ⊕ e))    (4)
And (Attention* → Attention)
and performs an operation analogous to set intersection for attentions. The analogy to probabilistic logic suggests multiplying probabilities:

⟦and(h^1, h^2, . . .)⟧ = h^1 ⊙ h^2 ⊙ · · ·    (5)
Describe (Attention → Labels)
describe[i] computes a weighted average of w under the input attention. This average is then used to predict an answer representation. With w̄ as above,

⟦describe[i](h)⟧ = softmax(A σ(B w̄(h) + v^i))    (6)
Exists (Attention → Labels)
exists is the existential quantifier, and inspects the incoming attention directly to produce a label, rather than an intermediate feature vector like describe:

⟦exists(h)⟧ = softmax((max_k h_k) a + b)    (7)
Figure 3: Generation of layout candidates. The input sentence (a) is represented as a dependency parse (b). Fragments of this dependency parse are then associated with appropriate modules (c), and these fragments are assembled into full layouts (d).
With z observed, the model we have described so far corresponds largely to that of Andreas et al. (2016), though the module inventory is different: in particular, our new exists and relate modules do not depend on the two-dimensional spatial structure of the input. This enables generalization to non-visual world representations.

Learning in this simplified setting is straightforward. Assuming the top-level module in each layout is a describe or exists module, the fully-instantiated network corresponds to a distribution over labels conditioned on layouts. To train, we maximize Σ_{(w,y,z)} log p_z(y | w; θe) directly. This can be understood as a parameter-tying scheme, where the decisions about which parameters to tie are governed by the observed layouts z.
# 4.2 Assembling networks
Next we describe the layout model p(z | x; θℓ). We first use a fixed syntactic parse to generate a small set of candidate layouts, analogously to the way a semantic grammar generates candidate semantic parses in previous work (Berant and Liang, 2014).
A semantic parse differs from a syntactic parse in two primary ways. First, lexical items must be
mapped onto a (possibly smaller) set of semantic primitives. Second, these semantic primitives must be combined into a structure that closely, but not exactly, parallels the structure provided by syntax. For example, state and province might need to be identified with the same field in a database schema, while all states have a capital might need to be identified with the correct (in situ) quantifier scope.

While we cannot avoid the structure selection problem, continuous representations simplify the lexical selection problem. For modules that accept a vector parameter, we associate these parameters with words rather than semantic tokens, and thus turn the combinatorial optimization problem associated with lexicon induction into a continuous one. Now, in order to learn that province and state have the same denotation, it is sufficient to learn that their associated parameters are close in some embedding space, a task amenable to gradient descent. (Note that this is easy only in an optimizability sense, and not an information-theoretic one; we must still learn to associate each independent lexical item with the correct vector.) The remaining combinatorial problem is to arrange the provided lexical items into the right computational structure. In this respect, layout prediction is more like syntactic parsing than ordinary semantic parsing, and we can rely on an off-the-shelf syntactic parser to get most of the way there. In this work, syntactic structure is provided by the Stanford dependency parser (De Marneffe and Manning, 2008).
The construction of layout candidates is depicted in Figure 3, and proceeds as follows:
1. Represent the input sentence as a dependency tree.
2. Collect all nouns, verbs, and prepositional phrases that are attached directly to a wh-word or copula.
3. Associate each of these with a layout fragment: Ordinary nouns and verbs are mapped to a single find module. Proper nouns to a single lookup module. Prepositional phrases are mapped to a depth-2 fragment, with a relate module for the preposition above a find module for the enclosed head noun.
4. Form subsets of this set of layout fragments. For each subset, construct a layout candidate by
joining all fragments with an and module, and inserting either a measure or describe module at the top (each subset thus results in two parse candidates.)
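A toy version of this candidate-generation procedure is sketched below; the parse triples stand in for real dependency-parser output, parameter arguments on the top-level modules are elided, and routing proper nouns to lookup inside prepositional fragments is an assumption made to reproduce the Georgia example.

```python
from itertools import combinations

def fragment(word, pos, prep=None):
    """Steps 2-3: map one attached word to a layout fragment."""
    inner = f'lookup[{word}]' if pos == 'PROPN' else f'find[{word}]'
    return f'(relate[{prep}] {inner})' if prep else inner

def candidates(frags):
    """Step 4: every non-empty subset joined with `and`, under each of
    the two top-level modules."""
    outs = []
    for r in range(1, len(frags) + 1):
        for subset in combinations(frags, r):
            body = subset[0] if r == 1 else '(and ' + ' '.join(subset) + ')'
            outs += [f'(describe {body})', f'(measure {body})']
    return outs

frags = [fragment('city', 'NOUN'), fragment('Georgia', 'PROPN', prep='in')]
print(candidates(frags))
# includes '(describe (and find[city] (relate[in] lookup[Georgia])))'
```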
All layouts resulting from this process feature a relatively flat tree structure with at most one conjunction and one quantifier. This is a strong simplifying assumption, but appears sufficient to cover most of the examples that appear in both of our tasks. As our approach includes both categories, relations and simple quantification, the range of phenomena considered is generally broader than previous perceptually-grounded QA work (Krishnamurthy and Kollar, 2013; Matuszek et al., 2012).

Having generated a set of candidate parses, we need to score them. This is a ranking problem; as in the rest of our approach, we solve it using standard neural machinery. In particular, we produce an LSTM representation of the question, a feature-based representation of the query, and pass both representations through a multilayer perceptron (MLP). The query feature vector includes indicators on the number of modules of each type present, as well as their associated parameter arguments. While one can easily imagine a more sophisticated parse-scoring model, this simple approach works well for our tasks.

Formally, for a question x, let h_q(x) be an LSTM encoding of the question (i.e. the last hidden layer of an LSTM applied word-by-word to the input question). Let {z_1, z_2, . . .} be the proposed layouts for x, and let f(z_i) be a feature vector representing the i-th layout. Then the score s(z_i | x) for the layout z_i is
s(z_i | x) = a^T σ(B h_q(x) + C f(z_i) + d)    (8)

i.e. the output of an MLP with inputs h_q(x) and f(z_i), and parameters θℓ = {a, B, C, d}. Finally, we normalize these scores to obtain a distribution:

p(z_i | x; θℓ) = e^{s(z_i | x)} / Σ_{j=1}^{n} e^{s(z_j | x)}    (9)
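Equations 8 and 9 amount to a small MLP over the question encoding and layout features, as in this NumPy sketch (with σ as a ReLU, following the earlier note); the LSTM encoding h_q is taken as given.

```python
import numpy as np

def score_layouts(h_q, feats, params):
    """Rank candidate layouts (Eqs. 8-9) -- a NumPy sketch.

    h_q:    LSTM encoding of the question, shape (d_q,).
    feats:  list of layout feature vectors f(z_i), each of shape (d_f,).
    params: tuple (a, B, C, d) of MLP weights from Eq. 8.
    """
    a, B, C, d = params
    scores = np.array([a @ np.maximum(0.0, B @ h_q + C @ f + d)
                       for f in feats])
    e = np.exp(scores - scores.max())       # softmax over candidates
    return e / e.sum()                      # p(z_i | x; theta_l)
```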
Having defined a layout selection module p(z|z;9¢) and a network execution model pz(y|w;9e), we are ready to define a model for predicting answers given only (world, question) pairs. The key constraint is that we want to min- imize evaluations of p.(y|w;@-) (which involves
expensive application of a deep network to a large input representation), but can tractably evaluate p(z|x;4¢) for all z (which involves application of a shallow network to a relatively small set of candidates). This is the opposite of the situation usually encountered semantic parsing, where calls to the query execution model are fast but the set of candidate parses is too large to score exhaustively.
In fact, the problem more closely resembles the scenario faced by agents in the reinforcement learning setting (where it is cheap to score actions, but potentially expensive to execute them and obtain rewards). We adopt a common approach from that literature, and express our model as a stochastic policy. Under this policy, we first sample a layout $z$ from a distribution $p(z|x;\theta_\ell)$, and then apply $z$ to the knowledge source and obtain a distribution over answers $p(y|z, w;\theta_e)$.
After $z$ is chosen, we can train the execution model directly by maximizing $\log p(y|z, w;\theta_e)$ with respect to $\theta_e$ as before (this is ordinary backpropagation). Because the hard selection of $z$ is non-differentiable, we optimize $p(z|x;\theta_\ell)$ using a policy gradient method. The gradient of the reward surface $J$ with respect to the parameters of the policy is

$\nabla J(\theta_\ell) = \mathbb{E}[\nabla\log p(z|x;\theta_\ell)\cdot r] \qquad (10)$

(this is the REINFORCE rule (Williams, 1992)). Here the expectation is taken with respect to rollouts of the policy, and $r$ is the reward. Because our goal is to select the network that makes the most accurate predictions, we take the reward to be identically the log-probability from the execution phase, i.e.

$\nabla J(\theta_\ell) = \mathbb{E}[\nabla\log p(z|x;\theta_\ell)\cdot\log p(y|z, w;\theta_e)]. \qquad (11)$
Thus the update to the layout-scoring model at each timestep is simply the gradient of the log-probability of the chosen layout, scaled by the accuracy of that layout's predictions. At training time, we approximate the expectation with a single rollout, so at each step we update $\theta_\ell$ in the direction $(\nabla\log p(z|x;\theta_\ell))\cdot\log p(y|z, w;\theta_e)$ for a single $z \sim p(z|x;\theta_\ell)$. $\theta_\ell$ and $\theta_e$ are optimized using ADADELTA (Zeiler, 2012) with $\rho = 0.95$, $\varepsilon = 1\mathrm{e}{-6}$ and gradient clipping at a norm of 10.
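A minimal sketch of this update, with the execution model's log-probability stubbed out and the gradient taken with respect to raw layout scores rather than the full parse-scoring parameters:

```python
# Single-rollout REINFORCE step: sample a layout from p(z|x), then scale the
# score gradient by the (assumed) execution log-probability as in Eq. (11).
import numpy as np

rng = np.random.default_rng(1)
s = rng.normal(size=4)                     # layout scores s(z_i|x)
p = np.exp(s - s.max()); p /= p.sum()      # p(z|x; theta_ell)

i = rng.choice(len(p), p=p)                # single rollout z ~ p(z|x)
log_py = np.log(0.3)                       # assumed log p(y|z, w; theta_e)

# d/ds_j log p(z_i|x) = 1[j=i] - p_j for a softmax policy.
grad_log_p = -p.copy(); grad_log_p[i] += 1.0
grad_s = grad_log_p * log_py               # REINFORCE estimate of grad J

s += 0.1 * grad_s                          # gradient step on the scores
```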
What is in the sheep's ear? (describe[what] (and find[sheep] find[ear])) tag
What color is she wearing? (describe[color] find[wear]) white
What is the man dragging? (describe[what] find[man]) boat (board)
Figure 4: Sample outputs for the visual question answering task. The second row shows the final attention provided as input to the top-level describe module. For the first two examples, the model produces reasonable parses, attends to the correct region of the images (the ear and the woman's clothing), and generates the correct answer. In the third image, the verb is discarded and a wrong answer is produced.
# 5 Experiments
The framework described in this paper is general, and we are interested in how well it performs on datasets of varying domain, size and linguistic complexity. To that end, we evaluate our model on tasks at opposite extremes of both these criteria: a large visual question answering dataset, and a small collection of more structured geography questions.
# 5.1 Questions about images
Our first task is the recently-introduced Visual Question Answering challenge (VQA) (Antol et al., 2015). The VQA dataset consists of more than 200,000 images paired with human-annotated questions and answers, as in Figure 4.

We use the VQA 1.0 release, employing the development set for model selection and hyperparameter tuning, and reporting final results from the evaluation server on the test-standard set. For the experiments described in this section, the input feature representations $w_i$ are computed by the fifth convolutional layer of a 16-layer VGGNet after pooling (Simonyan and Zisserman, 2014). Input images are scaled to 448×448 before computing their representations. We found that performance on this task was
                     test-dev                 test-std
              Yes/No  Number  Other   All       All
Zhou (2015)    76.6    35.0    42.6   55.7      55.9
Noh (2015)     80.7    37.2    41.7   57.2      57.4
Yang (2015)    79.3    36.6    46.1   58.7      58.9
NMN            81.2    38.0    44.0   58.6      58.7
D-NMN          81.1    38.6    45.5   59.4      59.4
Table 1: Results on the VQA test server. NMN is the parameter-tying model from Andreas et al. (2016), and D-NMN is the model described in this paper.
best if the candidate layouts were relatively simple: only describe, and, and find modules are used, and layouts contain at most two conjuncts.

One weakness of this basic framework is a difficulty modeling prior knowledge about answers (of the form most bears are brown). This kind of linguistic "prior" is essential for the VQA task, and easily incorporated. We simply introduce an extra hidden layer for recombining the final module network output with the input sentence representation $h_q(x)$ (see Equation 8), replacing Equation 1 with:
$\log p_z(y|w, x) = (Ah_q(x) + B[\![z]\!]_w)_y \qquad (12)$

where $[\![z]\!]_w$ denotes the output of the module network $z$ applied to the world representation $w$. (Now modules with output type Labels should be understood as producing an answer embedding rather than a distribution over answers.) This allows the question to influence the answer directly.
Results are shown in Table 1. The use of dynamic networks provides a small gain, most noticeably on "other" questions. We achieve state-of-the-art results on this task, outperforming a highly effective visual bag-of-words model (Zhou et al., 2015), a model with dynamic network parameter prediction (but fixed network structure) (Noh et al., 2015), a more conventional attentional model (Yang et al., 2015), and a previous approach using neural module networks with no structure prediction (Andreas et al., 2016).
Some examples are shown in Figure 4. In general, the model learns to focus on the correct region of the image, and tends to consider a broad window around the region. This facilitates answering questions like Where is the cat?, which requires knowledge of the surroundings as well as the object in question.
                 Accuracy
Model       GeoQA   GeoQA+Q
LSP-F        48       –
LSP-W        51       –
NMN          51.7     35.7
D-NMN        54.3     42.9

Table 2: Results on the GeoQA dataset, and the GeoQA dataset with quantification. Our approach outperforms both a purely logical model (LSP-F) and a model with learned perceptual predicates (LSP-W) on the original dataset, and a fixed-structure NMN under both evaluation conditions.
# 5.2 Questions about geography
The next set of experiments we consider focuses on GeoQA, a geographical question-answering task first introduced by Krishnamurthy and Kollar (2013). This task was originally paired with a visual question answering task much simpler than the one just discussed, and is appealing for a number of reasons. In contrast to the VQA dataset, GeoQA is quite small, containing only 263 examples. Two baselines are available: one using a classical semantic parser backed by a database, and another which induces logical predicates using linear classifiers over both spatial and distributional features. This allows us to evaluate the quality of our model relative to other perceptually grounded logical semantics, as well as strictly logical approaches.
The GeoQA domain consists of a set of entities (e.g. states, cities, parks) which participate in various relations (e.g. north-of, capital-of). Here we take the world representation to consist of two pieces: a set of category features (used by the find module) and a different set of relational features (used by the relate module). For our experiments, we use a subset of the features originally used by Krishnamurthy et al. The original dataset includes no quantifiers, and treats the questions What cities are in Texas? and Are there any cities in Texas? identically. Because we are interested in testing the parser's ability to predict a variety of different structures, we introduce a new version of the dataset, GeoQA+Q, which distinguishes these two cases, and expects a Boolean answer to questions of the second kind.
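To make the world representation concrete, here is an illustrative (entirely invented) encoding of category and relational features, together with the attention-shifting behavior of find and relate:

```python
# Entities carry binary category features (consumed by `find`) and pairwise
# relational features (consumed by `relate`). All names are invented.
import numpy as np

entities = ['texas', 'austin', 'key-largo']
categories = {'state':  np.array([1, 0, 0]),
              'city':   np.array([0, 1, 0]),
              'island': np.array([0, 0, 1])}
# relations['in'][i, j] = 1 iff entity j is `in` entity i.
relations = {'in': np.array([[0, 1, 0],
                             [0, 0, 0],
                             [0, 0, 0]])}

def find(category):                  # attention over entities
    return categories[category].astype(float)

def relate(rel, attention):          # shift attention along a relation
    return (relations[rel].T @ attention > 0).astype(float)

print(relate('in', find('state')))   # entities located in a state -> austin
```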
Results are shown in Table 2. As in the original work, we report the results of leave-one-environment-out cross-validation on the set of 10 environments.
Is Key Largo an island?
(exists (and lookup[key-largo] find[island]))
yes: correct

What national parks are in Florida?
(and find[park] (relate[in] lookup[florida]))
everglades: correct

What are some beaches in Florida?
(exists (and lookup[beach] (relate[in] lookup[florida])))
yes (daytona-beach): wrong parse

What beach city is there in Florida?
(and lookup[beach] lookup[city] (relate[in] lookup[florida]))
[none] (daytona-beach): wrong module behavior
Figure 5: Example layouts and answers selected by the model on the GeoQA dataset. For incorrect predictions, the correct answer is shown in parentheses.
Our dynamic model (D-NMN) outperforms both the logical (LSP-F) and perceptual models (LSP-W) described by Krishnamurthy and Kollar (2013), as well as a fixed-structure neural module net (NMN). This improvement is particularly notable on the dataset with quantifiers, where dynamic structure prediction produces a 20% relative improvement over the fixed baseline. A variety of predicted layouts are shown in Figure 5.
# 6 Conclusion
We have introduced a new model, the dynamic neural module network, for answering queries about both structured and unstructured sources of information. Given only (question, world, answer) triples as training data, the model learns to assemble neural networks on the fly from an inventory of neural modules, and simultaneously learns weights for these modules so that they can be composed into novel structures. Our approach achieves state-of-the-art results on two tasks. We believe that the success of this work derives from two factors:
Continuous representations improve the expressiveness and learnability of semantic parsers: by replacing discrete predicates with differentiable neural network fragments, we bypass the challenging combinatorial optimization problem associated with induction of a semantic lexicon. In structured world representations, neural predicate representations allow the model to invent reusable attributes and relations not expressed in the schema. Perhaps more importantly, we can extend compositional question-answering machinery to complex, continuous world representations like images.

Semantic structure prediction improves generalization in deep networks: by replacing a fixed network topology with a dynamic one, we can tailor the computation performed to each problem instance, using deeper networks for more complex questions and representing combinatorially many queries with comparatively few parameters. In practice, this results in considerable gains in speed and sample efficiency, even with very little training data.
These observations are not limited to the question answering domain, and we expect that they can be applied similarly to tasks like instruction following, game playing, and language generation.
# Acknowledgments
JA is supported by a National Science Foundation Graduate Fellowship. MR is supported by a fellowship within the FIT weltweit-Program of the German Academic Exchange Service (DAAD). This work was additionally supported by DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS-1427425 and IIS-1212798, and the Berkeley Vision and Learning Center.
# References
Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria.

Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In Proceedings of the Conference on Computer Vision and Pattern Recognition.

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In Proceedings of the International Conference on Computer Vision.

Islam Beltagy, Cuong Chau, Gemma Boleda, Dan Garrette, Katrin Erk, and Raymond Mooney. 2013. Montague meets Markov: Deep semantics with probabilistic logical form. Proceedings of the Joint Conference on Distributional and Logical Semantics, pages 11–21.

Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, volume 7, page 92.
Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. Proceedings of the Conference on Empirical Methods in Natural Language Processing.

Léon Bottou, Yoshua Bengio, and Yann Le Cun. 1997. Global training of document processing systems using graph transformer networks. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 489–494. IEEE.

Léon Bottou. 2014. From machine learning to machine reasoning. Machine Learning, 94(2):133–149.

Marie-Catherine De Marneffe and Christopher D. Manning. 2008. The Stanford typed dependencies representation. In Proceedings of the International Conference on Computational Linguistics, pages 1–8.

Edward Grefenstette. 2013. Towards a formal distributional semantics: Simulating logical calculi with tensors. Joint Conference on Lexical and Computational Semantics.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684–1692.

Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daumé III. 2014. A neural network for factoid question answering over paragraphs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly learning to parse and perceive: connecting natural language to the physical world. Transactions of the Association for Computational Linguistics.

Jayant Krishnamurthy and Tom Mitchell. 2013. Vector space semantic parsing: A framework for compositional vector space models. In Proceedings of the ACL Workshop on Continuous Vector Space Models and their Compositionality.

Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1223–1233, Cambridge, Massachusetts.

Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.

Mike Lewis and Mark Steedman. 2013. Combining distributional and logical semantics. Transactions of the Association for Computational Linguistics, 1:179–192.

Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the Human Language Technology Conference of the Association for Computational Linguistics, pages 590–599, Portland, Oregon.

Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. 2015. Ask your neurons: A neural-based approach to answering questions about images. In Proceedings of the International Conference on Computer Vision.

Cynthia Matuszek, Nicholas FitzGerald, Luke Zettlemoyer, Liefeng Bo, and Dieter Fox. 2012. A joint model of language and perception for grounded attribute learning. In International Conference on Machine Learning.
Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han. 2015. Image question answering using convolutional neural network with dynamic parameter prediction. arXiv preprint arXiv:1511.05756.
Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.

Mengye Ren, Ryan Kiros, and Richard Zemel. 2015. Exploring models and data for image question answering. In Advances in Neural Information Processing Systems.

K. Simonyan and A. Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.

Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with compositional vector grammars. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.

Geoffrey G. Towell and Jude W. Shavlik. 1994. Knowledge-based artificial neural networks. Artificial Intelligence, 70(1):119–165.

Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256.

Yuk Wah Wong and Raymond J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, volume 45, page 960.

Huijuan Xu and Kate Saenko. 2015. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. arXiv preprint arXiv:1511.05234.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning.

Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2015. Stacked attention networks for image question answering. arXiv preprint arXiv:1511.02274.

Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2015. Neural enquirer: Learning to query tables. arXiv preprint arXiv:1512.00965.

Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.

Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. Simple baseline for visual question answering. arXiv preprint arXiv:1512.02167. | {
"id": "1511.05234"
} |
1601.00257 | Modave Lectures on Applied AdS/CFT with Numerics | These lecture notes are intended to serve as an introduction to applied
AdS/CFT with numerics for an audience of graduate students and others with
little background in the subject. The presentation begins with a poor man's
review of current status of quantum gravity, where AdS/CFT correspondence is
believed to be the well formulated quantum gravity in the anti-de Sitter space.
Then we present the basic ingredients in applied AdS/CFT and introduce the
relevant numerics for solving differential equations into which the bulk
dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take
the zero temperature holographic superfluid as a concrete example for case
study. In passing, we also present some new results, which include the
numerical evidence as well as an elegant analytic proof for the equality
between the superfluid density and particle density, namely $\rho_s=\rho$, and
the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field
theory for the sound speed in the large chemical potential limit. | http://arxiv.org/pdf/1601.00257 | Minyong Guo, Chao Niu, Yu Tian, Hongbao Zhang | gr-qc, hep-th | typos corrected, clarifications made, JHEP style, 1+23 pages, 12
figures, Mathematica code available upon request | PoS Modave2015 (2016) 003 | gr-qc | 20160103 | 20160106 |

arXiv:1601.00257v2 [gr-qc] 6 Jan 2016
Preprint typeset in JHEP style - HYPER VERSION
# Modave Lectures on Applied AdS/CFT with Numerics*
# Minyong Guo
Department of Physics, Beijing Normal University, Beijing, 100875, China minyongguo@mail.bnu.edu.cn
# Chao Niu
School of Physics and Chemistry, Gwangju Institute of Science and Technology, Gwangju 500-712, Korea chaoniu09@gmail.com
# Yu Tian
School of Physics, University of Chinese Academy of Sciences, Beijing 100049, China State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China ytian@ucas.ac.cn
# Hongbao Zhang
Department of Physics, Beijing Normal University, Beijing, 100875, China Theoretische Natuurkunde, Vrije Universiteit Brussel, and The International Solvay Institutes, Pleinlaan 2, B-1050 Brussels, Belgium hzhang@vub.ac.be
Abstract: These lecture notes are intended to serve as an introduction to applied AdS/CFT with numerics for an audience of graduate students and others with little background in the subject. The presentation begins with a poor man's review of the current status of quantum gravity, where the AdS/CFT correspondence is believed to be the well formulated quantum gravity in anti-de Sitter space. Then we present the basic ingredients in applied AdS/CFT and introduce the relevant numerics for solving differential equations into which the bulk dynamics collapses. To demonstrate how to apply AdS/CFT with numerics, we take the zero temperature holographic superfluid as a concrete example for case study. In passing, we also present some new results, which include the numerical evidence as well as an elegant analytic proof for the equality between the superfluid density and particle density, namely $\rho_s = \rho$, and the saturation to the predicted value $\frac{1}{\sqrt{2}}$ by conformal field theory for the sound speed in the large chemical potential limit.
*Based on the series of lectures given by Hongbao Zhang at the Eleventh International Modave Summer School on Mathematical Physics, held in Modave, Belgium, September 2015.
# Contents
1. Introduction
2. Quantum Gravity
   2.1 De Sitter space: Meta-observables
   2.2 Minkowski space: S-Matrix program
   2.3 Anti-de Sitter space: AdS/CFT correspondence
3. Applied AdS/CFT
   3.1 What AdS/CFT is
   3.2 Why AdS/CFT is reliable
   3.3 How useful AdS/CFT is
4. Numerics for Solving Differential Equations
   4.1 Newton-Raphson method
   4.2 Pseudo-spectral method
   4.3 Runge-Kutta method
5. Holographic Superfluid at Zero Temperature
   5.1 Variation of action, Boundary terms, and Choice of ensemble
   5.2 Asymptotic expansion, Counter terms, and Holographic renormalization
   5.3 Background solution, Free energy, and Phase transition
   5.4 Linear response theory, Optical conductivity, and Superfluid density
   5.5 Time domain analysis, Normal modes, and Sound speed
6. Concluding Remarks
# 1. Introduction
Different from the other more formal topics in this summer school, the emphasis of these lectures is on the applications of the AdS/CFT correspondence and the involved numerical techniques. As theoretical physicists, we generically have a theory, or a paradigm as simple as possible, but the real world is always highly sophisticated. So it is usually not sufficient for us to play only with our analytical techniques when we try to reach a better understanding of the rich world through our beautiful theory. This is how computational physics comes into the lives of theoretical physicists. AdS/CFT correspondence, as an explicit holographic implementation
of quantum gravity in anti-de Sitter space, has recently emerged as a powerful tool for one to address some universal behaviors of strongly coupled many body systems, which otherwise would not be amenable to the conventional approaches. Furthermore, applied AdS/CFT has been entering the era of Computational Holography, where numerics plays a more and more important role in such ongoing endeavors. Implementing those well developed techniques in Numerical Relativity is highly desirable but generically required to be geared, since AdS has its own difficulties. In the course of attacking these unique difficulties, some new numerical schemes and computational techniques have also been devised. These lectures are intended as a basic introduction to the necessary numerics in applied AdS/CFT, in particular for those beginning practitioners in this active field. Hopefully in the end, the readers can appreciate the significance of numerics in connecting AdS/CFT to the real world at least as we do.

In the next section, we shall first present a poor man's review of the current status of quantum gravity, where AdS/CFT stands out as the well formulated quantum gravity in anti-de Sitter space. Then we provide a brief introduction to applied AdS/CFT in Section 3, which includes what AdS/CFT is, why AdS/CFT is reliable, and how useful AdS/CFT is. In Section 4, we shall present the main numerical methods for solving differential equations, which is supposed to be the central task in applied AdS/CFT. Then we take the zero temperature holographic superfluid as a concrete application of AdS/CFT with numerics in Section 5, where not only will some relevant concepts be introduced but also some new results will be presented for the first time. We conclude these lecture notes with some remarks in the end.
# 2. Quantum Gravity
The very theme in physics is to unify a variety of seemingly distinct phenomena by as few principles as possible, which can help us build up a sense of safety while facing the unknown world. This may be regarded as another contribution of unification in physics to our society, on top of its various induced technology innovations. With a series of achievements along the road to unification in physics, we now end up with two distinct entities, namely quantum field theory and general relativity.

As we know, quantum field theory is a powerful framework for us to understand a huge range of phenomena in Nature, such as high energy physics and condensed matter physics. Although the underlying philosophies are different, they share quantum field theory as their common language. In high energy physics, the philosophy is reductionism, where the goal is to figure out the UV physics behind our effective low energy IR physics. The standard model of particle physics is believed to be an effective low energy theory. To see what really happens at UV, we are required to go beyond the standard model by reaching a higher energy scale. This is the reason why we built the LHC in Geneva. This is also the reason why we plan to go to the Great Collider from the Great Wall in China. While in condensed matter physics, the philosophy is emergence. Actually we have a theory of everything for condensed matter physics, namely QED, or the Schrodinger equation for electrons with Coulomb interaction
Figure 1: The Penrose diagram for the global de Sitter space, where the planar de Sitter space associated with the observer located at the south pole is given by the shaded portion.
among them. What condensed matter physicists are concerned with is how to engineer various low temperature IR fixed points, namely various phases, from such a known UV theory. Such a variety of phases gives rise to a man-made multiverse, which is actually resonant with the landscape suggested by string theory.

On the other hand, general relativity tells us that gravity is geometry. Gravity is different; so subtle is gravity. The longstanding issue in fundamental physics is trying to reconcile general relativity with quantum field theory. People like to give it a name, Quantum Gravity, although we have not fully succeeded along this road. Here is a poor man's perspective on the current status of quantum gravity, depending on the asymptotic geometry of spacetime1. The reason is twofold. First, due to the existence of the Planck scale $l_p$, spacetime is doomed such that one can not define local field operators in a d+1 dimensional gravitational theory. Instead, the observables can live only on the boundary of spacetime. Second, it is the dependence on the asymptopia that embodies the background independence of quantum gravity.
# 2.1 De Sitter space: Meta-observables
If the spacetime is asymptotically de Sitter as
$ds^2 = -dt^2 + l^2\cosh^2\frac{t}{l}\,d\Omega_d^2, \qquad (2.1)$
when $t \to \pm\infty$, then by the coordinate transformation $u = 2\tan^{-1}e^{\frac{t}{l}}$, the metric becomes

$ds^2 = \frac{l^2}{\sin^2 u}(-du^2 + d\psi^2 + \sin^2\psi\,d\Omega_{d-1}^2) \qquad (2.2)$
1This is a poor man's perspective because we shall try our best not to touch upon string theory, although it is evident that this perspective is well shaped by string theory in a direct or indirect way throughout these lecture notes.
with $\psi$ the polar angle for the d-sphere. We plot the Penrose diagram in Figure 1 for de Sitter space. Whence both the past and future conformal infinities $\mathscr{I}^\pm$ are spacelike. As a result, any observer can only detect and influence a portion of the whole spacetime. Moreover, any point in $\mathscr{I}^+$ is causally connected by a null geodesic to its antipodal point in $\mathscr{I}^-$ for de Sitter. In view of this, Witten has proposed the meta-observables for quantum gravity in de Sitter space, namely

$\langle g_f|g_i\rangle = \int_{g_i}^{g_f} Dg\, e^{iS[g]}, \qquad (2.3)$

with $g_f$ and $g_i$ sets of data specified on $\mathscr{I}^+$ and $\mathscr{I}^-$ respectively. Then one can construct the Hilbert space $\mathcal{H}_i$ at $\mathscr{I}^-$ for quantum gravity in de Sitter space with the inner product $(j, i) = \langle\Theta j|i\rangle$ by the CPT transformation $\Theta$. The Hilbert space $\mathcal{H}_f$ at $\mathscr{I}^+$ can be constructed in a similar fashion. At the perturbative level, the dimension of the Hilbert space for quantum gravity in de Sitter is infinite, which is evident from the past-future singularity of the meta-correlation functions at those points connected by the aforementioned geodesics. But the non-perturbative dimension of the Hilbert space is suspected to be finite. This is all one can say with such meta-observables[1].
However, there are also different but more optimistic perspectives. Among others, inspired by AdS/CFT, Strominger has proposed the DS/CFT correspondence. First, with $\mathscr{I}^+$ identified with $\mathscr{I}^-$ by the above null geodesics, the dual CFT lives only on one sphere rather than two spheres. Second, instead of working with the global de Sitter space, DS/CFT correspondence can be naturally formulated in the causal past of any given observer, where the bulk spacetime is the planar de Sitter and the dual CFT lives on $\mathscr{I}^-$. For details, the readers are referred to Strominger's original paper as well as his Les Houches lectures[2, 3].
# 2.2 Minkowski space: S-Matrix program
The situation is much better if the spacetime is asymptotically flat. As the Penrose diagram for Minkowski space shows in Figure 2, the conformal infinity is lightlike. In this case, the only observable is the scattering amplitude, abbreviated as S-Matrix, which connects the out states at $\mathscr{I}^+$ to the in states at $\mathscr{I}^-$2. One can claim to have a well defined quantum gravity in asymptotically flat space once a sensible recipe is made for the computation of the S-Matrix with gravitons. Actually, inspired by the BCFW recursion relation[4], there has been much progress achieved over the last few years along this direction by the so called S-Matrix program, in which the scattering amplitude is constructed without the local Lagrangian, resonant with the non-locality of quantum gravity[5]. Traditionally, the S-Matrix is computed by the Feynman diagram techniques, where the Feynman rules come from the local Lagrangian. But the computation becomes more and more complicated when the scattering process involves either more external legs or higher loops. While in the S-Matrix program the recipe for the

2Here we are concerned with the scattering amplitude for massless particles, including gravitons, since they are believed to be more fundamental than massive particles. But nevertheless, by taking into account the data at $i^\pm$, the scattering amplitude with massive particles involved can still be constructed in principle, as it should be the case.
Figure 2: The Penrose diagram for Minkowski space, where massless particles will always emanate from $\mathscr{I}^-$ and end at $\mathscr{I}^+$.

Figure 3: The Penrose diagram for the global anti-de Sitter space, where the conformal infinity $\mathscr{I}$ itself can be a spacetime on which the dynamics can live.
computation of the scattering amplitude, made out of the universal properties of the S-Matrix, such as Poincare or BMS symmetry, unitarity, and analyticity, turns out to be far more efficient. It is expected that such an ongoing S-Matrix program will lead us eventually towards a well formulated quantum gravity in asymptotically flat space.
# 2.3 Anti-de Sitter space: AdS/CFT correspondence
The best situation is for the spacetime which is asymptotically anti-de Sitter as
$ds^2 = \frac{l^2}{\cos^2\rho}(-dt^2 + d\rho^2 + \sin^2\rho\,d\Omega_{d-1}^2) \qquad (2.4)$

with $\rho \in [0, \frac{\pi}{2})$. As seen from the Penrose diagram for anti-de Sitter space in Figure 3, the conformal infinity $\mathscr{I}$ is timelike in this case, where we can have a well formulated quantum theory for gravity by the AdS/CFT correspondence[6, 7, 8]. Namely, the quantum gravity in the bulk AdS$_{d+1}$ can be holographically formulated in terms of a CFT$_d$ on the boundary without gravity, and vice versa. We shall elaborate on AdS/CFT in the subsequent section. Here we would like to mention one very interesting feature of AdS/CFT, that is to say, generically we have no local Lagrangian for the dual CFT, which somehow echoes the aforementioned S-Matrix program.
# 3. Applied AdS/CFT
# 3.1 What AdS/CFT is
To be a little bit more precise about what AdS/CFT is, let us first recall the very basic object in quantum field theory, namely the generating functional, which is defined as

$Z_d[J] = \frac{1}{i}\ln\int D\psi\, e^{iS_d[\psi] + i\int d^dx\, J\mathcal{O}}. \qquad (3.1)$

Whence one can obtain the n-point correlation function for the operator $\mathcal{O}$ by taking the n-th functional derivative of the generating functional with respect to the source $J$. For example,

$\langle\mathcal{O}(x)\rangle = \frac{\delta Z_d}{\delta J(x)}, \qquad (3.2)$

$\langle\mathcal{O}(x_1)\mathcal{O}(x_2)\rangle = \frac{\delta^2 Z_d}{i\,\delta J(x_1)\,\delta J(x_2)} = \frac{\delta\langle\mathcal{O}(x_1)\rangle}{i\,\delta J(x_2)}. \qquad (3.3)$
As we know, we can obtain such a generating functional by perturbative expansion using the Feynman diagram techniques for weakly coupled quantum field theory, but obviously such a perturbation method breaks down when the involved quantum field theory is strongly coupled, unless one can find its weakly coupled dual. AdS/CFT provides us with such a dual for strongly coupled quantum field theory in terms of a classical gravitational theory with one extra dimension. So now let us turn to general relativity, where the basic object is the action, given by

$S_{d+1} = \frac{1}{16\pi G}\int d^{d+1}x\sqrt{-g}\left(R + \frac{d(d-1)}{l^2} + \mathcal{L}_{matter}\right) \qquad (3.4)$

for AdS gravity. Here, for the present illustration and later usage, we would like to choose the Lagrangian for the matter fields as

$\mathcal{L}_{matter} = \frac{l^2}{Q^2}\left(-\frac{1}{4}F^{ab}F_{ab} - |D\Phi|^2 - m^2|\Phi|^2\right) \qquad (3.5)$
with $F = dA$, $D = \nabla - iA$, and $Q$ the charge of the complex scalar field. The variation of the action gives rise to the equations of motion as follows:

$G_{ab} - \frac{d(d-1)}{2l^2}g_{ab} = \frac{l^2}{Q^2}\left[F_{ac}F_b{}^c + 2\overline{D_{(a}\Phi}D_{b)}\Phi - \left(\frac{1}{4}F_{cd}F^{cd} + |D\Phi|^2 + m^2|\Phi|^2\right)g_{ab}\right], \qquad (3.6)$

$\nabla_aF^{ab} = i(\bar{\Phi}D^b\Phi - \Phi\overline{D^b\Phi}), \qquad (3.7)$

$D_aD^a\Phi - m^2\Phi = 0. \qquad (3.8)$

Note that the equations of motion are generically second order PDEs. So to extrapolate the bulk solution from the AdS boundary, one is required to specify a pair of boundary conditions for each bulk field at the conformal boundary of AdS, which can be read off from the asymptotic behavior of the bulk fields near the AdS boundary:
$ds^2 \to \frac{l^2}{z^2}[dz^2 + (\gamma_{\mu\nu} + t_{\mu\nu}z^d)dx^\mu dx^\nu], \qquad (3.9)$

$A_\mu \to a_\mu + b_\mu z^{d-2}, \qquad (3.10)$

$\Phi \to \phi_-z^{\Delta_-} + \phi_+z^{\Delta_+} \qquad (3.11)$

with $\Delta_\pm = \frac{d}{2} \pm \sqrt{\frac{d^2}{4} + m^2l^2}$3. Namely, $(\gamma_{\mu\nu}, t_{\mu\nu})$ are the boundary data for the bulk metric field, $(a_\mu, b_\mu)$ for the bulk gauge field, and $(\phi_-, \phi_+)$ for the bulk scalar field. But such pairs usually lead to singular solutions deep into the bulk. To avoid these singular solutions, one can instead specify only one boundary condition from each pair, such as $(\gamma_{\mu\nu}, a_\mu, \phi_-)$. We denote these boundary data by $J$, whose justification will be obvious later on. At the same time we also require the regularity of the desired solution in the bulk. In this sense, the regular solution is uniquely determined by the boundary data $J$. Thus the on-shell action from the regular solution will be a functional of $J$.
What AdS/CFT tells us is that this on-shell action in the bulk can be identified as the generating functional for the strongly coupled quantum field theory living on the boundary, i.e.,

$Z_d[J] = S_{d+1}[J], \qquad (3.12)$

where apparently $J$ has a dual meaning, not only serving as the source for the boundary quantum field theory but also being the boundary data for the bulk fields. In particular, $\gamma_{\mu\nu}$ sources the operator for the boundary energy momentum tensor, whose expectation value is given by (3.2) as $t_{\mu\nu}$; $a_\mu$ sources a global U(1) conserved current operator, whose expectation value is given as $b_\mu$; and the expectation value for the operator dual to the source $\phi_-$ is given as $\phi_+$, up to a possible proportionality coefficient. The conformal dimensions of these dual operators can be read off from (3.9)-(3.11) by making the scaling transformation $(z, x^\mu) \to (\alpha z, \alpha x^\mu)$ as $d$, $d - 1$, and $\Delta_+$ individually.
3Here we are working with the axial gauge for the bulk metric and gauge fields, which can always be achieved. In addition, although the mass squared is allowed to be negative in AdS, it can not be below the BF bound $-\frac{d^2}{4l^2}$.
Here is a caveat on the validity of (3.12). Although such a boundary/bulk duality is believed to hold in more general circumstances, (3.12) works for the large N strongly coupled quantum field theory on the boundary, where N and the coupling parameter of the dual quantum field theory are generically proportional to some powers of the ratio of the AdS radius to the Planck length and to the string length, respectively. In order to capture the $\frac{1}{N}$ correction to the dual quantum field theory by holography, one is required to calculate the one-loop partition function on top of the classical background solution in the bulk. On the other hand, to see the finite coupling effect in the dual quantum field theory by holography, one is required to work with a higher derivative gravity theory in the bulk. But in what follows, for simplicity we shall work exclusively with (3.12) in its applicability regime.

Among others, we would like to conclude this subsection with three important implications of AdS/CFT. First, a finite temperature quantum field theory at finite chemical potential is dual to a charged black hole in the bulk. Second, the entanglement entropy of the dual quantum field theory can be calculated holographically as the area of the bulk minimal surface anchored onto the entangling surface[11, 12, 13]. Third, the extra bulk dimension represents the renormalization group flow direction for the boundary quantum field theory, with the AdS boundary as UV, although the renormalization scheme is supposed to be different from the conventional one implemented in quantum field theory4.
# 3.2 Why AdS/CFT is reliable
But why is AdS/CFT reliable? In fact, besides its explicit implementations in string theory, such as the duality between Type IIB string theory in AdS$_5 \times S^5$ and $\mathcal{N} = 4$ SYM theory on the four dimensional boundary, where some results can be computed on both sides and turn out to match each other, there exist many hints from within general relativity indicating that gravity is holographic. Here we simply list some of them as follows.

• Bekenstein-Hawking's black hole entropy formula $S_{BH} = \frac{A}{4l_p^{d-1}}$ [14].

• Brown-Henneaux's asymptotic symmetry analysis for three dimensional gravity[15], where the derived central charge $\frac{3l}{2G}$ successfully reproduces the black hole entropy by the Cardy formula for conformal field theory[16].

• Brown-York's surface tensor formulation of quasi-local energy and conserved charges[17]. Once we are brave enough to declare that this surface tensor be not only for the purpose of the bulk gravity but also for a certain system living on the boundary, we shall end up with the long wave limit of AdS/CFT, namely the gravity/fluid correspondence, which has been well tested[18].

On the other hand, we can also see how such an extra bulk dimension emerges from the quantum field theory perspective. In particular, inspired by Swingle's seminal work on the connection between the MERA tensor network state for quantum critical systems and AdS
4This implication is sometimes dubbed as RG = GR.
space[19], Qi has recently proposed an exact holographic mapping to generate the bulk Hilbert space of the same dimension from the boundary Hilbert space[20], which echoes the aforementioned renormalization group flow implication of AdS/CFT.

Keeping all of these in mind, we shall take AdS/CFT as a first principle and explore its various applications in what follows.
# 3.3 How useful AdS/CFT is
As alluded to above, AdS/CFT is naturally suited for us to address strongly coupled dynamics and non-equilibrium processes by mapping the involved hard quantum many body problems to classical few body problems. There are two approaches towards the construction of holographic models. One is called the top-down approach, where the microscopic content of the dual boundary theory is generically known because the construction originates in string theory. The other is called the bottom-up approach, which can be regarded as a kind of effective field theory with one extra dimension for the dual boundary theory.

By either approach, we can apply AdS/CFT to QCD as well as the QCD underlying quark-gluon plasma, ending up with AdS/QCD[21, 22]. On the other hand, taking into account that there are a bunch of strongly coupled systems in condensed matter physics, such as the high Tc superconductor, liquid Helium, and the non-Fermi liquid, we can also apply AdS/CFT to condensed matter physics, ending up with AdS/CMT[23, 24, 25, 26, 27]. Note that the bulk dynamics eventually boils down to a set of differential equations, whose solutions are generically not amenable to an analytic treatment. So one of the central tasks in applied AdS/CFT is to find the numerical solutions to differential equations. In the next section, we shall provide a basic introduction to the main numerical methods for solving differential equations in applied AdS/CFT.
# 4. Numerics for Solving Differential Equations

Roughly speaking, there are three numerical schemes to solve differential equations by transforming them into algebraic equations, namely the finite difference method, the finite element method, and the spectral method. According to our experience with the numerics in applied AdS/CFT, it is favorable to make a code from scratch for each problem you are faced with. In particular, the variant of the spectral method, namely the pseudo-spectral method, turns out to be most efficient in solving differential equations along the space directions, where the Newton-Raphson iteration method is extensively employed if the resultant algebraic equations are non-linear. On the other hand, a finite difference method such as the Runge-Kutta method is usually used to deal with the dynamical evolution along the time direction. So now we would like to elaborate a little bit on the Newton-Raphson method, the pseudo-spectral method, as well as the Runge-Kutta method, one by one.
Figure 4: Newton-Raphson iteration map is used to find the rightmost root of a non-linear algebraic equation.
# 4.1 Newton-Raphson method
To find the desired root of a given non-linear function f(x), we can start with a wisely guessed initial point $x_k$. Then, as shown in Figure 4, the Newton-Raphson iteration map takes us to the next point $x_{k+1}$ as

$x_{k+1} = x_k - f'(x_k)^{-1}f(x_k), \qquad (4.1)$

which is supposed to be closer to the desired root. After a finite number of iterations, we eventually end up with a good approximation to the desired root. If we are required to find the root of a group of non-linear functions F(X), then the iteration map is given by

$X_{k+1} = X_k - \left[\left(\frac{\partial F}{\partial X}\right)^{-1}F\right]\Big|_{X_k}, \qquad (4.2)$

where the formidable Jacobian can be tamed by the Taylor expansion trick, since the expansion coefficient of the linear term is simply the Jacobian in the Taylor expansion $F(X) = F(X_0) + \frac{\partial F}{\partial X}\big|_{X_0}(X - X_0) + \cdots$.
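A minimal sketch of the iteration (4.2) in Python, with the Jacobian assembled column by column from finite differences; the test system and all tolerances are our own choices:

```python
# Newton-Raphson for a system F(X) = 0 with a finite-difference Jacobian.
import numpy as np

def newton(F, X0, tol=1e-12, max_iter=50, eps=1e-8):
    X = X0.astype(float)
    for _ in range(max_iter):
        FX = F(X)
        if np.max(np.abs(FX)) < tol:
            break
        J = np.empty((len(FX), len(X)))
        for j in range(len(X)):           # column j of the Jacobian dF/dX
            dX = np.zeros_like(X); dX[j] = eps
            J[:, j] = (F(X + dX) - FX) / eps
        X = X - np.linalg.solve(J, FX)    # X_{k+1} = X_k - J^{-1} F(X_k)
    return X

# Example: intersection of the unit circle with the line x = y.
F = lambda X: np.array([X[0]**2 + X[1]**2 - 1.0, X[0] - X[1]])
print(newton(F, np.array([1.0, 0.2])))    # -> [0.7071..., 0.7071...]
```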
# 4.2 Pseudo-spectral method
As we know, we can expand an analytic function in terms of a set of appropriate spectral functions as
$f(x) = \sum_{n=1}^{N} c_nT_n(x) \qquad (4.3)$
with N some truncation number, depending on the numerical accuracy you want to achieve. Then the derivative of this function is given by
$f'(x) = \sum_{n=1}^{N} c_nT_n'(x). \qquad (4.4)$

Whence the derivatives at the collocation points can be obtained from the values of the function at these points by the following differentiation matrix:

$f'(x_i) = \sum_j D_{ij}f(x_j), \qquad (4.5)$

where the matrix $D = T'T^{-1}$ with $T_{in} = T_n(x_i)$ and $T'_{in} = T'_n(x_i)$. With this differentiation matrix, the differential equation in consideration can be massaged into a group of algebraic equations for the unknowns $f(x_i)$, by requiring that both the equation hold at the collocation points and the prescribed boundary conditions be satisfied.

This is the underlying idea of the pseudo-spectral method. Among others, we would like to point out two key advantages of the pseudo-spectral method, compared to the finite difference method and the finite element method. First, one can find the interpolating function for f(x) by the built-in procedure

$f(x) = \sum_{n,i} T_n(x)\,T^{-1}_{ni}f(x_i). \qquad (4.6)$
Second, the numerical error decays exponentially with the truncation number N, rather than with the power law decay followed by the other two methods.
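For concreteness, the following sketch builds the Chebyshev differentiation matrix on the Gauss-Lobatto grid (a standard construction, not code from these notes) and exhibits the exponential accuracy on a smooth test function:

```python
# Chebyshev differentiation matrix on x_i = cos(pi i / N), i = 0..N.
import numpy as np

def cheb(N):
    x = np.cos(np.pi * np.arange(N + 1) / N)         # collocation points
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # rows sum to zero
    return D, x

D, x = cheb(16)
print(np.max(np.abs(D @ np.exp(x) - np.exp(x))))     # near machine precision
```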
# 4.3 Runge-Kutta method
As mentioned before, we should employ a finite difference method to march along the time direction. But before that, we are required to massage the involved differential equation into the following ordinary differential equation

$\dot{y} = f(y, t), \qquad (4.7)$

which is actually the key step for one to investigate the temporal evolution in applied AdS/CFT. Once this non-trivial step is achieved, there is a bunch of finite difference schemes available for one to move forward. Among others, here we simply present the classical fourth order Runge-Kutta method as follows:

$k_1 = f(y_i, t_i), \quad k_2 = f(y_i + \tfrac{\Delta t}{2}k_1, t_i + \tfrac{\Delta t}{2}), \quad k_3 = f(y_i + \tfrac{\Delta t}{2}k_2, t_i + \tfrac{\Delta t}{2}), \quad k_4 = f(y_i + \Delta t\,k_3, t_i + \Delta t),$

$t_{i+1} = t_i + \Delta t, \quad y_{i+1} = y_i + \tfrac{\Delta t}{6}(k_1 + 2k_2 + 2k_3 + k_4), \qquad (4.8)$
because it is user friendly and applicable to all the temporal evolution problems we have considered so far[28, 29, 30, 31, 32, 33]5.
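A direct transcription of (4.8) into Python, tested on the toy equation $\dot{y} = -y$; the step size is an arbitrary choice for the example:

```python
# One classical RK4 step, plus a simple marcher for y' = -y.
import numpy as np

def rk4_step(f, y, t, dt):
    k1 = f(y, t)
    k2 = f(y + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(y + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(y + dt * k3, t + dt)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4), t + dt

y, t, dt = 1.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    y, t = rk4_step(lambda y, t: -y, y, t, dt)
print(y - np.exp(-1.0))   # accumulated error of order dt^4, cf. footnote 5
```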
# 5. Holographic Superfluid at Zero Temperature

In this section, we would like to take the zero temperature holographic superfluid as a concrete example to demonstrate how to apply AdS/CFT with numerics. In due course, not only shall we introduce some relevant concepts, but also present some new results[34].

The action for the simplest model of holographic superfluid is just given by (3.4). To make our life easier, we shall work in the probe limit, namely the back reaction of the matter fields onto the metric is neglected, which can be achieved by taking the large Q limit. Thus we can put the matter fields on top of the background which solves the vacuum Einstein equation with a negative cosmological constant $\Lambda = -\frac{d(d-1)}{2l^2}$. For simplicity, we shall focus only on the zero temperature holographic superfluid, which can be implemented by choosing the AdS soliton as the bulk geometry[35], i.e.,
$ds^2 = \frac{l^2}{z^2}\left[-dt^2 + dx^2 + \frac{dz^2}{f(z)} + f(z)d\theta^2\right]. \qquad (5.1)$
Here $f(z) = 1 - (\frac{z}{z_0})^d$ with $z = z_0$ the tip, where our geometry caps off, and $z = 0$ the AdS boundary. To guarantee a smooth geometry at the tip, we are required to impose the periodicity $\frac{4\pi z_0}{d}$ onto the θ coordinate. The inverse of this periodicity, set by $z_0$, is usually interpreted as the confining scale for the dual boundary theory.

In what follows, we will take units in which $l = 1$, $16\pi GQ^2 = 1$, and $z_0 = 1$. In addition, we shall focus exclusively on the action of the matter fields, because the leading $Q^0$ contribution has been frozen by the above fixed background geometry.
# 5.1 Variation of action, Boundary terms, and Choice of ensemble
The variational principle gives rise to the equations of motion if and only if the boundary terms vanish in the variation of action. For our model, the variation of action is given by
$\delta S = \int d^{d+1}x\sqrt{-g}\,\big[\nabla_aF^{ab} + i(\Phi\overline{D^b\Phi} - \bar{\Phi}D^b\Phi)\big]\delta A_b - \int d^dx\sqrt{-h}\,n_aF^{ab}\delta A_b + \left[\left(\int d^{d+1}x\sqrt{-g}\,\overline{(D_aD^a - m^2)\Phi}\,\delta\Phi - \int d^dx\sqrt{-h}\,n_a\overline{D^a\Phi}\,\delta\Phi\right) + C.C.\right]. \qquad (5.2)$

To make the boundary terms vanish, we can fix $A_b$ and $\Phi$ on the boundary. Fixing $A_b$ amounts to saying that we are working with the grand canonical ensemble. In order to work with the canonical ensemble, where $\sqrt{-h}\,n_aF^{ab}$ is fixed instead, we are required to add the additional boundary term $\int d^dx\sqrt{-h}\,n_aF^{ab}A_b$ to the action, which is essentially a Legendre transformation. On the other hand, fixing $\phi_-$ gives rise to the standard quantization. We can

5It is worthwhile to keep in mind that the accumulated numerical error is of order $O(\Delta t^4)$ for this classical Runge-Kutta method.
also have an alternative quantization by fixing $\phi_+$ when $-\frac{d^2}{4} \leq m^2l^2 < -\frac{d^2}{4} + 1$[37]. In what follows, we shall restrict our attention to the grand canonical ensemble and the standard quantization for the case of $d = 3$ and $m^2 = -2$, whereby $\Delta_- = 1$ and $\Delta_+ = 2$.
# 5.2 Asymptotic expansion, Counter terms, and Holographic renormalization
What we care about is the on-shell action, which can be shown to be generically IR divergent in the bulk by the asymptotic expansion near the AdS boundary, corresponding to the UV divergence of the dual boundary theory. The procedure of making the on-shell action finite by adding appropriate counter terms is called holographic renormalization[38]. For our case, the on-shell action is given by

$S_{on-shell} = \frac{1}{2}\left[\int d^{d+1}x\sqrt{-g}\,(\nabla_aF^{ab})A_b - \int d^dx\sqrt{-h}\,n_aF^{ab}A_b\right] + \frac{1}{2}\left[\left(\int d^{d+1}x\sqrt{-g}\,\overline{(D_aD^a - m^2)\Phi}\,\Phi - \int d^dx\sqrt{-h}\,n_a\overline{D^a\Phi}\,\Phi\right) + C.C.\right]$

$= \frac{1}{2}\left[\int d^{d+1}x\sqrt{-g}\,i(\bar{\Phi}D^b\Phi - \Phi\overline{D^b\Phi})A_b - \int d^dx\sqrt{-h}\,n_aF^{ab}A_b\right] - \frac{1}{2}\left(\int d^dx\sqrt{-h}\,n_a\overline{D^a\Phi}\,\Phi + C.C.\right). \qquad (5.3)$

By the asymptotic expansion in (3.10) and (3.11), the divergence comes only from the last two boundary terms and can be read off as $\frac{|\phi_-|^2}{z}$. So the holographic renormalization can be readily achieved by adding the counter term $-\int d^dx\sqrt{-h}\,|\Phi|^2$ to the original action. Whence we have

$\langle j^\mu\rangle = \frac{\delta S_{ren}}{\delta a_\mu}, \qquad \langle O\rangle = \frac{\delta S_{ren}}{\delta\bar{\phi}_-}, \qquad (5.4)$

where $j^\mu$ corresponds to the conserved particle current, and the expectation value of the scalar operator O is interpreted as the condensate order parameter of the superfluid. If this scalar operator acquires a nonzero expectation value spontaneously in the situation where the source is turned off, the boundary system is driven into a superfluid phase.
# 5.3 Background solution, Free energy, and Phase transition
With the assumption that the non-vanishing bulk matter fields $(\Phi = z\phi, A_t, A_x)$ do not depend on the coordinate θ, the equations of motion can be explicitly written as

6Note that the outward normal vector is given by $n^a = -z(\frac{\partial}{\partial z})^a$.
$0 = \partial_t^2\phi - 2iA_t\partial_t\phi - i\partial_tA_t\,\phi + (z - A_t^2 + A_x^2 + i\partial_xA_x)\phi + 2iA_x\partial_x\phi - \partial_x^2\phi + 3z^2\partial_z\phi + (z^3 - 1)\partial_z^2\phi, \qquad (5.5)$

$0 = \partial_t^2A_x - \partial_t\partial_xA_t - i(\phi\partial_x\bar{\phi} - \bar{\phi}\partial_x\phi) + 2A_x\phi\bar{\phi} + 3z^2\partial_zA_x + (z^3 - 1)\partial_z^2A_x, \qquad (5.6)$

$0 = (z^3 - 1)\partial_z^2A_t + 3z^2\partial_zA_t - \partial_x^2A_t + \partial_t\partial_xA_x + 2\phi\bar{\phi}A_t + i(\bar{\phi}\partial_t\phi - \phi\partial_t\bar{\phi}), \qquad (5.7)$

$0 = \partial_t\partial_zA_t + i(\phi\partial_z\bar{\phi} - \bar{\phi}\partial_z\phi) - \partial_z\partial_xA_x, \qquad (5.8)$
where the third one is the constraint equation and the last one reduces to the conserved equation for the boundary current when evaluated at the AdS boundary, i.e.,
$\partial_t\rho = -\partial_xj^x. \qquad (5.9)$
To specialize to the homogeneous phase diagram of our holographic model, we further make the following ansatz for the non-vanishing bulk matter fields:

$\phi = \phi(z), \qquad A_t = A_t(z). \qquad (5.10)$
Then the equations of motion for the static solution reduce to
$0 = 3z^2\partial_z\phi + (z^3 - 1)\partial_z^2\phi + (z - A_t^2)\phi, \qquad (5.11)$

$0 = 2A_t\phi\bar{\phi} + 3z^2\partial_zA_t + (z^3 - 1)\partial_z^2A_t, \qquad (5.12)$

$0 = \phi\partial_z\bar{\phi} - \bar{\phi}\partial_z\phi, \qquad (5.13)$

where the last equation implies that we can always choose a gauge in which $\phi$ is real. It is not hard to see that the above equations of motion have a trivial solution

$\phi = 0, \qquad A_t = \mu, \qquad (5.14)$
which corresponds to the vacuum phase with zero particle density. On the other hand, to obtain the non-trivial solution dual to the superfluid phase, we are required to resort to the pseudo-spectral method; a minimal solver along these lines is sketched below. As a demonstration, we plot the nontrivial profiles of $\phi$ and $A_t$ at $\mu = 2$ in Figure 5. The variation of the particle density and the condensate with respect to the chemical potential is plotted in Figure 6, which indicates that the phase transition from the vacuum to a superfluid occurs at $\mu_c = 1.715$. It is noteworthy that such a phenomenon is reminiscent of the recently observed quantum critical behavior of ultra-cold cesium atoms in an optical lattice across the vacuum to superfluid transition by tuning the chemical potential[36]. Moreover, the compactified dimension in the AdS soliton background can be naturally identified with the reduced dimension in optical lattices by the very steep harmonic potential, as both mechanisms make the effective dimension of the system in consideration reduced in the low energy regime. On the other hand, note that the particle density shows up at the same time as our superfluid condensate; thus it is tempting to claim that this particle density $\rho$ is simply the superfluid density $\rho_s$. This claim is also consistent with the fact that we are working with
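Here is a self-contained sketch of such a background solver, combining the Chebyshev matrix of Section 4.2 with the Newton iteration (4.2); the grid size, the initial guess, and the sign conventions for reading off the density and the condensate are assumptions of the sketch rather than statements from the text.

```python
# Pseudo-spectral + Newton solver for Eqs. (5.11)-(5.12) with real phi.
import numpy as np

def cheb(N):
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    dX = np.tile(x, (N + 1, 1)).T - np.tile(x, (N + 1, 1))
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

N, mu = 40, 2.0
D, x = cheb(N)
z = (1.0 - x) / 2.0                 # z[0] = 0 (AdS boundary), z[N] = 1 (tip)
Dz = -2.0 * D; Dz2 = Dz @ Dz        # chain rule for the map x -> z

def residual(u):
    phi, At = u[:N + 1], u[N + 1:]
    r1 = 3*z**2*(Dz@phi) + (z**3 - 1)*(Dz2@phi) + (z - At**2)*phi   # (5.11)
    r2 = 2*At*phi**2 + 3*z**2*(Dz@At) + (z**3 - 1)*(Dz2@At)         # (5.12)
    r1[0] = phi[0]                  # source-free condition phi_- = 0
    r2[0] = At[0] - mu              # chemical potential at the boundary
    return np.concatenate([r1, r2]) # rows at z = 1 enforce regularity

u = np.concatenate([2.0 * z, mu * np.ones(N + 1)])  # guess; may need tuning
for _ in range(30):                                 # Newton iteration (4.2)
    F = residual(u)
    if np.max(np.abs(F)) < 1e-10:
        break
    J = np.empty((len(u), len(u)))
    for j in range(len(u)):
        du = np.zeros_like(u); du[j] = 1e-7
        J[:, j] = (residual(u + du) - F) / 1e-7
    u -= np.linalg.solve(J, F)

phi, At = u[:N + 1], u[N + 1:]
print('rho ~', -(Dz @ At)[0])       # particle density, up to convention
print('<O> ~', (Dz @ phi)[0])       # condensate phi_+, up to normalization
```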
Figure 5: The bulk profiles of the scalar field and the time component of the gauge field at the chemical potential µ = 2.
Figure 6: The variation of the particle density and the condensate with respect to the chemical potential, where we see the second order quantum phase transition take place at $\mu_c = 1.715$.

a zero temperature superfluid, where the normal fluid component should disappear. As we will show later on by the linear response theory, this is actually the case.

But to make sure that Figure 6 represents the genuine phase diagram of our holographic model, we are required to check whether the corresponding free energy density is the lowest in the grand canonical ensemble. By holography, the free energy density can be obtained from the renormalized on-shell Lagrangian of the matter fields as follows7:
$F = -\frac{1}{2}\left[\int dz\sqrt{-g}\,i(\bar{\Phi}D^t\Phi - \Phi\overline{D^t\Phi})A_t - \sqrt{-h}\,n_aA_bF^{ab}\big|_{z=0}\right] = -\frac{\mu\rho}{2} + \int_0^1 dz\,(A_t\phi)^2, \qquad (5.15)$
where we have made use of the source free boundary condition for the scalar field at the AdS boundary. As shown in Figure 7, the superfluid phase is the thermodynamically favored one

7Here we have used $iS_{Lorentzian} = -S_{Euclidean}$ and $it = \tau$, with the period of the Euclidean time τ identified as the inverse of the temperature.
Figure 7: The difference of the free energy density of the superfluid phase from that of the vacuum phase.
compared to the vacuum phase when the chemical potential is greater than the critical value. So we are done.
# 5.4 Linear response theory, Optical conductivity, and Superfluid density

Now let us set up the linear response theory for the later calculation of the optical conductivity of our holographic model. To achieve this, we first decompose the field $\phi$ into its real and imaginary parts as

$\phi = \phi_r + i\phi_i, \qquad (5.16)$
and assume that the perturbation bulk ï¬elds take the following form
$\delta\phi_r = \delta\phi_r(z)e^{-i\omega t + iqx}, \quad \delta\phi_i = \delta\phi_i(z)e^{-i\omega t + iqx}, \quad \delta A_t = \delta A_t(z)e^{-i\omega t + iqx}, \quad \delta A_x = \delta A_x(z)e^{-i\omega t + iqx}, \qquad (5.17)$

since the background solution is static and homogeneous. With this, the perturbation equations can be simplified as
$0 = -\omega^2\delta\phi_r + (z - A_t^2)\delta\phi_r - 2i\omega A_t\,\delta\phi_i + q^2\delta\phi_r + 3z^2\partial_z\delta\phi_r + (z^3 - 1)\partial_z^2\delta\phi_r - 2A_t\phi_r\,\delta A_t, \qquad (5.18)$

$0 = -\omega^2\delta\phi_i + (z - A_t^2)\delta\phi_i + 2i\omega A_t\,\delta\phi_r + q^2\delta\phi_i + 3z^2\partial_z\delta\phi_i + (z^3 - 1)\partial_z^2\delta\phi_i + i\omega\phi_r\,\delta A_t + iq\phi_r\,\delta A_x, \qquad (5.19)$

$0 = -\omega^2\delta A_x - \omega q\,\delta A_t + 3z^2\partial_z\delta A_x + (z^3 - 1)\partial_z^2\delta A_x + 2\phi_r^2\,\delta A_x - 2iq\phi_r\,\delta\phi_i, \qquad (5.20)$

$0 = (z^3 - 1)\partial_z^2\delta A_t + 3z^2\partial_z\delta A_t + q^2\delta A_t + \omega q\,\delta A_x + 2\phi_r^2\,\delta A_t + 4A_t\phi_r\,\delta\phi_r + 2i\omega\phi_r\,\delta\phi_i, \qquad (5.21)$

$0 = -i\omega\partial_z\delta A_t - iq\partial_z\delta A_x - 2(\partial_z\phi_r\,\delta\phi_i - \phi_r\partial_z\delta\phi_i), \qquad (5.22)$

where we have used $\phi_i = 0$ for the background solution.
Note that the gauge transformation
$A \to A + \nabla\theta, \qquad \phi \to \phi e^{i\theta} \qquad (5.23)$
with
$\theta = \frac{1}{i}\lambda e^{-i\omega t + iqx} \qquad (5.24)$
induces a spurious solution to the above perturbation equations as
$\delta A_t = -\lambda\omega, \qquad \delta A_x = \lambda q, \qquad \delta\phi = \lambda\phi. \qquad (5.25)$
We can remove such a redundancy by requiring $\delta A_t = 0$ at the AdS boundary8. In addition, $\delta\phi$ will also be set to zero at the AdS boundary later on. On the other hand, taking into account the fact that the perturbation equation (5.22) will be automatically satisfied in the whole bulk once the other perturbation equations are satisfied9, we can forget about (5.22) from now on. That is to say, we can employ the pseudo-spectral method to obtain the desired numerical solution by combining the remaining perturbation equations with the aforementioned boundary conditions, as well as the other boundary conditions at the AdS boundary, depending on the specific problem we want to solve.
In particular, to calculate the optical conductivity for our holographic model, we can simply focus on the q = 0 mode and further impose δAx = 1 at the AdS boundary. Then the optical conductivity can be extracted by holography as
$\sigma(\omega) = \frac{\partial_z\delta A_x|_{z=0}}{i\omega} \qquad (5.26)$

for any positive frequency ω10. According to the perturbation equations, the whole calculation is much simplified because $\delta A_x$ decouples from the other perturbation bulk fields. We simply plot the imaginary part of the optical conductivity in Figure 8 for both the vacuum and superfluid phases, because the real part vanishes due to the reality of the perturbation equation and boundary condition for $\delta A_x$. As it should be the case, the DC conductivity vanishes for the vacuum phase, but diverges for the superfluid phase due to the $\frac{1}{\omega}$ behavior of the imaginary part of the optical conductivity, by the Kramers-Kronig relation

$\mathrm{Im}[\sigma(\omega)] = -\frac{1}{\pi}\,\mathcal{P}\int_{-\infty}^{\infty}d\omega'\,\frac{\mathrm{Re}[\sigma(\omega')]}{\omega' - \omega}. \qquad (5.27)$

Furthermore, according to the hydrodynamical description of a superfluid, the superfluid density $\rho_s$ can be obtained by fitting this zero pole as $\frac{\rho_s}{\mu\omega}$ [39, 40, 41]. As expected, our numerics shows that the resultant superfluid density is exactly the same as the particle density within
Figure 8: The left panel is the imaginary part of the optical conductivity for the vacuum phase, and the right panel is for the superfluid phase at µ = 6.5.
our numerical accuracy. The other poles correspond to the gapped normal modes for δAx, which we are not interested in since we are focusing on the low energy physics.
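Building on the background sketch above (it reuses z, Dz, Dz2 and phi from there), the following fragment illustrates how the q = 0 equation for $\delta A_x$, Eq. (5.20), can be solved as a single linear system and the conductivity extracted via (5.26); the boundary row and normalizations are our own choices.

```python
# Solve the q = 0 perturbation equation for delta A_x by collocation,
# with the source normalized as delta A_x(0) = 1, then apply Eq. (5.26).
import numpy as np

def sigma(omega, phi, z, Dz, Dz2):
    n = len(z)
    L = (np.diag(3*z**2) @ Dz + np.diag(z**3 - 1) @ Dz2
         + np.diag(2*phi**2 - omega**2))
    rhs = np.zeros(n)
    L[0] = 0.0; L[0, 0] = 1.0; rhs[0] = 1.0   # source delta A_x(0) = 1
    dAx = np.linalg.solve(L, rhs)             # rows at z = 1 give regularity
    return (Dz @ dAx)[0] / (1j * omega)       # Eq. (5.26)

for w in (0.2, 0.5, 1.0):
    print(w, sigma(w, phi, z, Dz, Dz2))       # purely imaginary, ~ i rho_s/(mu w)
```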
Let us come back to the equality between the particle density and superï¬uid density. Although this numerical result is 100 percent reasonable from the physical perspective, it is highly non-trivial in the sense that the superï¬uid density comes from the linear response theory while the particle density is a quantity associated with the equilibrium state. So it is better to have an analytic understanding for this remarkable equality. Here we would like to develop an elegant proof for this equality by a boost trick. To this end, we are ï¬rst required to realize Ïs = âµâzδAx|z=0 with Ï = 0. Such an Ï = 0 perturbation can actually be implemented by a boost
t = (1/√(1 − v²))(t′ − vx′), x = (1/√(1 − v²))(x′ − vt′) (5.28)
acting on the superfluid phase. Note that the background metric is invariant under such a boost. As a result, we end up with a new non-trivial solution as follows
ψ′ = ψ, A′_t = (1/√(1 − v²)) A_t, A′_x = (v/√(1 − v²)) A_t. (5.29)
We expand this solution up to the linear order in v as
ψ′ = ψ, A′_t = A_t, A′_x = vA_t, (5.30)
which means that the linear perturbation δA_x is actually proportional to the background solution A_t. So we have ρ_s = ρ immediately.
⁸The only exception is the ω = 0 case, which can always be separately managed if necessary.

⁹This result comes from the following two facts. One is related to the Bianchi identity, which gives 0 = ∇_a v^a = z⁴∂_µ(√−g v^µ) once the rest of the equations of motion hold. The other is special to our holographic model, in which the readers are encouraged to show that the z component of the Maxwell equation turns out to be satisfied automatically at z = 1 if the rest of the equations hold there.

¹⁰Note that σ(−ω̄) = σ̄(ω), so we focus only on the positive frequency here.
Figure 9: The density plot of |det[L′(ω)]/det[L(ω)]| with q = 0.3 for the superfluid phase at µ = 6.5. The normal modes can be identified by the peaks, where the red one denotes the hydrodynamic normal mode ω₀ = 0.209.
Figure 10: The spectral plot of ln|δψ̂_i(ω, 1)| with q = 0.3 for the superfluid phase at µ = 6.5, where the initial data are chosen as δψ_i = z with all the other perturbations turned off. The normal modes can be identified by the peaks, whose locations are the same as those by the frequency domain analysis within our numerical accuracy.
# 5.5 Time Domain Analysis, Normal Modes, and Sound Speed
In what follows we shall use linear response theory to calculate the speed of sound by focusing solely on the hydrodynamic spectrum of normal modes of the gapless Goldstone mode from the spontaneous symmetry breaking, which is obviously absent from the vacuum phase. As such, the perturbation fields are required to have Dirichlet boundary conditions at the AdS boundary. Then we cast the linear perturbation equations and boundary conditions into the form L(ω)u = 0, with u the perturbation fields evaluated at the grid points by the pseudo-spectral method. The normal modes are obtained by the condition det[L(ω)] = 0, which can be further identified by the density plot of |det[L′(ω)]/det[L(ω)]|, with the prime the derivative with respect to ω. We demonstrate such a density plot in Figure 9, where the hydrodynamic mode is simply the closest mode to the origin, marked in red.
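As a rough illustration, such a scan over the complex ω plane might look as follows, where build_L is a hypothetical user-supplied routine assembling the collocation matrix L(ω) together with its boundary-condition rows; the sketch is ours.

```python
import numpy as np

def logdet_map(build_L, re_grid, im_grid):
    """Tabulate ln|det L(omega)| over a rectangle in the complex omega plane;
    normal modes det L(omega) = 0 appear as sharp dips of this map."""
    out = np.empty((len(im_grid), len(re_grid)))
    for i, wi in enumerate(im_grid):
        for j, wr in enumerate(re_grid):
            _, logdet = np.linalg.slogdet(build_L(wr + 1j * wi))
            out[i, j] = logdet
    return out

# e.g. scan = logdet_map(build_L, np.linspace(0, 3, 300), np.linspace(-0.5, 0.1, 60))
```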
Figure 11: The dispersion relation for the gapless Goldstone mode in the superfluid phase at µ = 6.5, where the sound speed v_s = 0.697 is obtained by fitting the long wave modes with ω₀ = v_s q.
Figure 12: The variation of the sound speed with respect to the chemical potential. When the chemical potential is much larger than the confining scale, the conformality is restored and the sound speed approaches the predicted value 1/√2.
Besides such a frequency domain analysis of the spectrum of normal modes, there is an alternative called time domain analysis, which we would like to elaborate on below. We first cast the equations of motion into the following Hamiltonian formalism
∂_t ψ = iA_t ψ + P, (5.31)

∂_t P = iA_t P − (z + A_x² + i∂_x A_x)ψ − 2iA_x ∂_x ψ + ∂_x²ψ − 3z²∂_z ψ + (1 − z³)∂_z²ψ, (5.32)

∂_t A_x = Π_x + ∂_x A_t, (5.33)

∂_t Π_x = i(ψ∂_x ψ̄ − ψ̄∂_x ψ) − 2A_x ψψ̄ − 3z²∂_z A_x + (1 − z³)∂_z²A_x, (5.34)

0 = (z³ − 1)∂_z²A_t + 3z²∂_z A_t + ∂_x Π_x − i(P̄ψ − P ψ̄), (5.35)

∂_t ∂_z A_t = −i(ψ∂_z ψ̄ − ψ̄∂_z ψ) + ∂_z ∂_x A_x. (5.36)
Then, with the assumption that the perturbation bulk fields take the form δ(t, z)e^{iqx}, the
perturbation equations on top of the superfluid phase are given by
∂_t δψ_r = −A_t δψ_i + δP_r, (5.37)

∂_t δψ_i = ψ_r δA_t + A_t δψ_r + δP_i, (5.38)

∂_t δP_r = A_t ψ_r δA_t − A_t δP_i − (z + q²)δψ_r − 3z²∂_z δψ_r + (1 − z³)∂_z²δψ_r, (5.39)

∂_t δP_i = −iqψ_r δA_x + A_t δP_r − (z + q²)δψ_i − 3z²∂_z δψ_i + (1 − z³)∂_z²δψ_i, (5.40)

∂_t δA_x = δΠ_x + iq δA_t, (5.41)

∂_t δΠ_x = 2iqψ_r δψ_i − 2ψ_r²δA_x − 3z²∂_z δA_x + (1 − z³)∂_z²δA_x, (5.42)

0 = (z³ − 1)∂_z²δA_t + 3z²∂_z δA_t + iq δΠ_x − 2ψ_r δP_i + 2A_t ψ_r δψ_r, (5.43)

∂_t ∂_z δA_t = 2∂_z ψ_r δψ_i − 2ψ_r ∂_z δψ_i + iq∂_z δA_x. (5.44)
As before, using the source-free boundary conditions for all the perturbation fields, we can obtain the temporal evolution of the perturbation fields for any given initial data by the Runge-Kutta method, where δA_t is solved by the constraint equation (5.43). The normal modes can then be identified by the peaks in the Fourier transformation of the evolving data. We demonstrate such a spectral plot in Figure 10. As expected, such a time domain analysis gives rise to the same result for the locations of normal modes as that by the frequency domain analysis.
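Schematically, this time domain pipeline might be organized as in the following sketch of ours, where rhs(t, u) is a hypothetical routine packing the right-hand sides of (5.37)-(5.42) on the grid (solving the constraint (5.43) for δA_t internally), and probe extracts the monitored quantity, e.g. δψ_i at z = 1.

```python
import numpy as np

def evolve(rhs, u0, dt, n_steps, probe):
    """Classical 4th-order Runge-Kutta evolution of the perturbation fields;
    probe(u) records the monitored quantity at every step."""
    u, signal = u0, []
    for n in range(n_steps):
        t = n * dt
        k1 = rhs(t, u)
        k2 = rhs(t + dt / 2, u + dt / 2 * k1)
        k3 = rhs(t + dt / 2, u + dt / 2 * k2)
        k4 = rhs(t + dt, u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        signal.append(probe(u))
    return np.array(signal)

def spectral_plot(signal, dt):
    """|FFT| of the (generally complex) evolved signal; peaks mark the normal modes."""
    amp = np.abs(np.fft.fft(signal * np.hanning(len(signal))))
    omega = 2 * np.pi * np.fft.fftfreq(len(signal), d=dt)
    order = np.argsort(omega)
    return omega[order], amp[order]
```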
Then the dispersion relation for the gapless Goldstone mode can be obtained and plotted in Figure 11, whereby the sound speed v_s can be obtained by the fitting formula ω₀ = v_s q. As shown in Figure 12, the sound speed increases with the chemical potential and saturates to the value 1/√2 predicted by conformal field theory when the chemical potential is much larger than the confining scale [39, 40, 41], which is reasonable since it is believed that the conformality is restored in this limit.
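The fit itself is elementary; assuming hypothetical arrays q_vals and omega0_vals holding the extracted hydrodynamic frequencies at small q, a least-squares slope through the origin gives the sound speed.

```python
import numpy as np

# omega_0 = v_s * q: least-squares slope through the origin,
# restricted to the long-wavelength (small q) data points.
v_s = np.dot(q_vals, omega0_vals) / np.dot(q_vals, q_vals)
```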
# 6. Concluding Remarks
Like any other unification in physics, the AdS/CFT correspondence has proven to be a unique tool to address various universal behaviors of near-equilibrium as well as far-from-equilibrium dynamics for a variety of strongly coupled systems, which would otherwise be hard to attack. In such applications, numerical computation has been playing a more and more important role, in the sense that not only can numerics leave us with conjectures to be proven and patterns to be understood analytically, but it also brings us to regimes where analytic treatment is not available at all.
In these lecture notes, we have touched only upon the very basics of the numerics in applied AdS/CFT. In addition, we work only in the probe limit in the concrete example we use to demonstrate how to apply AdS/CFT with numerics. The situation becomes a little more involved when the back reaction is taken into account. Regarding this, the readers are suggested to refer to [42] to see how to obtain stationary inhomogeneous solutions to the fully back-reacted Einstein equation by the Einstein-DeTurck method. On the other
hand, the readers are recommended to refer to [43] to see how to evolve the fully back-reacted dynamics, where, with a black hole as the initial data, it turns out that Eddington-like coordinates are preferred to Schwarzschild-like coordinates.
# Acknowledgments
H.Z. would like to thank the organizers of the Eleventh International Modave Summer School on Mathematical Physics held in Modave, Belgium, September 2015, where the lectures on which these notes are based were given. He is indebted to Nabil Iqbal for his valuable discussions at the summer school. H.Z. would also like to thank the organizers of the 2015 International School on Numerical Relativity and Gravitational Waves held in Daejeon, Korea, July 2015, where these lectures were geared to an audience mainly from the general relativity and gravity community. He is grateful to Keun-Young Kim, Kyung Kiu Kim, Miok Park, and Sang-Jin Sin for the enjoyable conversations during the school. H.Z. is also grateful to Ben Craps and Alex Sevrin for the fantastic infrastructure they provide at the HEP group of VUB and the very freedom as well as various opportunities they offer to him. M.G. is partially supported by NSFC with Grant Nos. 11235003, 11375026 and NCET-12-0054. C.N. is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2014R1A1A1003220) and the 2015 GIST Grant for the FARE Project (Further Advancement of Research and Education at GIST College). Y.T. is partially supported by NSFC with Grant No. 11475179. H.Z. is supported in part by the Belgian Federal Science Policy Office through the Interuniversity Attraction Pole P7/37, by FWO-Vlaanderen through the project G020714N, and by the Vrije Universiteit Brussel through the Strategic Research Program "High-Energy Physics". He is also an individual FWO Fellow supported by 12G3515N.
# References
[1] E. Witten, arXiv:hep-th/0106109.
[2] A. Strominger, arXiv:hep-th/0106113.
[3] M. Spradlin, A. Strominger, and A. Volovich, arXiv:hep-th/0110007.
[4] R. Britto, F. Cachazo, B. Feng, and E. Witten, Phys. Rev. Lett. 94, 181602(2005).
[5] N. Arkani-Hamed, F. Cachazo, and J. Kaplan, JHEP 1009, 016(2010).
[6] J. Maldacena, Adv. Theor. Math. Phys. 2, 231(1998).
[7] E. Witten, Adv. Theor. Math. Phys. 2, 253(1998).
[8] S. Gubser, I. R. Klebanov, and A. M. Polyakov, Phys. Lett. B 428, 105(1998).
[9] P. Breitenlohner and D. Z. Freedman, Annals Phys. 144, 249(1982).
[10] P. Breitenlohner and D. Z. Freedman, Phys. Lett. B 115, 197(1982).
[11] S. Ryu and T. Takayanagi, Phys. Rev. Lett. 96, 181602(2006).
[12] V. E. Hubeny, M. Rangamani, and T. Takayanagi, JHEP 0707, 062(2007).
[13] A. Lewkowycz and J. Maldacena, JHEP 08, 090(2013).
[14] R. M. Wald, Living. Rev. Rel. 4, 6(2001).
[15] J. D. Brown and M. Henneaux, Commun. Math. Phys. 104, 207(1986).
[16] A. Strominger, JHEP 02, 009(1998).
[17] J. D. Brown and J. W. York, Phys. Rev. D 47, 1407(1993).
[18] V. E. Hubeny, S. Minwalla, and M. Rangamani, arXiv:1107.5780.
[19] B. Swingle, Phys. Rev. D 86, 065007(2012).
[20] X. L. Qi, arXiv:1309.6282.
[21] J. Casalderrey-Solana, H. Liu, D. Mateos, K. Rajagopal, and U. A. Wiedemann, arXiv:1101.0618.
[22] U. Gursoy, E. Kiritsis, L. Mazzanti, G. Michalogiorgakis, and F. Nitti, Lect. Notes Phys. 828, 79(2011).
[23] S. A. Hartnoll, Class. Quant. Grav. 26, 224002(2009).
[24] J. McGreevy, Adv. High Energy Phys. 2010, 723105(2010).
[25] C. P. Herzog, J. Phys. A 42, 343001(2009).
[26] G. T. Horowitz, arXiv:1002.1722.
[27] N. Iqbal, H. Liu, and M. Mezei, arXiv:1110.3814.
[28] W. J. Li, Y. Tian, and H. Zhang, JHEP 07, 030(2013).
[29] N. Callebaut, B. Craps, F. Galli, D. C. Thompson, J. Vanhoof, J. Zaanen, and H. Zhang, JHEP 10, 172(2014).
[30] B. Craps, E. J. Lindgren, A. Taliotis, J. Vanhoof, and H. Zhang, Phys. Rev. D 90, 086004(2014).
[31] R. Li, Y. Tian, H. Zhang, and J. Zhao, Phys. Lett. B 750, 520(2015).
[32] Y. Du, C. Niu, Y. Tian, and H. Zhang, JHEP 12, 018(2015).
[33] Y. Du, S. Q. Lan, Y. Tian, and H. Zhang, JHEP 01, 016(2016).
[34] M. Guo, S. Q. Lan, C. Niu, Y. Tian, and H. Zhang, to appear.
[35] T. Nishioka, S. Ryu, and T. Takayanagi, JHEP 1003, 131(2010).
[36] X. Zhang, C. L. Hung, S. K. Tung, and C. Chin, Science 335, 1070(2012).
[37] I. R. Klebanov and E. Witten, Nucl. Phys. B 556, 89(1999).
[38] K. Skenderis, Class. Quant. Grav. 19, 5849(2002).
[39] C. P. Herzog, P. K. Kovtun, and D. T. Son, Phys. Rev. D 79, 066002(2009).
[40] A. Yarom, JHEP 0907, 070(2009).
[41] C. P. Herzog and A. Yarom, Phys. Rev. D 80, 106002(2009).
[42] O. J. C. Dias, J. E. Santos, and B. Way, arXiv:1510.02804.
[43] P. Chesler and L. G. Yaffe, JHEP 07, 086(2014).
# A Survey of Available Corpora for Building Data-Driven Dialogue Systems
Iulian Vlad Serban DIRO, Université de Montréal 2920 chemin de la Tour, Montréal, QC H3C 3J7, Canada
{IULIAN.VLAD.SERBAN} AT UMONTREAL DOT CA
Ryan Lowe Department of Computer Science, McGill University 3480 University st, Montréal, QC H3A 0E9, Canada
{RYAN.LOWE} AT MAIL DOT MCGILL DOT CA
Peter Henderson Department of Computer Science, McGill University 3480 University st, Montréal, QC H3A 0E9, Canada
{PETER.HENDERSON} AT MAIL DOT MCGILL DOT CA
Laurent Charlin Department of Computer Science, McGill University 3480 University st, Montréal, QC H3A 0E9, Canada
{LCHARLIN} AT CS DOT MCGILL DOT CA
Joelle Pineau Department of Computer Science, McGill University 3480 University st, Montréal, QC H3A 0E9, Canada
{JPINEAU} AT CS DOT MCGILL DOT CA
Editor: David Traum
# Abstract
During the past decade, several areas of speech and language understanding have witnessed substantial breakthroughs from the use of data-driven models. In the area of dialogue systems, the trend is less obvious, and most practical systems are still built through significant engineering and expert knowledge. Nevertheless, several recent results suggest that data-driven approaches are feasible and quite promising. To facilitate research in this area, we have carried out a wide survey of publicly available datasets suitable for data-driven learning of dialogue systems. We discuss important characteristics of these datasets, how they can be used to learn diverse dialogue strategies, and their other potential uses. We also examine methods for transfer learning between datasets and the use of external knowledge. Finally, we discuss appropriate choice of evaluation metrics for the learning objective.
# 1. Introduction
Dialogue systems, also known as interactive conversational agents, virtual agents or sometimes chatterbots, are useful in a wide range of applications ranging from technical support services to language learning tools and entertainment (Young et al., 2013; Shawar and Atwell, 2007b). Large-scale
data-driven methods, which use recorded data to automatically infer knowledge and strategies, are becoming increasingly important in speech and language understanding and generation. Speech recognition performance has increased tremendously over the last decade due to innovations in deep learning architectures (Hinton et al., 2012; Goodfellow et al., 2015). Similarly, a wide range of data-driven machine learning methods have been shown to be effective for natural language processing, including tasks relevant to dialogue, such as dialogue act classification (Reithinger and Klesen, 1997; Stolcke et al., 2000), dialogue state tracking (Thomson and Young, 2010; Wang and Lemon, 2013; Ren et al., 2013; Henderson et al., 2013; Williams et al., 2013; Henderson et al., 2014c; Kim et al., 2015), natural language generation (Langkilde and Knight, 1998; Oh and Rudnicky, 2000; Walker et al., 2002; Ratnaparkhi, 2002; Stent et al., 2004; Rieser and Lemon, 2010; Mairesse et al., 2010; Mairesse and Young, 2014; Wen et al., 2015; Sharma et al., 2016), and dialogue policy learning (Young et al., 2013). We hypothesize that, in general, much of the recent progress is due to the availability of large public datasets, increased computing power, and new machine learning models, such as neural network architectures. To facilitate further research on building data-driven dialogue systems, this paper presents a broad survey of available dialogue corpora.
Corpus-based learning is not the only approach to training dialogue systems. Researchers have also proposed training dialogue systems online through live interaction with humans, and offline using user simulator models and reinforcement learning methods (Levin et al., 1997; Georgila et al., 2006; Paek, 2006; Schatzmann et al., 2007; Jung et al., 2009; Schatzmann and Young, 2009; Gašić et al., 2010, 2011; Daubigney et al., 2012; Gašić et al., 2012; Su et al., 2013; Gasic et al., 2013; Pietquin and Hastie, 2013; Young et al., 2013; Mohan and Laird, 2014; Su et al., 2015; Piot et al., 2015; Cuayáhuitl et al., 2015; Hiraoka et al., 2016; Fatemi et al., 2016; Asri et al., 2016; Williams and Zweig, 2016; Su et al., 2016). However, these approaches are beyond the scope of this survey.
This survey is structured as follows. In the next section, we give a high-level overview of dialogue systems. We briefly discuss the purpose and goal of dialogue systems. Then we describe the individual system components that are relevant for data-driven approaches as well as holistic end-to-end dialogue systems. In Section 3, we discuss types of dialogue interactions and aspects relevant to building data-driven dialogue systems, from a corpus perspective, as well as modalities recorded in each corpus (e.g. text, speech and video). We further discuss corpora constructed from both human-human and human-machine interactions, corpora constructed using natural versus unnatural or constrained settings, and corpora constructed using works of fiction. In Section 4, we present our survey over dialogue corpora according to the categories laid out in Sections 2-3. In particular, we categorize the corpora based on whether dialogues are between humans or between a human and a machine, and whether the dialogues are in written or spoken language. We discuss each corpus in turn while emphasizing how the dialogues were generated and collected, the topic of the dialogues, and the size of the entire corpus. In Section 5, we discuss issues related to: corpus size, transfer learning between corpora, incorporation of external knowledge into the dialogue system, data-driven learning for contextualization and personalization, and automatic evaluation metrics. We conclude the survey in Section 6.
# 2. Characteristics of Data-Driven Dialogue Systems
This section offers a broad characterization of data-driven dialogue systems, which structures our presentation of the datasets.
# 2.1 An Overview of Dialogue Systems
The standard architecture for dialogue systems, shown in Figure 1, incorporates a Speech Recognizer, Language Interpreter, State Tracker, Response Generator, Natural Language Generator, and Speech Synthesizer. In the case of text-based (written) dialogues, the Speech Recognizer and Speech Synthesizer can be left out. While some of the literature on dialogue systems identifies only the State Tracker and Response Selection components as belonging inside the dialogue manager (Young, 2000), throughout this paper we adopt a broader view where language understanding and generation are incorporated within the dialogue system. This leaves space for the development and analysis of end-to-end dialogue systems (Ritter et al., 2011; Vinyals and Le, 2015; Lowe et al., 2015a; Sordoni et al., 2015b; Shang et al., 2015; Li et al., 2015; Serban et al., 2016; Serban et al., 2017b,a; Dodge et al., 2015; Williams and Zweig, 2016; Weston, 2016).
We focus on corpus-based data-driven dialogue systems. That is, systems composed of machine learning solutions using corpora constructed from real-world data. These system components have variables or parameters that are optimized based on statistics observed in dialogue corpora. In particular, we focus on systems where the majority of variables and parameters are optimized. Such corpus-based data-driven systems should be contrasted to systems where each component is hand-crafted by engineers — for example, components defined by an a priori fixed set of deterministic rules (e.g. Weizenbaum (1966); McGlashan et al. (1992)). These systems should also be contrasted with systems learning online, such as when the free variables and parameters are optimized directly based on interactions with humans (e.g. Gašić et al. (2011)). Still, it is worth noting that it is possible to combine different types of learning within one system. For example, some parameters may be learned using statistics observed in a corpus, while other parameters may be learned through interactions with humans.
While there are substantial opportunities to improve each of the components in Figure 1 through (corpus-based) data-driven approaches, within this survey we focus primarily on datasets suitable to enhance the components inside the Dialogue System box. It is worth noting that the Natural Language Interpreter and Generator are core problems in Natural Language Processing with applications well beyond dialogue systems.
[Figure 1 depicts the pipeline: Automatic Speech Recognizer, Natural Language Interpreter, Dialogue State Tracker, Dialogue Response Selection, Natural Language Generator, and Text-To-Speech Synthesizer, with the non-speech components grouped inside the Dialogue System box.]
Figure 1: Dialogue System Diagram
# 2.2 Tasks and Objectives
Dialogue systems have been built for a wide range of purposes. A useful distinction can be made between goal-driven dialogue systems, such as technical support services, and non-goal-driven dialogue systems, such as language learning tools or computer game characters. Although both types of systems do in fact have objectives, typically the goal-driven dialogue systems have a well-defined measure of performance that is explicitly related to task completion.
Non-goal-driven Dialogue Systems. Research on non-goal-driven dialogue systems goes back to the mid-60s. It began, perhaps, with Weizenbaum's famous program ELIZA, a system based only on simple text parsing rules that managed to convincingly mimic a Rogerian psychotherapist by persistently rephrasing statements or asking questions (Weizenbaum, 1966). This line of research was continued by Colby (1981), who used simple text parsing rules to construct the dialogue system PARRY, which managed to mimic the pathological behaviour of a paranoid patient to the extent that clinicians could not distinguish it from real patients. However, neither of these two systems used data-driven learning approaches. Later work, such as the MegaHal system by Hutchens and Alder (1998), started to apply data-driven methods (Shawar and Atwell, 2007b). Hutchens and Alder (1998) proposed modelling dialogue as a stochastic sequence of discrete symbols (words) using 4th-order Markov chains. Given a user utterance, their system generated a response by following a two-step procedure: first, a sequence of topic keywords, used to create a seed reply, was extracted from the user's utterance; second, starting from the seed reply, two separate Markov chains generated the words preceding and following the seed keywords. This procedure produced many candidate responses, from which the highest entropy response was returned to the user. Under the assumption that the coverage of different topics and general fluency is of primary importance, the 4th-order Markov chains were trained on a mixture of data sources ranging from real and fictive dialogues to arbitrary texts. Unfortunately, until very recently, such data-driven dialogue systems were not applied widely in real-world applications (Perez-Marin and Pascual-Nieto, 2011; Shawar and Atwell, 2007b). Part of the reason for this might be due to their non-goal-driven nature, which made them hard to commercialize. Another barrier to commercialization might have been the lack of theoretical and empirical understanding of such systems. Nevertheless, in a similar spirit over the past few years, neural network architectures trained on large-scale corpora have been investigated. These models have demonstrated promising results for several non-goal-driven dialogue tasks (Ritter et al., 2011; Vinyals and Le, 2015; Lowe et al., 2015a; Sordoni et al., 2015b; Shang et al., 2015; Li et al., 2015; Serban et al., 2016; Serban et al., 2017b,a; Dodge et al., 2015; Williams and Zweig, 2016; Weston, 2016). However, they require having sufficiently large corpora — in the hundreds of millions or even billions of words — in order to achieve these results.
Goal-driven Dialogue Systems. Initial work on goal-driven dialogue systems was primarily based on deterministic hand-crafted rules coupled with learned speech recognition models (e.g. off-the-shelf speech recognition software). One example is the SUNDIAL project, which was capable of providing timetable information about trains and airplanes, as well as taking airplane reservations (Aust et al., 1995; McGlashan et al., 1992; Simpson and Fraser, 1993). Later, machine learning techniques were used to classify the intention (or need) of the user, as well as to bridge the gap between text and speech (e.g. by taking into account uncertainty related to the outputs of the speech recognition model) (Gorin et al., 1997). Research in this area started to take off during the mid 1990s, when researchers began to formulate dialogue as a sequential decision making problem based on Markov decision processes (Singh et al., 1999; Young et al., 2013; Paek, 2006; Pieraccini
et al., 2009). Unlike non-goal-driven systems, industry played a major role and enabled researchers to have access to (at the time) relatively large dialogue corpora for certain tasks, such as recordings from technical support call centres. Although research in the past decade has continued to push the field towards data-driven approaches, commercial systems are highly domain-specific and heavily based on hand-crafted rules and features (Young et al., 2013). In particular, many of the tasks and datasets available are constrained to narrow domains.
# 2.3 Learning Dialogue System Components
Modern dialogue systems consist of several components, as illustrated in Figure 1. Several of the dialogue system components can be learned through so-called discriminative models, which aim to predict labels or annotations relevant to other parts of the dialogue system. Discriminative models fall into the machine learning paradigm of supervised learning. When the labels of interest are discrete, the models are called classification models, which is the most common case. When the labels of interest are continuous, the models are called regression models. One popular approach for tackling the discriminative task is to learn a probabilistic model of the labels conditioned on the available information P(Y|X), where Y is the label of interest (e.g. a discrete variable representing the user intent) and X is the available information (e.g. utterances in the conversation). Another popular approach is to use maximum margin classifiers, such as support vector machines (Cristianini and Shawe-Taylor, 2000).
Although it is beyond the scope of this paper to provide a survey over such system components, we now give a brief example of each component. This will motivate and facilitate the dataset analysis.
Natural Language Interpreter. An example of a discriminative model is the user intent classification model, which acts as the Natural Language Interpreter. This model is trained to predict the intent of a user conditioned on the utterances of that user. In this case, the intent is called the label (or target or output), and the conditioned utterances are called the conditioning variables (or inputs). Training this model requires examples of pairs of user utterances and intentions. One way to obtain these example pairs would be to first record written dialogues between humans carrying out a task, and then to have humans annotate each utterance with its intention label. Depending on the complexity of the domain, this may require training the human annotators to reach a certain level of agreement between annotators.
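As a toy illustration of such a discriminative component, the following sketch uses scikit-learn; the utterance/intent pairs are invented stand-ins for an annotated corpus, and the pipeline is our own minimal example rather than a system from the literature.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented utterance/intent pairs standing in for human-annotated data.
utterances = [
    "when does the next bus leave",
    "i want to book a table for two",
    "cancel my reservation please",
]
intents = ["timetable_query", "make_booking", "cancel_booking"]

# TF-IDF features + logistic regression give a conditional model P(Y | X).
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, intents)
print(clf.predict(["please book me a table"]))
```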
Dialogue State Tracker. A Dialogue State Tracker might similarly be implemented as a classification model (Williams et al., 2013). At any given point in the dialogue, such a model will take as input all the user utterances and user intention labels estimated by a Natural Language Interpreter model so far and output a distribution over possible dialogue states. One common way to represent dialogue states is through slot-value pairs. For example, a dialogue system providing timetable information for trains might have three different slots: departure city, arrival city, and departure time. Each slot may take one of several discrete values (e.g. departure city could take values from a list of city names). The task of the Dialogue State Tracker is then to output a distribution over every possible combination of slot-value pairs. This distribution — or alternatively, the K dialogue states with the highest probability — may then be used by other parts of the dialogue system. The Dialogue State Tracker model can be trained on examples of dialogue utterances and dialogue states labelled by humans.
Dialogue Response Selection. Given the dialogue state distribution provided by the Dialogue State Tracker, the Dialogue Response Selection component must select the correct system response (or action). This component may also be implemented as a classification model that maps dialogue states to a probability over a discrete set of responses. For example, in a dialogue system providing timetable information for trains, the set of responses might include providing information (e.g. providing the departure time of the next train with a specific departure and arrival city) and clarification questions (e.g. asking the user to re-state their departure city). The model may be trained on example pairs of dialogue states and responses.
Natural Language Generator. Given a dialogue system response (e.g. a response providing the departure time of a train), the Natural Language Generator must output the natural language utterance of the system. This has often been implemented in commercial goal-driven dialogue systems using hand-crafted rules. Another option is to learn a discriminative model to select a natural language response. In this case, the output space may be defined as a set of so-called surface form sentences (e.g. "The requested train leaves city X at time Y", where X and Y are placeholder values). Given the system response, the classification model must choose an appropriate surface form. Afterwards, the chosen surface form will have the placeholder values substituted in appropriately (e.g. X will be replaced by the appropriate city name through a database look-up). As with other classification models, this model may be trained on example pairs of system responses and surface forms.
Discriminative models have allowed goal-driven dialogue systems to make significant progress (Williams et al., 2013). With proper annotations, discriminative models can be evaluated automatically and accurately. Furthermore, once trained on a given dataset, these models may be plugged into a fully-deployed dialogue system (e.g. a classification model for user intents may be used as input to a dialogue state tracker).
# 2.4 End-to-end Dialogue Systems
Not all dialogue systems conform to the architecture shown in Figure 1. In particular, so-called end-to-end dialogue system architectures based on neural networks have shown promising results on several dialogue tasks (Ritter et al., 2011; Vinyals and Le, 2015; Lowe et al., 2015a; Sordoni et al., 2015b; Shang et al., 2015; Li et al., 2015; Serban et al., 2016; Serban et al., 2017b,a; Dodge et al., 2015). In their purest form, these models take as input a dialogue in text form and output a response (or a distribution over responses). We call these systems end-to-end dialogue systems because they possess two important properties. First, they do not contain or require learning any sub-components (such as Natural Language Interpreters or Dialogue State Trackers). Consequently, there is no need to collect intermediate labels (e.g. user intention or dialogue state labels). Second, all model parameters are optimized w.r.t. a single objective function. Often the objective function chosen is maximum log-likelihood (or cross-entropy) on a fixed corpus of dialogues. Although in the original formulation these models depended only on the dialogue context, they may be extended to also depend on outputs from other components (e.g. outputs from the speech recognition tracker), and on external knowledge (e.g. external databases).
End-to-end dialogue systems can be divided into two categories: those that select deterministically from a fixed set of possible responses, and those that attempt to generate responses by keeping a posterior distribution over possible utterances. Systems in the first category map the dialogue history, tracker outputs and external knowledge (e.g. a database, which can be queried by the system)
to a response action:
f_θ : {dialogue history, tracker outputs, external knowledge} → action a_t, (1)
where a_t is the dialogue system response action at time t, and θ is the set of parameters that defines f. Information retrieval and ranking-based systems — systems that search through a database of dialogues and pick responses with the most similar context, such as the model proposed by Banchs and Li (2012) — belong to this category. In this case, the mapping function f_θ projects the dialogue history into a Euclidean space (e.g. using TF-IDF bag-of-words representations). The response is then found by projecting all potential responses into the same Euclidean space, and the response closest to the desirable response region is selected. The neural network proposed by Lowe et al. (2015a) also belongs to this category. In this case, the dialogue history is projected into a Euclidean space using a recurrent neural network encoding the dialogue word-by-word. Similarly, a set of candidate responses are mapped into the same Euclidean space using another recurrent neural network encoding the response word-by-word. Finally, a relevance score is computed between the dialogue context and each candidate response, and the response with the highest score is returned. Hybrid or combined models, such as the model built on both a phrase-based statistical machine translation system and a recurrent neural network proposed by Sordoni et al. (2015b), also belong to this category. In this case, a response is generated by deterministically creating a fixed number of answers using the machine translation system and then picking the response according to the score given by a neural network. Although both of its sub-components are based on probabilistic models, the final model does not construct a probability distribution over all possible responses.¹
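To make the retrieval idea concrete, the following is a minimal sketch of our own (not the actual system of Banchs and Li (2012)): dialogue contexts are projected into TF-IDF space, and the response attached to the stored context most similar to the query is returned. The (context, response) pairs are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented (context, response) pairs standing in for a real dialogue corpus.
contexts = ["hi how are you", "what time is the next bus", "thanks a lot"]
responses = ["fine thanks and you", "it leaves at six", "you are welcome"]

vectorizer = TfidfVectorizer().fit(contexts)
C = vectorizer.transform(contexts)            # rows are l2-normalized by default

def retrieve(query):
    q = vectorizer.transform([query])
    similarity = (C @ q.T).toarray().ravel()  # dot product = cosine similarity here
    return responses[int(np.argmax(similarity))]

print(retrieve("when is the next bus"))       # -> "it leaves at six"
```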
In contrast to a deterministic system, a generative system explicitly computes a full posterior probability distribution over possible system response actions at every turn:
P_θ(action a_t | dialogue history, tracker outputs, external knowledge). (2)
Systems based on generative recurrent neural networks belong to this category (Vinyals and Le, 2015). By breaking down eq. (2) into a product of probabilities over words, responses can be generated by sampling word-by-word from their probability distribution. Unlike the deterministic response models, these systems are also able to generate entirely novel responses (e.g. by sampling word-by-word). Highly probable responses, i.e. the response with the highest probability, can further be generated by using a method known as beam-search (Graves, 2012). These systems project each word into a Euclidean space (known as a word embedding) (Bengio et al., 2003); they also project the dialogue history and external knowledge into a Euclidean space (Wen et al., 2015; Lowe et al., 2015b). Similarly, the system proposed by Ritter et al. (2011) belongs to this category. Their model uses a statistical machine translation model to map a dialogue history to its response. When trained solely on text, these generative models can be viewed as unsupervised learning models, because they aim to reproduce data distributions. In other words, the models learn to assign a probability to every possible conversation, and since they generate responses word by word, they must learn to simulate the behaviour of the agents in the training corpus.
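A minimal sketch of this word-by-word sampling procedure is given below; next_word_probs is a hypothetical stand-in for a trained generative model, returning the vocabulary together with the distribution over the next word given the context and the words generated so far.

```python
import numpy as np

def sample_response(next_word_probs, context, max_len=20, eos="</s>"):
    """Draw a response word-by-word from the model's conditional distribution."""
    words = []
    for _ in range(max_len):
        vocab, p = next_word_probs(context, words)
        w = np.random.choice(vocab, p=p)
        if w == eos:                 # end-of-sequence token terminates sampling
            break
        words.append(w)
    return " ".join(words)
```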
Early reinforcement learning dialogue systems with stochastic policies also belong to this category (the NJFun system (Singh et al., 2002) is an example of this). In contrast to the neural network and statistical machine translation systems, these reinforcement learning systems typically
1. Although the model does not require intermediate labels, it consists of sub-components whose parameters are trained with different objective functions. Therefore, strictly speaking, this is not an end-to-end model.
have very small sets of possible hand-crafted system states (e.g. hand-crafted features describing the dialogue state). The action space is also limited to a small set of pre-defined responses. This makes it possible to apply established reinforcement learning algorithms to train them either online or offline, however it also severely limits their application area. As Singh et al. (Singh et al., 2002, p.5) remark: "We view the design of an appropriate state space as application-dependent, and a task for a skilled system designer."
# 3. Dialogue Interaction Types & Aspects
This section provides a high-level discussion of different types of dialogue interactions and their salient aspects. The categorization of dialogues is useful for understanding the utility of various datasets for particular applications, as well as for grouping these datasets together to demonstrate available corpora in a given area.
# 3.1 Written, Spoken & Multi-modal Corpora
An important distinction between dialogue corpora is whether participants (interlocutors) interact through written language, spoken language, or in a multi-modal setting (e.g. using both speech and visual modalities). Written and spoken language differ substantially w.r.t. their linguistic properties. Spoken language tends to be less formal, containing lower information content and many more pronouns than written language (Carter and McCarthy, 2006; Biber and Finegan, 2001, 1986). In particular, the differences are magnified when written language is compared to spoken face-to-face conversations, which are multi-modal and highly socially situated. As Biber and Finegan (1986) observed, pronouns, questions, and contradictions, as well as that-clauses and if-clauses, appear with a high frequency in face-to-face conversations. Forchini (2012) summarized these differences: "... studies show that face-to-face conversation is interpersonal, situation-dependent, has no narrative concern or, as Biber and Finegan (1986) put it, is a highly interactive, situated and immediate text type..." Due to these differences between spoken and written language, we will emphasize the distinction between dialogue corpora in written and spoken language in the following sections.
Similarly, dialogues involving visual and other modalities differ from dialogues without these modalities (Card et al., 1983; Goodwin, 1981). When a visual modality is available — for example, when two human interlocutors converse face-to-face — body language and eye gaze have a significant impact on what is said and how it is said (Gibson and Pick, 1963; Lord and Haith, 1974; Cooper, 1974; Chartrand and Bargh, 1999; de Kok et al., 2013). Aside from the visual modality, dialogue systems may also incorporate other situational modalities, including aspects of virtual environments (Rickel and Johnson, 1999; Traum and Rickel, 2002) and user profiles (Li et al., 2016).
# 3.2 Human-Human Vs. Human-Machine Corpora
Another important distinction between dialogue datasets resides in the types of interlocutors — notably, whether it involves interactions between two humans, or between a human and a computer². The distinction is important because current artificial dialogue systems are significantly constrained.
2. Machine-machine dialogue corpora are not of interest to us, because they typically differ significantly from natural human language. Furthermore, user simulation models are outside the scope of this survey.
These systems do not produce nearly the same distribution of possible responses as humans do under equivalent circumstances. As stated by Williams and Young (2007):
(Human-human conversation) does not contain the same distribution of understanding errors, and human–human turn-taking is much richer than human-machine dialog. As a result, human-machine dialogue exhibits very different traits than human-human dialogue (Doran et al., 2001; Moore and Browning, 1992).
The expectation a human interlocutor begins with, and the interface through which they interact, also affect the nature of the conversation (J. and D., 1988).
For goal-driven settings, Williams and Young (2007) have previously argued against building data-driven dialogue systems using human-human dialogues: "... using human-human conversation data is not appropriate because it does not contain the same distribution of understanding errors, and because human-human turn-taking is much richer than human-machine dialog." This line of reasoning seems particularly applicable to spoken dialogue systems, where speech recognition errors can have a critical impact on performance and therefore must be taken into account when learning the dialogue model. The argument is also relevant to goal-driven dialogue systems, where an effective dialogue model can often be learned using reinforcement learning techniques. Williams and Young (2007) also argue against learning from corpora generated between humans and existing dialogue systems: "While it would be possible to use a corpus collected from an existing spoken dialogue system, supervised learning would simply learn to approximate the policy used by that spoken dialogue system and an overall performance improvement would therefore be unlikely."
Thus, it appears, for goal-driven spoken dialogue systems in particular, that the most effective strategy is learning online through interaction with real users. Nonetheless, there exists useful human-machine corpora where the interacting machine uses a stochastic policy that can generate sufficient coverage of the task (e.g. enough good and enough bad dialogue examples) to allow an effective dialogue model to be learned. In this case, the goal is to learn a policy that is eventually better than the original stochastic policy used to generate the corpus through a process known as bootstrapping.
In this survey we focus on data-driven learning from human-human and human-machine dialogue corpora. Despite the advantages of learning online through interactions with real users, learning based on human-human dialogue corpora may be more suitable for open domain dialogue systems because they reflect natural dialogue interactions. By natural dialogues, we mean conversations that are unconstrained and unscripted, e.g. between interlocutors who are not instructed to carry out a particular task, to follow a series of instructions, or to act out a scripted dialogue. In this setting, the dialogue process is relatively unaffected by researchers, e.g. the interlocutors are not interrupted by question prompts in the middle of a dialogue. As can be expected, such conversations include a significant amount of turn-taking, pauses and common grounding phenomena (Clark and Brennan, 1991). Additionally, they are more diverse, and open up the possibility for the model to learn to understand natural language.
# 3.3 Natural Vs. Unnatural Corpora
The way in which a dialogue corpus is generated and collected can have a significant influence on the trained data-driven dialogue system. In the case of human-human dialogues, an ideal corpus should closely resemble natural dialogues between humans. Arguably, this is the case when conversations
between humans are recorded and transcribed, and when the humans in the dialogue represent the true population of users with whom the dialogue system is intended to interact. It is even better if they are unaware of the fact that they are being recorded, but this is not always possible due to ethical considerations and resource constraints.
Due to ethical considerations and resource constraints, researchers may be forced to inform the human interlocutors that they are being recorded or to set up artificial experiments in which they hire humans and instruct them to carry out a particular task by interacting with a dialogue system. In these cases, there is no guarantee that the interactions in the corpus will reflect true interactions, since the hired humans may behave differently from the true user population. One factor that may cause behavioural differences is the fact that the hired humans may not share the same intentions and motivations as the true user population (Young et al., 2013). The unnaturalness may be further exacerbated by the hiring process, as well as the platform through which they interact. Such factors are becoming more prevalent as researchers increasingly rely on crowdsourcing platforms, such as Amazon Mechanical Turk, to collect and evaluate dialogue data (Jurcicek et al., 2011).
In the case of Wizard-of-Oz experiments (Bohus and Rudnicky, 2008; Petrik, 2004), a human thinks (s)he is speaking to a machine, but a human operator is in fact controlling the dialogue system. This enables the generation of datasets that are closer in nature to the dialogues humans may wish to have with a good AI dialogue system. Unfortunately, such experiments are expensive and time-consuming to carry out. Ultimately the impact of any unnaturalness in the dialogues depends on the task and context in which the dialogue system is deployed.
# 3.4 Corpora from Fiction
It is also possible to use artificial dialogue corpora for data-driven learning. This includes corpora based on works of fiction such as novels, movie manuscripts and audio subtitles. However, unlike transcribed human-human conversations, novels, movie manuscripts, and audio subtitles depend upon events outside the current conversation, which are not observed. This makes data-driven learning more difficult because the dialogue system has to account for unknown factors. The same problem is also observed in certain other media, such as microblogging websites (e.g. Twitter and Weibo), where conversations also may depend on external unobserved events.
Nevertheless, recent studies have found that spoken language in movies resembles spontaneous human spoken language (Forchini, 2009). Although movie dialogues are explicitly written to be spoken and contain certain artificial elements, many of the linguistic and paralinguistic features contained within the dialogues are similar to natural spoken language, including dialogue acts such as turn-taking and reciprocity (e.g. returning a greeting when greeted). The artificial differences that exist may even be helpful for data-driven dialogue learning since movie dialogues are more compact, follow a steady rhythm, and contain less garbling and repetition, all while still presenting a clear event or message to the viewer (Dose, 2013; Forchini, 2009, 2012). Unlike dialogues extracted from Wizard-of-Oz human experiments, movie dialogues span many different topics and occur in many different environments (Webb, 2010). They contain different actors with different intentions and relationships to one another, which could potentially allow a data-driven dialogue system to learn to personalize itself to different users by making use of different interaction patterns (Li et al., 2016).
# 3.5 Corpus Size
As in other machine learning applications such as machine translation (Al-Onaizan et al., 2000; Gülçehre et al., 2015) and speech recognition (Deng and Li, 2013; Bengio et al., 2014), the size of the dialogue corpus is important for building an effective data-driven dialogue system (Lowe et al., 2015a; Serban et al., 2016).
There are two primary perspectives on the importance of dataset size for building data-driven dialogue systems. The first perspective comes from the machine learning literature: larger datasets place constraints on the dialogue model trained from that data. Datasets with few examples may require strong structural priors placed on the model, such as using a modular system, while large datasets can be used to train end-to-end dialogue systems with less a priori structure. The second comes from a statistical natural language processing perspective: since the statistical complexity of a corpus grows with the linguistic diversity and number of topics, the number of examples required by a machine learning algorithm to model the patterns in it will also grow with the linguistic diversity and number of topics. Consider two small datasets with the same number of dialogues in the domain of bus schedule information: in one dataset the conversations between the users and operator are natural, and the operator can improvise and chitchat; in the other dataset, the operator reads from a script to provide the bus information. Despite having the same size, the second dataset will have less linguistic diversity and not include chitchat topics. Therefore, it will be easier to train a data-driven dialogue system mimicking the behaviour of the operator in the second dataset, however it will also exhibit a highly pedantic style and not be able to chitchat. In addition to this, to have an effective discussion between any two agents, their common knowledge must be represented and understood by both parties. The process of establishing this common knowledge, also known as grounding, is especially critical to repair misunderstandings between humans and dialogue systems (Cahn and Brennan, 1999). Since the number of misunderstandings can grow with the lexical diversity and number of topics (e.g. misunderstanding the paraphrase of an existing word, or misunderstanding a rarely seen keyword), the number of examples required to repair these grows with linguistic diversity and topics. In particular, the effect of linguistic diversity has been observed in practice: Vinyals and Le (2015) train a simple encoder-decoder neural network on a proprietary dataset of technical support dialogues. Although it has a similar size and purpose as the Ubuntu Dialogue Corpus (Lowe et al., 2015a), the qualitative examples shown by Vinyals and Le (2015) are significantly superior to those obtained by more complex models on the Ubuntu Corpus (Serban et al., 2017a). This result may likely be explained in part due to the fact that technical support operators often follow a comprehensive script for solving problems. As such, the script would reduce the linguistic diversity of their responses.
Furthermore, since the majority of human-human dialogues are multi-modal and highly ambiguous in nature (Chartrand and Bargh, 1999; de Kok et al., 2013), the size of the corpus may compensate for some of the ambiguity and missing modalities. If the corpus is sufficiently large, then the resolved ambiguities and missing modalities may, for example, be approximated using latent stochastic variables (Serban et al., 2017b). Thus, we include corpus size as a dimension of analysis. We also discuss the benefits and drawbacks of several popular large-scale datasets in Section 5.1.
# 4. Available Dialogue Datasets
There is a vast amount of data available documenting human communication. Much of this data could be used — perhaps after some pre-processing — to train a dialogue system. However, covering all such sources of data would be infeasible. Thus, we restrict the scope of this survey to datasets that have already been used to study dialogue or build dialogue systems, and to very large corpora of interactions — that may or may not be strictly considered dialogue datasets — which could be leveraged in the near future to build more sophisticated data-driven dialogue models. We restrict the selection further to contain only corpora generated from spoken or written English, and to corpora which, to the best of our knowledge, either are publicly available or will be made available in the near future. We first give a brief overview of each of the considered corpora, and later highlight some of the more promising examples, explaining how they could be used to further dialogue research.³
The dialogue datasets analyzed in this paper are listed in Tables 1-5. Column features indicate properties of the datasets, including the number of dialogues, average dialogue length, number of words, whether the interactions are between humans or with an automated system, and whether the dialogues are written or spoken. Below, we discuss qualitative features of the datasets, while statistics can be found in the aforementioned tables.
# 4.1 Human-Machine Corpora
As discussed in Subsection 3.2, an important distinction between dialogue datasets is whether they consist of dialogues between two humans or between a human and a machine. Thus, we begin by outlining some of the existing human-machine corpora in several categories based on the types of systems the humans interact with: Restaurant and Travel Information, Open-Domain Knowledge Retrieval, and Other Specialized systems. Note, we also include human-human corpora here where one human plays the role of the machine in a Wizard-of-Oz fashion.
# 4.1.1 RESTAURANT AND TRAVEL INFORMATION
One common theme in human-machine language datasets is interaction with systems which provide restaurant or travel information. Here we'll briefly describe some human-machine dialogue datasets in this domain.
One of the most popular recent sources of such data has come from the datasets for structured dialogue prediction released in conjunction with the Dialog State Tracking Challenge (DSTC) (Williams et al., 2013). As the name implies, these datasets are used to learn a strategy for the Dialogue State Tracker (sometimes called "belief tracking"), which involves estimating the intentions of a user throughout a dialog. State tracking is useful as it can increase the robustness of speech recognition systems, and can provide an implementable framework for real-world dialogue systems. Particularly in the context of goal-oriented dialogue systems (such as those providing travel and restaurant information), state tracking is necessary for creating coherent conversational interfaces. As such, the first three datasets in the DSTC — referred to as DSTC1, DSTC2, and DSTC3 respectively — are medium-sized spoken datasets obtained from human-machine interactions with
3. We form a live list of the corpora discussed in this work, along with links to downloads, at: http://breakend.github.io/DialogDatasets. Pull requests can be made to the Github repository (https://github.com/Breakend/DialogDatasets) hosting the website for continuing updates to the list of corpora.
restaurant and travel information systems. All datasets provide labels specifying the current goal and desired action of the system.
DSTC1 (Williams et al., 2013) features conversations with an automated bus information interface, where users request bus routes from the system and the system responds with clarifying queries or the desired information. DSTC2 introduces changing user goals in a restaurant booking system, while trying to provide a desired reservation (Henderson et al., 2014b). DSTC3 introduces a small amount of labelled data in the domain of tourist information. It is intended to be used in conjunction with the DSTC2 dataset as a domain adaptation problem (Henderson et al., 2014a).
The Carnegie Mellon Communicator Corpus (Bennett and Rudnicky, 2002) also contains human-machine interactions with a travel booking system. It is a medium-sized dataset of interactions with a system providing up-to-the-minute flight information, hotel information, and car rentals. Conversations with the system were transcribed, along with the user's comments at the end of the interaction.
The ATIS (Air Travel Information System) Pilot Corpus (Hemphill et al., 1990) is one of the first human-machine corpora. It consists of interactions, lasting about 40 minutes each, between human participants and a travel-type booking system, secretly operated by humans. Unlike the Carnegie Mellon Communicator Corpus, it only contains 1041 utterances.
In the Maluuba Frames Corpus (El Asri et al., 2017), one user plays the role of a conversational agent in a Wizard-of-Oz fashion, while the other user is tasked with finding available travel or vacation accommodations according to a pre-specified task. The Wizard is provided with a knowledge database which records their actions. Semantic frames are annotated in addition to actions which the Wizard performed on the database to accompany a line of dialogue. In this way, the Frames corpus aims to track decision-making processes in travel- and hotel-booking through natural dialog.
# 4.1.2 OPEN-DOMAIN KNOWLEDGE RETRIEVAL
Knowledge retrieval and Question & Answer (QA) corpora form a broad class of corpora that we will not extensively review here. Instead, we include only those QA corpora which explicitly record interactions of humans with existing systems. The Ritel corpus (Rosset and Petel, 2006) is a small dataset of 528 dialogs collected with the Wizard-of-Oz Ritel platform. The project's purpose was to integrate spoken language dialogue systems with open-domain information retrieval systems, with the end goal of allowing humans to ask general questions and iteratively refine their search. The questions in the corpus mostly revolve around politics and the economy, such as "Who is currently presiding the Senate?", along with some conversations about arts- and science-related topics.
Other similar open-domain corpora in this area include WikiQA (Yang et al., 2015) and MS MARCO (Nguyen et al., 2016), which compile responses from automated Bing searches and human annotators. However, these do not record dialogs, but rather simply gather possible responses to queries. As such, we will not discuss these datasets further, but rather mention them briefly as examples of other open-domain corpora in the field.
Table 1: Human-machine dialogue datasets, with per-dataset type, topics, average number of turns, and total numbers of dialogues and words. Datasets listed: DSTC1 (Williams et al., 2013), DSTC2 (Henderson et al., 2014b), DSTC3 (Henderson et al., 2014a), the CMU Communicator Corpus (Bennett and Rudnicky, 2002), the ATIS Pilot Corpus (Hemphill et al., 1990), the Ritel Corpus (Rosset and Petel, 2006), the DIALOG mathematical proofs corpus (Wolska et al., 2004), the MATCH corpus (Georgila et al., 2010), and the Maluuba Frames corpus (El Asri et al., 2017). Starred (*) numbers are approximated based on the average number of words per utterance. Daggered (†) entries indicate Wizard-of-Oz dialogues, where the machine is secretly operated by a human.
Table 2: Human-human constrained spoken dialogue datasets, with per-dataset topics, total number of dialogues, total number of words, and total length. Datasets listed: the HCRC Map Task Corpus (Anderson et al., 1991), the Walking Around Corpus (Brennan et al., 2013), the Green Persuasive Database (Douglas-Cowie et al., 2007), the Intelligence Squared Debates (Zhang et al., 2016), the Corpus of Professional Spoken American English (Barlow, 2000), the MAHNOB Mimicry Database (Sun et al., 2011), the IDIAP Wolf Corpus (Hung and Chittaranjan, 2010), the SEMAINE corpus (McKeown et al., 2010), the DSTC4/DSTC5 corpora (Kim et al., 2015, 2016), the Loqui Dialogue Corpus (Passonneau and Sachar, 2014), the MRDA Corpus (Shriberg et al., 2004), the TRAINS 93 Dialogues Corpus (Heeman and Allen, 1995), and the Verbmobil Corpus (Burger et al., 2000). Starred (*) numbers are estimates based on the average rate of English speech from the Center for Voice and Speech (www.ncvs.org/ncvs/tutorials/voiceprod/tutorial/quality.html).
Table 3: Human-human spontaneous spoken dialogue datasets, with per-dataset topics, total number of dialogues, total number of words, and total length. Datasets listed: the Switchboard corpus (Godfrey et al., 1992), the British National Corpus (BNC) (Leech, 1992), the CALLHOME American English corpus (Canavan et al., 1997), the CALLFRIEND American English Non-Southern Dialect corpus (Canavan and Zipperlen, 1996), the Bergen Corpus of London Teenage Language (Haslerud and Stenström, 1995), CANCODE (McCarthy, 1998), the d64 Multimodal Conversation Corpus (Oertel et al., 2013), the AMI Meeting Corpus (Renals et al., 2007), the Cardiff Conversation Database (CCDb) (Aubrey et al., 2013), the 4D Cardiff Conversation Database (4D CCDb) (Vandeventer et al., 2015), the Diachronic Corpus of Present-Day Spoken English (Aarts and Wallis, 2006), the Spoken Corpus of the Survey of English Dialects (Beare and Scott, 1999), the Child Language Data Exchange System (MacWhinney and Snow, 1985), and the Charlotte Narrative and Conversation Collection (Reppen and Ide, 2004). Starred (*) numbers are estimates based on the average rate of English speech.
Table 4: Human-human scripted dialogue datasets, with per-dataset topics and total numbers of works, dialogues, utterances, and words. Datasets listed: the Movie DiC Corpus (Banchs, 2012), the Movie Triples corpus (Serban et al., 2016), the Film Scripts Online Series corpus, the Filtered Movie Script Corpus (Nio et al., 2014b), the Cornell Movie-Dialogue Corpus (Danescu-Niculescu-Mizil and Lee, 2011), the Corpus of American Soap Operas (Davies, 2012b), the TVD Corpus (Roy et al., 2014), the Character Style from Film Corpus (Walker et al., 2012a), the OpenSubtitles corpus (Tiedemann, 2012), the SubTle Corpus (Ameixa and Coheur, 2013), and the Corpus of English Dialogues 1560-1760 (Kytö and Walker, 2006). Starred (*) quantities are estimates based on the average number of words and utterances per film and the average lengths of films and TV shows, from the Tameri Guide for Writers (http://www.tameri.com/format/wordcounts.html); TV show estimates are adjusted by the ratio of average film runtime (112 minutes) to average TV show runtime (36 minutes). Quantities denoted with (‡) are estimates based on the average number of dialogues per script and the number of scripts or works in the corpus; dialogues may not be explicitly separated in these datasets. Movie metadata was scraped from the IMDB database (http://www.imdb.com/interfaces).
Table 5: Human-human written dialogue datasets, with per-dataset type, topics, average number of turns, and total numbers of dialogues and words. Datasets listed: the NPS Chat Corpus (Forsyth and Martell, 2007), the Twitter Corpus (Ritter et al., 2010), the Twitter Triple Corpus (Sordoni et al., 2015b), the UseNet Corpus (Shaoul and Westbury, 2009), the NUS SMS Corpus (Chen and Kan, 2013), the Reddit corpus of publicly available comments, the Reddit Domestic Abuse Corpus (Schrading et al., 2015), the Settlers of Catan corpus (Afantenos et al., 2012), the Cards Corpus (Djalali et al., 2012), the Agreement in Wikipedia Talk Pages corpus (Andreas et al., 2012), the Agreement by Create Debaters corpus (Rosenthal and McKeown, 2015), the Internet Argument Corpus (Walker et al., 2012b), the MPC Corpus (Shaikh et al., 2010), the Ubuntu Dialogue Corpus (Lowe et al., 2015a), the Ubuntu Chat Corpus (Uthus and Aha, 2013), and the Movie Dialog Dataset (Dodge et al., 2015). Starred (*) quantities are computed using space-delimited word counts; in certain corpora, such as IRC and SMS corpora, proper English words are sometimes concatenated together due to slang usage. Double-daggered (‡) quantities are estimates based on a Twitter dataset of similar size, and refer to tokens as well as words.
# 4.1.3 OTHER
The DIALOG mathematical proof dataset (Wolska et al., 2004) is a Wizard-of-Oz dataset involving an automated tutoring system that attempts to advise students on proving mathematical theorems. This is done using a hinting algorithm that provides clues when students come up with an incorrect answer. At only 66 dialogues, the dataset is very small; it consists of a conglomeration of text-based interactions with the system, as well as think-aloud audio and video footage recorded by the users as they interacted with the system. The latter was transcribed and annotated with simple speech acts such as "signaling emotions" or "self-addressing".
The MATCH corpus (Georgila et al., 2010) is a small corpus of 447 dialogues based on a Wizard-of-Oz experiment in which 50 young and older adults interacted with spoken dialogue systems. These conversations were annotated semi-automatically with dialogue acts and "Information State Update" (ISU) representations of dialogue context. The corpus also contains information about the users' cognitive abilities, with the motivation of modeling how the elderly interact with dialogue systems.
# 4.2 Human-Human Spoken Corpora
Naturally, there is much more data available for conversations between humans than conversations between humans and machines. Thus, we break down this category further, into spoken dialogues (this section) and written dialogues (Section 4.3). The distinction between spoken and written dialogues is important, since the distribution of utterances changes dramatically according to the nature of the interaction. As discussed in Subsection 3.1, spoken dialogues tend to be more colloquial and generally less well-formed, as the user speaks in a train-of-thought manner; they also tend to use shorter words and phrases. Conversely, in written communication, users have the ability to reflect on what they are writing before they send a message. Written dialogues can also contain spelling errors or abbreviations, though, which are generally not transcribed in spoken dialogues.
# 4.2.1 SPONTANEOUS SPOKEN CORPORA
We first introduce datasets in which the topics of conversation are either casual, or not pre-specified in any way. We refer to these corpora as spontaneous, as we believe they most closely mimic spontaneous and unplanned spoken interactions between humans.
Perhaps one of the most influential spoken corpora is the Switchboard dataset (Godfrey et al., 1992). This dataset consists of approximately 2,500 dialogues from phone calls, along with word-by-word transcriptions, with about 500 total speakers. A computer-driven robot operator system introduced a topic for discussion between two participants, and recorded the resulting conversation. About 70 casual topics were provided, of which about 50 were frequently used. The corpus was originally designed for training and testing various speech processing algorithms; however, it has since been used for a wide variety of other tasks, including the modeling of dialogue acts such as "statement", "question", and "agreement" (Stolcke et al., 2000).
Another important dataset is the British National Corpus (BNC) (Leech, 1992), which contains approximately 10 million words of dialogue. These were collected in a variety of contexts ranging from formal business or government meetings, to radio shows and phone-ins. Although most of the conversations are spoken in nature, some of them are also written. The BNC covers a large number of sources, and was designed to represent a wide cross-section of British English from the late twentieth century. The corpus also includes part-of-speech (POS) tagging for every word. The vast array of settings and topics covered by this corpus renders it very useful as a general-purpose spoken dialogue dataset.
Other datasets have been collected for the analysis of spoken English over the telephone. The CALLHOME American English Speech Corpus (Canavan et al., 1997) consists of 120 such conversations totalling about 60 hours, mostly between family members or close friends. Similarly, the CALLFRIEND American English Non-Southern Dialect Corpus (Canavan and Zipperlen, 1996) consists of 60 telephone conversations lasting 5-30 minutes each between English speakers in North America without a Southern accent. It is annotated with speaker information such as sex, age, and education. The goal of the project was to support the development of language identification technologies, yet there are no distinguishing features in either of these corpora in terms of the topics of conversation.
An attempt to capture exclusively teenage spoken language was made in the Bergen Corpus of London Teenage Language (COLT) (Haslerud and Stenström, 1995). Conversations were recorded surreptitiously by student "recruits", with a Sony Walkman and a lapel microphone, in order to obtain a better representation of teenager interactions "in-the-wild". This dataset has been used to identify trends in language evolution in teenagers (Stenström et al., 2002).
The Cambridge and Nottingham Corpus of Discourse in English (CANCODE) (McCarthy, 1998) is a subset of the Cambridge International Corpus, containing about 5 million words collected from recordings made throughout the islands of Britain and Ireland. It was constructed by Cambridge University Press and the University of Nottingham using dialogue data on general topics collected between 1995 and 2000. It focuses on interpersonal communication in a range of social contexts, varying from hair salons, to post offices, to restaurants. This has been used, for example, to study language awareness in relation to spoken texts and their cultural contexts (Carter, 1998). In the dataset, the relationships between speakers (e.g. roommates, strangers) are labeled and the interaction type is provided (e.g. professional, intimate).
Other works have attempted to record the physical elements of conversations between humans. To this end, a small corpus entitled the d64 Multimodal Conversational Corpus (Oertel et al., 2013) was collected, incorporating data from 7 video cameras, and the registration of 3-D head, torso, and arm motion using an Optitrack system. Significant effort was made to make the data collection process as non-intrusive, and thus as naturalistic, as possible. Annotations were made to attempt to quantify overall group excitement and pairwise social distance between participants.
A similar attempt to incorporate computer vision features was made in the AMI Meeting Corpus (Renals et al., 2007), where cameras, a VGA data projector capture, whiteboard capture, and digital pen capture were all used in addition to speech recordings for various meeting scenarios. As with the d64 corpus, the AMI Meeting Corpus is a small dataset of multi-participant chats that has not been disentangled into strict dialogue. The dataset has often been used for analysis of the dynamics of various corporate and academic meeting scenarios.
In a similar vein, the Cardiff Conversation Database (CCDb) (Aubrey et al., 2013) is an audio-visual database containing unscripted natural conversations between pairs of people. The original dataset consisted of 30 five-minute conversations, 7 of which were fully annotated with transcriptions and behavioural annotations such as speaker activity, facial expressions, head motions, and smiles. The content of the conversations is unconstrained discussion on topics such as movies. While the original dataset featured 2D visual feeds, an updated version with 3D video has also been derived, called the 4D Cardiff Conversation Database (4D CCDb) (Vandeventer et al., 2015). This version contains 17 one-minute conversations from 4 participants on similarly unconstrained topics.
The Diachronic Corpus of Present-Day Spoken English (DCPSE) (Aarts and Wallis, 2006) is a parsed corpus of spoken English made up of two separate datasets. It contains more than 400,000 words from the ICE-GB corpus (collected in the early 1990s) and 400,000 words from the London-Lund Corpus (collected in the late 1960s-early 1980s). ICE-GB refers to the British component of the International Corpus of English (Greenbaum and Nelson, 1996; Greenbaum, 1996) and contains both spoken and written dialogues from English adults who have completed secondary education. The dataset was selected to provide a representative sample of British English. The London-Lund Corpus (Svartvik, 1990) consists exclusively of spoken British conversations, both dialogues and monologues. It contains a selection of face-to-face, telephone, and public discussion dialogues; the latter refers to dialogues that are heard by an audience that does not participate in the dialogue, including interviews and panel discussions that have been broadcast. The orthographic transcriptions of the datasets are normalised and annotated according to the same criteria; ICE-GB was used as a gold standard for the parsing of DCPSE.
The Spoken Corpus of the Survey of English Dialects (Beare and Scott, 1999) consists of 1000 recordings, with about 0.8 million total words, collected from 1948 to 1961 in order to document various existing English dialects. People aged 60 and over were recruited, as they were most likely to speak the traditional "uncontaminated" dialects of their area, and were encouraged to talk about their memories, families, work, and their countryside folklore.
The Child Language Data Exchange System (CHILDES) (MacWhinney and Snow, 1985) is a database organized for the study of first and second language acquisition. The database contains 10 million English words and approximately the same number of non-English words. It also contains transcripts, with occasional audio and video recordings, of data collected from children and adults learning both first and second languages, although the English transcripts are mostly from children. This corpus could be leveraged in order to build automated teaching assistants.
The expanded Charlotte Narrative and Conversation Collection (CNCC), a subset of the first release of the American National Corpus (Reppen and Ide, 2004), contains 95 narratives, conversations and interviews representative of the residents of Mecklenburg County, North Carolina and its surrounding communities. The purpose of the CNCC was to create a corpus of conversation and conversational narration in a "New South" city at the beginning of the 21st century, that could be used as a resource for linguistic analysis. It was originally released as one of several collections in the New South Voices corpus, which otherwise contained mostly oral histories. Information on speaker age and gender in the CNCC is included in the header for each transcript.
# 4.2.2 CONSTRAINED SPOKEN CORPORA
Next, we discuss domains in which conversations are restricted to a particular topic, or intended to solve a specific task. Not only is the topic of the conversation specified beforehand, but participants are discouraged from deviating off-topic. As a result, these corpora are slightly less general than their spontaneous counterparts; however, they may be useful for building goal-oriented dialogue systems. As discussed in Subsection 3.3, this may also make the conversations less natural. We can further subdivide this category by the types of topics they cover: path-finding or planning tasks, persuasion tasks or debates, Q&A or information retrieval tasks, and miscellaneous topics.
Collaborative Path-Finding or Planning Tasks Several corpora focus on task planning or path-finding through the collaboration of two interlocutors. In these corpora, typically one person acts as the decision maker and the other acts as the observer.
A well-known example of such a dataset is the HCRC Map Task Corpus (Anderson et al., 1991), which consists of unscripted, task-oriented dialogues that have been digitally recorded and transcribed. The corpus uses the Map Task (Brown et al., 1984), where participants must collaborate verbally to reproduce a route from one participant's map on the map of another participant. The corpus is fairly small, but it controls for the familiarity between speakers, eye contact between speakers, matching between landmarks on the participants' maps, opportunities for contrastive stress, and phonological characteristics of landmark names. By adding these controls, the dataset attempts to focus solely on the dialogue and human speech involved in the planning process.

The Walking Around Corpus (Brennan et al., 2013) consists of 36 dialogues between people communicating over mobile telephone. The dialogues have two parts: first, a "stationary partner" is asked to direct a "mobile partner" to find 18 destinations on a medium-sized university campus. The stationary partner is equipped with a map marked with the target destinations accompanied by photos of the locations, while the mobile partner is given a GPS navigation system and a camera to take photos. In the second part, the participants are asked to interact in person in order to duplicate the photos taken by the mobile partner. The goal of the dataset is to provide a testbed for natural lexical entrainment, and to be used as a resource for pedestrian navigation applications.
The TRAINS 93 Dialogues Corpus (Heeman and Allen, 1995) consists of recordings of two interlocutors interacting to solve various planning tasks for scheduling train routes and arranging railroad freight. One user plays the role of a planning assistant system and the other user acts as the coordinator. This was not done in a Wizard-of-Oz fashion, and as such the corpus is not considered a human-machine corpus. 34 different interlocutors were asked to complete 20 different tasks such as: "Determine the maximum number of boxcars of oranges that you could get to Bath by 7 AM tomorrow morning. It is now 12 midnight." The person playing the role of the planning assistant was provided with access to the information needed to solve the task. Also included in the dataset are the information available to both users, the length of each dialogue, and the speaker and "system" interlocutor identities.
The Verbmobil Corpus (Burger et al., 2000) is a multilingual corpus consisting of English, German, and Japanese dialogues collected for the purposes of training and testing the Verbmobil project system. The system was designed for speech-to-speech machine translation tasks. Dialogues were recorded in a variety of conditions and settings with room microphones, telephones, or close microphones, and were subsequently transcribed. Users were tasked with planning and scheduling an appointment throughout the course of the dialogue. Note that while there have been several versions of the Verbmobil corpora released, we refer to the entire collection here as described in (Burger et al., 2000). Dialogue acts were annotated in a subset of the corpus (1,505 mixed dialogues in German, English and Japanese): 76,210 acts were annotated with 32 possible categories of dialogue acts (Alexandersson et al., 2000).4

4. Note, this information and further facts about the Verbmobil project and corpus can be found here: http://verbmobil.dfki.de/facts.html
Persuasion and Debates Another recurring theme among constrained spoken corpora is persuasion or debate tasks. These can involve general debates on a topic, or tasking a specific interlocutor with trying to convince another interlocutor of some opinion. Generally, these datasets record the outcome of how convinced the audience is of the argument at the end of the dialogue or debate.
The Green Persuasive Dataset (Douglas-Cowie et al., 2007) was recorded in 2007 to provide data for the HUMAINE project, whose goal is to develop interfaces that can register and respond to emotion. In the dataset, a persuader with strong pro-environmental ("pro-green") feelings tries to convince persuadees to consider adopting more green lifestyles; these interactions are in the form of dialogues. It contains 8 long dialogues, totalling about 30 minutes each. Since the persuadees often either disagree or agree strongly with the persuader's points, this would be a good corpus for studying social signs of (dis)agreement between two people.
The MAHNOB Mimicry Database (Sun et al., 2011) contains 11 hours of recordings, split over 54 sessions between 60 people engaged either in a socio-political discussion or negotiating a tenancy agreement. This dataset consists of a set of fully synchronised audio-visual recordings of natural dyadic (one-on-one) interactions. It is one of several dialogue corpora that provide multi-modal data for analyzing human behaviour during conversations. Such corpora often consist of auditory, visual, and written transcriptions of the dialogues; here, only audio-visual recordings are provided. The purpose of the dataset was to analyze mimicry (i.e. when one participant mimics the verbal and nonverbal expressions of their counterpart). The authors provide some benchmark video classification models to this effect.
The Intelligence Squared Debate Dataset (Zhang et al., 2016) covers the "Intelligence Squared" Oxford-style debates taking place between 2006 and 2015. The topics of the debates vary across the dataset, but are constrained within the context of each debate. Speakers are labeled and the full transcript of each debate is provided. Furthermore, the outcome of the debate is provided: how many audience members were for or against the given proposal, before and after the debate.
QA or Information Retrieval There are several corpora which feature direct question-and-answer sessions. These may involve general QA, such as in a press conference, or more task-specific lines of questioning aimed at retrieving a specific set of information.
The Corpus of Professional Spoken American English (CPSAE) (Barlow, 2000) was constructed using a selection of transcripts of interactions occurring in professional settings. The corpus contains two million words involving over 400 speakers, recorded between 1994 and 1998. The CPSAE has two main components. The first is a collection of transcripts (0.9 million words) of White House press conferences, which contains almost exclusively question and answer sessions, with some policy statements by politicians. The second component consists of transcripts (1.1 million words) of faculty meetings and committee meetings related to national tests, which involve statements, discussions, and questions. The creation of the corpus was motivated by the desire to understand and model more formal uses of the English language.
As previously mentioned, the Dialog State Tracking Challenge (DSTC) consists of a series of datasets evaluated using a "state tracking" or "slot filling" metric. While the first 3 installments of this challenge had conversations between a human participant and a computer, DSTC4 (Kim et al., 2015) contains dialogues between humans. In particular, this dataset has 35 conversations with 21 hours of interactions between tourists and tour guides over Skype, discussing information on hotels, flights, and car rentals. Due to the small size of the dataset, researchers were encouraged to use transfer learning from other datasets in the DSTC in order to improve state tracking performance. This same training set is used for DSTC5 (Kim et al., 2016) as well. However, the goal of DSTC5 is to study multi-lingual speech-act prediction, and therefore it combines the DSTC4 dialogues with a set of equivalent Chinese dialogs; evaluation is done on a holdout set of Chinese dialogues.
Miscellaneous Lastly, there are several corpora which do not fall into any of the aforementioned categories, involving a range of tasks and situations.
The IDIAP Wolf Corpus (Hung and Chittaranjan, 2010) is an audio-visual corpus containing natural conversational data of volunteers who took part in an adversarial role-playing game called "Werewolf". Four groups of 8-12 people were recorded using headset microphones and synchronised video cameras, resulting in over 7 hours of conversational data. The novelty of this dataset is that the roles of other players are unknown to game participants, and some of the roles are deceptive in nature. Thus, there is a significant amount of lying that occurs during the game. Although specific instances of lying are not annotated, each speaker is labeled with their role in the game. In a dialogue setting, this could be useful for analyzing the differences in language when deception is being used.
The SEMAINE Corpus (McKeown et al., 2010) consists of 100 "emotionally coloured" conversations. Participants held conversations with an operator who adopted various roles designed to evoke emotional reactions. These conversations were recorded with synchronous video and audio devices. Importantly, the operators' responses were stock phrases that were independent of the content of the user's utterances, and only dependent on the user's emotional state. This corpus motivates building dialogue systems with affective and emotional intelligence abilities, since the corpus does not exhibit the natural language understanding that normally occurs between human interlocutors.

The Loqui Human-Human Dialogue Corpus (Passonneau and Sachar, 2014) consists of annotated transcriptions of telephone interactions between patrons and librarians at New York City's Andrew Heiskell Braille & Talking Book Library in 2006. It stands out as it has annotated discussion topics, question-answer pair links (adjacency pairs), dialogue acts, and frames (discourse units).
Similarly, the ICSI Meeting Recorder Dialog Act (MRDA) Corpus (Shriberg et al., 2004) has annotated dialogue acts, question-answer pair links (adjacency pairs), and dialogue hot spots.5 It consists of transcribed recordings of 75 ICSI meetings on several classes of topics including: the ICSI meeting recorder project itself, automatic speech recognition, natural language processing and neural theories of language, and discussions with the annotators for the project.

5. For more information on dialogue hot spots and how they relate to dialogue acts, see (Wrede and Shriberg, 2003).
# 4.2.3 SCRIPTED CORPORA
A final category of spoken dialogue consists of conversations that have been pre-scripted for the purpose of being spoken later. We refer to datasets containing such conversations as "scripted corpora". As discussed in Subsection 3.4, these datasets are distinct from spontaneous human-human conversations, as they inevitably contain fewer "filler" words and expressions that are common in spoken dialogue. However, they should not be confused with human-human written dialogues, as they are intended to sound like natural spoken conversations when read aloud by the participants. Furthermore, these scripted dialogues tend to be dramatic, as they are generally sourced from movies or TV shows.
There exist multiple scripted corpora based on movies and TV series. These can be sub-divided into two categories: corpora that provide the actual scripts (i.e. the movie script or TV series script), where each utterance is tagged with the appropriate speaker, and those that only contain subtitles, where consecutive utterances are not divided or labeled in any way. It is always preferable to have the speaker labels, but there is significantly more unlabeled subtitle data available, and both sources of information can be leveraged to build a dialogue system.
The Movie DiC Corpus (Banchs, 2012) is an example of the former case: it contains about 130,000 dialogues and 6 million words from movie scripts extracted from the Internet Movie Script Data Collection,6 carefully selected to cover a wide range of genres. These dialogues also come with context descriptions, as written in the script. One derivation based on this corpus is the Movie Triples Dataset (Serban et al., 2016). There are also the American Film Scripts Corpus and the Film Scripts Online Corpus, which together form the Film Scripts Online Series Corpus that can be purchased.7 The latter consists of a mix of British and American film scripts, while the former consists solely of American films.
The majority of these datasets consist mostly of raw scripts, which are not guaranteed to portray conversations between only two people. The dataset collected by Nio et al. (2014b), which we refer to as the Filtered Movie Script Corpus, takes over 1 million utterance-response pairs from web-based script resources and filters them down to 86,000 such pairs. The filtering method limits the extracted utterances to X-Y-X triples, where the first and third utterances are spoken by the same actor and share some semantic similarity. These triples are then decomposed into X-Y and Y-X pairs. Such filtering largely removes conversations with more than two speakers, which could be useful in some applications. In particular, the filtering method helps to retain semantic context in the dialogue and keeps the back-and-forth conversational flow that is desired when training many dialogue systems.
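The sketch below illustrates this kind of X-Y-X filtering under stated assumptions: the speaker-pattern check follows the description above, while the word-overlap `similar` test is a naive stand-in for whatever semantic-similarity measure Nio et al. (2014b) actually use.

```python
# Illustrative sketch of X-Y-X filtering over a script: keep triples of
# consecutive utterances where turns 1 and 3 share a speaker (X) and are
# semantically similar, then split each triple into X-Y and Y-X pairs.
# The word-overlap similarity test below is an assumed stand-in.

def similar(u1, u2, threshold=0.15):
    w1, w2 = set(u1.lower().split()), set(u2.lower().split())
    if not w1 or not w2:
        return False
    return len(w1 & w2) / len(w1 | w2) >= threshold

def extract_pairs(script):
    """script: list of (speaker, utterance) in order of appearance."""
    pairs = []
    for (s1, u1), (s2, u2), (s3, u3) in zip(script, script[1:], script[2:]):
        if s1 == s3 and s1 != s2 and similar(u1, u3):
            pairs.append((u1, u2))  # X -> Y
            pairs.append((u2, u3))  # Y -> X
    return pairs

script = [("ALICE", "Did you see the game last night?"),
          ("BOB", "I missed it, who won?"),
          ("ALICE", "We won the game in overtime!")]
print(extract_pairs(script))
```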
The Cornell Movie-Dialogue Corpus (Danescu-Niculescu-Mizil and Lee, 2011) also contains short conversations extracted from movie scripts. The distinguishing feature of this dataset is the amount of metadata available for each conversation: this includes movie metadata such as genre, release year, and IMDB rating, as well as character metadata such as gender and position in the movie credits. Although this corpus comprises 220,000 dialogue excerpts, it contains only 300,000 utterances; thus, many of the excerpts consist of single utterances.
The Corpus of American Soap Operas (Davies, 2012b) contains 100 million words in more than 22,000 transcripts of ten American soap operas from 2001 to 2012. Because it is based on soap operas, it is qualitatively different from the Movie DiC Corpus, which contains movies in the action and horror genres. The corpus was collected to provide insights into colloquial American speech, as the vocabulary usage is quite different from the British National Corpus (Davies, 2012a). Unfortunately, this corpus does not come with speaker labels.
Another corpus consisting of dialogues from TV shows is the TVD Corpus (Roy et al., 2014). This dataset consists of 191 transcripts from the comedy show The Big Bang Theory and the drama show Game of Thrones, along with crowd-sourced text descriptions (brief episode summaries, longer episode outlines) and various types of metadata (speakers, shots, scenes). Text alignment algorithms are used to link descriptions and metadata to the appropriate sections of each script. For example, one might align an event description with all the utterances associated with that event in order to develop algorithms for locating specific events from raw dialogue, such as "person X tries to convince person Y".
Some work has been done in order to analyze character style from movie scripts. This is aided by a dataset collected by Walker et al. (2012a) that we refer to as the Character Style from Film Corpus. This corpus was collected from the IMSDb archive, and is annotated for linguistic structures and character archetypes. Features, such as the sentiment behind the utterances, are automatically extracted and used to derive models of the characters in order to generate new utterances similar in style to those spoken by the character. Thus, this dataset could be useful for building dialogue personalization models.

6. http://www.imsdb.com
7. http://alexanderstreet.com/products/film-scripts-online-series
There are two primary movie subtitle datasets: OpenSubtitles (Tiedemann, 2012) and the SubTle Corpus (Ameixa and Coheur, 2013). Both corpora are based on the OpenSubtitles website.8 The OpenSubtitles dataset is a vast collection of movie subtitles, containing over 1 billion words, whereas the SubTle Corpus has been pre-processed in order to extract interaction-response pairs that can help dialogue systems deal with out-of-domain (OOD) interactions.
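Since subtitles lack speaker labels, extraction pipelines of this kind typically assume that consecutive subtitle lines alternate between speakers. The sketch below pairs consecutive time-stamped lines as interaction-response examples under that assumption; the 5-second gap cutoff for scene boundaries is an arbitrary assumption, not a documented heuristic of either corpus.

```python
# Sketch: turn unlabeled, time-stamped subtitle lines into (utterance,
# response) training pairs, assuming consecutive lines alternate speakers.
# Lines separated by a long silence are assumed to start a new scene.
# The 5-second cutoff is an arbitrary assumption.

MAX_GAP_SECONDS = 5.0

def subtitle_pairs(lines):
    """lines: list of (start_time_sec, end_time_sec, text), sorted by time."""
    pairs = []
    for (s1, e1, t1), (s2, e2, t2) in zip(lines, lines[1:]):
        if s2 - e1 <= MAX_GAP_SECONDS:
            pairs.append((t1, t2))
    return pairs

lines = [(10.0, 12.0, "Where were you last night?"),
         (12.5, 14.0, "Working late at the office."),
         (60.0, 62.0, "This line starts a new scene.")]
print(subtitle_pairs(lines))  # only the first two lines form a pair
```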
The Corpus of English Dialogues 1560-1760 (CED) (Kytö and Walker, 2006) compiles dialogues from the mid-16th century until the mid-18th century. The sources vary from real trial transcripts to fiction dialogues. Due to the scripted nature of fictional dialogues, and the fact that the majority of the corpus consists of fictional dialogue, we classify it here as such. The corpus is composed as follows: trial proceedings (285,660 words), witness depositions (172,940 words), drama comedy works (238,590 words), didactic works (236,640 words), prose fiction (223,890 words), and miscellaneous (25,970 words).
# 4.3 Human-Human Written Corpora
We proceed to survey corpora of conversations between humans in written form. As before, we sub-divide this section into spontaneous and constrained corpora, depending on whether there are restrictions on the topic of conversation. However, we make a further distinction between forum, micro-blogging, and chat corpora.
Forum corpora consist of conversations on forum-based websites such as Reddit,9 where users can make posts, and other users can make comments or replies to said post. In some cases, comments can be nested indefinitely, as users make replies to previous replies. Utterances in forum corpora tend to be longer, and there is no restriction on the number of participants in a discussion. On the other hand, conversations on micro-blogging websites such as Twitter10 tend to have very short utterances, as there is an upper bound on the number of characters permitted in each message. As a result, these tend to exhibit highly colloquial language with many abbreviations. The identifying feature of chat corpora is that the conversations take place in real-time between users. Thus, these conversations share more similarities with spoken dialogue between humans, such as common grounding phenomena.
# 4.3.1 SPONTANEOUS WRITTEN CORPORA
We begin with written corpora where the topic of conversation is not pre-specified. Such is the case for the NPS Internet Chatroom Conversations Corpus (Forsyth and Martell, 2007), which consists of 10,567 English utterances gathered from age-specific chat rooms of various online chat services during October and November of 2006. Each utterance was annotated with part-of-speech and dialogue act information; the correctness of this was verified manually. The NPS Internet Chatroom Conversations Corpus was one of the first corpora of computer-mediated communication (CMC), and it was intended for various NLP applications such as conversation thread topic detection, author profiling, entity identification, and social network analysis.

8. http://www.opensubtitles.org
9. http://www.reddit.com
10. http://www.twitter.com
Several corpora of spontaneous micro-blogging conversations have been collected, such as the Twitter Corpus from Ritter et al. (2010), which contains 1.3 million post-reply pairs extracted from Twitter. The corpus was originally constructed to aid in the production of unsupervised approaches to modeling dialogue acts. Larger Twitter corpora have also been collected. The Twitter Triples Corpus (Sordoni et al., 2015b) is one such example, with a described original dataset of 127 million context-message-response triples, of which only a small labeled subset has been released. Specifically, the released subset contains 4,232 triples that were scored an average of greater than 4 on the Likert scale by crowdsourced evaluators rating the quality of the response to the context-message pair. Similarly, large micro-blogging corpora such as the Sina Weibo Corpus (Shang et al., 2015), which contains 4.5 million post-reply pairs, have been collected; however, this corpus has not yet been made publicly available. We do not include the Sina Weibo Corpus (and its derivatives) in the tables in this section, as it is not primarily in English.
The Usenet Corpus (Shaoul and Westbury, 2009) is a gigantic collection of public Usenet postings11 containing over 7 billion words from October 2005 to January 2011. Usenet was a distributed discussion system established in 1980, where participants could post articles to one of 47,860 "newsgroup" categories. It is seen as the precursor to many current Internet forums. The corpus derived from these posts has been used for research in collaborative filtering (Konstan et al., 1997) and role detection (Fisher et al., 2006).
The NUS SMS Corpus (Chen and Kan, 2013) consists of messages carried out over mobile phone SMS between two users. The original purpose of the dataset, aided by video and timing analysis of users entering their messages, was to improve predictive text entry at a time when mobile phones still mapped multiple letters to a single number; however, it could equally be used for the analysis of informal dialogue. Unfortunately, the corpus does not consist of dialogues, but rather single SMS messages. SMS messages are similar in style to Twitter, in that they use many abbreviations and acronyms.
Currently, one of the most popular forum-based websites is Reddit,12 where users can create discussions and post comments in various sub-forums called "subreddits". Each subreddit addresses its own particular topic. Over 1.7 billion of these comments have been collected in the Reddit Corpus.13 Each comment is labeled with the author, score (rating from other users), and position in the comment tree; the position is important as it determines which comment is being replied to. Although researchers have not yet investigated dialogue problems using this Reddit discussion corpus, the sheer size of the dataset renders it an interesting candidate for transfer learning. Additionally, researchers have used smaller collections of Reddit discussions for broad discourse classification (Schrading et al., 2015).
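Because each comment records its parent, conversational threads can be read off the comment tree by walking root-to-leaf paths. The sketch below illustrates this; the field names (`id`, `parent_id`, `author`, `body`) follow the common Reddit dump format, but the extraction itself is only an assumed, simplified reading of the tree.

```python
from collections import defaultdict

# Sketch: rebuild conversation threads from a flat list of Reddit comments.
# Each root-to-leaf path in the reply tree is treated as one conversation.
# Field names follow the common Reddit dump format; the rest is an
# illustrative assumption.

def threads_from_comments(comments, root_id):
    children = defaultdict(list)
    by_id = {c["id"]: c for c in comments}
    for c in comments:
        children[c["parent_id"]].append(c["id"])

    paths = []
    def walk(node_id, path):
        kids = children.get(node_id, [])
        if not kids and path:
            paths.append(path)  # reached a leaf: one full thread
        for kid in kids:
            c = by_id[kid]
            walk(kid, path + [(c["author"], c["body"])])

    walk(root_id, [])
    return paths

comments = [
    {"id": "c1", "parent_id": "post", "author": "anna", "body": "Any tips?"},
    {"id": "c2", "parent_id": "c1", "author": "ben", "body": "Try rebooting."},
    {"id": "c3", "parent_id": "c1", "author": "cal", "body": "Check the logs."},
]
for thread in threads_from_comments(comments, "post"):
    print(thread)
```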
Some more curated versions of the Reddit dataset have been collected. The Reddit Domestic Abuse Corpus (Schrading et al., 2015) consists of Reddit posts and comments taken either from subreddits specific to domestic abuse, or from subreddits representing casual conversations, advice, and general anxiety or anger. The motivation is to build classifiers that can detect occurrences of domestic abuse in other areas, which could provide insights into the prevalence and consequences of these situations. The conversations have been pre-processed with lower-casing, lemmatizing, and removal of stopwords, and semantic role labels are provided.

11. http://www.usenet.net
12. http://www.reddit.com
13. https://www.reddit.com/r/datasets/comments/3bxlg7/i_have_every_publicly_available_reddit_comment/
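A rough sketch of the preprocessing pipeline described for this corpus is shown below. The stopword list is a tiny illustrative subset, and the suffix-stripping function is a crude stand-in for a real lemmatizer; both are assumptions for illustration only.

```python
# Sketch of the described preprocessing: lower-case, "lemmatize", and drop
# stopwords. The stopword list is a tiny illustrative subset, and the
# suffix stripping below is a crude stand-in for a real lemmatizer.

STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "to", "of", "and"}

def crude_lemma(token):
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(utterance):
    tokens = utterance.lower().split()
    return [crude_lemma(t) for t in tokens if t not in STOPWORDS]

print(preprocess("The neighbours were shouting and slamming doors"))
# -> ['neighbour', 'shout', 'slamm', 'door']
```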
# 4.3.2 CONSTRAINED WRITTEN CORPORA
There are also several written corpora where users are limited in terms of topics of conversation. For example, the Settlers of Catan Corpus (Afantenos et al., 2012) contains logs of 40 games of "Settlers of Catan", with about 80,000 total labeled utterances. The game is played with up to 4 players, and is predicated on trading certain goods between players. The goal of the game is to be the first player to achieve a pre-specified number of points. Therefore, the game is adversarial in nature, and can be used to analyze situations of strategic conversation where the agents have diverging motives.
Another corpus that deals with game playing is the Cards Corpus (Djalali et al., 2012), which consists of 1,266 transcripts of conversations between players playing a game in the "Cards world". This world is a simple 2-D environment where players collaborate to collect cards. The goal of the game is to collect six cards of a particular suit (cards in the environment are only visible to a player when they are near the location of that player), or to determine that this goal is impossible in the environment. The catch is that each player can only hold 3 cards, thus players must collaborate in order to achieve the goal. Further, each player's location is hidden to the other player, and there are a fixed number of non-chatting moves. Thus, players must use the chat to formulate a plan, rather than exhaustively exploring the environment themselves. The dataset has been further annotated by Potts (2012) to collect all locative question-answer pairs (i.e. all questions of the form "Where are you?").
The Agreement by Create Debaters Corpus (Rosenthal and McKeown, 2015), the Agreement in Wikipedia Talk Pages Corpus (Andreas et al., 2012) and the Internet Argument Corpus (Abbott et al., 2016) all cover dialogues with annotations measuring levels of agreement or disagreement in responses to posts in various media. The Agreement by Create Debaters Corpus and the Agreement in Wikipedia Talk Pages Corpus are both formatted in the same way. Post-reply pairs are annotated with whether they are in agreement or disagreement, as well as the type of agreement they are in if applicable (e.g. paraphrasing). The difference between the two corpora is the source: the former is collected from Create Debate forums and the latter from a mix of Wikipedia Discussion pages and LiveJournal postings. The Internet Argument Corpus (IAC) (Walker et al., 2012b) is a forum-based corpus with 390,000 posts on 11,000 discussion topics. Each topic is controversial in nature, including subjects such as evolution, gay marriage and climate change; users participate by sharing their opinions on one of these topics. Post-reply pairs have been labeled as being either in agreement or disagreement, and sarcasm ratings are given to each post.
Another source of constrained text-based corpora are chat-room environments. Such a set-up forms the basis of the MPC Corpus (Shaikh et al., 2010), which consists of 14 multi-party dialogue sessions of approximately 90 minutes each. In some cases, discussion topics were constrained to be about certain political stances, or mock committees for choosing job candidates. An interesting feature is that different participants are given different roles (leader, disruptor, and consensus builder), with only a general outline of their goals in the conversation. Thus, this dataset could be used to model social phenomena such as agenda control, influence, and leadership in on-line interactions.
The largest written corpus with a constrained topic is the recently released Ubuntu Dialogue Corpus (Lowe et al., 2015a), which has almost 1 million dialogues of 3 turns or more, and 100 million words. It is related to the former Ubuntu Chat Corpus (Uthus and Aha, 2013). Both corpora were scraped from the Ubuntu IRC channel logs.14 On this channel, users can log in and ask a question about a problem they are having with Ubuntu; these questions are answered by other users. Although the chat room allows everyone to chat with each other in a multi-party setting, the Ubuntu Dialogue Corpus uses a series of heuristics to disentangle it into dyadic dialogue. The technical nature and size of this corpus lends itself particularly well to applications in technical support.
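The central heuristic is recipient identification: an utterance beginning with another user's name is assumed to be addressed to that user, and name-linked utterances of each user pair are collected into a dyadic dialogue. The following is only a simplified sketch of this idea (the actual procedure of Lowe et al. (2015a) also exploits posting times and propagates recipients to unaddressed follow-up utterances); all names in the code are ours.

def disentangle(messages, known_users):
    # messages: (sender, text) tuples in posting order, e.g.
    # ("alice", "bob: try rebooting first"); a leading "name:" is taken
    # as the recipient when it matches a known user on the channel.
    dialogues = {}  # frozenset({sender, recipient}) -> list of turns
    for sender, text in messages:
        head, sep, rest = text.partition(":")
        recipient = head.strip()
        if sep and recipient in known_users and recipient != sender:
            key = frozenset((sender, recipient))
            dialogues.setdefault(key, []).append((sender, rest.strip()))
    return list(dialogues.values())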
Other corpora have been extracted from IRC chat logs. The IRC Corpus (Elsner and Charniak, 2008) contains approximately 50 hours of chat, with an estimated 20,000 utterances from the Linux channel on IRC, complete with the posting times. Therefore, this dataset consists of similarly technical conversations to the Ubuntu Corpus, with the occasional social chat. The purpose of this dataset was to investigate approaches for conversation disentanglement; given a multi-party chat room, one attempts to recover the individual conversations of which it is composed. For this purpose, there are approximately 1,500 utterances with annotated ground-truth conversations.
More recent efforts have combined traditional conversational corpora with question answering and recommendation datasets in order to facilitate the construction of goal-driven dialogue systems. Such is the case for the Movie Dialog Dataset (Dodge et al., 2015). There are four tasks that the authors propose as a prerequisite for a working dialogue system: question answering, recommendation, question answering with recommendation, and casual conversation. The Movie Dialog dataset consists of four sub-datasets used for training models to complete these tasks: a QA dataset from the Open Movie Database (OMDb)15 of 116k examples with accompanying movie and actor metadata in the form of knowledge triples; a recommendation dataset from MovieLens16 with 110k users and 1M questions; a combined recommendation and QA dataset with 1M conversations of 6 turns each; and a discussion dataset from Reddit's movie subreddit. The former is evaluated using recall metrics in a manner similar to Lowe et al. (2015a). It should be noted that, other than the Reddit dataset, the dialogues in the sub-datasets are simulated QA pairs, where each response corresponds to a list of entities from the knowledge base.
# 5. Discussion
We conclude by discussing a number of general issues related to the development and evaluation of data-driven dialogue systems. We also discuss alternative sources of information, user personalization, and automatic evaluation methods.
# 5.1 Challenges of Learning from Large Datasets
Recently, several large-scale dialogue datasets have been proposed in order to train data-driven dialogue systems; the Twitter Corpus (Ritter et al., 2010) and the Ubuntu Dialogue Corpus (Lowe et al., 2015a) are two examples. In this section, we discuss the benefits and drawbacks of these datasets based on our experience using them for building data-driven models. Unlike the previous section, we now focus explicitly on aspects of high relevance for using these datasets for learning dialogue strategies.

14. http://irclogs.ubuntu.com
15. http://en.omdb.org
16. http://movielens.org
# 5.1.1 THE TWITTER CORPUS
The Twitter Corpus consists of a series of conversations extracted from tweets. While the dataset is large and general-purpose, the micro-blogging nature of the source material leads to several drawbacks for building conversational dialogue agents. However, some of these drawbacks do not apply if the end goal is to build an agent that interacts with users on the Twitter platform.
The Twitter Corpus contains an enormous number of typos, slang terms, and abbreviations. Due to the 140-character limit, tweets are often very short and compressed. In addition, users frequently use Twitter-specific devices such as hashtags. Unless one is building a dialogue agent specifically for Twitter, it is often not desirable to have a chatbot use hashtags and excessive abbreviations, as this is not reflective of how humans converse in other environments. This also results in a significant increase in the word vocabulary required for dialogue systems trained at the word level. As such, it is not surprising that character-level models have shown promising results on Twitter (Dhingra et al., 2016).
Twitter conversations often contain various kinds of verbal role-playing and imaginative actions similar to stage directions in theater plays (e.g. instead of writing "goodbye", a user might write "*waves goodbye and leaves*"). These conversations are very different from the majority of text-based chats. Therefore, dialogue models trained on this dataset are often able to provide interesting and accurate responses to contexts involving role-playing and imaginative actions (Serban et al., 2017b).
Another challenge posed by Twitter is that its conversations often refer to recent public events outside the conversation. In order to learn effective responses for such conversations, a dialogue agent must infer the news event under discussion by referencing some form of external knowledge base. This would appear to be a particularly difficult task.
# 5.1.2 THE UBUNTU DIALOGUE CORPUS
The Ubuntu Dialogue Corpus is one of the largest publicly available datasets containing technical support dialogues. Due to the commercial importance of such systems, the dataset has attracted significant attention.17 Thus, the Ubuntu Dialogue Corpus presents the opportunity for anyone to train large-scale data-driven technical support dialogue systems.
Despite this, there are several problems when training data-driven dialogue models on the Ubuntu Dialogue Corpus due to the nature of the data. First, since the corpus comes from a multi-party IRC channel, it needs to be disentangled into separate dialogues. This disentanglement process is noisy, and errors inevitably arise. The most frequent error is when a missing utterance in the dialogue is not picked up by the extraction procedure (e.g. an utterance from the original multi-party chat was not added to the disentangled dialogue). As a result, for a substantial number of conversations, it is difficult to follow the topic. In particular, this means that some of the Next Utterance Classification (NUC) examples, where models must select the correct next response from a list of candidates, are either difficult or impossible for models to predict.
17. Most of the largest technical support datasets are based on commercial technical support channels, which are proprietary and never released to the public for privacy reasons.
Another problem arises from the lack of annotations and labels. Since users try to solve their technical problems, it is perhaps best to build models under a goal-driven dialogue framework, where a dialogue system has to maximize the probability that it will solve the user's problem at the end of the conversation. However, there are no reward labels available. Thus, it is difficult to model the dataset in a goal-driven dialogue framework. Future work may alleviate this by constructing automatic methods of determining whether a user in a particular conversation solved their problem.
A particular challenge of the Ubuntu Dialogue Corpus is the large number of out-of-vocabulary words, including many technical words related to the Ubuntu operating system, such as commands, software packages, websites, etc. Since these words occur rarely in the dataset, it is difficult to learn their meaning directly from the dataset; for example, it is difficult to obtain meaningful distributed, real-valued vector representations for neural network-based dialogue models. This is further exacerbated by the large number of users who use different nomenclature, acronyms, and speaking styles, and the many typos in the dataset. Thus, the linguistic diversity of the corpus is large.
A final challenge of the dataset is the necessity for additional knowledge related to Ubuntu in order to accurately generate or predict the next response in a conversation. We hypothesize that this knowledge is crucial for a system trained on the Ubuntu Dialogue Corpus to be effective in practice, as solutions to technical problems often change over time as new versions of the operating system become available. Thus, an effective dialogue system must learn to combine up-to-date technical information with an understanding of natural language dialogue in order to solve the users' problems. We will discuss the use of external knowledge in more detail in Section 5.5.
While these challenges make it difficult to build data-driven dialogue systems, they also present an important research opportunity. Current data-driven dialogue systems perform rather poorly in terms of generating utterances that are coherent and on-topic (Serban et al., 2017a). As such, there is significant room for improvement on these models.
# 5.2 Transfer Learning Between Datasets
While it is not always feasible to obtain large corpora for every new application, the use of other related datasets can effectively bootstrap the learning process. In several branches of machine learning, and in particular in deep learning, the use of related datasets in pre-training the model is an effective method of scaling up to complex environments (Erhan et al., 2010; Kumar et al., 2015).
To build open-domain dialogue systems, it is arguably necessary to move beyond domain-specific datasets. Instead, like humans, dialogue systems may have to be trained on multiple data sources for solving multiple tasks. To leverage statistical efficiency, it may be necessary to first use unsupervised learning (as opposed to supervised learning or offline reinforcement learning, which typically only provide a sparse scalar feedback signal for each phrase or sequence of phrases) and then fine-tune models based on human feedback. Researchers have already proposed various ways of applying transfer learning to build data-driven dialogue systems, ranging from learning separate sub-components of the dialogue system (e.g. intent and dialogue act classification) to learning the entire dialogue system (e.g. in an unsupervised or reinforcement learning framework) using transfer learning (Fabbrizio et al., 2004; Forgues et al., 2014; Serban and Pineau, 2015; Serban et al., 2016; Lowe et al., 2015a; Vandyke et al., 2015; Wen et al., 2016; Gašić et al., 2016; Mo et al., 2016; Genevay and Laroche, 2016; Chen et al., 2016).
# 5.3 Topic-oriented & Goal-driven Datasets
Tables 1–5 list the topics of available datasets. Several of the human-human datasets are denoted as having casual or unrestricted topics. In contrast, most human-machine datasets focus on specific, narrow topics. It is useful to keep this distinction between restricted and unrestricted topics in mind, as goal-driven dialogue systems (which typically have a well-defined measure of performance related to task completion) are usually developed in the former setting. In some cases, the line between these two types of datasets blurs. For example, in the case of conversations occurring between players of an online game (Afantenos et al., 2012), the outcome of the game is determined by how participants play in the game environment, not by their conversation. In this case, some conversations may have a direct impact on a player's performance in the game, some conversations may be related to the game but irrelevant to the goal (e.g. commentary on past events), and some conversations may be completely unrelated to the game.
# 5.4 Incorporating longer memories
Recently, significant progress has been made towards incorporating a form of external memory into various neural-network architectures for sequence modeling. Models such as Memory Networks (Weston et al., 2015; Sukhbaatar et al., 2015) and Neural Turing Machines (NTM) (Graves et al., 2014) store some part of their input in a memory, which is then reasoned over in order to perform a variety of sequence-to-sequence tasks. These vary from simple problems, such as sequence copying, to more complex problems, such as question answering and machine translation. Although none of these models are explicitly designed to address dialogue problems, the extension by Kumar et al. (2015) to Dynamic Memory Networks specifically differentiates between episodic and semantic memory. In this case, the episodic memory is the same as the memory used in the traditional Memory Networks paper that is extracted from the input, while the semantic memory refers to knowledge sources that are fixed for all inputs. The model is shown to work for a variety of NLP tasks, and it is not difficult to envision an application to dialogue utterance generation where the semantic memory is the desired external knowledge source.
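At the core of these architectures is a differentiable read operation: the current state is matched against every memory slot, and a softmax over the match scores yields a weighted summary of the memory contents. The sketch below isolates this shared operation; it is a generic illustration rather than the exact parameterization of any of the cited models.

import numpy as np

def memory_read(query, memory_keys, memory_values):
    # query: (d,) state vector; memory_keys, memory_values: (slots, d).
    scores = memory_keys @ query              # inner-product match per slot
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ memory_values            # attention-weighted memory summary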
# 5.5 Incorporating External Knowledge
Another interesting research direction is the incorporation of external knowledge sources in order to inform the response to be generated. Using external information is of great importance to dialogue systems, particularly in the goal-driven setting. Even non-goal-driven dialogue systems designed simply to entertain the user could benefit from leveraging external information, such as current news articles or movie reviews, in order to better converse about real-world events. This may be particularly useful in data-sparse domains, where there is not enough dialogue training data to reliably learn a response that is appropriate for each input utterance, or in domains that evolve quickly over time.
# 5.5.1 STRUCTURED EXTERNAL KNOWLEDGE
In traditional goal-driven dialogue systems (Levin and Pieraccini, 1997), where the goal is to provide information to the user, there is already extensive use of external knowledge sources. For example, in the Let's Go! dialogue system (Raux et al., 2005), the user requests information about various bus arrival and departure times. Thus, a critical input to the model is the actual bus schedule, which is used in order to generate the system's utterances. Another example is the dialogue system described by Nöth et al. (2004), which helps users find movie information by utilizing movie showtimes from different cinemas. Such examples are abundant both in the literature and in practice. Although these models make use of external knowledge, the knowledge sources in these cases are highly structured and are only used to place hard constraints on the possible states of an utterance to be generated. They are essentially contained in relational databases or structured ontologies, and are only used to provide a deterministic mapping from the dialogue states extracted from an input user utterance to the dialogue system state or the generated response.
Complementary to domain-specific databases and ontologies are general natural language processing databases and tools. These include lexical databases such as WordNet (Miller, 1995), which contains lexical relationships between words for over a hundred thousand words, VerbNet (Schuler, 2005), which contains lexical relations between verbs, and FrameNet (Ruppenhofer et al., 2006), which contains "word senses" for over ten thousand words along with examples of each word sense. In addition, there exist several natural language processing tools, such as part-of-speech taggers, word category classifiers, word embedding models, named entity recognition models, co-reference resolution models, semantic role labeling models, semantic similarity models and sentiment analysis models (Manning and Schütze, 1999; Jurafsky and Martin, 2008; Mikolov et al., 2013; Gurevych and Strube, 2004; Lin and Walker, 2011b), that may be used by the Natural Language Interpreter to extract meaning from human utterances. Since these tools are typically built upon texts and annotations created by humans, using them inside a dialogue system can be interpreted as a form of structured transfer learning, where the relationships or labels learned from the original natural language processing corpus provide additional information to the dialogue system and improve generalization of the system.
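For instance, a Natural Language Interpreter can query WordNet through NLTK's interface to retrieve senses, glosses, hypernyms, and synonyms of a content word; a small illustration follows, assuming NLTK with the WordNet data installed.

from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

senses = wn.synsets("bus")        # the word senses ("synsets") of "bus"
print(senses[0].definition())     # gloss of the first sense
print(senses[0].hypernyms())      # more general concepts for that sense
print(senses[0].lemma_names())    # synonymous lemmas within the sense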
# 5.5.2 UNSTRUCTURED EXTERNAL KNOWLEDGE
Complementary sources of information can be found in unstructured knowledge sources, such as online encyclopedias (e.g. Wikipedia (Denoyer and Gallinari, 2007)), as well as domain-specific sources (Lowe et al., 2015b). It is beyond the scope of this paper to review all possible ways that these unstructured knowledge sources have been or could be used in conjunction with a data-driven dialogue system. However, we note that this is likely to be a fruitful research area.
# 5.6 Personalized dialogue agents
When conversing, humans often adapt to their interlocutor to facilitate understanding, and thus improve conversational efficiency and satisfaction. Attaining human-level performance with dialogue agents may well require personalization, i.e. models that are aware of and capable of adapting to their interlocutor. Such capabilities could increase the effectiveness and naturalness of generated dialogues (Lucas et al., 2009; Su et al., 2013). We see personalization of dialogue systems as an important task, which so far has not received much attention. There have been initial efforts on user-specific models which could be adapted to work in combination with the dialogue models presented in this survey (Lucas et al., 2009; Lin and Walker, 2011a; Pargellis et al., 2004). There has also been interesting work on character modeling in movies (Walker et al., 2011; Li et al., 2016; Mo et al., 2016). There is significant potential to learn user models as part of dialogue models. The large datasets presented in this paper, some of which provide multiple dialogues per user, may enable the development of such models.
# 5.7 Evaluation metrics
One of the most challenging aspects of constructing dialogue systems lies in their evaluation. While the end goal is to deploy the dialogue system in an application setting and receive real human feedback, getting to this stage is time-consuming and expensive. Often it is also necessary to optimize performance on a pseudo-performance metric prior to release. This is particularly true if a dialogue model has many hyper-parameters to be optimized: it is infeasible to run user experiments for every parameter setting in a grid search. Although crowdsourcing platforms, such as Amazon Mechanical Turk, can be used for some user testing (Jurčíček et al., 2011), evaluations using paid subjects can also lead to biased results (Young et al., 2013). Ideally, we would have some automated metrics for calculating a score for each model, and only involve human evaluators once the best model has been chosen with reasonable confidence.
The evaluation problem also arises for non-goal-driven dialogue systems. Here, researchers have focused mainly on the output of the response generation module. Evaluation of such non-goal-driven dialogue systems can be traced back to the Turing test (Turing, 1950), where human judges communicate with both computer programs and other humans over a chat terminal without knowing each other's true identity. The goal of the judges was to identify the humans and computer programs, under the assumption that a program indistinguishable from a real human being must be intelligent. However, this setup has been criticized extensively, with numerous researchers proposing alternative evaluation procedures (Cohen, 2005). More recently, researchers have turned to analyzing the collected dialogues after they are finished (Galley et al., 2015; Pietquin and Hastie, 2013; Shawar and Atwell, 2007a; Schatzmann et al., 2005).
Even when human evaluators are available, it is often difficult to choose a set of informative and consistent criteria that can be used to judge an utterance generated by a dialogue system. For example, one might ask the evaluator to rate the utterance on vague notions such as "appropriateness" and "naturalness", or to try to differentiate between utterances generated by the system and those generated by actual humans (Vinyals and Le, 2015). Schatzmann et al. (2005) suggest two aspects that need to be evaluated for all response generation systems (as well as user simulation models): 1) whether the model can generate human-like output, and 2) whether the model can reproduce the variety of user behaviour found in the corpus. But we lack a definitive framework for such evaluations.
We complete this discussion by summarizing different approaches to the automatic evaluation problem as they relate to these objectives.
# 5.7.1 AUTOMATIC EVALUATION METRICS FOR GOAL-DRIVEN DIALOGUE SYSTEMS
User evaluation of goal-driven dialogue systems typically focuses on goal-related performance criteria, such as goal completion rate, dialogue length, and user satisfaction (Walker et al., 1997; Schatzmann et al., 2005). These were originally evaluated by human users interacting with the dialogue system, but more recently researchers have also begun to use third-party annotators for evaluating recorded dialogues (Yang et al., 2010). Due to their simplicity, the vast majority of hand-crafted task-oriented dialogue systems have been evaluated solely in this way. However, when using machine learning algorithms to train on large-scale corpora, automatic optimization criteria are required. The challenge with evaluating goal-driven dialogue systems without human intervention is that the process necessarily requires multiple steps: it is difficult to determine if a task has been solved from a single utterance-response pair from a conversation. Thus, simulated data is often generated by a user simulator (Eckert et al., 1997; Schatzmann et al., 2007; Jung et al., 2009; Georgila et al., 2006; Pietquin and Hastie, 2013). Given a sufficiently accurate user simulation model, an interaction between the dialogue system and the user can be simulated, from which it is possible to deduce the desired metrics, such as goal completion rate. Significant effort has been made to render the simulated data as realistic as possible, by modeling user intentions. Evaluation of such simulation methods has already been conducted (Schatzmann et al., 2005). However, generating realistic user simulation models remains an open problem.
# 5.7.2 AUTOMATIC EVALUATION METRICS FOR NON-GOAL-DRIVEN DIALOGUE SYSTEMS
Evaluation of non-goal-driven dialogue systems, whether by automatic means or user studies, remains a difficult challenge.
Word Overlap Metrics. One approach is to borrow evaluation metrics from other NLP tasks, such as machine translation, which uses BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) scores. These metrics have been used to compare responses generated by a learned dialogue strategy to the actual next utterance in the conversation, conditioned on a dialogue context (Sordoni et al., 2015b). While BLEU scores have been shown to correlate with human judgements for machine translation (Papineni et al., 2002), their effectiveness for automatically assessing dialogue response generation is unclear. There are several issues to consider: given the context of a conversation, there often exists a large number of possible responses that "fit" into the dialogue. Thus, the response generated by a dialogue system could be entirely reasonable, yet it may have no words in common with the actual next utterance. In this case, the BLEU score would be very low, but would not accurately reflect the strength of the model. Indeed, even humans who are tasked with predicting the next utterance of a conversation achieve relatively low BLEU scores (Sordoni et al., 2015b). Although the METEOR metric takes into account synonyms and morphological variants of words in the candidate response, it still suffers from the aforementioned problems. In a sense, these measurements only satisfy one direction of Schatzmann's criteria: high BLEU and METEOR scores imply that the model is generating human-like output, but the model may still not reproduce the variety of user behaviour found in the corpus. Furthermore, such metrics will only accurately reflect the performance of the dialogue system if given a large number of candidate responses for each given context.
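This failure mode is easy to reproduce with NLTK's sentence-level BLEU (smoothing is needed, since unsmoothed BLEU degenerates on short segments); the example sentences below are invented, and a perfectly reasonable reply with no word overlap scores near zero while only the verbatim reference scores well.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "i will be there in five minutes".split()
plausible = "ok see you soon".split()    # reasonable reply, zero word overlap
verbatim = "i will be there in five minutes".split()

smooth = SmoothingFunction().method1
print(sentence_bleu([reference], plausible, smoothing_function=smooth))  # near 0
print(sentence_bleu([reference], verbatim, smoothing_function=smooth))   # 1.0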
Next Utterance Classification. Alternatively, one can narrow the number of possible responses to a small, pre-defined list, and ask the model to select the most appropriate response from this list. The list includes the actual next response of the conversation (the desired prediction), and the other entries (false positives) are sampled from elsewhere in the corpus (Lowe et al., 2016, 2015a). This next utterance classification (NUC) task is derived from the recall and precision metrics for information-retrieval-based approaches. There are several attractive properties of this metric: it is easy to interpret, and the difficulty can be adjusted by changing the number of false responses. However, there are drawbacks. In particular, since the other candidate answers are sampled from elsewhere in the corpus, there is a chance that these also represent reasonable responses given the context. This can be alleviated to some extent by reporting Recall@k measures, i.e. whether the correct response is found in the k responses with the highest rankings according to the model. Although current models evaluated using NUC are trained explicitly to maximize the performance on this metric by minimizing the cross-entropy between context-response pairs (Lowe et al., 2015a; Kadlec et al., 2015), the metric could also be used to evaluate a probabilistic generative model trained to output full utterances.
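Computing the metric requires nothing more than ranking the candidate list under the model's scoring function, as in this sketch; the names are ours, and score stands for any model that assigns a scalar to a context-response pair.

def recall_at_k(score, context, candidates, true_index, k):
    # Rank all candidate responses by model score, highest first.
    ranked = sorted(range(len(candidates)),
                    key=lambda i: score(context, candidates[i]),
                    reverse=True)
    # 1 if the ground-truth response is among the top k, else 0;
    # corpus-level Recall@k averages this indicator over test examples.
    return 1.0 if true_index in ranked[:k] else 0.0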
Word Perplexity. Another metric proposed to evaluate probabilistic language models (Bengio et al., 2003; Mikolov et al., 2010), which has seen significant recent use for evaluating end-to-end dialogue systems, is word perplexity (Pietquin and Hastie, 2013; Serban et al., 2016). Perplexity explicitly measures the probability that the model will generate the ground-truth next utterance given some context of the conversation. This is particularly appealing for dialogue, as the distribution over words in the next utterance can be highly multi-modal (i.e. there are many possible responses). A re-weighted perplexity metric has also been proposed, where stop-words, punctuation, and end-of-utterance tokens are removed before evaluating, in order to focus on the semantic content of the phrase (Serban et al., 2016). Both word perplexity, as well as utterance-level recall and precision outlined above, satisfy Schatzmann's evaluation criteria, since scoring high on these would require the model to produce human-like output and to reproduce most types of conversations in the corpus.
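Concretely, a common formulation of corpus-level word perplexity (a sketch; the exact conditioning and tokenization details vary between papers) is

\mathrm{PPL} = \exp\Big( -\frac{1}{N_W} \sum_{n=1}^{N} \log P_\theta(U_n \mid C_n) \Big),

where U_n is the ground-truth next utterance for context C_n, N is the number of test utterances, and N_W is the total number of words over which the log-probabilities are computed; lower perplexity indicates a better fit to the corpus.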
Response Diversity. Recent non-goal-driven dialogue systems based on neural networks have had problems generating diverse responses (Serban et al., 2016). Li et al. (2015) recently introduced two new metrics, distinct-1 and distinct-2, which respectively measure the number of distinct unigrams and bigrams in the generated responses. Although these fail to satisfy either of Schatzmann's criteria, they may still be useful in combination with other metrics, such as BLEU, NUC or word perplexity.
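A sketch of the computation follows; we normalize by the total n-gram count, while Li et al. (2015) scale by the total number of generated tokens, so the constants may differ slightly.

def distinct_n(responses, n):
    # responses: list of tokenized responses (lists of string tokens).
    total, unique = 0, set()
    for tokens in responses:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# Repetitive output yields low distinct-1 and distinct-2 scores.
responses = [r.split() for r in ["i do not know", "i do not know", "maybe later"]]
print(distinct_n(responses, 1), distinct_n(responses, 2))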
# 6. Conclusion
There is strong evidence that, over the next few years, dialogue research will quickly move towards large-scale data-driven model approaches. In particular, as is the case for other language-related applications such as speech recognition, machine translation and information retrieval, these approaches will likely come in the form of end-to-end trainable systems. This paper provides an extensive survey of currently available datasets suitable for research, development, and evaluation of such data-driven dialogue systems.
In addition to presenting the datasets, we provide a detailed discussion of several of the issues related to the use of datasets in dialogue system research. Several potential directions are highlighted, such as transfer learning and the incorporation of external knowledge, which may lead to scalable solutions for end-to-end training of conversational agents.
# Acknowledgements
The authors gratefully acknowledge financial support by the Samsung Advanced Institute of Technology (SAIT), the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Research Chairs, the Canadian Institute for Advanced Research (CIFAR) and Compute Canada. Early versions of the manuscript benefited greatly from the proofreading of Melanie Lyman-Abramovitch, and later versions were extensively revised by Genevieve Fried and Nicolas Angelard-Gontier. The authors also thank Nissan Pow, Michael Noseworthy, Chia-Wei Liu, Gabriel Forgues, Alessandro Sordoni, Yoshua Bengio and Aaron Courville for helpful discussions.
# References
B. Aarts and S. A. Wallis. The diachronic corpus of present-day spoken English (DCPSE), 2006.
R. Abbott, B. Ecker, P. Anand, and M. Walker. Internet argument corpus 2.0: An SQL schema for dialogic social media and the corpora to go with it. In Language Resources and Evaluation Conference, LREC2016, 2016.
S. Afantenos, N. Asher, F. Benamara, A. Cadilhac, C. Dégremont, P. Denis, M. Guhe, S. Keizer, A. Lascarides, O. Lemon, et al. Developing a corpus of strategic conversation in the settlers of catan. In SeineDial 2012 - The 16th workshop on the semantics and pragmatics of dialogue, 2012.
Y. Al-Onaizan, U. Germann, U. Hermjakob, K. Knight, P. Koehn, D. Marcu, and K. Yamada. Translating with scarce resources. In AAAI, 2000.
J. Alexandersson, R. Engel, M. Kipp, S. Koch, U. Küssner, N. Reithinger, and M. Stede. Modeling negotiation dialogs. In Verbmobil: Foundations of Speech-to-Speech Translation, pages 441–451. Springer, 2000.
D. Ameixa and L. Coheur. From subtitles to human interactions: introducing the subtle corpus. Technical report, 2013.
D. Ameixa, L. Coheur, P. Fialho, and P. Quaresma. Luke, I am your father: dealing with out-of-domain requests by using movies subtitles. In Intelligent Virtual Agents, pages 13–21, 2014.
A. H. Anderson, M. Bader, E. G. Bard, E. Boyle, G. Doherty, S. Garrod, S. Isard, J. Kowtko, J. McAllister, J. Miller, et al. The HCRC map task corpus. Language and speech, 34(4):351–366, 1991.
J. Andreas, S. Rosenthal, and K. McKeown. Annotating agreement and disagreement in threaded discussion. In LREC, pages 818–822. Citeseer, 2012.
L. E. Asri, J. He, and K. Suleman. A sequence-to-sequence model for user simulation in spoken dialogue systems. arXiv preprint arXiv:1607.00070, 2016.
A. J. Aubrey, D. Marshall, P. L. Rosin, J. Vandeventer, D. W. Cunningham, and C. Wallraven. Cardiff conversation database (CCDb): A database of natural dyadic conversations. In Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE Conference on, pages 277–282, 2013.
H. Aust, M. Oerder, F. Seide, and V. Steinbiss. The philips automatic train timetable information system. Speech Communication, 17(3):249–262, 1995.
A. Aw, M. Zhang, J. Xiao, and J. Su. A phrase-based statistical model for sms text normalization. In Proceedings of the COLING, pages 33–40, 2006.
R. E. Banchs. Movie-DiC: a movie dialogue corpus for research and development. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers, 2012.
R. E. Banchs and H. Li. IRIS: a chat-oriented dialogue system based on the vector space model. In Proceedings of the ACL 2012 System Demonstrations, 2012.
S. Banerjee and A. Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, 2005.
M. Barlow. Corpus of spoken, professional american-english, 2000.
J. Beare and B. Scott. The spoken corpus of the survey of english dialects: language variation and oral history. In Proceedings of ALLC/ACH, 1999.
Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155, 2003.
Y. Bengio, I. Goodfellow, and A. Courville. Deep learning. An MIT Press book in preparation. Draft chapters available at http://www.iro.umontreal.ca/~bengioy/dlbook, 2014.
C. Bennett and A. I. Rudnicky. The carnegie mellon communicator corpus, 2002.
D. Biber and E. Finegan. An initial typology of english text types. Corpus linguistics II: New studies in the analysis and exploitation of computer corpora, pages 19–46, 1986.
D. Biber and E. Finegan. Diachronic relations among speech-based and written registers in english. Variation in English: multi-dimensional studies, pages 66–83, 2001.
S. Bird, S. Browning, R. Moore, and M. Russell. Dialogue move recognition using topic spotting techniques. In Spoken Dialogue Systems-Theories and Applications, 1995.
A. W. Black, S. Burger, A. Conkie, H. Hastie, S. Keizer, O. Lemon, N. Merigaud, G. Parent, G. Schubiner, B. Thomson, et al. Spoken dialog challenge 2010: Comparison of live and control test results. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2011.
D. Bohus and A. I. Rudnicky. Sorry, I didn't catch that! In Recent Trends in Discourse and Dialogue, pages 123–154. Springer, 2008.
S. E. Brennan, K. S. Schuhmann, and K. M. Batres. Entrainment on the move and in the lab: The walking around corpus. In Proceedings of the 35th Annual Conference of the Cognitive Science Society, 2013.
G. Brown, A. Anderson, R. Shillcock, and G. Yule. Teaching talk. Cambridge: CUP, 1984.
S. Burger, K. Weilhammer, F. Schiel, and H. G. Tillmann. Verbmobil data collection and annotation. In Verbmobil: Foundations of speech-to-speech translation, pages 537–549. Springer, 2000.
J. E. Cahn and S. E. Brennan. A psychological model of grounding and repair in dialog. In AAAI Symposium on Psychological Models of Communication in Collaborative Systems, 1999.
A. Canavan and G. Zipperlen. Callfriend american english-non-southern dialect. Linguistic Data Consortium, 10:1, 1996.
A. Canavan, D. Graff, and G. Zipperlen. Callhome american english speech. Linguistic Data Consortium, 1997.
S. K. Card, T. P. Moran, and A. Newell. The Psychology of Human-Computer Interaction. L. Erlbaum Associates Inc., Hillsdale, NJ, USA, 1983. ISBN 0898592437.
R. Carter. Orders of reality: Cancode, communication, and culture. ELT journal, 52(1):43–56, 1998.
R. Carter and M. McCarthy. Cambridge grammar of English: a comprehensive guide; spoken and written English grammar and usage. Ernst Klett Sprachen, 2006.
T. L. Chartrand and J. A. Bargh. The chameleon effect: the perception–behavior link and social interaction. Journal of personality and social psychology, 76(6):893, 1999.
T. Chen and M. Kan. Creating a live, public short message service corpus: the nus sms corpus. Language Resources and Evaluation, 47(2):299–335, 2013.
Y.-N. Chen, D. Hakkani-Tür, and X. He. Zero-shot learning of intent embeddings for expansion by convolutional deep structured semantic models. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pages 6045–6049. IEEE, 2016.
A. Clark. Pre-processing very noisy text. In Proc. of Workshop on Shallow Processing of Large Corpora, pages 12–22, 2003.
H. H. Clark and S. E. Brennan. Grounding in communication. Perspectives on socially shared cognition, 13:127–149, 1991.
P. R. Cohen. If not turing's test, then what? AI magazine, 26(4):61, 2005.
K. M. Colby. Modeling a paranoid mind. Behavioral and Brain Sciences, 4:515–534, 1981.
R. M. Cooper. The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1):84–107, 1974.
N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines: And Other Kernel-based Learning Methods. Cambridge University Press, 2000.
H. Cuayáhuitl, S. Renals, O. Lemon, and H. Shimodaira. Human-computer dialogue simulation using hidden markov models. In Automatic Speech Recognition and Understanding, 2005 IEEE Workshop on, pages 290–295, 2005.
H. Cuayáhuitl, S. Keizer, and O. Lemon. Strategic dialogue management via deep reinforcement learning. arXiv preprint arXiv:1511.08099, 2015.
C. Danescu-Niculescu-Mizil and L. Lee. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL, 2011.
L. Daubigney, M. Geist, S. Chandramohan, and O. Pietquin. A comprehensive reinforcement learning framework for dialogue management optimization. IEEE Journal of Selected Topics in Signal Processing, 6(8):891–902, 2012.
M. Davies. Comparing the corpus of american soap operas, COCA, and the BNC, 2012a.
M. Davies. Corpus of american soap operas, 2012b.
I. de Kok, D. Heylen, and L. Morency. Speaker-adaptive multimodal prediction model for listener responses. In Proceedings of the 15th ACM on International conference on multimodal interaction, 2013.
L. Deng and X. Li. Machine learning paradigms for speech recognition: An overview. Audio, Speech, and Language Processing, IEEE Transactions on, 21(5):1060–1089, 2013.
L. Denoyer and P. Gallinari. The wikipedia xml corpus. In Comparative Evaluation of XML Information Retrieval Systems, pages 12–19. Springer, 2007.
B. Dhingra, Z. Zhou, D. Fitzpatrick, M. Muehl, and W. Cohen. Tweet2vec: Character-based distributed representations for social media. arXiv preprint arXiv:1605.03481, 2016.
A. Djalali, S. Lauer, and C. Potts. Corpus evidence for preference-driven interpretation. In Logic, Language and Meaning, pages 150–159. Springer, 2012.
J. Dodge, A. Gane, X. Zhang, A. Bordes, S. Chopra, A. Miller, A. Szlam, and J. Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015.
S. Dose. Flipping the script: A corpus of american television series (cats) for corpus-based language learning and teaching. Corpus Linguistics and Variation in English: Focus on Non-native Englishes, 2013.
E. Douglas-Cowie, R. Cowie, I. Sneddon, C. Cox, O. Lowry, M. Mcrorie, J. Martin, L. Devillers, S. Abrilian, A. Batliner, et al. The humaine database: addressing the collection and annotation of naturalistic and induced emotional data. In Affective computing and intelligent interaction, pages 488–500. Springer, 2007.
W. Eckert, E. Levin, and R. Pieraccini. User modeling for spoken dialogue system evaluation. In Automatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on, pages 80–87, 1997.
L. El Asri, H. Schulz, S. Sharma, J. Zumer, J. Harris, E. Fine, R. Mehrotra, and K. Suleman. Frames: A corpus for adding memory to goal-oriented dialogue systems. Preprint at http://www.maluuba.com/publications/, 2017.
M. Elsner and E. Charniak. You talking to me? a corpus and algorithm for conversation disentanglement. In Association for Computational Linguistics (ACL), 2008.
D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, and P. Vincent. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11, 2010.
G. Di Fabbrizio, G. Tur, and D. Hakkani-Tür. Bootstrapping spoken dialog systems with data reuse. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2004.
M. Fatemi, L. E. Asri, H. Schulz, J. He, and K. Suleman. Policy networks with two-stage training for dialogue systems. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2016.
D. Fisher, M. Smith, and H. T. Welser. You are who you talk to: Detecting roles in usenet newsgroups. In Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06), volume 3, pages 59b–59b, 2006.
P. Forchini. Spontaneity reloaded: American face-to-face and movie conversation compared. In Corpus Linguistics, 2009.
P. Forchini. Movie language revisited. Evidence from multi-dimensional analysis and corpora. Peter Lang, 2012.
G. Forgues, J. Pineau, J. Larchevêque, and R. Tremblay. Bootstrapping dialog systems with word embeddings. In Workshop on Modern Machine Learning and Natural Language Processing, Advances in neural information processing systems (NIPS), 2014.
E. N. Forsyth and C. H. Martell. Lexical and discourse analysis of online chat dialog. In International Conference on Semantic Computing (ICSC), pages 19–26, 2007.
M. Frampton and O. Lemon. Recent research advances in reinforcement learning in spoken dialogue systems. The Knowledge Engineering Review, 24(04):375–408, 2009.
M. Galley, C. Brockett, A. Sordoni, Y. Ji, M. Auli, C. Quirk, M. Mitchell, J. Gao, and B. Dolan. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL, pages 445–450, 2015.
M. Gašić, F. Jurčíček, S. Keizer, F. Mairesse, B. Thomson, K. Yu, and S. Young. Gaussian processes for fast policy optimisation of pomdp-based dialogue managers. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 201–204. Association for Computational Linguistics, 2010.
M. Gašić, F. Jurčíček, B. Thomson, K. Yu, and S. Young. On-line policy optimisation of spoken dialogue systems via live interaction with human subjects. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 312–317. IEEE, 2011.
M. Gašić, M. Henderson, B. Thomson, P. Tsiakoulis, and S. Young. Policy optimisation of pomdp-based dialogue systems without state space compression. In Spoken Language Technology Workshop (SLT), 2012 IEEE, pages 31–36. IEEE, 2012.
M. Gašić, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, and S. Young. On-line policy optimisation of Bayesian spoken dialogue systems via human interaction. In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8367–8371, 2013.
M. Gašić, N. Mrkšić, L. M. Rojas-Barahona, P.-H. Su, S. Ultes, D. Vandyke, T.-H. Wen, and S. Young. Dialogue manager domain adaptation using gaussian process reinforcement learning. Computer Speech & Language, 2016.
A. Genevay and R. Laroche. Transfer learning for user adaptation in spoken dialogue systems. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pages 975–983. International Foundation for Autonomous Agents and Multiagent Systems, 2016.
K. Georgila, J. Henderson, and O. Lemon. User simulation for spoken dialogue systems: learning and evaluation. In Proceedings of INTERSPEECH, 2006.
K. Georgila, M. Wolters, J. D. Moore, and R. H. Logie. The MATCH corpus: A corpus of older and younger users interactions with spoken dialogue systems. Language Resources and Evaluation, 44(3):221–261, 2010.
J. Gibson and A. D. Pick. Perception of another person's looking behavior. The American journal of psychology, 76(3):386–394, 1963.
J. J. Godfrey, E. C. Holliman, and J. McDaniel. SWITCHBOARD: Telephone speech corpus for research and development. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP-92), 1992.
I. Goodfellow, A. Courville, and Y. Bengio. Deep learning. Book in preparation for MIT Press, 2015. URL http://goodfeli.github.io/dlbook/.
J. T. Goodman. A bit of progress in language modeling extended version. Machine Learning and Applied Statistics Group Microsoft Research. Technical Report, MSR-TR-2001-72, 2001.
C. Goodwin. Conversational Organization: Interaction Between Speakers and Hearers. New York: Academic Press, 1981.
A. L. Gorin, G. Riccardi, and J. H. Wright. How may I help you? Speech Communication, 23(1):113–127, 1997.
A. Graves. Sequence transduction with recurrent neural networks. In Proceedings of the 29th International Conference on Machine Learning (ICML), Representation Learning Workshop, 2012.
A. Graves, G. Wayne, and I. Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
S. Greenbaum. Comparing English worldwide: The international corpus of English. Clarendon Press, 1996.
S. Greenbaum and G. Nelson. The international corpus of english (ICE) project. World Englishes, 15(1):3–15, 1996.
C. Gülçehre, O. Firat, K. Xu, K. Cho, L. Barrault, H. Lin, F. Bougares, H. Schwenk, and Y. Bengio. On using monolingual corpora in neural machine translation. CoRR, abs/1503.03535, 2015.
I. Gurevych and M. Strube. Semantic similarity applied to spoken dialogue summarization. In Proceedings of the 20th international conference on Computational Linguistics, 2004.
V. Haslerud and A. Stenström. The bergen corpus of london teenager language (COLT). Spoken English on Computer. Transcription, Mark-up and Application. London: Longman, pages 235–242, 1995.
P. A. Heeman and J. F. Allen. The TRAINS 93 Dialogues. Technical report, DTIC Document, 1995.
C. T. Hemphill, J. J. Godfrey, and G. R. Doddington. The ATIS spoken language systems pilot corpus. In Proceedings of the DARPA speech and natural language workshop, pages 96–101, 1990.
M. Henderson, B. Thomson, and S. Young. Deep neural network approach for the dialog state tracking challenge. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2013.
M. Henderson, B. Thomson, and J. Williams. Dialog state tracking challenge 2 & 3, 2014a.
M. Henderson, B. Thomson, and J. Williams. The second dialog state tracking challenge. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2014b.
M. Henderson, B. Thomson, and S. Young. Word-based dialog state tracking with recurrent neural networks. In 15th Special Interest Group on Discourse and Dialogue (SIGDIAL), page 292, 2014c.
G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012.
T. Hiraoka, G. Neubig, K. Yoshino, T. Toda, and S. Nakamura. Active learning for example-based dialog systems. In Proc Intl Workshop on Spoken Dialog Systems, Saariselka, Finland, 2016.
H. Hung and G. Chittaranjan. The IDIAP wolf corpus: exploring group behaviour in a competitive role-playing game. In Proceedings of the international conference on Multimedia, pages 879–882, 2010.
J. L. Hutchens and M. D. Alder. Introducing MegaHAL. In Proceedings of the Joint Conferences on New Methods in Language Processing and Computational Natural Language Learning, 1998.
A. Jönsson and N. Dahlbäck. Talking to a computer is not like talking to your best friend. In Proceedings of the First Scandinavian Conference on Artificial Intelligence, 1988.
S. Jung, C. Lee, K. Kim, M. Jeong, and G. G. Lee. Data-driven user simulation for automated evaluation of spoken dialog systems. Computer Speech & Language, 23(4):479–509, 2009.
D. Jurafsky and J. H. Martin. Speech and language processing, 2nd Edition. Prentice Hall, 2008.
F. Jurčíček, S. Keizer, M. Gašić, F. Mairesse, B. Thomson, K. Yu, and S. Young. Real user evaluation of spoken dialogue systems using amazon mechanical turk. In Proceedings of INTERSPEECH, volume 11, 2011.
F. Jurčíček, B. Thomson, and S. Young. Reinforcement learning for parameter estimation in statistical spoken dialogue systems. Computer Speech & Language, 26(3):168–192, 2012.
R. Kadlec, M. Schmid, and J. Kleindienst. Improved deep learning baselines for ubuntu corpus dialogs. Neural Information Processing Systems Workshop on Machine Learning for Spoken Language Understanding, 2015.
M. Kaufmann and J. Kalita. Syntactic normalization of twitter messages. In International conference on natural language processing, Kharagpur, India, 2010.
S. Kim, L. F. D'Haro, R. E. Banchs, J. Williams, and M. Henderson. Dialog state tracking challenge 4, 2015.
S. Kim, L. F. D'Haro, R. E. Banchs, J. D. Williams, M. Henderson, and K. Yoshino. The fifth dialog state tracking challenge. In IEEE Spoken Language Technology Workshop (SLT), 2016.
D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT press, 2009.
J. A. Konstan, B. N. Miller, D. Maltz, J. L. Herlocker, L. R. Gordon, and J. Riedl. Grouplens: applying collaborative filtering to usenet news. Communications of the ACM, 40(3):77–87, 1997.
A. Kumar, O. Irsoy, J. Su, J. Bradbury, R. English, B. Pierce, P. Ondruska, I. Gulrajani, and R. Socher. Ask me anything: Dynamic memory networks for natural language processing. Neural Information Processing Systems (NIPS), 2015.
M. Kytö and T. Walker. Guide to A corpus of English dialogues 1560-1760. Acta Universitatis Upsaliensis, 2006.
I. Langkilde and K. Knight. Generation that exploits corpus-based statistical knowledge. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics-Volume 1, pages 704–710. Association for Computational Linguistics, 1998.
G. Leech. 100 million words of english: the british national corpus (BNC). Language Research, 28(1):1–13, 1992.
E. Levin and R. Pieraccini. A stochastic model of computer-human interaction for learning dialogue strategies. In Eurospeech, volume 97, pages 1883–1886, 1997.
E. Levin, R. Pieraccini, and W. Eckert. Learning dialogue strategies within the markov decision process framework. In Automatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on, pages 72–79. IEEE, 1997.
J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055, 2015.
J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A persona-based neural conversation model. In ACL, pages 994–1003, 2016.
G. Lin and M. Walker. All the world's a stage: Learning character models from film. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 2011a.
G. I. Lin and M. A. Walker. All the world's a stage: Learning character models from film. In AIIDE, 2011b.
C. Lord and M. Haith. The perception of eye contact. Attention, Perception, & Psychophysics, 16(3):413–416, 1974.
R. Lowe, N. Pow, I. Serban, and J. Pineau. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2015a.
R. Lowe, N. Pow, I. V. Serban, L. Charlin, and J. Pineau. Incorporating unstructured textual knowledge sources into neural dialogue systems. Neural Information Processing Systems Workshop on Machine Learning for Spoken Language Understanding, 2015b.
R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. On the evaluation of dialogue systems with next utterance classification. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2016.
J. M. Lucas, F. Fernández, J. Salazar, J. Ferreiros, and R. San Segundo. Managing speaker identity and user profiles in a spoken dialogue system. In Procesamiento del Lenguaje Natural, number 43 in 1, pages 77–84, 2009.
B. MacWhinney and C. Snow. The child language data exchange system. Journal of child language, 12(02):271–295, 1985.
F. Mairesse and S. Young. Stochastic language generation in dialogue using factored language models. Computational Linguistics, 2014.
F. Mairesse, M. Gašić, F. Jurčíček, S. Keizer, B. Thomson, K. Yu, and S. Young. Phrase-based statistical language generation using graphical models and active learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1552–1561. Association for Computational Linguistics, 2010.
C. D. Manning and H. Schütze. Foundations of statistical natural language processing. MIT press, 1999.
M. McCarthy. Spoken language and applied linguistics. Ernst Klett Sprachen, 1998.
S. McGlashan, N. Fraser, N. Gilbert, E. Bilange, P. Heisterkamp, and N. Youd. Dialogue management for telephone information systems. In Proceedings of the third conference on Applied natural language processing, pages 245–246. Association for Computational Linguistics, 1992.
G. McKeown, M. F. Valstar, R. Cowie, and M. Pantic. The SEMAINE corpus of emotionally coloured character interactions. In Multimedia and Expo (ICME), 2010 IEEE International Conference on, pages 1079–1084, 2010.
T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based language model. In Proceedings of INTERSPEECH, pages 1045–1048, 2010.
T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013.
G. A. Miller. WordNet: a lexical database for english. Communications of the ACM, 38(11):39–41, 1995.
X. A. Miro, S. Bozonnet, N. Evans, C. Fredouille, G. Friedland, and O. Vinyals. Speaker diarization: A review of recent research. Audio, Speech, and Language Processing, IEEE Transactions on, 20(2):356–370, 2012.
K. Mo, S. Li, Y. Zhang, J. Li, and Q. Yang. Personalizing a dialogue system with transfer learning. arXiv preprint arXiv:1610.02891, 2016.
S. Mohan and J. Laird. Learning goal-oriented hierarchical tasks from situated interactive instruction. In AAAI, 2014.
T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016.
L. Nio, S. Sakti, G. Neubig, T. Toda, M. Adriani, and S. Nakamura. Developing non-goal dialog system based on examples of drama television. In Natural Interaction with Robots, Knowbots and Smartphones, pages 355–361. Springer, 2014a.
L. Nio, S. Sakti, G. Neubig, T. Toda, and S. Nakamura. Conversation dialog corpora from television and movie scripts. In 17th Oriental Chapter of the International Committee for the Co-ordination and Standardization of Speech Databases and Assessment Techniques (COCOSDA), pages 1–4, 2014b.
E. Nöth, A. Horndasch, F. Gallwitz, and J. Haas. Experiences with commercial telephone-based dialogue systems. it – Information Technology (vormals it+ti), 46(6/2004):315–321, 2004.
C. Oertel, F. Cummins, J. Edlund, P. Wagner, and N. Campbell. D64: A corpus of richly recorded conversational interaction. Journal on Multimodal User Interfaces, 7(1-2):19–28, 2013.
A. H. Oh and A. I. Rudnicky. Stochastic language generation for spoken dialogue systems. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2000), Workshop on Conversational Systems, volume 3, pages 27–32. Association for Computational Linguistics, 2000.
T. Paek. Reinforcement learning for spoken dialogue systems: Comparing strengths and weaknesses for practical deployment. In Proc. Dialog-on-Dialog Workshop, INTERSPEECH, 2006.
K. Papineni, S. Roukos, T. Ward, and W. Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics (ACL), 2002.
A. N. Pargellis, H.-K. J. Kuo, and C. Lee. An automatic dialogue generation platform for personalized dialogue applications. Speech Communication, 42(3-4):329–351, 2004. doi: 10.1016/j.specom.2003.10.003.
R. Passonneau and E. Sachar. Loqui human-human dialogue corpus (transcriptions and annotations), 2014. D. Perez-Marin and I. Pascual-Nieto. Conversational Agents and Natural Language Interaction: Techniques and Effective
Practices. IGI Global, 2011.
S. Petrik. Wizard of Oz Experiments on Speech Dialogue Systems. PhD thesis, Technischen Universitat Graz, 2004. R. Pieraccini, D. Suendermann, K. Dayanidhi, and J. Liscombe. Are we there yet? research in commercial spoken dialog
systems. In Text, Speech and Dialogue, pages 3â13, 2009.
O. Pietquin. A framework for unsupervised learning of dialogue strategies. Presses Universit´e Catholique de Louvain, 2004.
O. Pietquin. A probabilistic description of man-machine spoken communication. In Multimedia and Expo, 2005. ICME 2005. IEEE International Conference on, pages 410â413, 2005.
O. Pietquin. Learning to ground in spoken dialogue systems. In Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on, volume 4, pages IVâ165, 2007.
O Pietquin and T. Dutoit. A probabilistic framework for dialog simulation and optimal strategy learning. IEEE Transac- tions on Audio, Speech, and Language Processing, 14(2):589â599, 2006.
O. Pietquin and H. Hastie. A survey on metrics for the evaluation of user simulations. The knowledge engineering review, 28(01):59â73, 2013.
B. Piot, M. Geist, and O. Pietquin. Imitation learning applied to embodied conversational agents. In 4th Workshop on Machine Learning for Interactive Systems (MLIS 2015), volume 43, 2015.
S. Png and J. Pineau. Bayesian reinforcement learning for POMDP-based dialogue systems. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2156–2159, 2011.
C. Potts. Goal-driven answers in the Cards dialogue corpus. In Proceedings of the 30th West Coast Conference on Formal Linguistics, pages 1–20, 2012.
A. Ratnaparkhi. Trainable approaches to surface natural language generation and their application to conversational dialog systems. Computer Speech & Language, 16(3):435–455, 2002.
A. Raux, B. Langner, D. Bohus, A. W. Black, and M. Eskenazi. Let's go public! Taking a spoken dialog system to the real world. In Proceedings of INTERSPEECH, 2005.
N. Reithinger and M. Klesen. Dialogue act classification using language models. In EuroSpeech, 1997.
H. Ren, W. Xu, Y. Zhang, and Y. Yan. Dialog state tracking using conditional random fields. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2013.
S. Renals, T. Hain, and H. Bourlard. Recognition and understanding of meetings: the AMI and AMIDA projects. In IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU), 2007.
R. Reppen and N. Ide. The American National Corpus: overall goals and the first release. Journal of English Linguistics, 32(2):105–113, 2004.
J. Rickel and W. L. Johnson. Animated agents for procedural training in virtual reality: Perception, cognition, and motor control. Applied Artificial Intelligence, 13(4-5):343–382, 1999.
V. Rieser and O. Lemon. Natural language generation as planning under uncertainty for spoken dialogue systems. In Empirical Methods in Natural Language Generation, pages 105–120. Springer, 2010.
A. Ritter, C. Cherry, and B. Dolan. Unsupervised modeling of Twitter conversations. In North American Chapter of the Association for Computational Linguistics (NAACL 2010), 2010.
A. Ritter, C. Cherry, and W. B. Dolan. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2011.
S. Rosenthal and K. McKeown. I couldn't agree more: The role of conversational structure in agreement and disagreement detection in online discussions. In Special Interest Group on Discourse and Dialogue (SIGDIAL), page 168, 2015.
S. Rosset and S. Petel. The Ritel corpus – an annotated human-machine open-domain question answering spoken dialog corpus. In The International Conference on Language Resources and Evaluation (LREC), 2006.
S. Rossignol, O. Pietquin, and M. Ianotto. Training a BN-based user model for dialogue simulation with missing data. In Proceedings of the International Joint Conference on Natural Language Processing, pages 598–604, 2011.
A. Roy, C. Guinaudeau, H. Bredin, and C. Barras. TVD: a reproducible and multiply aligned TV series dataset. In The International Conference on Language Resources and Evaluation (LREC), volume 2, 2014.
J. Ruppenhofer, M. Ellsworth, M. R. L. Petruck, C. R. Johnson, and J. Scheffczyk. FrameNet II: Extended Theory and Practice. International Computer Science Institute, 2006. Distributed with the FrameNet data.
J. Schatzmann and S. Young. The hidden agenda user simulation model. IEEE Transactions on Audio, Speech, and Language Processing, 17(4):733–747, 2009.
J. Schatzmann, K. Georgila, and S. Young. Quantitative evaluation of user simulation techniques for spoken dialogue systems. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2005.
J. Schatzmann, K. Weilhammer, M. Stuttle, and S. Young. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The Knowledge Engineering Review, 21(2):97–126, 2006.
J. Schatzmann, B. Thomson, K. Weilhammer, H. Ye, and S. Young. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 149–152, 2007.
J. N. Schrading. Analyzing domestic abuse using natural language processing on social media data. Master's thesis, Rochester Institute of Technology, 2015. http://scholarworks.rit.edu/theses.
N. Schrading, C. O. Alm, R. Ptucha, and C. M. Homan. An analysis of domestic abuse discourse on Reddit. In Empirical Methods in Natural Language Processing (EMNLP), 2015.
K. K. Schuler. VerbNet: A broad-coverage, comprehensive verb lexicon. PhD thesis, University of Pennsylvania, 2005. Paper AAI3179808.
I. V. Serban. Maximum likelihood learning and inference in conditional random fields. Bachelor's thesis, University of Copenhagen, Denmark, 2012. http://www.blueanalysis.com/thesis/thesis.pdf.
I. V. Serban and J. Pineau. Text-based speaker identification for multi-participant open-domain dialogue systems. Neural Information Processing Systems Workshop on Machine Learning for Spoken Language Understanding, 2015.
I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building end-to-end dialogue systems using generative hierarchical neural networks. In AAAI, 2016. In press.
I. V. Serban, T. Klinger, G. Tesauro, K. Talamadupula, B. Zhou, Y. Bengio, and A. Courville. Multiresolution recurrent neural networks: An application to dialogue response generation. In AAAI Conference, 2017a.
I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI Conference, 2017b.
S. Shaikh, T. Strzalkowski, G. A. Broadwell, J. Stromer-Galley, S. M. Taylor, and N. Webb. MPC: A multi-party chat corpus for modeling social phenomena in discourse. In The International Conference on Language Resources and Evaluation (LREC), 2010.
L. Shang, Z. Lu, and H. Li. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364, 2015.
C. Shaoul and C. Westbury. A usenet corpus (2005-2009), 2009.
S. Sharma, J. He, K. Suleman, H. Schulz, and P. Bachman. Natural language generation in dialogue using lexicalized and delexicalized data. arXiv preprint arXiv:1606.03632, 2016.
B. A. Shawar and E. Atwell. Different measurements metrics to evaluate a chatbot system. In Proceedings of the Workshop on Bridging the Gap: Academic and Industrial Research in Dialog Technologies, pages 89–96, 2007a.
B. A. Shawar and E. Atwell. Chatbots: are they really useful? In LDV Forum, volume 22, pages 29–49, 2007b.
E. Shriberg, R. Dhillon, S. Bhagat, J. Ang, and H. Carvey. The ICSI meeting recorder dialog act (MRDA) corpus. Technical report, DTIC Document, 2004.
A. Simpson and N. M. Fraser. Black box and glass box evaluation of the SUNDIAL system. In Third European Conference on Speech Communication and Technology, 1993.
S. Singh, D. Litman, M. Kearns, and M. Walker. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, pages 105–133, 2002.
S. P. Singh, M. J. Kearns, D. J. Litman, and M. A. Walker. Reinforcement learning for spoken dialogue systems. In Neural Information Processing Systems, 1999.
A. Sordoni, Y. Bengio, H. Vahabi, C. Lioma, J. G. Simonsen, and J. Nie. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management (CIKM 2015), 2015a.
A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J. Nie, J. Gao, and B. Dolan. A neural network approach to context-sensitive generation of conversational responses. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2015), 2015b.
A. Stenström, G. Andersen, and I. K. Hasund. Trends in Teenage Talk: Corpus Compilation, Analysis and Findings, volume 8. J. Benjamins, 2002.
A. Stent, R. Prasad, and M. Walker. Trainable sentence planning for complex information presentation in spoken dialog systems. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, page 79. Association for Computational Linguistics, 2004.
A. Stolcke, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, P. Taylor, R. Martin, C. Van Ess-Dykema, and M. Meteer. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339–373, 2000.
P.-H. Su, Y.-B. Wang, T.-H. Yu, and L.-S. Lee. A dialogue game framework with personalized training using reinforcement learning for computer-assisted language learning. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8213–8217. IEEE, 2013.
P.-H. Su, D. Vandyke, M. Gasic, D. Kim, N. Mrksic, T.-H. Wen, and S. Young. Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. In INTERSPEECH, 2015.
P.-H. Su, M. Gasic, N. Mrksic, L. Rojas-Barahona, S. Ultes, D. Vandyke, T.-H. Wen, and S. Young. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689, 2016.
S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus. End-to-end memory networks. In Neural Information Processing Systems (NIPS), 2015.
X. Sun, J. Lichtenauer, M. Valstar, A. Nijholt, and M. Pantic. A multimodal database for mimicry analysis. In Affective Computing and Intelligent Interaction, pages 367–376. Springer, 2011.
J. Svartvik. The London-Lund Corpus of Spoken English: Description and Research. Number 82. Lund University Press, 1990.
B. Thomson and S. Young. Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems. Computer Speech & Language, 24(4):562–588, 2010.
J. Tiedemann. Parallel data, tools and interfaces in OPUS. In The International Conference on Language Resources and Evaluation (LREC), 2012.
S. E. Tranter, D. Reynolds, et al. An overview of automatic speaker diarization systems. Audio, Speech, and Language Processing, IEEE Transactions on, 14(5):1557–1565, 2006.
D. Traum and J. Rickel. Embodied agents for multi-party dialogue in immersive virtual worlds. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 2, pages 766–773. ACM, 2002.
A. M. Turing. Computing machinery and intelligence. Mind, pages 433–460, 1950.
D. C. Uthus and D. W. Aha. The Ubuntu chat corpus for multiparticipant chat analysis. In AAAI Spring Symposium: Analyzing Microtext, 2013.
J. Vandeventer, A. J. Aubrey, P. L. Rosin, and D. Marshall. 4D Cardiff Conversation Database (4D CCDb): A 4D database of natural, dyadic conversations. In Proceedings of the 1st Joint Conference on Facial Analysis, Animation and Auditory-Visual Speech Processing (FAAVSP 2015), 2015.
D. Vandyke, P.-H. Su, M. Gasic, N. Mrksic, T.-H. Wen, and S. Young. Multi-domain dialogue success classifiers for policy training. In Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop on, pages 763–770. IEEE, 2015.
O. Vinyals and Q. Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
M. A. Walker, D. J. Litman, C. A. Kamm, and A. Abella. PARADISE: A framework for evaluating spoken dialogue agents. In Proceedings of the Eighth Conference on European Chapter of the Association for Computational Linguistics, pages 271–280, 1997.
M. A. Walker, O. C. Rambow, and M. Rogati. Training a sentence planner for spoken dialogue using boosting. Computer Speech & Language, 16(3):409–433, 2002.
M. A. Walker, R. Grant, J. Sawyer, G. I. Lin, N. Wardrip-Fruin, and M. Buell. Perceived or not perceived: Film character models for expressive NLG. In ICIDS, pages 109–121, 2011.
M. A. Walker, G. I. Lin, and J. Sawyer. An annotated corpus of film dialogue for learning and characterizing character style. In The International Conference on Language Resources and Evaluation (LREC), pages 1373–1378, 2012a.
M. A. Walker, J. E. F. Tree, P. Anand, R. Abbott, and J. King. A corpus for research on deliberation and debate. In The International Conference on Language Resources and Evaluation (LREC), pages 812–817, 2012b.
Z. Wang and O. Lemon. A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2013.
S. Webb. A corpus driven study of the potential for vocabulary learning through watching movies. International Journal of Corpus Linguistics, 15(4):497–519, 2010.
J. Weizenbaum. ELIZA – a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36–45, 1966.
T.-H. Wen, M. Gašić, D. Kim, N. Mrkšić, P.-H. Su, D. Vandyke, and S. Young. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2015.
T.-H. Wen, M. Gasic, N. Mrksic, L. M. Rojas-Barahona, P.-H. Su, D. Vandyke, and S. Young. Multi-domain neural network language generation for spoken dialogue systems. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2016), 2016.
J. Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.
J. Weston, S. Chopra, and A. Bordes. Memory networks. In International Conference on Learning Representations (ICLR), 2015.
J. Williams, A. Raux, D. Ramachandran, and A. Black. The dialog state tracking challenge. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2013.
J. D. Williams and S. Young. Partially observable Markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393–422, 2007.
J. D. Williams and G. Zweig. End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning. arXiv preprint arXiv:1606.01269, 2016.
M. Wolska, Q. B. Vo, D. Tsovaltzi, I. Kruijff-Korbayová, E. Karagjosova, H. Horacek, A. Fiedler, and C. Benzmüller. An annotated corpus of tutorial dialogs on mathematical theorem proving. In The International Conference on Language Resources and Evaluation (LREC), 2004.
B. Wrede and E. Shriberg. Relationship between dialogue acts and hot spots in meetings. In Automatic Speech Recognition and Understanding, 2003. ASRU'03. 2003 IEEE Workshop on, pages 180–185. IEEE, 2003.
Y. Yang, W. Yih, and C. Meek. WikiQA: A challenge dataset for open-domain question answering. In EMNLP, pages 2013–2018, 2015.
Z. Yang, B. Li, Y. Zhu, I. King, G. Levow, and H. Meng. Collection of user judgments on spoken dialog system with crowdsourcing. In Spoken Language Technology Workshop (SLT), 2010 IEEE, pages 277–282, 2010.
S. Young, M. Gasic, B. Thomson, and J. D. Williams. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179, 2013.
S. J. Young. Probabilistic methods in spoken-dialogue systems. Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 358(1769), 2000.
J. Zhang, R. Kumar, S. Ravi, and C. Danescu-Niculescu-Mizil. Conversational flow in Oxford-style debates. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2016), 2016.
# Appendix A. Learning from Dialogue Corpora
In this appendix section, we review some existing computational architectures suitable for learning dialogue strategies directly from data. The goal is not to provide full technical details on the methods available to achieve this – though we provide appropriate citations for the interested reader – but rather to illustrate concretely how the datasets described above can, and have, been used in different dialogue learning efforts. As such, we limit this review to a small set of existing work.
# A.1 Data Pre-processing
Before applying machine learning methods to a dialogue corpus, it is common practice to perform some form of pre-processing. The aim of pre-processing is to standardize a dataset with minimal loss of information. This can reduce data scarcity, and eventually make it easier for models to learn from the dataset. In natural language processing, it is commonly acknowledged that pre-processing can have a significant effect on the results of the natural language processing system; the same observation holds for dialogue. Although the specific procedure for pre-processing is task- and data-dependent, in this section we highlight a few common approaches, in order to give a general idea of where pre-processing can be effective for dialogue systems.
Pre-processing is often used to remove anomalies in the data. For text-based corpora, this can include removing acronyms, slang, misspellings and phonemicization (e.g. where words are written according to their pronunciation instead of their correct spelling). For some models, such as the generative dialogue models discussed later, tokenization (e.g. defining the smallest unit of input) is also critical. In datasets collected from mobile text, forum, microblog and chat-based settings, it is common to observe a significant number of acronyms, abbreviations, and phonemicizations that are specific to the topic and userbase (Clark, 2003). Although there is no widely accepted standard for handling such occurrences, many NLP systems incorporate some form of pre-processing to normalize these entries (Kaufmann and Kalita, 2010; Aw et al., 2006; Clark, 2003). For example, there are look-up tables, such as the IRC Beginner List18, which can be used to translate the most common acronyms and slang into standard English. Another common strategy is to use stemming and lemmatization to replace many words with a single item (e.g. walking and walker both replaced by walk). Of course, depending on the task at hand and the corpus size, an option is also to leave the acronyms and phonemicized words as they are.
In our experience, almost all dialogue datasets contain some amount of spelling errors. By correcting these, we expect to reduce data sparsity. This can be done by using automatic spelling correctors. However, it is important to inspect their effectiveness. For example, for movie scripts, Serban et al. (2016) found that automatic spelling correctors introduced more spelling errors than they corrected, and a better strategy was to use Wikipedia's most commonly misspelled words19 to look up and replace potential spelling errors. Transcribed spoken language corpora often include many non-words in their transcriptions (e.g. uh, oh). Depending on whether or not these provide additional information to the dialogue system, researchers may also want to remove these words by using automatic spelling correctors.
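To make this kind of normalization concrete, the following is a minimal Python sketch of a token-level lookup pipeline. The lookup tables here are illustrative stand-ins for resources such as the IRC Beginner List and Wikipedia's list of commonly misspelled words, and the function name is our own.

```python
import re

# Illustrative stand-ins for the lookup resources mentioned above (e.g. the
# IRC Beginner List for slang, Wikipedia's commonly misspelled words list).
SLANG = {"imo": "in my opinion", "u": "you", "b4": "before"}
MISSPELLINGS = {"recieved": "received", "teh": "the", "definately": "definitely"}

def normalize_utterance(text):
    """Lowercase, tokenize, and apply token-level lookup substitutions."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    tokens = [SLANG.get(t, t) for t in tokens]          # expand slang/acronyms
    tokens = [MISSPELLINGS.get(t, t) for t in tokens]   # fix known misspellings
    return " ".join(tokens)

print(normalize_utterance("IMO u recieved it b4 teh meeting"))
# -> "in my opinion you received it before the meeting"
```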
18. http://www.ircbeginner.com/ircinfo/abbreviations.html 19. https://en.wikipedia.org/wiki/Commonly_misspelled_English_words
# A.2 Segmenting Speakers and Conversations
Some dialogue corpora, such as those based on movie subtitles, come without explicit speaker segmentation. However, it is often possible to estimate the speaker segmentation, which is useful to build a model of a given speaker, as compared to a model of the conversation as a whole. For text-based corpora, Serban and Pineau (2015) have recently proposed the use of recurrent neural networks to estimate turn-taking and speaker labels in movie scripts with promising results.
In the speech recognition literature, this is the subtask of speaker diarisation (Miro et al., 2012; Tranter et al., 2006). When the audio stream of the speech is available, the segmentation is quite accurate, with classification error rates as low as 5%.
A strategy sometimes used for segmentation of spoken dialogues is based on labelling a small subset of the corpus, known as the gold corpus, and training a specific segmentation model based on this. The remaining corpus is then segmented iteratively according to the segmentation model, after which the gold corpus is expanded with the most confident segmentations and the segmentation model is retrained. This process is sometimes known as embedded training, and is widely used in other speech recognition tasks (Jurafsky and Martin, 2008). It appears to work well in practice, but has the disadvantage that the interpretation of the label can drift. Naturally, this approach can be applied to text dialogues as well in a straightforward manner.
In certain corpora, such as those based on chat channels or extracted from movie subtitles, many conversations occur in sequence. In some cases, there are no labels partitioning the beginning and end of separate conversations. Similarly, certain corpora with multiple speakers, such as corpora based on chat channels, contain several conversations occurring in parallel (e.g. simultaneously) but do not contain any segmentation separating these conversations. This makes it hard to learn a meaningful model from such conversations, because they do not represent consistent speakers or coherent semantic topics.
To leverage such data towards learning individual conversations, researchers have proposed methods to automatically estimate segmentations of conversations (Lowe et al., 2015a; Nio et al., 2014a). Earlier solutions were mostly based on hand-crafted rules and seemed to work well upon manual inspection. For chat forums, one solution involves thresholding the beginning and end of conversations based on time (e.g. a delay of more than x minutes between utterances), and eliminating speakers from the conversation unless they are referred to explicitly by other speakers (Lowe et al., 2015a). More advanced techniques involve maximum-entropy classifiers, which leverage the content of the utterances in addition to the discourse structure and timing information (Elsner and Charniak, 2008). For movie scripts, researchers have proposed the use of simple information-retrieval similarity measures, such as cosine similarity, to identify conversations (Nio et al., 2014a). Based on their performance on estimating turn-taking and speaker labels, recurrent neural networks also hold promise for segmenting conversations (Serban and Pineau, 2015).
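As a concrete illustration of the time-thresholding heuristic, the sketch below splits a chronologically ordered chat log into conversations. The 10-minute gap is an assumed parameter for illustration, not a value prescribed by the cited work.

```python
from datetime import timedelta

def segment_by_time(utterances, max_gap=timedelta(minutes=10)):
    """Split a chronologically ordered list of (timestamp, speaker, text)
    tuples into separate conversations whenever the silence between two
    consecutive utterances exceeds max_gap."""
    conversations, current, last_time = [], [], None
    for ts, speaker, text in utterances:
        if last_time is not None and ts - last_time > max_gap:
            conversations.append(current)
            current = []
        current.append((ts, speaker, text))
        last_time = ts
    if current:
        conversations.append(current)
    return conversations
```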
# A.3 Discriminative Model Architectures
As discussed in Subsection 2.3, discriminative models aim to predict certain labels or annotations manually associated with a portion of a dialogue. For example, a discriminative model might be trained to predict the intent of a person in a dialogue, or the topic, or a specific piece of information.
In the following subsections, we discuss research directions where discriminative models have been developed to solve dialogue-related tasks.20 This is primarily meant to review and contrast the work from a data-driven learning perspective.
A.3.1 DIALOGUE ACT CLASSIFICATION AND DIALOGUE TOPIC SPOTTING
Here we consider the simple task known as dialogue act classification (or dialogue move recognition). In this task, the goal is to classify a user utterance, independent of the rest of the conversation, as one out of K dialogue acts: P(A | U), where A is the discrete variable representing the dialogue act and U is the user's utterance. This falls under the general umbrella of text classification tasks, though its application is specific to dialogue. Like the dialogue state tracker model, a dialogue act classification model could be plugged into a dialogue system as an additional natural language understanding component.
Early approaches for this task focused on using n-gram models for classification (Reithinger and Klesen, 1997; Bird et al., 1995). For example, Reithinger et al. assumed that each dialogue act is generated by its own language model. They trained an n-gram language model on the utterances of each dialogue act, Pθ(U | A), and afterwards used Bayes' rule to assign the probability of a new dialogue act Pθ(A | U) to be proportional to the probability of generating the utterance under the language model Pθ(U | A).
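A minimal add-one-smoothed unigram version of this idea might look as follows; the class name and data format are our own choices, and real systems would use higher-order n-grams and proper smoothing.

```python
import math
from collections import Counter, defaultdict

class NGramActClassifier:
    """Unigram language model per dialogue act combined with Bayes' rule:
    P(A|U) is proportional to P(U|A) P(A)."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # act -> word counts
        self.act_counts = Counter()

    def train(self, labelled_utterances):  # iterable of (tokens, act) pairs
        for tokens, act in labelled_utterances:
            self.act_counts[act] += 1
            self.word_counts[act].update(tokens)

    def classify(self, tokens):
        vocab = {w for counts in self.word_counts.values() for w in counts}
        def log_score(act):
            n = sum(self.word_counts[act].values())
            # add-one smoothed log P(U|A) plus the log prior P(A)
            ll = sum(math.log((self.word_counts[act][w] + 1) / (n + len(vocab)))
                     for w in tokens)
            return ll + math.log(self.act_counts[act])
        return max(self.act_counts, key=log_score)
```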
However, a major problem with this approach is the lack of datasets with annotated dialogue acts. More recent work by Forgues et al. (2014) acknowledged this problem, and tried to overcome the data scarcity issue by leveraging word embeddings learned from other, larger text corpora. They created an utterance-level representation by combining the word embeddings of each word, for example, by summing the word embeddings or taking the maximum w.r.t. each dimension. These utterance-level representations, together with word counts, were then given as inputs to a linear classifier to classify the dialogue acts. Thus, Forgues et al. showed that by leveraging another, substantially larger, corpus they were able to improve performance on their original task.
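The embedding-combination strategies just described are easy to sketch; the 300-dimensional size below is an assumption, and `embeddings` stands in for any pretrained word-vector table.

```python
import numpy as np

def utterance_features(tokens, embeddings, dim=300):
    """Build a fixed-size utterance representation from word embeddings by
    summing them and taking the dimension-wise maximum, then concatenating
    the two; `embeddings` maps word -> numpy vector of length `dim`."""
    vectors = [embeddings[w] for w in tokens if w in embeddings]
    if not vectors:
        return np.zeros(2 * dim)
    stacked = np.stack(vectors)
    return np.concatenate([stacked.sum(axis=0), stacked.max(axis=0)])
```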
This makes the work on dialogue act classification very appealing from a data-driven perspective. First, it seems that the accuracy can be improved by leveraging alternative data sources. Second, unlike the dialogue state tracking models, dialogue act classification models typically involve relatively little feature hand-crafting, thus suggesting that data-driven approaches may be more powerful for these tasks.
# A.3.2 DIALOGUE STATE TRACKING
The core task of the DSTC (Williams et al., 2013) adds more complexity by focusing on tracking the state of a conversation. This is framed as a classification problem: for every time step t of the dialogue, the model is given the current input to the dialogue state tracker (including ASR and SLU outputs) together with external knowledge sources (e.g. bus timetables). The required output is a probability distribution over a set of Nt predefined hypotheses, in addition to the REST hypothesis (which represents the probability that none of the previous Nt hypotheses are correct). The goal is to match the distribution over hypotheses as closely as possible to the real annotated
20. It is important to note that although discriminative models have been favored to model supervised problems in the dialogue-system literature, in principle generative models (P (X, Y )) instead of discriminative models (P (Y |X)) could be used.
data. By providing an open dataset with accurate labels, it has been possible for researchers to perform rigorous comparative evaluations of different classification models for dialogue systems. Models for the DSTC include both statistical approaches and hand-crafted systems. An example of the latter is the system proposed in Wang and Lemon (2013), which relies on having access to a marginal confidence score Pt(u, s, v) for a user dialogue act u(s = v) with slot s and value v, given by a subsystem at time t. The marginal confidence score gives a heuristic estimate of the probability of a slot taking a particular value. The model must then aggregate all these estimates and confidence scores to compute probabilities for each hypothesis.
In this model, the SLU component may for example give the marginal confidence score (inform(data.day=today)=0.9) in the bus scheduling DSTC, meaning that it believes with high confidence (0.9) that the user has requested information for the current day. This marginal confidence score is used to update the belief state of the system bt(s, v) at time t using a set of hand-crafted updates to the probability distribution over hypotheses. From a data-driven learning perspective, this approach does not make efficient use of the dataset, but instead relies heavily on the accuracy of the hand-crafted tracker outputs.
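One illustrative hand-crafted update rule of this kind is sketched below. This is a simplification of our own, not the exact update of Wang and Lemon (2013), but it conveys the flavor of aggregating marginal confidence scores into a belief state.

```python
def update_belief(belief, slot, value, confidence):
    """Evidence inform(slot=value) with marginal confidence p discounts every
    current hypothesis for the slot by (1 - p) and adds p to the informed
    value; `belief` maps slot -> {value: probability}. An assumed,
    simplified rule for illustration only."""
    hypotheses = belief.setdefault(slot, {})
    for v in hypotheses:
        hypotheses[v] *= (1.0 - confidence)
    hypotheses[value] = hypotheses.get(value, 0.0) + confidence
    return belief

belief = {"date": {"today": 0.5, "tomorrow": 0.5}}
update_belief(belief, "date", "today", 0.9)
# belief["date"] is now {"today": 0.95, "tomorrow": 0.05}
```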
More sophisticated models for the DSTC take a dynamic Bayesian approach by modeling the latent dialogue state and observed tracker outputs in a directed graphical model (Thomson and Young, 2010). These models are sometimes called generative state tracking models, though they are still discriminative in nature as they only attempt to model the state of the dialogue and not the words and speech acts in each dialogue. For simplicity we drop the index i in the following equations. Similar to before, let xt be the observed tracker outputs at time t. Let st be the dialogue state at time t, which represents the state of the world including, for example, the user actions (e.g. defined by slot-value pairs) and system actions (e.g. number of times a piece of information has been requested). For the DSTC, the state st must represent the true current slot-value pair at time t. Let rt be the reward observed at time t, and let at be the action taken by the dialogue system at time t. This general framework, also known as a partially-observable Markov decision process (POMDP), then defines the graphical model:
Pθ(xt, st, rt | at, s_{t−1}) = Pθ(xt | st, at) Pθ(st | s_{t−1}, at) Pθ(rt | st, at), (3)
where at is assumed to be a deterministic variable of the dialogue history. This variable is given in the DSTC, because it comes from the policy used to interact with the humans when gathering the datasets. This approach is attractive from a data-driven learning perspective, because it models the uncertainty (e.g. noise and ambiguity) inherent in all variables of interest. Thus, we might expect such a model to be more robust in real applications.
Now, since all variables are observed in this task, and since the goal is to determine st given the other variables, we are only interested in:
Pθ(st | xt, rt, at) ∝ Pθ(xt | st, at) Pθ(st | s_{t−1}, at) Pθ(rt | st, at), (4)
which can then be normalized appropriately since st is a discrete stochastic variable. However, due to the temporal dependency between st and s_{t−1}, the complexity of the model is similar to a hidden Markov model, and thus both learning and inference become intractable when the state, observation and action spaces are too large. Indeed, as noted by Young et al. (2013), the number of states, actions and observations can easily reach 10^10 configurations in some dialogue systems. Thus, it is necessary to make simplifying assumptions on the distribution Pθ(st | xt, rt, at) and to approximate
the learning and inference procedures (Young et al., 2013). With appropriate structural assumptions and approximations, these models perform well compared to baseline systems on the DSTC (Black et al., 2011).
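For a small discrete state space, the normalization in Eq. (4) can be carried out directly, as in the sketch below. The three callables stand in for the learned conditionals; the interface is an assumption for illustration.

```python
import numpy as np

def state_posterior(num_states, s_prev, x_t, r_t, a_t, p_obs, p_trans, p_rew):
    """Evaluate Eq. (4) for every candidate state and normalize:
    P(s_t | x_t, r_t, a_t, s_{t-1}) is proportional to
    P(x_t|s_t,a_t) P(s_t|s_{t-1},a_t) P(r_t|s_t,a_t)."""
    scores = np.array([p_obs(x_t, s, a_t) * p_trans(s, s_prev, a_t) * p_rew(r_t, s, a_t)
                       for s in range(num_states)])
    return scores / scores.sum()
```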
Non-Bayesian data-driven models have also been proposed. These models are sometimes called discriminative state tracking models, because they do not assume a generation process for the tracker outputs xt or for any other variables, but instead only condition on them. For example, Henderson et al. (2013) proposed to use a feed-forward neural network. At each time step t, they extracted a set of features and then concatenated a window of W feature vectors together. These are given as input to the neural network, which outputs the probability of each hypothesis from the set of hypotheses. By learning a discriminative model and using a window over the last time steps, they do not face the intractability issues of dynamic Bayesian networks. Instead, their system can be trained with gradient descent methods. This approach could eventually scale to large datasets, and is therefore very attractive for data-driven learning. However, unlike the dynamic Bayesian approaches, these models do not represent probability distributions over variables apart from the state of the dialogue. Without probability distributions, it is not clear how to define a confidence interval over the predictions. Thus the models might not provide adequate information to determine when to seek confirmation or clarification following unclear statements.
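A single forward pass of such a window-based tracker can be sketched as follows; the single tanh hidden layer and the weight shapes are illustrative assumptions, not the exact architecture of Henderson et al. (2013).

```python
import numpy as np

def tracker_forward(feature_window, W1, b1, W2, b2):
    """Concatenate the last W feature vectors, apply one hidden layer, and
    return a softmax distribution over the hypotheses (including REST)."""
    x = np.concatenate(feature_window)   # (W * d,)
    h = np.tanh(W1 @ x + b1)             # hidden layer
    logits = W2 @ h + b2                 # one logit per hypothesis
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()
```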
Researchers have also investigated the use of conditional random fields (CRFs) for state tracking (Ren et al., 2013). This class of models also falls under the umbrella of discriminative state tracking models; however, they are able to take into account temporal dependencies within dialogues by modeling a complete joint distribution over states:
Pθ(S | X) ∝ ∏_{c∈C} ∏_i fi(sc, xc), (5)
where C is the set of factors, i.e. sets of state and tracker variables across time, sc is the set of states associated with factor c, xc is the set of observations associated with factor c, and {fi}i is a set of functions parametrized by parameters θ. There exist certain functions fi for which exact inference is tractable and learning the parameters θ is efficient (Koller and Friedman, 2009; Serban, 2012). For example, Ren et al. (2013) propose a set of factors which create a linear dependency structure between the dialogue states while conditioning on all the observed tracker outputs:
Pθ(S | X) ∝ ∏_t ∏_i fi(s_{t−1}, st, s_{t+1}, X). (6)
This creates a dependency between all dialogue states, forcing them to be coherent with each other. This should be contrasted with the feed-forward neural network approach, which does not enforce any sort of consistency between different predicted dialogue states. The CRF models can be trained with gradient descent to optimize the exact log-likelihood, but exact inference is typically intractable. Therefore, an approximate inference procedure, such as loopy belief propagation, is necessary to approximate the posterior distribution over states st.
In summary, there exist different approaches to building discriminative learning architectures for dialogue. While they are fairly straightforward to evaluate and often form a crucial component for real-world dialogue systems, by themselves they only offer a limited view of what we ultimately want to accomplish with dialogue models. They often require labeled data, which is often difficult to acquire on a large scale (except in the case of answer re-ranking), and require manual feature selection, which reduces their potential effectiveness. Since each model is trained independently
of the other models and components with which it interacts in the complete dialogue system, one cannot give guarantees on the performance of the final dialogue system by evaluating the individual models alone. Thus, we desire models that are capable of producing probability distributions over all possible responses instead of over all annotated labels; in other words, models that can actually generate new responses by selecting the highest probability next utterance. This is the subject of the next section.
# A.4 Response Generation Models
Both the response re-ranking approach and the generative response model approach have allowed for the use of large-scale unannotated dialogue corpora for training dialogue systems. We therefore close this section by discussing these classes of approaches.
In general, approaches which aim to generate responses have the potential to learn semantically more powerful representations of dialogues compared to models trained for dialogue state tracking or dialogue act classification tasks: the concepts they are able to represent are limited only by the content of the dataset, unlike the dialogue state tracking or dialogue act classification models which are limited by the annotation scheme used (e.g. the set of possible slot-value pairs pre-specified for the DSTC).
# A.4.1 RE-RANKING RESPONSE MODELS
Researchers have recently turned their attention to the problem of building models that produce answers by re-ranking a set of candidate answers, and outputting the one with the highest rank or probability. While the task may seem artificial, the main advantage is that it allows the use of completely un-annotated datasets. Unlike dialogue state tracking, this task does not require datasets where experts have labeled every utterance and system response. This task only requires knowing the sequence of utterances, which can be extracted automatically from transcribed conversations.
Banchs and Li (2012) construct an information retrieval system based on movie scripts using the vector space model. Their system searches through a database of movie scripts to find a dialogue similar to the current dialogue with the user, and then emits the response from the closest dialogue in the database. Similarly, Ameixa et al. (2014) also use an information retrieval system, but based on movie subtitles instead of movie scripts. They show that their system gives sensible responses to questions, and that bootstrapping an existing dialogue system from movie subtitles improves answering out-of-domain questions. Both approaches assume that the responses given in the movie scripts and movie subtitle corpora are appropriate. Such information retrieval systems consist of a relatively small set of manually tuned parameters. For this reason, they do not require (annotated) labels and can therefore take advantage of raw data (in this case movie scripts and movie subtitles). However, these systems are effectively nearest-neighbor methods. They do not learn rich representations from dialogues which can be used, for example, to generalize to previously unseen situations. Furthermore, it is unclear how to transform such models into full dialogue agents. They are not robust and it is not clear how to maintain the dialogue state. Contrary to search engines, which present an entire page of results, the dialogue system is only allowed to give a single response to the user.
Lowe et al. (2015a) also propose a re-ranking approach using the Ubuntu Dialogue Corpus. The authors propose an affinity model between a context c (e.g. five consecutive utterances in a conversation) and a potential reply r. Given a context-reply pair, the model compares the output of a context-specific LSTM against that of a response-specific LSTM neural network and outputs
whether or not the response is correct for the given context. The model maximizes the likelihood of a correct context-response pair:
max_θ ∑_i Pθ(true response | ci, ri)^{I_{ci}(ri)} (1 − Pθ(true response | ci, ri))^{1 − I_{ci}(ri)}, (7)
where θ stands for the set of all model parameters and I_{ci}(·) denotes a function that returns 1 when ri is the correct response to ci and 0 otherwise. Learning in the model uses stochastic gradient descent. As is typical with neural network architectures, this learning procedure scales to large datasets. Given a context, the trained model can be used to pick an appropriate answer from a set of potential answers. This model assumes that the responses given in the corpus are appropriate (i.e., this model does not generate novel responses). However, unlike the above information retrieval systems, this model is not provided with a similarity metric as in the vector space model, but instead must learn the semantic relevance of a response to a context. This approach is more attractive from a data-driven learning perspective because it uses the dataset more efficiently and avoids costly hand tuning of parameters.
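A minimal PyTorch sketch of such a dual-encoder affinity model follows. The bilinear scoring form sigmoid(c^T M r) is one common instantiation of this kind of model; the hidden size and initialization are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    """Context/response affinity model: two LSTM encoders and a bilinear
    scoring matrix M; the predicted probability that r is the true
    response to c is sigmoid(c^T M r)."""
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.context_rnn = nn.LSTM(dim, dim, batch_first=True)
        self.response_rnn = nn.LSTM(dim, dim, batch_first=True)
        self.M = nn.Parameter(torch.randn(dim, dim) * 0.01)

    def forward(self, context_ids, response_ids):  # (batch, seq_len) each
        _, (c, _) = self.context_rnn(self.embed(context_ids))
        _, (r, _) = self.response_rnn(self.embed(response_ids))
        c, r = c.squeeze(0), r.squeeze(0)          # final hidden states
        scores = (c @ self.M * r).sum(dim=1)       # batched c^T M r
        return torch.sigmoid(scores)
```

Training then amounts to minimizing binary cross-entropy on positive and sampled negative context-response pairs, which corresponds to the likelihood objective of Eq. (7).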
# A.4.2 FULL GENERATIVE RESPONSE MODELS
Generative dialogue response strategies are designed to automatically produce utterances by composing text (see Section 2.4). A straightforward way to define the set of dialogue system actions is by considering them as sequences of words which form utterances. Sordoni et al. (2015b) and Serban et al. (2016) both use this approach. They assume that both the user and the system utterances can be represented by the same generative distribution:
Pθ(u1, . . . , uT) = ∏_{t=1}^{T} Pθ(ut | u_{<t}) (8)

= ∏_{t=1}^{T} ∏_{n=1}^{Nt} Pθ(w_{t,n} | w_{t,<n}, u_{<t}), (9)

where the dialogue consists of T utterances u1, . . . , uT, w_{t,n} is the nth token in utterance t, and Nt is the number of tokens in utterance t. The variable u_{<t} indicates the sequence of utterances which precede ut, and similarly for w_{t,<n}. Further, the probability of the first utterance is defined as P(u1 | u_{<1}) = P(u1), and the first word of each utterance only conditions on the previous utterances, i.e. w_{t,<1} is 'null'. Tokens can be words, as well as speech and dialogue acts. The set of tokens depends on the particular application domain, but in general the set must be able to represent all desirable system actions. In particular, the set must contain an end-of-utterance token to allow the model to express turn-taking. This approach is similar to language modeling. For differentiable models, training is based on maximum log-likelihood using stochastic gradient descent methods. As discussed in Subsection 2.4, these models project words and dialogue histories onto a Euclidean space. Furthermore, when trained on text only, they can be thought of as unsupervised machine learning models.
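The per-token factorization of Eq. (9) can be sketched with a small recurrent language model in PyTorch. For brevity, the conditioning on previous utterances u_{<t} is collapsed into the recurrent state here rather than modeled hierarchically; the architecture choices are assumptions.

```python
import torch
import torch.nn as nn

class UtteranceLM(nn.Module):
    """Word-level recurrent language model in the spirit of Eq. (9): the
    log-probability of a token sequence is the sum of per-token
    conditional log-probabilities."""
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def log_prob(self, token_ids):                 # (batch, length)
        inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
        hidden, _ = self.rnn(self.embed(inputs))
        log_probs = torch.log_softmax(self.out(hidden), dim=-1)
        picked = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
        return picked.sum(dim=1)  # sum of per-token log-probabilities
```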
Sordoni et al. (2015b) use the above approach to generate responses for posts on Twitter. Specifically, Pθ(um | u_{<m}) is given by a recurrent neural network which generates a response word-by-word based on Eq. (9). The model learns its parameters using stochastic gradient descent on a corpus of Twitter messages. The authors then combine their generative model with a machine translation
system and demonstrate that the hybrid system outperforms a state-of-the-art machine translation system (Ritter et al., 2011).
Serban et al. (2016) extend the above model to generate responses for movie subtitles and movie scripts. Specifically, Serban et al. (2016) adapt a hierarchical recurrent neural network (Sordoni et al., 2015a), which they argue is able to represent the common ground between the dialogue interlocutors. They also propose to add speech and dialogue acts to the vocabulary of the model to make the interaction with the system more natural. However, since the model is used in a standalone manner, i.e., without combining it with a machine translation system, the majority of the generated responses are highly generic (e.g. I'm sorry or I don't know). The authors conclude that this is a limitation of all neural network-based generative models for dialogue (e.g., (Serban et al., 2016; Sordoni et al., 2015b; Vinyals and Le, 2015)). The problem appears to lie in the distribution of words in the dialogue utterances, which primarily consist of pronouns, punctuation tokens and a few common verbs but rarely nouns, verbs and adjectives. When trained on such a skewed distribution, the models do not learn to represent the semantic content of dialogues very well. This issue is exacerbated by the fact that dialogue is inherently ambiguous and multi-modal, which makes it more likely for the model to fall back on a generic response. As a workaround, Li et al. (2015) increase response diversity by changing the objective function at generation time to also maximize the mutual information between the context, i.e. the previous utterances, and the response utterance. However, it is not clear what impact this artificial diversity has on the effectiveness or naturalness of the dialogue system. It is possible that the issue may require larger corpora to learn semantic representations of dialogue, more context (e.g. longer conversations, user profiles and task-specific corpora) and multi-modal interfaces to reduce uncertainty. Further research is needed to resolve this question.
Wen et al. (2015) train a neural network to generate natural language responses for a closed-dialogue domain. They use Amazon Mechanical Turk21 to collect a dataset of dialogue acts and utterance pairs. They then train recurrent neural networks to generate a single utterance as in Eq. (9), but condition on the specified dialogue act:
Pθ(U | A) = ∏_n Pθ(wn | w_{<n}, A), (10)
where A is the dialogue act represented by a discrete variable, U is the generated utterance given A and wn is the nth word in the utterance. Based on a hybrid approach combining different recurrent neural networks for answer generation and convolutional neural networks for re-ranking answers, they are able to generate diverse utterances representing the dialogue acts in their datasets.
Similar to the models which re-rank answers, generative models may be used as complete dialogue systems or as response generation components of other dialogue systems. However, unlike the models which re-rank answers, the word-by-word generative models can generate entirely new utterances never seen before in the training set. Further, in certain models such as those cited above, response generation scales irrespective of dataset size.
# A.5 User Simulation Models
In the absence of large datasets, some researchers have turned to building user simulation models (sometimes referred to as 'user models') to train dialogue strategies. User simulation models aim
21. http://www.mturk.com
to produce natural, varied and consistent interactions from a fixed corpus, as stated by Pietquin and Hastie (2013, p. 2): 'An efficient user simulation should not only reproduce the statistical distribution of dialogue acts measured in the data but should also reproduce complete dialogue structures.' As such, they model the conditional probability of the user utterances given previous user and system utterances:
Pθ(u^user_t | u^user_{<t}, u^system_{<t}), (11)
where θ are the model parameters, and u^user_t and u^system_t are the user utterance (or action) and the system utterance (or action), respectively, at time t. Similarly, u^user_{<t} and u^system_{<t} indicate the sequences of user and system utterances that precede u^user_t and u^system_t, respectively.
There are two main differences between user simulation models and the generative response models discussed in Subsection A.4.2. First, user simulation models never model the distribution over system utterances, but instead only model the conditional distribution over user utterances given previous user and system utterances. Second, user simulation models usually model dialogue acts as opposed to word tokens. Since a single dialogue act may represent many different utterances, the models generalize well across paraphrases. However, training such user simulation models requires access to a dialogue corpus with annotated dialogue acts, and limits their application to training dialogue systems which work on the same set of dialogue acts. For spoken dialogue systems, user simulation models are usually combined with a model over speech recognition errors based on the automatic speech recognition system but, for simplicity, we omit this aspect in our analysis.
Researchers initially experimented with n-gram-based user simulation models (Eckert et al., 1997; Georgila et al., 2006), which are defined as:
Pθ(u^user_t | u^system_{t−1}, u^user_{t−2}, . . . , u^system_{t−n−1}) = θ_{u^user_t, u^system_{t−1}, u^user_{t−2}, . . . , u^system_{t−n−1}}, (12)
where n is an even integer, and θ is an n-dimensional tensor (table) which satisfies:
∑_{u^user_t} θ_{u^user_t, u^system_{t−1}, u^user_{t−2}, . . . , u^system_{t−n−1}} = 1. (13)
These models are trained either to maximize the log-likelihood of the observations, by setting θ_{u^user_t, u^system_{t−1}, . . . , u^system_{t−n−1}} equal to (a constant times) the number of occurrences of each corresponding n-gram, or on a related objective function which encourages smoothness and therefore reduces data sparsity for larger n's (Goodman, 2001). Even with smoothing, n has to be kept small, and these models are therefore unable to maintain the history and goals of the user over several utterances (Schatzmann et al., 2005). Consequently, the goal of the user changes over time, which has a detrimental effect on the performance of the dialogue system trained using the user simulator.
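The count-based maximum-likelihood estimate is simple enough to sketch directly. The version below is the n = 2 case of Eq. (12), where the next user act depends only on the last system act; the class name, data format, and absence of smoothing are our own simplifying choices.

```python
import random
from collections import Counter, defaultdict

class BigramUserSimulator:
    """Maximum-likelihood user model in the spirit of Eq. (12) with n = 2:
    parameters are normalized co-occurrence counts (no smoothing)."""
    def __init__(self):
        self.counts = defaultdict(Counter)  # system act -> user act counts

    def train(self, dialogues):  # each dialogue: [(system_act, user_act), ...]
        for dialogue in dialogues:
            for system_act, user_act in dialogue:
                self.counts[system_act][user_act] += 1

    def sample(self, system_act):
        acts = self.counts[system_act]  # assumes the act was seen in training
        return random.choices(list(acts), weights=list(acts.values()))[0]
```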
Several solutions have been proposed to solve the problem of maintaining the history of the dialogue. Pietquin (2004) proposes to condition the n-gram model on the user's goal:
Pθ(u^user_t | u^system_{t−1}, u^user_{t−2}, . . . , u^system_{t−n−1}, g), (14)
where g is the goal of the user, defined as a set of slot-value pairs. Unfortunately, not only must the goal lie within a set of hand-crafted slot-value pairs, but its distribution when simulating must
also be defined by experts. Using a more data-driven approach, Georgila et al. (2006) propose to condition the n-gram model on additional features:
Pθ(u^user_t | u^system_{t−1}, u^user_{t−2}, . . . , u^system_{t−n−1}, f(u^user_{<t}, u^system_{<t})), (15)
where f(u^user_{<t}, u^system_{<t}) is a function mapping all previous user and system utterances to a low-dimensional vector that summarizes the previous interactions between the user and the system (e.g. slot-value pairs that the user has provided the system up to time t). Now, θ can be learned using maximum log-likelihood with stochastic gradient descent.
More sophisticated probabilistic models have been proposed based on directed graphical models, such as hidden Markov models and input-output hidden Markov models (Cuayáhuitl et al., 2005), and undirected graphical models, such as conditional random fields based on linear chains (Jung et al., 2009). Inspired by Pietquin (2005), Pietquin (2007) and Rossignol et al. (2011) propose the following directed graphical model:
Pθ(u^user_t | u^user_{<t}, u^system_{<t}) = ∑_{gt, kt} Pθ(u^user_t | gt, kt, u^user_{<t}, u^system_{<t}) Pθ(gt | kt) Pθ(kt | k_{<t}, u^user_{<t}, u^system_{<t}), (16)
where gt is a discrete random variable representing the user's goal at time t (e.g. a set of slot-value pairs), and kt is another discrete random variable representing the user's knowledge at time t (e.g. a set of slot-value pairs). This model allows the user to change goals during the dialogue, which would be the case, for example, if the user is notified by the dialogue system that the original goal cannot be accomplished. The dependency on previous user and system utterances for u^user_t and kt may be limited to a small number of previous turns, as well as a set of hand-crafted features computed on these utterances. For example, the conditional probability:
Pθ(u^user_t | gt, kt, u^user_{<t}, u^system_{<t}), (17)
may be approximated by an n-gram model with additional features as in Georgila et al. (2006). Generating user utterances can be done in a straightforward manner by using ancestral sampling: first, sample kt given k_{<t} and the previous user and system utterances; then, sample gt given kt; and finally, sample u^user_t given gt, kt and the previous user and system utterances. The model can be trained using maximum log-likelihood. If all variables are observed, i.e. gt and kt have been given by human annotators, then the maximum-likelihood parameters can be found similarly to n-gram models by counting the co-occurrences of variables. If some variables are missing, they can be estimated using the expectation-maximization (EM) algorithm, since the dependencies form a linear chain. Rossignol et al. (2011) also propose to regularize the model by assuming a Dirichlet distribution prior over the parameters, which is straightforward to combine with the EM algorithm. User simulation models are particularly useful in the development of dialogue systems based on reinforcement learning methods (Singh et al., 2002; Schatzmann et al., 2006; Pietquin and Dutoit, 2006; Frampton and Lemon, 2009; Jurčíček et al., 2012; Png and Pineau, 2011; Young et al., 2013). Furthermore, many user simulation models, such as those trainable with stochastic gradient descent or co-occurrence statistics, are able to scale to large corpora. In the light of the increasing availability of large dialogue corpora, there are ample opportunities for building novel user simulation models, which aim to better represent real user behavior, and in turn for training dialogue systems, which aim to solve more general and more difficult tasks. Despite their similarities, research on user simulation
models and full generative models has progressed independently of each other so far. Therefore, it also seems likely that there is fruitful work to be done in transferring and merging ideas between these two areas.
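The ancestral sampling procedure described above for the goal/knowledge model is summarized in the sketch below; the three sampler callables stand in for the learned conditionals of Eq. (16) and their interfaces are assumptions.

```python
def sample_user_turn(history, sample_knowledge, sample_goal, sample_utterance):
    """Ancestral sampling in the goal/knowledge user model of Eq. (16):
    draw the knowledge k_t, then the goal g_t given k_t, then the user act
    given both; the callables stand in for the learned conditionals."""
    k_t = sample_knowledge(history)             # ~ P(k_t | k_<t, utterances)
    g_t = sample_goal(k_t)                      # ~ P(g_t | k_t)
    u_t = sample_utterance(g_t, k_t, history)   # ~ P(u_t | g_t, k_t, utterances)
    return u_t, g_t, k_t
```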
# Memory-based control with recurrent neural networks
Nicolas Heess*, Jonathan J. Hunt*, Timothy P. Lillicrap, David Silver
Google DeepMind (* these authors contributed equally)
{heess, jjhunt, countzero, davidsilver}@google.com
# Abstract
Partially observed control problems are a challenging aspect of reinforcement learning. We extend two related, model-free algorithms for continuous control – deterministic policy gradient and stochastic value gradient – to solve partially observed domains using recurrent neural networks trained with backpropagation through time. We demonstrate that this approach, coupled with long-short term memory, is able to solve a variety of physical control problems exhibiting an assortment of memory requirements. These include the short-term integration of information from noisy sensors and the identification of system parameters, as well as long-term memory problems that require preserving information over many time steps. We also demonstrate success on a combined exploration and memory problem in the form of a simplified version of the well-known Morris water maze task. Finally, we show that our approach can deal with high-dimensional observations by learning directly from pixels. We find that recurrent deterministic and stochastic policies are able to learn similarly good solutions to these tasks, including the water maze where the agent must learn effective search strategies.
# Introduction
The use of neural networks for solving continuous control problems has a long tradition. Several recent papers successfully apply model-free, direct policy search methods to the problem of learning neural network control policies for challenging continuous domains with many degrees of freedom [2, 6, 14, 21, 22, 12]. However, all of this work assumes fully observed state.
Many real world control problems are partially observed. Partial observability can arise from different sources including the need to remember information that is only temporarily available such as a way sign in a navigation task, sensor limitations or noise, unobserved variations of the plant under control (system identification), or state-aliasing due to function approximation. Partial observability also arises naturally in many tasks that involve control from vision: a static image of a dynamic scene provides no information about velocities, occlusions occur as a consequence of the three-dimensional nature of the world, and most vision sensors are bandwidth-limited and only have a restricted field-of-view.
Resolution of partial observability is non-trivial. Existing methods can roughly be divided into two broad classes:
On the one hand there are approaches that explicitly maintain a belief state that corresponds to the distribution over the world state given the observations so far. This approach has two major disadvantages: The first is the need for a model, and the second is the computational cost that is typically associated with the update of the belief state [8, 23].
On the other hand there are model free approaches that learn to form memories based on interactions with the world. This is challenging since it is a priori unknown which features of the observations will be relevant later, and associations may have to be formed over many steps. For this reason, most model free approaches tend to assume the fully-observed case. In practice, partial observability is often solved by hand-crafting a solution such as providing multiple frames at each timestep to allow velocity estimation [16, 14].
In this work we investigate a natural extension of two recent, closely related policy gradient algorithms for learning continuous-action policies to handle partially observed problems. We primarily consider the Deterministic Policy Gradient algorithm (DPG) [24], which is an off-policy policy gradient algorithm that has recently produced promising results on a broad range of difficult, high-dimensional continuous control problems, including direct control from pixels [14]. DPG is an actor-critic algorithm that uses a learned approximation of the action-value (Q) function to obtain approximate action-value gradients. These are then used to update a deterministic policy via the chain-rule. We also consider DPG's stochastic counterpart, SVG(0) ([6]; SVG stands for "Stochastic Value Gradients") which similarly updates the policy via backpropagation of action-value gradients from an action-value critic but learns a stochastic policy.
We modify both algorithms to use recurrent networks trained with backpropagation through time. We demonstrate that the resulting algorithms, Recurrent DPG (RDPG) and Recurrent SVG(0) (RSVG(0)), can be applied to a number of partially observed physical control problems with diverse memory requirements. These problems include: short-term integration of sensor information to estimate the system state (pendulum and cartpole swing-up tasks without velocity information); system identification (cart pole swing-up with variable and unknown pole-length); long-term memory (a robot arm that needs to reach out and grab a payload to move it to the position the arm started from); as well as a simplified version of the water maze task which requires the agent to learn an exploration strategy to find a hidden platform and then remember the platform's position in order to return to it subsequently. We also demonstrate successful control directly from pixels.
Our results suggest that actor-critic algorithms that rely on bootstrapping for estimating the value function can be a viable option for learning control policies in partially observed domains. We further find that, at least in the setup considered here, there is little performance difference between stochastic and deterministic policies, despite the former being typically presumed to be preferable in partially observed domains.
# 2 Background
We model our environment as a discrete-time, partially-observed Markov Decision Process (POMDP). A POMDP is described by a set of environment states S and a set of actions A, an initial state distribution p_0(s_0), a transition function p(s_{t+1}|s_t, a_t) and reward function r(s_t, a_t). This underlying MDP is partially observed when the agent is unable to observe the state s_t directly and instead receives observations from the set O which are conditioned on the underlying state p(o_t|s_t).
The agent only indirectly observes the underlying state of the MDP through the observations. An optimal agent may, in principle, require access to the entire history h_t = (o_1, a_1, o_2, a_2, ..., a_{t-1}, o_t).
The goal of the agent is thus to learn a policy π(h_t) which maps from the history to a distribution over actions P(A) which maximizes the expected discounted reward (below we consider both stochastic and deterministic policies). For stochastic policies we want to maximise
J = \mathbb{E}_{\tau}\left[ \sum_{t=1}^{T} \gamma^{t-1} r(s_t, a_t) \right] \qquad (1)
where the trajectories τ = (s_1, o_1, a_1, s_2, ...) are drawn from the trajectory distribution induced by the policy π: p(s_1)p(o_1|s_1)π(a_1|h_1)p(s_2|s_1, a_1)p(o_2|s_2)π(a_2|h_2)... and where h_t is defined as above. For deterministic policies we replace π with a deterministic function µ which maps directly from states S to actions A and we replace a_t ∼ π(·|h_t) with a_t = µ(h_t). In the algorithms below we make use of the action-value function Q^π. For a fully observed MDP, when we have access to s, the action-value function is defined as the expected future discounted reward when in state s_t the agent takes action a_t and thereafter follows policy π. Since we are
interested in the partially observed case where the agent does not have access to s we instead define Q^π in terms of h:
Q" (ht, ar) = Es, jn, [re(St,¢)] + Exy jn, a: » y'r(stris oa (2) i=1
where τ_{>t} = (s_{t+1}, o_{t+1}, a_{t+1}, ...) is the future trajectory and the two expectations are taken with respect to the conditionals p(s_t|h_t) and p(τ_{>t}|h_t, a_t) of the trajectory distribution associated with π. Note that this is equivalent to defining Q^π in terms of the belief state since h is a sufficient statistic.
Obviously, for most POMDPs of interest, it is not tractable to condition on the entire sequence of observations. A central challenge is to learn how to summarize the past in a scalable way.
# 3 Algorithms
# 3.1 Recurrent DPG
We extend the Deterministic Policy Gradient (DPG) algorithm for MDPs introduced in [24] to deal with partially observed domains and pixels. The core idea of the DPG algorithm for the fully observed case is that for a deterministic policy µ^θ with parameters θ, and given access to the true action-value function associated with the current policy Q^µ, the policy can be updated by backpropagation:
\frac{\partial J(\theta)}{\partial \theta} = \mathbb{E}_{s \sim \rho^{\mu}}\left[ \left. \frac{\partial Q^{\mu}(s, a)}{\partial a} \right|_{a=\mu^{\theta}(s)} \frac{\partial \mu^{\theta}(s)}{\partial \theta} \right] \qquad (3)
where the expectation is taken with respect to the (discounted) state visitation distribution ρ^µ induced by the current policy µ^θ [24]. Similar ideas had previously been exploited in NFQCA [4] and in the ADP [13] community. In practice the exact action-value function Q^µ is replaced by an approximate (critic) Q^ω with parameters ω that is differentiable in a and which can be learned e.g. with Q-learning.
In order to ensure the applicability of our approach to large observation spaces (e.g. from pixels), we use neural networks for all function approximators. These networks, with convolutional layers, have proven effective at many sensory processing tasks [11, 18], and been demonstrated to be effective for scaling reinforcement learning to large state spaces [14, 16]. [14] proposed modifications to DPG necessary in order to learn effectively with deep neural networks which we make use of here (cf. sections 3.1.1, 3.1.2).
Under partial observability the optimal policy and the associated action-value function are both functions of the entire preceding observation-action history h_t. The primary change we introduce is the use of recurrent neural networks, rather than feedforward networks, in order to allow the network to learn to preserve (limited) information about the past which is needed in order to solve the POMDP. Thus, writing µ(h) and Q(h, a) rather than µ(s) and Q(s, a) we obtain the following policy update:
\frac{\partial J(\theta)}{\partial \theta} = \mathbb{E}_{\tau}\left[ \sum_{t} \left. \frac{\partial Q^{\mu}(h_t, a)}{\partial a} \right|_{a=\mu^{\theta}(h_t)} \frac{\partial \mu^{\theta}(h_t)}{\partial \theta} \right] \qquad (4)
where we have written the expectation now explicitly over entire trajectories τ = (s_1, o_1, a_1, s_2, o_2, a_2, ...) which are drawn from the trajectory distribution induced by the current policy and h_t = (o_1, a_1, ..., o_{t-1}, a_{t-1}, o_t) is the observation-action trajectory prefix at time step t, both as introduced above¹. In practice, as in the fully observed case, we replace Q^µ by a learned approximation Q^ω (which is also a recurrent network with parameters ω). Thus, rather than directly conditioning on the entire observation history, we effectively train recurrent neural networks to summarize this history in their recurrent state using backpropagation through time (BPTT). For
¹ A discount factor γ^t appears implicitly in the update which is absorbed in the discounted state-visitation distribution in eq. 3. In practice we ignore this term, as is often done in policy gradient implementations (e.g. [26]).
long episodes or continuing tasks it is possible to use truncated BPTT, although we do not use this here.
The full algorithm is given below (Algorithm 1).
RDPG is an algorithm for learning deterministic policies. As discussed in the literature [25, 20] it is possible to construct examples where deterministic policies perform poorly under partial observability. In RDPG the policy is conditioned on the entire history but since we are using function approximation state aliasing may still occur, especially early in learning. We therefore also investigate a recurrent version of the stochastic counterpart to DPG: SVG(0) [6] (DPG can be seen as the deterministic limit of SVG(0)). In addition to learning stochastic policies SVG(0) also admits on-policy learning whereas DPG is inherently off policy (see below).
Similar to DPG, SVG(0) updates the policy by backpropagation of ∂Q/∂a from the action-value function, but does so for stochastic policies. This is enabled through a "re-parameterization" (e.g. [10, 19]) of the stochastic policy: The stochastic policy is represented in terms of a fixed, independent noise source and a parameterized deterministic function that transforms a draw from that noise source, i.e., in our case, a = π^θ(h, ν) with ν ∼ β(·) where β is some fixed distribution. For instance, a Gaussian policy π^θ(a|h) = N(a|µ^θ(h), σ²) can be re-parameterized as follows: a = π^θ(h, ν) = µ^θ(h) + σν where ν ∼ N(·|0, 1). See [6] for more details.
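As a concrete illustration of this re-parameterization, a minimal PyTorch sketch of sampling from such a Gaussian policy; `mu_net` is a placeholder for any differentiable network producing the mean:

```python
import torch

def sample_reparameterized(mu_net, h, sigma):
    """a = pi_theta(h, nu) = mu_theta(h) + sigma * nu, with nu ~ N(0, 1).

    The noise source is fixed and parameter-free, so gradients such as
    dQ/da can flow through mu_net into the policy parameters.
    """
    mu = mu_net(h)             # deterministic, differentiable part
    nu = torch.randn_like(mu)  # draw from the fixed noise distribution
    return mu + sigma * nu
```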
The stochastic policy is updated as follows:
\frac{\partial J(\theta)}{\partial \theta} = \mathbb{E}_{\tau}\left[ \sum_{t} \left. \frac{\partial Q^{\pi}(h_t, a)}{\partial a} \right|_{a=\pi^{\theta}(h_t, \nu_t)} \frac{\partial \pi^{\theta}(h_t, \nu_t)}{\partial \theta} \right] \qquad (5)
with τ drawn from the trajectory distribution which is conditioned on IID draws of ν_t from β at each time step. The full algorithm is provided in the supplementary (Algorithm 2).
# 3.1.1 Off-policy learning and experience replay
DPG is typically used in an off-policy setting due to the fact that the policy is deterministic but exploration is needed in order to learn the gradient of Q with respect to the actions. Furthermore, in practice, data efficiency and stability can also be greatly improved by using experience replay (e.g. [4, 5, 14, 16, 6]) and we use the same approach here (see Algorithms 1, 2). Thus, during learning we store experienced trajectories in a database and then replace the expectation in eq. (4) with trajectories sampled from the database.
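Because the recurrent algorithms replay whole trajectories rather than individual transitions, the replay database holds complete episodes. A minimal sketch (class and method names are ours, not the paper's):

```python
import random
from collections import deque

class EpisodeReplayBuffer:
    """Stores complete episodes (o_1, a_1, r_1, ..., o_T, a_T, r_T)."""

    def __init__(self, capacity):
        self.episodes = deque(maxlen=capacity)  # oldest episodes dropped first

    def store(self, episode):
        # episode: list of (observation, action, reward) tuples
        self.episodes.append(episode)

    def sample(self, n):
        # minibatch of N full episodes, each unrolled with BPTT during learning
        return random.sample(self.episodes, n)
```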
One consequence of this is a bias in the state distribution in eqs. (3, 5) which no longer corresponds to the state distribution induced by the current policy. With function approximation this can lead to a bias in the learned policy, although this is typically ignored in practice. RDPG and RSVG(0) may similarly be affected; in fact since policies (and Q) are not just a function of the state but of an entire action-observation history (eq. 4) the bias might be more severe.
One potential advantage of (R)SVG(0) in this context is that it allows on-policy learning although we do not explore this possibility here. We found that off-policy learning with experience replay remained effective in the partially observed case.
# 3.1.2 Target networks
A second algorithmic feature that has been found to greatly improve the stability of neural-network based reinforcement learning algorithms that rely on bootstrapping for learning value functions is the use of target networks [4, 14, 16, 6]: The algorithm maintains two copies of the value function Q and of the policy π each, with parameters θ and θ′, and ω and ω′ respectively. θ and ω are the parameters that are being updated by the algorithm; θ′ and ω′ track them with some delay and are used to compute the "target values" for the Q function update. Different authors have explored different approaches to updating θ′ and ω′. In this work we use "soft updates" as in [14] (see Algorithms 1 and 2 below).
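The "soft updates" amount to a slow exponential moving average of the learned parameters. A sketch in PyTorch; the value tau=0.001 is an assumption in the spirit of [14], not stated in this paper:

```python
import torch

@torch.no_grad()
def soft_update(target_net, source_net, tau=0.001):
    """theta' <- tau * theta + (1 - tau) * theta' (likewise for omega')."""
    for p_target, p_source in zip(target_net.parameters(),
                                  source_net.parameters()):
        p_target.mul_(1.0 - tau).add_(tau * p_source)
```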
# Algorithm 1 RDPG algorithm
Initialize critic network Q^ω(a_t, h_t) and actor µ^θ(h_t) with parameters ω and θ.
Initialize target networks Q^{ω′} and µ^{θ′} with weights ω′ ← ω, θ′ ← θ.
Initialize replay buffer R.
for episodes = 1, M do
    initialize empty history h_0
    for t = 1, T do
        receive observation o_t
        h_t ← (h_{t−1}, a_{t−1}, o_t) (append observation and previous action to history)
        select action a_t = µ^θ(h_t) + ε (with ε: exploration noise)
    end for
    Store the sequence (o_1, a_1, r_1, ..., o_T, a_T, r_T) in R
    Sample a minibatch of N episodes (o^i_1, a^i_1, r^i_1, ..., o^i_T, a^i_T, r^i_T)_{i=1,...,N} from R
    Construct histories h^i_t = (o^i_1, a^i_1, ..., a^i_{t−1}, o^i_t)
    Compute target values for each sample episode (y^i_1, ..., y^i_T) using the recurrent target networks
        y^i_t = r^i_t + γ Q^{ω′}(h^i_{t+1}, µ^{θ′}(h^i_{t+1}))
    Compute critic update (using BPTT)
        Δω = (1/NT) Σ_i Σ_t (y^i_t − Q^ω(h^i_t, a^i_t)) ∂Q^ω(h^i_t, a^i_t)/∂ω
    Compute actor update (using BPTT)
        Δθ = (1/NT) Σ_i Σ_t ∂Q^ω(h^i_t, µ^θ(h^i_t))/∂a · ∂µ^θ(h^i_t)/∂θ
    Update actor and critic using Adam [9]
    Update the target networks
        ω′ ← τω + (1−τ)ω′
        θ′ ← τθ + (1−τ)θ′
end for
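One way to read the learning phase of Algorithm 1 as code: a minimal sketch of a single RDPG update in PyTorch. All module and tensor names, the padded-episode shapes ([N, T, ...]), and the batching scheme are our assumptions, not the paper's; terminal-step handling, optimizer construction, and history padding are omitted.

```python
import torch

def rdpg_update(actor, critic, actor_target, critic_target,
                actor_opt, critic_opt, histories, actions, rewards, gamma=0.99):
    """One RDPG learning step on N sampled episodes (a sketch).

    `histories` is assumed to be a padded tensor of observation-action
    histories; the recurrent networks return one output per time step,
    so the losses below are summed over whole sequences and gradients
    flow via backpropagation through time.
    """
    with torch.no_grad():
        # y_t = r_t + gamma * Q'(h_{t+1}, mu'(h_{t+1}))
        next_q = critic_target(histories[:, 1:], actor_target(histories[:, 1:]))
        targets = rewards[:, :-1] + gamma * next_q

    # critic: regress Q(h_t, a_t) toward the recurrent TD targets
    q = critic(histories[:, :-1], actions[:, :-1])
    critic_loss = ((targets - q) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # actor: follow dQ/da through the deterministic recurrent policy
    actor_loss = -critic(histories[:, :-1], actor(histories[:, :-1])).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```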
# 4 Results
We tested our algorithms on a variety of partially-observed environments, covering different types of memory problems. Videos of the learned policies for all the domains are included in our supplementary videos²; we encourage viewing them as these may provide a better intuition for the environments. All physical control problems except the simulated water maze (section 4.3) were simulated in MuJoCo [28]. We tested both standard recurrent networks as well as LSTM networks.
# 4.1 Sensor integration and system identification
Physical control problems with noisy sensors are one of the paradigm examples of partially-observed environments. A large amount of research has focused on how to efficiently integrate noisy sensory information over multiple timesteps in order to derive accurate estimates of the system state, or to estimate derivatives of important properties of the system [27].
Here, we consider two simple, standard control problems often used in reinforcement learning, the under-actuated pendulum and cartpole swing up. We modify these standard benchmark tasks such that in both cases the agent receives no direct information about the velocity of any of the components, i.e. for the pendulum swing-up task the observation comprises only the angle of the pendulum, and
² Video of all the learned policies is available at https://youtu.be/V4_vb1D5NNQ
Figure 1: (a) The reward curve for the partially-observed pendulum task. Both RDPG and RSVG(0) are able to learn policies which bring the pendulum to an upright position. (b) The reward curve for the cartpole with no velocity and varying cartpole lengths. RDPG with LSTM is able to reliably learn a good solution for this task; a purely feedforward agent (DDPG), which will not be able to estimate velocities nor to infer the pole length, is not able to solve the problem.
Figure 2: Reward curves for the (a) hidden target reacher task, and (b) return to start gripper task. In both cases the RDPG-agents with LSTMs are able to find good policies whereas the feedforward agents fail on the memory component. (In both cases the feedforward agents perform clearly better than random which is expected from the setup of the tasks: For instance, as can be seen in the video, the gripper without memory is still able to grab the payload and move it to a "default" position.) Example frames from the 3 joint reaching task (c) and the gripper task (d).
for cartpole swing-up it is limited to the angle of the pole and the position of the cart. Velocity is crucial for solving the task and thus it must be estimated from the history of the system. Figure 1a shows the learning curves for pendulum swing-up. Both RDPG and RSVG(0) were tested on the pendulum task, and are able to learn good solutions which bring the pole to upright.
For the cartpole swing-up task, in addition to not providing the agent with velocity information, we also varied the length of the pole from episode to episode. The pole length is invisible to the agent and needs to be inferred from the response of the system. In this task the sensor integration problem is thus paired with the need for system identification. As can be seen in figure 1b, the RDPG agent with an LSTM network reliably solves this task every time while a simple feedforward agent (DDPG) fails entirely. RDPG with a simple RNN performs considerably less well than the LSTM agent, presumably due to relatively long episodes (T=350 steps) and the failure to backpropagate gradients effectively through the plain RNN. We found that a feedforward agent that does receive velocity information can solve the variable-length swing-up task partly but does so less reliably than the recurrent agent as it is unable to identify the relevant system parameters (not shown).
# 4.2 Memory tasks
Another type of partially-observed task, which has been less studied in the context of reinforcement learning, involves the need to remember explicit information over a number of steps. We constructed two tasks like this. One was a 3-joint reacher which must reach for a randomly positioned target, but the position of the target is only provided to the agent in the initial observation (the entire episode is 80 timesteps). As a harder variant of this task, we constructed a 5 joint gripper which must reach for a (fully-observed) payload from a randomized initial configuration and then return the payload to the initial position of its "hand" (T=100). Note that this is a challenging control problem even in the fully observed case. The results for both tasks are shown in figure 2; RDPG agents with LSTM networks solve both tasks reliably whereas purely feedforward agents fail on the memory components of the task as can be seen in the supplemental video.
Figure 3: (a) shows the reward curve for different agents performing the water maze task. Both recurrent algorithms are capable of learning good solutions to the problem, while the non-recurrent agent (DDPG) is not. It is particularly notable that despite learning a deterministic policy, RDPG is able to find search strategies that allow it to locate the platform. (b) This shows the number of steps the agents take to reach the platform after a reset, normalized by the number of steps taken for the first attempt. Note that on the 2nd and 3rd attempts the recurrent agents are able to reach the platform much more quickly, indicating they are learning to remember and recall the position of the platform. Example trajectories for the (c) RDPG, (d) RSVG(0) and (e) DDPG agents. Trajectory of the first attempt is purple, second is blue and third is yellow.
# 4.3 Water maze
The Morris water maze has been used extensively in rodents for the study of memory [3]. We tested our algorithms on a simplified version of the task. The agent moves in a 2-dimensional circular space where a small region of the space is an invisible "platform" where the agent receives a positive reward. At the beginning of the episode the agent and platform are randomly positioned in the tank. The platform position is not visible to the agent but it "sees" when it is on the platform. The agent needs to search for and stay on the platform to receive reward by controlling its acceleration. After 5 steps on the platform the agent is reset randomly to a new position in the tank but the platform stays in place for the rest of the episode (T=200). The agent needs to remember the position of the platform to return to it quickly.
It is sometimes presumed that a stochastic policy is required in order to solve problems like this, which require learning a search strategy. Although there is some variability in the results, we found that both RDPG and RSVG(0) were able to find similarly good solutions (figure 3a), indicating RDPG is able to learn reasonable, deterministic search strategies. Both solutions were able to make use of memory to return to the platform more quickly after discovering it during the initial search (figure 3b). A non-recurrent agent (DDPG) is able to learn a limited search strategy but fails to exploit memory to return to the platform after having been reset to a random position in the tank.
# 4.4 High-dimensional observations
We also tested our agents, with convolutional networks, on solving tasks directly from high-dimensional pixel spaces. We tested on the pendulum task (but now the agent is given only a static rendering of the pendulum at each timestep), and a two-choice reaching task, where the target disappears after 5 frames (and the agent is not allowed to move during the first 5 frames to prevent it from encoding the target position in its initial trajectory).
We found that RDPG was able to learn effective policies from high-dimensional observations which integrate information from multiple timesteps to estimate velocity and remember the visually cued target for the full length of the episode (in the reacher task). Figure 4 shows the results.
Figure 4: RDPG was able to learn good policies directly from high-dimensional renderings for pendulum (a), and a two choice reaching task with a disappearing target (b). (c) Example frame from the reaching task.
# 5 Discussion
# 5.1 Variants
In the experiments presented here, the actor and critic networks are entirely disjoint. However, particularly when learning deep, convolutional networks the filters required in the early layers may be similar between the policy and the critic. Sharing these early layers could improve computational efficiency and learning speed. Similar arguments apply to the recurrent part of the network, which could be shared between the actor and the critic. Such sharing, however, can also result in instabilities as updates to one network may unknowingly damage or shift the other network. For this reason, we have not used any sharing here, although it is a potential topic for further investigation.
# 5.2 Related work
There is a large body of literature on solving partially observed control problems. We focus on the most closely related work that aims to solve such problems with learned memory.
Several groups [15, 1, 5] have studied the use of model-free algorithms with recurrent networks to solve POMDPs with discrete action spaces. [1] focused on relatively long-horizon ("deep") memory problems in small state-action spaces. In contrast, [5] modified the Atari DQN architecture [16] (i.e. they perform control from high-dimensional pixel inputs) and demonstrated that recurrent Q-learning [15] can perform the required information integration to resolve short-term partial observability (e.g. to estimate velocities) that is achieved via stacks of frames in the original DQN architecture.
Continuous action problems with relatively low-dimensional observation spaces have been considered e.g. in [30, 31, 29, 32]. [30] trained LSTM-based stochastic policies using Reinforce; [31, 29, 32] used actor-critic architectures. The algorithm of [31] can be seen as a special case of DPG where the deterministic policy produces the parameters of an action distribution from which the actions are then sampled. This requires suitable exploration at the level of distribution parameters (e.g. exploring in terms of means and variances of a Gaussian distribution); in contrast, SVG(0) also learns stochastic policies but allows exploration at the action level only.
All works mentioned above, except for [32], consider the memory to be internal to the policy and learn the RNN parameters using BPTT, back-propagating either TD errors or policy gradients. [32] instead take the view of [17] and consider memory as extra state dimensions that can be read and set by the policy. They optimize the policy using guided policy search [12] which performs explicit trajectory optimization along reference trajectories and, unlike our approach, requires a well defined full latent state and access to this latent state during training.
# 6 Conclusion
We have demonstrated that two related model-free approaches can be extended to learn effectively with recurrent neural networks on a variety of partially-observed problems, including directly from pixel observations. Since these algorithms learn using standard backpropagation through time, we
are able to benefit from innovations in supervised recurrent neural networks, such as long short-term memory networks [7], to solve challenging memory problems such as the Morris water maze.
# References
[1] B. Bakker. Reinforcement learning with long short-term memory. In NIPS, 2002.
[2] D. Balduzzi and M. Ghifary. Compatible value gradients for reinforcement learning of continuous deep policies. arXiv preprint arXiv:1509.03005, 2015.
[3] R. D'Hooge and P. P. De Deyn. Applications of the Morris water maze in the study of learning and memory. Brain Research Reviews, 36(1):60–90, 2001.
[4] R. Hafner and M. Riedmiller. Reinforcement learning in feedback control. Machine Learning, 84(1-2):137–169, 2011.
[5] M. Hausknecht and P. Stone. Deep recurrent Q-learning for partially observable MDPs. arXiv preprint arXiv:1507.06527, 2015.
[6] N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In NIPS, 2015.
[7] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[8] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1):99–134, 1998.
[9] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[10] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. CoRR, abs/1312.6114, 2013.
[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[12] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
[13] F. L. Lewis and D. Vrabie. Reinforcement learning and adaptive dynamic programming for feedback control. Circuits and Systems Magazine, IEEE, 9(3):32–50, 2009.
[14] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[15] L.-J. Lin and T. M. Mitchell. Reinforcement learning with hidden states. In J.-A. Meyer, H. L. Roitblat, and S. W. Wilson, editors, From Animals to Animats 2, pages 271–280. MIT Press, Cambridge, MA, USA, 1993.
[16] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[17] L. Peshkin, N. Meuleau, and L. P. Kaelbling. Learning policies with external memory. In ICML, 1999.
[18] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pages 512–519. IEEE, 2014.
[19] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pages 1278–1286, 2014.
[20] B. Sallans. Reinforcement learning for factored Markov decision processes. PhD thesis, Citeseer, 2002.
[21] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015.
[22] J. Schulman, P. Moritz, S. Levine, M. I. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. CoRR, abs/1506.02438, 2015.
[23] G. Shani, J. Pineau, and R. Kaplow. A survey of point-based POMDP solvers. Autonomous Agents and Multi-Agent Systems, 27(1):1–51, 2013.
[24] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.
[25] S. P. Singh. Learning without state-estimation in partially observable Markovian decision processes. In ICML, 1994.
[26] P. Thomas. Bias in natural actor-critic algorithms. In Proceedings of The 31st International Conference on Machine Learning, pages 441–448, 2014.
[27] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, 2005.
[28] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012.
[29] H. Utsunomiya and K. Shibata. Contextual behaviors and internal representations acquired by reinforcement learning with a recurrent neural network in a continuous state and action space task. In M. Köppen, N. Kasabov, and G. Coghill, editors, Advances in Neuro-Information Processing, volume 5507 of Lecture Notes in Computer Science, pages 970–978. Springer Berlin Heidelberg, 2009.
[30] D. Wierstra, A. Förster, J. Peters, and J. Schmidhuber. Solving deep memory POMDPs with recurrent policy gradients. In ICANN, 2007.
[31] D. Wierstra and J. Schmidhuber. Policy gradient critics. In ECML, 2007.
[32] M. Zhang, S. Levine, Z. McCarthy, C. Finn, and P. Abbeel. Policy learning with continuous memory states for partially observed robotic control. CoRR, abs/1507.01273, 2015.
# 7 Supplementary
Algorithm 2 RSVG(0) algorithm
Initialize critic network Q^ω(a_t, h_t) and actor π^θ(h_t) with parameters ω and θ.
Initialize target networks Q^{ω′} and π^{θ′} with weights ω′ ← ω, θ′ ← θ.
Initialize replay buffer R.
for episodes = 1, M do
    initialize empty history h_0
    for t = 1, T do
        receive observation o_t
        h_t ← (h_{t−1}, a_{t−1}, o_t) (append observation and previous action to history)
        select action a_t = π^θ(h_t, ν) with ν ∼ β
    end for
    Store the sequence (o_1, a_1, r_1, ..., o_T, a_T, r_T) in R
    Sample a minibatch of N episodes (o^i_1, a^i_1, r^i_1, ..., o^i_T, a^i_T, r^i_T)_{i=1,...,N} from R
    Construct histories h^i_t = (o^i_1, a^i_1, ..., a^i_{t−1}, o^i_t)
    Compute target values for each sample episode (y^i_1, ..., y^i_T) using the recurrent target networks
        y^i_t = r^i_t + γ Q^{ω′}(h^i_{t+1}, π^{θ′}(h^i_{t+1}, ν)) with ν ∼ β
    Compute critic update (using BPTT)
        Δω = (1/NT) Σ_i Σ_t (y^i_t − Q^ω(h^i_t, a^i_t)) ∂Q^ω(h^i_t, a^i_t)/∂ω
    Compute actor update (using BPTT)
        Δθ = (1/NT) Σ_i Σ_t ∂Q^ω(h^i_t, π^θ(h^i_t, ν))/∂a · ∂π^θ(h^i_t, ν)/∂θ with ν ∼ β
    Update actor and critic using Adam [9]
    Update the target networks
        ω′ ← τω + (1−τ)ω′
        θ′ ← τθ + (1−τ)θ′
end for
| {
"id": "1509.03005"
} |
1512.03385 | Deep Residual Learning for Image Recognition | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation. | http://arxiv.org/pdf/1512.03385 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | cs.CV | Tech report | null | cs.CV | 20151210 | 20151210 | 5 1 0 2 c e D 0 1
] V C . s c [ 1 v 5 8 3 3 0 . 2 1 5 1 : v i X r a
# Deep Residual Learning for Image Recognition
Kaiming He    Xiangyu Zhang    Shaoqing Ren    Jian Sun
Microsoft Research
{kahe, v-xiangz, v-shren, jiansun}@microsoft.com
# Abstract
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers – 8× deeper than VGG nets [41] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions¹, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
# 1. Introduction
Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 50, 40]. Deep networks naturally integrate low/mid/high-level features [50] and classifiers in an end-to-end multi-layer fashion, and the "levels" of features can be enriched by the number of stacked layers (depth). Recent evidence [41, 44] reveals that network depth is of crucial importance, and the leading results [41, 44, 13, 16] on the challenging ImageNet dataset [36] all exploit "very deep" [41] models, with a depth of sixteen [41] to thirty [16]. Many other non-trivial visual recognition tasks [8, 12, 7, 32, 27] have also
¹ http://image-net.org/challenges/LSVRC/2015/ and http://mscoco.org/dataset/#detections-challenge2015.
Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer âplainâ networks. The deeper network has higher training error, and thus test error. Similar phenomena on ImageNet is presented in Fig. 4.
greatly benefited from very deep models.
Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [1, 9], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 9, 37, 13] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation [22].
When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11, 42] and thoroughly verified by our experiments. Fig. 1 shows a typical example.
The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that
Figure 2. Residual learning: a building block.
are comparably good or better than the constructed solution (or unable to do so in feasible time).
In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping of F(x) := H(x) − x. The original mapping is recast into F(x) + x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.
The formulation of F(x) + x can be realized by feedforward neural networks with "shortcut connections" (Fig. 2). Shortcut connections [2, 34, 49] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.
We present comprehensive experiments on ImageNet [36] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart "plain" nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.
Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers. On the ImageNet classification dataset [36], we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [41]. Our ensemble has 3.57% top-5 error on the
ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.
# 2. Related Work
Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 48]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.
In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [45, 46], which relies on variables that represent residual vectors between two scales. It has been shown [3, 45, 46] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.
Shortcut Connections. Practices and theories that lead to shortcut connections [2, 34, 49] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [34, 49]. In [44, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [39, 38, 31, 47] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [44], an "inception" layer is composed of a shortcut branch and a few deeper branches.
Concurrent with our work, "highway networks" [42, 43] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is "closed" (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, high-
way networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).
# 3. Deep Residual Learning
# 3.1. Residual Learning
Let us consider H(x) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions², then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., H(x) − x (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function F(x) := H(x) − x. The original function thus becomes F(x) + x. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.
This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.
In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.
# 3.2. Identity Mapping by Shortcuts
We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:
y = F(x, {W_i}) + x.   (1)

Here x and y are the input and output vectors of the layers considered. The function F(x, {W_i}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, F = W_2 σ(W_1 x) in which σ denotes
² This hypothesis, however, is still an open question. See [28].
ReLU [29] and the biases are omitted for simplifying notations. The operation F + x is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ(y), see Fig. 2).
The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition). The dimensions of x and F must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection W_s by the shortcut connections to match the dimensions:
y = F(x, {W_i}) + W_s x.   (2)
We can also use a square matrix W_s in Eqn.(2). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus W_s is only used when matching dimensions.
The form of the residual function F is flexible. Experiments in this paper involve a function F that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn.(1) is similar to a linear layer: y = W_1 x + x, for which we have not observed advantages. We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function F(x, {W_i}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.
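A minimal PyTorch sketch of the two-layer building block of Fig. 2 and Eqns.(1)-(2). Batch normalization is included following the paper's implementation (Sec. 3.4); the class name and the use of an optional 1×1 projection for the shortcut W_s (the paper's option B) are our choices for illustration:

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """y = F(x, {W_i}) + x, with F = two 3x3 conv layers (Fig. 2)."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # projection shortcut W_s of Eqn.(2), used only when dimensions change
        self.shortcut = nn.Identity()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        # second nonlinearity applied after the element-wise addition
        return self.relu(residual + self.shortcut(x))
```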
# 3.3. Network Architectures
We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.
Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [41] (Fig. 3, left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).
It is worth noticing that our model has fewer filters and lower complexity than VGG nets [41] (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).
Figure 3. Example network architectures for ImageNet. Left: the VGG-19 model [41] (19.6 billion FLOPs) as a reference. Middle: a plain network with 34 parameter layers (3.6 billion FLOPs). Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. Table 1 shows more details and other variants.
Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.
# 3.4. Implementation
Our implementation for ImageNet follows the practice in [21, 41]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [13] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60 × 10⁴ iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [14], following the practice in [16].
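The training hyperparameters above map directly onto a standard optimizer configuration; a sketch assuming a PyTorch `model` (placeholder for any of the networks described here) and the paper's stated values:

```python
import torch.optim as optim

# SGD with the paper's settings: lr 0.1, momentum 0.9, weight decay 0.0001
optimizer = optim.SGD(model.parameters(),
                      lr=0.1,
                      momentum=0.9,
                      weight_decay=1e-4)

# divide the learning rate by 10 whenever the (validation) error plateaus
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                 factor=0.1)
```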
In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [41, 13], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).

# 4. Experiments
# 4.1. ImageNet Classification
We evaluate our method on the ImageNet 2012 classification dataset [36] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.
Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.
The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the
| layer name | output size | 18-layer | 34-layer | 50-layer | 101-layer | 152-layer |
|------------|-------------|----------|----------|----------|-----------|-----------|
| conv1 | 112×112 | 7×7, 64, stride 2 | 7×7, 64, stride 2 | 7×7, 64, stride 2 | 7×7, 64, stride 2 | 7×7, 64, stride 2 |
| conv2_x | 56×56 | 3×3 max pool, stride 2; [3×3, 64; 3×3, 64] ×2 | 3×3 max pool, stride 2; [3×3, 64; 3×3, 64] ×3 | 3×3 max pool, stride 2; [1×1, 64; 3×3, 64; 1×1, 256] ×3 | 3×3 max pool, stride 2; [1×1, 64; 3×3, 64; 1×1, 256] ×3 | 3×3 max pool, stride 2; [1×1, 64; 3×3, 64; 1×1, 256] ×3 |
| conv3_x | 28×28 | [3×3, 128; 3×3, 128] ×2 | [3×3, 128; 3×3, 128] ×4 | [1×1, 128; 3×3, 128; 1×1, 512] ×4 | [1×1, 128; 3×3, 128; 1×1, 512] ×4 | [1×1, 128; 3×3, 128; 1×1, 512] ×8 |
| conv4_x | 14×14 | [3×3, 256; 3×3, 256] ×2 | [3×3, 256; 3×3, 256] ×6 | [1×1, 256; 3×3, 256; 1×1, 1024] ×6 | [1×1, 256; 3×3, 256; 1×1, 1024] ×23 | [1×1, 256; 3×3, 256; 1×1, 1024] ×36 |
| conv5_x | 7×7 | [3×3, 512; 3×3, 512] ×2 | [3×3, 512; 3×3, 512] ×3 | [1×1, 512; 3×3, 512; 1×1, 2048] ×3 | [1×1, 512; 3×3, 512; 1×1, 2048] ×3 | [1×1, 512; 3×3, 512; 1×1, 2048] ×3 |
| | 1×1 | average pool, 1000-d fc, softmax | same | same | same | same |
| FLOPs | | 1.8×10⁹ | 3.6×10⁹ | 3.8×10⁹ | 7.6×10⁹ | 11.3×10⁹ |

Table 1. Architectures for ImageNet. Building blocks are shown in brackets (see also Fig. 5), with the numbers of blocks stacked. Downsampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2.
Figure 4. Training on ImageNet. Thin curves denote training error, and bold curves denote validation error of the center crops. Left: plain networks of 18 and 34 layers. Right: ResNets of 18 and 34 layers. In this plot, the residual networks have no extra parameter compared to their plain counterparts.
| | 18 layers | 34 layers |
|---|---|---|
| plain | 27.94 | 28.54 |
| ResNet | 27.88 | 25.03 |

Table 2. Top-1 error (%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig. 4 shows the training procedures.
34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.
We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN [16], which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the
reducing of the training error³. The reason for such optimization difficulties will be studied in the future.
Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3×3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts.
We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.
Second, compared to its plain counterpart, the 34-layer
³ We have experimented with more training iterations (3×) and still observed the degradation problem, suggesting that this problem cannot be feasibly addressed by simply using more iterations.
| model | top-1 err. | top-5 err. |
|---|---|---|
| VGG-16 [41] | 28.07 | 9.33 |
| GoogLeNet [44] | - | 9.15 |
| PReLU-net [13] | 24.27 | 7.38 |
| plain-34 | 28.54 | 10.02 |
| ResNet-34 A | 25.03 | 7.76 |
| ResNet-34 B | 24.52 | 7.46 |
| ResNet-34 C | 24.19 | 7.40 |
| ResNet-50 | 22.85 | 6.71 |
| ResNet-101 | 21.75 | 6.05 |
| ResNet-152 | 21.43 | 5.71 |
Table 3. Error rates (%, 10-crop testing) on ImageNet validation. VGG-16 is based on our test. ResNet-50/101/152 are of option B that only uses projections for increasing dimensions.
| method | top-1 err. | top-5 err. |
|---|---|---|
| VGG [41] (ILSVRC'14) | - | 8.43† |
| GoogLeNet [44] (ILSVRC'14) | - | 7.89 |
| VGG [41] (v5) | 24.4 | 7.1 |
| PReLU-net [13] | 21.59 | 5.71 |
| BN-inception [16] | 21.99 | 5.81 |
| ResNet-34 B | 21.84 | 5.71 |
| ResNet-34 C | 21.53 | 5.60 |
| ResNet-50 | 20.74 | 5.25 |
| ResNet-101 | 19.87 | 4.60 |
| ResNet-152 | 19.38 | 4.49 |
Table 4. Error rates (%) of single-model results on the ImageNet validation set (except † reported on the test set).
| method | top-5 err. (test) |
|---|---|
| VGG [41] (ILSVRC'14) | 7.32 |
| GoogLeNet [44] (ILSVRC'14) | 6.66 |
| VGG [41] (v5) | 6.8 |
| PReLU-net [13] | 4.94 |
| BN-inception [16] | 4.82 |
| ResNet (ILSVRC'15) | 3.57 |
Table 5. Error rates (%) of ensembles. The top-5 error is on the test set of ImageNet and reported by the test server.
ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems.

Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is "not overly deep" (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.

Identity vs. Projection Shortcuts. We have shown that
[Figure 5 schematic: 256-d input → 1×1, 64 → relu → 3×3, 64 → relu → 1×1, 256 → addition]
Figure 5. A deeper residual function F for ImageNet. Left: a building block (on 56×56 feature maps) as in Fig. 3 for ResNet-34. Right: a "bottleneck" building block for ResNet-50/101/152.
parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections. Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.
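For contrast with option A above, the following hedged sketch realizes option B: a 1×1 projection (with BN) is used on the shortcut only where the dimensions change, and every other shortcut stays a plain identity. Option C would simply apply the projection unconditionally.

```python
# Residual block with an option-B shortcut: projection only where shapes
# change, identity everywhere else. Illustrative sketch, not reference code.
import torch.nn as nn
import torch.nn.functional as F

class BasicBlockB(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        if stride != 1 or in_ch != out_ch:   # projection shortcut (Eqn.(2))
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))
        else:
            self.shortcut = nn.Identity()    # parameter-free identity

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))
```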
Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design⁴. For each residual function F, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.
The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.
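A hedged sketch of the bottleneck block of Fig. 5 (right) follows; the 4× channel expansion and shapes match Table 1, while the option-B projection on the shortcut is used only where the shapes change, for the reason argued above.

```python
# Bottleneck residual block: 1x1 reduce -> 3x3 -> 1x1 restore.
# The identity shortcut connects the two high-dimensional ends.
import torch.nn as nn
import torch.nn.functional as F

class Bottleneck(nn.Module):
    expansion = 4                           # output channels = 4 x width

    def __init__(self, in_ch, width, stride=1):
        super().__init__()
        out_ch = width * self.expansion
        self.conv1 = nn.Conv2d(in_ch, width, 1, bias=False)   # reduce dims
        self.bn1 = nn.BatchNorm2d(width)
        self.conv2 = nn.Conv2d(width, width, 3, stride=stride,
                               padding=1, bias=False)          # 3x3 bottleneck
        self.bn2 = nn.BatchNorm2d(width)
        self.conv3 = nn.Conv2d(width, out_ch, 1, bias=False)  # restore dims
        self.bn3 = nn.BatchNorm2d(out_ch)
        if stride != 1 or in_ch != out_ch:   # projection only where needed
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))
        else:
            self.shortcut = nn.Identity()    # parameter-free identity

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return F.relu(out + self.shortcut(x))
```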
50-layer ResNet: We replace each 2-layer block in the
⁴Deeper non-bottleneck ResNets (e.g., Fig. 5 left) also gain accuracy from increased depth (as shown on CIFAR-10), but are not as economical as the bottleneck ResNets. So the usage of bottleneck designs is mainly due to practical considerations. We further note that the degradation problem of plain nets is also witnessed for the bottleneck designs.
34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs.
101-layer and 152-layer ResNets: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs).

The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 4).

Comparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015.
# 4.2. CIFAR-10 and Analysis
We conducted more studies on the CIFAR-10 dataset [20], which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.

The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture:
| output map size | 32×32 | 16×16 | 8×8 |
|---|---|---|---|
| # layers | 1+2n | 2n | 2n |
| # filters | 16 | 32 | 64 |
When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A),
| method | # layers | # params | error (%) |
|---|---|---|---|
| Maxout [10] | - | - | 9.38 |
| NIN [25] | - | - | 8.81 |
| DSN [24] | - | - | 8.22 |
| FitNet [35] | 19 | 2.5M | 8.39 |
| Highway [42, 43] | 19 | 2.3M | 7.54 (7.72±0.16) |
| Highway [42, 43] | 32 | 1.25M | 8.80 |
| ResNet | 20 | 0.27M | 8.75 |
| ResNet | 32 | 0.46M | 7.51 |
| ResNet | 44 | 0.66M | 7.17 |
| ResNet | 56 | 0.85M | 6.97 |
| ResNet | 110 | 1.7M | 6.43 (6.61±0.16) |
| ResNet | 1202 | 19.4M | 7.93 |
Table 6. Classification error on the CIFAR-10 test set. All methods are with data augmentation. For ResNet-110, we run it 5 times and show "best (mean±std)" as in [43].
so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.
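Assuming the `BasicBlockA` class from the earlier sketch is in scope, the 6n+2 architecture above can be assembled as follows (a sketch of the recipe as described, not the authors' original configuration):

```python
# Build the 6n+2-layer CIFAR-10 network: 3x3 stem, three stages of 2n
# layers with 16/32/64 filters on 32/16/8 maps, global average pooling,
# and a 10-way classifier. Requires BasicBlockA from the earlier sketch.
import torch.nn as nn

def cifar_resnet(n):
    layers = [nn.Conv2d(3, 16, 3, padding=1, bias=False),
              nn.BatchNorm2d(16), nn.ReLU()]
    in_ch = 16
    for stage, out_ch in enumerate([16, 32, 64]):
        for block in range(n):
            # subsampling by a stride-2 convolution at each stage boundary
            stride = 2 if (stage > 0 and block == 0) else 1
            layers.append(BasicBlockA(in_ch, out_ch, stride))
            in_ch = out_ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10)]
    return nn.Sequential(*layers)

net20 = cifar_resnet(3)   # n = 3, 5, 7, 9, 18 -> 20/32/44/56/110 layers
```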
We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [13] and BN [16] but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image.
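This recipe translates directly into standard PyTorch training objects; the sketch below uses `net20` from the previous snippet and steps the scheduler once per mini-batch because the milestones are counted in iterations:

```python
# SGD with weight decay 1e-4 and momentum 0.9; lr 0.1 divided by 10 at
# 32k and 48k iterations; pad-4 + random 32x32 crop + horizontal flip.
import torch
import torchvision.transforms as T

train_transform = T.Compose([
    T.RandomCrop(32, padding=4),      # 4 pixels padded on each side
    T.RandomHorizontalFlip(),         # or its horizontal flip
    T.ToTensor(),
])

optimizer = torch.optim.SGD(net20.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[32000, 48000], gamma=0.1)  # step per iteration
```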
à , leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [42]), suggesting that such an optimization difï¬culty is a fundamental problem.
Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases.
We further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging⁵. So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin
⁵With an initial learning rate of 0.1, it starts converging (<90% error) after several epochs, but still reaches similar accuracy.
Figure 6. Training on CIFAR-10. Dashed lines denote training error, and bold lines denote testing error. Left: plain networks. The error of plain-110 is higher than 60% and not displayed. Middle: ResNets. Right: ResNets with 110 and 1202 layers.
Figure 7. Standard deviations (std) of layer responses on CIFAR-10. The responses are the outputs of each 3×3 layer, after BN and before nonlinearity. Top: the layers are shown in their original order. Bottom: the responses are ranked in descending order.
| training data | test data | VGG-16 | ResNet-101 |
|---|---|---|---|
| 07+12 | VOC 07 test | 73.2 | 76.4 |
| 07++12 | VOC 12 test | 70.4 | 73.8 |
Table 7. Object detection mAP (%) on the PASCAL VOC 2007/2012 test sets using baseline Faster R-CNN. See also Table 10 and 11 for better results.
| metric | VGG-16 | ResNet-101 |
|---|---|---|
| mAP@.5 | 41.5 | 48.4 |
| mAP@[.5, .95] | 21.2 | 27.2 |
Table 8. Object detection mAP (%) on the COCO validation set using baseline Faster R-CNN. See also Table 9 for better results.
networks such as FitNet [35] and Highway [42] (Table 6), yet is among the state-of-the-art results (6.43%, Table 6).
Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less.
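One simple way to reproduce this kind of response analysis is with forward hooks; the sketch below records the standard deviation of every BN output (i.e., after BN, before the nonlinearity) for one batch, assuming a model such as `net20` from the earlier sketch.

```python
# Collect per-layer response standard deviations with forward hooks.
import torch
import torch.nn as nn

def layer_response_stds(model, images):
    stds, handles = [], []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):   # output after BN, before ReLU/addition
            handles.append(m.register_forward_hook(
                lambda mod, inp, out: stds.append(out.detach().std().item())))
    with torch.no_grad():
        model(images)                       # one forward pass fires all hooks
    for h in handles:
        h.remove()
    return stds

print(layer_response_stds(net20.eval(), torch.randn(8, 3, 32, 32)))
```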
Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n = 200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this 10³-layer network is able to achieve training error <0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6).
But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both
have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout [10] or dropout [14] is applied to obtain the best results ([10, 25, 24, 35]) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future.
# 4.3. Object Detection on PASCAL and MS COCO
Our method has good generalization performance on other recognition tasks. Table 7 and 8 show the object detection baseline results on PASCAL VOC 2007 and 2012 [5] and COCO [26]. We adopt Faster R-CNN [32] as the detection method. Here we are interested in the improvements of replacing VGG-16 [41] with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO's standard metric (mAP@[.5, .95]), which is a 28% relative improvement. This gain is solely due to the learned representations.

Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix.
# References
[1] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[2] C. M. Bishop. Neural networks for pattern recognition. Oxford university press, 1995.
[3] W. L. Briggs, S. F. McCormick, et al. A Multigrid Tutorial. Siam, 2000.
[4] K. Chatfield, V. Lempitsky, A. Vedaldi, and A. Zisserman. The devil is in the details: an evaluation of recent feature encoding methods. In BMVC, 2011.

[5] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The Pascal Visual Object Classes (VOC) Challenge. IJCV, pages 303–338, 2010.
[6] S. Gidaris and N. Komodakis. Object detection via a multi-region & semantic segmentation-aware cnn model. In ICCV, 2015.
[7] R. Girshick. Fast R-CNN. In ICCV, 2015.

[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.

[9] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
[10] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. arXiv:1302.4389, 2013.
[11] K. He and J. Sun. Convolutional neural networks at constrained time cost. In CVPR, 2015.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.

[13] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In ICCV, 2015.
[14] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012.
[15] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
[16] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.

[17] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. TPAMI, 33, 2011.
[18] H. Jegou, F. Perronnin, M. Douze, J. Sanchez, P. Perez, and C. Schmid. Aggregating local image descriptors into compact codes. TPAMI, 2012.
[19] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.
[20] A. Krizhevsky. Learning multiple layers of features from tiny images. Tech Report, 2009.
[21] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.

[22] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1989.

[23] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9–50. Springer, 1998.

[24] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. arXiv:1409.5185, 2014.
[25] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv:1312.4400, 2013.
[26] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV. 2014.
[27] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[28] G. Montúfar, R. Pascanu, K. Cho, and Y. Bengio. On the number of linear regions of deep neural networks. In NIPS, 2014.

[29] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010.
[30] F. Perronnin and C. Dance. Fisher kernels on visual vocabularies for image categorization. In CVPR, 2007.
[31] T. Raiko, H. Valpola, and Y. LeCun. Deep learning made easier by linear transformations in perceptrons. In AISTATS, 2012.
[32] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[33] S. Ren, K. He, R. Girshick, X. Zhang, and J. Sun. Object detection networks on convolutional feature maps. arXiv:1504.06066, 2015.
[34] B. D. Ripley. Pattern recognition and neural networks. Cambridge university press, 1996.
[35] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. In ICLR, 2015.
[36] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. arXiv:1409.0575, 2014.

[37] A. M. Saxe, J. L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120, 2013.
[38] N. N. Schraudolph. Accelerated gradient descent by factor-centering decomposition. Technical report, 1998.
[39] N. N. Schraudolph. Centering neural network gradient factors. In Neural Networks: Tricks of the Trade, pages 207–226. Springer, 1998.

[40] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
[41] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[42] R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. arXiv:1505.00387, 2015.
[43] R. K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. arXiv:1507.06228, 2015.

[44] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.

[45] R. Szeliski. Fast surface interpolation using hierarchical basis functions. TPAMI, 1990.
[46] R. Szeliski. Locally adapted hierarchical basis preconditioning. In SIGGRAPH, 2006.
[47] T. Vatanen, T. Raiko, H. Valpola, and Y. LeCun. Pushing stochastic gradient towards second-order methods - backpropagation learning with transformations in nonlinearities. In Neural Information Processing, 2013.
[48] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms, 2008.
[49] W. Venables and B. Ripley. Modern applied statistics with s-plus. 1999.
[50] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional neural networks. In ECCV, 2014.
# A. Object Detection Baselines
In this section we introduce our detection method based on the baseline Faster R-CNN [32] system. The models are initialized by the ImageNet classification models, and then fine-tuned on the object detection data. We have experimented with ResNet-50/101 at the time of the ILSVRC & COCO 2015 detection competitions.

Unlike VGG-16 used in [32], our ResNet has no hidden fc layers. We adopt the idea of "Networks on Conv feature maps" (NoC) [33] to address this issue. We compute the full-image shared conv feature maps using those layers whose strides on the image are no greater than 16 pixels (i.e., conv1, conv2_x, conv3_x, and conv4_x, totally 91 conv layers in ResNet-101; Table 1). We consider these layers as analogous to the 13 conv layers in VGG-16, and by doing so, both ResNet and VGG-16 have conv feature maps of the same total stride (16 pixels). These layers are shared by a region proposal network (RPN, generating 300 proposals) [32] and a Fast R-CNN detection network [7]. RoI pooling [7] is performed before conv5_1. On this RoI-pooled feature, all layers of conv5_x and up are adopted for each region, playing the roles of VGG-16's fc layers. The final classification layer is replaced by two sibling layers (classification and box regression [7]).
For the usage of BN layers, after pre-training, we compute the BN statistics (means and variances) for each layer on the ImageNet training set. Then the BN layers are fixed during fine-tuning for object detection. As such, the BN layers become linear activations with constant offsets and scales, and BN statistics are not updated by fine-tuning. We fix the BN layers mainly for reducing memory consumption in Faster R-CNN training.
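In PyTorch terms, the "fixed BN" behavior described above can be approximated as below: put every BN layer in eval mode so it uses the stored statistics, and detach its affine parameters from the optimizer. This is a hedged sketch of the idea, not the authors' original implementation.

```python
# Freeze all BatchNorm layers for fine-tuning: constant offsets/scales,
# no statistic updates. Re-apply after any later call to model.train(),
# since train() flips BN back to batch statistics.
import torch.nn as nn

def freeze_bn(model):
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eval()                       # use running mean/var
            for p in m.parameters():       # fix the affine scale and offset
                p.requires_grad_(False)
```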
# PASCAL VOC
Following [7, 32], for the PASCAL VOC 2007 test set, we use the 5k trainval images in VOC 2007 and 16k trainval images in VOC 2012 for training ("07+12"). For the PASCAL VOC 2012 test set, we use the 10k trainval+test images in VOC 2007 and 16k trainval images in VOC 2012 for training ("07++12"). The hyper-parameters for training Faster R-CNN are the same as in [32]. Table 7 shows the results. ResNet-101 improves the mAP by >3% over VGG-16. This gain is solely because of the improved features learned by ResNet.
# MS COCO
The MS COCO dataset [26] involves 80 object categories. We evaluate the PASCAL VOC metric (mAP @ IoU = 0.5) and the standard COCO metric (mAP @ IoU = .5:.05:.95). We use the 80k images on the train set for training and the 40k images on the val set for evaluation. Our detection system for COCO is similar to that for PASCAL VOC. We train the COCO models with an 8-GPU implementation, and thus the RPN step has a mini-batch size of
8 images (i.e., 1 per GPU) and the Fast R-CNN step has a mini-batch size of 16 images. The RPN step and Fast R-CNN step are both trained for 240k iterations with a learning rate of 0.001 and then for 80k iterations with 0.0001.

Table 8 shows the results on the MS COCO validation set. ResNet-101 has a 6% increase of mAP@[.5, .95] over VGG-16, which is a 28% relative improvement, solely contributed by the features learned by the better network. Remarkably, the mAP@[.5, .95]'s absolute increase (6.0%) is nearly as big as mAP@.5's (6.9%). This suggests that a deeper network can improve both recognition and localization.
# B. Object Detection Improvements
For completeness, we report the improvements made for the competitions. These improvements are based on deep features and thus should benefit from residual learning.

MS COCO

Box refinement. Our box refinement partially follows the iterative localization in [6]. In Faster R-CNN, the final output is a regressed box that is different from its proposal box. So for inference, we pool a new feature from the regressed box and obtain a new classification score and a new regressed box. We combine these 300 new predictions with the original 300 predictions. Non-maximum suppression (NMS) is applied on the union set of predicted boxes using an IoU threshold of 0.3 [8], followed by box voting [6]. Box refinement improves mAP by about 2 points (Table 9).
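The NMS step referenced above is standard; for concreteness, here is a minimal single-class sketch with the 0.3 IoU threshold (torchvision ships an equivalent, optimized `torchvision.ops.nms`):

```python
# Greedy non-maximum suppression over (x1, y1, x2, y2) boxes.
import torch

def nms(boxes, scores, iou_thresh=0.3):
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0].item()
        keep.append(i)                     # highest-scoring remaining box
        if order.numel() == 1:
            break
        rest = boxes[order[1:]]
        xy1 = torch.max(boxes[i, :2], rest[:, :2])       # intersection corners
        xy2 = torch.min(boxes[i, 2:], rest[:, 2:])
        inter = (xy2 - xy1).clamp(min=0).prod(dim=1)
        area_i = (boxes[i, 2:] - boxes[i, :2]).prod()
        area_r = (rest[:, 2:] - rest[:, :2]).prod(dim=1)
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]             # drop overlapping boxes
    return keep
```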
Global context. We combine global context in the Fast R-CNN step. Given the full-image conv feature map, we pool a feature by global Spatial Pyramid Pooling [12] (with a "single-level" pyramid) which can be implemented as "RoI" pooling using the entire image's bounding box as the RoI. This pooled feature is fed into the post-RoI layers to obtain a global context feature. This global feature is concatenated with the original per-region feature, followed by the sibling classification and box regression layers. This new structure is trained end-to-end. Global context improves mAP@.5 by about 1 point (Table 9).

Multi-scale testing. In the above, all results are obtained by single-scale training/testing as in [32], where the image's shorter side is s = 600 pixels. Multi-scale training/testing has been developed in [12, 7] by selecting a scale from a feature pyramid, and in [33] by using maxout layers. In our current implementation, we have performed multi-scale testing following [33]; we have not performed multi-scale training because of limited time. In addition, we have performed multi-scale testing only for the Fast R-CNN step (but not yet for the RPN step). With a trained model, we compute conv feature maps on an image pyramid, where the image's shorter sides are s ∈ {200, 400, 600, 800, 1000}.
| system | COCO val (trained on COCO train) @.5 | @[.5, .95] | COCO test-dev (trained on COCO trainval) @.5 | @[.5, .95] |
|---|---|---|---|---|
| baseline Faster R-CNN (VGG-16) | 41.5 | 21.2 | - | - |
| baseline Faster R-CNN (ResNet-101) | 48.4 | 27.2 | - | - |
| +box refinement | 49.9 | 29.9 | - | - |
| +context | 51.1 | 30.0 | 53.3 | 32.2 |
| +multi-scale testing | 53.8 | 32.5 | 55.7 | 34.9 |
| ensemble | - | - | 59.0 | 37.4 |
Table 9. Object detection improvements on MS COCO using Faster R-CNN and ResNet-101.
| system | net | data | mAP | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| baseline | VGG-16 | 07+12 | 73.2 | 76.5 | 79.0 | 70.9 | 65.5 | 52.1 | 83.1 | 84.7 | 86.4 | 52.0 | 81.9 | 65.7 | 84.8 | 84.6 | 77.5 | 76.7 | 38.8 | 73.6 | 73.9 | 83.0 | 72.6 |
| baseline | ResNet-101 | 07+12 | 76.4 | 79.8 | 80.7 | 76.2 | 68.3 | 55.9 | 85.1 | 85.3 | 89.8 | 56.7 | 87.8 | 69.4 | 88.3 | 88.9 | 80.9 | 78.4 | 41.7 | 78.6 | 79.8 | 85.3 | 72.0 |
| baseline+++ | ResNet-101 | COCO+07+12 | 85.6 | 90.0 | 89.6 | 87.8 | 80.8 | 76.1 | 89.9 | 89.9 | 89.6 | 75.5 | 90.0 | 80.7 | 89.6 | 90.3 | 89.1 | 88.7 | 65.4 | 88.1 | 85.6 | 89.0 | 86.8 |
Table 10. Detection results on the PASCAL VOC 2007 test set. The baseline is the Faster R-CNN system. The system "baseline+++" includes box refinement, context, and multi-scale testing in Table 9.
| system | net | data | mAP | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| baseline | VGG-16 | 07++12 | 70.4 | 84.9 | 79.8 | 74.3 | 53.9 | 49.8 | 77.5 | 75.9 | 88.5 | 45.6 | 77.1 | 55.3 | 86.9 | 81.7 | 80.9 | 79.6 | 40.1 | 72.6 | 60.9 | 81.2 | 61.5 |
| baseline | ResNet-101 | 07++12 | 73.8 | 86.5 | 81.6 | 77.2 | 58.0 | 51.0 | 78.6 | 76.6 | 93.2 | 48.6 | 80.4 | 59.0 | 92.1 | 85.3 | 84.8 | 80.7 | 48.1 | 77.3 | 66.5 | 84.7 | 65.6 |
| baseline+++ | ResNet-101 | COCO+07++12 | 83.8 | 92.1 | 88.4 | 84.8 | 75.9 | 71.4 | 86.3 | 87.8 | 94.2 | 66.8 | 89.4 | 69.2 | 93.9 | 91.9 | 90.9 | 89.6 | 67.9 | 88.2 | 76.8 | 90.3 | 80.0 |

Table 11. Detection results on the PASCAL VOC 2012 test set (http://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?challengeid=11&compid=4). The baseline is the Faster R-CNN system. The system "baseline+++" includes box refinement, context, and multi-scale testing in Table 9.
We select two adjacent scales from the pyramid following [33]. RoI pooling and subsequent layers are performed on the feature maps of these two scales [33], which are merged by maxout as in [33]. Multi-scale testing improves the mAP by over 2 points (Table 9).
| method | val2 | test |
|---|---|---|
| GoogLeNet [44] (ILSVRC'14) | - | 43.9 |
| our single model (ILSVRC'15) | 60.5 | 58.8 |
| our ensemble (ILSVRC'15) | 63.6 | 62.1 |
Using validation data. Next we use the 80k+40k trainval set for training and the 20k test-dev set for evaluation. The test-dev set has no publicly available ground truth and the result is reported by the evaluation server. Under this setting, the results are an mAP@.5 of 55.7% and an mAP@[.5, .95] of 34.9% (Table 9). This is our single-model result.

Ensemble. In Faster R-CNN, the system is designed to learn region proposals and also object classifiers, so an ensemble can be used to boost both tasks. We use an ensemble for proposing regions, and the union set of proposals are processed by an ensemble of per-region classifiers. Table 9 shows our result based on an ensemble of 3 networks. The mAP is 59.0% and 37.4% on the test-dev set. This result won the 1st place in the detection task in COCO 2015.
# PASCAL VOC
We revisit the PASCAL VOC dataset based on the above model. With the single model on the COCO dataset (55.7% mAP@.5 in Table 9), we fine-tune this model on the PASCAL VOC sets. The improvements of box refinement, context, and multi-scale testing are also adopted. By doing so
Table 12. Our results (mAP, %) on the ImageNet detection dataset. Our detection system is Faster R-CNN [32] with the improvements in Table 9, using ResNet-101.
we achieve 85.6% mAP on PASCAL VOC 2007 (Table 10) and 83.8% on PASCAL VOC 2012 (Table 11)⁶. The result on PASCAL VOC 2012 is 10 points higher than the previous state-of-the-art result [6].
# ImageNet Detection
The ImageNet Detection (DET) task involves 200 object categories. The accuracy is evaluated by mAP@.5. Our object detection algorithm for ImageNet DET is the same as that for MS COCO in Table 9. The networks are pre-trained on the 1000-class ImageNet classification set, and are fine-tuned on the DET data. We split the validation set into two parts (val1/val2) following [8]. We fine-tune the detection models using the DET training set and the val1 set. The val2 set is used for validation. We do not use other ILSVRC 2015 data. Our single model with ResNet-101 has
⁶http://host.robots.ox.ac.uk:8080/anonymous/3OJ4OJ.html, submitted on 2015-11-26.
| LOC method | LOC network | testing | LOC error on GT CLS | classification network | top-5 LOC error on predicted CLS |
|---|---|---|---|---|---|
| VGG's [41] | VGG-16 | 1-crop | 33.1 [41] | - | - |
| RPN | ResNet-101 | 1-crop | 13.3 | - | - |
| RPN | ResNet-101 | dense | 11.7 | ResNet-101 | 14.4 |
| RPN+RCNN | ResNet-101 | dense | - | ResNet-101 | 10.6 |
| RPN+RCNN | ensemble | dense | - | ensemble | 8.9 |
Table 13. Localization error (%) on the ImageNet validation. In the column of "LOC error on GT class" ([41]), the ground truth class is used. In the "testing" column, "1-crop" denotes testing on a center crop of 224×224 pixels, "dense" denotes dense (fully convolutional) and multi-scale testing.
58.8% mAP and our ensemble of 3 models has 62.1% mAP on the DET test set (Table 12). This result won the 1st place in the ImageNet detection task in ILSVRC 2015, surpassing the second place by 8.5 points (absolute).
# C. ImageNet Localization
The ImageNet Localization (LOC) task [36] requires classifying and localizing the objects. Following [40, 41], we assume that the image-level classifiers are first adopted for predicting the class labels of an image, and the localization algorithm only accounts for predicting bounding boxes based on the predicted classes. We adopt the "per-class regression" (PCR) strategy [40, 41], learning a bounding box regressor for each class. We pre-train the networks for ImageNet classification and then fine-tune them for localization. We train networks on the provided 1000-class ImageNet training set.
Our localization algorithm is based on the RPN framework of [32] with a few modifications. Unlike the way in [32] that is category-agnostic, our RPN for localization is designed in a per-class form. This RPN ends with two sibling 1×1 convolutional layers for binary classification (cls) and box regression (reg), as in [32]. The cls and reg layers are both in a per-class form, in contrast to [32]. Specifically, the cls layer has a 1000-d output, and each dimension is binary logistic regression for predicting being or not being an object class; the reg layer has a 1000×4-d output consisting of box regressors for 1000 classes. As in [32], our bounding box regression is with reference to multiple translation-invariant "anchor" boxes at each position.
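A hedged sketch of such a per-class head follows; the anchor count and the intermediate width are illustrative assumptions (the paper specifies only the per-class 1000-d cls and 1000×4-d reg outputs), not values taken from the authors' implementation.

```python
# Per-class RPN head: binary objectness per class and 4 box offsets per
# class, emitted per anchor position by 1x1 convolutions.
import torch.nn as nn
import torch.nn.functional as F

class PerClassRPNHead(nn.Module):
    def __init__(self, feat_ch=1024, num_classes=1000, num_anchors=9):
        super().__init__()
        self.conv = nn.Conv2d(feat_ch, 256, 3, padding=1)  # shared 3x3 (width assumed)
        self.cls = nn.Conv2d(256, num_classes * num_anchors, 1)      # per-class logits
        self.reg = nn.Conv2d(256, num_classes * num_anchors * 4, 1)  # per-class boxes

    def forward(self, x):
        h = F.relu(self.conv(x))
        return self.cls(h), self.reg(h)
```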
As in our ImageNet classification training (Sec. 3.4), we randomly sample 224×224 crops for data augmentation. We use a mini-batch size of 256 images for fine-tuning. To avoid negative samples being dominant, 8 anchors are randomly sampled for each image, where the sampled positive and negative anchors have a ratio of 1:1 [32]. For testing, the network is applied on the image fully-convolutionally.

Table 13 compares the localization results. Following [41], we first perform "oracle" testing using the ground truth class as the classification prediction. VGG's paper [41]
| method | top-5 localization err (val) | top-5 localization err (test) |
|---|---|---|
| OverFeat [40] (ILSVRC'13) | 30.0 | 29.9 |
| GoogLeNet [44] (ILSVRC'14) | - | 26.7 |
| VGG [41] (ILSVRC'14) | 26.9 | 25.3 |
| ours (ILSVRC'15) | 8.9 | 9.0 |
Table 14. Comparisons of localization error (%) on the ImageNet dataset with state-of-the-art methods.
reports a center-crop error of 33.1% (Table 13) using ground truth classes. Under the same setting, our RPN method using ResNet-101 net significantly reduces the center-crop error to 13.3%. This comparison demonstrates the excellent performance of our framework. With dense (fully convolutional) and multi-scale testing, our ResNet-101 has an error of 11.7% using ground truth classes. Using ResNet-101 for predicting classes (4.6% top-5 classification error, Table 4), the top-5 localization error is 14.4%.

The above results are only based on the proposal network (RPN) in Faster R-CNN [32]. One may use the detection network (Fast R-CNN [7]) in Faster R-CNN to improve the results. But we notice that on this dataset, one image usually contains a single dominant object, and the proposal regions highly overlap with each other and thus have very similar RoI-pooled features. As a result, the image-centric training of Fast R-CNN [7] generates samples of small variations, which may not be desired for stochastic training. Motivated by this, in our current experiment we use the original R-CNN [8] that is RoI-centric, in place of Fast R-CNN.

Our R-CNN implementation is as follows. We apply the per-class RPN trained as above on the training images to predict bounding boxes for the ground truth class. These predicted boxes play a role of class-dependent proposals. For each training image, the highest scored 200 proposals are extracted as training samples to train an R-CNN classifier. The image region is cropped from a proposal, warped to 224×224 pixels, and fed into the classification network as in R-CNN [8]. The outputs of this network consist of two sibling fc layers for cls and reg, also in a per-class form. This R-CNN network is fine-tuned on the training set using a mini-batch size of 256 in the RoI-centric fashion. For testing, the RPN generates the highest scored 200 proposals for each predicted class, and the R-CNN network is used to update these proposals' scores and box positions.

This method reduces the top-5 localization error to 10.6% (Table 13). This is our single-model result on the validation set. Using an ensemble of networks for both classification and localization, we achieve a top-5 localization error of 9.0% on the test set. This number significantly outperforms the ILSVRC'14 results (Table 14), showing a 64% relative reduction of error. This result won the 1st place in the ImageNet localization task in ILSVRC 2015. | {
"id": "1505.00387"
} |
1512.02167 | Simple Baseline for Visual Question Answering | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code. | http://arxiv.org/pdf/1512.02167 | Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | cs.CV, cs.CL | One comparison method's scores are put into the correct column, and a
new experiment of generating attention map is added | null | cs.CV | 20151207 | 20151215 |
# Simple Baseline for Visual Question Answering
Bolei Zhou1, Yuandong Tian2, Sainbayar Sukhbaatar2, Arthur Szlam2, and Rob Fergus2
1Massachusetts Institute of Technology 2Facebook AI Research
# Abstract
We describe a very simple bag-of-words baseline for visual question answering. This baseline concatenates the word features from the question and CNN features from the image to predict the answer. When evaluated on the challenging VQA dataset [2], it shows comparable performance to many recent approaches using recurrent neural networks. To explore the strength and weakness of the trained model, we also provide an interactive web demo1, and open-source code2.
# 1 Introduction
Combining Natural Language Processing with Computer Vision for high-level scene interpretation is a recent trend, e.g., image captioning [10, 15, 7, 4]. These works have benefited from the rapid development of deep learning for visual recognition (object recognition [8] and scene recognition [20]), and have been made possible by the emergence of large image datasets and text corpora (e.g., [9]). Beyond image captioning, a natural next step is visual question answering (QA) [12, 2, 5].

Compared with the image captioning task, in which an algorithm is required to generate free-form text description for a given image, visual QA can involve a wider range of knowledge and reasoning skills. A captioning algorithm has the liberty to pick the easiest relevant descriptions of the image, whereas responding to a question needs to find the correct answer for *that* question. Furthermore, the algorithms for visual QA are required to answer all kinds of questions people might ask about the image, some of which might be relevant to the image contents, such as "what books are under the television" and "what is the color of the boat", while others might require knowledge or reasoning beyond the image content, such as "why is the baby crying?" and "which chair is the most expensive?". Building robust algorithms for visual QA that perform at near human levels would be an important step towards solving AI.
Recently, several papers have appeared on arXiv (after the CVPR'16 submission deadline) proposing neural network architectures for visual question answering, such as [13, 17, 5, 18, 16, 3, 11, 1]. Some of them are derived from the image captioning framework, in which the output of a recurrent neural network (e.g., LSTM [16, 11, 1]) applied to the question sentence is concatenated with visual features from VGG or other CNNs to feed a classifier to predict the answer. Other models integrate visual attention mechanisms [17, 13, 3] and visualize how the network learns to attend the local image regions relevant to the content of the question.
Interestingly, we notice that in one of the earliest VQA papers [12], the simple baseline Bag-of-words + image feature (referred to as BOWIMG baseline) outperforms the LSTM-based models on a synthesized visual QA dataset built up on top of the image captions of COCO dataset [9]. For the recent much larger COCO VQA dataset [2], the BOWIMG baseline performs worse than the LSTM-based models [2].
1http://visualqa.csail.mit.edu 2https://github.com/metalbubble/VQAbaseline
Figure 1: Framework of the iBOWIMG. Features from the question sentence and image are con- catenated then feed into softmax to predict the answer.
In this work, we carefully implement the BOWIMG baseline model. We call it iBOWIMG to avoid confusion with the implementation in [2]. With proper setup and training, this simple baseline model shows comparable performance to many recent recurrent network-based approaches for visual QA. Further analysis shows that the baseline learns to correlate the informative words in the question sentence and visual concepts in the image with the answer. Furthermore, such correlations can be used to compute reasonable spatial attention map with the help of the CAM technique proposed in [20]. The source code and the visual QA demo based on the trained model are publicly available. In the demo, iBOWIMG baseline gives answers to any question relevant to the given images. Playing with the visual QA models interactively could reveal the strengths and weakness of the trained model.
# 2 iBOWIMG for Visual Question Answering
In most of the recent proposed models, visual QA is simplified to a classification task: the number of the different answers in the training set is the number of the final classes the models need to learn to predict. The general pipeline of those models is that the word feature extracted from the question sentence is concatenated with the visual feature extracted from the image, then they are fed into a softmax layer to predict the answer class. The visual feature is usually taken from the top of the VGG network or GoogLeNet, while the word features of the question sentence are usually the popular LSTM-based features [12, 2].
In our iBOWIMG model, we simply use naive bag-of-words as the text feature, and use the deep features from GoogLeNet [14] as the visual features. Figure 1 shows the framework of the iBOWIMG model, which can be implemented in Torch with no more than 10 lines of code. The input question is first converted to a one-hot vector, which is transformed to word feature via a word embedding layer and then is concatenated with the image feature from CNN. The combined feature is sent to the softmax layer to predict the answer class, which essentially is a multi-class logistic regression model.
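Because the model is a single linear classifier over concatenated features, it is easy to re-sketch; the version below is a PyTorch approximation of Figure 1 (the released code is in Lua Torch). The 5,746-word vocabulary and 5,216 answer classes come from Section 3.2; the embedding width and the 1024-d GoogLeNet pooled feature size are illustrative assumptions.

```python
# iBOWIMG sketch: summed word embeddings (a learned bag-of-words) are
# concatenated with a pre-extracted CNN image feature and classified by
# one linear (softmax) layer, i.e., multi-class logistic regression.
import torch
import torch.nn as nn

class IBOWIMG(nn.Module):
    def __init__(self, vocab_size=5746, embed_dim=256,
                 img_dim=1024, num_answers=5216):
        super().__init__()
        self.word_embed = nn.EmbeddingBag(vocab_size, embed_dim, mode='sum')
        self.classifier = nn.Linear(embed_dim + img_dim, num_answers)

    def forward(self, question_word_ids, image_feature):
        q = self.word_embed(question_word_ids)      # (batch, embed_dim)
        x = torch.cat([q, image_feature], dim=1)    # concatenate the features
        return self.classifier(x)                   # answer-class logits

model = IBOWIMG()
logits = model(torch.randint(0, 5746, (2, 8)),     # two 8-word questions
               torch.randn(2, 1024))               # pooled CNN features
```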
# 3 Experiments
Here we train and evaluate the iBOWIMG model on the Full release of COCO VQA dataset [2], the largest VQA dataset so far. In the COCO VQA dataset, there are 3 questions annotated by Amazon Mechanical Turk (AMT) workers for each image in the COCO dataset. For each question, 10 answers are annotated by another batch of AMT workers. To pre-process the annotation for training, we perform majority voting on the 10 ground-truth answers to get the most certain answer
# Table 1: Performance comparison on test-dev.
| method | Open-Ended Overall | yes/no | number | others | Multiple-Choice Overall | yes/no | number | others |
|---|---|---|---|---|---|---|---|---|
| IMG [2] | 28.13 | 64.01 | 00.42 | 03.77 | 30.53 | 69.87 | 00.45 | 03.76 |
| BOW [2] | 48.09 | 75.66 | 36.70 | 27.14 | 53.68 | 75.71 | 37.05 | 38.64 |
| BOWIMG [2] | 52.64 | 75.55 | 33.67 | 37.37 | 58.97 | 75.59 | 34.35 | 50.33 |
| LSTMIMG [2] | 53.74 | 78.94 | 35.24 | 36.42 | 57.17 | 78.95 | 35.80 | 43.41 |
| CompMem [6] | 52.62 | 78.33 | 35.93 | 34.46 | - | - | - | - |
| NMN+LSTM [1] | 54.80 | 77.70 | 37.20 | 39.30 | - | - | - | - |
| WR Sel. [13] | - | - | - | - | 60.96 | - | - | - |
| ACK [16] | 55.72 | 79.23 | 36.13 | 40.08 | - | - | - | - |
| DPPnet [11] | 57.22 | 80.71 | 37.24 | 41.69 | 62.48 | 80.79 | 38.94 | 52.16 |
| iBOWIMG | 55.72 | 76.55 | 35.03 | 42.62 | 61.68 | 76.68 | 37.05 | 54.44 |
for each question. Here the answer could be a single word or multiple words. Then we have the 3 question-answer pairs from each image for training. There are in total 248,349 pairs in train2014 and 121,512 pairs in val2014, for 123,287 images overall in the training set. Here train2014 and val2014 are the standard splits of the image set in the COCO dataset.
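The majority-voting step is a one-liner; a small sketch (the string normalization is a hypothetical choice, not specified by the paper):

```python
# Pick the most frequent of the 10 AMT answers as the training target.
from collections import Counter

def majority_answer(answers):
    return Counter(a.strip().lower() for a in answers).most_common(1)[0][0]

print(majority_answer(["yes"] * 6 + ["no"] * 3 + ["maybe"]))  # -> "yes"
```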
To generate the training set and validation set for our model, we first randomly split the images of COCO val2014 into 70% subset A and 30% subset B. To avoid potential overfitting, questions sharing the same image will be placed into the same split. The question-answer pairs from the images of COCO train2014 + val2014 subset A are combined and used for training, while the val2014 subset B is used as validation set for parameter tuning. After we find the best model parameters, we combine the whole train2014 and val2014 to train the final model. We submit the prediction result given by the final model on the testing set (COCO test2015) to the evaluation server, to get the final accuracy on the test-dev and test-standard set. For the Open-Ended Question track, we take the top-1 predicted answer from the softmax output. For the Multiple-Choice Question track, we first get the softmax probability for each of the given choices then select the most confident one.
The code is implemented in Torch. The training takes about 10 hours on a single GPU NVIDIA Titan Black.
# 3.1 Benchmark Performance
According to the evaluation standard of the VQA dataset, the result of any proposed VQA model should report accuracy on the test-standard set for fair comparison. We report our baseline on the test-dev set in Table 1 and the test-standard set in Table 2. The test-dev set is used for debugging and validation experiments and allows for unlimited submission to the evaluation server, while test-standard is used for model comparison with limited submission times.

Since this VQA dataset is rather new, the publicly available models evaluated on the dataset are all from non-peer reviewed arXiv papers. We include the performance of the models available at the time of writing (Dec. 5, 2015) [2, 6, 1, 13, 16, 11]. Note that some models are evaluated on either test-dev or test-standard for either the Open-Ended or Multiple-Choice track.

The full set of the VQA dataset was released on Oct. 6 2015; previously the v0.1 version and v0.9 version had been released. We notice that some models are evaluated using non-standard setups, rendering performance comparisons difficult. [17] (arXiv dated at Nov. 17 2015) used the v0.9 version of VQA with their own split of training and testing; [18] (arXiv dated at Nov. 7 2015) used their own split of training and testing for the val2014; [3] (arXiv dated at Nov. 18 2015) used the v0.9 version of the VQA dataset. So these are not included in the comparison.

Except for the IMG, BOW, and BOWIMG baselines provided in [2], all the compared methods use either deep or recursive neural networks. However, our iBOWIMG baseline shows comparable performance against these much more complex models, except for DPPnet [11] that is about 1.5% better.
# Table 2: Performance comparison on test-standard.
| method | Open-Ended Overall | yes/no | number | others | Multiple-Choice Overall | yes/no | number | others |
|---|---|---|---|---|---|---|---|---|
| LSTMIMG [2] | 54.06 | - | - | - | - | - | - | - |
| NMN+LSTM [1] | 55.10 | - | - | - | - | - | - | - |
| ACK [16] | 55.98 | 79.05 | 36.10 | 40.61 | - | - | - | - |
| DPPnet [11] | 57.36 | 80.28 | 36.92 | 42.24 | 62.69 | 80.35 | 38.79 | 52.79 |
| iBOWIMG | 55.89 | 76.76 | 34.98 | 42.62 | 61.97 | 76.86 | 37.30 | 54.60 |
# 3.2 Training Details
Learning rate and weight clip. We find that setting up a different learning rate and weight clipping for the word embedding layer and softmax layer leads to better performance. The learning rate for the word embedding layer should be much higher than the learning rate of the softmax layer to learn a good word embedding. From the performance of BOW in Table 1, we can see that a good word model is crucial to the accuracy, as the BOW model alone could achieve close to 48%, even without looking at the image content.
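Expressed with PyTorch parameter groups (the original code is Lua Torch), the setup looks like the sketch below; the specific learning rates and the clipping bound are placeholders, not the paper's tuned values.

```python
# Two parameter groups: a much larger learning rate for the word embedding
# than for the softmax layer, plus a crude per-layer weight clip.
import torch

optimizer = torch.optim.SGD(
    [{'params': model.word_embed.parameters(), 'lr': 0.8},    # word embedding
     {'params': model.classifier.parameters(), 'lr': 0.01}],  # softmax layer
    lr=0.01)

def clip_weights(module, bound):
    with torch.no_grad():                 # apply after each optimizer step
        for p in module.parameters():
            p.clamp_(-bound, bound)
```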
Model parameters to tune. Though our model could be considered as the simplest baseline so far for visual QA, there are several model parameters to tune: 1) the number of epochs to train. 2) the learning rate and weight clip. 3) the threshold for removing less frequent question words and answer classes. We iterate to search the best value of each model parameter separately on the val2014 subset B. In our best model, there are 5,746 words in the dictionary of question sentence, 5,216 classes of answers. The specific model parameters can be found in the source code.
# 3.3 Understanding the Visual QA model
From the comparisons above, we can see that our baseline model performs as well as the recurrent neural network models on the VQA dataset. Furthermore, due to its simplicity, the behavior of the model could be easily interpreted, demonstrating what it learned for visual QA.
Essentially, the BOWIMG baseline model learns to memorize the correlation between the answer class and the informative words in the question sentence along with the visual feature. We split the learned weights of softmax into two parts, one part for the word feature and the other part for the visual feature. Therefore,
r = M_w x_w + M_v x_v.    (1)

Here the softmax matrix M is decomposed into the weights M_w for the word feature x_w and the weights M_v for the visual feature x_v, whereas M = [M_w, M_v]. r is the response of the answer class before softmax normalization. Denote the response r_w = M_w x_w as the contribution from the question words and r_v = M_v x_v as the contribution from the image contents. Thus for each predicted answer, we know exactly the proportions of contribution from word and image content respectively. We also could rank r_w and r_v to know what the predicted answer could be if the model only relies on one side of information.
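The decomposition is a matter of slicing the classifier weights; this sketch reuses the `IBOWIMG` model from above (with its assumed 256-d word embedding), so the column split is an assumption tied to that sketch.

```python
# Split the softmax matrix M = [M_w, M_v] and score each side separately.
import torch

W = model.classifier.weight.detach()   # (num_answers, embed_dim + img_dim)
Mw, Mv = W[:, :256], W[:, 256:]        # word weights / visual weights

def contributions(q_feature, img_feature):
    r_w = q_feature @ Mw.t()           # contribution from the question words
    r_v = img_feature @ Mv.t()         # contribution from the image content
    return r_w, r_v                    # full response r = r_w + r_v (+ bias)
```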
Figure 2 shows some examples of the predictions, revealing that the question words usually have dominant influence on predicting the answer. For example, the correctly predicted answers for the two questions given for the first image, "what is the color of sofa" and "which brand is the laptop", come mostly from the question words, without the need for the image. This demonstrates the bias in the frequency of objects and actions appearing in the images of the COCO dataset. For the second image, we ask "what are they doing": the words-only prediction gives "playing wii (10.62), eating (9.97), playing frisbee (9.24)", while the full prediction gives the correct answer "playing baseball (10.67 = 2.01 [image] + 8.66 [word])".

To further understand the answers predicted by the model given the visual feature and question sentence, we first decompose the word contribution of the answer into single words of the question sentence, then we visualize the informative image regions relevant to the answer through the technique proposed in [19].
[Figure 2 content: question/answer examples with score decompositions, e.g., "what is the color of the sofa" → brown (12.89 = 1.01 [image] + 11.88 [word]); "what are they doing" → playing baseball (10.67 = 2.01 [image] + 8.66 [word])]
Figure 2: Examples of visual question answering from the iBOWIMG baseline. For each image there are two questions and the top 3 predicted answers from the model. The prediction score of each answer is decomposed into the contributions of image and words respectively. The predicted answers which rely purely on question words or image are also shown.
[Figure 3 content: e.g., "What are they doing?" → texting (12.02 = 3.78 [image] + 8.24 [word]), word importance: doing (7.01); "What is he eating?" → hot dog (13.01 = 5.02 [image] + 7.99 [word]), word importance: eating (4.12)]
Figure 3: The examples of the word importance of question sentences and the informative image regions relevant to the predicted answers.
Since there are just two linear transformations (one is word embedding and the other is softmax matrix multiplication) from the one-hot vector to the answer response, we could easily know the importance of each single word in the question to the predicted answer. In Figure 3, we plot the ranked word importance for each word in the question sentence. In the first image the question word "doing" is informative to the answer "texting", while in the second image the question word "eating" is informative to the answer "hot dog".
To highlight the informative image regions relevant to the predicted answer we apply a technique called Class Activation Mapping (CAM) proposed in [19]. The CAM technique leverages the linear relation between the softmax prediction and the final convolutional feature map, which allows us to identify the most discriminative image regions relevant to the predicted result. In Figure 3 we plot the heatmaps generated by the CAM associated with the predicted answer, which highlight the
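A sketch of the CAM computation as used here: weight the last convolutional feature maps by the visual-half softmax weights of the predicted answer. This assumes the image feature is the global average pool of those maps, and reuses `Mv` from the decomposition sketch above.

```python
# Class Activation Mapping for a predicted answer class.
import torch
import torch.nn.functional as F

def answer_cam(conv_maps, answer_idx):
    # conv_maps: (C, H, W) from the CNN's last conv layer, C = img_dim
    weights = Mv[answer_idx]                      # (C,) visual weights
    cam = torch.einsum('c,chw->hw', weights, conv_maps)
    cam = F.relu(cam)                             # keep positive evidence only
    return cam / (cam.max() + 1e-8)               # normalize to [0, 1]
```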
[Figure 4 content: demo predictions, e.g., "where is the place" → field (10.63 = 3.05 [image] + 7.58 [word])]
Figure 4: Snapshot of the visual question answering demo. People could type questions into the demo and the demo will give answer predictions. Here we show the answer predictions for two questions.
informative image regions such as the cellphone in the first image to the answer "texting" and the hot dog in the second image to the answer "hot dog". The example in the lower part of Figure 3 shows the heatmaps generated by two different questions and answers. Visual features from CNN already have implicit attention and selectivity over the image region, thus the resulting class activation maps are similar to the maps generated by the attention mechanisms of the VQA models in [13, 17, 18].
# 4 Interactive Visual QA Demo
Question answering is essentially an interactive activity, thus it would be good to make the trained models able to interact with people in real time. Aided by the simplicity of the baseline model, we built a web demo where people can type a question about a given image and our AI system powered by iBOWIMG will reply with the most probable answers. Here the deep features of the images are extracted beforehand. Figure 4 shows a snapshot of the demo. People can play with the demo to see the strengths and weaknesses of the VQA model.
# 5 Concluding Remarks
For visual question answering on COCO dataset, our implementation of a simple baseline achieves comparable performance to several recently proposed recurrent neural network-based approaches. To reach the correct prediction, the baseline captures the correlation between the informative words in the question and the answer, and that between image contents and the answer. How to move beyond this, from memorizing the correlations to actual reasoning and understanding of the question and image, is a goal for future research.
# References
[1] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Deep compositional question answering with neural module networks. arXiv preprint arXiv:1511.02799, 2015.
[2] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. Vqa: Visual question answering. arXiv preprint arXiv:1505.00468, 2015.
[3] K. Chen, J. Wang, L.-C. Chen, H. Gao, W. Xu, and R. Nevatia. Abc-cnn: An attention based convolutional neural network for visual question answering. arXiv preprint arXiv:1511.05960, 2015.
[4] J. Devlin, S. Gupta, R. Girshick, M. Mitchell, and C. L. Zitnick. Exploring nearest neighbor approaches for image captioning. arXiv preprint arXiv:1505.04467, 2015.
[5] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are you talking to a machine? dataset and methods for multilingual image question answering. arXiv preprint arXiv:1505.05612, 2015.
[6] A. Jiang, F. Wang, F. Porikli, and Y. Li. Compositional memory for visual question answering. arXiv preprint arXiv:1511.05676, 2015.
[7] R. Kiros, R. Salakhutdinov, and R. Zemel. Multimodal neural language models. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 595–603, 2014.

[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[9] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014, pages 740–755. Springer, 2014.
[10] J. Mao, W. Xu, Y. Yang, J. Wang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). arXiv preprint arXiv:1412.6632, 2014.
[11] H. Noh, P. H. Seo, and B. Han. Image question answering using convolutional neural network with dynamic parameter prediction. arXiv preprint arXiv:1511.05756, 2015.
[12] M. Ren, R. Kiros, and R. Zemel. Exploring models and data for image question answering. In NIPS, volume 1, page 3, 2015.
[13] K. J. Shih, S. Singh, and D. Hoiem. Where to look: Focus regions for visual question answering. arXiv preprint arXiv:1511.07394, 2015.
[14] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.

[15] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014.
[16] Q. Wu, P. Wang, C. Shen, A. v. d. Hengel, and A. Dick. Ask me anything: Free- form visual question answering based on knowledge from external sources. arXiv preprint arXiv:1511.06973, 2015.
[17] H. Xu and K. Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. arXiv preprint arXiv:1511.05234, 2015.
[18] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked attention networks for image question answering. arXiv preprint arXiv:1511.02274, 2015.
[19] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. arXiv preprint arXiv:1512.04150, 2015.
[20] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In Advances in Neural Information Processing Systems, pages 487–495, 2014.
1512.00567 | Rethinking the Inception Architecture for Computer Vision | Convolutional networks are at the core of most state-of-the-art computer
vision solutions for a wide variety of tasks. Since 2014 very deep
convolutional networks started to become mainstream, yielding substantial gains
in various benchmarks. Although increased model size and computational cost
tend to translate to immediate quality gains for most tasks (as long as enough
labeled data is provided for training), computational efficiency and low
parameter count are still enabling factors for various use cases such as mobile
vision and big-data scenarios. Here we explore ways to scale up networks in
ways that aim at utilizing the added computation as efficiently as possible by
suitably factorized convolutions and aggressive regularization. We benchmark
our methods on the ILSVRC 2012 classification challenge validation set
and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6%
top-5 error for single frame evaluation using a network with a computational
cost of 5 billion multiply-adds per inference and with using less than 25
million parameters. With an ensemble of 4 models and multi-crop evaluation, we
report 3.5% top-5 error on the validation set (3.6% error on the test set) and
17.3% top-1 error on the validation set. | http://arxiv.org/pdf/1512.00567 | Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna | cs.CV | null | null | cs.CV | 20151202 | 20151211 | 5 1 0 2 c e D 1 1
] V C . s c [ 3 v 7 6 5 0 0 . 2 1 5 1 : v i X r a
# Rethinking the Inception Architecture for Computer Vision
# Christian Szegedy Google Inc. szegedy@google.com
# Vincent Vanhoucke vanhoucke@google.com
# Sergey Ioffe sioffe@google.com
Jonathon Shlens shlens@google.com
# Zbigniew Wojna University College London zbigniewwojna@gmail.com
# Abstract
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error.
# 1. Introduction

Since the 2012 ImageNet competition [16] winning entry by Krizhevsky et al [9], their network "AlexNet" has been successfully applied to a larger variety of computer vision tasks, for example to object-detection [5], segmentation [12], human pose estimation [22], video classification [8], object tracking [23], and superresolution [3].

These successes spurred a new line of research that focused on finding higher performing convolutional neural networks. Starting in 2014, the quality of network architectures significantly improved by utilizing deeper and wider networks. VGGNet [18] and GoogLeNet [20] yielded similarly high performance in the 2014 ILSVRC [16] classification challenge. One interesting observation was that gains in the classification performance tend to transfer to significant quality gains in a wide variety of application domains. This means that architectural improvements in deep convolutional architecture can be utilized for improving performance for most other computer vision tasks that are increasingly reliant on high quality, learned visual features. Also, improvements in the network quality resulted in new application domains for convolutional networks in cases where AlexNet features could not compete with hand engineered, crafted solutions, e.g. proposal generation in detection [4].

Although VGGNet [18] has the compelling feature of architectural simplicity, this comes at a high cost: evaluating the network requires a lot of computation. On the other hand, the Inception architecture of GoogLeNet [20] was also designed to perform well even under strict constraints on memory and computational budget. For example, GoogLeNet employed only 5 million parameters, which represented a 12x reduction with respect to its predecessor AlexNet, which used 60 million parameters. Furthermore, VGGNet employed about 3x more parameters than AlexNet.
The computational cost of Inception is also much lower than VGGNet or its higher performing successors [6]. This has made it feasible to utilize Inception networks in big-data scenarios[17], [13], where huge amount of data needed to be processed at reasonable cost or scenarios where memory or computational capacity is inherently limited, for example in mobile vision settings. It is certainly possible to mitigate parts of these issues by applying specialized solutions to tar- get memory use [2], [15] or by optimizing the execution of certain operations via computational tricks [10]. However, these methods add extra complexity. Furthermore, these methods could be applied to optimize the Inception archi- tecture as well, widening the efï¬ciency gap again.
Still, the complexity of the Inception architecture makes
it more difficult to make changes to the network. If the architecture is scaled up naively, large parts of the computational gains can be immediately lost. Also, [20] does not provide a clear description about the contributing factors that lead to the various design decisions of the GoogLeNet architecture. This makes it much harder to adapt it to new use-cases while maintaining its efficiency. For example, if it is deemed necessary to increase the capacity of some Inception-style model, the simple transformation of just doubling the number of all filter bank sizes will lead to a 4x increase in both computational cost and number of parameters. This might prove prohibitive or unreasonable in a lot of practical scenarios, especially if the associated gains are modest. In this paper, we start with describing a few general principles and optimization ideas that proved to be useful for scaling up convolution networks in efficient ways. Although our principles are not limited to Inception-type networks, they are easier to observe in that context as the generic structure of the Inception style building blocks is flexible enough to incorporate those constraints naturally. This is enabled by the generous use of dimensional reduction and parallel structures of the Inception modules which allows for mitigating the impact of structural changes on nearby components. Still, one needs to be cautious about doing so, as some guiding principles should be observed to maintain high quality of the models.
# 2. General Design Principles
Here we will describe a few design principles based on large-scale experimentation with various architectural choices with convolutional networks. At this point, the util- ity of the principles below are speculative and additional future experimental evidence will be necessary to assess their accuracy and domain of validity. Still, grave devia- tions from these principles tended to result in deterioration in the quality of the networks and ï¬xing situations where those deviations were detected resulted in improved archi- tectures in general.
1. Avoid representational bottlenecks, especially early in the network. Feed-forward networks can be represented by an acyclic graph from the input layer(s) to the classifier or regressor. This defines a clear direction for the information flow. For any cut separating the inputs from the outputs, one can access the amount of information passing through the cut. One should avoid bottlenecks with extreme compression. In general the representation size should gently decrease from the inputs to the outputs before reaching the final representation used for the task at hand. Theoretically, information content can not be assessed merely by the dimensionality of the representation as it discards important factors like correlation structure; the dimensionality merely provides a rough estimate of information content.
2. Higher dimensional representations are easier to pro- cess locally within a network. Increasing the activa- tions per tile in a convolutional network allows for more disentangled features. The resulting networks will train faster.
3. Spatial aggregation can be done over lower dimensional embeddings without much or any loss in representational power. For example, before performing a more spread out (e.g. 3 × 3) convolution, one can reduce the dimension of the input representation before the spatial aggregation without expecting serious adverse effects. We hypothesize that the reason for that is the strong correlation between adjacent units, which results in much less loss of information during dimension reduction, if the outputs are used in a spatial aggregation context. Given that these signals should be easily compressible, the dimension reduction even promotes faster learning.
4. Balance the width and depth of the network. Optimal performance of the network can be reached by balanc- ing the number of ï¬lters per stage and the depth of the network. Increasing both the width and the depth of the network can contribute to higher quality net- works. However, the optimal improvement for a con- stant amount of computation can be reached if both are increased in parallel. The computational budget should therefore be distributed in a balanced way between the depth and width of the network.
Although these principles might make sense, it is not straightforward to use them to improve the quality of networks out of the box. The idea is to use them judiciously in ambiguous situations only.
# 3. Factorizing Convolutions with Large Filter Size
Much of the original gains of the GoogLeNet net- work [20] arise from a very generous use of dimension re- duction. This can be viewed as a special case of factorizing convolutions in a computationally efï¬cient manner. Con- sider for example the case of a 1 à 1 convolutional layer followed by a 3 à 3 convolutional layer. In a vision net- work, it is expected that the outputs of near-by activations are highly correlated. Therefore, we can expect that their activations can be reduced before aggregation and that this should result in similarly expressive local representations.
Here we explore other ways of factorizing convolutions in various settings, especially in order to increase the com- putational efï¬ciency of the solution. Since Inception net- works are fully convolutional, each weight corresponds to
Figure 1. Mini-network replacing the 5 Ã 5 convolutions.
one multiplication per activation. Therefore, any reduction in computational cost results in reduced number of param- eters. This means that with suitable factorization, we can end up with more disentangled parameters and therefore with faster training. Also, we can use the computational and memory savings to increase the ï¬lter-bank sizes of our network while maintaining our ability to train each model replica on a single computer.
# 3.1. Factorization into smaller convolutions
Convolutions with larger spatial filters (e.g. 5 × 5 or 7 × 7) tend to be disproportionally expensive in terms of computation. For example, a 5 × 5 convolution with n filters over a grid with m filters is 25/9 = 2.78 times more computationally expensive than a 3 × 3 convolution with the same number of filters. Of course, a 5 × 5 filter can capture dependencies between signals between activations of units further away in the earlier layers, so a reduction of the geometric size of the filters comes at a large cost of expressiveness. However, we can ask whether a 5 × 5 convolution could be replaced by a multi-layer network with fewer parameters with the same input size and output depth. If we zoom into the computation graph of the 5 × 5 convolution, we see that each output looks like a small fully-connected network sliding over 5 × 5 tiles over its input (see Figure 1). Since we are constructing a vision network, it seems natural to exploit translation invariance again and replace the fully connected component by a two layer convolutional architecture: the first layer is a 3 × 3 convolution, the second is a fully connected layer on top of the 3 × 3 output grid of the first layer (see Figure 1). Sliding this small network over the input activation grid boils down to replacing the 5 × 5 convolution with two layers of 3 × 3 convolution (compare Figure 4 with 5).
This setup clearly reduces the parameter count by shar- ing the weights between adjacent tiles. To analyze the ex-
Figure 2. One of several control experiments between two Incep- tion models, one of them uses factorization into linear + ReLU layers, the other uses two ReLU layers. After 3.86 million opera- tions, the former settles at 76.2%, while the latter reaches 77.2% top-1 Accuracy on the validation set.
pected computational cost savings, we will make a few simplifying assumptions that apply for the typical situations: We can assume that n = αm, that is, that we want to change the number of activations/unit by a constant alpha factor. Since the 5 × 5 convolution is aggregating, α is typically slightly larger than one (around 1.5 in the case of GoogLeNet). Having a two layer replacement for the 5 × 5 layer, it seems reasonable to reach this expansion in two steps: increasing the number of filters by √α in both steps. In order to simplify our estimate, we choose α = 1 (no expansion). If we would naively slide a network without reusing the computation between neighboring grid tiles, we would increase the computational cost. Sliding this network can be represented by two 3 × 3 convolutional layers which reuse the activations between adjacent tiles. This way, we end up with a net (9 + 9)/25 reduction of computation, resulting in a relative gain of 28% by this factorization. The exact same saving holds for the parameter count as each parameter is used exactly once in the computation of the activation of each unit. Still, this setup raises two general questions: Does this replacement result in any loss of expressiveness? If our main goal is to factorize the linear part of the computation, would it not suggest to keep linear activations in the first layer? We have run several control experiments (for example see figure 2) and using linear activation was always inferior to using rectified linear units in all stages of the factorization. We attribute this gain to the enhanced space of variations that the network can learn especially if we batch-normalize [7] the output activations. One can see similar effects when using linear activations for the dimension reduction components.
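As a minimal illustration of this accounting, the following PyTorch sketch compares the parameter counts of the two options; the channel count is an arbitrary assumption and α = 1 as above, so it shows the (9 + 9)/25 ratio rather than the paper's exact configuration.

```python
import torch.nn as nn

# A minimal sketch of the factorization in Figures 1 and 5: one 5x5
# convolution versus two stacked 3x3 convolutions with ReLUs. The channel
# count (64) is illustrative, and alpha = 1, i.e., no filter expansion.
c = 64
conv5x5 = nn.Conv2d(c, c, kernel_size=5, padding=2)
two_3x3 = nn.Sequential(
    nn.Conv2d(c, c, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(c, c, kernel_size=3, padding=1), nn.ReLU(inplace=True),
)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

# The ratio approaches (9 + 9)/25 = 0.72, the 28% relative saving above.
print(n_params(two_3x3) / n_params(conv5x5))
```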
# 3.2. Spatial Factorization into Asymmetric Convolutions
Figure 3. Mini-network replacing the 3 × 3 convolutions. The lower layer of this network consists of a 3 × 1 convolution with 3 output units.

Figure 4. Original Inception module as described in [20].

The above results suggest that convolutions with filters larger than 3 × 3 might not be generally useful as they can always be reduced into a sequence of 3 × 3 convolutional layers. Still we can ask the question whether one should factorize them into smaller, for example 2 × 2 convolutions. However, it turns out that one can do even better than 2 × 2 by using asymmetric convolutions, e.g. n × 1. For example using a 3 × 1 convolution followed by a 1 × 3 convolution is equivalent to sliding a two layer network with the same receptive field as in a 3 × 3 convolution (see figure 3). Still the two-layer solution is 33% cheaper for the same number of output filters, if the number of input and output filters is equal. By comparison, factorizing a 3 × 3 convolution into two 2 × 2 convolutions represents only an 11% saving of computation.
Figure 5. Inception modules where each 5 × 5 convolution is replaced by two 3 × 3 convolutions, as suggested by principle 3 of Section 2.

In theory, we could go even further and argue that one can replace any n × n convolution by a 1 × n convolution followed by an n × 1 convolution and the computational cost saving increases dramatically as n grows (see figure 6). In practice, we have found that employing this factorization does not work well on early layers, but it gives very good results on medium grid-sizes (on m × m feature maps, where m ranges between 12 and 20). On that level, very good results can be achieved by using 1 × 7 convolutions followed by 7 × 1 convolutions.
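A minimal sketch of this asymmetric factorization for n = 7 is shown below; the channel count is assumed for illustration and does not reflect the actual filter bank sizes of the network.

```python
import torch.nn as nn

# Sketch of the asymmetric factorization for n = 7: a 1x7 convolution
# followed by a 7x1 convolution, as used on the 17x17 grid. The channel
# count is an assumption, not the paper's exact filter bank size.
c, n = 192, 7
asym_7x7 = nn.Sequential(
    nn.Conv2d(c, c, kernel_size=(1, n), padding=(0, n // 2)), nn.ReLU(inplace=True),
    nn.Conv2d(c, c, kernel_size=(n, 1), padding=(n // 2, 0)), nn.ReLU(inplace=True),
)
# Cost is 2*n*c*c multiply-adds per position versus n*n*c*c for a full
# n x n filter, so the saving grows with n (2/7 of the cost for n = 7).
```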
# 4. Utility of Auxiliary Classiï¬ers
[20] has introduced the notion of auxiliary classifiers to improve the convergence of very deep networks. The original motivation was to push useful gradients to the lower layers to make them immediately useful and improve the convergence during training by combating the vanishing gradient problem in very deep networks. Also Lee et al [11] argues that auxiliary classifiers promote more stable learning and better convergence. Interestingly, we found that auxiliary classifiers did not result in improved convergence early in the training: the training progression of networks with and without side head looks virtually identical before both models reach high accuracy. Near the end of training, the network with the auxiliary branches starts to overtake the accuracy of the network without any auxiliary branch and reaches a slightly higher plateau.
Also [20] used two side-heads at different stages in the network. The removal of the lower auxiliary branch did not have any adverse effect on the final quality of the network. Together with the earlier observation in the previous paragraph, this means that the original hypothesis of [20] that these branches help evolving the low-level features is most likely misplaced. Instead, we argue that the auxiliary classifiers act as regularizers. This is supported by the fact that the main classifier of the network performs better if the side branch is batch-normalized [7] or has a dropout layer. This also gives weak supporting evidence for the conjecture that batch normalization acts as a regularizer.

Figure 6. Inception modules after the factorization of the n × n convolutions. In our proposed architecture, we chose n = 7 for the 17 × 17 grid. (The filter sizes are picked using principle 3.)
# 5. Efï¬cient Grid Size Reduction
Traditionally, convolutional networks used some pooling operation to decrease the grid size of the feature maps. In order to avoid a representational bottleneck, before applying maximum or average pooling the activation dimension of the network filters is expanded. For example, starting with a d × d grid with k filters, if we would like to arrive at a d/2 × d/2 grid with 2k filters, we first need to compute a stride-1 convolution with 2k filters and then apply an additional pooling step. This means that the overall computational cost is dominated by the expensive convolution on the larger grid using 2d²k² operations. One possibility would be to switch to pooling followed by convolution, resulting in only 2(d/2)²k² operations, reducing the computational cost by a quarter. However, this creates a representational bottleneck as the overall dimensionality of the representation drops to (d/2)²k, resulting in less expressive networks (see Figure 9). Instead of doing so, we suggest another variant that reduces the computational cost even further while removing the representational bottleneck (see Figure 10). We can use two parallel stride-2 blocks: P and C. P is a pooling layer (either average or maximum pooling) of the activation; both of them are stride 2, and their filter banks are concatenated as in figure 10.

Figure 7. Inception modules with expanded filter bank outputs. This architecture is used on the coarsest (8 × 8) grids to promote high dimensional representations, as suggested by principle 2 of Section 2. We are using this solution only on the coarsest grid, since that is the place where producing a high dimensional sparse representation is the most critical as the ratio of local processing (by 1 × 1 convolutions) is increased compared to the spatial aggregation.

[Figure 8 diagram: 17×17×768 input → 5×5 average pooling with stride 3 → 5×5×768 → 1×1 convolution → 5×5×128 → fully connected → 1×1×1024.]

Figure 8. Auxiliary classifier on top of the last 17×17 layer. Batch normalization [7] of the layers in the side head results in a 0.4% absolute gain in top-1 accuracy. The lower axis shows the number of iterations performed, each with batch size 32.
Figure 9. Two alternative ways of reducing the grid size. The solution on the left violates principle 1 of Section 2 by introducing a representational bottleneck. The version on the right is 3 times more expensive computationally.
Figure 10. Inception module that reduces the grid-size while expanding the filter banks. It is both cheap and avoids the representational bottleneck as is suggested by principle 1. The diagram on the right represents the same solution but from the perspective of grid sizes rather than the operations.
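A hedged sketch of this parallel reduction follows; it pairs a single 3 × 3 stride-2 convolution branch with a stride-2 max pooling branch, simplifying the multi-layer convolutional branches of Figure 10 while keeping the idea of concatenated stride-2 filter banks. The channel counts mirror the 320 → 640 example in the figure.

```python
import torch
import torch.nn as nn

# Simplified sketch of the parallel grid-size reduction of Figure 10: a
# stride-2 convolution branch and a stride-2 pooling branch whose filter
# banks are concatenated along the channel axis.
class ReductionSketch(nn.Module):
    def __init__(self, c_in=320, c_conv=320):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_conv, kernel_size=3, stride=2)
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        # Both branches halve the grid; concatenation expands the filter bank.
        return torch.cat([self.conv(x), self.pool(x)], dim=1)

x = torch.randn(1, 320, 35, 35)
print(ReductionSketch()(x).shape)  # torch.Size([1, 640, 17, 17])
```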
# 6. Inception-v2
Here we are connecting the dots from above and propose a new architecture with improved performance on the ILSVRC 2012 classification benchmark. The layout of our network is given in table 1. Note that we have factorized the traditional 7 × 7 convolution into three 3 × 3 convolutions based on the same ideas as described in section 3.1. For the Inception part of the network, we have 3 traditional inception modules at the 35 × 35 grid with 288 filters each. This is reduced to a 17 × 17 grid with 768 filters using the grid reduction technique described in section 5. This is followed by 5 instances of the factorized inception modules as depicted in figure 5. This is reduced to an 8 × 8 × 1280 grid with the grid reduction technique depicted in figure 10. At the coarsest 8 × 8 level, we have two Inception modules as depicted in figure 6, with a concatenated output filter bank size of 2048 for each tile. The detailed structure of the network, including the sizes of filter banks inside the Inception modules, is given in the supplementary material, given in the model.txt that is in the tar-file of this submission.
type           patch size/stride or remarks   input size
conv           3×3/2                          299×299×3
conv           3×3/1                          149×149×32
conv padded    3×3/1                          147×147×32
pool           3×3/2                          147×147×64
conv           3×3/1                          73×73×64
conv           3×3/2                          71×71×80
conv           3×3/1                          35×35×192
3×Inception    As in figure 5                 35×35×288
5×Inception    As in figure 6                 17×17×768
2×Inception    As in figure 7                 8×8×1280
pool           8 × 8                          8×8×2048
linear         logits                         1×1×2048
softmax        classifier                     1×1×1000

Table 1. The outline of the proposed network architecture. The output size of each module is the input size of the next one. We are using variations of the reduction technique depicted in Figure 10 to reduce the grid sizes between the Inception blocks whenever applicable. We have marked the convolution with 0-padding, which is used to maintain the grid size. 0-padding is also used inside those Inception modules that do not reduce the grid size. All other layers do not use padding. The various filter bank sizes are chosen to observe principle 4 from Section 2.
However, we have observed that the quality of the network is relatively stable to variations as long as the principles from Section 2 are observed. Although our network is 42 layers deep, our computation cost is only about 2.5× higher than that of GoogLeNet and it is still much more efficient than VGGNet.
# 7. Model Regularization via Label Smoothing
Here we propose a mechanism to regularize the classiï¬er layer by estimating the marginalized effect of label-dropout during training.
For each training example x, our model computes the probability of each label k ∈ {1 . . . K}: p(k|x) = exp(z_k) / Σ_{i=1}^{K} exp(z_i). Here, z_i are the logits or unnormalized log-probabilities. Consider the ground-truth distribution over labels q(k|x) for this training example, normalized so that Σ_k q(k|x) = 1. For brevity, let us omit the dependence of p and q on example x. We define the loss for the example as the cross entropy: ℓ = −Σ_{k=1}^{K} log(p(k)) q(k). Minimizing this is equivalent to maximizing the expected log-likelihood of a label, where the label is selected according to its ground-truth distribution q(k). Cross-entropy loss is differentiable with respect to the logits z_k, and thus can be used for gradient training of deep models. The gradient has a rather simple form: ∂ℓ/∂z_k = p(k) − q(k), which is bounded between −1 and 1.
Consider the case of a single ground-truth label y, so that q(y) = 1 and q(k) = 0 for all k ≠ y. In this case, minimizing the cross entropy is equivalent to maximizing the log-likelihood of the correct label. For a particular example x with label y, the log-likelihood is maximized for q(k) = δ_{k,y}, where δ_{k,y} is the Dirac delta, which equals 1 for k = y and 0 otherwise. This maximum is not achievable for finite z_k, but is approached if z_y ≫ z_k for all k ≠ y, that is, if the logit corresponding to the ground-truth label is much greater than all other logits. This, however, can cause two problems. First, it may result in over-fitting: if the model learns to assign full probability to the ground-truth label for each training example, it is not guaranteed to generalize. Second, it encourages the differences between the largest logit and all others to become large, and this, combined with the bounded gradient ∂ℓ/∂z_k, reduces the ability of the model to adapt. Intuitively, this happens because the model becomes too confident about its predictions.
We propose a mechanism for encouraging the model to be less confident. While this may not be desired if the goal is to maximize the log-likelihood of training labels, it does regularize the model and makes it more adaptable. The method is very simple. Consider a distribution over labels u(k), independent of the training example x, and a smoothing parameter ε. For a training example with ground-truth label y, we replace the label distribution q(k|x) = δ_{k,y} with

q′(k|x) = (1 − ε) δ_{k,y} + ε u(k)

which is a mixture of the original ground-truth distribution q(k|x) and the fixed distribution u(k), with weights 1 − ε and ε, respectively. This can be seen as the distribution of the label k obtained as follows: first, set it to the ground-truth label k = y; then, with probability ε, replace k with a sample drawn from the distribution u(k). We propose to use the prior distribution over labels as u(k). In our experiments, we used the uniform distribution u(k) = 1/K, so that

q′(k) = (1 − ε) δ_{k,y} + ε/K
We refer to this change in ground-truth label distribution as label-smoothing regularization, or LSR.
Note that LSR achieves the desired goal of preventing the largest logit from becoming much larger than all others. Indeed, if this were to happen, then a single q(k) would approach 1 while all others would approach 0. This would result in a large cross-entropy with q′(k) because, unlike q(k) = δ_{k,y}, all q′(k) have a positive lower bound.
Another interpretation of LSR can be obtained by con- sidering the cross entropy:
H(q′, p) = −Σ_{k=1}^{K} log p(k) q′(k) = (1 − ε) H(q, p) + ε H(u, p)
Thus, LSR is equivalent to replacing a single cross-entropy loss H(q, p) with a pair of such losses H(q, p) and H(u, p). The second loss penalizes the deviation of the predicted label distribution p from the prior u, with the relative weight ε/(1 − ε). Note that this deviation could be equivalently captured by the KL divergence, since H(u, p) = D_KL(u‖p) + H(u) and H(u) is fixed. When u is the uniform distribution, H(u, p) is a measure of how dissimilar the predicted distribution p is to uniform, which could also be measured (but not equivalently) by the negative entropy −H(p); we have not experimented with this approach.
In our ImageNet experiments with K = 1000 classes, we used u(k) = 1/1000 and ε = 0.1. For ILSVRC 2012, we have found a consistent improvement of about 0.2% absolute both for top-1 error and top-5 error (cf. Table 3).
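A compact sketch of this loss, using the cross-entropy decomposition above with a uniform prior and the paper's ε = 0.1 and K = 1000, could look as follows; the batch of random logits is only for demonstration.

```python
import torch
import torch.nn.functional as F

# Sketch of label-smoothing regularization with a uniform prior u(k) = 1/K,
# implemented via the decomposition (1 - eps) * H(q, p) + eps * H(u, p).
def lsr_loss(logits, target, eps=0.1):
    log_p = F.log_softmax(logits, dim=-1)
    nll = -log_p.gather(-1, target.unsqueeze(-1)).squeeze(-1)  # H(q, p) per example
    uniform = -log_p.mean(dim=-1)                              # H(u, p) per example
    return ((1 - eps) * nll + eps * uniform).mean()

logits = torch.randn(8, 1000)
target = torch.randint(0, 1000, (8,))
print(lsr_loss(logits, target))
```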
# 8. Training Methodology
We have trained our networks with stochastic gradient descent utilizing the TensorFlow [1] distributed machine learning system using 50 replicas running each on a NVidia Kepler GPU with batch size 32 for 100 epochs. Our earlier experiments used momentum [19] with a decay of 0.9, while our best models were achieved using RMSProp [21] with decay of 0.9 and ε = 1.0. We used a learning rate of 0.045, decayed every two epochs using an exponential rate of 0.94. In addition, gradient clipping [14] with threshold 2.0 was found to be useful to stabilize the training. Model evaluations are performed using a running average of the parameters computed over time.
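As a rough illustration, the recipe above might be wired up as follows; the model, the data, and the epoch loop are placeholders, and only the optimizer, schedule, and clipping settings come from the text.

```python
import torch

# Rough sketch of the training recipe: RMSProp with decay (alpha) 0.9 and
# eps 1.0, learning rate 0.045 decayed by 0.94 every two epochs, and
# gradient clipping with threshold 2.0. Model and data are placeholders.
model = torch.nn.Linear(10, 10)
opt = torch.optim.RMSprop(model.parameters(), lr=0.045, alpha=0.9, eps=1.0)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=2, gamma=0.94)

for epoch in range(100):
    x, y = torch.randn(32, 10), torch.randn(32, 10)  # stand-in for one epoch of batches
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=2.0)
    opt.step()
    sched.step()  # called once per epoch -> lr *= 0.94 every 2 epochs
```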
# 9. Performance on Lower Resolution Input
A typical use-case of vision networks is for the post-classification of detection results, for example in the Multibox [4] context. This includes the analysis of a relatively small patch of the image containing a single object with some context. The task is to decide whether the center part of the patch corresponds to some object and to determine the class of the object if it does. The challenge is that objects tend to be relatively small and low-resolution. This raises the question of how to properly deal with lower resolution input.

The common wisdom is that models employing higher resolution receptive fields tend to result in significantly improved recognition performance. However it is important to distinguish between the effect of the increased resolution of the first layer receptive field and the effects of larger model capacity and computation. If we just change the resolution of the input without further adjustment to the model, then we end up using computationally much cheaper models to solve more difficult tasks. Of course, it is natural that these solutions lose out already because of the reduced computational effort. In order to make an accurate assessment, the model needs to analyze vague hints in order to be able to "hallucinate" the fine details. This is computationally costly. The question remains therefore: how much
Receptive Field Size   Top-1 Accuracy (single frame)
79 × 79                75.2%
151 × 151              76.4%
299 × 299              76.6%

Table 2. Comparison of recognition performance when the size of the receptive field varies, but the computational cost is constant.
does higher input resolution help if the computational effort is kept constant. One simple way to ensure constant effort is to reduce the strides of the first two layers in the case of lower resolution input, or to simply remove the first pooling layer of the network.
For this purpose we have performed the following three experiments:
1. 299 à 299 receptive ï¬eld with stride 2 and maximum pooling after the ï¬rst layer.
2. 151 à 151 receptive ï¬eld with stride 1 and maximum pooling after the ï¬rst layer.
3. 79 à 79 receptive ï¬eld with stride 1 and without pool- ing after the ï¬rst layer.
All three networks have almost identical computational cost. Although the third network is slightly cheaper, the cost of the pooling layer is marginal (within 1% of the total cost of the network). In each case, the networks were trained until convergence and their quality was measured on the validation set of the ImageNet ILSVRC 2012 classification benchmark. The results can be seen in table 2. Although the lower-resolution networks take longer to train, the quality of the final result is quite close to that of their higher resolution counterparts.

However, if one would just naively reduce the network size according to the input resolution, then the network would perform much more poorly. However, this would be an unfair comparison, as we would be comparing a 16 times cheaper model on a more difficult task.

These results of table 2 also suggest that one might consider using dedicated high-cost low resolution networks for smaller objects in the R-CNN [5] context.
# 10. Experimental Results and Comparisons
Table 3 shows the experimental results about the recog- nition performance of our proposed architecture (Inception- v2) as described in Section 6. Each Inception-v2 line shows the result of the cumulative changes including the high- lighted new modiï¬cation plus all the earlier ones. Label Smoothing refers to method described in Section 7. Fac- torized 7 à 7 includes a change that factorizes the ï¬rst 7 à 7 convolutional layer into a sequence of 3 à 3 convo- lutional layers. BN-auxiliary refers to the version in which
Network                          Top-1 Error   Top-5 Error   Cost Bn Ops
GoogLeNet [20]                   29%           9.2%          1.5
BN-GoogLeNet                     26.8%         -             1.5
BN-Inception [7]                 25.2%         7.8%          2.0
Inception-v2                     23.4%         -             3.8
Inception-v2 RMSProp             23.1%         6.3%          3.8
Inception-v2 Label Smoothing     22.8%         6.1%          3.8
Inception-v2 Factorized 7 × 7    21.6%         5.8%          4.8
Inception-v2 BN-auxiliary        21.2%         5.6%          4.8

Table 3. Single crop experimental results comparing the cumulative effects of the various contributing factors. We compare our numbers with the best published single-crop inference for Ioffe et al [7]. For the "Inception-v2" lines, the changes are cumulative and each subsequent line includes the new change in addition to the previous ones. The last line, with all the changes included, is what we refer to as "Inception-v3" below. Unfortunately, He et al [6] reports only 10-crop evaluation results, but not single crop results, which are reported in Table 4 below.
Network            Crops Evaluated   Top-1 Error   Top-5 Error
GoogLeNet [20]     10                -             9.15%
GoogLeNet [20]     144               -             7.89%
VGG [18]           -                 24.4%         6.8%
BN-Inception [7]   144               22%           5.82%
PReLU [6]          10                24.27%        7.38%
PReLU [6]          -                 21.59%        5.71%
Inception-v3       12                19.47%        4.48%
Inception-v3       144               18.77%        4.2%

Table 4. Single-model, multi-crop experimental results comparing the cumulative effects of the various contributing factors. We compare our numbers with the best published single-model inference results on the ILSVRC 2012 classification benchmark.
the fully connected layer of the auxiliary classifier is also batch-normalized, not just the convolutions. We are referring to the model in the last row of Table 3 as Inception-v3 and evaluate its performance in the multi-crop and ensemble settings.
All our evaluations are done on the 48238 non-blacklisted examples on the ILSVRC-2012 validation set, as suggested by [16]. We have evaluated all the 50000 examples as well and the results were roughly 0.1% worse in top-5 error and around 0.2% in top-1 error. In the upcoming version of this paper, we will verify our ensemble result on the test set, but our last evaluation of BN-Inception in spring [7] indicates that the test and validation set error tends to correlate very well.
Network            Models Evaluated   Crops Evaluated   Top-1 Error   Top-5 Error
VGGNet [18]        2                  -                 23.7%         6.8%
GoogLeNet [20]     7                  144               -             6.67%
PReLU [6]          -                  -                 -             4.94%
BN-Inception [7]   6                  144               20.1%         4.9%
Inception-v3       4                  144               17.2%         3.58%*

Table 5. Ensemble evaluation results comparing multi-model, multi-crop reported results. Our numbers are compared with the best published ensemble inference results on the ILSVRC 2012 classification benchmark. *All results, but the top-5 ensemble result reported, are on the validation set. The ensemble yielded 3.46% top-5 error on the validation set.
# 11. Conclusions
We have provided several design principles to scale up convolutional networks and studied them in the context of the Inception architecture. This guidance can lead to high performance vision networks that have a relatively modest computation cost compared to simpler, more monolithic architectures. Our highest quality version of Inception-v3 reaches 21.2% top-1 and 5.6% top-5 error for single crop evaluation on the ILSVRC 2012 classification, setting a new state of the art. This is achieved with relatively modest (2.5×) increase in computational cost compared to the network described in Ioffe et al [7]. Still our solution uses much less computation than the best published results based on denser networks: our model outperforms the results of He et al [6], cutting the top-5 (top-1) error by 25% (14%) relative, respectively, while being six times cheaper computationally and using at least five times less parameters (estimated). Our ensemble of four Inception-v3 models with multi-crop evaluation reaches 3.5% top-5 error, which represents an over 25% reduction to the best published results and is almost half of the error of the ILSVRC 2014 winning GoogLeNet ensemble.
We have also demonstrated that high quality results can be reached with receptive field resolution as low as 79 × 79. This might prove to be helpful in systems for detecting relatively small objects. We have studied how factorizing convolutions and aggressive dimension reductions inside neural networks can result in networks with relatively low computational cost while maintaining high quality. The combination of lower parameter count and additional regularization with batch-normalized auxiliary classifiers and label-smoothing allows for training high quality networks on relatively modest sized training sets.
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghe- mawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia,
R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In Proceedings of The 32nd International Conference on Machine Learning, 2015.
[3] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In Computer Vision–ECCV 2014, pages 184–199. Springer, 2014.

[4] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2155–2162. IEEE, 2014.
[5] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.

[6] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. arXiv preprint arXiv:1502.01852, 2015.

[7] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448–456, 2015.
[8] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1725–1732. IEEE, 2014.
[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
[10] A. Lavin. Fast algorithms for convolutional neural networks. arXiv preprint arXiv:1509.09308, 2015.
[11] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. arXiv preprint arXiv:1409.5185, 2014.

[12] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
[13] Y. Movshovitz-Attias, Q. Yu, M. C. Stumpe, V. Shet, S. Arnoud, and L. Yatziv. Ontological supervision for ï¬ne grained classiï¬cation of street view storefronts. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1693â1702, 2015.
[14] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063, 2012.
[15] D. C. Psichogios and L. H. Ungar. Svd-net: an algorithm that automatically selects network structure. IEEE transac- tions on neural networks/a publication of the IEEE Neural Networks Council, 5(3):513â515, 1993.
[16] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. 2014.
[17] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A uni- ï¬ed embedding for face recognition and clustering. arXiv preprint arXiv:1503.03832, 2015.
[18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Ma- chine Learning (ICML-13), volume 28, pages 1139â1147. JMLR Workshop and Conference Proceedings, May 2013.
[20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1â9, 2015.
[21] T. Tieleman and G. Hinton. Divide the gradient by a run- ning average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. Accessed: 2015- 11-05.
[22] A. Toshev and C. Szegedy. Deeppose: Human pose estima- tion via deep neural networks. In Computer Vision and Pat- tern Recognition (CVPR), 2014 IEEE Conference on, pages 1653â1660. IEEE, 2014.
[23] N. Wang and D.-Y. Yeung. Learning a deep compact image representation for visual tracking. In Advances in Neural Information Processing Systems, pages 809–817, 2013.
"id": "1502.01852"
} |
1511.08630 | A C-LSTM Neural Network for Text Classification | Neural network models have been demonstrated to be capable of achieving
remarkable performance in sentence and document modeling. Convolutional neural
network (CNN) and recurrent neural network (RNN) are two mainstream
architectures for such modeling tasks, which adopt totally different ways of
understanding natural languages. In this work, we combine the strengths of both
architectures and propose a novel and unified model called C-LSTM for sentence
representation and text classification. C-LSTM utilizes CNN to extract a
sequence of higher-level phrase representations, which are fed into a long
short-term memory recurrent neural network (LSTM) to obtain the sentence
representation. C-LSTM is able to capture both local features of phrases as
well as global and temporal sentence semantics. We evaluate the proposed
architecture on sentiment classification and question classification tasks. The
experimental results show that the C-LSTM outperforms both CNN and LSTM and can
achieve excellent performance on these tasks. | http://arxiv.org/pdf/1511.08630 | Chunting Zhou, Chonglin Sun, Zhiyuan Liu, Francis C. M. Lau | cs.CL | null | null | cs.CL | 20151127 | 20151130 | 2015:
5 1 0 2
# v o N 0 3
# ] L C . s c [
arXiv:1511.08630v2 [cs.CL]
2 v 0 3 6 8 0 . 1 1 5 1 : v i X r a
# A C-LSTM Neural Network for Text Classification
Chunting Zhou1, Chonglin Sun2, Zhiyuan Liu3, Francis C.M. Lau1 Department of Computer Science, The University of Hong Kong1 School of Innovation Experiment, Dalian University of Technology2 Department of Computer Science and Technology, Tsinghua University, Beijing3
# Abstract
Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural network (CNN) and recurrent neural network (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural languages. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-LSTM utilizes CNN to extract a sequence of higher-level phrase representations, which are fed into a long short-term memory recurrent neural network (LSTM) to obtain the sentence representation. C-LSTM is able to capture both local features of phrases as well as global and temporal sentence semantics. We evaluate the proposed architecture on sentiment classification and question classification tasks. The experimental results show that the C-LSTM outperforms both CNN and LSTM and can achieve excellent performance on these tasks.
# 1 Introduction
As one of the core steps in NLP, sentence modeling aims at representing sentences as meaningful features for tasks such as sentiment classiï¬cation. Traditional sentence modeling uses the bag-of- words model which often suffers from the curse of dimensionality; others use composition based methods instead, e.g., an algebraic operation over semantic word vectors to produce the semantic sentence vector. However, such methods may not
perform well due to the loss of word order information. More recent models for distributed sentence representation fall into two categories according to the form of input sentence: sequence-based models and tree-structured models. Sequence-based models construct sentence representations from word sequences, taking into account the relationship between successive words (Johnson and Zhang, 2015). Tree-structured models treat each word token as a node in a syntactic parse tree and learn sentence representations from leaves to the root in a recursive manner (Socher et al., 2013b).
Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have emerged as two mainstream architectures and are often combined with sequence-based or tree-structured models (Tai et al., 2015; Lei et al., 2015; Kim, 2014; Kalchbrenner et al., 2014; Mou et al., 2015).
Owing to the capability of capturing local correlations of spatial or temporal structures, CNNs have achieved top performance in computer vision, speech recognition and NLP. For sentence modeling, CNNs perform excellently in extracting n-gram features at different positions of a sentence through convolutional filters, and can learn short and long-range relations through pooling operations. CNNs have been successfully combined with both the sequence-based model (Denil et al., 2014; Kalchbrenner et al., 2014) and the tree-structured model (Mou et al., 2015) in sentence modeling.
The other popular neural network architecture, RNN, is able to handle sequences of any length and capture long-term dependencies. To avoid the
problem of gradient exploding or vanishing in the standard RNN, Long Short-term Memory RNN (LSTM) (Hochreiter and Schmidhuber, 1997) and other variants (Cho et al., 2014) were designed for better remembering and memory accesses. Along with the sequence-based (Tang et al., 2015) or the tree-structured (Tai et al., 2015) models, RNNs have achieved remarkable results in sentence or document modeling.
To conclude, CNN is able to learn local responses from temporal or spatial data but lacks the ability of learning sequential correlations; on the other hand, RNN is specialized for sequential modeling but unable to extract features in a parallel way. It has been shown that higher-level modeling of x_t can help to disentangle underlying factors of variation within the input, which should then make it easier to learn temporal structure between successive time steps (Pascanu et al., 2014). For example, Sainath et al. (Sainath et al., 2015) have obtained respectable improvements in WER by learning a deep LSTM from multi-scale inputs. We explore training the LSTM model directly from sequences of higher-level representations while preserving the sequence order of these representations. In this paper, we introduce a new architecture, C-LSTM for short, that combines CNN and LSTM to model sentences. To benefit from the advantages of both CNN and RNN, we design a simple end-to-end, unified architecture by feeding the output of a one-layer CNN into LSTM. The CNN is constructed on top of the pre-trained word vectors from massive unlabeled text data to learn higher-level representations of n-grams. Then, to learn sequential correlations from higher-level sequence representations, the feature maps of the CNN are organized as sequential window features to serve as the input of LSTM. In this way, instead of constructing the LSTM directly from the input sentence, we first transform each sentence into successive window (n-gram) features to help disentangle the factors of variations within sentences. We choose sequence-based input rather than relying on syntactic parse trees before feeding in the neural network; thus our model doesn't rely on any external language knowledge or complicated pre-processing.
In our experiments, we evaluate the semantic sentence representations learned from C-LSTM
with two tasks: sentiment classiï¬cation and 6-way question classiï¬cation. Our evaluations show that the C-LSTM model can achieve excellent results with several benchmarks as compared with a wide range of baseline models. We also show that the combination of CNN and LSTM outperforms individual multi-layer CNN models and RNN models, which indicates that LSTM can learn long- term dependencies from sequences of higher-level representations better than the other models.
# 2 Related Work
Deep neural network models have been applied in many NLP tasks, including distributed word representation (Mikolov et al., 2013b; Le and Mikolov, 2014), parsing (Socher et al., 2013a), statistical machine translation (Devlin et al., 2014), sentiment classification (Kim, 2014), etc. Learning distributed sentence representation through neural network models requires little external domain knowledge and can reach satisfactory results in related tasks like sentiment classification and text categorization.
In many recent sentence representation learning works, neural network models are constructed upon either the input word sequences or the transformed syntactic parse tree. Among them, convolutional neural network (CNN) and recurrent neural network (RNN) are two popular ones.
The capability of capturing local correlations along with extracting higher-level correlations through pooling empowers CNN to model sentences naturally from consecutive context windows. In (Collobert et al., 2011), Collobert et al. applied convolutional filters to successive windows for a given sequence to extract global features by max-pooling. As a slight variant, Kim et al. (2014) proposed a CNN architecture with multiple filters (with a varying window size) and two "channels" of word vectors. To capture word relations of varying sizes, Kalchbrenner et al. (2014) proposed a dynamic k-max pooling mechanism. In a more recent work (Lei et al., 2015), Tao et al. apply tensor-based operations between words to replace linear operations on concatenated word vectors in the standard convolutional layer and explore the non-linear interactions between non-consecutive n-grams. Mou et al. (2015) also explores convolutional models on tree-structured sentences.
As a sequence model, RNN is able to deal with variable-length input sequences and discover long-term dependencies. Various variants of RNN have been proposed to better store and access memories (Hochreiter and Schmidhuber, 1997; Cho et al., 2014). With the ability of explicitly modeling time-series data, RNNs are being increasingly applied to sentence modeling. For example, Tai et al. (2015) adjusted the standard LSTM to tree-structured topologies and obtained superior results over a sequential LSTM on related tasks.
In this paper, we stack CNN and LSTM in a unified architecture for semantic sentence modeling. The combination of CNN and LSTM can be seen in some computer vision tasks like image captioning (Xu et al., 2015) and speech recognition (Sainath et al., 2015). Most of these models use multi-layer CNNs and train CNNs and RNNs separately or throw the output of a fully connected layer of CNN into RNN as inputs. Our approach is different: we apply CNN to text data and feed consecutive window features directly to LSTM, and so our architecture enables LSTM to learn long-range dependencies from higher-order sequential features. In (Li et al., 2015), the authors suggest that sequence-based models are sufficient to capture the compositional semantics for many NLP tasks, thus in this work the CNN is directly built upon word sequences rather than the syntactic parse tree. Our experiments on sentiment classification and 6-way question classification tasks clearly demonstrate the superiority of our model over single CNN or LSTM model and other related sequence-based models.
# 3 C-LSTM Model
The architecture of the C-LSTM model is shown in Figure 1, which consists of two main components: convolutional neural network (CNN) and long short- term memory network (LSTM). The following two subsections describe how we apply CNN to extract higher-level sequences of word features and LSTM to capture long-term dependencies over window fea- ture sequences respectively.
Figure 1: The architecture of C-LSTM for sentence modeling. Blocks of the same color in the feature map layer and window feature sequence layer correspond to features for the same window. The dashed lines connect the feature of a window with the source feature map. The final output of the entire model is the last hidden unit of LSTM.
# 3.1 N-gram Feature Extraction through Convolution
The one-dimensional convolution involves a filter vector sliding over a sequence and detecting features at different positions. Let x_i ∈ R^d be the d-dimensional word vector for the i-th word in a sentence, and let x ∈ R^{L×d} denote the input sentence, where L is the length of the sentence. Let k be the length of the filter, and the vector m ∈ R^{k×d} is a filter for the convolution operation. For each position j in the sentence, we have a window vector w_j with k consecutive word vectors, denoted as:
w_j = [x_j, x_{j+1}, ..., x_{j+k-1}]   (1)
Here, the commas represent row vector concatenation. A filter m convolves with the window vectors (k-grams) at each position in a valid way to generate a feature map c ∈ R^{L-k+1}; each element c_j of the feature map for window vector w_j is produced as follows:
c_j = f(w_j ∘ m + b),   (2)
where ∘ is element-wise multiplication, b ∈ R is a bias term and f is a nonlinear transformation function that can be sigmoid, hyperbolic tangent, etc. In our case, we choose ReLU (Nair and Hinton, 2010) as the nonlinear function. The C-LSTM model uses multiple filters to generate multiple feature maps. For n filters with the same length, the generated n feature maps can be rearranged as feature representations for each window w_j,
W = [c_1; c_2; ...; c_n]   (3)
Here, semicolons represent column vector concatenation and c_i is the feature map generated with the i-th filter. Each row W_j of W ∈ R^{(L-k+1)×n} is the new feature representation generated from the n filters for the window vector at position j. The new successive higher-order window representations are then fed into the LSTM described below.
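To make equations (1)-(3) concrete, the following NumPy sketch (our illustration under assumed shapes, not the authors' released code) builds the window vectors w_j, applies n flattened filters, and returns the window feature matrix W:

```python
import numpy as np

def ngram_convolution(x, filters, b, k):
    """Compute the window feature matrix W of Eq. (1)-(3).
    x: (L, d) sentence matrix of word vectors.
    filters: (n, k*d); row i is filter m_i flattened, so the
             element-wise multiply-and-sum of Eq. (2) is a dot product.
    b: (n,) biases. Returns W of shape (L-k+1, n)."""
    L, d = x.shape
    windows = np.stack([x[j:j + k].ravel()           # w_j = [x_j, ..., x_{j+k-1}]
                        for j in range(L - k + 1)])
    return np.maximum(windows @ filters.T + b, 0.0)  # ReLU as f

# Toy usage: L=5 words, d=4 dims, n=3 filters of length k=2.
rng = np.random.default_rng(0)
W = ngram_convolution(rng.normal(size=(5, 4)),
                      rng.normal(size=(3, 8)), np.zeros(3), k=2)
print(W.shape)  # (4, 3): one n-dim feature vector per window
```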
A max-over-time pooling or dynamic k-max pooling is often applied to the feature maps after the convolution to select the most or the k most important features. However, LSTM is designed for sequence input, and pooling would break such sequence organization because the selected features are discontinuous. Since we stack an LSTM neural network on top of the CNN, we do not apply pooling after the convolution operation.
# 3.2 Long Short-Term Memory Networks
Recurrent neural networks (RNNs) are able to propagate historical information via a chain-like neural network architecture. While processing sequential data, an RNN looks at the current input x_t as well as the previous output of the hidden state h_{t-1} at each time step. However, standard RNNs become unable to learn long-term dependencies as the gap between two time steps becomes large. To address this issue, LSTM was first introduced in (Hochreiter and Schmidhuber, 1997) and re-emerged as a successful architecture after Sutskever et al. (2014) obtained remarkable performance in statistical machine translation. Although many variants of LSTM have been proposed, we adopt the standard architecture (Hochreiter and Schmidhuber, 1997) in this work.
The LSTM architecture has a range of repeated modules for each time step as in a standard RNN. At each time step, the output of the module is controlled by a set of gates in R^d as a function of the old hidden state h_{t-1} and the input at the current time step x_t: the forget gate f_t, the input gate i_t, and the output gate o_t. These gates collectively decide how to update the current memory cell c_t and the current hidden state h_t. We use d to denote the memory dimension in the LSTM; all vectors in this architecture share the same dimension. The LSTM transition functions are defined as follows:
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)
q_t = tanh(W_q · [h_{t-1}, x_t] + b_q)
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ q_t
h_t = o_t ⊙ tanh(c_t)   (4)
Here, σ is the logistic sigmoid function with output in [0, 1], tanh denotes the hyperbolic tangent function with output in [-1, 1], and ⊙ denotes element-wise multiplication. To understand the mechanism behind the architecture, we can view f_t as the function controlling to what extent the information from the old memory cell is thrown away, i_t as controlling how much new information is stored in the current memory cell, and o_t as controlling what to output based on the memory cell c_t. LSTM is explicitly designed for learning long-term dependencies in time-series data, and therefore we place LSTM on top of the convolution layer to learn such dependencies in the sequence of higher-level features.
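As a worked illustration of the transition functions in equation (4), here is a minimal single-step LSTM in NumPy; stacking the four gate pre-activations into one weight matrix is our convention, not something the paper prescribes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM transition (Eq. 4). W has shape (4d, d + input_dim)
    and maps [h_{t-1}, x_t] to the stacked i, f, q, o pre-activations."""
    d = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x_t]) + b
    i = sigmoid(z[:d])            # input gate: how much new info to store
    f = sigmoid(z[d:2 * d])       # forget gate: how much old memory to keep
    q = np.tanh(z[2 * d:3 * d])   # candidate memory content
    o = sigmoid(z[3 * d:])        # output gate
    c = f * c_prev + i * q        # new memory cell c_t
    h = o * np.tanh(c)            # new hidden state h_t
    return h, c
```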
# 4 Learning C-LSTM for Text Classification
For text classification, we regard the output of the hidden state at the last time step of LSTM as the document representation and we add a softmax layer on top. We train the entire model by minimizing the cross-entropy error. Given a training sample x^{(i)}, its true label y^{(i)} ∈ {1, 2, ..., k} where k is the number of possible labels, and the estimated probabilities ŷ_j^{(i)} ∈ [0, 1] for each label j ∈ {1, 2, ..., k}, the error is defined as:
L(x^{(i)}, y^{(i)}) = - Σ_{j=1}^{k} 1{y^{(i)} = j} log(ŷ_j^{(i)}),   (5)
where 1{condition} is an indicator function such that 1{condition is true} = 1 and 1{condition is false} = 0. We employ stochastic gradient descent (SGD) to learn the model parameters with the optimizer RMSprop (Tieleman and Hinton, 2012).
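A small sketch of the per-sample error of equation (5), assuming the final layer produces unnormalized class scores; the function name is illustrative.

```python
import numpy as np

def cross_entropy(scores, y):
    """Eq. (5) for one sample: softmax over scores, then the negative
    log-probability of the true label y (the 1{y=j} selector)."""
    p = np.exp(scores - scores.max())
    p /= p.sum()           # estimated probabilities \hat{y}_j
    return -np.log(p[y])
```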
# 4.1 Padding and Word Vector Initialization
First, we use maxlen to denote the maximum length of the sentences in the training set. As the convolution layer in our model requires fixed-length input, we pad each sentence that is shorter than maxlen with special symbols at the end that indicate unknown words. For a sentence in the test dataset, we pad sentences shorter than maxlen in the same way, but for sentences longer than maxlen, we simply cut extra words at the end to reach maxlen.
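A minimal sketch of this padding/cutting scheme; the unknown-word symbol name is our assumption.

```python
def pad_or_cut(tokens, maxlen, unk="<UNK>"):
    """Pad short sentences with unknown-word symbols at the end;
    cut extra words at the end of sentences longer than maxlen."""
    if len(tokens) < maxlen:
        return tokens + [unk] * (maxlen - len(tokens))
    return tokens[:maxlen]
```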
We initialize word vectors with the publicly available word2vec vectors1 that are pre-trained on about 100B words from the Google News dataset. The dimensionality of the word vectors is 300. We initialize the word vectors for unknown words from the uniform distribution [-0.25, 0.25]. We then fine-tune the word vectors along with the other model parameters during training.
# 4.2 Regularization
For regularization, we employ two commonly used techniques: dropout (Hinton et al., 2012) and L2 weight regularization. We apply dropout to prevent co-adaptation. In our model, we either apply dropout to the word vectors before feeding the sequence of words into the convolutional layer or to the output of LSTM before the softmax layer. The L2 regularization is applied to the weights of the softmax layer.
# 5 Experiments
We evaluate the C-LSTM model on two tasks: (1) sentiment classification, and (2) question type classification. In this section, we introduce the datasets and the experimental settings.
# 5.1 Datasets
Sentiment Classification: Our task here is to predict the sentiment polarity of movie reviews. We use the Stanford Sentiment Treebank (SST) benchmark (Socher et al., 2013b). This dataset consists of 11855 movie reviews and is split into train (8544), dev (1101), and test (2210). Sentences in this corpus are parsed and all phrases along with the sentences are fully annotated with
1http://code.google.com/p/word2vec/
5 labels: very positive, positive, neutral, negative, very negative. We consider two classification tasks on this dataset: fine-grained classification with 5 labels and binary classification obtained by removing the neutral labels. The binary dataset has a split of train (6920) / dev (872) / test (1821). Since the data is provided in the format of sub-sentences, we train the model on both phrases and sentences but only test on the sentences, as in several previous works (Socher et al., 2013b; Kalchbrenner et al., 2014).
Question type classification: Question classification is an important step in a question answering system that classifies a question into a specific type, e.g. "what is the highest waterfall in the United States?" is a question that belongs to "location". For this task, we use the TREC benchmark (Li and Roth, 2002). TREC divides all questions into 6 categories, including location, human, entity, abbreviation, description and numeric. The training dataset contains 5452 labelled questions while the testing dataset contains 500 questions.
# 5.2 Experimental Settings
We implement our model based on Theano (Bastien et al., 2012), a Python library that supports efficient symbolic differentiation and transparent use of a GPU. To benefit from the efficiency of parallel tensor computation, we train the model on a GPU. For text preprocessing, we only convert all characters in the dataset to lower case.
For SST, we conduct hyperparameter tuning (number of filters and filter length in CNN; memory dimension in LSTM; dropout rate and which layer to apply it to, etc.) on the validation data in the standard split. For TREC, we hold out 1000 samples from the training dataset for hyperparameter search and train the model on the remaining data.
In our final settings, we use only one convolutional layer and one LSTM layer for both tasks. For the filter size, we investigated filter lengths of 2, 3 and 4 in two cases: a) a single convolutional layer with a single filter length, and b) multiple convolutional layers in parallel with different filter lengths. Here we denote the number of filters of length i by n_i for ease of clarification. In the first case, each n-gram window is transformed into n_i convoluted
Model                  | Reported in                        | Fine-grained (%) | Binary (%)
SVM                    | (Socher et al., 2013b)             | 40.7             | 79.4
NBoW                   | (Kalchbrenner et al., 2014)        | 42.4             | 80.5
Paragraph Vector       | (Le and Mikolov, 2014)             | 48.7             | 87.8
RAE                    | (Socher, Pennington, et al., 2011) | 43.2             | 82.4
MV-RNN                 | (Socher et al., 2012)              | 44.4             | 82.9
RNTN                   | (Socher et al., 2013b)             | 45.7             | 85.4
DRNN                   | (Irsoy and Cardie, 2014)           | 49.8             | 86.6
CNN-non-static         | (Kim, 2014)                        | 48.0             | 87.2
CNN-multichannel       | (Kim, 2014)                        | 47.4             | 88.1
DCNN                   | (Kalchbrenner et al., 2014)        | 48.5             | 86.8
Molding-CNN            | (Lei et al., 2015)                 | 51.2             | 88.6
Dependency Tree-LSTM   | (Tai et al., 2015)                 | 48.4             | 85.7
Constituency Tree-LSTM | (Tai et al., 2015)                 | 51.0             | 88.0
LSTM                   | our implementation                 | 46.6             | 86.6
Bi-LSTM                | our implementation                 | 47.8             | 87.9
C-LSTM                 | our implementation                 | 49.2             | 87.8
Table 1: Comparisons with baseline models on the Stanford Sentiment Treebank. Fine-grained is a 5-class classification task; Binary is a 2-class classification task. The first block contains other baseline methods. The second block contains the recursive models. The third block contains methods related to convolutional neural networks. The fourth block contains methods using LSTM (the first two methods in this block also use syntactic parse trees). The last block is our model.
features after convolution and the sequence of window representations is fed into LSTM. In the latter case, since the number of windows generated by each convolutional layer varies with the filter length (see L - k + 1 below equation (3)), we cut the window sequence at the end based on the maximum filter length, which gives the shortest number of windows. Each window is then represented as the concatenation of the outputs from the different convolutional layers. We also exploit different combinations of filter lengths. We present further experimental analysis of the filter size exploration later. According to the experiments, we choose a single convolutional layer with filter length 3.
For SST, the number of filters of length 3 is set to 150 and the memory dimension of LSTM is also set to 150. The word vector layer and the LSTM layer are dropped out with a probability of 0.5. For TREC, the number of filters is set to 300 and the memory dimension is set to 300. The word vector layer and the LSTM layer are dropped out with a probability of 0.5. We also add L2 regularization with a factor of 0.001 to the weights in the softmax layer for both tasks.
# 6 Results and Model Analysis
In this section, we show our evaluation results on the sentiment classification and question type classification tasks. Moreover, we provide some model analysis on the filter size configuration.
# 6.1 Sentiment Classification
The results are shown in Table 1. We compare our model with a large set of well-performing models on the Stanford Sentiment Treebank.
Generally, the baseline models consist of recursive models, convolutional neural network models, LSTM-related models and others. The recursive models employ a syntactic parse tree as the sentence structure and the sentence representation is computed recursively in a bottom-up manner along the parse tree. Under this category, we choose the recursive autoencoder (RAE), matrix-vector (MV-RNN), tensor-based composition (RNTN) and multi-layer stacked (DRNN) recursive neural networks as baselines. Among CNNs, we compare with Kim's (2014) CNN model with fine-tuned word vectors (CNN-non-static) and multi-channels (CNN-multichannel), DCNN with dynamic k-max pool-
Model            | Acc  | Reported in
SVM              | 95.0 | Silva et al. (2011)
Paragraph Vector | 91.8 | Zhao et al. (2015)
Ada-CNN          | 92.4 | Zhao et al. (2015)
CNN-non-static   | 93.6 | Kim (2014)
CNN-multichannel | 92.2 | Kim (2014)
DCNN             | 93.0 | Kalchbrenner et al. (2014)
LSTM             | 93.2 | our implementation
Bi-LSTM          | 93.0 | our implementation
C-LSTM           | 94.6 | our implementation
Table 2: The 6-way question type classification accuracy on TREC.
ing, and Tao's CNN (Molding-CNN) with low-rank tensor-based non-linear and non-consecutive convolutions. Among LSTM-related models, we first compare with two tree-structured LSTM models (Dependency Tree-LSTM and Constituency Tree-LSTM) that adapt LSTM to tree-structured network topologies. We then implement a one-layer LSTM and a Bi-LSTM ourselves. Since we could not tune the Bi-LSTM results to be as good as those reported in (Tai et al., 2015) even when following their untied weight configuration, we report our own results. For the other baseline methods, we compare against an SVM with unigram and bigram features, NBoW with average word vector features, and paragraph vector, which infers a new paragraph vector for unseen documents.
To the best of our knowledge, we achieve the fourth best published result for the 5-class classification task on this dataset. For the binary classification task, we achieve results comparable to the state of the art. From Table 1, we have the following observations: (1) Although we did not beat the state-of-the-art results, as an end-to-end model our result is still promising and comparable with those models that heavily rely on linguistic annotations and knowledge, especially syntactic parse trees. This indicates that C-LSTM will be more feasible for various scenarios. (2) Comparing our results against single CNN and LSTM models shows that LSTM does learn long-term dependencies across sequences of higher-level representations better. In the future, we could explore how to learn more compact higher-level representations by replacing the standard convolution with other non-linear feature mapping functions or appealing to tree-structured topologies before the convolutional layer.
# 6.2 Question Type Classification
The prediction accuracy on TREC question classification is reported in Table 2. We compare our model with a variety of models. The SVM classifier uses unigrams, bigrams, wh-words, head words, POS tags, parser output, hypernyms and WordNet synsets as engineered features, plus 60 hand-coded rules. Ada-CNN is a self-adaptive hierarchical sentence model with gating networks. The other baseline models have been introduced in the previous task. From Table 2, we have the following observations: (1) Our result consistently outperforms all published neural baseline models, which means that C-LSTM captures the intentions of TREC questions well. (2) Our result is close to that of the state-of-the-art SVM that depends on highly engineered features. Such engineered features not only demand human labor but also lead to error propagation from the existing NLP tools, and thus may not generalize well to other datasets and tasks. With the ability to automatically learn semantic sentence representations, C-LSTM does not require any human-designed features and has better scalability.
# 6.3 Model Analysis
Here we investigate the impact of different filter configurations in the convolutional layer on the model performance.
In the convolutional layer of our model, filters are used to capture local n-gram features. Intuitively, multiple convolutional layers in parallel with differ-
Figure 2: Prediction accuracies on TREC questions with different filter size strategies. For the horizontal axis, S means a single convolutional layer with the same filter length, and M means multiple convolutional layers in parallel with different filter lengths.
ent filter sizes should perform better than a single convolutional layer with same-length filters, in that different filter sizes can exploit features of different n-grams. However, we found in our experiments that a single convolutional layer with filter length 3 always outperforms the other cases.
We show in Figure 2 the prediction accuracies on the 6-way question classification task using different filter configurations. Note that we observe a similar phenomenon in the sentiment classification task. For each filter configuration, we report in Figure 2 the best result under extensive grid search on the hyperparameters. It is shown that a single convolutional layer with filter length 3 performs best among all filter configurations. For the case of multiple convolutional layers in parallel, filter configurations including filter length 3 perform better than those without tri-gram filters, which further confirms that tri-gram features play a significant role in capturing local features in our tasks. We conjecture that LSTM learns better semantic sentence representations from sequences of tri-gram features.
# 7 Conclusion and Future Work
We have described a novel, unified model called C-LSTM that combines a convolutional neural network with a long short-term memory network (LSTM). C-LSTM is able to learn phrase-level features through
a convolutional layer; sequences of such higher-level representations are then fed into the LSTM to learn long-term dependencies. We evaluated the learned semantic sentence representations on sentiment classification and question type classification tasks with very satisfactory results.
In the future, we could explore ways to replace the standard convolution with tensor-based operations or tree-structured convolutions. We believe LSTM will benefit from more structured higher-level representations.
# References
[Bastien et al.2012] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop.

[Cho et al.2014] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.

[Collobert et al.2011] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537.
[Denil et al.2014] Misha Denil, Alban Demiraj, Nal Kalchbrenner, Phil Blunsom, and Nando de Freitas. 2014. Modelling, visualising and summarising documents with a single convolutional neural network. arXiv preprint arXiv:1406.3830.

[Devlin et al.2014] Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1370–1380.
[Hinton et al.2012] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. The Computing Research Repository (CoRR).

[Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.

[Irsoy and Cardie2014] Ozan Irsoy and Claire Cardie. 2014. Deep recursive neural networks for compositionality in language. In Advances in Neural Information Processing Systems, pages 2096–2104.

[Johnson and Zhang2015] Rie Johnson and Tong Zhang. 2015. Effective use of word order for text categorization with convolutional neural networks. Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL, pages 103–112.

[Kalchbrenner et al.2014] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. Association for Computational Linguistics (ACL).

[Kim2014] Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of Empirical Methods on Natural Language Processing.

[Le and Mikolov2014] Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1188–1196.

[Lei et al.2015] Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2015. Molding CNNs for text: non-linear, non-consecutive convolutions. In Proceedings of Empirical Methods on Natural Language Processing.

[Li and Roth2002] Xin Li and Dan Roth. 2002. Learning question classifiers. In Proceedings of the 19th International Conference on Computational Linguistics - Volume 1, pages 1–7. Association for Computational Linguistics.
[Li et al.2015] Jiwei Li, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of Empirical Methods on Natural Language Processing.

[Mikolov et al.2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
[Mou et al.2015] Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015. Discriminative neural sentence modeling by tree-based convolution. Unpublished manuscript: http://arxiv.org/abs/1504.01106v5.

[Nair and Hinton2010] Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814.
[Pascanu et al.2014] Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. How to construct deep recurrent neural networks. In Proceedings of the International Conference on Learning Representations (ICLR).

[Sainath et al.2015] Tara N Sainath, Oriol Vinyals, Andrew Senior, and Hasim Sak. 2015. Convolutional, long short-term memory, fully connected deep neural networks. IEEE International Conference on Acoustics, Speech and Signal Processing.

[Silva et al.2011] Joao Silva, Luísa Coheur, Ana Cristina Mendes, and Andreas Wichert. 2011. From symbolic to sub-symbolic information in question classification. Artificial Intelligence Review, 35(2):137–154.
[Socher et al.2012] Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of Empirical Methods on Natural Language Processing, pages 1201–1211.

[Socher et al.2013a] Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013a. Parsing with compositional vector grammars. In Proceedings of the ACL conference. Citeseer.
[Socher et al.2013b] Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of Empirical Methods on Natural Language Processing, volume 1631, page 1642. Citeseer.

[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.

[Tai et al.2015] Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. Association for Computational Linguistics (ACL).

[Tang et al.2015] Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of Empirical Methods on Natural Language Processing.

[Tieleman and Hinton2012] T. Tieleman and G. Hinton. 2012. Lecture 6.5 - RMSprop, COURSERA: Neural Networks for Machine Learning.
[Xu et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning.
[Zhao et al.2015] Han Zhao, Zhengdong Lu, and Pascal Poupart. 2015. Self-adaptive hierarchical sentence model. In Proceedings of International Joint Conferences on Artificial Intelligence. | {"id": "1511.08630"} |
1511.06939 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 |
Published as a conference paper at ICLR 2016
# SESSION-BASED RECOMMENDATIONS WITH RECURRENT NEURAL NETWORKS
Balázs Hidasi∗ Gravity R&D Inc. Budapest, Hungary balazs.hidasi@gravityrd.com
# Alexandros Karatzoglou Telefonica Research Barcelona, Spain alexk@tid.es
Linas Baltrunas† Netflix Los Gatos, CA, USA lbaltrunas@netflix.com
Domonkos Tikk Gravity R&D Inc. Budapest, Hungary domonkos.tikk@gravityrd.com
# ABSTRACT
We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs, such as a ranking loss function, that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches.
# 1 INTRODUCTION
Session-based recommendation is a relatively unappreciated problem in the machine learning and recommender systems community. Many e-commerce recommender systems (particularly those of small retailers) and most news and media sites do not typically track the user IDs of the users that visit their sites over a long period of time. While cookies and browser fingerprinting can provide some level of user recognizability, those technologies are often not reliable enough and moreover raise privacy concerns. Even if tracking is possible, lots of users have only one or two sessions on a smaller e-commerce site, and in certain domains (e.g. classified sites) the behavior of users often shows session-based traits. Thus subsequent sessions of the same user should be handled independently. Consequently, most session-based recommendation systems deployed for e-commerce are based on relatively simple methods that do not make use of a user profile, e.g. item-to-item similarity, co-occurrence, or transition probabilities. While effective, those methods often take only the last click or selection of the user into account, ignoring the information of past clicks.
The most common methods used in recommender systems are factor models (Koren et al., 2009; Weimer et al., 2007; Hidasi & Tikk, 2012) and neighborhood methods (Sarwar et al., 2001; Koren, 2008). Factor models work by decomposing the sparse user-item interactions matrix into a set of d-dimensional vectors, one for each item and user in the dataset. The recommendation problem is then treated as a matrix completion/reconstruction problem whereby the latent factor vectors are used to fill the missing entries by e.g. taking the dot product of the corresponding user-item latent factors. Factor models are hard to apply in session-based recommendation due to the absence
∗ The author spent 3 months at Telefonica Research during the research of this topic.
† This work was done while the author was a member of the Telefonica Research group in Barcelona, Spain.
of a user profile. On the other hand, neighborhood methods, which rely on computing similarities between items (or users), are based on co-occurrences of items in sessions (or user profiles). Neighborhood methods have been used extensively in session-based recommendations.
The past few years have seen the tremendous success of deep neural networks in a number of tasks such as image and speech recognition (Russakovsky et al., 2014; Hinton et al., 2012), where unstructured data is processed through several convolutional and standard layers of (usually rectified linear) units. Sequential data modeling has recently also attracted a lot of attention, with various flavors of RNNs being the model of choice for this type of data. Applications of sequence modeling range from text translation to conversation modeling to image captioning.
While RNNs have been applied to the aforementioned domains with remarkable success, little attention has been paid to the area of recommender systems. In this work we argue that RNNs can be applied to session-based recommendation with remarkable results; we deal with the issues that arise when modeling such sparse sequential data, and also adapt the RNN models to the recommender setting by introducing a new ranking loss function suited to the task of training these models. The session-based recommendation problem shares some similarities with some NLP-related problems in terms of modeling, as they both deal with sequences. In session-based recommendation we can consider the first item a user clicks when entering a web-site as the initial input of the RNN; we then would like to query the model based on this initial input for a recommendation. Each consecutive click of the user will then produce an output (a recommendation) that depends on all the previous clicks. Typically the item set to choose from in recommender systems can be in the tens of thousands or even hundreds of thousands. Apart from the large size of the item set, another challenge is that click-stream datasets are typically quite large, thus training time and scalability are really important. As in most information retrieval and recommendation settings, we are interested in focusing the modeling power on the top items that the user might be interested in; to this end we use a ranking loss function to train the RNNs.
2 RELATED WORK
2.1 SESSION-BASED RECOMMENDATION
Much of the work in the area of recommender systems has focused on models that work when a user identifier is available and a clear user profile can be built. In this setting, matrix factorization methods and neighborhood models have dominated the literature and are also employed on-line. One of the main approaches employed in session-based recommendation, and a natural solution to the problem of a missing user profile, is the item-to-item recommendation approach (Sarwar et al., 2001; Linden et al., 2003). In this setting an item-to-item similarity matrix is precomputed from the available session data, i.e. items that are often clicked together in sessions are deemed to be similar. This similarity matrix is then simply used during the session to recommend the most similar items to the one the user has currently clicked. While simple, this method has been proven to be effective and is widely employed. However, these methods only take into account the last click of the user, in effect ignoring the information of the past clicks.
A somewhat different approach to session-based recommendation are Markov Decision Processes (MDPs) (Shani et al., 2002). MDPs are models of sequential stochastic decision problems. An MDP is defined as a four-tuple ⟨S, A, Rwd, tr⟩ where S is the set of states, A is a set of actions, Rwd is a reward function and tr is the state-transition function. In recommender systems, actions can be equated with recommendations, and the simplest MDPs are essentially first-order Markov chains where the next recommendation can be simply computed on the basis of the transition probabilities between items. The main issue with applying Markov chains in session-based recommendation is that the state space quickly becomes unmanageable when trying to include all possible sequences of user selections.
The extended version of the General Factorization Framework (GFF) (Hidasi & Tikk, 2015) is capable of using session data for recommendations. It models a session by the sum of its events. It uses two kinds of latent representations for items: one represents the item itself, the other represents the item as part of a session. The session is then represented as the average of the feature vectors of the part-of-a-session item representations. However, this approach does not consider any ordering within the session.
2.2 DEEP LEARNING IN RECOMMENDERS
One of the first related methods in the neural networks literature was the use of Restricted Boltzmann Machines (RBM) for Collaborative Filtering (Salakhutdinov et al., 2007). In this work an RBM is used to model user-item interaction and perform recommendations. This model has been shown to be one of the best performing Collaborative Filtering models. Deep models have also been used to extract features from unstructured content such as music or images that are then used together with more conventional collaborative filtering models. In Van den Oord et al. (2013) a convolutional deep network is used to extract features from music files that are then used in a factor model. More recently Wang et al. (2015) introduced a more generic approach whereby a deep network is used to extract generic content features from any type of item; these features are then incorporated in a standard collaborative filtering model to enhance the recommendation performance. This approach seems to be particularly useful in settings where there is not sufficient user-item interaction information.
# 3 RECOMMENDATIONS WITH RNNS
Recurrent Neural Networks have been devised to model variable-length sequence data. The main difference between RNNs and conventional feedforward deep models is the existence of an internal hidden state in the units that compose the network. Standard RNNs update their hidden state h using the following update function:
h_t = g(W x_t + U h_{t-1})   (1)

where g is a smooth and bounded function such as the logistic sigmoid, and x_t is the input of the unit at time t. An RNN outputs a probability distribution over the next element of the sequence, given its current state h_t.
A Gated Recurrent Unit (GRU) (Cho et al., 2014) is a more elaborate model of an RNN unit that aims at dealing with the vanishing gradient problem. GRU gates essentially learn when and by how much to update the hidden state of the unit. The activation of the GRU is a linear interpolation between the previous activation and the candidate activation ĥ_t:

h_t = (1 - z_t) h_{t-1} + z_t ĥ_t   (2)
where the update gate is given by:
z_t = σ(W_z x_t + U_z h_{t-1})   (3)
while the candidate activation ĥ_t is computed in a similar manner:
ĥ_t = tanh(W x_t + U(r_t ⊙ h_{t-1}))   (4)
and finally the reset gate r_t is given by:

r_t = σ(W_r x_t + U_r h_{t-1})   (5)
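For concreteness, one GRU update following equations (2)-(5) can be sketched in NumPy as below; the parameter dictionary layout is our own convention.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, p):
    """One GRU transition. p holds W_z, U_z (update gate, Eq. 3),
    W_r, U_r (reset gate, Eq. 5) and W, U (candidate, Eq. 4)."""
    z = sigmoid(p["Wz"] @ x_t + p["Uz"] @ h_prev)           # update gate z_t
    r = sigmoid(p["Wr"] @ x_t + p["Ur"] @ h_prev)           # reset gate r_t
    h_cand = np.tanh(p["W"] @ x_t + p["U"] @ (r * h_prev))  # candidate
    return (1.0 - z) * h_prev + z * h_cand                  # Eq. (2)
```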
3.1 CUSTOMIZING THE GRU MODEL
We used the GRU-based RNN in our models for session-based recommendations. The input of the network is the actual state of the session while the output is the item of the next event in the session. The state of the session can either be the item of the actual event or the events in the session so far. In the former case 1-of-N encoding is used, i.e. the input vector's length equals the number of items and only the coordinate corresponding to the active item is one, the others are zeros. The latter setting uses a weighted sum of these representations, in which events are discounted if they have occurred earlier. For the sake of stability, the input vector is then normalized. We expect this to help because it reinforces the memory effect: the reinforcement of very local ordering constraints which are not well captured by the longer memory of the RNN. We also experimented with adding an additional embedding layer, but the 1-of-N encoding always performed better.
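A minimal sketch of the second, whole-session input encoding; the paper does not give the exact discount schedule, so the geometric discount below is an assumption.

```python
import numpy as np

def session_input(events, n_items, discount=0.9):
    """Weighted sum of 1-of-N event vectors: the most recent event gets
    weight 1, earlier events are discounted, and the result is normalized."""
    v = np.zeros(n_items)
    for age, item in enumerate(reversed(events)):
        v[item] += discount ** age
    return v / np.linalg.norm(v)
```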
The core of the network is the GRU layer(s) and additional feedforward layers can be added between the last layer and the output. The output is the predicted preference of the items, i.e. the likelihood of being the next in the session for each item. When multiple GRU layers are used, the hidden state of the previous layer is the input of the next one. The input can also be optionally connected
Figure 1: General architecture of the network. Processing of one event of the event stream at once.
to GRU layers deeper in the network, as we found that this improves performance. See the whole architecture in Figure 1, which depicts the representation of a single event within a time series of events.
Since recommender systems are not the primary application area of recurrent neural networks, we modified the base network to better suit the task. We also considered practical points so that our solution could be applied in a live environment.
3.1.1 SESSION-PARALLEL MINI-BATCHES
RNNs for natural language processing tasks usually use in-sequence mini-batches. For example it is common to use a sliding window over the words of sentences and put these windowed fragments next to each other to form mini-batches. This does not fit our task, because (1) the length of sessions can be very different, even more so than that of sentences: some sessions consist of only 2 events, while others may range over a few hundred; (2) our goal is to capture how a session evolves over time, so breaking it down into fragments would make no sense. Therefore we use session-parallel mini-batches. First, we create an order for the sessions. Then, we use the first event of the first X sessions to form the input of the first mini-batch (the desired output is the second event of each active session). The second mini-batch is formed from the second events, and so on. If any of the sessions end, the next available session is put in its place. Sessions are assumed to be independent, thus we reset the appropriate hidden state when this switch occurs. See Figure 2 for more details.
Figure 2: Session-parallel mini-batch creation
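A minimal generator sketch of this scheme (our illustration; the actual implementation is in Theano). For simplicity it stops once the session pool is exhausted, which truncates the tail of the data compared to a production implementation.

```python
import numpy as np

def session_parallel_batches(sessions, batch_size):
    """Yield (input_items, target_items, reset_mask): slot i of every batch
    follows one active session; when that session ends, the next available
    session takes the slot and reset_mask flags its hidden state for reset."""
    active = list(range(min(batch_size, len(sessions))))
    pos = [0] * len(active)
    next_session = len(active)
    reset = np.zeros(len(active), dtype=bool)
    while True:
        inp = [sessions[s][p] for s, p in zip(active, pos)]
        tgt = [sessions[s][p + 1] for s, p in zip(active, pos)]
        yield inp, tgt, reset.copy()
        reset[:] = False
        for i in range(len(active)):
            pos[i] += 1
            if pos[i] + 1 >= len(sessions[active[i]]):  # session finished
                if next_session >= len(sessions):
                    return
                active[i], pos[i], reset[i] = next_session, 0, True
                next_session += 1
```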
3.1.2 SAMPLING ON THE OUTPUT
Recommender systems are especially useful when the number of items is large. Even for a medium-sized webshop this is in the range of tens of thousands, but on larger sites it is not rare to have
hundreds of thousands of items or even a few millions. Calculating a score for each item in each step would make the algorithm scale with the product of the number of items and the number of events. This would be unusable in practice. Therefore we have to sample the output and only compute the score for a small subset of the items. This also entails that only some of the weights will be updated. Besides the desired output, we need to compute scores for some negative examples and modify the weights so that the desired output is highly ranked.
The natural interpretation of an arbitrary missing event is that the user did not know about the existence of the item and thus there was no interaction. However, there is a low probability that the user did know about the item and chose not to interact because she disliked it. The more popular the item, the more probable it is that the user knows about it, thus it is more likely that a missing event expresses dislike. Therefore we should sample items in proportion to their popularity. Instead of generating separate samples for each training example, we use the items from the other training examples of the mini-batch as negative examples. The benefit of this approach is that we can further reduce computational time by skipping the sampling. Additionally, there are also benefits on the implementation side, from less complex code to faster matrix operations. Meanwhile, this approach is also a popularity-based sampling, because the likelihood of an item being in the other training examples of the mini-batch is proportional to its popularity.
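In matrix form, the trick amounts to scoring only the mini-batch's own target items, so each example's positive item serves as a negative sample for the other examples; the names below are illustrative.

```python
import numpy as np

def batch_scores(H, V, targets):
    """H: (B, d) hidden states, V: (n_items, d) output weights,
    targets: (B,) target item indices. Returns a (B, B) matrix whose
    diagonal holds the positive scores and whose off-diagonal entries
    act as popularity-based negative samples."""
    return H @ V[targets].T
```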
# 3.1.3 RANKING LOSS
The core of recommender systems is the relevance-based ranking of items. Although the task can also be interpreted as a classification task, learning-to-rank approaches (Rendle et al., 2009; Shi et al., 2012; Steck, 2015) generally outperform other approaches. Ranking can be pointwise, pairwise or listwise. Pointwise ranking estimates the score or the rank of items independently of each other and the loss is defined in a way so that the rank of relevant items should be low. Pairwise ranking compares the score or the rank of pairs of a positive and a negative item and the loss enforces that the rank of the positive item should be lower than that of the negative one. Listwise ranking uses the scores and ranks of all items and compares them to the perfect ordering. As it includes sorting, it is usually computationally more expensive and thus not used often. Also, if there is only one relevant item, as in our case, listwise ranking can be solved via pairwise ranking.
We included several pointwise and pairwise ranking losses in our solution. We found that pointwise ranking was unstable with this network (see Section 4 for more comments). Pairwise ranking losses on the other hand performed well. We use the following two.
• BPR: Bayesian Personalized Ranking (Rendle et al., 2009) is a matrix factorization method that uses a pairwise ranking loss. It compares the score of a positive and a sampled negative item. Here we compare the score of the positive item with several sampled items and use their average as the loss. The loss at a given point in one session is defined as: L_s = -(1/N_S) · Σ_{j=1}^{N_S} log(σ(r̂_{s,i} - r̂_{s,j})), where N_S is the sample size, r̂_{s,k} is the score on item k at the given point of the session, i is the desired item (the next item in the session) and the j are the negative samples.
• TOP1: This ranking loss was devised by us for this task. It is the regularized approximation of the relative rank of the relevant item. The relative rank of the relevant item is given by (1/N_S) · Σ_{j=1}^{N_S} I{r̂_{s,j} > r̂_{s,i}}. We approximate I{·} with a sigmoid. Optimizing for this would modify the parameters so that the score for i would be high. However, this is unstable as certain positive items also act as negative examples and thus scores tend to become increasingly higher. To avoid this, we want to force the scores of the negative examples to be around zero. This is a natural expectation towards the scores of negative items. Thus we added a regularization term to the loss. It is important that this term is in the same range as the relative rank and acts similarly to it. The final loss function is as follows: L_s = (1/N_S) · Σ_{j=1}^{N_S} (σ(r̂_{s,j} - r̂_{s,i}) + σ(r̂_{s,j}²))
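Working from the (B × B) score matrix of the sampling sketch above, the two losses can be written as below; keeping the j = i diagonal term in the averages (a constant offset for BPR) is a simplification of ours, not necessarily the authors' exact implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpr_loss(scores):
    """scores: (B, B); diagonal = positives r_{s,i}, rest = negatives."""
    diff = np.diag(scores)[:, None] - scores   # r_{s,i} - r_{s,j}
    return -np.mean(np.log(sigmoid(diff)))

def top1_loss(scores):
    """Regularized approximation of the relative rank of the positive."""
    diff = scores - np.diag(scores)[:, None]   # r_{s,j} - r_{s,i}
    return np.mean(sigmoid(diff) + sigmoid(scores ** 2))
```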
# 4 EXPERIMENTS
We evaluate the proposed recurrent neural network against popular baselines on two datasets.
The first dataset is that of the RecSys Challenge 20151. This dataset contains click-streams of an e-commerce site that sometimes end in purchase events. We work with the training set of the challenge and keep only the click events. We filter out sessions of length 1. The network is trained on ~6 months of data, containing 7,966,257 sessions of 31,637,239 clicks on 37,483 items. We use the sessions of the subsequent day for testing. Each session is assigned to either the training or the test set; we do not split the data mid-session. Because of the nature of collaborative filtering methods, we filter out clicks from the test set where the item clicked is not in the train set. Sessions of length one are also removed from the test set. After the preprocessing we are left with 15,324 sessions of 71,222 events for the test set. This dataset will be referred to as RSC15.
The second dataset is collected from a YouTube-like OTT video service platform. Events of watching a video for at least a certain amount of time were collected. Only certain regions were subject to this collection, which lasted for somewhat less than 2 months. During this time item-to-item recommendations were provided after each video at the left side of the screen. These were provided by a selection of different algorithms and influenced the behavior of the users. Preprocessing steps are similar to those of the other dataset, with the addition of filtering out very long sessions as they were probably generated by bots. The training data consists of all but the last day of the aforementioned period and has ~3 million sessions of ~13 million watch events on 330 thousand videos. The test set contains the sessions of the last day of the collection period and has ~37 thousand sessions with ~180 thousand watch events. This dataset will be referred to as VIDEO.
The evaluation is done by providing the events of a session one-by-one and checking the rank of the item of the next event. The hidden state of the GRU is reset to zero after a session finishes. Items are ordered in descending order by their score and their position in this list is their rank. With RSC15, all of the 37,483 items of the train set were ranked. However, this would have been impractical with VIDEO, due to the large number of items. There we ranked the desired item against the most popular 30,000 items. This has a negligible effect on the evaluation, as rarely visited items often get low scores. Also, popularity-based pre-filtering is common in practical recommender systems.
As recommender systems can only recommend a few items at once, the actual item a user might pick should be amongst the first few items of the list. Therefore, our primary evaluation metric is recall@20, that is, the proportion of cases having the desired item amongst the top-20 items over all test cases. Recall does not consider the actual rank of the item as long as it is amongst the top-N. This models certain practical scenarios well where there is no highlighting of recommendations and the absolute order does not matter. Recall also usually correlates well with important online KPIs, such as click-through rate (CTR) (Liu et al., 2012; Hidasi & Tikk, 2012). The second metric used in the experiments is MRR@20 (Mean Reciprocal Rank), that is, the average of the reciprocal ranks of the desired items. The reciprocal rank is set to zero if the rank is above 20. MRR takes into account the rank of the item, which is important in cases where the order of recommendations matters (e.g. lower ranked items are only visible after scrolling).
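A small sketch of how both metrics can be computed from a matrix of scores; resolving ties optimistically (rank = 1 + number of strictly higher scores) is one of several reasonable conventions.

```python
import numpy as np

def recall_mrr_at_k(scores, targets, k=20):
    """scores: (N, n_items) for N test events; targets: (N,) indices of
    the actual next items. Returns (recall@k, MRR@k)."""
    t = scores[np.arange(len(targets)), targets]
    ranks = (scores > t[:, None]).sum(axis=1) + 1
    hits = ranks <= k
    return hits.mean(), np.where(hits, 1.0 / ranks, 0.0).mean()
```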
4.1 BASELINES
We compare the proposed network to a set of commonly used baselines.
• POP: Popularity predictor that always recommends the most popular items of the training set. Despite its simplicity it is often a strong baseline in certain domains.
• S-POP: This baseline recommends the most popular items of the current session. The recommendation list changes during the session as items gain more events. Ties are broken using global popularity values. This baseline is strong in domains with high repetitiveness.
• Item-KNN: Items similar to the actual item are recommended by this baseline. Similarity is defined as the cosine similarity between the vectors of their sessions, i.e. the number of co-occurrences of two items in sessions divided by the square root of the product of the numbers of sessions in which the individual items occurred. Regularization is also included to avoid coincidental high similarities of rarely visited items (see the sketch after this list). This baseline is one of the most common item-to-item solutions in practical systems, providing recommendations in the "others who viewed this item also viewed these ones" setting. Despite its simplicity it is usually a strong baseline (Linden et al., 2003; Davidson et al., 2010).
# 1http://2015.recsyschallenge.com/
Table 1: Recall@20 and MRR@20 using the baseline methods
Baseline  | RSC15 Recall@20 | RSC15 MRR@20 | VIDEO Recall@20 | VIDEO MRR@20
POP       | 0.0050          | 0.0012       | 0.0499          | 0.0117
S-POP     | 0.2672          | 0.1775       | 0.1301          | 0.0863
Item-KNN  | 0.5065          | 0.2048       | 0.5508          | 0.3381
BPR-MF    | 0.2574          | 0.0618       | 0.0692          | 0.0374
# Table 2: Best parametrizations for datasets/loss functions
Dataset | Loss          | Mini-batch | Dropout | Learning rate | Momentum
RSC15   | TOP1          | 50         | 0.5     | 0.01          | 0
RSC15   | BPR           | 50         | 0.2     | 0.05          | 0.2
RSC15   | Cross-entropy | 500        | 0       | 0.01          | 0
VIDEO   | TOP1          | 50         | 0.4     | 0.05          | 0
VIDEO   | BPR           | 50         | 0.3     | 0.1           | 0
VIDEO   | Cross-entropy | 200        | 0.1     | 0.05          | 0.3
• BPR-MF: BPR-MF (Rendle et al., 2009) is one of the commonly used matrix factorization methods. It optimizes a pairwise ranking objective function (see Section 3) via SGD. Matrix factorization cannot be applied directly to session-based recommendations, because new sessions do not have precomputed feature vectors. However, we can overcome this by using the average of the item feature vectors of the items that have occurred in the session so far as the user feature vector. In other words, we average the similarities of the feature vectors between a recommendable item and the items of the session so far.
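As referenced in the Item-KNN bullet above, a minimal sketch of the regularized cosine similarity; placing the constant reg in the denominator is one plausible reading of the regularization, as the paper does not give the exact formula.

```python
import numpy as np

def item_knn_similarity(S, reg=20.0):
    """S: (n_items, n_sessions) binary item-session matrix.
    Cosine similarity with a regularized denominator to damp
    coincidental matches of rarely visited items."""
    co = S @ S.T                                 # co-occurrence counts
    support = np.diag(co).astype(float)          # sessions per item
    sim = co / (np.sqrt(np.outer(support, support)) + reg)
    np.fill_diagonal(sim, 0.0)                   # never recommend the item itself
    return sim
```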
Table 1 shows the results for the baselines. The item-KNN approach clearly dominates the other methods.
4.2 PARAMETER & STRUCTURE OPTIMIZATION
We optimized the hyperparameters by running 100 experiments at randomly selected points of the parameter space for each dataset and loss function. The best parametrization was further tuned by individually optimizing each parameter. The number of hidden units was set to 100 in all cases. The best performing parameters were then used with hidden layers of different sizes. The optimization was done on a separate validation set. The networks were then retrained on the training plus validation sets and evaluated on the final test set.
The best performing parametrizations are summarized in Table 2. Weight matrices were initialized with random numbers drawn uniformly from [-x, x], where x depends on the number of rows and columns of the matrix. We experimented with both rmsprop (Dauphin et al., 2015) and adagrad (Duchi et al., 2011). We found adagrad to give better results.
We briefly experimented with units other than GRU. We found both the classic RNN unit and LSTM to perform worse.
We tried out several loss functions. Pointwise ranking based losses, such as cross-entropy and MRR optimization (as in Steck (2015)), were usually unstable, even with regularization. For example, cross-entropy yielded only 10 and 6 numerically stable networks of the 100 random runs for RSC15 and VIDEO respectively. We assume that this is due to independently trying to achieve high scores for the desired items, while the negative push is small for the negative samples. On the other hand, pairwise ranking-based losses performed well. We found the ones introduced in Section 3 (BPR and TOP1) to perform the best.
Several architectures were examined and a single layer of GRU units was found to be the best performer. Adding additional layers always resulted in worse performance w.r.t. both the training loss and the recall and MRR measured on the test set. We assume that this is due to the generally short
Table 3: Recall@20 and MRR@20 for different types of a single layer of GRU, compared to the best baseline (item-KNN). Best results per dataset are highlighted.
Loss / #Units      | RSC15 Recall@20  | RSC15 MRR@20     | VIDEO Recall@20  | VIDEO MRR@20
TOP1 100           | 0.5853 (+15.55%) | 0.2305 (+12.58%) | 0.6141 (+11.50%) | 0.3511 (+3.84%)
BPR 100            | 0.6069 (+19.82%) | 0.2407 (+17.54%) | 0.5999 (+8.92%)  | 0.3260 (-3.56%)
Cross-entropy 100  | 0.6074 (+19.91%) | 0.2430 (+18.65%) | 0.6372 (+15.69%) | 0.3720 (+10.04%)
TOP1 1000          | 0.6206 (+22.53%) | 0.2693 (+31.49%) | 0.6624 (+20.27%) | 0.3891 (+15.08%)
BPR 1000           | 0.6322 (+24.82%) | 0.2467 (+20.47%) | 0.6311 (+14.58%) | 0.3136 (-7.23%)
Cross-entropy 1000 | 0.5777 (+14.06%) | 0.2153 (+5.16%)  | –                | –
lifespan of the sessions, which does not require multiple time scales of different resolutions to be properly represented. However, the exact reason for this is unknown as of yet and requires further research. Using an embedding of the items gave slightly worse results, therefore we kept the 1-of-N encoding. Also, putting all previous events of the session on the input instead of only the preceding one did not result in additional accuracy gain, which is not surprising as GRU, like LSTM, has both long and short term memory. Adding additional feed-forward layers after the GRU layer did not help either. However, increasing the size of the GRU layer improved the performance. We also found that it is beneficial to use tanh as the activation function of the output layer.
4.3 RESULTS
Table 3 shows the results of the best performing networks. Cross-entropy for the VIDEO data with 1000 hidden units was numerically unstable and thus we present no results for that scenario. The results are compared to the best baseline (item-KNN). We show results with 100 and 1000 hidden units. The running time depends on the parameters and the dataset. Generally speaking, the difference in runtime between the smaller and the larger variant is not too high on a GeForce GTX Titan X GPU and the training of the network can be done in a few hours2. On CPU, the smaller network can be trained in a practically acceptable timeframe. Frequent retraining is often desirable for recommender systems, because new users and items are introduced frequently.
The GRU-based approach has substantial gains over item-KNN in both evaluation metrics on both datasets, even when the number of units is 1003. Increasing the number of units further improves the results for the pairwise losses, but the accuracy decreases for cross-entropy. Even though cross-entropy gives better results with 100 hidden units, the pairwise loss variants surpass these results as the number of units increases. Although increasing the number of units increases the training times, we found that it was not too expensive to move from 100 units to 1000 on GPU. Also, the cross-entropy based loss was found to be numerically unstable as a result of the network individually trying to increase the score for the target items, while the negative push is relatively small for the other items. Therefore we suggest using either of the two pairwise losses. The TOP1 loss performs slightly better on these two datasets, resulting in a ~20-30% accuracy gain over the best performing baseline.
# 5 CONCLUSION & FUTURE WORK
In this paper we applied a kind of modern recurrent neural network (GRU) to a new application domain: recommender systems. We chose the task of session-based recommendations, because it is a practically important area, but not well researched. We modified the basic GRU in order to fit the task better by introducing session-parallel mini-batches, mini-batch based output sampling and a ranking loss function. We showed that our method can significantly outperform popular baselines used for this task. We think that our work can be the basis of both deep learning applications in recommender systems and session-based recommendations in general.
² Using Theano with fixes for the subtensor operators on GPU.
³ Except for using the BPR loss on the VIDEO data and evaluating for MRR.
Our immediate future work will focus on a more thorough examination of the proposed network. We also plan to train the network on automatically extracted item representations that are built on the content of the item itself (e.g. thumbnail, video, text) instead of the current input.
# ACKNOWLEDGMENTS
The work leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under CrowdRec Grant Agreement No. 610594.
# REFERENCES
Cho, Kyunghyun, van Merriënboer, Bart, Bahdanau, Dzmitry, and Bengio, Yoshua. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.

Dauphin, Yann N, de Vries, Harm, Chung, Junyoung, and Bengio, Yoshua. Rmsprop and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015.

Davidson, James, Liebald, Benjamin, Liu, Junning, et al. The YouTube video recommendation system. In RecSys'10: ACM Conf. on Recommender Systems, pp. 293–296, 2010. ISBN 978-1-60558-906-0.

Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.

Hidasi, B. and Tikk, D. Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback. In ECML-PKDD'12, Part II, number 7524 in LNCS, pp. 67–82. Springer, 2012.

Hidasi, Balázs and Tikk, Domonkos. General factorization framework for context-aware recommendations. Data Mining and Knowledge Discovery, pp. 1–30, 2015. ISSN 1384-5810. doi: 10.1007/s10618-015-0417-y. URL http://dx.doi.org/10.1007/s10618-015-0417-y.

Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012.

Koren, Y. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In SIGKDD'08: ACM Int. Conf. on Knowledge Discovery and Data Mining, pp. 426–434, 2008.

Koren, Yehuda, Bell, Robert, and Volinsky, Chris. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.

Linden, G., Smith, B., and York, J. Amazon.com recommendations: Item-to-item collaborative filtering. Internet Computing, IEEE, 7(1):76–80, 2003.

Liu, Qiwen, Chen, Tianjian, Cai, Jing, and Yu, Dianhai. Enlister: Baidu's recommender system for the biggest Chinese Q&A website. In RecSys-12: Proc. of the 6th ACM Conf. on Recommender Systems, pp. 285–288, 2012.

Rendle, S., Freudenthaler, C., Gantner, Z., and Schmidt-Thieme, L. BPR: Bayesian personalized ranking from implicit feedback. In UAI'09: 25th Conf. on Uncertainty in Artificial Intelligence, pp. 452–461, 2009. ISBN 978-0-9749039-5-8.
Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael S., Berg, Alexander C., and Li, Fei-Fei. Imagenet large scale visual recognition challenge. CoRR, abs/1409.0575, 2014. URL http://arxiv.org/abs/1409.0575.
Salakhutdinov, Ruslan, Mnih, Andriy, and Hinton, Geoffrey. Restricted Boltzmann machines for collaborative filtering. In Proceedings of the 24th International Conference on Machine Learning, pp. 791–798. ACM, 2007.

Sarwar, Badrul, Karypis, George, Konstan, Joseph, and Riedl, John. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web, pp. 285–295. ACM, 2001.

Shani, Guy, Brafman, Ronen I, and Heckerman, David. An MDP-based recommender system. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pp. 453–460. Morgan Kaufmann Publishers Inc., 2002.

Shi, Yue, Karatzoglou, Alexandros, Baltrunas, Linas, Larson, Martha, Oliver, Nuria, and Hanjalic, Alan. CLiMF: Learning to maximize reciprocal rank with collaborative less-is-more filtering. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys '12, pp. 139–146, New York, NY, USA, 2012. ACM. ISBN 978-1-4503-1270-7. doi: 10.1145/2365952.2365981. URL http://doi.acm.org/10.1145/2365952.2365981.

Steck, Harald. Gaussian ranking by matrix factorization. In Proceedings of the 9th ACM Conference on Recommender Systems, RecSys '15, pp. 115–122, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3692-5. doi: 10.1145/2792838.2800185. URL http://doi.acm.org/10.1145/2792838.2800185.

Van den Oord, Aaron, Dieleman, Sander, and Schrauwen, Benjamin. Deep content-based music recommendation. In Advances in Neural Information Processing Systems, pp. 2643–2651, 2013.

Wang, Hao, Wang, Naiyan, and Yeung, Dit-Yan. Collaborative deep learning for recommender systems. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15, pp. 1235–1244, New York, NY, USA, 2015. ACM.

Weimer, Markus, Karatzoglou, Alexandros, Le, Quoc Viet, and Smola, Alex. Maximum margin matrix factorization for collaborative ranking. Advances in Neural Information Processing Systems, 2007.
| {
"id": "1502.04390"
} |
1511.06807 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 |
# Under review as a conference paper at ICLR 2016
# ADDING GRADIENT NOISE IMPROVES LEARNING FOR VERY DEEP NETWORKS
Arvind Neelakantan*, Luke Vilnis* College of Information and Computer Sciences University of Massachusetts Amherst {arvind,luke}@cs.umass.edu
# Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach Google Brain {qvl,ilyasu,lukaszkaiser,kkurach}@google.com
# James Martens University of Toronto jmartens@cs.toronto.edu
# ABSTRACT
Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. This success is partially attributed to architectural innovations such as convolutional and long short-term memory networks. The main motivation for these architectural innovations is that they capture better domain knowledge, and importantly are easier to optimize than more basic architectures. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we discuss a low-overhead and easy-to-implement technique of adding gradient noise which we find to be surprisingly effective when training these very deep architectures. The technique not only helps to avoid overfitting, but also can result in lower training loss. This method alone allows a fully-connected 20-layer deep network to be trained with standard gradient descent, even starting from a poor initialization. We see consistent improvements for many complex models, including a 72% relative reduction in error rate over a carefully-tuned baseline on a challenging question-answering task, and a doubling of the number of accurate binary multiplication models learned across 7,000 random restarts. We encourage further application of this technique to additional complex modern architectures.
# 1 INTRODUCTION
Deep neural networks have shown remarkable success in diverse domains including image recognition (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012) and language processing applications (Sutskever et al., 2014; Bahdanau et al., 2014). This broad success comes from a confluence of several factors. First, the creation of massive labeled datasets has allowed deep networks to demonstrate their advantages in expressiveness and scalability. The increase in computing power has also enabled training of far larger networks with more forgiving optimization dynamics (Choromanska et al., 2015). Additionally, architectures such as convolutional networks (LeCun et al., 1998) and long short-term memory networks (Hochreiter & Schmidhuber, 1997) have proven to be easier to optimize than classical feedforward and recurrent models. Finally, the success of deep networks is also a result of the development of simple and broadly applicable learning techniques such as dropout (Srivastava et al., 2014), ReLUs (Nair & Hinton, 2010), gradient clipping (Pascanu

*First two authors contributed equally. Work was done when all authors were at Google, Inc.
et al., 2013; Graves, 2013), optimization and weight initialization strategies (Glorot & Bengio, 2010; Sutskever et al., 2013; He et al., 2015).
Recent work has aimed to push neural network learning into more challenging domains, such as question answering or program induction. These more complicated problems demand more complicated architectures (e.g., Graves et al. (2014); Sukhbaatar et al. (2015)), thereby posing new optimization challenges. In order to achieve good performance, researchers have reported the necessity of additional techniques such as supervision in intermediate steps (Weston et al., 2014), warm-starts (Peng et al., 2015), random restarts, and the removal of certain activation functions in early stages of training (Sukhbaatar et al., 2015).

A recurring theme in recent works is that commonly-used optimization techniques are not always sufficient to robustly optimize the models. In this work, we explore a simple technique of adding annealed Gaussian noise to the gradient, which we find to be surprisingly effective in training deep neural networks with stochastic gradient descent. While there is a long tradition of adding random weight noise in classical neural networks, it has been under-explored in the optimization of modern deep architectures. In contrast to theoretical and empirical results on the regularizing effects of conventional stochastic gradient descent, we find that in practice the added noise can actually help us achieve lower training loss by encouraging active exploration of parameter space. This exploration proves especially necessary and fruitful when optimizing neural network models containing many layers or complex latent structures.

The main contribution of this work is to demonstrate the broad applicability of this simple method to the training of many complex modern neural architectures. Furthermore, to the best of our knowledge, our added noise schedule has not been used before in the training of deep networks. We consistently see improvement from injected gradient noise when optimizing a wide variety of models, including very deep fully-connected networks, and special-purpose architectures for question answering and algorithm learning. For example, this method allows us to escape a poor initialization and successfully train a 20-layer rectifier network on MNIST with standard gradient descent. It also enables a 72% relative reduction in error in question-answering, and doubles the number of accurate binary multiplication models learned across 7,000 random restarts. We hope that practitioners will see similar improvements in their own research by adding this simple technique, implementable in a single line of code, to their repertoire.
# 2 RELATED WORK
Adding random noise to the weights, gradient, or the hidden units has been a known technique amongst neural network practitioners for many years (e.g., An (1996)). However, the use of gradient noise has been rare and its benefits have not been fully documented with modern deep networks.
Weight noise (Steijvers, 1996) and adaptive weight noise (Graves, 2011; Blundell et al., 2015), which usually maintains a Gaussian variational posterior over network weights, similarly aim to improve learning by added noise during training. They normally differ slightly from our proposed method in that the noise is not annealed and at convergence will be non-zero. Additionally, in adaptive weight noise, an extra set of parameters for the variance must be maintained.
Similarly, the technique of dropout (Srivastava et al., 2014) randomly sets groups of hidden units to zero at train time to improve generalization in a manner similar to ensembling.
An annealed Gaussian gradient noise schedule was used to train the highly non-convex Stochastic Neighbor Embedding model in Hinton & Roweis (2002). The gradient noise schedule that we found to be most effective is very similar to the Stochastic Gradient Langevin Dynamics algorithm of Welling & Teh (2011), who use gradients with added noise to accelerate MCMC inference for logistic regression and independent component analysis models. This use of gradient information in MCMC sampling for machine learning to allow faster exploration of state space was previously proposed by Neal (2011).
Various optimization techniques have been proposed to improve the training of neural networks. Most notable is the use of Momentum (Polyak, 1964; Sutskever et al., 2013; Kingma & Ba, 2014) or adaptive learning rates (Duchi et al., 2011; Dean et al., 2012; Zeiler, 2012). These methods are normally developed to provide good convergence rates for the convex setting, and then heuristically
applied to nonconvex problems. On the other hand, injecting noise in the gradient is more suitable for nonconvex problems. By adding even more stochasticity, this technique gives the model more chances to escape local minima (see a similar argument in Bottou (1992)), or to traverse quickly through the "transient" plateau phase of early learning (see a similar analysis for momentum in Sutskever et al. (2013)). This is borne out empirically in our observation that adding gradient noise can actually result in lower training loss. In this sense, we suspect adding gradient noise is similar to simulated annealing (Kirkpatrick et al., 1983), which exploits random noise to explore complex optimization landscapes. This can be contrasted with the well-known benefits of stochastic gradient descent as a learning algorithm (Robbins & Monro, 1951; Bousquet & Bottou, 2008), where both theory and practice have shown that the noise induced by the stochastic process aids generalization by reducing overfitting.
# 3 METHOD
We consider a simple technique of adding time-dependent Gaussian noise to the gradient g at every training step t:
$g_t \leftarrow g_t + N(0, \sigma_t^2)$

Our experiments indicate that adding annealed Gaussian noise by decaying the variance works better than using fixed Gaussian noise. We use a schedule inspired from Welling & Teh (2011) for most of our experiments and take:

$\sigma_t^2 = \dfrac{\eta}{(1 + t)^{\gamma}}$   (1)
with η selected from {0.01, 0.3, 1.0} and γ = 0.55. Higher gradient noise at the beginning of training forces the gradient away from 0 in the early stages.
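As a concrete illustration of the update, here is a minimal NumPy sketch of one SGD step with the annealed noise of Equation (1); the per-tensor norm clipping and its threshold are placeholder assumptions standing in for whatever clipping a given model uses, with noise added after clipping as in the experiments below.

```python
import numpy as np

def noisy_sgd_step(params, grads, lr, t, eta=0.01, gamma=0.55, clip=10.0):
    """One SGD step with annealed Gaussian gradient noise (Equation 1)."""
    sigma = np.sqrt(eta / (1.0 + t) ** gamma)   # decaying std from Eq. (1)
    for p, g in zip(params, grads):
        norm = np.linalg.norm(g)
        if norm > clip:                          # clip first ...
            g = g * (clip / norm)
        g = g + np.random.normal(0.0, sigma, size=g.shape)  # ... then add noise
        p -= lr * g                              # in-place parameter update
```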
# 4 EXPERIMENTS
In the following experiments, we consider a variety of complex neural network architectures: deep networks for MNIST digit classification, End-To-End Memory Networks (Sukhbaatar et al., 2015) and Neural Programmer (Neelakantan et al., 2015) for question answering, Neural Random-Access Machines (Kurach et al., 2015) and Neural GPUs (Kaiser & Sutskever, 2015) for algorithm learning. The models and results are described as follows.
4.1 DEEP FULLY-CONNECTED NETWORKS
For our first set of experiments, we examine the impact of adding gradient noise when training a very deep fully-connected network on the MNIST handwritten digit classification dataset (LeCun et al., 1998). Our network is deep: it has 20 hidden layers, with each layer containing 50 hidden units. We use the ReLU activation function (Nair & Hinton, 2010).

In this experiment, we add gradient noise sampled from a Gaussian distribution with mean 0, and decaying variance according to the schedule in Equation (1) with η = 0.01. We train with SGD without momentum, using the fixed learning rates of 0.1 and 0.01. Unless otherwise specified, the weights of the network are initialized from a Gaussian with mean zero, and standard deviation of 0.1, which we call Simple Init.
The results of our experiment are in Table 1. When trained from Simple Init we can see that adding noise to the gradient helps in achieving higher average and best accuracy over 20 runs using each learning rate for a total of 40 runs (Table 1, Experiment 1). We note that the average is closer to 50% because the small learning rate of 0.01 usually gives very slow convergence. We also try our approach on a more shallow network of 5 layers, but adding noise does not improve the training in that case.
Next, we experiment with clipping the gradients with two threshold values: 100 and 10 (Table 1, Experiments 2 and 3). Here, we find training with gradient noise is insensitive to the gradient clipping values. By tuning the clipping threshold, it is possible to get comparable accuracy without noise for this problem.
In our fourth and fifth experiments (Table 1, Experiments 4 and 5), we use two analytically-derived ReLU initialization techniques (which we term Good Init) recently proposed by Sussillo & Abbott (2014) and He et al. (2015), and find that adding gradient noise does not help. Previous work has found that stochastic gradient descent with carefully tuned initialization, momentum, learning rate, and learning rate decay can optimize such extremely deep fully-connected ReLU networks (Srivastava et al., 2015). It would be harder to find such a robust initialization technique for the more complex heterogeneous architectures considered in later sections. Accordingly, we find in later experiments (e.g., Section 4.3) that random restarts and the use of a momentum-based optimizer like Adam are not sufficient to achieve the best results in the absence of added gradient noise.
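For reference, a small sketch contrasting the initializations discussed here for one fully-connected layer; Simple Init uses the fixed standard deviation of 0.1 stated earlier, while the He et al. (2015) scheme scales the standard deviation with the fan-in (the Sussillo & Abbott (2014) scheme is analogous in spirit but derived differently, so it is omitted).

```python
import numpy as np

def simple_init(fan_in, fan_out):
    # Simple Init: zero mean, fixed standard deviation of 0.1
    return np.random.normal(0.0, 0.1, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    # He et al. (2015): std = sqrt(2 / fan_in), derived for ReLU units
    return np.random.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
```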
To understand how sensitive the methods are to poor initialization, in addition to the sub-optimal Simple Init, we run an experiment where all the weights in the neural network are initialized at zero. The results (Table 1, Experiment 6) show that if we do not add noise to the gradient, the networks fail to learn. If we add some noise, the networks can learn and reach 94.5% accuracy.
Setting | Best Test Accuracy | Average Test Accuracy

Experiment 1: Simple Init, No Gradient Clipping
No Noise | 89.9% | 43.1%
With Noise | 96.7% | 52.7%
No Noise + Dropout | 11.3% | 10.8%

Experiment 2: Simple Init, Gradient Clipping Threshold = 100
No Noise | 90.0% | 46.3%
With Noise | 96.7% | 52.3%

Experiment 3: Simple Init, Gradient Clipping Threshold = 10
No Noise | 95.7% | 51.6%
With Noise | 97.0% | 53.6%

Experiment 4: Good Init (Sussillo & Abbott, 2014) + Gradient Clipping Threshold = 10
No Noise | – | –
With Noise | – | –

Experiment 5: Good Init (He et al., 2015) + Gradient Clipping Threshold = 10
No Noise | 97.4% | 91.7%
With Noise | 97.2% | 91.7%

Experiment 6: Zero Init
No Noise | 11.4% | 10.1%
With Noise | 94.5% | 49.7%

Table 1: Average and best test accuracy percentages on MNIST over 40 runs. Higher values are better.
In summary, these experiments show that if we are careful with initialization and gradient clipping values, it is possible to train a very deep fully-connected network without adding gradient noise. However, if the initialization is poor, optimization can be difficult, and adding noise to the gradient is a good mechanism to overcome the optimization difficulty.

The implication of this set of results is that added gradient noise can be an effective mechanism for training very complex networks. This is because it is more difficult to initialize the weights properly for complex networks. In the following, we explore the training of more complex networks such as End-To-End Memory Networks and Neural Programmer, whose initialization is less well studied.
4.2 END-TO-END MEMORY NETWORKS
We test added gradient noise for training End-To-End Memory Networks (Sukhbaatar et al., 2015), a new approach for Q&A using deep networks.¹ Memory Networks have been demonstrated to perform well on a relatively challenging toy Q&A problem (Weston et al., 2015).
In Memory Networks, the model has access to a context, a question, and is asked to predict an answer. Internally, the model has an attention mechanism which focuses on the right clue to answer the question. In the original formulation (Weston et al., 2015), Memory Networks were provided with additional supervision as to what pieces of context were necessary to answer the question. This was replaced in the End-To-End formulation by a latent attention mechanism implemented by a softmax over contexts. As this greatly complicates the learning problem, the authors implement a two-stage training procedure: First train the networks with a linear attention, then use those weights to warmstart the model with softmax attention.
In our experiments with Memory Networks, we use our standard noise schedule, using noise sampled from a Gaussian distribution with mean 0, and decaying variance according to Equation (1) with η = 1.0. This noise is added to the gradient after clipping. We also find for these experiments that a fixed standard deviation also works, but its value has to be tuned, and works best at 0.001. We set the number of training epochs to 200 because we would like to understand the behaviors of Memory Networks near convergence. The rest of the training is identical to the experimental setup proposed by the original authors. We test this approach with the published two-stage training approach, and additionally with a one-stage training approach where we train the networks with softmax attention and without warmstarting. Results are reported in Table 2. We find some fluctuations during each run of the training, but the reported results reflect the typical gains obtained by adding random noise.
Setting | One-stage training | Two-stage training
No Noise | Training error: 9.6%, Validation error: 19.5% | Training error: 6.2%, Validation error: 10.9%
With Noise | Training error: 10.5%, Validation error: 16.6% | Training error: 5.9%, Validation error: 10.8%
Table 2: The effects of adding gradient noise to End-To-End Memory Networks. Lower values are better.
We find that warmstarting does indeed help the networks. In both cases, adding random noise to the gradient also helps the network both in terms of training errors and validation errors. Added noise, however, is especially helpful for the training of End-To-End Memory Networks without the warmstarting stage.
4.3 NEURAL PROGRAMMER
Neural Programmer is a neural network architecture augmented with a small set of built-in arithmetic and logic operations that learns to induce latent programs. It is proposed for the task of question answering from tables (Neelakantan et al., 2015). Examples of operations on a table include the sum of a set of numbers, or the list of numbers greater than a particular value. Key to Neural Programmer is the use of "soft selection" to assign a probability distribution over the list of operations. This probability distribution weighs the result of each operation, and the cost function compares this weighted result to the ground truth. This soft selection, inspired by the soft attention mechanism of Bahdanau et al. (2014), allows for full differentiability of the model. Running the model for several steps of selection allows the model to induce a complex program by chaining the operations, one after the other. Figure 1 shows the architecture of Neural Programmer at a high level.
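To make the soft-selection mechanism concrete, here is a minimal sketch with a hypothetical three-operation set; the operations, controller scores, and column values are illustrative stand-ins, not the paper's actual operation list.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# hypothetical built-in operations over a column of numbers
OPS = {
    "sum":   lambda col: col.sum(),
    "count": lambda col: float(len(col)),
    "max":   lambda col: col.max(),
}

def soft_select_step(op_scores, column):
    """Differentiable step: weight each operation's result by its probability."""
    probs = softmax(op_scores)                       # soft selection (training)
    results = np.array([op(column) for op in OPS.values()])
    return float(probs @ results)

# at test time, hard selection instead picks the argmax operation
column = np.array([3.0, 5.0, 2.0])
print(soft_select_step(np.array([0.1, 2.0, -1.0]), column))
```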
In a synthetic table comprehension task, Neural Programmer takes a question and a table (or database) as input and the goal is to predict the correct answer. To solve this task, the model has to induce a program and execute it on the table. A major challenge is that the supervision signal is
¹ Code available at: https://github.com/facebook/MemNN
Figure 1: Neural Programmer, a neural network with built-in arithmetic and logic operations. At every time step, the controller selects an operation and a data segment. Figure reproduced with permission from Neelakantan et al. (2015).
in the form of the correct answer and not the program itself. The model runs for a fixed number of steps, and at each step selects a data segment and an operation to apply to the selected data segment. Soft selection is performed at training time so that the model is differentiable, while at test time hard selection is employed. Table 3 shows examples of programs induced by the model.
Question: "greater 17.27 A and lesser -19.21 D count" (What are the number of elements whose field in column A is greater than 17.27 and field in Column D is lesser than -19.21.)

t | Selected Op | Selected Column
1 | Greater | A
2 | Lesser | D
3 | And | –
4 | Count | –
Table 3: Example program induced by the model using T = 4 time steps. We show the selected columns in cases in which the selected operation acts on a particular column.
Similar to the above experiments with Memory Networks, in our experiments with Neural Programmer, we add noise sampled from a Gaussian distribution with mean 0, and decaying variance according to Equation (1) with η = 1.0 to the gradient after clipping. The model is optimized with Adam (Kingma & Ba, 2014), which combines momentum and adaptive learning rates.

For our first experiment, we train Neural Programmer to answer questions involving a single column of numbers. We use 72 different hyper-parameter configurations with and without adding annealed random noise to the gradients. We also run each of these experiments for 3 different random initializations of the model parameters and we find that only 1/216 runs achieve 100% test accuracy without adding noise while 9/216 runs achieve 100% accuracy when random noise is added. The 9 successful runs consisted of models initialized with all the three different random seeds, demonstrating robustness to initialization. We find that when using dropout (Srivastava et al., 2014) none of the 216 runs give 100% accuracy.

We consider a more difficult question answering task where tables have up to five columns containing numbers. We also experiment on a task containing one column of numbers and another column of text entries. Table 4 shows the performance of adding noise vs. no noise on Neural Programmer.
Figure 2 shows an example of the effect of adding random noise to the gradients in our experiment with 5 columns. The differences between the two models are much more pronounced than in Table 4 because Table 4 shows the results after careful hyperparameter selection.
In all cases, we see that added gradient noise improves performance of Neural Programmer. Its performance when combined with or used instead of dropout is mixed depending on the problem, but the positive results indicate that it is worth attempting on a case-by-case basis.
Setting | Five columns | Text entries
No Noise | 97.4% | 95.3%
With Noise | 99.1% | 97.6%
Dropout | 98.7% | 98.8%
Dropout With Noise | 99.2% | 97.3%

Table 4: The effects of adding random noise to the gradient on Neural Programmer. Higher values are better. Adding random noise to the gradient always helps the model. When the models are applied to these more complicated tasks than the single column experiment, using dropout and noise together seems to be beneficial in one case while using only one of them achieves the best result in the other case.
Figure 2: Noise vs. No Noise in our experiment with tables containing 5 columns. The models trained with noise generalize almost always better.
4.4 NEURAL RANDOM ACCESS MACHINES
We now conduct experiments with Neural Random-Access Machines (NRAM) (Kurach et al., 2015). NRAM is a model for algorithm learning that can store data, and explicitly manipulate and dereference pointers. NRAM consists of a neural network controller, memory, registers and a set of built-in operations. This is similar to the Neural Programmer in that it uses a controller network to compose built-in operations, but it both reads from and writes to an external memory. An operation can either read (a subset of) contents from the memory, write content to the memory, or perform an arithmetic operation on either input registers or outputs from other operations. The controller runs for a fixed number of time steps. At every step, the model selects both the operation to be executed and its inputs. These selections are made using soft attention (Bahdanau et al., 2014), making the model end-to-end differentiable. NRAM uses an LSTM (Hochreiter & Schmidhuber, 1997) controller. Figure 3 gives an overview of the model.

For our experiment, we consider the problem of searching for the k-th element's value in a linked list. The network is given a pointer to the head of the linked list, and has to find the value of the k-th element. Note that this is highly nontrivial because pointers and their values are stored at random locations in memory, so the model must learn to traverse a complex graph for k steps.
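To see why this amounts to graph traversal, here is a small sketch of the ground-truth computation the network must learn; the (next-pointer, value) layout below is a hypothetical simplification of NRAM's memory tape.

```python
import random

def make_linked_list(values, mem_size):
    """Lay out a linked list at random memory locations.

    Memory maps address -> (next_pointer, value); returns (memory, head address).
    """
    addrs = random.sample(range(mem_size), len(values))
    mem = {}
    for i, (a, v) in enumerate(zip(addrs, values)):
        nxt = addrs[i + 1] if i + 1 < len(addrs) else -1  # -1 terminates the list
        mem[a] = (nxt, v)
    return mem, addrs[0]

def kth_value(mem, head, k):
    """Follow k-1 pointers from the head and return the k-th element's value."""
    addr = head
    for _ in range(k - 1):
        addr = mem[addr][0]
    return mem[addr][1]

mem, head = make_linked_list([7, 3, 9, 4], mem_size=16)
print(kth_value(mem, head, 3))  # 9
```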
Because of this complexity, training the NRAM architecture can be unstable, especially when the number of steps and operations is large. We once again experiment with the decaying noise schedule from Equation (1), setting η = 0.3. We run a large grid search over the model hyperparameters (detailed in Kurach et al. (2015)), and use the top 3 for our experiments. For each of these 3 settings, we try 100 different random initializations and look at the percentage of runs that give 100% accuracy across each one for training both with and without noise.

As in our experiments with Neural Programmer, we find that gradient clipping is crucial when training with noise. This is likely because the effect of random noise is washed away when gradients become too large. For models trained with noise we observed much better reproduce rates, which are presented in Table 5. Although it is possible to train the model to achieve 100% accuracy without
Figure 3: One timestep of the NRAM architecture with R = 4 registers and a memory tape. m1, m2 and m3 are example operations built-in to the model. The operations can read and write from memory. At every time step, the LSTM controller softly selects the operation and its inputs. Figure reproduced with permission from Kurach et al. (2015).
noise, it is less robust across multiple random restarts, with over 10x as many initializations leading to a correct answer when using noise.
Setting | Hyperparameter-1 | Hyperparameter-2 | Hyperparameter-3 | Average
No Noise | 1% | 0% | 3% | 1.3%
With Noise | 5% | 22% | 7% | 11.3%
Table 5: Percentage of successful runs on k-th element task. Higher values are better. All tests were performed with the same set of 100 random initializations (seeds).
4.5 CONVOLUTIONAL GATED RECURRENT NETWORKS (NEURAL GPUS)
Convolutional Gated Recurrent Networks (CGRN) or Neural GPUs (Kaiser & Sutskever, 2015) are a recently proposed model that is capable of learning arbitrary algorithms. CGRNs use a stack of convolution layers, unfolded with tied parameters like a recurrent network. The input data (usually a list of symbols) is first converted to a three-dimensional tensor representation containing a sequence of embedded symbols in the first two dimensions, and zeros padding the next dimension. Then, multiple layers of modified convolution kernels are applied at each step. The modified kernel is a combination of convolution and Gated Recurrent Units (GRU) (Cho et al., 2014). The use of convolution kernels allows computation to be applied in parallel across the input data, while the gating mechanism helps the gradient flow. The additional dimension of the tensor serves as a working memory while the repeated operations are applied at each layer. The output at the final layer is the predicted answer.
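As an illustration of the modified kernel, here is a minimal NumPy sketch of one gated convolutional (CGRU-style) layer; the naive conv2d helper, the kernel shapes, and the omission of bias terms are simplifying assumptions rather than the exact formulation of Kaiser & Sutskever (2015).

```python
import numpy as np

def conv2d(x, w):
    """'Same'-padded 2-D convolution (cross-correlation form).

    x has shape (C_in, H, W); w has shape (C_out, C_in, k, k).
    """
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h, wd = x.shape[1:]
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(k):
                for dx in range(k):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cgru_step(s, Wu, Wr, Wc):
    """One convolutional GRU update applied to the working tensor s (C, H, W)."""
    u = sigmoid(conv2d(s, Wu))        # update gate
    r = sigmoid(conv2d(s, Wr))        # reset gate
    c = np.tanh(conv2d(r * s, Wc))    # candidate state
    return u * s + (1.0 - u) * c      # gating helps the gradient flow

# toy usage on a 2-channel 3x4 tensor with 3x3 kernels
C, H, W, k = 2, 3, 4, 3
s = np.random.randn(C, H, W)
Wu, Wr, Wc = [np.random.randn(C, C, k, k) * 0.1 for _ in range(3)]
out = cgru_step(s, Wu, Wr, Wc)        # same shape as s
```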
The key difference between Neural GPUs and other architectures for algorithmic tasks (e.g., Neural Turing Machines (Graves et al., 2014)) is that instead of using sequential data access, convolution kernels are applied in parallel across the input, enabling the use of very deep and wide models. The model is referred to as Neural GPUs because the input data is accessed in parallel. Neural GPUs were shown to outperform previous sequential architectures for algorithm learning on tasks such as binary addition and multiplication, by being able to generalize from much shorter to longer data cases.
In our experiments, we use Neural GPUs for the task of binary multiplication. The input consists of two concatenated sequences of binary digits separated by an operator token, and the goal is to multiply
the given numbers. During training, the model is trained on 20-digit binary numbers while at test time, the task is to multiply 200-digit numbers. Once again, we add noise sampled from a Gaussian distribution with mean 0, and decaying variance according to the schedule in Equation (1) with η = 1.0, to the gradient after clipping. The model is optimized using Adam (Kingma & Ba, 2014).
Table 6 gives the results of a large-scale experiment using Neural GPUs over 7,290 experimental runs. The experiment shows that models trained with added gradient noise are more robust across many random initializations and parameter settings. Adding gradient noise allows us to achieve the best performance, with the number of models reaching < 1% error more than twice as large as without noise; it also helps throughout, improving the robustness of training, with more models training to lower error rates as well. This experiment shows that the simple technique of added gradient noise is effective even in regimes where we can afford a very large number of random restarts.
Setting | Error < 1% | Error < 2% | Error < 3% | Error < 5%
No Noise | 28 | 90 | 172 | 387
With Noise | 58 | 159 | 282 | 570
Table 6: Number of successful runs on 7290 random trials. Higher values are better. The models are trained on length 20 and tested on length 200.
# 5 CONCLUSION
In this paper, we discussed a set of experiments which show the effectiveness of adding noise to the gradient. We found that adding noise to the gradient during training helps training and generalization of complicated neural networks. We suspect that the effects are pronounced for complex models because they have many local minima.
We believe that this surprisingly simple yet effective idea, essentially a single line of code, should be in the toolset of neural network practitioners when facing issues with training neural networks. We also believe that this set of empirical results can give rise to further formal analysis of why adding noise is so effective for very deep neural networks.
Acknowledgements We sincerely thank Marcin Andrychowicz, Dmitry Bahdanau, Samy Bengio, Oriol Vinyals for suggestions and the Google Brain team for help with the project.
# REFERENCES
An, Guozhong. The effects of adding noise during backpropagation training on a generalization performance. Neural Computation, 1996.
Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. ICLR, 2014.
Blundell, Charles, Cornebise, Julien, Kavukcuoglu, Koray, and Wierstra, Daan. Weight uncertainty in neural networks. ICML, 2015.
Bottou, Léon. Stochastic gradient learning in neural networks. In Neuro-Nîmes, 1992.

Bousquet, Olivier and Bottou, Léon. The tradeoffs of large scale learning. In NIPS, 2008.

Cho, Kyunghyun, van Merriënboer, Bart, Gulcehre, Caglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.

Choromanska, Anna, Henaff, Mikael, Mathieu, Michaël, Arous, Gérard Ben, and LeCun, Yann. The loss surfaces of multilayer networks. In AISTATS, 2015.
Dean, Jeffrey, Corrado, Greg, Monga, Rajat, Chen, Kai, Devin, Matthieu, Mao, Mark, Senior, Andrew, Tucker, Paul, Yang, Ke, Le, Quoc V, et al. Large scale distributed deep networks. In NIPS, 2012.
Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 2011.
Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In Proc. AISTATS, pp. 249–256, 2010.

Graves, Alex. Practical variational inference for neural networks. In NIPS, 2011.

Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural Turing Machines. arXiv preprint arXiv:1410.5401, 2014.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. ICCV, 2015.
Hinton, Geoffrey and Roweis, Sam. Stochastic neighbor embedding. In NIPS, 2002.
Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara, and Kingsbury, Brian. Deep neural networks for acoustic modeling in speech recognition. Signal Processing Magazine, 2012.

Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 1997.
Kaiser, Lukasz and Sutskever, Ilya. Neural GPUs learn algorithms. In Arxiv, 2015.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kirkpatrick, Scott, Vecchi, Mario P, et al. Optimization by simulated annealing. Science, 1983.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.

Kurach, Karol, Andrychowicz, Marcin, and Sutskever, Ilya. Neural random-access machines. In Arxiv, 2015.

LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.

Nair, Vinod and Hinton, Geoffrey. Rectified linear units improve Restricted Boltzmann Machines. In ICML, 2010.
Neal, Radford M. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2011.
Neelakantan, Arvind, Le, Quoc V., and Sutskever, Ilya. Neural Programmer: Inducing latent programs with gradient descent. In Arxiv, 2015.

Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. Proc. ICML, 2013.

Peng, Baolin, Lu, Zhengdong, Li, Hang, and Wong, Kam-Fai. Towards neural network-based reasoning. arXiv preprint arXiv:1508.05508, 2015.
Polyak, Boris Teodorovich. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 1964.
Robbins, Herbert and Monro, Sutton. A stochastic approximation method. The Annals of Mathematical Statistics, 1951.
Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.

Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Training very deep networks. NIPS, 2015.

Steijvers, Mark. A recurrent network that performs a context-sensitive prediction task. In CogSci, 1996.

Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. End-to-end memory networks. In NIPS, 2015.

Sussillo, David and Abbott, L.F. Random walks: Training very deep nonlinear feed-forward networks with smart initialization. Arxiv, 2014.

Sutskever, Ilya, Martens, James, Dahl, George, and Hinton, Geoffrey. On the importance of initialization and momentum in deep learning. In ICML, 2013.

Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In NIPS, 2014.
Welling, Max and Teh, Yee Whye. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, 2011.
Weston, Jason, Chopra, Sumit, and Bordes, Antoine. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
Weston, Jason, Bordes, Antoine, Chopra, Sumit, and Mikolov, Tomas. Towards AI-complete question answering: a set of prerequisite toy tasks. In ICML, 2015.
Zeiler, Matthew D. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
| {
"id": "1508.05508"
} |
1511.06789 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 |
# The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition

Jonathan Krause¹* Benjamin Sapp²** Andrew Howard³ Howard Zhou³ Alexander Toshev³ Tom Duerig³ James Philbin²** Li Fei-Fei¹

¹Stanford University ²Zoox ³Google

{jkrause,feifeili}@cs.stanford.edu {bensapp,james}@zoox.com {howarda,howardzhou,toshev,tduerig}@google.com

* Work done while J. Krause was interning at Google. ** Work done while B. Sapp and J. Philbin were at Google.

Abstract. Current approaches for fine-grained recognition do the following: First, recruit experts to annotate a dataset of images, optionally also collecting more structured data in the form of part annotations and bounding boxes. Second, train a model utilizing this data. Toward the goal of solving fine-grained recognition, we introduce an alternative approach, leveraging free, noisy data from the web and simple, generic methods of recognition. This approach has benefits in both performance and scalability. We demonstrate its efficacy on four fine-grained datasets, greatly exceeding existing state of the art without the manual collection of even a single label, and furthermore show first results at scaling to more than 10,000 fine-grained categories. Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using their annotated training sets. We compare our approach to an active learning approach for expanding fine-grained datasets.
# 1 Introduction
Fine-grained recognition refers to the task of distinguishing very similar categories, such as breeds of dogs [27,37], species of birds [60,58,5,4], or models of cars [70,30]. Since its inception, great progress has been made, with accuracies on the popular CUB-200-2011 bird dataset [60] steadily increasing from 10.3% [60] to 84.6% [69].

The predominant approach in fine-grained recognition today consists of two steps. First, a dataset is collected. Since fine-grained recognition is a task inherently difficult for humans, this typically requires either recruiting a team of experts [58,38] or extensive crowd-sourcing pipelines [30,4]. Second, a method for recognition is trained using these expert-annotated labels, possibly also requiring additional annotations in the form of parts, attributes, or relationships [75,26,36,5]. While methods following this approach have shown some success [5,75,36,28], their performance and scalability is constrained by the paucity
of data available due to these limitations. With this traditional approach it is prohibitive to scale up to all 14,000 species of birds in the world (Fig. 1), 278,000 species of butterflies and moths, or 941,000 species of insects [24].

Fig. 1. There are more than 14,000 species of birds in the world. In this work we show that using noisy data from publicly-available online sources can not only improve recognition of categories in today's datasets, but also scale to very large numbers of fine-grained categories, which is extremely expensive with the traditional approach of manually collecting labels for fine-grained datasets. Here we show 4,225 of the 10,982 categories recognized in this work.

In this paper, we show that it is possible to train effective models of fine-grained recognition using noisy data from the web and simple, generic methods of recognition [55,54]. We demonstrate recognition abilities greatly exceeding current state of the art methods, achieving top-1 accuracies of 92.3% on CUB-200-2011 [60], 85.4% on Birdsnap [4], 93.4% on FGVC-Aircraft [38], and 80.8% on Stanford Dogs [27] without using a single manually-annotated training label from the respective datasets. On CUB, this is nearly at the level of human experts [6,58]. Building upon this, we scale up the number of fine-grained classes recognized, reporting first results on over 10,000 species of birds and 14,000 species of butterflies and moths.

The rest of this paper proceeds as follows: After an overview of related work in Sec. 2, we provide an analysis of publicly-available noisy data for fine-grained recognition in Sec. 3, analyzing its quantity and quality. We describe a more traditional active learning approach for obtaining larger quantities of fine-grained data in Sec. 4, which serves as a comparison to purely using noisy data. We present extensive experiments in Sec. 5, and conclude with discussion in Sec. 6.
# 2 Related Work
Fine-Grained Recognition. The majority of research in fine-grained recognition has focused on developing improved models for classification [1,3,5,7,9,8,14,16,18,20,21,22,28,29,36,37,41,42,49,51,50,66,68,69,71,73,72,76,77,75,78].
While these works have made great progress in modeling fine-grained categories given the limited data available, very few works have considered the impact of that data [69,68,58]. Xu et al. [69] augment datasets annotated with category labels and parts with web images in a multiple instance learning framework, and Xie et al. [68] do multitask training, where one task uses a ground truth fine-grained dataset and the other does not require fine-grained labels. While both of these methods have shown that augmenting fine-grained datasets with additional data can help, in our work we present results which completely forgo the use of any curated ground truth dataset. In one experiment hinting at the use of noisy data, Van Horn et al. [58] show the possibility of learning 40 bird classes from Flickr images. Our work validates and extends this idea, using similar intuition to significantly improve performance on existing fine-grained datasets and scale fine-grained recognition to over ten thousand categories, which we believe is necessary in order to fully explore the research direction.

Considerable work has also gone into the challenging task of curating fine-grained datasets [4,58,27,30,31,59,65,60,70] and developing interactive methods for recognition with a human in the loop [6,62,61,63]. While these works have demonstrated effective strategies for collecting images of fine-grained categories, their scalability is ultimately limited by the requirement of manual annotation. Our work provides an alternative to these approaches.

Learning from Noisy Data. Our work is also inspired by methods that propose to learn from web data [15,10,11,45,34,19] or reason about label noise [39,67,58,52,43]. Works that use web data typically focus on detection and classification of a set of coarse-grained categories, but have not yet examined the fine-grained setting. Methods that reason about label noise have been divided in their results: some have shown that reasoning about label noise can have a substantial effect on recognition performance [66], while others demonstrate little change from reducing the noise level or having a noise-aware model [52,43,58]. In our work, we demonstrate that noisy data can be surprisingly effective for fine-grained recognition, providing evidence in support of the latter hypothesis.
# 3 Noisy Fine-Grained Data
In this section we provide an analysis of the imagery publicly available for fine-grained recognition, which we collect via web search.¹ We describe its quantity, distribution, and levels of noise, reporting each on multiple fine-grained domains.
# 3.1 Categories
We consider four domains of fine-grained categories: birds, aircraft, Lepidoptera (a taxonomic order including butterflies and moths), and dogs. For birds and
Lepidoptera, we obtained lists of fine-grained categories from Wikipedia, resulting in 10,982 species of birds and 14,553 species of Lepidoptera, denoted L-Bird ("Large Bird") and L-Butterfly. For aircraft, we assembled a list of 409 types of aircraft by hand (including aircraft in the FGVC-Aircraft [38] dataset, abbreviated FGVC). For dogs, we combine the 120 dog breeds in Stanford Dogs [27] with 395 other categories to obtain the 515-category L-Dog. We evaluate on two other fine-grained datasets in addition to FGVC and Stanford Dogs: CUB-200-2011 [60] and Birdsnap [4], for a total of four evaluation datasets. CUB and Birdsnap include 200 and 500 species of common birds, respectively, FGVC has 100 aircraft variants, and Stanford Dogs contains 120 breeds of dogs. In this section we focus our analysis on the categories in L-Bird, L-Butterfly, and L-Aircraft in addition to the categories in their evaluation datasets.

¹ Google image search: http://images.google.com

Fig. 2. Distributions of the number of images per category available via image search for the categories in CUB, Birdsnap, and L-Bird (far left), FGVC and L-Aircraft (middle left), and L-Butterfly (middle right). At far right we aggregate and plot the average number of images per category in each dataset in addition to the training sets of each curated dataset we consider, denoted CUB-GT, Birdsnap-GT, and FGVC-GT.
# 3.2 Images from the Web
We obtain imagery via Google image search results, using all returned images as images for a given category. For L-Bird and L-Butterfly, queries are for the scientific name of the category, and for L-Aircraft and L-Dog queries are simply for the category name (e.g. "Boeing 737-200" or "Pembroke Welsh Corgi").

Quantifying the Data. How much fine-grained data is available? In Fig. 2 we plot distributions of the number of images retrieved for each category and report aggregates across each set of categories. We note several trends: Categories in existing datasets, which are typically common within their fine-grained domain, have more images per category than the long-tail of categories present in the larger L-Bird, L-Aircraft, or L-Butterfly, with the effect most pronounced in L-Bird and L-Butterfly. Further, domains of fine-grained categories have substantially different distributions, i.e. L-Bird and L-Aircraft have more images per category than L-Butterfly. This makes sense: fine-grained categories and domains of categories that are more common and have a larger enthusiast base will have more imagery since more photos are taken of them. We also note that results tend to be limited to roughly 800 images per category, even for the most common categories, which is likely a restriction placed on public search results.
Fig. 3. Examples of cross-domain noise for birds, butterflies, airplanes, and dogs. Images are generally of related categories that are outside the domain of interest, e.g. a map of a bird's typical habitat or a t-shirt containing the silhouette of a dog.

Most striking is the large difference between the number of images available via web search and in existing fine-grained datasets: even Birdsnap, which has an average of 94.8 images per category, contains only 13% as many images as can be obtained with a simple image search. Though their labels are noisy, web searches unveil an order of magnitude more data which can be used to learn fine-grained categories.

In total, for all four datasets, we obtained 9.8 million images for 26,458 categories, requiring 151.8GB of disk space.²

Noise. Though large amounts of imagery are freely available for fine-grained categories, focusing only on scale ignores a key issue: noise. We consider two types of label noise, which we call cross-domain noise and cross-category noise. We define cross-domain noise to be the portion of images that are not of any category in the same fine-grained domain, i.e. for birds, it is the fraction of images that do not contain a bird (examples in Fig. 3). In contrast, cross-category noise is the portion of images that have the wrong label within a fine-grained domain, i.e. an image of a bird with the wrong species label.

To quantify levels of cross-domain noise, we manually label a 1,000 image sample from each set of search results, with results in Fig. 4. Although levels of noise are not too high for any set of categories (max. 34.2% for L-Butterfly), we notice an interesting correlation: cross-domain noise decreases moderately as the number of images per category (Fig. 2) increases. We hypothesize that categories with many search results have a corresponding large pool of images to draw results from, and thus actual search results will tend to be higher-precision. In contrast to cross-domain noise, cross-category noise is much harder to quantify, since doing so effectively requires ground truth fine-grained labels of query results. To examine cross-category noise from at least one vantage point, we show the confusion matrix of given versus predicted labels on 30 categories in the CUB [60] test set and their web images in Fig. 6, left and right, which we generate via a classifier trained on the CUB training set, acting as a noisy

² URLs available at https://github.com/google/goldfinch
Fig. 4. The cross-domain noise in search results for each domain.
Fig. 5. The percentage of images retained after filtering.
proxy for ground truth labels. In these confusion matrices, cross-category noise is reflected as a strong off-diagonal pattern, while cross-domain noise would manifest as a diffuse pattern of noise, since images not of the same domain are an equally bad fit to all categories. Based on this interpretation, the web images show moderately more cross-category noise than the clean CUB test set, though the general confusion pattern is similar.
We propose a simple, yet effective strategy to reduce the effects of cross-category noise: exclude images that appear in search results for more than one category. This approach, which we refer to as filtering, specifically targets images for which there is explicit ambiguity in the category label (examples in Fig. 7). As we demonstrate experimentally, filtering can improve results while reducing training time via the use of a more compact training set; we show the portion of images kept after filtering in Fig. 5. Agreeing with intuition, filtering removes more images when there are more categories. Anecdotally, we have also tried a few techniques to combat cross-domain noise, but initial experiments did not show any improvement in recognition, so we do not expand upon them here. While reducing cross-domain noise should be beneficial, we believe that it is not as important as cross-category noise in fine-grained recognition due to the absence of out-of-domain classes during testing.
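A minimal sketch of the filtering rule follows. Identifying images by URL is an assumption about data layout; the paper does not specify the exact image key used.

```python
# Cross-category filtering: drop any image that appears in the search
# results of more than one category.
from collections import defaultdict

def filter_ambiguous(results):
    """results: dict mapping category -> set of image URLs."""
    owners = defaultdict(set)
    for category, urls in results.items():
        for url in urls:
            owners[url].add(category)
    return {category: {u for u in urls if len(owners[u]) == 1}
            for category, urls in results.items()}

results = {"Monarch": {"a.jpg", "b.jpg"}, "Viceroy": {"b.jpg", "c.jpg"}}
print(filter_ambiguous(results))  # "b.jpg" is removed from both categories
```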
# 4 Data via Active Learning
In this section we briefly describe an active learning-based approach for collecting large quantities of fine-grained data. Active learning and other human-in-the-loop systems have previously been used to create datasets in a more cost-efficient way than manual annotation [74,12,47], and our goal is to compare this more traditional approach with simply using noisy data, particularly when considering the application of fine-grained recognition. In this paper, we apply active learning to the 120 dog breeds in the Stanford Dogs [27] dataset.
Our system for active learning begins by training a classifier on a seed set of input images and labels (i.e. the Stanford Dogs training set), then proceeds by iteratively picking a set of images to annotate, obtaining labels with human annotators, and re-training the classifier. We use a convolutional neural
Fig. 6. Confusion matrices of the predicted label (column) given the provided label (row) for 30 CUB categories on the CUB test set (left) and search results for CUB categories (right). For visualization purposes we remove the diagonal.
Fig. 7. Examples of images removed via filtering and the categories whose results they appeared in. Some share similar names (left examples), while others share similar locations (right examples).
network [32,54,25] for the classifier, and now describe the key steps of sample selection and human annotation in more detail.
Sample Selection. There are many possible criteria for sample selection [47]. We employ confidence-based sampling: for each category c, we select the b · P̂(c) images with the top class scores fc(x) as determined by our current model, where P̂(c) is a desired prior distribution over classes, b is a budget on the number of images to annotate, and fc(x) is the output of the classifier. The intuition is as follows: even when fc(x) is large, false positives still occur quite frequently; in Fig. 8 left, observe that the false positive rate is about 20% at the highest confidence range, which might have a large impact on the model. This contrasts with approaches that focus sampling in uncertain regions [33,2,40,17]. We find that images sampled with uncertainty criteria are typically ambiguous and difficult or even impossible for both models and humans to annotate correctly, as demonstrated in Fig. 8 bottom row: unconfident samples are often heavily occluded, at unusual viewpoints, or of mixed, ambiguous breeds, making it unlikely that they can be annotated effectively. This strategy is similar to the "expected model change" sampling criterion [48], but done for each class independently.
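A sketch of this selection rule is given below. The array layout (`scores` as an N-by-C matrix of class scores over the unlabeled pool) is an assumption for illustration; as in the text, selection is performed per class independently, so an image may in principle be selected for more than one class.

```python
# Confidence-based sampling: for each class c, take the b * P(c) images
# with the highest classifier score f_c(x).
import numpy as np

def select_for_annotation(scores, prior, budget):
    """scores: (N, C) class scores; prior: desired P(c); budget: total b."""
    selected = {}
    for c in range(scores.shape[1]):
        k = int(round(budget * prior[c]))
        # indices of the k most confident pool images for class c
        selected[c] = np.argsort(-scores[:, c])[:k]
    return selected
```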
Human Annotation. Our interface for human annotation of the selected images is shown in Fig. 9. Careful construction of the interface, including the addition of both positive and negative examples, as well as hidden "gold standard" images for immediate feedback, improves annotation accuracy considerably (see Sec. A.2 for quantitative results). Final category decisions are made via majority vote of three annotators.
Fig. 8. Left: Classifier confidence versus false positive rate on 100,000 images randomly sampled from Flickr (YFCC100M [56]) with dog detections. Even the most confident images have a 20% false positive rate. Right: Samples from Flickr. Rectangles below images denote correct (green), incorrect (red), or ambiguous (yellow). Top row: Samples with high confidence for class "Pug" from YFCC100M. Bottom row: Samples with low confidence score for class "Pug".
Fig. 9. Our tool for binary annotation of fine-grained categories. Instructional positive images are provided in the upper left and negatives are provided in the lower left.
# 5 Experiments
# 5.1 Implementation Details
The base classifier we use in all noisy data experiments is the Inception-v3 convolutional neural network architecture [55], which is among the state-of-the-art methods for generic object recognition [44,53,23]. Learning rate schedules are determined by performance on a holdout subset of the training data, which is 10% of the training data for control experiments training on ground truth datasets, or 1% when training on the larger noisy web data. Unless otherwise noted, all recognition results use as input a single crop in the center of the image.
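The schedule logic can be summarized as a loop: train, monitor accuracy on the held-out slice, and decay the learning rate when holdout accuracy stops improving. The sketch below illustrates this under assumed names; the `model.sgd_step`/`model.evaluate` interface, the decay factor, and the patience are all illustrative, as the paper does not publish these values.

```python
# Holdout-driven learning-rate schedule (schematic, not the authors' code).
def train_with_holdout(model, train_batches, holdout, lr=0.1, decay=0.1,
                       patience=3, max_epochs=100):
    best, stale = 0.0, 0
    for epoch in range(max_epochs):
        for batch in train_batches:
            model.sgd_step(batch, lr)   # assumed model interface
        acc = model.evaluate(holdout)   # accuracy on held-out training data
        if acc > best:
            best, stale = acc, 0
        else:
            stale += 1
            if stale >= patience:
                lr *= decay             # decay and continue training
                stale = 0
    return model
```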
Our active learning comparison uses the Yahoo Flickr Creative Commons 100M dataset [56] as its pool of unlabeled images, which we first pre-filter with a binary dog classifier and localizer [54], resulting in 1.71 million candidate dogs. We perform up to two rounds of active learning, with a sampling budget B of 10x the original dataset size per round.3 For experiments on Stanford Dogs, we use the CNN of [25], which is pre-trained on a version of ILSVRC [44,13] with dog data removed, since Stanford Dogs is a subset of the ILSVRC training data.
# 5.2 Removing Ground Truth from Web Images
One subtle point to be cautious about when using web images is the risk of inadvertently including images from ground truth test sets in the web training data.
3 To be released.
Dataset             Training Data               Acc.
CUB [60]            CUB-GT                      84.4
                    Web (raw)                   87.7
                    Web (filtered)              89.0
                    L-Bird                      91.9
                    L-Bird(MC)                  92.3
                    L-Bird+CUB-GT               92.2
                    L-Bird+CUB-GT(MC)           92.8
Birdsnap [4]        Birdsnap-GT                 78.2
                    Web (raw)                   76.1
                    Web (filtered)              78.2
                    L-Bird                      82.8
                    L-Bird(MC)                  85.4
                    L-Bird+Birdsnap-GT          83.9
                    L-Bird+Birdsnap-GT(MC)      85.4
FGVC [38]           FGVC-GT                     88.1
                    Web (raw)                   90.7
                    Web (filtered)              91.1
                    L-Aircraft                  90.9
                    L-Aircraft(MC)              93.4
                    L-Aircraft+FGVC-GT          94.5
                    L-Aircraft+FGVC-GT(MC)      95.9
Stanford Dogs [27]  Stanford-GT                 80.6
                    Web (raw)                   78.5
                    Web (filtered)              78.4
                    L-Dog                       78.4
                    L-Dog(MC)                   80.8
                    L-Dog+Stanford-GT           84.0
                    L-Dog+Stanford-GT(MC)       85.9

Table 1. Comparison of data source used during training with recognition performance, given in terms of top-1 accuracy. "CUB-GT" indicates training only on the ground truth CUB training set, "Web (raw)" trains on all search results for CUB categories, and "Web (filtered)" applies filtering between categories within a domain (birds). L-Bird denotes training first on L-Bird, then fine-tuning on the subset of categories under evaluation (i.e. the filtered web images), and L-Bird+CUB-GT indicates training on L-Bird, then fine-tuning on Web (filtered), and finally fine-tuning again on CUB-GT. Similar notation is used for the other datasets. "(MC)" indicates using multiple crops at test time (see text for details). We note that only the rows with "-GT" make use of the ground truth training set; all other rows rely solely on noisy web imagery.
To deal with this concern, we performed an aggressive deduplication procedure with all ground truth test sets and their corresponding web images. This process follows Wang et al. [64], which is a state-of-the-art method for learning a similarity metric between images. We tuned this procedure for high near-duplicate recall, manually verifying its quality. More details are included in Sec. B.
# 5.3 Main Results
We present our main recognition results in Tab. 1, where we compare performance when the training set consists of either the ground truth training set, raw web images of the categories in the corresponding evaluation dataset, web images after applying our filtering strategy, all web images of a particular domain, or all images including even the ground truth training set.
On CUB-200-2011 [60], the smallest dataset we consider, even using raw search results as training data results in a better model than the annotated training set, with filtering further improving results by 1.3%. For Birdsnap [4], the largest of the ground truth datasets we evaluate on, raw data mildly underperforms using the ground truth training set, though filtering improves results to be on par. On both CUB and Birdsnap, training first on the very large set of categories in L-Bird results in dramatic improvements, improving performance on CUB further by 2.9% and on Birdsnap by 4.6%. This is an important point:
even if the end task consists of classifying only a small number of categories, training with more fine-grained categories yields significantly more effective networks. This can also be thought of as a form of transfer learning within the same fine-grained domain, allowing features learned on a related task to be useful for the final classification problem. When permitted access to the annotated ground truth training sets for additional fine-tuning and domain transfer, results increase by another 0.3% on CUB and 1.1% on Birdsnap.
For the aircraft categories in FGVC, results are largely similar but weaker in magnitude. Training on raw web data results in a significant gain of 2.6% compared to using the curated training set, and filtering, which did not affect the size of the training set much (Fig. 5), changes results only slightly in a positive direction. Counterintuitively, pre-training on a larger set of aircraft does not improve results on FGVC. Our hypothesis for the difference between birds and aircraft in this regard is this: since there are many more species of birds in L-Bird than there are aircraft in L-Aircraft (10,982 vs. 409), not only is the training size of L-Bird larger, but each training example provides stronger information because it distinguishes between a larger set of mutually exclusive categories. Nonetheless, when access to the curated training set is available for fine-tuning, performance dramatically increases to 94.5%. On Stanford Dogs we see results similar to FGVC, though for dogs we happen to see a mild loss when comparing to the ground truth training set, not much difference with filtering or using L-Dog, and a large boost from adding in the ground truth training set.
An additional factor that can influence the performance of web models is domain shift: if images in the ground truth test set have very different visual properties compared to web images, performance will naturally differ. Similarly, if category names or definitions within a dataset are even mildly off, web-based methods will be at a disadvantage without access to the ground truth training set. Adding the ground truth training data fixes this domain shift, making web-trained models quickly recover, with a particularly large gain if the network has already learned a good representation, matching the pattern of results for Stanford Dogs.
Limits of Web-Trained Models. To push our models to their limits, we additionally evaluate using 144 image crops at test time, averaging predictions across each crop, denoted "(MC)" in Tab. 1. This brings results up to 92.3%/92.8% on CUB (without/with CUB training data), 85.4%/85.4% on Birdsnap, 93.4%/95.9% on FGVC, and 80.8%/85.9% on Stanford Dogs. We note that this is close to human expert performance on CUB, which is estimated to be between 93% [6] and 95.6% [58].
Comparison with Prior Work. We compare our results to prior work on CUB, the most competitive fine-grained dataset, in Tab. 2. While even our baseline model using only ground truth data from Tab. 1 was at state-of-the-art levels, by forgoing the CUB training set and only training using noisy data from the web, our models greatly outperform all prior work. On FGVC, which is more recent and on which fewer works have evaluated, the best prior performing
Method               Training Annotations   Acc.
Alignments [21]      GT                     53.6
PDD [51]             GT+BB+Parts            60.6
PB R-CNN [75]        GT+BB+Parts            73.9
Weak Sup. [78]       GT                     75.0
PN-DCN [5]           GT+BB+Parts            75.7
Two-Level [66]       GT                     77.9
Consensus [49]       GT+BB+Parts            78.3
NAC [50]             GT                     81.0
FG-Without [29]      GT+BB                  82.0
STN [26]             GT                     84.1
Bilinear [36]        GT                     84.1
Augmenting [69]      GT+BB+Parts+Web        84.6
Noisy Data+CNN [55]  Web                    92.3

Table 2. Comparison with prior work on CUB-200-2011 [60]. We only include methods which use no annotations at test time. Here "GT" refers to using the ground truth category labels in the training set of CUB, "BB" additionally indicates using bounding boxes, and "Parts" additionally uses part annotations.
method we are aware of is the Bilinear CNN model of Lin et al. [36], which has accuracy 84.1% (ours is 93.4% without FGVC training data, 95.9% with), and on Birdsnap, which is even more recent, the best performing method we are aware of that uses no extra annotations during test time is the original 66.6% by Berg et al. [4] (ours is 85.4%). On Stanford Dogs, the most competitive related work is [46], which uses an attention-based recurrent neural network to achieve 76.8% (ours is 80.8% without ground truth training data, 85.9% with).
We identify two key reasons for these large improvements. The first is the use of a strong generic classifier [55]. A number of prior works have identified the importance of having well-trained CNNs as components in their systems for fine-grained recognition [36,26,29,75,5], which our work provides strong evidence for. On all four evaluation datasets, our CNN of choice [55], trained on the ground truth training set alone and without any architectural modifications, performs at levels at or above the previous state of the art. The second reason for improvement is the large utility of noisy web data for fine-grained recognition, which is the focus of this work.
We finally remind the reader that our work focuses on the application-level problem of recognizing a given set of fine-grained categories, which might not come with their own expert-annotated training images. The use of existing test sets serves to provide an accurate measure of performance and put our work in a larger context, but results may not be strictly comparable with prior work that operates within a single given dataset.
Comparison with Active Learning. We compare using noisy web data with a more traditional active learning-based approach (Sec. 4) under several different settings in Tab. 3. We first verify the efficacy of active learning itself: when training the network from scratch (i.e. no fine-tuning), active learning improves performance by up to 15.6%, and when fine-tuning, results still improve by 1.5%. How does active learning compare to using web data? Purely using filtered web data compares favorably to non-fine-tuned active learning methods (4.4% better), though it lags somewhat behind the fine-tuned models. To better compare
Table 3. Active learning-based results on Stanford Dogs [27], presented in terms of top-1 accuracy. Methods with "(scratch)" indicate training from scratch and "(ft)" indicates fine-tuning from a network pre-trained on ILSVRC, with web models also fine-tuned. "subsample" refers to downsampling the active learning data to be the same size as the filtered web images. Note that Stanford-GT is a subset of the active learning data, which is denoted "A.L.".

Training Procedure                 Acc.
Stanford-GT (scratch)              58.4
A.L., one round (scratch)          65.8
A.L., two rounds (scratch)         74.0
Stanford-GT (ft)                   80.6
A.L., one round (ft)               81.6
A.L., one round (ft, subsample)    78.8
A.L., two rounds (ft)              82.1
Web (filtered)                     78.4
Web (filtered) + Stanford-GT       82.6
the active learning and noisy web data, we factor out the difference in scale by performing an experiment with subsampled active learning data, setting it to be the same size as the filtered web data. Surprisingly, performance is very similar, with only a 0.4% advantage for the cleaner, annotated active learning data, highlighting the effectiveness of noisy web data despite the lack of manual annotation. If we furthermore augment the filtered web images with the Stanford Dogs training set, which the active learning method notably used both as training data and as its seed set of images, performance improves to be even slightly better than the manually-annotated active learning data (0.5% improvement).
These experiments indicate that, while more traditional active learning-based approaches to expanding datasets are effective ways to improve recognition performance given a suitable budget, simply using noisy images retrieved from the web can be nearly as good, if not better. As web images require no manual annotation and are openly available, we believe this is strong evidence for their use in solving fine-grained recognition.
Very Large-Scale Fine-Grained Recognition. A key advantage of using noisy data is the ability to scale to large numbers of fine-grained classes. However, this poses a challenge for evaluation: it is infeasible to manually annotate images with one of the 10,982 categories in L-Bird or the 14,553 categories in L-Butterfly, and it would even be very time-consuming to annotate images with the 409 categories in L-Aircraft. Therefore, we turn to an approximate evaluation, establishing a rough estimate of true performance. Specifically, we query Flickr for up to 25 images of each category, keeping only those images whose title strictly contains the name of each category, and aggressively deduplicate these images with our training set in order to ensure a fair evaluation. Although this is not a perfect evaluation set, and is thus an area where annotation of fine-grained datasets is particularly valuable [58], we find that it is remarkably clean on the surface: based on a 1,000-image estimate, we measure the cross-domain noise of L-Bird at only 1%, L-Butterfly at 2.3%, and L-Aircraft at 4.5%. An independent evaluation [58] further measures all sources of noise combined to be only 16% when searching
Fig. 10. Classification results on very large-scale fine-grained recognition. From top to bottom, depicted are examples of categories in L-Bird, L-Butterfly, and L-Aircraft, along with their category name. The first examples in each row are correctly predicted by our models, while the last two examples in each row are errors, with our prediction in grey and the correct category (according to Flickr metadata) printed below.
for bird species. In total, this yields 42,115 testing images for L-Bird, 42,046 for L-Butterfly, and 3,131 for L-Aircraft.
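The title-based filter described above can be sketched as follows. The `flickr_results` mapping and the case-insensitive match are assumptions for illustration; the real pipeline also deduplicates these candidates against the training images (omitted here).

```python
# Approximate evaluation-set construction: keep up to 25 Flickr results per
# category whose title contains the category name.
def build_eval_set(flickr_results, per_category=25):
    """flickr_results: dict mapping category -> list of (title, url) pairs."""
    eval_set = {}
    for category, hits in flickr_results.items():
        kept = [url for title, url in hits
                if category.lower() in title.lower()]
        eval_set[category] = kept[:per_category]
    return eval_set
```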
Given the difficulty and noise, performance is surprisingly high: on L-Bird top-1 accuracy is 73.1%/75.8% (1/144 crops), for L-Butterfly it is 65.9%/68.1%, and for L-Aircraft it is 72.7%/77.5%. Corresponding mAP numbers, which are better suited for handling class imbalance, are 61.9, 54.8, and 70.5, reported for the single-crop setting. We show qualitative results in Fig. 10. These categories span multiple continents in space (birds, butterflies) and decades in time (aircraft), demonstrating the breadth of categories in the world that can be recognized using only public sources of noisy fine-grained data. To the best of our knowledge, these results represent the largest number of fine-grained categories distinguished by any single system to date.
How Much Data is Really Necessary? In order to better understand the utility of noisy web data for fine-grained recognition, we perform a control experiment on the web data for CUB. Using the filtered web images as a base, we train models using progressively larger subsets of the results as training data, taking the top-ranked images across categories for each experiment. Performance versus the amount of training data is shown in Fig. 11. Surprisingly, relatively few web images are required to do as well as training on the CUB training set, and adding more noisy web images always helps, even at the limit of search results. Based on this analysis, we estimate that one noisy web image for CUB categories is "worth" 0.507 ground truth training images [57].
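As a back-of-the-envelope reading of this exchange rate: at 0.507 ground truth images per web image, matching the 5,994-image CUB training set requires roughly 5,994 / 0.507 noisy images. The arithmetic below is illustrative only; the paper derives the rate from the curves in Fig. 11.

```python
# Rough "worth" arithmetic, assuming the 0.507 exchange rate from the text.
cub_gt_images = 5994                       # CUB training set size
web_images_to_match = cub_gt_images / 0.507
print(round(web_images_to_match))          # ~11,822 noisy web images
```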
Error Analysis. Given the high performance of these models, what room is left for improvement? In Fig. 12 we show the taxonomic distribution of the remaining
Fig. 11. Number of web images used for training vs. performance on CUB-200-2011 [60]. We vary the amount of web training data in multiples of the CUB training set size (5,994 images). Also shown is performance when training on the ground truth CUB training set (CUB-GT).

Fig. 12. The errors on L-Bird that fall in each taxonomic rank, represented as a portion of all errors made. For each error made, we calculate the taxonomic rank of the least common ancestor of the predicted and test category.
errors on L-Bird. The vast majority of errors (74.3%) are made between very similar classes at the genus level, indicating that most of the remaining errors are indeed between extremely similar categories, and only very few errors (7.4%) are made between dissimilar classes, whose least common ancestor is the "Aves" (i.e. bird) taxonomic class. This suggests that most errors still made by the models are fairly reasonable, corroborating the qualitative results of Fig. 10.
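The least-common-ancestor computation behind Fig. 12 can be sketched as follows. The `parent` and `rank` mappings are assumptions about how the taxonomy is stored; the paper does not specify its data structures.

```python
# Taxonomic error analysis: rank of the least common ancestor (LCA) of the
# predicted and true species.
def ancestors(taxon, parent):
    """Chain from a taxon up to the root, inclusive."""
    chain = [taxon]
    while taxon in parent:
        taxon = parent[taxon]
        chain.append(taxon)
    return chain

def lca_rank(pred, true, parent, rank):
    true_anc = set(ancestors(true, parent))
    for taxon in ancestors(pred, parent):  # first shared ancestor upward
        if taxon in true_anc:
            return rank[taxon]             # e.g. "genus", "family", ...
    return None
```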
# 6 Discussion
In this work we have demonstrated the utility of noisy data for solving the problem of fine-grained recognition. We found that the combination of a generic classification model and web data, filtered with a simple strategy, was surprisingly effective at discriminating fine-grained categories. This approach performs favorably when compared to a more traditional active learning method for expanding datasets, but is even more scalable, which we demonstrated experimentally on up to 14,553 fine-grained categories. One potential limitation of the approach is the availability of imagery for categories either not found or not described in the public domain, for which an alternative method such as active learning may be better suited. Another limitation is the current focus on classification, which may be problematic if applications arise where multiple objects are present or localization is otherwise required. Nonetheless, with these insights on the unreasonable effectiveness of noisy data, we are optimistic about applications of fine-grained recognition in the near future.
# 7 Acknowledgments
We thank Gal Chechik, Chuck Rosenberg, Zhen Li, Timnit Gebru, Vignesh Ramanathan, Oliver Groth, and the anonymous reviewers for valuable feedback.
# Appendix
# A Active Learning Details
Here we provide additional details for our active learning baseline, including further description of the interface, improvements in rater quality as a result of this interface, statistics of the number of positives obtained per class in each round of active learning, and qualitative examples of images obtained.
# A.1 Interface
Designing an effective rater tool is of critical importance when getting non-experts to rate fine-grained categories. We seek to give the raters simple decisions and to provide them with as much information as possible to make the correct decision in a generic and scalable way. Fig. 13 shows our rater interface, which includes the following components to serve this purpose:
Instructional positive images inform the rater of within-class variation. These images are obtained from the seed dataset input to active learning. Many rater tools only provide this (e.g. [35]), which does not provide a clear class boundary concept on its own. We also provide links to Google Image Search and encourage raters to research the full space of examples of the class concept.
Instructional negative images help raters define the decision boundary between the right class and easily confused other classes. We show the top two most confused categories, determined by the active learning's current model. This aids in classification: in Fig. 13, if the rater studies the positive class "Bernese mountain dog", they may form a mental decision rule based on fur color pattern alone. However, when studying the negative, easily confused classes "Entlebucher" and "Appenzeller", the rater can refine the decision based on more appropriate fine-grained distinctions; in this case, hair length is a key discriminative attribute.
Batching questions by class has the benefit of allowing raters to learn about and focus on one fine-grained category at a time. Batching questions may also allow raters to build a better mental model of the class via a human form of semi-supervised learning, although this phenomenon is more difficult to isolate and measure.
Golden questions for rater feedback and quality control. We use the original supervised seed dataset to add a number of known correct and incorrect images in the batch to be rated, which we use to give short- and long-term feedback to raters. Short-term feedback comes in the form of a pop-up window informing the rater the moment they make an incorrect judgment, allowing
Fig. 13. Our tool for binary annotation of fine-grained categories. Instructional positive images are provided in the upper left and negatives are provided in the lower left. This is a higher-resolution version of the figure in the main text.
them to update their mental model while working on the task. Long-term feedback summarizes a day's worth of rating to give the rater a summary of overall performance.
# A.2 Rater Quality Improvements
To determine the impact of our annotation framework improvements for fine-grained categories, we performed a control experiment with a more standard crowdsourcing interface, which provides only a category name, description, and image search link. Annotation quality is determined on a set of difficult binary questions (images mistaken by a classifier on the Stanford Dogs test set). Using our interface, annotators were both more accurate and faster, with a 16.5% relative reduction in error (from 28.5% to 23.8%) and a 2.4x improvement in speed (4.1 to 1.68 seconds per image).
# A.3 Annotation Statistics and Examples
In Fig. 14 we show the distribution of images judged correct by human annotators after active learning selection of 1,000 images per class for Stanford Dogs classes. The categories are sorted by the number of positive training examples collected in the first iteration of active learning. The 10 categories with the most positive training examples collected after both rounds of mining are: Pug, Golden Retriever, Boston Terrier, West Highland White Terrier, Labrador Retriever, Boxer, Maltese, German Shepherd, Pembroke Welsh Corgi, and Beagle. The 10 categories with the fewest positive training examples are: Kerry Blue Terrier, Komondor, Irish Water Spaniel, Curly Coated Retriever, Bouvier des Flandres, Clumber Spaniel, Bedlington Terrier, Afghan Hound, Affenpinscher,
Fig. 14. Counts of positive training examples obtained per category from active learning, for the Stanford Dogs dataset.
and Sealyham Terrier. These counts are influenced by the true counts of categories in the YFCC100M [56] dataset and our active learner's ability to find them.
In Fig. 15, we show positive training examples obtained from active learning for select categories, comparing examples obtained in iterations 1 and 2.
# B Deduplication Details
Here we provide more details on our method for removing any ground truth images from web search results, which we took great care in doing. Our general approach follows Wang et al. [64], which is a state-of-the-art method for learning a similarity metric between images. To scale [64] to the millions of images considered in this work, we binarize the output for an efficient hashing-based exact search. Hamming distance corresponds to dissimilarity: identical images have distance 0; images with different resolutions, aspect ratios, or slightly different crops tend to have distances of up to roughly 4 and 8; and more substantial variations, e.g. images of different views from the same photographer, or very different crops, roughly have distances up to 10, beyond which the vast majority of image pairs are actually distinct. Qualitative examples are provided in Fig. 16. We tuned our dissimilarity threshold for recall and manually verified it; the goal is to ensure that images that have even a moderate degree of similarity to test images did not appear in our training set. For example, of a sample of 183 image pairs at distance 16 in the large-scale bird experiments, zero were judged by a human to be too similar, and we used a still more conservative threshold of 18. In the case of L-Bird, 2,996 images were removed as being too similar to an image in either the CUB or Birdsnap test set.
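The distance test can be sketched as below. The learned embedding itself is not reproduced; `codes` are assumed to be 0/1 numpy arrays of binarized embeddings, and the brute-force scan is for clarity only, whereas the paper uses an exact hashing-based search to reach millions of images.

```python
# Near-duplicate flagging by Hamming distance on binarized embeddings,
# using the conservative radius of 18 mentioned for the L-Bird experiments.
import numpy as np

def near_duplicates(train_codes, test_codes, radius=18):
    """Return indices of training images within `radius` of any test image."""
    dup = set()
    for t in test_codes:
        dists = np.count_nonzero(train_codes != t, axis=1)  # Hamming distance
        dup.update(np.nonzero(dists <= radius)[0].tolist())
    return sorted(dup)
```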
Fig. 15. Positive training examples obtained from active learning, from the YFCC100M dataset, for select categories from Stanford Dogs.
# C Remaining Errors: Qualitative
Here we highlight one type of error that our image search model made on CUB [62]: finding errors in the test set. We show an example in Fig. 17, where the true species for each image is actually a bird species not in the 200 CUB bird species. This highlights one potential advantage of our approach: by relying on category names, web training data is tied more strongly to the semantic meaning of a category instead of simply a 1-of-K label. This also provides evidence for the "domain shift" hypothesis when fine-tuning on ground truth datasets, as irregularities like this can be learned, resulting in higher performance on the benchmark dataset under consideration.
# D Network Visualization
In order to examine the impact of web-trained models for fine-grained recognition from another vantage point, here we present one visualization of network internals. Specifically, in Fig. 18 we visualize gradients with respect to the square of the norm of the last convolutional layer in the network, backpropagated into the input image, and visualized as a function of training data. This provides some indication of the importance of each pixel with respect to the overall network activation. Though these examples are only qualitative, we observe that the gradients for the network trained on L-Bird are generally more focused on the bird when compared to gradients for the network trained on CUB, indicating that the network has learned a better representation of which parts of an image are discriminative.
Fig. 16. Example pairs of images and their distance according to our deduplication method. Distances 1-3 have slight pixel-level differences due to compression, and the image pair at distance 4 has different scales. At distances 5 and 6 the images are of different crops, with distance 6 additionally exhibiting slight lighting differences. The images at distance 7 have slightly different scales and compression, at distance 8 there are cropping and lighting differences, and distance 9 features different crops and additional text in the corner of one photo. At distance 10 and higher we have image pairs which have high-level visual similarities but are distinct.
Fig. 17. Examples of mistakes made by a web-trained model on the CUB-200-2011 [62] test set, whose ground truth label is "Hooded Oriole", but which are actually of another species not in CUB, "Black-Hooded Oriole".
Fig. 18. Gradients with respect to the squared norm of the last convolutional layer on ten random CUB test set images. Each row contains, in order, an input image, gradients for a model trained on the CUB-200 [62] training set, and gradients for a model trained on the larger L-Bird. Gradients have been scaled to fit in [0,255]. Figure best viewed in high resolution on a monitor.
# References
1. Angelova, A., Zhu, S., Lin, Y.: Image segmentation for large-scale subcategory ï¬ower recognition. In: Workshop on Applications of Computer Vision (WACV). pp. 39â45. IEEE (2013)
2. Balcan, M.F., Broder, A., Zhang, T.: Margin based active learning. In: Learning Theory, pp. 35â50. Springer (2007)
3. Berg, T., Belhumeur, P.N.: Poof: Part-based one-vs.-one features for ï¬ne-grained categorization, face veriï¬cation, and attribute estimation. In: Computer Vision and Pattern Recognition (CVPR). pp. 955â962. IEEE (2013)
4. Berg, T., Liu, J., Lee, S.W., Alexander, M.L., Jacobs, D.W., Belhumeur, P.N.: Birdsnap: Large-scale ï¬ne-grained visual categorization of birds. In: Computer Vi- sion and Pattern Recognition (CVPR) (June 2014)
5. Branson, S., Van Horn, G., Perona, P., Belongie, S.: Improved bird species recog- nition using pose normalized deep convolutional nets. In: British Machine Vision Conference (BMVC) (2014)
6. Branson, S., Van Horn, G., Wah, C., Perona, P., Belongie, S.: The ignorant led by the blind: A hybrid humanâmachine vision system for ï¬ne-grained categorization. International Journal of Computer Vision (IJCV) pp. 1â27 (2014)
7. Chai, Y., Lempitsky, V., Zisserman, A.: Bicos: A bi-level co-segmentation method for image classiï¬cation. In: International Conference on Computer Vision (ICCV). IEEE (2011)
8. Chai, Y., Lempitsky, V., Zisserman, A.: Symbiotic segmentation and part local- ization for ï¬ne-grained categorization. In: International Conference on Computer Vision (ICCV). pp. 321â328. IEEE (2013)
9. Chai, Y., Rahtu, E., Lempitsky, V., Van Gool, L., Zisserman, A.: Tricos: A tri-level class-discriminative co-segmentation method for image classiï¬cation. In: European Conference on Computer Vision (ECCV), pp. 794â807. Springer (2012)
10. Chen, X., Gupta, A.: Webly supervised learning of convolutional networks. In: International Conference on Computer Vision (ICCV). IEEE (2015)
11. Chen, X., Shrivastava, A., Gupta, A.: Neil: Extracting visual knowledge from web data. In: International Conference on Computer Vision (ICCV). pp. 1409â1416. IEEE (2013)
12. Collins, B., Deng, J., Li, K., Fei-Fei, L.: Towards scalable dataset construction: An active learning approach. In: European Conference on Computer Vision (ECCV), pp. 86â98. Springer (2008)
13. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A Large- Scale Hierarchical Image Database. In: Computer Vision and Pattern Recognition (CVPR) (2009)
14. Deng, J., Krause, J., Fei-Fei, L.: Fine-grained crowdsourcing for ï¬ne-grained recog- nition. In: Computer Vision and Pattern Recognition (CVPR). pp. 580â587 (2013) 15. Divvala, S.K., Farhadi, A., Guestrin, C.: Learning everything about anything: Webly-supervised visual concept learning. In: Computer Vision and Pattern Recog- nition (CVPR). pp. 3270â3277. IEEE (2014)
16. Duan, K., Parikh, D., Crandall, D., Grauman, K.: Discovering localized at- tributes for ï¬ne-grained recognition. In: Computer Vision and Pattern Recognition (CVPR). pp. 3474â3481. IEEE
17. Erkan, A.N.: Semi-supervised learning via generalized maximum entropy. Ph.D. thesis, New York University (2010)
18. Farrell, R., Oza, O., Zhang, N., Morariu, V.I., Darrell, T., Davis, L.S.: Birdlets: Subordinate categorization using volumetric primitives and pose-normalized ap- pearance. In: International Conference on Computer Vision (ICCV). pp. 161â168. IEEE (2011)
19. Fergus, R., Fei-Fei, L., Perona, P., Zisserman, A.: Learning object categories from internet image searches. Proceedings of the IEEE 98(8), 1453â1466 (2010)
20. Gavves, E., Fernando, B., Snoek, C.G., Smeulders, A.W., Tuytelaars, T.: Fine- grained categorization by alignments. In: International Conference on Computer Vision (ICCV). pp. 1713â1720. IEEE
21. Gavves, E., Fernando, B., Snoek, C.G., Smeulders, A.W., Tuytelaars, T.: Local alignments for ï¬ne-grained categorization. International Journal of Computer Vi- sion (IJCV) pp. 1â22 (2014)
22. Goering, C., Rodner, E., Freytag, A., Denzler, J.: Nonparametric part transfer for ï¬ne-grained recognition. In: Computer Vision and Pattern Recognition (CVPR). pp. 2489â2496. IEEE (2014)
23. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Computer Vision and Pattern Recognition (CVPR). IEEE (2016)
24. Hinchliï¬, C.E., Smith, S.A., Allman, J.F., Burleigh, J.G., Chaudhary, R., Coghill, L.M., Crandall, K.A., Deng, J., Drew, B.T., Gazis, R., Gude, K., Hibbett, D.S., Katz, L.A., Laughinghouse, H.D., McTavish, E.J., Midford, P.E., Owen, C.L., Ree, R.H., Rees, J.A., Soltis, D.E., Williams, T., Cranston, K.A.: Synthesis of phy- logeny and taxonomy into a comprehensive tree of life. Proceedings of the National Academy of Sciences (2015), http://www.pnas.org/content/early/2015/09/16/ 1423041112.abstract
25. Ioï¬e, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning (ICML) (2015)
26. Jaderberg, M., Simonyan, K., Zisserman, A., Kavukcuoglu, K.: Spatial transformer networks. In: Neural Information Processing Systems (NIPS) (2015)
27. Khosla, A., Jayadevaprakash, N., Yao, B., Fei-Fei, L.: Novel dataset for ï¬ne- grained image categorization. In: First Workshop on Fine-Grained Visual Cat- egorization, Conference on Computer Vision and Pattern Recognition (CVPR). Colorado Springs, CO (June 2011)
28. Krause, J., Gebru, T., Deng, J., Li, L.J., Fei-Fei, L.: Learning features and parts for ï¬ne-grained recognition. In: International Conference on Pattern Recognition (ICPR). Stockholm, Sweden (August 2014)
29. Krause, J., Jin, H., Yang, J., Fei-Fei, L.: Fine-grained recognition without part annotations. In: Conference on Computer Vision and Pattern Recognition (CVPR). IEEE
30. Krause, J., Stark, M., Deng, J., Fei-Fei, L.: 3d object representations for ï¬ne- grained categorization. In: 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13). IEEE (2013)
31. Kumar, N., Belhumeur, P.N., Biswas, A., Jacobs, D.W., Kress, W.J., Lopez, I.C., Soares, J.V.: Leafsnap: A computer vision system for automatic plant species iden- tiï¬cation. In: European Conference on Computer Vision (ECCV), pp. 502â516. Springer (2012)
32. LeCun, Y., Bottou, L., Bengio, Y., Haï¬ner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278â2324 (1998)
33. Lewis, D.D., Catlett, J.: Heterogeneous uncertainty sampling for supervised learn- ing. In: International Conference on Machine Learning (ICML). pp. 148â156 (1994)
34. Li, L.J., Fei-Fei, L.: Optimol: automatic online picture collection via incremental model learning. International Journal of Computer Vision (IJCV) 88(2), 147â168 (2010)
35. Lin, T., Maire, M., Belongie, S., Bourdev, L.D., Girshick, R.B., Hays, J., Perona, P., Ramanan, D., Doll´ar, P., Zitnick, C.L.: Microsoft COCO: common objects in context. CoRR abs/1405.0312 (2014), http://arxiv.org/abs/1405.0312
36. Lin, T.Y., RoyChowdhury, A., Maji, S.: Bilinear cnn models for ï¬ne-grained visual recognition. In: International Conference on Computer Vision (ICCV). IEEE 37. Liu, J., Kanazawa, A., Jacobs, D., Belhumeur, P.: Dog breed classiï¬cation using part localization. In: European Conference on Computer Vision (ECCV), pp. 172â 185. Springer (2012)
38. Maji, S., Kannala, J., Rahtu, E., Blaschko, M., Vedaldi, A.: Fine-grained visual classiï¬cation of aircraft. Tech. rep. (2013)
39. Mnih, V., Hinton, G.E.: Learning to label aerial images from noisy data. In: Inter- national Conference on Machine Learning (ICML). pp. 567â574 (2012)
40. Mozafari, B., Sarkar, P., Franklin, M., Jordan, M., Madden, S.: Scaling up crowd- sourcing to very large datasets: a case for active learning. Proceedings of the VLDB Endowment 8(2), 125â136 (2014)
41. Nilsback, M.E., Zisserman, A.: A visual vocabulary for ï¬ower classiï¬cation. In: Computer Vision and Pattern Recognition (CVPR). vol. 2, pp. 1447â1454. IEEE (2006)
42. Pu, J., Jiang, Y.G., Wang, J., Xue, X.: Which looks like which: Exploring inter- class relationships in ï¬ne-grained visual categorization. In: European Conference on Computer Vision (ECCV), pp. 425â440. Springer (2014)
43. Reed, S., Lee, H., Anguelov, D., Szegedy, C., Erhan, D., Rabinovich, A.: Train- ing deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596 (2014)
44. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) pp. 1â42 (April 2015)
45. Schroï¬, F., Criminisi, A., Zisserman, A.: Harvesting image databases from the web. Pattern Analysis and Machine Intelligence (PAMI) 33(4), 754â766 (2011)
46. Sermanet, P., Frome, A., Real, E.: Attention for ï¬ne-grained categorization. arXiv preprint arXiv:1412.7054 (2014)
47. Settles, B.: Active learning literature survey. University of Wisconsin, Madison 52(55-66), 11 (2010)
48. Settles, B., Craven, M., Ray, S.: Multiple-instance active learning. In: Advances in Neural Information Processing Systems (NIPS). pp. 1289â1296 (2008)
49. Shih, K.J., Mallya, A., Singh, S., Hoiem, D.: Part localization using multi-proposal consensus for ï¬ne-grained categorization. In: British Machine Vision Conference (BMVC) (2015)
50. Simon, M., Rodner, E.: Neural activation constellations: Unsupervised part model discovery with convolutional networks. In: ICCV (2015)
51. Simon, M., Rodner, E., Denzler, J.: Part detector discovery in deep convolutional neural networks. In: Asian Conference on Computer Vision (ACCV). vol. 2, pp. 162â177 (2014)
52. Sukhbaatar, S., Fergus, R.: Learning from noisy labels with deep neural networks. arXiv preprint arXiv:1406.2080 (2014)
53. Szegedy, C., Ioï¬e, S., Vanhoucke, V.: Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261 (2016)
54. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Computer Vision and Pattern Recognition (CVPR) (2015)
55. Szegedy, C., Vanhoucke, V., Ioï¬e, S., Shlens, J., Wojna, Z.: Rethinking the incep- tion architecture for computer vision. In: Computer Vision and Pattern Recogni- tion (CVPR). IEEE (2016)
56. Thomee, B., Shamma, D.A., Friedland, G., Elizalde, B., Ni, K., Poland, D., Borth, D., Li, L.J.: The new data and new challenges in multimedia research. arXiv preprint arXiv:1503.01817 (2015)
57. Torralba, A., Efros, A., et al.: Unbiased look at dataset bias. In: Computer Vision and Pattern Recognition (CVPR). pp. 1521â1528. IEEE (2011)
58. Van Horn, G., Branson, S., Farrell, R., Haber, S., Barry, J., Ipeirotis, P., Perona, P., Belongie, S.: Building a bird recognition app and large scale dataset with citizen scientists: The ï¬ne print in ï¬ne-grained dataset collection. In: Computer Vision and Pattern Recognition (CVPR). IEEE (2015)
59. Vedaldi, A., Mahendran, S., Tsogkas, S., Maji, S., Girshick, B., Kannala, J., Rahtu, E., Kokkinos, I., Blaschko, M.B., Weiss, D., Taskar, B., Simonyan, K., Saphra, N., Mohamed, S.: Understanding objects in detail with ï¬ne-grained attributes. In: Computer Vision and Pattern Recognition (CVPR) (2014)
60. Wah, C., Branson, S., Welinder, P., Perona, P., Belongie, S.: The Caltech-UCSD Birds-200-2011 Dataset. Tech. Rep. CNS-TR-2011-001, California Institute of Technology (2011)
61. Wah, C., Belongie, S.: Attribute-based detection of unfamiliar classes with humans in the loop. In: Computer Vision and Pattern Recognition (CVPR). pp. 779â786. IEEE (2013)
62. Wah, C., Branson, S., Perona, P., Belongie, S.: Multiclass recognition and part localization with humans in the loop. In: International Conference on Computer Vision (ICCV). pp. 2524â2531. IEEE (2011)
63. Wah, C., Horn, G., Branson, S., Maji, S., Perona, P., Belongie, S.: Similarity com- parisons for interactive ï¬ne-grained categorization. In: Computer Vision and Pat- tern Recognition (CVPR) (2014)
64. Wang, J., Song, Y., Leung, T., Rosenberg, C., Wang, J., Philbin, J., Chen, B., Wu, Y.: Learning ï¬ne-grained image similarity with deep ranking. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1386â1393 (2014)
65. Welinder, P., Branson, S., Mita, T., Wah, C., Schroï¬, F., Belongie, S., Perona, P.: Caltech-UCSD Birds 200. Tech. Rep. CNS-TR-2010-001, California Institute of Technology (2010)
66. Xiao, T., Xu, Y., Yang, K., Zhang, J., Peng, Y., Zhang, Z.: The application of two-level attention models in deep convolutional neural network for ï¬ne-grained image classiï¬cation. In: Computer Vision and Pattern Recognition (CVPR). IEEE 67. Xiao, T., Xia, T., Yang, Y., Huang, C., Wang, X.: Learning from massive noisy labeled data for image classiï¬cation. In: Computer Vision and Pattern Recognition (CVPR). IEEE
68. Xie, S., Yang, T., Wang, X., Lin, Y.: Hyper-class augmented and regularized deep learning for ï¬ne-grained image classiï¬cation. In: Computer Vision and Pattern Recognition (CVPR). IEEE
69. Xu, Z., Huang, S., Zhang, Y., Tao, D.: Augmenting strong supervision using web data for ï¬ne-grained categorization. In: International Conference on Computer Vision (ICCV) (2015)
70. Yang, L., Luo, P., Loy, C.C., Tang, X.: A large-scale car dataset for ï¬ne-grained categorization and veriï¬cation. In: Computer Vision and Pattern Recognition (CVPR). IEEE
71. Yang, S., Bo, L., Wang, J., Shapiro, L.G.: Unsupervised template learning for ï¬ne- grained object recognition. In: Advances in Neural Information Processing Systems (NIPS). pp. 3122â3130 (2012)
72. Yao, B., Bradski, G., Fei-Fei, L.: A codebook-free and annotation-free approach for ï¬ne-grained image categorization. In: Computer Vision and Pattern Recognition (CVPR). pp. 3466â3473. IEEE (2012)
73. Yao, B., Khosla, A., Fei-Fei, L.: Combining randomization and discrimination for ï¬ne-grained image categorization. In: Computer Vision and Pattern Recognition (CVPR). pp. 1577â1584. IEEE (2011)
74. Yu, F., Zhang, Y., Song, S., Seï¬, A., Xiao, J.: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
75. Zhang, N., Donahue, J., Girshick, R., Darrell, T.: Part-based r-cnns for ï¬ne-grained category detection. In: European Conference on Computer Vision (ECCV), pp. 834â849. Springer (2014)
76. Zhang, N., Farrell, R., Darrell, T.: Pose pooling kernels for sub-category recogni- tion. In: Computer Vision and Pattern Recognition (CVPR). pp. 3665â3672. IEEE (2012)
77. Zhang, N., Farrell, R., Iandola, F., Darrell, T.: Deformable part descriptors for ï¬ne-grained recognition and attribute prediction. In: International Conference on Computer Vision (ICCV). pp. 729â736. IEEE (2013)
78. Zhang, Y., Wei, X.S., Wu, J., Cai, J., Lu, J., Nguyen, V.A., Do, M.N.: Weakly supervised fine-grained image categorization. arXiv preprint arXiv:1504.04943 (2015)
# Under review as a conference paper at ICLR 2016
# RESILIENCY OF DEEP NEURAL NETWORKS UNDER QUANTIZATION
Wonyong Sung, Sungho Shin & Kyuyeon Hwang
Department of Electrical and Computer Engineering
Seoul National University, Seoul, 08826 Korea
wysung@snu.ac.kr, shshin@dsp.snu.ac.kr, kyuyeon.hwang@gmail.com
# ABSTRACT
The complexity of deep neural network algorithms for hardware implementation can be much lowered by optimizing the word-length of weights and signals. Direct quantization of floating-point weights, however, does not show good performance when the number of bits assigned is small. Retraining of quantized networks has been developed to relieve this problem. In this work, the effects of quantization are analyzed for a feedforward deep neural network (FFDNN) and a convolutional neural network (CNN) when their network complexity is changed. The complexity of the FFDNN is controlled by varying the unit size in each hidden layer and the number of layers, while that of the CNN is done by modifying the feature map configuration. We find that some performance gap exists between the floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks when the size is not large enough, but the discrepancy almost vanishes in fully complex networks whose capability is limited by the training data, rather than by the number of connections. This research shows that highly complex DNNs have the capability of absorbing the effects of severe weight quantization through retraining, but connection limited networks are less resilient. This paper also presents the effective compression ratio to guide the trade-off between the network size and the precision when the hardware resource is limited.
# 1 INTRODUCTION
Deep neural networks (DNNs) are beginning to find many real-time applications, such as speech recognition, autonomous driving, gesture recognition, and robotic control (Sak et al., 2015; Chen et al., 2015; Jalab et al., 2015; Corradini et al., 2015). Although most deep neural networks are implemented using GPUs (Graphics Processing Units) these days, their implementation in hardware can give many benefits in terms of power consumption and system size (Ovtcharov et al., 2015). FPGA based implementation examples of CNNs show more than 10 times advantage in power consumption (Ovtcharov et al., 2015).
Neural network algorithms employ many multiply and add (MAC) operations that mimic the operations of biological neurons. This suggests that reconfigurable hardware arrays that contain quite homogeneous hardware blocks, such as MAC units, can give a very efficient solution for real-time neural network system design. Early studies on word-length determination of neural networks reported a needed precision of at least 8 bits (Holt & Baker, 1991). Our recent works show that the precision required for implementing an FFDNN, CNN or RNN need not be very high, especially when the quantized networks are trained again to learn the effects of lowered precision. In the fixed-point optimization examples shown in Hwang & Sung (2014), Anwar et al. (2015), and Shin et al. (2015), neural networks with ternary weights showed quite good performance, close to that of floating-point arithmetic.
In this work, we investigate whether retraining can recover the performance of an FFDNN and a CNN under quantization with only ternary (+1, 0, -1) levels or 3 bits (+3, +2, +1, 0, -1, -2, -3) for weight
representation. Note that bias values are not quantized. For this study, the network complexity is changed to analyze its effect on the performance gap between floating-point and retrained low-precision fixed-point deep neural networks.
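A minimal sketch of ternary weight quantization is shown below, assuming a threshold-based mapping to {-D, 0, +D} for a per-layer step size D. The D/2 threshold is an illustrative choice, and the retraining step (continuing training with quantized weights in the forward pass) is not shown.

```python
# Ternary (+1, 0, -1) weight quantization with a per-layer step size delta.
import numpy as np

def quantize_ternary(w, delta):
    q = np.zeros_like(w)
    q[w >  delta / 2] =  delta
    q[w < -delta / 2] = -delta
    return q

w = np.array([0.8, -0.05, -0.6, 0.2])
print(quantize_ternary(w, delta=0.5))  # [ 0.5  0.  -0.5  0. ]
```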
We conduct our experiments with a feed-forward deep neural network (FFDNN) for phoneme recognition and a convolutional neural network (CNN) for image classification. To control the network size, not only the number of units in each layer but also the number of hidden layers is varied in the FFDNN. For the CNN, the number of feature maps for each layer and the number of layers are both changed. The FFDNN uses the TIMIT corpus and the CNN employs the CIFAR-10 dataset. We also propose a metric called the effective compression ratio (ECR) for comparing extremely quantized bigger networks with moderately quantized or floating-point networks of smaller size. This analysis intends to give an insight into the knowledge representation capability of highly quantized networks, and also provides a guideline for network size and word-length determination for efficient hardware implementation of DNNs.
# 2 RELATED WORK
Fixed-point implementation of signal processing algorithms has long been of interest for VLSI based design of multimedia and communication systems. Some early works used statistical modeling of quantization noise for application to linear digital filters. The simulation-based word-length optimization method utilized simulation tools to evaluate the fixed-point performance of a system, by which non-linear algorithms can be optimized (Sung & Kum, 1995). Ternary (+1, 0, -1) coefficient based digital filters were used to eliminate multiplications at the cost of higher quantization noise. The implementation of adaptive filters with ternary weights was developed, but it demanded oversampling to remove the quantization effects (Hussain et al., 2007).
Fixed-point neural network design has also been studied with the same purpose of reducing the hardware implementation cost (Moerland & Fiesler, 1997). In Holt & Baker (1991), back propagation simulation with 16-bit integer arithmetic was conducted for several problems, such as NetTalk, Parity, Protein and so on. That work conducted the experiments while changing the number of hidden units, which was, however, relatively small. The integer simulations showed quite good results for the NetTalk and Parity benchmarks, but not for Protein. With direct quantization of trained weights, that work also confirmed satisfactory operation of neural networks with 8-bit precision. An implementation with ternary weights was reported for neural network design with optical fiber networks (Fiesler et al., 1990). In this ternary network design, the authors employed retraining after direct quantization to improve the performance of a shallow network.
Recently, fixed-point design of DNNs has been revisited, and FFDNNs and CNNs with ternary weights show performance very close to the floating-point results. Ternary-weight-based FFDNNs and CNNs are used for VLSI- and FPGA-based implementations, by which the algorithms can operate with only on-chip memory, consuming very low power (Kim et al., 2014). Binary-weight-based deep neural network design has also been studied (Courbariaux et al., 2015). Pruned floating-point weights are also utilized for efficient GPU-based implementations, where small-valued weights are forced to zero to reduce the number of arithmetic operations and the memory space for weight storage (Yu et al., 2012b; Han et al., 2015). A network restructuring technique using singular value decomposition has also been studied (Xue et al., 2013; Rigamonti et al., 2013).
# 3 FIXED-POINT FFDNN AND CNN DESIGN
This section explains the design of the FFDNN and CNN with varying network complexity, as well as the fixed-point optimization procedure.
3.1 FFDNN AND CNN DESIGN
A feedforward deep neural network with multiple hidden layers is depicted in Figure 1. Each layer k has a signal vector y_k, which is propagated to the next layer by multiplying by the weight matrix W_{k+1}, adding the biases b_{k+1}, and applying the activation function φ_{k+1}(·) as follows:
$$y_{k+1} = \phi_{k+1}\left(W_{k+1}\, y_k + b_{k+1}\right) \qquad (1)$$
(Figure: FFDNN with weight groups in-h1, h1-h2, h2-h3, h3-h4, and h4-out connecting the Input, h1, h2, h3, h4, and Output layers.)
Figure 1: Feed-forward deep neural network with 4 hidden layers.
(Figure: CNN layer sequence Input, C1, S1, C2, S2, C3, S3, F1.)
Figure 2: CNN structure with 3 convolution layers and 1 fully-connected layer.
One of the most popular activation functions is the rectified linear unit, defined as
$$\mathrm{ReLU}(x) = \max(0, x). \qquad (2)$$
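To make the layer computation concrete, the following is a minimal NumPy sketch of the forward pass in Eqs. (1)-(2). The layer sizes follow the phoneme-recognition network described below, but the random initialization, the random input, and the omission of the softmax output layer are illustrative assumptions, not the paper's training setup.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(y0, weights, biases):
    """Propagate an input y0 through the layers (W_k, b_k) of Eq. (1)."""
    y = y0
    for W, b in zip(weights, biases):
        y = relu(W @ y + b)  # y_{k+1} = phi_{k+1}(W_{k+1} y_k + b_{k+1})
    return y

rng = np.random.default_rng(0)
sizes = [1353, 512, 512, 512, 512, 61]  # input, 4 hidden layers, output
weights = [rng.normal(0, 0.01, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(forward(rng.normal(size=1353), weights, biases).shape)  # (61,)
```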
In this work, an FFDNN for phoneme recognition is used. The reference DNN has four hidden layers. Each of the hidden layers has N_h units; the value of N_h is changed to control the complexity of the network. We conduct experiments with N_h set to 32, 64, 128, 256, 512, and 1024. The number of hidden layers is also reduced. The input layer of the network has 1,353 units to accept 11 frames of a Fourier-transform-based filter-bank with 40 coefficients (+energy) distributed on a mel-scale, together with their first and second temporal derivatives. The output layer consists of 61 softmax units which correspond to 61 target phoneme labels. Phoneme recognition experiments were performed on the TIMIT corpus. The standard 462-speaker set with all SA records removed was used for training, and a separate development set of 50 speakers was used for early stopping. Results are reported for the 24-speaker core test set. The network was trained using a backpropagation algorithm with a mini-batch size of 128. The initial learning rate was 10^{-5} and it was decreased to 10^{-7} during training. Momentum was 0.9 and RMSProp was adopted for weight updates (Tieleman & Hinton, 2012). The dropout technique was employed with a 0.2 dropout rate in each layer.
The CNN is used for the CIFAR-10 dataset. It contains a training set of 50,000 and a test set of 10,000 32x32 RGB color images representing airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks. We divided the training set into 40,000 images for training and 10,000 images for validation. This CNN has 3 convolution and pooling layers and a fully connected hidden layer with 64 units, and the output has 10 softmax units, as shown in Figure 2. We control the number of feature maps in each convolution layer. The reference size has 32-32-64 feature maps with a 5 by 5 kernel size, as used in Krizhevsky (2014). We did not perform any preprocessing or data augmentation, such as ZCA whitening or global contrast normalization. To examine the effects of network size variation, the number of feature maps is reduced or increased. The feature map configurations used for the experiments are 8-8-16, 16-16-32, 32-32-64, 64-64-128, 96-96-192, and 128-128-256. The number of feature map layers is also changed, resulting in 32-32-64, 32-64,
and 64 map configurations. Note that the fully connected layer in the CNN is not changed. The network was trained using a backpropagation algorithm with a mini-batch size of 128. The initial learning rate was 0.001 and it was decreased to 10^{-8} during the training procedure. Momentum was 0.8 and RMSProp was applied for weight updates.
3.2 FIXED-POINT OPTIMIZATION OF DNNS
Reducing the word-length of weights brings several advantages in hardware-based implementation of neural networks. First, it lowers the arithmetic precision, and thereby reduces the number of gates needed for multipliers. Second, the size of the memory for storing weights is minimized, which is a big advantage when keeping them on a chip instead of in external DRAM or NAND flash memory. Note that FFDNNs and recurrent neural networks demand a very large number of weights. Third, the reduced arithmetic precision or minimization of off-chip memory accesses leads to low power consumption. However, we need to consider the quantization effects that degrade system performance.
Direct quantization converts a floating-point value to the closest integer number, which is conventionally used in signal processing system design. However, direct quantization usually demands more than 8 bits, and does not show good performance when the number of bits is small. In fixed-point deep neural network design, retraining of quantized weights shows quite good performance.
The fixed-point DNN algorithm design consists of three steps: floating-point training, direct quantization, and retraining of weights. The floating-point training procedure can be any of the state-of-the-art techniques, which may include unsupervised learning and dropout. Note that fixed-point optimization needs to be based on the best performing floating-point weights. Thus, the floating-point weight optimization may need to be conducted several times with different initializations, and this step consumes most of the time. After the floating-point training, direct quantization follows.
For direct quantization, a uniform quantization function is employed; the function Q(·) is defined as follows:
$$Q(w) = \mathrm{sgn}(w) \cdot \Delta \cdot \min\left( \left\lfloor \frac{|w|}{\Delta} + 0.5 \right\rfloor,\ \frac{M-1}{2} \right) \qquad (3)$$
where sgn(·) is the sign function, Δ is the quantization step size, and M represents the number of quantization levels. Note that M needs to be an odd number since the weight values can be positive or negative. When M is 7, the weights are represented by -3Δ, -2Δ, -Δ, 0, +Δ, +2Δ, and +3Δ, which can be represented in 3 bits. The quantization step size Δ is determined to minimize the L2 error E, defined as follows.
$$E = \frac{1}{N} \sum_{i=1}^{N} \left( Q(w_i) - w_i \right)^2 \qquad (4)$$
where N is the number of weights in each weight group and w_i is the i-th weight value represented in floating-point. This process requires some iterations, but does not take much time.
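As an illustration, the following is a small NumPy sketch of the direct quantization step: Q(·) from Eq. (3) together with a simple grid search over Δ minimizing the L2 error of Eq. (4). The search grid and the synthetic weight distribution are assumptions; the paper only states that the step-size optimization iterates.

```python
import numpy as np

def quantize(w, delta, M):
    levels = (M - 1) // 2  # e.g. M = 7 -> weights in {-3,...,+3} * delta
    return np.sign(w) * delta * np.minimum(np.floor(np.abs(w) / delta + 0.5), levels)

def best_delta(w, M, candidates):
    """Pick the step size minimizing the mean squared quantization error."""
    errs = [np.mean((quantize(w, d, M) - w) ** 2) for d in candidates]
    return candidates[int(np.argmin(errs))]

w = np.random.default_rng(0).normal(0, 0.1, 10000)  # one weight group
deltas = np.linspace(1e-3, 0.5, 500)                # assumed search grid
d = best_delta(w, M=7, candidates=deltas)
wq = quantize(w, d, M=7)                            # directly quantized weights
```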
For network retraining, we maintain both floating-point and quantized weights, because the amount of weight update in each training step is much smaller than the quantization step size Δ. The forward and backward propagation is conducted using the quantized weights, but the weight update is applied to the floating-point weights, and newly quantized values are generated at each iteration. This retraining procedure usually converges quickly and does not take much time compared to the floating-point training.
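The retraining loop can be sketched on a toy least-squares problem standing in for the DNN: the forward/backward computation uses the quantized weights, while the updates accumulate in the floating-point copy. The problem, learning rate, and iteration count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 32))
w_true = rng.normal(size=32)
y = X @ w_true
w = rng.normal(0, 0.1, 32)  # floating-point "shadow" weights
delta, M = 0.1, 7

def quantize(w, delta, M):
    return np.sign(w) * delta * np.minimum(np.floor(np.abs(w) / delta + 0.5), (M - 1) // 2)

for step in range(500):
    wq = quantize(w, delta, M)               # forward/backward use quantized weights
    grad = 2 * X.T @ (X @ wq - y) / len(X)   # gradient evaluated at the quantized point
    w -= 0.01 * grad                         # ...but applied to the float weights
    # re-quantization happens at the top of the next iteration
```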
# 4 ANALYSIS OF QUANTIZATION EFFECTS
# 4.1 DIRECT QUANTIZATION
The performance of the FFDNN and the CNN with directly quantized weights is analyzed while varying the number of units in each layer or the number of feature maps, respectively. In this analysis, the quantization is performed on each weight group, which is illustrated in Figure 1 and
Figure 2, to assess the sensitivity to word-length reduction. In this sub-section, we analyze the effects of direct quantization.
The quantized weight can be represented as follows,
$$w_i^q = w_i + w_i^d \qquad (5)$$

where $w_i^d$ is the distortion of each weight due to quantization. In direct quantization, we can assume that the distortions $w_i^d$ are independent of each other.
Figure 3: Computation model for a unit in the hidden layer j ((a): floating-point, (b): distortion).
Figure 4: Sensitivity analysis of direct quantization ((a): FFDNN, (b): CNN). In figure (b), the x-axis label '8-16' means that the feature map configuration is '8-8-16'.
Consider the computation procedure for a unit in a hidden layer: the signal from the previous layer is summed after multiplication with the weights, as illustrated in Figure 3a. We can also assemble a model for the distortion, shown in Figure 3b. In the distortion model, since the $w_i^d$ are independent of each other, we can assume that the effect of the summed distortion is reduced according to random process theory. This analysis means that the quantization effects diminish as the number of units in the preceding layer increases, but slowly.
Figure 4a illustrates the performance of the FFDNN with floating-point arithmetic, with 2-bit direct quantization of all the weights, and with 2-bit direct quantization applied only to the weight groups 'In-h1', 'h1-h2', and 'h4-out'. Considering the quantization performance of the 'In-h1' layer, the phone error rate is higher than the floating-point result by an almost constant amount, about 10%. Note that the number of inputs to the 'In-h1' layer is fixed at 1,353 regardless of the hidden layer size. Thus, the amount of distortion delivered to each unit of hidden layer 1 can be considered unchanged. Figure 4a also shows the quantization performance of the 'h1-h2' and 'h4-out' layers, which exhibits the trend of
Figure 5: Performance of direct quantization with multiple precisions ((a): FFDNN, (b): CNN).
a reduced gap to the floating-point performance as the network size increases. This can be explained by the sum of an increased number of independent distortions as the network size grows. The performance with all weights quantized to 2 bits shows a similar trend of a reduced gap to the floating-point performance. But, apparently, the performance of 2-bit directly quantized networks is not satisfactory.
In Figure 4b, a similar analysis is conducted for the CNN with direct quantization as the number of feature maps increases or decreases. In the CNN, the number of inputs to each output is determined by the number of input feature maps and the kernel size. For example, at the first layer C1, the number of input signals for computing one output is only 75 (= 3x25) regardless of the network size, since the number of input maps is always 3 and the kernel size is 25. However, at the second layer C2, the number of input feature maps increases as the network size grows. For the 32-32-64 feature map configuration, the number of inputs to the C2 layer grows to 800 (= 32x25). Thus, we can expect reduced distortion as the number of feature maps increases.
Figure 5a shows the performance of direct quantization with 2-, 4-, 6-, and 8-bit precision as the network complexity varies. In the FFDNN, 6-bit direct quantization seems sufficient when the network size is larger than 128, but small FFDNNs demand 8 bits for near floating-point performance. The CNN in Figure 5b shows a similar trend: direct quantization requires about 6 bits when the feature map configuration is 16-16-32 or larger.
# 4.2 EFFECTS OF RETRAINING ON QUANTIZED NETWORKS
Retraining is conducted on the directly quantized networks using the same data as for floating-point training. The fixed-point performance of the FFDNN is shown in Figure 6a as the number of hidden units in each layer varies. The performance of direct 2-bit (ternary), direct 3-bit (7-level), retrain-based 2-bit, and retrain-based 3-bit quantization is compared with the floating-point simulation. We find that the performance gap between the floating-point and the retrain-based fixed-point networks closes very quickly as the network size grows. Although the gap between the direct and the floating-point networks also shrinks, the rate of convergence is significantly different. In this figure, the performance of the floating-point network almost saturates when the network size is about 1024. Note that the TIMIT corpus used for training contains only 3 hours of data. Thus, the network with 1024 hidden units can be considered to be in the 'training-data limited region'. Here, the gap between the floating-point and fixed-point networks almost vanishes when the network is in the 'training-data limited region'. However, when the network size is limited, such as 32, 64, 128, or 256, some performance gap remains between the floating-point and highly quantized networks even after retraining.
Similar experiments are conducted for the CNN with varying feature map sizes, and the results are shown in Figure 6b. The feature map configurations used for the experiments are 8-8-16,
Figure 6: Comparison of retrain-based and direct quantization for the DNN (a) and CNN (b). All the weights are quantized with ternary and 7-level weights. In figure (b), the x-axis label '8-16' means that the feature map configuration is '8-8-16'.
16-16-32, 32-32-64, 64-64-128, 96-96-192, and 128-128-256. The size of the fully connected layer is not changed. In this figure, the floating-point and the fixed-point performances with retraining also converge very quickly as the number of feature maps increases. The floating-point performance saturates at the 128-128-256 feature map size, and the gap between the floating-point and the retrain-based 2-bit networks is less than 1%. However, some performance gap again appears when the number of feature maps is reduced. This suggests that fairly high-performance feature extraction can be designed even with very low-precision weights if the number of feature maps can be increased.
# 4.3 FIXED-POINT PERFORMANCES WHEN VARYING THE DEPTH
It is well known that increasing the depth usually has positive effects on the performance of a DNN (Yu et al., 2012a). The network complexity of a DNN is changed by increasing or reducing the number of hidden layers or feature map levels. The fixed-point and floating-point performances for varying numbers of hidden layers in the FFDNN are summarized in Table 1. The number of units in each hidden layer is 512. This table shows that both the floating-point and fixed-point performances of the FFDNN improve when adding hidden layers from 0 to 4. The performance gap between the floating-point and fixed-point networks shrinks as the number of levels increases.
Table 1: Framewise phoneme error rate on TIMIT with respect to the depth in DNN
| Number of layers (floating-point result) | Quantization levels | Direct | Retraining | Difference |
|---|---|---|---|---|
| 1 (34.67%) | 3-level | 69.88% | 38.58% | 3.91% |
| 1 (34.67%) | 7-level | 56.81% | 36.57% | 1.90% |
| 2 (31.51%) | 3-level | 47.74% | 33.89% | 2.38% |
| 2 (31.51%) | 7-level | 36.99% | 33.04% | 1.53% |
| 3 (30.81%) | 3-level | 49.27% | 33.05% | 2.24% |
| 3 (30.81%) | 7-level | 36.58% | 31.72% | 0.91% |
| 4 (30.31%) | 3-level | 48.13% | 31.86% | 1.55% |
| 4 (30.31%) | 7-level | 34.77% | 31.49% | 1.18% |
The network complexity of the CNN is also varied by reducing the number of feature map levels, as shown in Table 2. As expected, the performance of both the floating-point and the retrain-based low-precision networks degrades as the number of levels is reduced. The performance gap between them is very small with 7-level quantization for all feature map levels.
These results for the FFDNN and the CNN with a varied number of levels also show that the effects of quantization can be much reduced by retraining when the network contains some redundant complexity.
Table 2: Misclassification rate on CIFAR-10 with respect to the depth in CNN
| Layer configuration (floating-point result) | Quantization levels | Direct | Retraining | Difference |
|---|---|---|---|---|
| 64 (34.19%) | 3-level | 72.95% | 35.37% | 1.18% |
| 64 (34.19%) | 7-level | 46.60% | 34.15% | -0.04% |
| 32-64 (29.29%) | 3-level | 55.30% | 29.51% | 0.22% |
| 32-64 (29.29%) | 7-level | 39.80% | 29.32% | 0.03% |
| 32-32-64 (26.87%) | 3-level | 79.88% | 27.94% | 1.07% |
| 32-32-64 (26.87%) | 7-level | 47.91% | 26.95% | 0.08% |
# 5 EFFECTIVE COMPRESSION RATIO
So far we have examined the effects of direct and retraining-based quantization on the final classification error rates. As the number of quantization levels decreases, more memory space can be saved at the cost of sacrificing accuracy. Therefore, there is a trade-off between the total memory space for storing weights and the final classification accuracy. In practice, investigating this trade-off is important for deciding the optimal bit-widths for representing weights and implementing the most efficient neural network hardware.
In this section, we propose a guideline for finding the optimal bit-widths in terms of the total number of bits consumed by the network weights when the desired accuracy or the network size is given. Note that we assume 2^n - 1 quantization levels are represented by n bits (i.e., 2 bits are required for representing a ternary weight). For simplicity, all layers are quantized with the same number of quantization levels. However, a similar approach can be applied to layer-wise quantization analysis.
Figure 7: Framewise phone error rate of phoneme recognition DNNs with respect to the total number of bits for weights with (a) direct quantization and (b) after retraining.
The optimal combination of bit-width and layer size can be found when the number of total bits or the accuracy is given, as shown in Figure 7. The figure shows the framewise phoneme error rate on TIMIT with respect to the total number of bits, while varying the layer size of DNNs quantized with 2 to 8 bits. The network has 4 hidden layers of uniform size. With direct quantization, the optimal hardware design is achieved with about 5 bits. On the other hand, weight representation with only 2 bits shows the best performance after retraining.
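For reference, the total number of weight bits used on the x-axis of Figure 7 follows directly from the layer sizes and the bit-width. The helper below is an illustrative sketch, shown with the four 512-unit hidden layers of the reference FFDNN and 3-bit weights.

```python
def total_weight_bits(n_params, bits):
    # n bits per weight can encode 2**n - 1 symmetric quantization levels
    return n_params * bits

sizes = [1353, 512, 512, 512, 512, 61]  # input, 4 hidden layers, output
n_params = sum(a * b for a, b in zip(sizes[:-1], sizes[1:]))
print(total_weight_bits(n_params, 3))   # total bits with 3-bit (7-level) weights
```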
(Figure: phone error rate (%) versus number of parameters, with curves for the floating-point result and for 2-bit direct, 3-bit direct, 2-bit retrain, and 3-bit retrain quantization.)
Figure 8: Obtaining effective number of parameters for the uncompressed network.
Figure 9: Effective compression ratio (ECR) with respect to the layer size and the number of bits per weight for (a) direct quantization and (b) retrain-based quantization.
The remaining question is how much memory space can be saved by quantization while maintaining accuracy. To examine this, we introduce a metric called the effective compression ratio (ECR), defined as follows:
$$\mathrm{ECR} = \frac{\text{Effective uncompressed size}}{\text{Compressed size}} \qquad (6)$$
The compressed size is the total number of memory bits required for storing all weights with quantization. The effective uncompressed size is the total memory size with 32-bit floating-point representation when the network achieves the same accuracy as the quantized network.
Figure 8 describes how to obtain the effective number of parameters for uncompressed networks. Specifically, by varying the size, we find the number of total parameters of the floating-point network that shows the same accuracy as the quantized one. The effective uncompressed size is then computed by multiplying the effective number of parameters by 32 bits.
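A small sketch of the ECR computation of Eq. (6) follows. The effective number of parameters would come from an accuracy-versus-size lookup as in Figure 8, so the concrete values below are placeholders, not results from the paper.

```python
def effective_compression_ratio(n_params_quant, bits_per_weight,
                                n_params_float_equiv):
    compressed = n_params_quant * bits_per_weight
    effective_uncompressed = n_params_float_equiv * 32  # 32-bit float baseline
    return effective_uncompressed / compressed

# e.g. a ternary (2-bit) network with 1e6 weights matching the accuracy of a
# floating-point network with 4e5 weights (hypothetical numbers):
print(effective_compression_ratio(1_000_000, 2, 400_000))  # 6.4
```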
Once we have the corresponding effective uncompressed size for a specific network size and number of quantization bits, the ECR can be computed by (6). The ECRs for direct and retrain-based quantization for various network sizes and quantization bits are shown in Figure 9. For direct quantization, 5-bit quantization shows the best ECR except for the layer size of 1024. On the other hand, even 2-bit quantization performs better than the others after retraining. That is, after retraining, a bigger network with extreme ternary (2-bit) quantization is more efficient in terms of
the memory usage for weights than any smaller network with a higher number of quantization bits, when compared at the same accuracy.
# 6 DISCUSSION
In this study, we control the network size by changing the number of units in the hidden layers, the number of feature maps, or the number of levels. In every case, reduced complexity lowers the resiliency to quantization. We are now conducting similar experiments on recurrent neural networks, which are known to be more sensitive to quantization (Shin et al., 2015). This work is directly related to several network optimization methods, such as pruning, fault tolerance, and decomposition (Yu et al., 2012b; Han et al., 2015; Xue et al., 2013; Rigamonti et al., 2013). In pruning, retraining of weights is conducted after zeroing small-valued weights. The effects of pruning, fault tolerance, and network decomposition efficiency would depend on the redundant representation capability of DNNs.
This study can be applied to hardware-efficient DNN design. For designs with limited hardware resources, when the size of the reference DNN is relatively small, it is advisable to employ very low-precision arithmetic and, instead, increase the network complexity as much as the hardware capacity allows. But when the DNNs are in the performance saturation region, this strategy does not always gain much, because growing the 'already-big' network brings almost no performance advantage. This can be observed in Figure 7b and Figure 9b, where 6-bit quantization performed best at the largest layer size (1,024).
# 7 CONCLUSION
We analyze the performance of fixed-point deep neural networks, an FFDNN for phoneme recognition and a CNN for image classification, while not only changing the arithmetic precision but also varying their network complexity. The low-precision networks for this analysis are obtained using the retrain-based quantization method, and the network complexity is controlled by changing the configurations of the hidden layers or feature maps. The performance gap between the floating-point and the fixed-point neural networks with ternary weights (+1, 0, -1) almost vanishes when the DNNs are in the performance saturation region for the given training data. However, when the complexity of a DNN is reduced, by lowering the number of units, feature maps, or hidden layers, the performance gap between them increases. In other words, a large network that may contain redundant representation capability for the given training data is not hurt by the lowered precision, but a very compact network is.
# ACKNOWLEDGMENTS
This work was supported in part by the Brain Korea 21 Plus Project and the National Research Foundation of Korea (NRF) grants funded by the Korea government (MSIP) (No. 2015R1A2A1A10056051).
# REFERENCES
Anwar, Sajid, Hwang, Kyuyeon, and Sung, Wonyong. Fixed point optimization of deep convolutional neural networks for object recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1131-1135. IEEE, 2015.

Chen, Chenyi, Seff, Ari, Kornhauser, Alain, and Xiao, Jianxiong. DeepDriving: Learning affordance for direct perception in autonomous driving. arXiv preprint arXiv:1505.00256, 2015.

Corradini, Maria Letizia, Giantomassi, Andrea, Ippoliti, Gianluca, Longhi, Sauro, and Orlando, Giuseppe. Robust control of robot arms via quasi sliding modes and neural networks. In Advances and Applications in Sliding Mode Control Systems, pp. 79-105. Springer, 2015.

Courbariaux, Matthieu, Bengio, Yoshua, and David, Jean-Pierre. BinaryConnect: Training deep neural networks with binary weights during propagations. arXiv preprint arXiv:1511.00363, 2015.

Fiesler, Emile, Choudry, Amar, and Caulfield, H John. Weight discretization paradigm for optical neural networks. In The Hague '90, 12-16 April, pp. 164-173. International Society for Optics and Photonics, 1990.
Han, Song, Mao, Huizi, and Dally, William J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
Holt, Jordan L and Baker, Thomas E. Back propagation simulations using limited precision calculations. In Neural Networks, 1991., IJCNN-91-Seattle International Joint Conference on, volume 2, pp. 121-126. IEEE, 1991.

Hussain, B Zahir M et al. Short word-length LMS filtering. In Signal Processing and Its Applications, 2007. ISSPA 2007. 9th International Symposium on, pp. 1-4. IEEE, 2007.

Hwang, Kyuyeon and Sung, Wonyong. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pp. 1-6. IEEE, 2014.

Jalab, Hamid A, Omer, Herman, et al. Human computer interface using hand gesture recognition based on neural network. In Information Technology: Towards New Smart World (NSITNSW), 2015 5th National Symposium on, pp. 1-6. IEEE, 2015.

Kim, Jonghong, Hwang, Kyuyeon, and Sung, Wonyong. X1000 real-time phoneme recognition VLSI using feed-forward deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 7510-7514. IEEE, 2014.

Krizhevsky, A. CUDA-convnet, 2014.

Moerland, Perry and Fiesler, Emile. Neural network adaptations to hardware implementations. Technical report, IDIAP, 1997.

Ovtcharov, Kalin, Ruwase, Olatunji, Kim, Joo-Young, Fowers, Jeremy, Strauss, Karin, and Chung, Eric S. Accelerating deep convolutional neural networks using specialized hardware. Microsoft Research Whitepaper, 2, 2015.

Rigamonti, Roberto, Sironi, Amos, Lepetit, Vincent, and Fua, Pascal. Learning separable filters. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 2754-2761. IEEE, 2013.

Sak, Haşim, Senior, Andrew, Rao, Kanishka, and Beaufays, Françoise. Fast and accurate recurrent neural network acoustic models for speech recognition. arXiv preprint arXiv:1507.06947, 2015.

Shin, Sungho, Hwang, Kyuyeon, and Sung, Wonyong. Fixed point performance analysis of recurrent neural networks. arXiv preprint arXiv:1512.01322, 2015.

Sung, Wonyong and Kum, Ki-Il. Simulation-based word-length optimization method for fixed-point digital signal processing systems. Signal Processing, IEEE Transactions on, 43(12):3087-3090, 1995.

Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.

Xue, Jian, Li, Jinyu, and Gong, Yifan. Restructuring of deep neural network acoustic models with singular value decomposition. In INTERSPEECH, pp. 2365-2369, 2013.
Yu, Dong, Deng, Li, Acero, Alex, Dahl, George, Seide, Frank, and Li, Gang. More data + deeper model = better accuracy. In keynote at International Workshop on Statistical Machine Learning for Speech Processing, 2012a.
Yu, Dong, Seide, Frank, Li, Gang, and Deng, Li. Exploiting sparseness in deep neural networks for large vocabulary speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pp. 4409-4412. IEEE, 2012b.
| {
"id": "1505.00256"
} |
1511.06297 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-the-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | arXiv:1511.06297v2 [cs.LG] 7 Jan 2016
# Under review as a conference paper at ICLR 2016
# CONDITIONAL COMPUTATION IN NEURAL NETWORKS FOR FASTER MODELS
Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau & Doina Precup
School of Computer Science, McGill University, Montreal, Canada
{ebengi,pbacon,jpineau,dprecup}@cs.mcgill.ca
# ABSTRACT
Deep learning has become the state-of-the-art tool in many applications, but the evaluation and training of deep models can be time-consuming and computationally expensive. The conditional computation approach has been proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It operates by selectively activating only parts of the network at a time. In this paper, we use reinforcement learning as a tool to optimize conditional computation policies. More specifically, we cast the problem of learning activation-dependent policies for dropping out blocks of units as a reinforcement learning problem. We propose a learning scheme motivated by computation speed, capturing the idea of wanting to have parsimonious activations while maintaining prediction accuracy. We apply a policy gradient algorithm for learning policies that optimize this loss function and propose a regularization mechanism that encourages diversification of the dropout policy. We present encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation.
Keywords Neural Networks, Conditional Computing, REINFORCE
# 1 INTRODUCTION
Large-scale neural networks, and in particular deep learning architectures, have seen a surge in popularity in recent years, due to their impressive empirical performance in complex supervised learning tasks, including state-of-the-art performance in image and speech recognition (He et al., 2015). Yet the task of training such networks remains a challenging optimization problem. Several related problems arise: very long training times (several weeks on modern computers, for some problems), potential for over-fitting (whereby the learned function is too specific to the training data and generalizes poorly to unseen data), and, more technically, the vanishing gradient problem (Hochreiter, 1991; Bengio et al., 1994), whereby the gradient information gets increasingly diffuse as it propagates from layer to layer.
Recent approaches (Bengio et al., 2013; Davis & Arel, 2013) have proposed the use of conditional computation in order to address this problem. Conditional computation refers to activating only some of the units in a network, in an input-dependent fashion. For example, if we think we're looking at a car, we only need to compute the activations of the vehicle-detecting units, not of all the features that a network could possibly compute. The immediate effect of activating fewer units is that propagating information through the network will be faster, both at training and at test time. However, one needs to decide in an intelligent fashion which units to turn on and off, depending on the input data. This is typically achieved with some form of gating structure, learned in parallel with the original network.
A secondary effect of conditional computation is that during training, information will be propagated along fewer links. Intuitively, this allows sharper gradients on the links that do get activated. Moreover, because only parts of the network are active, and fewer parameters are used in the computation,
the net effect can be viewed as a form of regularization of the main network, as the approximator has to use only a small fraction of the possible parameters in order to produce an action.
In this paper, we explore the formulation of conditional computation using reinforcement learning. We propose to learn input-dependent activation probabilities for every node (or block of nodes), while trying to jointly minimize the prediction errors at the output and the number of participating nodes at every layer, thus reducing the computational load. One can also think of our method as being related to standard dropout, which has been used as a tool to both regularize and speed up the computation. However, we emphasize that dropout is in fact a form of 'unconditional' computation, in which the computation paths are data-independent. Therefore, usual dropout is less likely to lead to specialized computation paths within a network.
We present the problem formulation and our solution to the proposed optimization problem, using policy search methods (Deisenroth et al., 2013). Preliminary results are included for standard classification benchmarks.
# 2 PROBLEM FORMULATION
Our model consists of a typical fully-connected neural network, joined with stochastic per-layer policies that activate or deactivate nodes of the neural network in an input-dependent manner, both at train and test time. The exact algorithm is detailed in Appendix A.
We cast the problem of learning the input-dependent activation probabilities at each layer in the framework of Markov Decision Processes (MDP) (Puterman, 1994). We define a discrete-time, continuous-state and discrete-action MDP $(S, U, P(\cdot \mid s, u), C)$. An action $u \in \{0,1\}^k$ in this model consists in the application of a mask over the units of a given layer. We define the state space of the MDP over the vector-valued activations $s \in \mathbb{R}^k$ of all nodes at the previous layer. The cost C is the loss of the neural network architecture (in our case the negative log-likelihood). This MDP is single-step: an input is seen, an action is taken, a reward is observed, and we are at the end state.
Similarly to the way dropout is described (Hinton et al., 2012), each node or block in a given layer has an associated Bernoulli distribution which determines its probability of being activated. We train a different policy for each layer l, and parameterize it (separately from the neural network) such that it is input-dependent. For every layer l of k units, we define a policy as a k-dimensional Bernoulli distribution:
$$\pi^{(l)}(u \mid s) = \prod_{i=1}^{k} \sigma_i^{u_i} (1 - \sigma_i)^{1 - u_i}, \qquad \sigma_i = \left[\mathrm{sigm}(Z s + d)\right]_i, \qquad (1)$$
where σ_i denotes the participation probability, computed from the activations s of the layer below and the parameters θ_l = {Z^{(l)}, d^{(l)}}. We denote the sigmoid function by sigm, the weight matrix by Z, and the bias vector by d. The output of a typical hidden layer h(x) that uses this policy is multiplied element-wise with the mask u sampled from the probabilities σ, and becomes (h(x) ⊙ u). For clarity we did not superscript u, s, and σ_i with l, but each layer has its own.
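A minimal NumPy sketch of this sigmoid-Bernoulli policy follows: probabilities are computed from the previous layer's activations, then a mask is sampled per example. Shapes and parameter values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
m_b, k_prev, k = 128, 64, 16        # minibatch size, input units, blocks
Z = rng.normal(0, 0.01, (k_prev, k))
d = np.zeros(k)
s = rng.normal(size=(m_b, k_prev))  # activations of the layer below

sigma = 1.0 / (1.0 + np.exp(-(s @ Z + d)))        # sigma_i = [sigm(Zs + d)]_i
u = (rng.random((m_b, k)) < sigma).astype(float)  # u ~ Bernoulli(sigma)
# the hidden layer output is then h(x) * u (elementwise, one mask per example)
```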
# 3 LEARNING SIGMOID-BERNOULLI POLICIES
We use REINFORCE (Williams, 1992) (detailed in Appendix B) to learn the parameters Θ_π = {θ_1, ..., θ_L} of the sigmoid-Bernoulli policies. Since the nature of the observation space changes at each decision step, we learn L disjoint policies (one for each layer l of the deep network). As a consequence, the summation in the policy gradient disappears and becomes:
$$\nabla_{\theta_l} L = \mathbb{E}\left\{ C(x)\, \nabla_{\theta_l} \log \pi\left(u^{(l)} \mid s^{(l)}\right) \right\} \qquad (2)$$
since θ_l = {Z^{(l)}, d^{(l)}} only appears in the l-th decision stage and the gradient is zero otherwise.
Estimating (2) from samples requires propagating through many instances at a time, which we achieve through mini-batches of size m_b. Under the mini-batch setting, s^{(l)} becomes a matrix and π(· | ·) a vector of dimension m_b. Taking the gradient of the parameters with respect to the
log action probabilities can then be seen as forming a Jacobian. We can thus re-write the empirical average in matrix form:
$$\nabla_{\theta_l} L \approx \frac{1}{m_b} \sum_{i=1}^{m_b} C(x_i)\, \nabla_{\theta_l} \log \pi\left(u_i^{(l)} \mid s_i^{(l)}\right) = \frac{1}{m_b}\, c^T \nabla_{\theta_l} \log \pi\left(U^{(l)} \mid S^{(l)}\right) \qquad (3)$$
where C(x_i) is the total cost for input x_i and m_b is the number of examples in the mini-batch. The term c^T denotes the row vector containing the total costs for every example in the mini-batch.
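Continuing the sketch above (reusing s, u, sigma, Z, d, m_b, and rng), the minibatch estimate of Eq. (3) has a closed form for a Bernoulli policy, since the score with respect to the pre-activation is u - σ. The costs and the learning rate below are stand-in values.

```python
c = rng.random(m_b)      # stand-in for the per-example costs C(x_i)
policy_lr = 5e-4         # illustrative learning rate

# d log pi / d(pre-activation) = u - sigma for a sigmoid-Bernoulli policy
score = u - sigma                             # (m_b, k)
grad_Z = s.T @ (score * c[:, None]) / m_b     # (k_prev, k)
grad_d = (score * c[:, None]).mean(axis=0)    # (k,)
Z -= policy_lr * grad_Z                       # descend on E{C}
d -= policy_lr * grad_d
```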
# 3.1 FAST VECTOR-JACOBIAN MULTIPLICATION
While Eqn (3) suggests that the Jacobian might have to be formed explicitly, Pearlmutter (1994) showed that computing a differential derivative suffices to compute left or right vector-Jacobian (or Hessian) multiplication. The same trick has also recently been revived with the class of so-called 'Hessian-free' (Martens, 2010) methods for artificial neural networks. Using the notation of Pearlmutter (1994), we write $R_{\theta_l}\{\cdot\} = c^T \nabla_{\theta_l}$ for the differential operator.
$$\nabla_{\theta_l} L \approx \frac{1}{m_b} R_{\theta_l}\left\{ \log \pi\left(U^{(l)} \mid S^{(l)}\right) \right\} \qquad (4)$$
3.2 SPARSITY AND VARIANCE REGULARIZATIONS
In order to favour activation policies with sparse actions, we add two penalty terms L_b and L_e that depend on some target sparsity rate τ. The first term pushes the policy distribution π to activate each unit with probability τ in expectation over the data. The second term pushes the policy distribution to have the desired sparsity of activations for each example. Thus, for a low τ, a valid configuration would be to learn a few high-probability activations for some part of the data and low-probability activations for the rest of the data, which results in having activation probability τ in expectation.
$$L_b = \sum_{j=1}^{n} \left\| \mathbb{E}\{\sigma_j\} - \tau \right\|_2 \qquad\qquad L_e = \mathbb{E}\left\{ \left\| \frac{1}{n} \sum_{j=1}^{n} \sigma_j - \tau \right\|_2 \right\} \qquad (5)$$
Since we are in a minibatch setting, these expectations can be approximated over the minibatch:
$$L_b \approx \sum_{j=1}^{n} \left\| \frac{1}{m_b} \sum_{i=1}^{m_b} \sigma_{ij} - \tau \right\|_2 \qquad\qquad L_e \approx \frac{1}{m_b} \sum_{i=1}^{m_b} \left\| \frac{1}{n} \sum_{j=1}^{n} \sigma_{ij} - \tau \right\|_2 \qquad (6)$$
We finally add a third term, L_v, in order to favour the aforementioned configurations, where units only have a high probability of activation for certain examples, and a low probability for the rest. We aim to maximize the variance of the activations of each unit across the data. This encourages units' activations to be varied, and while similar in spirit to the L_b term, this term explicitly discourages learning a uniform distribution.
$$L_v = -\sum_{j=1}^{n} \mathrm{var}_i\{\sigma_{ij}\} \approx -\sum_{j=1}^{n} \frac{1}{m_b} \sum_{i=1}^{m_b} \left( \sigma_{ij} - \frac{1}{m_b} \sum_{i'=1}^{m_b} \sigma_{i'j} \right)^2 \qquad (7)$$
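Reusing the sigma matrix of shape (m_b, n) from the policy sketch above, the minibatch regularizers of Eqs. (6)-(7) reduce to a few array reductions. Note that the ||·||_2 of a scalar is taken as an absolute value here, which is an assumption about the intended norms.

```python
tau = 1.0 / 16                                 # target sparsity rate (assumed)
L_b = np.sum(np.abs(sigma.mean(axis=0) - tau))   # per-unit mean vs. tau
L_e = np.mean(np.abs(sigma.mean(axis=1) - tau))  # per-example mean vs. tau
L_v = -np.sum(sigma.var(axis=0))                 # negative variance across examples
```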
3.3 ALGORITHM
We interleave the learning of the network parameters Θ_NN and the learning of the policy parameters Θ_π. We first update the network and policy parameters to minimize the following regularized loss function via backpropagation (Rumelhart et al., 1988):

$$L = -\log P(Y \mid X, \Theta_{NN}) + \lambda_s (L_b + L_e) + \lambda_v L_v + \lambda_{L2} \|\Theta_{NN}\|^2 + \lambda_{L2} \|\Theta_\pi\|^2$$

where λ_s can be understood as a trade-off parameter between prediction accuracy and parsimony of computation (obtained through sparse node activation), and λ_v as a trade-off parameter between a stochastic policy and a more input-dependent saturated policy. We then minimize the cost function C with a REINFORCE-style approach to update the policy parameters (Williams, 1992):
$$C = -\log P(Y \mid X, \Theta_{NN})$$

As previously mentioned, we use minibatch stochastic gradient descent as well as minibatch policy gradient updates. A detailed algorithm is available in Appendix A.
3.4 BLOCK ACTIVATION POLICY
To achieve computational gains, instead of activating single units in hidden layers, we activate contiguous (equally-sized) groups of units together (independently for each example in the minibatch), thus reducing the action space as well as the number of probabilities to compute and sample. As such, there are two potential speedups. First, the policy is much smaller and faster to compute. Second, it offers a computational advantage in the computation of the hidden layers themselves, since we are now performing a matrix multiplication of the following form:
$$\left( (H \odot M_H)\, W \right) \odot M_O$$
where M_H and M_O are binary mask matrices. M_O is obtained for each layer from the sampling of the policy as described in Eq. (1): each sampled action (0 or 1) is repeated so as to span the corresponding block. M_H is simply the mask of the previous layer. M_H and M_O resemble the following (here there are 3 blocks of size 2, so entries repeat in pairs):
1 1 0 0 1 1
0 0 1 1 1 1
1 1 1 1 0 0
...
This allows us to quickly perform the matrix multiplication by only considering the non-zero output elements as well as the non-zero elements in H ⊙ M_H.
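A naive NumPy sketch of this block-sparse product is given below, with a loop that skips inactive blocks. It is a reference implementation for checking correctness, not the specialized CPU/GPU code that produces the reported speedups; block layout and sizes are illustrative.

```python
import numpy as np

def block_masked_matmul(H, MH, W, MO, block):
    """Compute ((H * MH) @ W) * MO, touching only active blocks."""
    out = np.zeros((H.shape[0], W.shape[1]))
    for i in range(H.shape[0]):                      # each example
        in_blocks = np.flatnonzero(MH[i, ::block])   # active input blocks
        out_blocks = np.flatnonzero(MO[i, ::block])  # active output blocks
        for ob in out_blocks:
            cols = slice(ob * block, (ob + 1) * block)
            for ib in in_blocks:
                rows = slice(ib * block, (ib + 1) * block)
                out[i, cols] += H[i, rows] @ W[rows, cols]
    return out

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 6)); W = rng.normal(size=(6, 6))
MH = np.repeat(rng.integers(0, 2, (4, 3)), 2, axis=1).astype(float)
MO = np.repeat(rng.integers(0, 2, (4, 3)), 2, axis=1).astype(float)
assert np.allclose(block_masked_matmul(H, MH, W, MO, 2), ((H * MH) @ W) * MO)
```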
# 4 EXPERIMENTS
4.1 MODEL IMPLEMENTATION
The proposed model was implemented within Theano (Bergstra et al., 2010), a standard library for deep learning and neural networks. In addition to using the optimizations offered by Theano, we also implemented specialized matrix multiplication code for the operation exposed in Section 3.4. A straightforward and fairly naive CPU implementation of this operation yielded speedups of up to 5-10x, while an equally naive GPU implementation yielded speedups of up to 2-4x, both for sparsity rates under 20% and acceptable matrix and block sizes.1
We otherwise use fairly standard methods for our neural network. The weight matrices are initialized using the heuristic of Glorot & Bengio (2010). We use a constant learning rate throughout minibatch SGD, and early stopping (Bishop, 2006) to avoid overfitting. We only use fully-connected layers with tanh activations (ReLU activations offer similar performance).
4.2 MODEL EVALUATION
We first evaluate the performance of our model on the MNIST digit dataset. We use a single hidden layer of 16 blocks of 16 units (256 units total), with a target sparsity rate of τ = 6.25% = 1/16, learning rates of 10^{-3} for the neural network and 5 x 10^{-5} for the policy, λ_v = λ_s = 200, and λ_{L2} = 0.005. Under these conditions, a test error of around 2.3% was achieved. A normal neural network with the same number of hidden units achieves a test error of around 1.9%, while a normal neural network with a similar amount of computation (multiply-adds) being made (32 hidden units) achieves a test error of around 2.8%.
Looking at the activation of the policy (Figure 1c), we see that it tends towards what was hypothesized in Section 3.2, i.e., examples activate most units with low probability and some units with high probability. We can also observe that the policy is input-dependent in Figures 1a and 1b, since we see different activation patterns for inputs of class '0' and inputs of class '1'.
Since the computation performed in our model is sparse, one could hope that it achieves this performance with less computation time, yet we consistently observe that models that deal with MNIST are too small for our specialized (Section 3.4) sparse implementation to make a substantial difference. We include this result to highlight conditions under which it is less desirable to use our model.
1Implementations used in this paper are available at http://github.com/bengioe/condnet/
Figure 1: MNIST. (a, b, c): probability distribution of the policy; each example's probability (y axis) of activating each unit (x axis) is plotted as a transparent red dot. Redder regions represent more examples falling in that probability region. Plot (a) is for class '0', (b) for class '1', (c) for all classes. (d): weight matrix of the policy.
| model | test error | τ | #blocks | block size | test time | speedup |
|---|---|---|---|---|---|---|
| condnet | 0.511 | 1/24 | 24,24 | 64 | 6.8s (26.2s) | 3.8x |
| condnet | 0.514 | 1/16 | 16,32 | 16 | 1.4s (8.2s) | 5.7x |
| condnet | 0.497 | 1/16 | 10,10 | 64 | 2.0s (10.4s) | 5.3x |
| bdNN | 0.629 | 0.17 | 10,10 | 64 | 1.93s (10.3s) | 5.3x |
| bdNN | 0.590 | 0.2 | 10,10 | 64 | 2.8s (10.3s) | 3.5x |
| NN | 0.560 | - | 64,64 | 1 | 1.23s | - |
| NN | 0.546 | - | 128,128 | 1 | 2.31s | - |
| NN | 0.497 | - | 480,480 | 1 | 8.34s | - |
Figure 2: CIFAR-10. condnet: our approach; NN: neural network without the conditional activations; bdNN: block dropout neural network using a uniform policy. 'speedup' is how many times faster the forward pass is when using a specialized implementation (Section 3.4). 'test time' is the time required to do a full pass over the test dataset using that implementation, on a CPU, running on a single core; in parentheses is the time without the optimization.
Next, we consider the performance of our model on the CIFAR-10 (Krizhevsky & Hinton, 2009) image dataset. A brief hyperparameter search was made, and a few of the best models are shown in Figure 2. These results show that it is possible to achieve similar performance with our model (denoted condnet) as with a normal neural network (denoted NN), yet using sensibly reduced computation time. A few things are worth noting; we can set τ lower than 1 over the number of blocks, since the model learns a policy that is actually not as sparse as τ, mostly because REINFORCE pulls the policy towards higher probabilities on average. For example, our best performing model has a target of 1/16 but learns policies that average an 18% sparsity rate (we used λ_v = λ_s = 20, except for the first layer where λ_v = 40; we used λ_{L2} = 0.01, and the learning rates were 0.001 for the neural net, and 10^{-5} and 5 x 10^{-4} for the first and second policy layers respectively). The neural networks without conditional activations are trained with L2 regularization as well as regular unit-wise dropout. We also train networks with the same architecture as our models, using blocks, but with a uniform policy (as in original dropout) instead of a learned conditional one. This model (denoted bdNN) does not perform as well as our model, showing that the dropout noise by itself is not sufficient, and that learning a policy is required to take full benefit of this architecture.
(Figure: scatter plot of validation error (%) versus time of validation (sec) for NN and condnet experiments.)
Figure 3: SVHN, each point is an experiment. The x axis is the time required to do a full pass over the validation dataset (log scale, lower is better). Note that we plot the full hyperparameter exploration results, which is why the condnet results are so varied.
| model | test error | τ | #blocks | block size | test time | speedup |
|---|---|---|---|---|---|---|
| condnet | 0.183 | 1/11 | 13,8 | 16 | 1.5s (2.2s) | 1.4x |
| condnet | 0.139 | 1/25, 1/7 | 27,7 | 16 | 2.8s (4.3s) | 1.6x |
| condnet | 0.073 | 1/22 | 25,22 | 32 | 10.2s (14.1s) | 1.4x |
| NN | 0.116 | - | 288,928 | 1 | 4.8s | - |
| NN | 0.100 | - | 800,736 | 1 | 10.7s | - |
| NN | 0.091 | - | 1280,1056 | 1 | 16.8s | - |
Figure 4: SVHN results (see Figure 2).
Finally, we tested our model on the Street View House Numbers (SVHN) (Netzer et al., 2011) dataset, which also yielded encouraging results (Figure 3). As we constrain the capacity of the models (by increasing sparsity or decreasing the number of units), condnets retain acceptable performance with low run times, while plain neural networks suffer greatly (their performance dramatically decreases with lower run times). The best condnet model has a test error of 7.3% and runs a validation epoch in 10s (14s without the speed optimization), while the best standard neural network model has a test error of 9.1% and runs in 16s. Note that the variance in the SVHN results (Figure 3) is due to the mostly random hyperparameter exploration, where block size, number of blocks, τ, λ_v, λ_s, as well as learning rates are randomly picked. The normal neural network results were obtained by varying the number of hidden units of a 2-hidden-layer model.
For all three datasets and all condnet models used, the required training time was higher, but still reasonable. On average experiments took 1.5 to 3 times longer (wall time).
# 4.3 EFFECTS OF REGULARIZATION
The added regularization proposed in Section 3.2 seems to play an important role in our ability to train the conditional model. When using only the prediction score, we observed that the algorithm tried to compensate by recruiting more units and saturating their participation probability, or even failed by dismissing very early what were probably considered bad units. In practice, the variance regularization term L_v only slightly affects the prediction accuracy and learned policies of models, but we have observed that it significantly speeds up the training process, probably by encouraging policies to become less uniform earlier in the learning process. This can
Figure 5: CIFAR-10. (a) Each pair of circle and triangle is an experiment made with a given λ (x axis), resulting in a model with a certain error and running time (y axes). As λ increases, the running time decreases, but so does performance. (b) The same model is trained with different values of λ_v. Redder means lower, greener means higher.
be seen in Figure 5b, where we train a model with different values of λ_v. When λ_v is increased, the first few epochs have a much lower error rate.
It is possible to tune some hyperparameters to affect the point at which the trade-off between computation speed and performance lies, so one could push the error downwards at the expense of more computation time. This is suggested by Figure 5a, which shows the effect of one such hyperparameter (λ_s) on both running times and performance for the CIFAR dataset. Here it seems that λ_s in the range [300, 400] offers the best trade-off, yet other values could be selected, depending on the specific requirements of an application.
# 5 RELATED WORK
Ba & Frey (2013) proposed a learning algorithm called standout for computing an input-dependent dropout distribution at every node. As opposed to our layer-wise method, standout computes a one-shot dropout mask over the entire network, conditioned on the input to the network. Additionally, masks are unit-wise, while our approach uses masks that span blocks of units. Bengio et al. (2013) introduced Stochastic Times Smooth neurons as gaters for conditional computation within a deep neural network. STS neurons are highly non-linear and non-differentiable functions learned using estimators of the gradient obtained through REINFORCE. They allow a sparse binary gater to be computed as a function of the input, thus reducing computations in the then sparse activation of hidden layers.
Stollenga et al. (2014) recently proposed to learn a sequential decision process over the filters of a convolutional neural network (CNN). As in our work, a direct policy search method was chosen to find the parameters of a control policy. Their problem formulation differs from ours mainly in the notion of decision 'stage'. In their model, an input is first fed through a network, and the activations computed during forward propagation are then served to the next decision stage. The goal of the policy is to select relevant filters from the previous stage so as to improve the decision accuracy on the current example. They also use a gradient-free evolutionary algorithm, in contrast to our gradient-based method.
The Deep Sequential Neural Network (DSNN) model of Denoyer & Gallinari (2014) is possibly closest to our approach. The control process is carried over the layers of the network and uses the output of the previous layer to compute actions. The REINFORCE algorithm is used to train the policy, with the reward/cost function defined as the loss at the output of the base network. DSNN considers the general problem of choosing between different types of mappings (weights) in
a composition of functions. However, they test their model on datasets in which different modes are prominent, making it easy for a policy to distinguish between them.
Another point of comparison for our work is attention models (Mnih et al., 2014; Gregor et al., 2015; Xu et al., 2015). These models typically learn a policy, or a form of policy, that allows them to selectively attend to parts of their input sequentially, in a visual 2D environment. Both attention and our approach aim to reduce computation times. While attention aims to perform dense computations on subsets of the inputs, our approach aims to be more general, since the policy focuses on subsets of the whole computation (it is in a sense more distributed). It should also be possible to combine these approaches, since one acts on the input space and the other acts on the representation space, although the resulting policies would be much more complex, and not necessarily easily trainable.
# 6 CONCLUSION
This paper presents a method for tackling the problem of conditional computation in deep networks using reinforcement learning. We propose a type of parameterized conditional computation policy that maps the activations of a layer to a Bernoulli mask. The reinforcement signal accounts for the loss function of the network in its prediction task, while the policy network itself is regularized to account for the desire to have sparse computations. The REINFORCE algorithm is used to train policies to optimize this cost. Our experiments show that it is possible to train such models at the same levels of accuracy as their standard counterparts. Additionally, it seems possible to execute these similarly accurate models faster due to their sparsity. Furthermore, the model has a few simple parameters that allow control of the trade-off between accuracy and running time.
The use of REINFORCE could be replaced by a more efficient policy search algorithm, and also, perhaps, one in which rewards (or costs) as described above are replaced by a more sequential variant. The more direct use of computation time as a cost may prove beneficial. In general, we consider conditional computation to be an area in which reinforcement learning could be very useful, and it deserves further study.
All the running times reported in the Experiments section are for a CPU, running on a single core. The motivation for this is to explore the deployment of large neural networks on cheap, low-power, single-core CPUs such as phones, while retaining high model capacity and expressiveness. While the results presented here show that our model for conditional computation can achieve speedups in this context, it is worth also investigating the adaptation of these sparse computation models to multi-core/GPU architectures; this is the subject of ongoing work.
# ACKNOWLEDGEMENTS
The authors gratefully acknowledge financial support for this work by the Samsung Advanced Institute of Technology (SAIT), the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Fonds de recherche du Québec - Nature et Technologies (FQRNT).
# REFERENCES
Ba, Jimmy and Frey, Brendan. Adaptive dropout for training deep neural networks. In Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 26, pp. 3084-3092. Curran Associates, Inc., 2013. URL http://papers.nips.cc/paper/5032-adaptive-dropout-for-training-deep-neural-networks.pdf.
Bengio, Y., Simard, P., and Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, pp. 157-166, 1994.

Bengio, Yoshua, Léonard, Nicholas, and Courville, Aaron. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Bergstra, James, Breuleux, Olivier, Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Desjardins, Guillaume, Turian, Joseph, Warde-Farley, David, and Bengio, Yoshua. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.
Bishop, Christopher M. Pattern Recognition and Machine Learning (Information Science and Statis- tics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006. ISBN 0387310738.
Davis, Andrew and Arel, Itamar. Low-rank approximations for conditional feedforward computation in deep neural networks. arXiv preprint arXiv:1312.4461, 2013.
Deisenroth, Marc Peter, Neumann, Gerhard, and Peters, Jan. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1â142, 2013. doi: 10.1561/2300000021. URL http://dx.doi.org/10.1561/2300000021.
Denoyer, Ludovic and Gallinari, Patrick. Deep sequential neural network. CoRR, abs/1410.0510, 2014. URL http://arxiv.org/abs/1410.0510.
Glorot, Xavier and Bengio, Yoshua. Understanding the difï¬culty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artiï¬cial Intelligence and Statistics, AISTATS 2010, Chia Laguna Resort, Sardinia, Italy, May 13-15, 2010, pp. 249â256, 2010. URL http://www.jmlr.org/proceedings/papers/v9/glorot10a.html.
Gregor, Karol, Danihelka, Ivo, Graves, Alex, and Wierstra, Daan. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectiï¬ers: Sur- passing human-level performance on imagenet classiï¬cation. arXiv preprint arXiv:1502.01852, 2015.
Hinton, Geoffrey E., Srivastava, Nitish, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Improving neural networks by preventing co-adaptation of feature detectors. CoRR, Ruslan. abs/1207.0580, 2012. URL http://arxiv.org/abs/1207.0580.
Hochreiter, S. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, T.U. M¨unich, 1991.
Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images, 2009.
Martens, James. Deep learning via hessian-free optimization. In Proceedings of the 27th Interna- tional Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pp. 735â 742, 2010. URL http://www.icml2010.org/papers/458.pdf.
Mnih, Volodymyr, Heess, Nicolas, Graves, Alex, and kavukcuoglu, koray. Recurrent models of vi- sual attention. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 27, pp. 2204â2212. Curran Asso- ciates, Inc., 2014. URL http://papers.nips.cc/paper/5542-recurrent-models- of-visual-attention.pdf.
Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Read- ing digits in natural images with unsupervised feature learning. In NIPS workshop on deep learn- ing and unsupervised feature learning, volume 2011, pp. 5. Granada, Spain, 2011.
Pearlmutter, Barak A. Fast exact multiplication by the hessian. Neural Comput., 6(1):147â doi: 10.1162/neco.1994.6.1.147. URL http: 160, January 1994. //dx.doi.org/10.1162/neco.1994.6.1.147. ISSN 0899-7667.
Puterman, Martin L. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA, 1st edition, 1994. ISBN 0471619779.
Rumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning representations by back-propagating errors. Cognitive modeling, 5, 1988.
Silver, David, Lever, Guy, Heess, Nicolas, Degris, Thomas, Wierstra, Daan, and Riedmiller, Martin. Deterministic policy gradient algorithms. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pp. 387â395, 2014. URL http://jmlr.org/proceedings/papers/v32/silver14.html.
9
# Under review as a conference paper at ICLR 2016
Stollenga, Marijn F, Masci, Jonathan, Gomez, Faustino, and Schmidhuber, J¨urgen. Deep networks with internal selective attention through feedback connections. In Ghahra- mani, Z., Welling, M., Cortes, C., Lawrence, N.D., and Weinberger, K.Q. (eds.), Ad- vances in Neural Information Processing Systems 27, pp. 3545â3553. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5276-deep-networks-with- internal-selective-attention-through-feedback-connections.pdf.
Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist rein- doi: forcement learning. Machine Learning, 8(3-4):229â256, 1992. 10.1007/BF00992696. URL http://dx.doi.org/10.1007/BF00992696. ISSN 0885-6125.
Xu, Kelvin, Ba, Jimmy, Kiros, Ryan, Courville, Aaron, Salakhutdinov, Ruslan, Zemel, Richard, and Bengio, Yoshua. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015.
10
# Under review as a conference paper at ICLR 2016
# A ALGORITHM
The forward pass in our model is done as described in Algorithm 1 below, both at train time and test time.
input: x
1:  h_0 ← x
2:  u_0 ← 1 ;                                        // the input mask is ones
3:  for each hidden layer l ∈ 1, ..., L do
4:      p_l ← sigm(Z^(l) h_{l−1} + d^(l)) = π_l(u_l | s_l = h_{l−1})
5:      u_l ∼ Ber(p_l) ;                              // sample Bernoulli from probabilities p_l
6:      if blocksize > 1 then
7:          extend u_l by repeating each value blocksize times
8:      end
9:      h_l ← f(W^(l) (h_{l−1} ⊙ u_{l−1}) + b^(l)) ⊙ u_l   // this operation can be performed efficiently as described in section 3.4
10: end

# Algorithm 1: Single-input forward pass
This algorithm can easily be extended to the minibatch setting by replacing vector operations by matrix operations. Note that in the case of classiï¬cation, the last layer is a softmax layer and is not multiplied by a mask.
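For concreteness, the following is a minimal NumPy sketch of this forward pass as we read Algorithm 1. The layer shapes, the ReLU choice for f, and the function names are illustrative assumptions, not the exact configuration used in the experiments.

import numpy as np

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, b, Z, d, blocksize=1, rng=np.random):
    # Single-input forward pass with Bernoulli block dropout (sketch of Algorithm 1).
    # W, b: per-layer network weights/biases; Z, d: per-layer policy parameters.
    h, u = x, np.ones_like(x)                  # u_0 = 1: the input mask is ones
    for W_l, b_l, Z_l, d_l in zip(W, b, Z, d):
        p = sigm(Z_l @ h + d_l)                # policy probabilities pi_l(u_l | s_l = h_{l-1})
        u_new = (rng.random(p.shape) < p).astype(h.dtype)   # u_l ~ Ber(p_l)
        if blocksize > 1:                      # one Bernoulli variable per block of units
            u_new = np.repeat(u_new, blocksize)
        # sparse computation: only rows/columns with active mask bits matter
        h = np.maximum(0.0, W_l @ (h * u) + b_l) * u_new
        u = u_new
    return h    # for classification, a final softmax layer would follow, unmasked

Because u_{l−1} zeroes whole input blocks and u_l zeroes whole output blocks, the matrix product can be restricted to the surviving sub-matrix of W^(l), which is where the reported speedups come from.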
input: x
1:  y = forward(x) ;                                  // given the output of the forward pass
2:  c ← C(x) = − log P(y|x)
3:  L ← c + λ_s(L_b + L_e) + λ_v(L_v) + λ_{L2} ‖Θ_NN‖² + λ_{L2} ‖Θ_π‖²   // as in sections 3.2 and 3.3
    // update the neural network weights:
4:  Θ_NN ← Θ_NN − α ∇_{Θ_NN} L
    // update the policy weights:
5:  for each hidden layer l ∈ 1, ..., L do
6:      θ_l ← θ_l − α c ∇_{θ_l} log p_l               // REINFORCE term; p_l is computed as in Algorithm 1
               − α ∇_{θ_l} L ;
7:  end

# Algorithm 2: Single-input backward pass
Note that in line 4, some gradients are zeroes; for example the gradient of the L2 regularisation of Θ_π with respect to Θ_NN is zero. Similarly in line 5, the gradient of c with respect to Θ_π is zero, which is why we have to use REINFORCE to approximate a gradient in the direction that minimizes c.
This algorithm can be extended to the minibatch setting efficiently by replacing the gradient computations in line 7 with the use of the so-called R-op, as described in section 3.1, and other computations as is usually done in the minibatch setting with matrix operations.
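As a sanity check on the policy update above, here is a hedged NumPy sketch of the REINFORCE portion for one layer; the plain SGD step and the variable names are our assumptions, and the regularization gradients are omitted for brevity.

import numpy as np

def policy_step(c, h_prev, p, u, Z_l, d_l, alpha=0.01):
    # One REINFORCE step for a layer's Bernoulli policy (sketch).
    # c: scalar cost C(x); h_prev: the layer input s_l; p: Bernoulli
    # probabilities sigm(Z_l h_prev + d_l); u: the sampled mask. Uses the
    # sigmoid-Bernoulli identity d/dz log Ber(u; sigm(z)) = u - sigm(z).
    dlogp_dz = u - p                                  # gradient w.r.t. pre-sigmoid logits
    Z_l -= alpha * c * np.outer(dlogp_dz, h_prev)     # descend on the expected cost
    d_l -= alpha * c * dlogp_dz
    return Z_l, d_l

Subtracting a per-minibatch baseline from c before this update is a common variance-reduction refinement, though the update as written follows the algorithm above directly.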
# B REINFORCE
REINFORCE (Williams, 1992), also known as the likelihood-ratio method, is a policy search algorithm. It aims to use gradient methods to improve a given parameterized policy.

In reinforcement learning, a sequence of state-action-reward tuples is described as a trajectory τ. The objective function of a parameterized policy π_θ for the cumulative return of a trajectory τ is described as:
J(θ) = E_τ^{π_θ} { Σ_{t=1}^{T} r_t | s_0 }
where s_0 is the initial state of the trajectory. Let R(τ) denote the return for trajectory τ. The gradient of the objective with respect to the parameters of the policy is:
∇_θ J(θ) = ∇_θ E_τ^{π_θ} {R(τ)} = ∇_θ ∫ P{τ|θ} R(τ) dτ = ∫ ∇_θ [ P{τ|θ} R(τ) ] dτ    (8)
Note that the interchange in (8) is only valid under some assumptions (see Silver et al. (2014)).
∇_θ J(θ) = ∫ ∇_θ [ P{τ|θ} R(τ) ] dτ = ∫ [ R(τ) ∇_θ P{τ|θ} + ∇_θ R(τ) P{τ|θ} ] dτ    (9)

= ∫ [ R(τ) (∇_θ P{τ|θ} / P{τ|θ}) + ∇_θ R(τ) ] P{τ|θ} dτ = E_τ^{π_θ} { R(τ) ∇_θ log P{τ|θ} + ∇_θ R(τ) }    (10)
The product rule of derivatives is used in (9), and the derivative of a log in (10). Since R(τ) does not depend on θ directly, the gradient ∇_θ R(τ) is zero. We end up with this gradient:
∇_θ J(θ) = E_τ^{π_θ} { R(τ) ∇_θ log P{τ|θ} }    (11)
Without knowing the transition probabilities, we cannot compute the probability of our trajectories P{τ|θ}, or their gradient. Fortunately we are in an MDP setting, and we can make use of the Markov property of the trajectories to compute the gradient:
∇_θ log P{τ|θ} = ∇_θ log [ p(s_0) ∏_{t=1}^{T} P{s_{t+1}|s_t, a_t} π_θ(a_t|s_t) ]

= ∇_θ log p(s_0) + Σ_{t=1}^{T} [ ∇_θ log P{s_{t+1}|s_t, a_t} + ∇_θ log π_θ(a_t|s_t) ]    (12)

= Σ_{t=1}^{T} ∇_θ log π_θ(a_t|s_t)
In (12), p(s0) does not depend on θ, so the gradient is zero. Similarly, P{st+1|st, at} does not depend on θ (not directly at least), so the gradient is also zero. We end up with the gradient of the log policy, which is easy to compute.
In our particular case, the trajectories only have a single step and the reward of the trajectory is the neural network cost C(x), thus the summation disappears and the gradient found in (2) is obtained by taking the log of the probability of our Bernoulli sample:
∇_{θ_l} C(x) = E { C(x) ∇_{θ_l} log π_{θ_l}(u|s) }

= E { C(x) ∇_{θ_l} log ∏_{i=1}^{k} σ_i^{u_i} (1 − σ_i)^{1−u_i} }

= E { C(x) ∇_{θ_l} Σ_{i=1}^{k} log [ σ_i u_i + (1 − σ_i)(1 − u_i) ] }
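A quick numeric check (ours, illustrative) that the two log-likelihood forms above agree, and that the resulting gradient with respect to the logits is u − σ:

import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=5)                         # pre-sigmoid logits
sigma = 1.0 / (1.0 + np.exp(-z))
u = (rng.random(5) < sigma).astype(float)      # a Bernoulli sample

ll_a = np.sum(u * np.log(sigma) + (1 - u) * np.log(1 - sigma))
ll_b = np.sum(np.log(sigma * u + (1 - sigma) * (1 - u)))
assert np.allclose(ll_a, ll_b)                 # same likelihood, two factorizations

eps = 1e-6
for i in range(5):                             # finite-difference check of the gradient
    z2 = z.copy(); z2[i] += eps
    s2 = 1.0 / (1.0 + np.exp(-z2))
    ll2 = np.sum(u * np.log(s2) + (1 - u) * np.log(1 - s2))
    assert np.isclose((ll2 - ll_a) / eps, u[i] - sigma[i], atol=1e-4)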
| {
"id": "1502.01852"
} |
1511.06279 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 |
Published as a conference paper at ICLR 2016
# NEURAL PROGRAMMER-INTERPRETERS
Scott Reed & Nando de Freitas
Google DeepMind
London, UK
scott.ellison.reed@gmail.com, nandodefreitas@google.com
# ABSTRACT
We propose the neural programmer-interpreter (NPI): a recurrent and compositional neural network that learns to represent and execute programs. NPI has three learnable components: a task-agnostic recurrent core, a persistent key-value program memory, and domain-specific encoders that enable a single NPI to operate in multiple perceptually diverse environments with distinct affordances. By learning to compose lower-level programs to express higher-level programs, NPI reduces sample complexity and increases generalization ability compared to sequence-to-sequence LSTMs. The program memory allows efficient learning of additional tasks by building on existing programs. NPI can also harness the environment (e.g. a scratch pad with read-write pointers) to cache intermediate results of computation, lessening the long-term memory burden on recurrent hidden units. In this work we train the NPI with fully-supervised execution traces; each program has example sequences of calls to the immediate subprograms conditioned on the input. Rather than training on a huge number of relatively weak labels, NPI learns from a small number of rich examples. We demonstrate the capability of our model to learn several types of compositional programs: addition, sorting, and canonicalizing 3D models. Furthermore, a single NPI learns to execute these programs and all 21 associated subprograms.
# INTRODUCTION
Teaching machines to learn new programs, to rapidly compose new programs from existing programs, and to conditionally execute these programs automatically so as to solve a wide variety of tasks is one of the central challenges of AI. Programs appear in many guises in various AI problems, including motor behaviours, image transformations, reinforcement learning policies, classical algorithms, and symbolic relations.

In this paper, we develop a compositional architecture that learns to represent and interpret programs. We refer to this architecture as the Neural Programmer-Interpreter (NPI). The core module is an LSTM-based sequence model that takes as input a learnable program embedding, program arguments passed on by the calling program, and a feature representation of the environment. The output of the core module is a key indicating what program to call next, arguments for the following program and a flag indicating whether the program should terminate. In addition to the recurrent core, the NPI architecture includes a learnable key-value memory of program embeddings. This program memory is essential for learning and re-using programs in a continual manner. Figures 1 and 2 illustrate the NPI on two different tasks.

We show in our experiments that the NPI architecture can learn 21 programs, including addition, sorting, and trajectory planning from image pixels. Crucially, this can be achieved using a single core model with the same parameters shared across all tasks. Different environments (for example images, text, and scratch-pads) may require specific perception modules or encoders to produce the features used by the shared core, as well as environment-specific actuators. Both perception modules and actuators can be learned from data when training the NPI architecture.
To train the NPI we use curriculum learning and supervision via example execution traces. Each program has example sequences of calls to the immediate subprograms conditioned on the input.
[Figure: rendered 3D car frames alongside the generated program trace GOTO() → HGOTO() → LGOTO() → ACT(LEFT) → LGOTO() → ACT(LEFT) → GOTO() → VGOTO() → DGOTO() → ACT(DOWN) → end state]
Figure 1: Example execution of canonicalizing 3D car models. The task is to move the camera such that a target angle and elevation are reached. There is a read-only scratch pad containing the target (angle 1, elevation 2 here). The image encoder is a convnet trained from scratch on pixels.
[Figure: scratch-pad states with the corresponding trace of ADD1, CARRY and ACT calls]

Figure 2: Example execution trace of single-digit addition. The task is to perform a single-digit add on the numbers at pointer locations in the first two rows. The carry (row 3) and output (row 4) should be updated to reflect the addition. At each time step, an observation of the environment (viewed from each pointer on a scratch pad) is encoded into a fixed-length vector.
By using neural networks to represent the subprograms and learning these from data, the approach can generalize on tasks involving rich perceptual inputs and uncertainty.
We may envision two approaches to provide supervision. In one, we provide a very large number of labeled examples, as in object recognition, speech and machine translation. In the other, the approach followed in this paper, the aim is to provide far fewer labeled examples, but where the labels contain richer information allowing the model to learn compositional structure. While unsupervised and reinforcement learning play important roles in perception and motor control, other cognitive abilities are possible thanks to rich supervision and curriculum learning. This is indeed the reason for sending our children to school.

An advantage of our approach to model building and training is that the learned programs exhibit strong generalization. Specifically, when trained to sort sequences of up to twenty numbers in length, they can sort much longer sequences at test time. In contrast, the experiments will show that more standard sequence-to-sequence LSTMs only exhibit weak generalization; see Figure 6.

A trained NPI with fixed parameters and a learned library of programs can act both as an interpreter and as a programmer. As an interpreter, it takes input in the form of a program embedding and input data and subsequently executes the program. As a programmer, it uses samples drawn from a new task to generate a new program embedding that can be added to its library of programs.
# 2 RELATED WORK
Several ideas related to our approach have a long history. For example, the idea of using dynamically programmable networks in which the activations of one network become the weights (the
program) of a second network was mentioned in the Sigma-Pi units section of the influential PDP paper (Rumelhart et al., 1986). This idea appeared in (Sutskever & Hinton, 2009) in the context of learning higher order symbolic relations and in (Donnarumma et al., 2015) as the key ingredient of an architecture for prefrontal cognitive control. Schmidhuber (1992) proposed a related meta-learning idea, whereby one learns the parameters of a slowly changing network, which in turn generates context dependent weight changes for a second rapidly changing network. These approaches have only been demonstrated in very limited settings. In cognitive science, several theories of brain areas controlling other brain parts so as to carry out multiple tasks have been proposed; see for example Schneider & Chein (2003); Anderson (2010) and Donnarumma et al. (2012).

Related problems have been studied in the literature on hierarchical reinforcement learning (e.g., Dietterich (2000); Andre & Russell (2001); Sutton et al. (1999) and Schaul et al. (2015)), imitation and apprenticeship learning (e.g., Kolter et al. (2008) and Rothkopf & Ballard (2013)) and elicitation of options through human interaction (Subramanian et al., 2011). These ideas have held great promise, but have not enjoyed significant impact. We believe the recurrent compositional neural representations proposed in this paper could help these approaches in the future, and in particular in overcoming feature engineering.

Several recent advancements have extended recurrent networks to solve problems beyond simple sequence prediction. Graves et al. (2014) developed a neural Turing machine capable of learning and executing simple programs such as repeat copying, simple priority sorting and associative recall. Vinyals et al. (2015) developed Pointer Networks that generalize the notion of encoder attention in order to provide the decoder a variable-sized output space depending on the input sequence length. This model was shown to be effective for combinatorial optimization problems such as the traveling salesman and Delaunay triangulation. While our proposed model is trained on execution traces instead of input and output pairs, in exchange for this richer supervision we benefit from compositional program structure, improving data efficiency on several problems.

This work is also closely related to program induction. Most previous work on program induction, i.e. inducing a program given example input and output pairs, has used genetic programming (Banzhaf et al., 1998) to evolve useful programs from candidate populations. Mou et al. (2014) process program symbols to learn max-margin program embeddings with the help of parse trees. Zaremba & Sutskever (2014) trained LSTM models to read in the text of simple programs character-by-character and correctly predict the program output. Joulin & Mikolov (2015) augmented a recurrent network with a pushdown stack, allowing for generalization to longer input sequences than seen during training for several algorithmic patterns.

Contemporary to this work, several papers have also studied program induction with variants of recurrent neural networks (Zaremba & Sutskever, 2015; Zaremba et al., 2015; Kaiser & Sutskever, 2015; Kurach et al., 2015; Neelakantan et al., 2015). While we share a similar motivation, our approach is distinct in that we explicitly incorporate compositional structure into the network using a program memory, allowing the model to learn new programs by combining sub-programs.
# 3 MODEL
The NPI core is a long short-term memory (LSTM) network (Hochreiter & Schmidhuber, 1997) that acts as a router between programs conditioned on the current state observation and previous hidden unit states. At each time step, the core module can select another program to invoke using content-based addressing. It emits the probability of ending the current program with a single binary unit. If this probability is over threshold (we used 0.5), control is returned to the caller by popping the caller's LSTM hidden units and program embedding off of a program call stack and resuming execution in this context.

The NPI may also optionally write arguments (ARG) that are passed by reference or value to the invoked sub-programs. For example, an argument could indicate a specific location in the input sequence (by reference), or it could specify a number to write down at a particular location in the sequence (by value). The subsequent state consists of these arguments and observations of the environment. The approach is illustrated in Figures 1 and 2.
It must be emphasized that there is a single inference core. That is, all the LSTM instantiations executing arbitrary programs share the same parameters. Different programs correspond to program embeddings, which are stored in a learnable persistent memory. The programs therefore have a more
succinct representation than neural programs encoded as the full set of weights in a neural network (Rumelhart et al., 1986; Graves et al., 2014).
The output of an NPI, conditioned on an input state and a program to run, is a sequence of actions in a given environment. In this work, we consider several environments: a 1-D array with read-only pointers and a swap action, a 2-D scratch pad with read-write pointers, and a CAD renderer with controllable elevation and azimuth movements. Note that the sequence of actions for a program is not fixed, but dependent also on the input state.
3.1 INFERENCE
Denote the environment observation at time t as e_t ∈ E, and the current program arguments as a_t ∈ A. The form of e_t can vary dramatically by environment; for example it could be a color image or an array of numbers. The program arguments a_t can also vary by environment, but in the experiments for this paper we always used a 3-tuple of integers (a_t(1), a_t(2), a_t(3)). Given the environment and arguments at time t, a fixed-length state encoding s_t ∈ R^D is extracted by a domain-specific encoder f_enc : E × A → R^D. In section 4 we provide examples of several encoders. Note that a single NPI network can have multiple encoders for multiple environments, and encoders can potentially also be shared across tasks. We denote the current program embedding as p_t ∈ R^P. The previous hidden unit and cell states are h^(l)_{t−1} ∈ R^M, l = 1, ..., L, where L is the number of layers in the LSTM. The program and state vectors are then propagated forward through an LSTM mapping f_lstm as in (Sutskever et al., 2014). How to fuse p_t and s_t within f_lstm is an implementation detail, but in this work we concatenate and feed through a 2-layer MLP with rectified linear (ReLU) hidden activation and linear decoder. From the top LSTM hidden state h^L_t, several decoders generate the outputs. The probability of finishing the program and returning to the caller¹ is computed by f_end : R^M → [0, 1]. The lookup key embedding used for retrieving the next program from memory is computed by f_prog : R^M → R^K. Note that R^K can be much smaller than R^P because the key only need act as the identifier of a program, while the program embedding must have enough capacity to conditionally generate a sequence of actions. The contents of the arguments to the next program to be called are generated by f_arg : R^M → A. The feed-forward steps of program inference are summarized below:
s_t = f_enc(e_t, a_t)    (1)
h_t = f_lstm(s_t, p_t, h_{t−1})    (2)
r_t = f_end(h_t),    k_t = f_prog(h_t),    a_{t+1} = f_arg(h_t)    (3)

where r_t, k_t and a_{t+1} correspond to the end-of-program probability, program key embedding, and output arguments at time t, respectively. These yield input arguments at time t + 1. To simplify the notation, we have abstracted properties such as layers and cell memory in the sequence-to-sequence LSTM of equation (2); see (Sutskever et al., 2014) for details. The NPI representation is equipped with key-value memory structures M^key ∈ R^{N×K} and M^prog ∈ R^{N×P} storing program keys and program embeddings, respectively, where N is the current number of programs in memory. We can add more programs by adding rows to memory.

During training, the next program identifier is provided to the model as ground-truth, so that its embedding can be retrieved from the corresponding row of M^prog. At test time, we compute the "program ID" by comparing the key embedding k_t to each row of M^key storing all program keys. Then the program embedding is retrieved from M^prog as follows:

i* = arg max_{i=1..N} (M^key_{i,:})^T k_t ,    p_t ← M^prog_{i*,:}    (4)
The next environmental state e_{t+1} will be determined by the dynamics of the environment and can be affected by both the choice of program p_t and the contents of the output arguments a_t, i.e.

e_{t+1} ∼ f_env(e_t, p_t, a_t)    (5)

The transition mapping f_env is domain-specific and will be discussed in Section 4. A description of the inference procedure is given in Algorithm 1.
¹In our implementation, a program may first call a subprogram before itself finishing. The only exception is the ACT program that signals a low-level action to the environment, e.g. moving a pointer one step left or writing a value. By convention ACT does not call any further sub-programs.
Algorithm 1 Neural programming inference
1: Inputs: Environment observation e, program id i, arguments a, stop threshold α
2: function RUN(i, a)
3:     h ← 0, r ← 0, p ← M^prog_i                    ▷ Init LSTM and return probability.
4:     while r < α do
5:         s ← f_enc(e, a), h ← f_lstm(s, p, h)      ▷ Feed-forward NPI one step.
6:         r ← f_end(h), k ← f_prog(h), a_2 ← f_arg(h)
7:         i_2 ← arg max_{j=1..N} (M^key_j)^T k      ▷ Decide the next program to run.
8:         if i == ACT then e ← f_env(e, p, a)       ▷ Update the environment based on ACT.
9:         else RUN(i_2, a_2)                         ▷ Run subprogram i_2 with arguments a_2.
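To make the control flow concrete, here is a minimal Python sketch of Algorithm 1. The callables in nets stand in for whatever networks implement equations (1)-(5), and the ACT constant and the recursion-depth guard are our additions; this is a sketch, not a reference implementation.

import numpy as np

ACT = 0                                        # assumed id of the ACT primitive

def run(env, i, a, M_key, M_prog, nets, alpha=0.5, depth=64):
    # Recursive NPI inference (sketch of Algorithm 1).
    # M_key: (N, K) program keys; M_prog: (N, P) program embeddings.
    h, r, p = None, 0.0, M_prog[i]             # init LSTM state and return probability
    while r < alpha and depth > 0:
        s = nets['f_enc'](env, a)              # eq. (1): encode observation + arguments
        h = nets['f_lstm'](s, p, h)            # eq. (2): one core step, conditioned on p
        r = nets['f_end'](h)                   # eq. (3): probability of halting
        k = nets['f_prog'](h)                  #          key of the next program
        a2 = nets['f_arg'](h)                  #          arguments for the next program
        i2 = int(np.argmax(M_key @ k))         # eq. (4): content-based program lookup
        if i == ACT:
            env = nets['f_env'](env, p, a)     # eq. (5): ACT changes the environment itself
        else:
            env = run(env, i2, a2, M_key, M_prog, nets, alpha, depth - 1)
    return env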
Each task has a set of actions that affect the environment. For example, in addition there are LEFT and RIGHT actions that move a specified pointer, and a WRITE action which writes a value at a specified location. These actions are encapsulated into a general-purpose ACT program shared across tasks, and the concrete action to be taken is indicated by the NPI-generated arguments a_t.

Note that the core LSTM module of our NPI representation is completely agnostic to the data modality used to produce the state encoding. As long as the same fixed-length embedding is extracted, the same module can in practice route between programs related to sorting arrays just as easily as between programs related to rotating 3D objects. In the experimental sections, we provide details of the modality-specific deep neural networks that we use to produce these fixed-length state vectors.
3.2 TRAINING

To train we use execution traces ξ_t^{inp} : {e_t, i_t, a_t} and ξ_t^{out} : {i_{t+1}, a_{t+1}, r_t}, t = 1, ..., T, where T is the sequence length. Program IDs i_t and i_{t+1} are row-indices in M^key and M^prog of the programs to run at time t and t + 1, respectively. We propose to directly maximize the probability of the correct execution trace output ξ^{out} conditioned on ξ^{inp}:
θ* = arg max_θ Σ_{(ξ^{inp}, ξ^{out})} log P(ξ^{out} | ξ^{inp}; θ)    (6)

where θ are the parameters of our model. Since the traces are variable in length depending on the input, we apply the chain rule to model the joint probability over ξ^{out}:

log P(ξ^{out} | ξ^{inp}; θ) = Σ_{t=1}^{T} log P(ξ_t^{out} | ξ_1^{inp}, ..., ξ_t^{inp}; θ)    (7)
Note that for many problems the input history ξ_1^{inp}, ..., ξ_t^{inp} is critical to deciding future actions because the environment observation at the current time-step e_t alone does not contain enough information. The hidden unit activations of the LSTM in NPI are capable of capturing these temporal dependencies. The single-step conditional probability in equation (7) can be factorized into three further conditional distributions, corresponding to predicting the next program, next arguments, and whether to halt execution:

P(ξ_t^{out} | ξ_1^{inp}, ..., ξ_t^{inp}) = P(i_{t+1} | h_t) P(a_{t+1} | h_t) P(r_t | h_t)    (8)

where h_t is the output of f_lstm at time t, carrying information from previous time steps. We train by gradient ascent on the likelihood in equation (7).
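As a hedged reading of equations (6)-(8), the per-timestep loss below sums a cross-entropy over the next program ID, one softmax per integer argument slot, and a binary cross-entropy over the halting flag; treating the three arguments as independent softmaxes is our assumption about one reasonable parameterization.

import numpy as np

def step_nll(prog_logits, arg_logits, end_prob, i_next, a_next, r_t):
    # Negative log-likelihood of one trace step, factorized as in eq. (8).
    # prog_logits: (N,) scores over programs; arg_logits: list of three score
    # vectors, one per argument slot; end_prob: scalar in (0, 1).
    def log_softmax(x):
        x = x - x.max()
        return x - np.log(np.exp(x).sum())
    nll = -log_softmax(prog_logits)[i_next]                 # -log P(i_{t+1} | h_t)
    for logits, target in zip(arg_logits, a_next):          # -log P(a_{t+1} | h_t)
        nll -= log_softmax(logits)[target]
    nll -= r_t * np.log(end_prob) + (1 - r_t) * np.log(1 - end_prob)   # -log P(r_t | h_t)
    return nll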
We used an adaptive curriculum in which training examples for each mini-batch are fetched with frequency proportional to the model's current prediction error for the corresponding program. Specifically, we set the sampling frequency using a softmax over average prediction error across all programs, with configurable temperature. Every 1000 steps of training we re-estimated these prediction errors. Intuitively, this forces the model to focus on learning the program for which it currently performs worst in executing. We found that the adaptive curriculum immediately worked much better than our best-performing hand-designed curriculum, allowing a multi-task NPI to achieve comparable performance to single-task NPI on all tasks.
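The adaptive curriculum amounts to a softmax over per-program error estimates; the sketch below, with an assumed temperature parameter, shows one way to turn those errors into sampling frequencies.

import numpy as np

def curriculum_probs(avg_errors, temperature=1.0):
    # Sampling frequency per program: softmax over average prediction errors,
    # re-estimated every 1000 training steps; higher error => sampled more often.
    names = list(avg_errors)
    e = np.array([avg_errors[n] for n in names]) / temperature
    e = e - e.max()                            # numerical stability
    p = np.exp(e) / np.exp(e).sum()
    return dict(zip(names, p))

# e.g. probs = curriculum_probs({'ADD': 0.3, 'BUBBLESORT': 0.7, 'GOTO': 0.1})
#      name = np.random.choice(list(probs), p=list(probs.values()))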
We also note that our program has a distinct memory advantage over basic LSTMs because all subprograms can be trained in parallel. For programs whose execution length grows e.g. quadratically with the input sequence length, an LSTM will be highly constrained by device memory to train on short sequences. By exploiting compositionality, an effective curriculum can often be developed with sublinear-length subprograms, enabling our NPI model to train on order of magnitude larger sequences than the LSTM.

Figure 3: Illustration of the addition environment used in our experiments. (a) Example scratch pad and pointers used for computing "96 + 125 = 221"; the carry step is being implemented. (b) Actual trace of the addition program generated by our model on the problem shown to the left. Note that we substituted the ACT calls in the trace with more human-readable steps.
# 4 EXPERIMENTS
This section describes the environment and state encoder function for each task, and shows example outputs and prediction accuracy results. For all tasks, the core LSTM had two layers of size 256. We trained the NPI using the ADAM solver (Kingma & Ba, 2015) with base learning rate 0.0001, batch size 1, and decayed the learning rate by a factor of 0.95 every 10,000 steps.
# 4.1 TASK AND ENVIRONMENT DESCRIPTIONS
In this section we provide an overview of the tasks used to evaluate our model. Table 2 in the appendix provides a full listing of all the programs and subprograms learned by our model.
# ADDITION
The task in this environment is to read in the digits of two base-10 numbers and produce the digits of the answer. Our goal is to teach the model the standard (at least in the US) grade school algorithm of adding, in which one works from right to left applying single-digit add and carry operations.
In this environment, the network is endowed with a "scratch pad" with which to store intermediate computations, e.g. to record carries. There are four pointers: one for each of the two input numbers, one for the carry, and another to write the output. At each time step, a pointer can be moved left or right, or it can record a value to the pad. Figure 3a illustrates the environment of this model, and Figure 3b provides a real execution trace generated by our model.
For the state encoder f_enc, the model is allowed a view of the scratch pad from the perspective of each of the four pointers. That is, the model sees the current values at pointer locations of the two inputs, the carry row and the output row, as 1-of-K encodings, where K is 10 because we are working in base 10. We also append the values of the input argument tuple a_t:

f_enc(Q, i_1, i_2, i_3, i_4, a_t) = MLP([Q(1, i_1), Q(2, i_2), Q(3, i_3), Q(4, i_4), a_t(1), a_t(2), a_t(3)])    (9)

where Q ∈ R^{4×N×K}, and i_1, ..., i_4 are pointers, one per scratch pad row. The first dimension of Q corresponds to scratch pad rows, N is the number of columns (digits) and K is the one-hot encoding dimension. To begin the ADD program, we set the initial arguments to a default value and initialize all pointers to be at the rightmost column. The only subprogram with non-default arguments is ACT, in which case the arguments indicate an action to be taken by a specified pointer.
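A minimal sketch of this encoder: one-hot reads at the four pointer positions are concatenated with the argument tuple and passed through a small MLP. The two-layer ReLU MLP and its widths are illustrative assumptions.

import numpy as np

def f_enc_add(Q, ptrs, a_t, W1, b1, W2, b2):
    # Addition-environment state encoder, eq. (9) (sketch).
    # Q: (4, N, K) one-hot scratch pad (rows: input1, input2, carry, output);
    # ptrs: (i1, i2, i3, i4) column index of each pointer; a_t: three integers.
    reads = [Q[row, col] for row, col in enumerate(ptrs)]   # four K-dim reads
    x = np.concatenate(reads + [np.asarray(a_t, dtype=float)])
    h = np.maximum(0.0, W1 @ x + b1)                        # MLP hidden layer
    return W2 @ h + b2                                      # fixed-length state s_t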
# SORTING
In this section we apply our model to a setting with potentially much longer execution traces: sorting an array of numbers using bubblesort. As in the case of addition we can use a scratch pad to store intermediate states of the array. We define the encoder as follows:

f_enc(Q, i_1, i_2, a_t) = MLP([Q(1, i_1), Q(1, i_2), a_t(1), a_t(2), a_t(3)])    (10)

where Q ∈ R^{1×N×K} is the pad, N is the array length and K is the array entry embedding dimension. Figure 4 shows an example series of array states and an excerpt of an execution trace.

Figure 4: Illustration of the sorting environment used in our experiments. (a) Example scratch pad and pointers used for sorting; several steps of the BUBBLE subprogram are shown. (b) Excerpt from the trace of the learned bubblesort program.
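The same pattern specializes to sorting with two reads from the single pad row; a one-function sketch of eq. (10), reusing an assumed mlp callable:

import numpy as np

def f_enc_sort(Q, i1, i2, a_t, mlp):
    # Sorting-environment encoder, eq. (10) (sketch): two one-hot reads + arguments.
    return mlp(np.concatenate([Q[0, i1], Q[0, i2], np.asarray(a_t, dtype=float)]))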
# CANONICALIZING 3D MODELS
We also apply our model to a vision task with a very different perceptual environment: pixels. Given a rendering of a 3D car, we would like to learn a visual program that "canonicalizes" the model with respect to its pose. Whatever the starting position, the program should generate a trajectory of actions that delivers the camera to the target view, e.g. frontal pose at a 15° elevation. For training data, we used renderings of the 3D car CAD models from (Fidler et al., 2012).

This is a nontrivial problem because different starting positions will require quite different trajectories to reach the target. Further complicating the problem is the fact that the model will need to generalize to different car models than it saw during training.

We again use a scratch pad, but here it is a very simple read-only pad that only contains a target camera elevation and azimuth, i.e. the "canonical pose". Since observations come in the form of image pixels, we use a convolutional neural network f_CNN as the image encoder:

f_enc(Q, x, i_1, i_2, a_t) = MLP([Q(1, i_1), Q(2, i_2), f_CNN(x), a_t(1), a_t(2), a_t(3)])

where x ∈ R^{H×W×3} is a car rendering at the current pose, Q ∈ R^{2×1×K} is the pad containing canonical azimuth and elevation, i_1, i_2 are the (fixed at 1) pointer locations, and K is the one-hot encoding dimension of pose coordinates. We set K = 24 corresponding to 15° pose increments.
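A sketch of the fused encoder: CNN features of the current rendering are concatenated with the one-hot target pose and the arguments before the MLP. The conv_features interface is an assumption standing in for the three conv/pool stages described in Section 4.2.

import numpy as np

def f_enc_canon(Q, x, a_t, conv_features, mlp):
    # 3D canonicalization state encoder (sketch).
    # Q: (2, 1, K) read-only pad with one-hot target azimuth and elevation;
    # x: (H, W, 3) current car rendering; conv_features, mlp: assumed callables.
    target = np.concatenate([Q[0, 0], Q[1, 0]])
    return mlp(np.concatenate([target, conv_features(x), np.asarray(a_t, dtype=float)]))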
Note, critically, that our NPI model only has access to pixels of the rendering and the target pose, and is not provided the pose of query frames. We are also aware that one solution to this problem would be to train a pose classifier network and then find the shortest path to the canonical pose via classical methods. That is also a sensible approach. However, our purpose here is to show that our method generalizes beyond the scratch pad domain to detailed images of 3D objects, and also to other environments with a single multi-task model.
# 4.2 SAMPLE COMPLEXITY AND GENERALIZATION
Both LSTMs and Neural Turing Machines can learn to perform sorting to a limited degree, although they have not been shown to generalize well to much longer arrays than were seen during training. However, we are interested not only in whether sorting can be accomplished, but whether a particular sorting algorithm (e.g. bubblesort) can be learned by the model, and how effectively in terms of sample complexity and generalization.
We compare the generalization ability of our model to a flat sequence-to-sequence LSTM (Sutskever et al., 2014), using the same number of layers (2) and hidden units (256). Note that a flat² version of NPI could also learn sorting of short arrays, but because bubblesort runs in O(N²) for arrays of length N, the execution traces quickly become far too long to store the required number of LSTM states in memory. Our NPI architecture can train on much larger arrays by exploiting compositional structure; the memory requirements of any given subprogram can be restricted to O(N).

²By flat in this case, we mean non-compositional, not making use of subprograms, and only making calls to ACT in order to swap values and move pointers.
[Figure: two plots of sorting per-sequence accuracy comparing Seq2Seq and NPI — (left) accuracy vs. number of training examples; (right) accuracy vs. sequence length, with the training sequence lengths (up to 20) marked.]
Figure 5: Sample complexity. Test accuracy of sequence-to-sequence LSTM versus NPI on length-20 arrays of single-digit numbers. Note that NPI is able to mine and train on subprogram traces from each bubblesort example.

Figure 6: Strong vs. weak generalization. Test accuracy of sequence-to-sequence LSTM versus NPI on varying-length arrays of single-digit numbers. Both models were trained on arrays of single-digit numbers up to length 20.
A strong indicator of whether a neural network has learned a program well is whether it can run the program on inputs of previously-unseen sizes. To evaluate this property, we train both the sequence-to-sequence LSTM and NPI to perform bubblesort on arrays of single-digit numbers from length 2 to length 20. Compared to fixed-length inputs this raises the challenge level during training, but in exchange we can get a more flexible and generalizable sorting program.

To handle variable-sized inputs, the state representation must have some information about input sequence length and the number of steps taken so far. For example, the main BUBBLESORT program naturally needs to call its helper function BUBBLE a number of times dependent on the sequence length. We enable this in our model by adding a third pointer that acts as a counter; each time BUBBLE is called the pointer is advanced by one step. The scratch pad environment also provides a bit indicating whether a pointer is at the start or end of a sequence, equivalent in purpose to end tokens used in a sequence-to-sequence model.
For each length, we provided 64 example bubblesort traces, for a total of 1,216 examples. Then, we evaluated whether the network can learn to sort arrays beyond length 20. We found that the trained model generalizes well, and is capable of sorting arrays up to size 60; see Figure 6. At 60 and beyond, we observed a failure mode in which sweeps of pointers across the array would take the wrong number of steps, suggesting that the limiting performance factor is related to counting. In stark contrast, when provided with the 1,216 examples, the sequence-to-sequence LSTMs fail to generalize beyond arrays of length 25 as shown in Figure 6.
To study sample complexity further, we fix the length of the arrays to 20 and vary the number of training examples. We see in Figure 5 that NPI starts learning with 2 examples and is able to sort almost perfectly with only 8 examples. The sequence-to-sequence model on the other hand requires 64 examples to start learning and only manages to sort well with over 250 examples.
Figure 7 shows several example canonicalization trajectories generated by our model, starting from the leftmost car. The image encoder was a convolutional network with three passes of stride-2 convolution and pooling, trained on renderings of size 128 × 128. The canonical target pose in this case is frontal with 15° elevation. At test time, from an initial rendering, NPI is able to canonicalize cars of varying appearance from multiple starting positions. Importantly, it can generalize to car appearances not encountered in the training set as shown in Figure 7.
# 4.3 LEARNING NEW PROGRAMS WITH A FIXED CORE
One challenge for continual learning of neural-network-based agents is that training on new tasks and experiences can lead to degraded performance in old tasks. The learning of new tasks may require that the network weights change substantially, so care must be taken to avoid catastrophic forgetting (Mccloskey & Cohen, 1989; OReilly et al., 2014). Using NPI, one solution is to fix the weights of the core routing module, and only make sparse updates to the program memory.

When adding a new program the core module's routing computation will be completely unaffected; all the learning for a new task occurs in program embedding space. Of course, the addition of new programs to the memory adds a new choice of program at each time step, and an old program could
[Figure: canonicalization trajectories for several test-set cars, each frame sequence annotated with the generated GOTO/HGOTO/VGOTO/LGOTO/RGOTO/UGOTO/DGOTO/ACT call trace.]

Figure 7: Example canonicalization of several different test set cars. The network is able to generate and execute the appropriate plan based on the starting car image. This NPI was trained on trajectories starting at azimuth (−75°...75°), elevation (0°...60°) in 15° increments. The training trajectories target azimuth 0° and elevation 15°, as in the generated traces above.
mistakenly call a newly added program. To overcome this, when learning a new set of program vectors with a fixed core, in practice we train not only on example traces of the new program, but also traces of existing programs. Alternatively, a simpler approach is to prevent existing programs from calling subsequently added programs, allowing addition of new programs without ever looking back at training data for known programs. In either case, note that only the memory slots of the new programs are updated, and all other weights, including other program embeddings, are fixed.

Table 1 shows the result of adding a maximum-finding program MAX to a multitask NPI trained on addition, sorting and canonicalization. MAX first calls BUBBLESORT and then a new program RJMP, which moves pointers to the right of the sorted array, where the max element can be read. During training we froze all weights except for the two newly-added program embeddings. We find that NPI learns MAX perfectly without forgetting the other tasks. In particular, after training a single multi-task model as outlined in the following section, learning the MAX program with this fixed-core multi-task NPI results in no performance deterioration for all three tasks.
4.4 SOLVING MULTIPLE TASKS WITH A SINGLE NETWORK
In this section we perform a controlled experiment to compare the performance of a multi-task NPI with several single-task NPI models. Table 1 shows the results for addition, sorting and canonicalizing 3D car models. We trained and evaluated on 10-digit numbers for addition, length-5 arrays for sorting, and up to four-step trajectories for canonicalization. As shown in Table 1, one multi-task NPI can learn all three programs (and necessarily the 21 subprograms) with comparable accuracy compared to each single-task NPI.

Task               Single   Multi   + Max
Addition            100.0    97.0    97.0
Sorting             100.0   100.0   100.0
Canon. seen car      89.5    91.4    91.4
Canon. unseen        88.7    89.9    89.9
Maximum                 -       -   100.0

Table 1: Per-sequence % accuracy. "+ Max" indicates performance after addition of the additional max-finding subprograms to memory. "unseen" uses a test set with disjoint car models from the training set, while "seen car" uses the same car models but different trajectories.
# 5 CONCLUSION
We have shown that the NPI can learn programs in very dissimilar environments with different affordances. In the context of sorting we showed that NPI exhibits very strong generalization in comparison to sequence-to-sequence LSTMs. We also showed how a trained NPI with a fixed core can continue to learn new programs without forgetting already learned programs.
ACKNOWLEDGMENTS
We sincerely thank Arun Nair and Ed Grefenstette for helpful suggestions.
# REFERENCES
Anderson, Michael L. Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33:245–266, 2010.

Andre, David and Russell, Stuart J. Programmable reinforcement learning agents. In Advances in Neural Information Processing Systems, pp. 1019–1025. 2001.

Banzhaf, Wolfgang, Nordin, Peter, Keller, Robert E, and Francone, Frank D. Genetic programming: An introduction, volume 1. Morgan Kaufmann San Francisco, 1998.

Dietterich, Thomas G. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227–303, 2000.

Donnarumma, Francesco, Prevete, Roberto, and Trautteur, Giuseppe. Programming in the brain: A neural network theoretical framework. Connection Science, 24(2-3):71–90, 2012.

Donnarumma, Francesco, Prevete, Roberto, Chersi, Fabian, and Pezzulo, Giovanni. A programmer-interpreter neural network architecture for prefrontal cognitive control. International Journal of Neural Systems, 25(6):1550017, 2015.

Fidler, Sanja, Dickinson, Sven, and Urtasun, Raquel. 3D object detection and viewpoint estimation with a deformable 3D cuboid model. In Advances in Neural Information Processing Systems, 2012.

Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.

Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrent nets. In NIPS, 2015.

Kaiser, Łukasz and Sutskever, Ilya. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.

Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. 2015.

Kolter, Zico, Abbeel, Pieter, and Ng, Andrew Y. Hierarchical apprenticeship learning with application to quadruped locomotion. In Advances in Neural Information Processing Systems, pp. 769–776. 2008.

Kurach, Karol, Andrychowicz, Marcin, and Sutskever, Ilya. Neural random-access machines. arXiv preprint arXiv:1511.06392, 2015.

Mccloskey, Michael and Cohen, Neal J. Catastrophic interference in connectionist networks: The sequential learning problem. In The Psychology of Learning and Motivation, volume 24, pp. 109–165. 1989.

Mou, Lili, Li, Ge, Liu, Yuxuan, Peng, Hao, Jin, Zhi, Xu, Yan, and Zhang, Lu. Building program vector representations for deep learning. arXiv preprint arXiv:1409.3358, 2014.

Neelakantan, Arvind, Le, Quoc V, and Sutskever, Ilya. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.

OReilly, Randall C., Bhattacharyya, Rajan, Howard, Michael D., and Ketz, Nicholas. Complementary learning systems. Cognitive Science, 38(6):1229–1248, 2014.

Rothkopf, Constantin A. and Ballard, Dana H. Modular inverse reinforcement learning for visuomotor behavior. Biological Cybernetics, 107(4):477–490, 2013.

Rumelhart, D. E., Hinton, G. E., and McClelland, J. L. Parallel distributed processing: Explorations in the microstructure of cognition, vol. 1. Chapter: A General Framework for Parallel Distributed Processing, pp. 45–76. MIT Press, 1986.

Schaul, Tom, Horgan, Daniel, Gregor, Karol, and Silver, David. Universal value function approximators. In International Conference on Machine Learning, 2015.

Schmidhuber, Jürgen. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992.

Schneider, Walter and Chein, Jason M. Controlled and automatic processing: behavior, theory, and biological mechanisms. Cognitive Science, 27(3):525–559, 2003.

Subramanian, Kaushik, Isbell, Charles, and Thomaz, Andrea. Learning options through human interaction. In IJCAI Workshop on Agents Learning Interactively from Human Teachers, 2011.

Sutskever, Ilya and Hinton, Geoffrey E. Using matrices to model symbolic relationship. In Advances in Neural Information Processing Systems, pp. 1593–1600. 2009.

Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.

Sutton, Richard S., Precup, Doina, and Singh, Satinder. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181–211, 1999.

Vinyals, Oriol, Fortunato, Meire, and Jaitly, Navdeep. Pointer networks. Advances in Neural Information Processing Systems (NIPS), 2015.

Zaremba, Wojciech and Sutskever, Ilya. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.

Zaremba, Wojciech and Sutskever, Ilya. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.

Zaremba, Wojciech, Mikolov, Tomas, Joulin, Armand, and Fergus, Rob. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.
# 6 APPENDIX
6.1 LISTING OF LEARNED PROGRAMS
Below we list the programs learned by our model:
Program      Description                                              Calls
ADD          Perform multi-digit addition                             ADD1, LSHIFT
ADD1         Perform single-digit addition                            ACT, CARRY
CARRY        Mark a 1 in the carry row one unit left                  ACT
LSHIFT       Shift a specified pointer one step left                  ACT
RSHIFT       Shift a specified pointer one step right                 ACT
ACT          Move a pointer or write to the scratch pad               -
BUBBLESORT   Perform bubble sort (ascending order)                    BUBBLE, RESET
BUBBLE       Perform one sweep of pointers left to right              ACT, BSTEP
RESET        Move both pointers all the way left                      LSHIFT
BSTEP        Conditionally swap and advance pointers                  COMPSWAP, RSHIFT
COMPSWAP     Conditionally swap two elements                          ACT
LSHIFT       Shift a specified pointer one step left                  ACT
RSHIFT       Shift a specified pointer one step right                 ACT
ACT          Swap two values at pointer locations or move a pointer   -
GOTO         Change 3D car pose to match the target                   HGOTO, VGOTO
HGOTO        Move horizontally to the target angle                    LGOTO, RGOTO
LGOTO        Move left to match the target angle                      ACT
RGOTO        Move right to match the target angle                     ACT
VGOTO        Move vertically to the target elevation                  UGOTO, DGOTO
UGOTO        Move up to match the target elevation                    ACT
DGOTO        Move down to match the target elevation                  ACT
ACT          Move camera 15° up, down, left or right                  -
RJMP         Move all pointers to the rightmost position              RSHIFT
MAX          Find maximum element of an array                         BUBBLESORT, RJMP

Table 2: Programs learned for addition, sorting and 3D car canonicalization. Note that the ACT program has a different effect depending on the environment and on the passed-in arguments.
6.2 GENERATED EXECUTION TRACE OF BUBBLESORT

Figure 8 shows the sequence of program calls for BUBBLESORT. Pointers 1 and 2 are used to implement the "bubble" operation involving the comparison and swapping of adjacent array elements. The third pointer (referred to in the trace as "PTR 3") is used to count the number of calls to BUBBLE. After every call to RESET the swapping pointers are moved to the beginning of the array and the counting pointer is advanced by 1. When it has reached the end of the scratch pad, the model learns to halt execution of BUBBLESORT.

Figure 8: Generated execution trace from our trained NPI sorting the array [9,2,5]. The three BUBBLE sweeps are shown sequentially below.

BUBBLESORT
  Sweep 1: BUBBLE, PTR 2 RIGHT, BSTEP, COMPSWAP, SWAP 1 2, RSHIFT, PTR 1 RIGHT, PTR 2 RIGHT, BSTEP, COMPSWAP, SWAP 1 2, RSHIFT, PTR 1 RIGHT, PTR 2 RIGHT, RESET, LSHIFT, PTR 1 LEFT, PTR 2 LEFT, LSHIFT, PTR 1 LEFT, PTR 2 LEFT, PTR 3 RIGHT
  Sweep 2: BUBBLE, PTR 2 RIGHT, BSTEP, COMPSWAP, RSHIFT, PTR 1 RIGHT, PTR 2 RIGHT, BSTEP, COMPSWAP, RSHIFT, PTR 1 RIGHT, PTR 2 RIGHT, RESET, LSHIFT, PTR 1 LEFT, PTR 2 LEFT, LSHIFT, PTR 1 LEFT, PTR 2 LEFT, PTR 3 RIGHT
  Sweep 3: identical to sweep 2 (no further swaps are needed)
6.3 ADDITIONAL EXPERIMENT ON ADDITION GENERALIZATION
Based on reviewer feedback, we conducted an additional comparison of NPI and sequence-to-sequence models for the addition task, to evaluate the generalization ability. We implemented addition in a sequence-to-sequence model, training to model sequences of the following form, e.g. for "90 + 160 = 250" we represent the sequence as:
90X160X250
For the simple Seq2Seq baseline above (same number of LSTM layers and hidden units as NPI), we observed that the model could predict one or two digits reliably, but did not generalize even up to 20-digit addition. However, we are aware that others have gotten multi-digit addition of the above form to work to some extent with curriculum learning (Zaremba & Sutskever, 2014). In order to make a more competitive baseline, we helped Seq2Seq in two ways: 1) reverse input digits and stack the two numbers on top of each other to form a 2-channel sequence, and 2) reverse input digits and generate reversed output digits immediately at each time step.
In the approach of 1), the seq2seq model schematically looks like this:
output:  XXXX250
input 1: 090XXXX
input 2: 061XXXX
In the approach of 2), the sequence looks like this:
output:  052
input 1: 090
input 2: 061
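Concretely, under our reading of the two formats (the exact padding scheme is inferred from the examples above and is an assumption):

def s2s_stacked(a, b):
    # Channel-stacked encoding: reversed, zero-padded inputs with 'X' placeholders
    # for the answer, e.g. (90, 160) -> ('090XXXX', '061XXXX'), target 'XXXX250'.
    w = max(len(str(a)), len(str(b)))
    pad = len(str(a + b)) + 1
    in1 = str(a)[::-1].ljust(w, '0') + 'X' * pad
    in2 = str(b)[::-1].ljust(w, '0') + 'X' * pad
    out = 'X' * (w + 1) + str(a + b)
    return in1, in2, out

def s2s_easy(a, b):
    # Aligned encoding: reversed inputs and a reversed target emitted step by
    # step, e.g. (90, 160) -> ('090', '061'), target '052'.
    w = max(len(str(a)), len(str(b)), len(str(a + b)))
    rev = lambda n: str(n)[::-1].ljust(w, '0')
    return rev(a), rev(b), rev(a + b)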
Both 1), which we call s2s-stacked, and 2), which we call s2s-easy, are much stronger competitors to NPI than even the proposed addition baseline. We compare the generalization performance of NPI to these baselines in the figure below:
[Figure: addition generalization, accuracy vs. test sequence length (10 to 3000, log scale), for NPI@32 (per-sequence), S2S-stack@32 and S2S-stack@512 (per-character), and S2S-easy@32 and S2S-easy@64 (per-sequence).]
Figure 9: Comparing NPI and Seq2Seq variants on addition generalization to longer sequences.
We found that NPI trained on 32 examples for problem lengths 1, ..., 20 generalizes with 100% accuracy to all the lengths we tried (up to 3000). s2s-easy trained on twice as many examples generalizes to just over length-2000 problems. s2s-stacked barely generalizes beyond 5, even with far more data. This suggests that locality of computation makes a large impact on generalization performance. Even when we carefully ordered and stacked the input numbers for Seq2Seq, NPI still had an edge in performance. In contrast to Seq2Seq, NPI is taught (supervised for now) to move its pointers so that the key operations (e.g. single digit add, carry) can be done using only local information, and this appears to help generalization.
| {
"id": "1511.04834"
} |
1511.06434 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 |
Under review as a conference paper at ICLR 2016
UNSUPERVISED REPRESENTATION LEARNING WITH DEEP CONVOLUTIONAL GENERATIVE ADVERSARIAL NETWORKS
Alec Radford & Luke Metz
indico Research
Boston, MA
{alec,luke}@indico.io

Soumith Chintala
Facebook AI Research
New York, NY
soumith@fb.com
# ABSTRACT
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
# 1 INTRODUCTION
Learning reusable feature representations from large unlabeled datasets has been an area of active research. In the context of computer vision, one can leverage the practically unlimited amount of unlabeled images and videos to learn good intermediate representations, which can then be used on a variety of supervised learning tasks such as image classification. We propose that one way to build good image representations is by training Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), and later reusing parts of the generator and discriminator networks as feature extractors for supervised tasks. GANs provide an attractive alternative to maximum likelihood techniques. One can additionally argue that their learning process and the lack of a heuristic cost function (such as pixel-wise independent mean-square error) are attractive to representation learning. GANs have been known to be unstable to train, often resulting in generators that produce nonsensical outputs. There has been very limited published research in trying to understand and visualize what GANs learn, and the intermediate representations of multi-layer GANs.
In this paper, we make the following contributions:
• We propose and evaluate a set of constraints on the architectural topology of Convolutional GANs that make them stable to train in most settings. We name this class of architectures Deep Convolutional GANs (DCGAN)
• We use the trained discriminators for image classification tasks, showing competitive performance with other unsupervised algorithms.
• We visualize the filters learnt by GANs and empirically show that specific filters have learned to draw specific objects.
• We show that the generators have interesting vector arithmetic properties allowing for easy manipulation of many semantic qualities of generated samples.
2 RELATED WORK
2.1 REPRESENTATION LEARNING FROM UNLABELED DATA
Unsupervised representation learning is a fairly well studied problem in general computer vision research, as well as in the context of images. A classic approach to unsupervised representation learning is to do clustering on the data (for example using K-means), and leverage the clusters for improved classification scores. In the context of images, one can do hierarchical clustering of image patches (Coates & Ng, 2012) to learn powerful image representations. Another popular method is to train auto-encoders (convolutionally, stacked (Vincent et al., 2010), separating the what and where components of the code (Zhao et al., 2015), ladder structures (Rasmus et al., 2015)) that encode an image into a compact code, and decode the code to reconstruct the image as accurately as possible. These methods have also been shown to learn good feature representations from image pixels. Deep belief networks (Lee et al., 2009) have also been shown to work well in learning hierarchical representations.
2.2 GENERATING NATURAL IMAGES
Generative image models are well studied and fall into two categories: parametric and non-parametric.
The non-parametric models often do matching from a database of existing images, often matching patches of images, and have been used in texture synthesis (Efros et al., 1999), super-resolution (Freeman et al., 2002) and in-painting (Hays & Efros, 2007).
Parametric models for generating images have been explored extensively (for example on MNIST digits or for texture synthesis (Portilla & Simoncelli, 2000)). However, generating natural images of the real world had not seen much success until recently. A variational sampling approach to generating images (Kingma & Welling, 2013) has had some success, but the samples often suffer from being blurry. Another approach generates images using an iterative forward diffusion process (Sohl-Dickstein et al., 2015). Generative Adversarial Networks (Goodfellow et al., 2014) generated images suffering from being noisy and incomprehensible. A Laplacian pyramid extension to this approach (Denton et al., 2015) showed higher quality images, but they still suffered from the objects looking wobbly because of noise introduced in chaining multiple models. A recurrent network approach (Gregor et al., 2015) and a deconvolution network approach (Dosovitskiy et al., 2014) have also recently had some success with generating natural images. However, they have not leveraged the generators for supervised tasks.
2.3 VISUALIZING THE INTERNALS OF CNNS
One constant criticism of using neural networks has been that they are black-box methods, with little understanding of what the networks do in the form of a simple human-consumable algorithm. In the context of CNNs, Zeiler et. al. (Zeiler & Fergus, 2014) showed that by using deconvolutions and filtering the maximal activations, one can find the approximate purpose of each convolution filter in the network. Similarly, using a gradient descent on the inputs lets us inspect the ideal image that activates certain subsets of filters (Mordvintsev et al.).
# 3 APPROACH AND MODEL ARCHITECTURE
Historical attempts to scale up GANs using CNNs to model images have been unsuccessful. This motivated the authors of LAPGAN (Denton et al., 2015) to develop an alternative approach to iteratively upscale low resolution generated images which can be modeled more reliably. We also encountered difficulties attempting to scale GANs using CNN architectures commonly used in the supervised literature. However, after extensive model exploration we identified a family of architectures that resulted in stable training across a range of datasets and allowed for training higher resolution and deeper generative models.
Core to our approach is adopting and modifying three recently demonstrated changes to CNN architectures.
The first is the all convolutional net (Springenberg et al., 2014) which replaces deterministic spatial pooling functions (such as maxpooling) with strided convolutions, allowing the network to learn its own spatial downsampling. We use this approach in our generator, allowing it to learn its own spatial upsampling, and discriminator.
Second is the trend towards eliminating fully connected layers on top of convolutional features. The strongest example of this is global average pooling which has been utilized in state of the art image classification models (Mordvintsev et al.). We found global average pooling increased model stability but hurt convergence speed. A middle ground of directly connecting the highest convolutional features to the input and output respectively of the generator and discriminator worked well. The first layer of the GAN, which takes a uniform noise distribution Z as input, could be called fully connected as it is just a matrix multiplication, but the result is reshaped into a 4-dimensional tensor and used as the start of the convolution stack. For the discriminator, the last convolution layer is flattened and then fed into a single sigmoid output. See Fig. 1 for a visualization of an example model architecture.
Third is Batch Normalization (Ioffe & Szegedy, 2015) which stabilizes learning by normalizing the input to each unit to have zero mean and unit variance. This helps deal with training problems that arise due to poor initialization and helps gradient flow in deeper models. This proved critical to get deep generators to begin learning, preventing the generator from collapsing all samples to a single point which is a common failure mode observed in GANs. Directly applying batchnorm to all layers however, resulted in sample oscillation and model instability. This was avoided by not applying batchnorm to the generator output layer and the discriminator input layer.
The ReLU activation (Nair & Hinton, 2010) is used in the generator with the exception of the output layer which uses the Tanh function. We observed that using a bounded activation allowed the model to learn more quickly to saturate and cover the color space of the training distribution. Within the discriminator we found the leaky rectified activation (Maas et al., 2013) (Xu et al., 2015) to work well, especially for higher resolution modeling. This is in contrast to the original GAN paper, which used the maxout activation (Goodfellow et al., 2013).
# Architecture guidelines for stable Deep Convolutional GANs
• Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).
• Use batchnorm in both the generator and the discriminator.
• Remove fully connected hidden layers for deeper architectures.
• Use ReLU activation in generator for all layers except for the output, which uses Tanh.
• Use LeakyReLU activation in the discriminator for all layers.
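As a concrete illustration of these guidelines, here is a minimal PyTorch sketch of a 64×64 generator and discriminator; the framework and the exact channel widths are our choices for illustration, not the authors' original implementation.

```python
import torch.nn as nn

# Hypothetical sizes: z_dim=100 noise, 64x64x3 output, as in Fig. 1.
class Generator(nn.Module):
    def __init__(self, z_dim=100, ch=128):
        super().__init__()
        self.net = nn.Sequential(
            # "Project and reshape": a stride-1 transposed conv over a 1x1
            # input acts as the matrix multiply mapping z to 4x4 feature maps.
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            # Fractionally-strided convolutions learn the spatial upsampling.
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch), nn.ReLU(True),
            # No batchnorm on the output layer; Tanh bounds pixels to [-1, 1].
            nn.ConvTranspose2d(ch, 3, 4, 2, 1, bias=False), nn.Tanh())

    def forward(self, z):            # z: (N, z_dim, 1, 1)
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, ch=128):
        super().__init__()
        self.net = nn.Sequential(
            # No batchnorm on the input layer; strided convs replace pooling.
            nn.Conv2d(3, ch, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 2, ch * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 4), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 4, ch * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 8), nn.LeakyReLU(0.2, True),
            # Last conv layer flattened into a single sigmoid output.
            nn.Conv2d(ch * 8, 1, 4, 1, 0, bias=False), nn.Sigmoid())

    def forward(self, x):            # x: (N, 3, 64, 64)
        return self.net(x).view(-1)
```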
# 4 DETAILS OF ADVERSARIAL TRAINING
We trained DCGANs on three datasets, Large-scale Scene Understanding (LSUN) (Yu et al., 2015), Imagenet-1k and a newly assembled Faces dataset. Details on the usage of each of these datasets are given below.
No pre-processing was applied to training images besides scaling to the range of the tanh activation function [-1, 1]. All models were trained with mini-batch stochastic gradient descent (SGD) with a mini-batch size of 128. All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02. In the LeakyReLU, the slope of the leak was set to 0.2 in all models. While previous GAN work has used momentum to accelerate training, we used the Adam optimizer (Kingma & Ba, 2014) with tuned hyperparameters. We found the suggested learning rate of 0.001 to be too high, using 0.0002 instead. Additionally, we found leaving the momentum term β1 at the
Figure 1: DCGAN generator used for LSUN scene modeling. A 100 dimensional uniform distribution Z is projected to a small spatial extent convolutional representation with many feature maps. A series of four fractionally-strided convolutions (in some recent papers, these are wrongly called deconvolutions) then convert this high level representation into a 64 × 64 pixel image. Notably, no fully connected or pooling layers are used.
suggested value of 0.9 resulted in training oscillation and instability while reducing it to 0.5 helped stabilize training.
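The stated initialization and optimizer settings can be sketched as follows, reusing the `Generator` and `Discriminator` modules from the earlier sketch; the batchnorm initialization is a common convention we assume here rather than something the paper specifies.

```python
import torch
import torch.nn as nn

def dcgan_init(m):
    # All conv weights drawn from a zero-centered Normal with std 0.02.
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
    elif isinstance(m, nn.BatchNorm2d):
        # Assumed convention: batchnorm scale around 1, bias at 0.
        nn.init.normal_(m.weight, mean=1.0, std=0.02)
        nn.init.zeros_(m.bias)

G, D = Generator(), Discriminator()
G.apply(dcgan_init)
D.apply(dcgan_init)

# Adam with the tuned hyperparameters from the text: lr 0.0002 (not the
# suggested 0.001) and beta1 reduced from 0.9 to 0.5; mini-batch size 128.
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
```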
# 4.1 LSUN
As visual quality of samples from generative image models has improved, concerns of over-fitting and memorization of training samples have risen. To demonstrate how our model scales with more data and higher resolution generation, we train a model on the LSUN bedrooms dataset containing a little over 3 million training examples. Recent analysis has shown that there is a direct link between how fast models learn and their generalization performance (Hardt et al., 2015). We show samples from one epoch of training (Fig.2), mimicking online learning, in addition to samples after convergence (Fig.3), as an opportunity to demonstrate that our model is not producing high quality samples via simply overfitting/memorizing training examples. No data augmentation was applied to the images.
# 4.1.1 DEDUPLICATION
To further decrease the likelihood of the generator memorizing input examples (Fig.2) we perform a simple image de-duplication process. We fit a 3072-128-3072 de-noising dropout regularized RELU autoencoder on 32x32 downsampled center-crops of training examples. The resulting code layer activations are then binarized via thresholding the ReLU activation which has been shown to be an effective information preserving technique (Srivastava et al., 2014) and provides a convenient form of semantic-hashing, allowing for linear time de-duplication. Visual inspection of hash collisions showed high precision with an estimated false positive rate of less than 1 in 100. Additionally, the technique detected and removed approximately 275,000 near duplicates, suggesting a high recall.
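A schematic version of this de-duplication step is sketched below; the `encoder` function stands in for the encoding half of the trained 3072-128-3072 autoencoder and is hypothetical.

```python
import numpy as np

def semantic_hashes(encoder, crops):
    """crops: (N, 3072) flattened 32x32 center-crops. encoder maps them to
    the (N, 128) ReLU code layer; binarizing by thresholding the ReLU
    activations yields a 128-bit semantic hash per image."""
    codes = encoder(crops)
    return codes > 0

def find_duplicates(bits):
    """Linear-time de-duplication: identical 128-bit hashes collide."""
    seen, dupes = {}, []
    for i, b in enumerate(bits):
        key = b.tobytes()                   # hashable bucket key
        if key in seen:
            dupes.append((seen[key], i))    # candidate near-duplicate pair
        else:
            seen[key] = i
    return dupes
```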
4.2 FACES
We scraped images containing human faces from random web image queries of people's names. The people names were acquired from dbpedia, with a criterion that they were born in the modern era. This dataset has 3M images from 10K people. We run an OpenCV face detector on these images, keeping the detections that are sufficiently high resolution, which gives us approximately 350,000 face boxes. We use these face boxes for training. No data augmentation was applied to the images.
Figure 2: Generated bedrooms after one training pass through the dataset. Theoretically, the model could learn to memorize training examples, but this is experimentally unlikely as we train with a small learning rate and minibatch SGD. We are aware of no prior empirical evidence demonstrating memorization with SGD and a small learning rate.
Figure 3: Generated bedrooms after five epochs of training. There appears to be evidence of visual under-fitting via repeated noise textures across multiple samples such as the base boards of some of the beds.
4.3 IMAGENET-1K
We use Imagenet-1k (Deng et al., 2009) as a source of natural images for unsupervised training. We train on 32 × 32 min-resized center crops. No data augmentation was applied to the images.
5 EMPIRICAL VALIDATION OF DCGANS CAPABILITIES
5.1 CLASSIFYING CIFAR-10 USING GANS AS A FEATURE EXTRACTOR
One common technique for evaluating the quality of unsupervised representation learning algorithms is to apply them as a feature extractor on supervised datasets and evaluate the performance of linear models fitted on top of these features.
On the CIFAR-10 dataset, a very strong baseline performance has been demonstrated from a well tuned single layer feature extraction pipeline utilizing K-means as a feature learning algorithm. When using a very large amount of feature maps (4800) this technique achieves 80.6% accuracy. An unsupervised multi-layered extension of the base algorithm reaches 82.0% accuracy (Coates & Ng, 2011). To evaluate the quality of the representations learned by DCGANs for supervised tasks, we train on Imagenet-1k and then use the discriminator's convolutional features from all layers, maxpooling each layer's representation to produce a 4 × 4 spatial grid. These features are then flattened and concatenated to form a 28672 dimensional vector and a regularized linear L2-SVM classifier is trained on top of them. This achieves 82.8% accuracy, outperforming all K-means based approaches. Notably, the discriminator has far fewer feature maps (512 in the highest layer) compared to K-means based techniques, but does result in a larger total feature vector size due to the many layers of 4 × 4 spatial locations. The performance of DCGANs is still less than that of Exemplar CNNs (Dosovitskiy et al., 2015), a technique which trains normal discriminative CNNs in an unsupervised fashion to differentiate between specifically chosen, aggressively augmented, exemplar samples from the source dataset. Further improvements could be made by finetuning the discriminator's representations, but we leave this for future work. Additionally, since our DCGAN was never trained on CIFAR-10 this experiment also demonstrates the domain robustness of the learned features.
Table 1: CIFAR-10 classification results using our pre-trained model. Our DCGAN is not pre-trained on CIFAR-10, but on Imagenet-1k, and the features are used to classify CIFAR-10 images.
| Model | Accuracy | Accuracy (400 per class) | max # of features units |
|---|---|---|---|
| 1 Layer K-means | 80.6% | 63.7% (±0.7%) | 4800 |
| 3 Layer K-means Learned RF | 82.0% | 70.7% (±0.7%) | 3200 |
| View Invariant K-means | 81.9% | 72.6% (±0.7%) | 6400 |
| Exemplar CNN | 84.3% | 77.4% (±0.2%) | 1024 |
| DCGAN (ours) + L2-SVM | 82.8% | 73.8% (±0.4%) | 512 |
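The feature extraction pipeline described above can be sketched as follows; `disc_conv_layers` and the CIFAR-10 arrays are hypothetical stand-ins, and the SVM regularization constant is a guess since the paper does not report it.

```python
import torch
import torch.nn.functional as F
from sklearn.svm import LinearSVC  # regularized linear L2-SVM

def dcgan_features(disc_conv_layers, images):
    """disc_conv_layers: hypothetical helper returning each conv layer's
    activation for `images`. Every layer is max-pooled to a 4x4 grid,
    flattened, and concatenated into one long feature vector."""
    feats = []
    for act in disc_conv_layers(images):           # act: (N, C, H, W)
        pooled = F.adaptive_max_pool2d(act, 4)     # (N, C, 4, 4)
        feats.append(pooled.flatten(1))            # (N, C * 16)
    return torch.cat(feats, dim=1).cpu().numpy()   # ~28672-dim per image

X_train = dcgan_features(disc_conv_layers, cifar_train_images)
clf = LinearSVC(C=1.0)  # C is a placeholder; the paper does not report it
clf.fit(X_train, cifar_train_labels)
```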
5.2 CLASSIFYING SVHN DIGITS USING GANS AS A FEATURE EXTRACTOR
On the StreetView House Numbers dataset (SVHN) (Netzer et al., 2011), we use the features of the discriminator of a DCGAN for supervised purposes when labeled data is scarce. Following similar dataset preparation rules as in the CIFAR-10 experiments, we split off a validation set of 10,000 examples from the non-extra set and use it for all hyperparameter and model selection. 1000 uniformly class distributed training examples are randomly selected and used to train a regularized linear L2-SVM classifier on top of the same feature extraction pipeline used for CIFAR-10. This achieves state of the art (for classification using 1000 labels) at 22.48% test error, improving upon another modification of CNNs designed to leverage unlabeled data (Zhao et al., 2015). Additionally, we validate that the CNN architecture used in DCGAN is not the key contributing factor of the model's performance by training a purely supervised CNN with the same architecture on the same data and optimizing this model via random search over 64 hyperparameter trials (Bergstra & Bengio, 2012). It achieves a significantly higher 28.87% validation error.
# 6 INVESTIGATING AND VISUALIZING THE INTERNALS OF THE NETWORKS
We investigate the trained generators and discriminators in a variety of ways. We do not do any kind of nearest neighbor search on the training set. Nearest neighbors in pixel or feature space are
Table 2: SVHN classification with 1000 labels
| Model | error rate |
|---|---|
| KNN | 77.93% |
| TSVM | 66.55% |
| M1+KNN | 65.63% |
| M1+TSVM | 54.33% |
| M1+M2 | 36.02% |
| SWWAE without dropout | 27.83% |
| SWWAE with dropout | 23.56% |
| DCGAN (ours) + L2-SVM | 22.48% |
| Supervised CNN with the same architecture | 28.87% (validation) |
trivially fooled (Theis et al., 2015) by small image transforms. We also do not use log-likelihood metrics to quantitatively assess the model, as it is a poor (Theis et al., 2015) metric.
6.1 WALKING IN THE LATENT SPACE
The first experiment we did was to understand the landscape of the latent space. Walking on the manifold that is learnt can usually tell us about signs of memorization (if there are sharp transitions) and about the way in which the space is hierarchically collapsed. If walking in this latent space results in semantic changes to the image generations (such as objects being added and removed), we can reason that the model has learned relevant and interesting representations. The results are shown in Fig.4.
6.2 VISUALIZING THE DISCRIMINATOR FEATURES
Previous work has demonstrated that supervised training of CNNs on large image datasets results in very powerful learned features (Zeiler & Fergus, 2014). Additionally, supervised CNNs trained on scene classification learn object detectors (Oquab et al., 2014). We demonstrate that an unsupervised DCGAN trained on a large image dataset can also learn a hierarchy of features that are interesting. Using guided backpropagation as proposed by (Springenberg et al., 2014), we show in Fig.5 that the features learnt by the discriminator activate on typical parts of a bedroom, like beds and windows. For comparison, in the same figure, we give a baseline for randomly initialized features that are not activated on anything that is semantically relevant or interesting.
6.3 MANIPULATING THE GENERATOR REPRESENTATION
6.3.1 FORGETTING TO DRAW CERTAIN OBJECTS
In addition to the representations learnt by a discriminator, there is the question of what representations the generator learns. The quality of samples suggest that the generator learns specific object representations for major scene components such as beds, windows, lamps, doors, and miscellaneous furniture. In order to explore the form that these representations take, we conducted an experiment to attempt to remove windows from the generator completely.
On 150 samples, 52 window bounding boxes were drawn manually. On the second highest convolution layer features, logistic regression was fit to predict whether a feature activation was on a window (or not), by using the criterion that activations inside the drawn bounding boxes are positives and random samples from the same images are negatives. Using this simple model, all feature maps with weights greater than zero (200 in total) were dropped from all spatial locations. Then, random new samples were generated with and without the feature map removal.
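A sketch of this window-removal procedure, with `feats` and `labels` as hypothetical per-location activations and bounding-box labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# feats: (N, C) per-location activations from the second-highest conv layer;
# labels: 1 if the location fell inside a hand-drawn window box, else 0.
clf = LogisticRegression()
clf.fit(feats, labels)

# Drop every feature map whose logistic weight is positive (~200 in total)
# from all spatial locations before re-generating samples.
window_maps = np.where(clf.coef_[0] > 0)[0]

def drop_windows(activations):                 # activations: (N, C, H, W)
    activations[:, window_maps, :, :] = 0.0
    return activations
```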
The generated images with and without the window dropout are shown in Fig.6, and interestingly, the network mostly forgets to draw windows in the bedrooms, replacing them with other objects.
Figure 4: Top rows: Interpolation between a series of 9 random points in Z shows that the space learned has smooth transitions, with every image in the space plausibly looking like a bedroom. In the 6th row, you see a room without a window slowly transforming into a room with a giant window. In the 10th row, you see what appears to be a TV slowly being transformed into a window.
# 6.3.2 VECTOR ARITHMETIC ON FACE SAMPLES
In the context of evaluating learned representations of words, (Mikolov et al., 2013) demonstrated that simple arithmetic operations revealed rich linear structure in representation space. One canonical example demonstrated that the vector("King") - vector("Man") + vector("Woman") resulted in a vector whose nearest neighbor was the vector for Queen. We investigated whether similar structure emerges in the Z representation of our generators. We performed similar arithmetic on the Z vectors of sets of exemplar samples for visual concepts. Experiments working on only single samples per concept were unstable, but averaging the Z vector for three exemplars showed consistent and stable generations that semantically obeyed the arithmetic. In addition to the object manipulation shown in (Fig. 7), we demonstrate that face pose is also modeled linearly in Z space (Fig. 8).
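A sketch of this averaged vector arithmetic, assuming `G` is the trained generator and the `z_*` variables are hypothetical lists of three exemplar Z vectors per concept:

```python
import torch

def concept_vector(z_examples):
    """Average the Z vectors of three exemplar samples for a visual concept;
    single samples were unstable, averages were not."""
    return torch.stack(z_examples).mean(dim=0)

# vector("smiling woman") - vector("neutral woman") + vector("neutral man")
y = (concept_vector(z_smiling_woman)
     - concept_vector(z_neutral_woman)
     + concept_vector(z_neutral_man))

# Interpolation check: add uniform noise of scale 0.25 around Y and decode
# the center vector plus 8 perturbed neighbors with the generator.
perturbed = [y + 0.25 * (2 * torch.rand_like(y) - 1) for _ in range(8)]
samples = G(torch.stack(perturbed + [y]))
```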
These demonstrations suggest interesting applications can be developed using Z representations learned by our models. It has been previously demonstrated that conditional generative models can learn to convincingly model object attributes like scale, rotation, and position (Dosovitskiy et al., 2014). This is to our knowledge the first demonstration of this occurring in purely unsupervised
[Figure 5 panels: random filters (left), trained filters (right).]
Figure 5: On the right, guided backpropagation visualizations of maximal axis-aligned responses for the first 6 learned convolutional features from the last convolution layer in the discriminator. Notice a significant minority of features respond to beds - the central object in the LSUN bedrooms dataset. On the left is a random filter baseline. Comparing to the previous responses there is little to no discrimination and random structure.
Figure 6: Top row: un-modified samples from model. Bottom row: the same samples generated with dropping out "window" filters. Some windows are removed, others are transformed into objects with similar visual appearance such as doors and mirrors. Although visual quality decreased, overall scene composition stayed similar, suggesting the generator has done a good job disentangling scene representation from object representation. Extended experiments could be done to remove other objects from the image and modify the objects the generator draws.
models. Further exploring and developing the above mentioned vector arithmetic could dramatically reduce the amount of data needed for conditional generative modeling of complex image distributions.
# 7 CONCLUSION AND FUTURE WORK
We propose a more stable set of architectures for training generative adversarial networks and we give evidence that adversarial networks learn good representations of images for supervised learning and generative modeling. There are still some forms of model instability remaining - we noticed as models are trained longer they sometimes collapse a subset of filters to a single oscillating mode.
[Figure 7 panels: smiling woman − neutral woman + neutral man = smiling man; man with glasses − man without glasses + woman without glasses = woman with glasses; below, results of doing the same arithmetic in pixel space.]
Figure 7: Vector arithmetic for visual concepts. For each column, the Z vectors of samples are averaged. Arithmetic was then performed on the mean vectors creating a new vector Y. The center sample on the right hand side is produced by feeding Y as input to the generator. To demonstrate the interpolation capabilities of the generator, uniform noise sampled with scale ±0.25 was added to Y to produce the 8 other samples. Applying arithmetic in the input space (bottom two examples) results in noisy overlap due to misalignment.
Further work is needed to tackle this form of instability. We think that extending this framework
Figure 8: A "turn" vector was created from four averaged samples of faces looking left vs looking right. By adding interpolations along this axis to random samples we were able to reliably transform their pose.
to other domains such as video (for frame prediction) and audio (pre-trained features for speech synthesis) should be very interesting. Further investigations into the properties of the learnt latent space would be interesting as well.
# ACKNOWLEDGMENTS
We are fortunate and thankful for all the advice and guidance we have received during this work, especially that of Ian Goodfellow, Tobias Springenberg, Arthur Szlam and Durk Kingma. Additionally we'd like to thank all of the folks at indico for providing support, resources, and conversations, especially the two other members of the indico research team, Dan Kuster and Nathan Lintz. Finally, we'd like to thank Nvidia for donating a Titan-X GPU used in this work.
# REFERENCES
Bergstra, James and Bengio, Yoshua. Random search for hyper-parameter optimization. JMLR, 2012.
Coates, Adam and Ng, Andrew. Selecting receptive fields in deep networks. NIPS, 2011.

Coates, Adam and Ng, Andrew Y. Learning feature representations with k-means. In Neural Networks: Tricks of the Trade, pp. 561–580. Springer, 2012.

Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.
Denton, Emily, Chintala, Soumith, Szlam, Arthur, and Fergus, Rob. Deep generative image models using a laplacian pyramid of adversarial networks. arXiv preprint arXiv:1506.05751, 2015.
Dosovitskiy, Alexey, Springenberg, Jost Tobias, and Brox, Thomas. Learning to generate chairs with convolutional neural networks. arXiv preprint arXiv:1411.5928, 2014.
Dosovitskiy, Alexey, Fischer, Philipp, Springenberg, Jost Tobias, Riedmiller, Martin, and Brox, Thomas. Discriminative unsupervised feature learning with exemplar convolutional neural networks. In Pattern Analysis and Machine Intelligence, IEEE Transactions on, volume 99. IEEE, 2015.

Efros, Alexei, Leung, Thomas K, et al. Texture synthesis by non-parametric sampling. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, volume 2, pp. 1033–1038. IEEE, 1999.

Freeman, William T, Jones, Thouis R, and Pasztor, Egon C. Example-based super-resolution. Computer Graphics and Applications, IEEE, 22(2):56–65, 2002.
Goodfellow, Ian J, Warde-Farley, David, Mirza, Mehdi, Courville, Aaron, and Bengio, Yoshua. Maxout networks. arXiv preprint arXiv:1302.4389, 2013.
Goodfellow, Ian J., Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron C., and Bengio, Yoshua. Generative adversarial nets. NIPS, 2014.
Gregor, Karol, Danihelka, Ivo, Graves, Alex, and Wierstra, Daan. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
Hardt, Moritz, Recht, Benjamin, and Singer, Yoram. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.
Hauberg, Søren, Freifeld, Oren, Larsen, Anders Boesen Lindbo, Fisher III, John W., and Hansen, Lars Kai. Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation. arXiv preprint arXiv:1510.02795, 2015.
Hays, James and Efros, Alexei A. Scene completion using millions of photographs. ACM Transac- tions on Graphics (TOG), 26(3):4, 2007.
Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Kingma, Diederik P and Ba, Jimmy Lei. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kingma, Diederik P and Welling, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Lee, Honglak, Grosse, Roger, Ranganath, Rajesh, and Ng, Andrew Y. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 609–616. ACM, 2009.

Loosli, Gaëlle, Canu, Stéphane, and Bottou, Léon. Training invariant support vector machines using selective sampling. In Bottou, Léon, Chapelle, Olivier, DeCoste, Dennis, and Weston, Jason (eds.), Large Scale Kernel Machines, pp. 301–320. MIT Press, Cambridge, MA., 2007. URL http://leon.bottou.org/papers/loosli-canu-bottou-2006.

Maas, Andrew L, Hannun, Awni Y, and Ng, Andrew Y. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.

Mikolov, Tomas, Sutskever, Ilya, Chen, Kai, Corrado, Greg S, and Dean, Jeff. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119, 2013.

Mordvintsev, Alexander, Olah, Christopher, and Tyka, Mike. Inceptionism: Going deeper into neural networks. http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html. Accessed: 2015-06-17.

Nair, Vinod and Hinton, Geoffrey E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814, 2010.
Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, volume 2011, pp. 5. Granada, Spain, 2011.

Oquab, M., Bottou, L., Laptev, I., and Sivic, J. Learning and transferring mid-level image representations using convolutional neural networks. In CVPR, 2014.

Portilla, Javier and Simoncelli, Eero P. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1):49–70, 2000.

Rasmus, Antti, Valpola, Harri, Honkala, Mikko, Berglund, Mathias, and Raiko, Tapani. Semi-supervised learning with ladder network. arXiv preprint arXiv:1507.02672, 2015.

Sohl-Dickstein, Jascha, Weiss, Eric A, Maheswaranathan, Niru, and Ganguli, Surya. Deep unsupervised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585, 2015.
Springenberg, Jost Tobias, Dosovitskiy, Alexey, Brox, Thomas, and Riedmiller, Martin. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
Srivastava, Rupesh Kumar, Masci, Jonathan, Gomez, Faustino, and Schmidhuber, Jürgen. Understanding locally competitive networks. arXiv preprint arXiv:1410.1165, 2014.
Theis, L., van den Oord, A., and Bethge, M. A note on the evaluation of generative models. arXiv:1511.01844, Nov 2015. URL http://arxiv.org/abs/1511.01844.
Vincent, Pascal, Larochelle, Hugo, Lajoie, Isabelle, Bengio, Yoshua, and Manzagol, Pierre-Antoine. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371–3408, 2010.
Xu, Bing, Wang, Naiyan, Chen, Tianqi, and Li, Mu. Empirical evaluation of rectiï¬ed activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015.
Yu, Fisher, Zhang, Yinda, Song, Shuran, Seff, Ari, and Xiao, Jianxiong. Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
Zeiler, Matthew D and Fergus, Rob. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014, pp. 818–833. Springer, 2014.
Zhao, Junbo, Mathieu, Michael, Goroshin, Ross, and Lecun, Yann. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.
# 8 SUPPLEMENTARY MATERIAL
8.1 EVALUATING DCGANS CAPABILITY TO CAPTURE DATA DISTRIBUTIONS
We propose to apply standard classification metrics to a conditional version of our model, evaluating the conditional distributions learned. We trained a DCGAN on MNIST (splitting off a 10K validation set) as well as a permutation invariant GAN baseline and evaluated the models using a nearest neighbor classifier comparing real data to a set of generated conditional samples. We found that removing the scale and bias parameters from batchnorm produced better results for both models. We speculate that the noise introduced by batchnorm helps the generative models to better explore and generate from the underlying data distribution. The results are shown in Table 3 which compares our models with other techniques. The DCGAN model achieves the same test error as a nearest neighbor classifier fitted on the training dataset - suggesting the DCGAN model has done a superb job at modeling the conditional distributions of this dataset. At one million samples per class, the DCGAN model outperforms InfiMNIST (Loosli et al., 2007), a hand developed data augmentation pipeline which uses translations and elastic deformations of training examples. The DCGAN is competitive with a probabilistic generative data augmentation technique utilizing learned per class transformations (Hauberg et al., 2015) while being more general as it directly models the data instead of transformations of the data.
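A sketch of this nearest-neighbor evaluation, with the generated samples and MNIST arrays as hypothetical stand-ins:

```python
from sklearn.neighbors import KNeighborsClassifier

# gen_images: (M, 784) conditional samples; gen_labels: the digit class each
# sample was conditioned on. Fit 1-NN on the generations, then measure test
# error on real MNIST digits, mirroring the evaluation described above.
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(gen_images, gen_labels)
test_error = 1.0 - knn.score(mnist_test_images, mnist_test_labels)
```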
Table 3: Nearest neighbor classification results.
| Model | Test Error @50K samples | Test Error @10M samples |
|---|---|---|
| AlignMNIST | - | 1.4% |
| InfiMNIST | - | 2.6% |
| Real Data | 3.1% | - |
| GAN | 6.28% | 5.65% |
| DCGAN (ours) | 2.98% | 1.48% |
Figure 9: Side-by-side illustration of (from left-to-right) the MNIST dataset, generations from a baseline GAN, and generations from our DCGAN .
Figure 10: More face generations from our Face DCGAN.
Figure 11: Generations of a DCGAN that was trained on the Imagenet-1k dataset.
| {
"id": "1505.00853"
} |
1511.06342 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 |
Published as a conference paper at ICLR 2016
# ACTOR-MIMIC DEEP MULTITASK AND TRANSFER REINFORCEMENT LEARNING
Emilio Parisotto, Jimmy Ba, Ruslan Salakhutdinov Department of Computer Science University of Toronto Toronto, Ontario, Canada {eparisotto,jimmy,rsalakhu}@cs.toronto.edu
# ABSTRACT
The ability to act in multiple environments and transfer previous knowledge to new situations can be considered a critical aspect of any intelligent agent. Towards this goal, we define a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously, and then generalize its knowledge to new domains. This method, termed "Actor-Mimic", exploits the use of deep reinforcement learning and model compression techniques to train a single policy network that learns how to act in a set of distinct tasks by using the guidance of several expert teachers. We then show that the representations learnt by the deep policy network are capable of generalizing to new tasks with no prior expert guidance, speeding up learning in novel environments. Although our method can in general be applied to a wide range of problems, we use Atari games as a testing environment to demonstrate these methods.
# 1 INTRODUCTION
Deep Reinforcement Learning (DRL), the combination of reinforcement learning methods and deep neural network function approximators, has recently shown considerable success in high-dimensional challenging tasks, such as robotic manipulation (Levine et al., 2015; Lillicrap et al., 2015) and arcade games (Mnih et al., 2015). These methods exploit the ability of deep networks to learn salient descriptions of raw state input, allowing the agent designer to essentially bypass the lengthy process of feature engineering. In addition, these automatically learnt descriptions often significantly outperform hand-crafted feature representations that require extensive domain knowledge. One such DRL approach, the Deep Q-Network (DQN) (Mnih et al., 2015), has achieved state-of-the-art results on the Arcade Learning Environment (ALE) (Bellemare et al., 2013), a benchmark of Atari 2600 arcade games. The DQN uses a deep convolutional neural network over pixel inputs to parameterize a state-action value function. The DQN is trained using Q-learning combined with several tricks that stabilize the training of the network, such as a replay memory to store past transitions and target networks to define a more consistent temporal difference error.
Although the DQN maintains the same network architecture and hyperparameters for all games, the approach is limited in the fact that each network only learns how to play a single game at a time, despite the existence of similarities between games. For example, the tennis-like game of pong and the squash-like game of breakout are both similar in that each game consists of trying to hit a moving ball with a rectangular paddle. A network trained to play multiple games would be able to generalize its knowledge between the games, achieving a single compact state representation as the inter-task similarities are exploited by the network. Having been trained on enough source tasks, the multitask network can also exhibit transfer to new target tasks, which can speed up learning. Training DRL agents can be extremely computationally intensive and therefore reducing training time is a significant practical benefit.
The contribution of this paper is to develop and evaluate methods that enable multitask and transfer learning for DRL agents, using the ALE as a test environment. To first accomplish multitask learning, we design a method called "Actor-Mimic" that leverages techniques from model compression to train a single multitask network using guidance from a set of game-specific expert networks. The particular form of guidance can vary, and several different approaches are explored and tested empirically. To then achieve transfer learning, we treat a multitask network as being a DQN which was pre-trained on a set of source tasks. We show experimentally that this multitask pre-training can result in a DQN that learns a target task significantly faster than a DQN starting from a random initialization, effectively demonstrating that the source task representations generalize to the target task.
# 2 BACKGROUND: DEEP REINFORCEMENT LEARNING
A Markov Decision Process (MDP) is defined as a tuple (S, A, T, R, γ) where S is a set of states, A is a set of actions, T(s'|s, a) is the transition probability of ending up in state s' when executing action a in state s, R is the reward function mapping states in S to rewards in R, and γ is a discount factor. An agent's behaviour in an MDP is represented as a policy π(a|s) which defines the probability of executing action a in state s. For a given policy, we can further define the Q-value function $Q^\pi(s,a) = \mathbb{E}\left[\sum_{t=0}^{H} \gamma^t r_t \mid s_0 = s, a_0 = a\right]$ where H is the step when the game ends. The Q-function represents the expected future discounted reward when starting in a state s, executing a, and then following policy π until a terminating state is reached. There always exists at least one optimal state-action value function, $Q^*(s, a)$, such that $\forall s \in S, a \in A$, $Q^*(s,a) = \max_\pi Q^\pi(s, a)$ (Sutton & Barto, 1998). The optimal Q-function can be rewritten as a Bellman equation:
$$Q^*(s,a) = \mathbb{E}_{s' \sim T(\cdot|s,a)}\left[ r + \gamma \cdot \max_{a' \in A} Q^*(s', a') \right]. \quad (1)$$
An optimal policy can be constructed from the optimal Q-function by choosing, for a given state, the action with highest Q-value. Q-learning, a reinforcement learning algorithm, uses iterative backups of the Q-function to converge towards the optimal Q-function. Using a tabular representation of the Q-function, this is equivalent to setting $Q^{(n+1)}(s,a) = \mathbb{E}_{s' \sim T(\cdot|s,a)}[r + \gamma \max_{a' \in A} Q^{(n)}(s', a')]$ for the (n+1)th update step (Sutton & Barto, 1998). Because the state space in the ALE is too large to tractably store a tabular representation of the Q-function, the Deep Q-Network (DQN) approach uses a deep function approximator to represent the state-action value function (Mnih et al., 2015). To train a DQN on the (n+1)th step, we set the network's loss to
$$\mathcal{L}^{DQN}(\theta^{(n+1)}) = \mathbb{E}_{(s,a,r,s') \sim \mathcal{M}(\cdot)}\left[ \left( r + \gamma \max_{a' \in A} Q(s', a'; \theta^{(n)}) - Q(s, a; \theta^{(n+1)}) \right)^2 \right], \quad (2)$$
where $\mathcal{M}(\cdot)$ is a uniform probability distribution over a replay memory, which is a set of the m previous (s, a, r, s') transition tuples seen during play, where m is the size of the memory. The replay memory is used to reduce correlations between adjacent states and is shown to have large effect on the stability of training the network in some games.
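A minimal sketch of one minibatch of the loss in Eq. (2), assuming transitions are stored as tensors; for brevity the network serves as its own target network and the discount factor is a placeholder:

```python
import random
import torch
import torch.nn.functional as F

def dqn_loss(q_net, replay_memory, batch_size=32, gamma=0.99):
    """One minibatch of Eq. (2): uniform samples from the replay memory M.
    gamma and batch_size are placeholders, not values from the paper."""
    batch = random.sample(replay_memory, batch_size)
    s, a, r, s2 = (torch.stack(x) for x in zip(*batch))
    with torch.no_grad():
        # Bootstrap target: r + gamma * max_a' Q(s', a')
        target = r + gamma * q_net(s2).max(dim=1).values
    pred = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    return F.mse_loss(pred, target)
```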
# 3 ACTOR-MIMIC
3.1 POLICY REGRESSION OBJECTIVE
Given a set of source games S1, ..., SN, our first goal is to obtain a single multitask policy network that can play any source game at as near an expert level as possible. To train this multitask policy network, we use guidance from a set of expert DQN networks E1, ..., EN, where Ei is an expert specialized in source task Si. One possible definition of "guidance" would be to define a squared loss that would match Q-values between the student network and the experts. As the range of the expert value functions could vary widely between games, we found it difficult to directly distill knowledge from the expert value functions. The alternative we develop here is to instead match policies by first transforming Q-values using a softmax. Using the softmax gives us outputs which are bounded in the unit interval and so the effects of the different scales of each expert's Q-function are diminished, achieving higher stability during learning. Intuitively, we can view using the softmax from the perspective of forcing the student to focus more on mimicking the action chosen by the guiding expert at each state, where the exact values of the state are less important. We call this method "Actor-Mimic" as it is an actor, i.e. policy, that mimics the decisions of a set of experts. In particular, our technique first transforms each expert DQN into a policy network by a Boltzmann distribution defined over the Q-value outputs,
$$\pi_{E_i}(a|s) = \frac{e^{\tau^{-1} Q_{E_i}(s,a)}}{\sum_{a' \in A_{E_i}} e^{\tau^{-1} Q_{E_i}(s,a')}}, \quad (3)$$
where τ is a temperature parameter and $A_{E_i}$ is the action space used by the expert $E_i$, $A_{E_i} \subseteq A$. Given a state s from source task $S_i$, we then define the policy objective over the multitask network as the cross-entropy between the expert network's policy and the current multitask policy:
$$\mathcal{L}^i_{policy}(\theta) = \sum_{a \in A_{E_i}} \pi_{E_i}(a|s) \log \pi_{AMN}(a|s; \theta), \quad (4)$$
where $\pi_{AMN}(a|s; \theta)$ is the multitask Actor-Mimic Network (AMN) policy, parameterized by θ. In contrast to the Q-learning objective which recursively relies on itself as a target value, we now have a stable supervised training signal (the expert network output) to guide the multitask network.
To acquire training data, we can sample either the expert network or the AMN action outputs to generate the trajectories used in the loss. Empirically we have observed that sampling from the AMN while it is learning gives the best results. We later prove that in either case of sampling from the expert or AMN as it is learning, the AMN will converge to the expert policy using the policy regression loss, at least in the case when the AMN is a linear function approximator. We use an ε-greedy policy no matter which network we sample actions from, which with probability ε picks a random action uniformly and with probability 1 − ε chooses an action from the network.
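A sketch of the policy regression step in Eqs. (3)-(4), assuming batched expert Q-values and AMN logits over the expert's action space:

```python
import torch.nn.functional as F

def policy_regression_loss(expert_q, amn_logits, tau=1.0):
    """Eq. (3)-(4): turn expert Q-values into a Boltzmann policy at
    temperature tau, then take the cross-entropy against the AMN policy.
    expert_q, amn_logits: (N, |A_Ei|) tensors for a batch of states.
    tau=1.0 is a placeholder; the paper's value is not stated here."""
    expert_pi = F.softmax(expert_q / tau, dim=1)          # Eq. (3)
    log_amn_pi = F.log_softmax(amn_logits, dim=1)
    return -(expert_pi * log_amn_pi).sum(dim=1).mean()    # Eq. (4)
```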
3.2 FEATURE REGRESSION OBJECTIVE
We can obtain further guidance from the expert networks in the following way. Let $h_{AMN}(s)$ and $h_{E_i}(s)$ be the hidden activations in the feature (pre-output) layer of the AMN and ith expert network computed from the input state s, respectively. Note that the dimension of $h_{AMN}(s)$ does not necessarily need to be equal to $h_{E_i}(s)$, and this is the case in some of our experiments. We define a feature regression network $f_i(h_{AMN}(s))$ that, for a given state s, attempts to predict the features $h_{E_i}(s)$ from $h_{AMN}(s)$. The architecture of the mapping $f_i$ can be defined arbitrarily, and $f_i$ can be trained using the following feature regression loss:
$$\mathcal{L}^i_{FeatureRegression}(\theta, \theta_{f_i}) = \left\| f_i(h_{AMN}(s; \theta); \theta_{f_i}) - h_{E_i}(s) \right\|_2^2, \quad (5)$$
where θ and $\theta_{f_i}$ are the parameters of the AMN and ith feature regression network, respectively. When training this objective, the error is fully back-propagated from the feature regression network output through the layers of the AMN. In this way, the feature regression objective provides pressure on the AMN to compute features that can predict an expert's features. A justification for this objective is that if we have a perfect regression from multitask to expert features, all the information in the expert features is contained in the multitask features. The use of the separate feature prediction network $f_i$ for each task enables the multitask network to have a different feature dimension than the experts as well as prevent issues with identifiability. Empirically we have found that the feature regression objective's primary benefit is that it can increase the performance of transfer learning in some target tasks.
3.3 ACTOR-MIMIC OBJECTIVE
Combining both regression objectives, the Actor-Mimic objective is thus defined as
$$\mathcal{L}^i_{ActorMimic}(\theta, \theta_{f_i}) = \mathcal{L}^i_{policy}(\theta) + \beta \cdot \mathcal{L}^i_{FeatureRegression}(\theta, \theta_{f_i}), \quad (6)$$
where β is a scaling parameter which controls the relative weighting of the two objectives. Intuitively, we can think of the policy regression objective as a teacher (expert network) telling a student (AMN) how they should act (mimic expert's actions), while the feature regression objective is analogous to a teacher telling a student why it should act that way (mimic expert's thinking process).
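A sketch of the combined objective in Eqs. (5)-(6), reusing `policy_regression_loss` from the sketch above; the linear form of $f_i$ and the value of β below are our placeholders, since the text only says the architecture of $f_i$ is arbitrary and β is a scaling parameter.

```python
import torch.nn as nn

class FeatureRegression(nn.Module):
    """f_i in Eq. (5): predicts expert features from AMN features. A single
    linear map is an assumed choice; the paper leaves f_i arbitrary."""
    def __init__(self, amn_dim, expert_dim):
        super().__init__()
        self.map = nn.Linear(amn_dim, expert_dim)

    def forward(self, h_amn):
        return self.map(h_amn)

def actor_mimic_loss(expert_q, amn_logits, h_amn, h_expert, f_i, beta=0.01):
    # Eq. (6): policy regression plus beta-weighted feature regression.
    # The feature error back-propagates through f_i into the AMN features;
    # expert features are targets only, hence the detach().
    l_policy = policy_regression_loss(expert_q, amn_logits)
    l_feat = ((f_i(h_amn) - h_expert.detach()) ** 2).sum(dim=1).mean()
    return l_policy + beta * l_feat
```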
3.4 TRANSFERRING KNOWLEDGE: ACTOR-MIMIC AS PRETRAINING
Now that we have a method of training a network that is an expert at all source tasks, we can proceed to the task of transferring source task knowledge to a novel but related target task. To enable transfer to a new task, we first remove the final softmax layer of the AMN. We then use the weights of AMN as an instantiation for a DQN that will be trained on the new target task. The pretrained DQN is then trained using the same training procedure as the one used with a standard DQN. Multitask pretraining can be seen as initializing the DQN with a set of features that are effective at defining policies in related tasks. If the source and target tasks share similarities, it is probable that some of these pretrained features will also be effective at the target task (perhaps after slight fine-tuning).
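A sketch of this weight-transfer step, assuming PyTorch-style modules where the AMN's final softmax/output layer parameters are named with an 'output' prefix (a hypothetical naming):

```python
def init_dqn_from_amn(amn, dqn):
    """Copy every pretrained AMN layer into the target-task DQN except the
    final softmax/output layer, which is re-initialized for the new game."""
    amn_state = {k: v for k, v in amn.state_dict().items()
                 if not k.startswith('output')}   # 'output' name is assumed
    dqn.load_state_dict(amn_state, strict=False)  # rest trained as usual
    return dqn
```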
# 4 CONVERGENCE PROPERTIES OF ACTOR-MIMIC
We further study the convergence properties of the proposed Actor-Mimic under a framework similar to (Perkins & Precup, 2002). The analysis mainly focuses on L2-regularized policy regression without feature regression. Without losing generality, the following analysis focuses on learning from a single game expert softmax policy $\pi_E$. The analysis can be readily extended to consider multiple experts on multiple games by absorbing different games into the same state space. Let $D_\pi(s)$ be the stationary distribution of the Markov decision process under policy π over states s ∈ S. The policy regression objective function can be rewritten using expectation under the stationary distribution of the Markov decision process:
$$\mathcal{L}_{policy}(\theta) = \mathbb{E}_{s \sim D_{\pi_{AMN},\epsilon\text{-greedy}}(\cdot)}\left[ \ell\big( \pi_E(a|s),\, p(a|s; \theta) \big) \right] + \lambda \|\theta\|_2^2, \quad (7)$$
where ℓ(·) is the cross-entropy measure and λ is the coefficient of weight decay that is necessary in the following analysis of the policy regression. Under Actor-Mimic, the learning agent interacts with the environment by following an ε-greedy strategy of some Q function. The mapping from a Q function to an ε-greedy policy $\pi_{\epsilon\text{-greedy}}$ is denoted by an operator Γ, where $\pi_{\epsilon\text{-greedy}} = \Gamma(Q)$. To avoid confusion onwards, we use notation p(a|s; θ) for the softmax policies in the policy regression objective.
Assume each state in a Markov decision process is represented by a compact K-dimensional feature representation $\phi(s) \in \mathbb{R}^K$. Consider a linear function approximator for Q values with parameter matrix $\theta \in \mathbb{R}^{K \times |A|}$, $\hat{Q}(s, a; \theta) = \phi(s)^T \theta_a$, where $\theta_a$ is the ath column of θ. The corresponding softmax policy of the linear approximator is defined by $p(a|s; \theta) \propto \exp\{\hat{Q}(s, a; \theta)\}$.
AO, = âa4 | ®7 Dy (Po,_, â We) + AOâ-1]- (8)
Lemma 1. Under a fixed policy x* and a learning rate schedule that satisfies \~?-, a, = ~, 1 a? < 00, the parameters 0, updated by the stochastic gradient descent learning algorithm described above, asymptotically almost surely converge to a unique solution 0°.
When the policy Ïâ is ï¬xed, the objective function Eq. (7) is convex and is the same as a multinomial logistic regression problem with a bounded Lipschitz constant due to its compact input features. Hence there is a unique stationary point θâ such that âθâ = 0. The proof of Lemma 1 follows the stochastic approximation argument (Robbins & Monro, 1951).
4.2 STOCHASTIC ADAPTIVE POLICY Consider the following learning scheme to adapt the agentâs policy. The learning agent interacts with the environment and samples states by following a fixed e-greedy policy 7â. Given the samples
4
Published as a conference paper at ICLR 2016
BOXING 405 ATLANTIS © [âAMN | IâDQN |âDQN-Max |DQN-Mean BREAKOUT _. 104 CRAZY CLIMBER a 3 8 00r sz Or os- PONG SEAQUEST SPACE INVADERS 0008 0 o0sz 0 00s 000r oszt 0 50 100 0 50 100 50 100 0 50 100
Figure 1: The Actor-Mimic and expert DQN training curves for 100 training epochs for each of the 8 games. A training epoch is 250,000 frames and for each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report AMN and expert DQN test reward for each testing epoch and the mean and max of DQN performance. The max is calculated over all testing epochs that the DQN experienced until convergence while the mean is calculated over the last ten epochs before the DQN training was stopped. In the testing epoch we use ε = 0.05 in the ε-greedy policy. The y-axis is the average unscaled episode reward during a testing epoch. The AMN results are averaged over 2 separately trained networks.
and the expert prediction, the linear function approximator parameters are updated using Eq. (8) to a unique stationary point $\theta^1$. The new parameters $\theta^1$ are then used to establish a new ε-greedy policy $\pi^2 = \Gamma(Q_{\theta^1})$ through the Γ operator over the linear function $Q_{\theta^1}$. The agent under the new policy $\pi^2$ subsequently samples a new set of states and actions from the Markov decision process to update its parameters. The learning agent therefore generates a sequence of policies $\{\pi^1, \pi^2, \pi^3, ...\}$. The proof for the following theorem is given in the Appendix.
Theorem 1. Assume the Markov decision process is irreducible and aperiodic for any policy π induced by the Γ operator, and Γ is Lipschitz continuous with a constant $c_\epsilon$; then the sequence of policies and model parameters generated by the iterative algorithm above converges almost surely to a unique solution $\pi^*$ and $\theta^*$.
4.3 PERFORMANCE GUARANTEE
The convergence theorem implies the Actor-Mimic learning algorithm also belongs to the family of no-regret algorithms in the online learning framework; see Ross et al. for more details. Their theoretical analysis can be directly applied to Actor-Mimic and results in a performance guarantee bound on how well the Actor-Mimic model performs with respect to the guiding expert. Let $Z^{\pi'}_{t,\pi}(s, a)$ be the t-step reward of executing π in the initial state s and then following policy π'. The cost-to-go for a policy π after T steps is defined as $J_T(\pi) = -T\, \mathbb{E}_{s \sim D_\pi(\cdot)}[R(s, a)]$, where R(s, a) is the reward after executing action a in state s.
Proposition 1. For the iterative algorithm described in Section 4.2, if the loss function in Eq. (7) converges to ε with the solution $\pi_{AMN}$ and $Z^{\pi_E}_{t,\pi_{AMN}}(s, a) - Z^{\pi_{AMN}}_{t,\pi_{AMN}}(s, a) \geq u$ for all actions a ∈ A and t ∈ {1, ..., T}, then the cost-to-go of Actor-Mimic $J_T(\pi_{AMN})$ grows linearly after executing T actions: $J_T(\pi_{AMN}) \leq J_T(\pi_E) + uT\epsilon/\log 2$.
The above linear growth rate of the cost-to-go is achieved through sampling from the AMN action output $\pi_{AMN}$, while the cost grows quadratically if the algorithm only samples from the expert action output. Our empirical observations confirm this theoretical prediction.
# 5 EXPERIMENTS
In the following experiments, we validate the Actor-Mimic method by demonstrating its effectiveness at both multitask and transfer learning in the Arcade Learning Environment (ALE). For our experiments, we use subsets of a collection of 20 Atari games. 19 games of this set were among the 29 games that the DQN method performed at a super-human level. We additionally chose 1 game, the game of Seaquest, on which the DQN had performed poorly when compared to a human expert. Details on the training procedure are described in Appendix B.
5.1 MULTITASK
To first evaluate the actor-mimic objective on multitask learning, we demonstrate the effectiveness of training an AMN over multiple games simultaneously. In this particular case, since our focus is
| Game | DQN Mean | DQN Max | AMN Mean | AMN Max | 100% × AMN/DQN Mean | 100% × AMN/DQN Max |
|---|---|---|---|---|---|---|
| Atlantis | 57279 | 541000 | 165065 | 584196 | 288.2% | 108.0% |
| Boxing | 81.47 | 88.02 | 76.264 | 81.860 | 93.61% | 93.00% |
| Breakout | 273.15 | 377.96 | 347.01 | 370.32 | 127.0% | 97.98% |
| Crazy Climber | 96189 | 117593 | 57070 | 74342 | 59.33% | 63.22% |
| Enduro | 457.60 | 808.00 | 499.3 | 686.77 | 109.1% | 85.00% |
| Pong | 19.581 | 20.140 | 15.275 | 18.780 | 78.01% | 93.25% |
| Seaquest | 4278.9 | 6200.5 | 1177.3 | 1466.0 | 27.51% | 23.64% |
| Space Invaders | 1669.2 | 2109.7 | 1142.4 | 1349.0 | 68.44% | 63.94% |
Table 1: Actor-Mimic results on a set of eight Atari games. We compare the AMN performance to that of the expert DQNs trained separately on each game. The expert DQNs were trained until convergence and the AMN was trained for 100 training epochs, which is equivalent to 25 million input frames per source game. For the AMN, we report maximum test reward ever achieved in epochs 1-100 and mean test reward in epochs 91-100. For the DQN, we report maximum test reward ever achieved until convergence and mean test reward in the last 10 epochs of DQN training. Additionally, at the last row of the table we report the percentage ratio of the AMN reward to the expert DQN reward for every game for both mean and max rewards. These percentage ratios are plotted in Figure 6. The AMN results are averaged over 2 separately trained networks.
on multitask learning and not transfer learning, we disregard the feature regression objective and set β to 0. Figure 1 and Table 1 show the results of an AMN trained on 8 games simultaneously with the policy regression objective, compared to an expert DQN trained separately for each game. The AMN and every individual expert DQN in this case had the exact same network architecture. We can see that the AMN quickly reaches close-to-expert performance on 7 games out of 8, only taking around 20 epochs or 5 million training frames to settle to a stable behaviour. This is in comparison to the expert networks, which were trained for up to 50 million frames.
One result that was observed during training is that the AMN often becomes more consistent in its behaviour than the expert DQN, with a noticeably lower reward variance in every game except Atlantis and Pong. Another surprising result is that the AMN achieves a significantly higher mean reward in the game of Atlantis and relatively higher mean reward in the games of Breakout and Enduro. This is despite the fact that the AMN is not being optimized to improve reward over the expert but just to replicate the expert's behaviour. We also observed this increase in source task performance again when we later on increased the AMN model complexity for the transfer experiments (see Atlantis experiments in Appendix D). The AMN had the worst performance on the game of Seaquest, which was a game on which the expert DQN itself did not do very well. It is possible that a low quality expert policy has difficulty teaching the AMN to even replicate its own (poor) behaviour. We compare the performance of our AMN against a baseline of two different multitask DQN architectures in Appendix C.
# 5.2 TRANSFER

We have found that although a small AMN can learn how to behave at a close-to-expert level on multiple source tasks, a larger AMN can more easily transfer knowledge to target tasks after being trained on the source tasks. For the transfer experiments, we therefore significantly increased the AMN model complexity relative to that of an expert. Using a larger network architecture also allowed us to scale up to playing 13 source games at once (see Appendix D for source task performance using the larger AMNs). We additionally found that using an AMN trained for too long on the source tasks hurt transfer, as it is likely overfitting. Therefore for the transfer experiments, we train the AMN on only 4 million frames for each of the source games.
To evaluate the Actor-Mimic objective on transfer learning, the previously described large AMNs will be used as a weight initialization for DQNs which are each trained on a different target task. We additionally independently evaluate the benefit of the feature regression objective during transfer by having one AMN trained with only the policy regression objective (AMN-policy) and another trained using both feature and policy regression (AMN-feature). The results are then compared to the baseline of a DQN that was initialized with random weights.
The performance on a set of 7 target games is detailed in Table 2 (learning curves are plotted in Figure 7). We can see that the AMN pretraining provides a definite increase in learning speed for the 3 games of Breakout, Star Gunner and Video Pinball. The results in Breakout and Video Pinball demonstrate that the policy regression objective alone provides significant positive transfer in some target tasks. The reason for this large positive transfer might be due to the source game Pong having very similar mechanics to both Video Pinball and Breakout, where one must use a paddle to prevent a ball from falling off screen. The machinery used to detect the ball in Pong would likely be useful in detecting the ball for these two target tasks, given some fine-tuning. Additionally, the feature regression objective causes a significant speed-up in the game of Star Gunner compared to both the random initialization and the network trained solely with policy regression. Therefore even though the feature regression objective can slightly hurt transfer in some source games, it can provide large
Breakout | 1 mil | 2 mil | 3 mil | 4 mil | 5 mil | 6 mil | 7 mil | 8 mil | 9 mil | 10 mil
Random | 1.182 | 5.278 | 29.13 | 102.3 | 202.8 | 212.8 | 252.9 | 211.8 | 243.5 | 258.7
AMN-policy | 18.35 | 102.1 | 216.0 | 271.1 | 308.6 | 286.3 | 284.6 | 318.8 | 281.6 | 311.3
AMN-feature | 16.23 | 119.0 | 153.7 | 191.8 | 172.6 | 233.9 | 248.5 | 178.8 | 235.6 | 225.5

Gopher | 1 mil | 2 mil | 3 mil | 4 mil | 5 mil | 6 mil | 7 mil | 8 mil | 9 mil | 10 mil
Random | 294.0 | 578.9 | 1360 | 1540 | 1820 | 1133 | 633.0 | 1306 | 1758 | 1539
AMN-policy | 715.0 | 612.7 | 1362 | 924.8 | 1029 | 1186 | 1081 | 936.7 | 1251 | 1142
AMN-feature | 636.2 | 1110 | 918.8 | 1073 | 1028 | 810.1 | 1008 | 868.8 | 1054 | 982.4

Krull | 1 mil | 2 mil | 3 mil | 4 mil | 5 mil | 6 mil | 7 mil | 8 mil | 9 mil | 10 mil
Random | 4302 | 6193 | 6576 | 7030 | 6754 | 5294 | 5949 | 5557 | 5366 | 6005
AMN-policy | 5827 | 7279 | 6838 | 6971 | 7277 | 7129 | 7854 | 8012 | 7244 | 7835
AMN-feature | 5033 | 7256 | 7008 | 7582 | 7665 | 8016 | 8133 | 6536 | 7832 | 6923

Road Runner | 1 mil | 2 mil | 3 mil | 4 mil | 5 mil | 6 mil | 7 mil | 8 mil | 9 mil | 10 mil
Random | 327.5 | 988.1 | 16263 | 27183 | 26639 | 29488 | 33197 | 27683 | 25235 | 31647
AMN-policy | 1561 | 5119 | 19483 | 22132 | 23391 | 23813 | 34673 | 33476 | 31967 | 31416
AMN-feature | 1349 | 6659 | 18074 | 16858 | 18099 | 22985 | 27023 | 24149 | 28225 | 23342

Robotank | 1 mil | 2 mil | 3 mil | 4 mil | 5 mil | 6 mil | 7 mil | 8 mil | 9 mil | 10 mil
Random | 4.830 | 6.965 | 9.825 | 13.22 | 21.07 | 22.54 | 31.94 | 29.80 | 37.12 | 34.04
AMN-policy | 3.502 | 4.522 | 11.03 | 9.215 | 16.89 | 17.31 | 18.66 | 20.58 | 23.58 | 23.02
AMN-feature | 3.550 | 6.162 | 13.94 | 17.58 | 17.57 | 20.72 | 20.13 | 21.13 | 26.14 | 23.29

Star Gunner | 1 mil | 2 mil | 3 mil | 4 mil | 5 mil | 6 mil | 7 mil | 8 mil | 9 mil | 10 mil
Random | 221.2 | 468.5 | 927.6 | 1084 | 1508 | 1626 | 3286 | 16017 | 36273 | 45322
AMN-policy | 274.3 | 302.0 | 978.4 | 1667 | 4000 | 14655 | 31588 | 45667 | 38738 | 53642
AMN-feature | 1405 | 4570 | 18111 | 23406 | 36070 | 46811 | 50667 | 49579 | 50440 | 56839

Video Pinball | 1 mil | 2 mil | 3 mil | 4 mil | 5 mil | 6 mil | 7 mil | 8 mil | 9 mil | 10 mil
Random | 2323 | 8549 | 6780 | 5842 | 10383 | 11093 | 8468 | 5476 | 9964 | 11893
AMN-policy | 2583 | 25821 | 95949 | 143729 | 57114 | 106873 | 111074 | 73523 | 34908 | 123337
AMN-feature | 1593 | 3958 | 21341 | 12421 | 15409 | 18992 | 15920 | 48690 | 24366 | 26379
Table 2: Actor-Mimic transfer results for a set of 7 games. The 3 networks are trained as DQNs on the target task, with the only difference being the weight initialization. "Random" means random initial weights, "AMN-policy" means a weight initialization with an AMN trained using policy regression and "AMN-feature" means a weight initialization with an AMN trained using both policy and feature regression (see text for more details). We report the average test reward every 4 training epochs (equivalent to 1 million training frames), where the average is over 4 testing epochs that are evaluated immediately after each training epoch. For each game, we bold out the network results that have the highest average testing reward for that particular column.
benefits in others. The positive transfer in Breakout, Star Gunner and Video Pinball saves at least up to 5 million frames of training time in each game. Processing 5 million frames with the large model is equivalent to around 4 days of compute time on a NVIDIA GTX Titan.
On the other hand, for the games of Krull and Road Runner (although the multitask pretraining does help learning at the start) the effect is not very pronounced. When running Krull we observed that the policy learnt by any DQN, regardless of the initialization, was a sort of unexpected local maximum. In Krull, the objective is to move between a set of varied minigames and complete each one. One of the minigames, where the player must traverse a spiderweb, gives extremely high reward by simply jumping quickly in a mostly random fashion. The DQN kills itself on purpose in the initial minigame, runs to the high-reward spiderweb minigame, and then simply jumps in the corner of the spiderweb until it is terminated by the spider. Because it is relatively easy to get stuck in this local maximum, and very hard to get out of it (jumping in the minigame gives disproportionately high reward compared to the other minigames), transfer does not really help learning.
For the games of Gopher and Robotank, we can see that the multitask pretraining does not have any significant positive effect. In particular, multitask pretraining for Robotank even seems to slow down learning, providing an example of negative transfer. The task in Robotank is to control a tank turret in a 3D environment to destroy other tanks, so it's possible that this game is so significantly different from any source task (being the only first-person 3D game) that the multitask pretraining does not provide any useful prior knowledge.
# 6 RELATED WORK

The idea of using expert networks to guide a single mimic network has been studied in the context of supervised learning, where it is known as model compression. The goal of model compression is to reduce the computational complexity of a large model (or ensemble of large models) to a single smaller mimic network while maintaining as high an accuracy as possible. To obtain high accuracy, the mimic network is trained using rich output targets provided by the experts. These output targets are either the final layer logits (Ba & Caruana, 2014) or the high-temperature softmax outputs of the experts (Hinton et al., 2015). Our approach is most similar to the technique of (Hinton et al., 2015)
which matches the high-temperature outputs of the mimic network with that of the expert network. In addition, we also tried an objective that provides expert guidance at the feature level instead of only at the output level. A similar idea was also explored in the model compression case (Romero et al., 2015), where a deep and thin mimic network used a larger expert network's intermediate features as guiding hints during training. In contrast to these model compression techniques, our method is not concerned with decreasing test-time computation but instead uses experts to provide otherwise unavailable supervision to a mimic network on several distinct tasks.
Actor-Mimic can also be considered as part of the larger Imitation Learning class of methods, which use expert guidance to teach an agent how to act. One such method, called DAGGER (Ross et al., 2011), is similar to our approach in that it trains a policy to directly mimic an expert's behaviour while sampling actions from the mimic agent. Actor-Mimic can be considered as an extension of this work to the multitask case. In addition, using a deep neural network to parameterize the policy provides us with several advantages over the more general Imitation Learning framework. First, we can exploit the automatic feature construction ability of deep networks to transfer knowledge to new tasks, as long as the raw data between tasks is in the same form, i.e. pixel data with the same dimensions. Second, we can define objectives which take into account intermediate representations of the state and not just the policy outputs, for example the feature regression objective which provides a richer training signal to the mimic network than just samples of the expert's action output.
Recent work has explored combining expert-guided Imitation Learning and deep neural networks in the single-task case. Guo et al. (2014) use DAGGER with expert guidance provided by Monte-Carlo Tree Search (MCTS) policies to train a deep neural network that improves on the original DQN's performance. Some disadvantages of using MCTS experts as guidance are that they require both access to the (hidden) RAM state of the emulator as well as an environment model. Another related method is that of guided policy search (Levine & Koltun, 2013), which combines a regularized importance-sampled policy gradient with guiding trajectory samples generated using differential dynamic programming. The goal in that work was to learn continuous control policies which improved upon the basic policy gradient method, which is prone to poor local minima.
A wide variety of methods have also been studied in the context of RL transfer learning (see Taylor & Stone (2009) for a more comprehensive review). One related approach is to use a dual state representation with a set of task-specific and task-independent features known as "problem-space" and "agent-space" descriptors, respectively. For each source task, a task-specific value function is learnt on the problem-space descriptors and then these learnt value functions are transferred to a single value function over the agent-space descriptors. Because the agent-space value function is defined over features which maintain constant semantics across all tasks, this value function can be directly transferred to new tasks. Banerjee & Stone (2007) constructed agent-space features by first generating a fixed-depth game tree of the current state, classifying each future state in the tree as either {win, lose, draw, nonterminal} and then coalescing all states which have the same class or subtree. To transfer the source task value functions to agent-space, they use a simple weighted average of the source task value functions, where the weight is proportional to the number of times that a specific agent-space descriptor has been seen during play in that source task. In a related method, Konidaris & Barto (2006) transfer the value function to agent-space by using regression to predict every source task's problem-space value function from the agent-space descriptors. A drawback of these methods is that the agent- and problem-space descriptors are either hand-engineered or generated from a perfect environment model, thus requiring a significant amount of domain knowledge.

# 7 DISCUSSION
In this paper we defined Actor-Mimic, a novel method for training a single deep policy network over a set of related source tasks. We have shown that a network trained using Actor-Mimic is capable of reaching expert performance on many games simultaneously, while having the same model complexity as a single expert. In addition, using Actor-Mimic as a multitask pretraining phase can significantly improve learning speed in a set of target tasks. This demonstrates that the features learnt over the source tasks can generalize to new target tasks, given a sufficient level of similarity between source and target tasks. A direction of future work is to develop methods that can enable a targeted knowledge transfer from source tasks by identifying related source tasks for the given target task. Using targeted knowledge transfer can potentially help in cases of negative transfer observed in our experiments.
Acknowledgments: This work was supported by Samsung and NSERC.
# REFERENCES
Ba, Jimmy and Caruana, Rich. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, pp. 2654–2662, 2014.

Banerjee, Bikramjit and Stone, Peter. General game learning using knowledge transfer. In International Joint Conferences on Artificial Intelligence, pp. 672–677, 2007.

Bellemare, Marc G., Naddaf, Yavar, Veness, Joel, and Bowling, Michael. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.

Bertsekas, Dimitri P. Dynamic programming and optimal control, volume 1. Athena Scientific, Belmont, MA, 1995.

Guo, Xiaoxiao, Singh, Satinder, Lee, Honglak, Lewis, Richard L, and Wang, Xiaoshi. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems 27, pp. 3338–3346, 2014.

Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Kingma, Diederik P. and Ba, Jimmy. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.

Konidaris, George and Barto, Andrew G. Autonomous shaping: Knowledge transfer in reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pp. 489–496, 2006.

Levine, Sergey and Koltun, Vladlen. Guided policy search. In Proceedings of the 30th International Conference on Machine Learning, 2013.

Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. CoRR, abs/1504.00702, 2015.

Lillicrap, Timothy P., Hunt, Jonathan J., Pritzel, Alexander, Heess, Nicholas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.

Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A., Veness, Joel, Bellemare, Marc G., Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K., Ostrovski, Georg, Petersen, Stig, Beattie, Charles, Sadik, Amir, Antonoglou, Ioannis, King, Helen, Kumaran, Dharshan, Wierstra, Daan, Legg, Shane, and Hassabis, Demis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Perkins, Theodore J and Precup, Doina. A convergent form of approximate policy iteration. In Advances in Neural Information Processing Systems, pp. 1595–1602, 2002.

Robbins, Herbert and Monro, Sutton. A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400–407, 1951.

Romero, Adriana, Ballas, Nicolas, Kahou, Samira Ebrahimi, Chassang, Antoine, Gatta, Carlo, and Bengio, Yoshua. FitNets: Hints for thin deep nets. In International Conference on Learning Representations, 2015.

Ross, Stephane, Gordon, Geoffrey, and Bagnell, Andrew. A reduction of imitation learning and structured prediction to no-regret online learning. Journal of Machine Learning Research, 15:627–635, 2011.

Seneta, E. Sensitivity analysis, ergodicity coefficients, and rank-one updates for finite Markov chains. Numerical Solution of Markov Chains, 8:121–129, 1991.

Sutton, Richard S. and Barto, Andrew G. Reinforcement learning: An introduction. MIT Press, Cambridge, 1998.

Taylor, Matthew E and Stone, Peter. Transfer learning for reinforcement learning domains: A survey. The Journal of Machine Learning Research, 10:1633–1685, 2009.
# APPENDIX A PROOF OF THEOREM 1
Lemma 2. For any two policies π¹, π², the stationary distributions over the states under the policies are bounded: ||D_π¹ − D_π²|| ≤ c_D ||π¹ − π²||, for some c_D > 0.
Proof. Let T¹ and T² be the two transition matrices under the stationary distributions D_π¹, D_π². For any ij-th elements T¹_ij, T²_ij,
||T¹_ij − T²_ij|| = || Σ_a p(s_i | a, s_j) (π¹(a|s_j) − π²(a|s_j)) ||   (9)

≤ |A| ||π¹(a|s_j) − π²(a|s_j)||   (10)

≤ |A| ||π¹ − π²||_∞.   (11)
The above bound for any ij-th elements implies that the Euclidean distance of the transition matrices is also upper bounded: ||T¹ − T²|| ≤ |S||A| ||π¹ − π²||. Seneta (1991) has shown that ||D_π¹ − D_π²|| ≤ 1/(1−λ¹) ||T¹ − T²||_∞, where λ¹ is the largest eigenvalue of T¹. Hence, there is a constant c_D > 0 such that ||D_π¹ − D_π²|| ≤ c_D ||π¹ − π²||.
Lemma 3. For any two softmax policy matrices P_θ¹, P_θ² from the linear function approximator, ||P_θ¹ − P_θ²|| ≤ c_P ||Φθ¹ − Φθ²||, for some c_P > 0.
Proof. Note that the ith-row, jth-column element p(a_j | s_i) in a softmax policy matrix P is computed by the softmax transformation on the Q function:
P_ij = p(a_j | s_i) = softmax(Q(s_i, a_j)) = e^{Q(s_i, a_j)} / Σ_k e^{Q(s_i, a_k)}   (12)
Because the softmax function is a monotonically increasing element-wise function on matrices, the Euclidean distance of the softmax transformation is upper bounded by the largest Jacobian in the domain of the softmax function. Namely, for c_s = max_{x ∈ Dom softmax} ||∂softmax(x)/∂x||,
||softmax(x¹) − softmax(x²)|| ≤ c_s ||x¹ − x²||, ∀x¹, x² ∈ Dom softmax.   (13)
By bounding the elements in the P matrix, this gives ||P_θ¹ − P_θ²|| ≤ c_s ||Q_θ¹ − Q_θ²|| = c_s ||Φθ¹ − Φθ²||.
Theorem 1. Assume the Markov decision process is irreducible and aperiodic for any policy π induced by the Γ operator, and that Γ is Lipschitz continuous with a constant c_ε; then the sequence of policies and model parameters generated by the iterative algorithm above converges almost surely to a unique solution π* and θ*.
Proof. We follow a similar contraction argument made in Perkins & Precup (2002), and show the iterative algorithm is a contraction process. Namely, for any two policies π¹ and π², the learning algorithm above produces new policies Γ(Q_θ¹), Γ(Q_θ²) after one iteration, where ||Γ(Q_θ¹) − Γ(Q_θ²)|| ≤ β ||π¹ − π²||. Here || · || is the Euclidean norm and β ∈ (0, 1).
By Lipschitz continuity,
||Γ(Q_θ¹) − Γ(Q_θ²)|| ≤ c_ε ||Q_θ¹ − Q_θ²|| = c_ε ||Φθ¹ − Φθ²||   (14)
≤ c_ε ||Φ|| ||θ¹ − θ²||.   (15)
Let θ¹ and θ² be the stationary points of Eq. (7) under π¹ and π², that is, Δθ¹ = Δθ² = 0 respectively. Rearranging Eq. (8) gives,
||θ¹ − θ²|| = (1/λ) || Φᵀ D_π¹ (P_θ¹ − Π_E) − Φᵀ D_π² (P_θ² − Π_E) ||   (16)

= (1/λ) || Φᵀ (D_π² − D_π¹) Π_E + Φᵀ D_π¹ P_θ¹ − Φᵀ D_π¹ P_θ² + Φᵀ D_π¹ P_θ² − Φᵀ D_π² P_θ² ||   (17)

= (1/λ) || Φᵀ (D_π² − D_π¹) Π_E + Φᵀ D_π¹ (P_θ¹ − P_θ²) + Φᵀ (D_π¹ − D_π²) P_θ² ||   (18)

≤ (1/λ) ( ||Φᵀ|| ||D_π¹ − D_π²|| ||Π_E|| + ||Φᵀ|| ||D_π¹|| ||P_θ¹ − P_θ²|| + ||Φᵀ|| ||D_π¹ − D_π²|| ||P_θ²|| )   (19)

≤ c ||π¹ − π²||.   (20)
The last inequality is given by Lemmas 2 and 3 and the compactness of Φ. For a Lipschitz constant c_ε > c, there exists a β such that ||Γ(Q_θ¹) − Γ(Q_θ²)|| ≤ β ||π¹ − π²||. Hence, the sequence of policies generated by the algorithm converges almost surely to a unique fixed point π* from Lemma 1 and the Contraction Mapping Theorem (Bertsekas, 1995). Furthermore, the model parameters converge w.p. 1 to a stationary point θ* under the fixed point policy π*.
# APPENDIX B AMN TRAINING DETAILS
All of our Actor-Mimic Networks (AMNs) were trained using the Adam (Kingma & Ba, 2015) optimization algorithm. The AMNs have a single 18-unit output, with each output corresponding to one of the 18 possible Atari player actions. Having the full 18-action output simplifies the multitask case when each game has a different subset of valid actions. While playing a certain game, we mask out AMN action outputs that are not valid for that game and take the softmax over only the subset of valid actions. We use a replay memory for each game to reduce correlations between successive frames and stabilize network training. Because the memory requirements of having the standard replay memory size of 1,000,000 frames for each game are prohibitive when we are training over many source games, for AMNs we use a per-game 100,000 frame replay memory. AMN training was stable even with only a per-game equivalent of a tenth of the replay memory size of the DQN experts. For the transfer experiments with the feature regression objective, we set the scaling parameter β to 0.01 and the feature prediction network f_i was set to a linear projection from the AMN features to the i-th expert features. For the policy regression objective, we use a softmax temperature of 1 in all cases. Additionally, during training for all AMNs we use an ε-greedy policy with ε set to a constant 0.1. Annealing ε from 1 did not provide any noticeable benefit. During training, we choose actions based on the AMN and not the expert DQN. We do not use weight decay during AMN training as we empirically found that it did not provide any large benefits.
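As a concrete illustration of the action masking described above, the following sketch (our own, not the released implementation) renormalizes the 18-way output over a game's valid-action subset; the example game and its action indices are hypothetical.

```python
import torch
import torch.nn.functional as F

NUM_ACTIONS = 18  # full Atari action set, shared across all games

def masked_policy(logits, valid_actions):
    """Softmax over only the actions that are valid for the current game."""
    mask = torch.full_like(logits, float('-inf'))
    mask[..., valid_actions] = 0.0           # keep valid logits unchanged
    return F.softmax(logits + mask, dim=-1)  # invalid actions get probability 0

# Hypothetical game whose valid actions are NOOP/FIRE/RIGHT/LEFT:
probs = masked_policy(torch.randn(1, NUM_ACTIONS), [0, 1, 2, 3])
```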
For the experiments using the DQN algorithm, we optimize the networks with RMSProp. Since the DQNs are trained on a single game their output layers only contain the player actions that are valid in the particular game that they are trained on. The experts guiding the AMNs used the same architecture, hyperparameters and training procedure as that of Mnih et al. (2015). We use the full 1,000,000 frame replay memory when training any DQN.
# APPENDIX C MULTITASK DQN BASELINE RESULTS
As a baseline, we trained DQN networks over 8 games simultaneously to test their performance against the Actor-Mimic method. We tried two different architectures: the first uses the basic DQN procedure on all 8 games. This network has a single 18-action output shared by all games, but when we train or test in a particular game, we mask out and ignore the action values from actions that are invalid for that particular game. This architecture is denoted the Multitask DQN (MDQN). The second architecture is a DQN but where each game has a separate fully-connected feature layer and action output. In this architecture only the convolutions are shared between games, and thus the features and action values are completely separate. This was to try to mitigate the destabilizing
[Figure 2 panels: per-game training curves (AMN, DQN, MDQN) for Atlantis, Boxing, Breakout, Crazy Climber, Enduro, Pong, Seaquest, and Space Invaders.]
Figure 2: The Actor-Mimic, expert DQN, and Multitask DQN (MDQN) training curves for 40 training epochs for each of the 8 games. A training epoch is 250,000 frames and for each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report AMN, expert DQN and MDQN test reward for each testing epoch. In the testing epoch we use ε = 0.05 in the ε-greedy policy. The y-axis is the average unscaled episode reward during a testing epoch.
[Figure 3 panels: per-game training curves (AMN, DQN, MCDQN) for the same eight games.]
Figure 3: The Actor-Mimic, expert DQN, and Multitask Convolutions DQN (MCDQN) training curves for 40 training epochs for each of the 8 games. A training epoch is 250,000 frames and for each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report AMN, expert DQN and MCDQN test reward for each testing epoch. In the testing epoch we use ε = 0.05 in the ε-greedy policy. The y-axis is the average unscaled episode reward during a testing epoch.
effect that the different value scales of each game had during learning. This architecture is denoted the Multitask Convolutions DQN (MCDQN).
The results for the MDQN and MCDQN are shown in Figures 2 and 3, respectively. From the figures, we can observe that the AMN is far more stable during training as well as being consistently higher in performance than either the MDQN or MCDQN methods. In addition, it can be seen that the MDQN and MCDQN will often focus on performing reasonably well on a small subset of the source games, such as on Boxing and Enduro, while making little to no progress in others, such as Breakout or Pong. Between the MDQN and MCDQN, we can see that the MCDQN hardly improves results even though it has significantly larger computational cost that scales linearly with the number of source games.
For the specific details of the architectures we tested: for the MDQN the architecture was 8x8x4x32-4¹ → 4x4x32x64-2 → 3x3x64x64-1 → 512 fully-connected units → 18 actions. This is exactly the same network architecture as used for the 8 game AMN in Section 5.1. For the MCDQN, the bottom convolutional layers were the same as the MDQN, except there are 8 parallel subnetworks on top of the convolutional layers. These game-specific subnetworks had the architecture: 512 fully-connected units → 18 actions. All layers except the action outputs were followed with a rectifier non-linearity.
¹ Here we represent convolutional layers as WxWxCxN-S, where W is the width of the (square) convolution kernel, C is the number of input images, N is the number of filter maps and S is the convolution stride.
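Read together with footnote 1, the MDQN specification can be written down directly; the PyTorch sketch below is our own rendering of that layer list (module names and the lazily inferred flatten size are ours), not the authors' code.

```python
import torch.nn as nn

class MDQN(nn.Module):
    """8x8x4x32-4 -> 4x4x32x64-2 -> 3x3x64x64-1 -> 512 FC -> 18 actions."""
    def __init__(self, num_actions: int = 18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(512), nn.ReLU(),  # 512 fully-connected units
            nn.Linear(512, num_actions),    # shared 18-action output
        )

    def forward(self, x):
        return self.net(x)
```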
# APPENDIX D ACTOR-MIMIC NETWORK MULTITASK RESULTS FOR TRANSFER PRETRAINING
The network used for transfer consisted of the following architecture: 8x8x4x256-4¹ → 4x4x256x512-2 → 3x3x512x512-1 → 3x3x512x512-1 → 2048 fully-connected units → 1024 fully-connected units → 18 actions. All layers except the final one were followed with a rectifier non-linearity.
[Figure 4 panels: training curves (AMN-policy vs. DQN, DQN-Max, DQN-Mean) for the source games, including Atlantis, Assault, Beam Rider, Boxing, Crazy Climber, Enduro, Fishing Derby, Kangaroo, Name This Game, Pong, Seaquest, and Space Invaders.]
Figure 4: The Actor-Mimic training curves for the network trained solely with the policy regression objective (AMN-policy). The AMN-policy is trained for 16 epochs, or 4 million frames per game. We compare against the (smaller network) expert DQNs, which are trained until convergence. We also report the maximum test reward the expert DQN achieved over all training epochs, as well as the mean testing reward achieved over the last 10 epochs.
[Figure 5 panels: training curves (AMN-feature vs. DQN, DQN-Max, DQN-Mean) for the same source games.]
Figure 5: The Actor-Mimic training curves for the network trained with both the feature and policy regression objective (AMN-feature). The AMN-feature is trained for 16 epochs, or 4 million frames per game. We compare against the (smaller network) expert DQNs, which are trained until convergence. We also report the maximum test reward the expert DQN achieved over all training epochs, as well as the mean testing reward achieved over the last 10 epochs.
# APPENDIX E TABLE 1 BARPLOT
Figure 6: Plots showing relative mean reward improvement (left) and relative max reward improvement (right) of the multitask AMN over the expert DQNs. See Table 1 for details on how these values were calculated.
# APPENDIX F TABLE 2 LEARNING CURVES
[Figure 7 panels: learning curves (Random, AMN-Policy, AMN-Feature) for Breakout, Gopher, Krull, Road Runner, Robotank, Star Gunner, and Video Pinball.]
Figure 7: Learning curve plots of the results in Table 2.
1511.05756 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | We tackle image question answering (ImageQA) problem by learning a
convolutional neural network (CNN) with a dynamic parameter layer whose weights
are determined adaptively based on questions. For the adaptive parameter
prediction, we employ a separate parameter prediction network, which consists
of gated recurrent unit (GRU) taking a question as its input and a
fully-connected layer generating a set of candidate weights as its output.
However, it is challenging to construct a parameter prediction network for a
large number of parameters in the fully-connected dynamic parameter layer of
the CNN. We reduce the complexity of this problem by incorporating a hashing
technique, where the candidate weights given by the parameter prediction
network are selected using a predefined hash function to determine individual
weights in the dynamic parameter layer. The proposed network---joint network
with the CNN for ImageQA and the parameter prediction network---is trained
end-to-end through back-propagation, where its weights are initialized using a
pre-trained CNN and GRU. The proposed algorithm illustrates the
state-of-the-art performance on all available public ImageQA benchmarks. | http://arxiv.org/pdf/1511.05756 | Hyeonwoo Noh, Paul Hongsuck Seo, Bohyung Han | cs.CV, cs.CL, cs.LG | null | null | cs.CV | 20151118 | 20151118
# Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction
Hyeonwoo Noh, Paul Hongsuck Seo, Bohyung Han
Department of Computer Science and Engineering, POSTECH, Korea
{hyeonwoonoh, hsseo, bhhan}@postech.ac.kr
# Abstract
We tackle the image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for a large number of parameters in the fully-connected dynamic parameter layer of the CNN. We reduce the complexity of this problem by incorporating a hashing technique, where the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network (the joint network with the CNN for ImageQA and the parameter prediction network) is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm illustrates the state-of-the-art performance on all available public ImageQA benchmarks.
# 1. Introduction
One of the ultimate goals in computer vision is holistic scene understanding [30], which requires a system to capture various kinds of information such as objects, actions, events, scene, atmosphere, and their relations in many different levels of semantics. Although significant progress on various recognition tasks [5, 8, 21, 24, 26, 27, 31] has been made in recent years, these works focus only on solving relatively simple recognition problems in controlled settings, where each dataset consists of concepts with similar level of understanding (e.g. object, scene, bird species, face identity, action, texture etc.). There has been less effort made on solving various recognition problems simultaneously, which is more complex and realistic, even though this is a crucial step toward holistic scene understanding.
Q: What type of animal is this? Q: Is this animal alone?
Q: Is it snowing? Q: Is this picture taken during the day?
Q: What kind of oranges are these? Q: Is the fruit sliced?
Q: What is leaning on the wall? Q: How many boards are there?
Figure 1. Sample images and questions in VQA dataset [1]. Each question requires different type and/or level of understanding of the corresponding input image to find correct answers.
Image question answering (ImageQA) [1, 17, 23] aims to solve the holistic scene understanding problem by proposing a task unifying various recognition problems. ImageQA is a task of automatically answering questions about an input image as illustrated in Figure 1. The critical challenge of this problem is that different questions require different types and levels of understanding of an image to find correct answers. For example, to answer a question like "how is the weather?" we need to perform classification on multiple choices related to weather, while we should decide between yes and no for a question like "is this picture taken during the day?" For this reason, not only the performance on a single recognition task but also the capability to select a proper task is important to solve the ImageQA problem.
The ImageQA problem has a short history in the computer vision and machine learning community, but there already exist several approaches [10, 16, 17, 18, 23]. Among these methods, simple deep learning based approaches that perform classification on a combination of features extracted from image and question currently demonstrate the state-of-the-art accuracy on public benchmarks [23, 16]; these approaches extract image features using a convolutional neural network (CNN), and use CNN or bag-of-words to obtain feature descriptors from the question. They can be interpreted as methods in which the answer is given by the co-occurrence of a particular combination of features extracted from an image and a question.
Contrary to the existing approaches, we define a different recognition task depending on a question. To realize this idea, we propose a deep CNN with a dynamic parameter layer whose weights are determined adaptively based on questions. We claim that a single deep CNN architecture can take care of various tasks by allowing adaptive weight assignment in the dynamic parameter layer. For the adaptive parameter prediction, we employ a parameter prediction network, which consists of gated recurrent units (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights for the dynamic parameter layer. The entire network including the CNN for ImageQA and the parameter prediction network is trained end-to-end through back-propagation, where its weights are initialized using pre-trained CNN and GRU. Our main contributions in this work are summarized below:
⢠We successfully adopt a deep CNN with a dynamic pa- rameter layer for ImageQA, which is a fully-connected layer whose parameters are determined dynamically based on a given question.
⢠To predict a large number of weights in the dynamic parameter layer effectively and efï¬ciently, we apply hashing trick [3], which reduces the number of param- eters signiï¬cantly with little impact on network capac- ity.
⢠We ï¬ne-tune GRU pre-trained on a large-scale text cor- pus [14] to improve generalization performance of our network. Pre-training GRU on a large corpus is natural way to deal with a small number of training data, but no one has attempted it yet to our knowledge.
⢠This is the ï¬rst work to report the results on all cur- rently available benchmark datasets such as DAQUAR, COCO-QA and VQA. Our algorithm achieves the state-of-the-art performance on all the three datasets.
The rest of this paper is organized as follows. We first review related work in Section 2. Section 3 and 4 describe the overview of our algorithm and the architecture of our network, respectively. We discuss the detailed procedure to train the proposed network in Section 5. Experimental results are demonstrated in Section 6.
# 2. Related Work
There are several recent papers to address ImageQA [1, 10, 16, 17, 18, 23]; most of them are based on deep learning except [17]. Malinowski and Fritz [17] propose a Bayesian framework, which exploits recent advances in computer vision and natural language processing. Specifically, it employs semantic image segmentation and symbolic question reasoning to solve the ImageQA problem. However, this method depends on a pre-defined set of predicates, which makes it difficult to represent complex models required to understand input images.
Deep learning based approaches demonstrate competitive performances in ImageQA [18, 10, 23, 16, 1]. Most approaches based on deep learning commonly use CNNs to extract features from image while they use different strategies to handle question sentences. Some algorithms employ embedding of joint features based on image and question [1, 10, 18]. However, learning a softmax classifier on the simple joint features (concatenation of CNN-based image features and continuous bag-of-words representation of a question) performs better than LSTM-based embedding on the COCO-QA [23] dataset. Another line of research is to utilize CNNs for feature extraction from both image and question and combine the two features [16]; this approach demonstrates impressive performance enhancement on the DAQUAR [17] dataset by allowing fine-tuning of the whole parameters.
The prediction of the weight parameters in deep neural networks has been explored in [2] in the context of zero-shot learning. To perform classification of unseen classes, it trains a multi-layer perceptron to predict a binary classifier for class-specific description in text. However, this method is not directly applicable to ImageQA since finding solutions based on the combination of question and answer is a more complex problem than the one discussed in [2], and ImageQA involves a significantly larger set of candidate answers, which requires much more parameters than the binary classification case. Recently, a parameter reduction technique based on a hashing trick was proposed by Chen et al. [3] to fit a large neural network in a limited memory budget. However, applying this technique to the dynamic prediction of parameters in deep neural networks has not been attempted yet to our knowledge.
# 3. Algorithm Overview
We briefly describe the motivation and formulation of our approach in this section.
# 3.1. Motivation
Although ImageQA requires different types and levels of image understanding, existing approaches [1, 10, 18] pose the problem as a flat classification task. However, we believe that it is difficult to solve ImageQA using a single deep neural network with fixed parameters. In many CNN-based recognition problems, it is well-known to fine-tune a few layers for the adaptation to new tasks. In addition, some
[Figure 2 diagram: the classification network (a CNN whose dynamic parameter layer receives hashed weights) alongside the parameter prediction network (a chain of GRU cells encoding the question "What is in the cabinet?").]
Figure 2. Overall architecture of the proposed Dynamic Parameter Prediction network (DPPnet), which is composed of the classification network and the parameter prediction network. The weights in the dynamic parameter layer are mapped by a hashing trick from the candidate weights obtained from the parameter prediction network.
networks are designed to solve two or more tasks jointly by constructing multiple branches connected to a common CNN architecture. In this work, we hope to solve the heterogeneous recognition tasks using a single CNN by adapting the weights in the dynamic parameter layer. Since the task is defined by the question in ImageQA, the weights in the layer are determined depending on the question sentence. In addition, a hashing trick is employed to predict a large number of weights in the dynamic parameter layer and avoid parameter explosion.
# 3.2. Problem Formulation

ImageQA systems predict the best answer â given an image I and a question q. Conventional approaches [16, 23] typically construct a joint feature vector based on the two inputs I and q and solve a classification problem for ImageQA using the following equation:

â = argmax_{a∈Ω} p(a | I, q; θ)   (1)

where Ω is the set of all possible answers and θ is a vector of the parameters in the network. On the contrary, we use the question to predict weights in the classifier and solve the problem. We find the solution by

â = argmax_{a∈Ω} p(a | I; θs, θd(q))   (2)

where θs and θd(q) denote static and dynamic parameters, respectively. Note that the values of θd(q) are determined by the question q.

# 4. Network Architecture

Figure 2 illustrates the overall architecture of the proposed algorithm. The network is composed of two sub-networks: classification network and parameter prediction network. The classification network is a CNN. One of the fully-connected layers in the CNN is the dynamic parameter layer, and the weights in the layer are determined adaptively by the parameter prediction network. The parameter prediction network has GRU cells and a fully-connected layer. It takes a question as its input and generates a real-valued vector, which corresponds to candidate weights for the dynamic parameter layer in the classification network. Given an image and a question, our algorithm estimates the weights in the dynamic parameter layer through hashing with the candidate weights obtained from the parameter prediction network. Then, it feeds the input image to the classification network to obtain the final answer. More details of the proposed network are discussed in the following subsections.

# 4.1. Classification Network
The classification network is constructed based on VGG 16-layer net [24], which is pre-trained on ImageNet [6]. We remove the last layer in the network and attach three fully-connected layers. The second last fully-connected layer of the network is the dynamic parameter layer whose weights are determined by the parameter prediction network, and the last fully-connected layer is the classification layer whose output dimensionality is equal to the number of possible answers. The probability for each answer is computed by applying a softmax function to the output vector of the final layer.
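For concreteness, a minimal sketch of this construction is given below (ours, not the released implementation): VGG-16 with its last layer removed and three fully-connected layers attached, with the middle one playing the role of the dynamic parameter layer. The hidden size is a placeholder, not a value from the paper.

```python
import torch.nn as nn
from torchvision.models import vgg16

class ClassificationNet(nn.Module):
    def __init__(self, num_answers: int, hidden: int = 1000):
        super().__init__()
        backbone = vgg16(weights='IMAGENET1K_V1')
        # drop VGG-16's final classification layer, keep the 4096-d features
        backbone.classifier = nn.Sequential(*list(backbone.classifier)[:-1])
        self.cnn = backbone
        self.fc1 = nn.Linear(4096, hidden)
        self.dynamic = nn.Linear(hidden, hidden)   # weights set per question
        self.fc_out = nn.Linear(hidden, num_answers)

    def forward(self, image):
        h = self.cnn(image)
        h = self.fc1(h).relu()
        h = self.dynamic(h).relu()   # W_d(q) is written into this layer
        return self.fc_out(h)        # softmax is applied in the loss
```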
We put the dynamic parameter layer in the second last fully-connected layer instead of the classification layer because it involves the smallest number of parameters. As the number of parameters in the classification layer increases in proportion to the number of possible answers, predicting the weights for the classification layer may not be a good option for general ImageQA problems in terms of scalability. Our choice for the dynamic parameter layer can be interpreted as follows. By fixing the classification layer while adapting the immediately preceding layer, we obtain the task-independent semantic embedding of all possible answers and use the representation of an input embedded in the answer space to solve an ImageQA problem. Therefore, the relationships of the answers globally learned from all recognition tasks can help solve new ones involving unseen classes, especially in multiple choice questions. For example, when not the exact ground-truth word (e.g., kitten) but similar words (e.g., cat and kitty) are shown at training time, the network can still predict the close answers (e.g., kitten) based on the globally learned answer embedding. Even though we could also exploit the benefit of answer embedding based on the relations among answers to define a loss function, we leave it as our future work.
# 4.2. Parameter Prediction Network
As mentioned earlier, our classification network has a dynamic parameter layer. That is, for an input vector of the dynamic parameter layer f^i = [f^i_1, ..., f^i_N]^T, its output vector denoted by f^o = [f^o_1, ..., f^o_M]^T is given by
f^o = W_d(q) f^i + b   (3)
where b denotes a bias and W_d(q) ∈ R^{M×N} denotes the matrix constructed dynamically using the parameter prediction network given the input question. In other words, the weight matrix corresponding to the layer is parametrized by a function of the input question q.
The parameter prediction network is composed of GRU cells [4] followed by a fully-connected layer, which produces the candidate weights to be used for the construction of the weight matrix in the dynamic parameter layer within the classification network. GRU, which is similar to LSTM, is designed to model dependency in multiple time scales. As illustrated in Figure 3, such dependency is captured by adaptively updating its hidden states with gate units. However, contrary to LSTM, which maintains a separate memory cell explicitly, GRU directly updates its hidden states with a reset gate and an update gate. The detailed procedure of the update is described below.
Let w_1, ..., w_T be the words in a question q, where T is the number of words in the question. In each time step t, given the embedded vector x_t for a word w_t, the GRU encoder updates its hidden state at time t, denoted by h_t, using the following equations:
r_t = σ(W_r x_t + U_r h_{t−1})   (4)
z_t = σ(W_z x_t + U_z h_{t−1})   (5)
h̄_t = tanh(W_h x_t + U_h(r_t ⊙ h_{t−1}))   (6)
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̄_t   (7)
Figure 3. Comparison of GRU and LSTM. Contrary to LSTM that contains a memory cell explicitly, GRU updates the hidden state directly.
where r_t and z_t respectively denote the reset and update gates at time t, and h̄_t is the candidate activation at time t. In addition, ⊙ indicates the element-wise multiplication operator and σ(·) is a sigmoid function. Note that the coefficient matrices related to GRU such as W_r, W_z, W_h, U_r, U_z, and U_h are learned by our training algorithm. By applying this encoder to a question sentence through a series of GRU cells, we obtain the final embedding vector h_T of the question sentence.
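A minimal numpy sketch of the update in Eqs. (4)-(7) is shown below; the matrix shapes and the random initialization are our own choices, only the update equations come from the text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wr, Ur, Wz, Uz, Wh, Uh):
    r = sigmoid(Wr @ x_t + Ur @ h_prev)            # reset gate, Eq. (4)
    z = sigmoid(Wz @ x_t + Uz @ h_prev)            # update gate, Eq. (5)
    h_bar = np.tanh(Wh @ x_t + Uh @ (r * h_prev))  # candidate, Eq. (6)
    return (1 - z) * h_prev + z * h_bar            # new state, Eq. (7)

# Encoding a question: fold gru_step over the word embeddings x_1..x_T;
# the last hidden state h_T is the question embedding.
D, H = 300, 512                                    # embedding/hidden sizes (ours)
params = [np.random.randn(H, D) * 0.01 if i % 2 == 0 else
          np.random.randn(H, H) * 0.01 for i in range(6)]
h = np.zeros(H)
for x_t in np.random.randn(7, D):                  # 7 stand-in word vectors
    h = gru_step(x_t, h, *params)
```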
Once the question embedding is obtained by GRU, the candidate weight vector, p = [p_1, . . . , p_K]^T, is given by applying a fully-connected layer to the embedded question h_T as
p = W_p h_T   (8)

where p ∈ R^K is the output of the parameter prediction network, and W_p is the weight matrix of the fully-connected layer in the parameter prediction network. Note that even though we employ GRU for a parameter prediction network since the pre-trained network for sentence embedding (the skip-thought vector model [14]) is based on GRU, any form of neural networks, e.g., fully-connected and convolutional neural network, can be used to construct the parameter prediction network.
# 4.3. Parameter Hashing
The weights in the dynamic parameter layer are determined based on the learned model in the parameter prediction network given a question. The most straightforward approach to obtain the weights is to generate the whole matrix W_d(q) using the parameter prediction network. However, the size of the matrix is very large, and the network may be overfitted easily given the limited number of training examples. In addition, since we need quadratically more parameters between GRU and the fully-connected layer in the parameter prediction network to increase the dimensionality of its output, it is not desirable to predict the full weight matrix using the network. Therefore, it is preferable to construct W_d(q) based on a small number of candidate weights using a hashing trick.
We employ the recently proposed random weight sharing technique based on hashing [3] to construct the weights in the dynamic parameter layer. Specifically, a single parameter in the candidate weight vector p is shared by multiple elements of W_d(q), which is done by applying a predefined hash function that converts the 2D location in W_d(q) to the 1D index in p. By this simple hashing trick, we can reduce the number of parameters in W_d(q) while maintaining the accuracy of the network [3].
Let w^d_{mn} be the element at (m, n) in W_d(q), which corresponds to the weight between the mth output and nth input neuron. Denote by ψ(m, n) a hash function mapping a key (m, n) to a natural number in {1, . . . , K}, where K is the dimensionality of p. The final hash function is given by
w^d_{mn} = p_{ψ(m,n)} · ξ(m, n)   (9)

where ξ(m, n) : N × N → {+1, −1} is another hash function independent of ψ(m, n). This function is useful to remove the bias of hashed inner product [3]. In our implementation of the hash function, we adopt an open-source implementation of xxHash (https://code.google.com/p/xxhash/).
We believe that it is reasonable to reduce the number of free parameters based on the hashing technique as there are many redundant parameters in deep neural networks [7] and the network can be parametrized using a smaller set of candidate weights. Instead of training a huge number of parameters without any constraint, it would be practically advantageous to allow multiple elements in the weight matrix to share the same value. It is also demonstrated that the number of free parameters can be reduced substantially with little loss of network performance [3].
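The whole pipeline of Eqs. (3), (8) and (9) fits in a few lines; the numpy sketch below is ours, and it uses Python's built-in hash purely for illustration (it is salted across interpreter runs, which is why a fixed hash such as xxHash is used in practice).

```python
import numpy as np

M, N, K = 8, 16, 32                     # dynamic layer out/in dims, |p|

def psi(m, n):                          # psi(m, n) -> {0, ..., K-1}
    return hash(('psi', m, n)) % K

def xi(m, n):                           # xi(m, n) -> {+1, -1}
    return 1 if hash(('xi', m, n)) % 2 == 0 else -1

def dynamic_weights(p):
    """Build W_d(q) from candidate weights p via Eq. (9)."""
    return np.array([[p[psi(m, n)] * xi(m, n) for n in range(N)]
                     for m in range(M)])

h_T = np.random.randn(64)               # question embedding from the GRU
W_p = np.random.randn(K, 64) * 0.01     # fully-connected layer, Eq. (8)
p = W_p @ h_T                           # candidate weight vector
f_i = np.random.randn(N)                # input of the dynamic layer
b = np.zeros(M)
f_o = dynamic_weights(p) @ f_i + b      # Eq. (3)
```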
# 5. Training Algorithm
This section discusses the error back-propagation algorithm in the proposed network and introduces the techniques adopted to enhance performance of the network.
# 5.1. Training by Error Back-Propagation
The proposed network is trained end-to-end to minimize the error between the ground-truths and the estimated answers. The error is back-propagated by chain rule through both the classification network and the parameter prediction network and they are jointly trained by a first-order optimization method.
Let L denote the loss function. The partial derivatives of L with respect to the kth element in the input and output of the dynamic parameter layer are given respectively by
δ^i_k ≡ ∂L/∂f^i_k   and   δ^o_k ≡ ∂L/∂f^o_k.   (10)
The two derivatives have the following relation:
δ^i_n = Σ_{m=1}^{M} w^d_{mn} δ^o_m.   (11)
Likewise, the derivative with respect to the assigned weights in the dynamic parameter layer is given by
∂L/∂w^d_{mn} = f^i_n δ^o_m.   (12)
As a single output value of the parameter prediction network is shared by multiple connections in the dynamic parameter layer, the derivatives with respect to all shared weights need to be accumulated to compute the derivative with respect to an element in the output of the parameter prediction network as follows:
∂L/∂p_k = Σ_{m=1}^{M} Σ_{n=1}^{N} (∂L/∂w^d_{mn}) (∂w^d_{mn}/∂p_k) = Σ_{m=1}^{M} Σ_{n=1}^{N} (∂L/∂w^d_{mn}) ξ(m, n) I[ψ(m, n) = k],   (13)
where I[·] denotes the indicator function. The gradients of all the preceding layers in the classification and parameter prediction networks are computed by the standard back-propagation algorithm.
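In an automatic-differentiation framework this accumulation happens for free, but Eq. (13) can also be written out directly; the numpy sketch below is ours, with psi and xi standing in for the hash functions of Eq. (9).

```python
import numpy as np

M, N, K = 4, 6, 8                      # output dim, input dim, candidate size
psi = np.random.randint(0, K, (M, N))  # psi(m, n) in {0, ..., K-1}
xi = np.random.choice([-1, 1], (M, N)) # xi(m, n) in {+1, -1}

dL_dW = np.random.randn(M, N)          # dL/dw^d_{mn} from Eq. (12)
dL_dp = np.zeros(K)
np.add.at(dL_dp, psi, dL_dW * xi)      # accumulate over shared indices, Eq. (13)
```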
# 5.2. Using Pre-trained GRU
Although encoders based on recurrent neural networks (RNNs) such as LSTM [11] and GRU [4] demonstrate impressive performance on sentence embedding [19, 25], their benefits in the ImageQA task are marginal in comparison to the bag-of-words model [23]. One of the reasons for this fact is the lack of language data in ImageQA datasets. Contrary to the tasks that have large-scale training corpora, even the largest ImageQA dataset contains a relatively small amount of language data; for example, [1] contains 750K questions in total. Note that the model in [25] is trained using a corpus with more than 12M sentences.
To deal with the deficiency of linguistic information in the ImageQA problem, we transfer the information acquired from a large language corpus by fine-tuning the pre-trained embedding network. We initialize the GRU with the skip-thought vector model trained on a book-collection corpus containing more than 74M sentences [14]. Note that the GRU of the skip-thought vector model is trained in an unsupervised manner by predicting the surrounding sentences from the embedded sentences. As this task requires to understand context, the pre-trained model produces a generic sentence embedding, which is difficult to be trained with a limited number of training examples. By fine-tuning our GRU initialized with a generic sentence embedding model for ImageQA, we obtain the representations for questions that are generalized better.
# 5.3. Fine-tuning CNN
It is very common to transfer CNNs to new tasks in classification problems, but it is not trivial to fine-tune the CNN in our problem. We observe that the gradients below the dynamic parameter layer in the CNN are noisy since the weights are predicted by the parameter prediction network. Hence, a straightforward approach to fine-tune the CNN typically fails to improve performance, and we employ a slightly different technique for CNN fine-tuning to sidestep the observed problem. We update the parameters of the network using new datasets except the part transferred from VGG 16-layer net at the beginning, and start to update the weights in the subnetwork if the validation accuracy is saturated.
# 5.4. Training Details
Before training, question sentences are normalized to lowercase and preprocessed by a simple tokenization technique as in [29]. We normalize the answers to lowercase and regard a whole answer in a single or multiple words as a separate class.
The network is trained end-to-end by back-propagation. Adam [13] is used for optimization with initial learning rate 0.01. We clip the gradient to 0.1 to handle the gradient explosion from the recurrent structure of GRU [22]. Training is terminated when there is no progress on validation accuracy for 5 epochs.
Optimizing the dynamic parameter layer is not straightforward since the distribution of the outputs in the dynamic parameter layer is likely to change significantly in each batch. Therefore, we apply batch normalization [12] to the output activations of the layer to alleviate this problem. In addition, we observe that GRU tends to converge fast and overfit data easily if training continues without any restriction. We stop fine-tuning GRU when the network starts to overfit and continue to train the other parts of the network; this strategy improves performance in practice.
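These optimization settings translate to a short training-step skeleton; the PyTorch sketch below is our approximation (in particular, "clip the gradient to 0.1" is rendered as norm clipping, which is one common reading, and the stand-in model only mimics the batch-normalized dynamic layer output).

```python
import torch
import torch.nn as nn

# Stand-in model: a linear layer followed by the batch normalization that the
# text applies to the dynamic parameter layer's output activations.
model = nn.Sequential(nn.Linear(512, 512), nn.BatchNorm1d(512))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # initial lr 0.01

def train_step(x, target, loss_fn):
    optimizer.zero_grad()
    loss = loss_fn(model(x), target)
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), 0.1)  # clip gradient to 0.1
    optimizer.step()
    return loss.item()
```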
# 6. Experiments
We now describe the details of our implementation and evaluate the proposed method in various aspects.
# 6.1. Datasets
We evaluate the proposed network on all public ImageQA benchmark datasets, namely DAQUAR [17], COCO-QA [23] and VQA [1]. They collected question-answer pairs from existing image datasets, and most of the answers are single words or short phrases.
DAQUAR is based on the NYUDv2 [20] dataset, which was originally designed for indoor segmentation using RGBD images. DAQUAR provides two benchmarks, which are distinguished by the number of classes and the amount of data; DAQUAR-all consists of 6,795 and 5,673 questions for training and testing respectively, and includes 894 answer categories. DAQUAR-reduced includes only 37 answer categories for 3,876 training and 297 testing questions.
Some questions in this dataset are associated with a set of multiple answers instead of a single one.
The questions in COCO-QA are automatically generated from the image descriptions in the MS COCO dataset [15] using a constituency parser with simple question-answer generation rules. The questions in this dataset are typically long and are explicitly classified into 4 types depending on the generation rules: object questions, number questions, color questions and location questions. All answers are single words, and there are 78,736 questions for training and 38,948 questions for testing.
Similar to COCO-QA, VQA is also constructed on MS COCO [15], but each question is associated with multiple answers annotated by different people. This dataset contains the largest number of questions: 248,349 for training, 121,512 for validation, and 244,302 for testing, where the testing data is split into test-dev, test-standard, test-challenge and test-reserve as in [15]. Each question is provided with 10 answers to take the consensus of annotators into account. About 90% of answers are single words and 98% of answers do not exceed three words.
# 6.2. Evaluation Metrics
DAQUAR and COCO-QA employ both classification accuracy and its relaxed version based on word similarity, WUPS [17]. It uses a thresholded Wu-Palmer similarity [28] based on the WordNet [9] taxonomy to compute the similarity between words. For the predicted answer set A^i and ground-truth answer set T^i of the i-th example, WUPS is given by
$$\text{WUPS} = \frac{1}{N}\sum_{i=1}^{N} \min\left\{\, \prod_{a \in A^i} \max_{t \in T^i} \mu(a, t),\;\; \prod_{t \in T^i} \max_{a \in A^i} \mu(a, t) \right\} \tag{14}$$
where µ(·, ·) denotes the thresholded Wu-Palmer similarity between prediction and ground truth. We use two threshold values (0.9 and 0.0) in our evaluation; a code transcription follows.
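For concreteness, Eq. (14) can be transcribed directly into code. In the sketch below, `wup(a, t)` is an assumed Wu-Palmer similarity callable (e.g., WordNet-based via NLTK), and scaling sub-threshold scores by 0.1 follows the WUPS definition of [17]; this is an illustration, not the official evaluation script.

```python
# WUPS (Eq. 14): each predicted answer must match some ground-truth answer
# and vice versa; products are taken over answer sets, min over directions.
def thresholded_mu(a, t, tau, wup):
    s = wup(a, t)                 # raw Wu-Palmer similarity in [0, 1]
    return s if s >= tau else 0.1 * s

def wups(pred_sets, gt_sets, tau, wup):
    total = 0.0
    for A, T in zip(pred_sets, gt_sets):    # answer sets of the i-th example
        prod_a = 1.0
        for a in A:
            prod_a *= max(thresholded_mu(a, t, tau, wup) for t in T)
        prod_t = 1.0
        for t in T:
            prod_t *= max(thresholded_mu(a, t, tau, wup) for a in A)
        total += min(prod_a, prod_t)
    return total / len(pred_sets)

# demo with a trivial similarity: 1.0 for exact match, 0.0 otherwise
exact = lambda a, t: 1.0 if a == t else 0.0
print(wups([{"cat"}], [{"cat"}], 0.9, exact))   # 1.0
```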
The VQA dataset provides an open-ended task and a multiple-choice task for evaluation. For the open-ended task, the answer can be any word or phrase, while in the multiple-choice task an answer must be chosen out of 18 candidate answers. In both cases, answers are evaluated by an accuracy reflecting human consensus. For the predicted answer a_i and target answer set T^i of the i-th example, the accuracy is given by
$$\text{Acc}_{\text{VQA}} = \frac{1}{N}\sum_{i=1}^{N} \min\left\{ \frac{\sum_{t \in T^i} \mathbb{I}[a_i = t]}{3},\; 1 \right\} \tag{15}$$
where I[·] denotes an indicator function. In other words, a predicted answer is regarded as correct if at least three annotators agree, and the score depends on the number of agreements if the predicted answer is not correct. A code transcription follows.
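Eq. (15) similarly admits a direct transcription; the sketch below assumes one predicted answer string and the list of annotator answers per question.

```python
# VQA accuracy (Eq. 15): full credit once at least three annotators agree.
def vqa_accuracy(predictions, annotator_answers):
    total = 0.0
    for pred, answers in zip(predictions, annotator_answers):
        matches = sum(1 for t in answers if t == pred)
        total += min(matches / 3.0, 1.0)
    return total / len(predictions)

# Example: 4 of 10 annotators said "blue", so "blue" receives full credit.
print(vqa_accuracy(["blue"], [["blue"] * 4 + ["green"] * 6]))  # 1.0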
Table 1. Evaluation results on VQA test-dev in terms of Acc_VQA. The left four result columns are for the Open-Ended task and the right four for the Multiple-Choice task.
| Model        | All   | Y/N   | Num   | Others | All   | Y/N   | Num   | Others |
|--------------|-------|-------|-------|--------|-------|-------|-------|--------|
| Question [1] | 48.09 | 75.66 | 36.70 | 27.14  | 53.68 | 75.71 | 37.05 | 38.64  |
| Image [1]    | 28.13 | 64.01 | 00.42 | 03.77  | 30.53 | 69.87 | 00.45 | 03.76  |
| Q+I [1]      | 52.64 | 75.55 | 33.67 | 37.37  | 58.97 | 75.59 | 34.35 | 50.33  |
| LSTM Q [1]   | 48.76 | 78.20 | 35.68 | 26.59  | 54.75 | 78.22 | 36.82 | 38.78  |
| LSTM Q+I [1] | 53.74 | 78.94 | 35.24 | 36.42  | 57.17 | 78.95 | 35.80 | 43.41  |
| CONCAT       | 54.70 | 77.09 | 36.62 | 39.67  | 59.92 | 77.10 | 37.48 | 50.31  |
| RAND-GRU     | 55.46 | 79.58 | 36.20 | 39.23  | 61.18 | 79.64 | 38.07 | 50.63  |
| CNN-FIXED    | 56.74 | 80.48 | 37.20 | 40.90  | 61.95 | 80.56 | 38.32 | 51.40  |
| DPPnet       | 57.22 | 80.71 | 37.24 | 41.69  | 62.48 | 80.79 | 38.94 | 52.16  |
Table 2. Evaluation results on VQA test-standard
Open-Ended (left four result columns) / Multiple-Choice (right four result columns):

| Model     | All   | Y/N   | Num   | Others | All   | Y/N   | Num   | Others |
|-----------|-------|-------|-------|--------|-------|-------|-------|--------|
| Human [1] | 83.30 | 95.77 | 83.39 | 72.67  | -     | -     | -     | -      |
| DPPnet    | 57.36 | 80.28 | 36.92 | 42.24  | 62.69 | 80.35 | 38.79 | 52.79  |
# 6.3. Results
We test on three independent datasets, VQA, COCO-QA, and DAQUAR, and first present the results for the VQA dataset in Table 1. The proposed Dynamic Parameter Prediction network (DPPnet) outperforms all existing methods non-trivially. We performed controlled experiments to analyze the contribution of the individual components of the proposed algorithm (dynamic parameter prediction, use of a pre-trained GRU, and CNN fine-tuning) and trained 3 additional models: CONCAT, RAND-GRU, and CNN-FIXED. CNN-FIXED is useful to see the impact of CNN fine-tuning since it is identical to DPPnet except that the weights in the CNN are fixed. RAND-GRU is the model without GRU pre-training, where the weights of the GRU and word embedding model are initialized randomly; it does not fine-tune the CNN either. CONCAT is the most basic model, which predicts answers using two fully-connected layers on a combination of CNN and GRU features. Obviously, it does not employ any of the new components such as parameter prediction, the pre-trained GRU or CNN fine-tuning.
The results of the controlled experiment are also illustrated in Table 1. CONCAT already outperforms LSTM Q+I by integrating a GRU instead of an LSTM [4] and batch normalization. RAND-GRU achieves better accuracy by additionally employing dynamic parameter prediction. It is interesting that most of the improvement comes from yes/no questions, which may involve various kinds of tasks since it is easy to ask about many different aspects of an input image for binary classification. CNN-FIXED improves accuracy further by adding GRU pre-training, and our final model DPPnet achieves the state-of-the-art performance on the VQA dataset with large margins, as illustrated in Tables 1 and 2.
Tables 3, 4, and 5 list the results of all algorithms, including ours, that have reported performance on the COCO-QA, DAQUAR-reduced and DAQUAR-all datasets. The proposed algorithm outperforms all existing approaches consistently on all benchmarks.
Table 3. Evaluation results on COCO-QA
| Model           | Acc   | WUPS 0.9 | WUPS 0.0 |
|-----------------|-------|----------|----------|
| IMG+BOW [23]    | 55.92 | 66.78    | 88.99    |
| 2VIS+BLSTM [23] | 55.09 | 65.34    | 88.64    |
| Ensemble [23]   | 57.84 | 67.90    | 89.52    |
| ConvQA [16]     | 54.95 | 65.36    | 88.58    |
| DPPnet          | 61.19 | 70.84    | 90.61    |
Table 4. Evaluation results on DAQUAR-reduced
Single answer (left three result columns) / Multiple answers (right three result columns); 0.9 and 0.0 are WUPS thresholds:

| Model           | Acc   | 0.9   | 0.0   | Acc   | 0.9   | 0.0   |
|-----------------|-------|-------|-------|-------|-------|-------|
| Multiworld [17] | -     | -     | -     | 12.73 | 18.10 | 51.47 |
| Askneuron [18]  | 34.68 | 40.76 | 79.54 | 29.27 | 36.50 | 79.47 |
| IMG+BOW [23]    | 34.17 | 44.99 | 81.48 | -     | -     | -     |
| 2VIS+BLSTM [23] | 35.78 | 46.83 | 82.15 | -     | -     | -     |
| Ensemble [23]   | 36.94 | 48.15 | 82.68 | -     | -     | -     |
| ConvQA [16]     | 39.66 | 44.86 | 83.06 | 38.72 | 44.19 | 79.52 |
| DPPnet          | 44.48 | 49.56 | 83.95 | 44.44 | 49.06 | 82.57 |
Table 5. Evaluation results on DAQUAR-all
Single answer (left three result columns) / Multiple answers (right three result columns); 0.9 and 0.0 are WUPS thresholds:

| Model           | Acc   | 0.9   | 0.0   | Acc   | 0.9   | 0.0   |
|-----------------|-------|-------|-------|-------|-------|-------|
| Human [17]      | -     | -     | -     | 50.20 | 50.82 | 67.27 |
| Multiworld [17] | -     | -     | -     | 07.86 | 11.86 | 38.79 |
| Askneuron [18]  | 19.43 | 25.28 | 62.00 | 17.49 | 23.28 | 57.76 |
| ConvQA [16]     | 23.40 | 29.59 | 62.95 | 20.69 | 25.89 | 55.48 |
| DPPnet          | 28.98 | 34.80 | 67.81 | 25.60 | 31.03 | 60.77 |
In Tables 4 and 5, single answer and multiple answers denote the two subsets of questions divided by the number of ground-truth answers, and the numbers 0.9 and 0.0 are WUPS thresholds.
To understand how the parameter prediction network understands questions, we present, in Table 6, several representative questions retrieved before and after fine-tuning the GRU, in descending order of their cosine similarities to the query question. The retrieved sentences are frequently determined by common subjective or objective words before fine-tuning, while they rely more on the tasks to be solved after fine-tuning.
Qualitative results of the proposed algorithm are presented in Figure 4. In general, the proposed network successfully handles various types of questions that need different levels of semantic understanding. Figure 4(a) shows that the network is able to adapt its recognition task depending on the question. However, it often fails on questions asking the number of occurrences, since these questions involve tasks (e.g., object detection) that are difficult to learn only with image-level annotations. On the other hand, the proposed network is effective at finding the answers for the same question on different images fairly well, as illustrated in Figure 4(b). Refer to our project website² for more comprehensive qualitative results.
² http://cvlab.postech.ac.kr/research/dppnet/
Table 6. Retrieved sentences before and after fine-tuning the GRU
Query question: What body part has most recently contacted the ball?
- Before fine-tuning: What shape is the ball? / What colors are the ball? / What team has the ball? / How many times has the girl hit the ball? / What number is on the women's Jersey closest to the ball? / What is unusual about the ball? / What is the speed of the ball?
- After fine-tuning: What body part is the boy holding the bear by? / What body part is on the right side of this picture? / What human body part is on the table? / What body parts appear to be touching? / What partial body parts are in the foreground? / What part of the body does the woman on the left have on the ramp? / Name a body part that would not be visible if the woman's mouth was closed?

Query question: Is the person feeding the birds?
- Before fine-tuning: Is he feeding the birds? / Is the reptile fighting the birds? / Does the elephant want to play with the birds? / What is the fence made of behind the birds? / Where are the majority of the birds? / What colors are the birds? / Is this man feeding the pigeons?
- After fine-tuning: Is he feeding the birds? / Is the person feeding the sheep? / Is the man feeding the pigeons? / Is she feeding the pigeons? / Is that the zookeeper feeding the giraffes? / Is the reptile fighting the birds? / Does the elephant want to play with the birds?
(a) Results of the proposed algorithm on multiple questions for a single image. Examples: Q: How does the woman feel? DPPnet: happy. Q: What type of hat is she wearing? DPPnet: cowboy. Q: Is it raining? DPPnet: no. Q: What is he holding? DPPnet: umbrella. Q: What is he doing? DPPnet: skateboarding. Q: Is this person dancing? DPPnet: no. Q: How many cranes are in the image? DPPnet: 2 (3). Q: How many people are on the bench? DPPnet: 2 (1).

(b) Results of the proposed algorithm on a single common question for multiple images. Examples: Q: What is the boy holding? DPPnet: surfboard. Q: What animal is shown? DPPnet: giraffe. Q: What is this room? DPPnet: living room / kitchen. Q: What is the animal doing? DPPnet: resting (relaxing) / swimming (fishing).

Figure 4. Sample images and questions in the VQA dataset [1]. Each question requires a different type and/or level of understanding of the corresponding input image to find the correct answer. Answers in blue are correct while answers in red are incorrect. For the incorrect answers, ground-truth answers are provided within the parentheses.
# 7. Conclusion
We proposed a novel architecture for image question answering based on two subnetworks: a classification network and a parameter prediction network. The classification network has a dynamic parameter layer, which enables it to adaptively determine its weights through the parameter prediction network. While predicting all entries of the weight matrix is infeasible due to its large dimensionality, we relieved this limitation using parameter hashing and weight sharing. The effectiveness of the proposed architecture is supported by experimental results showing state-of-the-art performance on three different datasets. Note that the proposed method achieved outstanding performance even without more complex recognition processes such as referencing objects. We believe that the proposed algorithm can be extended further by integrating an attention model [29] to solve such difficult problems.
# References
[1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: visual question answering. In ICCV, 2015. 1, 2, 5, 6, 7, 8

[2] J. Ba, K. Swersky, S. Fidler, and R. Salakhutdinov. Predicting deep zero-shot convolutional neural networks using textual descriptions. In ICCV, 2015. 2

[3] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In ICML, 2015. 2, 4, 5

[4] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Deep Learning Workshop, 2014. 4, 5, 7

[5] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In CVPR, 2014. 1

[6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 3

[7] M. Denil, B. Shakibi, L. Dinh, N. de Freitas, et al. Predicting parameters in deep learning. In NIPS, 2013. 5

[8] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: a deep convolutional activation feature for generic visual recognition. In ICML, 2014. 1
[9] C. Fellbaum. Wordnet: An electronic database, 1998. 6

[10] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are you talking to a machine? dataset and methods for multilingual image question answering. In NIPS, 2015. 1, 2

[11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. 5
[12] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. 6

[13] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 6

[14] R. Kiros, Y. Zhu, R. Salakhutdinov, R. S. Zemel, A. Torralba, R. Urtasun, and S. Fidler. Skip-thought vectors. In NIPS, 2015. 2, 4, 5

[15] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: common objects in context. In ECCV, 2014. 6

[16] L. Ma, Z. Lu, and H. Li. Learning to answer questions from image using convolutional neural network. arXiv preprint arXiv:1506.00333, 2015. 1, 2, 3, 7

[17] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. In NIPS, 2014. 1, 2, 6, 7

[18] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: A neural-based approach to answering questions about images. In ICCV, 2015. 1, 2, 7

[19] T. Mikolov, M. Karafiát, L. Burget, J. Cernocký, and S. Khudanpur. Recurrent neural network based language model. In INTERSPEECH, pages 1045–1048, 2010. 5

[20] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In ECCV, 2012. 6

[21] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In CVPR, 2014. 1

[22] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013. 6

[23] M. Ren, R. Kiros, and R. S. Zemel. Exploring models and data for image question answering. In NIPS, 2015. 1, 2, 3, 5, 6, 7
[24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 1, 3

[25] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014. 5

[26] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015. 1

[27] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In CVPR, 2014. 1

[28] Z. Wu and M. Palmer. Verbs semantics and lexical selection. In ACL, pages 133–138, 1994. 6

[29] K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015. 6, 9

[30] J. Yao, S. Fidler, and R. Urtasun. Describing the scene as a whole: Joint object detection, scene classification and semantic segmentation. In CVPR, 2012. 1

[31] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014. 1 | {
"id": "1506.00333"
} |
1511.05234 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 |
# Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering
Huijuan Xu and Kate Saenko
Department of Computer Science, UMass Lowell, USA hxu1@cs.uml.edu, saenko@cs.uml.edu
Abstract. We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses the question to choose relevant regions for computing the answer, a process which constitutes a single "hop" in the network. We propose a novel spatial attention architecture that aligns words with image patches in the first hop, and obtain improved results by adding a second attention hop which considers the whole question to choose visual evidence based on the results of the first hop. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR [1] and VQA [2], and obtain improved results compared to a strong deep baseline model (iBOWIMG) which concatenates image and question features to predict the answer [3].
Keywords: Visual Question Answering, Spatial Attention, Memory Network, Deep Learning
# 1 Introduction
Visual Question Answering (VQA) is an emerging interdisciplinary research problem at the intersection of computer vision, natural language processing and artificial intelligence. It has many real-life applications, such as automatic querying of surveillance video [4] or assisting the visually impaired [5]. Compared to the recently popular image captioning task [6,7,8,9], VQA requires a deeper understanding of the image, but is considerably easier to evaluate. It also puts more focus on artificial intelligence, namely the inference process needed to produce the answer to the visual question.
Figure 1 panels: Q: What is the child standing on? A: skateboard. Q: What color is the phone booth? A: blue.

Fig. 1. We propose a Spatial Memory Network for VQA (SMem-VQA) that answers questions about images using spatial inference. The figure shows the inference process of our two-hop model on examples from the VQA dataset [2]. In the first hop (middle), the attention process captures the correspondence between individual words in the question and image regions. High attention regions (bright areas) are marked with bounding boxes and the corresponding words are highlighted using the same color. In the second hop (right), the fine-grained evidence gathered in the first hop, as well as an embedding of the entire question, are used to collect more exact evidence to predict the answer. (Best viewed in color.)
In one of the early works [1], VQA is seen as a Turing test proxy. The authors propose an approach based on handcrafted features using a semantic parse of the question and scene analysis of the image combined in a latent-world Bayesian framework. More recently, several end-to-end deep neural networks that learn features directly from data have been applied to this problem [10,11]. Most of these are directly adapted from captioning models [6,7,8], and utilize a recurrent LSTM network, which takes the question and Convolutional Neural Net (CNN) image features as input, and outputs the answer. Though the deep learning methods in [10,11] have shown great improvement compared to the handcrafted feature method [1], they have their own drawbacks. Models based on an LSTM reading in both the question and the image features do not show a clear improvement compared to an LSTM reading in the question only [10,11]. Furthermore, the rather complicated LSTM models obtain similar or worse accuracy than a baseline model which concatenates CNN features and a bag-of-words question embedding to predict the answer; see the IMG+BOW model in [11] and the iBOWIMG model in [3].
A major drawback of existing models is that they do not have any explicit notion of object position, and do not support the computation of intermediate results based on spatial attention. Our intuition is that answering visual questions often involves looking at different spatial regions and comparing their contents and/or locations. For example, to answer the questions in Fig. 1, we need to look at a portion of the image, such as the child or the phone booth. Similarly, to answer the question "Is there a cat in the basket?" in Fig. 2, we can first find the basket and the cat objects, and then compare their locations.
We propose a new deep learning approach to VQA that incorporates explicit spatial attention, which we call the Spatial Memory Network VQA (SMem-VQA). Our approach is based on memory networks, which have recently been proposed for text Question Answering (QA) [12,13]. Memory networks combine learned text embeddings with an attention mechanism and multi-step inference. The text QA memory network stores textual knowledge in its "memory" in the form of sentences, and selects relevant sentences to infer the answer. However, in VQA, the knowledge is in the form of an image, so the memory and the question come from different modalities. We adapt the end-to-end memory network [13] to solve visual question answering by storing the convolutional network outputs obtained from different receptive fields into the memory, which explicitly allows spatial attention over the image. We also propose to repeat the process of gathering evidence from attended regions, enabling the model to update the answer based on several attention steps, or "hops". The entire model is trained end-to-end and the evidence for the computed answer can be visualized using the attention weights.
To summarize our contributions, in this paper we
- propose a novel multi-hop memory network with spatial attention for the VQA task which allows one to visualize the spatial inference process used by the deep network (a CAFFE [14] implementation will be made available),
- design an attention architecture in the first hop which uses each word embedding to capture fine-grained alignment between the image and question,
- create a series of synthetic questions that explicitly require spatial inference to analyze the working principles of the network, and show that it learns logical inference rules by visualizing the attention weights,
- provide an extensive evaluation of several existing models and our own model on the same publicly available datasets.
Sec. 2 introduces relevant work on memory networks and attention models. Sec. 3 describes our design of the multi-hop memory network architecture for visual question answering (SMem-VQA). Sec. 4 visualizes the inference rules learned by the network for synthetic spatial questions and shows the experimental results on the DAQUAR [1] and VQA [2] datasets. Sec. 5 concludes the paper.
# 2 Related work
Before the popularity of visual question answering (VQA), text question answering (QA) had already been established as a mature research problem in the area of natural language processing. Previous QA methods include searching for the key words of the question in a search engine [15]; parsing the question as a knowledge base (KB) query [16]; or embedding the question and using a similarity measurement to find evidence for the answer [17]. Recently, memory networks were proposed for solving the QA problem. [12] first introduces the memory network as a general model that consists of a memory and four components: input feature map, generalization, output feature map and response. The model is investigated in the context of question answering, where the long-term memory acts as a dynamic knowledge base and the output is a textual response.
[13] proposes a competitive memory network model that uses less supervision, called the end-to-end memory network, which has a recurrent attention model over a large external memory. The Neural Turing Machine (NTM) [18] couples a neural network to external memory and interacts with it by attentional processes to infer simple algorithms such as copying, sorting, and associative recall from input and output examples. In this paper, we solve the VQA problem using a multi-modal memory network architecture that applies a spatial attention mechanism over an input image guided by an input text question.
The neural attention mechanism has been widely used in different areas of computer vision and natural language processing; see for example the attention models in image captioning [19], video description generation [20], machine translation [21][22] and machine reading systems [23]. Most methods use the soft attention mechanism first proposed in [21], which adds a layer to the network that predicts soft weights and uses them to compute a weighted combination of the items in memory. The two main types of soft attention mechanisms differ in the function that aligns the input feature vector and the candidate feature vectors in order to compute the soft attention weights. The first type uses an alignment function based on "concatenation" of the input and each candidate (we use the term "concatenation" as described in [22]), and the second type uses an alignment function based on the dot product of the input and each candidate. The "concatenation" alignment function adds one input vector (e.g. the hidden state vector of the LSTM) to each candidate feature vector, embeds the resulting vectors into scalar values, and then applies the softmax function to generate the attention weight for each candidate. [19][20][21][23] use the "concatenation" alignment function in their soft attention models, and [24] gives a literature review of such models applied to different tasks. On the other hand, the dot product alignment function first projects both inputs to a common vector embedding space, then takes the dot product of the two input vectors, and applies a softmax function to the resulting scalar value to produce the attention weight for each candidate. The end-to-end memory network [13] uses the dot product alignment function. In [22], the authors compare these two alignment functions in an attention model for the neural machine translation task, and find that their implementation of the "concatenation" alignment function does not yield good performance on their task. Motivated by this, in this paper we use the dot product alignment function in our Spatial Memory Network.
VQA is related to image captioning. Several early papers about VQA directly adapt image captioning models to solve the VQA problem [10][11] by generating the answer using a recurrent LSTM network conditioned on the CNN output, but these models' performance is still limited [10][11]. [25] proposes a new dataset and uses an attention model similar to that in image captioning [19], but does not give results on the more common VQA benchmark [2], and our own implementation of this model is less accurate on [2] than other baseline models. [3] summarizes several recent papers reporting results on the VQA dataset [2] on arxiv.org and gives a simple but strong baseline model (iBOWIMG) on this dataset.
This simple baseline concatenates the image features with the bag-of-words question embedding and feeds them into a softmax classifier to predict the answer. The iBOWIMG model beats most VQA models considered in that paper. Here, we compare our proposed model to the VQA models (namely, the ACK model [26] and the DPPnet model [27]) which have comparable or better results than the iBOWIMG model. The ACK model in [26] is essentially the same as the LSTM model in [11], except that it uses image attribute features, the generated image caption and relevant external knowledge from a knowledge base as the input to the LSTM's first time step. The DPPnet model in [27] tackles VQA by learning a convolutional neural network (CNN) with some parameters predicted from a separate parameter prediction network. Their parameter prediction network uses a Gated Recurrent Unit (GRU) to generate a question representation, and based on this question input, maps the predicted weights to the CNN via hashing. Neither of these models [26][27] contains a spatial attention mechanism, and both use external data in addition to the VQA dataset [2], e.g. the knowledge base in [26] and the large-scale text corpus used to pre-train the GRU question representation in [27]. In this paper, we explore a complementary approach of spatial attention to both improve performance and visualize the network's inference process, and obtain improved results without using external data, compared to the iBOWIMG model [3] as well as the ACK model [26] and the DPPnet model [27] which use external data.
# 3 Spatial Memory Network for VQA
We first give an overview of the proposed SMem-VQA network, illustrated in Fig. 2 (a). Sec. 3.1 details the word-guided spatial attention process of the first hop shown in Fig. 2 (b), and Sec. 3.2 describes adding a second hop into the SMem-VQA network.
The input to our network is a question comprised of a variable-length sequence of words, and an image of fixed size. Each word in the question is first represented as a one-hot vector of the size of the vocabulary, with a value of one only in the corresponding word position and zeros in the other positions. Each one-hot vector is then embedded into a real-valued word vector, V = {v_j | v_j ∈ R^N; j = 1, ..., T}, where T is the maximum number of words in the question and N is the dimensionality of the embedding space. Sentences with length less than T are padded with a special −1 value, which is embedded to the all-zero word vector, as sketched below.
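As an illustration of this padding scheme, the sketch below uses PyTorch's padding_idx mechanism: index 0 stands in for the paper's special −1 marker, so padded positions embed to all-zero vectors. The vocabulary size and question indices are illustrative only.

```python
# Padding-aware word embedding: padded positions map to all-zero vectors.
import torch
import torch.nn as nn

T, N, vocab_size = 20, 512, 7477
# reserve index 0 as the pad token (stands in for the paper's -1 marker)
embed = nn.Embedding(vocab_size + 1, N, padding_idx=0)

tokens = torch.tensor([[5, 42, 9] + [0] * (T - 3)])  # one padded question
V = embed(tokens)          # (1, T, N); rows at pad positions are all zeros
```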
The words in questions are used to compute attention over the visual memory, which contains extracted image features. The input image is processed by a convolutional neural network (CNN) to extract high-level M-dimensional visual features on a grid of spatial locations. Specifically, we use S = {s_i | s_i ∈ R^M; i = 1, ..., L} to represent the spatial CNN features at each of the L grid locations. In this paper, the spatial feature outputs of the last convolutional layer of GoogLeNet (inception_5b/output) [28] are used as the visual features for the image.
Fig. 2. Our proposed Spatial Memory Network for Visual Question Answering (SMem-VQA). (a) Overview. First, the CNN activation vectors S = {s_i} at image locations i are projected into the semantic space of the question word vectors v_j using the "attention" visual embedding W_A (Sec. 3). The results are then used to infer spatial attention weights W_att using the word-guided attention process shown in (b). (b) Word-guided attention. This process predicts attention determined by the question word that has the maximum correlation with the embedded visual features at each location, e.g. choosing the word basket to attend to the location of the basket in the above image (Sec. 3.1). The resulting spatial attention weights W_att are then used to compute a weighted sum over the visual features embedded via a separate "evidence" transformation W_E, e.g., selecting evidence for the cat concept at the basket location. Finally, the weighted evidence vector S_att is combined with the full question embedding Q to predict the answer. An additional hop can repeat the process to gather more evidence (Sec. 3.2).
The convolutional image feature vectors at each location are embedded into a common semantic space with the word vectors. Two different embeddings are used: the "attention" embedding W_A and the "evidence" embedding W_E. The attention embedding projects each visual feature vector such that its combination with the embedded question words generates the attention weight at that location. The evidence embedding detects the presence of semantic concepts or objects, and the embedding results are multiplied with attention weights and summed over all locations to generate the visual evidence vector S_att.
Finally, the visual evidence vector is combined with the question representation and used to predict the answer for the given image and question. In the next section, we describe the one-hop Spatial Memory network model and the specific attention mechanism it uses in more detail.
# 3.1 Word Guided Spatial Attention in One-Hop Model
Rather than using the bag-of-words question representation to guide attention, the attention architecture in the first hop (Fig. 2(b)) uses each word vector separately to extract correlated visual features in memory. The intuition is that the BOW representation may be too coarse, and letting each word select a related region may provide more fine-grained attention. The correlation matrix C ∈ R^{T×L} between the word vectors V and the visual features S is computed as
C = V · (S · W_A + b_A)^T      (1)

where W_A ∈ R^{M×N} contains the attention embedding weights of the visual features S, and b_A ∈ R^{L×N} is the bias term. This correlation matrix is the dot product of each word embedding with each spatial location's visual feature, so each value in C measures the similarity between each word and each location's visual feature.

The spatial attention weights W_att are calculated by taking the maximum over the word dimension T of the correlation matrix C, selecting the highest correlation value for each spatial location, and then applying the softmax function:

W_att = softmax(max_{i=1,...,T}(C_i)),  C_i ∈ R^L      (2)

The resulting attention weights W_att ∈ R^L are high for selected locations and low for other locations, with the sum of weights equal to 1. For instance, in the example shown in Fig. 2, the question "Is there a cat in the basket?" produces high attention weights at the location of the basket because of the high correlation of the word vector for basket with the visual features at that location.

The evidence embedding W_E projects the visual features S to produce high activations for certain semantic concepts. E.g., in Fig. 2, it has high activations in the region containing the cat. The results of this evidence embedding are then multiplied by the generated attention weights W_att and summed to produce the selected visual "evidence" vector S_att ∈ R^N:

S_att = W_att · (S · W_E + b_E)      (3)

where W_E ∈ R^{M×N} are the evidence embedding weights of the visual features S, and b_E ∈ R^{L×N} is the bias term. In our running example, this step accumulates cat-presence features at the basket location.

Finally, the sum of this evidence vector S_att and the question embedding Q is used to predict the answer for the given image and question. For the question representation Q, we choose bag-of-words (BOW). Other question representations, such as an LSTM, can also be used; however, BOW has fewer parameters yet has shown good performance. As noted in [29], the simple BOW model performs roughly as well as, if not better than, the sequence-based LSTM for the VQA task. Specifically, we compute

Q = W_Q · V + b_Q      (4)

where W_Q ∈ R^T represents the BOW weights for the word vectors V, and b_Q ∈ R^N is the bias term. The final prediction P is

P = softmax(W_P · f(S_att + Q) + b_P)      (5)

where W_P ∈ R^{K×N}, the bias term b_P ∈ R^K, and K represents the number of possible answers. f is the activation function; we use ReLU here. In our running example, this step adds the evidence gathered for cat near the basket location to the question and, since the cat was not found, predicts the answer "no". The attention and evidence computation steps can optionally be repeated in another hop before predicting the final answer; a code sketch of this one-hop pipeline follows, and the next section details the second hop.
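To make the one-hop computation concrete, the following NumPy sketch walks through Eqs. (1)-(5) with random stand-ins for the learned parameters; the shapes follow the text (V is T×N word vectors, S is L×M spatial CNN features), and the sizes and scaling are illustrative only.

```python
# One-hop word-guided spatial attention (Eqs. 1-5), NumPy sketch.
import numpy as np

T, L, M, N, K = 12, 49, 1024, 512, 1000
rng = np.random.default_rng(0)
V   = rng.standard_normal((T, N))                  # embedded question words
S   = rng.standard_normal((L, M))                  # spatial CNN features
W_A = rng.standard_normal((M, N)) * 0.01; b_A = np.zeros((L, N))
W_E = rng.standard_normal((M, N)) * 0.01; b_E = np.zeros((L, N))
w_Q = rng.standard_normal(T);             b_Q = np.zeros(N)
W_P = rng.standard_normal((K, N)) * 0.01; b_P = np.zeros(K)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

C     = V @ (S @ W_A + b_A).T               # Eq. (1): (T, L) word/location correlations
W_att = softmax(C.max(axis=0))              # Eq. (2): attend where some word matches
S_att = W_att @ (S @ W_E + b_E)             # Eq. (3): attention-weighted visual evidence
Q     = w_Q @ V + b_Q                       # Eq. (4): bag-of-words question embedding
P     = softmax(W_P @ np.maximum(S_att + Q, 0) + b_P)   # Eq. (5): answer distribution
```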
# 3.2 Spatial Attention in Two-Hop Model
We can repeat hops to promote deeper inference, gathering additional evidence at each hop. Recall that the visual evidence vector S_att is added to the question representation Q in the first hop to produce an updated question vector:

O_hop1 = S_att + Q      (6)

On the next hop, this vector O_hop1 ∈ R^N is used in place of the individual word vectors V to extract, from memory, additional visual features correlated with the whole question and to update the visual evidence.

The correlation matrix C in the first hop provides fine-grained local evidence from each word vector V in the question, while the correlation vector C_hop2 in the next hop considers the global evidence from the whole question representation Q. The correlation vector C_hop2 ∈ R^L in the second hop is calculated by

C_hop2 = (S · W_E + b_E) · O_hop1      (7)

where W_E ∈ R^{M×N} and b_E ∈ R^{L×N} serve as the attention embedding weights and bias of the visual features S in the second hop. Since the attention embedding weights in the second hop are shared with the evidence embedding in the first hop, we directly reuse W_E and b_E here.

The attention weights in the second hop, W_att2, are obtained by applying the softmax function to the correlation vector C_hop2:

W_att2 = softmax(C_hop2)      (8)

Then, the correlated visual information in the second hop, S_att2 ∈ R^N, is extracted using the attention weights W_att2:

S_att2 = W_att2 · (S · W_E2 + b_E2)      (9)

where W_E2 ∈ R^{M×N} are the evidence embedding weights of the visual features S in the second hop, and b_E2 ∈ R^{L×N} is the bias term.

The final answer P is predicted by combining the whole question representation Q, the local visual evidence S_att from each word vector in the first hop, and the global visual evidence S_att2 from the whole question in the second hop:

P = softmax(W_P · f(O_hop1 + S_att2) + b_P)      (10)

where W_P ∈ R^{K×N}, the bias term b_P ∈ R^K, and K represents the number of possible answers; f is the activation function. More hops can be added in this manner; a sketch of the second hop follows.
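Continuing the NumPy sketch from Sec. 3.1, the second hop (Eqs. (6)-(10)) reuses W_E and b_E as the attention embedding, exactly as stated for Eq. (7), and introduces a separate evidence embedding W_E2, b_E2; again this is an illustrative sketch, not the released implementation.

```python
# Second hop (Eqs. 6-10), extending the one-hop sketch above.
W_E2 = rng.standard_normal((M, N)) * 0.01; b_E2 = np.zeros((L, N))

O_hop1 = S_att + Q                            # Eq. (6): updated question vector
C_hop2 = (S @ W_E + b_E) @ O_hop1             # Eq. (7): (L,) whole-question correlations
W_att2 = softmax(C_hop2)                      # Eq. (8)
S_att2 = W_att2 @ (S @ W_E2 + b_E2)           # Eq. (9): second-hop visual evidence
P      = softmax(W_P @ np.maximum(O_hop1 + S_att2, 0) + b_P)   # Eq. (10)
```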
The entire network is differentiable and is trained using stochastic gradient descent via standard backpropagation, allowing image feature extraction, image embedding, word embedding and answer prediction to be jointly optimized on the training image/question/answer triples.
Figure 3 panels: eight image/question pairs of the form "Is there a red square on the (top|bottom|right|left)?", each shown with the ground truth and the model's prediction.

Fig. 3. Absolute position experiment: for each image and question pair, we show the original image (left) and the attention weights W_att (right). The attention follows these rules. The first rule (top row) looks at the position specified in the question (top|bottom|right|left): if it contains a square, answer "yes"; otherwise answer "no". The second rule (bottom row) looks at the region where there is a square, and answers "yes" for the question about that position and "no" for the other three positions.
# 4 Experiments
In this section, we conduct a series of experiments to evaluate our model. To explore whether the model learns to perform the spatial inference necessary for answering visual questions that explicitly require spatial reasoning, we design a set of experiments using synthetic visual question/answer data in Sec. 4.1. The experimental results of our model on standard datasets (the DAQUAR [1] and VQA [2] datasets) are reported in Sec. 4.2.
# 4.1 Exploring Attention on Synthetic Data
The questions in the public VQA datasets are quite varied and difficult, and often require common sense knowledge to answer (e.g., "Does this man have 20/20 vision?" about a person wearing glasses). Furthermore, past work [10,11] showed that the question text alone (no image) is a very strong predictor of the answer. Therefore, before evaluating on standard datasets, we would first like to understand how the proposed model uses spatial attention to answer simple visual questions where the answer cannot be predicted from the question alone. Our visualization demonstrates that the attention mechanism does learn to attend to objects and gather evidence via certain inference rules.
Absolute Position Recognition. We investigate whether the model has the ability to recognize the absolute location of an object in the image. We explore this by designing a simple task where an object (a red square) appears in some region of a white-background image, and the question is "Is there a red square on the [top|bottom|left|right]?" For each image, we randomly place the square in one of the four regions, and generate the four questions above, together with three "no" answers and one "yes" answer. The generated data is split into training and testing sets.
Due to the simplicity of this synthetic dataset, the SMem-VQA one-hop model achieves 100% test accuracy. However, the baseline model (iBOWIMG) [3] cannot infer the answer and only obtains an accuracy of around 75%, which is the prior probability of the answer "no" in the training set.
Figure 4 panels: image/question pairs of the form "Is there a red square on the (top|bottom|left|right) of the cat?", each shown with the ground truth and the model's prediction.

Fig. 4. Relative position experiment: for each image and question pair, we show the original image (left), the evidence embedding W_E of the convolutional layer (middle) and the attention weights W_att (right). The evidence embedding W_E has high activations on both the cat and the red square. The attention weights follow similar inference rules as in Fig. 3, with the difference that the attention position is relative to the cat.
The SMem-VQA one-hop model is equivalent to the iBOWIMG model if the attention weights in our one-hop model are set equally for each location, since the iBOWIMG model uses the mean pool of the convolutional feature (inception_5b/output) of GoogLeNet that we use in the SMem-VQA model. We check the visualization of the attention weights and find that the relationship between the high-attention position and the answer can be expressed by logical expressions. We show the attention weights of several typical examples in Fig. 3, which reflect two logic rules: 1) Look at the position specified in the question (top|bottom|right|left); if it contains a square, then answer "yes"; if it does not contain a square, then answer "no". 2) Look at the region where there is a square, then answer "yes" for the question about that position and "no" for the questions about the other three positions.
In the iBOWIMG model, the mean-pooled GoogLeNet visual features lose spatial information and thus cannot distinguish images with a square in different positions. On the contrary, our SMem-VQA model can pay high attention to different regions according to the question, and generate an answer based on the selected region, using some learned inference rules. This experiment demonstrates that the attention mechanism in our model is able to make absolute spatial location inferences based on spatial attention.
Relative Position Recognition. In order to check whether the model has the ability to infer the position of one object relative to another object, we collect all the cat images from the MS COCO Detection dataset [30], and add a red square on the [top|bottom|left|right] of the bounding box of the cat in each image. For each generated image, we create the four questions "Is there a red square on the [top|bottom|left|right] of the cat?" together with three "no" answers and one "yes" answer. We select 2,639 training cat images and 1,395 testing cat images from the MS COCO Detection dataset.
Our SMem-VQA one-hop model achieves 96% test accuracy on this synthetic task, while the baseline model (iBOWIMG) accuracy is around 75%. We also check that another simple baseline, which predicts the answer based on the absolute position of the square in the image, gets around 70% accuracy.
Table 1. Accuracy results on the DAQUAR dataset (in percentage).
| Model                | DAQUAR |
|----------------------|--------|
| Multi-World [1]      | 12.73  |
| Neural-Image-QA [10] | 29.27  |
| Question LSTM [10]   | 32.32  |
| VIS+LSTM [11]        | 34.41  |
| Question BOW [11]    | 32.67  |
| IMG+BOW [11]         | 34.17  |
| SMem-VQA One-Hop     | 36.03  |
| SMem-VQA Two-Hop     | 40.07  |
We visualize the evidence embedding W_E features and the attention weights W_att of several typical examples in Fig. 4. The evidence embedding W_E has high activations on the cat and the red square, while the attention weights pay high attention to certain locations around the cat. We can analyze the attention in the correctly predicted examples using the same rules as in the absolute position recognition experiment. These rules still work, but the position is relative to the cat object: 1) Check the specified position relative to the cat; if it finds the square, then answer "yes", otherwise "no". 2) Find the square, then answer "yes" for the specified position, and answer "no" for the other positions around the cat. We also check the images where our model makes mistakes, and find that the mistakes mainly occur in images with more than one cat. The red square appears near only one of the cats in the image, but our model might make mistakes by focusing on the other cats. We conclude that our SMem-VQA model can infer relative spatial position based on spatial attention around the specified object, which can also be represented by logical inference rules.
# 4.2 Experiments on Standard Datasets
Results on DAQUAR. The DAQUAR dataset is a relatively small dataset which builds on the NYU Depth Dataset V2 [31]. We use the reduced DAQUAR dataset [1]. The evaluation metric for this dataset is 0-1 accuracy. The embedding dimension is 512 for our models running on the DAQUAR dataset. We use several models previously reported on DAQUAR as baselines, listed below:

- Multi-World [1]: an approach based on handcrafted features using a semantic parse of the question and scene analysis of the image combined in a latent-world Bayesian framework.
- Neural-Image-QA [10]: uses an LSTM to encode the question and then decode the hidden information into the answer. The image CNN feature vector is shown at each time step of the encoding phase.
- Question LSTM [10]: only shows the question to the LSTM to predict the answer, without any image information.
- VIS+LSTM [11]: similar to Neural-Image-QA, but only shows the image features to the LSTM at the first time step, and the question in the remaining time steps, to predict the answer.
Figure 5 panels (recoverable question/answer pairs): what electrical ...? (GT: blender; rest truncated in extraction); which way can you not turn? (GT: left; One Hop: right; Two Hop: left); what is the colour of the object near the bed? (GT: pink; One Hop: bed; Two Hop: pink); what is beneath the framed picture? (GT: sofa; One Hop: table; Two Hop: sofa).

Fig. 5. Visualization of the spatial attention weights in the SMem-VQA One-Hop and Two-Hop models on the VQA (top row) and DAQUAR (bottom row) datasets. For each image and question pair, we show the original image, the attention weights W_att of the One-Hop model, and the two attention weights W_att and W_att2 of the Two-Hop model, in that order.
⢠Question BOW [11]: only uses the BOW question representation and a single hidden layer neural network to predict the answer, without any image features. ⢠IMG+BOW [11]: concatenates the BOW question representation with image features, and then uses a single hidden layer neural network to predict the answer. This model is similar to the iBOWIMG baseline model in [3].
Results of our SMem-VQA model on the DAQUAR dataset and the baseline model results reported in previous work are shown in Tab. 1. From the DAQUAR results in Tab. 1, we see that models based on deep features significantly outperform the Multi-World approach based on hand-crafted features. Modeling the question only, with either the LSTM model or the Question BOW model, does equally well in comparison, indicating that the question text contains important prior information for predicting the answer. Also, on this dataset, the VIS+LSTM model achieves better accuracy than the Neural-Image-QA model; the former shows the image only at the first timestep of the LSTM, while the latter does so at each timestep. In comparison, both our One-Hop and Two-Hop spatial attention models outperform IMG+BOW, as well as the other baseline models. A major advantage of our model is the ability to visualize the inference process in the deep network. To illustrate this, two attention weight visualization examples of the SMem-VQA One-Hop and Two-Hop models on the DAQUAR dataset are shown in Fig. 5 (bottom row).
Results on VQA. The VQA dataset is a recent large dataset based on MS COCO [30]. We use the full release (V1.0) open-ended dataset, which contains a train set and a val set. Following standard practice, we choose the top 1000 answers in the train and val sets as possible prediction answers, and only keep the examples whose answers belong to these 1000 answers as training data. The question vocabulary size is 7477, with a word frequency of at least three. Because of the larger training size, the embedding dimension is 1000 on the VQA dataset. We report the test-dev and test-standard results from the VQA evaluation server.
Table 2. Test-dev and test-standard results on the Open-Ended VQA dataset (in percentage). Models with * use external training data in addition to the VQA dataset.
test-dev (left four result columns) / test-standard (right four result columns):

| Model            | All   | Y/N   | Num   | Others | All   | Y/N   | Num   | Others |
|------------------|-------|-------|-------|--------|-------|-------|-------|--------|
| LSTM Q+I [2]     | 53.74 | 78.94 | 35.24 | 36.42  | 54.06 | -     | -     | -      |
| ACK* [26]        | 55.72 | 79.23 | 36.13 | 40.08  | 55.98 | 79.05 | 36.10 | 40.61  |
| DPPnet* [27]     | 57.22 | 80.71 | 37.24 | 41.69  | 57.36 | 80.28 | 36.92 | 42.24  |
| iBOWIMG [3]      | 55.72 | 76.55 | 35.03 | 42.62  | 55.89 | 76.76 | 34.98 | 42.62  |
| SMem-VQA One-Hop | 56.56 | 78.98 | 35.93 | 42.09  | -     | -     | -     | -      |
| SMem-VQA Two-Hop | 57.99 | 80.87 | 37.32 | 43.12  | 58.24 | 80.8  | 37.53 | 43.48  |
The server evaluation uses the evaluation metric introduced by [2], which gives partial credit to certain synonym answers: Acc(ans) = min{(# humans that said ans)/3, 1}.
For the attention models, we do not mirror the input image when using the CNN to extract convolutional features, since this might cause confusion about the spatial locations of objects in the input image. The optimization algorithm used is stochastic gradient descent (SGD) with a minibatch of size 50 and momentum of 0.9. The base learning rate is set to 0.01 and is halved every six epochs. Regularization, dropout and the L2 norm are cross-validated and used. A sketch of this schedule follows.
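A sketch of this schedule, using PyTorch names purely for illustration (the paper's implementation is in CAFFE [14]), follows; the stand-in parameter list, the L2 strength, and the epoch count are placeholders.

```python
# SGD with momentum 0.9 and base lr 0.01 halved every six epochs.
import torch

params = [torch.nn.Parameter(torch.randn(10, 10))]      # stand-in parameters
optimizer = torch.optim.SGD(params, lr=0.01, momentum=0.9,
                            weight_decay=5e-4)           # assumed L2 strength
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=6, gamma=0.5)

for epoch in range(18):
    # ... one epoch of training with minibatches of size 50 would go here ...
    scheduler.step()
    print(epoch, optimizer.param_groups[0]['lr'])        # 0.01 -> 0.005 -> ...
```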
For the VQA dataset, we use the simple iBOWIMG model in [3] as one baseline model, which beats most existing VQA models currently on arxiv.org. We also compare to two models in [26][27] which have comparable or better results than the iBOWIMG model. These three baseline models, as well as the best model in the VQA dataset paper [2], are listed below:

- LSTM Q+I [2]: uses the element-wise multiplication of the LSTM encoding of the question and the image feature vector to predict the answer. This is the best model in the VQA dataset paper.
- ACK [26]: shows the image attribute features, the generated image caption and relevant external knowledge from a knowledge base to the LSTM at the first time step, and the question in the remaining time steps, to predict the answer.
- DPPnet [27]: uses the Gated Recurrent Unit (GRU) representation of the question to predict certain parameters of a CNN classification network. Their parameter prediction network pre-trains the GRU question representation on a large-scale text corpus to improve generalization.
- iBOWIMG [3]: concatenates the BOW question representation with the image feature (GoogLeNet), and uses softmax classification to predict the answer.
The overall accuracy and per-answer-category accuracy for our SMem-VQA models and the four baseline models on the VQA dataset are shown in Tab. 2. From the table, we can see that the SMem-VQA One-Hop model obtains slightly better results than the iBOWIMG model. However, the SMem-VQA Two-Hop model achieves an improvement of 2.27% on test-dev and 2.35% on test-standard compared to the iBOWIMG model, demonstrating the value of spatial attention. The SMem-VQA Two-Hop model also shows the best performance in the per-answer-category accuracy. The SMem-VQA Two-Hop model has a slightly better result than the DPPnet model. The DPPnet model uses a large-scale text corpus to pre-train the Gated Recurrent Unit (GRU) network for question representation. Similar pre-training work on extra data to improve model accuracy has been done in [32].
Figure 6 panels: do tourist enjoy a day at the beach? (yes); what color is the fork? (green); what game are they playing? (baseball); what is the woman doing? (eating).

Fig. 6. Visualization of the original image (left), the spatial attention weights W_att in the first hop (middle), and one correlation vector from the correlation matrix C for the location with the highest attention weight (right), for the SMem-VQA Two-Hop model on the VQA dataset. Higher values in the correlation vector indicate stronger correlation of that word with the chosen location's image features.
done in [32]. Considering the fact that our model does not use extra data to pre-train the word embeddings, its results are very competitive. We also experiment with adding a third hop into our model on the VQA dataset, but the result does not improve further.
The attention weights visualization examples for the SMem-VQA One-Hop and Two-Hop models on the VQA dataset are shown in Fig. 5 (top row). From the visualization, we can see that the two-hop model collects supplementary evidence for inferring the answer, which may be necessary to achieve an improvement on these complicated real-world datasets. We also visualize the fine-grained alignment in the first hop of our SMem-VQA Two-Hop model in Fig. 6. The correlation vector values (blue bars) measure the correlation between image regions and each word vector in the question. Higher values indicate stronger correlation of that particular word with the specific location's image features. We observe that the fine-grained visual evidence collected using each local word vector, together with the global visual evidence from the whole question, complement each other to infer the correct answer for the given image and question, as shown in Fig. 1.
# 5 Conclusion

In this paper, we proposed the Spatial Memory Network for VQA, a memory network architecture with a spatial attention mechanism adapted to the visual question answering task. We proposed a set of synthetic spatial questions and demonstrated that our model learns inference rules based on spatial attention through attention weight visualization. Evaluation on the challenging DAQUAR and VQA datasets showed improved results over previously published models. Our model can be used to visualize the inference steps learned by the deep network, giving some insight into its processing. Future work may include further exploring the inference ability of our SMem-VQA model and exploring other VQA attention models.
# References
1. Malinowski, M., Fritz, M.: A multi-world approach to question answering about real-world scenes based on uncertain input. CoRR abs/1410.0210 (2014)
2. Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C.L., Parikh, D.: VQA: visual question answering. CoRR abs/1505.00468 (2015)
3. Zhou, B., Tian, Y., Sukhbaatar, S., Szlam, A., Fergus, R.: Simple baseline for visual question answering. arXiv preprint arXiv:1512.02167 (2015)
4. Tu, K., Meng, M., Lee, M.W., Choe, T.E., Zhu, S.C.: Joint video and text parsing for understanding events and answering queries. MultiMedia, IEEE 21(2) (2014) 42–70

5. Lasecki, W.S., Zhong, Y., Bigham, J.P.: Increasing the bandwidth of crowdsourced visual question answering to better support blind users. In: Proceedings of the 16th international ACM SIGACCESS conference on Computers & accessibility, ACM (2014) 263–264
6. Donahue, J., Hendricks, L.A., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description. arXiv preprint arXiv:1411.4389 (2014)
7. Vinyals, O., Toshev, A., Bengio, S., Erhan, D.: Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555 (2014)
8. Karpathy, A., Joulin, A., Li, F.F.F.: Deep fragment embeddings for bidirectional image sentence mapping. In: Advances in neural information processing systems. (2014) 1889–1897

9. Fang, H., Gupta, S., Iandola, F., Srivastava, R., Deng, L., Dollár, P., Gao, J., He, X., Mitchell, M., Platt, J., et al.: From captions to visual concepts and back. arXiv preprint arXiv:1411.4952 (2014)

10. Malinowski, M., Rohrbach, M., Fritz, M.: Ask your neurons: A neural-based approach to answering questions about images. arXiv preprint arXiv:1505.01121 (2015)
11. Ren, M., Kiros, R., Zemel, R.S.: Exploring models and data for image question answering. CoRR abs/1505.02074 (2015)
12. Weston, J., Chopra, S., Bordes, A.: Memory networks. CoRR abs/1410.3916 (2014)
13. Sukhbaatar, S., Szlam, A., Weston, J., Fergus, R.: End-to-end memory networks. arXiv preprint arXiv:1503.08895 (2015)
14. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)

15. Yahya, M., Berberich, K., Elbassuoni, S., Ramanath, M., Tresp, V., Weikum, G.: Natural language questions for the web of data. In: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Association for Computational Linguistics (2012) 379–390
16. Berant, J., Liang, P.: Semantic parsing via paraphrasing. In: Proceedings of ACL. Volume 7. (2014) 92
17. Bordes, A., Chopra, S., Weston, J.: Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676 (2014)
18. Graves, A., Wayne, G., Danihelka, I.: Neural turing machines. arXiv preprint arXiv:1410.5401 (2014)
19. Xu, K., Ba, J., Kiros, R., Courville, A., Salakhutdinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044 (2015)
20. Yao, L., Torabi, A., Cho, K., Ballas, N., Pal, C., Larochelle, H., Courville, A.: Describing videos by exploiting temporal structure. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 4507–4515

21. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)

22. Luong, M.T., Pham, H., Manning, C.D.: Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 (2015)

23. Hermann, K.M., Kocisky, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., Blunsom, P.: Teaching machines to read and comprehend. In: Advances in Neural Information Processing Systems. (2015) 1684–1692

24. Cho, K., Courville, A., Bengio, Y.: Describing multimedia content using attention-based encoder–decoder networks. (2015)

25. Zhu, Y., Groth, O., Bernstein, M., Fei-Fei, L.: Visual7w: Grounded question answering in images. arXiv preprint arXiv:1511.03416 (2015)

26. Wu, Q., Wang, P., Shen, C., Hengel, A.v.d., Dick, A.: Ask me anything: Free-form visual question answering based on knowledge from external sources. arXiv preprint arXiv:1511.06973 (2015)

27. Noh, H., Seo, P.H., Han, B.: Image question answering using convolutional neural network with dynamic parameter prediction. arXiv preprint arXiv:1511.05756 (2015)
28. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR 2015. (2015)
29. Shih, K.J., Singh, S., Hoiem, D.: Where to look: Focus regions for visual question answering. arXiv preprint arXiv:1511.07394 (2015)
30. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: Computer Vision – ECCV 2014. Springer (2014) 740–755

31. Nathan Silberman, Derek Hoiem, P.K., Fergus, R.: Indoor segmentation and support inference from rgbd images. In: ECCV. (2012)
32. Venugopalan, S., Xu, H., Donahue, J., Rohrbach, M., Mooney, R., Saenko, K.: Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729 (2014)
1511.04636 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 |
# Deep Reinforcement Learning with a Natural Language Action Space
# Ji He†, Jianshu Chen‡, Xiaodong He‡, Jianfeng Gao‡, Lihong Li‡, Li Deng‡ and Mari Ostendorf†

†Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA
{jvking, ostendor}@uw.edu
‡Microsoft Research, Redmond, WA 98052, USA
{jianshuc, xiaohe, jfgao, lihongli, deng}@microsoft.com
# Abstract
This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Q-learning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.
# 1 Introduction
This work is concerned with learning strategies for sequential decision-making tasks, where a system takes actions at a particular state with the goal of maximizing a long-term reward. More specifically, we consider tasks where both the states and the actions are characterized by natural language, such as in human-computer dialog systems, tutoring systems, or text-based games. In a text-based game, for example, the player (or system, in this case) is given a text string that describes the current state of the game and several text strings that describe possible actions one could take. After selecting one of the actions, the environment state is updated and revealed in a new textual description. A reward is given either at each transition or in the end. The objective is to understand, at each step, the state text and all the action texts to pick the most relevant action, navigating through the sequence of texts so as to obtain the highest long-term reward. Here the notion of relevance is based on the joint state/action impact on the reward: an action text string is said to be "more relevant" (to a state text string) than the other action texts if taking that action would lead to a higher long-term reward. Because a player's action changes the environment, reinforcement learning (Sutton and Barto, 1998) is appropriate for modeling long-term dependency in text games.
There is a large body of work on reinforcement learning. Of most interest here are approaches leveraging neural networks because of their success in handling a large state space. Early work – TD-gammon – used a neural network to approximate the state value function (Tesauro, 1995). Recently, inspired by advances in deep learning (LeCun et al., 2015; Hinton et al., 2012; Krizhevsky et al., 2012; Dahl et al., 2012), significant progress has been made by combining deep learning with reinforcement learning. Building on the approach of Q-learning (Watkins and Dayan, 1992), the "Deep Q-Network" (DQN) was developed and applied to Atari games (Mnih et al., 2013; Mnih et al., 2015) and shown to achieve human level performance by applying convolutional neural networks to the raw image pixels. Narasimhan et al. (2015) applied a Long Short-Term Memory network to characterize the state space in a DQN framework for learning control policies for parser-based text games. More recently, Nogueira and Cho (2016) have also proposed a goal-driven web navigation task for language based sequential decision making study. Another stream of work focuses on continuous control with deep reinforcement learning (Lillicrap et al., 2016), where an actor-critic algorithm operates over a known continuous action space.
Inspired by these successes and recent work using neural networks to learn phrase- or sentence-level embeddings (Collobert and Weston, 2008; Huang et al., 2013; Le and Mikolov, 2014; Sutskever et al., 2014; Kiros et al., 2015), we propose a novel deep architecture for text understanding, which we call a deep reinforcement relevance network (DRRN). The DRRN uses separate deep neural networks to map state and action text strings into embedding vectors, from which "relevance" is measured numerically by a general interaction function, such as their inner product. The output of this interaction function defines the value of the Q-function for the current state-action pair, which characterizes the optimal long-term reward for pairing these two text strings. The Q-function approximation is learned in an end-to-end manner by Q-learning.
The DRRN differs from prior work in that earlier studies mostly considered action spaces that are bounded and known. For actions described by natural language text strings, the action space is inherently discrete and potentially unbounded due to the exponential complexity of language with respect to sentence length. A distinguishing aspect of the DRRN architecture, compared to simple DQN extensions, is that two different types of meaning representations are learned, reflecting the tendency for state texts to describe scenes and action texts to describe potential actions from the user. We show that the DRRN learns a continuous space representation of actions that successfully generalizes to paraphrased descriptions of actions unseen in training.
# 2 Deep Reinforcement Relevance Network
# 2.1 Text Games and Q-learning
We consider the sequential decision making problem for text understanding. At each time step $t$, the agent receives a string of text that describes the state $s_t$ (i.e., "state-text") and several strings of text that describe all the potential actions $a_t$ (i.e., "action-text"). The agent attempts to understand the texts from both the state side and the action side, measuring their relevance to the current context $s_t$ for the purpose of maximizing the long-term reward, and then picking the best action. Then, the environment state is updated $s_{t+1} = s'$ according to the probability $p(s'|s,a)$, and the agent receives a reward $r_t$ for that particular transition. The policy of the agent is defined to be the probability $\pi(a_t|s_t)$ of taking action $a_t$ at state $s_t$. Define the Q-function $Q^{\pi}(s,a)$ as the expected return starting from $s$, taking the action $a$, and thereafter following policy $\pi(a|s)$:
$$Q^{\pi}(s,a) = \mathbb{E}\left[\left.\sum_{k=0}^{+\infty} \gamma^{k} r_{t+k} \,\right|\, s_t = s,\; a_t = a\right]$$
where $\gamma$ denotes a discount factor. The optimal policy and Q-function can be found by using the Q-learning algorithm (Watkins and Dayan, 1992):
$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \eta_t \left( r_t + \gamma \cdot \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right) \quad (1)$$
where $\eta_t$ is the learning rate of the algorithm. In this paper, we use a softmax selection strategy as the exploration policy during the learning stage, which chooses the action $a_t$ at state $s_t$ according to the following probability:
$$\pi(a_t = a_t^i \mid s_t) = \frac{\exp\big(\alpha \cdot Q(s_t, a_t^i)\big)}{\sum_{j=1}^{|\mathcal{A}_t|} \exp\big(\alpha \cdot Q(s_t, a_t^j)\big)} \quad (2)$$
where $\mathcal{A}_t$ is the set of feasible actions at state $s_t$, $a_t^i$ is the $i$-th feasible action in $\mathcal{A}_t$, $|\cdot|$ denotes the cardinality of the set, and $\alpha$ is the scaling factor in the softmax operation. $\alpha$ is kept constant throughout the learning period. All methods are initialized with small random weights, so initial Q-value differences will be small, thus making the Q-learning algorithm more explorative initially. As Q-values better approximate the true values, a reasonable $\alpha$ will make action selection put high probability on the optimal action (exploitation), but still maintain a small exploration probability.
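For concreteness, a minimal tabular sketch of the update rule (1) and the exploration policy (2); the dictionary-backed Q table and the constant hyperparameter values are illustrative stand-ins for the neural approximators developed below.

```python
import numpy as np

Q = {}                                   # tabular stand-in for the Q-function
eta, gamma, alpha = 0.001, 0.9, 1.0

def softmax_action(state, actions):
    """Sample an action index from the softmax policy, Eq. (2)."""
    q = np.array([Q.get((state, a), 0.0) for a in actions])
    z = alpha * q - np.max(alpha * q)    # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return actions[np.random.choice(len(actions), p=p)]

def q_update(s, a, r, s_next, next_actions):
    """One application of the Q-learning recursion, Eq. (1)."""
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in next_actions),
                    default=0.0)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + eta * (r + gamma * best_next - old)
```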
# 2.2 Natural language action space
Let $\mathcal{S}$ denote the state space, and let $\mathcal{A}$ denote the entire action space that includes all the unique actions over time. A vanilla Q-learning recursion (1) needs to maintain a table of size $|\mathcal{S}| \times |\mathcal{A}|$, which is problematic for a large state/action space. Prior work using a DNN in Q-function approximation has shown high capacity and scalability for handling a large state space, but most studies have used a network that generates $|\mathcal{A}|$ outputs, each of which represents the value of $Q(s,a)$ for a particular action $a$. It is not practical to have a DQN architecture whose size depends explicitly on the large number of natural language actions. Further, in many text games, the feasible action set $\mathcal{A}_t$ at each time $t$ is an unknown subset of the unbounded action space $\mathcal{A}$ that varies over time.
For the case where the maximum number of possible actions at any point in time ($\max_t |\mathcal{A}_t|$) is known, the DQN can be modified to simply use that number of outputs ("Max-action DQN"), as illustrated in Figure 1(a), where the state and action vectors are concatenated (i.e., as an extended state vector) as its input. The network computes the Q-function values for the actions in the current feasible set as its outputs. For a complex game, $\max_t |\mathcal{A}_t|$ may be difficult to obtain, because $\mathcal{A}_t$ is usually unknown beforehand. Nevertheless, we will use this modified DQN as a baseline.
An alternative approach is to use a function approximation with a neural network that takes a state-action pair as input, and outputs a single Q-value for each possible action ("Per-action DQN" in Figure 1(b)). This architecture easily handles a varying number of actions and represents a second baseline.
We propose an alternative architecture for handling a natural language action space in sequential text understanding: the deep reinforcement relevance network (DRRN). As shown in Figure 1(c), the DRRN consists of a pair of DNNs, one for the state text embedding and the other for action text embeddings, which are combined using a pairwise interaction function. The texts used to describe states and actions could be very different in nature, e.g., a state text could be long, containing sentences with complex linguistic structure, whereas an action text could be very concise or just a verb phrase. Therefore, it is desirable to use two networks with different structures to handle state and action texts, respectively. As we will see in the experimental sections, by using two separate deep neural networks for the state and action sides, we obtain much better results.
# 2.3 DRRN architecture: Forward activation
Given any state/action text pair $(s_t, a_t^i)$, the DRRN estimates the Q-function $Q(s_t, a_t^i)$ in two steps. First, map both $s_t$ and $a_t^i$ to their embedding vectors using the corresponding DNNs, respectively. Second, approximate $Q(s_t, a_t^i)$ using an interaction function such as the inner product of the embedding vectors. Then, given a particular state $s_t$, we can select the optimal action $a_t$ among the set of actions via $a_t = \arg\max_{a_t^i} Q(s_t, a_t^i)$.
More formally, let $h_{l,s}$ and $h_{l,a}$ denote the $l$-th hidden layer for the state and action side neural networks, respectively. For the state side, $W_{l,s}$ and $b_{l,s}$ denote the linear transformation weight matrix and bias vector between the $(l-1)$-th and $l$-th hidden layers. $W_{l,a}$ and $b_{l,a}$ denote the equivalent parameters for the action side. In this study, the DRRN has $L$ hidden layers on each side.
$$h_{1,s} = f(W_{1,s}\, s_t + b_{1,s}) \quad (3)$$

$$h_{1,a}^i = f(W_{1,a}\, a_t^i + b_{1,a}) \quad (4)$$

$$h_{l,s} = f(W_{l-1,s}\, h_{l-1,s} + b_{l-1,s}) \quad (5)$$

$$h_{l,a}^i = f(W_{l-1,a}\, h_{l-1,a}^i + b_{l-1,a}) \quad (6)$$
where $f(\cdot)$ is the nonlinear activation function at the hidden layers, which, for example, could be chosen as $\tanh(x)$, and $i = 1, 2, 3, \ldots, |\mathcal{A}_t|$ is the action index. A general interaction function $g(\cdot)$ is used to approximate the Q-function values, $Q(s,a)$, in the following parametric form:
$$Q(s, a^i; \Theta) = g\left(h_{L,s},\, h_{L,a}^i\right) \quad (7)$$
where $\Theta$ denotes all the model parameters. The interaction function could be an inner product, a bilinear operation, or a nonlinear function such as a deep neural network. In our experiments, the inner product and bilinear operation gave similar results. For simplicity, we present our experiments mostly using the inner product interaction function.
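To make Eqs. (3)-(7) concrete, a minimal PyTorch sketch of the DRRN forward pass with the inner-product interaction; the vocabulary sizes, hidden dimension and class name are illustrative assumptions rather than values fixed by the paper.

```python
import torch
import torch.nn as nn

class DRRN(nn.Module):
    """Two tanh MLP towers (state and action) whose top-layer embeddings
    are combined by an inner product to give one Q-value per action."""
    def __init__(self, state_vocab=2258, action_vocab=419, hidden=100, layers=2):
        super().__init__()
        def tower(d_in):
            dims = [d_in] + [hidden] * layers
            steps = []
            for a, b in zip(dims[:-1], dims[1:]):
                steps += [nn.Linear(a, b), nn.Tanh()]
            return nn.Sequential(*steps)
        self.state_net, self.action_net = tower(state_vocab), tower(action_vocab)

    def forward(self, state_bow, action_bows):
        h_s = self.state_net(state_bow)        # (hidden,)
        h_a = self.action_net(action_bows)     # (num_actions, hidden)
        return h_a @ h_s                       # Q per action, Eq. (7)
```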
The success of the DRRN in handling a natural language action space $\mathcal{A}$ lies in the fact that the state-text and the action-texts are mapped into separate finite-dimensional embedding spaces. The end-to-end learning process (discussed next) makes the embedding vectors in the two spaces more aligned for "good" (or relevant) action texts compared to "bad" (or irrelevant) choices, resulting in a higher interaction function output (Q-function value).
# 2.4 Learning the DRRN: Back propagation
To learn the DRRN, we use the "experience-replay" strategy (Lin, 1993), which uses a fixed exploration policy to interact with the environment to obtain a sample trajectory. Then, we randomly sample a transition tuple $(s_k, a_k, r_k, s_{k+1})$, and compute the temporal difference error for sample $k$:
$$d_k = r_k + \gamma \max_{a} Q(s_{k+1}, a; \Theta_{k-1}) - Q(s_k, a_k; \Theta_{k-1}),$$
and update the model according to the recursions:
$$W_{v,k} = W_{v,k-1} + \eta_k\, d_k \cdot \frac{\partial Q(s_k, a_k; \Theta_{k-1})}{\partial W_{v,k-1}} \quad (8)$$

$$b_{v,k} = b_{v,k-1} + \eta_k\, d_k \cdot \frac{\partial Q(s_k, a_k; \Theta_{k-1})}{\partial b_{v,k-1}} \quad (9)$$
Figure 1: Different deep Q-learning architectures: Max-action DQN and Per-action DQN both treat input text as concatenated vectors and compute output Q-values with a single NN. DRRN models text embeddings from state/action sides separately, and uses an interaction function to compute Q-values.
(Figure 2 plot: PCA projections with inner products. After 200 episodes: action 1 (-0.55), action 2 (-1.30); after 400 episodes: action 1 (+0.91), action 2 (-17.17); after 600 episodes: action 1 (+16.53), action 2 (-22.08).)
Figure 2: PCA projections of text embedding vectors for state and associated action vectors after 200, 400 and 600 training episodes. The state is "As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street." Action 1 (good choice) is "Look up", and action 2 (poor choice) is "Ignore the alarm of others and continue moving forward."
for $v \in \{s, a\}$. Expressions for $\partial Q / \partial W$, $\partial Q / \partial b$ and other algorithm details are given in the supplementary materials. Random sampling essentially scrambles the trajectory from experience-replay into a "bag-of-transitions", which has been shown to avoid oscillations or divergence and achieve faster convergence in Q-learning (Mnih et al., 2015). Since the models on the action side share the same parameters, models associated with all actions are effectively updated even though the back propagation is only over one action. We apply back propagation to learn how to pair the text strings from the reward signals in an end-to-end manner. The representation vectors for the state-text and the action-text are automatically learned to be aligned with each other in the text embedding space from the reward signals. A summary of the full learning algorithm is given in Algorithm 1.
Figure 2 illustrates learning with an inner product interaction function. We used Principal Component Analysis (PCA) to project the 100-dimension last hidden layer representation (before the inner product) to a 2-D plane. The vector embeddings start with small values, and after 600 episodes of experience-replay training, the embeddings are very close to the converged embedding (4000 episodes). The embedding vector of the optimal action (Action 1) converges to a positive inner product with the state embedding vector, while Action 2 converges to a negative inner product.
# 3 Experimental Results
# 3.1 Text games
Text games, although simple compared to video games, still enjoy high popularity in online communities, with annual competitions held online
Algorithm 1 Learning algorithm for DRRN
1: Initialize replay memory D to capacity N.
2: Initialize DRRN with small random weights.
3: Initialize game simulator and load dictionary.
4: for episode = 1,...,M do
5: Restart game simulator.
6: Read raw state text and a list of action texts from the simulator, and convert them to representations $s_1$ and $a_1^1, a_1^2, \ldots, a_1^{|\mathcal{A}_1|}$
7: for t = 1, ..., T do
8: Compute $Q(s_t, a_t^i; \Theta)$ for the list of actions using DRRN forward activation (Section 2.3).
9: Select an action $a_t$ based on the probability distribution $\pi(a_t = a_t^i \mid s_t)$ (Equation 2)
10: Execute action $a_t$ in the simulator
11: Observe reward $r_t$. Read the next state text and the next list of action texts, and convert them to representations $s_{t+1}$ and $a_{t+1}^1, a_{t+1}^2, \ldots, a_{t+1}^{|\mathcal{A}_{t+1}|}$
12: Store transition $(s_t, a_t, r_t, s_{t+1}, \mathcal{A}_{t+1})$ in $\mathcal{D}$.
13: Sample a random minibatch of transitions $(s_k, a_k, r_k, s_{k+1}, \mathcal{A}_{k+1})$ from $\mathcal{D}$.
14: Set $y_k = r_k$ if $s_{k+1}$ is terminal; otherwise set $y_k = r_k + \gamma \max_{a' \in \mathcal{A}_{k+1}} Q(s_{k+1}, a'; \Theta)$.
15: Perform a gradient descent step on $(y_k - Q(s_k, a_k; \Theta))^2$ with respect to the network parameters $\Theta$ (Section 2.4). Back-propagation is performed only for $a_k$, even though there are $|\mathcal{A}_k|$ actions at time $k$.
16: end for
17: end for
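For readers who prefer code, a compact sketch of the replay step in lines 13-15 of Algorithm 1, written with PyTorch autograd standing in for the hand-derived gradients of Section 2.4; the `drrn` module (as sketched in Section 2.3), the buffer layout and the batch size are assumptions for illustration only.

```python
import random
import torch

def replay_update(drrn, optimizer, replay_buffer, gamma=0.9, batch=1):
    """One experience-replay step. Each stored tuple is
    (state, action, reward, next_state, next_feasible_actions)."""
    for s, a, r, s_next, next_actions in random.sample(replay_buffer, batch):
        with torch.no_grad():                       # target y_k (line 14)
            target = r + gamma * drrn(s_next, next_actions).max()
        q_sa = drrn(s, a.unsqueeze(0))[0]           # back-prop only through a_k
        loss = 0.5 * (target - q_sa) ** 2           # squared TD error (line 15)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```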
since 1995. Text games communicate to players in the form of a text display, which players have to understand and respond to by typing or clicking text (Adams, 2014). There are three types of text games: parser-based (Figure 3(a)), choice-based (Figure 3(b)), and hypertext-based (Figure 3(c)). Parser-based games accept typed-in commands from the player, usually in the form of verb phrases, such as "eat apple", "get key", or "go east". They involve the least complex action language. Choice-based and hypertext-based games present actions after or embedded within the state text. The player chooses an action, and the story continues based on the action taken at this particular state. With the development of web browsing and richer HTML display, choice-based and hypertext-based text games have become more popular, increasing in percentage from 8% in 2010 to 62% in 2014.
For parser-based text games, Narasimhan et al. (2015) have defined a fixed set of 222 actions, which is the total number of possible phrases the parser accepts. Thus the parser-based text game is reduced to a problem that is well suited to a fixed-
Game                     | Saving John   | Machine of Death
Text game type           | Choice        | Choice & Hypertext
Vocab size               | 1762          | 2258
Action vocab size        | 171           | 419
Avg. words/description   | 76.67         | 67.80
State transitions        | Deterministic | Stochastic
# of states (underlying) | > 70          | > 200
Table 1: Statistics for the games "Saving John" and "Machine of Death".
action-set DQN. However, for choice-based and hypertext-based text games, the size of the action space could be exponential in the length of the action sentences, which is handled here by using a continuous representation of the action space.
In this study, we evaluate the DRRN with two games: a deterministic text game task called "Saving John" and a larger-scale stochastic text game called "Machine of Death" from a public archive.² The basic text statistics of these tasks are shown in Table 1. The maximum number of feasible actions (i.e., $\max_t |\mathcal{A}_t|$) is four in "Saving John", and nine in "Machine of Death". We manually annotate
¹ Statistics obtained from http://www.ifarchive.org

² Simulators are available at https://github.com/jvking/text-games
(Figure 3: Different types of text games: (a) parser-based, (b) choice-based, (c) hypertext-based. Each panel shows the same scene: "Well, here we are, back home again. The battered front door leads into the lobby. The cat is out here with you, parked directly in front of the door and looking up at you expectantly.")

final rewards for all distinct endings in both games (as shown in the supplementary materials). The magnitude of the reward scores describes the sentiment polarity of good/bad endings. On the other hand, we assign each non-terminating step a small negative reward, to encourage the learner to finish the game as soon as possible. For the text game "Machine of Death", we restrict an episode to be no longer than 500 steps.

In "Saving John" all actions are choice-based, for which the mapping from text strings to $a_t$ is clear. In "Machine of Death", when actions are hypertext, the actions are substrings of the state. In this case $s_t$ is associated with the full state description, and $a_t$ are given by the substrings without any surrounding context. For text input, we use raw bag-of-words as features, with different vocabularies for the state side and action side.

# 3.2 Experiment setup

We apply DRRNs with both 1 and 2 hidden layer structures. In most experiments, we use the dot product as the interaction function and set the hidden dimension to be the same for each hidden layer. We use DRRNs with 20, 50 and 100-dimension hidden layer(s) and build learning curves during experience-replay training. The learning rate is constant: $\eta_t = 0.001$. In testing, as in training, we apply softmax selection. We record average final rewards as the performance of the model.

The DRRN is compared to multiple baselines: a linear model, two max-action DQNs (MA DQN) (L = 1 or 2 hidden layers), and two per-action DQNs (PA DQN) (again, L = 1, 2). All baselines use the same Q-learning framework with different function approximators to predict $Q(s_t, a_t)$ given the current state and actions. For the linear and MA DQN baselines, the input is the text-based state and action descriptions, each as a bag of words, with the number of outputs equal to the maximum number of actions. When there are fewer actions than the maximum, the highest scoring available action is used. The PA DQN baseline takes each pair of state-action texts as input, and generates a corresponding Q-value.

Eval metric: Average reward
hidden dimension  | 20          | 50          | 100
Linear            |             4.4 (0.4)     |
PA DQN (L = 1)    | 2.0 (1.5)   | 4.0 (1.4)   | 4.4 (2.0)
PA DQN (L = 2)    | 1.5 (3.0)   | 4.5 (2.5)   | 7.9 (3.0)
MA DQN (L = 1)    | 2.9 (3.1)   | 4.0 (4.2)   | 5.9 (2.5)
MA DQN (L = 2)    | 4.9 (3.2)   | 9.0 (3.2)   | 7.1 (3.1)
DRRN (L = 1)      | 17.1 (0.6)  | 18.3 (0.2)  | 18.2 (0.2)
DRRN (L = 2)      | 18.4 (0.1)  | 18.5 (0.3)  | 18.7 (0.4)

Table 2: The final average rewards and standard deviations on "Saving John".

We use softmax selection, which is widely applied in practice, to trade off exploration vs. exploitation.
Specifically, for each experience-replay, we first generate 200 episodes of data (about 3K tuples in "Saving John" and 16K tuples in "Machine of Death") using the softmax selection rule in (2), where we set $\alpha = 0.2$ for the first game and $\alpha = 1.0$ for the second game. The $\alpha$ is picked according to an estimate of the range of the optimal Q-values. We then shuffle the generated data tuples $(s_t, a_t, r_t, s_{t+1})$ and update the model as described in Section 2.4. The model is trained with multiple epochs for all configurations, and is evaluated after each experience-replay. The discount factor $\gamma$ is set to 0.9. For the DRRN and all baselines, network weights are initialized with small random values. To prevent algorithms from "remembering" state-action ordering and making choices based on action wording, each time the algorithm/player reads text from the simulator, we randomly shuffle the list of actions.³ This will encourage the algorithms to make decisions based on an understanding of the texts that describe the states and actions.

# 3.3 Performance

In Figure 4, we show the learning curves of different models, where the dimension of the hid-

³ When in a specific state, the simulator presents the possible set of actions in random order, i.e. they may appear in a different order the next time a player is in this same state.
(Figure 4 plots: average reward vs. number of episodes for DRRN (1- and 2-hidden), PA DQN (2-hidden) and MA DQN (2-hidden); panels (a) Game 1: "Saving John" and (b) Game 2: "Machine of Death".)
Figure 4: Learning curves of the two text games.
Eval metric: Average reward
hidden dimension  | 20          | 50          | 100
Linear            |             3.3 (1.0)     |
PA DQN (L = 1)    | 0.9 (2.4)   | 2.3 (0.9)   | 3.1 (1.3)
PA DQN (L = 2)    | 1.3 (1.2)   | 2.3 (1.6)   | 3.4 (1.7)
MA DQN (L = 1)    | 2.0 (1.2)   | 3.7 (1.6)   | 4.8 (2.9)
MA DQN (L = 2)    | 2.8 (0.9)   | 4.3 (0.9)   | 5.2 (1.2)
DRRN (L = 1)      | 7.2 (1.5)   | 8.4 (1.3)   | 8.7 (0.9)
DRRN (L = 2)      | 9.2 (2.1)   | 10.7 (2.7)  | 11.2 (0.6)
Table 3: The final average rewards and standard deviations on "Machine of Death".
den layers in the DQNs and DRRN are all set to 100. The error bars are obtained by running 5 independent experiments. The proposed methods and baselines all start at about the same performance (roughly -7 average reward for Game 1, and roughly -8 average reward for Game 2), which is the random-guess policy. After around 4000 episodes of experience-replay training, all methods converge. The DRRN converges much faster than the other three baselines and achieves a higher average reward. We hypothesize this is because the DRRN architecture is better at capturing relevance between state text and action text. The faster convergence for "Saving John" may be due to the smaller observation space and/or the deterministic nature of its state transitions (in contrast to the stochastic transitions in the other game).
Besides an inner product, we also experimented with more complex interaction functions: a) a bilinear operation with different action side dimensions; and b) a non-linear deep neural network using the concatenated state and action space embeddings as input and trained in an end-to-end fashion to predict Q-values. For different configurations, we fix the state side embedding to be 100 dimensions and vary the action side embedding dimensions. The bilinear operation gave similar results, but the concatenation input to a DNN degraded performance. Similar behaviors have been observed on a different task (Luong et al., 2015).
# 3.4 Actions with paraphrased descriptions
The final performance (at convergence) for both baselines and proposed methods is shown in Tables 2 and 3. We test for different model sizes with 20, 50, and 100 dimensions in the hidden layers. The DRRN performs consistently better than all baselines, and often with a lower variance. For Game 2, due to the complexity of the underlying state transition function, we cannot compute the exact optimal policy score. To provide more insight into the performance, we averaged scores of 8 human players for initial trials (novice) and after gaining experience, yielding scores of -5.5 and 16.0, respectively. The experienced players do outperform our algorithm. The converged performance is higher with two hidden layers for all models. However, deep models also converge more slowly than their 1-hidden-layer versions, as shown for the DRRN in Figure 4.
To investigate how our models handle actions with "unseen" natural language descriptions, we had two people paraphrase all actions in the game "Machine of Death" (used in the testing phase), except a few single-word actions whose synonyms are out-of-vocabulary (OOV). The word-level OOV rate of the paraphrased actions is 18.6%,
(Figure 5 plot: Q-values scatterplot between state-action pairs, with paraphrased action vs. with original action; fitted line $y = 0.85x + 0.24$, $pR^2 = 0.95$.)
Figure 5: Scatterplot and strong correlation between Q-values of paraphrased actions versus original actions
and the standard 4-gram BLEU score between the paraphrased and original actions is 0.325. The resulting 153 paraphrased action descriptions are associated with 532 unique state-action pairs.
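A hedged sketch of how such a corpus-level 4-gram BLEU number can be computed with NLTK; the variable names `originals` and `paraphrases` (parallel lists of token lists) are assumptions for illustration, and the paper does not specify its exact BLEU implementation.

```python
from nltk.translate.bleu_score import corpus_bleu

def action_bleu(originals, paraphrases):
    """Corpus-level 4-gram BLEU, treating each original action text as the
    single reference for its paraphrase."""
    references = [[orig] for orig in originals]
    return corpus_bleu(references, paraphrases,
                       weights=(0.25, 0.25, 0.25, 0.25))
```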
We apply a well-trained 2-layer DRRN model (with hidden dimension 100), and predict Q-values for each state-action pair with fixed model parameters. Figure 5 shows the correlation between Q-values associated with paraphrased actions versus original actions. The predictive R-squared is 0.95, showing a strong positive correlation. We also run the Q-value correlation for the NN interaction and obtain $pR^2 = 0.90$. For the baselines MA-DQN and PA-DQN, the corresponding $pR^2$ is 0.84 and 0.97, indicating that they also have some generalization ability. This is confirmed in the paraphrasing-based experiments too, where the test reward on the paraphrased setup is close to the original setup. This supports the claim that deep learning is useful in general for this language understanding task, and our findings show that a decoupled architecture most effectively leverages that approach.
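For illustration, a small sketch of the predictive R² computation behind Fig. 5, assuming two aligned NumPy arrays of Q-values; this is a generic least-squares R², not code from the paper.

```python
import numpy as np

def predictive_r2(q_original, q_paraphrased):
    """Fraction of variance in paraphrased-action Q-values explained by a
    linear least-squares fit on the original-action Q-values."""
    slope, intercept = np.polyfit(q_original, q_paraphrased, deg=1)
    pred = slope * q_original + intercept
    ss_res = np.sum((q_paraphrased - pred) ** 2)
    ss_tot = np.sum((q_paraphrased - q_paraphrased.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```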
In Table 4 we provide examples with predicted Q-values for original descriptions and paraphrased descriptions. We also include alternative action descriptions with in-vocabulary words that will lead to positive / negative / irrelevant game development at that particular state. Table 4 shows that actions that are more likely to result in good endings are predicted with high Q-values. This indicates that the DRRN has some generalization ability and gains a useful level of language understanding in
Eval metric: Average reward
hidden dimension  | 20          | 50          | 100
PA DQN (L = 2)    | 0.2 (1.2)   | 2.6 (1.0)   | 3.6 (0.3)
MA DQN (L = 2)    | 2.5 (1.3)   | 4.0 (0.9)   | 5.1 (1.1)
DRRN (L = 2)      | 7.3 (0.7)   | 8.3 (0.7)   | 10.5 (0.9)
Table 5: The final average rewards and standard deviations on the paraphrased game "Machine of Death".
the game scenario.
We use the baseline models and the proposed DRRN model trained with the original action descriptions for "Machine of Death", and test on paraphrased action descriptions. For this game, the underlying state transition mechanism has not changed. The only change to the game interface is that during testing, every time the player reads the actions from the game simulator, it reads the paraphrased descriptions and performs selection based on these paraphrases. Since the texts at test time are "unseen" to the player, a good model needs to have some level of language understanding, while a naive model that memorizes all unique action texts in the original game will do poorly. The results for these models are shown in Table 5. All methods have a slightly lower average reward in this setting (10.5 vs. 11.2 for the original actions), but the DRRN still gives a high reward and significantly outperforms the other methods. This shows that the DRRN can generalize well to "unseen" natural language descriptions of actions.
# 4 Related Work
There has been increasing interest in applying deep reinforcement learning to a variety of problems, but only a few studies address problems with natural language state or action spaces. In language processing, reinforcement learning has been applied to a dialogue management system that converses with a human user by taking actions that generate natural language (Scheffler and Young, 2002; Young et al., 2013). There has also been interest in extracting textual knowledge to improve game control performance (Branavan et al., 2011), and mapping text instructions to sequences of executable actions (Branavan et al., 2009). In some applications, it is possible to manually design features for state-action pairs, which are then used in reinforcement learning to learn a near-optimal policy (Li et al., 2009). Designing such features, however, requires substantial domain knowledge.
Text (with predicted Q-values)
State | As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street.
Actions in the original game | Ignore the alarm of others and continue moving forward. (-21.5) | Look up. (16.6)
Paraphrased actions (not original) | Disregard the caution of others and keep pushing ahead. (-11.9) | Turn up and look. (17.5)
Positive actions (not original) | Stay there. (2.8) | Stay calmly. (2.0)
Negative actions (not original) | Screw it. I'm going carefully. (-17.4) | Yell at everyone. (-13.5)
Irrelevant actions (not original) | Insert a coin. (-1.4) | Throw a coin to the ground. (-3.6)
Table 4: Predicted Q-value examples
The work most closely related to our study involves the application of deep reinforcement learning to learning decision policies for parser-based text games. Narasimhan et al. (2015) applied a Long Short-Term Memory DQN framework, which achieves higher average reward than the random and Bag-of-Words DQN baselines. In this work, actions are constrained to a set of known fixed command structures (one action and one argument object), based on a limited action-side vocabulary size. The overall action space is defined by the action-argument product space. This pre-specified product space is not feasible for the more complex text strings in other forms of text-based games. Our proposed DRRN, on the other hand, can handle the more complex text strings, as well as parser-based games. In preliminary experiments with the parser-based game from (Narasimhan et al., 2015), we find that the DRRN using a bag-of-words (BOW) input achieves results on par with their BOW DQN. The main advantage of the DRRN is that it can also handle actions described with more complex language.
The DRRN experiments described here leverage only a simple bag-of-words representation of phrases and sentences. As observed in (Narasimhan et al., 2015), more complex sentence-based models can give further improvements. In preliminary experiments with "Machine of Death", we did not find LSTMs to give improved performance, but we conjecture that they would be useful in larger-scale tasks, or when the word embeddings are initialized by training on large data sets.

As mentioned earlier, other work has applied deep reinforcement learning to a problem with a continuous action space (Lillicrap et al., 2016). In the DRRN, the action space is inherently discrete, but we learn a continuous representation of it. As indicated by the paraphrasing experiment, the continuous space representation seems to generalize reasonably well.
# 5 Conclusion
In this paper we develop a deep reinforcement relevance network, a novel DNN architecture for handling actions described by natural language in decision-making tasks such as text games. We show that the DRRN converges faster and to a better solution for Q-learning than alternative architectures that do not use separate embeddings for the state and action spaces. Future work includes: (i) adding an attention model to robustly analyze which parts of the state/action texts correspond to strategic planning, and (ii) applying the proposed methods to more complex text games or other tasks with actions defined through natural language.
# Acknowledgments
We thank Karthik Narasimhan and Tejas Kulkarni for providing instructions on setting up their parser-based games.
# References
[Adams2014] E. Adams. 2014. Fundamentals of game design. Pearson Education.
[Branavan et al.2009] S.R.K. Branavan, H. Chen, L. Zettlemoyer, and R. Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proc. of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th IJCNLP, pages 82-90, August.
[Branavan et al.2011] S.R.K. Branavan, D. Silver, and R. Barzilay. 2011. Learning to win by reading manuals in a monte-carlo framework. In Proc. of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 268-277. Association for Computational Linguistics.

[Collobert and Weston2008] R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of the 25th International Conference on Machine Learning, pages 160-167. ACM.

[Dahl et al.2012] G. E Dahl, D. Yu, L. Deng, and A. Acero. 2012. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):30-42.

[Hinton et al.2012] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82-97.

[Huang et al.2013] P-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proc. of the ACM International Conference on Information & Knowledge Management, pages 2333-2338. ACM.
[Kiros et al.2015] R. Kiros, Y. Zhu, R. R Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, pages 3276-3284.
[Krizhevsky et al.2012] A. Krizhevsky, I. Sutskever, and G. E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105.

[Le and Mikolov2014] Q. V Le and T. Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning.

[LeCun et al.2015] Y. LeCun, Y. Bengio, and G. Hinton. 2015. Deep learning. Nature, 521(7553):436-444.
[Li et al.2009] L. Li, J. D. Williams, and S. Balakrishnan. 2009. Reinforcement learning for spoken dialog management using least-squares policy iteration and fast feature selection. In Proceedings of the Tenth Annual Conference of the International Speech Communication Association (INTERSPEECH-09), pages 2475-2478.

[Lillicrap et al.2016] T. P Lillicrap, J. J Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. 2016. Continuous control with deep reinforcement learning. In International Conference on Learning Representations.
[Lin1993] L-J. Lin. 1993. Reinforcement learning for robots using neural networks. Technical report, DTIC Document.
[Luong et al.2015] M-T. Luong, H. Pham, and C. D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, September.

[Mnih et al.2013] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. 2013. Playing Atari with Deep Reinforcement Learning. NIPS Deep Learning Workshop, December.

[Mnih et al.2015] V. Mnih, K. Kavukcuoglu, D. Silver, A. A Rusu, J. Veness, M. G Bellemare, A. Graves, M. Riedmiller, A. K Fidjeland, G. Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533.

[Narasimhan et al.2015] K. Narasimhan, T. Kulkarni, and R. Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1-11, September.

[Nogueira and Cho2016] R. Nogueira and K. Cho. 2016. Webnav: A new large-scale task for natural language based sequential decision making. arXiv preprint arXiv:1602.02261.

[Scheffler and Young2002] K. Scheffler and S. Young. 2002. Automatic learning of dialogue strategy using dialogue simulation and reinforcement learning. In Proc. of the second International Conference on Human Language Technology Research, pages 12-19.
[Sutskever et al.2014] I. Sutskever, O. Vinyals, and Q. V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.

[Sutton and Barto1998] R. S Sutton and A. G Barto. 1998. Reinforcement learning: An introduction, volume 1. MIT Press Cambridge.

[Tesauro1995] G. Tesauro. 1995. Temporal difference learning and TD-gammon. Communications of the ACM, 38(3):58-68.

[Watkins and Dayan1992] C. JCH Watkins and P. Dayan. 1992. Q-learning. Machine Learning, 8(3-4):279-292.

[Young et al.2013] S. Young, M. Gasic, B. Thomson, and J. D Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160-1179.
# Supplementary Material for "Deep Reinforcement Learning with a Natural Language Action Space"
# A Percentage of Choice-based and Hypertext-based Text Games
As shown in Table 1.¹
Year       | 2010   | 2011   | 2012    | 2013    | 2014
Percentage | 7.69%  | 7.89%  | 25.00%  | 55.56%  | 61.90%
Table 1: Percentage of choice-based and hypertext-based text games since 2010, in archive of interactive fictions
# B Back Propagation Formula for Learning DRRN
Let $h_{l,s}$ and $h_{l,a}$ denote the $l$-th hidden layer for the state and action side neural networks, respectively. For the state side, $W_{l,s}$ and $b_{l,s}$ denote the linear transformation weight matrix and bias vector between the $(l-1)$-th and $l$-th hidden layers. For the action side, $W_{l,a}$ and $b_{l,a}$ denote the linear transformation weight matrix and bias vector between the $(l-1)$-th and $l$-th hidden layers. The DRRN has $L$ hidden layers on each side.
# Forward:
$$h_{1,s} = f(W_{1,s}\, s_t + b_{1,s}) \quad (1)$$

$$h_{1,a}^i = f(W_{1,a}\, a_t^i + b_{1,a}), \quad i = 1, 2, 3, \ldots, |\mathcal{A}_t| \quad (2)$$

$$h_{l,s} = f(W_{l-1,s}\, h_{l-1,s} + b_{l-1,s}), \quad l = 2, 3, \ldots, L \quad (3)$$

$$h_{l,a}^i = f(W_{l-1,a}\, h_{l-1,a}^i + b_{l-1,a}), \quad i = 1, 2, 3, \ldots, |\mathcal{A}_t|, \; l = 2, 3, \ldots, L \quad (4)$$

$$Q(s_t, a_t^i) = h_{L,s}^{\mathsf{T}}\, h_{L,a}^i \quad (5)$$
where $f(\cdot)$ is the nonlinear activation function at the hidden layers, which is chosen as $\tanh(x) = (1 - \exp(-2x))/(1 + \exp(-2x))$, and $\mathcal{A}_t$ denotes the set of all actions at time $t$.
# Backward:
Note we only back propagate for actions that are actually taken. More formally, let $a_t$ be the action the DRRN takes at time $t$, and denote $\Delta = \left[Q(s_t, a_t) - \left(r_t + \gamma \max_a Q(s_{t+1}, a)\right)\right]^2 / 2$.
¹ Statistics are obtained from http://www.ifarchive.org
Reward | Endings (partially shown)
-20 | Suspicion fills my heart and I scream. Is she trying to kill me? I don't trust her one bit...
-10 | Submerged under water once more, I lose all focus...
0 | Even now, she's there for me. And I have done nothing for her...
10 | Honest to God, I don't know what I see in her. Looking around, the situation's not so bad...
20 | Suddenly I can see the sky... I focus on the most important thing - that I'm happy to be alive.
Table 2: Final rewards defined for the text game "Saving John"
Denote $\delta_{l,s} = \partial Q / \partial b_{l,s}$ and $\delta_{l,a} = \partial Q / \partial b_{l,a}$, and we have (by following the chain rule):
$$\delta_Q = \frac{\partial \Delta}{\partial Q} = Q(s_t, a_t) - \left(r_t + \gamma \max_a Q(s_{t+1}, a)\right) \quad (6)$$

$$\delta_{L,s} = \delta_Q \cdot h_{L,a} \odot (1 - h_{L,s}) \odot (1 + h_{L,s}), \qquad \delta_{l-1,s} = W_{l,s}^{\mathsf{T}}\, \delta_{l,s} \odot (1 - h_{l-1,s}) \odot (1 + h_{l-1,s}), \; l = 2, 3, \ldots, L \quad (7)$$

$$\delta_{L,a} = \delta_Q \cdot h_{L,s} \odot (1 - h_{L,a}) \odot (1 + h_{L,a}), \qquad \delta_{l-1,a} = W_{l,a}^{\mathsf{T}}\, \delta_{l,a} \odot (1 - h_{l-1,a}) \odot (1 + h_{l-1,a}), \; l = 2, 3, \ldots, L \quad (8)$$

$$\delta W_{1,s} = \partial Q / \partial W_{1,s} = \delta_{1,s} \cdot s_t^{\mathsf{T}}, \qquad \delta W_{l,s} = \partial Q / \partial W_{l,s} = \delta_{l,s} \cdot h_{l-1,s}^{\mathsf{T}}, \; l = 2, 3, \ldots, L \quad (9)$$

$$\delta W_{1,a} = \partial Q / \partial W_{1,a} = \delta_{1,a} \cdot a_t^{\mathsf{T}}, \qquad \delta W_{l,a} = \partial Q / \partial W_{l,a} = \delta_{l,a} \cdot h_{l-1,a}^{\mathsf{T}}, \; l = 2, 3, \ldots, L \quad (10)$$

where $\odot$ denotes the element-wise Hadamard product.
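A NumPy sketch of Eqs. (6)-(10) for readers who want to check the algebra; the layer lists, 0-based indexing and variable names are illustrative assumptions, and the bias gradients (equal to the corresponding δ vectors) are omitted for brevity.

```python
import numpy as np

def drrn_backward(hs, ha, Ws, Wa, s_t, a_t, delta_q):
    """hs/ha: lists of per-layer tanh activations (state/action towers);
    Ws/Wa: per-layer weight matrices; delta_q: the scalar of Eq. (6).
    Returns weight gradients following Eqs. (7)-(10)."""
    L = len(hs)
    d_s = delta_q * ha[-1] * (1 - hs[-1]) * (1 + hs[-1])   # Eq. (7), top layer
    d_a = delta_q * hs[-1] * (1 - ha[-1]) * (1 + ha[-1])   # Eq. (8), top layer
    grads_Ws, grads_Wa = [None] * L, [None] * L
    for l in range(L - 1, -1, -1):
        below_s = hs[l - 1] if l > 0 else s_t
        below_a = ha[l - 1] if l > 0 else a_t
        grads_Ws[l] = np.outer(d_s, below_s)               # Eq. (9)
        grads_Wa[l] = np.outer(d_a, below_a)               # Eq. (10)
        if l > 0:                                          # propagate deltas
            d_s = (Ws[l].T @ d_s) * (1 - hs[l - 1]) * (1 + hs[l - 1])
            d_a = (Wa[l].T @ d_a) * (1 - ha[l - 1]) * (1 + ha[l - 1])
    return grads_Ws, grads_Wa
```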
# C Final Rewards in the Two Text Games
As shown in Table 2 and Table 3.
# D Game 2 Learning curve with shared state and action embedding
As shown in Figure 1. For the first 1000 episodes, parameter tying gives faster convergence, but the learning curve also has high variance and is unstable.
Reward | Endings (partially shown)
-20 | You spend your last few moments on Earth lying there, shot through the heart, by the image of Jon Bon Jovi.
-20 | you hear Bon Jovi say as the world fades around you.
-20 | As the screams you hear around you slowly fade and your vision begins to blur, you look at the words which ended your life.
-10 | You may be locked away for some time.
-10 | Eventually you're escorted into the back of a police car as Rachel looks on in horror.
-10 | Fate can wait.
-10 | Sadly, you're so distracted with looking up the number that you don't notice the large truck speeding down the street.
-10 | All these hiccups lead to one grand disaster.
10 | Stay the hell away from me! She blurts as she disappears into the crowd emerging from the bar.
20 | You can't help but smile.
20 | Hope you have a good life.
20 | Congratulations!
20 | Rachel waves goodbye as you begin the long drive home. After a few minutes, you turn the radio on to break the silence.
30 | After all, it's your life. It's now or never. You ain't gonna live forever. You just want to live while you're alive.
Table 3: Final rewards for the text game "Machine of Death." Scores are assigned according to whether the character survives, how the friendship develops, and whether he overcomes his fear.
# E Examples of State-Action Pairs in the Two Text Games
As shown in Table 4 and Table 5.
# F Examples of State-Action Pairs that do not exist in the feasible set
As shown in Table 6.
(Figure 1 plot: average reward vs. number of episodes (0-4000) for DRRN (2-hidden) and DRRN (2-hidden tying).)
Figure 1: Learning curves of shared state-action embedding vs. proposed DRRN in Game 2
State | Actions (with Q values)

State: A wet strand of hair hinders my vision and I'm back in the water. Sharp pain pierces my lungs. How much longer do I have? 30 seconds? Less? I need to focus. A hand comes into view once more.
Actions: I still don't know what to do. (-8.981) | Reach for it. (18.005)

State: *Me:* Hello Sent: today *Cherie:* Hey. Can I call you? Sent: today
Actions: Reply "I'll call you" (14.569) | No (-9.498)

State: "You don't hold any power over me. Not anymore." Lucretia raises one eyebrow. The bar is quiet. "I really wish I did my hair today." She twirls a strand. "I'm sorry," "Save it" //Yellow Submarine plays softly in the background.// "I really hate her." "Cherie? It's not her fault." "You'll be sorry," "Please stop screaming."
Actions: I laugh and she throws a glass of water in my face. (16.214) | I look away and she sips her glass quietly. (-7.986)

State: My dad left before I could remember. My mom worked all the time but she had to take care of her father, my grandpa. The routine was that she had an hour between her morning shift and afternoon shift, where she'd make food for me to bring to pops. He lived three blocks away, in a house with red steps leading up to the metal front door. Inside, the stained yellow wallpaper and rotten oranges reeked of mold. I'd walk by myself to my grandfather's and back. It was lonely sometimes, being a kid and all, but it was nothing I couldn't deal with. It's not like he abused me, I mean it hurt but why wouldn't I fight back? I met Adam on one of these walks. He made me feel stronger, like I can face anything.
Actions: Repress this memory (-8.102) | Why didn't I fight back? (10.601) | Face Cherie (14.583)
Table 4: Q values (in parentheses) for state-action pairs from "Saving John", using the trained DRRN. High Q-value actions are more cooperative actions and thus more likely lead to better endings.
State | Actions (with Q values)

State: Peak hour ended an hour or so ago, alleviating the feeling of being a tinned sardine that's commonly associated with shopping malls, though there are still quite a few people busily bumbling about. To your left is a fast food restaurant. To the right is a UFO catcher, and a poster is hanging on the wall beside it. Behind you is one of the mall's exits. In front of you stands the Machine. You're carrying 4 dollars in change.
Actions: fast food restaurant (1.094) | the Machine (3.708) | mall's exits (0.900) | UFO catcher (2.646) | poster (1.062)

State: You lift the warm mug to your lips and take a small sip of hot tea.
Actions: Ask what he was looking for. (3.709) | Ask about the blood stains. (7.488) | Drink tea. (5.526) | Wait. (6.557)

State: As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street.
Actions: Ignore the alarm of others and continue moving forward. (-21.464) | Look up. (16.593)

State: Are you happy? Is this what you want to do? If you didn't avoid that sign, would you be satisfied with how your life had turned out? Sure, you're good at your job and it pays well, but is that all you want from work? If not, maybe it's time for a change.
Actions: Screw it. I'm going to find a new life right now. It's not going to be easy, but it's what I want. (23.205) | Maybe one day. But I'm satisfied right now, and I have bills to pay. Keep on going. (One minute) (14.491)

State: You slam your entire weight against the man, making him stumble backwards and drop the chair to the ground as a group of patrons race to restrain him. You feel someone grab your arm, and look over to see that it's Rachel. Let's get out of here, she says while motioning towards the exit. You charge out of the bar and leap back into your car, adrenaline still pumping through your veins. As you slam the door, the glove box pops open and reveals your gun.
Actions: Grab it and hide it in your jacket before Rachel can see it. (21.885) | Leave it. (1.915)
Table 5: Q values (in parentheses) for state-action pairs from "Machine of Death", using the trained DRRN
State: As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street.
- Actions that are in the feasible set: Ignore the alarm of others and continue moving forward. (-21.5) | Look up. (16.6)
- Positive actions that are not in the feasible set: Stay there. (2.8) | Stay calmly. (2.0)
- Negative actions that are not in the feasible set: Screw it. I'm going carefully. (-17.4) | Yell at everyone. (-13.5)
- Irrelevant actions that are not in the feasible set: Insert a coin. (-1.4) | Throw a coin to the ground. (-3.6)
Table 6: Q values (in parentheses) for state-action pairs from "Machine of Death", using the trained DRRN, with made-up actions that were not in the feasible set | {
"id": "1511.04636"
} |
1511.02274 | Stacked Attention Networks for Image Question Answering | This paper presents stacked attention networks (SANs) that learn to answer
natural language questions from images. SANs use semantic representation of a
question as query to search for the regions in an image that are related to the
answer. We argue that image question answering (QA) often requires multiple
steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an
image multiple times to infer the answer progressively. Experiments conducted
on four image QA data sets demonstrate that the proposed SANs significantly
outperform previous state-of-the-art approaches. The visualization of the
attention layers illustrates the progress that the SAN locates the relevant
visual clues that lead to the answer of the question layer-by-layer. | http://arxiv.org/pdf/1511.02274 | Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola | cs.LG, cs.CL, cs.CV, cs.NE | test-dev/standard results added | null | cs.LG | 20151107 | 20160126 | 6 1 0 2
n a J 6 2 ] G L . s c [ 2 v 4 7 2 2 0 . 1 1 5 1 : v i X r a
# Stacked Attention Networks for Image Question Answering
Zichao Yang1, Xiaodong He2, Jianfeng Gao2, Li Deng2, Alex Smola1 1Carnegie Mellon University, 2Microsoft Research, Redmond, WA 98052, USA zichaoy@cs.cmu.edu, {xiaohe, jfgao, deng}@microsoft.com, alex@smola.org
# Abstract
This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use the semantic representation of a question as a query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the process by which the SAN locates the relevant visual clues that lead to the answer of the question, layer by layer.
# 1. Introduction
With the recent advancements in computer vision and in natural language processing (NLP), image question answering (QA) has become one of the most active research areas [7, 21, 18, 1, 19]. Unlike pure language based QA systems that have been studied extensively in the NLP community [28, 14, 4, 31, 3, 32], image QA systems are designed to automatically answer natural language questions according to the content of a reference image.
[Figure 1a: diagram of the stacked attention network. Feature vectors of different parts of the image are queried by the question "What are sitting in the basket on a bicycle?"; a softmax layer produces the answer "dogs".]
(a) Stacked Attention Network for Image QA
Original Image First Attention Layer Second Attention Layer
(b) Visualization of the learned multiple attention layers. The stacked attention network first focuses on all referred concepts, e.g., bicycle, basket and objects in the basket (dogs) in the first attention layer, and then further narrows down the focus in the second layer and finds out the answer dog.
Most of the recently proposed image QA models are based on neural networks [7, 21, 18, 1, 19]. A commonly used approach is to extract a global image feature vector using a convolutional neural network (CNN) [15], encode the corresponding question as a feature vector using a long short-term memory network (LSTM) [9], and then combine them to infer the answer. Though impressive results have been reported, these models often fail to give precise answers when such answers are related to a set of fine-grained regions in an image.

By examining the image QA data sets, we find that it is often the case that answering a question from an image requires multi-step reasoning. Take the question and image in Fig. 1 as an example. There are several objects in the image: bicycles, window, street, baskets and
# Figure 1: Model architecture and visualization
dogs. To answer the question what are sitting in the basket on a bicycle, we need to first locate those objects (e.g. basket, bicycle) and concepts (e.g., sitting in) referred to in the question, then gradually rule out irrelevant objects, and finally pinpoint the region that is most indicative for inferring the answer (i.e., dogs in the example).
In this paper, we propose stacked attention networks (SANs) that allow multi-step reasoning for image QA. SANs can be viewed as an extension of the attention mechanism that has been successfully applied in image captioning [30] and machine translation [2]. The overall architecture of the SAN is illustrated in Fig. 1a. The SAN consists of three major components: (1) the image model, which uses
a CNN to extract high level image representations, e.g. one vector for each region of the image; (2) the question model, which uses a CNN or a LSTM to extract a semantic vector of the question; and (3) the stacked attention model, which locates, via multi-step reasoning, the image regions that are relevant to the question for answer prediction. As illustrated in Fig. 1a, the SAN first uses the question vector to query the image vectors in the first visual attention layer, then combines the question vector and the retrieved image vectors to form a refined query vector with which to query the image vectors again in the second attention layer. The higher-level attention layer gives a sharper attention distribution focusing on the regions that are more relevant to the answer. Finally, we combine the image features from the highest attention layer with the last query vector to predict the answer.
The main contributions of our work are three-fold. First, we propose a stacked attention network for image QA tasks. Second, we perform comprehensive evaluations on four image QA benchmarks, demonstrating that the proposed multiple-layer SAN outperforms previous state-of-the-art approaches by a substantial margin. Third, we perform a detailed analysis in which we visualize the outputs of different attention layers of the SAN and demonstrate the process by which the SAN takes multiple steps to progressively focus its attention on the relevant visual clues that lead to the answer.
# 2. Related Work
Image QA is closely related to image captioning [5, 30, 6, 27, 12, 10, 20]. In [27], the system first extracted a high level image feature vector from GoogleNet and then fed it into a LSTM to generate captions. The method proposed in [30] went one step further to use an attention mechanism in the caption generation process. Different from [30, 27], the approach proposed in [6] first used a CNN to detect words given the images, then used a maximum entropy language model to generate a list of caption candidates, and finally used a deep multimodal similarity model (DMSM) to re-rank the candidates. Instead of using a RNN or a LSTM, the DMSM uses a CNN to model the semantics of captions. Unlike image captioning, in image QA, the question is given and the task is to learn the relevant visual and text representation to infer the answer. In order to facilitate the research of image QA, several data sets have been constructed in [19, 21, 7, 1], either through automatic generation based on image caption data or by human labeling of questions and answers given images. Among them, the image QA data set in [21] is generated based on the COCO caption data set. Given a sentence that describes an image, the authors first used a parser to parse the sentence, then replaced the key word in the sentence using question words, and the key word became the answer. [7] created an image QA data set through human labeling. The initial version was in Chinese and was then translated to English. [1] also created an
image QA data set through human labeling. They collected questions and answers not only for real images, but also for abstract scenes.
Several image QA models were proposed in the literature. [18] used semantic parsers and image segmentation methods to predict answers based on images and questions. [19, 7] both used an encoder-decoder framework to generate answers given images and questions. They first used a LSTM to encode the images and questions and then used another LSTM to decode the answers. They both fed the image feature to every LSTM cell. [21] proposed several neural network based models, including the encoder-decoder based models that use single direction LSTMs and bi-direction LSTMs, respectively. However, the authors found that the concatenation of image features and bag of words features worked the best. [1] first encoded questions with LSTMs and then combined question vectors with image vectors by element wise multiplication. [17] used a CNN for question modeling and used convolution operations to combine question vectors and image feature vectors. We compare the SAN with these models in Sec. 4.
To the best of our knowledge, the attention mechanism, which has been proved very successful in image captioning, has not been explored for image QA. The SAN adapts the attention mechanism to image QA, and can be viewed as a significant extension of previous models [30] in that multiple attention layers are used to support multi-step reasoning for the image QA task.
# 3. Stacked Attention Networks (SANs)
The overall architecture of the SAN is shown in Fig. 1a. We describe the three major components of the SAN in this section: the image model, the question model, and the stacked attention model.

# 3.1. Image Model
The image model uses a CNN [13, 23, 26] to get the representation of images. Specifically, the VGGNet [23] is used to extract the image feature map fI from a raw image I:
Figure 2: CNN based image model
fI = CNNvgg(I). (1)
Unlike previous studies [21, 17, 7] that use features from the last inner product layer, we choose the features fI from the last pooling layer, which retains spatial information of the original images. We first rescale the images to be 448 × 448 pixels, and then take the features from the last pooling layer, which therefore have a dimension of 512 × 14 × 14, as shown in Fig. 2. 14 × 14 is the number of regions in the image and 512 is the dimension of the feature vector for each region. Accordingly, each feature vector in fI corresponds to a 32 × 32 pixel region of the input images. We denote by fi, i ∈ [0, 195], the feature vector of each image region.
Then, for modeling convenience, we use a single layer perceptron to transform each feature vector to a new vector that has the same dimension as the question vector (described in Sec. 3.2):
vI = tanh(WI fI + bI ), (2)
where vI is a matrix and its i-th column vi is the visual feature vector for the region indexed by i.
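As a minimal sketch of Eqs. 1-2 in NumPy, the snippet below assumes a pre-extracted VGG feature map; `W_I`, `b_I` and the dimension `d` are illustrative stand-ins for the learned parameters, not values from the paper's code.

```python
import numpy as np

d = 500                                 # question/image embedding dimension (assumed)
f_I = np.random.randn(512, 14, 14)      # stand-in for the feature map of Eq. 1
f_I = f_I.reshape(512, 196)             # one 512-d vector per image region
W_I = np.random.randn(d, 512) * 0.01    # hypothetical learned projection
b_I = np.zeros((d, 1))
v_I = np.tanh(W_I @ f_I + b_I)          # Eq. 2: v_I has shape (d, 196)
```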
# 3.2. Question Model
As [25, 22, 6] show, LSTMs and CNNs are powerful at capturing the semantic meaning of texts, so we explore both models for question representations in this study.
# 3.2.1 LSTM based question model
[Figure 3: word embeddings of the question "what are ... bicycle" are fed step by step into a chain of LSTM units.]
Figure 3: LSTM based question model
The essential structure of a LSTM unit is a memory cell ct which preserves the state of a sequence. At each step, the LSTM unit takes one input vector (a word vector in our case) xt, updates the memory cell ct, and then outputs a hidden state ht. The update process uses the gate mechanism. A forget gate ft controls how much information from the past state ct−1 is preserved. An input gate it controls how much the current input xt updates the memory cell. An output gate ot controls how much information of the memory is fed to the output as the hidden state. The detailed update process is as follows:
$$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i), \quad (3)$$
$$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f), \quad (4)$$
$$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o), \quad (5)$$
$$c_t = f_t c_{t-1} + i_t \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c), \quad (6)$$
$$h_t = o_t \tanh(c_t), \quad (7)$$

where $i, f, o, c$ are the input gate, forget gate, output gate and memory cell, respectively. The weight matrices and biases are parameters of the LSTM and are learned on training data.
Given the question $q = [q_1, ..., q_T]$, where $q_t$ is the one-hot vector representation of the word at position $t$, we first embed the words into a vector space through an embedding matrix $x_t = W_e q_t$. Then, for every time step, we feed the embedding vector of the word in the question to the LSTM:
$$x_t = W_e q_t, \quad t \in \{1, 2, ..., T\}, \quad (8)$$
$$h_t = \mathrm{LSTM}(x_t), \quad t \in \{1, 2, ..., T\}. \quad (9)$$
As shown in Fig. 3, the question what are sitting in the basket on a bicycle is fed into the LSTM. Then the final hidden layer is taken as the representation vector for the question, i.e., $v_Q = h_T$.
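The sketch below spells out Eqs. 3-9 as a plain NumPy loop over word ids. The parameter dictionary `p` and its key names are our own illustrative conventions, not taken from the paper's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_encode(q, W_e, p, d):
    """Encode a question (list of word ids) into v_Q = h_T (Eqs. 3-9)."""
    h = np.zeros(d)
    c = np.zeros(d)
    for t in q:
        x = W_e[:, t]                                         # Eq. 8: x_t = W_e q_t
        i = sigmoid(p['Wxi'] @ x + p['Whi'] @ h + p['bi'])    # Eq. 3: input gate
        f = sigmoid(p['Wxf'] @ x + p['Whf'] @ h + p['bf'])    # Eq. 4: forget gate
        o = sigmoid(p['Wxo'] @ x + p['Who'] @ h + p['bo'])    # Eq. 5: output gate
        c = f * c + i * np.tanh(p['Wxc'] @ x + p['Whc'] @ h + p['bc'])  # Eq. 6
        h = o * np.tanh(c)                                    # Eq. 7
    return h                                                  # v_Q = h_T
```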
# 3.2.2 CNN based question model
[Figure 4: word embeddings of the question are convolved with unigram, bigram and trigram filters, followed by max pooling over time.]
# Figure 4: CNN based question model
In this study, we also explore using a CNN similar to [11] for question representation. Similar to the LSTM-based question model, we first embed words to vectors $x_t = W_e q_t$ and get the question vector by concatenating the word vectors:
x1:T = [x1, x2, ..., xT ]. (10)
Then we apply convolution operations on the word embedding vectors. We use three convolution filters, which have sizes of one (unigram), two (bigram) and three (trigram), respectively. The t-th convolution output using window size c is given by:
$$h_{c,t} = \tanh(W_c x_{t:t+c-1} + b_c). \quad (11)$$
The filter is applied only to the window $x_{t:t+c-1}$ of size $c$. $W_c$ is the convolution weight and $b_c$ is the bias. The feature map of the filter with convolution size $c$ is given by:

$$h_c = [h_{c,1}, h_{c,2}, ..., h_{c,T-c+1}]. \quad (12)$$
Then we apply max-pooling over the feature maps of convolution size $c$ and denote it as

$$\tilde{h}_c = \max_t \, [h_{c,1}, h_{c,2}, ..., h_{c,T-c+1}]. \quad (13)$$
The max-pooling over these vectors is a coordinate-wise max operation. For convolution feature maps of different sizes c = 1, 2, 3, we concatenate them to form the feature representation vector of the whole question sentence:
$$h = [\tilde{h}_1, \tilde{h}_2, \tilde{h}_3], \quad (14)$$
hence vQ = h is the CNN based question vector.
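A minimal NumPy sketch of Eqs. 10-14 follows. The filter counts (128/256/256 for uni/bi/trigrams, giving a 640-d vector) follow Sec. 4.3, but the function and parameter names are our own assumptions.

```python
import numpy as np

def cnn_encode(x_seq, filters):
    """x_seq: (T, e) word embeddings (Eq. 10);
    filters: dict mapping window size c -> (W_c, b_c), W_c of shape (n_c, c*e)."""
    pooled = []
    for c, (W_c, b_c) in sorted(filters.items()):   # c = 1, 2, 3
        T, e = x_seq.shape
        h_c = [np.tanh(W_c @ x_seq[t:t + c].reshape(-1) + b_c)
               for t in range(T - c + 1)]            # Eq. 11-12: feature map
        pooled.append(np.max(np.stack(h_c), axis=0)) # Eq. 13: max over time
    return np.concatenate(pooled)                    # Eq. 14: v_Q (640-d)
```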
The diagram of the CNN model for the question is shown in Fig. 4. The convolutional and pooling layers for unigrams, bigrams and trigrams are drawn in red, blue and orange, respectively.
# 3.3. Stacked Attention Networks
Given the image feature matrix $v_I$ and the question feature vector $v_Q$, the SAN predicts the answer via multi-step reasoning.
In many cases, an answer is only related to a small region of an image. For example, in Fig. 1b, although there are multiple objects in the image (bicycles, baskets, window, street and dogs), the answer to the question only relates to dogs. Therefore, using one global image feature vector to predict the answer could lead to sub-optimal results due to the noise introduced from regions that are irrelevant to the potential answer. Instead, by reasoning via multiple attention layers progressively, the SAN is able to gradually filter out noise and pinpoint the regions that are highly relevant to the answer.
Given the image feature matrix $v_I$ and the question vector $v_Q$, we first feed them through a single layer neural network and then a softmax function to generate the attention distribution over the regions of the image:
$$h_A = \tanh(W_{I,A} v_I \oplus (W_{Q,A} v_Q + b_A)), \quad (15)$$
$$p_I = \mathrm{softmax}(W_P h_A + b_P), \quad (16)$$
where $v_I \in \mathbb{R}^{d \times m}$, $d$ is the image representation dimension and $m$ is the number of image regions, and $v_Q \in \mathbb{R}^d$ is a $d$ dimensional vector. Suppose $W_{I,A}, W_{Q,A} \in \mathbb{R}^{k \times d}$ and $W_P \in \mathbb{R}^{1 \times k}$; then $p_I \in \mathbb{R}^m$ is an $m$ dimensional vector, which corresponds to the attention probability of each image region given $v_Q$. Note that we denote by $\oplus$ the addition of a matrix and a vector. Since $W_{I,A} v_I \in \mathbb{R}^{k \times m}$ and both $W_{Q,A} v_Q, b_A \in \mathbb{R}^k$ are vectors, the addition between a matrix and a vector is performed by adding each column of the matrix by the vector.
Based on the attention distribution, we calculate the weighted sum of the image vectors, each from a region, $\tilde{v}_I$, as in Eq. 17. We then combine $\tilde{v}_I$ with the question vector $v_Q$ to form a refined query vector $u$ as in Eq. 18. $u$ is regarded as a refined query since it encodes both question information and the visual information that is relevant to the
potential answer:
$$\tilde{v}_I = \sum_i p_i v_i, \quad (17)$$
$$u = \tilde{v}_I + v_Q. \quad (18)$$
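The sketch below implements one such attention layer (Eqs. 15-18) in NumPy. The parameter names `W_IA`, `W_QA`, `W_P`, `b_A`, `b_P` mirror the equations but are our own identifiers; the broadcast with `[:, None]` realizes the matrix-plus-vector addition denoted by $\oplus$ above.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_layer(v_I, v_Q, W_IA, W_QA, b_A, W_P, b_P):
    """v_I: (d, m) region features; v_Q: (d,) query vector; returns u (d,)."""
    h_A = np.tanh(W_IA @ v_I + (W_QA @ v_Q + b_A)[:, None])  # Eq. 15
    p_I = softmax((W_P @ h_A + b_P).ravel())                 # Eq. 16: (m,)
    v_tilde = v_I @ p_I                                      # Eq. 17: weighted sum
    return v_tilde + v_Q                                     # Eq. 18: refined query u
```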
Compared to models that simply combine the question vector and the global image vector, attention models construct a more informative $u$ since higher weights are put on the visual regions that are more relevant to the question. However, for complicated questions, a single attention layer is not sufficient to locate the correct region for answer prediction. For example, the question in Fig. 1, what are sitting in the basket on a bicycle, refers to some subtle relationships among multiple objects in an image. Therefore, we iterate the above query-attention process using multiple attention layers, each extracting more fine-grained visual attention information for answer prediction. Formally, the SANs take the following form: for the $k$-th attention layer, we compute:
$$h_A^k = \tanh(W_{I,A}^k v_I \oplus (W_{Q,A}^k u^{k-1} + b_A^k)), \quad (19)$$
$$p_I^k = \mathrm{softmax}(W_P^k h_A^k + b_P^k). \quad (20)$$
where u0 is initialized to be vQ. Then the aggregated image feature vector is added to the previous query vector to form a new query vector:
$$\tilde{v}_I^k = \sum_i p_i^k v_i, \quad (21)$$
$$u^k = \tilde{v}_I^k + u^{k-1}. \quad (22)$$
That is, in every layer, we use the combined question and image vector $u^{k-1}$ as the query for the image. After the image region is picked, we update the new query vector as $u^k = \tilde{v}_I^k + u^{k-1}$. We repeat this $K$ times and then use the final $u^K$ to infer the answer:
$$p_{\mathrm{ans}} = \mathrm{softmax}(W_u u^K + b_u). \quad (23)$$
Fig. 1b illustrates the reasoning process with an example. In the first attention layer, the model roughly identifies the area that is relevant to basket, bicycle, and sitting in. In the second attention layer, the model focuses more sharply on the region that corresponds to the answer dogs. More examples can be found in Sec. 4.
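Stacking then reduces to a loop over Eqs. 19-23, as in the sketch below, which reuses `attention_layer` and `softmax` from the earlier sketch. `params`, `W_u` and `b_u` are illustrative names for per-layer and output parameters, not the paper's own.

```python
def stacked_attention(v_I, v_Q, params, W_u, b_u):
    """params: list of K tuples (W_IA, W_QA, b_A, W_P, b_P), one per layer."""
    u = v_Q                                        # u^0 = v_Q
    for layer_params in params:                    # K attention hops
        u = attention_layer(v_I, u, *layer_params) # Eqs. 19-22
    return softmax(W_u @ u + b_u)                  # Eq. 23: p_ans
```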
# 4. Experiments
# 4.1. Data sets
We evaluate the SAN on four image QA data sets. DAQUAR-ALL is proposed in [18]. There are 6,795 training questions and 5,673 test questions. These questions are generated on 795 and 654 images respectively. The
images are mainly indoor scenes. The questions are categorized into three types including Object, Color and Number. Most of the answers are single words. Following the setting in [21, 17, 19], we exclude data samples that have multiple-word answers. The remaining data set covers 90% of the original data set.
DAQUAR-REDUCED is a reduced version of DAQUAR-ALL. There are 3,876 training samples and 297 test samples. This data set is constrained to 37 object categories and uses only 25 test images. The single-word-answer data set covers 98% of the original data set.
COCO-QA is proposed in [21]. Based on the Microsoft COCO data set, the authors first parse the caption of the image with an off-the-shelf parser, then replace the key components in the caption with question words to form questions. There are 78,736 training samples and 38,948 test samples in the data set. These questions are based on 8,000 and 4,000 images respectively. There are four types of questions including Object, Number, Color, and Location. Each type takes 70%, 7%, 17%, and 6% of the whole data set, respectively. All answers in this data set are single words.
VQA is created through human labeling [1]. The data set uses images in the COCO image caption data set [16]. Unlike the other data sets, for each image there are three questions, and for each question there are ten answers labeled by human annotators. There are 248,349 training questions and 121,512 validation questions in the data set. Following [1], we use the top 1000 most frequent answers as possible outputs, and this set of answers covers 82.67% of all answers. We first studied the performance of the proposed model on the validation set. Following [6], we split the validation data set into two halves, val1 and val2. We use the training set and val1 to train and validate, and val2 to test locally. The results on the val2 set are reported in Table 6. We also evaluated the best model, SAN(2, CNN), on the standard test server as provided in [1] and report the results in Table 5.
# 4.2. Baselines and evaluation methods
We compare our models with a set of baselines proposed recently [21, 1, 18, 19, 17] on image QA. Since the results of these baselines are reported on different data sets in different literature, we present the experimental results on different data sets in different tables.
For all four data sets, we formulate image QA as a classification problem since most answers are single words. We evaluate the model using classification accuracy as reported in [1, 21, 19]. The reference models also report the Wu-Palmer similarity (WUPS) measure [29]. The WUPS measure calculates the similarity between two words based on their longest common subsequence in the taxonomy tree. We can set a threshold for WUPS: if the similarity is less than the threshold, it is zeroed out. Following the reference models, we use WUPS0.9 and WUPS0.0 as evaluation metrics besides the classification accuracy. The evaluation on the VQA data set is different from the other three data sets, since for each question there are ten answer labels that may or may not be the same. We follow [1] and use the following metric: min(# human labels that match that answer/3, 1), which basically gives full credit to the answer when three or more of the ten human labels match the answer and gives partial credit if there are fewer matches.
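A one-function sketch of that VQA scoring rule (the min(#matches/3, 1) credit), written for clarity; the function name is our own.

```python
def vqa_accuracy(predicted, human_labels):
    """Credit per [1]: full credit at 3+ matching human labels, partial below."""
    matches = sum(1 for label in human_labels if label == predicted)
    return min(matches / 3.0, 1.0)

# e.g. vqa_accuracy("dogs", ["dogs"] * 4 + ["dog"] * 6) -> 1.0
```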
# 4.3. Model conï¬guration and training
For the image model, we use the VGGNet to extract features. When training the SAN, the parameter set of the CNN of the VGGNet is fixed. We take the output from the last pooling layer as our image feature, which has a dimension of 512 × 14 × 14.
For DAQUAR and COCO-QA, we set the word embedding dimension and the LSTM's dimension to be 500 in the question model. For the CNN based question model, we set the unigram, bigram and trigram convolution filter sizes to be 128, 256, 256 respectively. The combination of these filters makes the question vector size 640. For the VQA dataset, since it is larger than the other data sets, we double the model size of the LSTM and the CNN to accommodate the large data set and the large number of classes. In evaluation, we experiment with SANs with one and two attention layers. We find that using three or more attention layers does not further improve the performance.
In our experiments, all the models are trained using stochastic gradient descent with momentum 0.9. The batch size is fixed to be 100. The best learning rate is picked using grid search. The gradient clipping technique [8] and dropout [24] are used.
# 4.4. Results and analysis
The experimental results on DAQUAR-ALL, DAQUAR-REDUCED, COCO-QA and VQA are presented in Tables 1 to 6, respectively. Our model names explain their settings: SAN is short for the proposed stacked attention networks, the value 1 or 2 in the brackets refers to using one or two attention layers, respectively, and the keyword LSTM or CNN refers to the question model that the SAN uses.
The experimental results in Tables 1 to 6 show that the two-layer SAN gives the best results across all data sets, and the two kinds of question models in the SAN, LSTM and CNN, give similar performance. For example, on DAQUAR-ALL (Table 1), both of the proposed two-layer SANs outperform the two best baselines, the IMG-CNN in [17] and the Ask-Your-Neurons in [19], by 5.9% and 7.6% absolute in accuracy, respectively. A similar range of improvements is observed in the WUPS0.9 and WUPS0.0 metrics. We also observe significant improvements on DAQUAR-REDUCED (Table 2), i.e., our SAN(2, LSTM)
| Methods | Accuracy | WUPS0.9 | WUPS0.0 |
|---|---|---|---|
| Multi-World [18]: Multi-World | 7.9 | 11.9 | 38.8 |
| Ask-Your-Neurons [19]: Language | 19.1 | 25.2 | 65.1 |
| Ask-Your-Neurons [19]: Language + IMG | 21.7 | 28.0 | 65.0 |
| CNN [17]: IMG-CNN | 23.4 | 29.6 | 63.0 |
| Ours: SAN(1, LSTM) | 28.9 | 34.7 | 68.5 |
| Ours: SAN(1, CNN) | 29.2 | 35.1 | 67.8 |
| Ours: SAN(2, LSTM) | 29.3 | 34.9 | 68.1 |
| Ours: SAN(2, CNN) | 29.3 | 35.1 | 68.6 |
| Human [18] | 50.2 | 50.8 | 67.3 |
Table 1: DAQUAR-ALL results, in percentage
| Methods | Accuracy | WUPS0.9 | WUPS0.0 |
|---|---|---|---|
| Multi-World [18]: Multi-World | 12.7 | 18.2 | 51.5 |
| Ask-Your-Neurons [19]: Language | 31.7 | 38.4 | 80.1 |
| Ask-Your-Neurons [19]: Language + IMG | 34.7 | 40.8 | 79.5 |
| VSE [21]: GUESS | 18.2 | 29.7 | 77.6 |
| VSE [21]: BOW | 32.7 | 43.2 | 81.3 |
| VSE [21]: LSTM | 32.7 | 43.5 | 81.6 |
| VSE [21]: IMG+BOW | 34.2 | 45.0 | 81.5 |
| VSE [21]: VIS+LSTM | 34.4 | 46.1 | 82.2 |
| VSE [21]: 2-VIS+BLSTM | 35.8 | 46.8 | 82.2 |
| CNN [17]: IMG-CNN | 39.7 | 44.9 | 83.1 |
| Ours: SAN(1, LSTM) | 45.2 | 49.6 | 84.0 |
| Ours: SAN(1, CNN) | 45.2 | 49.6 | 83.7 |
| Ours: SAN(2, LSTM) | 46.2 | 51.2 | 85.1 |
| Ours: SAN(2, CNN) | 45.5 | 50.2 | 83.6 |
| Human [18] | 60.3 | 61.0 | 79.0 |
# Table 2: DAQUAR-REDUCED results, in percentage
outperforms the IMG-CNN [17], the 2-VIS+BLSTM [21], the Ask-Your-Neurons approach [19] and the Multi-World [18] by 6.5%, 10.4%, 11.5% and 33.5% absolute in accuracy, respectively. On the larger COCO-QA data set, the proposed two-layer SANs significantly outperform the best baselines from [17] (IMG-CNN) and [21] (IMG+BOW and 2-VIS+BLSTM) by 5.1% and 6.6% in accuracy (Table 3).
| Methods | Accuracy | WUPS0.9 | WUPS0.0 |
|---|---|---|---|
| VSE [21]: GUESS | 6.7 | 17.4 | 73.4 |
| VSE [21]: BOW | 37.5 | 48.5 | 82.8 |
| VSE [21]: LSTM | 36.8 | 47.6 | 82.3 |
| VSE [21]: IMG | 43.0 | 58.6 | 85.9 |
| VSE [21]: IMG+BOW | 55.9 | 66.8 | 89.0 |
| VSE [21]: VIS+LSTM | 53.3 | 63.9 | 88.3 |
| VSE [21]: 2-VIS+BLSTM | 55.1 | 65.3 | 88.6 |
| CNN [17]: IMG-CNN | 55.0 | 65.4 | 88.6 |
| CNN [17]: CNN | 32.7 | 44.3 | 80.9 |
| Ours: SAN(1, LSTM) | 59.6 | 69.6 | 90.1 |
| Ours: SAN(1, CNN) | 60.7 | 70.6 | 90.5 |
| Ours: SAN(2, LSTM) | 61.0 | 71.0 | 90.7 |
| Ours: SAN(2, CNN) | 61.6 | 71.6 | 90.9 |
Table 3: COCO-QA results, in percentage
| Methods | Objects | Number | Color | Location |
|---|---|---|---|---|
| VSE [21]: GUESS | 2.1 | 35.8 | 13.9 | 8.9 |
| VSE [21]: BOW | 37.3 | 43.6 | 34.8 | 40.8 |
| VSE [21]: LSTM | 35.9 | 45.3 | 36.3 | 38.4 |
| VSE [21]: IMG | 40.4 | 29.3 | 42.7 | 44.2 |
| VSE [21]: IMG+BOW | 58.7 | 44.1 | 52.0 | 49.4 |
| VSE [21]: VIS+LSTM | 56.5 | 46.1 | 45.9 | 45.5 |
| VSE [21]: 2-VIS+BLSTM | 58.2 | 44.8 | 49.5 | 47.3 |
| Ours: SAN(1, LSTM) | 62.5 | 49.0 | 54.8 | 51.6 |
| Ours: SAN(1, CNN) | 63.6 | 48.7 | 56.7 | 52.7 |
| Ours: SAN(2, LSTM) | 63.6 | 49.8 | 57.9 | 52.8 |
| Ours: SAN(2, CNN) | 64.5 | 48.6 | 57.9 | 54.0 |
# Table 4: COCO-QA accuracy per class, in percentage
| Methods | test-dev All | test-dev Yes/No | test-dev Number | test-dev Other | test-std All |
|---|---|---|---|---|---|
| VQA [1]: Question | 48.1 | 75.7 | 36.7 | 27.1 | - |
| VQA [1]: Image | 28.1 | 64.0 | 0.4 | 3.8 | - |
| VQA [1]: Q+I | 52.6 | 75.6 | 33.7 | 37.4 | - |
| VQA [1]: LSTM Q | 48.8 | 78.2 | 35.7 | 26.6 | - |
| VQA [1]: LSTM Q+I | 53.7 | 78.9 | 35.2 | 36.4 | 54.1 |
| SAN(2, CNN) | 58.7 | 79.3 | 36.6 | 46.1 | 58.9 |
Table 5: VQA results on the official server, in percentage
Table. 5 summarizes the performance of various models on VQA, which is the largest among the four data sets. The overall results show that our best model, SAN(2, CNN),
| Methods | All | Yes/No (36%) | Number (10%) | Other (54%) |
|---|---|---|---|---|
| SAN(1, LSTM) | 56.6 | 78.1 | 41.6 | 44.8 |
| SAN(1, CNN) | 56.9 | 78.8 | 42.0 | 45.0 |
| SAN(2, LSTM) | 57.3 | 78.3 | 42.2 | 45.9 |
| SAN(2, CNN) | 57.6 | 78.6 | 41.8 | 46.4 |
Table 6: VQA results on our partition, in percentage
outperforms the LSTM Q+I model, the best baseline from [1], by 4.8% absolute. The superior performance of the SANs across all four benchmarks demonstrates the effectiveness of using multiple layers of attention.
In order to study the strengths and weaknesses of the SAN in detail, we report performance at the question-type level on the two large data sets, COCO-QA and VQA, in Tables 4 and 5, respectively. We observe that on COCO-QA, compared to the two best baselines, IMG+BOW and 2-VIS+BLSTM, our best model SAN(2, CNN) improves 7.2% in the question type of Color, followed by 6.1% in Objects, 5.7% in Location and 4.2% in Number. We observe a similar trend of improvements on VQA. As shown in Table 5, compared to the best baseline LSTM Q+I, the biggest improvement of SAN(2, CNN) is in the Other type, 9.7%, followed by the 1.4% improvement in Number and the 0.4% improvement in Yes/No. Note that the Other type in VQA refers to questions that usually have the form of "what color, what kind, what are, what type, where" etc., which are similar to the question types of Color, Objects and Location in COCO-QA. The VQA data set has a special Yes/No type of questions. The SAN only improves the performance of this type of questions slightly. This could be due to the fact that the answer for a Yes/No question is very question dependent, so better modeling of the visual information does not provide much additional gain. This also confirms the similar observation reported in [1], e.g., using additional image information only slightly improves the performance in Yes/No, as shown in Table 5, Q+I vs Question, and LSTM Q+I vs LSTM Q.
Our results clearly demonstrate the positive impact of using multiple attention layers. In all four data sets, two-layer SANs always perform better than the one-layer SAN. Specifically, on COCO-QA, on average the two-layer SANs outperform the one-layer SANs by 2.2% in the type of Color, followed by 1.3% and 1.0% in the Location and Objects categories, and then 0.4% in Number. This aligns with the order of the improvements of the SAN over baselines. Similar trends are observed on VQA (Table 6), e.g., the two-layer SAN improves over the one-layer SAN by 1.4% for the Other type of question, followed by a 0.2% improvement for Number, and is flat for Yes/No.
# 4.5. Visualization of attention layers
In this section, we present an analysis to demonstrate that using multiple attention layers to perform multi-step reasoning leads to more fine-grained attention, layer by layer, in locating the regions that are relevant to the potential answers. We do so by visualizing the outputs of the attention layers on a sample set of images from the COCO-QA test set. Note that the attention probability distribution is of size 14 × 14 while the original image is 448 × 448, so we up-sample the attention probability distribution and apply a Gaussian filter to make it the same size as the original image.
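One way to realize that up-sampling and smoothing step is sketched below, assuming SciPy is available. The interpolation order and the Gaussian sigma are our assumptions; the paper does not specify exact values.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def attention_to_overlay(p_I, sigma=16):
    """Turn a (196,) attention distribution into a 448x448 display overlay."""
    att = p_I.reshape(14, 14)
    att = zoom(att, 448 / 14, order=1)        # bilinear up-sampling to 448x448
    att = gaussian_filter(att, sigma=sigma)   # smooth the probability mass
    return att / att.max()                    # normalize for visualization
```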
Fig. 5 presents six examples. More examples are presented in the appendix. They cover types as broad as Object, Number, Color and Location. For each example, the three images from left to right are the original image, the output of the first attention layer and the output of the second attention layer, respectively. The bright part of the image is the detected attention. Across all those examples, we see that in the first attention layer, the attention is scattered over many objects in the image, largely corresponding to the objects and concepts referred to in the question, whereas in the second layer, the attention is far more focused on the regions that lead to the correct answer. For example, consider the question what is the color of the horns, which asks the color of the horn on the woman's head in Fig. 5(f). In the output of the first attention layer, the model first recognizes a woman in the image. In the output of the second attention layer, the attention is focused on the head of the woman, which leads to the answer of the question: the color of the horn is red.
# 4.6. Errors analysis
We randomly sample 100 images from the COCO-QA test set on which the SAN makes mistakes. We group the errors into four categories: (i) the SANs focus the attention on the wrong regions (22%), e.g., the example in Fig. 6(a); (ii) the SANs focus on the right region but predict a wrong answer (42%), e.g., the examples in Fig. 6(b)(c)(d); (iii) the answer is ambiguous and the SANs give answers that are different from the labels but might be acceptable (31%), e.g., in Fig. 6(e), the answer label is pot, but our model predicts vase, which is also visually reasonable; (iv) the labels are clearly wrong (5%), e.g., in Fig. 6(f), our model gives the correct answer trains while the label cars is wrong.

# 5. Conclusion
In this paper, we propose a new stacked attention network (SAN) for image QA. SAN uses a multiple-layer attention mechanism that queries an image multiple times to locate the relevant visual region and to infer the answer progressively. Experimental results demonstrate that the proposed SAN significantly outperforms previous state-of-the-art approaches by a substantial margin on all four image QA
[Figure 5 panels; each shows, from left to right: original image, first attention layer, second attention layer.]
(a) What are pulling a man on a wagon down on dirt road? Answer: horses. Prediction: horses.
(b) What is the color of the box? Answer: red. Prediction: red.
(c) What next to the large umbrella attached to a table? Answer: trees. Prediction: tree.
(d) How many people are going up the mountain with walking sticks? Answer: four. Prediction: four.
(e) What is sitting on the handle bar of a bicycle? Answer: bird. Prediction: bird.
(f) What is the color of the horns? Answer: red. Prediction: red.
Figure 5: Visualization of two attention layers
[Figure 6 panels; each shows, from left to right: original image, first attention layer, second attention layer.]
(a) What swim in the ocean near two large ferries? Answer: ducks. Prediction: boats.
(b) What is the color of the shirt? Answer: purple. Prediction: green.
(c) What is the young woman eating? Answer: banana. Prediction: donut.
(d) How many umbrellas with various patterns? Answer: three. Prediction: two.
(e) The very old looking what is on display? Answer: pot. Prediction: vase.
(f) What are passing underneath the walkway bridge? Answer: cars. Prediction: trains.
Figure 6: Examples of mistakes
data sets. The visualization of the attention layers further illustrates the process by which the SAN focuses its attention on the relevant visual clues that lead to the answer of the question, layer by layer.
# References
[1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual question answering. arXiv preprint arXiv:1505.00468, 2015.

[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

[3] J. Berant and P. Liang. Semantic parsing via paraphrasing. In Proceedings of ACL, volume 7, page 92, 2014.

[4] A. Bordes, S. Chopra, and J. Weston. Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676, 2014.

[5] X. Chen and C. L. Zitnick. Learning a recurrent visual representation for image caption generation. arXiv preprint arXiv:1411.5654, 2014.

[6] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollar, J. Gao, X. He, M. Mitchell, J. Platt, et al. From captions to visual concepts and back. arXiv preprint arXiv:1411.4952, 2014.

[7] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are you talking to a machine? Dataset and methods for multilingual image question answering. arXiv preprint arXiv:1505.05612, 2015.

[8] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

[9] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

[10] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. arXiv preprint arXiv:1412.2306, 2014.

[11] Y. Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014.

[12] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.

[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.

[14] A. Kumar, O. Irsoy, J. Su, J. Bradbury, R. English, B. Pierce, P. Ondruska, I. Gulrajani, and R. Socher. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285, 2015.

[15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

[16] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick. Microsoft COCO: Common objects in context. In Computer Vision - ECCV 2014, pages 740-755. Springer, 2014.

[17] L. Ma, Z. Lu, and H. Li. Learning to answer questions from image using convolutional neural network. arXiv preprint arXiv:1506.00333, 2015.

[18] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. In Advances in Neural Information Processing Systems, pages 1682-1690, 2014.

[19] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: A neural-based approach to answering questions about images. arXiv preprint arXiv:1505.01121, 2015.

[20] J. Mao, W. Xu, Y. Yang, J. Wang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-RNN). arXiv preprint arXiv:1412.6632, 2014.

[21] M. Ren, R. Kiros, and R. Zemel. Exploring models and data for image question answering. arXiv preprint arXiv:1505.02074, 2015.

[22] Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM International Conference on Information and Knowledge Management, pages 101-110. ACM, 2014.

[23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

[24] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.

[25] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112, 2014.

[26] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.

[27] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014.

[28] J. Weston, S. Chopra, and A. Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.

[29] Z. Wu and M. Palmer. Verbs semantics and lexical selection. In Proceedings of the 32nd Annual Meeting on Association for Computational Linguistics, pages 133-138. Association for Computational Linguistics, 1994.

[30] K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015.

[31] W.-t. Yih, M.-W. Chang, X. He, and J. Gao. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the Joint Conference of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on Natural Language Processing of the AFNLP, 2015.

[32] W.-t. Yih, X. He, and C. Meek. Semantic parsing for single-relation question answering. In Proceedings of ACL, 2014.
[Figure 7 panels (appendix examples); each shows, from left to right: original image, first attention layer, second attention layer.]
What take the nap with a blanket? Answer: dogs. Prediction: dogs.
What is the color of the cake? Answer: brown. Prediction: white.
What stands between two blue lounge chairs on an empty beach? Answer: umbrella. Prediction: umbrella.
What is the color of the motorcycle? Answer: blue. Prediction: blue.
What is sitting in the luggage bag? Answer: cat. Prediction: cat.
What is the color of the design? Answer: red. Prediction: red.
What is the color of the trucks? Answer: green. Prediction: green.
What is in front of the clear sky? Answer: tower. Prediction: tower.
What is next to the desk with a computer and laptop? Answer: chair. Prediction: chair.
What is the color of the surface? Answer: white. Prediction: white.
What are flying against the cloudy sky? Answer: kites. Prediction: kites.
Where do the young adult make us standing? Answer: room. Prediction: room.
Figure 7: More examples | {
"id": "1506.00333"
} |
1510.03009 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | arXiv:1510.03009v3 [cs.LG] 26 Feb 2016
Published as a conference paper at ICLR 2016
# NEURAL NETWORKS WITH FEW MULTIPLICATIONS
Zhouhan Lin Universit´e de Montr´eal Canada zhouhan.lin@umontreal.ca
Matthieu Courbariaux Universit´e de Montr´eal Canada matthieu.courbariaux@gmail.com
Roland Memisevic Universit´e de Montr´eal Canada roland.umontreal@gmail.com
Yoshua Bengio Universit´e de Montr´eal Canada
# ABSTRACT
For most deep learning algorithms, training is notoriously time consuming. Since most of the computation in training neural networks is typically spent on floating point multiplications, we investigate an approach to training that eliminates the need for most of these. Our method consists of two parts: First we stochastically binarize weights to convert multiplications involved in computing hidden states to sign changes. Second, while back-propagating error derivatives, in addition to binarizing the weights, we quantize the representations at each layer to convert the remaining multiplications into binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10, SVHN) show that this approach not only does not hurt classification performance but can result in even better performance than standard stochastic gradient descent training, paving the way to fast, hardware-friendly training of neural networks.
# 1 INTRODUCTION
Training deep neural networks has long been computationally demanding and time consuming. For some state-of-the-art architectures, it can take weeks to get models trained (Krizhevsky et al., 2012). Another problem is that the demand for memory can be huge. For example, many common models in speech recognition or machine translation need 12 Gigabytes or more of storage (Gulcehre et al., 2015). To deal with these issues it is common to train deep neural networks by resorting to GPU or CPU clusters and to well designed parallelization strategies (Le, 2013).
Most of the computation performed in training a neural network consists of floating point multiplications. In this paper, we focus on eliminating most of these multiplications to reduce computation. Based on our previous work (Courbariaux et al., 2015), which eliminates multiplications in computing hidden representations by binarizing weights, our method deals with both hidden state computations and backward weight updates. Our approach has 2 components. In the forward pass, weights are stochastically binarized using an approach we call binary connect or ternary connect, and for back-propagation of errors, we propose a new approach which we call quantized back propagation that converts multiplications into bit-shifts.1
# 2 RELATED WORK
Several approaches have been proposed in the past to simplify computations in neural networks. Some of them try to restrict weight values to be an integer power of two, thus reducing all multiplications to binary shifts (Kwan & Tang, 1993; Marchesi et al., 1993). In this way, multiplications are eliminated in both training and testing time. The disadvantage is that model performance can be severely reduced, and convergence of training can no longer be guaranteed.
1 The code for these approaches is available online at https://github.com/hantek/BinaryConnect
Kim & Paris (2015) introduce a completely Boolean network, which simplifies the test time computation at an acceptable performance hit. The approach still requires a real-valued, full precision training phase, however, so the benefits of reducing computations do not apply to training. Similarly, Machado et al. (2015) manage to get acceptable accuracy on sparse representation classification by replacing all floating-point multiplications by integer shifts. Bit-stream networks (Burge et al., 1999) also provide a way of binarizing neural network connections, by substituting weight connections with logical gates. Similar to that, Cheng et al. (2015) prove that deep neural networks with binary weights can be trained to distinguish between multiple classes with expectation back propagation.
There are some other techniques which focus on reducing training complexity. For instance, instead of reducing the precision of weights, Simard & Graf (1994) quantize states, learning rates, and gradients to powers of two. This approach manages to eliminate multiplications with negligible performance reduction.
# 3 BINARY AND TERNARY CONNECT
3.1 BINARY CONNECT REVISITED
In Courbariaux et al. (2015), we introduced a weight binarization technique which removes multiplications in the forward pass. We summarize this approach in this subsection, and introduce an extension to it in the next.
Consider a neural network layer with N input and M output units. The forward computation is y = h(Wx + b) where W and b are weights and biases, respectively, h is the activation function, and x and y are the layer's inputs and outputs. If we choose ReLU as h, there will be no multiplications in computing the activation function, thus all multiplications reside in the matrix product Wx. For each input vector x, NM floating point multiplications are needed.
Binary connect eliminates these multiplications by stochastically sampling weights to be -1 or 1. Full precision weights $\bar{w}$ are kept in memory as reference, and each time $y$ is needed, we sample a stochastic weight matrix $W$ according to $\bar{w}$. For each element of the sampled matrix $W$, the probability of getting a 1 is proportional to how "close" its corresponding entry in $\bar{w}$ is to 1, i.e.,

$$P(W_{ij} = 1) = \frac{\bar{w}_{ij} + 1}{2}; \quad P(W_{ij} = -1) = 1 - P(W_{ij} = 1) \quad (1)$$
It is necessary to add some edge constraints to $\bar{w}$. To ensure that $P(W_{ij} = 1)$ lies in a reasonable range, values in $\bar{w}$ are forced to be real values in the interval [-1, 1]. If during the updates any of its values grows beyond that interval, we set it to the corresponding edge value -1 or 1. That way floating point multiplications become sign changes.
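A minimal sketch of this sampling and clipping, assuming NumPy and function names of our own choosing:

```python
import numpy as np

def binarize(w_bar, rng=np.random):
    """Stochastic binarization per Eq. 1: P(W_ij = 1) = (w_bar_ij + 1) / 2."""
    p_plus = (w_bar + 1.0) / 2.0
    return np.where(rng.uniform(size=w_bar.shape) < p_plus, 1.0, -1.0)

def clip_weights(w_bar):
    """Edge constraint: keep w_bar in [-1, 1] so P(W_ij = 1) stays in [0, 1]."""
    return np.clip(w_bar, -1.0, 1.0)
```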
A remaining question concerns the use of multiplications in the random number generator involved in the sampling process. Sampling an integer has to be faster than a multiplication for the algorithm to be worth it. To be precise, in most cases we are doing mini-batch learning and the sampling process is performed only once for the whole mini-batch. Normally the batch size B varies up to several hundreds. So, as long as one sampling process is significantly faster than B multiplications, it is still worth it. Fortunately, efficiently generating random numbers has been studied in Jeavons et al. (1994); van Daalen et al. (1993). Also, it is possible to get random numbers from real random processes, like CPU temperatures, etc. We are not going into the details of random number generation as this is not the focus of this paper.
3.2 TERNARY CONNECT
The binary connect introduced in the former subsection allows weights to be -1 or 1. However, in a trained neural network, it is common to observe that many learned weights are zero or close to zero. Although the stochastic sampling process would allow the mean value of sampled weights to be zero, this suggests that it may be beneficial to explicitly allow weights to be zero.
To allow weights to be zero, some adjustments are needed for Eq. 1. We split the interval [-1, 1], within which the full precision weight value $\bar{w}_{ij}$ lies, into two sub-intervals: [-1, 0] and (0, 1]. If a
weight value $\bar{w}_{ij}$ drops into one of them, we sample $W_{ij}$ to be one of the two edge values of that interval, according to its distance from $\bar{w}_{ij}$, i.e., if $\bar{w}_{ij} > 0$:

$$P(W_{ij} = 1) = \bar{w}_{ij}; \quad P(W_{ij} = 0) = 1 - \bar{w}_{ij} \quad (2)$$
and if $\bar{w}_{ij} \leq 0$:
$$P(W_{ij} = -1) = -\bar{w}_{ij}; \quad P(W_{ij} = 0) = 1 + \bar{w}_{ij} \quad (3)$$
Like binary connect, ternary connect also eliminates all multiplications in the forward pass.
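Eqs. 2-3 collapse into a single sampling rule, sketched below under the same NumPy assumptions as the binarize sketch above: the weight's sign is kept with probability $|\bar{w}_{ij}|$, and zero is drawn otherwise.

```python
import numpy as np

def ternarize(w_bar, rng=np.random):
    """Ternary sampling per Eqs. 2-3: W_ij in {-1, 0, 1}."""
    u = rng.uniform(size=w_bar.shape)
    return np.where(u < np.abs(w_bar), np.sign(w_bar), 0.0)
```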
# 4 QUANTIZED BACK PROPAGATION
In the former section we described how multiplications can be eliminated from the forward pass. In this section, we propose a way to eliminate multiplications from the backward pass.
Suppose the i-th layer of the network has N input and M output units, and consider an error signal δ propagating downward from its output. The updates for weights and biases would be the outer product of the layerâs input and the error signal:
$$\Delta W = \eta \left[ \delta \odot h'(Wx + b) \right] x^T \quad (4)$$
$$\Delta b = \eta \left[ \delta \odot h'(Wx + b) \right] \quad (5)$$
where $\eta$ is the learning rate, and $x$ the input to the layer. The operator $\odot$ stands for element-wise multiplication. While propagating through the layers, the error signal $\delta$ needs to be updated, too. Its update, taking into account the next layer below, takes the form:
$$\delta = \left[ W^T \delta \right] \odot h'(Wx + b) \quad (6)$$
There are 3 terms that appear repeatedly in Eqs. 4 to 6: $\delta$, $h'(Wx + b)$ and $x$. The latter two terms introduce matrix outer products. To eliminate multiplications, we can quantize one of them to be an integer power of 2, so that multiplications involving that term become binary shifts. The expression $\delta \odot h'(Wx + b)$ contains downflowing gradients, which are largely determined by the cost function and network parameters, thus it is hard to bound its values. However, bounding the values is essential for quantization because we need to supply a fixed number of bits for each sampled value, and if that value varies too much, we will need too many bits for the exponent. This, in turn, will result in the need for more bits to store the sampled value and unnecessarily increase the required amount of computation.

While $\delta \odot h'(Wx + b)$ is not a good choice for quantization, $x$ is a better choice, because it is the hidden representation at each layer, and we know roughly the distribution of each layer's activations.
Our approach is therefore to eliminate multiplications in Eq. 4 by quantizing each entry in x to an integer power of 2. That way the outer product in Eq. 4 becomes a series of bit shifts. Experimentally, we find that allowing a maximum of 3 to 4 bits of shift is sufficient to make the network work well. This means that 3 bits are already enough to quantize x. As the float32 format has 24 bits of mantissa, shifting (to the left or right) by 3 to 4 bits is completely tolerable. We refer to this approach of back propagation as "quantized back propagation."
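The sketch below shows one way to quantize entries to a bounded power of two. The rounding rule and the symmetric clipping of the exponent by `max_shift` are our assumptions; the paper only requires power-of-two magnitudes with 3-4 bits of shift.

```python
import numpy as np

def quantize_pow2(x, max_shift=4):
    """Map each entry of x to sign(x) * 2^k with a bounded integer exponent k."""
    sign = np.sign(x)                              # zeros stay zero
    exp = np.round(np.log2(np.abs(x) + 1e-12))     # nearest power of two
    exp = np.clip(exp, -max_shift, max_shift)      # bounded bit shifts
    return sign * 2.0 ** exp
```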
If we choose ReLU as the activation function, then since we are reusing the (Wx + b) that was computed during the forward pass, computing the term h'(Wx + b) involves no additional sampling or multiplications. In addition, quantized back propagation eliminates the multiplications in the outer product in Eq. 4. The only places where multiplications remain are the element-wise products. In Eq. 5, the element-wise product and the multiplication by η require 2 × M multiplications, while in Eq. 4 we can reuse the result of Eq. 5. To update δ would need another M multiplications, thus 3 × M multiplications
are needed for all computations from Eqs. 4 through 6. Pseudo code in Algorithm 1 outlines how quantized back propagation is conducted.
Algorithm 1 Quantized Back Propagation (QBP). C is the cost function. binarize(W) and clip(W) stand for the binarize and clip methods. L is the number of layers.
Require: a deep model with parameters W, b at each layer; input data x, its corresponding targets y, and learning rate η.

1: procedure QBP(model, x, y, η)
2:     1. Forward propagation:
3:     for each layer i in range(1, L) do
4:         Wb ← binarize(W)
5:         Compute activation ai according to its previous layer output ai−1, Wb and b.
6:     2. Backward propagation:
7:     Initialize the output layer's error signal δ = ∂C/∂aL.
8:     for each layer i in range(L, 1) do
9:         Compute ∇W and ∇b according to Eqs. 4 and 5.
10:        Update W: W ← clip(W − ∇W)
11:        Update b: b ← b − ∇b
12:        Compute ∂C/∂ak−1 by updating δ according to Eq. 6.
Like in the forward pass, most of the multiplications are used in the weight updates. Compared with standard back propagation, which would need at least 2MN + 3M multiplications, the amount of multiplications left is negligible in quantized back propagation. Our experiments in Section 5 show that this way of dramatically decreasing multiplications does not necessarily entail a loss in performance.
# 5 EXPERIMENTS
We tried our approach on both fully connected networks and convolutional networks. Our implementation uses Theano (Bastien et al., 2012). We experimented with 3 datasets: MNIST, CIFAR10, and SVHN. In the following subsection we show the performance that these multiplier-light neural networks can achieve. In the subsequent subsections we study some of their properties, such as convergence and robustness, in more detail.
5.1 GENERAL PERFORMANCE
We tested different variations of our approach, and compared the results with Courbariaux et al. (2015) and full precision training (Table 1). All models are trained with stochastic gradient descent (SGD) without momentum. We use batch normalization for all the models to accelerate learning. At training time, binary (ternary) connect and quantized back propagation are used, while at test time, we use the learned full resolution weights for the forward propagation. For each dataset, all hyper-parameters are set to the same values for the different methods, except that the learning rate is adapted independently for each one.
Table 1: Performances across different datasets
| Method | MNIST | CIFAR10 | SVHN |
|---|---|---|---|
| Full precision | 1.33% | 15.64% | 2.85% |
| Binary connect | 1.23% | 12.04% | 2.47% |
| Binary connect + Quantized backprop | 1.29% | 12.08% | 2.48% |
| Ternary connect + Quantized backprop | 1.15% | 12.01% | 2.42% |
# 5.1.1 MNIST
The MNIST dataset (LeCun et al., 1998) has 50000 images for training and 10000 for testing. All images are grey value images of size 28 × 28 pixels, falling into 10 classes corresponding to the 10 digits. The model we use is a fully connected network with 4 layers: 784-1024-1024-1024-10. At the last layer we use the hinge loss as the cost. The training set is separated into two parts, one of which is the training set with 40000 images and the other the validation set with 10000 images. Training is conducted in a mini-batch way, with a batch size of 200.
With ternary connect, quantized backprop, and batch normalization, we reach an error rate of 1.15%. This result is better than full precision training (also with batch normalization), which yields an error rate of 1.33%. Without batch normalization, the error rates rise to 1.48% and 1.67%, respectively. We also explored the performance if we sample those weights during test time. With ternary connect at test time, the same model (the one that reaches a 1.15% error rate) yields a 1.49% error rate, which is still fairly acceptable. Our experimental results show that despite removing most multiplications, our approach yields performance comparable to (in fact, even slightly better than) full precision training. The performance improvement is likely due to the regularization effect implied by the stochastic sampling.
Taking this network as a concrete example, the actual amount of multiplications in each case can be estimated precisely. Multiplications in the forward pass are obvious, and for the backward pass Section 4 has already given an estimate. Now we estimate the amount of multiplications incurred by batch normalization. Suppose we have a pre-hidden representation $h$ with mini-batch size $B$ on a layer which has $M$ output units (thus $h$ has shape $B \times M$); then batch normalization can be formalized as $\gamma \frac{h - \mathrm{mean}(h)}{\mathrm{std}(h)} + \beta$. One needs to compute $\mathrm{mean}(h)$ over a mini-batch, which takes $M$ multiplications, and $BM + 2M$ multiplications to compute the standard deviation $\mathrm{std}(h)$. The fraction takes $BM$ divisions, which we count as the same amount of multiplications. Multiplying by the $\gamma$ parameter adds another $BM$ multiplications. So each batch normalization layer takes an extra $3BM + 3M$ multiplications in the forward pass. The backward pass takes roughly twice as many multiplications in addition, if we use SGD. This amount of multiplications is the same whether or not we use binarization. Bearing those in mind, the total amount of multiplications invoked in a mini-batch update is shown in Table 2. The last column lists the ratio of multiplications left after applying ternary connect and quantized back propagation.
Table 2: Estimated number of multiplications in MNIST net
| | Full precision | Ternary connect + Quantized backprop | ratio |
| --- | --- | --- | --- |
| without BN | 1.7480 × 10^9 | 1.8492 × 10^6 | 0.001058 |
| with BN | 1.7535 × 10^9 | 7.4245 × 10^6 | 0.004234 |
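These estimates can be reproduced with a few lines of arithmetic. The sketch below treats the standard backward pass as roughly twice the forward count (delta propagation plus weight gradients) and places batch normalization on the three hidden layers only; both readings are assumptions consistent with the description above.

```python
B = 200                                  # mini-batch size
sizes = [784, 1024, 1024, 1024, 10]      # the MNIST network

# Forward pass: B * n_in * n_out multiplications per weight matrix.
fwd = sum(B * n_in * n_out for n_in, n_out in zip(sizes[:-1], sizes[1:]))

# Standard backprop adds roughly twice the forward count (assumed
# reading of the Section 4 estimate: deltas plus weight gradients).
full_precision = 3 * fwd

# Batch normalization on the three hidden layers: 3BM + 3M forward
# multiplications per layer, and roughly twice that again backward.
bn_extra = sum(3 * (3 * B * M + 3 * M) for M in sizes[1:-1])

print(f"{full_precision:.4e}  {full_precision + bn_extra:.4e}")
```

Running this yields roughly 1.75 × 10^9 multiplications both without and with batch normalization, in close agreement with the full-precision column of Table 2.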
# 5.1.2 CIFAR10
CIFAR10 (Krizhevsky & Hinton, 2009) contains 32 × 32 RGB images. As for MNIST, we split the dataset into 40000, 10000, and 10000 training, validation, and test cases, respectively. We apply our approach in a convolutional network for this dataset. The network has 6 convolution/pooling layers, 1 fully connected layer and 1 classification layer. We use the hinge loss for training, with a batch size of 100. We also tried using ternary connect at test time. On the model trained by ternary connect and quantized back propagation, it yields a 13.54% error rate. Similar to what we observed in the fully connected network, binary (ternary) connect and quantized back propagation yield slightly better performance than ordinary SGD.
# 5.1.3 SVHN
The Street View House Numbers (SVHN) dataset (Netzer et al., 2011) contains RGB images of house numbers. It contains more than 600,000 images in its extended training set, and roughly 26,000 images in its test set. We remove 6,000 images from the training set for validation. We use 7 layers of convolution/pooling, 1 fully connected layer, and 1 classification layer. Batch size is also
set to 100. The performance we get is consistent with our results on CIFAR10. Extending the ternary connect mechanism to test time yields a 2.99% error rate on this dataset. Again, binary (ternary) connect and quantized back propagation improve over ordinary SGD.
# 5.2 CONVERGENCE
Taking the convolutional networks on CIFAR10 as a test-bed, we now study the learning behaviour in more detail. Figure 1 shows the performance of the model in terms of test set error during training. The figure shows that binarization makes the network converge more slowly than ordinary SGD, but yields a better optimum after the algorithm converges. Compared with binary connect (red line), adding quantization to the error propagation (yellow line) doesn't hurt the model accuracy at all. Moreover, ternary connect combined with quantized back propagation (green line) surpasses the other three approaches.
Figure 1: Test set error rate at each epoch for ordinary back propagation, binary connect, binary connect with quantized back propagation, and ternary connect with quantized back propagation. Vertical axis is represented in logarithmic scale.
# 5.3 THE EFFECT OF BIT CLIPPING
In Section 4 we mentioned that quantization is limited by the number of bits we use. The maximum number of bits to shift determines the amount of memory needed, but it also determines the range within which a single weight update can vary. Figure 2 shows the model performance as a function of the maximum allowed bit shifts. These experiments are conducted on the MNIST dataset, with the aforementioned fully connected model. For each bit-clipping setting, we repeat the experiment 10 times with different random initializations.
The figure shows that the approach is not very sensitive to the number of bits used. The maximum allowed shift in the figure varies from 2 bits to 10 bits, and the performance remains roughly the same. Even with bit shifts restricted to 2, the model can still learn successfully. The fact that the performance is not very sensitive to the maximum number of allowed bit shifts suggests that we do not need to redefine the number of bits used for quantizing x for different tasks, which would be an important practical advantage.
The x to be quantized is not necessarily distributed symmetrically around 2^0. For example, Figure 3 shows the distribution of x at each layer in the middle of training. The maximum amount of shift to the left does not need to be the same as that to the right. A more efficient choice is to use different values for the maximum left shift and the maximum right shift. Bearing that in mind, we set the maximum to 3 bits to the right and 4 bits to the left.
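A sketch of the power-of-two quantization with clipped, asymmetric shift ranges discussed above. Rounding log2|x| to the nearest integer, and identifying left and right shifts with positive and negative exponents respectively, are assumptions; the paper's exact rounding and sign conventions are defined in Section 4, which is not reproduced here.

```python
import numpy as np

def quantize_pow2(x, max_left=4, max_right=3, eps=1e-12):
    # Quantize each entry to sign(x) * 2**k, so that a subsequent
    # multiplication by x reduces to a bit shift by |k| positions.
    k = np.round(np.log2(np.abs(x) + eps))
    # Asymmetric clipping: at most `max_left` bits of left shift
    # (positive exponents) and `max_right` bits of right shift
    # (negative exponents); the sign convention is an assumption.
    k = np.clip(k, -max_right, max_left)
    return np.sign(x) * 2.0 ** k
```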
Figure 2: Model performance as a function of the maximum bit shifts allowed in quantized back propagation. The dark blue line indicates mean error rate over 10 independent runs, while light blue lines indicate their corresponding maximum and minimum error rates.
Figure 3: Histogram of representations at each layer while training a fully connected network for MNIST. The figure represents a snapshot in the middle of training. Each subfigure, from bottom up, represents the histogram of hidden states from the first layer to the last layer. The horizontal axes stand for the exponent of the layers' representations, i.e., log2 x.
# 6 CONCLUSION AND FUTURE WORK
We proposed a way to eliminate most of the floating-point multiplications used when training a feedforward neural network. This could make it possible to dramatically accelerate the training of neural networks by using dedicated hardware implementations.
A somewhat surprising fact is that instead of damaging prediction accuracy, the approach tends to improve it, which is probably due to several factors. The first is the regularization effect that the stochastic sampling process entails. The noise injected by sampling the weight values can be viewed as a regularizer, and that improves model generalization. The second factor is the low precision of the weight values. Generalization error bounds for neural nets depend on the precision of the weights. Low precision prevents the optimizer from finding solutions that require a lot of precision, which correspond to very thin (high-curvature) critical points; these minima are more likely to correspond to overfitted solutions than broad minima (there are more functions compatible with broad solutions, corresponding to a smaller description length and thus better generalization). Similarly,
Neelakantan et al. (2015) add noise to gradients, which makes the optimizer prefer large-basin areas and forces it to find broad minima. It also lowers the training loss and improves generalization.
Directions for future work include exploring actual implementations of this approach (for example, using FPGAs), seeking more efficient ways of binarization, and extending the approach to recurrent neural networks.
# ACKNOWLEDGMENTS
The authors would like to thank the developers of Theano (Bastien et al., 2012). We acknowledge the support of the following agencies for research funding and computing support: Samsung, NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR.
# REFERENCES
Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Bergstra, James, Goodfellow, Ian J., Bergeron, Arnaud, Bouchard, Nicolas, and Bengio, Yoshua. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
Burge, Peter S., van Daalen, Max R., Rising, Barry J. P., and Shawe-Taylor, John S. Stochastic bitstream neural networks. In Maass, Wolfgang and Bishop, Christopher M. (eds.), Pulsed Neural Networks, pp. 337–352. MIT Press, Cambridge, MA, USA, 1999. ISBN 0-262-13350-4. URL http://dl.acm.org/citation.cfm?id=296533.296552.
Cheng, Zhiyong, Soudry, Daniel, Mao, Zexi, and Lan, Zhenzhong. Training binary multilayer neural networks for image classification using expectation backpropagation. arXiv preprint arXiv:1503.03562, 2015.

Courbariaux, Matthieu, Bengio, Yoshua, and David, Jean-Pierre. Binaryconnect: Training deep neural networks with binary weights during propagations. arXiv preprint arXiv:1511.00363, 2015.
Gulcehre, Caglar, Firat, Orhan, Xu, Kelvin, Cho, Kyunghyun, Barrault, Loic, Lin, Huei-Chi, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535, 2015.
Jeavons, Peter, Cohen, David A., and Shawe-Taylor, John. Generating binary sequences for stochastic computing. Information Theory, IEEE Transactions on, 40(3):716–720, 1994.
Kim, Minje and Smaragdis, Paris. Bitwise neural networks. In Proceedings of The 31st International Conference on Machine Learning, pp. 0–0, 2015.
Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images, 2009.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.

Kwan, Hon Keung and Tang, CZ. Multiplierless multilayer feedforward neural network design suitable for continuous input-output mapping. Electronics Letters, 29(14):1259–1260, 1993.
In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 8595–8598. IEEE, 2013.
LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Machado, Emerson Lopes, Miosso, Cristiano Jacques, von Borries, Ricardo, Coutinho, Murilo, Berger, Pedro de Azevedo, Marques, Thiago, and Jacobi, Ricardo Pezzuol. Computational cost reduction in learned transform classifications. arXiv preprint arXiv:1504.06779, 2015.

Marchesi, Michele, Orlandi, Gianni, Piazza, Francesco, and Uncini, Aurelio. Fast neural networks without multipliers. Neural Networks, IEEE Transactions on, 4(1):53–62, 1993.
Neelakantan, Arvind, Vilnis, Luke, Le, Quoc V, Sutskever, Ilya, Kaiser, Lukasz, Kurach, Karol, and Martens, James. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807, 2015.
Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, pp. 5. Granada, Spain, 2011.

Simard, Patrice Y and Graf, Hans Peter. Backpropagation without multiplication. In Advances in Neural Information Processing Systems, pp. 232–239, 1994.

van Daalen, Max, Jeavons, Pete, Shawe-Taylor, John, and Cohen, Dave. Device for generating binary sequences for stochastic computing. Electronics Letters, 29(1):80–81, 1993.
| {
"id": "1503.03535"
} |
1510.02675 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Changelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 |
arXiv:1510.02675v2 [cs.CL] 14 Dec 2015
# Controlled Experiments for Word Embeddings
Benjamin J. Wilson, Lateral GmbH, benjamin@lateral.io
Adriaan M. J. Schakel, NNLP, adriaan.schakel@gmail.com
February 15, 2022
# Abstract
An experimental approach to studying the properties of word embeddings is proposed. Controlled experiments, achieved through modifications of the training corpus, permit the demonstration of direct relations between word properties and word vector direction and length. The approach is demonstrated using the word2vec CBOW model with experiments that independently vary word frequency and word co-occurrence noise. The experiments reveal that word vector length depends more or less linearly on both word frequency and the level of noise in the co-occurrence distribution of the word. The coefficients of linearity depend upon the word. The special point in feature space, defined by the (artificial) word with pure noise in its co-occurrence distribution, is found to be small but non-zero.
# 1 Introduction
Word embeddings, or distributed representations of words, have been the subject of much recent research in the natural language processing and machine learning communities, demonstrating state-of-the-art performance on word similarity and word analogy tasks, amongst others. Word embeddings represent words from the vocabulary as dense, real-valued vectors. Instead of one-hot vectors that merely indicate the location of a word in the vocabulary, dense vectors of dimension much smaller than the vocabulary size are constructed such that they carry syntactic and semantic information. Irrespective of the technique chosen, word embeddings are typically derived from word co-occurrences. More specifically, in a machine-learning setting, word embeddings are typically trained by scanning a short window over all the text in a corpus. This process can be seen as sampling word co-occurrence distributions, where it is recalled that the co-occurrence distribution of a target word w denotes the conditional probability P(w′|w) that a word w′ occurs in its context, i.e., given that w occurred. Most applications of word embeddings explore not the word vectors themselves, but relations between them to solve, for example, similarity and word relation tasks [2]. For these tasks, it was found that using normalised word vectors improves performance. Word vector length is therefore typically ignored.
In a previous paper [9], we proposed the use of word vector length as a measure of word significance. Using a domain-specific corpus of scientific abstracts, we observed that words that appear only in similar contexts tend to have longer vectors than words of the same frequency that appear in a wide variety of contexts. For a given frequency band, we found meaningless function words clearly separated from proper nouns, each of which typically carries the meaning of a distinctive context in this corpus. In other words, the longer its vector, the more significant a word is. We also observed that word significance is not the only factor determining the length of a word vector; the frequency with which a word occurs also plays an important role.
In this paper, we wish to study in detail to what extent these two factors determine word vectors. For a given corpus, both term frequency and co-occurrence are, of course, fixed, and it is not obvious how to unravel these dependencies in an unambiguous, objective manner. In particular, it is difficult to establish the distinctiveness of the contexts in which a word is used. To overcome these problems, we propose to modify the training corpus in a controlled fashion. To this end, we insert new tokens into the corpus with varying frequencies and varying levels of noise in their co-occurrence distributions. By modeling the frequency and co-occurrence distributions of these tokens, or pseudowords1, on existing words in the corpus, we are able to study their effect on word vectors independently of one another. We can thus study a family of pseudowords that all appear in the same context, but with different frequencies, or study a family of pseudowords that all have the same frequency, but appear in a different number of contexts. Starting from the limited number of contexts in which a word appears in the original corpus, we can increase this number by interspersing the word in arbitrary contexts at random. The word thus loses its significance in a controlled way. Although we present our approach using the word2vec CBOW model, these and related experiments could equally well be carried out for other word embedding methods such as the word2vec skip-gram model [7, 6], GloVe [8], and SENNA [3].
We show that the length of the word vectors generated by the CBOW model depends more or less linearly on both word frequency and the level of noise in the co-occurrence distribution of the word. In both cases, the coefficient of linearity depends upon the word. If the co-occurrence distribution is fixed, then word vector length increases with word frequency. If, on the other hand, word frequency is held constant, then word vector length decreases as the level of noise in the co-occurrence distribution of the word is increased. In addition, we show that the direction of a word vector varies smoothly with word frequency and the level of co-occurrence noise. When noise is added to the co-occurrence distribution of a word, the corresponding vector smoothly interpolates between the original word vector and a small vector perpendicular to it that represents a word with pure noise in its co-occurrence distribution. Surprisingly, the special point in feature space, obtained by interspersing a pseudoword uniformly at random throughout the corpus with a frequency sufficiently large to sample all contexts, is non-zero.
This paper is structured as follows. Section 2 draws connections to related work, while Section 3 describes the corpus and the CBOW model used in our experiments. Section 4 describes a controlled experiment for varying word frequency while holding the co-occurrence distribution fixed. Section 5, in a complementary fashion, describes a controlled experiment for varying the level of noise in the co-occurrence distribution of a word while holding the word frequency fixed. The final section, Section 6, considers further questions and possible future directions.
# 2 Related work
Our experimental finding that word vector length decreases with co-occurrence noise is related to earlier work by Vecchi, Baroni, and Zamparelli [11], where a relation between vector length and the "semantic deviance" of an adjective-noun composite was studied empirically. In that paper, which is also based on word co-occurrence statistics, the authors study adjective-noun composites. They built a vocabulary from the 8k most frequent nouns and 4k most frequent adjectives in a large general language corpus and added 22k adjective-noun composites. For each item in the vocabulary, they recorded the co-occurrences with the top 10k most frequent content words (nouns, adjectives or verbs), and constructed word embeddings via singular value decomposition of the co-occurrence matrix [5]. The authors considered several models for constructing vectors of unattested adjective-noun composites, the two simplest being adding and component-wise multiplying the adjective and noun vectors. They hypothesized that the length of the vectors thus constructed can be used to distinguish acceptable and semantically deviant adjective-noun composites. Using a few hundred adjective-noun composites selected by humans for evaluation, they found that deviant composites have a shorter vector than acceptable ones, in accordance with their expectation. In contrast to their work, our approach does not require human annotation.
1We refer to these tokens as pseudowords, since their properties are modeled upon words in the lexicon and because our corpus modification approach is reminiscent of the pseudoword approach for generating labeled data for word sense disambiguation tasks in [4].
Recent theoretical work [1] has approached the problem of explaining the so-called "compositionality" property exhibited by some word embeddings. In that work, unnormalised vectors are used in their model of the word relation task. It is hoped that experimental approaches such as those described here might enable theoretical investigations to describe the role of the word vector length in the word relation tasks.
# 3 Corpus and model
Our training data is built from the Wikipedia data dump from October 2013. To remove the bulk of robot-generated pages from the training data, only pages with at least 20 monthly page views are retained.2 Stubs and disambiguation pages are also removed, leaving 463 thousand pages with a total of 482 million words. Punctuation marks and numbers were removed from the pages and all words were lower-cased. Word frequencies are summarised in Table 1. This base corpus is then modified as described in Sections 4 and 5. For recognisability, the pseudowords inserted into the corpus are upper-cased.
# 3.1 Word2vec
Word2vec, a feed-forward neural network with a single hidden layer, learns word vectors from word co-occurrences in an unsupervised manner. Word2vec comes in two versions. In the continuous bag-of-words (CBOW) model, the words appearing around a target word serve as input. That input is projected linearly onto the hidden layer and the network then attempts to predict the target word on output. Training is achieved through back-propagation. The word vectors are encoded in the weights of the first synaptic layer, "syn0". The weights of the second synaptic layer ("syn1neg", in the case of negative sampling) are typically discarded. In the other model, called skip-gram, target and context words swap places, so that the target word now serves as input, while the network attempts to predict the context words on output.
For simplicity, only the word2vec CBOW word embedding with a single set of hyperparameters is considered. Specifically, a CBOW model with a hidden layer of size 100 is trained using negative sampling with 5 negative samples, a window size of 10, a minimum frequency of 128, and 10 passes through the corpus. Sub-sampling was not used so that the influence of word frequency could be more clearly discerned. Similar experimental results were obtained using hierarchical softmax, but these are omitted for succinctness. The relatively high low-frequency cut-off is chosen to ensure that word vectors, in all but degenerate cases, receive a sufficient number of gradient updates to be meaningful. This frequency cut-off results in a vocabulary of 81117 words (only unigrams were considered).
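For illustration only, the same configuration expressed with the gensim implementation of word2vec (pre-4.0 parameter names); the experiments themselves use the original word2vec C tool, and `sentences` below is an assumed iterable of tokenized pages.

```python
from gensim.models import Word2Vec

# CBOW (sg=0) with negative sampling, matching the hyperparameters
# in the text; gensim is used here purely as an illustration.
model = Word2Vec(
    sentences,         # assumed: an iterable of tokenized pages
    size=100,          # hidden layer / embedding dimension
    window=10,         # context window
    min_count=128,     # low-frequency cut-off
    sg=0,              # CBOW rather than skip-gram
    hs=0, negative=5,  # negative sampling with 5 samples
    sample=0,          # sub-sampling disabled
    iter=10,           # passes through the corpus
)
```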
The most recent revision of word2vec was used.3 The source code for performing the experiments is made available on GitHub.4
# 3.2 Replacement procedure
In the experiments detailed below, we modify the corpus in a controlled manner by introducing pseudowords into the corpus via a replacement procedure. For the frequency experiment, the procedure is as follows. Consider a word, say cat. For each occurrence of this word, a sample i, 1 ≤ i ≤ n, is drawn from a truncated geometric distribution, and that occurrence of the word cat is replaced with the pseudoword CAT i. In this way, the word cat is replaced throughout the corpus by a family of pseudowords with varying frequencies but approximately the same co-occurrence distribution as cat. That is, all these pseudowords are used in roughly the same contexts as the original word.
2For further justification and to obtain the dataset, see https://blog.lateral.io/2015/06/the-unknown-perils-of-mining-wikipedia/
3SVN revision 42, see http://word2vec.googlecode.com/svn/trunk/
4https://github.com/benjaminwilson/word2vec-norm-experiments
| frequency band | # words | example words |
| --- | --- | --- |
| 2^0 – 2^1 | 979187 | isa220, zhangzhongzhu, yewell, gxgr |
| 2^1 – 2^2 | 416549 | wz132, prabhanjna, fesh, rudick |
| 2^2 – 2^3 | 220573 | gustafsdotter, summerfields, autodata, nagassarium |
| 2^3 – 2^4 | 134870 | futu, abertillery, shikaras, yuppy |
| 2^4 – 2^5 | 90755 | chuva, waffling, wws, andujar |
| 2^5 – 2^6 | 62581 | nagini, sultanah, charrette, wndy |
| 2^6 – 2^7 | 41359 | shew, dl, kidjo, strangeways |
| 2^7 – 2^8 | 27480 | smartly, sydow, beek, falsify |
| 2^8 – 2^9 | 17817 | legionaries, mbius, mannerism, cathars |
| 2^9 – 2^10 | 12291 | bedtime, disabling, jockeys, brougham |
| 2^10 – 2^11 | 8215 | frederic, monmouth, constituting, grabbing |
| 2^11 – 2^12 | 5509 | questionable, bosnian, pigment, coaster |
| 2^12 – 2^13 | 3809 | dismissal, torpedo, coordinates, stays |
| 2^13 – 2^14 | 2474 | liberty, hebrew, survival, muscles |
| 2^14 – 2^15 | 1579 | destruction, trophy, patrick, seats |
| 2^15 – 2^16 | 943 | draft, wood, ireland, reason |
| 2^16 – 2^17 | 495 | brought, move, sometimes, away |
| 2^17 – 2^18 | 221 | february, children, college, see |
| 2^18 – 2^19 | 83 | music, life, following, game |
| 2^19 – 2^20 | 29 | during, time, other, she |
| 2^20 – 2^21 | 17 | has, its, but, an |
| 2^21 – 2^22 | 10 | by, on, it, his |
| 2^22 – 2^23 | 4 | was, is, as, for |
| 2^23 – 2^24 | 3 | in, and, to |
| 2^24 – 2^25 | 1 | of |
| 2^25 – 2^26 | 1 | the |

Table 1: Number of words, by frequency band, as observed in the unmodified corpus.
The geometric distribution is truncated to limit the number of pseudowords inserted into the corpus. For any choice 0 < p < 1 and maximum value n > 0, the truncated geometric distribution is given by the probability density function
P_{p,n}(i) = p^{i−1} (1 − p) / (1 − p^n),   1 ≤ i ≤ n.   (1)
The factor in the denominator, which tends to unity in the limit n → ∞, assures proper normalisation. We have chosen this distribution because the probabilities decay exponentially with base p as a function of i. Of course, other distributions might equally well have been chosen for the experiments.
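A sketch of the replacement procedure for the frequency experiment. The underscore in the pseudoword names stands in for the paper's CAT i notation, and the helper names are our own.

```python
import random

def sample_trunc_geom(p, n, rng=random):
    # Draw i in {1, ..., n} from the truncated geometric
    # distribution of equation (1); weights proportional to
    # p**(i-1) suffice, since choices() normalizes them.
    weights = [p ** (i - 1) for i in range(1, n + 1)]
    return rng.choices(range(1, n + 1), weights=weights)[0]

def replace_word(tokens, word, p=0.5, n=20, rng=random):
    # Replace each occurrence of `word` by an upper-cased
    # pseudoword WORD_i, with i drawn independently per occurrence.
    return [f"{word.upper()}_{sample_trunc_geom(p, n, rng)}"
            if t == word else t
            for t in tokens]
```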
For the noise experiment, we take, instead of a geometric distribution, the distribution
P_n(i) = 2(n − i) / (n(n − 1)),   1 ≤ i ≤ n.   (2)
We have chosen this distribution for the noise experiment, because it leads to evenly spaced proportions of co-occurrence noise that cover the entire interval [0, 1].
# 4 Varying word frequency
In this first experiment, we investigate the effect of word frequency on the word embedding. Using the replacement procedure, we introduce a small number of families of pseudowords into the corpus. The pseudowords in each family vary in frequency but, replacing a single word, all share a common co-occurrence distribution. This allows us to study the role of word frequency in isolation, everything else being kept equal. We consider two types of pseudowords.
# 4.1 Pseudowords derived from existing words
We choose uniformly at random a small number of words from the unmodified vocabulary for our experiment. In order that the inserted pseudowords do not have too low a frequency, only words which occur at least 10 thousand times are chosen. We also include the high-frequency stopword the for comparison. Table 2 lists the words chosen for this experiment along with their frequencies.
The replacement procedure of Section 3.2 is then performed for each of these words, using a geometric decay rate of p = 1/2 and a maximum value of n = 20, so that the 1st pseudoword is inserted with a probability of about 0.5, the 2nd with a probability of about 0.25, and so on. This value of p is one of a range of values that ensure that, for each word, multiple pseudowords will be inserted with a frequency sufficient to survive the low-frequency cut-off of 128. A maximum value of n = 20 suffices for this choice of p, since 2^{20 + log2 128} exceeds the maximum frequency of any word in the corpus. Figure 1 illustrates the effect of these modifications on a sample text, with a family of pseudowords CAT i derived from the word cat. Notice that all occurrences of the word cat have been replaced with the pseudowords CAT i.
# 4.2 Pseudowords derived from an artificial, meaningless word
Whereas the pseudowords introduced above all replace an existing word that carries a meaning, we now include for comparison a high-frequency, meaningless word. We choose to introduce an artificial, entirely meaningless word VOID into the corpus, rather than choose an existing (stop)word whose meaninglessness is only supposed. To achieve this, we intersperse the word uniformly at random throughout the corpus so that its relative frequency is 0.005. The co-occurrence distribution of VOID thus coincides with the unconditional word distribution. The replacement procedure is then performed for this word, using the same values for p and n as above. Figure 2 shows the effect of these modifications on a sample text, where a higher relative frequency of 0.05 is used instead for illustrative purposes.
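A sketch of the interspersing step; inserting the token after each original token with probability q = r/(1 − r) gives the token a relative frequency of approximately r in the output. The exact insertion scheme is an assumption; any scheme that places the token uniformly at random would do.

```python
import random

def intersperse(tokens, word="VOID", rel_freq=0.005, rng=random):
    # Each original token is followed by `word` with probability q,
    # chosen so that q / (1 + q) equals the target relative frequency.
    q = rel_freq / (1.0 - rel_freq)
    out = []
    for t in tokens:
        out.append(t)
        if rng.random() < q:
            out.append(word)
    return out
```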
| word | frequency |
| --- | --- |
| lawsuit | 11565 |
| mercury | 13059 |
| protestant | 13404 |
| hidden | 15736 |
| squad | 24872 |
| kong | 32674 |
| awarded | 55528 |
| response | 69511 |
| the | 38012326 |

Table 2: Words chosen for the word frequency experiment, along with their frequency in the unmodified corpus.
the domestic CAT 2 was first classified as felis catus the semiferal CAT 1 a mostly outdoor CAT 1 is not owned by any one individual a pedigreed CAT 1 is one whose ancestry is recorded by a CAT 2 fancier organization a purebred CAT 2 is one whose ancestry contains only individuals of the same breed the CAT 4 skull is unusual among mammals in having very large eye sockets another unusual feature is that the CAT 1 cannot produce taurine within groups one CAT 1 is usually dominant over the others
Figure 1: Example sentences modified in the word frequency experiment as per Section 4.1, where the word cat is replaced with pseudowords CAT i using the truncated geometric distribution (1) with p = 1/2.
VOID 1 the domestic cat was first classified as felis catus the semiferal cat VOID 3 a mostly outdoor cat is not VOID 2 owned by VOID 1 any one individual a pedigreed cat is one whose ancestry is recorded by a cat fancier organization a purebred cat is one whose ancestry contains only individuals of the same breed the cat skull is unusual among VOID 1 mammals in having very large eye sockets another unusual feature is that the cat cannot produce taurine within groups one cat is usually dominant over the others
Figure 2: The same example sentences as in Figure 1, where now the meaningless word VOID, rather than the word cat, is replaced with pseudowords VOID i. For illustrative purposes, the meaningless word VOID was here interspersed with a relative frequency of 0.05.
# 4.3 Experimental results
We next present the results of the word frequency experiment. We consider the effect of word frequency on the direction and on the length of word vectors separately.
# 4.3.1 Word frequency and vector direction
Figure 3 shows the cosine similarity of pairs of vectors representing some of the pseudowords used in this experiment. Recall that the cosine similarity measures the extent to which two vectors have the same direction, taking a maximum value of 1 and a minimum value of −1. The number of different pseudowords associated with an experiment word is the number of times that its frequency can be halved and remain above the low-frequency cut-off of 128.
Consider first the vectors for the pseudowords associated to the word the. Notice that the cosine similarity of the vectors for THE 1 and THE i decreases monotonically with i, while the cosine similarity of the vectors for THE i and THE 18 increases monotonically with i. Indeed the direction of the vector THE i changes systematically, interpolating between the directions of the vectors of the highest-frequency pseudoword THE 1 and the lowest-frequency pseudoword THE 18. The same trend is apparent (though over shorter frequency ranges) for all the families of pseudowords other than that for VOID.
Consider now the vectors for pseudowords derived from the meaningless word VOID. The vectors for VOID 7, . . . , VOID 13 are approximately orthogonal to one another, just as would be expected from randomly drawn vectors in a high dimensional space. As the pseudoword VOID occurs by construction in every context, a much higher number of samples is required to capture its co-occurrence distribution, and thereby to learn its vector (the same is true, but to a lesser extent, for the stopword the). We conclude that the vectors corresponding to the lower frequency pseudowords VOID 7, . . . , VOID 13 have not been trained on a sufficient number of samples to establish their proper direction. These vectors are excluded from further analysis. The vectors for VOID 1, . . . , VOID 6, on the other hand, exhibit the smooth change in vector direction with word frequency described in the previous paragraph.
In recent work on the evaluation of word embeddings, Schnabel et al. [10] trained logistic regression models to predict whether a word was rare or frequent given only the direction of its word vector. For various word embedding methods, the prediction accuracy was measured as a function of the threshold for word rarity. It was found in the case of word2vec CBOW that word vector direction could be used to distinguish very rare words from all other words. Figure 3 is consistent with this finding, as it is apparent that word vector direction does change gradually with frequency. Schnabel et al. claim further that word vector direction must encode word frequency directly, and not indirectly via semantic information. Figure 3, considered for any particular experiment word in isolation (e.g. SQUAD), demonstrates that the variance of word vector direction with word frequency is indeed independent of co-occurrence (semantic) information, and thereby provides further evidence for this claim.
# 4.3.2 Word frequency and vector length
We next consider the effect of frequency on word vector length. Throughout, we measure vector length using the Euclidean norm. Figure 4 shows this relation for individual words, both for the word vectors, represented by the weights of the first synaptic layer, syn0, in the word2vec neural network, and for the vectors represented by the weights of the second synaptic layer, syn1neg. We include the latter, which are typically ignored, for completeness. Each line corresponds to a single word, and the points on each line indicate the frequency and vector length of the pseudowords derived from that word. For example, the six points on the line corresponding to the word protestant are labeled, from right to left, by the pseudowords PROTESTANT 1, PROTESTANT 2, . . . , PROTESTANT 6. Again, the number of points on the line is determined by the frequency of the original word. For example, the frequency of the word protestant can be halved at most 6 times so that the frequency of the last pseudoword is still above the low-frequency cut-off. Because all the points on a line share the same co-occurrence distribution, the left panel in Figure 4 demonstrates conclusively that length does indeed depend on frequency directly.
Figure 3: Heatmap of the cosine similarity of the vectors representing some of the pseudowords used in the word frequency experiment. The words other than the and VOID were chosen randomly.
Moreover, this relation is seen to be approximately linear for each word considered. Notice also that the relative positions of the lengths of the word vectors associated with the experiment words are roughly independent of the frequency band, i.e., the plotted lines rarely cross.
Observe that the lengths of the vectors representing the meaningless pseudowords VOID i are approximately constant (about 2.5). Since we already found the direction to be also constant, it is sensible to speak of the word vector of VOID irrespective of its frequency. In particular, the vector of the pseudoword VOID 1 may be taken as an approximation.
# 5 Varying co-occurrence noise
This second experiment is complementary to the first. Whereas in the first experiment we studied the effect of word frequency on word vectors for fixed co-occurrence, we here study the effect of co-occurrence noise when the frequency is fixed. As before, we do so in a controlled manner.
# 5.1 Generating noise
We take the noise distribution to be the (observed) unconditional word distribution. Noise can then be added to the co-occurrence distribution of a word by simply interspersing occurrences of that word
Figure 4: Vector length vs. frequency for pseudowords derived from a few words chosen at random. For each word, pseudowords of varying frequency but with the co-occurrence distribution of that word were inserted into the corpus, as described in Section 4. The vectors are obtained from the first synaptic layer, syn0, of the word2vec neural network. The vectors obtained from the second layer, syn1neg, are included for completeness. Legend entries are ordered by vector length of the left-most data point in the syn0 plot, descending.
| word | frequency |
| --- | --- |
| dying | 10693 |
| bridges | 12193 |
| appointment | 12546 |
| aids | 13487 |
| boss | 14105 |
| removal | 15505 |
| jobs | 21065 |
| community | 115802 |

Table 3: Words chosen for the co-occurrence noise experiment, along with the word frequencies in the unmodified corpus.
uniformly at random throughout the corpus. A word that is consistently used in a distinctive context in the unmodified corpus thus appears in the modified corpus also in completely unrelated contexts. As in Section 4, we choose a small number of words from the unmodified corpus for this experiment. Table 3 lists the words chosen, along with their frequencies in the corpus.
For each of these words, the replacement procedure of Section 3.2 is performed using the distribution (2) with n = 7. For every replacement pseudoword (e.g. CAT i), additional occurrences of this pseudoword are interspersed uniformly at random throughout the corpus, such that the final frequency of the replacement pseudoword is 2/n times that of the original word cat. For example, if the original word cat occurred 1000 times, then after the replacement procedure, CAT 2 occurs approximately 238 times, so a further (approximately) 2/7 × 1000 − 238 ≈ 48 random occurrences of CAT 2 are interspersed throughout the corpus. In this way, the word cat is removed from the corpus and replaced with a family of pseudowords CAT i, 1 ≤ i ≤ 7. These pseudowords all have the same frequency, but their co-occurrence distributions, while based on that of cat, have an increasing amount of noise. Specifically, the proportion of noise for the ith pseudoword is
1 − (n/2) P_n(i) = (i − 1) / (n − 1),   i.e., 0, 1/(n − 1), 2/(n − 1), . . . , 1   for i = 1, 2, . . . , n,
which is evenly distributed. The first pseudoword contains no noise at all, while the last pseudoword stands for pure noise. The particular choice of n assures a reasonable coverage of the interval [0, 1]. Other parameter values (or indeed other distributions) could, of course, have been used equally well.
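Putting the pieces together for the noise experiment, the sketch below replaces a word using distribution (2) and then pads each pseudoword with interspersed noise occurrences up to the common target frequency of (2/n) times that of the original word; since P_n(n) = 0, the last pseudoword consists of noise occurrences only. Helper names are our own.

```python
import random

def noisy_pseudowords(tokens, word, n=7, rng=random):
    f = tokens.count(word)
    # Replace occurrences using distribution (2): P_n(i) ∝ n - i,
    # so the last pseudoword (i = n, weight 0) is never a replacement.
    weights = [n - i for i in range(1, n + 1)]
    out = [f"{word.upper()}_{rng.choices(range(1, n + 1), weights=weights)[0]}"
           if t == word else t
           for t in tokens]
    # Intersperse extra (noise) occurrences until every pseudoword
    # reaches the common target frequency (2/n) * f.
    target = 2 * f / n
    for i in range(1, n + 1):
        pseudo = f"{word.upper()}_{i}"
        extra = int(round(target - out.count(pseudo)))
        for _ in range(max(extra, 0)):
            out.insert(rng.randrange(len(out) + 1), pseudo)
    return out
```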
Figure 5 illustrates the effect of this modification in the case where the only word chosen is cat. The original text in this case concerned both cats and dogs. Notice that the word cat has been replaced entirely in the cats section by CAT i and, moreover, that these same pseudowords appear also in the dogs section. These occurrences (and additionally, with probability, some occurrences from the cats section) constitute noise.
# 5.2 Experimental results
Figure 6 shows the cosine similarity of pairs of vectors representing some of the pseudowords used in this experiment. Remember that the first pseudoword (i = 1) in a family is without noise in its co-occurrence distribution, while the last one (i = n, with n = 7) stands for pure noise and therefore has no relation anymore to the word it derives from. The figure demonstrates that the vectors within a family only moderately deviate from the original direction defined by the first pseudoword (i = 1) when noise is added to the co-occurrence distribution. For 1 < i < 7, the deviation typically increases with the proportion of noise. The vector of the last pseudoword (i = n), associated with pure noise, is seen within each of the families to point in a completely different direction, more or less perpendicular to the original one. To understand this interpolating behavior, recall from Section 4.3 that the vector for the entirely meaningless word VOID is small but non-zero. Since the noise distribution coincides with the co-occurrence distribution of VOID, the vectors for the experiment words must tend to the word vector for VOID as the proportion of noise in their co-occurrence distributions approaches
the domestic CAT 2 was first classified as felis catus the semiferal CAT 3 a mostly outdoor CAT 4 is not CAT 2 owned by any one individual a pedigreed CAT 4 is one whose ancestry is recorded by a CAT 1 fancier organization CAT 6 a purebred CAT 3 is one whose ancestry contains only individuals of the same breed the CAT 1 skull is unusual among mammals in having very CAT 4 large eye sockets another unusual feature is that the CAT 4 cannot produce taurine within groups one CAT 2 is usually dominant over the others ... the domestic dog canis lupus familiaris is a domesticated canid which has been selectively CAT 5 bred dogs perform many roles for people such as hunting herding and pulling loads CAT 7 in domestic dogs sexual maturity begins to happen around age six to twelve months this is CAT 6 the time at CAT 3 which female dogs will have their first estrous cycle some dog breeds have acquired traits through selective breeding that interfere with reproduction
Figure 5: Example sentences modified for the co-occurrence noise experiment, where the word cat was chosen for replacement. The pseudowords were generated using the distribution (2) with n = 7.
1. This convergence to a common point is only indistinctly apparent in Figure 6, as the frequency of the experiment pseudowords is insufficient to sample the full variety of the contexts of VOID, i.e., all contexts (see Section 4.3.1).
The left panel in Figure 7 reveals that vector length varies more or less linearly with the proportion of noise in the co-occurrence distribution of the word. This figure motivates an interpretation of vector length, within a sufficiently narrow frequency band, as a measure of the absence of co-occurrence noise, or put differently, of the extent to which a word carries the meaning of a distinctive context.
# 6 Discussion
Our principal contribution has been to demonstrate that controlled experiments can be used to gain insight into a word embedding. These experiments can be carried out for any word embedding (or indeed language model), for they are achieved via modification of the training corpus only. They do not require knowledge of the model implementation. It would naturally be of interest to perform these experiments for word embeddings other than word2vec CBOW, such as skip-gram and GloVe, as well as for different hyperparameter settings.
More elaborate experiments could be carried out. For instance, by introducing pseudowords into the corpus that mix, with varying proportions, the co-occurrence distributions of two words, the path between the word vectors in the feature space could be studied. The co-occurrence noise experiment described here would be a special case of such an experiment where one of the two words was VOID.
Questions pertaining to word2vec in particular arise naturally from the results of the experiments. Figures 4 and 7, for example, demonstrate that the word vectors obtained from the first synaptic layer, syn0, have very different properties from those that could be obtained from the second layer, syn1neg. These differences warrant further investigation.
Figure 6: Heatmap of the cosine similarity of the vectors representing some of the pseudowords used in the co-occurrence noise experiment (the words were chosen at random). The largely red blocks demonstrate that for i < 7 the direction of the vectors only moderately changes when noise is added to the co-occurrence distribution. The vector of the pseudowords associated with pure noise (i = 7) is seen to be almost perpendicular to the word vectors they derive from.
Figure 7: Vector length vs. proportion of occurrences from the noise distribution for words chosen for this experiment. For each word, pseudowords of equal frequency but with increasing proportion of co-occurrence noise were inserted into the corpus, as described in Section 5. The word vectors are obtained from the first synaptic layer, syn0. The second layer, syn1neg, is included for completeness. Legend entries are ordered by vector length of the left-most data point in the syn0 plot, descending.
The co-occurrence distribution of VOID is the unconditional frequency distribution, and in this sense pure background noise. Thus the word vector of VOID is a special point in the feature space. Figure 4 shows that this point is not at the origin of the feature space, i.e., is not the zero vector. The origin, however, is implicitly the point of reference in word2vec word similarity tasks. This raises the question of whether improved performance on similarity tasks could be achieved by transforming the feature space or modifying the model such that the representation of pure noise, i.e., the vector for VOID, is at the origin of the transformed feature space.
# 7 Acknowledgments
The authors thank Tobias Schnabel for helpful discussions.
# References
[1] Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. Random walks on context spaces: Towards an explanation of the mysteries of semantic word embeddings. CoRR, abs/1502.03520, 2015.
[2] Marco Baroni, Georgiana Dinu, and Germán Kruszewski. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238–247, Baltimore, Maryland, June 2014. Association for Computational Linguistics.
[3] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, November 2011.
[4] William A Gale, Kenneth W Church, and David Yarowsky. Work on statistical methods for word sense disambiguation. In Working Notes of the AAAI Fall Symposium on Probabilistic Approaches to Natural Language, volume 54, page 60, 1992.
[5] Thomas K Landauer and Susan T. Dumais. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211–240, 1997.
[6] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013.
[7] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546, 2013.
[8] Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014), 12:1532–1543, 2014.
[9] Adriaan M. J. Schakel and Benjamin J. Wilson. Measuring word significance using distributed representations of words, 2015.
[10] Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 298–307, Lisbon, Portugal, September 2015. Association for Computational Linguistics.
[11] Eva Maria Vecchi, Marco Baroni, and Roberto Zamparelli. (Linear) Maps of the Impossible: Capturing Semantic Anomalies in Distributional Space. In Proceedings of the Workshop on Distributional Semantics and Compositionality, pages 1–9, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
| {
"id": "1510.02675"
} |
1510.01378 | Batch Normalized Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are powerful models for sequential data that
have the potential to learn long-term dependencies. However, they are
computationally expensive to train and difficult to parallelize. Recent work
has shown that normalizing intermediate representations of neural networks can
significantly improve convergence rates in feedforward neural networks . In
particular, batch normalization, which uses mini-batch statistics to
standardize features, was shown to significantly reduce training time. In this
paper, we show that applying batch normalization to the hidden-to-hidden
transitions of our RNNs doesn't help the training procedure. We also show that
when applied to the input-to-hidden transitions, batch normalization can lead
to a faster convergence of the training criterion but doesn't seem to improve
the generalization performance on both our language modelling and speech
recognition tasks. All in all, applying batch normalization to RNNs turns out
to be more challenging than applying it to feedforward networks, but certain
variants of it can still be beneficial. | http://arxiv.org/pdf/1510.01378 | César Laurent, Gabriel Pereyra, Philémon Brakel, Ying Zhang, Yoshua Bengio | stat.ML, cs.LG, cs.NE | null | null | stat.ML | 20151005 | 20151005 | 5 1 0 2
t c O 5 ] L M . t a t s [ 1 v 8 7 3 1 0 . 0 1 5 1 : v i X r a
# Batch Normalized Recurrent Neural Networks
# César Laurent∗ Université de Montréal

# Gabriel Pereyra∗ University of Southern California

Philémon Brakel Université de Montréal

Ying Zhang Université de Montréal

Yoshua Bengio† Université de Montréal
# Abstract
Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feedforward neural networks [1]. In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we show that applying batch normalization to the hidden-to-hidden transitions of our RNNs doesn't help the training procedure. We also show that when applied to the input-to-hidden transitions, batch normalization can lead to a faster convergence of the training criterion but doesn't seem to improve the generalization performance on both our language modelling and speech recognition tasks. All in all, applying batch normalization to RNNs turns out to be more challenging than applying it to feedforward networks, but certain variants of it can still be beneficial.
# 1 Introduction
Recurrent Neural Networks (RNNs) have received renewed interest due to their recent success in various domains, including speech recognition [2], machine translation [3, 4] and language modelling [5]. The so-called Long Short-Term Memory (LSTM) [6] type RNN has been particularly successful. Often, it seems beneficial to train deep architectures in which multiple RNNs are stacked on top of each other [2]. Unfortunately, the training cost for large datasets and deep architectures of stacked RNNs can be prohibitively high, often times an order of magnitude greater than for simpler models like n-grams [7]. Because of this, recent work has explored methods for parallelizing RNNs across multiple graphics cards (GPUs). In [3], an LSTM type RNN was distributed layer-wise across multiple GPUs and in [8] a bidirectional RNN was distributed across time. However, due to the sequential nature of RNNs, it is difficult to achieve linear speed-ups relative to the number of GPUs.
Another way to reduce training times is through a better conditioned optimization procedure. Standardizing or whitening of input data has long been known to improve the convergence of gradient-based optimization methods [9]. Extending this idea to multi-layered networks suggests that normalizing or whitening intermediate representations can similarly improve convergence. However, applying these transforms would be extremely costly. In [1], batch normalization was used to standardize intermediate representations by approximating the population statistics using sample-based approximations obtained from small subsets of the data, often called mini-batches, that are also used to obtain gradient approximations for stochastic gradient descent, the most commonly used optimization method for neural network training. It has also been shown that convergence can be improved even more by whitening intermediate representations instead of simply standardizing
∗ Equal contribution   † CIFAR Senior Fellow
them [10]. These methods reduced the training time of Convolutional Neural Networks (CNNs) by an order of magnitude and additionally provided a regularization effect, leading to state-of-the-art results in object recognition on the ImageNet dataset [11]. In this paper, we explore how to leverage normalization in RNNs and show that training time can be reduced.
# 2 Batch Normalization
In optimization, feature standardization or whitening is a common procedure that has been shown to improve convergence rates [9]. Extending the idea to deep neural networks, one can think of an arbitrary layer as receiving samples from a distribution that is shaped by the layer below. This distribution changes during the course of training, making any layer but the first responsible not only for learning a good representation but also for adapting to a changing input distribution. This distribution variation is termed Internal Covariate Shift, and reducing it is hypothesized to help the training procedure [1].
To reduce this internal covariate shift, we could whiten each layer of the network. However, this often turns out to be too computationally demanding. Batch normalization [1] approximates the whitening by standardizing the intermediate representations using the statistics of the current mini- batch. Given a mini-batch x, we can calculate the sample mean and sample variance of each feature k along the mini-batch axis
x̄_k = (1/m) Σ_{i=1}^{m} x_{i,k},   (1)

σ_k^2 = (1/m) Σ_{i=1}^{m} (x_{i,k} − x̄_k)^2,   (2)
where m is the size of the mini-batch. Using these statistics, we can standardize each feature as follows
x̂_k = (x_k − x̄_k) / √(σ_k^2 + ε),   (3)

where ε is a small positive constant to improve numerical stability.
However, standardizing the intermediate activations reduces the representational power of the layer. To account for this, batch normalization introduces additional learnable parameters γ and β, which respectively scale and shift the data, leading to a layer of the form
BN(x_k) = γ_k x̂_k + β_k.   (4)
By setting γ_k to σ_k and β_k to x̄_k, the network can recover the original layer representation. So, for a standard feedforward layer in a neural network
y = φ(Wx + b),   (5)
where W is the weight matrix, b is the bias vector, x is the input of the layer and φ is an arbitrary activation function, batch normalization is applied as follows
y = φ(BN(Wx)).   (6)
Note that the bias vector has been removed, since its effect is cancelled by the standardization. Since the normalization is now part of the network, the back propagation procedure needs to be adapted to propagate gradients through the mean and variance computations as well.
At test time, we can't use the statistics of the mini-batch. Instead, we can estimate them by either forwarding several training mini-batches through the network and averaging their statistics, or by maintaining a running average calculated over each mini-batch seen during training.
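To make the procedure concrete, here is a minimal NumPy sketch of the training-time forward pass described by equations (1)-(4); the running-average bookkeeping needed at test time is omitted, and the function name is our own.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization over a (batch, features) mini-batch."""
    mean = x.mean(axis=0)                    # per-feature sample mean, eq. (1)
    var = x.var(axis=0)                      # per-feature sample variance, eq. (2)
    x_hat = (x - mean) / np.sqrt(var + eps)  # standardize, eq. (3)
    return gamma * x_hat + beta              # learnable scale and shift, eq. (4)
```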
# 3 Recurrent Neural Networks
Recurrent Neural Networks (RNNs) extend Neural Networks to sequential data. Given an input sequence of vectors (x1, . . . , xT ), they produce a sequence of hidden states (h1, . . . , hT ), which are computed at time step t as follows
$$h_t = \phi(W_h h_{t-1} + W_x x_t), \qquad (7)$$
where W_h is the recurrent weight matrix, W_x is the input-to-hidden weight matrix, and φ is an arbitrary activation function.
If we have access to the whole input sequence, we can use information not only from the past time steps, but also from the future ones, allowing for bidirectional RNNs [12]

$$\overrightarrow{h}_t = \phi(\overrightarrow{W}_h \overrightarrow{h}_{t-1} + \overrightarrow{W}_x x_t), \qquad (8)$$

$$\overleftarrow{h}_t = \phi(\overleftarrow{W}_h \overleftarrow{h}_{t+1} + \overleftarrow{W}_x x_t), \qquad (9)$$

$$h_t = [\overrightarrow{h}_t : \overleftarrow{h}_t], \qquad (10)$$
where [x : y] denotes the concatenation of x and y. Finally, we can stack RNNs by using h as the input to another RNN, creating deeper architectures [13]
$$h^l_t = \phi(W_h h^l_{t-1} + W_x h^{l-1}_t). \qquad (11)$$
In vanilla RNNs, the activation function φ is usually a sigmoid function, such as the hyperbolic tangent. Training such networks is known to be particularly difficult, because of vanishing and exploding gradients [14].
# 3.1 Long Short-Term Memory
A commonly used recurrent structure is the Long Short-Term Memory (LSTM). It addresses the vanishing gradient problem commonly found in vanilla RNNs by incorporating gating functions into its state dynamics [6]. At each time step, an LSTM maintains a hidden vector h and a cell vector c responsible for controlling state updates and outputs. More concretely, we define the computation at time step t as follows [15]:
$$i_t = \mathrm{sigmoid}(W_{hi} h_{t-1} + W_{xi} x_t) \qquad (12)$$

$$f_t = \mathrm{sigmoid}(W_{hf} h_{t-1} + W_{xf} x_t) \qquad (13)$$

$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_{hc} h_{t-1} + W_{xc} x_t) \qquad (14)$$

$$o_t = \mathrm{sigmoid}(W_{ho} h_{t-1} + W_{xo} x_t + W_{co} c_t) \qquad (15)$$

$$h_t = o_t \odot \tanh(c_t) \qquad (16)$$
where sigmoid(·) is the logistic sigmoid function, tanh is the hyperbolic tangent function, W_h· are the recurrent weight matrices and W_x· are the input-to-hidden weight matrices. i_t, f_t and o_t are respectively the input, forget and output gates, and c_t is the cell.
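As an illustration, a minimal NumPy sketch of one LSTM step following equations (12)-(16); the weight-dictionary layout and function name are our own, and biases are omitted for brevity.

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, W):
    """One LSTM step; W maps keys like 'hi', 'xi' to weight matrices."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    i = sigmoid(W['hi'] @ h_prev + W['xi'] @ x_t)                   # input gate,   eq. (12)
    f = sigmoid(W['hf'] @ h_prev + W['xf'] @ x_t)                   # forget gate,  eq. (13)
    c = f * c_prev + i * np.tanh(W['hc'] @ h_prev + W['xc'] @ x_t)  # cell state,   eq. (14)
    o = sigmoid(W['ho'] @ h_prev + W['xo'] @ x_t + W['co'] @ c)     # output gate,  eq. (15)
    h = o * np.tanh(c)                                              # hidden state, eq. (16)
    return h, c
```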
# 4 Batch Normalization for RNNs
From equation 6, an analogous way to apply batch normalization to an RNN would be as follows:
$$h_t = \phi(BN(W_h h_{t-1} + W_x x_t)). \qquad (17)$$
However, in our experiments, when batch normalization was applied in this fashion, it didn't help the training procedure (see appendix A for more details). Instead we propose to apply batch normalization only to the input-to-hidden transition (W_x x_t), i.e. as follows:
$$h_t = \phi(W_h h_{t-1} + BN(W_x x_t)). \qquad (18)$$
This idea is similar to the way dropout [16] can be applied to RNNs [17]: batch normalization is applied only on the vertical connections (i.e. from one layer to another) and not on the horizontal connections (i.e. within the recurrent layer). We use the same principle for LSTMs: batch normalization is only applied after multiplication with the input-to-hidden weight matrices W_x·.
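A minimal sketch of one such step, reusing the batch_norm function (and numpy import) from the Section 2 sketch; the function name and argument layout are our own.

```python
def bn_rnn_step(x_t, h_prev, Wx, Wh, gamma, beta, phi=np.tanh):
    """One recurrent step with batch normalization on the input-to-hidden
    transition only, as in equation (18)."""
    z = batch_norm(x_t @ Wx, gamma, beta)  # normalize the vertical connection
    return phi(z + h_prev @ Wh)            # horizontal (recurrent) path untouched
```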
| Model | Train FCE | Train FER | Dev FCE | Dev FER |
|---|---|---|---|---|
| BiRNN | 0.95 | 0.28 | 1.11 | 0.33 |
| BiRNN (BN) | 0.73 | 0.22 | 1.19 | 0.34 |
Table 1: Best framewise cross entropy (FCE) and frame error rate (FER) on the training and development sets for both networks.
# 4.1 Frame-wise and Sequence-wise Normalization
In experiments where we don't have access to the future frames, like in language modelling where the goal is to predict the next character, we are forced to compute the normalization at each time step
$$\hat{x}_{k,t} = \frac{x_{k,t} - \bar{x}_{k,t}}{\sqrt{\sigma_{k,t}^2 + \epsilon}}. \qquad (19)$$
We'll refer to this as frame-wise normalization.
In applications like speech recognition, we usually have access to the entire sequences. However, those sequences may have variable length. Usually, when using mini-batches, the smaller sequences are padded with zeroes to match the size of the longest sequence of the mini-batch. In such setups we can't use frame-wise normalization, because the number of unpadded frames decreases along the time axis, leading to increasingly poorer statistics estimates. To solve this problem, we apply a sequence-wise normalization, where we compute the mean and variance of each feature along both the time and batch axis using
$$\bar{x}_k = \frac{1}{n} \sum_{i=1}^{m} \sum_{t=1}^{T} x_{i,t,k}, \qquad (20)$$
$$\sigma_k^2 = \frac{1}{n} \sum_{i=1}^{m} \sum_{t=1}^{T} (x_{i,t,k} - \bar{x}_k)^2, \qquad (21)$$
where T is the length of each sequence and n is the total number of unpadded frames in the mini-batch. We'll refer to this type of normalization as sequence-wise normalization.
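A minimal sketch of these masked statistics, assuming zero-padded inputs and an explicit padding mask; the mask argument is our own device, since the text only specifies that padded frames must be excluded.

```python
import numpy as np

def sequence_wise_stats(x, mask):
    """Per-feature mean and variance over batch and time, eqs. (20)-(21).
    x: (batch, time, features) zero-padded; mask: (batch, time), 1 = real frame."""
    n = mask.sum()                                     # total unpadded frames
    mean = (x * mask[..., None]).sum(axis=(0, 1)) / n                 # eq. (20)
    var = (((x - mean) ** 2) * mask[..., None]).sum(axis=(0, 1)) / n  # eq. (21)
    return mean, var
```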
# 5 Experiments
We ran experiments on a speech recognition task and a language modelling task. The models were implemented using Theano [18] and Blocks [19].
# 5.1 Speech Alignment Prediction
For the speech task, we used the Wall Street Journal (WSJ) [20] speech corpus. We used the si284 split as training set and evaluated our models on the dev93 development set. The raw audio was transformed into 40 dimensional log mel filter-banks (plus energy), with deltas and delta-deltas. As in [21], the forced alignments were generated from the Kaldi recipe tri4b, leading to 3546 clustered triphone states. Because of memory issues, we removed from the training set the sequences that were longer than 1300 frames (4.6% of the set), leading to a training set of 35746 sequences.
The baseline model (BL) is a stack of 5 bidirectional LSTM layers with 250 hidden units each, followed by a size 3546 softmax output layer. All the weights were initialized using the Glorot [22] scheme and all the biases were set to zero. For the batch normalized model (BN) we applied sequence-wise normalization to each LSTM of the baseline model. Both networks were trained using standard SGD with momentum, with a fixed learning rate of 1e-4 and a fixed momentum factor of 0.9. The mini-batch size is 24.
Figure 1: Frame-wise cross entropy on WSJ for the baseline (blue) and batch normalized (red) networks. The dotted lines are the training curves and the solid lines are the validation curves.
# 5.2 Language Modeling
We used the Penn Treebank (PTB) [23] corpus for our language modeling experiments. We use the standard split (929k training words, 73k validation words, and 82k test words) and vocabulary of 10k words. We train a small, medium and large LSTM as described in [17].
All models consist of two stacked LSTM layers and are trained with stochastic gradient descent (SGD) with a learning rate of 1 and a mini-batch size of 32.
The small LSTM has two layers of 200 memory cells, with parameters being initialized from a uniform distribution with range [-0.1, 0.1]. We back propagate across 20 time steps and the gradients are scaled according to the maximum norm of the gradients whenever the norm is greater than 10. We train for 15 epochs and halve the learning rate every epoch after the 6th.
The medium LSTM has a hidden size of 650 for both layers, with parameters being initialized from a uniform distribution with range [-0.05, 0.05]. We apply dropout with probability of 50% between all layers. We back propagate across 35 time steps and gradients are scaled according to the maximum norm of the gradients whenever the norm is greater than 5. We train for 40 epochs and divide the learning rate by 1.2 every epoch after the 6th.
The Large LSTM has two layers of 1500 memory cells, with parameters being initialized from a uniform distribution with range [-0.04, 0.04]. We apply dropout between all layers. We back propagate across 35 time steps and gradients are scaled according to the maximum norm of the gradients whenever the norm is greater than 5. We train for 55 epochs and divide the learning rate by 1.15 every epoch after the 15th.
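All three configurations rescale gradients by their global norm; a minimal sketch of that rule (the function name is ours):

```python
import numpy as np

def scale_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays when their global norm exceeds
    max_norm (10 for the small model, 5 for the medium and large ones)."""
    norm = np.sqrt(sum((g ** 2).sum() for g in grads))
    if norm > max_norm:
        grads = [g * (max_norm / norm) for g in grads]
    return grads
```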
# 6 Results and Discussion
Figure 1 shows the training and development framewise cross entropy curves for both networks of the speech experiments. As we can see, the batch normalized network trains faster (at some points about twice as fast as the baseline), but overfits more. The best results, reported in table 1, are comparable to the ones obtained in [21].
Figure 2 shows the training and validation perplexity for the large LSTM network of the language experiment. We can also observe that the training is faster when we apply batch normalization to
Figure 2: Large LSTM on Penn Treebank for the baseline (blue) and the batch normalized (red) networks. The dotted lines are the training curves and the solid lines are the validation curves.
| Model | Train | Valid |
|---|---|---|
| Small LSTM | 78.5 | 119.2 |
| Small LSTM (BN) | 62.5 | 120.9 |
| Medium LSTM | 49.1 | 89.0 |
| Medium LSTM (BN) | 41.0 | 90.6 |
| Large LSTM | 49.3 | 81.8 |
| Large LSTM (BN) | 35.0 | 97.4 |
Table 2: Best perplexity on training and development sets for LSTMs on Penn Treebank.
the network. However, it also overfits more than the baseline version. The best results are reported in table 2.
For both experiments we observed faster training and greater overfitting when using our version of batch normalization. This last effect is less prevalent in the speech experiment, perhaps because the training set is much larger, or perhaps because the frame-wise normalization is less effective than the sequence-wise one: in the language modeling task we predict one character at a time, whereas we predict the whole sequence in the speech experiment.
Batch normalization also allows for higher learning rates in feedforward networks; however, since we only applied batch normalization to parts of the network, higher learning rates didn't work well because they affected un-normalized parts as well.
Our experiments suggest that applying batch normalization to the input-to-hidden connections in RNNs can improve the conditioning of the optimization problem. Future directions include whitening input-to-hidden connections [10] and normalizing the hidden state instead of just a portion of the network.
# Acknowledgments
Part of this work was funded by Samsung. We also want to thank Nervana Systems for providing GPUs.
# References
[1] Sergey Ioffe and Christian Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv preprint arXiv:1502.03167, 2015.

[2] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton, "Speech recognition with deep recurrent neural networks," in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013, pp. 6645–6649.

[3] Ilya Sutskever, Oriol Vinyals, and Quoc Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112.

[4] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.

[5] Tomáš Mikolov, "Statistical language models based on neural networks," Presentation at Google, Mountain View, 2nd April, 2012.

[6] Sepp Hochreiter and Jürgen Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.

[7] Will Williams, Niranjani Prasad, David Mrva, Tom Ash, and Tony Robinson, "Scaling recurrent neural network language models," arXiv preprint arXiv:1502.00512, 2015.

[8] Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al., "Deep Speech: Scaling up end-to-end speech recognition," arXiv preprint arXiv:1412.5567, 2014.

[9] Yann A. LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller, "Efficient backprop," in Neural Networks: Tricks of the Trade, pp. 9–48. Springer, 2012.

[10] Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, and Koray Kavukcuoglu, "Natural neural networks," arXiv preprint arXiv:1507.00210, 2015.

[11] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," International Journal of Computer Vision (IJCV), pp. 1–42, April 2015.

[12] Mike Schuster and Kuldip K. Paliwal, "Bidirectional recurrent neural networks," Signal Processing, IEEE Transactions on, vol. 45, no. 11, pp. 2673–2681, 1997.

[13] Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio, "How to construct deep recurrent neural networks," arXiv preprint arXiv:1312.6026, 2013.

[14] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio, "On the difficulty of training recurrent neural networks," arXiv preprint arXiv:1211.5063, 2012.

[15] Felix A. Gers, Nicol N. Schraudolph, and Jürgen Schmidhuber, "Learning precise timing with LSTM recurrent networks," The Journal of Machine Learning Research, vol. 3, pp. 115–143, 2003.

[16] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.

[17] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals, "Recurrent neural network regularization," arXiv preprint arXiv:1409.2329, 2014.

[18] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio, "Theano: new features and speed improvements," Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.

[19] B. van Merriënboer, D. Bahdanau, V. Dumoulin, D. Serdyuk, D. Warde-Farley, J. Chorowski, and Y. Bengio, "Blocks and Fuel: Frameworks for deep learning," arXiv e-prints, June 2015.
| Model | Train | Valid |
|---|---|---|
| Best Baseline | 1.05 | 1.10 |
| Best Batch Norm | 1.07 | 1.11 |
Table 3: Best frame-wise cross entropy for the best baseline network and for the best batch normalized one.
[20] Douglas B. Paul and Janet M. Baker, "The design for the Wall Street Journal-based CSR corpus," in Proceedings of the Workshop on Speech and Natural Language. Association for Computational Linguistics, 1992, pp. 357–362.

[21] Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed, "Hybrid speech recognition with deep bidirectional LSTM," in Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on. IEEE, 2013, pp. 273–278.

[22] Xavier Glorot and Yoshua Bengio, "Understanding the difficulty of training deep feedforward neural networks," in International Conference on Artificial Intelligence and Statistics, 2010, pp. 249–256.

[23] Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini, "Building a large annotated corpus of English: The Penn Treebank," Computational Linguistics, vol. 19, no. 2, pp. 313–330, 1993.
# A Experimentations with Normalization Inside the Recurrence
In our first experiments we investigated if batch normalization can be applied in the same way as in a feedforward network (equation 17). We tried it on a language modelling task on the Penn Treebank dataset, where the goal was to predict the next characters of a fixed length sequence of 100 symbols.
The network is composed of a lookup table of dimension 250 followed by 3 layers of simple recurrent networks with 250 hidden units each. A dimension 50 softmax layer is added on the top. In the batch normalized networks, we apply batch normalization to the hidden-to-hidden transition, as in equation 17, meaning that we compute one mean and one variance for each of the 250 features at each time step. For inference, we also keep track of the statistics for each time step. However, we used the same γ and β for each time step.
The lookup table is randomly initialized using an isotropic Gaussian with zero mean and unit variance. All the other matrices of the network are initialized using the Glorot scheme [22] and all the biases are set to zero. We used SGD with momentum. We performed a random search over the learning rate (distributed in the range [0.0001, 1]), the momentum (with possible values of 0.5, 0.8, 0.9, 0.95, 0.995), and the batch size (32, 64 or 128). We let the experiment run for 20 epochs. A total of 52 experiments were performed.
In every experiment that we ran, the performances of batch normalized networks were always slightly worse than (or at best equivalent to) the baseline networks, except for the ones where the learning rate is too high and the baseline diverges while the batch normalized one is still able to train. Figure 3 shows an example of a working experiment. We observed that in practically all the experiments that converged, the normalization was actually harming the performance. Table 3 shows the results of the best baseline and batch normalized networks. We can observe that both best networks have similar performances. The settings for the best baseline are: learning rate 0.42, momentum 0.95, batch size 32. The settings for the best batch normalized network are: learning rate 3.71e-4, momentum 0.995, batch size 128.
Those results suggest that this way of applying batch normalization in the recurrent networks is not optimal. It seems that batch normalization hurts the training procedure. It may be due to the fact that we estimate new statistics at each time step, or because of the repeated application of γ and β during the recurrent procedure, which could lead to exploding or vanishing gradients. We will investigate more in depth what happens in the batch normalized networks, especially during the back-propagation.
Figure 3: Typical training curves obtained during the grid search. The baseline network is in blue and batch normalized one in red. For this experiment, the hyper-parameters are: learning rate 7.8e-4, momentum 0.5, batch size 64.
Published as a conference paper at ICLR 2016
DEEP COMPRESSION: COMPRESSING DEEP NEURAL NETWORKS WITH PRUNING, TRAINED QUANTIZATION AND HUFFMAN CODING
# Song Han Stanford University, Stanford, CA 94305, USA songhan@stanford.edu
# Huizi Mao Tsinghua University, Beijing, 100084, China mhz12@mails.tsinghua.edu.cn
William J. Dally Stanford University, Stanford, CA 94305, USA NVIDIA, Santa Clara, CA 95050, USA dally@stanford.edu
# ABSTRACT
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35× to 49× without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9× to 13×; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35×, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49×, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has 3× to 4× layerwise speedup and 3× to 7× better energy efficiency.
# 1 INTRODUCTION
Deep neural networks have evolved to the state-of-the-art technique for computer vision tasks (Krizhevsky et al., 2012)(Simonyan & Zisserman, 2014). Though these neural networks are very powerful, the large number of weights consumes considerable storage and memory bandwidth. For example, the AlexNet Caffemodel is over 200MB, and the VGG-16 Caffemodel is over 500MB (BVLC). This makes it difficult to deploy deep neural networks on mobile systems.
First, for many mobile-first companies such as Baidu and Facebook, various apps are updated via different app stores, and they are very sensitive to the size of the binary files. For example, App Store has the restriction "apps above 100 MB will not download until you connect to Wi-Fi". As a result, a feature that increases the binary size by 100MB will receive much more scrutiny than one that increases it by 10MB. Although having deep neural networks running on mobile has many great
Figure 1: The three stage compression pipeline: pruning, quantization and Huffman coding. Pruning reduces the number of weights by 10×, while quantization further improves the compression rate: between 27× and 31×. Huffman coding gives more compression: between 35× and 49×. The compression rate already includes the meta-data for sparse representation. The compression scheme doesn't incur any accuracy loss.
features such as better privacy, less network bandwidth and real time processing, the large storage overhead prevents deep neural networks from being incorporated into mobile apps.
The second issue is energy consumption. Running large neural networks requires a lot of memory bandwidth to fetch the weights and a lot of computation to do dot products, which in turn consumes considerable energy. Mobile devices are battery constrained, making power hungry applications such as deep neural networks hard to deploy.
Energy consumption is dominated by memory access. Under 45nm CMOS technology, a 32 bit floating point add consumes 0.9pJ, a 32bit SRAM cache access takes 5pJ, while a 32bit DRAM memory access takes 640pJ, which is 3 orders of magnitude more than an add operation. Large networks do not fit in on-chip storage and hence require the more costly DRAM accesses. Running a 1 billion connection neural network, for example, at 20fps would require (20Hz)(1G)(640pJ) = 12.8W just for DRAM access, well beyond the power envelope of a typical mobile device.
Our goal is to reduce the storage and energy required to run inference on such large networks so they can be deployed on mobile devices. To achieve this goal, we present "deep compression": a three-stage pipeline (Figure 1) to reduce the storage required by neural networks in a manner that preserves the original accuracy. First, we prune the network by removing the redundant connections, keeping only the most informative connections. Next, the weights are quantized so that multiple connections share the same weight, thus only the codebook (effective weights) and the indices need to be stored. Finally, we apply Huffman coding to take advantage of the biased distribution of effective weights.
Our main insight is that pruning and trained quantization are able to compress the network without interfering with each other, thus leading to a surprisingly high compression rate. It makes the required storage so small (a few megabytes) that all weights can be cached on chip instead of going to off-chip DRAM, which is energy consuming. Based on "deep compression", the EIE hardware accelerator (Han et al., 2016) was later proposed that works on the compressed model, achieving significant speedup and energy efficiency improvement.
# 2 NETWORK PRUNING
Network pruning has been widely studied to compress CNN models. In early work, network pruning proved to be a valid way to reduce the network complexity and over-fitting (LeCun et al., 1989; Hanson & Pratt, 1989; Hassibi et al., 1993; Ström, 1997). Recently Han et al. (2015) pruned state-of-the-art CNN models with no loss of accuracy. We build on top of that approach. As shown on the left side of Figure 1, we start by learning the connectivity via normal network training. Next, we prune the small-weight connections: all connections with weights below a threshold are removed from the network. Finally, we retrain the network to learn the final weights for the remaining sparse connections. Pruning reduced the number of parameters by 9× and 13× for the AlexNet and VGG-16 models.
Figure 2: Representing the matrix sparsity with relative index. Padding a filler zero prevents overflow.
Figure 3: Weight sharing by scalar quantization (top) and centroids fine-tuning (bottom).
We store the sparse structure that results from pruning using compressed sparse row (CSR) or compressed sparse column (CSC) format, which requires 2a + n + 1 numbers, where a is the number of non-zero elements and n is the number of rows or columns.
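For illustration, a small SciPy sketch of the CSR layout and its 2a + n + 1 storage cost; the toy matrix is our own example.

```python
import numpy as np
from scipy import sparse

# Toy pruned weight matrix: a = 3 nonzeros, n = 3 rows.
pruned = np.array([[0.0, 1.5, 0.0, 0.0],
                   [0.0, 0.0, 0.0, -2.0],
                   [0.3, 0.0, 0.0, 0.0]])
W = sparse.csr_matrix(pruned)
print(W.data)     # a values:           [ 1.5 -2.   0.3]
print(W.indices)  # a column indices:   [1 3 0]
print(W.indptr)   # n + 1 row pointers: [0 1 2 3]
```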
To compress further, we store the index difference instead of the absolute position, and encode this difference in 8 bits for conv layers and 5 bits for fc layers. When we need an index difference larger than the bound, we use the zero padding solution shown in Figure 2: in case the difference exceeds 8, the largest 3-bit (as an example) unsigned number, we add a filler zero.
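A sketch of one plausible form of this encoding (the names and the exact bound handling are our own reading of Figure 2):

```python
def encode_relative(positions, values, max_span=8):
    """Encode sorted nonzero positions as bounded differences, inserting
    filler zeros whenever a gap exceeds the largest encodable span."""
    diffs, vals, prev = [], [], 0
    for pos, v in zip(positions, values):
        gap = pos - prev
        while gap > max_span:       # pad until the remaining gap fits
            diffs.append(max_span)
            vals.append(0.0)        # filler zero carries no weight
            gap -= max_span
        diffs.append(gap)
        vals.append(v)
        prev = pos
    return diffs, vals
```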
# 3 TRAINED QUANTIZATION AND WEIGHT SHARING
Network quantization and weight sharing further compress the pruned network by reducing the number of bits required to represent each weight. We limit the number of effective weights we need to store by having multiple connections share the same weight, and then fine-tune those shared weights.
Weight sharing is illustrated in Figure 3. Suppose we have a layer that has 4 input neurons and 4 output neurons; the weight is a 4 × 4 matrix. On the top left is the 4 × 4 weight matrix, and on the bottom left is the 4 × 4 gradient matrix. The weights are quantized to 4 bins (denoted with 4 colors); all the weights in the same bin share the same value, thus for each weight, we then need to store only a small index into a table of shared weights. During update, all the gradients are grouped by the color and summed together, multiplied by the learning rate and subtracted from the shared centroids from the last iteration. For pruned AlexNet, we are able to quantize to 8-bits (256 shared weights) for each CONV layer, and 5-bits (32 shared weights) for each FC layer without any loss of accuracy.
To calculate the compression rate, given k clusters, we only need log2(k) bits to encode the index. In general, for a network with n connections and each connection is represented with b bits, constraining the connections to have only k shared weights will result in a compression rate of:
$$r = \frac{nb}{n \log_2(k) + kb} \qquad (1)$$
For example, Figure 3 shows the weights of a single layer neural network with four input units and four output units. There are 4 × 4 = 16 weights originally but there are only 4 shared weights: similar weights are grouped together to share the same value. Originally we need to store 16 weights each
Figure 4: Left: Three different methods for centroids initialization. Right: Distribution of weights (blue) and distribution of codebook before (green cross) and after fine-tuning (red dot).
has 32 bits; now we need to store only 4 effective weights (blue, green, red and orange), each with 32 bits, together with 16 2-bit indices, giving a compression rate of 16 × 32 / (4 × 32 + 2 × 16) = 3.2.
3.1 WEIGHT SHARING
We use k-means clustering to identify the shared weights for each layer of a trained network, so that all the weights that fall into the same cluster will share the same weight. Weights are not shared across layers. We partition n original weights W = {w_1, w_2, ..., w_n} into k clusters C = {c_1, c_2, ..., c_k}, n ≫ k, so as to minimize the within-cluster sum of squares (WCSS):
$$\arg\min_{C} \sum_{i=1}^{k} \sum_{w \in c_i} \left| w - c_i \right|^2 \qquad (2)$$
Different from HashNet (Chen et al., 2015), where weight sharing is determined by a hash function before the network sees any training data, our method determines weight sharing after a network is fully trained, so that the shared weights approximate the original network.
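A minimal sketch of this per-layer clustering using scikit-learn; the function name and the choice of library are ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_layer(weights, k, centroids):
    """Cluster a trained layer's weights so all weights in a cluster share
    one value (eq. (2)); returns per-weight indices and the codebook."""
    km = KMeans(n_clusters=k, init=centroids.reshape(-1, 1), n_init=1)
    idx = km.fit_predict(weights.reshape(-1, 1))  # small index per connection
    codebook = km.cluster_centers_.ravel()        # k effective shared weights
    return idx.reshape(weights.shape), codebook
```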
3.2 INITIALIZATION OF SHARED WEIGHTS
Centroid initialization impacts the quality of clustering and thus affects the network's prediction accuracy. We examine three initialization methods: Forgy (random), density-based, and linear initialization. In Figure 4 we plotted the original weights' distribution of the conv3 layer in AlexNet (CDF in blue, PDF in red). The weights form a bimodal distribution after network pruning. On the bottom it plots the effective weights (centroids) with 3 different initialization methods (shown in blue, red and yellow). In this example, there are 13 clusters.
Forgy (random) initialization randomly chooses k observations from the data set and uses these as the initial centroids. The initialized centroids are shown in yellow. Since there are two peaks in the bimodal distribution, the Forgy method tends to concentrate around those two peaks.
Density-based initialization linearly spaces the CDF of the weights in the y-axis, then finds the horizontal intersection with the CDF, and finally finds the vertical intersection on the x-axis, which becomes a centroid, as shown in blue dots. This method makes the centroids denser around the two peaks, but more scattered than the Forgy method.
Linear initialization linearly spaces the centroids between the [min, max] of the original weights. This initialization method is invariant to the distribution of the weights and is the most scattered compared with the former two methods.
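A sketch of the three schemes on a 1-D weight array (the function name is ours; the density-based variant is implemented via quantiles, which matches the CDF construction described above):

```python
import numpy as np

def init_centroids(weights, k, method='linear'):
    """Forgy, density-based, and linear centroid initialization."""
    if method == 'forgy':      # k weights sampled uniformly at random
        return np.random.choice(weights, size=k, replace=False)
    if method == 'density':    # equally spaced points on the CDF
        return np.quantile(weights, np.linspace(0.0, 1.0, k))
    return np.linspace(weights.min(), weights.max(), k)  # linear spacing
```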
Larger weights play a more important role than smaller weights (Han et al., 2015), but there are fewer of these large weights. Thus for both Forgy initialization and density-based initialization, very few centroids have large absolute value which results in poor representation of these few large weights. Linear initialization does not suffer from this problem. The experiment section compares the accuracy
of different initialization methods after clustering and fine-tuning, showing that linear initialization works best.

Figure 5: Distribution for weight (left) and index (right). The distribution is biased.
3.3 FEED-FORWARD AND BACK-PROPAGATION
The centroids of the one-dimensional k-means clustering are the shared weights. There is one level of indirection during the feed-forward and back-propagation phases when looking up the weight table. An index into the shared weight table is stored for each connection. During back-propagation, the gradient for each shared weight is calculated and used to update the shared weight. This procedure is shown in Figure 3.
We denote the loss by L, the weight in the ith column and jth row by Wij, the centroid index of element Wi,j by Iij, the kth centroid of the layer by Ck. By using the indicator function 1(.), the gradient of the centroids is calculated as:
$$\frac{\partial \mathcal{L}}{\partial C_k} = \sum_{i,j} \frac{\partial \mathcal{L}}{\partial W_{ij}} \frac{\partial W_{ij}}{\partial C_k} = \sum_{i,j} \frac{\partial \mathcal{L}}{\partial W_{ij}} \, \mathbb{1}(I_{ij} = k) \qquad (3)$$
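A minimal sketch of this gradient grouping (the function name is ours); the centroid update is then an ordinary SGD step on the returned vector.

```python
import numpy as np

def centroid_gradients(grad_W, I, k):
    """dL/dC_k per eq. (3): sum the weight gradients over all positions
    whose cluster index I equals k."""
    return np.array([grad_W[I == j].sum() for j in range(k)])

# Usage sketch: C -= learning_rate * centroid_gradients(grad_W, I, len(C))
```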
# 4 HUFFMAN CODING
A Huffman code is an optimal prefix code commonly used for lossless data compression (Van Leeuwen, 1976). It uses variable-length codewords to encode source symbols. The table is derived from the occurrence probability for each symbol. More common symbols are represented with fewer bits.
Figure 5 shows the probability distribution of quantized weights and the sparse matrix index of the last fully connected layer in AlexNet. Both distributions are biased: most of the quantized weights are distributed around the two peaks; the sparse matrix index difference is rarely above 20. Experiments show that Huffman coding these non-uniformly distributed values saves 20%–30% of network storage.
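A compact sketch of building such a code from symbol counts with a binary heap (the function name is ours; it assumes a non-empty input with at least two distinct symbols):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Optimal prefix code: frequent symbols receive shorter codewords."""
    heap = [[n, i, {s: ''}] for i, (s, n) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tie = len(heap)  # unique tiebreaker keeps heap comparisons on integers
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        codes = {s: '0' + c for s, c in lo[2].items()}
        codes.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, codes])
        tie += 1
    return heap[0][2]
```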
# 5 EXPERIMENTS
We pruned, quantized, and Huffman encoded four networks: two on MNIST and two on ImageNet data-sets. The network parameters and accuracy¹ before and after pruning are shown in Table 1. The compression pipeline saves network storage by 35× to 49× across different networks without loss of accuracy. The total size of AlexNet decreased from 240MB to 6.9MB, which is small enough to be put into on-chip SRAM, eliminating the need to store the model in energy-consuming DRAM memory.
Training is performed with the Caffe framework (Jia et al., 2014). Pruning is implemented by adding a mask to the blobs to mask out the update of the pruned connections. Quantization and weight sharing are implemented by maintaining a codebook structure that stores the shared weights, and grouping the gradients by index after calculating the gradient of each layer. Each shared weight is updated with all the gradients that fall into that bucket. Huffman coding doesn't require training and is implemented offline after all the fine-tuning is finished.
5.1 LENET-300-100 AND LENET-5 ON MNIST
We first experimented on the MNIST dataset with the LeNet-300-100 and LeNet-5 networks (LeCun et al., 1998). LeNet-300-100 is a fully connected network with two hidden layers, with 300 and 100
¹ Reference model is from the Caffe model zoo; accuracy is measured without data augmentation.
Table 1: The compression pipeline can save 35× to 49× parameter storage with no loss of accuracy.
| Network | Top-1 Error | Top-5 Error | Parameters | Compress Rate |
|---|---|---|---|---|
| LeNet-300-100 Ref | 1.64% | - | 1070 KB | |
| LeNet-300-100 Compressed | 1.58% | - | 27 KB | 40× |
| LeNet-5 Ref | 0.80% | - | 1720 KB | |
| LeNet-5 Compressed | 0.74% | - | 44 KB | 39× |
| AlexNet Ref | 42.78% | 19.73% | 240 MB | |
| AlexNet Compressed | 42.78% | 19.70% | 6.9 MB | 35× |
| VGG-16 Ref | 31.50% | 11.32% | 552 MB | |
| VGG-16 Compressed | 31.17% | 10.91% | 11.3 MB | 49× |
Table 2: Compression statistics for LeNet-300-100. P: pruning, Q: quantization, H: Huffman coding.

| Layer | #Weights | Weights% (P) | Weight bits (P+Q) | Weight bits (P+Q+H) | Index bits (P+Q) | Index bits (P+Q+H) | Compress rate (P+Q) | Compress rate (P+Q+H) |
|---|---|---|---|---|---|---|---|---|
| ip1 | 235K | 8% | 6 | 4.4 | 5 | 3.7 | 3.1% | 2.32% |
| ip2 | 30K | 9% | 6 | 4.4 | 5 | 4.3 | 3.8% | 3.04% |
| ip3 | 1K | 26% | 6 | 4.3 | 5 | 3.2 | 15.7% | 12.70% |
| Total | 266K | 8% (12×) | 6 | 5.1 | 5 | 3.7 | 3.1% (32×) | 2.49% (40×) |
Table 3: Compression statistics for LeNet-5. P: pruning, Q: quantization, H: Huffman coding.

| Layer | #Weights | Weights% (P) | Weight bits (P+Q) | Weight bits (P+Q+H) | Index bits (P+Q) | Index bits (P+Q+H) | Compress rate (P+Q) | Compress rate (P+Q+H) |
|---|---|---|---|---|---|---|---|---|
| conv1 | 0.5K | 66% | 8 | 7.2 | 5 | 1.5 | 78.5% | 67.45% |
| conv2 | 25K | 12% | 8 | 7.2 | 5 | 3.9 | 6.0% | 5.28% |
| ip1 | 400K | 8% | 5 | 4.5 | 5 | 4.5 | 2.7% | 2.45% |
| ip2 | 5K | 19% | 5 | 5.2 | 5 | 3.7 | 6.9% | 6.13% |
| Total | 431K | 8% (12×) | 5.3 | 4.1 | 5 | 4.4 | 3.05% (33×) | 2.55% (39×) |
neurons each, which achieves a 1.6% error rate on MNIST. LeNet-5 is a convolutional network that has two convolutional layers and two fully connected layers, which achieves a 0.8% error rate on MNIST. Table 2 and Table 3 show the statistics of the compression pipeline. The compression rate includes the overhead of the codebook and sparse indexes. Most of the saving comes from pruning and quantization (compressed 32×), while Huffman coding gives a marginal gain (compressed 40×).
5.2 ALEXNET ON IMAGENET
We further examine the performance of Deep Compression on the ImageNet ILSVRC-2012 dataset, which has 1.2M training examples and 50k validation examples. We use the AlexNet Caffe model as the reference model, which has 61 million parameters and achieved a top-1 accuracy of 57.2% and a top-5 accuracy of 80.3%. Table 4 shows that AlexNet can be compressed to 2.88% of its original size without impacting accuracy. There are 256 shared weights in each CONV layer, which are encoded with 8 bits, and 32 shared weights in each FC layer, which are encoded with only 5 bits. The relative sparse index is encoded with 4 bits. Huffman coding compressed an additional 22%, resulting in 35× compression in total.
5.3 VGG-16 ON IMAGENET
With promising results on AlexNet, we also looked at a larger, more recent network, VGG-16 (Simonyan & Zisserman, 2014), on the same ILSVRC-2012 dataset. VGG-16 has far more convolutional layers but still only three fully-connected layers. Following a similar methodology, we aggressively compressed both convolutional and fully-connected layers to realize a significant reduction in the number of effective weights, shown in Table 5.
The VGG-16 network as a whole has been compressed by 49×. Weights in the CONV layers are represented with 8 bits, and FC layers use 5 bits, which does not impact the accuracy. The two largest fully-connected layers can each be pruned to less than 1.6% of their original size. This reduction
Table 4: Compression statistics for AlexNet. P: pruning, Q: quantization, H: Huffman coding.

| Layer | #Weights | Weights% (P) | Weight bits (P+Q) | Weight bits (P+Q+H) | Index bits (P+Q) | Index bits (P+Q+H) | Compress rate (P+Q) | Compress rate (P+Q+H) |
|---|---|---|---|---|---|---|---|---|
| conv1 | 35K | 84% | 8 | 6.3 | 4 | 1.2 | 32.6% | 20.53% |
| conv2 | 307K | 38% | 8 | 5.5 | 4 | 2.3 | 14.5% | 9.43% |
| conv3 | 885K | 35% | 8 | 5.1 | 4 | 2.6 | 13.1% | 8.44% |
| conv4 | 663K | 37% | 8 | 5.2 | 4 | 2.5 | 14.1% | 9.11% |
| conv5 | 442K | 37% | 8 | 5.6 | 4 | 2.5 | 14.0% | 9.43% |
| fc6 | 38M | 9% | 5 | 3.9 | 4 | 3.2 | 3.0% | 2.39% |
| fc7 | 17M | 9% | 5 | 3.6 | 4 | 3.7 | 3.0% | 2.46% |
| fc8 | 4M | 25% | 5 | 4 | 4 | 3.2 | 7.3% | 5.85% |
| Total | 61M | 11% (9×) | 5.4 | 4 | 4 | 3.2 | 3.7% (27×) | 2.88% (35×) |
Table 5: Compression statistics for VGG-16. P: pruning, Q: quantization, H: Huffman coding.

| Layer | #Weights | Weights% (P) | Weight bits (P+Q) | Weight bits (P+Q+H) | Index bits (P+Q) | Index bits (P+Q+H) | Compress rate (P+Q) | Compress rate (P+Q+H) |
|---|---|---|---|---|---|---|---|---|
| conv1_1 | 2K | 58% | 8 | 6.8 | 5 | 1.7 | 40.0% | 29.97% |
| conv1_2 | 37K | 22% | 8 | 6.5 | 5 | 2.6 | 9.8% | 6.99% |
| conv2_1 | 74K | 34% | 8 | 5.6 | 5 | 2.4 | 14.3% | 8.91% |
| conv2_2 | 148K | 36% | 8 | 5.9 | 5 | 2.3 | 14.7% | 9.31% |
| conv3_1 | 295K | 53% | 8 | 4.8 | 5 | 1.8 | 21.7% | 11.15% |
| conv3_2 | 590K | 24% | 8 | 4.6 | 5 | 2.9 | 9.7% | 5.67% |
| conv3_3 | 590K | 42% | 8 | 4.6 | 5 | 2.2 | 17.0% | 8.96% |
| conv4_1 | 1M | 32% | 8 | 4.6 | 5 | 2.6 | 13.1% | 7.29% |
| conv4_2 | 2M | 27% | 8 | 4.2 | 5 | 2.9 | 10.9% | 5.93% |
| conv4_3 | 2M | 34% | 8 | 4.4 | 5 | 2.5 | 14.0% | 7.47% |
| conv5_1 | 2M | 35% | 8 | 4.7 | 5 | 2.5 | 14.3% | 8.00% |
| conv5_2 | 2M | 29% | 8 | 4.6 | 5 | 2.7 | 11.7% | 6.52% |
| conv5_3 | 2M | 36% | 8 | 4.6 | 5 | 2.3 | 14.8% | 7.79% |
| fc6 | 103M | 4% | 5 | 3.6 | 5 | 3.5 | 1.6% | 1.10% |
| fc7 | 17M | 4% | 5 | 4 | 5 | 4.3 | 1.5% | 1.25% |
| fc8 | 4M | 23% | 5 | 4 | 5 | 3.4 | 7.1% | 5.24% |
| Total | 138M | 7.5% (13×) | 6.4 | 4.1 | 5 | 3.1 | 3.2% (31×) | 2.05% (49×) |
is critical for real time image processing, where there is little reuse of these layers across images (unlike batch processing). This is also critical for fast object detection algorithms where one CONV pass is used by many FC passes. The reduced layers will fit in an on-chip SRAM and have modest bandwidth requirements. Without the reduction, the bandwidth requirements are prohibitive.
# 6 DISCUSSIONS
6.1 PRUNING AND QUANTIZATION WORKING TOGETHER
Figure 6 shows the accuracy at different compression rates for pruning and quantization together or individually. When working individually, as shown in the purple and yellow lines, accuracy of the pruned network begins to drop significantly when compressed below 8% of its original size; accuracy of the quantized network also begins to drop significantly when compressed below 8% of its original size. But when combined, as shown in the red line, the network can be compressed to 3% of original size with no loss of accuracy. The far right side compares the result of SVD, which is inexpensive but has a poor compression rate.
The three plots in Figure 7 show how accuracy drops with fewer bits per connection for CONV layers (left), FC layers (middle) and all layers (right). Each plot reports both top-1 and top-5 accuracy. Dashed lines only applied quantization but without pruning; solid lines did both quantization and pruning. There is very little difference between the two. This shows that pruning works well with quantization.
Quantization works well on pruned network because unpruned AlexNet has 60 million weights to quantize, while pruned AlexNet has only 6.7 million weights to quantize. Given the same amount of centroids, the latter has less error.
Figure 6: Accuracy vs. compression rate under different compression methods. Pruning and quantization work best when combined.
Figure 7: Pruning doesn't hurt quantization. Dashed: quantization on unpruned network. Solid: quantization on pruned network. Accuracy begins to drop at the same number of quantization bits whether or not the network has been pruned. Although pruning reduced the number of parameters, quantization still works well, or even better (the 3 bits case on the left figure), as in the unpruned network.
Figure 8: Accuracy of different initialization methods. Left: top-1 accuracy. Right: top-5 accuracy. Linear initialization gives the best result.
The first two plots in Figure 7 show that CONV layers require more bits of precision than FC layers. For CONV layers, accuracy drops significantly below 4 bits, while the FC layer is more robust: not until 2 bits did the accuracy drop significantly.
6.2 CENTROID INITIALIZATION
Figure 8 compares the accuracy of the three different initialization methods with respect to top-1 accuracy (Left) and top-5 accuracy (Right). The network is quantized to 2–8 bits as shown on the x-axis. Linear initialization outperforms the density initialization and random initialization in all cases except at 3 bits.
The initial centroids of linear initialization spread equally across the x-axis, from the min value to the max value. That helps to maintain the large weights, as the large weights play a more important role than smaller ones, which is also shown in network pruning (Han et al., 2015). Neither random nor density-based initialization retains large centroids. With these initialization methods, large weights are clustered to the small centroids because there are few large weights. In contrast, linear initialization allows large weights a better chance to form a large centroid.
Figure 9: Compared with the original network, pruned network layers achieved 3× speedup on CPU, 3.5× on GPU and 4.2× on mobile GPU on average. Batch size = 1 targeting real time processing. Performance numbers normalized to CPU.
Figure 10: Compared with the original network, pruned network layers take 7× less energy on CPU, 3.3× less on GPU and 4.2× less on mobile GPU on average. Batch size = 1 targeting real time processing. Energy numbers normalized to CPU.
6.3 SPEEDUP AND ENERGY EFFICIENCY
Deep Compression is targeting extremely latency-focused applications running on mobile, which require real-time inference, such as pedestrian detection on an embedded processor inside an autonomous vehicle. Waiting for a batch to assemble significantly adds latency. So when benchmarking the performance and energy efficiency, we consider the case when batch size = 1. The cases of batching are given in Appendix A.
Fully connected layers dominate the model size (more than 90%) and are compressed the most by Deep Compression (96% of weights pruned in VGG-16). In state-of-the-art object detection algorithms such as fast R-CNN (Girshick, 2015), up to 38% of computation time is consumed on FC layers on the uncompressed model. So it is interesting to benchmark FC layers, to see the effect of Deep Compression on performance and energy. Thus we set up our benchmark on the FC6, FC7, FC8 layers of AlexNet and VGG-16. In the non-batched case, the activation matrix is a vector with just one column, so the computation boils down to dense / sparse matrix-vector multiplication for the original / pruned model, respectively. Since current BLAS libraries on CPU and GPU don't support indirect look-up and relative indexing, we didn't benchmark the quantized model.
We compare three different off-the-shelf hardware platforms: the NVIDIA GeForce GTX Titan X and the Intel Core i7 5930K as desktop processors (same package as NVIDIA Digits Dev Box) and NVIDIA Tegra K1 as mobile processor. To run the benchmark on GPU, we used cuBLAS GEMV for the original dense layer. For the pruned sparse layer, we stored the sparse matrix in CSR format, and used the cuSPARSE CSRMV kernel, which is optimized for sparse matrix-vector multiplication on GPU. To run the benchmark on CPU, we used MKL CBLAS GEMV for the original dense model and MKL SPBLAS CSRMV for the pruned sparse model.
To compare power consumption between different systems, it is important to measure power in a consistent manner (NVIDIA, b). For our analysis, we are comparing pre-regulation power of the entire application processor (AP) / SOC and DRAM combined. On CPU, the benchmark runs on a single socket with a single Haswell-E class Core i7-5930K processor. CPU socket and DRAM power are as reported by the pcm-power utility provided by Intel. For GPU, we used the nvidia-smi utility to report the power of Titan X. For mobile GPU, we use a Jetson TK1 development board and measured the total power consumption with a power-meter. We assume 15% AC to DC conversion loss, 85% regulator efficiency and 15% power consumed by peripheral components (NVIDIA, a) to report the AP+DRAM power for Tegra K1.
Table 6: Accuracy of AlexNet with different aggressiveness of weight sharing and quantization. 8/5 bit quantization has no loss of accuracy; 8/4 bit quantization, which is more hardware friendly, has a negligible loss of accuracy of 0.01%; to be really aggressive, 4/2 bit quantization resulted in 1.99% and 2.60% loss of accuracy.
| #CONV bits / #FC bits | Top-1 Error | Top-5 Error | Top-1 Error Increase | Top-5 Error Increase |
|---|---|---|---|---|
| 32 bits / 32 bits | 42.78% | 19.73% | - | - |
| 8 bits / 5 bits | 42.78% | 19.70% | 0.00% | -0.03% |
| 8 bits / 4 bits | 42.79% | 19.73% | 0.01% | 0.00% |
| 4 bits / 2 bits | 44.77% | 22.33% | 1.99% | 2.60% |
The ratio of memory access to computation with and without batching is different. When the input activations are batched into a matrix, the computation becomes matrix-matrix multiplication, where locality can be improved by blocking. Matrices can be blocked to fit in caches and reused efficiently. In this case, the amount of memory access is O(n²), and that of computation is O(n³); the ratio between memory access and computation is on the order of 1/n.
In real time processing when batching is not allowed, the input activation is a single vector and the computation is matrix-vector multiplication. In this case, the amount of memory access is O(n²), and the computation is O(n²); memory access and computation are of the same magnitude (as opposed to 1/n). That indicates MV is more memory-bounded than MM. So reducing the memory footprint is critical for the non-batching case.
Figure 9 illustrates the speedup of pruning on different hardware. There are 6 columns for each benchmark, showing the computation time of CPU / GPU / TK1 on dense / pruned networks. Time is normalized to CPU. When batch size = 1, the pruned network layer obtained 3× to 4× speedup over the dense network on average because it has a smaller memory footprint and alleviates the data transferring overhead, especially for large matrices that are unable to fit into the caches. For example VGG-16's FC6 layer, the largest layer in our experiment, contains 25088 × 4096 × 4 bytes ≈ 400 MB of data, which is far beyond the capacity of L3 cache.
In those latency-tolerant applications, batching improves memory locality, where weights can be blocked and reused in matrix-matrix multiplication. In this scenario, the pruned network no longer shows its advantage. We give detailed timing results in Appendix A.
Figure 10 illustrates the energy efficiency of pruning on different hardware. We multiply power consumption by computation time to get energy consumption, then normalize to CPU to get energy efficiency. When batch size = 1, the pruned network layer consumes 3× to 7× less energy than the dense network on average. As reported by nvidia-smi, GPU utilization is 99% for both dense and sparse cases.
6.4 RATIO OF WEIGHTS, INDEX AND CODEBOOK
Pruning makes the weight matrix sparse, so extra space is needed to store the indexes of non-zero elements. Quantization adds storage for a codebook. The experiment section has already included these two factors. Figure 11 shows the breakdown of three different components when quantizing four networks. Since on average both the weights and the sparse indexes are encoded with 5 bits, their storage is roughly half and half. The overhead of codebook is very small and often negligible.
Figure 11: Storage ratio of weight, index and codebook.
Table 7: Comparison with other compression methods on AlexNet. (Collins & Kohli, 2014) reduced the parameters by 4× but with inferior accuracy. Deep Fried Convnets (Yang et al., 2014) worked on fully connected layers and reduced the parameters by less than 4×. SVD saves parameters but suffers from accuracy loss as large as 2%. Network pruning (Han et al., 2015) reduced the parameters by 9×, not including index overhead. On other networks similar to AlexNet, (Denton et al., 2014) exploited the linear structure of convnets and compressed the network by 2.4× to 13.4× layer wise, with 0.9% accuracy loss on compressing a single layer. (Gong et al., 2014) experimented with vector quantization and compressed the network by 16× to 24×, incurring 1% accuracy loss.
| Method | Top-1 Error | Top-5 Error | Parameters | Compress Rate |
|---|---|---|---|---|
| Baseline Caffemodel | 42.78% | 19.73% | 240MB | 1× |
| Fastfood-32-AD (Yang et al., 2014) | 41.93% | - | 131MB | 2× |
| Fastfood-16-AD (Yang et al., 2014) | 42.90% | - | 64MB | 3.7× |
| Collins & Kohli (2014) | 44.40% | - | 61MB | 4× |
| SVD | 44.02% | 20.56% | 47.6MB | 5× |
| Pruning (Han et al., 2015) | 42.77% | 19.67% | 27MB | 9× |
| Pruning + Quantization | 42.78% | 19.70% | 8.9MB | 27× |
| Pruning + Quantization + Huffman | 42.78% | 19.70% | 6.9MB | 35× |
# 7 RELATED WORK
Neural networks are typically over-parametrized, and there is significant redundancy for deep learning models (Denil et al., 2013). This results in a waste of both computation and memory usage. There have been various proposals to remove the redundancy: Vanhoucke et al. (2011) explored a fixed-point implementation with 8-bit integer (vs 32-bit floating point) activations. Hwang & Sung (2014) proposed an optimization method for the fixed-point network with ternary weights and 3-bit activations. Anwar et al. (2015) quantized the neural network using L2 error minimization and achieved better accuracy on MNIST and CIFAR-10 datasets. Denton et al. (2014) exploited the linear structure of the neural network by finding an appropriate low-rank approximation of the parameters and keeping the accuracy within 1% of the original model.
The empirical success in this paper is consistent with the theoretical study of random-like sparse networks with +1/0/-1 weights (Arora et al., 2014), which have been proved to enjoy nice properties (e.g. reversibility), and to allow a provably polynomial time algorithm for training.
Much work has been focused on binning the network parameters into buckets, so that only the values in the buckets need to be stored. HashedNets (Chen et al., 2015) reduce model sizes by using a hash function to randomly group connection weights, so that all connections within the same hash bucket share a single parameter value. In their method, the weight binning is pre-determined by the hash function, instead of being learned through training, which doesn't capture the nature of images. Gong et al. (2014) compressed deep convnets using vector quantization, which resulted in 1% accuracy loss. Both methods studied only the fully connected layer, ignoring the convolutional layers.
There have been other attempts to reduce the number of parameters of neural networks by replacing the fully connected layer with global average pooling. The Network in Network architecture (Lin et al., 2013) and GoogLeNet (Szegedy et al., 2014) achieve state-of-the-art results on several benchmarks by adopting this idea. However, transfer learning, i.e. reusing features learned on the ImageNet dataset and applying them to new tasks by only fine-tuning the fully connected layers, is more difficult with this approach. This problem is noted by Szegedy et al. (2014) and motivates them to add a linear layer on the top of their networks to enable transfer learning.
Network pruning has been used both to reduce network complexity and to reduce over-fitting. An early approach to pruning was biased weight decay (Hanson & Pratt, 1989). Optimal Brain Damage (LeCun et al., 1989) and Optimal Brain Surgeon (Hassibi et al., 1993) prune networks to reduce the number of connections based on the Hessian of the loss function and suggest that such pruning is more accurate than magnitude-based pruning such as weight decay. A recent work (Han et al., 2015) successfully pruned several state of the art large scale networks and showed that the number of parameters could be reduced by an order of magnitude. There are also attempts to reduce the number of activations for both compression and acceleration (Van Nguyen et al., 2015).
# 8 FUTURE WORK
While the pruned network has been benchmarked on various hardware, the quantized network with weight sharing has not, because the off-the-shelf cuSPARSE and MKL SPBLAS libraries do not support indirect matrix entry lookup, nor is the relative index in CSC or CSR format supported. So the full advantage of Deep Compression, namely fitting the model in cache, is not yet fully unveiled. A software solution is to write customized GPU kernels that support this. A hardware solution is to build custom ASIC architecture specialized to traverse the sparse and quantized network structure, which also supports customized quantization bit width. We expect this architecture to have energy dominated by on-chip SRAM access instead of off-chip DRAM access.
# 9 CONCLUSION
We have presented "Deep Compression", which compresses neural networks without affecting accuracy. Our method operates by pruning the unimportant connections, quantizing the network using weight sharing, and then applying Huffman coding. We highlight our experiments on AlexNet, which reduced the weight storage by 35× without loss of accuracy. We show similar results for VGG-16 and LeNet networks, compressed by 49× and 39× without loss of accuracy. This leads to a smaller storage requirement for putting convnets into mobile apps. After Deep Compression the size of these networks fits into on-chip SRAM cache (5pJ/access) rather than requiring off-chip DRAM memory (640pJ/access). This potentially makes deep neural networks more energy efficient to run on mobile. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained.
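To make the three-stage pipeline concrete, the following is a minimal, illustrative Python sketch on a random toy weight matrix; it is not the Caffe-based implementation used in the paper. The magnitude threshold, the 4-bit codebook size, the linear centroid initialization, and the use of an entropy estimate in place of an explicit Huffman tree are all simplifying assumptions.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)   # toy layer weights

# Stage 1: magnitude pruning -- zero out weights below a threshold.
threshold = 1.0
mask = np.abs(W) >= threshold
W_pruned = W * mask

# Stage 2: weight sharing -- cluster the surviving weights into 2^bits
# centroids with a few Lloyd (k-means) iterations; only cluster indices
# and the small codebook then need to be stored.
bits = 4
nonzero = W_pruned[mask]
centroids = np.linspace(nonzero.min(), nonzero.max(), 2 ** bits)  # linear init
for _ in range(20):
    idx = np.argmin(np.abs(nonzero[:, None] - centroids[None, :]), axis=1)
    for k in range(2 ** bits):
        if np.any(idx == k):
            centroids[k] = nonzero[idx == k].mean()

# Stage 3: Huffman coding -- estimated here via the entropy of the index
# distribution rather than by building the prefix tree explicitly.
counts = Counter(idx.tolist())
total = sum(counts.values())
entropy_bits = -sum(c / total * np.log2(c / total) for c in counts.values())

print(f"kept {mask.mean():.1%} of weights; "
      f"~{entropy_bits:.2f} bits/index after coding vs {bits} raw")
```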
# REFERENCES
Anwar, Sajid, Hwang, Kyuyeon, and Sung, Wonyong. Fixed point optimization of deep convolutional neural networks for object recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1131–1135. IEEE, 2015.

Arora, Sanjeev, Bhaskara, Aditya, Ge, Rong, and Ma, Tengyu. Provable bounds for learning some deep representations. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, pp. 584–592, 2014.

BVLC. Caffe model zoo. URL http://caffe.berkeleyvision.org/model_zoo.

Chen, Wenlin, Wilson, James T., Tyree, Stephen, Weinberger, Kilian Q., and Chen, Yixin. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015.

Collins, Maxwell D and Kohli, Pushmeet. Memory bounded deep convolutional networks. arXiv preprint arXiv:1412.1442, 2014.

Denil, Misha, Shakibi, Babak, Dinh, Laurent, de Freitas, Nando, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pp. 2148–2156, 2013.

Denton, Emily L, Zaremba, Wojciech, Bruna, Joan, LeCun, Yann, and Fergus, Rob. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pp. 1269–1277, 2014.

Girshick, Ross. Fast R-CNN. arXiv preprint arXiv:1504.08083, 2015.

Gong, Yunchao, Liu, Liu, Yang, Ming, and Bourdev, Lubomir. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.

Han, Song, Pool, Jeff, Tran, John, and Dally, William J. Learning both weights and connections for efficient neural networks. In Advances in Neural Information Processing Systems, 2015.

Han, Song, Liu, Xingyu, Mao, Huizi, Pu, Jing, Pedram, Ardavan, Horowitz, Mark A, and Dally, William J. EIE: Efficient inference engine on compressed deep neural network. arXiv preprint arXiv:1602.01528, 2016.
Hanson, Stephen José and Pratt, Lorien Y. Comparing biases for minimal network construction with back-propagation. In Advances in neural information processing systems, pp. 177–185, 1989.

Hassibi, Babak, Stork, David G, et al. Second order derivatives for network pruning: Optimal brain surgeon. Advances in neural information processing systems, pp. 164–164, 1993.

Hwang, Kyuyeon and Sung, Wonyong. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pp. 1–6. IEEE, 2014.

Jia, Yangqing, Shelhamer, Evan, Donahue, Jeff, Karayev, Sergey, Long, Jonathan, Girshick, Ross, Guadarrama, Sergio, and Darrell, Trevor. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105, 2012.

LeCun, Yann, Denker, John S, Solla, Sara A, Howard, Richard E, and Jackel, Lawrence D. Optimal brain damage. In NIPs, volume 89, 1989.

LeCun, Yann, Bottou, Leon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv:1312.4400, 2013.

NVIDIA. Technical brief: NVIDIA Jetson TK1 development kit bringing GPU-accelerated computing to embedded systems, a. URL http://www.nvidia.com.

NVIDIA. Whitepaper: GPU-based deep learning inference: A performance and power analysis, b. URL http://www.nvidia.com/object/white-papers.html.

Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Ström, Nikko. Phoneme probability estimation with dynamic sparsely connected artificial neural networks. The Free Speech Journal, 1(5):1–41, 1997.

Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.

Van Leeuwen, Jan. On the construction of Huffman trees. In ICALP, pp. 382–410, 1976.

Van Nguyen, Hien, Zhou, Kevin, and Vemulapalli, Raviteja. Cross-domain synthesis of medical images using efficient location-sensitive deep network. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pp. 677–684. Springer, 2015.

Vanhoucke, Vincent, Senior, Andrew, and Mao, Mark Z. Improving the speed of neural networks on CPUs. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011.

Yang, Zichao, Moczulski, Marcin, Denil, Misha, de Freitas, Nando, Smola, Alex, Song, Le, and Wang, Ziyu. Deep fried convnets. arXiv preprint arXiv:1412.7149, 2014.
A APPENDIX: DETAILED TIMING / POWER REPORTS OF DENSE & SPARSE NETWORK LAYERS
Table 8: Average time on different layers. To avoid variance, we measured the time spent on each layer for 4096 input samples, and averaged the time per input sample. For GPU, the time consumed by cudaMalloc and cudaMemcpy is not counted. For batch size = 1, gemv is used; for batch size = 64, gemm is used. For the sparse case, csrmv and csrmm are used, respectively.
| Time (µs) | | AlexNet FC6 | AlexNet FC7 | AlexNet FC8 | VGG16 FC6 | VGG16 FC7 | VGG16 FC8 |
|---|---|---|---|---|---|---|---|
| Titan X | dense (batch=1) | 541.5 | 243.0 | 80.5 | 1467.8 | 243.0 | 80.5 |
| | sparse (batch=1) | 134.8 | 65.8 | 54.6 | 167.0 | 39.8 | 48.0 |
| | dense (batch=64) | 19.8 | 8.9 | 5.9 | 53.6 | 8.9 | 5.9 |
| | sparse (batch=64) | 94.6 | 51.5 | 23.2 | 121.5 | 24.4 | 22.0 |
| Core i7-5930k | dense (batch=1) | 7516.2 | 6187.1 | 1134.9 | 35022.8 | 5372.8 | 774.2 |
| | sparse (batch=1) | 3066.5 | 1282.1 | 890.5 | 3774.3 | 545.1 | 777.3 |
| | dense (batch=64) | 318.4 | 188.9 | 45.8 | 1056.0 | 188.3 | 45.7 |
| | sparse (batch=64) | 1417.6 | 682.1 | 407.7 | 1780.3 | 274.9 | 363.1 |
| Tegra K1 | dense (batch=1) | 12437.2 | 5765.0 | 2252.1 | 35427.0 | 5544.3 | 2243.1 |
| | sparse (batch=1) | 2879.3 | 1256.5 | 837.0 | 4377.2 | 626.3 | 745.1 |
| | dense (batch=64) | 1663.6 | 2056.8 | 298.0 | 2001.4 | 2050.7 | 483.9 |
| | sparse (batch=64) | 4003.9 | 1372.8 | 576.7 | 8024.8 | 660.2 | 544.1 |
Table 9: Power consumption of different layers. We measured the Titan X GPU power with nvidia-smi, Core i7-5930k CPU power with pcm-power and Tegra K1 mobile GPU power with an external power meter (scaled to AP+DRAM, see paper discussion). During power measurement, we repeated each computation multiple times in order to get stable numbers. On CPU, dense matrix multiplications consume about twice the energy of sparse ones because they are accelerated with multi-threading.
| Power (Watts) | | AlexNet FC6 | AlexNet FC7 | AlexNet FC8 | VGG16 FC6 | VGG16 FC7 | VGG16 FC8 |
|---|---|---|---|---|---|---|---|
| Titan X | dense (batch=1) | 157 | 159 | 159 | 166 | 163 | 159 |
| | sparse (batch=1) | 181 | 183 | 162 | 189 | 166 | 162 |
| | dense (batch=64) | 168 | 173 | 166 | 173 | 173 | 167 |
| | sparse (batch=64) | 156 | 158 | 163 | 160 | 158 | 161 |
| Core i7-5930k | dense (batch=1) | 83.5 | 72.8 | 77.6 | 70.6 | 74.6 | 77.0 |
| | sparse (batch=1) | 42.3 | 37.4 | 36.5 | 38.0 | 37.4 | 36.0 |
| | dense (batch=64) | 85.4 | 84.7 | 101.6 | 83.1 | 97.1 | 87.5 |
| | sparse (batch=64) | 37.2 | 37.1 | 38 | 39.5 | 36.6 | 38.2 |
| Tegra K1 | dense (batch=1) | 5.1 | 5.1 | 5.4 | 5.3 | 5.3 | 5.4 |
| | sparse (batch=1) | 5.9 | 6.1 | 5.8 | 5.6 | 6.3 | 5.8 |
| | dense (batch=64) | 5.6 | 5.6 | 6.3 | 5.4 | 5.6 | 6.3 |
| | sparse (batch=64) | 5.0 | 4.6 | 5.1 | 4.8 | 4.7 | 5.0 |
# Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
David Balduzzi School of Mathematics and Statistics Victoria University of Wellington Wellington, New Zealand
david.balduzzi@vuw.ac.nz
Muhammad Ghifary School of Engineering and Computer Science Victoria University of Wellington Wellington, New Zealand
muhammad.ghifary@ecs.vuw.ac.nz
# Abstract
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively.

We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.

Keywords: policy gradient, reinforcement learning, deep learning, gradient estimation, temporal difference learning
# 1. Introduction
In reinforcement learning, an agent learns to maximize its discounted future rewards (Sutton and Barto, 1998). The structure of the environment is initially unknown, so the agent must both learn the rewards associated with various action-sequence pairs and optimize its policy. A natural approach is to tackle the subproblems separately via a critic and an actor (Barto et al., 1983; Konda and Tsitsiklis, 2000), where the critic estimates the value of different actions and the actor maximizes rewards by following the policy gradient (Sutton et al., 1999; Peters and Schaal, 2006; Silver et al., 2014). Policy gradient methods have proven useful in settings with high-dimensional continuous action spaces, especially when task-relevant policy representations are at hand (Deisenroth et al., 2011; Levine et al., 2015; Wahlström et al., 2015).
In the supervised setting, representation or deep learning algorithms have recently demonstrated remarkable performance on a range of benchmark problems. However, the problem of learning features for reinforcement learning remains comparatively underdeveloped. The most dramatic recent success uses Q-learning over finite action spaces, and essentially builds a neural network critic (Mnih et al., 2015). Here, we consider continuous action spaces, and develop an algorithm that simultaneously learns the value function and its gradient, which it then uses to find the optimal policy.
# 1.1 Outline
This paper presents Value-Gradient Backpropagation (GProp), a deep actor-critic algorithm for continuous action spaces with compatible function approximation. Our starting point is the deterministic policy gradient and associated compatibility conditions derived in (Silver et al., 2014). Roughly speaking, the compatibility conditions are that
C1. the critic approximates the gradient of the value-function, and
C2. the approximation is closely related to the gradient of the policy.
See Theorem 2 for details. We identify and solve two problems with prior work on policy gradients, relating to the two compatibility conditions:

P1. Temporal difference methods do not directly estimate the gradient of the value function. Instead, temporal difference methods are applied to learn an approximation of the form $Q^v(s) + Q^w(s, a)$, where $Q^v(s)$ estimates the value of a state, given the current policy, and $Q^w(s, a)$ estimates the advantage from deviating from the current policy (Sutton et al., 1999; Peters and Schaal, 2006; Deisenroth et al., 2011; Silver et al., 2014). Although the advantage is related to the gradient of the value function, it is not the same thing.

P2. The representations used for compatible approximation scale badly on neural networks. The second problem is that prior work has restricted to advantage functions constructed from a particular state-action representation, $\phi(s, a) = \nabla_\theta \mu_\theta(s) \cdot (a - \mu_\theta(s))$, that depends on the gradient of the policy. The representation is easy to handle for linear policies. However, if the policy is a neural network, then the standard state-action representation ties the critic too closely to the actor and depends on the internal structure of the actor, Example 2. As a result, weight updates cannot be performed by backpropagation, see section 5.5.
The paper makes three novel contributions. The first two contributions relate directly to problems P1 and P2. The third is a new task designed to test the accuracy of gradient estimates.

Method to directly learn the gradient of the value function. The first contribution is to modify temporal difference learning so that it directly estimates the gradient of the value-function. The gradient perturbation trick, Lemma 3, provides a way to simultaneously estimate both the value of a function at a point and its gradient, by perturbing the function's input with uncorrelated Gaussian noise.

Plugging in a neural network instead of a linear estimator extends the trick to the problem of learning a function and its gradient over the entire state-action space. Moreover, the trick combines naturally with temporal difference methods, Theorem 5, and is therefore well-suited to applications in reinforcement learning.
Deviator-Actor-Critic (DAC) model with compatible function approximation. The second contribution is to propose the Deviator-Actor-Critic (DAC) model, Definition 2, consisting in three coupled neural networks, and Value-Gradient Backpropagation (GProp), Algorithm 1, which backpropagates three different signals to train the three networks. The main result, Theorem 6, is that GProp has compatible function approximation when implemented on the DAC model when the neural network consists in linear and rectilinear units.1

The proof relies on decomposing the Actor-network into individual units that are considered as actors in their own right, based on ideas in (Srivastava et al., 2014; Balduzzi, 2015). It also suggests interesting connections to work on structural credit assignment in multiagent reinforcement learning (Agogino and Tumer, 2004, 2008; HolmesParker et al., 2014).
Contextual bandit task to probe the accuracy of gradient estimates. A third contribution, that may be of independent interest, is a new contextual bandit setting designed to probe the ability of reinforcement learning algorithms to estimate gradients. A supervised-to-contextual bandit transform was proposed in (Dudík et al., 2014) as a method for turning classification datasets into K-armed contextual bandit datasets.

We are interested in the continuous setting in this paper. We therefore adapt their transform with a twist. The SARCOS and Barrett WAM datasets from robotics have features corresponding to the positions, velocities and accelerations of seven joints and labels corresponding to their torques. There are 7 joints in both cases, so the feature and label spaces are 21 and 7 dimensional respectively. The datasets are traditionally used as regression benchmarks labeled SARCOS1 through SARCOS7, where the task is to predict the torque of a single joint, and similarly for Barrett.

We convert the two datasets into two continuous contextual bandit tasks where the reward signal is the negative distance to the correct 7-dimensional label. The algorithm is thus "told" that the label lies on a sphere in a 7-dimensional space. The missing information required to pin down the label's position is precisely the gradient. For an algorithm to make predictions that are competitive with fully supervised methods, it is necessary to find extremely accurate gradient estimates.

Experiments. Section 6 evaluates the performance of GProp on the contextual bandit problems described above and on the challenging octopus arm task (Engel et al., 2005). We show that GProp is able to simultaneously solve seven nonparametric regression problems without observing any labels, instead using the distance between its actions and the correct labels. It turns out that GProp is competitive with recent fully supervised learning algorithms on the task. Finally, we evaluate GProp on the octopus arm benchmark, where it achieves the best performance reported to date.
1. The proof also holds for maxpooling, weight-tying and other features of convnets. A description of how closely related results extend to convnets is provided in (Balduzzi, 2015).
# 1.2 Related work
An early reinforcement learning algorithm for neural networks is REINFORCE (Williams, 1992). A disadvantage of REINFORCE is that the entire network is trained with a single scalar signal.
Our proposal builds on ideas introduced with deep Q-learning (Mnih et al., 2015), such as replay. However, deep Q-learning is restricted to finite action spaces, whereas we are concerned with continuous action spaces.
Policy gradients were introduced in (Sutton et al., 1999) and have been used extensively (Kakade, 2001; Peters and Schaal, 2006; Deisenroth et al., 2011). The deterministic policy gradient was introduced in (Silver et al., 2014), which also proposed the algorithm COPDAC-Q. The relationship between GProp and COPDAC-Q is discussed in detail in section 5.5.
An alternate approach, based on the idea of backpropagating the gradient of the value function, is developed in (Jordan and Jacobs, 1990; Prokhorov and Wunsch, 1997; Wang and Si, 2001; Hafner and Riedmiller, 2011; Fairbank and Alonso, 2012; Fairbank et al., 2013). Unfortunately, these algorithms do not have compatible function approximation in general, so there are no guarantees on actor-critic interactions. See section 5.5 for further discussion.
The analysis used to prove compatible function approximation relies on decomposing the Actor neural network into a collection of agents corresponding to the units in the network. The relation between GProp and the difference-based objective proposed for multiagent learning (Agogino and Tumer, 2008; HolmesParker et al., 2014) is discussed in section 5.4.
# 1.3 Notation
We use boldface to denote vectors, subscripts for time, and superscripts for individual units in a network. Sets of parameters are capitalized ($\Theta$, $W$, $V$) when they refer to matrices or to the parameters of neural networks.
# 2. Deterministic Policy Gradients
This section recalls previous work on policy gradients. The basic idea is to simultaneously train an actor and a critic. The critic learns an estimate of the value of different policies; the actor then follows the gradient of the value-function to find an optimal (or locally optimal) policy in terms of expected rewards.
# 2.1 The Policy Gradient Theorem
The environment is modeled as a Markov Decision Process consisting of state space $S \subset \mathbb{R}^m$, action space $A \subset \mathbb{R}^d$, initial distribution $p_1(s)$ on states, stationary transition distribution $p(s_{t+1} \mid s_t, a_t)$ and reward function $r: S \times A \to \mathbb{R}$. A policy is a function $\mu_\theta: S \to A$ from states to actions. We will often add noise to policies, causing them to be stochastic. In this case, the policy is a function $\pi_\theta: S \to \mathcal{P}(A)$, where $\mathcal{P}(A)$ is the set of probability distributions on actions.
Let $p(s \to s', t, \mu)$ denote the distribution on states $s'$ at time $t$ given policy $\mu$ and initial state $s$ at $t = 0$, and let $\rho^\mu(s') := \int_S \sum_{t=1}^\infty \gamma^{t-1} p_1(s)\, p(s \to s', t, \mu)\, ds$. Let $r_t^\gamma = \sum_{\tau=t}^\infty \gamma^{\tau - t} r(s_\tau, a_\tau)$ be the discounted future reward. Define the

value of a state-action pair: $Q^{\mu_\theta}(s, a) = \mathbb{E}\left[ r_1^\gamma \mid S_1 = s, A_1 = a; \mu_\theta \right]$, and

value of a policy: $J(\mu_\theta) = \mathbb{E}_{s \sim \rho^\mu,\, a \sim \mu_\theta}\left[ Q^{\mu_\theta}(s, a) \right]$.
The aim is to find the policy $\theta^* := \operatorname{argmax}_\theta J(\mu_\theta)$ with maximal value. A natural approach is to follow the gradient (Sutton et al., 1999), which in the deterministic case can be computed explicitly as
Theorem 1 (policy gradient) Under reasonable assumptions on the regularity of the Markov Decision Process the policy gradient can be computed as
$$\nabla_\theta J(\mu_\theta) = \mathbb{E}_{s \sim \rho^\mu}\left[ \nabla_\theta \mu_\theta(s)\, \nabla_a Q^\mu(s, a)\big|_{a = \mu_\theta(s)} \right].$$
Proof See (Silver et al., 2014).
# 2.2 Linear Compatible Function Approximation
Since the agent does not have direct access to the value function $Q^\mu$, it must instead learn an estimate $Q^w \approx Q^\mu$. A sufficient condition under which plugging an estimate $Q^w(s, a)$ into the policy gradient $\nabla_\theta J(\theta) = \mathbb{E}[\nabla_\theta \mu_\theta(s)\, \nabla_a Q^{\mu_\theta}(s, a)|_{a = \mu_\theta(s)}]$ yields an unbiased estimator was first proposed in (Sutton et al., 1999).

A sufficient condition in the deterministic setting is:
Theorem 2 (compatible value function approximation) The value-estimate $Q^w(s, a)$ is compatible with the policy gradient, that is,

$$\nabla_\theta J(\theta) = \mathbb{E}\left[ \nabla_\theta \mu_\theta(s) \cdot \nabla_a Q^w(s, a)\big|_{a = \mu_\theta(s)} \right],$$
if the following conditions hold:
# C1. Qw approximates the value gradient:
The weights learned by the approximate value function must satisfy $w = \operatorname{argmin}_w \ell_{GE}(\theta, w)$, where

$$\ell_{GE}(\theta, w) = \mathbb{E}\left[ \left\| \nabla_a Q^w(s, a)\big|_{a = \mu_\theta(s)} - \nabla_a Q^\mu(s, a)\big|_{a = \mu_\theta(s)} \right\|^2 \right] \qquad (1)$$

is the mean-square difference between the gradient of the true value function $Q^\mu$ and the approximation $Q^w$.
# C2. Qw is policy-compatible:
The gradients of the value-function and the policy must satisfy
$$\nabla_a Q^w(s, a)\big|_{a = \mu_\theta(s)} = \langle \nabla_\theta \mu_\theta(s), w \rangle. \qquad (2)$$
Proof See (Silver et al., 2014).
Having stated the compatibility condition, it is worth revisiting the problems that we propose to tackle in the paper. The first problem is to directly estimate the gradient of the value function, as required by Eq. (1) in condition C1. The standard approach used in the literature is to estimate the value function, or the closely related advantage function, using temporal difference learning, and then compute the derivative of the estimate. The next section shows how the gradient can be estimated directly.
The second problem relates to the compatibility condition on policy and value gradients required by Eq. (2) in condition C2. The only function approximation satisfying C2 that has been proposed is
Example 1 (standard value function approximation) Let $\phi(s)$ be an $m$-dimensional feature representation on states and set $\phi(s, a) := \nabla_\theta \mu_\theta(s) \cdot (a - \mu_\theta(s))$. Then the value function approximation

$$Q^{v,w}(s, a) = \underbrace{\langle \phi(s, a), w \rangle}_{\text{advantage function}} + \langle \phi(s), v \rangle = (a - \mu_\theta(s))^\top \nabla_\theta \mu_\theta(s)^\top w + \phi(s)^\top v$$

satisfies condition C2 of Theorem 2.
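For a concrete instance of Example 1, the following minimal Python sketch computes the compatible features $\phi(s, a)$ for a toy linear deterministic policy; the dimensions and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 5, 3                      # state and action dimensions
Theta = rng.normal(size=(d, m))  # linear deterministic policy mu(s) = Theta @ s

def policy_jacobian(s):
    # d(mu_i)/d(Theta_jk) with Theta flattened row-major: a (d*m, d) matrix.
    J = np.zeros((d * m, d))
    for i in range(d):
        J[i * m:(i + 1) * m, i] = s
    return J

s = rng.normal(size=m)
a = Theta @ s + 0.1 * rng.normal(size=d)        # a perturbed action

phi_sa = policy_jacobian(s) @ (a - Theta @ s)   # compatible features phi(s, a)
w = rng.normal(size=d * m)                      # advantage weights
print(phi_sa @ w)                               # the advantage estimate
```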
The approximation in Example 1 encounters serious problems when applied to deep policies, see discussion in section 5.5.
# 3. Learning Value Gradients
In this section, we tackle the first problem by modifying temporal-difference (TD) learning so that it directly estimates the gradient of the value function. First, we develop a new approach to estimating the gradient of a black-box function at a point, based on perturbing the function with Gaussian noise. It turns out that the approach extends easily to learning the gradient of a black-box function across its entire domain. Moreover, it is easy to combine with neural networks and temporal difference learning.
# 3.1 Estimating the gradient of an unknown function at a point
Gradient estimates have been intensively studied in bandit problems, where rewards (or losses) are observed but labels are not. Thus, in contrast to supervised learning where it is possible to compute the gradient of the loss, in bandit problems the gradient must be estimated. More formally, consider the following setup.
Definition 1 (zeroth-order black-box) A function $f: \mathbb{R}^d \to \mathbb{R}$ is a zeroth-order black-box if it can only be queried for zeroth-order information. That is, User can request the value $f(x)$ of $f$ at any point $x \in \mathbb{R}^d$, but cannot request the gradient of the function.
We use the shorthand black-box in what follows.
The black-box model for optimization was introduced in (Nemirovski and Yudin, 1983), see (Raginsky and Rakhlin, 2011) for a recent exposition. In those papers, a black-box consists in a first-order oracle that can provide both zeroth-order information (the value of the function) and first-order information (the gradient or subgradient of the function).
Remark 1 (reward function is a black-box; value function is not) The reward function $r(s, a)$ is a black box since Nature does not provide gradient information. The value function $Q^{\mu_\theta}(s, a) = \mathbb{E}[r_1^\gamma \mid S_1 = s, A_1 = a; \mu_\theta]$ is not even a black-box: it cannot be queried directly since it is defined as the expected discounted future reward. It is for this reason that the gradient perturbation trick must be combined with temporal difference learning, see section 3.4.
An important insight is that the gradient of an unknown function at a specific point can be estimated by perturbing its input (Flaxman et al., 2005). For example, for small $\delta > 0$ the gradient of $f: \mathbb{R}^d \to \mathbb{R}$ is approximately $\nabla f(x)|_{x = \mu} \approx \frac{d}{\delta} \cdot \mathbb{E}_u\left[ f(\mu + \delta u)\, u \right]$, where the expectation is over vectors sampled uniformly from the unit sphere.
The following lemma provides a simple method for estimating the gradient of a function at a point based on Gaussian perturbations:
Lemma 3 (gradient perturbation trick) The gradient of differentiable $f: \mathbb{R}^d \to \mathbb{R}$ at $\mu \in \mathbb{R}^d$ is

$$\nabla f(x)\big|_{x = \mu} = \lim_{\sigma^2 \to 0} \operatorname*{argmin}_{w \in \mathbb{R}^d} \left\{ \min_{b \in \mathbb{R}} \mathbb{E}_{\epsilon \sim N(0,\, \sigma^2 \cdot I_d)} \left[ \left( f(\mu + \epsilon) - \langle w, \epsilon \rangle - b \right)^2 \right] \right\}. \qquad (3)$$
Proof By taking sufficiently small variance, we can assume that $f$ is locally linear. Setting $b = f(\mu)$ yields a line through the origin. It therefore suffices to consider the special case $f(x) = \langle v, x \rangle$. Setting

$$w^* = \operatorname*{argmin}_{w \in \mathbb{R}^d} \mathbb{E}_{\epsilon \sim N(0,\, \sigma^2 \cdot I_d)} \left[ \tfrac{1}{2} \left( \langle w, \epsilon \rangle - \langle v, \epsilon \rangle \right)^2 \right],$$

we are required to show that $w^* = v$. The problem is convex, so setting the gradient to zero requires solving $0 = \mathbb{E}\left[ \langle w - v, \epsilon \rangle \cdot \epsilon \right]$, which reduces to solving the set of linear equations

$$\sum_{i=1}^d (w^i - v^i)\, \mathbb{E}[\epsilon^i \epsilon^j] = (w^j - v^j)\, \mathbb{E}[(\epsilon^j)^2] = (w^j - v^j) \cdot \sigma^2 = 0 \quad \text{for all } j.$$

The first equality holds since $\mathbb{E}[\epsilon^i \epsilon^j] = 0$ for $i \neq j$. It follows immediately that $w^* = v$. ∎
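To see the trick in action, the following self-contained Python sketch recovers the gradient of a toy black-box function at a point by solving the least-squares problem in Eq. (3) on Gaussian perturbations; the function, the point, and the sample size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):   # black-box function; its gradient is unknown to the learner
    return np.sin(x[0]) + x[1] ** 2 - 0.5 * x[0] * x[1]

mu = np.array([0.7, -0.3])
sigma, n = 1e-3, 2000

eps = sigma * rng.normal(size=(n, 2))       # Gaussian perturbations
y = np.array([f(mu + e) for e in eps])      # zeroth-order queries only

# Solve min_{w,b} sum (f(mu + eps) - <w, eps> - b)^2 via least squares.
X = np.hstack([eps, np.ones((n, 1))])
wb, *_ = np.linalg.lstsq(X, y, rcond=None)
w_hat, b_hat = wb[:2], wb[2]

true_grad = np.array([np.cos(mu[0]) - 0.5 * mu[1],
                      2 * mu[1] - 0.5 * mu[0]])
print(w_hat, true_grad)   # w_hat approximates the gradient; b_hat approximates f(mu)
```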
# 3.2 Learning gradients across a range
The solution to the optimization problem in Eq. (3) is the gradient $\nabla f(x)|_{x = \mu}$ of $f$ at a particular $\mu \in \mathbb{R}^d$. The next step is to learn a function $G^W: \mathbb{R}^d \to \mathbb{R}^d$ that approximates the gradient across a range of values.
More precisely, given a sample $\{x_i\}_{i=1}^n \sim P_X$ of points, we aim to find

$$W^* := \operatorname*{argmin}_W \sum_{i=1}^n \left[ \left\| \nabla f(x_i) - G^W(x_i) \right\|^2 \right].$$
The next lemma considers the case where $Q^V$ and $G^W$ are linear estimates, of the form $Q^V(x) := \langle \phi(x), v \rangle$ and $G^W(x) := W \cdot \psi(x)$, for fixed representations $\phi: X \to \mathbb{R}^m$ and $\psi: X \to \mathbb{R}^n$.
Lemma 4 (gradient learning) Let $f: \mathbb{R}^d \to \mathbb{R}$ be a differentiable function. Suppose that $\phi: X \to \mathbb{R}^m$ and $\psi: X \to \mathbb{R}^n$ are representations such that there exists an $m$-vector $v^*$ and a $(d \times n)$-matrix $W^*$ satisfying $f(x) = \langle \phi(x), v^* \rangle$ and $\nabla f(x) = W^* \cdot \psi(x)$ for all $x$ in the sample.
If we define the loss function

$$\ell(W, V, x, \sigma) = \mathbb{E}_{\epsilon \sim N(0,\, \sigma^2 \cdot I_d)} \left[ \left( f(x + \epsilon) - \langle G^W(x), \epsilon \rangle - Q^V(x) \right)^2 \right],$$

then

$$W^* = \lim_{\sigma^2 \to 0} \operatorname*{argmin}_W \min_V \mathbb{E}_{x \sim P_X} \left[ \ell(W, V, x, \sigma) \right].$$
Proof Follows from Lemma 3.
In short, the lemma reduces gradient estimation to a simple optimization problem given a good enough representation. Jumping ahead slightly to section 4, we ensure that our model has good enough representations by constructing two neural networks to learn them. The first neural network, $Q^V: \mathbb{R}^d \to \mathbb{R}$, learns an approximation to $f(x)$ that plays the role of the baseline $b$. The second neural network, $G^W: \mathbb{R}^d \to \mathbb{R}^d$, learns an approximation to the gradient.
# 3.3 Temporal difference learning
Recall that $Q^\mu(s, a)$ is the expected value of a state-action pair given policy $\mu$. It is never observed directly, since it is computed by discounting over future rewards. TD-learning is a popular approach to estimating $Q^\mu$ through dynamic programming (Sutton and Barto, 1998).

We quickly review TD-learning. Let $\phi: S \times A \to \mathbb{R}^m$ be a fixed representation. The goal is to find a value-estimate

$$Q^v(s, a) := \langle \phi(s, a), v \rangle,$$

where $v$ is an $m$-dimensional vector, that is as close as possible to the true value function. If the value-function were known, we could simply minimize the mean-square error with respect to $v$:

$$\ell_{MSE}(v) = \mathbb{E}_{(s, a) \sim (\rho^\mu, \mu)} \left[ \left( Q^v(s, a) - Q^\mu(s, a) \right)^2 \right].$$
Unfortunately, it is impossible to minimize the mean-square error directly, since the value-function is the expected discounted future reward, rather than the reward. That is, the value function is not provided explicitly by the environment, not even as a black-box. The Bellman error is therefore used as a substitute for the mean-square error:

$$\ell_{BE}(v) = \mathbb{E}_{(s, a) \sim (\rho^\mu, \mu)} \left[ \Big( \underbrace{r(s, a) + \gamma Q^v\big(s', \mu_\theta(s')\big) - Q^v(s, a)}_{\text{TD-error } \delta} \Big)^2 \right],$$

where $s'$ is the state subsequent to $s$.
Let $\delta_t = r_t - Q^v(s_t, a_t) + \gamma Q^v(s_{t+1}, \mu_\theta(s_{t+1}))$ be the TD-error. TD-learning updates $v$ according to

$$v_{t+1} \leftarrow v_t + \eta_t \cdot \delta_t \cdot \nabla_v Q^v(s_t, a_t) = v_t + \eta_t \cdot \delta_t \cdot \phi(s_t, a_t), \qquad (4)$$

where $\eta_t$ is a sequence of learning rates. The convergence properties of TD-learning and related algorithms have been studied extensively, see (Tsitsiklis and Roy, 1997; Dann et al., 2014).
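For concreteness, the following minimal Python sketch runs the update of Eq. (4) with linear function approximation on an invented toy Markov chain under a fixed policy; the features, dynamics, reward and learning rate are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, m, gamma, eta = 10, 4, 0.9, 0.05
phi = rng.normal(size=(n_states, m))       # fixed features phi(s) for each state
v = np.zeros(m)                            # value weights

s = 0
for t in range(20000):
    s_next = (s + int(rng.integers(1, 3))) % n_states   # toy fixed-policy chain
    r = 1.0 if s_next == 0 else 0.0                     # toy reward
    delta = r + gamma * phi[s_next] @ v - phi[s] @ v    # TD-error
    v += eta * delta * phi[s]                           # the update of Eq. (4)
    s = s_next

print(phi @ v)   # approximate values of the 10 states under the fixed policy
```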
# 3.4 Temporal difference gradient (TDG) learning

Finally, we apply temporal difference methods to estimate the gradient2 of the value function, as required by condition C1 of Theorem 2. We are interested in gradient approximations of the form

$$Q^W(s, a, \epsilon) = \langle G^W(s, a), \epsilon \rangle = \langle W \cdot \psi(s, a), \epsilon \rangle,$$

where $\psi: S \times A \to \mathbb{R}^n$ and $W$ is a $(d \times n)$-dimensional matrix. The goal is to find $W^*$ such that $G^{W^*}(s, a) \approx \nabla_\epsilon Q^\mu(s, a, \epsilon)|_{\epsilon = 0} = \nabla_a Q^\mu(s, a)|_{a = \mu_\theta(s)}$ for all sampled state-action pairs.

It is convenient to introduce the notation $Q^\mu(s, a, \epsilon) := Q^\mu(s, a + \epsilon)$ and the shorthand $\tilde{s} := (s, \mu_\theta(s))$. Then, analogously to the mean-square error, define the perturbed gradient error:

$$\ell_{PGE}(v, W, \sigma^2) = \mathbb{E}_{\tilde{s}}\, \mathbb{E}_\epsilon \left[ \left( Q^\mu(\tilde{s}, \epsilon) - \langle G^W(\tilde{s}), \epsilon \rangle - Q^v(\tilde{s}) \right)^2 \right].$$
Given a good enough representation, Lemma 4 guarantees that minimizing the perturbed gradient error yields the gradient of the value function. Unfortunately, as discussed above, the value function cannot be queried directly. We therefore introduce the Bellman gradient error as a proxy:

$$\ell_{BGE}(v, W, \sigma^2) = \mathbb{E}_{\tilde{s}}\, \mathbb{E}_\epsilon \left[ \Big( \underbrace{r(\tilde{s}, \epsilon) + \gamma Q^v(\tilde{s}') - \langle G^W(\tilde{s}), \epsilon \rangle - Q^v(\tilde{s})}_{\text{TDG-error } \xi} \Big)^2 \right].$$
2. Residual gradient (RG) and gradient temporal difference (GTD) methods were introduced in (Baird, 1995; Sutton et al., 2009a,b). The similar names may be confusing. RG and GTD methods are TD methods derived from gradient descent. In contrast, we develop a TD-based approach to learning gradients. The two approaches are thus complementary and straightforward to combine. However, in this paper we restrict to extending vanilla TD to learning gradients.
Set the TDG-error as
$$\xi_t = r(\tilde{s}_t, \epsilon) + \gamma Q^v(\tilde{s}_{t+1}) - \langle G^W(\tilde{s}_t), \epsilon \rangle - Q^v(\tilde{s}_t)$$

and, analogously to Eq. (4), define the TDG-updates

$$v_{t+1} \leftarrow v_t + \eta_t \cdot \xi_t \cdot \nabla_v Q^v(\tilde{s}_t) = v_t + \eta_t \cdot \xi_t \cdot \phi(\tilde{s}_t)$$
$$W_{t+1} \leftarrow W_t + \eta_t \cdot \xi_t \cdot \nabla_W \langle G^W(\tilde{s}_t), \epsilon \rangle = W_t + \eta_t \cdot \xi_t \cdot \epsilon \otimes \psi(\tilde{s}_t),$$

where $\epsilon \otimes \psi(\tilde{s})$ is the $(d \times n)$ matrix given by the outer product. We refer to $\xi \cdot \epsilon$ as the perturbed TDG-error.
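The updates are easy to state in code. The Python sketch below runs the linear TDG-updates on an invented one-dimensional toy problem; the features, dynamics, reward and hyperparameters are illustrative assumptions, and the point is only the shape of the two updates.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 6, 6, 2              # critic features, deviator features, action dim
gamma, eta, sigma = 0.9, 0.01, 0.3
v, W = np.zeros(m), np.zeros((d, n))

def phi(s):  return np.cos(np.arange(1, m + 1) * s)   # toy critic features
def psi(s):  return np.sin(np.arange(1, n + 1) * s)   # toy deviator features

s = 0.0
for t in range(10000):
    eps = sigma * rng.normal(size=d)                  # action perturbation
    s_next = (s + 0.1) % (2 * np.pi)                  # toy deterministic dynamics
    r = np.sin(s) - eps @ eps                         # toy reward, depends on eps
    xi = r + gamma * phi(s_next) @ v - W @ psi(s) @ eps - phi(s) @ v  # TDG-error
    v += eta * xi * phi(s)                            # critic update
    W += eta * xi * np.outer(eps, psi(s))             # deviator update: xi * (eps ⊗ psi)
    s = s_next
```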
The following extension theorem allows us to import guarantees from temporal-difference learning to temporal-difference gradient learning.

Theorem 5 (zeroth to first-order extension) Guarantees on TD-learning extend to TDG-learning.
The idea is to reformulate TDG-learning as TD-learning, with a slightly different reward function and function approximation. Since the function approximation is still linear, any guarantees on convergence for TD-learning transfer automatically to TDG-learning.
Proof First, we incorporate $\epsilon$ into the state-action pair. Define $\tilde{r}(s, a, \epsilon) := r(s, a + \epsilon)$ and

$$\tilde{\psi}(s, a, \epsilon) := \epsilon \otimes \psi(s, a).$$

Second, we define a dot product on matrices of equal size by flattening them down to vectors. More precisely, given two matrices $A$ and $B$ of the same dimension $(m \times n)$, define the dot-product $\langle A, B \rangle = \sum_{i,j} A_{ij} B_{ij}$. It is easy to see that

$$\langle G^W(s, a), \epsilon \rangle = \langle W \cdot \psi(s, a), \epsilon \rangle = \langle \tilde{\psi}(s, a, \epsilon), W \rangle.$$

The TDG-error can then be rewritten as

$$\xi = \tilde{r}(s, a, \epsilon) + \gamma Q^{v,W}(s', a', \epsilon') - Q^{v,W}(s, a, \epsilon),$$

where $Q^{v,W}(s, a, \epsilon) = \langle \phi(s, a), v \rangle + \langle \tilde{\psi}(s, a, \epsilon), W \rangle$ is a linear function approximation.

If we are in a setting where TD-learning is guaranteed to converge to the value-function, it follows that TDG-learning is also guaranteed to converge, since it is simply a different linear approximation. Thus, $Q^\mu(\tilde{s}, \epsilon) \approx Q^v(\tilde{s}) + \langle G^W(\tilde{s}), \epsilon \rangle$ and the result follows by Lemma 4. ∎
# 4. Algorithm: Value-Gradient Backpropagation
This section presents our model, which consists of three coupled neural networks that learn to estimate the value function, its gradient, and the optimal policy respectively.
Definition 2 (deviator-actor-critic) The deviator-actor-critic (DAC) model consists in three neural networks:

• actor-network with policy $\mu_\Theta: S \to A \subset \mathbb{R}^d$;

• critic-network, $Q^V: S \times A \to \mathbb{R}$, that estimates the value function; and

• deviator-network, $G^W: S \times A \to \mathbb{R}^d$, that estimates the gradient of the value function.

Gaussian noise is added to the policy during training, resulting in actions $a = \mu_\Theta(s) + \epsilon$ where $\epsilon \sim N(0, \sigma^2 \cdot I_d)$. The outputs of the critic and deviator are combined as

$$Q^{V,W}(s, \mu_\Theta(s), \epsilon) = Q^V(s, \mu_\Theta(s)) + \langle G^W(s, \mu_\Theta(s)), \epsilon \rangle.$$
The Gaussian noise plays two roles. Firstly, it controls the explore/exploit tradeoff by controlling the extent to which Actor deviates from its current optimal policy. Secondly, it controls the "resolution" at which Deviator estimates the gradient.
The three networks are trained by backpropagating three different signals. Critic, Deviator and Actor backpropagate the TDG-error, the perturbed TDG-error, and Deviator's gradient estimate respectively; see Algorithm 1. An explicit description of the weight updates of individual units is provided in Appendix A.
Deviator estimates the gradient of the value-function with respect to deviations $\epsilon$ from the current policy. Backpropagating the gradient through Actor makes it possible to estimate the influence of Actor-parameters on the value function as a function of their effect on the policy.
Algorithm 1: Value-Gradient Backpropagation (GProp).
for rounds $t = 1, 2, \ldots, T$ do
    Network gets state $s_t$, responds $a_t = \mu_{\Theta_t}(s_t) + \epsilon$, gets reward $r_t$. Let $\tilde{s} := (s, \mu_\Theta(s))$.
    $\xi_t \leftarrow r_t + \gamma Q^{V_t}(\tilde{s}_{t+1}) - Q^{V_t}(\tilde{s}_t) - \langle G^{W_t}(\tilde{s}_t), \epsilon \rangle$  // compute TDG-error
    $\Theta_{t+1} \leftarrow \Theta_t + \eta_t^\Theta \cdot \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot G^{W_t}(\tilde{s}_t)$  // backpropagate $G^W$
    $V_{t+1} \leftarrow V_t + \eta_t^V \cdot \xi_t \cdot \nabla_V Q^{V_t}(\tilde{s}_t)$  // backpropagate $\xi$
    $W_{t+1} \leftarrow W_t + \eta_t^W \cdot \xi_t \cdot \nabla_W G^{W_t}(\tilde{s}_t) \cdot \epsilon$  // backpropagate $\xi \cdot \epsilon$
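Assuming an automatic differentiation library such as PyTorch is available, Algorithm 1 can be sketched end-to-end as below. The toy environment, the network sizes, the learning rates and the plain SGD optimizer are all placeholder assumptions, and the cloning and replay used in the experiments are omitted; each backward pass implements one of the three backpropagated signals.

```python
import torch
import torch.nn as nn

d_s, d_a, gamma, sigma = 8, 2, 0.9, 0.2
actor    = nn.Sequential(nn.Linear(d_s, 32), nn.ReLU(), nn.Linear(32, d_a))
critic   = nn.Sequential(nn.Linear(d_s + d_a, 32), nn.ReLU(), nn.Linear(32, 1))
deviator = nn.Sequential(nn.Linear(d_s + d_a, 32), nn.ReLU(), nn.Linear(32, d_a))
opts = [torch.optim.SGD(net.parameters(), lr=1e-3)
        for net in (actor, critic, deviator)]

def toy_env(s, a):
    # invented dynamics and reward, standing in for a real environment
    return torch.tanh(s + 0.1 * a.sum()), -(a - s[:d_a]).pow(2).sum()

s = torch.zeros(d_s)
for t in range(1000):
    eps = sigma * torch.randn(d_a)                   # exploration noise
    a = actor(s) + eps
    s_next, r = toy_env(s, a.detach())
    sa  = torch.cat([s, actor(s).detach()])          # s~_t = (s, mu(s))
    sa2 = torch.cat([s_next, actor(s_next).detach()])
    with torch.no_grad():                            # TDG-error xi_t
        xi = r + gamma * critic(sa2) - critic(sa) - deviator(sa) @ eps

    for opt in opts:
        opt.zero_grad()
    (-xi * critic(sa)).sum().backward()              # Critic: backpropagate xi
    (-(xi * eps).detach() @ deviator(sa)).backward() # Deviator: backpropagate xi * eps
    (-deviator(sa).detach() @ actor(s)).backward()   # Actor: backpropagate G^W
    for opt in opts:
        opt.step()
    s = s_next.detach()
```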
Critic and Deviator learn representations suited to estimating the value function and its gradient respectively. Note that even though the gradient is a linear function at a point, it can be a highly nonlinear function in general. Similarly, Actor learns a policy representation. We set the learning rates of Critic and Deviator to be equal ($\eta_t^V = \eta_t^W$) in the experiments in section 6. However, the perturbation $\epsilon$ has the effect of slowing down and stabilizing Deviator updates:
Remark 2 (stability) The magnitude of Deviator's weight updates depends on $\epsilon \sim N(0, \sigma^2 \cdot I_d)$ since they are computed by backpropagating the perturbed TDG-error $\xi \cdot \epsilon$. Thus as $\sigma^2 \to 0$, Deviator's learning rate essentially tends to zero. In general, Deviator learns more slowly than Critic.
This has a stabilizing effect on the policy since Actor is insulated from Critic: its weight updates only depend (directly) on the output of Deviator.
# 5. Analysis: Deep Compatible Function Approximation
Our main result is that the deviator's value gradient is compatible with the policy gradient of each unit in the actor-network, considered as an actor in its own right:

Theorem 6 (deep compatible function approximation) Suppose that all units are rectilinear or linear. Then for each Actor-unit in the Actor-network there exists a reparametrization of the value-gradient approximator, $G^W$, that satisfies the compatibility conditions in Theorem 2.

The actor-network is thus a collection of interdependent agents that individually follow the correct policy gradients. The experiments below show that they also collectively converge on useful behaviors.
Overview of the proof. The next few subsections prove Theorem 6. We provide a brief overview before diving into the details.
Guarantees for temporal difference learning and policy gradients are typically based on the assumption that the value-function approximation is a linear function of the learned parameters. However, we are interested in the case where Actor, Critic and Deviator are all neural networks, and are therefore highly nonlinear functions of their parameters. The goal is thus to relate the representations learned by neural networks to the prior work on linear function approximations.
To do so, we build on the following observation, implicit in (Srivastava et al., 2014):
Remark 3 (active submodels) A neural network of $n$ linear and rectilinear units can be considered as a set of $2^n$ submodels, corresponding to different subsets of units. The active submodel at time $t$ consists in the active units (that is, the linear units and the rectifiers that do not output 0).
The active submodel has two important properties:
• it is a linear function from inputs to outputs, since rectifiers are linear when active, and

• at each time step, learning only occurs over the active submodels, since only active units update their weights.

The feedforward sweep of a rectifier network can thus be disentangled into two steps (Balduzzi, 2015). The first step, which is highly nonlinear, applies a gating operation that selects the active submodel by rendering various units inactive. The second step computes the output of the neural network via matrix multiplication. It is important to emphasize that although the active submodel is a linear function from inputs to outputs, it is not a linear function of the weights.
The strategy of the proof is to decompose the Actor-network into an interacting collection of agents, referred to as Actor-units. That is, we model each unit in the Actor-network as an Actor in its own right. On each time step that an Actor-unit is active, it interacts with the Deviator-submodel corresponding to the current active submodel of the Deviator-network. The proof shows that each Actor-unit has compatible function approximation.
# 5.1 Error backpropagation on rectilinear neural networks
First, we recall some basic facts about backpropagation in the case of rectilinear units. Recent work has shown that replacing sigmoid functions with rectifiers $S(x) = \max(0, x)$ improves the performance of neural networks (Nair and Hinton, 2010; Glorot et al., 2011; Zeiler et al., 2013; Dahl et al., 2013).
Let us establish some notation. The output of a rectifier with weight vector $w$ is

$$S_w(x) := S(\langle w, x \rangle) := \max(0, \langle w, x \rangle).$$

The rectifier is active if $\langle w, x \rangle > 0$. We use rectifiers because they perform well in practice and have the nice property that units are linear when they are active. The rectifier subgradient is the indicator function

$$\mathbf{1}(x) := \partial S(x) = \begin{cases} 1 & x > 0 \\ 0 & \text{else.} \end{cases}$$

Consider a neural network of $n$ units in total, each equipped with a weight vector $w^j \in H_j \subset \mathbb{R}^{d_j}$. Hidden units are rectifiers; output units are linear. It is convenient to combine all the weight vectors into a single object; let $W \in H = \prod_{j=1}^n H_j \subset \mathbb{R}^N$ where $N = \sum_{j=1}^n d_j$. The network is a function $F^W: \mathbb{R}^m \to \mathbb{R}^d: x_{in} \mapsto F^W(x_{in}) =: x_{out}$.

The network has error function $\ell(x_{out}, y)$ with gradient $g = \nabla_{x_{out}} \ell$. Let $x^j$ denote the output of unit $j$ and $\phi^j(x_{in}) = (x^i)_{\{i \to j\}}$ denote its input, so that $x^j = S(\langle w^j, \phi^j(x_{in}) \rangle)$. Note that $\phi^j$ depends on $W$ (specifically, the weights of lower units) but this is suppressed from the notation.
Definition 3 (influence) The influence of unit $j$ on unit $k$ at time $t$ is $\pi^{j,k}_t = \frac{\partial x^k}{\partial x^j}$ (Balduzzi et al., 2015). The influence of unit $j$ on the output layer is the vector $\pi^j_t = (\pi^{j,k}_t)_{k \in \mathrm{out}}$.
The following lemma summarizes an analysis of the feedforward and feedback sweep of neural nets.
Lemma 7 (structure of neural network gradients) The following properties hold
# a. Influence.

A path is active at time $t$ if all units on the path are firing. The influence of $j$ on $k$ is the sum of products of weights over all active paths from $j$ to $k$:

$$\pi^{j,k}_t = \sum_{\{\alpha \mid j \to \alpha\}} w^{j,\alpha} \sum_{\{\beta \mid \alpha \to \beta\}} w^{\alpha,\beta} \cdots \sum_{\{\omega \mid \omega \to k\}} w^{\omega,k},$$

where $\alpha, \beta, \ldots, \omega$ refer to units along the path from $j$ to $k$.
# b. Output decomposition.
The output of a neural network decomposes, relative to the output of unit j, as
$$F^W(x_{in}) = \pi^j \cdot x^j + \pi^{-j} \cdot x_{in},$$

where $\pi^{-j}$ is the $(m \times d)$-matrix whose $(ik)^{th}$ entry is the sum over all active paths from input unit $i$ to output unit $k$ that do not intersect unit $j$.
# c. Output gradient.
Fix an input $x_{in} \in \mathbb{R}^m$ and consider the network as a function from parameters to outputs, $F^\bullet(x_{in}): H \to \mathbb{R}^d: W \mapsto F^W(x_{in})$, whose gradient is an $(N \times d)$-matrix. The $(ij)^{th}$-entry of the gradient is the input to the unit times its influence:

$$\left( \nabla_W F^W(x_{in}) \right)_{ij} = \begin{cases} \phi^j_i(x_{in}) \cdot \pi^j & \text{if unit } j \text{ is active} \\ 0 & \text{else.} \end{cases}$$
# d. Backpropagated error.
Fix $x_{in} \in \mathbb{R}^m$ and consider the function $\mathcal{E}(W) = \ell(F^W(x_{in}), y): H \to \mathbb{R}: W \mapsto \ell(F^W(x_{in}), y)$. Let $g = \nabla_{x_{out}} \ell(x_{out}, y)$. The gradient of the error function is

$$\left( \nabla_W \mathcal{E} \right)_j = \left\langle g, \left( \nabla_W F^W(x_{in}) \right)_j \right\rangle = \delta^j \cdot \phi^j(x_{in}),$$

where the backpropagated error signal $\delta^j$ received by unit $j$ decomposes as $\delta^j = \langle g, \pi^j \rangle$.
Proof Direct computation.
The lemma holds generically for networks of rectifier and linear units. We apply it to actor, critic and deviator networks below.
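The path-sum formula of Lemma 7a can be checked numerically. The following Python sketch compares the influence of a first-layer unit on the outputs of an invented two-hidden-layer rectifier network against a finite-difference estimate; all sizes and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2, W3 = (rng.normal(size=(4, 5)), rng.normal(size=(3, 4)),
              rng.normal(size=(2, 3)))          # toy 2-hidden-layer rectifier net

x = rng.normal(size=5)
h1 = np.maximum(0.0, W1 @ x)                    # first hidden layer
h2 = np.maximum(0.0, W2 @ h1)                   # second hidden layer
y = W3 @ h2                                     # linear output units

# Influence of first-layer unit j on the outputs: sum over paths
# j -> beta -> out of products of weights, gated by whether beta fires.
gate2 = (h2 > 0).astype(float)
pi = np.zeros((2, 4))                           # pi[k, j] = influence of j on output k
for j in range(4):
    for k in range(2):
        pi[k, j] = sum(W2[b, j] * gate2[b] * W3[k, b] for b in range(3))

# Finite-difference check: nudge the output of first-layer unit j.
j, delta = 0, 1e-7
h1p = h1.copy(); h1p[j] += delta
yp = W3 @ np.maximum(0.0, W2 @ h1p)
print((yp - y) / delta)                         # numerical influence column
print(pi[:, j])                                 # matches the path-sum formula
```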
# 5.2 A minimal DAC model
This subsection proves condition C1 of compatible function approximation for a minimal, linear Deviator-Actor-Critic model. The next subsection shows how the minimal model arises at the level of Actor-units.
Definition 4 (minimal model) The minimal model of a Deviator-Actor-Critic consists in an Actor with linear policy $\mu_\theta(s) = \langle \theta, \phi(s) \rangle + \epsilon$, where $\theta$ is an $m$-vector and $\epsilon$ is a noisy scalar. The Critic and Deviator together output:

$$Q^{v,w}(s, \mu_\theta(s), \epsilon) = Q^v(s) + G^w(\mu_\theta(s), \epsilon) = \underbrace{\langle \phi(s), v \rangle}_{\text{Critic}} + \underbrace{\mu_\theta(s) \cdot \langle \epsilon, w \rangle}_{\text{Deviator}},$$

where $v$ is an $m$-vector, $w$ is a scalar, and $\langle \epsilon, w \rangle$ is simply scalar multiplication.
The Critic in the minimal model is standard. However, the Deviator has been reduced to almost nothing: it learns a single scalar parameter, $w$, that is used to train the actor. The minimal model is thus too simple to be much use as a standalone algorithm.
Lemma 8 (compatible function approximation for the minimal model) There exists a reparametrization of the gradient estimate of the minimal model, $\tilde{G}^{\tilde{w}}(s, \epsilon) := G^w(\mu_\theta(s), \epsilon)$, such that compatibility condition C1 in Theorem 2 is satisfied:

$$\nabla_\epsilon \tilde{G}^{\tilde{w}}(s, \epsilon) = \langle \nabla_\theta \mu_\theta(s), \tilde{w} \rangle.$$
Proof Let $\tilde{w} := w \cdot \theta$ and construct $\tilde{G}^{\tilde{w}}(s, \epsilon) := \langle \tilde{w}, \phi(s) \rangle \cdot \epsilon$. Clearly,

$$\tilde{G}^{\tilde{w}}(s, \epsilon) = w \cdot \langle \theta, \phi(s) \rangle \cdot \epsilon = \mu_\theta(s) \cdot \langle \epsilon, w \rangle = G^w(\mu_\theta(s), \epsilon).$$

Observe that $\nabla_\epsilon \tilde{G}^{\tilde{w}}(s, \epsilon) = w \cdot \mu_\theta(s)$ and that, similarly, $\langle \nabla_\theta \mu_\theta(s), \tilde{w} \rangle = \langle \phi(s), w \cdot \theta \rangle = w \cdot \mu_\theta(s)$, as required. ∎

# 5.3 Proof of Theorem 6
The proof proceeds by showing that the compatibility conditions in Theorem 2 hold for each Actor-unit. The key step is to relate the Actor-units to the minimal model introduced above.
Lemma 9 (reduction to minimal model) Actor-units in a DAC neural network are equivalent to minimal model Actors.
Proof Let $\pi^j_t$ denote the influence of Actor-unit $j$ on the output of the Actor-network at time $t$. When unit $j$ is active, Lemma 7ab implies that we can write $\mu_{\Theta_t}(s_t) = \pi^j_t \cdot \langle \theta^j, \phi^j(s_t) \rangle + \pi^{-j}_t \cdot s_t$, where $\pi^{-j}_t$ sums over active paths through the Actor-network that do not intersect unit $j$.

Following Remark 3, the active subnetwork of the Deviator-network at time $t$ is a linear transform which, by abuse of notation, we denote by $\tilde{W}_t$.

Combine the last two points to obtain

$$G^{W_t}(\tilde{s}_t) = \tilde{W}_t \cdot \left( \pi^j_t \cdot \langle \theta^j, \phi^j(s_t) \rangle + \pi^{-j}_t \cdot s_t \right) = (\tilde{W}_t \cdot \pi^j_t) \cdot \langle \theta^j, \phi^j(s_t) \rangle + \text{terms that can be omitted}.$$

Observe that $(\tilde{W}_t \cdot \pi^j_t)$ is a $d$-vector. We have therefore reduced Actor-unit $j$'s interaction with the Deviator-network to $d$ copies of the minimal model. ∎
Theorem 6 follows from combining the above Lemmas.
Proof Compatibility condition C1 follows from Lemmas 8 and 9. Compatibility condition C2 holds since the Critic and Deviator minimize the Bellman gradient error with respect to $W$ and $V$, which also, implicitly, minimizes the Bellman gradient error with respect to the corresponding reparametrized $\tilde{w}$'s for each Actor-unit. ∎

Theorem 6 shows that each Actor-unit satisfies the conditions for compatible function approximation and so follows the correct gradient when performing weight updates.
# 5.4 Structural credit assignment for multiagent learning
It is interesting to relate our approach to the literature on multiagent reinforcement learning (Guestrin et al., 2002; Agogino and Tumer, 2004, 2008). In particular, (HolmesParker et al., 2014) consider the structural credit assignment problem within populations of interacting agents: How to reward individual agents in a population for rewards based on their collective behavior? They propose to train agents within populations with a difference-based objective of the form
$$D^j = Q(z) - Q(z^{-j}, c^j), \qquad (5)$$

where $Q$ is the objective function to be maximized; $z^j$ and $z^{-j}$ are the system variables that are and are not under the control of agent $j$ respectively, and $c^j$ is a fixed counterfactual action.
In our setting, the gradient used by Actor-unit j to update its weights can be described explicitly:
Lemma 10 (local policy gradients) Actor-unit j follows policy gradient
$$\nabla_{\theta^j} J[\mu_{\theta^j}] = \mathbb{E}\left[ \nabla_{\theta^j} \mu_{\theta^j}(s) \cdot \langle \pi^j, G^W(\tilde{s}) \rangle \right],$$

where $\langle \pi^j, G^W(\tilde{s}) \rangle \approx D_{\pi^j} Q^\mu(\tilde{s})$ is Deviator's estimate of the directional derivative of the value function in the direction of Actor-unit $j$'s influence.
Proof Follows from Lemma 7b.
Notice that $\nabla_{z^j} Q = \nabla_{z^j} D^j$ in Eq. (5). It follows that training the Actor-network via GProp causes the Actor-units to optimize the difference-based objective, without requiring the difference to be computed explicitly. Although the topic is beyond the scope of the current paper, it is worth exploring how suitably adapted variants of backpropagation can be applied to reinforcement learning problems in the multiagent setting.
# 5.5 Comparison with related work
Comparison with COPDAC-Q. Extending the standard value function approximation in Example 1 to the setting where Actor is a neural network yields the following representation, which is used in (Silver et al., 2014) when applying COPDAC-Q to the octopus arm task:
Example 2 (extension of standard value approximation to neural networks) Let $\mu_\Theta: S \to A$ and $Q^V: S \to \mathbb{R}$ be an Actor and Critic neural network respectively. Suppose the Actor-network has $N$ parameters (i.e. the total number of entries in $\Theta$). It follows that the Jacobian $\nabla_\Theta \mu_\Theta(s)$ is an $(N \times d)$-matrix.

The value function approximation is then

$$Q^{V,w}(s, a) = \underbrace{(a - \mu_\Theta(s))^\top \cdot \nabla_\Theta \mu_\Theta(s)^\top \cdot w}_{\text{advantage function}} + \underbrace{Q^V(s)}_{\text{Critic}},$$

where $w$ is an $N$-vector.
Weight updates under COPDAC-Q, with the function approximation above, are therefore as described in Algorithm 2.
Algorithm 2: Compatible Deterministic Actor-Critic (COPDAC-Q).

for rounds $t = 1, 2, \ldots, T$ do
    Network gets state $s_t$, responds $a_t = \mu_{\Theta_t}(s_t) + \epsilon$ where $\epsilon \sim N(0, \sigma^2 \cdot I_d)$, gets reward $r_t$
    $\delta_t \leftarrow r_t + \gamma Q^{V_t}(s_{t+1}) - Q^{V_t}(s_t) - \langle \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot \epsilon, w_t \rangle$
    $\Theta_{t+1} \leftarrow \Theta_t + \eta_t^\Theta \cdot \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot \nabla_\Theta \mu_{\Theta_t}(s_t)^\top \cdot w_t$
    $V_{t+1} \leftarrow V_t + \eta_t^V \cdot \delta_t \cdot \nabla_V Q^{V_t}(s_t)$
    $w_{t+1} \leftarrow w_t + \eta_t^w \cdot \delta_t \cdot \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot \epsilon$
Let us compare GProp with COPDAC-Q, considering the three updates in turn:
• Actor updates.
Under GProp, the Actor backpropagates the value-gradient estimate. In contrast, under COPDAC-Q the Actor performs a complicated update that combines the policy gradient $\nabla_\Theta \mu_\Theta(s)$ with the advantage function's weights, and differs substantively from backprop.
• Deviator / advantage-function updates.
Under GProp, the Deviator backpropagates the perturbed TDG-error. In contrast, COPDAC-Q uses the gradient of the Actor to update the weight vector $w$ of the advantage function. By Lemma 7d, backprop takes the form $g^\top \cdot \nabla_\Theta \mu_\Theta(s)$ where $g$ is a $d$-vector. In contrast, the advantage function requires computing $\nabla_\Theta \mu_\Theta(s)^\top \cdot w$, where $w$ is an $N$-vector. Although the two formulae appear similar superficially, they carry very different computational costs.
The first consequence is that the parameters of $w$ must exactly line up with those of the policy. The second consequence is that, by Lemma 7c, the advantage function requires access to

$$\left( \nabla_\Theta \mu_\Theta(s) \right)_{ij} = \begin{cases} \phi^{ij}(s) \cdot \pi^j & \text{if unit } j \text{ is active} \\ 0 & \text{else,} \end{cases}$$

where $\phi^{ij}(s)$ is the input from unit $i$ to unit $j$. Thus, the advantage function requires access to the input $\phi^j(s)$ and the influence $\pi^j$ of every unit in the Actor-network.
17
Balduzzi and Ghifary
• Critic updates.
The critic updates for the two algorithms are essentially identical, with the TD-error replaced with the TDG-error.
In short, the approximation in Example 2 that is used by COPDAC-Q is not well-adapted to deep learning. The main reason is that learning the advantage function requires coupling the vector $w$ with the parameters $\Theta$ of the actor.
Comparison with computing the gradient of the value-function approximation. Perhaps the most natural approach to estimating the gradient is to simply estimate the value function, and then use its gradient as an estimate of the derivative (Jordan and Jacobs, 1990; Prokhorov and Wunsch, 1997; Wang and Si, 2001; Hafner and Riedmiller, 2011; Fairbank and Alonso, 2012; Fairbank et al., 2013). The main problem with this approach is that, to date, it has not been shown that the resulting updates of the Critic and the Actor are compatible.
There are also no guarantees that the gradient of the Critic will be a good approximation to the gradient of the value function, although it is intuitively plausible. The problem becomes particularly severe when the value-function is estimated via a neural network that uses activation functions that are not smooth, such as rectifiers. Rectifiers are becoming increasingly popular due to their superior empirical performance (Nair and Hinton, 2010; Glorot et al., 2011; Zeiler et al., 2013; Dahl et al., 2013).
# 6. Experiments
We evaluate GProp on three tasks: two highly nonlinear contextual bandit tasks constructed from benchmark datasets for nonparametric regression, and the octopus arm.
We do not evaluate GProp on other standard reinforcement learning benchmarks such as Mountain Car, Pendulum or Puddle World, since these can already be handled by linear actor-critic algorithms. The contribution of GProp is the ability to learn representations suited to nonlinear problems.
Cloning and replay. Temporal difference learning can be unstable when run over a neural network. A recent innovation introduced in (Mnih et al., 2015) that stabilizes TD-learning is to clone a separate network $Q^{\tilde{V}}$ to compute the targets $r_t + \gamma Q^{\tilde{V}}(\tilde{s}_{t+1})$. The parameters of the cloned network are updated periodically.

We implement a similar modification of the TDG-error in Algorithm 1. We also use experience replay (Mnih et al., 2015). GProp is well-suited to replay, since the critic and deviator can learn values and gradients over the full range of previously observed state-action pairs offline.
Cloning and replay were also applied to COPDAC-Q. Both algorithms were implemented in Theano (Bergstra et al., 2010; Bastien et al., 2012).
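A minimal Python sketch of the two stabilizers is given below, assuming some clone-able network object (a trivial stand-in is used here); the buffer size, batch size and cloning period are placeholder choices rather than the settings used in the experiments.

```python
import copy
import random
from collections import deque

replay = deque(maxlen=100_000)          # ring buffer of (s, a, r, s_next) tuples

def store(transition):
    replay.append(transition)

def sample(batch_size=32):
    return random.sample(replay, min(batch_size, len(replay)))

class TargetWrapper:
    """Holds a frozen clone used to compute targets r + gamma * Q_target(s')."""
    def __init__(self, net, clone_every=1000):
        self.net, self.clone_every, self.steps = net, clone_every, 0
        self.target = copy.deepcopy(net)
    def tick(self):
        self.steps += 1
        if self.steps % self.clone_every == 0:
            self.target = copy.deepcopy(self.net)   # periodic re-clone

# Usage with any deep-copyable critic object, here a trivial stand-in:
critic = {"V": [0.0]}
wrapper = TargetWrapper(critic)
for step in range(3000):
    wrapper.tick()
```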
# 6.1 Contextual Bandit Tasks
The goal of the contextual bandit tasks is to probe the ability of reinforcement learning algorithms to accurately estimate gradients. The experimental setting may thus be of independent interest.
[Figure 1: two panels plotting reward against epochs for the Contextual Bandit (SARCOS) and Contextual Bandit (Barrett) tasks.]
Figure 1: Performance on contextual bandit tasks. The rewards (negative normalized test MSE) for 10 runs are shown and averaged (thick lines). Performance variation for GProp is barely visible. Epochs refer to multiples of the dataset; algorithms are ultimately trained on the same number of random samples for both datasets.
Description. We converted two robotics datasets, SARCOS3 and Barrett WAM4, into contextual bandit problems via the supervised-to-contextual-bandit transform in (Dudík et al., 2014). The datasets have 44,484 and 12,000 training points respectively, both with 21 features corresponding to the positions, velocities and accelerations of seven joints. Labels are 7-dimensional vectors corresponding to the torques of the 7 joints.
In the contextual bandit task, the agent samples 21-dimensional state vectors i.i.d. from either the SARCOS or Barrett training data and executes 7-dimensional actions. The reward $r(s, a) = -\|y(s) - a\|^2$ is the negative mean-square distance from the action to the label. Note that the reward is a scalar, whereas the correct label is a 7-dimensional vector. The gradient of the reward

$$\tfrac{1}{2} \nabla_a r(s, a) = y(s) - a$$

is the direction from the action to the correct label. In the supervised setting, the gradient can be computed. In the bandit setting, the reward is a zeroth-order black box.
The agent thus receives far less information in the bandit setting than in the fully supervised setting. Intuitively, the negative distance $r(s, a)$ "tells" the algorithm that the correct label lies on the surface of a sphere in the 7-dimensional action space that is centred on the most recent action. By contrast, in the supervised setting, the algorithm is given the position of the label in the action space. In the bandit setting, the algorithm must estimate the position of the label on the surface of the sphere. Equivalently, the algorithm must estimate the label's direction relative to the center of the sphere, which is given by the gradient of the value function.
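The supervised-to-bandit transform itself is a few lines of code. The Python sketch below uses random stand-in arrays in place of the actual SARCOS or Barrett data, but the reward computation matches the construction above: only the scalar reward, never the label, is revealed to the learner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a regression dataset like SARCOS: 21-dim states, 7-dim labels.
X = rng.normal(size=(1000, 21))
Y = rng.normal(size=(1000, 7))

def bandit_step(policy):
    """Sample a context, let the policy act, return only the scalar reward."""
    i = int(rng.integers(len(X)))
    s = X[i]
    a = policy(s)                       # 7-dim action, i.e. a guessed label
    r = -np.sum((Y[i] - a) ** 2)        # reward: negative squared distance
    return s, a, r                      # the label Y[i] itself is never revealed

reward_gradient = lambda y, a: 2 * (y - a)   # what the agent must estimate

s, a, r = bandit_step(lambda s: np.zeros(7))
print(r)
```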
3. Taken from www.gaussianprocess.org/gpml/data/. 4. Taken from http://www.ausy.tu-darmstadt.de/Miscellaneous/Miscellaneous.
The goal of the contextual bandit task is thus to simultaneously solve seven nonparametric regression problems when observing distances-to-labels instead of directly observing labels. The value function is relatively easy to learn in the contextual bandit setting since the task is not sequential. However, both the value function and its gradient are highly nonlinear, and it is precisely the gradient that specifies where labels lie on the spheres.
Network architectures. GProp and COPDAC-Q were implemented on an actor and a deviator network of two layers (300 and 100 rectifiers) each, and a critic with hidden layers of 100 and 10 rectifiers. Updates were computed via RMSProp with momentum. The variance of the Gaussian noise was set to decrease linearly from σ² = 1.0 until reaching σ² = 0.1, at which point it remained fixed.
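The linear variance schedule can be sketched as below (our illustration; the schedule length t_decay is not stated in the text and is an assumed parameter).

def noise_variance(t, t_decay, sigma2_init=1.0, sigma2_final=0.1):
    # Linearly decay the exploration variance from 1.0 to 0.1, then hold it fixed.
    frac = min(t / float(t_decay), 1.0)
    return sigma2_init + frac * (sigma2_final - sigma2_init)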
Performance. Figure 1 compares the test-set performance of policies learned by GProp against COPDAC-Q. The final policies trained by GProp achieved average mean-square test errors of 0.013 and 0.014 on the seven SARCOS and Barrett benchmarks respectively.
Remarkably, GProp is competitive with fully-supervised nonparametric regression algorithms on the SARCOS and Barrett datasets; see Figure 2bc in (Nguyen-Tuong et al., 2008) and the results in (Kpotufe and Boularias, 2013; Trivedi et al., 2014). It is important to note that the results reported in those papers are for algorithms that are given the labels and that solve one regression problem at a time. To the best of our knowledge, there are no prior examples of a bandit or reinforcement learning algorithm that is competitive with fully supervised methods on regression datasets.
For comparison, we implemented Backprop on the actor network under full supervision. Backprop converged to 0.006 and 0.005 on SARCOS and Barrett, compared to 0.013 and 0.014 for GProp. Note that Backprop is trained on 7-dimensional labels whereas GProp receives 1-dimensional rewards.
[Figure 2 panels: "Contextual Bandit Gradients (SARCOS)" and "Contextual Bandit Gradients (Barrett)"; legend: COPDAC-Q, GradProp; y-axis: normalized MSE of gradient estimates, x-axis: epochs.]
Figure 2: Gradient estimates for contextual bandit tasks. The normalized MSE of the gradient estimates compared against the true gradients is shown for 10 runs of COPDAC-Q and GProp, along with their averages (thick lines).
Accuracy of gradient-estimates. The true value-gradients can be computed and compared with the algorithms' estimates on the contextual bandit task. Fig. 2 shows the performance of the two algorithms. GProp's gradient-error converges to < 0.005 on both tasks. COPDAC-Q's gradient estimate, implicit in the advantage function, converges to 0.03 (SARCOS) and 0.07 (Barrett). This confirms that GProp yields significantly better gradient estimates.
COPDAC-Q's estimates are significantly worse for Barrett than for SARCOS, in line with COPDAC-Q's worse performance on Barrett in Fig. 1. It is unclear why COPDAC-Q's gradient estimate degrades on Barrett for some period of time; since there are no guarantees on COPDAC-Q's estimates, however, its erratic behavior is perhaps not surprising.
Comparison with bandit task in (Silver et al., 2014). Note that although the contextual bandit problems investigated here are lower-dimensional (with 21-dimensional state spaces and 7-dimensional action spaces) than the bandit problem in (Silver et al., 2014) (with no state space and 10-, 25- and 50-dimensional action spaces), they are nevertheless much harder. The optimal action in that bandit problem is, in all cases, the constant vector [4, . . . , 4] consisting of only 4s. In contrast, SARCOS and Barrett are nontrivial benchmarks even when fully supervised.
# 6.2 Octopus Arm
The octopus arm task is a challenging environment that is high-dimensional, sequential and highly nonlinear.
Description. The objective is to learn to hit a target with a simulated octopus arm (Engel et al., 2005).5 Settings are taken from (Silver et al., 2014). Importantly, the action space is not simplified using "macro-actions". The arm has C = 6 compartments attached to a rotating base. There are 50 = 8C + 2 state variables (x, y position/velocity of nodes along the upper/lower side of the arm; angular position/velocity of the base) and 20 = 3C + 2 action variables controlling the clockwise and counter-clockwise rotation of the base and three muscles per compartment.
After each step, the agent receives a reward of 10 · Δdist, where Δdist is the change in distance between the arm and the target. The final reward is +50 if the agent hits the target. An episode ends when the target is hit or after 300 steps.
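A sketch of this reward structure (our own transcription of the rules above; the simulator's actual interface differs):

def octopus_reward(prev_dist, dist, hit_target, t, max_steps=300):
    # 10 * (decrease in arm-to-target distance), plus +50 on a hit;
    # the episode ends on a hit or after 300 steps.
    r = 10.0 * (prev_dist - dist)
    if hit_target:
        r += 50.0
    return r, (hit_target or t >= max_steps)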
The arm initializes at eight positions relative to the target: ±45°, ±75°, ±105°, ±135°. See Appendix B for more details.
Network architectures. We applied GProp to an actor network with 100 hidden rectifiers and linear output units clipped to lie in [0, 1], and to critic and deviator networks both with two hidden layers of 100 and 40 rectifiers and linear output units. Updates were computed via RMSProp with a step rate of 10⁻⁴, a moving-average decay and Nesterov momentum (Hinton et al., 2012) of 0.9 and 0.9 respectively, and a discount rate γ of 0.95.
5. Simulator taken from http://reinforcementlearningproject.googlecode.com/svn/trunk/FoundationsOfAI/octopus-arm-simulator/octopus/
[Figure 3 panels: "Octopus arm"; legend: COPDAC-Q, GradProp; left y-axis: steps to target, right y-axis: reward per step, x-axis: # training actions.]
Figure 3: Performance on octopus arm task. Ten runs of GProp and COPDAC-Q on a 6-segment octopus arm with 20 action and 50 state dimensions. Thick lines depict average values. Left panel: number of steps/episode for the arm to reach the target. Right panel: corresponding average rewards/step.
The variance of the Gaussian noise was initialized to σ² = 1.0. An explore/exploit tradeoff was implemented as follows: when the arm hit the target in more than 300 steps, we set σ² ← σ² · 1.3; otherwise σ² ← σ²/1.3. A hard lower bound was fixed at σ² = 0.3.
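In code, the schedule amounts to the following sketch (ours, directly transcribing the rule above):

def adapt_sigma2(sigma2, steps_to_target, threshold=300, factor=1.3, floor=0.3):
    # Widen exploration noise after slow episodes, narrow it after fast ones,
    # never dropping below the hard lower bound sigma^2 = 0.3.
    sigma2 = sigma2 * factor if steps_to_target > threshold else sigma2 / factor
    return max(sigma2, floor)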
We implemented COPDAC-Q on a variety of architectures; the best results are shown (see also Figure 3 in (Silver et al., 2014)). They were obtained using a similar architecture to GProp's, with sigmoidal hidden units and sigmoidal output units for the actor. Linear, rectilinear and clipped-linear output units were also tried. As for GProp, cloning and experience replay were used to increase stability.
Performance. Figure 3 shows the steps-to-target and average-reward-per-step on ten training runs. GProp converges rapidly and reliably (within ±170,000 steps) to a stable policy that uses fewer than 50 steps to hit the target on average (see the supplementary video for examples of the final policy in action). GProp converges quicker, and to a better solution, than COPDAC-Q. The reader is strongly encouraged to compare our results with those reported in (Silver et al., 2014). To the best of our knowledge, GProp achieves the best performance to date on the octopus arm task.
Stability. It is clear from the variability displayed in the figures that both the policy and the gradients learned by GProp are more stable than COPDAC-Q's. Note that the higher variability exhibited by GProp in the right-hand panel of Fig. 3 (rewards-per-step) is misleading. It arises because dividing by the number of steps, which is lower for GProp since it hits the target more quickly after training, inflates GProp's apparent variability.
# 7. Conclusion
Value-Gradient Backpropagation (GProp) is the first deep reinforcement learning algorithm with compatible function approximation for continuous policies. It builds on the deterministic actor-critic, COPDAC-Q, developed in (Silver et al., 2014), with two decisive modifications. First, we incorporate an explicit estimate of the value gradient into the algorithm. Second, we construct a model that decouples the internal structure of the actor, critic, and deviator, so that all three can be trained via backpropagation.
GProp achieves state-of-the-art performance on two contextual bandit problems where it simultaneously solves seven regression problems without observing labels. Note that GProp is competitive with recent fully supervised methods that solve a single regression problem at a time. Further, GProp outperforms the prior state-of-the-art on the octopus arm task, quickly converging onto policies that rapidly and fluidly hit the target.
Acknowledgements. We thank Nicolas Heess for sharing the settings of the octopus arm experiments in (Silver et al., 2014).
# References
Adrian K Agogino and Kagan Tumer. Unifying Temporal and Structural Credit Assignment Problems. In AAMAS, 2004.
Adrian K Agogino and Kagan Tumer. Analyzing and Visualizing Multiagent Rewards in Dynamic and Stochastic Environments. Journal of Autonomous Agents and Multi-Agent Systems, 17(2):320–338, 2008.

L C Baird. Residual algorithms: Reinforcement learning with function approximation. In ICML, 1995.

David Balduzzi. Deep Online Convex Optimization by Putting Forecaster to Sleep. arXiv:1509.01851, 2015.

David Balduzzi, Hastagiri Vanchinathan, and Joachim Buhmann. Kickback cuts Backprop's red-tape: Biologically plausible credit assignment in neural networks. In AAAI, 2015.

Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems. IEEE Trans. Systems, Man, Cyb, 13(5):834–846, 1983.

F Bastien, P Lamblin, R Pascanu, J Bergstra, I Goodfellow, A Bergeron, N Bouchard, and Y Bengio. Theano: new features and speed improvements. In NIPS Workshop: Deep Learning and Unsupervised Feature Learning, 2012.

J Bergstra, O Breuleux, F Bastien, P Lamblin, R Pascanu, G Desjardins, J Turian, D Warde-Farley, and Yoshua Bengio. Theano: A CPU and GPU Math Expression Compiler. In Proc. Python for Scientific Comp. Conf. (SciPy), 2010.

George E Dahl, Tara N Sainath, and Geoffrey Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In IEEE Int Conf on Acoustics, Speech and Signal Processing (ICASSP), 2013.

Christoph Dann, Gerhard Neumann, and Jan Peters. Policy Evaluation with Temporal Differences: A Survey and Comparison. JMLR, 15:809–883, 2014.
Marc Peter Deisenroth, Gerhard Neumann, and Jan Peters. A Survey on Policy Search for Robotics. Foundations and Trends in Machine Learning, 2(1-2):1–142, 2011.

Miroslav Dudík, Dumitru Erhan, John Langford, and Lihong Li. Doubly Robust Policy Evaluation and Optimization. Statistical Science, 29(4):485–511, 2014.

Y Engel, P Szabó, and D Volkinshtein. Learning to control an octopus arm with gaussian process temporal difference methods. In NIPS, 2005.

Michael Fairbank and Eduardo Alonso. Value-Gradient Learning. In IEEE World Conference on Computational Intelligence (WCCI), 2012.

Michael Fairbank, Eduardo Alonso, and Daniel V Prokhorov. An Equivalence Between Adaptive Dynamic Programming With a Critic and Backpropagation Through Time. IEEE Trans. Neur. Net., 24(12):2088–2100, 2013.

Abraham Flaxman, Adam Kalai, and H Brendan McMahan. Online convex optimization in the bandit setting: Gradient descent without a gradient. In SODA, 2005.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep Sparse Rectifier Neural Networks. In Proc. 14th Int Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
Carlos Guestrin, Michail Lagoudakis, and Ronald Parr. Coordinated Reinforcement Learn- ing. In ICML, 2002.
Roland Hafner and Martin Riedmiller. Reinforcement learning in feedback control: Challenges and benchmarks from technical process control. Machine Learning, 84:137–169, 2011.

G Hinton, Nitish Srivastava, and Kevin Swersky. Lecture 6a: Overview of minibatch gradient descent. 2012.
Chris HolmesParker, Adrian K Agogino, and Kagan Tumer. Combining Reward Shaping and Hierarchies for Scaling to Large Multiagent Systems. The Knowledge Engineering Review, 2014.
Michael I Jordan and R A Jacobs. Learning to control an unstable system with forward modeling. In NIPS, 1990.
Sham Kakade. A natural policy gradient. In NIPS, 2001.
Vijay R Konda and John N Tsitsiklis. Actor-critic algorithms. In NIPS, 2000.
Samory Kpotufe and Abdeslam Boularias. Gradient Weights help Nonparametric Regressors. In Advances in Neural Information Processing Systems (NIPS), 2013.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-End Training of Deep Visuomotor Policies. arXiv:1504.00702, 2015.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 02 2015.

Vinod Nair and Geoffrey Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. In ICML, 2010.

A S Nemirovski and D B Yudin. Problem complexity and method efficiency in optimization. Wiley-Interscience, 1983.
Duy Nguyen-Tuong, Jan Peters, and Matthias Seeger. Local Gaussian Process Regression for Real Time Online Model Learning. In NIPS, 2008.
Jan Peters and Stefan Schaal. Policy Gradient Methods for Robotics. In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2006.
Daniel V Prokhorov and Donald C Wunsch. Adaptive Critic Designs. IEEE Trans. Neur. Net., 8(5):997–1007, 1997.

Maxim Raginsky and Alexander Rakhlin. Information-Based Complexity, Feedback and Dynamics in Convex Programming. IEEE Trans. Inf. Theory, 57(10):7036–7056, 2011.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Ried- miller. Deterministic Policy Gradient Algorithms. In ICML, 2014.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 15:1929–1958, 2014.
R S Sutton and A G Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
Richard Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, 1999.
Richard Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesvári, and Eric Wiewiora. Fast Gradient-Descent Methods for Temporal-Difference Learning with Linear Function Approximation. In ICML, 2009a.

Richard Sutton, Csaba Szepesvári, and Hamid Reza Maei. A convergent O(n) algorithm for off-policy temporal-difference learning with linear function approximation. In Adv in Neural Information Processing Systems (NIPS), 2009b.

Shubhendu Trivedi, Jialei Wang, Samory Kpotufe, and Gregory Shakhnarovich. A Consistent Estimator of the Expected Gradient Outerproduct. In UAI, 2014.

John Tsitsiklis and Benjamin Van Roy. An Analysis of Temporal-Difference Learning with Function Approximation. IEEE Trans. Aut. Control, 42(5):674–690, 1997.
Niklas Wahlström, Thomas B. Schön, and Marc Peter Deisenroth. From Pixels to Torques: Policy Learning with Deep Dynamical Models. arXiv:1502.02251, 2015.

Y Wang and J Si. On-line learning control by association and reinforcement. IEEE Trans. Neur. Net., 12(2):264–276, 2001.

Ronald J Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8:229–256, 1992.

M D Zeiler, M Ranzato, R Monga, M Mao, K Yang, Q V Le, P Nguyen, A Senior, V Vanhoucke, J Dean, and G Hinton. On Rectified Linear Units for Speech Processing. In ICASSP, 2013.
# Appendices
# A. Explicit weight updates under GProp
It is instructive to describe the weight updates under GProp more explicitly.
Let θ^j, w^j and v^j denote the weight vector of unit j, according to whether it belongs to the actor, deviator or critic network. Similarly, in each case τ^j or ψ^j denotes the influence of unit j on the network's output layer, where the influence is vector-valued for actor and deviator networks and scalar-valued for the critic network.

Weight updates in the deviator-actor-critic model, where all three networks consist of rectifier units performing stochastic gradient descent, are then per Algorithm 3. Units that are not active on a round do not update their weights that round.
Algorithm 3: GProp: Explicit updates.
for rounds t = 1, 2, . . . , T do
    Network gets state s_t and responds; compute the TD error ξ_t ← r_t + γ Q^{V_t}(s_{t+1}) − Q^{V_t}(s_t)
    for unit j = 1, 2, . . . , n do
        if j is an active actor unit then
            θ^j_{t+1} ← θ^j_t + η^θ_t · ⟨G^{W_t}(s_t), τ^j⟩ · φ^j(s_t)    // backpropagate G^W
        else if j is an active critic unit then
            v^j_{t+1} ← v^j_t + η^v_t · ⟨ξ_t, ψ^j⟩ · φ^j(s_t)    // backpropagate ξ
        else if j is an active deviator unit then
            w^j_{t+1} ← w^j_t + η^w_t · ⟨ξ_t · ε_t, τ^j⟩ · φ^j(s_t)    // backpropagate ξ · ε
# B. Details for octopus arm experiments
Listing 1 summarizes technical information with respect to the physical description and task setting used in the octopus arm simulator in XML format.
Listing 1 Physical description and task setting for the octopus arm (setting.xml).
<constants>
    <frictionTangential>0.4</frictionTangential>
    <frictionPerpendicular>1</frictionPerpendicular>
    <pressure>10</pressure>
    <gravity>0.01</gravity>
    <surfaceLevel>5</surfaceLevel>
    <buoyancy>0.08</buoyancy>
    <muscleActive>0.1</muscleActive>
    <musclePassive>0.04</musclePassive>
    <muscleNormalizedMinLength>0.1</muscleNormalizedMinLength>
    <muscleDamping>-1</muscleDamping>
    <repulsionConstant>.01</repulsionConstant>
    <repulsionPower>1</repulsionPower>
    <repulsionThreshold>0.7</repulsionThreshold>
    <torqueCoefficient>0.025</torqueCoefficient>
</constants>

<targetTask timeLimit="300" stepReward="1">
    <target position="-3.25 -3.25" reward="50" />
</targetTask>
| { "id": "1502.02251" } |
1509.02971 | Continuous control with deep reinforcement learning | We adapt the ideas underlying the success of Deep Q-Learning to the
continuous action domain. We present an actor-critic, model-free algorithm
based on the deterministic policy gradient that can operate over continuous
action spaces. Using the same learning algorithm, network architecture and
hyper-parameters, our algorithm robustly solves more than 20 simulated physics
tasks, including classic problems such as cartpole swing-up, dexterous
manipulation, legged locomotion and car driving. Our algorithm is able to find
policies whose performance is competitive with those found by a planning
algorithm with full access to the dynamics of the domain and its derivatives.
We further demonstrate that for many of the tasks the algorithm can learn
policies end-to-end: directly from raw pixel inputs. | http://arxiv.org/pdf/1509.02971 | Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra | cs.LG, stat.ML | 10 pages + supplementary | null | cs.LG | 20150909 | 20190705 |
Published as a conference paper at ICLR 2016
# CONTINUOUS CONTROL WITH DEEP REINFORCEMENT LEARNING
Timothy P. Lillicrap∗, Jonathan J. Hunt∗, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver & Daan Wierstra Google Deepmind London, UK {countzero, jjhunt, apritzel, heess, etom, tassa, davidsilver, wierstra} @ google.com
# ABSTRACT
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies "end-to-end": directly from raw pixel inputs.
# 1 INTRODUCTION
One of the primary goals of the field of artificial intelligence is to solve complex tasks from unprocessed, high-dimensional, sensory input. Recently, significant progress has been made by combining advances in deep learning for sensory processing (Krizhevsky et al., 2012) with reinforcement learning, resulting in the "Deep Q Network" (DQN) algorithm (Mnih et al., 2015) that is capable of human level performance on many Atari video games using unprocessed pixels for input. To do so, deep neural network function approximators were used to estimate the action-value function.

However, while DQN solves problems with high-dimensional observation spaces, it can only handle discrete and low-dimensional action spaces. Many tasks of interest, most notably physical control tasks, have continuous (real valued) and high dimensional action spaces. DQN cannot be straightforwardly applied to continuous domains since it relies on finding the action that maximizes the action-value function, which in the continuous valued case requires an iterative optimization process at every step.

An obvious approach to adapting deep reinforcement learning methods such as DQN to continuous domains is to simply discretize the action space. However, this has many limitations, most notably the curse of dimensionality: the number of actions increases exponentially with the number of degrees of freedom. For example, a 7 degree of freedom system (as in the human arm) with the coarsest discretization ai ∈ {−k, 0, k} for each joint leads to an action space with dimensionality: 3⁷ = 2187. The situation is even worse for tasks that require fine control of actions as they require a correspondingly finer grained discretization, leading to an explosion of the number of discrete actions. Such large action spaces are difficult to explore efficiently, and thus successfully training DQN-like networks in this context is likely intractable. Additionally, naive discretization of action spaces needlessly throws away information about the structure of the action domain, which may be essential for solving many problems.
In this work we present a model-free, off-policy actor-critic algorithm using deep function approximators that can learn policies in high-dimensional, continuous action spaces. Our work is based

∗These authors contributed equally.
on the deterministic policy gradient (DPG) algorithm (Silver et al., 2014) (itself similar to NFQCA (Hafner & Riedmiller, 2011), and similar ideas can be found in (Prokhorov et al., 1997)). However, as we show below, a naive application of this actor-critic method with neural function approximators is unstable for challenging problems.
Here we combine the actor-critic approach with insights from the recent success of Deep Q Network (DQN) (Mnih et al., 2013; 2015). Prior to DQN, it was generally believed that learning value functions using large, non-linear function approximators was difficult and unstable. DQN is able to learn value functions using such function approximators in a stable and robust way due to two innovations: 1. the network is trained off-policy with samples from a replay buffer to minimize correlations between samples; 2. the network is trained with a target Q network to give consistent targets during temporal difference backups. In this work we make use of the same ideas, along with batch normalization (Ioffe & Szegedy, 2015), a recent advance in deep learning.

In order to evaluate our method we constructed a variety of challenging physical control problems that involve complex multi-joint movements, unstable and rich contact dynamics, and gait behavior. Among these are classic problems such as the cartpole swing-up problem, as well as many new domains. A long-standing challenge of robotic control is to learn an action policy directly from raw sensory input such as video. Accordingly, we place a fixed viewpoint camera in the simulator and attempted all tasks using both low-dimensional observations (e.g. joint angles) and directly from pixels.
Our model-free approach which we call Deep DPG (DDPG) can learn competitive policies for all of our tasks using low-dimensional observations (e.g. cartesian coordinates or joint angles) using the same hyper-parameters and network structure. In many cases, we are also able to learn good policies directly from pixels, again keeping hyperparameters and network structure constant 1.
A key feature of the approach is its simplicity: it requires only a straightforward actor-critic architecture and learning algorithm with very few "moving parts", making it easy to implement and scale to more difficult problems and larger networks. For the physical control problems we compare our results to a baseline computed by a planner (Tassa et al., 2012) that has full access to the underlying simulated dynamics and its derivatives (see supplementary information). Interestingly, DDPG can sometimes find policies that exceed the performance of the planner, in some cases even when learning from pixels (the planner always plans over the underlying low-dimensional state space).
# 2 BACKGROUND
We consider a standard reinforcement learning setup consisting of an agent interacting with an environment E in discrete timesteps. At each timestep t the agent receives an observation $x_t$, takes an action $a_t$ and receives a scalar reward $r_t$. In all the environments considered here the actions are real-valued, $a_t \in \mathbb{R}^N$. In general, the environment may be partially observed so that the entire history of the observation, action pairs $s_t = (x_1, a_1, ..., a_{t-1}, x_t)$ may be required to describe the state. Here, we assumed the environment is fully-observed so $s_t = x_t$.
An agent's behavior is defined by a policy, $\pi$, which maps states to a probability distribution over the actions, $\pi : S \to P(A)$. The environment, E, may also be stochastic. We model it as a Markov decision process with a state space $S$, action space $A = \mathbb{R}^N$, an initial state distribution $p(s_1)$, transition dynamics $p(s_{t+1}|s_t, a_t)$, and reward function $r(s_t, a_t)$. The return from a state is defined as the sum of discounted future reward $R_t = \sum_{i=t}^{T} \gamma^{(i-t)} r(s_i, a_i)$ with a discounting factor $\gamma \in [0, 1]$. Note that the return depends on the actions chosen, and therefore on the policy $\pi$, and may be stochastic. The goal in reinforcement learning is to learn a policy which maximizes the expected return from the start distribution $J = \mathbb{E}_{r_i, s_i \sim E, a_i \sim \pi}[R_1]$. We denote the discounted state visitation distribution for a policy $\pi$ as $\rho^\pi$.
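As a small illustration of the discounted return (our own sketch, not from the paper), it can be accumulated right-to-left over a reward sequence:

def discounted_return(rewards, gamma=0.99):
    # R_t = sum_{i >= t} gamma^(i - t) * r_i, here computed for t = 0.
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R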
The action-value function is used in many reinforcement learning algorithms. It describes the expected return after taking an action $a_t$ in state $s_t$ and thereafter following policy $\pi$:

$Q^\pi(s_t, a_t) = \mathbb{E}_{r_{i \ge t}, s_{i > t} \sim E, a_{i > t} \sim \pi}[R_t \mid s_t, a_t]$ (1)
1You can view a movie of some of the learned policies at https://goo.gl/J4PIAz
Many approaches in reinforcement learning make use of the recursive relationship known as the Bellman equation:
$Q^\pi(s_t, a_t) = \mathbb{E}_{r_t, s_{t+1} \sim E}\big[r(s_t, a_t) + \gamma\, \mathbb{E}_{a_{t+1} \sim \pi}[Q^\pi(s_{t+1}, a_{t+1})]\big]$ (2)
If the target policy is deterministic we can describe it as a function $\mu : S \to A$ and avoid the inner expectation:

$Q^\mu(s_t, a_t) = \mathbb{E}_{r_t, s_{t+1} \sim E}[r(s_t, a_t) + \gamma\, Q^\mu(s_{t+1}, \mu(s_{t+1}))]$ (3)
The expectation depends only on the environment. This means that it is possible to learn $Q^\mu$ off-policy, using transitions which are generated from a different stochastic behavior policy $\beta$.

Q-learning (Watkins & Dayan, 1992), a commonly used off-policy algorithm, uses the greedy policy $\mu(s) = \arg\max_a Q(s, a)$. We consider function approximators parameterized by $\theta^Q$, which we optimize by minimizing the loss:

$L(\theta^Q) = \mathbb{E}_{s_t \sim \rho^\beta, a_t \sim \beta, r_t \sim E}\big[(Q(s_t, a_t|\theta^Q) - y_t)^2\big]$ (4)
where
$y_t = r(s_t, a_t) + \gamma\, Q(s_{t+1}, \mu(s_{t+1})|\theta^Q)$. (5)

While $y_t$ is also dependent on $\theta^Q$, this is typically ignored.
The use of large, non-linear function approximators for learning value or action-value functions has often been avoided in the past since theoretical performance guarantees are impossible, and practically learning tends to be unstable. Recently, (Mnih et al., 2013; 2015) adapted the Q-learning algorithm in order to make effective use of large neural networks as function approximators. Their algorithm was able to learn to play Atari games from pixels. In order to scale Q-learning they introduced two major changes: the use of a replay buffer, and a separate target network for calculating $y_t$. We employ these in the context of DDPG and explain their implementation in the next section.
# 3 ALGORITHM
It is not possible to straightforwardly apply Q-learning to continuous action spaces, because in continuous spaces finding the greedy policy requires an optimization of $a_t$ at every timestep; this optimization is too slow to be practical with large, unconstrained function approximators and nontrivial action spaces. Instead, here we used an actor-critic approach based on the DPG algorithm (Silver et al., 2014). The DPG algorithm maintains a parameterized actor function $\mu(s|\theta^\mu)$ which specifies the current policy by deterministically mapping states to a specific action. The critic $Q(s, a)$ is learned using the Bellman equation as in Q-learning. The actor is updated by applying the chain rule to the expected return from the start distribution $J$ with respect to the actor parameters:
$\nabla_{\theta^\mu} J \approx \mathbb{E}_{s_t \sim \rho^\beta}\big[\nabla_{\theta^\mu} Q(s, a|\theta^Q)\big|_{s=s_t, a=\mu(s_t|\theta^\mu)}\big] = \mathbb{E}_{s_t \sim \rho^\beta}\big[\nabla_a Q(s, a|\theta^Q)\big|_{s=s_t, a=\mu(s_t)}\, \nabla_{\theta^\mu} \mu(s|\theta^\mu)\big|_{s=s_t}\big]$ (6)
Silver et al. (2014) proved that this is the policy gradient, the gradient of the policyâs performance 2.
As with Q learning, introducing non-linear function approximators means that convergence is no longer guaranteed. However, such approximators appear essential in order to learn and generalize on large state spaces. NFQCA (Hafner & Riedmiller, 2011), which uses the same update rules as DPG but with neural network function approximators, uses batch learning for stability, which is intractable for large networks. A minibatch version of NFQCA which does not reset the policy at each update, as would be required to scale to large networks, is equivalent to the original DPG, which we compare to here. Our contribution here is to provide modifications to DPG, inspired by the success of DQN, which allow it to use neural network function approximators to learn in large state and action spaces online. We refer to our algorithm as Deep DPG (DDPG, Algorithm 1).
2In practice, as is commonly done in policy gradient implementations, we ignored the discount in the state-visitation distribution $\rho^\beta$.
One challenge when using neural networks for reinforcement learning is that most optimization algorithms assume that the samples are independently and identically distributed. Obviously, when the samples are generated from exploring sequentially in an environment this assumption no longer holds. Additionally, to make efficient use of hardware optimizations, it is essential to learn in minibatches, rather than online.
As in DQN, we used a replay buffer to address these issues. The replay buffer is a finite sized cache R. Transitions were sampled from the environment according to the exploration policy and the tuple $(s_t, a_t, r_t, s_{t+1})$ was stored in the replay buffer. When the replay buffer was full the oldest samples were discarded. At each timestep the actor and critic are updated by sampling a minibatch uniformly from the buffer. Because DDPG is an off-policy algorithm, the replay buffer can be large, allowing the algorithm to benefit from learning across a set of uncorrelated transitions.
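A minimal sketch of such a buffer (illustrative Python, not the authors' implementation):

import random
from collections import deque

class ReplayBuffer:
    # Finite cache R of (s_t, a_t, r_t, s_{t+1}) tuples; deque(maxlen=...)
    # discards the oldest transition automatically once the buffer is full.
    def __init__(self, capacity=1000000):
        self.buffer = deque(maxlen=capacity)

    def add(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size=64):
        # Uniform minibatch, as used for each actor/critic update.
        return random.sample(self.buffer, batch_size)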
Directly implementing Q learning (equation 4) with neural networks proved to be unstable in many environments. Since the network $Q(s, a|\theta^Q)$ being updated is also used in calculating the target value (equation 5), the Q update is prone to divergence. Our solution is similar to the target network used in (Mnih et al., 2015) but modified for actor-critic and using "soft" target updates, rather than directly copying the weights. We create a copy of the actor and critic networks, $Q'(s, a|\theta^{Q'})$ and $\mu'(s|\theta^{\mu'})$ respectively, that are used for calculating the target values. The weights of these target networks are then updated by having them slowly track the learned networks: $\theta' \leftarrow \tau\theta + (1 - \tau)\theta'$ with $\tau \ll 1$. This means that the target values are constrained to change slowly, greatly improving the stability of learning. This simple change moves the relatively unstable problem of learning the action-value function closer to the case of supervised learning, a problem for which robust solutions exist. We found that having both a target $\mu'$ and $Q'$ was required to have stable targets $y_i$ in order to consistently train the critic without divergence. This may slow learning, since the target network delays the propagation of value estimations. However, in practice we found this was greatly outweighed by the stability of learning.
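The soft update itself is one line per parameter; a sketch with NumPy arrays standing in for network weights (our illustration):

def soft_update(target_params, source_params, tau=0.001):
    # theta' <- tau * theta + (1 - tau) * theta', applied in place.
    for tp, sp in zip(target_params, source_params):
        tp *= (1.0 - tau)
        tp += tau * sp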
When learning from low dimensional feature vector observations, the different components of the observation may have different physical units (for example, positions versus velocities) and the ranges may vary across environments. This can make it difficult for the network to learn effectively and may make it difficult to find hyper-parameters which generalise across environments with different scales of state values.

One approach to this problem is to manually scale the features so they are in similar ranges across environments and units. We address this issue by adapting a recent technique from deep learning called batch normalization (Ioffe & Szegedy, 2015). This technique normalizes each dimension across the samples in a minibatch to have unit mean and variance. In addition, it maintains a running average of the mean and variance to use for normalization during testing (in our case, during exploration or evaluation). In deep networks, it is used to minimize covariance shift during training, by ensuring that each layer receives whitened input. In the low-dimensional case, we used batch normalization on the state input and all layers of the µ network and all layers of the Q network prior to the action input (details of the networks are given in the supplementary material). With batch normalization, we were able to learn effectively across many different tasks with differing types of units, without needing to manually ensure the units were within a set range.
A major challenge of learning in continuous action spaces is exploration. An advantage of off-policy algorithms such as DDPG is that we can treat the problem of exploration independently from the learning algorithm. We constructed an exploration policy $\mu'$ by adding noise sampled from a noise process $\mathcal{N}$ to our actor policy:

$\mu'(s_t) = \mu(s_t|\theta^\mu_t) + \mathcal{N}$ (7)

$\mathcal{N}$ can be chosen to suit the environment. As detailed in the supplementary materials we used an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930) to generate temporally correlated exploration for exploration efficiency in physical control problems with inertia (similar use of auto-correlated noise was introduced in (Wawrzyński, 2015)).
# 4 RESULTS
We constructed simulated physical environments of varying levels of difficulty to test our algorithm. This included classic reinforcement learning environments such as cartpole, as well as difficult, high dimensional tasks such as gripper, tasks involving contacts such as puck striking (canada) and locomotion tasks such as cheetah (Wawrzyński, 2009). In all domains but cheetah the actions were torques applied to the actuated joints. These environments were simulated using MuJoCo (Todorov et al., 2012). Figure 1 shows renderings of some of the environments used in the task (the supplementary contains details of the environments and you can view some of the learned policies at https://goo.gl/J4PIAz).
# Algorithm 1 DDPG algorithm
Randomly initialize critic network $Q(s, a|\theta^Q)$ and actor $\mu(s|\theta^\mu)$ with weights $\theta^Q$ and $\theta^\mu$.
Initialize target networks $Q'$ and $\mu'$ with weights $\theta^{Q'} \leftarrow \theta^Q$, $\theta^{\mu'} \leftarrow \theta^\mu$
Initialize replay buffer $R$
for episode = 1, M do
    Initialize a random process $\mathcal{N}$ for action exploration
    Receive initial observation state $s_1$
    for t = 1, T do
        Select action $a_t = \mu(s_t|\theta^\mu) + \mathcal{N}_t$ according to the current policy and exploration noise
        Execute action $a_t$ and observe reward $r_t$ and observe new state $s_{t+1}$
        Store transition $(s_t, a_t, r_t, s_{t+1})$ in $R$
        Sample a random minibatch of $N$ transitions $(s_i, a_i, r_i, s_{i+1})$ from $R$
        Set $y_i = r_i + \gamma\, Q'(s_{i+1}, \mu'(s_{i+1}|\theta^{\mu'})|\theta^{Q'})$
        Update critic by minimizing the loss: $L = \frac{1}{N} \sum_i (y_i - Q(s_i, a_i|\theta^Q))^2$
        Update the actor policy using the sampled policy gradient:
            $\nabla_{\theta^\mu} J \approx \frac{1}{N} \sum_i \nabla_a Q(s, a|\theta^Q)\big|_{s=s_i, a=\mu(s_i)}\, \nabla_{\theta^\mu} \mu(s|\theta^\mu)\big|_{s_i}$
        Update the target networks:
            $\theta^{Q'} \leftarrow \tau \theta^Q + (1 - \tau)\theta^{Q'}$
            $\theta^{\mu'} \leftarrow \tau \theta^\mu + (1 - \tau)\theta^{\mu'}$
    end for
end for
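A compact sketch of one minibatch update from Algorithm 1, written here in PyTorch for concreteness (our illustration only; the network modules and optimizers are assumed defined elsewhere, and this is not the paper's own implementation):

import torch
import torch.nn.functional as F

def ddpg_update(batch, actor, critic, actor_targ, critic_targ,
                actor_opt, critic_opt, gamma=0.99, tau=0.001):
    s, a, r, s_next = batch          # float tensors sampled from the replay buffer

    # Critic: regress Q(s_i, a_i) onto y_i = r_i + gamma * Q'(s_{i+1}, mu'(s_{i+1})).
    with torch.no_grad():
        y = r + gamma * critic_targ(s_next, actor_targ(s_next))
    critic_loss = F.mse_loss(critic(s, a), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: the sampled policy gradient is the gradient of Q(s, mu(s)),
    # so ascending it is the same as minimizing -Q(s, mu(s)).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft target updates: theta' <- tau * theta + (1 - tau) * theta'.
    for targ, src in ((critic_targ, critic), (actor_targ, actor)):
        for p_t, p in zip(targ.parameters(), src.parameters()):
            p_t.data.mul_(1.0 - tau).add_(tau * p.data)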
In all tasks, we ran experiments using both a low-dimensional state description (such as joint angles and positions) and high-dimensional renderings of the environment. As in DQN (Mnih et al., 2013; 2015), in order to make the problems approximately fully observable in the high dimensional environment we used action repeats. For each timestep of the agent, we step the simulation 3 timesteps, repeating the agent's action and rendering each time. Thus the observation reported to the agent contains 9 feature maps (the RGB of each of the 3 renderings) which allows the agent to infer velocities using the differences between frames. The frames were downsampled to 64x64 pixels and the 8-bit RGB values were converted to floating point scaled to [0, 1]. See supplementary information for details of our network structure and hyperparameters.
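A sketch of the action-repeat/observation pipeline (our illustration; env.step and env.render stand in for the simulator's actual interface and are assumptions):

import numpy as np

def repeat_and_stack(env, action, k=3, size=64):
    # Step the simulator k = 3 times with the same action; stack the three RGB
    # renderings into 9 feature maps, downsampled to 64x64 and scaled to [0, 1].
    frames, total_r = [], 0.0
    for _ in range(k):
        total_r += env.step(action)
        rgb = env.render(width=size, height=size)      # uint8, shape (64, 64, 3)
        frames.append(rgb.astype(np.float32) / 255.0)
    return np.concatenate(frames, axis=-1), total_r    # obs shape (64, 64, 9)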
We evaluated the policy periodically during training by testing it without exploration noise. Figure 2 shows the performance curve for a selection of environments. We also report results with components of our algorithm (i.e. the target network or batch normalization) removed. In order to perform well across all tasks, both of these additions are necessary. In particular, learning without a target network, as in the original work with DPG, is very poor in many environments.
Surprisingly, in some simpler tasks, learning policies from pixels is just as fast as learning using the low-dimensional state descriptor. This may be due to the action repeats making the problem simpler. It may also be that the convolutional layers provide an easily separable representation of state space, which is straightforward for the higher layers to learn on quickly.
Table 1 summarizes DDPG's performance across all of the environments (results are averaged over 5 replicas). We normalized the scores using two baselines. The first baseline is the mean return from a naive policy which samples actions from a uniform distribution over the valid action space. The second baseline is iLQG (Todorov & Li, 2005), a planning based solver with full access to the
underlying physical model and its derivatives. We normalize scores so that the naive policy has a mean score of 0 and iLQG has a mean score of 1. DDPG is able to learn good policies on many of the tasks, and in many cases some of the replicas learn policies which are superior to those found by iLQG, even when learning directly from pixels.
It can be challenging to learn accurate value estimates. Q-learning, for example, is prone to overestimating values (Hasselt, 2010). We examined DDPG's estimates empirically by comparing the values estimated by Q after training with the true returns seen on test episodes. Figure 3 shows that in simple tasks DDPG estimates returns accurately without systematic biases. For harder tasks the Q estimates are worse, but DDPG is still able to learn good policies.
To demonstrate the generality of our approach we also include Torcs, a racing game where the actions are acceleration, braking and steering. Torcs has previously been used as a testbed in other policy learning approaches (Koutník et al., 2014b). We used an identical network architecture and learning algorithm hyper-parameters to the physics tasks but altered the noise process for exploration because of the very different time scales involved. On both low-dimensional and from pixels, some replicas were able to learn reasonable policies that are able to complete a circuit around the track though other replicas failed to learn a sensible policy.
Figure 1: Example screenshots of a sample of environments we attempt to solve with DDPG. In order from the left: the cartpole swing-up task, a reaching task, a grasp and move task, a puck-hitting task, a monoped balancing task, two locomotion tasks and Torcs (driving simulator). We tackle all tasks using both low-dimensional feature vector and high-dimensional pixel inputs. Detailed descriptions of the environments are provided in the supplementary. Movies of some of the learned policies are available at https://goo.gl/J4PIAz.
[Figure 2 panels: Cart, Pendulum Swing-up, Cartpole Swing-up, Fixed Reacher, Monoped Balancing, Gripper, Blockworld, Puck Shooting, Cheetah, Moving Gripper; x-axis: million steps.]
Figure 2: Performance curves for a selection of domains using variants of DPG: original DPG algorithm (minibatch NFQCA) with batch normalization (light grey), with target network (dark grey), with target networks and batch normalization (green), with target networks from pixel-only inputs (blue). Target networks are crucial.
# 5 RELATED WORK
The original DPG paper evaluated the algorithm with toy problems using tile-coding and linear function approximators. It demonstrated data efficiency advantages for off-policy DPG over both on- and off-policy stochastic actor critic. It also solved one more challenging task in which a multi-jointed octopus arm had to strike a target with any part of the limb. However, that paper did not demonstrate scaling the approach to large, high-dimensional observation spaces as we have here.

It has often been assumed that standard policy search methods such as those explored in the present work are simply too fragile to scale to difficult problems (Levine et al., 2015). Standard policy search
[Figure 3 panels: Pendulum, Cartpole, Cheetah; axes: estimated Q values versus observed returns.]
Figure 3: Density plot showing estimated Q values versus observed returns sampled from test episodes on 5 replicas. In simple domains such as pendulum and cartpole the Q values are quite accurate. In more complex tasks, the Q estimates are less accurate, but can still be used to learn competent policies. Dotted line indicates unity, units are arbitrary.
Table 1: Performance after training across all environments for at most 2.5 million steps. We report both the average and best observed (across 5 runs). All scores, except Torcs, are normalized so that a random agent receives 0 and a planning algorithm 1; for Torcs we present the raw reward score. We include results from the DDPG algorithm in the low-dimensional (lowd) version of the environment and high-dimensional (pix). For comparison we also include results from the original DPG algorithm with a replay buffer and batch normalization (cntrl).
[Table 1 body: per-environment scores under columns Rav,lowd, Rbest,lowd, Rbest,pix, Rav,cntrl, Rbest,cntrl; the environment names and value alignment were lost in extraction.]
is thought to be difficult because it deals simultaneously with complex environmental dynamics and a complex policy. Indeed, most past work with actor-critic and policy optimization approaches have had difficulty scaling up to more challenging problems (Deisenroth et al., 2013). Typically, this is due to instability in learning wherein progress on a problem is either destroyed by subsequent learning updates, or else learning is too slow to be practical.

Recent work with model-free policy search has demonstrated that it may not be as fragile as previously supposed. Wawrzyński (2009); Wawrzyński & Tanwani (2013) have trained stochastic policies
in an actor-critic framework with a replay buffer. Concurrent with our work, Balduzzi & Ghifary (2015) extended the DPG algorithm with a "deviator" network which explicitly learns $\partial Q/\partial a$. However, they only train on two low-dimensional domains. Heess et al. (2015) introduced SVG(0) which also uses a Q-critic but learns a stochastic policy. DPG can be considered the deterministic limit of SVG(0). The techniques we described here for scaling DPG are also applicable to stochastic policies by using the reparametrization trick (Heess et al., 2015; Schulman et al., 2015a).
Another approach, trust region policy optimization (TRPO) (Schulman et al., 2015b), directly constructs stochastic neural network policies without decomposing problems into optimal control and supervised phases. This method produces near monotonic improvements in return by making carefully chosen updates to the policy parameters, constraining updates to prevent the new policy from diverging too far from the existing policy. This approach does not require learning an action-value function, and (perhaps as a result) appears to be significantly less data efficient.

To combat the challenges of the actor-critic approach, recent work with guided policy search (GPS) algorithms (e.g., (Levine et al., 2015)) decomposes the problem into three phases that are relatively easy to solve: first, it uses full-state observations to create locally-linear approximations of the dynamics around one or more nominal trajectories, and then uses optimal control to find the locally-linear optimal policy along these trajectories; finally, it uses supervised learning to train a complex, non-linear policy (e.g. a deep neural network) to reproduce the state-to-action mapping of the optimized trajectories.

This approach has several benefits, including data efficiency, and has been applied successfully to a variety of real-world robotic manipulation tasks using vision. In these tasks GPS uses a similar convolutional policy network to ours with 2 notable exceptions: 1. it uses a spatial softmax to reduce the dimensionality of visual features into a single (x, y) coordinate for each feature map, and 2. the policy also receives direct low-dimensional state information about the configuration of the robot at the first fully connected layer in the network. Both likely increase the power and data efficiency of the algorithm and could easily be exploited within the DDPG framework.
PILCO (Deisenroth & Rasmussen, 2011) uses Gaussian processes to learn a non-parametric, probabilistic model of the dynamics. Using this learned model, PILCO calculates analytic policy gradients and achieves impressive data efficiency in a number of control problems. However, due to the high computational demand, PILCO is "impractical for high-dimensional problems" (Wahlström et al., 2015). It seems that deep function approximators are the most promising approach for scaling reinforcement learning to large, high-dimensional domains.

Wahlström et al. (2015) used a deep dynamical model network along with model predictive control to solve the pendulum swing-up task from pixel input. They trained a differentiable forward model and encoded the goal state into the learned latent space. They use model-predictive control over the learned model to find a policy for reaching the target. However, this approach is only applicable to domains with goal states that can be demonstrated to the algorithm.

Recently, evolutionary approaches have been used to learn competitive policies for Torcs from pixels using compressed weight parametrizations (Koutník et al., 2014a) or unsupervised learning (Koutník et al., 2014b) to reduce the dimensionality of the evolved weights. It is unclear how well these approaches generalize to other problems.
# 6 CONCLUSION
The work combines insights from recent advances in deep learning and reinforcement learning, resulting in an algorithm that robustly solves challenging problems across a variety of domains with continuous action spaces, even when using raw pixels for observations. As with most reinforcement learning algorithms, the use of non-linear function approximators nullifies any convergence guarantees; however, our experimental results demonstrate stable learning without the need for any modifications between environments.

Interestingly, all of our experiments used substantially fewer steps of experience than was used by DQN learning to find solutions in the Atari domain. Nearly all of the problems we looked at were solved within 2.5 million steps of experience (and usually far fewer), a factor of 20 fewer steps than
DQN requires for good Atari solutions. This suggests that, given more simulation time, DDPG may solve even more difficult problems than those considered here.

A few limitations to our approach remain. Most notably, as with most model-free reinforcement approaches, DDPG requires a large number of training episodes to find solutions. However, we believe that a robust model-free approach may be an important component of larger systems which may attack these limitations (Gläscher et al., 2010).
# REFERENCES
Balduzzi, David and Ghifary, Muhammad. Compatible value gradients for reinforcement learning of continuous deep policies. arXiv preprint arXiv:1509.03005, 2015.
Deisenroth, Marc and Rasmussen, Carl E. Pilco: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11), pp. 465–472, 2011.

Deisenroth, Marc Peter, Neumann, Gerhard, Peters, Jan, et al. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1–142, 2013.

Gläscher, Jan, Daw, Nathaniel, Dayan, Peter, and O'Doherty, John P. States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. Neuron, 66(4):585–595, 2010.

Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. JMLR W&CP Volume, volume 15, pp. 315–323, 2011.

Hafner, Roland and Riedmiller, Martin. Reinforcement learning in feedback control. Machine learning, 84(1-2):137–169, 2011.

Hasselt, Hado V. Double q-learning. In Advances in Neural Information Processing Systems, pp. 2613–2621, 2010.
Heess, N., Hunt, J. J, Lillicrap, T. P, and Silver, D. Memory-based control with recurrent neural networks. NIPS Deep Reinforcement Learning Workshop (arXiv:1512.04455), 2015.
Heess, Nicolas, Wayne, Gregory, Silver, David, Lillicrap, Tim, Erez, Tom, and Tassa, Yuval. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pp. 2926–2934, 2015.
Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Koutník, Jan, Schmidhuber, Jürgen, and Gomez, Faustino. Evolving deep unsupervised convolutional networks for vision-based reinforcement learning. In Proceedings of the 2014 conference on Genetic and evolutionary computation, pp. 541–548. ACM, 2014a.

Koutník, Jan, Schmidhuber, Jürgen, and Gomez, Faustino. Online evolution of deep convolutional network for vision-based reinforcement learning. In From Animals to Animats 13, pp. 260–269. Springer, 2014b.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis, Wierstra, Daan, and Riedmiller, Martin. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Prokhorov, Danil V, Wunsch, Donald C, et al. Adaptive critic designs. Neural Networks, IEEE Transactions on, 8(5):997–1007, 1997.

Schulman, John, Heess, Nicolas, Weber, Theophane, and Abbeel, Pieter. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pp. 3510–3522, 2015a.
Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I, and Abbeel, Pieter. Trust region policy optimization. arXiv preprint arXiv:1502.05477, 2015b.
Silver, David, Lever, Guy, Heess, Nicolas, Degris, Thomas, Wierstra, Daan, and Riedmiller, Martin. Deterministic policy gradient algorithms. In ICML, 2014.
Tassa, Yuval, Erez, Tom, and Todorov, Emanuel. Synthesis and stabilization of complex behaviors through online trajectory optimization. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 4906–4913. IEEE, 2012.

Todorov, Emanuel and Li, Weiwei. A generalized iterative lqg method for locally-optimal feedback control of constrained nonlinear stochastic systems. In American Control Conference, 2005. Proceedings of the 2005, pp. 300–306. IEEE, 2005.

Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026–5033. IEEE, 2012.
Uhlenbeck, George E and Ornstein, Leonard S. On the theory of the brownian motion. Physical review, 36(5):823, 1930.
Wahlström, Niklas, Schön, Thomas B, and Deisenroth, Marc Peter. From pixels to torques: Policy learning with deep dynamical models. arXiv preprint arXiv:1502.02251, 2015.

Watkins, Christopher JCH and Dayan, Peter. Q-learning. Machine learning, 8(3-4):279–292, 1992.

Wawrzyński, Paweł. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural Networks, 22(10):1484–1497, 2009.

Wawrzyński, Paweł. Control policy with autocorrelated noise in reinforcement learning for robotics. International Journal of Machine Learning and Computing, 5:91–95, 2015.

Wawrzyński, Paweł and Tanwani, Ajay Kumar. Autonomous reinforcement learning with experience replay. Neural Networks, 41:156–167, 2013.
# Supplementary Information: Continuous control with deep reinforcement learning
# 7 EXPERIMENT DETAILS
We used Adam (Kingma & Ba, 2014) for learning the neural network parameters with a learning rate of 10⁻⁴ and 10⁻³ for the actor and critic respectively. For Q we included L2 weight decay of 10⁻² and used a discount factor of γ = 0.99. For the soft target updates we used τ = 0.001. The neural networks used the rectified non-linearity (Glorot et al., 2011) for all hidden layers. The final output layer of the actor was a tanh layer, to bound the actions. The low-dimensional networks had 2 hidden layers with 400 and 300 units respectively (≈ 130,000 parameters). Actions were not included until the 2nd hidden layer of Q. When learning from pixels we used 3 convolutional layers (no pooling) with 32 filters at each layer. This was followed by two fully connected layers with 200 units (≈ 430,000 parameters). The final layer weights and biases of both the actor and critic were initialized from a uniform distribution $[-3 \times 10^{-3}, 3 \times 10^{-3}]$ and $[-3 \times 10^{-4}, 3 \times 10^{-4}]$ for the low dimensional and pixel cases respectively. This was to ensure the initial outputs for the policy and value estimates were near zero. The other layers were initialized from uniform distributions $[-\frac{1}{\sqrt{f}}, \frac{1}{\sqrt{f}}]$ where $f$ is the fan-in of the layer. The actions were not included until the fully-connected layers. We trained with minibatch sizes of 64 for the low dimensional problems and 16 on pixels. We used a replay buffer size of 10⁶.
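The initialization scheme can be sketched as follows (our own illustration of the rule above, for the low-dimensional case):

import numpy as np

def init_weights(fan_in, fan_out, rng, final_layer=False):
    # Final layers: uniform(-3e-3, 3e-3) so initial policy/value outputs are
    # near zero; other layers: uniform(-1/sqrt(f), 1/sqrt(f)) with f the fan-in.
    bound = 3e-3 if final_layer else 1.0 / np.sqrt(fan_in)
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))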
For the exploration noise process we used temporally correlated noise in order to explore well in physical environments that have momentum. We used an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930) with θ = 0.15 and Ï = 0.2. The Ornstein-Uhlenbeck process models the velocity of a Brownian particle with friction, which results in temporally correlated values centered around 0.
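A discretized sketch of this noise process (the discretization step is not specified in the text; a unit-step Euler update is assumed here):

```python
import numpy as np

class OUNoise:
    """Temporally correlated exploration noise centered around 0."""
    def __init__(self, dim, theta=0.15, sigma=0.2, seed=0):
        self.theta, self.sigma = theta, sigma
        self.x = np.zeros(dim)
        self.rng = np.random.default_rng(seed)

    def sample(self):
        # x <- x + theta * (0 - x) + sigma * N(0, I)
        self.x += -self.theta * self.x \
                  + self.sigma * self.rng.standard_normal(self.x.shape)
        return self.x.copy()

noise = OUNoise(dim=6)
samples = [noise.sample() for _ in range(5)]  # successive samples are correlated
```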
# 8 PLANNING ALGORITHM
Our planner is implemented as a model-predictive controller (Tassa et al., 2012): at every time step we run a single iteration of trajectory optimization (using iLQG, (Todorov & Li, 2005)), starting from the true state of the system. Every single trajectory optimization is planned over a horizon between 250ms and 600ms, and this planning horizon recedes as the simulation of the world unfolds, as is the case in model-predictive control.
The iLQG iteration begins with an initial rollout of the previous policy, which determines the nominal trajectory. We use repeated samples of simulated dynamics to approximate a linear expansion of the dynamics around every step of the trajectory, as well as a quadratic expansion of the cost function. We use this sequence of locally-linear-quadratic models to integrate the value function backwards in time along the nominal trajectory. This back-pass results in a putative modification to the action sequence that will decrease the total cost. We perform a derivative-free line-search over this direction in the space of action sequences by integrating the dynamics forward (the forward-pass), and choose the best trajectory. We store this action sequence in order to warm-start the next iLQG iteration, and execute the first action in the simulator. This results in a new state, which is used as the initial state in the next iteration of trajectory optimization.
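In outline, the receding-horizon loop looks like the toy sketch below. The "dynamics" and the plan-improvement step here are deliberately trivial stand-ins for the real simulator and the iLQG back/forward passes; only the structure of the loop (nominal rollout, one optimization iteration, execute first action, warm-start) mirrors the text.

```python
import numpy as np

def step(state, action):               # toy linear point-mass "simulator"
    return state + 0.1 * action

def rollout(state, plan):              # nominal trajectory under current plan
    traj = [state]
    for a in plan:
        traj.append(step(traj[-1], a))
    return traj

def improve_plan(plan, traj, goal):    # placeholder for one iLQG iteration
    return [a + 0.5 * (goal - s) for a, s in zip(plan, traj)]

goal, state = np.array([1.0, -1.0]), np.zeros(2)
plan = [np.zeros(2) for _ in range(10)]          # planning horizon of 10 steps
for _ in range(50):                              # model-predictive control loop
    traj = rollout(state, plan)                  # initial rollout
    plan = improve_plan(plan, traj, goal)        # single optimization iteration
    state = step(state, plan[0])                 # execute only the first action
    plan = plan[1:] + [plan[-1]]                 # warm-start, receding horizon
print(state)                                     # approaches the goal
```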
# 9 ENVIRONMENT DETAILS
# 9.1 TORCS ENVIRONMENT
For the TORCS environment we used a reward function which provides a positive reward at each step for the velocity of the car projected along the track direction and a penalty of −1 for collisions. Episodes were terminated if progress was not made along the track after 500 frames.
# 9.2 MUJOCO ENVIRONMENTS
For physical control tasks we used reward functions which provide feedback at every step. In all tasks, the reward contained a small action cost. For all tasks that have a static goal state (e.g. pendulum swingup and reaching) we provide a smoothly varying reward based on distance to a goal state, and in some cases an additional positive reward when within a small radius of the target state. For grasping and manipulation tasks we used a reward with a term which encourages movement towards the payload and a second component which encourages moving the payload to the target. In locomotion tasks we reward forward action and penalize hard impacts to encourage smooth rather than hopping gaits (Schulman et al., 2015b). In addition, we used a negative reward and early termination for falls, which were determined by simple thresholds on the height and torso angle (in the case of walker2d).
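An illustrative shaping of this kind is sketched below; the coefficients, radii, and thresholds are invented for the example, since the text does not give exact values.

```python
import numpy as np

def shaped_reward(pos, action, goal, torso_height, torso_angle):
    action_cost = 0.01 * float(np.square(action).sum())   # small action cost
    dist = float(np.linalg.norm(pos - goal))
    reward = -dist - action_cost                          # smooth distance term
    if dist < 0.05:                                       # bonus near the target
        reward += 1.0
    fell = torso_height < 0.8 or abs(torso_angle) > 1.0   # simple thresholds
    if fell:
        reward -= 10.0                                    # plus early termination
    return reward, fell

r, done = shaped_reward(np.array([0.2, 0.0]), np.zeros(3),
                        np.array([0.0, 0.0]), torso_height=1.2, torso_angle=0.1)
```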
Table 2 states the dimensionality of the problems and below is a summary of all the physics environments.
task name                 dim(s)  dim(a)  dim(o)
blockworld1                 18      5       43
blockworld3da               31      9      102
canada                      22      7       62
canada2d                    14      3       29
cart                         2      1        3
cartpole                     4      1       14
cartpoleBalance              4      1       14
cartpoleParallelDouble       6      1       16
cartpoleParallelTriple       8      1       23
cartpoleSerialDouble         6      1       14
cartpoleSerialTriple         8      1       23
cheetah                     18      6       17
fixedReacher                10      3       23
fixedReacherDouble           8      2       18
fixedReacherSingle           6      1       13
gripper                     18      5       43
gripperRandom               18      5       43
hardCheetah                 18      6       17
hardCheetahNice             18      6       17
hopper                      14      4       14
hyq                         37     12       37
hyqKick                     37     12       37
movingGripper               22      7       49
movingGripperRandom         22      7       49
pendulum                     2      1        3
reacher                     10      3       23
reacher3daFixedTarget       20      7       61
reacher3daRandomTarget      20      7       61
reacherDouble                6      1       13
reacherObstacle             18      5       38
reacherSingle                6      1       13
walker2d                    18      6       41
Table 2: Dimensionality of the MuJoCo tasks: the dimensionality of the underlying physics model dim(s), number of action dimensions dim(a) and observation dimensions dim(o).
task name  Brief Description

blockworld1  Agent is required to use an arm with gripper constrained to the 2D plane to grab a falling block and lift it against gravity to a fixed target position.
blockworld3da  Agent is required to use a human-like arm with 7-DOF and a simple gripper to grab a block and lift it against gravity to a fixed target position.

canada  Agent is required to use a 7-DOF arm with hockey-stick like appendage to hit a ball to a target.

canada2d  Agent is required to use an arm with hockey-stick like appendage to hit a ball initialized to a random start location to a random target location.

cart  Agent must move a simple mass to rest at 0. The mass begins each trial in random positions and with random velocities.

cartpole  The classic cart-pole swing-up task. Agent must balance a pole attached to a cart by applying forces to the cart alone. The pole starts each episode hanging upside-down.

cartpoleBalance  The classic cart-pole balance task. Agent must balance a pole attached to a cart by applying forces to the cart alone. The pole starts in the upright position at the beginning of each episode.

cartpoleParallelDouble  Variant on the classic cart-pole. Two poles, both attached to the cart, should be kept upright as much as possible.

cartpoleSerialDouble  Variant on the classic cart-pole. Two poles, one attached to the cart and the second attached to the end of the first, should be kept upright as much as possible.

cartpoleSerialTriple  Variant on the classic cart-pole. Three poles, one attached to the cart, the second attached to the end of the first, and the third attached to the end of the second, should be kept upright as much as possible.

cheetah  The agent should move forward as quickly as possible with a cheetah-like body that is constrained to the plane. This environment is based very closely on the one introduced by Wawrzyński (2009); Wawrzyński & Tanwani (2013).

fixedReacher  Agent is required to move a 3-DOF arm to a fixed target position.

fixedReacherDouble  Agent is required to move a 2-DOF arm to a fixed target position.

fixedReacherSingle  Agent is required to move a simple 1-DOF arm to a fixed target position.

gripper  Agent must use an arm with gripper appendage to grasp an object and maneuver the object to a fixed target.

gripperRandom  The same task as gripper except that the arm, object and target position are initialized in random locations.

hardCheetah  The agent should move forward as quickly as possible with a cheetah-like body that is constrained to the plane. This environment is based very closely on the one introduced by Wawrzyński (2009); Wawrzyński & Tanwani (2013), but has been made much more difficult by removing the stabilizing joint stiffness from the model.

hopper  Agent must balance a multiple degree of freedom monoped to keep it from falling.

hyq  Agent is required to keep a quadruped model based on the hyq robot from falling.
movingGripper  Agent must use an arm with gripper attached to a moveable platform to grasp an object and move it to a fixed target.

movingGripperRandom  The same as the movingGripper environment except that the object position, target position, and arm state are initialized randomly.

pendulum  The classic pendulum swing-up problem. The pendulum should be brought to the upright position and balanced. Torque limits prevent the agent from swinging the pendulum up directly.

reacher3daFixedTarget  Agent is required to move a 7-DOF human-like arm to a fixed target position.

reacher3daRandomTarget  Agent is required to move a 7-DOF human-like arm from random starting locations to random target positions.

reacher  Agent is required to move a 3-DOF arm from random starting locations to random target positions.

reacherSingle  Agent is required to move a simple 1-DOF arm from random starting locations to random target positions.

reacherObstacle  Agent is required to move a 5-DOF arm around an obstacle to a randomized target position.

walker2d  Agent should move forward as quickly as possible with a bipedal walker constrained to the plane without falling down or pitching the torso too far forward or backward.
| {
"id": "1502.03167"
} |
1508.06615 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on
character-level inputs. Predictions are still made at the word-level. Our model
employs a convolutional neural network (CNN) and a highway network over
characters, whose output is given to a long short-term memory (LSTM) recurrent
neural network language model (RNN-LM). On the English Penn Treebank the model
is on par with the existing state-of-the-art despite having 60% fewer
parameters. On languages with rich morphology (Arabic, Czech, French, German,
Spanish, Russian), the model outperforms word-level/morpheme-level LSTM
baselines, again with fewer parameters. The results suggest that on many
languages, character inputs are sufficient for language modeling. Analysis of
word representations obtained from the character composition part of the model
reveals that the model is able to encode, from characters only, both semantic
and orthographic information. | http://arxiv.org/pdf/1508.06615 | Yoon Kim, Yacine Jernite, David Sontag, Alexander M. Rush | cs.CL, cs.NE, stat.ML | AAAI 2016 | null | cs.CL | 20150826 | 20151201 |
# Character-Aware Neural Language Models
# Yoon Kim∗

# Yacine Jernite†

# David Sontag†

# Alexander M. Rush∗

# ∗School of Engineering and Applied Sciences, Harvard University {yoonkim,srush}@seas.harvard.edu

# †Courant Institute of Mathematical Sciences, New York University {jernite,dsontag}@cs.nyu.edu
# Abstract
We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.
# Introduction

Language modeling is a fundamental task in artificial intelligence and natural language processing (NLP), with applications in speech recognition, text generation, and machine translation. A language model is formalized as a probability distribution over a sequence of strings (words), and traditional methods usually involve making an n-th order Markov assumption and estimating n-gram probabilities via counting and subsequent smoothing (Chen and Goodman 1998). The count-based models are simple to train, but probabilities of rare n-grams can be poorly estimated due to data sparsity (despite smoothing techniques).

Neural Language Models (NLM) address the n-gram data sparsity issue through parameterization of words as vectors (word embeddings) and using them as inputs to a neural network (Bengio, Ducharme, and Vincent 2003; Mikolov et al. 2010). The parameters are learned as part of the training process. Word embeddings obtained through NLMs exhibit the property whereby semantically close words are likewise close in the induced vector space (as is the case with non-neural techniques such as Latent Semantic Analysis (Deerwester, Dumais, and Harshman 1990)).

While NLMs have been shown to outperform count-based n-gram language models (Mikolov et al. 2011), they are blind to subword information (e.g. morphemes). For example, they do not know, a priori, that eventful, eventfully, uneventful, and uneventfully should have structurally related embeddings in the vector space. Embeddings of rare words can thus be poorly estimated, leading to high perplexities for rare words (and words surrounding them). This is especially problematic in morphologically rich languages with long-tailed frequency distributions or domains with dynamic vocabularies (e.g. social media).

In this work, we propose a language model that leverages subword information through a character-level convolutional neural network (CNN), whose output is used as an input to a recurrent neural network language model (RNN-LM). Unlike previous works that utilize subword information via morphemes (Botha and Blunsom 2014; Luong, Socher, and Manning 2013), our model does not require morphological tagging as a pre-processing step. And, unlike the recent line of work which combines input word embeddings with features from a character-level model (dos Santos and Zadrozny 2014; dos Santos and Guimaraes 2015), our model does not utilize word embeddings at all in the input layer. Given that most of the parameters in NLMs are from the word embeddings, the proposed model has significantly fewer parameters than previous NLMs, making it attractive for applications where model size may be an issue (e.g. cell phones).
To summarize, our contributions are as follows:
• on English, we achieve results on par with the existing state-of-the-art on the Penn Treebank (PTB), despite having approximately 60% fewer parameters, and

• on morphologically rich languages (Arabic, Czech, French, German, Spanish, and Russian), our model outperforms various baselines (Kneser-Ney, word-level/morpheme-level LSTM), again with fewer parameters.
We have released all the code for the models described in this paper.1
1https://github.com/yoonkim/lstm-char-cnn
# Model

The architecture of our model, shown in Figure 1, is straightforward. Whereas a conventional NLM takes word embeddings as inputs, our model instead takes the output from a single-layer character-level convolutional neural network with max-over-time pooling.

For notation, we denote vectors with bold lower-case (e.g. x_t, b), matrices with bold upper-case (e.g. W, U^o), scalars with italic lower-case (e.g. x, b), and sets with cursive upper-case (e.g. V, C) letters. For notational convenience we assume that words and characters have already been converted into indices.

# Recurrent Neural Network

A recurrent neural network (RNN) is a type of neural network architecture particularly suited for modeling sequential phenomena. At each time step t, an RNN takes the input vector x_t ∈ R^n and the hidden state vector h_{t-1} ∈ R^m and produces the next hidden state h_t by applying the following recursive operation:

h_t = f(W x_t + U h_{t-1} + b)    (1)

Here W ∈ R^{m×n}, U ∈ R^{m×m}, b ∈ R^m are parameters of an affine transformation and f is an element-wise nonlinearity. In theory the RNN can summarize all historical information up to time t with the hidden state h_t. In practice however, learning long-range dependencies with a vanilla RNN is difficult due to vanishing/exploding gradients (Bengio, Simard, and Frasconi 1994), which occurs as a result of the Jacobian's multiplicativity with respect to time.
Long short-term memory (LSTM) (Hochreiter and Schmidhuber 1997) addresses the problem of learning long range dependencies by augmenting the RNN with a memory cell vector c_t ∈ R^n at each time step. Concretely, one step of an LSTM takes as input x_t, h_{t-1}, c_{t-1} and produces h_t, c_t via the following intermediate calculations:

i_t = σ(W^i x_t + U^i h_{t-1} + b^i)
f_t = σ(W^f x_t + U^f h_{t-1} + b^f)
o_t = σ(W^o x_t + U^o h_{t-1} + b^o)
g_t = tanh(W^g x_t + U^g h_{t-1} + b^g)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t
h_t = o_t ⊙ tanh(c_t)    (2)

Here σ(·) and tanh(·) are the element-wise sigmoid and hyperbolic tangent functions, ⊙ is the element-wise multiplication operator, and i_t, f_t, o_t are referred to as input, forget, and output gates. At t = 1, h_0 and c_0 are initialized to zero vectors. Parameters of the LSTM are W^j, U^j, b^j for j ∈ {i, f, o, g}.
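A minimal numpy rendering of Equation (2), with toy dimensions; this illustrates the update itself, not the trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, P):
    # P holds W^j, U^j, b^j for j in {i, f, o, g}.
    i = sigmoid(P["Wi"] @ x + P["Ui"] @ h_prev + P["bi"])
    f = sigmoid(P["Wf"] @ x + P["Uf"] @ h_prev + P["bf"])
    o = sigmoid(P["Wo"] @ x + P["Uo"] @ h_prev + P["bo"])
    g = np.tanh(P["Wg"] @ x + P["Ug"] @ h_prev + P["bg"])
    c = f * c_prev + i * g          # element-wise (Hadamard) products
    h = o * np.tanh(c)
    return h, c

n, m = 4, 3                         # toy input and hidden sizes
rng = np.random.default_rng(0)
P = {}
for j in "ifog":
    P["W" + j] = rng.normal(size=(m, n))
    P["U" + j] = rng.normal(size=(m, m))
    P["b" + j] = np.zeros(m)
h, c = lstm_step(rng.normal(size=n), np.zeros(m), np.zeros(m), P)  # t = 1
```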
Memory cells in the LSTM are additive with respect to time, alleviating the gradient vanishing problem. Gradient exploding is still an issue, though in practice simple optimization strategies (such as gradient clipping) work well. LSTMs have been shown to outperform vanilla RNNs on many tasks, including on language modeling (Sundermeyer, Schluter, and Ney 2012). It is easy to extend the RNN/LSTM to two (or more) layers by having another network whose
Figure 1: Architecture of our language model applied to an example sentence. Best viewed in color. Here the model takes absurdity as the current input and combines it with the history (as represented by the hidden state) to predict the next word, is. First layer performs a lookup of character embeddings (of dimension four) and stacks them to form the matrix C^k. Then convolution operations are applied between C^k and multiple filter matrices. Note that in the above example we have twelve filters: three filters of width two (blue), four filters of width three (yellow), and five filters of width four (red). A max-over-time pooling operation is applied to obtain a fixed-dimensional representation of the word, which is given to the highway network. The highway network's output is used as the input to a multi-layer LSTM. Finally, an affine transformation followed by a softmax is applied over the hidden representation of the LSTM to obtain the distribution over the next word. Cross entropy loss between the (predicted) distribution over next word and the actual next word is minimized. Element-wise addition, multiplication, and sigmoid operators are depicted in circles, and affine transformations (plus nonlinearities where appropriate) are represented by solid arrows.
input at t is h_t (from the first network). Indeed, having multiple layers is often crucial for obtaining competitive performance on various tasks (Pascanu et al. 2013).

# Recurrent Neural Network Language Model

Let V be the fixed size vocabulary of words. A language model specifies a distribution over w_{t+1} (whose support is V) given the historical sequence w_{1:t} = [w_1, . . . , w_t]. A recurrent neural network language model (RNN-LM) does this by applying an affine transformation to the hidden layer followed by a softmax:

Pr(w_{t+1} = j | w_{1:t}) = exp(h_t · p^j + q^j) / Σ_{j'∈V} exp(h_t · p^{j'} + q^{j'})    (3)

where p^j is the j-th column of P ∈ R^{m×|V|} (also referred to as the output embedding),2 and q^j is a bias term. Similarly, for a conventional RNN-LM which usually takes words as inputs, if w_t = k, then the input to the RNN-LM at t is the input embedding x^k, the k-th column of the embedding matrix X ∈ R^{n×|V|}. Our model simply replaces the input embeddings X with the output from a character-level convolutional neural network, to be described below.
If we denote w_{1:T} = [w_1, · · · , w_T] to be the sequence of words in the training corpus, training involves minimizing the negative log-likelihood (NLL) of the sequence

NLL = − Σ_{t=1}^{T} log Pr(w_t | w_{1:t-1})    (4)
which is typically done by truncated backpropagation through time (Werbos 1990; Graves 2013).
# Character-level Convolutional Neural Network

In our model, the input at time t is an output from a character-level convolutional neural network (CharCNN), which we describe in this section. CNNs (LeCun et al. 1989) have achieved state-of-the-art results on computer vision (Krizhevsky, Sutskever, and Hinton 2012) and have also been shown to be effective for various NLP tasks (Collobert et al. 2011). Architectures employed for NLP applications differ in that they typically involve temporal rather than spatial convolutions.

Let C be the vocabulary of characters, d be the dimensionality of character embeddings,3 and Q ∈ R^{d×|C|} be the matrix of character embeddings. Suppose that word k ∈ V is made up of a sequence of characters [c_1, . . . , c_l], where l is the length of word k. Then the character-level representation of k is given by the matrix C^k ∈ R^{d×l}, where the j-th column corresponds to the character embedding for c_j (i.e. the c_j-th column of Q).4

We apply a narrow convolution between C^k and a filter (or kernel) H ∈ R^{d×w} of width w, after which we add a bias and apply a nonlinearity to obtain a feature map f^k ∈ R^{l−w+1}. Specifically, the i-th element of f^k is given by:

f^k[i] = tanh(⟨C^k[∗, i : i + w − 1], H⟩ + b)    (5)
2In our work, predictions are at the word-level, and hence we still utilize word embeddings in the output layer.

3Given that |C| is usually small, some authors work with one-hot representations of characters. However we found that using lower dimensional representations of characters (i.e. d < |C|) performed slightly better.

4Two technical details warrant mention here: (1) we append start-of-word and end-of-word characters to each word to better represent prefixes and suffixes and hence C^k actually has l + 2 columns; (2) for batch processing, we zero-pad C^k so that the number of columns is constant (equal to the max word length) for all words in V.
where C^k[∗, i : i + w − 1] is the i-to-(i + w − 1)-th column of C^k and ⟨A, B⟩ = Tr(AB^T) is the Frobenius inner product. Finally, we take the max-over-time

y^k = max_i f^k[i]    (6)

as the feature corresponding to the filter H (when applied to word k). The idea is to capture the most important feature, the one with the highest value, for a given filter. A filter is essentially picking out a character n-gram, where the size of the n-gram corresponds to the filter width.

We have described the process by which one feature is obtained from one filter matrix. Our CharCNN uses multiple filters of varying widths to obtain the feature vector for k. So if we have a total of h filters H_1, . . . , H_h, then y^k = [y^k_1, . . . , y^k_h] is the input representation of k. For many NLP applications h is typically chosen to be in [100, 1000].
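The convolution and pooling of Equations (5) and (6) in numpy, with toy sizes; the Frobenius inner product ⟨A, B⟩ is computed as the element-wise product sum.

```python
import numpy as np

def char_cnn_feature(C_k, H, b=0.0):
    # Narrow convolution (Eq. 5) followed by max-over-time pooling (Eq. 6).
    d, l = C_k.shape
    _, w = H.shape
    f = np.array([np.tanh(np.sum(C_k[:, i:i + w] * H) + b)  # Frobenius inner product
                  for i in range(l - w + 1)])
    return f.max()                                          # one feature per filter

rng = np.random.default_rng(0)
d, l = 15, 8                                 # char embedding dim, word length
C_k = rng.normal(size=(d, l))                # character-level word matrix
filters = [rng.normal(size=(d, w)) for w in (2, 3, 4)]
y_k = np.array([char_cnn_feature(C_k, H) for H in filters])  # input rep of word k
```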
# Highway Network

We could simply replace x^k (the word embedding) with y^k at each t in the RNN-LM, and as we show later, this simple model performs well on its own (Table 7). One could also have a multilayer perceptron (MLP) over y^k to model interactions between the character n-grams picked up by the filters, but we found that this resulted in worse performance. Instead we obtained improvements by running y^k through a highway network, recently proposed by Srivastava et al. (2015). Whereas one layer of an MLP applies an affine transformation followed by a nonlinearity to obtain a new set of features,

z = g(W y + b)    (7)

one layer of a highway network does the following:

z = t ⊙ g(W_H y + b_H) + (1 − t) ⊙ y    (8)

where g is a nonlinearity, t = σ(W_T y + b_T) is called the transform gate, and (1 − t) is called the carry gate. Similar to the memory cells in LSTM networks, highway layers allow for training of deep networks by adaptively carrying some dimensions of the input directly to the output.5 By construction the dimensions of y and z have to match, and hence W_T and W_H are square matrices.
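A single highway layer as in Equation (8), with ReLU as g and b_T initialized near −2 per footnote 5; dimensions are toy values for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(y, W_H, b_H, W_T, b_T):
    t = sigmoid(W_T @ y + b_T)              # transform gate
    g = np.maximum(0.0, W_H @ y + b_H)      # ReLU nonlinearity
    return t * g + (1.0 - t) * y            # (1 - t) is the carry gate

dim = 8                                     # dims of y and z must match
rng = np.random.default_rng(0)
z = highway_layer(rng.normal(size=dim),
                  rng.normal(size=(dim, dim)), np.zeros(dim),
                  rng.normal(size=(dim, dim)), np.full(dim, -2.0))
```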
# Experimental Setup

As is standard in language modeling, we use perplexity (PPL) to evaluate the performance of our models. Perplexity of a model over a sequence [w_1, . . . , w_T] is given by

PPL = exp(NLL / T)    (9)

where NLL is calculated over the test set. We test the model on corpora of varying languages and sizes (statistics available in Table 1).
We conduct hyperparameter search, model introspection, and ablation studies on the English Penn Treebank (PTB) (Marcus, Santorini, and Marcinkiewicz 1993), utilizing the
5Srivastava et al. (2015) recommend initializing b_T to a negative value, in order to militate the initial behavior towards carry. We initialized b_T to a small interval around −2.
                     DATA-S               DATA-L
                |V|    |C|    T      |V|    |C|    T
English (EN)    10k     51    1m     60k    197    20m
Czech (CS)      46k    101    1m    206k    195    17m
German (DE)     37k     74    1m    339k    260    51m
Spanish (ES)    27k     72    1m    152k    222    56m
French (FR)     25k     76    1m    137k    225    57m
Russian (RU)    62k     62    1m    497k    111    25m
Arabic (AR)     86k    132    4m      –      –      –

Table 1: Corpus statistics. |V| = word vocabulary size; |C| = character vocabulary size; T = number of tokens in training set. The small English data is from the Penn Treebank and the Arabic data is from the News-Commentary corpus. The rest are from the 2013 ACL Workshop on Machine Translation. |C| is large because of (rarely occurring) special characters.
standard training (0-20), validation (21-22), and test (23-24) splits along with pre-processing by Mikolov et al. (2010). With approximately 1m tokens and |V| = 10k, this version has been extensively used by the language modeling community and is publicly available.6

With the optimal hyperparameters tuned on PTB, we apply the model to various morphologically rich languages: Czech, German, French, Spanish, Russian, and Arabic. Non-Arabic data comes from the 2013 ACL Workshop on Machine Translation,7 and we use the same train/validation/test splits as in Botha and Blunsom (2014). While the raw data are publicly available, we obtained the preprocessed versions from the authors,8 whose morphological NLM serves as a baseline for our work. We train on both the small datasets (DATA-S) with 1m tokens per language, and the large datasets (DATA-L) including the large English data which has a much bigger |V| than the PTB. Arabic data comes from the News-Commentary corpus,9 and we perform our own preprocessing and train/validation/test splits. In these datasets only singleton words were replaced with <unk> and hence we effectively use the full vocabulary. It is worth noting that the character model can utilize surface forms of OOV tokens (which were replaced with <unk>), but we do not do this and stick to the preprocessed versions (despite disadvantaging the character models) for exact comparison against prior work.
# Optimization
The models are trained by truncated backpropagation through time (Werbos 1990; Graves 2013). We backpropagate for 35 time steps using stochastic gradient descent where the learning rate is initially set to 1.0 and halved if the perplexity does not decrease by more than 1.0 on the validation set after an epoch. On DATA-S we use a batch size of 20 and on DATA-L we use a batch size of 100 (for

6 http://www.fit.vutbr.cz/~imikolov/rnnlm/
7 http://www.statmt.org/wmt13/translation-task.html
8 http://bothameister.github.io/
9 http://opus.lingfil.uu.se/News-Commentary.php
              Small                 Large
CNN
  d           15                    15
  w           [1, 2, 3, 4, 5, 6]    [1, 2, 3, 4, 5, 6, 7]
  h           [25 · w]              [min{200, 50 · w}]
  f           tanh                  tanh
Highway
  l           1                     2
  g           ReLU                  ReLU
LSTM
  l           2                     2
  m           300                   650

Table 2: Architecture of the small and large models. d = dimensionality of character embeddings; w = filter widths; h = number of filter matrices, as a function of filter width (so the large model has filters of width [1, 2, 3, 4, 5, 6, 7] of size [50, 100, 150, 200, 200, 200, 200] for a total of 1100 filters); f, g = nonlinearity functions; l = number of layers; m = number of hidden units.
greater efficiency). Gradients are averaged over each batch. We train for 25 epochs on non-Arabic and 30 epochs on Arabic data (which was sufficient for convergence), picking the best performing model on the validation set. Parameters of the model are randomly initialized over a uniform distribution with support [−0.05, 0.05].

For regularization we use dropout (Hinton et al. 2012) with probability 0.5 on the LSTM input-to-hidden layers (except on the initial Highway to LSTM layer) and the hidden-to-output softmax layer. We further constrain the norm of the gradients to be below 5, so that if the L2 norm of the gradient exceeds 5 then we renormalize it to have ||·|| = 5 before updating. The gradient norm constraint was crucial in training the model. These choices were largely guided by previous work of Zaremba et al. (2014) on word-level language modeling with LSTMs.
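The text does not say whether the norm is taken per parameter tensor or over all gradients jointly; a global-norm sketch, in the spirit of Zaremba et al. (2014), might look like:

```python
import numpy as np

def renormalize_gradients(grads, max_norm=5.0):
    # If the joint L2 norm exceeds max_norm, scale all gradients down.
    total = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    if total > max_norm:
        grads = [g * (max_norm / total) for g in grads]
    return grads

grads = [np.full((3, 3), 2.0), np.full(3, 2.0)]
grads = renormalize_gradients(grads)   # joint norm is now exactly 5.0
```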
Finally, in order to speed up training on DATA-L we employ a hierarchical softmax (Morin and Bengio 2005), a common strategy for training language models with very large |V|, instead of the usual softmax. We pick the number of clusters c = ⌈√|V|⌉ and randomly split V into mutually exclusive and collectively exhaustive subsets V_1, . . . , V_c of (approximately) equal size.10 Then Pr(w_{t+1} = j | w_{1:t}) becomes,

Pr(w_{t+1} = j | w_{1:t}) = [exp(h_t · s^r + t^r) / Σ_{r'=1}^{c} exp(h_t · s^{r'} + t^{r'})] × [exp(h_t · p^j_r + q^j_r) / Σ_{j'∈V_r} exp(h_t · p^{j'}_r + q^{j'}_r)]    (10)

where r is the cluster index such that j ∈ V_r. The first term is simply the probability of picking cluster r, and the second
10While Brown clustering/frequency-based clustering is commonly used in the literature (e.g. Botha and Blunsom (2014) use Brown clustering), we used random clusters as our implementation enjoys the best speed-up when the number of words in each cluster is approximately equal. We found random clustering to work surprisingly well.
                                      PPL    Size
LSTM-Word-Small                       97.6    5m
LSTM-Char-Small                       92.3    5m
LSTM-Word-Large                       85.4   20m
LSTM-Char-Large                       78.9   19m
KN-5 (Mikolov et al. 2012)           141.2    2m
RNN† (Mikolov et al. 2012)           124.7    6m
RNN-LDA† (Mikolov et al. 2012)       113.7    7m
genCNN† (Wang et al. 2015)           116.4    8m
FOFE-FNNLM† (Zhang et al. 2015)      108.0    6m
Deep RNN (Pascanu et al. 2013)       107.5    6m
Sum-Prod Net† (Cheng et al. 2014)    100.0    5m
LSTM-1† (Zaremba et al. 2014)         82.7   20m
LSTM-2† (Zaremba et al. 2014)         78.4   52m

Table 3: Performance of our model versus other neural language models on the English Penn Treebank test set. PPL refers to perplexity (lower is better) and size refers to the approximate number of parameters in the model. KN-5 is a Kneser-Ney 5-gram language model which serves as a non-neural baseline. † For these models the authors did not explicitly state the number of parameters, and hence sizes shown here are estimates based on our understanding of their papers or private correspondence with the respective authors.

term is the probability of picking word j given that cluster r is picked. We found that hierarchical softmax was not necessary for models trained on DATA-S.
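A small numpy illustration of Equation (10) with random clusters; the sizes are toy values and the parameter names mirror the equation but are otherwise invented.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hier_softmax_prob(h, j, clusters, S, t, P, q):
    # Pr(w = j | h) = Pr(cluster r | h) * Pr(j | cluster r, h), Eq. (10).
    r = next(r for r, words in enumerate(clusters) if j in words)
    p_cluster = softmax(S @ h + t)[r]               # first term: pick cluster r
    scores = P[r].T @ h + q[r]                      # scores for words in V_r
    p_word = softmax(scores)[clusters[r].index(j)]  # second term: pick j in V_r
    return p_cluster * p_word

m, c = 4, 2
rng = np.random.default_rng(0)
clusters = [list(range(0, 5)), list(range(5, 10))]  # random equal-size split of V
S, t = rng.normal(size=(c, m)), np.zeros(c)         # cluster scoring parameters
P = [rng.normal(size=(m, 5)) for _ in range(c)]     # per-cluster output embeddings
q = [np.zeros(5) for _ in range(c)]
p = hier_softmax_prob(rng.normal(size=m), 7, clusters, S, t, P, q)
```

Evaluating the probability of one word then costs O(c + |V_r|) rather than O(|V|), which is the source of the speed-up.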
# Results
# English Penn Treebank

We train two versions of our model to assess the trade-off between performance and size. Architecture of the small (LSTM-Char-Small) and large (LSTM-Char-Large) models is summarized in Table 2. As another baseline, we also train two comparable LSTM models that use word embeddings only (LSTM-Word-Small, LSTM-Word-Large). LSTM-Word-Small uses 200 hidden units and LSTM-Word-Large uses 650 hidden units. Word embedding sizes are also 200 and 650 respectively. These were chosen to keep the number of parameters similar to the corresponding character-level model.

As can be seen from Table 3, our large model is on par with the existing state-of-the-art (Zaremba et al. 2014), despite having approximately 60% fewer parameters. Our small model significantly outperforms other NLMs of similar size, even though it is penalized by the fact that the dataset already has OOV words replaced with <unk> (other models are purely word-level models). While lower perplexities have been reported with model ensembles (Mikolov and Zweig 2012), we do not include them here as they are not comparable to the current work.

# Other Languages

The model's performance on the English PTB is informative to the extent that it facilitates comparison against the large body of existing work. However, English is relatively simple
DATA-S            CS    DE    ES    FR    RU    AR
Botha   KN-4     545   366   241   274   396   323
        MLBL     465   296   200   225   304    –
Small   Word     503   305   212   229   352   216
        Morph    414   278   197   216   290   230
        Char     401   260   182   189   278   196
Large   Word     493   286   200   222   357   172
        Morph    398   263   177   196   271   148
        Char     371   239   165   184   261   148

Table 4: Test set perplexities for DATA-S. First two rows are from Botha (2014) (except on Arabic where we trained our own KN-4 model) while the last six are from this paper. KN-4 is a Kneser-Ney 4-gram language model, and MLBL is the best performing morphological log-bilinear model from Botha (2014). Small/Large refer to model size (see Table 2), and Word/Morph/Char are models with words/morphemes/characters as inputs respectively.
from a morphological standpoint, and thus our next set of results (and arguably the main contribution of this paper) is focused on languages with richer morphology (Table 4, Table 5).

We compare our results against the morphological log-bilinear (MLBL) model from Botha and Blunsom (2014), whose model also takes into account subword information through morpheme embeddings that are summed at the input and output layers. As comparison against the MLBL models is confounded by our use of LSTMs, widely known to outperform their feed-forward/log-bilinear cousins, we also train an LSTM version of the morphological NLM, where the input representation of a word given to the LSTM is a summation of the word's morpheme embeddings. Concretely, suppose that M is the set of morphemes in a language, M ∈ R^{n×|M|} is the matrix of morpheme embeddings, and m_j is the j-th column of M (i.e. a morpheme embedding). Given the input word k, we feed the following representation to the LSTM:

x^k + Σ_{j∈M_k} m_j    (11)

where x^k is the word embedding (as in a word-level model) and M_k ⊂ M is the set of morphemes for word k. The morphemes are obtained by running an unsupervised morphological tagger as a preprocessing step.11 We emphasize that the word embedding itself (i.e. x^k) is added on top of the morpheme embeddings, as was done in Botha and Blunsom (2014). The morpheme embeddings are of size 200/650 for the small/large models respectively. We further train word-level LSTM models as another baseline.
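Equation (11) amounts to a simple lookup-and-sum; a toy numpy sketch follows, where the sizes and the morpheme map are invented for illustration.

```python
import numpy as np

def morph_input(k, X, M, morphemes_of):
    # Word embedding plus the sum of the word's morpheme embeddings (Eq. 11).
    x = X[:, k].copy()
    for j in morphemes_of.get(k, []):
        x += M[:, j]
    return x

n = 5
rng = np.random.default_rng(0)
X = rng.normal(size=(n, 100))         # word embeddings
M = rng.normal(size=(n, 40))          # morpheme embeddings
morphemes_of = {7: [3, 12]}           # e.g. word 7 = stem 3 + suffix 12
x7 = morph_input(7, X, M, morphemes_of)
```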
On DATA-S it is clear from Table 4 that the character-level models outperform their word-level counterparts de-
11We use Morfessor Cat-MAP (Creutz and Lagus 2007), as in Botha and Blunsom (2014).
DATA-L            CS    DE    ES    FR    RU    EN
Botha   KN-4     862   463   219   243   390   291
        MLBL     643   404   203   227   300   273
Small   Word     701   347   186   202   353   236
        Morph    615   331   189   209   331   233
        Char     578   305   169   190   313   216

Table 5: Test set perplexities on DATA-L. First two rows are from Botha (2014), while the last three rows are from the small LSTM models described in the paper. KN-4 is a Kneser-Ney 4-gram language model, and MLBL is the best performing morphological log-bilinear model from Botha (2014). Word/Morph/Char are models with words/morphemes/characters as inputs respectively.
spite, again, being smaller.12 The character models also outperform their morphological counterparts (both MLBL and LSTM architectures), although improvements over the morphological LSTMs are more measured. Note that the morpheme models have strictly more parameters than the word models because word embeddings are used as part of the input.

Due to memory constraints13 we only train the small models on DATA-L (Table 5). Interestingly we do not observe significant differences going from word to morpheme LSTMs on Spanish, French, and English. The character models again outperform the word/morpheme models. We also observe significant perplexity reductions even on English when V is large. We conclude this section by noting that we used the same architecture for all languages and did not perform any language-specific tuning of hyperparameters.
# Discussion

# Learned Word Representations

We explore the word representations learned by the models on the PTB. Table 6 has the nearest neighbors of word representations learned from both the word-level and character-level models. For the character models we compare the representations obtained before and after highway layers.

Before the highway layers the representations seem to solely rely on surface forms; for example the nearest neighbors of you are your, young, four, youth, which are close to you in terms of edit distance. The highway layers, however, seem to enable encoding of semantic features that are not discernable from orthography alone. After highway layers the nearest neighbor of you is we, which is orthographically distinct from you. Another example is while and though: these words are far apart edit distance-wise yet the composition model is able to place them near each other. The model
12The difference in parameters is greater for non-PTB corpora as the size of the word model scales faster with |V|. For example, on Arabic the small/large word models have 35m/121m parameters while the corresponding character models have 29m/69m parameters respectively.
13All models were trained on GPUs with 2GB memory.
Figure 2: Plot of character n-gram representations via PCA for English. Colors correspond to: prefixes (red), suffixes (blue), hyphenated (orange), and all others (grey). Prefixes refer to character n-grams which start with the start-of-word character. Suffixes likewise refer to character n-grams which end with the end-of-word character.
also makes some clear mistakes (e.g. his and hhs), highlighting the limits of our approach, although this could be due to the small dataset.

The learned representations of OOV words (computer-aided, misinformed) are positioned near words with the same part-of-speech. The model is also able to correct for incorrect/non-standard spelling (looooook), indicating potential applications for text normalization in noisy domains.

# Learned Character N-gram Representations

As discussed previously, each filter of the CharCNN is essentially learning to detect particular character n-grams. Our initial expectation was that each filter would learn to activate on different morphemes and then build up semantic representations of words from the identified morphemes. However, upon reviewing the character n-grams picked up by the filters (i.e. those that maximized the value of the filter), we found that they did not (in general) correspond to valid morphemes.

To get a better intuition for what the character composition model is learning, we plot the learned representations of all character n-grams (that occurred as part of at least two words in V) via principal components analysis (Figure 2). We feed each character n-gram into the CharCNN and use the CharCNN's output as the fixed dimensional representation for the corresponding character n-gram. As is apparent from Figure 2, the model learns to differentiate between prefixes (red), suffixes (blue), and others (grey). We also find that the representations are particularly sensitive to character n-grams containing hyphens (orange), presumably because this is a strong signal of a word's part-of-speech.
# Highway Layers

We quantitatively investigate the effect of highway network layers via ablation studies (Table 7). We train a model without any highway layers, and find that performance decreases significantly. As the difference in performance could be due to the decrease in model size, we also train a model that feeds y^k (i.e. word representation from the CharCNN)
In Vocabulary: while, his, you, richard, trading. Out-of-Vocabulary: computer-aided, misinformed, looooook.

LSTM-Word
  while: although, letting, though, minute
  his: your, her, my, their
  you: conservatives, we, guys, i
  richard: jonathan, robert, neil, nancy
  trading: advertised, advertising, turnover, turnover
  computer-aided / misinformed / looooook: –

LSTM-Char (before highway)
  while: chile, whole, meanwhile, white
  his: this, hhs, is, has
  you: your, young, four, youth
  richard: hard, rich, richer, richter
  trading: heading, training, reading, leading
  computer-aided: computer-guided, computerized, disk-drive, computer
  misinformed: informed, performed, transformed, inform
  looooook: look, cook, looks, shook

LSTM-Char (after highway)
  while: meanwhile, whole, though, nevertheless
  his: hhs, this, their, your
  you: we, your, doug, i
  richard: eduard, gerard, edward, carl
  trading: trade, training, traded, trader
  computer-aided: computer-guided, computer-driven, computerized, computer
  misinformed: informed, performed, outperformed, transformed
  looooook: look, looks, looked, looking

Table 6: Nearest neighbor words (based on cosine similarity) of word representations from the large word-level and character-level (before and after highway layers) models trained on the PTB. Last three words are OOV words, and therefore they do not have representations in the word-level model.
LSTM-Char             Small    Large
No Highway Layers     100.3     84.6
One Highway Layer      92.3     79.7
Two Highway Layers     90.1     78.9
One MLP Layer         111.2     92.6

         |V|
T         10k    25k    50k    100k
1m        17%    16%    21%     –
5m         8%    14%    16%    21%
10m        9%    12%    15%     9%
25m        9%    10%     8%     9%
Table 7: Perplexity on the Penn Treebank for small/large models trained with/without highway layers.
through a one-layer multilayer perceptron (MLP) to use as input into the LSTM. We find that the MLP does poorly, although this could be due to optimization issues.
Table 8: Perplexity reductions by going from small word-level to character-level models based on different corpus/vocabulary sizes on German (DE). |V| is the vocabulary size and T is the number of tokens in the training set. The full vocabulary of the 1m dataset was less than 100k and hence that scenario is unavailable.
We hypothesize that highway networks are especially well-suited to work with CNNs, adaptively combining local features detected by the individual filters. CNNs have already proven to be successful for many NLP tasks (Collobert et al. 2011; Shen et al. 2014; Kalchbrenner, Grefenstette, and Blunsom 2014; Kim 2014; Zhang, Zhao, and LeCun 2015; Lei, Barzilay, and Jaakola 2015), and we posit that further gains could be achieved by employing highway layers on top of existing CNN architectures.

We also anecdotally note that (1) having one to two highway layers was important, but more highway layers generally resulted in similar performance (though this may depend on the size of the datasets), (2) having more convolutional layers before max-pooling did not help, and (3) highway layers did not improve models that only used word embeddings as inputs.

# Effect of Corpus/Vocab Sizes

We next study the effect of training corpus/vocabulary sizes on the relative performance between the different models. We take the German (DE) dataset from DATA-L and vary the training corpus/vocabulary sizes, calculating the perplexity reductions as a result of going from a small word-level model to a small character-level model. To vary the vocabulary size we take the most frequent k words and replace the rest with <unk>. As with previous experiments the character model does not utilize surface forms of <unk> and simply treats it as another token. Although Table 8 suggests that the perplexity reductions become less pronounced as the corpus size increases, we nonetheless find that the character-level model outperforms the word-level model in all scenarios.
# Further Observations
We report on some further experiments and observations:
• Combining word embeddings with the CharCNN's output to form a combined representation of a word (to be used as input to the LSTM) resulted in slightly worse performance (81 on PTB with a large model). This was surprising, as improvements have been reported on part-of-speech tagging (dos Santos and Zadrozny 2014) and named entity recognition (dos Santos and Guimaraes 2015) by concatenating word embeddings with the output from a character-level CNN. While this could be due to insufficient experimentation on our part,14 it suggests that for some tasks, word embeddings are superfluous: character inputs are good enough.

• While our model requires additional convolution operations over characters and is thus slower than a comparable word-level model which can perform a simple lookup at the input layer, we found that the difference was manageable with optimized GPU implementations; for example on PTB the large character-level model trained at 1500 tokens/sec compared to the word-level model which trained at 3000 tokens/sec. For scoring, our model can have the same running time as a pure word-level model, as the CharCNN's outputs can be pre-computed for all words in V. This would, however, be at the expense of increased model size, and thus a trade-off can be made between run-time speed and memory (e.g. one could restrict the pre-computation to the most frequent words).
# Related Work

Neural Language Models (NLM) encompass a rich family of neural network architectures for language modeling. Some example architectures include feed-forward (Bengio, Ducharme, and Vincent 2003), recurrent (Mikolov et al. 2010), sum-product (Cheng et al. 2014), log-bilinear (Mnih and Hinton 2007), and convolutional (Wang et al. 2015) networks.

In order to address the rare word problem, Alexandrescu and Kirchhoff (2006), building on analogous work on count-based n-gram language models by Bilmes and Kirchhoff (2003), represent a word as a set of shared factor embeddings. Their Factored Neural Language Model (FNLM) can incorporate morphemes, word shape information (e.g. capitalization) or any other annotation (e.g. part-of-speech tags) to represent words.

A specific class of FNLMs leverages morphemic information by viewing a word as a function of its (learned) morpheme embeddings (Luong, Socher, and Manning 2013; Botha and Blunsom 2014; Qui et al. 2014). For example Luong, Socher, and Manning (2013) apply a recursive neural network over morpheme embeddings to obtain the embedding for a single word. While such models have proved useful, they require morphological tagging as a preprocessing step.

Another direction of work has involved purely character-level NLMs, wherein both input and output are characters (Sutskever, Martens, and Hinton 2011; Graves 2013). Character-level models obviate the need for morphological tagging or manual feature engineering, and have the attractive property of being able to generate novel words. However they are generally outperformed by word-level models (Mikolov et al. 2012).
Improvements have been reported on part-of-speech tagging (dos Santos and Zadrozny 2014) and named entity recognition (dos Santos and Guimaraes 2015) by representing a word as a concatenation of its word embedding and an output from a character-level CNN, and using the combined representation as features in a Conditional Random Field (CRF). Zhang, Zhao, and LeCun (2015) do away with word embeddings completely and show that for text classification, a deep CNN over characters performs well. Ballesteros, Dyer, and Smith (2015) use an RNN over characters only to train a transition-based parser, obtaining improvements on many morphologically rich languages.

14We experimented with (1) concatenation, (2) tensor products, (3) averaging, and (4) adaptive weighting schemes whereby the model learns a convex combination of word embeddings and the CharCNN outputs.
Finally, Ling et al. (2015) apply a bi-directional LSTM over characters to use as inputs for language modeling and part-of-speech tagging. They show improvements on various languages (English, Portuguese, Catalan, German, Turkish). It remains open as to which character composition model (i.e. CNN or LSTM) performs better.
# Conclusion

We have introduced a neural language model that utilizes only character-level inputs. Predictions are still made at the word-level. Despite having fewer parameters, our model outperforms baseline models that utilize word/morpheme embeddings in the input layer. Our work questions the necessity of word embeddings (as inputs) for neural language modeling.

Analysis of word representations obtained from the character composition part of the model further indicates that the model is able to encode, from characters only, rich semantic and orthographic features. Using the CharCNN and highway layers for representation learning (e.g. as input into word2vec (Mikolov et al. 2013)) remains an avenue for future work.

Insofar as sequential processing of words as inputs is ubiquitous in natural language processing, it would be interesting to see if the architecture introduced in this paper is viable for other tasks, for example, as an encoder/decoder in neural machine translation (Cho et al. 2014; Sutskever, Vinyals, and Le 2014).
# Acknowledgments

We are especially grateful to Jan Botha for providing the preprocessed datasets and the model results.
# References

Alexandrescu, A., and Kirchhoff, K. 2006. Factored Neural Language Models. In Proceedings of NAACL.
Ballesteros, M.; Dyer, C.; and Smith, N. A. 2015. Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs. In Proceedings of EMNLP.
Bengio, Y.; Ducharme, R.; and Vincent, P. 2003. A Neural Probabilistic Language Model. Journal of Machine Learning Research 3:1137–1155.
Bengio, Y.; Simard, P.; and Frasconi, P. 1994. Learning Long-term Dependencies with Gradient Descent is Difficult. IEEE Transactions on Neural Networks 5:157–166.
Bilmes, J., and Kirchhoff, K. 2003. Factored Language Models and Generalized Parallel Backoff. In Proceedings of NAACL.
Botha, J., and Blunsom, P. 2014. Compositional Morphology for Word Representations and Language Modelling. In Proceedings of ICML.
Botha, J. 2014. Probabilistic Modelling of Morphologically Rich Languages. DPhil Dissertation, Oxford University.
Chen, S., and Goodman, J. 1998. An Empirical Study of Smoothing Techniques for Language Modeling. Technical Report, Harvard University.
Cheng, W. C.; Kok, S.; Pham, H. V.; Chieu, H. L.; and Chai, K. M. 2014. Language Modeling with Sum-Product Networks. In Proceedings of INTERSPEECH.
Cho, K.; van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of EMNLP.
Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; and Kuksa, P. 2011. Natural Language Processing (almost) from Scratch. Journal of Machine Learning Research 12:2493–2537.
Creutz, M., and Lagus, K. 2007. Unsupervised Models for Morpheme Segmentation and Morphology Learning. In Proceedings of the ACM Transactions on Speech and Language Processing.
Deerwester, S.; Dumais, S.; and Harshman, R. 1990. Indexing by Latent Semantic Analysis. Journal of American Society of Information Science 41:391–407.
dos Santos, C. N., and Guimaraes, V. 2015. Boosting Named Entity Recognition with Neural Character Embeddings. In Proceedings of ACL Named Entities Workshop.
dos Santos, C. N., and Zadrozny, B. 2014. Learning Character-level Representations for Part-of-Speech Tagging. In Proceedings of ICML.
Graves, A. 2013. Generating Sequences with Recurrent Neural Networks. arXiv:1308.0850.
Hinton, G.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2012. Improving Neural Networks by Preventing Co-Adaptation of Feature Detectors. arXiv:1207.0580.
Hochreiter, S., and Schmidhuber, J. 1997. Long Short-Term Memory. Neural Computation 9:1735–1780.
Kalchbrenner, N.; Grefenstette, E.; and Blunsom, P. 2014. A Convolutional Neural Network for Modelling Sentences. In Proceedings of ACL.
Kim, Y. 2014. Convolutional Neural Networks for Sentence Classification. In Proceedings of EMNLP.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of NIPS.
LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W.; and Jackel, L. D. 1989. Handwritten Digit Recognition with a Backpropagation Network. In Proceedings of NIPS.
Lei, T.; Barzilay, R.; and Jaakola, T. 2015. Molding CNNs for Text: Non-linear, Non-consecutive Convolutions. In Proceedings of EMNLP.
Ling, W.; Lui, T.; Marujo, L.; Astudillo, R. F.; Amir, S.; Dyer, C.; Black, A. W.; and Trancoso, I. 2015. Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation. In Proceedings of EMNLP.
Luong, M.-T.; Socher, R.; and Manning, C. 2013. Better Word Representations with Recursive Neural Networks for Morphology. In Proceedings of CoNLL.
Marcus, M.; Santorini, B.; and Marcinkiewicz, M. 1993. Building a Large Annotated Corpus of English: the Penn Treebank. Computational Linguistics 19:313–330.
Mikolov, T., and Zweig, G. 2012. Context Dependent Recurrent Neural Network Language Model. In Proceedings of SLT.
Mikolov, T.; Karafiat, M.; Burget, L.; Cernocky, J.; and Khudanpur, S. 2010. Recurrent Neural Network Based Language Model. In Proceedings of INTERSPEECH.
Mikolov, T.; Deoras, A.; Kombrink, S.; Burget, L.; and Cernocky, J. 2011. Empirical Evaluation and Combination of Advanced Language Modeling Techniques. In Proceedings of INTERSPEECH.
Mikolov, T.; Sutskever, I.; Deoras, A.; Le, H.-S.; Kombrink, S.; and Cernocky, J. 2012. Subword Language Modeling with Neural Networks. preprint: www.fit.vutbr.cz/~imikolov/rnnlm/char.pdf.
Mikolov, T.; Chen, K.; Corrado, G.; and Dean, J. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781.
Mnih, A., and Hinton, G. 2007. Three New Graphical Models for Statistical Language Modelling. In Proceedings of ICML.
Morin, F., and Bengio, Y. 2005. Hierarchical Probabilistic Neural Network Language Model. In Proceedings of AISTATS.
Pascanu, R.; Culcehre, C.; Cho, K.; and Bengio, Y. 2013. How to Construct Deep Recurrent Neural Networks. arXiv:1312.6026.
Qui, S.; Cui, Q.; Bian, J.; and Gao, B. 2014. Co-learning of Word Representations and Morpheme Representations. In Proceedings of COLING.
Shen, Y.; He, X.; Gao, J.; Deng, L.; and Mesnil, G. 2014. A Latent Semantic Model with Convolutional-pooling Structure for Information Retrieval. In Proceedings of CIKM.
Srivastava, R. K.; Greff, K.; and Schmidhuber, J. 2015. Training Very Deep Networks. arXiv:1507.06228.
Sundermeyer, M.; Schluter, R.; and Ney, H. 2012. LSTM Neural Networks for Language Modeling. In Proceedings of INTERSPEECH.
Sutskever, I.; Martens, J.; and Hinton, G. 2011. Generating Text with Recurrent Neural Networks. In Proceedings of ICML.
Sutskever, I.; Vinyals, O.; and Le, Q. 2014. Sequence to Sequence Learning with Neural Networks. In Proceedings of NIPS.
Wang, M.; Lu, Z.; Li, H.; Jiang, W.; and Liu, Q. 2015. genCNN: A Convolutional Architecture for Word Sequence Prediction. In Proceedings of ACL.
Werbos, P. 1990. Back-propagation Through Time: what it does and how to do it. In Proceedings of IEEE.
Zaremba, W.; Sutskever, I.; and Vinyals, O. 2014. Recurrent Neural Network Regularization. arXiv:1409.2329.
Zhang, S.; Jiang, H.; Xu, M.; Hou, J.; and Dai, L. 2015. The Fixed-Size Ordinally-Forgetting Encoding Method for Neural Network Language Models. In Proceedings of ACL.
Zhang, X.; Zhao, J.; and LeCun, Y. 2015. Character-level Convolutional Networks for Text Classification. In Proceedings of NIPS. | {
"id": "1507.06228"
} |
1508.05326 | A large annotated corpus for learning natural language inference | Understanding entailment and contradiction is fundamental to understanding
natural language, and inference about entailment and contradiction is a
valuable testing ground for the development of semantic representations.
However, machine learning research in this area has been dramatically limited
by the lack of large-scale resources. To address this, we introduce the
Stanford Natural Language Inference corpus, a new, freely available collection
of labeled sentence pairs, written by humans doing a novel grounded task based
on image captioning. At 570K pairs, it is two orders of magnitude larger than
all other resources of its type. This increase in scale allows lexicalized
classifiers to outperform some sophisticated existing entailment models, and it
allows a neural network-based model to perform competitively on natural
language inference benchmarks for the first time. | http://arxiv.org/pdf/1508.05326 | Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning | cs.CL | To appear at EMNLP 2015. The data will be posted shortly before the
conference (the week of 14 Sep) at http://nlp.stanford.edu/projects/snli/ | null | cs.CL | 20150821 | 20150821 |
# A large annotated corpus for learning natural language inference
# Samuel R. Bowman∗† sbowman@stanford.edu

# Gabor Angeli†‡ angeli@stanford.edu

# Christopher Potts∗ cgpotts@stanford.edu

Christopher D. Manning∗†‡ manning@stanford.edu

∗Stanford Linguistics  †Stanford NLP Group  ‡Stanford Computer Science
# Abstract
Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
# Introduction
The semantic concepts of entailment and contradiction are central to all aspects of natural language meaning (Katz, 1972; van Benthem, 2008), from the lexicon to the content of entire texts. Thus, natural language inference (NLI), characterizing and using these relations in computational systems (Fyodorov et al., 2000; Condoravdi et al., 2003; Bos and Markert, 2005; Dagan et al., 2006; MacCartney and Manning, 2009), is essential in tasks ranging from information retrieval to semantic parsing to commonsense reasoning.

NLI has been addressed using a variety of techniques, including those based on symbolic logic, knowledge bases, and neural networks. In recent years, it has become an important testing ground for approaches employing distributed word and phrase representations. Distributed representations excel at capturing relations based in similarity, and have proven effective at modeling simple dimensions of meaning like evaluative sentiment (e.g., Socher et al. 2013), but it is less clear that they can be trained to support the full range of logical and commonsense inferences required for NLI (Bowman et al., 2015; Weston et al., 2015b; Weston et al., 2015a). In a SemEval 2014 task aimed at evaluating distributed representations for NLI, the best-performing systems relied heavily on additional features and reasoning capabilities (Marelli et al., 2014a).

Our ultimate objective is to provide an empirical evaluation of learning-centered approaches to NLI, advancing the case for NLI as a tool for the evaluation of domain-general approaches to semantic representation. However, in our view, existing NLI corpora do not permit such an assessment. They are generally too small for training modern data-intensive, wide-coverage models, many contain sentences that were algorithmically generated, and they are often beset with indeterminacies of event and entity coreference that significantly impact annotation quality.
To address this, this paper introduces the Stanford Natural Language Inference (SNLI) corpus, a collection of sentence pairs labeled for entailment, contradiction, and semantic independence. At 570,152 sentence pairs, SNLI is two orders of magnitude larger than all other resources of its type. And, in contrast to many such resources, all of its sentences and labels were written by humans in a grounded, naturalistic context. In a separate validation phase, we collected four additional judgments for each label for 56,941 of the examples. Of these, 98% of cases emerge with a three-annotator consensus, and 58% see a unanimous consensus from all five annotators.
In this paper, we use this corpus to evaluate
A man inspects the uniform of a figure in some East Asian country. [contradiction: C C C C C] The man is sleeping.
An older and younger man smiling. [neutral: N N E N N] Two men are smiling and laughing at the cats playing on the floor.
A black race car starts up in front of a crowd of people. [contradiction: C C C C C] A man is driving down a lonely road.
A soccer game with multiple males playing. [entailment: E E E E E] Some men are playing a sport.
A smiling costumed woman is holding an umbrella. [neutral: N N E C N] A happy woman in a fairy costume holds an umbrella.

Table 1: Randomly chosen examples from the development section of our new corpus, shown with both the selected gold labels and the full set of labels (abbreviated) from the individual annotators, including (in the first position) the label used by the initial author of the pair.
a variety of models for natural language inference, including rule-based systems, simple linear classifiers, and neural network-based models. We find that two models achieve comparable performance: a feature-rich classifier model and a neural network model centered around a Long Short-Term Memory network (LSTM; Hochreiter and Schmidhuber 1997). We further evaluate the LSTM model by taking advantage of its ready support for transfer learning, and show that it can be adapted to an existing NLI challenge task, yielding the best reported performance by a neural network model and approaching the overall state of the art.
# 2 A new corpus for NLI
To date, the primary sources of annotated NLI corpora have been the Recognizing Textual Entailment (RTE) challenge tasks.1 These are generally high-quality, hand-labeled data sets, and they have stimulated innovative logical and statistical models of natural language reasoning, but their small size (fewer than a thousand examples each) limits their utility as a testbed for learned distributed representations. The data for the SemEval 2014 task called Sentences Involving Compositional Knowledge (SICK) is a step up in terms of size, but only to 4,500 training examples, and its partly automatic construction introduced some spurious patterns into the data (Marelli et al. 2014a, §6). The Denotation Graph entailment set (Young et al., 2014) contains millions of examples of entailments between sentences and artificially constructed short phrases, but it was labeled using fully automatic methods, and is noisy enough that it is probably suitable only as a source of supplementary training data. Outside the domain of sentence-level entailment, Levy et al. (2014) introduce a large corpus of semi-automatically annotated entailment examples between subject-verb-object relation triples, and the second release of the Paraphrase Database (Pavlick et al., 2015) includes automatically generated entailment annotations over a large corpus of pairs of words and short phrases.
Existing resources suffer from a subtler issue that impacts even projects using only human-provided annotations: indeterminacies of event and entity coreference lead to insurmountable indeterminacy concerning the correct semantic label (de Marneffe et al. 2008, §4.3; Marelli et al. 2014b). For an example of the pitfalls surrounding entity coreference, consider the sentence pair A boat sank in the Pacific Ocean and A boat sank in the Atlantic Ocean. The pair could be labeled as a contradiction if one assumes that the two sentences refer to the same single event, but could also be reasonably labeled as neutral if that assumption is not made. In order to ensure that our labeling scheme assigns a single correct label to every pair, we must select one of these approaches across the board, but both choices present problems. If we opt not to assume that events are coreferent, then we will only ever find contradictions between sentences that make broad universal assertions, but if we opt to assume coreference, new counterintuitive predictions emerge. For example, Ruth Bader Ginsburg was appointed to the US Supreme Court and I had a sandwich for lunch today would unintuitively be labeled as a contradiction, rather than neutral, under this assumption. Entity coreference presents a similar kind of indeterminacy, as in the pair A tourist visited New
1 http://aclweb.org/aclwiki/index.php?title=Textual_Entailment_Resource_Pool
York and A tourist visited the city. Assuming coreference between New York and the city justifies labeling the pair as an entailment, but without that assumption the city could be taken to refer to a specific unknown city, leaving the pair neutral. This kind of indeterminacy of label can be resolved only once the questions of coreference are resolved.
With SNLI, we sought to address the issues of size, quality, and indeterminacy. To do this, we employed a crowdsourcing framework with the following crucial innovations. First, the examples were grounded in specific scenarios, and the premise and hypothesis sentences in each example were constrained to describe that scenario from the same perspective, which helps greatly in controlling event and entity coreference.2 Second, the prompt gave participants the freedom to produce entirely novel sentences within the task setting, which led to richer examples than we see with the more proscribed string-editing techniques of earlier approaches, without sacrificing consistency. Third, a subset of the resulting sentences were sent to a validation task aimed at providing a highly reliable set of annotations over the same data, and at identifying areas of inferential uncertainty.
# 2.1 Data collection
We used Amazon Mechanical Turk for data collection. In each individual task (each HIT), a worker was presented with premise scene descriptions from a pre-existing corpus, and asked to supply hypotheses for each of our three labels (entailment, neutral, and contradiction), forcing the data to be balanced among these classes.
The instructions that we provided to the workers are shown in Figure 1. Below the instructions were three fields for each of three requested sentences, corresponding to our entailment, neutral, and contradiction labels, a fourth field (marked optional) for reporting problems, and a link to an FAQ page. That FAQ grew over the course of data collection. It warned about disallowed techniques (e.g., reusing the same sentence for many different prompts, which we saw in a few cases), provided guidance concerning sentence length and
2 Issues of coreference are not completely solved, but greatly mitigated. For example, with the premise sentence A dog is lying in the grass, a worker could safely assume that the dog is the most prominent thing in the photo, and very likely the only dog, and build contradicting sentences assuming reference to the same dog.
We will show you the caption for a photo. We will not show you the photo. Using only the caption and what you know about the world:
⢠Write one alternate caption that is deï¬nitely a true description of the photo. Example: For the caption âTwo dogs are running through a ï¬eld.â you could write âThere are animals outdoors.â
⢠Write one alternate caption that might be a true description of the photo. Example: For the cap- tion âTwo dogs are running through a ï¬eld.â you could write âSome puppies are running to catch a stick.â
⢠Write one alternate caption that is deï¬nitely a false description of the photo. Example: For the caption âTwo dogs are running through a ï¬eld.â you could write âThe pets are sitting on a couch.â This is different from the maybe correct category because itâs impossible for the dogs to be both running and sitting.
Figure 1: The instructions used on Mechanical Turk for data collection.
complexity (we did not enforce a minimum length, and we allowed bare NPs as well as full sentences), and reviewed logistical issues around payment timing. About 2,500 workers contributed.
For the premises, we used captions from the Flickr30k corpus (Young et al., 2014), a collection of approximately 160k captions (corresponding to about 30k images) collected in an earlier crowdsourced effort.3 The captions were not authored by the photographers who took the source images, and they tend to contain relatively literal scene descriptions that are suited to our approach, rather than those typically associated with personal photographs (as in their example: Our trip to the Olympic Peninsula). In order to ensure that the label for each sentence pair can be recovered solely based on the available text, we did not use the images at all during corpus collection.
Table 2 reports some key statistics about the collected corpus, and Figure 2 shows the distributions of sentence lengths for both our source premises and our newly collected hypotheses. We observed that while premise sentences varied considerably in length, hypothesis sentences tended to be as
3 We additionally include about 4k sentence pairs from a pilot study in which the premise sentences were instead drawn from the VisualGenome corpus (under construction; visualgenome.org). These examples appear only in the training set, and have pair identifiers prefixed with vg in our corpus.
Data set sizes: Training pairs 550,152; Development pairs 10,000; Test pairs 10,000. Sentence length: Premise mean token count 14.1; Hypothesis mean token count 8.3. Parser output: Premise 'S'-rooted parses 74.0%; Hypothesis 'S'-rooted parses 88.9%; Distinct words (ignoring case) 37,026.

Table 2: Key statistics for the raw sentence pairs in SNLI. Since the two halves of each pair were collected separately, we report some statistics for both.
short as possible while still providing enough information to yield a clear judgment, clustering at around seven words. We also observed that the bulk of the sentences from both sources were syntactically complete rather than fragments, and the frequency with which the parser produces a parse rooted with an 'S' (sentence) node attests to this.
# 2.2 Data validation
In order to measure the quality of our corpus, and in order to construct maximally useful testing and development sets, we performed an additional round of validation for about 10% of our data. This validation phase followed the same basic form as the Mechanical Turk labeling task used to label the SICK entailment data: we presented workers with pairs of sentences in batches of five, and asked them to choose a single label for each pair. We supplied each pair to four annotators, yielding five labels per pair including the label used by the original author. The instructions were similar to the instructions for initial data collection shown in Figure 1, and linked to a similar FAQ. Though we initially used a very restrictive qualification (based on past approval rate) to select workers for the validation task, we nonetheless discovered (and deleted) some instances of random guessing in an early batch of work, and subsequently instituted a fully closed qualification restricted to about 30 trusted workers.
For each pair that we validated, we assigned a gold label. If any one of the three labels was chosen by at least three of the five annotators, it was
[Figure: histogram of the number of sentences by sentence length in tokens, shown separately for premises and hypotheses.]

Figure 2: The distribution of sentence length.
chosen as the gold label. If there was no such consensus, which occurred in about 2% of cases, we assigned the placeholder label '-'. While these unlabeled examples are included in the corpus distribution, they are unlikely to be helpful for the standard NLI classification task, and we do not include them in either training or evaluation in the experiments that we discuss in this paper.
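For concreteness, the gold-labeling rule just described amounts to a simple majority vote; the sketch below (ours, not the authors' code) returns '-' when no label reaches three of the five votes:

```python
from collections import Counter

def gold_label(labels):
    """Assign a gold label from five annotator labels.

    Returns the label chosen by at least three of the five annotators,
    or the placeholder "-" when no such consensus exists.
    """
    assert len(labels) == 5
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 3 else "-"

# Example: three of five annotators chose "neutral".
print(gold_label(["neutral", "neutral", "entailment", "contradiction", "neutral"]))
```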
The results of this validation process are summarized in Table 3. Nearly all of the examples received a majority label, indicating broad consensus about the nature of the data and categories. The gold-labeled examples are very nearly evenly distributed across the three labels. The Fleiss κ scores (computed over every example with a full five annotations) are likely to be conservative given our large and unevenly distributed pool of annotators, but they still provide insights about the levels of disagreement across the three semantic classes. This disagreement likely reflects not just the limitations of large crowdsourcing efforts but also the uncertainty inherent in naturalistic NLI. Regardless, the overall rate of agreement is extremely high, suggesting that the corpus is sufficiently high quality to pose a challenging but realistic machine learning task.
# 2.3 The distributed corpus
Table 1 shows a set of randomly chosen validated examples from the development set with their labels. Qualitatively, we find the data that we collected draws fairly extensively on commonsense knowledge, and that hypothesis and premise sentences often differ structurally in significant ways, suggesting that there is room for improvement beyond superficial word alignment models. We also find the sentences that we collected to be largely
General: Validated pairs 56,951; Pairs w/ unanimous gold label 58.3%. Individual annotator label agreement: Individual label = gold label 89.0%; Individual label = author's label 85.8%. Gold label/author's label agreement: Gold label = author's label 91.2%; Gold label ≠ author's label 6.8%; No gold label (no 3 labels match) 2.0%. Fleiss κ: contradiction 0.77; entailment 0.72; neutral 0.60; Overall 0.70.

Table 3: Statistics for the validated pairs. The author's label is the label used by the worker who wrote the premise to create the sentence pair. A gold label reflects a consensus of three votes from among the author and the four annotators.
fluent, correctly spelled English, with a mix of full sentences and caption-style noun phrase fragments, though punctuation and capitalization are often omitted.
The corpus is available under a CreativeCommons Attribution-ShareAlike license, the same license used for the Flickr30k source captions. It can be downloaded at: nlp.stanford.edu/projects/snli/
Partition We distribute the corpus with a pre-specified train/test/development split. The test and development sets contain 10k examples each. Each original ImageFlickr caption occurs in only one of the three sets, and all of the examples in the test and development sets have been validated.
Parses The distributed corpus includes parses produced by the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003), trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), which we found to improve the parse quality of the descriptive sentences and noun phrases found in the descriptions.
# 3 Our data as a platform for evaluation
The most immediate application for our corpus is in developing models for the task of NLI.
Edit Distance Based: SNLI 71.9, SICK 65.4, RTE-3 61.9. Classifier Based: SNLI 72.2, SICK 71.4, RTE-3 61.5. + Lexical Resources: SNLI 75.0, SICK 78.8, RTE-3 63.6.

Table 4: 2-class test accuracy for two simple baseline systems included in the Excitement Open Platform, as well as SICK and RTE results for a model making use of more sophisticated lexical resources.
In particular, since it is dramatically larger than any existing corpus of comparable quality, we expect it to be suitable for training parameter-rich models like neural networks, which have not previously been competitive at this task. Our ability to evaluate standard classifier-based NLI models, however, was limited to those which were designed to scale to SNLI's size without modification, so a more complete comparison of approaches will have to wait for future work. In this section, we explore the performance of three classes of models which could scale readily: (i) models from a well-known NLI system, the Excitement Open Platform; (ii) variants of a strong but simple feature-based classifier model, which makes use of both unlexicalized and lexicalized features, and (iii) distributed representation models, including a baseline model and neural network sequence models.
# 3.1 Excitement Open Platform models
The first class of models is from the Excitement Open Platform (EOP; Padó et al. 2014; Magnini et al. 2014), an open source platform for RTE research. EOP is a tool for quickly developing NLI systems while sharing components such as common lexical resources and evaluation sets. We evaluate on two algorithms included in the distribution: a simple edit-distance based algorithm and a classifier-based algorithm, the latter both in a bare form and augmented with EOP's full suite of lexical resources.
Our initial goal was to better understand the difficulty of the task of classifying SNLI corpus inferences, rather than necessarily the performance of a state-of-the-art RTE system. We approached this by running the same system on several data sets: our own test set, the SICK test data, and the standard RTE-3 test set (Giampiccolo et al., 2007). We report results in Table 4. Each of the models
was separately trained on the training set of each corpus. All models are evaluated only on 2-class entailment. To convert 3-class problems like SICK and SNLI to this setting, all instances of contradiction and unknown are converted to nonentailment. This yields a most-frequent-class baseline accuracy of 66% on SNLI, and 71% on SICK. This is intended primarily to demonstrate the difficulty of the task, rather than necessarily the performance of a state-of-the-art RTE system. The edit distance algorithm tunes the weight of the three case-insensitive edit distance operations on the training set, after removing stop words. In addition to the base classifier-based system distributed with the platform, we train a variant which includes information from WordNet (Miller, 1995) and VerbOcean (Chklovski and Pantel, 2004), and makes use of features based on tree patterns and dependency tree skeletons (Wang and Neumann, 2007).
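The edit-distance baseline is only described at a high level here. As a rough, hedged illustration of the idea (ours, not the EOP implementation), one can compute a token-level Levenshtein distance with tunable per-operation weights, which would then be tuned and thresholded on the training set after stop-word removal:

```python
def weighted_edit_distance(premise, hypothesis, w_ins=1.0, w_del=1.0, w_sub=1.0):
    """Weighted Levenshtein distance over lowercased tokens.

    The three operation weights are the quantities an EOP-style
    baseline would tune on the training set.
    """
    p = premise.lower().split()
    h = hypothesis.lower().split()
    # dp[i][j]: cost of transforming the first i premise tokens
    # into the first j hypothesis tokens.
    dp = [[0.0] * (len(h) + 1) for _ in range(len(p) + 1)]
    for i in range(1, len(p) + 1):
        dp[i][0] = i * w_del
    for j in range(1, len(h) + 1):
        dp[0][j] = j * w_ins
    for i in range(1, len(p) + 1):
        for j in range(1, len(h) + 1):
            sub = 0.0 if p[i - 1] == h[j - 1] else w_sub
            dp[i][j] = min(dp[i - 1][j] + w_del,
                           dp[i][j - 1] + w_ins,
                           dp[i - 1][j - 1] + sub)
    return dp[len(p)][len(h)]
```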
# 3.2 Lexicalized Classifier
Unlike the RTE datasets, SNLI's size supports approaches which make use of rich lexicalized features. We evaluate a simple lexicalized classifier to explore the ability of non-specialized models to exploit these features in lieu of more involved language understanding. Our classifier implements 6 feature types, 3 unlexicalized and 3 lexicalized (a short code sketch of the lexicalized features follows the list):
1. The BLEU score of the hypothesis with respect to the premise, using an n-gram length between 1 and 4.

2. The length difference between the hypothesis and the premise, as a real-valued feature.

3. The overlap between words in the premise and hypothesis, both as an absolute count and a percentage of possible overlap, and both over all words and over just nouns, verbs, adjectives, and adverbs.

4. An indicator for every unigram and bigram in the hypothesis.

5. Cross-unigrams: for every pair of words across the premise and hypothesis which share a POS tag, an indicator feature over the two words.

6. Cross-bigrams: for every pair of bigrams across the premise and hypothesis which share a POS tag on the second word, an indicator feature over the two bigrams.
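To make features 4-6 concrete, here is a minimal sketch of the lexicalized indicator features. It is an illustration rather than the authors' implementation, and it assumes that each sentence arrives as a list of (token, POS tag) pairs produced upstream:

```python
def lexicalized_features(premise, hypothesis):
    """Indicator features 4-6 for a (premise, hypothesis) pair.

    Both inputs are lists of (token, pos_tag) pairs; POS tagging
    itself is assumed to happen before this step.
    """
    feats = set()
    h_toks = [t for t, _ in hypothesis]
    # Feature 4: unigrams and bigrams of the hypothesis.
    for tok in h_toks:
        feats.add(("h_uni", tok))
    for a, b in zip(h_toks, h_toks[1:]):
        feats.add(("h_bi", a, b))
    # Feature 5: cross-unigrams sharing a POS tag.
    for pt, ppos in premise:
        for ht, hpos in hypothesis:
            if ppos == hpos:
                feats.add(("x_uni", pt, ht))
    # Feature 6: cross-bigrams sharing the POS tag of the second word.
    p_bi = list(zip(premise, premise[1:]))
    h_bi = list(zip(hypothesis, hypothesis[1:]))
    for p1, p2 in p_bi:
        for h1, h2 in h_bi:
            if p2[1] == h2[1]:
                feats.add(("x_bi", p1[0], p2[0], h1[0], h2[0]))
    return feats
```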
We report results in Table 5, along with ablation studies for removing the cross-bigram features (leaving only the cross-unigram feature) and
Lexicalized: SNLI Train 99.7, Test 78.2; SICK Train 90.4, Test 77.8. Unigrams Only: SNLI Train 93.1, Test 71.6; SICK Train 88.1, Test 77.0. Unlexicalized: SNLI Train 49.4, Test 50.4; SICK Train 69.9, Test 69.6.

Table 5: 3-class accuracy, training on either our data or SICK, including models lacking cross-bigram features (Feature 6), and lacking all lexical features (Features 4-6). We report results both on the test set and the training set to judge overfitting.
for removing all lexicalized features. On our large corpus in particular, there is a substantial jump in accuracy from using lexicalized features, and another from using the very sparse cross-bigram features. The latter result suggests that there is value in letting the classifier automatically learn to recognize structures like explicit negations and adjective modification. A similar result was shown in Wang and Manning (2012) for bigram features in sentiment analysis.
It is surprising that the classifier performs as well as it does without any notion of alignment or tree transformations. Although we expect that richer models would perform better, the results suggest that given enough data, cross bigrams with the noisy part-of-speech overlap constraint can produce an effective model.
# 3.3 Sentence embeddings and NLI
SNLI is suitably large and diverse to make it possible to train neural network models that produce distributed representations of sentence meaning. In this section, we compare the performance of three such models on the corpus. To focus specifically on the strengths of these models at producing informative sentence representations, we use sentence embedding as an intermediate step in the NLI classification task: each model must produce a vector representation of each of the two sentences without using any context from the other sentence, and the two resulting vectors are then passed to a neural network classifier which predicts the label for the pair. This choice allows us to focus on existing models for sentence embedding, and it allows us to evaluate the ability of those models to learn useful representations of meaning (which may be independently useful for subsequent tasks), at the cost of excluding from
[Figure: a 3-way softmax classifier on top of three 200d tanh layers, taking as input 100d premise and 100d hypothesis embeddings produced by the sentence model applied to each input.]

Figure 3: The neural network classification architecture: for each sentence embedding model evaluated in Tables 6 and 7, two identical copies of the model are run with the two sentences as input, and their outputs are used as the two 100d inputs shown here.
consideration possible strong neural models for NLI that directly compare the two inputs at the word or phrase level.
Our neural network classifier, depicted in Figure 3 (and based on a one-layer model in Bowman et al. 2015), is simply a stack of three 200d tanh layers, with the bottom layer taking the concatenated sentence representations as input and the top layer feeding a softmax classifier, all trained jointly with the sentence embedding model itself. We test three sentence embedding models, each set to use 100d phrase and sentence embeddings. Our baseline sentence embedding model simply sums the embeddings of the words in each sentence. In addition, we experiment with two simple sequence embedding models: a plain RNN and an LSTM RNN (Hochreiter and Schmidhuber, 1997). The word embeddings for all of the models are initialized with the 300d reference GloVe vectors (840B token version, Pennington et al. 2014) and fine-tuned as part of training. In addition, all of the models use an additional tanh neural network layer to map these 300d embeddings into the lower-dimensional phrase and sentence embedding space. All of the models are randomly initialized using standard techniques and trained using AdaDelta (Zeiler, 2012) minibatch SGD until performance on the development set stops improving. We applied L2 regularization to all models, manually tuning the strength coefficient λ for each, and additionally applied dropout (Srivastava et al., 2014) to the inputs and outputs of the
100d Sum of words: Train 79.3, Test 75.3. 100d RNN: Train 73.1, Test 72.2. 100d LSTM RNN: Train 84.8, Test 77.6.

Table 6: Accuracy in 3-class classification on our training and test sets for each model.
sentence embedding models (though not to their internal connections) with a fixed dropout rate. All models were implemented in a common framework for this paper, and the implementations will be made available at publication time.
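To make the architecture of Figure 3 concrete, below is a minimal PyTorch sketch of the sum-of-words sentence encoder and the stack of three 200d tanh layers feeding a 3-way softmax. It follows the hyperparameters stated above but is an illustration, not the authors' released implementation:

```python
import torch
import torch.nn as nn

class SumOfWordsEncoder(nn.Module):
    """Maps 300d word embeddings to a 100d sentence embedding by
    summing the word vectors and applying a tanh projection layer."""
    def __init__(self, vocab_size, word_dim=300, sent_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)  # init from GloVe in practice
        self.proj = nn.Linear(word_dim, sent_dim)

    def forward(self, token_ids):                # (batch, seq_len)
        summed = self.embed(token_ids).sum(dim=1)
        return torch.tanh(self.proj(summed))     # (batch, sent_dim)

class NLIClassifier(nn.Module):
    """Three 200d tanh layers over concatenated premise and hypothesis
    embeddings, followed by a 3-way softmax classifier."""
    def __init__(self, encoder, sent_dim=100, hidden=200, n_classes=3):
        super().__init__()
        self.encoder = encoder
        self.mlp = nn.Sequential(
            nn.Linear(2 * sent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, premise_ids, hypothesis_ids):
        p = self.encoder(premise_ids)            # same encoder for both inputs
        h = self.encoder(hypothesis_ids)
        # Returns logits; the softmax is applied inside the cross-entropy loss.
        return self.mlp(torch.cat([p, h], dim=1))
```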
The results are shown in Table 6. The sum of words model performed slightly worse than the fundamentally similar lexicalized classifier: while the sum of words model can use pretrained word embeddings to better handle rare words, it lacks even the rudimentary sensitivity to word order that the lexicalized model's bigram features provide. Of the two RNN models, the LSTM's more robust ability to learn long-term dependencies serves it well, giving it a substantial advantage over the plain RNN, and resulting in performance that is essentially equivalent to the lexicalized classifier on the test set (LSTM performance near the stopping iteration varies by up to 0.5% between evaluation steps). While the lexicalized model fits the training set almost perfectly, the gap between train and test set accuracy is relatively small for all three neural network models, suggesting that research into significantly higher capacity versions of these models would be productive.
# 3.4 Analysis and discussion
Figure 4 shows a learning curve for the LSTM and the lexicalized and unlexicalized feature-based models. It shows that the large size of the corpus is crucial to both the LSTM and the lexicalized model, and suggests that additional data would yield still better performance for both. In addition, though the LSTM and the lexicalized model show similar performance when trained on the current full corpus, the somewhat steeper slope for the LSTM hints that its ability to learn arbitrarily structured representations of sentence meaning may give it an advantage over the more constrained lexicalized model on still larger datasets. We were struck by the speed with which the lexicalized classifier outperforms its unlexicalized
[Figure: learning curve of % accuracy against training pairs used (log scale) for the unlexicalized and lexicalized classifiers and the LSTM.]

Figure 4: A learning curve showing how the baseline classifiers and the LSTM perform when trained to convergence on varied amounts of training data. The y-axis starts near a random-chance accuracy of 33%. The minibatch size of 64 that we used to tune the LSTM sets a lower bound on data for that model.
counterpart. With only 100 training examples, the cross-bigram classifier is already performing better. Empirically, we find that the top weighted features for the classifier trained on 100 examples tend to be high precision entailments; e.g., playing → outside (most scenes are outdoors), a banana → person eating. If relatively few spurious entailments get high weight (as it appears is the case), then it makes sense that, when these do fire, they boost accuracy in identifying entailments.
There are revealing patterns in the errors common to all the models considered here. Despite the large size of the training corpus and the distributional information captured by GloVe initialization, many lexical relationships are still misanalyzed, leading to incorrect predictions of independent, even for pairs that are common in the training corpus like beach/surf and sprinter/runner. Semantic mistakes at the phrasal level (e.g., predicting contradiction for A male is placing an order in a deli/A man buying a sandwich at a deli) indicate that additional attention to compositional semantics would pay off. However, many of the persistent problems run deeper, to inferences that depend on world knowledge and context-specific inferences, as in the entailment pair A race car driver leaps from a burning car/A race car driver escaping danger, for which both the lexicalized classifier and the LSTM predict neutral. In other cases, the models' attempts to shortcut
this kind of inference through lexical cues can lead them astray. Some of these examples have qualities reminiscent of Winograd schemas (Winograd, 1972; Levesque, 2013). For example, all the models wrongly predict entailment for A young girl throws sand toward the ocean/A girl can't stand the ocean, presumably because of distributional associations between throws and can't stand.
Analysis of the models' predictions also yields insights into the extent to which they grapple with event and entity coreference. For the most part, the original image prompts contained a focal element that the caption writer identified with a syntactic subject, following information structuring conventions associating subjects and topics in English (Ward and Birner, 2004). Our annotators generally followed suit, writing sentences that, while structurally diverse, share topic/focus (theme/rheme) structure with their premises. This promotes a coherent, situation-specific construal of each sentence pair. This is information that our models can easily take advantage of, but it can lead them astray. For instance, all of them stumble with the amusingly simple case A woman prepares ingredients for a bowl of soup/A soup bowl prepares a woman, in which prior expectations about parallelism are not met. Another headline example of this type is A man wearing padded arm protection is being bitten by a German shepherd dog/A man bit a dog, which all the models wrongly diagnose as entailment, though the sentences report two very different stories. A model with access to explicit information about syntactic or semantic structure should perform better on cases like these.
# 4 Transfer learning with SICK
To the extent that successfully training a neural network model like our LSTM on SNLI forces that model to encode broadly accurate representations of English scene descriptions and to build an entailment classifier over those relations, we should expect it to be readily possible to adapt the trained model for use on other NLI tasks. In this section, we evaluate on the SICK entailment task using a simple transfer learning method (Pratt et al., 1991) and achieve competitive results.
To perform transfer, we take the parameters of the LSTM RNN model trained on SNLI and use them to initialize a new model, which is trained from that point only on the training portion of SICK. The only newly initialized parameters are
Our data only: Train 42.0, Test 46.7. SICK only: Train 100.0, Test 71.3. Our data and SICK (transfer): Train 99.9, Test 80.8.

Table 7: LSTM 3-class accuracy on the SICK train and test sets under three training regimes.
softmax layer parameters and the embeddings for words that appear in SICK, but not in SNLI (which are populated with GloVe embeddings as above). We use the same model hyperparameters that were used to train the original model, with the exception of the L2 regularization strength, which is re-tuned. We additionally transfer the accumulators that are used by AdaDelta to set the learning rates. This lowers the starting learning rates, and is intended to ensure that the model does not learn too quickly in its first few epochs after transfer and destroy the knowledge accumulated in the pre-transfer phase of training.
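A sketch of the transfer step described above, in PyTorch style. The parameter names ("softmax", "embed") and the is_new_word helper are illustrative assumptions, not taken from the authors' code:

```python
import torch

def transfer_to_sick(snli_model, sick_model, sick_vocab, glove, is_new_word):
    """Initialize a SICK model from SNLI-trained parameters.

    Copies all shared parameters, leaves the softmax layer freshly
    initialized, and populates embeddings of words unseen in SNLI
    from GloVe. (The paper also carries over AdaDelta accumulators,
    which live in the optimizer state rather than the model.)
    """
    state = snli_model.state_dict()
    state = {k: v for k, v in state.items() if not k.startswith("softmax")}
    sick_model.load_state_dict(state, strict=False)
    with torch.no_grad():
        for word, idx in sick_vocab.items():
            if is_new_word(word):        # appears in SICK but not in SNLI
                sick_model.embed.weight[idx] = glove[word]
    return sick_model
```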
The results are shown in Table 7. Training on SICK alone yields poor performance, and the model trained on SNLI fails when tested on SICK data, labeling more neutral examples as contradictions than correctly, possibly as a result of subtle differences in how the labeling task was presented. In contrast, transferring SNLI representations to SICK yields the best performance yet reported for an unaugmented neural network model, surpasses the available EOP models, and approaches both the overall state of the art at 84.6% (Lai and Hockenmaier, 2014) and the 84% level of interannotator agreement, which likely represents an approximate performance ceiling. This suggests that the introduction of a large high-quality corpus makes it possible to train representation-learning models for sentence meaning that are competitive with the best hand-engineered models on inference tasks.
We attempted to apply this same transfer evaluation technique to the RTE-3 challenge, but found that the small training set (800 examples) did not allow the model to adapt to the unfamiliar genre of text used in that corpus, such that no training configuration yielded competitive performance. Further research on effective transfer learning on small data sets with neural models might facilitate improvements here.
# 5 Conclusion
Natural languages are powerful vehicles for reasoning, and nearly all questions about meaningfulness in language can be reduced to questions of entailment and contradiction in context. This suggests that NLI is an ideal testing ground for theories of semantic representation, and that training for NLI tasks can provide rich domain-general semantic representations. To date, however, it has not been possible to fully realize this potential due to the limited nature of existing NLI resources. This paper sought to remedy this with a new, large-scale, naturalistic corpus of sentence pairs labeled for entailment, contradiction, and independence. We used this corpus to evaluate a range of models, and found that both simple lexicalized models and neural network models perform well, and that the representations learned by a neural network model on our corpus can be used to dramatically improve performance on a standard challenge dataset. We hope that SNLI presents valuable training data and a challenging testbed for the continued application of machine learning to semantic representation.
# Acknowledgments
We gratefully acknowledge support from a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Google, Bloomberg L.P., DARPA, AFRL, NSF, ONR, or the US government. We also thank our many excellent Mechanical Turk contributors.
# References
Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In Proc. EMNLP.

Samuel R. Bowman, Christopher Potts, and Christopher D. Manning. 2015. Recursive neural networks can learn logical semantics. In Proc. of the 3rd Workshop on Continuous Vector Space Models and their Compositionality.

Timothy Chklovski and Patrick Pantel. 2004. VerbOcean: Mining the web for fine-grained semantic verb relations. In Proc. EMNLP.

Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Reinhard Stolle, and Daniel G. Bobrow. 2003. Entailment, intensionality and text understanding. In Proc. of the HLT-NAACL 2003 Workshop on Text Meaning.

Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. Evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, pages 177-190. Springer.

Marie-Catherine de Marneffe, Anna N. Rafferty, and Christopher D. Manning. 2008. Finding contradictions in text. In Proc. ACL.
W. Nelson Francis and Henry Kucera. 1979. Brown corpus manual. Brown University.
Yaroslav Fyodorov, Yoad Winter, and Nissim Francez. 2000. A natural logic inference system. In Proc. of the 2nd Workshop on Inference in Computational Semantics.

Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proc. of the ACL-PASCAL workshop on textual entailment and paraphrasing.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Jerrold J. Katz. 1972. Semantic Theory. Harper & Row, New York.
Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proc. ACL.

Alice Lai and Julia Hockenmaier. 2014. Illinois-LH: A denotational and distributional approach to semantics. In Proc. SemEval.
Hector J. Levesque. 2013. On our best behaviour. In Proc. AAAI.
Omer Levy, Ido Dagan, and Jacob Goldberger. 2014. Focused entailment graphs for open IE propositions. In Proc. CoNLL.
Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proc. of the Eighth International Conference on Computational Semantics.
Bernardo Magnini, Roberto Zanoli, Ido Dagan, Kathrin Eichler, Günter Neumann, Tae-Gil Noh, Sebastian Pado, Asher Stern, and Omer Levy. 2014. The Excitement Open Platform for textual inferences. In Proc. ACL.

Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014a. SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proc. SemEval.

Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014b. A SICK cure for the evaluation of compositional distributional semantic models. In Proc. LREC.

George A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41.

Sebastian Padó, Tae-Gil Noh, Asher Stern, Rui Wang, and Roberto Zanoli. 2014. Design and realization of a modular architecture for textual entailment. Journal of Natural Language Engineering.

Ellie Pavlick, Johan Bos, Malvina Nissim, Charley Beller, Ben Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Proc. ACL.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proc. EMNLP.
Lorien Y Pratt, Jack Mostow, Candace A Kamm, and Ace A Kamm. 1991. Direct transfer of learned information among neural networks. In Proc. AAAI.

Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. EMNLP.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR.

Johan van Benthem. 2008. A brief history of natural logic. In M. Chakraborty, B. Löwe, M. Nath Mitra, and S. Sarukki, editors, Logic, Navya-Nyaya and Applications: Homage to Bimal Matilal. College Publications.
Sida Wang and Christopher D. Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In Proc. ACL.

Rui Wang and Günter Neumann. 2007. Recognizing textual entailment using sentence similarity based on dependency tree skeletons. In ACL-PASCAL Workshop on Textual Entailment and Paraphrasing.

Gregory Ward and Betty Birner. 2004. Information structure and non-canonical syntax. In Laurence R. Horn and Gregory Ward, editors, Handbook of Pragmatics, pages 153-174. Blackwell, Oxford.

Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015a. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv:1502.05698.
Jason Weston, Sumit Chopra, and Antoine Bordes. 2015b. Memory networks. In Proc. ICLR.
Terry Winograd. 1972. Understanding natural language. Cognitive Psychology, 3(1):1-191.

Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2:67-78.

Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. | {
"id": "1502.05698"
} |
1506.08909 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response. | http://arxiv.org/pdf/1506.08909 | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | cs.CL, cs.AI, cs.LG, cs.NE | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | cs.CL | 20150630 | 20160204 |
# The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
Ryan Lowe†*, Nissan Pow*, Iulian V. Serban† and Joelle Pineau*

*School of Computer Science, McGill University, Montreal, Canada † Department of Computer Science and Operations Research, Université de Montréal, Montreal, Canada
# Abstract
This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response.
# Introduction
The ability for a computer to converse in a natural and coherent manner with a human has long been held as one of the primary objectives of artificial intelligence (AI). In this paper we consider the problem of building dialogue agents that have the ability to interact in one-on-one multi-turn conversations on a diverse set of topics. We primarily target unstructured dialogues, where there is no a priori logical representation for the information exchanged during the conversation. This is in contrast to recent systems which focus on structured dialogue tasks, using a slot-filling representation [10, 27, 32].

We observe that in several subfields of AI (computer vision, speech recognition, machine translation) fundamental breakthroughs were achieved in recent years using machine learning methods, more specifically with neural architectures [1]; however, it is worth noting that many of the most successful approaches, in particular convolutional and recurrent neural networks, were known for many years prior. It is therefore reasonable to attribute this progress to three major factors: 1) the public distribution of very large rich datasets [5], 2) the availability of substantial computing power, and 3) the development of new training methods for neural architectures, in particular leveraging unlabeled data. Similar progress has not yet been observed in the development of dialogue systems. We hypothesize that this is due to the lack of sufficiently large datasets, and aim to overcome this barrier by providing a new large corpus for research in multi-turn conversation.

The new Ubuntu Dialogue Corpus consists of almost one million two-person conversations extracted from the Ubuntu chat logs1, used to receive technical support for various Ubuntu-related problems. The conversations have an average of 8 turns each, with a minimum of 3 turns. All conversations are carried out in text form (not audio). The dataset is orders of magnitude larger than structured corpuses such as those of the Dialogue State Tracking Challenge [32]. It is on the same scale as recent datasets for solving problems such as question answering and analysis of microblog services, such as Twitter [22, 25, 28, 33], but each conversation in our dataset includes several more turns, as well as longer utterances. Furthermore, because it targets a specific domain, namely technical support, it can be used as a case study for the development of AI agents in targeted applications, in contrast to chatbox agents that often lack a well-defined goal [26].

In addition to the corpus, we present learning architectures suitable for analyzing this dataset, ranging from the simple frequency-inverse
*The first two authors contributed equally.
1 These logs are available from 2004 to 2015 at http://irclogs.ubuntu.com/
document frequency (TF-IDF) approach, to more sophisticated neural models including a Recurrent Neural Network (RNN) and a Long Short-Term Memory (LSTM) architecture. We provide benchmark performance of these algorithms, trained with our new corpus, on the task of selecting the best next response, which can be achieved without requiring any human labeling. The dataset is ready for public release2. The code developed for the empirical results is also available3.
# 2 Related Work
We briefly review existing dialogue datasets, and some of the more recent learning architectures used for both structured and unstructured dialogues. This is by no means an exhaustive list (due to space constraints), but surveys resources most related to our contribution. A list of datasets discussed is provided in Table 1.
# 2.1 Dialogue Datasets
The Switchboard dataset [8], and the Dialogue State Tracking Challenge (DSTC) datasets [32] have been used to train and validate dialogue management systems for interactive information retrieval. The problem is typically formalized as a slot filling task, where agents attempt to predict the goal of a user during the conversation. These datasets have been significant resources for structured dialogues, and have allowed major progress in this field, though they are quite small compared to datasets currently used for training neural architectures.
Recently, a few datasets have been used containing unstructured dialogues extracted from Twitter4. Ritter et al. [21] collected 1.3 million conversations; this was extended in [28] to take advantage of longer contexts by using A-B-A triples. Shang et al. [25] used data from a similar Chinese website called Weibo5. However to our knowledge, these datasets have not been made public, and furthermore, the post-reply format of such microblogging services is perhaps not as representative of natural dialogue between humans as the continuous stream of messages in a chat room.
2 The dataset is now available: https://github.com/rkadlec/ubuntu-ranking-dataset-creator. This version makes some adjustments and fixes some bugs from the first version.
In fact, Ritter et al. estimate that only 37% of posts on Twitter are "conversational in nature", and 69% of their collected data contained exchanges of only length 2 [21]. We hypothesize that chat-room style messaging is more closely correlated to human-to-human dialogue than micro-blogging websites, or forum-based sites such as Reddit.
Part of the Ubuntu chat logs has previously been aggregated into a dataset, called the Ubuntu Chat Corpus [30]. However that resource preserves the multi-participant structure and thus is less amenable to the investigation of more traditional two-party conversations.
Also weakly related to our contribution is the problem of question-answer systems. Several datasets of question-answer pairs are available [3], however these interactions are much shorter than what we seek to study.
# 2.2 Learning Architectures
Most dialogue research has historically focused on structured slot-filling tasks [24]. Various approaches were proposed, yet few attempts leverage more recent developments in neural learning architectures. A notable exception is the work of Henderson et al. [11], which proposes an RNN structure, initialized with a denoising autoencoder, to tackle the DSTC 3 domain.
Work on unstructured dialogues, recently pioneered by Ritter et al. [22], proposed a response generation model for Twitter data based on ideas from Statistical Machine Translation. This is shown to give superior performance to previous information retrieval (e.g. nearest neighbour) approaches [14]. This idea was further developed by Sordoni et al. [28] to exploit information from a longer context, using a structure similar to the Recurrent Neural Network Encoder-Decoder model [4]. This achieves rather poor performance on A-B-A Twitter triples when measured by the BLEU score (a standard for machine translation), yet performs comparatively better than the model of Ritter et al. [22]. Their results are also verified with a human-subject study. A similar encoder-decoder framework is presented in [25]. This model uses one RNN to transform the input to some vector representation, and another RNN to "decode" this representation to a response by generating one word at a time. This model is also evaluated in a human-subject study, although much smaller in size than in [28]. Overall, these models
Switchboard [8]: human-human spoken, various tasks; 2,400 dialogues, 3,000,000 words; telephone conversations on pre-specified topics.
DSTC1 [32]: human-computer spoken, state tracking; 15,000 dialogues, 210,000 utterances; bus ride information system.
DSTC2 [10]: human-computer spoken, state tracking; 3,000 dialogues, 24,000 utterances; restaurant booking system.
DSTC3 [9]: human-computer spoken, state tracking; 2,265 dialogues, 15,000 utterances; tourist information system.
DSTC4 [13]: human-human spoken, state tracking; 35 dialogues; 21 hours of tourist info exchange over Skype.
Twitter Corpus [21]: human-human micro-blog, next utterance generation; 1,300,000 dialogues, 3,000,000 utterances; post/replies extracted from Twitter.
Twitter Triple Corpus [28]: human-human micro-blog, next utterance generation; 29,000,000 dialogues, 87,000,000 utterances; A-B-A triples from Twitter replies.
Sina Weibo [25]: human-human micro-blog, next utterance generation; 4,435,959 dialogues, 8,871,918 utterances; post/reply pairs extracted from Weibo.
Ubuntu Dialogue Corpus: human-human chat, next utterance classification; 930,000 dialogues, 7,100,000 utterances, 100,000,000 words; extracted from Ubuntu Chat Logs.

Table 1: A selection of structured and unstructured large-scale datasets applicable to dialogue systems. Faded datasets are not publicly available. The last entry is our contribution.
highlight the potential of neural learning architectures for interactive systems, yet so far they have been limited to very short conversations.
# 3 The Ubuntu Dialogue Corpus
We seek a large dataset for research in dialogue systems with the following properties:
⢠Two-way (or dyadic) conversation, as op- posed to multi-participant chat, preferably human-human.
⢠Large number of conversations; 105 â 106 is typical of datasets used for neural-network learning in other areas of AI.
⢠Many conversations with several turns (more than 3).
⢠Task-speciï¬c domain, as opposed to chatbot systems.
All of these requirements are satisfied by the Ubuntu Dialogue Corpus presented in this paper.
# 3.1 Ubuntu Chat Logs
The Ubuntu Chat Logs refer to a collection of logs from Ubuntu-related chat rooms on the Freenode Internet Relay Chat (IRC) network. This protocol allows for real-time chat between a large number of participants. Each chat room, or channel, has a particular topic, and every channel participant can see all the messages posted in a given channel. Many of these channels are used for obtaining technical support with various Ubuntu issues.
As the contents of each channel are moderated, most interactions follow a similar pattern. A new user joins the channel, and asks a general question about a problem they are having with Ubuntu. Then, another more experienced user replies with a potential solution, after first addressing the 'username' of the first user. This is called a name mention [29], and is done to avoid confusion in the channel: at any given time during the day, there can be between 1 and 20 simultaneous conversations happening in some channels. In the most popular channels, there is almost never a time when only one conversation is occurring; this renders it particularly problematic to extract dyadic dialogues. A conversation between a pair of users generally stops when the problem has been solved, though some users occasionally continue to discuss a topic not related to Ubuntu.

Despite the nature of the chat room being a constant stream of messages from multiple users, it is through the fairly rigid structure in the messages that we can extract the dialogues between users. Figure 4 shows an example chat room conversation from the #ubuntu channel as well as the extracted dialogues, which illustrates how users usually state the username of the intended message recipient before writing their reply (we refer to all replies and initial questions as 'utterances'). For example, it is clear that users 'Taru' and 'kuja' are engaged in a dialogue, as are users 'Old' and 'bur[n]er', while user '_pm' is asking an initial question, and 'LiveCD' is perhaps elaborating on a previous comment.

# 3.2 Dataset Creation
In order to create the Ubuntu Dialogue Corpus, first a method had to be devised to extract dyadic dialogues from the chat room multi-party conversations. The first step was to separate every message into 4-tuples of (time, sender, recipient, utterance). Given these 4-tuples, it is straightforward to
group all tuples where there is a matching sender and recipient. Although it is easy to separate the time and the sender from the rest, finding the intended recipient of the message is not always trivial.
# 3.2.1 Recipient Identification

While in most cases the recipient is the first word of the utterance, it is sometimes located at the end, or not at all in the case of initial questions. Furthermore, some users choose names corresponding to common English words, such as 'the' or 'stop', which could lead to many false positives. In order to solve this issue, we create a dictionary of usernames from the current and previous days, and compare the first word of each utterance to its entries. If a match is found, and the word does not correspond to a very common English word6, it is assumed that this user was the intended recipient of the message. If no matches are found, it is assumed that the message was an initial question, and the recipient value is left empty.
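As a rough illustration of this recipient heuristic (ours, not the extraction code used for the corpus; the username dictionary and common-word list are assumed inputs):

```python
def find_recipient(utterance, known_usernames, common_words):
    """Guess the intended recipient of a chat message.

    Returns a username when the first word matches a recently seen
    username and is not a very common English word; otherwise None,
    in which case the message is treated as an initial question.
    """
    words = utterance.split()
    if not words:
        return None
    first = words[0].rstrip(":,")    # strip typical address punctuation
    if first in known_usernames and first.lower() not in common_words:
        return first
    return None
```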
# 3.2.2 Utterance Creation

The dialogue extraction algorithm works backwards from the first response to find the initial question that was replied to, within a time frame of 3 minutes. A first response is identified by the presence of a recipient name (someone from the recent conversation history). The initial question is identified to be the most recent utterance by the recipient identified in the first response.
All utterances that do not qualify as a first response or an initial question are discarded; initial questions that do not generate any response are also discarded. We additionally discard conversations longer than five utterances where one user says more than 80% of the utterances, as these are typically not representative of real chat dialogues. Finally, we consider only extracted dialogues that consist of 3 turns or more to encourage the modeling of longer-term dependencies.
To alleviate the problem of "holes" in the dialogue, where one user does not address the other explicitly, as in Figure 5, we check whether each user talks to someone else for the duration of their conversation. If not, all non-addressed utterances are added to the dialogue. An example conversation along with the extracted dialogues is shown in Figure 5. Note that we also concatenate all consecutive utterances from a given user.
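Putting the pieces together, the core pairing step can be sketched as below. This is a simplified illustration under our own naming; it only shows the backward search for the initial question within the 3-minute window and omits the filtering, hole-patching, and concatenation rules described above.

```python
# Pair each first response with the most recent utterance by the addressed
# user. `messages` is a time-ordered list of (time, sender, recipient,
# utterance) 4-tuples with datetime timestamps.
from datetime import timedelta

WINDOW = timedelta(minutes=3)

def extract_dialogues(messages):
    dialogues = []
    for i, (t, sender, recipient, utt) in enumerate(messages):
        if recipient is None:
            continue  # initial questions are found when a response names them
        # Walk backwards for the most recent utterance by the addressed user.
        for j in range(i - 1, -1, -1):
            t0, s0, _, utt0 = messages[j]
            if s0 == recipient and t - t0 <= WINDOW:
                dialogues.append([(s0, utt0), (sender, utt)])
                break
    return dialogues
```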
6We use the GNU Aspell spell checking dictionary.
Figure 1: Plot of number of conversations with a given number of turns. Both axes use a log scale.
| Property | Value |
|---|---|
| # dialogues (human-human) | 930,000 |
| # utterances (in total) | 7,100,000 |
| # words (in total) | 100,000,000 |
| Min. # turns per dialogue | 3 |
| Avg. # turns per dialogue | 7.71 |
| Avg. # words per utterance | 10.34 |
| Median conversation length (min) | 6 |
Table 2: Properties of Ubuntu Dialogue Corpus.
We do not apply any further pre-processing (e.g. tokenization, stemming) to the data as released in the Ubuntu Dialogue Corpus. However, the use of pre-processing is standard for most NLP systems, and was also used in our analysis (see Section 4).
# 3.2.3 Special Cases and Limitations
It is often the case that a user will post an initial question, and multiple people will respond to it with different answers. In this instance, each conversation between the first user and the user who replied is treated as a separate dialogue. This has the unfortunate side-effect of having the initial question appear multiple times in several dialogues. However, the number of such cases is sufficiently small compared to the size of the dataset. Another issue to note is that the utterance posting time is not considered for segmenting conversations between two users. Even if two users have a conversation that spans multiple hours, or even days, this is treated as a single dialogue. However, such dialogues are rare. We include the posting time in the corpus so that other researchers may filter as desired.
# 3.3 Dataset Statistics
Table 2 summarizes properties of the Ubuntu Dialogue Corpus. One of the most important features of the Ubuntu chat logs is its size. This is crucial for research into building dialogue managers based on neural architectures. Another important characteristic is the number of turns in these dialogues. The distribution of the number of turns is shown in Figure 1. It can be seen that the number of dialogues and turns per dialogue follow an approximate power law relationship.
# 3.4 Test Set Generation
We set aside 2% of the Ubuntu Dialogue Corpus conversations (randomly selected) to form a test set that can be used for evaluation of response selection algorithms. Compared to the rest of the corpus, this test set has been further processed to extract a pair of (context, response, flag) triples from each dialogue. The flag is a Boolean variable indicating whether or not the response was the actual next utterance after the given context. The response is a target (output) utterance which we aim to correctly identify. The context consists of the sequence of utterances appearing in dialogue prior to the response. We create a pair of triples, where one triple contains the correct response (i.e. the actual next utterance in the dialogue), and the other triple contains a false response, sampled randomly from elsewhere within the test set. The flag is set to 1 in the first case and to 0 in the second case. An example pair is shown in Table 3. To make the task harder, we can move from pairs of responses (one correct, one incorrect) to a larger set of wrong responses (all with flag=0). In our experiments below, we consider both the case of 1 wrong response and 9 wrong responses.
| Context | Response | Flag |
|---|---|---|
| well, can I move the drives? __EOS__ ah not like that | I guess I could just get an enclosure and copy via USB | 1 |
| well, can I move the drives? __EOS__ ah not like that | you can use "ps ax" and "kill (PID #)" | 0 |
Table 3: Test set example with (context, reply, flag) format. The "__EOS__" tag is used to denote the end of an utterance within the context.
Since we want to learn to predict all parts of a conversation, as opposed to only the closing statement, we consider various portions of context for the conversations in the test set. The context size is determined stochastically using a simple formula:
c = min(t − 1, n − 1), where n = 10C/η + 2, η ∼ Unif(C/2, 10C).
Here, C denotes the maximum desired context size, which we set to C = 20. The last term is the desired minimum context size, which we set to be 2. Parameter t is the actual length of that dialogue (thus the constraint that c ≤ t − 1), and n is a random number corresponding to the randomly sampled context length, which is inversely proportional to η.
In practice, this leads to short test dialogues having short contexts, while longer dialogues are often broken into short or medium-length segments, with the occasional long context of 10 or more turns.
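The sampling procedure is easy to reproduce. The sketch below assumes the reading n = 10C/η + 2 of the formula above; the function name and rounding are our own.

```python
import random

def sample_context_size(t, C=20):
    """t: actual dialogue length (in utterances); returns context size c <= t - 1."""
    eta = random.uniform(C / 2, 10 * C)
    n = int(10 * C / eta) + 2  # yields a minimum context size of 2 when eta is large
    return min(t - 1, n - 1)
```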
# 3.5 Evaluation Metric
We consider the task of best response selection. This can be achieved by processing the data as described in Section 3.4, without requiring any human labels. This classification task is an adaptation of the recall and precision metrics previously applied to dialogue datasets [24].
A family of metrics often used in language tasks is Recall@k (denoted R@1, R@2, R@5 below). Here the agent is asked to select the k most likely responses, and it is correct if the true response is among these k candidates. Only the R@1 metric is relevant in the case of binary classification (as in the Table 3 example).
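Concretely, Recall@k can be computed as below; `score` stands for any context-response scoring function, and the implementation here is ours, for illustration only.

```python
def recall_at_k(score, context, candidates, true_index, k):
    """candidates: list of responses, one of which (at true_index) is correct."""
    ranked = sorted(range(len(candidates)),
                    key=lambda i: score(context, candidates[i]),
                    reverse=True)
    return true_index in ranked[:k]
```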
Although a language model that performs well on response classification is not a gauge of good performance on next utterance generation, we hypothesize that improvements on a model with regards to the classification task will eventually lead to improvements for the generation task. See Section 6 for further discussion of this point.
# 4 Learning Architectures for Unstructured Dialogues
To provide further evidence of the value of our dataset for research into neural architectures for dialogue managers, we provide performance benchmarks for two neural learning algorithms, as well as one naive baseline. The approaches considered are: TF-IDF, Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM). Prior to applying each method, we perform standard pre-processing of the data using the NLTK7 library and Twitter tokenizer8 to parse each utterance.
7www.nltk.org/ 8http://www.ark.cs.cmu.edu/TweetNLP/
We use generic tags for various word categories, such as names, locations, organizations, URLs, and system paths.
To train the RNN and LSTM architectures, we process the full training Ubuntu Dialogue Corpus into the same format as the test set described in Section 3.4, extracting (context, response, flag) triples from dialogues. For the training set, we do not sample the context length, but instead consider each utterance (starting at the 3rd one) as a potential response, with the previous utterances as its context. So a dialogue of length 10 yields 8 training examples. Since these are overlapping, they are clearly not independent, but we consider this a minor issue given the size of the dataset (we further alleviate the issue by shuffling the training examples). Negative responses are selected at random from the rest of the training data.
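The following sketch illustrates this construction; the names are ours, and the negative sampling is simplified to uniform sampling over a pool of utterances.

```python
import random

def make_triples(dialogue, all_utterances):
    """dialogue: list of utterances; yields (context, response, flag) triples.
    A dialogue of length 10 yields 8 positive (and 8 negative) examples."""
    triples = []
    for i in range(2, len(dialogue)):
        context = " __EOS__ ".join(dialogue[:i])
        triples.append((context, dialogue[i], 1))                     # true response
        triples.append((context, random.choice(all_utterances), 0))  # random negative
    return triples
```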
# 4.1 TF-IDF
Term frequency-inverse document frequency is a statistic that intends to capture how important a given word is to some document, which in our case is the context [20]. It is a technique often used in document classification and information retrieval. The "term-frequency" term is simply a count of the number of times a word appears in a given context, while the "inverse document frequency" term puts a penalty on how often the word appears elsewhere in the corpus. The final score is calculated as the product of these two terms, and has the form:
tfidf(w, d, D) = f(w, d) × log( N / |{d ∈ D : w ∈ d}| ),
where f(w, d) indicates the number of times word w appeared in context d, N is the total number of dialogues, and the denominator represents the number of dialogues in which the word w appears. For classification, the TF-IDF vectors are first calculated for the context and each of the candidate responses. Given a set of candidate response vectors, the one with the highest cosine similarity to the context vector is selected as the output. For Recall@k, the top k responses are returned.
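A compact version of this baseline can be written with scikit-learn (our choice for illustration; the paper does not specify an implementation):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_responses(context, candidates):
    """Return candidate indices sorted by cosine similarity to the context."""
    vec = TfidfVectorizer()
    X = vec.fit_transform([context] + candidates)
    sims = cosine_similarity(X[0], X[1:]).ravel()
    return sims.argsort()[::-1]  # top-k order, usable for Recall@k
```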
# 4.2 RNN
Recurrent neural networks are a variant of neural networks that allows for time-delayed directed cycles between units [17]. This leads to the formation of an internal state of the network, h_t, which allows it to model time-dependent data. The internal state is updated at each time step as some
Figure 2: Diagram of our model. The RNNs have tied weights. c, r are the last hidden states from the RNNs. c_i, r_i are word vectors for the context and response, i < t. We consider contexts up to a maximum of t = 160.
function of the observed variables x_t, and the hidden state at the previous time step h_{t−1}. W_x and W_h are matrices associated with the input and hidden state.
h_t = f(W_h h_{t−1} + W_x x_t).
A diagram of an RNN can be seen in Figure 2. RNNs have been the primary building block of many current neural language models [22, 28], which use RNNs for an encoder and decoder. The first RNN is used to encode the given context, and the second RNN generates a response by using beam-search, where its initial hidden state is biased using the final hidden state from the first RNN. In our work, we are concerned with classification of responses, instead of generation. We build upon the approach in [2], which has also been recently applied to the problem of question answering [33].
We utilize a siamese network consisting of two RNNs with tied weights to produce the embeddings for the context and response. Given some input context and response, we compute their embeddings, c, r ∈ R^d respectively, by feeding the word embeddings one at a time into its respective RNN. Word embeddings are initialized using the pre-trained vectors (Common Crawl, 840B tokens from [19]), and fine-tuned during training. The hidden state of the RNN is updated at each step, and the final hidden state represents a summary of the input utterance. Using the final hidden states from both RNNs, we then calculate the probability that this is a valid pair:
p(flag = 1 | c, r, M) = σ(c^T M r + b),
where the bias b and the matrix M ∈ R^{d×d} are learned model parameters. This can be thought of as a generative approach; given some input response, we generate a context with the product c′ = Mr, and measure the similarity to the actual context using the dot product. This is converted to a probability with the sigmoid function. The model is trained by minimizing the cross entropy of all labeled (context, response) pairs [33]:
L = − Σ_n log p(flag_n | c_n, r_n, M) + (λ/2) ||θ||²_F,
where ||θ||_F is the Frobenius norm of θ = {M, b}. In our experiments, we use λ = 0 for computational simplicity.
For training, we used a 1:1 ratio between true responses (flag = 1) and negative responses (flag = 0) drawn randomly from elsewhere in the training set. The RNN architecture is set to 1 hidden layer with 50 neurons. The W_h matrix is initialized using orthogonal weights [23], while W_x is initialized using a uniform distribution with values between -0.01 and 0.01. We use Adam as our optimizer [15], with gradients clipped to 10. We found that weight initialization as well as the choice of optimizer were critical for training the RNNs.
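For concreteness, a minimal PyTorch sketch of this scoring model is shown below. It mirrors the description (tied-weight encoders, 50 hidden units, σ(c^T M r + b)), but the class and variable names are ours, and training details (Adam, gradient clipping, GloVe initialization) are omitted.

```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # init from GloVe in practice
        self.rnn = nn.RNN(emb_dim, hidden, batch_first=True)  # shared by c and r
        self.M = nn.Parameter(torch.eye(hidden))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, context_ids, response_ids):
        _, c = self.rnn(self.emb(context_ids))  # final hidden state: (1, B, H)
        _, r = self.rnn(self.emb(response_ids))
        c, r = c.squeeze(0), r.squeeze(0)
        logits = (c @ self.M * r).sum(dim=1) + self.b  # c^T M r, per example
        return torch.sigmoid(logits)
```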
# 4.3 LSTM
In addition to the RNN model, we consider the same architecture but change the hidden units to long short-term memory (LSTM) units [12]. LSTMs were introduced in order to model longer-term dependencies. This is accomplished using a series of gates that determine whether a new input should be remembered, forgotten (and the old value retained), or used as output. The error signal can now be fed back indefinitely into the gates of the LSTM unit. This helps overcome the vanishing and exploding gradient problems in standard RNNs, where the error gradients would otherwise decrease or increase at an exponential rate. In training, we used 1 hidden layer with 200 neurons. The hyper-parameter configuration (including number of neurons) was optimized independently for RNNs and LSTMs using a validation set extracted from the training data.
# 5 Empirical Results
The results for the TF-IDF, RNN, and LSTM models are shown in Table 4. The models were evaluated using both 1 (1 in 2) and 9 (1 in 10) false examples. Of course, the Recall@2 and Recall@5 metrics are not relevant in the binary classification case9.
| Method | 1 in 2 R@1 | 1 in 10 R@1 | 1 in 10 R@2 | 1 in 10 R@5 |
|---|---|---|---|---|
| TF-IDF | 65.9% | 41.0% | 54.5% | 70.8% |
| RNN | 76.8% | 40.3% | 54.7% | 81.9% |
| LSTM | 87.8% | 60.4% | 74.5% | 92.6% |
Table 4: Results for the three algorithms using various recall measures for binary (1 in 2) and 1 in 10 next utterance classification (%).
We observe that the LSTM outperforms both the RNN and TF-IDF on all evaluation metrics. It is interesting to note that TF-IDF actually outperforms the RNN on the Recall@1 case for the 1 in 10 classification. This is most likely due to the limited ability of the RNN to take into account long contexts, which can be overcome by using the LSTM. An example output of the LSTM where the response is correctly classified is shown in Table 5. We also show, in Figure 3, the increase in performance of the LSTM as the amount of data used for training increases. This confirms the importance of having a large training set.
Context ""any apache hax around ? i just deleted all of __path__ - which package provides it ?", "reconï¬guring apache do nât solve it ?" Ranked Responses 1. "does nât seem to, no" 2. "you can log in but not transfer ï¬les ?" Flag 1 0
Table 5: Example showing the ranked responses from the LSTM. Each utterance is shown after pre-processing steps.
# 6 Discussion
This paper presents the Ubuntu Dialogue Corpus, a large dataset for research in unstructured multi-turn dialogue systems. We describe the construction of the dataset and its properties. The availability of a dataset of this size opens up several interesting possibilities for research into dialogue systems based on rich neural-network architectures. We present preliminary results demonstrating use of this dataset to train an RNN and an LSTM for the task of selecting the next best response in a
9Note that these results are on the original dataset. Results on the new dataset should not be compared to the old dataset; baselines on the new dataset will be released shortly.
Figure 3: The LSTM (with 200 hidden units), showing Recall@1 for the 1 in 10 classification, with increasing dataset sizes.
conversation; we obtain significantly better results with the LSTM architecture. There are several interesting directions for future work.
# 6.1 Conversation Disentanglement
Our approach to conversation disentanglement consists of a small set of rules. More sophisticated techniques have been proposed, such as training a maximum-entropy classifier to cluster utterances into separate dialogues [6]. However, since we are not trying to replicate the exact conversation between two users, but only to retrieve plausible natural dialogues, the heuristic method presented in this paper may be sufficient. This seems supported through qualitative examination of the data, but could be the subject of more formal evaluation.
# 6.2 Altering Test Set Difï¬culty
One of the interesting properties of the response selection task is the ability to alter the task difficulty in a controlled manner. We demonstrated this by moving from 1 to 9 false responses, and by varying the Recall@k parameter. In the future, instead of choosing false responses randomly, we will consider selecting false responses that are similar to the actual response (e.g. as measured by cosine similarity). A dialogue model that performs well on this more difficult task should also manage to capture a more fine-grained semantic meaning of sentences, as compared to a model, such as TF-IDF, that naively picks replies with the most words in common with the context.
# 6.3 State Tracking and Utterance Generation
The work described here focuses on the task of response selection. This can be seen as an intermediate step between slot filling and utterance generation. In slot filling, the set of candidate outputs (states) is identified a priori through knowledge engineering, and is typically smaller than the set of responses considered in our work. When the set of candidate responses is close to the size of the dataset (e.g. all utterances ever recorded), then we are quite close to the response generation case. There are several reasons not to proceed directly to response generation. First, it is likely that current algorithms are not yet able to generate good results for this task, and it is preferable to tackle metrics for which we can make progress. Second, we do not yet have a suitable metric for evaluating performance in the response generation case. One option is to use the BLEU [18] or METEOR [16] scores from machine translation. However, using BLEU to evaluate dialogue systems has been shown to give extremely low scores [28], due to the large space of potential sensible responses [7]. Further, since the BLEU score is calculated using N-grams [18], it would provide a very low score for reasonable responses that do not have any words in common with the ground-truth next utterance.
Alternatively, one could measure the difference between the generated utterance and the actual sentence by comparing their representations in some embedding (or semantic) space. However, different models inevitably use different embeddings, necessitating a standardized embedding for evaluation purposes. Such a standardized embedding has yet to be created.
Another possibility is to use human subjects to score automatically generated responses, but time and expense make this a highly impractical option. In summary, while it is possible that current language models have outgrown the use of slot filling as a metric, we are currently unable to measure their ability in next utterance generation in a standardized, meaningful and inexpensive way. This motivates our choice of response selection as a useful metric for the time being.
# Acknowledgments
The authors gratefully acknowledge financial support for this work by the Samsung Advanced Institute of Technology (SAIT) and the Natural Sciences and Engineering Research Council of Canada (NSERC). We would like to thank Laurent Charlin for his input into this paper, as well as Gabriel Forgues and Eric Crawford for interesting discussions.
# References
[1] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.
[2] A. Bordes, J. Weston, and N. Usunier. Open question answering with weakly supervised embedding models. In MLKDD, pages 165-180. Springer, 2014.
[3] J. Boyd-Graber, B. Satinoff, H. He, and H. Daume. Besting the quiz master: Crowdsourcing incremental classification games. In EMNLP, 2012.
[4] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[5] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[6] M. Elsner and E. Charniak. You talking to me? A corpus and algorithm for conversation disentanglement. In ACL, pages 834-842, 2008.
[7] M. Galley, C. Brockett, A. Sordoni, Y. Ji, M. Auli, C. Quirk, M. Mitchell, J. Gao, and B. Dolan. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. arXiv preprint arXiv:1506.06863, 2015.
[8] J.J. Godfrey, E.C. Holliman, and J. McDaniel. Switchboard: Telephone speech corpus for research and development. In ICASSP, 1992.
[9] M. Henderson, B. Thomson, and J. Williams. Dialog state tracking challenge 2 & 3, 2014.
[10] M. Henderson, B. Thomson, and J. Williams. The second dialog state tracking challenge. In SIGDIAL, page 263, 2014.
[11] M. Henderson, B. Thomson, and S. Young. Word-based dialog state tracking with recurrent neural networks. In SIGDIAL, page 292, 2014.
[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[13] Dialog state tracking challenge 4.
[14] S. Jafarpour, C. Burges, and A. Ritter. Filter, rank, and transfer the knowledge: Learning to chat. Advances in Ranking, 10, 2010.
[15] D. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[16] A. Lavie and M.J. Denkowski. The METEOR metric for automatic evaluation of machine translation. Machine Translation, 23(2-3):105-115, 2009.
[17] L.R. Medsker and L.C. Jain. Recurrent neural networks: Design and applications. 2001.
[18] K. Papineni, S. Roukos, T. Ward, and W.J. Zhu. BLEU: A method for automatic evaluation of machine translation. In ACL, 2002.
[19] J. Pennington, R. Socher, and C.D. Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.
[20] J. Ramos. Using TF-IDF to determine word relevance in document queries. In ICML, 2003.
[21] A. Ritter, C. Cherry, and W. Dolan. Unsupervised modeling of Twitter conversations. 2010.
[22] A. Ritter, C. Cherry, and W. Dolan. Data-driven response generation in social media. In EMNLP, pages 583-593, 2011.
[23] A.M. Saxe, J.L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
[24] J. Schatzmann, K. Georgila, and S. Young. Quantitative evaluation of user simulation techniques for spoken dialogue systems. In SIGDIAL, 2005.
[25] L. Shang, Z. Lu, and H. Li. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364, 2015.
[26] B.A. Shawar and E. Atwell. Chatbots: Are they really useful? In LDV Forum, volume 22, pages 29-49, 2007.
[27] S. Singh, D. Litman, M. Kearns, and M. Walker. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, 16:105-133, 2002.
[28] A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.Y. Nie, J. Gao, and W. Dolan. A neural network approach to context-sensitive generation of conversational responses. 2015.
[29] D.C. Uthus and D.W. Aha. Extending word highlighting in multiparticipant chat. Technical report, DTIC Document, 2013.
[30] D.C. Uthus and D.W. Aha. The Ubuntu chat corpus for multiparticipant chat analysis. In AAAI Spring Symposium on Analyzing Microtext, pages 99-102, 2013.
[31] H. Wang, Z. Lu, H. Li, and E. Chen. A dataset for research on short-text conversations. In EMNLP, 2013.
[32] J. Williams, A. Raux, D. Ramachandran, and A. Black. The dialog state tracking challenge. In SIGDIAL, pages 404-413, 2013.
[33] L. Yu, K.M. Hermann, P. Blunsom, and S. Pulman. Deep learning for answer sentence selection. arXiv preprint arXiv:1412.1632, 2014.
[34] M.D. Zeiler. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
# Appendix A: Dialogue excerpts
Time   User      Utterance
03:44  Old       I dont run graphical ubuntu, I run ubuntu server.
03:45  kuja      Taru: Haha sucker.
03:45  Taru      Kuja: ?
03:45  bur[n]er  Old: you can use "ps ax" and "kill (PID#)"
03:45  kuja      Taru: Anyways, you made the changes right?
03:45  Taru      Kuja: Yes.
03:45  LiveCD    or killall speedlink
03:45  kuja      Taru: Then from the terminal type: sudo apt-get update
03:46  _pm       if i install the beta version, how can i update it when the final version comes out?
03:46  Taru      Kuja: I did.

Sender    Recipient  Utterance
Old                  I dont run graphical ubuntu, I run ubuntu server.
bur[n]er  Old        you can use "ps ax" and "kill (PID#)"

kuja      Taru       Haha sucker.
Taru      Kuja       ?
kuja      Taru       Anyways, you made the changes right?
Taru      Kuja       Yes.
kuja      Taru       Then from the terminal type: sudo apt-get update
Taru      Kuja       I did.
Figure 4: Example chat room conversation from the #ubuntu channel of the Ubuntu Chat Logs (top), with the disentangled conversations for the Ubuntu Dialogue Corpus (bottom).
Time     User   Utterance
[12:21]  dell   well, can I move the drives?
[12:21]  cucho  dell: ah not like that
[12:21]  RC     dell: you can't move the drives
[12:21]  RC     dell: definitely not
[12:21]  dell   ok
[12:21]  dell   lol
[12:21]  RC     this is the problem with RAID :)
[12:21]  dell   RC haha yeah
[12:22]  dell   cucho, I guess I could just get an enclosure and copy via USB...
[12:22]  cucho  dell: i would advise you to get the disk

Sender  Recipient  Utterance
dell               well, can I move the drives?
cucho   dell       ah not like that
dell    cucho      I guess I could just get an enclosure and copy via USB
cucho   dell       i would advise you to get the disk

dell               well, can I move the drives?
RC      dell       you can't move the drives. definitely not. this is the problem with RAID :)
dell    RC         haha yeah

Figure 5: Example of before (top box) and after (bottom box) the algorithm adds and concatenates utterances in dialogue extraction. Since RC only addresses dell, all of his utterances are added; however this is not done for dell, as he addresses both RC and cucho.
"id": "1503.02364"
} |
1506.06724 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets. To align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie/book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622
# Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Yukun Zhu*,1 Ryan Kiros*,1 Richard Zemel1 Ruslan Salakhutdinov1 Raquel Urtasun1 Antonio Torralba2 Sanja Fidler1 1University of Toronto 2Massachusetts Institute of Technology
{yukun,rkiros,zemel,rsalakhu,urtasun,fidler}@cs.toronto.edu, torralba@csail.mit.edu
# Abstract
Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets. To align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie/book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for.
# 1. Introduction
Figure 1: Shot from the movie Gone Girl, along with the subtitle, aligned with the book. We reason about the visual and dialog (text) alignment between the movie and a book.
Books provide us with very rich, descriptive text that conveys both fine-grained visual details (how people or scenes look) as well as high-level semantics (what people think and feel, and how their states evolve through a story). This source of knowledge, however, does not come with associated visual information that would enable us to ground it with descriptions. Grounding descriptions in books to vision would allow us to get textual explanations or stories behind visual information rather than the simplistic captions available in current datasets. It can also provide us with an extremely large amount of data (with tens of thousands of books available online).
A truly intelligent machine needs to not only parse the surrounding 3D environment, but also understand why people take certain actions, what they will do next, what they could possibly be thinking, and even try to empathize with them. In this quest, language will play a crucial role in grounding visual information to high-level semantic concepts. Only a few words in a sentence may convey really rich semantic information. Language also represents a natural means of interaction between a naive user and our vision algorithms, which is particularly important for applications such as social robotics or assistive driving.
Combining images or videos with language has gotten significant attention in the past year, partly due to the creation of CoCo [18], Microsoft's large-scale captioned image dataset. The field has tackled a diverse set of tasks such as captioning [13, 11, 36, 35, 21], alignment [11, 15, 34], Q&A [20, 19], visual model learning from textual descriptions [8, 26], and semantic visual search with natural multi-sentence queries [17].
In this paper, we exploit the fact that many books have been turned into movies. Books and their movie releases share a lot of common knowledge, and they are also complementary in many ways. For instance, books provide detailed descriptions about the intentions and mental states of the characters, while movies are better at capturing visual aspects of the settings.
The first challenge we need to address, and the focus of this paper, is to align books with their movie releases in order to obtain rich descriptions for the visual content. We aim to align the two sources with two types of information: visual, where the goal is to link a movie shot to a book paragraph, and dialog, where we want to find correspondences between sentences in the movie's subtitle and sentences in the book. We formulate the problem of movie/book alignment as finding correspondences between shots in the movie as well as dialog sentences in the subtitles and sentences in the book (Fig. 1).
*Denotes equal contribution
We introduce a novel sentence similarity measure based on a neural sentence embedding trained on millions of sentences from a large corpus of books. On the visual side, we extend the neural image-sentence embeddings to the video domain and train the model on DVS descriptions of movie clips. Our approach combines different similarity measures and takes into account contextual information contained in the nearby shots and book sentences. Our final alignment model is formulated as an energy minimization problem that encourages the alignment to follow a similar timeline. To evaluate the book-movie alignment model we collected a dataset with 11 movie/book pairs annotated with 2,070 shot-to-sentence correspondences. We demonstrate good quantitative performance and show several qualitative examples that showcase the diversity of tasks our model can be used for.
The alignment model can have multiple applications. Imagine an app which allows the user to browse the book as the scenes unroll in the movie: perhaps its ending or acting are ambiguous, and one would like to query the book for answers. Vice-versa, while reading the book one might want to switch from text to video, particularly for the juicy scenes. We also show other applications of learning from movies and books such as book retrieval (finding the book that goes with a movie and finding other similar books), and captioning CoCo images with story-like descriptions.
# 2. Related Work
Most effort in the domain of vision and language has been devoted to the problem of image captioning. Older work made use of fixed visual representations and translated them into textual descriptions [6, 16]. Recently, several approaches based on RNNs emerged, generating captions via a learned joint image-text embedding [13, 11, 36, 21]. These approaches have also been extended to generate descriptions of short video clips [35]. In [24], the authors go beyond describing what is happening in an image and provide explanations about why something is happening.
For text-to-image alignment, [15, 7] find correspondences between nouns and pronouns in a caption and visual objects using several visual and textual potentials. Lin et al. [17] do so for videos. In [11], the authors use RNN embeddings to find the correspondences. [37] combines neural embeddings with soft attention in order to align the words to image regions.
Early work on movie-to-text alignment includes dynamic time warping for aligning movies to scripts with the help of subtitles [5, 4]. Sankar et al. [28] further developed a system which identified sets of visual and audio features to align movies and scripts without making use of the subtitles. Such alignment has been exploited to provide weak labels for person naming tasks [5, 30, 25].
Closest to our work is [34], which aligns plot synopses to shots in TV series for story-based content retrieval. This work adopts a similarity function between sentences in plot synopses and shots based on person identities and keywords in subtitles. Our work differs from theirs in several important aspects. First, we tackle a more challenging problem of movie/book alignment. Unlike plot synopses, which closely follow the storyline of movies, books are more verbose and might vary in the storyline from their movie release. Furthermore, we use learned neural embeddings to compute the similarities rather than hand-designed similarity functions. Parallel to our work, [33] aims to align scenes in movies to chapters in the book. However, their approach operates on a very coarse level (chapters), while ours does so on the sentence/paragraph level. Their dataset thus evaluates on 90 scene-chapter correspondences, while our dataset draws 2,070 shot-to-sentence alignments. Furthermore, the approaches are inherently different. [33] matches the presence of characters in a scene to those in a chapter, as well as uses hand-crafted similarity measures between sentences in the subtitles and dialogs in the books, similarly to [34].
Rohrbach et al. [27] recently released the Movie Description dataset, which contains clips from movies, each time-stamped with a sentence from DVS (Descriptive Video Service). The dataset contains clips from over 100 movies, and provides a great resource for captioning techniques. Our effort here is to align movies with books in order to obtain longer, richer and more high-level video descriptions.
We start by describing our new dataset, and then explain our proposed approach.
# 3. The MovieBook and BookCorpus Datasets
We collected two large datasets, one for movie/book alignment and one with a large number of books.
The MovieBook Dataset. Since no prior work or data exist on the problem of movie/book alignment, we collected a new dataset with 11 movies along with the books on which they were based. For each movie we also have a subtitle file, which we parse into a set of time-stamped sentences. Note that no speaker information is provided in the subtitles. We automatically parse each book into sentences, paragraphs (based on indentation in the book), and chapters (we assume a chapter title has indentation, starts on a new page, and does not end with an end symbol).
Our annotators had the movie and a book opened side by side. They were asked to iterate between browsing the book and watching a few shots/scenes of the movie, and trying to find correspondences between them. In particular, they marked the exact time (in seconds) of correspondence in the movie and the matching line number in the book file, indicating the beginning of the matched sentence. On the video side, we assume that the match spans across a shot (a video unit with smooth camera motion). If the match was longer in duration, the annotator also indicated the ending time of the match. Similarly for the book, if more sentences
| Title | # sent. | # words | # unique words | avg. # words/sent. | max # words/sent. | # paragraphs | # shots | # sent. in subtitles | # dialog align. | # visual align. |
|---|---|---|---|---|---|---|---|---|---|---|
| Gone Girl | 12,603 | 148,340 | 3,849 | 15 | 153 | 3,927 | 2,604 | 2,555 | 76 | 106 |
| Fight Club | 4,229 | 48,946 | 1,833 | 14 | 90 | 2,082 | 2,365 | 1,864 | 104 | 42 |
| No Country for Old Men | 8,050 | 69,824 | 1,704 | 10 | 68 | 3,189 | 1,348 | 889 | 223 | 47 |
| Harry Potter and the Sorcerers Stone | 6,458 | 78,596 | 2,363 | 15 | 227 | 2,925 | 2,647 | 1,227 | 164 | 73 |
| Shawshank Redemption | 2,562 | 40,140 | 1,360 | 18 | 115 | 637 | 1,252 | 1,879 | 44 | 12 |
| The Green Mile | 9,467 | 133,241 | 3,043 | 17 | 119 | 2,760 | 2,350 | 1,846 | 208 | 102 |
| American Psycho | 11,992 | 143,631 | 4,632 | 16 | 422 | 3,945 | 1,012 | 1,311 | 278 | 85 |
| One Flew Over the Cuckoo Nest | 7,103 | 112,978 | 2,949 | 19 | 192 | 2,236 | 1,671 | 1,553 | 64 | 25 |
| The Firm | 15,498 | 135,529 | 3,685 | 11 | 85 | 5,223 | 2,423 | 1,775 | 82 | 60 |
| Brokeback Mountain | 638 | 10,640 | 470 | 20 | 173 | 167 | 1,205 | 1,228 | 80 | 20 |
| The Road | 6,638 | 58,793 | 1,580 | 10 | 74 | 2,345 | 1,108 | 782 | 126 | 49 |
| All | 85,238 | 980,658 | 9,032 | 15 | 156 | 29,436 | 19,985 | 16,909 | 1,449 | 621 |

Table 1: Statistics for our MovieBook Dataset with ground-truth for alignment between books and their movie releases (columns 2-7 describe the book, columns 8-9 the movie, and the last two columns the annotation).
| Property | Value |
|---|---|
| # of books | 11,038 |
| # of sentences | 74,004,228 |
| # of words | 984,846,357 |
| # of unique words | 1,316,420 |
| mean # of words per sentence | 13 |
| median # of words per sentence | 11 |
Table 2: Summary statistics of our BookCorpus dataset. We use this corpus to train the sentence embedding model.
matched, the annotator indicated from which to which line a match occurred. Each alignment was also tagged, indicating whether it was a visual, dialogue, or an audio match. Note that even for dialogs, the movie and book versions are semantically similar but not exactly the same. Thus deciding on what defines a match or not is also somewhat subjective and may slightly vary across our annotators. Altogether, the annotators spent 90 hours labeling 11 movie/book pairs, locating 2,070 correspondences.
lished authors. We only included books that had more than 20K words in order to ï¬lter out perhaps noisier shorter sto- ries. The dataset has books in 16 different genres, e.g., Romance (2,865 books), Fantasy (1,479), Science ï¬ction (786), Teen (430), etc. Table 2 highlights the summary statistics of our book corpus.
# 4. Aligning Books and Movies
Table 1 presents our dataset, while Fig. 8 shows a few ground-truth alignments. One can see the complexity and diversity of the data: the number of sentences per book vary from 638 to 15,498, even though the movies are similar in duration. This indicates a huge diversity in descriptiveness across literature, and presents a challenge for matching. The sentences also vary in length, with the sentences in Broke- back Mountain being twice as long as those in The Road. The longest sentence in American Psycho has 422 words and spans over a page in the book.
Aligning movies with books is challenging even for hu- mans, mostly due to the scale of the data. Each movie is on average 2h long and has 1,800 shots, while a book has on average 7,750 sentences. Books also have different styles of writing, formatting, different and challenging language, slang (going vs goinâ, or even was vs âus), etc. As one can see from Table 1, ï¬nding visual matches turned out to be particularly challenging. This is because the visual descrip- tions in books can be either very short and hidden within longer paragraphs or even within a longer sentence, or very verbose â in which case they get obscured with the sur- rounding text â and are hard to spot. Of course, how close the movie follows the book is also up to the director, which can be seen through the number of alignments that our an- notators found across different movie/books.
Our approach aims to align a movie with a book by ex- ploiting visual information as well as dialogs. We take shots as video units and sentences from subtitles to represent di- alogs. Our goal is to match these to the sentences in the book. We propose several measures to compute similari- ties between pairs of sentences as well as shots and sen- tences. We use our novel deep neural embedding trained on our large corpus of books to predict similarities between sentences. Note that an extended version of the sentence embedding is described in detail in [14] showing how to deal with million-word vocabularies, and demonstrating its performance on a large variety of NLP benchmarks. For comparing shots with sentences we extend the neural em- bedding of images and text [13] to operate in the video do- main. We next develop a novel contextual alignment model that combines information from various similarity measures and a larger time-scale in order to make better local align- ment predictions. Finally, we propose a simple pairwise Conditional Random Field (CRF) that smooths the align- ments by encouraging them to follow a linear timeline, both in the video and book domain.
We ï¬rst explain our sentence, followed by our joint video to text embedding. We next propose our contextual model that combines similarities and discuss CRF in more detail.
# 4.1. Skip-Thought Vectors
The BookCorpus Dataset. In order to train our sentence similarity model we collected a corpus of 11,038 books from the web. These are free books written by yet unpub-
In order to score the similarity between two sentences, we exploit our architecture for learning unsupervised representations of text [14]. The model is loosely inspired by
Figure 2: Sentence neural embedding [14]. Given a tuple (s_{i−1}, s_i, s_{i+1}) of consecutive sentences in text, where s_i is the i-th sentence, we encode s_i and aim to reconstruct the previous sentence s_{i−1} and the following sentence s_{i+1}. Unattached arrows are connected to the encoder output. Colors depict which components share parameters. ⟨eos⟩ is the end-of-sentence token.
Query: he drove down the street off into the distance .
  1. he started the car , left the parking lot and merged onto the highway a few miles down the road .
  2. he shut the door and watched the taxi drive off .
  3. she watched the lights flicker through the trees as the men drove toward the road .
  4. he jogged down the stairs , through the small lobby , through the door and into the street .

Query: the most effective way to end the battle .
  1. a messy business to be sure , but necessary to achieve a fine and noble end .
  2. they saw their only goal as survival and logically planned a strategy to achieve it .
  3. there would be far fewer casualties and far less destruction .
  4. the outcome was the lisbon treaty .
Table 3: Qualitative results from the sentence embedding model. For each query sentence on the left, we retrieve the 4 nearest neighbor sentences (by inner product) chosen from books the model has not seen before.
the skip-gram [22] architecture for learning representations of words. In the word skip-gram model, a word w_i is chosen and must predict its surrounding context (e.g. w_{i+1} and w_{i−1} for a context window of size 1). Our model works in a similar way but at the sentence level. That is, given a sentence tuple (s_{i−1}, s_i, s_{i+1}) our model first encodes the sentence s_i into a fixed vector, then conditioned on this vector tries to reconstruct the sentences s_{i−1} and s_{i+1}, as shown in Fig. 2. The motivation for this architecture is inspired by the distributional hypothesis: sentences that have similar surrounding context are likely to be both semantically and syntactically similar. Thus, two sentences that have similar syntax and semantics are likely to be encoded to a similar vector. Once the model is trained, we can map any sentence through the encoder to obtain vector representations, then score their similarity through an inner product.
The learning signal of the model depends on having contiguous text, where sentences follow one another in sequence. A natural corpus for training our model is thus a large collection of books. Given the size and diversity of genres, our BookCorpus allows us to learn very general representations of text. For instance, Table 3 illustrates the nearest neighbours of query sentences, taken from held-out books that the model was not trained on. These qualitative results demonstrate that our intuition is correct, with resulting nearest neighbors corresponding largely to syntactically and semantically similar sentences. Note that the sentence embedding is general and can be applied to other domains not considered in this paper, which is explored in [14].
vanishing gradient problem, through the use of gates to con- trol the ï¬ow of information. The LSTM unit explicity em- ploys a cell that acts as a carousel with an identity weight. The ï¬ow of information through a cell is controlled by in- put, output and forget gates which control what goes into a cell, what leaves a cell and whether to reset the contents of the cell. The GRU does not use a cell but employs two gates: an update and a reset gate. In a GRU, the hidden state is a linear combination of the previous hidden state and the proposed hidden state, where the combination weights are controlled by the update gate. GRUs have been shown to perform just as well as LSTM on several sequence predic- tion tasks [3] while being simpler. Thus, we use GRU as the activation function for our encoder and decoder RNNs. are (siâ1, si, si+1), and let wt and let xt description into three parts: objective function. Encoder. Let w1 i denote words in sentence si with N the number of words in the sentence. The encoder pro- duces a hidden state ht i at each time step which forms the representation of the sequence w1 i , . . . , wt i. Thus, the hid- den state hN is the representation of the whole sentence. i The GRU produces the next hidden state as a linear combi- nation of the previous hidden state and the proposed state update (we drop subscript i):
h'=(1-z')oh 142â on! (1)
To construct an encoder, we use a recurrent neural net- work, inspired by the success of encoder-decoder models for neural machine translation [10, 2, 1, 31]. Two kinds of activation functions have recently gained traction: long short-term memory (LSTM) [9] and the gated recurrent unit (GRU) [3]. Both types of activation successfully solve the
where h̄^t is the proposed state update at time t, z^t is the update gate and ⊙ denotes a component-wise product. The update gate takes values between zero and one. In the extreme cases, if the update gate is the vector of ones, the previous hidden state is completely forgotten and h^t = h̄^t. Alternatively, if the update gate is the zero vector, then the
hidden state from the previous time step is simply copied over, that is h^t = h^{t−1}. The update gate is computed as
z^t = σ(W_z x^t + U_z h^{t−1})      (2)
where W_z and U_z are the update gate parameters. The proposed state update is given by
h̄^t = tanh(W x^t + U(r^t ⊙ h^{t−1}))      (3)
where r^t is the reset gate, which is computed as
r^t = σ(W_r x^t + U_r h^{t−1})      (4)
If the reset gate is the zero vector, then the proposed state update is computed only as a function of the current word. Thus, after iterating this equation sequence for each word, we obtain a sentence vector h^N_i = h_i. Decoder. The decoder computation is analogous to the encoder, except that the computation is conditioned on the sentence vector h_i. Two separate decoders are used, one for the previous sentence s_{i−1} and one for the next sentence s_{i+1}. These decoders use different parameters to compute their hidden states but both share the same vocabulary matrix V that takes a hidden state and computes a distribution over words. Thus, the decoders are analogous to an RNN language model but conditioned on the encoder sequence. Alternatively, in the context of image caption generation, the encoded sentence h_i plays a similar role as the image.
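As a sanity check, equations (1)-(4) can be transcribed directly into code; the sketch below (ours, in numpy, with square parameter matrices assumed) makes the gating explicit.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h_prev, W, U, Wz, Uz, Wr, Ur):
    z = sigmoid(Wz @ x + Uz @ h_prev)            # update gate, eq. (2)
    r = sigmoid(Wr @ x + Ur @ h_prev)            # reset gate, eq. (4)
    h_bar = np.tanh(W @ x + U @ (r * h_prev))    # proposed update, eq. (3)
    return (1 - z) * h_prev + z * h_bar          # new hidden state, eq. (1)
```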
We describe the decoder for the next sentence s_{i+1} (computation for s_{i−1} is identical). Let h^t_{i+1} denote the hidden state of the decoder at time t. The update and reset gates for the decoder are given as follows (we drop i + 1):

z^t = σ(W^d_z x^{t−1} + U^d_z h^{t−1} + C_z h_i)      (5)
r^t = σ(W^d_r x^{t−1} + U^d_r h^{t−1} + C_r h_i)      (6)

The hidden state h^t_{i+1} is then computed as:

h̄^t = tanh(W^d x^{t−1} + U^d(r^t ⊙ h^{t−1}) + C h_i)      (7)
h^t_{i+1} = (1 − z^t) ⊙ h^{t−1} + z^t ⊙ h̄^t      (8)

Given h^t_{i+1}, the probability of word w^t_{i+1} given the previous t − 1 words and the encoder vector is

P(w^t_{i+1} | w^{<t}_{i+1}, h_i) ∝ exp(v_{w^t_{i+1}} · h^t_{i+1})      (9)

where v_{w^t_{i+1}} denotes the row of the vocabulary matrix V corresponding to word w^t_{i+1}; an analogous computation is performed for the previous sentence s_{i−1}. Objective. Given (s_{i−1}, s_i, s_{i+1}), the objective optimized is the sum of log-probabilities for the next and previous sentences conditioned on the representation of the encoder:
∑_t log P(w^t_{i+1} | w^{<t}_{i+1}, h_i) + ∑_t log P(w^t_{i−1} | w^{<t}_{i−1}, h_i)      (10)

The total objective is the above summed over all such training tuples. The Adam algorithm [12] is used for optimization.
# 4.2. Visual-semantic embeddings of clips and DVS
The model above describes how to obtain a similarity score between two sentences, whose representations are learned from millions of sentences in books. We now discuss how to obtain similarities between shots and sentences. Our approach closely follows the image-sentence ranking model proposed by [13]. In their model, an LSTM is used for encoding a sentence into a fixed vector. A linear mapping is applied to image features from a convolutional network. A score is computed based on the inner product between the normalized sentence and image vectors. Correct image-sentence pairs are trained to have high scores, while incorrect pairs are assigned low scores.
In our case, we learn a visual-semantic embedding between movie clips and their DVS descriptions. DVS ("Descriptive Video Service") is a service that inserts audio descriptions of the movie between the dialogs in order to enable the visually impaired to follow the movie like anyone else. We used the movie description dataset of [27] for learning our embedding. This dataset has 94 movies and 54,000 described clips. We represent each movie clip as a vector corresponding to mean-pooled features across each frame in the clip. We used the GoogLeNet architecture [32] as well as hybrid-CNN [38] for extracting frame features. For DVS, we pre-processed the descriptions by removing names and replacing these with a someone token.
The LSTM architecture in this work is implemented using the following equations. As before, we represent the word embedding at time t of a sentence as x^t:
i^t = σ(W_{xi} x^t + W_{hi} m^{t−1} + W_{ci} c^{t−1})      (11)
f^t = σ(W_{xf} x^t + W_{hf} m^{t−1} + W_{cf} c^{t−1})      (12)
a^t = tanh(W_{cx} x^t + W_{cm} m^{t−1})      (13)
c^t = f^t ⊙ c^{t−1} + i^t ⊙ a^t      (14)
o^t = σ(W_{xo} x^t + W_{ho} m^{t−1} + W_{co} c^t)      (15)
m^t = o^t ⊙ tanh(c^t)      (16)
where σ denotes the sigmoid activation function and ⊙ indicates component-wise multiplication. The states (i^t, f^t, c^t, o^t, m^t) correspond to the input, forget, cell, output and memory vectors, respectively. If the sentence is of length N, then the vector m^N = m is the vector representation of the sentence.
Let q denote a movie clip vector, and let v = W_I q be the embedding of the movie clip. We define a scoring function s(m, v) = m · v, where m and v are first scaled to have unit norm (making s equivalent to cosine similarity). We then optimize the following pairwise ranking loss:
min_θ ∑_m ∑_k max{0, α − s(m, v) + s(m, v_k)} + ∑_v ∑_k max{0, α − s(v, m) + s(v, m_k)}      (17)
with m_k a contrastive (non-descriptive) sentence vector for a clip embedding v, and vice-versa with v_k. We train our model with stochastic gradient descent without momentum.
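A direct transcription of the loss in eq. (17) is given below; the margin value α is an assumption, as the text does not state it.

```python
import numpy as np

def ranking_loss(m, v, contrastive_v, contrastive_m, alpha=0.2):
    """m, v: matching sentence/clip embeddings (unit norm); alpha is assumed."""
    s = lambda a, b: float(a @ b)  # cosine similarity for unit-norm vectors
    loss = 0.0
    for vk in contrastive_v:       # contrastive clips for the sentence
        loss += max(0.0, alpha - s(m, v) + s(m, vk))
    for mk in contrastive_m:       # contrastive sentences for the clip
        loss += max(0.0, alpha - s(v, m) + s(v, mk))
    return loss
```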
# 4.3. Context aware similarity
We employ the clip-sentence embedding to compute similarities between each shot in the movie and each sentence in the book. For dialogs, we use several similarity measures, each capturing a different level of semantic similarity. We compute BLEU [23] between each subtitle and book sentence to identify nearly identical matches. Similarly to [34], we use a tf-idf measure to find near duplicates while weighing down the influence of the less frequent words. Finally, we use our sentence embedding learned from books to score pairs of sentences that are semantically similar but may have a very different wording (i.e., paraphrasing).
These similarity measures indicate the alignment between the two modalities. However, at the local, sentence level, alignment can be rather ambiguous. For example, despite being a rather dark book, Gone Girl contains 15 occurrences of the sentence "I love you". We exploit the fact that a match is not completely isolated but that the sentences (or shots) around it are also to some extent similar.
We design a context aware similarity measure that takes into account all individual similarity measures as well as a fixed context window in both the movie and the book domain, and predicts a new similarity score. We stack a set of M similarity measures into a tensor S(i, j, m), where i, j, and m are the indices of sentences in the subtitle, in the book, and individual similarity measures, respectively. In particular, we use M = 9 similarities: visual and sentence embedding, BLEU1-5, tf-idf, and a uniform prior. We want to predict a combined score score(i, j) = f(S(I, J, M)) at each location (i, j) based on all measurements in a fixed volume defined by I around i, J around j, and 1, . . . , M. Evaluating the function f(·) at each location (i, j) on a 3-D tensor S is very similar to applying a convolution using a kernel of appropriate size. This motivates us to formulate the function f(·) as a deep convolutional neural network (CNN). In this paper, we adopt a 3-layer CNN as illustrated in Figure 3. We adopt the ReLU non-linearity with dropout to regularize our model. We optimize the cross-entropy loss over the training set using the Adam algorithm.
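In other words, the similarity tensor is treated like a 9-channel image over (subtitle, book) positions. A sketch of such a 3-layer CNN in PyTorch is shown below; the channel widths and kernel sizes are illustrative, not the paper's exact configuration.

```python
import torch.nn as nn

context_cnn = nn.Sequential(
    nn.Conv2d(in_channels=9, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
    nn.Sigmoid(),  # combined score(i, j) in [0, 1] at every location
)
# Input: a (1, 9, num_subtitle_sentences, num_book_sentences) similarity tensor.
```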
# 4.4. Global Movie/Book Alignment
So far, each shot/sentence was matched independently. However, most shots in movies and passages in the books follow a similar timeline. We would like to incorporate this prior into our alignment. In [34], the authors use dynamic time warping by enforcing that the shots in the movie can only match forward in time (to plot synopses in their case). However, the storyline of the movie and book can have crossings in time (Fig. 8), and the alignment might contain
Figure 3: Our CNN for context-aware similarity computation. It has 3 conv. layers and a sigmoid layer on top.
giant leaps forwards or backwards. Therefore, we formulate the movie/book alignment problem as inference in a Conditional Random Field that encourages nearby shot/dialog alignments to be consistent. Each node y_i in our CRF represents an alignment of the shot in the movie with its corresponding subtitle sentence to a sentence in the book. Its state space is thus the set of all sentences in the book. The CRF energy of a configuration y is formulated as:
− log p(x, y; ω) = ∑_{i=1}^{K} ω_u φ_u(y_i) + ∑_{i=1}^{K} ∑_{j ∈ N(i)} ω_p ψ_p(y_i, y_j)
where K is the number of nodes (shots), and N(i) are the left and right neighbors of y_i. Here, φ_u(·) and ψ_p(·) are unary and pairwise potentials, respectively, and ω = (ω_u, ω_p). We directly use the output of the CNN from 4.3 as the unary potential φ_u(·). For the pairwise potential, we measure the time span d_s(y_i, y_j) between two neighbouring sentences in the subtitle and the distance d_b(y_i, y_j) of their state space in the book. One pairwise potential is defined as:
$$\phi_p(y_i, y_j) = \frac{\left(d_s(y_i, y_j) - d_b(y_i, y_j)\right)^2}{\left(d_s(y_i, y_j) - d_b(y_i, y_j)\right)^2 + \sigma^2} \qquad (18)$$
Here $\sigma^2$ is a robustness parameter to avoid punishing giant leaps too harshly. Both $d_s$ and $d_b$ are normalized to [0, 1]. In addition, we also employ another pairwise potential $\phi_q(y_i, y_j) = \frac{(d_b(y_i, y_j))^2}{(d_b(y_i, y_j))^2 + \sigma^2}$ to encourage state consistency between nearby nodes. This potential is helpful when there is a long silence (no dialog) in the movie.
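In code, the two pairwise potentials are one-liners. A sketch follows; the value of σ is a free parameter here, not one reported above.

```python
import numpy as np

def phi_p(ds, db, sigma=0.1):
    """Eq. (18): small energy when the normalized subtitle time span ds
    matches the normalized book distance db of neighbouring nodes."""
    gap = (ds - db) ** 2
    return gap / (gap + sigma ** 2)

def phi_q(db, sigma=0.1):
    """Second pairwise term: penalizes neighbouring nodes whose book states
    are far apart, encouraging consistency through long silences."""
    return db ** 2 / (db ** 2 + sigma ** 2)

# Both inputs are assumed normalized to [0, 1], as in the text.
print(phi_p(np.array([0.02]), np.array([0.03])), phi_q(np.array([0.5])))
```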
Inference. Our CRF is a chain, thus exact inference is possible using dynamic programming. We also prune states that are very far from the uniform alignment (over 1/3 of the length of the book) to further speed up computation.
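Since the CRF is a chain, exact MAP inference is the standard Viterbi recursion. A minimal sketch, assuming the pairwise energies have been precomputed into a single matrix (in the full model they depend on each pair's time spans) and omitting the state pruning:

```python
import numpy as np

def chain_map_inference(unary, pairwise):
    """Exact MAP inference on a chain CRF by dynamic programming.
    unary:    (K, S) array of unary energies (CNN outputs), one row per shot,
              one column per candidate book sentence.
    pairwise: (S, S) array, pairwise[p, c] = energy of moving from state p
              at node i-1 to state c at node i.
    Returns the minimum-energy state sequence."""
    K, S = unary.shape
    cost = unary[0].copy()
    back = np.zeros((K, S), dtype=int)
    for i in range(1, K):
        trans = cost[:, None] + pairwise      # (S, S): prev state x cur state
        back[i] = trans.argmin(axis=0)
        cost = trans.min(axis=0) + unary[i]
    y = [int(cost.argmin())]
    for i in range(K - 1, 0, -1):
        y.append(int(back[i, y[-1]]))
    return y[::-1]

# Toy example: 4 shots, 6 candidate book sentences.
rng = np.random.default_rng(0)
print(chain_map_inference(rng.random((4, 6)), rng.random((6, 6))))
```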
Learning. Since ground-truth is only available for a sparse set of shots, we regard the states of unobserved nodes as hidden variables and learn the CRF weights with [29].
# 5. Experimental Evaluation
We evaluate our model on our dataset of 11 movie/book pairs. We train the parameters in our model (CNN and CRF)
on Gone Girl, and test our performance on the remaining 10 movies. In terms of training speed, our video-text model "watches" 1,440 movies per day and our sentence model reads 870 books per day. We also show various qualitative results demonstrating the power of our approach. We provide more results in the Appendix of the paper.
# 5.1. Movie/Book Alignment
Evaluating the performance of movie/book alignment is an interesting problem on its own. This is because our ground-truth is far from exhaustive: around 200 correspondences were typically found between a movie and its book, and likely a number of them got missed. Thus, evaluating precision is rather tricky. We therefore focus our evaluation on recall, similar to existing work on retrieval. For each shot that has a GT correspondence in the book, we check whether our prediction is close to the annotated one. We evaluate recall at the paragraph level, i.e., we say that the GT paragraph was recalled if our match was at most 3 paragraphs away and the shot was at most 5 subtitle sentences away. As a noisier measure, we also compute recall and precision at multiple alignment thresholds and report AP (avg. prec.).

The results are presented in Table 4. Columns show different instantiations of our model: we show the leave-one-feature-out settings as well as the setting in which all features are used, compare how different depths of the context-aware CNN influence the performance, and compare to our full model (CRF) in the last column. We get the highest boost by adding more layers to the CNN: recall improves by 14%, and AP doubles. Generally, each feature helps performance. Our sentence embedding (BOOK) helps by 4%, while the noisier video-text embedding helps by 2% in recall. The CRF, which encourages temporal smoothness, generally helps (though not for all movies), bringing an additional 2%. We also show how a uniform timeline performs on its own: for each shot (measured in seconds) in the movie, we find the sentence at the same relative location (measured in lines) in the book. We add another baseline to evaluate the role of context in our model. Instead of using our CNN that considers contextual information, we build a linear SVM to combine the different similarity measures in a single node (shot); the final similarity is used as a unary potential in our CRF alignment model. The Table shows that our CNN contextual model outperforms the SVM baseline by 30% in recall and doubles the AP. We plot alignments for a few movies in Fig. 8.
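A sketch of the paragraph-level recall metric described above; the representation of matches as (subtitle index, paragraph index) pairs is a hypothetical data layout, not the paper's code.

```python
def paragraph_recall(pred, gt, max_par=3, max_sub=5):
    """pred, gt: lists of (subtitle_idx, paragraph_idx) matches.
    A GT correspondence is recalled if some prediction lies within
    max_sub subtitle sentences and max_par paragraphs of it."""
    recalled = 0
    for sub, par in gt:
        if any(abs(s - sub) <= max_sub and abs(p - par) <= max_par
               for s, p in pred):
            recalled += 1
    return recalled / len(gt)

# Toy check: one of the two GT matches is recovered, so recall is 0.5.
print(paragraph_recall(pred=[(10, 40), (200, 310)], gt=[(12, 42), (90, 150)]))
```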
Running Times. We show the typical running time of each component in our model in Table 5. For each movie-book pair, calculating the BLEU score takes most of the time. Note that BLEU does not contribute significantly to the performance and is of optional use. With respect to the rest, extracting visual features VIS (mean pooling GoogleNet features over the shot frames) and SCENE features (mean pooling hybrid-CNN features [38] over the shot frames) takes most of the time (about 80% of the total time).
Table 6: Book "retrieval". For a movie (left), we rank books with respect to their alignment similarity with the movie. We normalize similarity to be 100 for the highest scoring book.
We also report training times for our contextual model (CNN) and the CRF alignment model. Note that the times are reported for one movie/book pair since we used only one such pair to train all our CNN and CRF parameters. We chose Gone Girl for training since it had the best balance between the dialog and visual correspondences.
# 5.2. Describing Movies via the Book
We next show qualitative results of our alignment. In particular, we run our model on each movie/book pair and visualize the passage in the book that a particular shot in the movie aligns to. We show the best matching paragraphs as well as a paragraph before and after. The results are shown in Fig. 8. One can see that our model is able to retrieve a semantically meaningful match despite large dialog deviations from those in the book, and the challenge of matching a visual representation to the verbose text in the book.
Figure 4: Describing movie clips via the book: we align the movie to the book, and show a shot from the movie and its corresponding paragraph (plus one before and after) from the book.
Figure 5: We can use our model to caption movies via a corpus of books. Top: a shot from American Psycho is captioned with paragraphs from Fight Club, and a shot from Harry Potter with paragraphs from Fight Club. Middle and bottom: we match shots from Avatar and Batman Begins against 300 books from our BookCorpus and show the best matched paragraph.
Table 4: Performance of our model for the movies in our dataset under different settings and metrics. Settings include a uniform timeline (UNI), an SVM baseline, leave-one-feature-out variants of the 1-layer CNN (BLEU, BOOK, TF-IDF, VIS, SCENE, PRIOR), the 3-layer CNN (CNN-3), and the full model (CRF); AP and recall are reported per movie and on average.
Per movie-book pair:
BLEU: 6 h
TF: 10 min
BOOK: 3 min
VIS: 2 h
SCENE: 1 h
CNN (training): 3 min
CNN (inference): 0.2 min
CRF (training): 5 h
CRF (inference): 5 min

Table 5: Running time for our model per one movie/book pair.
# 5.3. Book "Retrieval"
In this experiment, we compute the alignment between a movie and all 10 test books, and check whether our model retrieves the correct book. Results are shown in Table 6; under each book we show the computed similarity. In particular, we use the energy from the CRF and scale all similarities relative to the highest one (100). Notice that our model retrieves the correct book for each movie.
Describing a movie via other books. We can also caption movies by matching shots to paragraphs in a corpus of books. Here we do not encourage a linear timeline (CRF) since the stories are unrelated, and we only match at the local, shot-paragraph level. We show a description for American Psycho borrowed from the book Fight Club in Fig. 5.
# 5.4. The CoCoBook: Writing Stories for CoCo
Our next experiment shows that our model is able to "generate" descriptive stories for (static) images. In particular, we used the image-text embedding from [13] to generate a simple caption for an image. We used this caption as a query, and used our sentence embedding trained on books to find the top 10 nearest sentences (sampled from a few hundred thousand sentences from BookCorpus). We re-ranked these based on the 1-gram precision of non-stop words. Given the best result, we return the sentence as well as the 2 sentences before and after it in the book. The results are in Fig. 6. Our sentence embedding is able to retrieve semantically meaningful stories to explain the images.
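A sketch of this retrieve-then-rerank pipeline; the stop-word list, the precomputed embedding matrices, and the helper names are illustrative assumptions.

```python
import numpy as np

STOP = {"the", "a", "an", "and", "of", "to", "in", "is", "was", "it"}  # toy list

def unigram_precision(query, cand):
    """1-gram precision of non-stop words, used for re-ranking."""
    q = [w for w in query.lower().split() if w not in STOP]
    c = set(cand.lower().split())
    return sum(w in c for w in q) / max(len(q), 1)

def cocobook_passage(caption, caption_vec, book_sents, book_vecs, k=10):
    """Top-k nearest book sentences by embedding cosine, re-ranked by unigram
    precision; returns the winner plus the 2 sentences before and after."""
    sims = book_vecs @ caption_vec / (
        np.linalg.norm(book_vecs, axis=1) * np.linalg.norm(caption_vec) + 1e-8)
    top = np.argsort(-sims)[:k]
    best = int(max(top, key=lambda i: unigram_precision(caption, book_sents[i])))
    return book_sents[max(best - 2, 0): best + 3]
```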
# 6. Conclusion

In this paper, we explored a new problem of aligning a book to its movie release. We proposed an approach that computes several similarities between shots and dialogs and the sentences in the book. We exploited our new sentence embedding in order to compute similarities between sentences. We further extended the image-text neural embeddings to video, and proposed a context-aware alignment model that takes into account all the available similarity information. We showed results on a new dataset of movie/book alignments as well as several quantitative results that showcase the power and potential of our approach.

# Acknowledgments

We acknowledge the support from NSERC, CIFAR, Samsung, Google, and ONR-N00014-14-1-0232. We also thank Lea Jensterle for helping us with elaborate annotation, and Relu Patrascu for his help with numerous infrastructure-related problems.
# Appendix
In the Appendix we provide more qualitative results.
# A. Qualitative Movie-Book Alignment Results
We show a few qualitative examples of alignment in Fig. 8. In this experiment, we show results obtained with our full model (CRF). For a chosen shot (a node in the CRF) we show the corresponding paragraph in the book.
Figure 6: CoCoBook: We generate a caption for a CoCo image via [13] and retrieve its best matched sentence (plus 2 before and after) from a large book corpus. One can see the semantic relevance of the retrieved passage to the image.
Figure 7: Alignment results of our model (bottom) compared to ground-truth alignment (top). In ground-truth, blue lines indicate visual matches, and magenta are the dialog matches. Yellow lines indicate predicted alignments.
We can see that some dialogs in the movies closely follow the book and thus help with the alignment. This is particularly important since the visual information is not as strong. Since the text around the dialogs typically describes the scene, the dialogs thus help us ground the visual information contained in the description and the video.
# B. Borrowing "Lines" from Other Books
We show a few qualitative examples of top-scoring matches between a shot in a movie and a paragraph in another book (a book that does not correspond to this movie).
In this experiment, we allow a clip in our 10-movie dataset (excluding the training movie) to match paragraphs in the remaining 9 books (excluding the corresponding book). The results are in Fig. 12. Note that the top-scoring matches, chosen from only a small set of books, may not be too meaningful.
200-book experiment. We scale the experiment by randomly selecting 200 books from our BookCorpus. The results are in Fig. 15. One can see that using many more books results in increasingly better "stories".
Figure 8: Examples of movie-book alignment (shots from American Psycho and Harry Potter). We use our model to align a movie to a book. Then for a chosen shot (which is a node in our CRF) we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment.
Figure 9: Examples of movie-book alignment (shots from One Flew Over the Cuckoo's Nest and Shawshank Redemption). We use our model to align a movie to a book. Then for a chosen shot (which is a node in our CRF) we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment.
Figure 10: Examples of movie-book alignment (shots from The Firm). We use our model to align a movie to a book. Then for a chosen shot (which is a node in our CRF) we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment.
Figure 11: Examples of movie-book alignment (shots from The Green Mile and The Road). We use our model to align a movie to a book. Then for a chosen shot (which is a node in our CRF) we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment.
Figure 12: Examples of borrowing paragraphs from other books (10-book experiment). We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. Note that by forcing the model to choose from another book, the top-scoring correspondences may still have a relatively low similarity. In this experiment, we did not enforce a global alignment over the full book; we use the similarity output by our contextual CNN.
Figure 13: Examples of borrowing paragraphs from other books (10-book experiment). We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. Note that by forcing the model to choose from another book, the top-scoring correspondences may still have a relatively low similarity. In this experiment, we did not enforce a global alignment over the full book; we use the similarity output by our contextual CNN.
Figure 14: Examples of borrowing paragraphs from other books (200-book experiment). We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. By scaling up the experiment (more books to choose from), our model gets increasingly more relevant "stories".
"A good bodyguard doesn't relax on the job," Ethan said. âYou know we aren't a threat to Ms. Reed, Ethan. | don't know who you're supposed to be protecting her from, but it isn't us." âThey may clean up for the press, but | know what they are, Meredith," Ethan said. A [01:52:05:01:52:09] - How do you know? - Someone's going to try and steal it.
| could use, he reflected, anything that'd help, anything at all. Any hint, like from that girl, any suggestion. He felt dismal and afraid. Shit, he thought, what am | going to do? If I'm off everything, he thought, then I'll never see any of them again, any of my friends, the people | watched and knew. I'll be out of it; I'll be maybe retired the rest of my life-anyhow, I've seen the last of Arctor and Luckman and Jerry Fabin and Charles Freck and most of all Donna Hawthorne. I'll never see any of my friends again, for the rest of eternity. It's over. [00:37:32:00:37:35] ...and I'll never do it again, that's for sure.
He came to his knees and put his hands on my arms, and stared down into my face. "I will love you always. When this red hair is white, | will still love you. When the smooth softness of youth is replaced by the delicate softness of age, | will still want to touch your skin. When your face is full of the line of every smile you have ever smiled, of every surprise | have seen flash through your eyes, when every tear you have ever cried has left its mark upon your face, | will treasure you all the more, because | was there to see it all. | will share your life with you, Meredith, and |... [00:55:54:00:55:58] Now, once you've got hold of your broom, | want you to mount it.
Figure 15: Examples of of borrowing paragraphs from other books â 200 book experiment. We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. By scaling up the experiment (more books to choose from), our model gets increasingly more relevant âstoriesâ. Bottom row: failed example.
# C. The CoCoBook
We show more results for captioning CoCo images [18] with passages from the books.
Additional CoCoBook examples: each CoCo image is shown with its best matched book passage (plus the surrounding sentences) retrieved from BookCorpus.
# References
[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.

[2] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. EMNLP, 2014.

[3] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

[4] T. Cour, C. Jordan, E. Miltsakaki, and B. Taskar. Movie/script: Alignment and parsing of video and text transcription. In ECCV, 2008.

[5] M. Everingham, J. Sivic, and A. Zisserman. "Hello! My name is... Buffy" - automatic naming of characters in TV video. BMVC, pages 899-908, 2006.

[6] A. Farhadi, M. Hejrati, M. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. Forsyth. Every picture tells a story: Generating sentences for images. In ECCV, 2010.

[7] S. Fidler, A. Sharma, and R. Urtasun. A sentence is worth a thousand pixels. In CVPR, 2013.

[8] A. Gupta and L. Davis. Beyond nouns: Exploiting prepositions and comparative adjectives for learning visual classifiers. In ECCV, 2008.

[9] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

[10] N. Kalchbrenner and P. Blunsom. Recurrent continuous translation models. In EMNLP, pages 1700-1709, 2013.

[11] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015.

[12] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[13] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. CoRR, abs/1411.2539, 2014.

[14] R. Kiros, Y. Zhu, R. Salakhutdinov, R. S. Zemel, A. Torralba, R. Urtasun, and S. Fidler. Skip-thought vectors. In arXiv, 2015.

[15] C. Kong, D. Lin, M. Bansal, R. Urtasun, and S. Fidler. What are you talking about? Text-to-image coreference. In CVPR, 2014.

[16] G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. Berg, and T. Berg. Baby talk: Understanding and generating simple image descriptions. In CVPR, 2011.

[17] D. Lin, S. Fidler, C. Kong, and R. Urtasun. Visual semantic search: Retrieving videos via complex textual queries. CVPR, pages 2657-2664, 2014.

[18] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, pages 740-755, 2014.

[19] X. Lin and D. Parikh. Don't just listen, use your imagination: Leveraging visual common sense for non-visual tasks. In CVPR, 2015.

[20] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. In NIPS, 2014.

[21] J. Mao, W. Xu, Y. Yang, J. Wang, and A. L. Yuille. Explain images with multimodal recurrent neural networks. In arXiv:1410.1090, 2014.

[22] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

[23] K. Papineni, S. Roukos, T. Ward, and W. J. Zhu. BLEU: a method for automatic evaluation of machine translation. In ACL, pages 311-318, 2002.

[24] H. Pirsiavash, C. Vondrick, and A. Torralba. Inferring the why in images. arXiv.org, June 2014.

[25] V. Ramanathan, A. Joulin, P. Liang, and L. Fei-Fei. Linking people in videos with "their" names using coreference resolution. In ECCV, pages 95-110, 2014.

[26] V. Ramanathan, P. Liang, and L. Fei-Fei. Video event understanding using natural language descriptions. In ICCV, 2013.

[27] A. Rohrbach, M. Rohrbach, N. Tandon, and B. Schiele. A dataset for movie description. In CVPR, 2015.

[28] P. Sankar, C. V. Jawahar, and A. Zisserman. Subtitle-free movie to script alignment. In BMVC, 2009.

[29] A. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Efficient structured prediction with latent variables for general graphical models. In ICML, 2012.

[30] J. Sivic, M. Everingham, and A. Zisserman. "Who are you?" - learning person specific classifiers from video. CVPR, pages 1145-1152, 2009.

[31] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.

[32] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.

[33] M. Tapaswi, M. Bäuml, and R. Stiefelhagen. Book2Movie: Aligning video scenes with book chapters. In CVPR, 2015.

[34] M. Tapaswi, M. Bäuml, and R. Stiefelhagen. Aligning plot synopses to videos for story-based retrieval. IJMIR, 4:3-16, 2015.

[35] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. J. Mooney, and K. Saenko. Translating videos to natural language using deep recurrent neural networks. CoRR, abs/1312.6229, 2014.

[36] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In arXiv:1411.4555, 2014.

[37] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In arXiv:1502.03044, 2015.

[38] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using Places database. In NIPS, 2014. | {
"id": "1502.03044"
} |
1506.02438 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using a trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 |
Published as a conference paper at ICLR 2016
# HIGH-DIMENSIONAL CONTINUOUS CONTROL USING GENERALIZED ADVANTAGE ESTIMATION
John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan and Pieter Abbeel Department of Electrical Engineering and Computer Science University of California, Berkeley {joschu,pcmoritz,levine,jordan,pabbeel}@eecs.berkeley.edu
# ABSTRACT
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(λ). We address the second challenge by using a trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
# 1 INTRODUCTION
The typical problem formulation in reinforcement learning is to maximize the expected total reward of a policy. A key source of difficulty is the long time delay between actions and their positive or negative effect on rewards; this issue is called the credit assignment problem in the reinforcement learning literature (Minsky, 1961; Sutton & Barto, 1998), and the distal reward problem in the behavioral literature (Hull, 1943). Value functions offer an elegant solution to the credit assignment problem: they allow us to estimate the goodness of an action before the delayed reward arrives. Reinforcement learning algorithms make use of value functions in a variety of different ways; this paper considers algorithms that optimize a parameterized policy and use value functions to help estimate how the policy should be improved.
When using a parameterized stochastic policy, it is possible to obtain an unbiased estimate of the gradient of the expected total returns (Williams, 1992; Sutton et al., 1999; Baxter & Bartlett, 2000); these noisy gradient estimates can be used in a stochastic gradient ascent algorithm. Unfortunately, the variance of the gradient estimator scales unfavorably with the time horizon, since the effect of an action is confounded with the effects of past and future actions. Another class of policy gradient algorithms, called actor-critic methods, use a value function rather than the empirical returns, obtaining an estimator with lower variance at the cost of introducing bias (Konda & Tsitsiklis, 2003; Hafner & Riedmiller, 2011). But while high variance necessitates using more samples, bias is more pernicious: even with an unlimited number of samples, bias can cause the algorithm to fail to converge, or to converge to a poor solution that is not even a local optimum.
We propose a family of policy gradient estimators that significantly reduce variance while maintaining a tolerable level of bias. We call this estimation scheme, parameterized by $\gamma \in [0, 1]$ and
$\lambda \in [0, 1]$, the generalized advantage estimator (GAE). Related methods have been proposed in the context of online actor-critic methods (Kimura & Kobayashi, 1998; Wawrzyński, 2009). We provide a more general analysis, which is applicable in both the online and batch settings, and discuss an interpretation of our method as an instance of reward shaping (Ng et al., 1999), where the approximate value function is used to shape the reward.
We present experimental results on a number of highly challenging 3D locomotion tasks, where we show that our approach can learn complex gaits using high-dimensional, general purpose neural network function approximators for both the policy and the value function, each with over $10^4$ parameters. The policies perform torque-level control of simulated 3D robots with up to 33 state dimensions and 10 actuators.
The contributions of this paper are summarized as follows:
1. We provide justification and intuition for an effective variance reduction scheme for policy gradients, which we call generalized advantage estimation (GAE). While the formula has been proposed in prior work (Kimura & Kobayashi, 1998; Wawrzyński, 2009), our analysis is novel and enables GAE to be applied with a more general set of algorithms, including the batch trust-region algorithm we use for our experiments.
2. We propose the use of a trust region optimization method for the value function, which we find is a robust and efficient way to train neural network value functions with thousands of parameters.

3. By combining (1) and (2) above, we obtain an algorithm that empirically is effective at learning neural network policies for challenging control tasks. The results extend the state of the art in using reinforcement learning for high-dimensional continuous control. Videos are available at https://sites.google.com/site/gaepapersupp.
# 2 PRELIMINARIES
We consider an undiscounted formulation of the policy optimization problem. The initial state $s_0$ is sampled from distribution $\rho_0$. A trajectory $(s_0, a_0, s_1, a_1, \dots)$ is generated by sampling actions according to the policy $a_t \sim \pi(a_t \mid s_t)$ and sampling the states according to the dynamics $s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)$, until a terminal (absorbing) state is reached. A reward $r_t = r(s_t, a_t, s_{t+1})$ is received at each timestep. The goal is to maximize the expected total reward $\sum_{t=0}^{\infty} r_t$, which is assumed to be finite for all policies. Note that we are not using a discount as part of the problem specification; it will appear below as an algorithm parameter that adjusts a bias-variance tradeoff. But the discounted problem (maximizing $\sum_{t=0}^{\infty} \gamma^t r_t$) can be handled as an instance of the undiscounted problem in which we absorb the discount factor into the reward function, making it time-dependent.
Policy gradient methods maximize the expected total reward by repeatedly estimating the gradient $g := \nabla_\theta \mathbb{E}\left[\sum_{t=0}^{\infty} r_t\right]$. There are several different related expressions for the policy gradient, which have the form
$$g = \mathbb{E}\left[\sum_{t=0}^{\infty} \Psi_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right], \quad (1)$$
where Ψt may be one of the following:
1. $\sum_{t=0}^{\infty} r_t$: total reward of the trajectory.
2. $\sum_{t'=t}^{\infty} r_{t'}$: reward following action $a_t$.
3. $\sum_{t'=t}^{\infty} r_{t'} - b(s_t)$: baselined version of previous formula.
4. $Q^\pi(s_t, a_t)$: state-action value function.
5. $A^\pi(s_t, a_t)$: advantage function.
6. $r_t + V^\pi(s_{t+1}) - V^\pi(s_t)$: TD residual.
The latter formulas use the definitions
$$V^\pi(s_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t:\infty}}\left[\sum_{l=0}^{\infty} r_{t+l}\right] \qquad Q^\pi(s_t, a_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t+1:\infty}}\left[\sum_{l=0}^{\infty} r_{t+l}\right] \quad (2)$$
$$A^\pi(s_t, a_t) := Q^\pi(s_t, a_t) - V^\pi(s_t) \quad \text{(Advantage function)}. \quad (3)$$
Here, the subscript of $\mathbb{E}$ enumerates the variables being integrated over, where states and actions are sampled sequentially from the dynamics model $P(s_{t+1} \mid s_t, a_t)$ and policy $\pi(a_t \mid s_t)$, respectively. The colon notation $a : b$ refers to the inclusive range $(a, a + 1, \dots, b)$. These formulas are well known and straightforward to obtain; they follow directly from Proposition 1, which will be stated shortly. The choice $\Psi_t = A^\pi(s_t, a_t)$ yields almost the lowest possible variance, though in practice, the advantage function is not known and must be estimated. This statement can be intuitively justified by the following interpretation of the policy gradient: that a step in the policy gradient direction should increase the probability of better-than-average actions and decrease the probability of worse-than-average actions. The advantage function, by its definition $A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$, measures whether or not the action is better or worse than the policy's default behavior. Hence, we should choose $\Psi_t$ to be the advantage function $A^\pi(s_t, a_t)$, so that the gradient term $\Psi_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)$ points in the direction of increased $\pi_\theta(a_t \mid s_t)$ if and only if $A^\pi(s_t, a_t) > 0$. See Greensmith et al. (2004) for a more rigorous analysis of the variance of policy gradient estimators and the effect of using a baseline.
We will introduce a parameter $\gamma$ that allows us to reduce variance by downweighting rewards corresponding to delayed effects, at the cost of introducing bias. This parameter corresponds to the discount factor used in discounted formulations of MDPs, but we treat it as a variance reduction parameter in an undiscounted problem; this technique was analyzed theoretically by Marbach & Tsitsiklis (2003); Kakade (2001b); Thomas (2014). The discounted value functions are given by:
$$V^{\pi,\gamma}(s_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t:\infty}}\left[\sum_{l=0}^{\infty} \gamma^l r_{t+l}\right] \qquad Q^{\pi,\gamma}(s_t, a_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t+1:\infty}}\left[\sum_{l=0}^{\infty} \gamma^l r_{t+l}\right] \quad (4)$$
$$A^{\pi,\gamma}(s_t, a_t) := Q^{\pi,\gamma}(s_t, a_t) - V^{\pi,\gamma}(s_t). \quad (5)$$
The discounted approximation to the policy gradient is defined as follows:
$$g^\gamma := \mathbb{E}_{s_{0:\infty},\, a_{0:\infty}}\left[\sum_{t=0}^{\infty} A^{\pi,\gamma}(s_t, a_t)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right]. \quad (6)$$
The following section discusses how to obtain biased (but not too biased) estimators for $A^{\pi,\gamma}$, giving us noisy estimates of the discounted policy gradient in Equation (6).
Before proceeding, we will introduce the notion of a $\gamma$-just estimator of the advantage function, which is an estimator that does not introduce bias when we use it in place of $A^{\pi,\gamma}$ (which is not known and must be estimated) in Equation (6) to estimate $g^\gamma$.¹ Consider an advantage estimator $\hat{A}_t(s_{0:\infty}, a_{0:\infty})$, which may in general be a function of the entire trajectory. Definition 1. The estimator $\hat{A}_t$ is $\gamma$-just if
$$\mathbb{E}_{s_{0:\infty},\, a_{0:\infty}}\left[\hat{A}_t(s_{0:\infty}, a_{0:\infty})\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right] = \mathbb{E}_{s_{0:\infty},\, a_{0:\infty}}\left[A^{\pi,\gamma}(s_t, a_t)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right]. \quad (7)$$
It follows immediately that if $\hat{A}_t$ is $\gamma$-just for all $t$, then
$$\mathbb{E}_{s_{0:\infty},\, a_{0:\infty}}\left[\sum_{t=0}^{\infty} \hat{A}_t(s_{0:\infty}, a_{0:\infty})\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right] = g^\gamma. \quad (8)$$
One sufficient condition for $\hat{A}_t$ to be $\gamma$-just is that $\hat{A}_t$ decomposes as the difference between two functions $Q_t$ and $b_t$, where $Q_t$ can depend on any trajectory variables but gives an unbiased estimator of the $\gamma$-discounted Q-function, and $b_t$ is an arbitrary function of the states and actions sampled before $a_t$. Proposition 1. Suppose that $\hat{A}_t$ can be written in the form $\hat{A}_t(s_{0:\infty}, a_{0:\infty}) = Q_t(s_{t:\infty}, a_{t:\infty}) - b_t(s_{0:t}, a_{0:t-1})$ such that for all $(s_t, a_t)$, $\mathbb{E}_{s_{t+1:\infty}, a_{t+1:\infty} \mid s_t, a_t}\left[Q_t(s_{t:\infty}, a_{t:\infty})\right] = Q^{\pi,\gamma}(s_t, a_t)$. Then $\hat{A}$ is $\gamma$-just.
¹Note that we have already introduced bias by using $A^{\pi,\gamma}$ in place of $A^\pi$; here we are concerned with obtaining an unbiased estimate of $g^\gamma$, which is a biased estimate of the policy gradient of the undiscounted MDP.
The proof is provided in Appendix B. It is easy to verify that the following expressions are $\gamma$-just advantage estimators for $\hat{A}_t$:
- $\sum_{l=0}^{\infty} \gamma^l r_{t+l}$
- $A^{\pi,\gamma}(s_t, a_t)$
- $Q^{\pi,\gamma}(s_t, a_t)$
- $r_t + \gamma V^{\pi,\gamma}(s_{t+1}) - V^{\pi,\gamma}(s_t)$.
# 3 ADVANTAGE FUNCTION ESTIMATION
This section will be concerned with producing an accurate estimate $\hat{A}_t$ of the discounted advantage function $A^{\pi,\gamma}(s_t, a_t)$, which will then be used to construct a policy gradient estimator of the following form:
$$\hat{g} = \frac{1}{N}\sum_{n=1}^{N} \sum_{t=0}^{\infty} \hat{A}_t^n \nabla_\theta \log \pi_\theta(a_t^n \mid s_t^n), \quad (9)$$
where $n$ indexes over a batch of episodes. Let $V$ be an approximate value function. Define $\delta_t^V = r_t + \gamma V(s_{t+1}) - V(s_t)$, i.e., the TD residual of $V$ with discount $\gamma$ (Sutton & Barto, 1998). Note that $\delta_t^V$ can be considered as an estimate of the advantage of the action $a_t$. In fact, if we have the correct value function $V = V^{\pi,\gamma}$, then it is a $\gamma$-just advantage estimator, and in fact, an unbiased estimator of $A^{\pi,\gamma}$:
$$\mathbb{E}_{s_{t+1}}\left[\delta_t^{V^{\pi,\gamma}}\right] = \mathbb{E}_{s_{t+1}}\left[r_t + \gamma V^{\pi,\gamma}(s_{t+1}) - V^{\pi,\gamma}(s_t)\right] = \mathbb{E}_{s_{t+1}}\left[Q^{\pi,\gamma}(s_t, a_t) - V^{\pi,\gamma}(s_t)\right] = A^{\pi,\gamma}(s_t, a_t). \quad (10)$$
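To make this estimator concrete, here is a minimal NumPy sketch (ours, not from the paper) that computes the one-step TD residuals $\delta_t^V$ for a finite trajectory; the convention that `values` carries one extra bootstrap entry for the final state is our own:

```python
import numpy as np

def td_residuals(rewards, values, gamma):
    """delta_t = r_t + gamma * V(s_{t+1}) - V(s_t).

    rewards: shape (T,); values: shape (T + 1,), where values[T] is the
    value of the final state (zero if it is terminal). Returns shape (T,).
    """
    return rewards + gamma * values[1:] - values[:-1]
```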
However, this estimator is only $\gamma$-just for $V = V^{\pi,\gamma}$, otherwise it will yield biased policy gradient estimates. Next, let us consider taking the sum of $k$ of these $\delta$ terms, which we will denote by $\hat{A}_t^{(k)}$:
$$\hat{A}_t^{(1)} := \delta_t^V = -V(s_t) + r_t + \gamma V(s_{t+1}) \quad (11)$$
$$\hat{A}_t^{(2)} := \delta_t^V + \gamma\delta_{t+1}^V = -V(s_t) + r_t + \gamma r_{t+1} + \gamma^2 V(s_{t+2}) \quad (12)$$
$$\hat{A}_t^{(3)} := \delta_t^V + \gamma\delta_{t+1}^V + \gamma^2\delta_{t+2}^V = -V(s_t) + r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \gamma^3 V(s_{t+3}) \quad (13)$$
$$\hat{A}_t^{(k)} := \sum_{l=0}^{k-1}\gamma^l \delta_{t+l}^V = -V(s_t) + r_t + \gamma r_{t+1} + \dots + \gamma^{k-1} r_{t+k-1} + \gamma^k V(s_{t+k}) \quad (14)$$
These equations result from a telescoping sum, and we see that $\hat{A}_t^{(k)}$ involves a $k$-step estimate of the returns, minus a baseline term $-V(s_t)$. Analogously to the case of $\delta^V$, we can consider $\hat{A}_t^{(k)}$ to be an estimator of the advantage function, which is only $\gamma$-just when $V = V^{\pi,\gamma}$. However, note that the bias generally becomes smaller as $k \to \infty$, since the term $\gamma^k V(s_{t+k})$ becomes more heavily discounted, and the term $-V(s_t)$ does not affect the bias. Taking $k \to \infty$, we get
$$\hat{A}_t^{(\infty)} = \sum_{l=0}^{\infty} \gamma^l \delta_{t+l}^V = -V(s_t) + \sum_{l=0}^{\infty} \gamma^l r_{t+l}, \quad (15)$$
which is simply the empirical returns minus the value function baseline.
The generalized advantage estimator GAE($\gamma, \lambda$) is defined as the exponentially-weighted average of these $k$-step estimators:
$$\begin{aligned}
\hat{A}_t^{\text{GAE}(\gamma,\lambda)} &:= (1-\lambda)\left(\hat{A}_t^{(1)} + \lambda\hat{A}_t^{(2)} + \lambda^2\hat{A}_t^{(3)} + \dots\right) \\
&= (1-\lambda)\left(\delta_t^V + \lambda(\delta_t^V + \gamma\delta_{t+1}^V) + \lambda^2(\delta_t^V + \gamma\delta_{t+1}^V + \gamma^2\delta_{t+2}^V) + \dots\right) \\
&= (1-\lambda)\left(\delta_t^V(1 + \lambda + \lambda^2 + \dots) + \gamma\delta_{t+1}^V(\lambda + \lambda^2 + \lambda^3 + \dots) + \gamma^2\delta_{t+2}^V(\lambda^2 + \lambda^3 + \lambda^4 + \dots) + \dots\right) \\
&= (1-\lambda)\left(\delta_t^V\,\frac{1}{1-\lambda} + \gamma\delta_{t+1}^V\,\frac{\lambda}{1-\lambda} + \gamma^2\delta_{t+2}^V\,\frac{\lambda^2}{1-\lambda} + \dots\right) \\
&= \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}^V \quad (16)
\end{aligned}$$
From Equation (16), we see that the advantage estimator has a remarkably simple formula involving a discounted sum of Bellman residual terms. Section 4 discusses an interpretation of this formula as the returns in an MDP with a modified reward function. The construction we used above is closely analogous to the one used to define TD(λ) (Sutton & Barto, 1998), however TD(λ) is an estimator of the value function, whereas here we are estimating the advantage function.
There are two notable special cases of this formula, obtained by setting λ = 0 and λ = 1.
$$\text{GAE}(\gamma, 0):\quad \hat{A}_t := \delta_t^V = r_t + \gamma V(s_{t+1}) - V(s_t) \quad (17)$$
$$\text{GAE}(\gamma, 1):\quad \hat{A}_t := \sum_{l=0}^{\infty} \gamma^l \delta_{t+l}^V = \sum_{l=0}^{\infty} \gamma^l r_{t+l} - V(s_t) \quad (18)$$
GAE(γ, 1) is γ-just regardless of the accuracy of V , but it has high variance due to the sum of terms. GAE(γ, 0) is γ-just for V = V Ï,γ and otherwise induces bias, but it typically has much lower variance. The generalized advantage estimator for 0 < λ < 1 makes a compromise between bias and variance, controlled by parameter λ.
We've described an advantage estimator with two separate parameters $\gamma$ and $\lambda$, both of which contribute to the bias-variance tradeoff when using an approximate value function. However, they serve different purposes and work best with different ranges of values. $\gamma$ most importantly determines the scale of the value function $V^{\pi,\gamma}$, which does not depend on $\lambda$. Taking $\gamma < 1$ introduces bias into the policy gradient estimate, regardless of the value function's accuracy. On the other hand, $\lambda < 1$ introduces bias only when the value function is inaccurate. Empirically, we find that the best value of $\lambda$ is much lower than the best value of $\gamma$, likely because $\lambda$ introduces far less bias than $\gamma$ for a reasonably accurate value function. Using the generalized advantage estimator, we can construct a biased estimator of $g^\gamma$, the discounted policy gradient from Equation (6):
$$\hat{g}^\gamma \approx \mathbb{E}\left[\sum_{t=0}^{\infty} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t^{\text{GAE}(\gamma,\lambda)}\right] = \mathbb{E}\left[\sum_{t=0}^{\infty} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}^V\right], \quad (19)$$
where equality holds when λ = 1.
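For a finite batch of timesteps, Equation (16) can be computed with a single backward pass, since $\hat{A}_t = \delta_t^V + \gamma\lambda \hat{A}_{t+1}$. The following NumPy sketch is our own illustration (the function name and array conventions are assumptions, not from the paper):

```python
import numpy as np

def gae_advantages(rewards, values, gamma, lam):
    """Truncated estimate of A_t^{GAE(gamma,lambda)} for one trajectory.

    rewards: shape (T,); values: shape (T + 1,), with values[T] the
    bootstrap value of the final state (zero if it is terminal).
    """
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages
```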
# 4 INTERPRETATION AS REWARD SHAPING
In this section, we discuss how one can interpret $\lambda$ as an extra discount factor applied after performing a reward shaping transformation on the MDP. We also introduce the notion of a response function to help understand the bias introduced by $\gamma$ and $\lambda$.
Reward shaping (Ng et al., 1999) refers to the following transformation of the reward function of an MDP: let $\Phi : \mathcal{S} \to \mathbb{R}$ be an arbitrary scalar-valued function on state space, and define the transformed reward function $\tilde{r}$ by
$$\tilde{r}(s, a, s') = r(s, a, s') + \gamma\Phi(s') - \Phi(s), \quad (20)$$
which in turn defines a transformed MDP. This transformation leaves the discounted advantage function $A^{\pi,\gamma}$ unchanged for any policy $\pi$. To see this, consider the discounted sum of rewards of a trajectory starting with state $s_t$:
$$\sum_{l=0}^{\infty} \gamma^l \tilde{r}(s_{t+l}, a_{t+l}, s_{t+l+1}) = \sum_{l=0}^{\infty} \gamma^l r(s_{t+l}, a_{t+l}, s_{t+l+1}) - \Phi(s_t). \quad (21)$$
Letting $\tilde{Q}^{\pi,\gamma}, \tilde{V}^{\pi,\gamma}, \tilde{A}^{\pi,\gamma}$ be the value and advantage functions of the transformed MDP, one obtains from the definitions of these quantities that
$$\tilde{Q}^{\pi,\gamma}(s, a) = Q^{\pi,\gamma}(s, a) - \Phi(s) \quad (22)$$
$$\tilde{V}^{\pi,\gamma}(s) = V^{\pi,\gamma}(s) - \Phi(s) \quad (23)$$
$$\tilde{A}^{\pi,\gamma}(s, a) = \left(Q^{\pi,\gamma}(s, a) - \Phi(s)\right) - \left(V^{\pi,\gamma}(s) - \Phi(s)\right) = A^{\pi,\gamma}(s, a). \quad (24)$$
Note that if $\Phi$ happens to be the state-value function $V^{\pi,\gamma}$ from the original MDP, then the transformed MDP has the interesting property that $\tilde{V}^{\pi,\gamma}(s)$ is zero at every state.
Note that (Ng et al., 1999) showed that the reward shaping transformation leaves the policy gradient and optimal policy unchanged when our objective is to maximize the discounted sum of rewards $\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t, s_{t+1})$. In contrast, this paper is concerned with maximizing the undiscounted sum of rewards, where the discount $\gamma$ is used as a variance-reduction parameter.
Having reviewed the idea of reward shaping, let us consider how we could use it to get a policy gradient estimate. The most natural approach is to construct policy gradient estimators that use discounted sums of shaped rewards $\tilde{r}$. However, Equation (21) shows that we obtain the discounted sum of the original MDP's rewards $r$ minus a baseline term. Next, let's consider using a "steeper" discount $\gamma\lambda$, where $0 \le \lambda \le 1$. It's easy to see that the shaped reward $\tilde{r}$ equals the Bellman residual term $\delta^V$, introduced in Section 3, where we set $\Phi = V$. Letting $\Phi = V$, we see that
$$\sum_{l=0}^{\infty} (\gamma\lambda)^l \tilde{r}(s_{t+l}, a_{t+l}, s_{t+l+1}) = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}^V = \hat{A}_t^{\text{GAE}(\gamma,\lambda)}. \quad (25)$$
Hence, by considering the $\gamma\lambda$-discounted sum of shaped rewards, we exactly obtain the generalized advantage estimators from Section 3. As shown previously, $\lambda = 1$ gives an unbiased estimate of $g^\gamma$, whereas $\lambda < 1$ gives a biased estimate.
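The equivalence in Equation (25) is easy to check numerically; the sketch below (ours) sets $\Phi = V$ on random data and compares the $\gamma\lambda$-discounted sum of shaped rewards against the `gae_advantages` sketch from Section 3:

```python
import numpy as np

rng = np.random.default_rng(1)
T, gamma, lam = 50, 0.99, 0.95
rewards = rng.standard_normal(T)
values = rng.standard_normal(T + 1)   # plays the role of Phi = V

shaped = rewards + gamma * values[1:] - values[:-1]   # shaped reward = delta^V
weights = (gamma * lam) ** np.arange(T)
lhs = np.array([(weights[: T - t] * shaped[t:]).sum() for t in range(T)])

assert np.allclose(lhs, gae_advantages(rewards, values, gamma, lam))
```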
To further analyze the effect of this shaping transformation and parameters $\gamma$ and $\lambda$, it will be useful to introduce the notion of a response function $\chi$, which we define as follows:
$$\chi(l;\, s_t, a_t) = \mathbb{E}\left[r_{t+l} \mid s_t, a_t\right] - \mathbb{E}\left[r_{t+l} \mid s_t\right]. \quad (26)$$
Note that $A^{\pi,\gamma}(s, a) = \sum_{l=0}^{\infty} \gamma^l \chi(l;\, s, a)$, hence the response function decomposes the advantage function across timesteps. The response function lets us quantify the temporal credit assignment problem: long range dependencies between actions and rewards correspond to nonzero values of the response function for $l \gg 0$. Next, let us revisit the discount factor $\gamma$ and the approximation we are making by using $A^{\pi,\gamma}$ rather than $A^{\pi,1}$. The discounted policy gradient estimator from Equation (6) has a sum of terms of the form
$$\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, A^{\pi,\gamma}(s_t, a_t) = \nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{l=0}^{\infty} \gamma^l \chi(l;\, s_t, a_t). \quad (27)$$
Using a discount $\gamma < 1$ corresponds to dropping the terms with $l \gg 1/(1-\gamma)$. Thus, the error introduced by this approximation will be small if $\chi$ rapidly decays as $l$ increases, i.e., if the effect of an action on rewards is "forgotten" after $\approx 1/(1-\gamma)$ timesteps. If the reward function $\tilde{r}$ were obtained using $\Phi = V^{\pi,\gamma}$, we would have $\mathbb{E}\left[\tilde{r}_{t+l} \mid s_t, a_t\right] = \mathbb{E}\left[\tilde{r}_{t+l} \mid s_t\right] = 0$ for $l > 0$, i.e., the response function would only be nonzero at $l = 0$. Therefore, this shaping transformation would turn a temporally extended response into an immediate response. Given that $V^{\pi,\gamma}$ completely reduces the temporal spread of the response function, we can hope that a good approximation $V \approx V^{\pi,\gamma}$ partially reduces it. This observation suggests an interpretation of Equation (16): reshape the rewards using $V$ to shrink the temporal extent of the response function, and then introduce a "steeper" discount $\gamma\lambda$ to cut off the noise arising from long delays, i.e., ignore terms $\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \delta_{t+l}^V$ where $l \gg 1/(1-\gamma\lambda)$.
# 5 VALUE FUNCTION ESTIMATION
A variety of different methods can be used to estimate the value function (see, e.g., Bertsekas (2012)). When using a nonlinear function approximator to represent the value function, the simplest approach is to solve a nonlinear regression problem:
$$\underset{\phi}{\text{minimize}} \quad \sum_{n=1}^{N} \left\| V_\phi(s_n) - \hat{V}_n \right\|^2, \quad (28)$$
where $\hat{V}_n = \sum_{l=0}^{\infty} \gamma^l r_{n+l}$ is the discounted sum of rewards, and $n$ indexes over all timesteps in a batch of trajectories. This is sometimes called the Monte Carlo or TD(1) approach for estimating the value function (Sutton & Barto, 1998).
For the experiments in this work, we used a trust region method to optimize the value function in each iteration of a batch optimization procedure. The trust region helps us to avoid overfitting to the most recent batch of data. To formulate the trust region problem, we first compute $\sigma^2 = \frac{1}{N}\sum_{n=1}^{N} \left\| V_{\phi_{\text{old}}}(s_n) - \hat{V}_n \right\|^2$, where $\phi_{\text{old}}$ is the parameter vector before optimization. Then we solve the following constrained optimization problem:
$$\underset{\phi}{\text{minimize}} \quad \sum_{n=1}^{N} \left\| V_\phi(s_n) - \hat{V}_n \right\|^2$$
$$\text{subject to} \quad \frac{1}{N}\sum_{n=1}^{N} \frac{\left\| V_\phi(s_n) - V_{\phi_{\text{old}}}(s_n) \right\|^2}{2\sigma^2} \le \epsilon. \quad (29)$$
This constraint is equivalent to constraining the average KL divergence between the previous value function and the new value function to be smaller than $\epsilon$, where the value function is taken to parameterize a conditional Gaussian distribution with mean $V_\phi(s)$ and variance $\sigma^2$.
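To see why, recall the standard identity for the KL divergence between two Gaussians with equal variance (a fact we supply here; it is not spelled out in the text):
$$D_{\mathrm{KL}}\big(\mathcal{N}(\mu_1, \sigma^2) \,\|\, \mathcal{N}(\mu_2, \sigma^2)\big) = \frac{(\mu_1 - \mu_2)^2}{2\sigma^2},$$
so averaging over the batch with $\mu_1 = V_{\phi_{\text{old}}}(s_n)$ and $\mu_2 = V_\phi(s_n)$ recovers exactly the constraint in Equation (29).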
We compute an approximate solution to the trust region problem using the conjugate gradient algorithm (Wright & Nocedal, 1999). Specifically, we are solving the quadratic program
$$\underset{\phi}{\text{minimize}} \quad g^T(\phi - \phi_{\text{old}})$$
$$\text{subject to} \quad \frac{1}{2}(\phi - \phi_{\text{old}})^T H (\phi - \phi_{\text{old}}) \le \epsilon, \quad (30)$$
where $g$ is the gradient of the objective, and $H = \frac{1}{N}\sum_n j_n j_n^T$, where $j_n = \nabla_\phi V_\phi(s_n)$. Note that $H$ is the "Gauss-Newton" approximation of the Hessian of the objective, and it is (up to a $\sigma^2$ factor) the Fisher information matrix when interpreting the value function as a conditional probability distribution. Using matrix-vector products $v \mapsto Hv$ to implement the conjugate gradient algorithm, we compute a step direction $s \approx -H^{-1}g$. Then we rescale $s \to \alpha s$ such that $\frac{1}{2}(\alpha s)^T H (\alpha s) = \epsilon$ and take $\phi = \phi_{\text{old}} + \alpha s$. This procedure is analogous to the procedure we use for updating the policy, which is described further in Section 6 and based on Schulman et al. (2015).
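A minimal sketch of this procedure (our illustration; the function names and fixed iteration count are assumptions) that uses only matrix-vector products with $H$:

```python
import numpy as np

def conjugate_gradient(hvp, g, iters=10, tol=1e-10):
    """Approximately solve H x = g using only products hvp(v) = H v."""
    x = np.zeros_like(g)
    r, p = g.copy(), g.copy()
    rr = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rr / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rr_new = r @ r
        if rr_new < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

def trust_region_step(hvp, g, epsilon):
    """Step proportional to -H^{-1} g, rescaled so (1/2) s^T H s = epsilon."""
    d = conjugate_gradient(hvp, g)                  # d ~ H^{-1} g
    alpha = np.sqrt(2.0 * epsilon / (d @ hvp(d)))   # solves (1/2)(alpha d)^T H (alpha d) = eps
    return -alpha * d
```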
# 6 EXPERIMENTS
We designed a set of experiments to investigate the following questions:
1. What is the empirical effect of varying $\lambda \in [0, 1]$ and $\gamma \in [0, 1]$ when optimizing episodic total reward using generalized advantage estimation?
2. Can generalized advantage estimation, along with trust region algorithms for policy and value function optimization, be used to optimize large neural network policies for challenging control problems?
²Another natural choice is to compute target values with an estimator based on the TD(λ) backup (Bertsekas, 2012; Sutton & Barto, 1998), mirroring the expression we use for policy gradient estimation: $\hat{V}_t = V_{\phi_{\text{old}}}(s_n) + \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}$. While we experimented with this choice, we did not notice a difference in performance from the $\lambda = 1$ estimator in Equation (28).
# 6.1 POLICY OPTIMIZATION ALGORITHM
While generalized advantage estimation can be used along with a variety of different policy gradient methods, for these experiments, we performed the policy updates using trust region policy optimization (TRPO) (Schulman et al., 2015). TRPO updates the policy by approximately solving the following constrained optimization problem each iteration:
$$\begin{aligned}
&\underset{\theta}{\text{minimize}} \quad L_{\theta_{\text{old}}}(\theta) \\
&\text{subject to} \quad \bar{D}_{\text{KL}}^{\theta_{\text{old}}}(\pi_{\theta_{\text{old}}}, \pi_\theta) \le \epsilon \\
&\text{where } L_{\theta_{\text{old}}}(\theta) = \frac{1}{N}\sum_{n=1}^{N} \frac{\pi_\theta(a_n \mid s_n)}{\pi_{\theta_{\text{old}}}(a_n \mid s_n)}\,\hat{A}_n \\
&\phantom{\text{where }} \bar{D}_{\text{KL}}^{\theta_{\text{old}}}(\pi_{\theta_{\text{old}}}, \pi_\theta) = \frac{1}{N}\sum_{n=1}^{N} D_{\text{KL}}\left(\pi_{\theta_{\text{old}}}(\cdot \mid s_n) \,\|\, \pi_\theta(\cdot \mid s_n)\right)
\end{aligned} \quad (31)$$
As described in (Schulman et al., 2015), we approximately solve this problem by linearizing the objective and quadraticizing the constraint, which yields a step in the direction $\theta - \theta_{\text{old}} \propto F^{-1}g$, where $F$ is the average Fisher information matrix, and $g$ is a policy gradient estimate. This policy update yields the same step direction as the natural policy gradient (Kakade, 2001a) and natural actor-critic (Peters & Schaal, 2008), however it uses a different stepsize determination scheme and numerical procedure for computing the step.
Since prior work (Schulman et al., 2015) compared TRPO to a variety of different policy optimization algorithms, we will not repeat these comparisons; rather, we will focus on varying the $\gamma, \lambda$ parameters of the policy gradient estimator while keeping the underlying algorithm fixed.
For completeness, the whole algorithm for iteratively updating policy and value function is given below:
Initialize policy parameter $\theta_0$ and value function parameter $\phi_0$.
for $i = 0, 1, 2, \dots$ do
    Simulate current policy $\pi_{\theta_i}$ until $N$ timesteps are obtained.
    Compute $\delta_t^V$ at all timesteps $t \in \{1, 2, \dots, N\}$, using $V = V_{\phi_i}$.
    Compute $\hat{A}_t = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}^V$ at all timesteps.
    Compute $\theta_{i+1}$ with TRPO update, Equation (31).
    Compute $\phi_{i+1}$ with Equation (30).
end for
Note that the policy update $\theta_i \to \theta_{i+1}$ is performed using the value function $V_{\phi_i}$ for advantage estimation, not $V_{\phi_{i+1}}$. Additional bias would have been introduced if we updated the value function first. To see this, consider the extreme case where we overfit the value function, and the Bellman residual $r_t + \gamma V(s_{t+1}) - V(s_t)$ becomes zero at all timesteps; the policy gradient estimate would be zero.
# 6.2 EXPERIMENTAL SETUP
We evaluated our approach on the classic cart-pole balancing problem, as well as several challenging 3D locomotion tasks: (1) bipedal locomotion; (2) quadrupedal locomotion; (3) dynamically standing up, for the biped, which starts off lying on its back. The models are shown in Figure 1.
# 6.2.1 ARCHITECTURE
We used the same neural network architecture for all of the 3D robot tasks, for both the policy and the value function: a feedforward network with three hidden layers of 100, 50, and 25 tanh units respectively, and a final output layer with linear activation. The value function estimator used the same architecture, but with only one scalar output. For the simpler cart-pole task, we used a linear policy, and a neural network with one 20-unit hidden layer as the value function.
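For concreteness, a NumPy sketch (ours) of the policy and value networks described above; the initialization scheme and the example input/output sizes (33 state dimensions and 10 actuators, taken from the humanoid in Section 6.2.2) are our own choices:

```python
import numpy as np

def init_mlp(sizes, rng):
    """One (W, b) pair per layer, with simple 1/sqrt(fan-in) scaling."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)     # tanh hidden layers
    W, b = params[-1]
    return x @ W + b               # linear output layer

rng = np.random.default_rng(0)
obs_dim, act_dim = 33, 10          # humanoid sizes from the text
policy_params = init_mlp([obs_dim, 100, 50, 25, act_dim], rng)
value_params = init_mlp([obs_dim, 100, 50, 25, 1], rng)
```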
Figure 1: Top figures: robot models used for 3D locomotion. Bottom figures: a sequence of frames from the learned gaits. Videos are available at https://sites.google.com/site/gaepapersupp.
# 6.2.2 TASK DETAILS
For the cart-pole balancing task, we collected 20 trajectories per batch, with a maximum length of 1000 timesteps, using the physical parameters from Barto et al. (1983).
The simulated robot tasks were simulated using the MuJoCo physics engine (Todorov et al., 2012). The humanoid model has 33 state dimensions and 10 actuated degrees of freedom, while the quadruped model has 29 state dimensions and 8 actuated degrees of freedom. The initial state for these tasks consisted of a uniform distribution centered on a reference configuration. We used 50000 timesteps per batch for bipedal locomotion, and 200000 timesteps per batch for quadrupedal locomotion and bipedal standing. Each episode was terminated after 2000 timesteps if the robot had not reached a terminal state beforehand. The timestep was 0.01 seconds.
The reward functions are provided in the table below.
Task | Reward
3D biped locomotion | $v_{\text{fwd}} - 10^{-5}\|u\|^2 - 10^{-5}\|f_{\text{impact}}\|^2 + 0.2$
Quadruped locomotion | $v_{\text{fwd}} - 10^{-6}\|u\|^2 - 10^{-3}\|f_{\text{impact}}\|^2 + 0.05$
Biped getting up | $-(h_{\text{head}} - 1.5)^2 - 10^{-5}\|u\|^2$
Here, $v_{\text{fwd}}$ := forward velocity, $u$ := vector of joint torques, $f_{\text{impact}}$ := impact forces, $h_{\text{head}}$ := height of the head.
In the locomotion tasks, the episode is terminated if the center of mass of the actor falls below a predefined height: .8 m for the biped, and .2 m for the quadruped. The constant offset in the reward function encourages longer episodes; otherwise the quadratic reward terms might lead to a policy that ends the episodes as quickly as possible.
# 6.3 EXPERIMENTAL RESULTS
All results are presented in terms of the cost, which is defined as negative reward and is minimized. Videos of the learned policies are available at https://sites.google.com/site/gaepapersupp. In plots, "No VF" means that we used a time-dependent baseline that did not depend on the state, rather than an estimate of the state value function. The time-dependent baseline was computed by averaging the return at each timestep over the trajectories in the batch.
# 6.3.1 CART-POLE
The results are averaged across 21 experiments with different random seeds. Results are shown in Figure 2, and indicate that the best results are obtained at intermediate values of the parameters: $\gamma \in [0.96, 0.99]$ and $\lambda \in [0.92, 0.99]$.
Figure 2: Left: learning curves for cart-pole task, using generalized advantage estimation with varying values of λ at γ = 0.99. The fastest policy improvement is obtained by intermediate values of λ in the range [0.92, 0.98]. Right: performance after 20 iterations of policy optimization, as γ and λ are varied. White means higher reward. The best results are obtained at intermediate values of both.
Figure 3: Left: Learning curves for 3D bipedal locomotion, averaged across nine runs of the algorithm. Right: learning curves for 3D quadrupedal locomotion, averaged across five runs.
# 6.3.2 3D BIPEDAL LOCOMOTION
Each trial took about 2 hours to run on a 16-core machine, where the simulation rollouts were parallelized, as were the function, gradient, and matrix-vector-product evaluations used when optimizing the policy and value function. Here, the results are averaged across 9 trials with different random seeds. The best performance is again obtained using intermediate values of $\gamma \in [0.99, 0.995]$, $\lambda \in [0.96, 0.99]$. The result after 1000 iterations is a fast, smooth, and stable gait that is effectively completely stable. We can compute how much "real time" was used for this learning process: 0.01 seconds/timestep × 50000 timesteps/batch × 1000 batches / (3600 · 24 seconds/day) = 5.8 days. Hence, it is plausible that this algorithm could be run on a real robot, or multiple real robots learning in parallel, if there were a way to reset the state of the robot and ensure that it doesn't damage itself.
# 6.3.3 OTHER 3D ROBOT TASKS
The other two motor behaviors considered are quadrupedal locomotion and getting up off the ground for the 3D biped. Again, we performed 5 trials per experimental condition, with different random seeds (and initializations). The experiments took about 4 hours per trial on a 32-core machine. We performed a more limited comparison on these domains (due to the substantial computational resources required to run these experiments), fixing $\gamma = 0.995$ but varying $\lambda \in \{0, 0.96\}$, as well as an experimental condition with no value function. For quadrupedal locomotion, the best results are obtained using a value function with $\lambda = 0.96$ (Figure 4). For 3D standing, the value function always helped, but the results are roughly the same for $\lambda = 0.96$ and $\lambda = 1$.
Figure 4: (a) Learning curve from quadrupedal walking, (b) learning curve for 3D standing up, (c) clips from 3D standing up.
# 7 DISCUSSION
Policy gradient methods provide a way to reduce reinforcement learning to stochastic gradient descent, by providing unbiased gradient estimates. However, so far their success at solving difficult control problems has been limited, largely due to their high sample complexity. We have argued that the key to variance reduction is to obtain good estimates of the advantage function.
We have provided an intuitive but informal analysis of the problem of advantage function estimation, and justified the generalized advantage estimator, which has two parameters $\gamma, \lambda$ which adjust the bias-variance tradeoff. We described how to combine this idea with trust region policy optimization and a trust region algorithm that optimizes a value function, both represented by neural networks. Combining these techniques, we are able to learn to solve difficult control tasks that have previously been out of reach for generic reinforcement learning methods.
Our main experimental validation of generalized advantage estimation is in the domain of simulated robotic locomotion. As shown in our experiments, choosing an appropriate intermediate value of λ in the range [0.9, 0.99] usually results in the best performance. A possible topic for future work is how to adjust the estimator parameters γ, λ in an adaptive or automatic way.
One question that merits future investigation is the relationship between value function estimation error and policy gradient estimation error. If this relationship were known, we could choose an error metric for value function fitting that is well-matched to the quantity of interest, which is typically the accuracy of the policy gradient estimation. Some candidates for such an error metric might include the Bellman error or projected Bellman error, as described in Bhatnagar et al. (2009).
Another enticing possibility is to use a shared function approximation architecture for the policy and the value function, while optimizing the policy using generalized advantage estimation. While formulating this problem in a way that is suitable for numerical optimization and provides convergence guarantees remains an open question, such an approach could allow the value function and policy representations to share useful features of the input, resulting in even faster learning.
In concurrent work, researchers have been developing policy gradient methods that involve differentiation with respect to the continuous-valued action (Lillicrap et al., 2015; Heess et al., 2015). While we found empirically that the one-step return (λ = 0) leads to excessive bias and poor performance, these papers show that such methods can work when tuned appropriately. However, note that those papers consider control problems with substantially lower-dimensional state and action spaces than the ones considered here. A comparison between both classes of approach would be useful for future work.
# ACKNOWLEDGEMENTS
We thank Emo Todorov for providing the simulator as well as insightful discussions, and we thank Greg Wayne, Yuval Tassa, Dave Silver, Carlos Florensa Campo, and Greg Brockman for insightful discussions. This research was funded in part by the Office of Naval Research through a Young
Investigator Award and under grant number N00014-11-1-0688, by DARPA through a Young Faculty Award, and by the Army Research Office through the MAST program.
# A FREQUENTLY ASKED QUESTIONS
# A.1 WHAT'S THE RELATIONSHIP WITH COMPATIBLE FEATURES?
Compatible features are often mentioned in relation to policy gradient algorithms that make use of a value function, and the idea was proposed in the paper On Actor-Critic Methods by Konda & Tsitsiklis (2003). These authors pointed out that due to the limited representation power of the policy, the policy gradient only depends on a certain subspace of the space of advantage functions. This subspace is spanned by the compatible features $\nabla_{\theta_i} \log \pi_\theta(a_t \mid s_t)$, where $i \in \{1, 2, \dots, \dim \theta\}$. This theory of compatible features provides no guidance on how to exploit the temporal structure of the problem to obtain better estimates of the advantage function, making it mostly orthogonal to the ideas in this paper.
The idea of compatible features motivates an elegant method for computing the natural policy gradient (Kakade, 2001a; Peters & Schaal, 2008). Given an empirical estimate of the advantage function $\hat{A}_t$ at each timestep, we can project it onto the subspace of compatible features by solving the following least squares problem:
$$\underset{r}{\text{minimize}} \quad \sum_t \left\| r \cdot \nabla_\theta \log \pi_\theta(a_t \mid s_t) - \hat{A}_t \right\|^2. \quad (32)$$
If $\hat{A}$ is $\gamma$-just, the least squares solution is the natural policy gradient (Kakade, 2001a). Note that any estimator of the advantage function can be substituted into this formula, including the ones we derive in this paper. For our experiments, we also compute natural policy gradient steps, but we use the more computationally efficient numerical procedure from Schulman et al. (2015), as discussed in Section 6.
# A.2 WHY DON'T YOU JUST USE A Q-FUNCTION?
Previous actor critic methods, e.g. in Konda & Tsitsiklis (2003), use a Q-function to obtain potentially low-variance policy gradient estimates. Recent papers, including Heess et al. (2015); Lillicrap et al. (2015), have shown that a neural network Q-function approximator can be used effectively in a policy gradient method. However, there are several advantages to using a state-value function in the manner of this paper. First, the state-value function has a lower-dimensional input and is thus easier to learn than a state-action value function. Second, the method of this paper allows us to smoothly interpolate between the high-bias estimator (λ = 0) and the low-bias estimator (λ = 1). On the other hand, using a parameterized Q-function only allows us to use a high-bias estimator. We have found that the bias is prohibitively large when using a one-step estimate of the returns, i.e., the λ = 0 estimator, $\hat{A}_t = \delta_t^V = r_t + \gamma V(s_{t+1}) - V(s_t)$. We expect that similar difficulty would be encountered when using an advantage estimator involving a parameterized Q-function, $\hat{A}_t = Q(s, a) - V(s)$. There is an interesting space of possible algorithms that would use a parameterized Q-function and attempt to reduce bias, however, an exploration of these possibilities is beyond the scope of this work.
# B PROOFS
Proof of Proposition 1: First we can split the expectation into terms involving Q and b,
$$\begin{aligned}
&\mathbb{E}_{s_{0:\infty}, a_{0:\infty}}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\left(Q_t(s_{0:\infty}, a_{0:\infty}) - b_t(s_{0:t}, a_{0:t-1})\right)\right] \\
&= \mathbb{E}_{s_{0:\infty}, a_{0:\infty}}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, Q_t(s_{0:\infty}, a_{0:\infty})\right] - \mathbb{E}_{s_{0:\infty}, a_{0:\infty}}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, b_t(s_{0:t}, a_{0:t-1})\right] \quad (33)
\end{aligned}$$
We'll consider the terms with Q and b in turn.
$$\begin{aligned}
\mathbb{E}_{s_{0:\infty}, a_{0:\infty}}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, Q_t(s_{0:\infty}, a_{0:\infty})\right]
&= \mathbb{E}_{s_{0:t}, a_{0:t}}\left[\mathbb{E}_{s_{t+1:\infty}, a_{t+1:\infty}}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, Q_t(s_{0:\infty}, a_{0:\infty})\right]\right] \\
&= \mathbb{E}_{s_{0:t}, a_{0:t}}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \mathbb{E}_{s_{t+1:\infty}, a_{t+1:\infty}}\left[Q_t(s_{0:\infty}, a_{0:\infty})\right]\right] \\
&= \mathbb{E}_{s_{0:t}, a_{0:t}}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, Q^{\pi,\gamma}(s_t, a_t)\right] \\
&= \mathbb{E}_{s_{0:t}, a_{0:t}}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, A^{\pi,\gamma}(s_t, a_t)\right]
\end{aligned}$$
Next,
$$\begin{aligned}
\mathbb{E}_{s_{0:\infty}, a_{0:\infty}}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, b_t(s_{0:t}, a_{0:t-1})\right]
&= \mathbb{E}_{s_{0:t}, a_{0:t-1}}\left[\mathbb{E}_{s_{t+1:\infty}, a_{t:\infty}}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, b_t(s_{0:t}, a_{0:t-1})\right]\right] \\
&= \mathbb{E}_{s_{0:t}, a_{0:t-1}}\left[\mathbb{E}_{s_{t+1:\infty}, a_{t:\infty}}\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\right] b_t(s_{0:t}, a_{0:t-1})\right] \\
&= \mathbb{E}_{s_{0:t}, a_{0:t-1}}\left[0 \cdot b_t(s_{0:t}, a_{0:t-1})\right] \\
&= 0.
\end{aligned}$$
# REFERENCES
Barto, Andrew G, Sutton, Richard S, and Anderson, Charles W. Neuronlike adaptive elements that can solve difficult learning control problems. Systems, Man and Cybernetics, IEEE Transactions on, (5):834–846, 1983.

Baxter, Jonathan and Bartlett, Peter L. Reinforcement learning in POMDPs via direct gradient ascent. In ICML, pp. 41–48, 2000.

Bertsekas, Dimitri P. Dynamic programming and optimal control, volume 2. Athena Scientific, 2012.

Bhatnagar, Shalabh, Precup, Doina, Silver, David, Sutton, Richard S, Maei, Hamid R, and Szepesvári, Csaba. Convergent temporal-difference learning with arbitrary smooth function approximation. In Advances in Neural Information Processing Systems, pp. 1204–1212, 2009.

Greensmith, Evan, Bartlett, Peter L, and Baxter, Jonathan. Variance reduction techniques for gradient estimates in reinforcement learning. The Journal of Machine Learning Research, 5:1471–1530, 2004.

Hafner, Roland and Riedmiller, Martin. Reinforcement learning in feedback control. Machine Learning, 84(1-2):137–169, 2011.

Heess, Nicolas, Wayne, Greg, Silver, David, Lillicrap, Timothy, Tassa, Yuval, and Erez, Tom. Learning continuous control policies by stochastic value gradients. arXiv preprint arXiv:1510.09142, 2015.

Hull, Clark. Principles of Behavior. 1943.

Kakade, Sham. A natural policy gradient. In NIPS, volume 14, pp. 1531–1538, 2001a.

Kakade, Sham. Optimizing average reward using discounted rewards. In Computational Learning Theory, pp. 605–615. Springer, 2001b.

Kimura, Hajime and Kobayashi, Shigenobu. An analysis of actor/critic algorithms using eligibility traces: Reinforcement learning with imperfect value function. In ICML, pp. 278–286, 1998.

Konda, Vijay R and Tsitsiklis, John N. On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143–1166, 2003.

Lillicrap, Timothy P, Hunt, Jonathan J, Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Marbach, Peter and Tsitsiklis, John N. Approximate gradient methods in policy-space optimization of Markov reward processes. Discrete Event Dynamic Systems, 13(1-2):111–148, 2003.

Minsky, Marvin. Steps toward artificial intelligence. Proceedings of the IRE, 49(1):8–30, 1961.

Ng, Andrew Y, Harada, Daishi, and Russell, Stuart. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278–287, 1999.

Peters, Jan and Schaal, Stefan. Natural actor-critic. Neurocomputing, 71(7):1180–1190, 2008.
Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I, and Abbeel, Pieter. Trust region policy optimization. arXiv preprint arXiv:1502.05477, 2015.

Sutton, Richard S and Barto, Andrew G. Introduction to Reinforcement Learning. MIT Press, 1998.

Sutton, Richard S, McAllester, David A, Singh, Satinder P, and Mansour, Yishay. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057–1063. Citeseer, 1999.

Thomas, Philip. Bias in natural actor-critic algorithms. In Proceedings of The 31st International Conference on Machine Learning, pp. 441–448, 2014.

Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026–5033. IEEE, 2012.

Wawrzyński, Paweł. Real-time reinforcement learning by sequential actor–critics and experience replay. Neural Networks, 22(10):1484–1497, 2009.

Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

Wright, Stephen J and Nocedal, Jorge. Numerical Optimization. Springer New York, 1999.
1506.02626 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030

arXiv:1506.02626v3 [cs.NE] 30 Oct 2015
# Learning both Weights and Connections for Efficient Neural Networks
Song Han, Stanford University, songhan@stanford.edu
Jeff Pool, NVIDIA, jpool@nvidia.com
John Tran, NVIDIA, johntran@nvidia.com
William J. Dally, Stanford University and NVIDIA, dally@stanford.edu
# Abstract
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9×, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13×, from 138 million to 10.3 million, again with no loss of accuracy.
# 1 Introduction
Neural networks have become ubiquitous in applications ranging from computer vision [1] to speech recognition [2] and natural language processing [3]. We consider convolutional neural networks used for computer vision tasks which have grown over time. In 1998 Lecun et al. designed a CNN model LeNet-5 with less than 1M parameters to classify handwritten digits [4], while in 2012, Krizhevsky et al. [1] won the ImageNet competition with 60M parameters. Deepface classified human faces with 120M parameters [5], and Coates et al. [6] scaled up a network to 10B parameters.
While these large neural networks are very powerful, their size consumes considerable storage, memory bandwidth, and computational resources. For embedded mobile applications, these resource demands become prohibitive. Figure 1 shows the energy cost of basic arithmetic and memory operations in a 45nm CMOS process. From this data we see the energy per connection is dominated by memory access and ranges from 5pJ for 32 bit coefficients in on-chip SRAM to 640pJ for 32 bit coefficients in off-chip DRAM [7]. Large networks do not fit in on-chip storage and hence require the more costly DRAM accesses. Running a 1 billion connection neural network, for example, at 20Hz would require (20Hz)(1G)(640pJ) = 12.8W just for DRAM access, well beyond the power envelope of a typical mobile device. Our goal in pruning networks is to reduce the energy required to run such large networks so they can run in real time on mobile devices. The model size reduction from pruning also facilitates storage and transmission of mobile applications incorporating DNNs.
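The 12.8W figure is a straightforward product of the numbers above; reproducing it (our sketch):

```python
rate_hz = 20             # network evaluations per second
connections = 1e9        # 1 billion connections
dram_energy_j = 640e-12  # 640 pJ per 32-bit DRAM access

print(rate_hz * connections * dram_energy_j)  # 12.8 (watts)
```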
Operation | Energy [pJ] | Relative Cost
32 bit int ADD | 0.1 | 1
32 bit float ADD | 0.9 | 9
32 bit Register File | 1 | 10
32 bit int MULT | 3.1 | 31
32 bit float MULT | 3.7 | 37
32 bit SRAM Cache | 5 | 50
32 bit DRAM Memory | 640 | 6400
Figure 1: Energy table for 45nm CMOS process [7]. Memory access is 3 orders of magnitude more energy expensive than simple arithmetic.
To achieve this goal, we present a method to prune network connections in a manner that preserves the original accuracy. After an initial training phase, we remove all connections whose weight is lower than a threshold. This pruning converts a dense, fully-connected layer to a sparse layer. This first phase learns the topology of the networks, learning which connections are important and removing the unimportant connections. We then retrain the sparse network so the remaining connections can compensate for the connections that have been removed. The phases of pruning and retraining may be repeated iteratively to further reduce network complexity. In effect, this training process learns the network connectivity in addition to the weights, much as in the mammalian brain [8][9], where synapses are created in the first few months of a child's development, followed by gradual pruning of little-used connections, falling to typical adult values.
# 2 Related Work
Neural networks are typically over-parameterized, and there is significant redundancy for deep learning models [10]. This results in a waste of both computation and memory. There have been various proposals to remove the redundancy: Vanhoucke et al. [11] explored a fixed-point implementation with 8-bit integer (vs 32-bit floating point) activations. Denton et al. [12] exploited the linear structure of the neural network by finding an appropriate low-rank approximation of the parameters and keeping the accuracy within 1% of the original model. With similar accuracy loss, Gong et al. [13] compressed deep convnets using vector quantization. These approximation and quantization techniques are orthogonal to network pruning, and they can be used together to obtain further gains [14].
There have been other attempts to reduce the number of parameters of neural networks by replacing the fully connected layer with global average pooling. The Network in Network architecture [15] and GoogLenet [16] achieve state-of-the-art results on several benchmarks by adopting this idea. However, transfer learning, i.e. reusing features learned on the ImageNet dataset and applying them to new tasks by only fine-tuning the fully connected layers, is more difficult with this approach. This problem is noted by Szegedy et al. [16] and motivates them to add a linear layer on the top of their networks to enable transfer learning.
Network pruning has been used both to reduce network complexity and to reduce over-fitting. An early approach to pruning was biased weight decay [17]. Optimal Brain Damage [18] and Optimal Brain Surgeon [19] prune networks to reduce the number of connections based on the Hessian of the loss function and suggest that such pruning is more accurate than magnitude-based pruning such as weight decay. However, the second order derivative requires additional computation.
HashedNets [20] is a recent technique to reduce model sizes by using a hash function to randomly group connection weights into hash buckets, so that all connections within the same hash bucket share a single parameter value. This technique may benefit from pruning. As pointed out in Shi et al. [21] and Weinberger et al. [22], sparsity will minimize hash collision, making feature hashing even more effective. HashedNets may be used together with pruning to give even better parameter savings.
Figure 2: Three-Step Training Pipeline.
Figure 3: Synapses and neurons before and after pruning.
# 3 Learning Connections in Addition to Weights
Our pruning method employs a three-step process, as illustrated in Figure 2, which begins by learning the connectivity via normal network training. Unlike conventional training, however, we are not learning the final values of the weights, but rather we are learning which connections are important.
The second step is to prune the low-weight connections. All connections with weights below a threshold are removed from the network, converting a dense network into a sparse network, as shown in Figure 3. The final step retrains the network to learn the final weights for the remaining sparse connections. This step is critical. If the pruned network is used without retraining, accuracy is significantly impacted.
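A minimal sketch of the three-step pipeline for a single weight matrix (ours, not the authors' Caffe implementation; the threshold rule, a quality parameter times the layer's weight standard deviation, is taken from Section 4):

```python
import numpy as np

def prune_by_magnitude(W, quality):
    """Zero out weights below quality * std(W); return pruned W and its mask."""
    threshold = quality * W.std()
    mask = np.abs(W) >= threshold
    return W * mask, mask

# During retraining, gradients for pruned weights are masked out so the
# removed connections never come back:
#     W -= learning_rate * (grad * mask)
```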
# 3.1 Regularization
Choosing the correct regularization impacts the performance of pruning and retraining. L1 regularization penalizes non-zero parameters, resulting in more parameters near zero. This gives better accuracy after pruning, but before retraining. However, the remaining connections are not as good as with L2 regularization, resulting in lower accuracy after retraining. Overall, L2 regularization gives the best pruning results. This is further discussed in the experiment section.
# 3.2 Dropout Ratio Adjustment
Dropout [23] is widely used to prevent over-fitting, and this also applies to retraining. During retraining, however, the dropout ratio must be adjusted to account for the change in model capacity. In dropout, each parameter is probabilistically dropped during training, but will come back during inference. In pruning, parameters are dropped forever after pruning and have no chance to come back during both training and inference. As the parameters get sparse, the classifier will select the most informative predictors and thus have much less prediction variance, which reduces over-fitting. As pruning already reduced model capacity, the retraining dropout ratio should be smaller.
Quantitatively, let Ci be the number of connections in layer i, Cio for the original network, Cir for the network after retraining, Ni be the number of neurons in layer i. Since dropout works on neurons, and Ci varies quadratically with Ni, according to Equation 1 thus the dropout ratio after pruning the parameters should follow Equation 2, where Do represent the original dropout rate, Dr represent the dropout rate during retraining.
$$C_i = N_i N_{i-1} \quad (1)$$
$$D_r = D_o \sqrt{\frac{C_{ir}}{C_{io}}} \quad (2)$$
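In code, the adjustment is a one-liner; the example numbers use the AlexNet FC layers from Table 4, which retain roughly 9% of their connections (the function name is ours):

```python
import math

def retrain_dropout(d_original, kept_ratio):
    """Equation (2) with C_ir / C_io expressed as the kept-connection ratio."""
    return d_original * math.sqrt(kept_ratio)

print(retrain_dropout(0.5, 0.09))  # 0.15
```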
# 3.3 Local Pruning and Parameter Co-adaptation
During retraining, it is better to retain the weights from the initial training phase for the connections that survived pruning than it is to re-initialize the pruned layers. CNNs contain fragile co-adapted features [24]: gradient descent is able to find a good solution when the network is initially trained, but not after re-initializing some layers and retraining them. So when we retrain the pruned layers, we should keep the surviving parameters instead of re-initializing them.
Table 1: Network pruning can save 9× to 13× parameters with no drop in predictive performance.
Network | Top-1 Error | Top-5 Error | Parameters | Compression Rate
LeNet-300-100 Ref | 1.64% | - | 267K | -
LeNet-300-100 Pruned | 1.59% | - | 22K | 12×
LeNet-5 Ref | 0.80% | - | 431K | -
LeNet-5 Pruned | 0.77% | - | 36K | 12×
AlexNet Ref | 42.78% | 19.73% | 61M | -
AlexNet Pruned | 42.77% | 19.67% | 6.7M | 9×
VGG-16 Ref | 31.50% | 11.32% | 138M | -
VGG-16 Pruned | 31.34% | 10.88% | 10.3M | 13×
Retraining the pruned layers starting with retained weights requires less computation because we don't have to back propagate through the entire network. Also, neural networks are prone to suffer the vanishing gradient problem [25] as the networks get deeper, which makes pruning errors harder to recover for deep networks. To prevent this, we fix the parameters for CONV layers and only retrain the FC layers after pruning the FC layers, and vice versa.
# 3.4 Iterative Pruning
Learning the right connections is an iterative process. Pruning followed by a retraining is one iteration; after many such iterations the minimum number of connections can be found. Without loss of accuracy, this method can boost the pruning rate from 5× to 9× on AlexNet compared with single-step aggressive pruning. Each iteration is a greedy search in that we find the best connections. We also experimented with probabilistically pruning parameters based on their absolute value, but this gave worse results.
# 3.5 Pruning Neurons
After pruning connections, neurons with zero input connections or zero output connections may be safely pruned. This pruning is furthered by removing all connections to or from a pruned neuron. The retraining phase automatically arrives at the result where dead neurons will have both zero input connections and zero output connections. This occurs due to gradient descent and regularization. A neuron that has zero input connections (or zero output connections) will have no contribution to the final loss, leading the gradient to be zero for its output connection (or input connection), respectively. Only the regularization term will push the weights to zero. Thus, the dead neurons will be automatically removed during retraining.
# 4 Experiments
We implemented network pruning in Caffe [26]. Caffe was modified to add a mask which disregards pruned parameters during network operation for each weight tensor. The pruning threshold is chosen as a quality parameter multiplied by the standard deviation of a layer's weights. We carried out the experiments on Nvidia TitanX and GTX980 GPUs.
We pruned four representative networks: Lenet-300-100 and Lenet-5 on MNIST, together with AlexNet and VGG-16 on ImageNet. The network parameters and accuracy¹ before and after pruning are shown in Table 1.
# 4.1 LeNet on MNIST
We first experimented on the MNIST dataset with the LeNet-300-100 and LeNet-5 networks [4]. LeNet-300-100 is a fully connected network with two hidden layers, with 300 and 100 neurons each, which achieves 1.6% error rate on MNIST. LeNet-5 is a convolutional network that has two convolutional layers and two fully connected layers, which achieves 0.8% error rate on MNIST. After pruning, the network is retrained with 1/10 of the original network's original learning rate. Table 1 shows
¹Reference model is from Caffe model zoo; accuracy is measured without data augmentation.
Table 2: For Lenet-300-100, pruning reduces the number of weights by 12× and computation by 12×.

Layer | Weights | FLOP | Act% | Weights% | FLOP%
fc1 | 235K | 470K | 38% | 8% | 8%
fc2 | 30K | 60K | 65% | 9% | 4%
fc3 | 1K | 2K | 100% | 26% | 17%
Total | 266K | 532K | 46% | 8% | 8%
Table 3: For Lenet-5, pruning reduces the number of weights by 12× and computation by 6×.

Layer | Weights | FLOP | Act% | Weights% | FLOP%
conv1 | 0.5K | 576K | 82% | 66% | 66%
conv2 | 25K | 3200K | 72% | 12% | 10%
fc1 | 400K | 800K | 55% | 8% | 6%
fc2 | 5K | 10K | 100% | 19% | 10%
Total | 431K | 4586K | 77% | 8% | 16%
Figure 4: Visualization of the first FC layer's sparsity pattern of Lenet-300-100. It has a banded structure repeated 28 times, which correspond to the un-pruned parameters in the center of the images, since the digits are written in the center.
pruning saves 12× parameters on these networks. For each layer of the network the table shows (left to right) the original number of weights, the number of floating point operations to compute that layer's activations, the average percentage of activations that are non-zero, the percentage of non-zero weights after pruning, and the percentage of actually required floating point operations.
An interesting byproduct is that network pruning detects visual attention regions. Figure 4 shows the sparsity pattern of the first fully connected layer of LeNet-300-100; the matrix size is 784 × 300. It has 28 bands, each band's width 28, corresponding to the 28 × 28 input pixels. The colored regions of the figure, indicating non-zero parameters, correspond to the center of the image. Because digits are written in the center of the image, these are the important parameters. The graph is sparse on the left and right, corresponding to the less important regions on the top and bottom of the image. After pruning, the neural network finds the center of the image more important, and the connections to the peripheral regions are more heavily pruned.
# 4.2 AlexNet on ImageNet
We further examine the performance of pruning on the ImageNet ILSVRC-2012 dataset, which has 1.2M training examples and 50k validation examples. We use the AlexNet Caffe model as the reference model, which has 61 million parameters across 5 convolutional layers and 3 fully connected layers. The AlexNet Caffe model achieved a top-1 accuracy of 57.2% and a top-5 accuracy of 80.3%. The original AlexNet took 75 hours to train on an NVIDIA Titan X GPU. After pruning, the whole network is retrained with 1/100 of the original network's initial learning rate. It took 173 hours to retrain the pruned AlexNet. Pruning is not used when iteratively prototyping the model, but rather used for model reduction when the model is ready for deployment. Thus, the retraining time is less of a concern. Table 1 shows that AlexNet can be pruned to 1/9 of its original size without impacting accuracy, and the amount of computation can be reduced by 3×.
Table 4: For AlexNet, pruning reduces the number of weights by 9× and computation by 3×.

Layer | Weights | FLOP | Act% | Weights% | FLOP%
conv1 | 35K | 211M | 88% | 84% | 84%
conv2 | 307K | 448M | 52% | 38% | 33%
conv3 | 885K | 299M | 37% | 35% | 18%
conv4 | 663K | 224M | 40% | 37% | 14%
conv5 | 442K | 150M | 34% | 37% | 14%
fc1 | 38M | 75M | 36% | 9% | 3%
fc2 | 17M | 34M | 40% | 9% | 3%
fc3 | 4M | 8M | 100% | 25% | 10%
Total | 61M | 1.5B | 54% | 11% | 30%
Table 5: For VGG-16, pruning reduces the number of weights by 12× and computation by 5×.
# 4.3 VGG-16 on ImageNet
With promising results on AlexNet, we also looked at a larger, more recent network, VGG-16 [27], on the same ILSVRC-2012 dataset. VGG-16 has far more convolutional layers but still only three fully-connected layers. Following a similar methodology, we aggressively pruned both convolutional and fully-connected layers to realize a significant reduction in the number of weights, shown in Table 5. We used five iterations of pruning and retraining.
The VGG-16 results are, like those for AlexNet, very promising. The network as a whole has been reduced to 7.5% of its original size (13× smaller). In particular, note that the two largest fully-connected layers can each be pruned to less than 4% of their original size. This reduction is critical for real-time image processing, where there is little reuse of fully connected layers across images (unlike batch processing during training).
# 5 Discussion
The trade-off curve between accuracy and number of parameters is shown in Figure 5. The more parameters are pruned away, the more the accuracy drops. We experimented with L1 and L2 regularization, with and without retraining, together with iterative pruning, to give five trade-off lines. Comparing solid and dashed lines, the importance of retraining is clear: without retraining, accuracy begins dropping much sooner, with 1/3 of the original connections rather than with 1/10 of the original connections. It's interesting to see that we have the "free lunch" of reducing the connections by 2× without losing accuracy even without retraining, while with retraining we are able to reduce connections by 9×.
Figure 5: Trade-off curve for parameter reduction and loss in top-5 accuracy. L1 regularization performs better than L2 at learning the connections without retraining, while L2 regularization performs better than L1 at retraining. Iterative pruning gives the best result.
Figure 6: Pruning sensitivity for CONV layer (left) and FC layer (right) of AlexNet.
L1 regularization gives better accuracy than L2 directly after pruning (dotted blue and purple lines) since it pushes more parameters closer to zero. However, comparing the yellow and green lines shows that L2 outperforms L1 after retraining, since there is no benefit to further pushing values towards zero. One extension is to use L1 regularization for pruning and then L2 for retraining, but this did not beat simply using L2 for both phases. Parameters from one mode do not adapt well to the other.
The biggest gain comes from iterative pruning (solid red line with solid circles). Here we take the pruned and retrained network (solid green line with circles) and prune and retrain it again. The leftmost dot on this curve corresponds to the point on the green line at 80% (5× pruning) pruned further to 8×. There's no accuracy loss at 9×. Not until 10× does the accuracy begin to drop sharply.
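A minimal sketch of this iterative prune-and-retrain loop, reusing the prune_by_magnitude helper sketched above; the model.weights dictionary and the retrain/evaluate hooks are assumed interfaces, not the paper's code.

def iterative_prune(model, retrain, evaluate, fractions=(0.5, 0.75, 0.875, 0.9)):
    """Tighten sparsity over several prune/retrain rounds (the solid red line in Figure 5)."""
    masks = {}
    for fraction in fractions:
        for name, w in model.weights.items():
            model.weights[name], masks[name] = prune_by_magnitude(w, fraction)
        retrain(model, masks)  # retrain surviving weights; pruned ones stay zero
        print("%.0f%% pruned -> accuracy %.2f%%" % (100 * fraction, 100 * evaluate(model)))
    return model, masks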
Two green points achieve slightly better accuracy than the original model. We believe this accuracy improvement is due to pruning finding the right capacity of the network and hence reducing overfitting.
Both CONV and FC layers can be pruned, but with different sensitivity. Figure 6 shows the sensitivity of each layer to network pruning. The figure shows how accuracy drops as parameters are pruned on a layer-by-layer basis. The CONV layers (on the left) are more sensitive to pruning than the fully connected layers (on the right). The first convolutional layer, which interacts with the input image directly, is most sensitive to pruning. We suspect this sensitivity is due to the input layer having only 3 channels and thus less redundancy than the other convolutional layers. We used the sensitivity results to find each layer's threshold: for example, the smallest threshold was applied to the most sensitive layer, which is the first convolutional layer.
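The layer-by-layer scan behind Figure 6 can be sketched as follows; no retraining happens between probes, and the model/evaluate interface is again an assumption for illustration.

import copy

def sensitivity_scan(model, evaluate, fractions=(0.2, 0.4, 0.6, 0.8, 0.9)):
    """Prune one layer at a time, leaving the others untouched, and record accuracy.

    The resulting curves suggest per-layer thresholds: the smallest
    threshold goes to the layer whose accuracy falls first.
    """
    curves = {}
    for name in model.weights:
        curves[name] = []
        for fraction in fractions:
            probe = copy.deepcopy(model)
            probe.weights[name], _ = prune_by_magnitude(probe.weights[name], fraction)
            curves[name].append((fraction, evaluate(probe)))
    return curves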
Storing the pruned layers as sparse matrices has a storage overhead of only 15.6%. Storing relative rather than absolute indices reduces the space taken by the FC layer indices to 5 bits. Similarly, CONV layer indices can be represented with only 8 bits.
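A small sketch of the relative-index idea; the handling of gaps that overflow the 5-bit budget (filler entries pointing at zero weights) is our assumption about one workable scheme, since the text only gives the bit widths.

def encode_relative(indices, bits=5):
    """Encode sorted absolute indices as gaps that each fit in `bits` bits."""
    max_gap = (1 << bits) - 1
    encoded, prev = [], 0
    for idx in indices:
        gap = idx - prev
        while gap > max_gap:
            encoded.append((max_gap, True))   # filler entry: weight stored as zero
            gap -= max_gap
        encoded.append((gap, False))
        prev = idx
    return encoded

# Indices 3 and 40 need one filler between them with 5-bit gaps (max 31).
print(encode_relative([3, 40]))  # [(3, False), (31, True), (6, False)]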
Table 6: Comparison with other model reduction methods on AlexNet. Data-free pruning [28] saved only 1.5× parameters with much loss of accuracy. Deep Fried Convnets [29] worked on fully connected layers only and reduced the parameters by less than 4×. [30] reduced the parameters by 4× with inferior accuracy. Naively cutting the layer size saves parameters but suffers from a 4% loss of accuracy. [12] exploited the linear structure of convnets and compressed each layer individually, where model compression on a single layer incurred a 0.9% accuracy penalty with biclustering + SVD.
Network                    Top-1 Error   Top-5 Error   Parameters   Compression Rate
Baseline Caffemodel [26]   42.78%        19.73%        61.0M        1×
Data-free pruning [28]     44.40%        -             39.6M        1.5×
Fastfood-32-AD [29]        41.93%        -             32.8M        2×
Fastfood-16-AD [29]        42.90%        -             16.4M        3.7×
Collins & Kohli [30]       44.40%        -             15.2M        4×
Naive Cut                  47.18%        23.23%        13.8M        4.4×
SVD [12]                   44.02%        20.56%        11.9M        5×
Network Pruning            42.77%        19.67%        6.7M         9×
Figure 7: Weight distribution before and after parameter pruning. The right figure has a 10× smaller scale.
After pruning, the storage requirements of AlexNet and VGGNet are small enough that all weights can be stored on chip, instead of in off-chip DRAM, which takes orders of magnitude more energy to access (Table 1). We are targeting our pruning method at fixed-function hardware specialized for sparse DNNs, given the limitations of general purpose hardware on sparse computation. Figure 7 shows histograms of the weight distribution before (left) and after (right) pruning. The weights are from the first fully connected layer of AlexNet. The two panels have different y-axis scales. The original distribution of weights is centered on zero with tails dropping off quickly. Almost all parameters are between [−0.015, 0.015]. After pruning the large center region is removed. The network parameters adjust themselves during the retraining phase. The result is that the parameters form a bimodal distribution and become more spread across the x-axis, between [−0.025, 0.025].
# 6 Conclusion
We have presented a method to improve the energy efficiency and storage of neural networks without affecting accuracy by finding the right connections. Our method, motivated in part by how learning works in the mammalian brain, operates by learning which connections are important, pruning the unimportant connections, and then retraining the remaining sparse network. We highlight our experiments on AlexNet and VGGNet on ImageNet, showing that both fully connected and convolutional layers can be pruned, reducing the number of connections by 9× to 13× without loss of accuracy. This leads to smaller memory capacity and bandwidth requirements for real-time image processing, making it easier to deploy on mobile systems.
# References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
[2] Alex Graves and J¨urgen Schmidhuber. Framewise phoneme classiï¬cation with bidirectional lstm and other neural network architectures. Neural Networks, 18(5):602â610, 2005.
[3] Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. JMLR, 12:2493â2537, 2011.
[4] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278â2324, 1998.
[5] Yaniv Taigman, Ming Yang, MarcâAurelio Ranzato, and Lior Wolf. Deepface: Closing the gap to human-level performance in face veriï¬cation. In CVPR, pages 1701â1708. IEEE, 2014.
[6] Adam Coates, Brody Huval, Tao Wang, David Wu, Bryan Catanzaro, and Ng Andrew. Deep learning with cots hpc systems. In 30th ICML, pages 1337â1345, 2013.
[7] Mark Horowitz. Energy table for 45nm process, Stanford VLSI wiki.
[8] JP Rauschecker. Neuronal mechanisms of developmental plasticity in the cat's visual system. Human Neurobiology, 3(2):109–114, 1983.
[9] Christopher A Walsh. Peter Huttenlocher (1931–2013). Nature, 502(7470):172–172, 2013.
[10] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.
[11] Vincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on cpus. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011.
[12] Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efï¬cient evaluation. In NIPS, pages 1269â1277, 2014.
[13] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
[14] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[15] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013. [16] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
[17] Stephen Jos´e Hanson and Lorien Y Pratt. Comparing biases for minimal network construction with back-propagation. In Advances in neural information processing systems, pages 177â185, 1989.
[18] Yann Le Cun, John S. Denker, and Sara A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems, pages 598â605. Morgan Kaufmann, 1990.
[19] Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon. Advances in neural information processing systems, pages 164â164, 1993.
[20] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015.
[21] Qinfeng Shi, James Petterson, Gideon Dror, John Langford, Alex Smola, and SVN Vishwanathan. Hash kernels for structured data. The Journal of Machine Learning Research, 10:2615â2637, 2009.
[22] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In ICML, pages 1113â1120. ACM, 2009.
[23] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overï¬tting. JMLR, 15:1929â1958, 2014.
[24] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pages 3320â3328, 2014.
[25] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difï¬cult. Neural Networks, IEEE Transactions on, 5(2):157â166, 1994.
[26] Yangqing Jia, et al. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[27] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recogni- tion. CoRR, abs/1409.1556, 2014.
[28] Suraj Srinivas and R Venkatesh Babu. Data-free parameter pruning for deep neural networks. arXiv preprint arXiv:1507.06149, 2015.
[29] Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. arXiv preprint arXiv:1412.7149, 2014.
[30] Maxwell D Collins and Pushmeet Kohli. Memory bounded deep convolutional networks. arXiv preprint arXiv:1412.1442, 2014.
| {
"id": "1507.06149"
} |
1506.01186 | Cyclical Learning Rates for Training Neural Networks | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks. | http://arxiv.org/pdf/1506.01186 | Leslie N. Smith | cs.CV, cs.LG, cs.NE | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | cs.CV | 20150603 | 20170404

arXiv:1506.01186v6 [cs.CV] 4 Apr 2017
# Cyclical Learning Rates for Training Neural Networks
Leslie N. Smith U.S. Naval Research Laboratory, Code 5514 4555 Overlook Ave., SW., Washington, D.C. 20375 leslie.smith@nrl.navy.mil
# Abstract
It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically elim- inates the need to experimentally ï¬nd the best values and schedule for the global learning rates. Instead of mono- tonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable bound- ary values. Training with cyclical learning rates instead of ï¬xed values achieves improved classiï¬cation accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate âreasonable boundsâ â linearly increasing the learning rate of the net- work for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.
Figure 1. Classiï¬cation accuracy while training CIFAR-10. The red curve shows the result of training with one of the new learning rate policies.
It is generally assumed that the learning rate should be a single value that monotonically decreases during training. This paper demonstrates the surprising phenomenon that a varying learning rate during training is beneficial overall and thus proposes to let the global learning rate vary cyclically within a band of values instead of setting it to a fixed value. In addition, this cyclical learning rate (CLR) method practically eliminates the need to tune the learning rate yet achieves near optimal classification accuracy. Furthermore, unlike adaptive learning rates, the CLR methods require essentially no additional computation.
# 1. Introduction
Deep neural networks are the basis of state-of-the-art re- sults for image recognition [17, 23, 25], object detection [7], face recognition [26], speech recognition [8], machine translation [24], image caption generation [28], and driver- less car technology [14]. However, training a deep neural network is a difï¬cult global optimization problem.
The potential beneï¬ts of CLR can be seen in Figure 1, which shows the test data classiï¬cation accuracy of the CIFAR-10 dataset during training1. The baseline (blue curve) reaches a ï¬nal accuracy of 81.4% after 70, 000 it- erations. In contrast, it is possible to fully train the network using the CLR method instead of tuning (red curve) within 25,000 iterations and attain the same accuracy.
A deep neural network is typically updated by stochastic gradient descent, with the parameters $\theta$ (weights) updated as $\theta_t = \theta_{t-1} - \epsilon_t \frac{\partial L}{\partial \theta}$, where $L$ is a loss function and $\epsilon_t$ is the learning rate. It is well known that too small a learning rate will make a training algorithm converge slowly while too large a learning rate will make the training algorithm diverge [2]. Hence, one must experiment with a variety of learning rates and schedules.

The contributions of this paper are:
1. A methodology for setting the global learning rates for training neural networks that eliminates the need to perform numerous experiments to ï¬nd the best values and schedule with essentially no additional computa- tion.
2. A surprising phenomenon is demonstrated: allowing the learning rate to rise and fall is beneficial overall even though it might temporarily harm the network's performance.

1Hyper-parameters and architecture were obtained in April 2015 from caffe.berkeleyvision.org/gathered/examples/cifar10.html
3. Cyclical learning rates are demonstrated with ResNets, Stochastic Depth networks, and DenseNets on the CIFAR-10 and CIFAR-100 datasets, and on ImageNet with two well-known architectures: AlexNet [17] and GoogleNet [25].
# 2. Related work
The book "Neural Networks: Tricks of the Trade" is a terrific source of practical advice. In particular, Yoshua Bengio [2] discusses reasonable ranges for learning rates and stresses the importance of tuning the learning rate. A technical report by Breuel [3] provides guidance on a variety of hyper-parameters. There are also numerous websites giving practical suggestions for setting the learning rates.
Adaptive learning rates: Adaptive learning rates can be considered a competitor to cyclical learning rates because one can rely on local adaptive learning rates in place of global learning rate experimentation, but there is a significant computational cost in doing so. CLR does not incur this computational cost, so it can be used freely.
A review of the early work on adaptive learning rates can be found in George and Powell [6]. Duchi, et al. [5] pro- posed AdaGrad, which is one of the early adaptive methods that estimates the learning rates from the gradients.
RMSProp is discussed in the slides by Geoffrey Hinton2 [27]. RMSProp is described there as âDivide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight.â RMSProp is a funda- mental adaptive learning rate method that others have built on.
Schaul et al. [22] discuss an adaptive learning rate based on a diagonal estimation of the Hessian of the gradients. One of the features of their method is that they allow their automatic method to decrease or increase the learning rate. However, their paper seems to limit the idea of increasing learning rate to non-stationary problems. On the other hand, this paper demonstrates that a schedule of increasing the learning rate is more universally valuable.
Zeiler [29] describes his AdaDelta method, which im- proves on AdaGrad based on two ideas: limiting the sum of squared gradients over all time to a limited window, and making the parameter update rule consistent with a units evaluation on the relationship between the update and the Hessian.
More recently, several papers have appeared on adaptive learning rates. Gulcehre and Bengio [9] propose an adaptive learning rate algorithm, called AdaSecant, that utilizes the
2www.cs.toronto.edu/ tijmen/csc321/slides/lecture slides lec6.pdf
root mean square statistics and variance of the gradients. Dauphin et al. [4] show that RMSProp provides a biased estimate and go on to describe another estimator, named ESGD, that is unbiased. Kingma and Lei-Ba [16] introduce Adam that is designed to combine the advantages from Ada- Grad and RMSProp. Bache, et al. [1] propose exploiting solutions to a multi-armed bandit problem for learning rate selection. A summary and tutorial of adaptive learning rates can be found in a recent paper by Ruder [20].
Adaptive learning rates are fundamentally different from CLR policies, and CLR can be combined with adaptive learning rates, as shown in Section 4.1. In addition, CLR policies are computationally simpler than adaptive learning rates. CLR is likely most similar to the SGDR method [18] that appeared recently.
# 3. Optimal Learning Rates
# 3.1. Cyclical Learning Rates
The essence of this learning rate policy comes from the observation that increasing the learning rate might have a short term negative effect and yet achieve a longer term beneficial effect. This observation leads to the idea of letting the learning rate vary within a range of values rather than adopting a stepwise fixed or exponentially decreasing value. That is, one sets minimum and maximum boundaries and the learning rate cyclically varies between these bounds. Experiments with numerous functional forms, such as a triangular window (linear), a Welch window (parabolic) and a Hann window (sinusoidal), all produced equivalent results. This led to adopting a triangular window (linearly increasing then linearly decreasing), which is illustrated in Figure 2, because it is the simplest function that incorporates this idea. The rest of this paper refers to this as the triangular learning rate policy.
Figure 2. Triangular learning rate policy. The blue lines represent learning rate values changing between bounds. The input parame- ter stepsize is the number of iterations in half a cycle.
An intuitive understanding of why CLR methods work comes from considering the loss function topology. Dauphin et al. [4] argue that the difï¬culty in minimizing the loss arises from saddle points rather than poor local minima.
Saddle points have small gradients that slow the learning process. However, increasing the learning rate allows more rapid traversal of saddle point plateaus. A more practical reason as to why CLR works is that, by following the meth- ods in Section 3.3, it is likely the optimum learning rate will be between the bounds and near optimal learning rates will be used throughout training.
The red curve in Figure 1 shows the result of the triangular policy on CIFAR-10. The settings used to cre- ate the red curve were a minimum learning rate of 0.001 (as in the original parameter ï¬le) and a maximum of 0.006. Also, the cycle length (i.e., the number of iterations until the learning rate returns to the initial value) is set to 4, 000 iterations (i.e., stepsize = 2000) and Figure 1 shows that the accuracy peaks at the end of each cycle.
Implementation of the code for a new learning rate policy is straightforward. An example of the code added to Torch 7 in the experiments shown in Section 4.1.2 is the following few lines:
-- which triangular cycle we are in (1-based)
local cycle = math.floor(1 + epochCounter / (2 * stepsize))
-- x falls from 1 to 0 over the first half of the cycle, then rises back to 1
local x = math.abs(epochCounter / stepsize - 2 * cycle + 1)
-- interpolate linearly between the base rate opt.LR and maxLR
local lr = opt.LR + (maxLR - opt.LR) * math.max(0, (1 - x))
where opt.LR is the speciï¬ed lower (i.e., base) learning rate, epochCounter is the number of epochs of training, and lr is the computed learning rate. This policy is named triangular and is as described above, with two new in- put parameters deï¬ned: stepsize (half the period or cycle length) and max lr (the maximum learning rate boundary). This code varies the learning rate linearly between the min- imum (base lr) and the maximum (max lr).
In addition to the triangular policy, the following CLR policies are discussed in this paper:
1. triangular2; the same as the triangular policy ex- cept the learning rate difference is cut in half at the end of each cycle. This means the learning rate difference drops after each cycle.
2. exp_range; the learning rate varies between the minimum and maximum boundaries and each boundary value declines by an exponential factor of gamma^iteration (a unified sketch of all three policies is given below).
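For reference, the three policies can be collected into one Python function; this is a re-expression of the Torch snippet above with a signature of our own choosing, not code from the paper.

import math

def clr(iteration, base_lr, max_lr, stepsize, policy="triangular", gamma=1.0):
    """Cyclical learning rate at a given training iteration."""
    cycle = math.floor(1 + iteration / (2 * stepsize))
    x = abs(iteration / stepsize - 2 * cycle + 1)
    scale = max(0.0, 1.0 - x)
    if policy == "triangular2":
        scale /= 2.0 ** (cycle - 1)    # amplitude halves after each cycle
    elif policy == "exp_range":
        scale *= gamma ** iteration    # boundaries decay by gamma^iteration
    return base_lr + (max_lr - base_lr) * scale

# With the Figure 1 settings (base_lr 0.001, max_lr 0.006, stepsize 2000),
# the rate peaks at 0.006 in the middle of the first cycle:
print(clr(2000, 0.001, 0.006, 2000))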
# 3.2. How can one estimate a good value for the cycle length?
The length of a cycle and the input parameter stepsize can be easily computed from the number of iterations in an epoch. An epoch is calculated by dividing the number of training images by the batchsize used. For example, CIFAR-10 has 50, 000 training images and the batchsize is 100 so an epoch = 50, 000/100 = 500 iterations. The ï¬nal
accuracy results are actually quite robust to cycle length, but experiments show that it often is good to set stepsize equal to 2 to 10 times the number of iterations in an epoch. For example, setting stepsize = 8 × epoch with the CIFAR-10 training run (as shown in Figure 1) only gives slightly better results than setting stepsize = 2 × epoch.
Furthermore, there is a certain elegance to the rhythm of these cycles and it simpliï¬es the decision of when to drop learning rates and when to stop the current training run. Experiments show that replacing each step of a con- stant learning rate with at least 3 cycles trains the network weights most of the way and running for 4 or more cycles will achieve even better performance. Also, it is best to stop training at the end of a cycle, which is when the learning rate is at the minimum value and the accuracy peaks.
# 3.3. How can one estimate reasonable minimum and maximum boundary values?
There is a simple way to estimate reasonable minimum and maximum boundary values with one training run of the network for a few epochs. It is an "LR range test": run your model for several epochs while letting the learning rate increase linearly between low and high LR values. This test is enormously valuable whenever you are facing a new architecture or dataset.
Figure 3. Classiï¬cation accuracy as a function of increasing learn- ing rate for 8 epochs (LR range test).
The triangular learning rate policy provides a simple mechanism to do this. For example, in Caffe, set base lr to the minimum value and set max lr to the maximum value. Set both the stepsize and max iter to the same number of iterations. In this case, the learning rate will increase lin- early from the minimum value to the maximum value dur- ing this short run. Next, plot the accuracy versus learning rate. Note the learning rate value when the accuracy starts to increase and when the accuracy slows, becomes ragged, or
Dataset     LR policy     Iterations   Accuracy (%)
CIFAR-10    fixed         70,000       81.4
CIFAR-10    triangular2   25,000       81.4
CIFAR-10    decay         25,000       78.5
CIFAR-10    exp           70,000       79.1
CIFAR-10    exp_range     42,000       82.2
AlexNet     fixed         400,000      58.0
AlexNet     triangular2   400,000      58.4
AlexNet     exp           300,000      56.0
AlexNet     exp           460,000      56.5
AlexNet     exp_range     300,000      56.5
GoogLeNet   fixed         420,000      63.0
GoogLeNet   triangular2   420,000      64.4
GoogLeNet   exp           240,000      58.2
GoogLeNet   exp_range     240,000      60.2
Table 1. Comparison of accuracy results on test/validation data at the end of the training.
starts to fall. These two learning rates are good choices for bounds; that is, set base_lr to the first value and set max_lr to the latter value. Alternatively, one can use the rule of thumb that the optimum learning rate is usually within a factor of two of the largest one that converges [2] and set base_lr to 1/3 or 1/4 of max_lr.
Figure 3 shows an example of making this type of run with the CIFAR-10 dataset, using the architecture and hyper-parameters provided by Caffe. One can see from Fig- ure 3 that the model starts converging right away, so it is rea- sonable to set base lr = 0.001. Furthermore, above a learn- ing rate of 0.006 the accuracy rise gets rough and eventually begins to drop so it is reasonable to set max lr = 0.006.
Whenever one is starting with a new architecture or dataset, a single LR range test provides both a good LR value and a good range. Then one should compare runs with a ï¬xed LR versus CLR with this range. Whichever wins can be used with conï¬dence for the rest of oneâs experiments.
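The LR range test is easy to script around any training loop; train_step and eval_accuracy below are assumed hooks, not part of any particular framework.

def lr_range_test(train_step, eval_accuracy, min_lr, max_lr, num_iters):
    """Sweep the learning rate linearly and log (lr, accuracy) pairs.

    Plotting the result reproduces Figure 3: pick base_lr where accuracy
    starts rising and max_lr where it becomes ragged or falls.
    """
    history = []
    for i in range(num_iters):
        lr = min_lr + (max_lr - min_lr) * i / max(1, num_iters - 1)
        train_step(lr)
        history.append((lr, eval_accuracy()))
    return history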
# 4. Experiments
The purpose of this section is to demonstrate the effec- tiveness of the CLR methods on some standard datasets and with a range of architectures. In the subsections below, CLR policies are used for training with the CIFAR-10, CIFAR- 100, and ImageNet datasets. These three datasets and a va- riety of architectures demonstrate the versatility of CLR.
# 4.1. CIFAR-10 and CIFAR-100
# 4.1.1 Caffeâs CIFAR-10 architecture
The CIFAR-10 architecture and hyper-parameter settings on the Caffe website are fairly standard and were used here as a baseline. As discussed in Section 3.2, an epoch is equal
Figure 4. Classiï¬cation accuracy as a function of iteration for 70, 000 iterations.
Figure 5. Classiï¬cation accuracy as a function of iteration for the CIFAR-10 dataset using adaptive learning methods. See text for explanation.
to 500 iterations and a good setting for stepsize is 2,000. Section 3.3 discussed how to estimate reasonable minimum and maximum boundary values for the learning rate from Figure 3. All that is needed to optimally train the network is to set base_lr = 0.001 and max_lr = 0.006. For the triangular2 policy run shown in Figure 1, the stepsize and learning rate bounds are shown in Table 2.
base_lr   max_lr    stepsize   start    max_iter
0.001     0.005     2,000      0        16,000
0.0001    0.0005    1,000      16,000   22,000
0.00001   0.00005   500        22,000   25,000
Table 2. Hyper-parameter settings for CIFAR-10 example in Fig- ure 1.
Figure 1 shows the result of running with the triangular2 policy with the parameter settings in Table 2. As shown in Table 1, one obtains the same test classification accuracy of 81.4% after only 25,000 iterations with the triangular2 policy as obtained by running the standard hyper-parameter settings for 70,000 iterations.
Figure 6. Batch Normalization CIFAR-10 example (provided with the Caffe download).
One might hypothesize that the benefits from the triangular policy derive from reducing the learning rate, because this is when the accuracy climbs the most. As a test, a decay policy was implemented where the learning rate starts at the max_lr value and then is linearly reduced to the base_lr value for stepsize number of iterations. After that, the learning rate is fixed to base_lr. For the decay policy, max_lr = 0.007, base_lr = 0.001, and stepsize = 4000. Table 1 shows that the final accuracy is only 78.5%, providing evidence that both increasing and decreasing the learning rate are essential for the benefits of the CLR method.
Figure 4 compares the exp learning rate policy in Caffe with the new exp_range policy, using gamma = 0.99994 for both policies. The result is that when using the exp_range policy one can stop training at iteration 42,000 with a test accuracy of 82.2% (going to iteration 70,000 does not improve on this result). This is substantially better than the best test accuracy of 79.1% one obtains from using the exp learning rate policy.
The current Caffe download contains additional archi- tectures and hyper-parameters for CIFAR-10 and in partic- ular there is one with sigmoid non-linearities and batch nor- malization. Figure 6 compares the training accuracy using the downloaded hyper-parameters with a ï¬xed learning rate (blue curve) to using a cyclical learning rate (red curve). As can be seen in this Figure, the ï¬nal accuracy for the ï¬xed learning rate (60.8%) is substantially lower than the cyclical learning rate ï¬nal accuracy (72.2%). There is clear perfor- mance improvement when using CLR with this architecture containing sigmoids and batch normalization.
Experiments were carried out with architectures featur- ing both adaptive learning rate methods and CLR. Table 3 lists the ï¬nal accuracy values from various adaptive learning rate methods, run with and without CLR. All of the adap- tive methods in Table 3 were run by invoking the respective option in Caffe. The learning rate boundaries are given in Table 3 (just below the methodâs name), which were deter- mined by using the technique described in Section 3.3. Just the lower bound was used for base lr for the f ixed policy.
LR type/bounds      LR policy    Iterations   Accuracy (%)
Nesterov [19]       fixed        70,000       82.1
0.001 - 0.006       triangular   25,000       81.3
ADAM [16]           fixed        70,000       81.4
0.0005 - 0.002      triangular   25,000       79.8
                    triangular   70,000       81.1
RMSprop [27]        fixed        70,000       75.2
0.0001 - 0.0003     triangular   25,000       72.8
                    triangular   70,000       75.1
AdaGrad [5]         fixed        70,000       74.6
0.003 - 0.035       triangular   25,000       76.0
AdaDelta [29]       fixed        70,000       67.3
0.01 - 0.1          triangular   25,000       67.3
Table 3. Comparison of CLR with adaptive learning rate methods. The table shows accuracy results for the CIFAR-10 dataset on test data at the end of the training.
Table 3 shows that for some adaptive learning rate methods combined with CLR, the final accuracy after only 25,000 iterations is equivalent to the accuracy obtained without CLR after 70,000 iterations. For others, it was necessary (even with CLR) to run until 70,000 iterations to obtain similar results. Figure 5 shows the curves from running the Nesterov method with CLR (reached 81.3% accuracy in only 25,000 iterations) and the Adam method both with and without CLR (both needed 70,000 iterations). When using adaptive learning rate methods, the benefits from CLR are sometimes reduced, but CLR can still be valuable as it sometimes provides benefit at essentially no cost.
# 4.1.2 ResNets, Stochastic Depth, and DenseNets
Residual networks [10, 11], and the family of variations that have subsequently emerged, achieve state-of-the-art results on a variety of tasks. Here we provide comparison experiments between the original implementations and versions with CLR for three members of this residual network family: the original ResNet [10], Stochastic Depth networks [13], and the recent DenseNets [12]. Our experiments can be readily replicated because the authors of these papers make their Torch code available3. Since all three implementations are available in the Torch 7 framework, the experiments in this section were performed using Torch. In addition to the experiment in the previous section, these networks also incorporate batch normalization [15] and demonstrate the value of CLR for architectures with batch normalization.
Both CIFAR-10 and the CIFAR-100 datasets were used
# 3https://github.com/facebook/fb.resnet.torch, https://github.com/yueatsprograms/Stochastic Depth, https://github.com/liuzhuang13/DenseNet
in these experiments. The CIFAR-100 dataset is similar to the CIFAR-10 data but it has 100 classes instead of 10 and each class has 600 labeled examples.
Architecture    CIFAR-10 (LR)    CIFAR-100 (LR)
ResNet          92.8 (0.1)       71.2 (0.1)
ResNet          93.3 (0.2)       71.6 (0.2)
ResNet          91.8 (0.3)       71.9 (0.3)
ResNet+CLR      93.6 (0.1-0.3)   72.5 (0.1-0.3)
SD              94.6 (0.1)       75.2 (0.1)
SD              94.5 (0.2)       75.2 (0.2)
SD              94.2 (0.3)       74.6 (0.3)
SD+CLR          94.5 (0.1-0.3)   75.4 (0.1-0.3)
DenseNet        94.5 (0.1)       75.2 (0.1)
DenseNet        94.5 (0.2)       75.3 (0.2)
DenseNet        94.2 (0.3)       74.5 (0.3)
DenseNet+CLR    94.9 (0.1-0.2)   75.9 (0.1-0.2)
Table 4. Comparison of CLR with ResNets [10, 11], Stochastic Depth (SD) [13], and DenseNets [12]. The table shows the average accuracy of 5 runs for the CIFAR-10 and CIFAR-100 datasets on test data at the end of the training.
The results for these two datasets on these three architectures are summarized in Table 4. The left column gives the architecture and whether CLR was used in the experiments. The other two columns give the average final accuracy from five runs and, in parentheses, the initial learning rate or range used, which is reduced (for both the fixed learning rate and the range) during training according to the same schedule used in the original implementation. For all three architectures, the original implementation uses an initial LR of 0.1, which we use as a baseline.
The accuracy results in Table 4 in the right two columns are the average final test accuracies of five runs. The Stochastic Depth implementation was slightly different from the ResNet and DenseNet implementations in that the authors split the 50,000 training images into 45,000 training images and 5,000 validation images. However, the results reported in Table 4 for the SD architecture are test accuracies for the five runs. The learning rate range used by CLR was determined by the LR range test method and the cycle length was chosen as a tenth of the maximum number of epochs specified in the original implementation.
In addition to the accuracy results shown in Table 4, similar results were obtained in Caffe for DenseNets [12] on CIFAR-10 using the prototxt ï¬les provided by the au- thors. The average accuracy of ï¬ve runs with learning rates of 0.1, 0.2, 0.3 was 91.67%, 92.17%, 92.46%, respectively, but running with CLR within the range of 0.1 to 0.3, the average accuracy was 93.33%.
The results from all of these experiments show similar or better accuracy performance when using CLR versus using a ï¬xed learning rate, even though the performance drops at
Figure 7. AlexNet LR range test; validation classiï¬cation accuracy as a function of increasing learning rate.
Figure 8. Validation data classiï¬cation accuracy as a function of iteration for f ixed versus triangular.
some of the learning rate values within this range. These experiments conï¬rm that it is beneï¬cial to use CLR for a variety of residual architectures and for both CIFAR-10 and CIFAR-100.
# 4.2. ImageNet
The ImageNet dataset [21] is often used in deep learning literature as a standard for comparison. The ImageNet clas- siï¬cation challenge provides about 1, 000 training images for each of the 1, 000 classes, giving a total of 1, 281, 167 labeled training images.
# 4.2.1 AlexNet
The Caffe website provides the architecture and hyper- parameter ï¬les for a slightly modiï¬ed AlexNet [17]. These were downloaded from the website and used as a baseline. In the training results reported in this section, all weights
Figure 9. Validation data classiï¬cation accuracy as a function of iteration for f ixed versus triangular.
were initialized the same so as to avoid differences due to different random initializations.
Since the batchsize in the architecture ï¬le is 256, an epoch is equal to 1, 281, 167/256 = 5, 005 iterations. Hence, a reasonable setting for stepsize is 6 epochs or 30, 000 iterations.
Next, one can estimate reasonable minimum and maximum boundaries for the learning rate from Figure 7. It can be seen from this figure that the training doesn't start converging until at least 0.006, so setting base_lr = 0.006 is reasonable. However, for a fair comparison to the baseline where base_lr = 0.01, it is necessary to set base_lr to 0.01 for the triangular and triangular2 policies or else the majority of the apparent improvement in the accuracy will be from the smaller learning rate. As for the maximum boundary value, the training peaks and drops above a learning rate of 0.015, so max_lr = 0.015 is reasonable. For comparing the exp_range policy to the exp policy, setting base_lr = 0.006 and max_lr = 0.014 is reasonable, and in this case one expects the average accuracy of the exp_range policy to equal the accuracy from the exp policy.
Figure 9 compares the results of running with the f ixed versus the triangular2 policy for the AlexNet architecture. Here, the peaks at iterations that are multiples of 60,000 should produce a classiï¬cation accuracy that corresponds to the f ixed policy. Indeed, the accuracy peaks at the end of a cycle for the triangular2 policy are similar to the ac- curacies from the standard f ixed policy, which implies that the baseline learning rates are set quite well (this is also im- plied by Figure 7). As shown in Table 1, the ï¬nal accuracies from the CLR training run are only 0.4% better than the ac- curacies from the f ixed policy.
Figure 10 compares the results of running with the exp versus the exp range policy for the AlexNet architecture with gamma = 0.999995 for both policies. As expected,
Figure 10. Validation data classiï¬cation accuracy as a function of iteration for exp versus exp range.
Figure 11. GoogleNet LR range test; validation classiï¬cation ac- curacy as a function of increasing learning rate.
Figure 10 shows that the accuracies from the exp range policy do oscillate around the exp policy accuracies. The advantage of the exp range policy is that the accuracy of 56.5% is already obtained at iteration 300, 000 whereas the exp policy takes until iteration 460, 000 to reach 56.5%.
Finally, a comparison between the f ixed and exp poli- cies in Table 1 shows the f ixed and triangular2 policies produce accuracies that are almost 2% better than their ex- ponentially decreasing counterparts, but this difference is probably due to not having tuned gamma.
# 4.2.2 GoogLeNet/Inception Architecture
The GoogLeNet architecture was a winning entry to the ImageNet 2014 image classiï¬cation competition. Szegedy et al. [25] describe the architecture in detail but did not provide the architecture ï¬le. The architecture ï¬le publicly available from Princeton4 was used in the following exper- iments. The GoogLeNet paper does not state the learning rate values and the hyper-parameter solver ï¬le is not avail-
4vision.princeton.edu/pvt/GoogLeNet/
Figure 12. Validation data classiï¬cation accuracy as a function of iteration for f ixed versus triangular.
able for a baseline but not having these hyper-parameters is a typical situation when one is developing a new architec- ture or applying a network to a new dataset. This is a situa- tion that CLR readily handles. Instead of running numerous experiments to ï¬nd optimal learning rates, the base lr was set to a best guess value of 0.01.
The first step is to estimate the stepsize setting. Since the architecture uses a batchsize of 128, an epoch is equal to 1,281,167/128 = 10,009 iterations. Hence, good settings for stepsize would be 20,000, 30,000, or possibly 40,000. The results in this section are based on stepsize = 30,000. The next step is to estimate the bounds for the learning rate, which is found with the LR range test by making a run for 4 epochs where the learning rate linearly increases from 0.001 to 0.065 (Figure 11). This figure shows that one can use bounds between 0.01 and 0.04 and still have the model reach convergence. However, learning rates above 0.025 cause the training to converge erratically. For both the triangular2 and exp_range policies, base_lr was set to 0.01 and max_lr was set to 0.026. As above, the accuracy peaks for both these learning rate policies correspond to the same learning rate value as the fixed and exp policies. Hence, the comparisons below will focus on the peak accuracies from the CLR methods.
Figure 12 compares the results of running with the f ixed versus the triangular2 policy for this architecture (due to time limitations, each training stage was not run until it fully plateaued). In this case, the peaks at the end of each cycle for the triangular2 policy produce better accuracies than the f ixed policy. The ï¬nal accuracy shows an improvement from the network trained by the triangular2 policy (Ta- ble 1) to be 1.4% better than the accuracy from the f ixed policy. This demonstrates that the triangular2 policy im- proves on a âbest guessâ for a ï¬xed learning rate.
Figure 13 compares the results of running with the exp versus the exp range policy with gamma = 0.99998. Once again, the peaks at the end of each cycle for the
Figure 13. Validation data classiï¬cation accuracy as a function of iteration for exp versus exp range.
exp range policy produce better validation accuracies than the exp policy. The ï¬nal accuracy from the exp range pol- icy (Table 1) is 2% better than from the exp policy.
# 5. Conclusions
The results presented in this paper demonstrate the ben- eï¬ts of the cyclic learning rate (CLR) methods. A short run of only a few epochs where the learning rate linearly in- creases is sufï¬cient to estimate boundary learning rates for the CLR policies. Then a policy where the learning rate cyclically varies between these bounds is sufï¬cient to ob- tain near optimal classiï¬cation results, often with fewer it- erations. This policy is easy to implement and unlike adap- tive learning rate methods, incurs essentially no additional computational expense.
This paper shows that use of cyclic functions as a learning rate policy provides substantial improvements in performance for a range of architectures. In addition, the cyclic nature of these methods provides guidance as to when to drop the learning rate values (after 3 to 5 cycles) and when to stop the training. All of these factors reduce the guesswork in setting the learning rates and make these methods practical tools for everyone who trains neural networks.
This work has not explored the full range of applications for cyclic learning rate methods. We plan to determine if equivalent policies work for training different architectures, such as recurrent neural networks. Furthermore, we believe that a theoretical analysis would provide an improved un- derstanding of these methods, which might lead to improve- ments in the algorithms.
# References
[1] K. Bache, D. DeCoste, and P. Smyth. Hot swapping for online adaptation of optimization hyperparameters. arXiv preprint arXiv:1412.6599, 2014. 2
[2] Y. Bengio. Neural Networks: Tricks of the Trade, chap- ter Practical recommendations for gradient-based training of
deep architectures, pages 437â478. Springer Berlin Heidel- berg, 2012. 1, 2, 4
[3] T. M. Breuel. The effects of hyperparameters on sgd training of neural networks. arXiv preprint arXiv:1508.02788, 2015. 2
[4] Y. N. Dauphin, H. de Vries, J. Chung, and Y. Bengio. Rm- sprop and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015. 2 [5] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradi- ent methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121â2159, 2011. 2, 5
[6] A. P. George and W. B. Powell. Adaptive stepsizes for re- cursive estimation with applications in approximate dynamic programming. Machine learning, 65(1):167â198, 2006. 2 [7] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich fea- ture hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580â587. IEEE, 2014. 1
[8] A. Graves and N. Jaitly. Towards end-to-end speech recog- nition with recurrent neural networks. In Proceedings of the 31st International Conference on Machine Learning (ICML- 14), pages 1764â1772, 2014. 1
[9] C. Gulcehre and Y. Bengio. Adasecant: Robust adap- tive secant method for stochastic gradient. arXiv preprint arXiv:1412.7419, 2014. 2
[10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. Computer Vision and Pattern Recog- nition (CVPR), 2016 IEEE Conference on, 2015. 5, 6 [11] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016. 5, 6
[12] G. Huang, Z. Liu, and K. Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016. 5, 6
[13] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016. 5, 6
[14] B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, R. Cheng-Yue, F. Mujica, A. Coates, et al. An empirical evaluation of deep learning on highway driving. arXiv preprint arXiv:1504.01716, 2015. 1
[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 5
[16] D. Kingma and J. Lei-Ba. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2015. 2, 5
[17] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 2012. 1, 2, 6
[18] I. Loshchilov and F. Hutter. Sgdr: Stochastic gradient de- scent with restarts. arXiv preprint arXiv:1608.03983, 2016. 2
[19] Y. Nesterov. A method of solving a convex programming In Soviet Mathe- problem with convergence rate o (1/k2). matics Doklady, volume 27, pages 372â376, 1983. 5
[20] S. Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016. 2
[21] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015. 6
[22] T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. arXiv preprint arXiv:1206.1106, 2012. 2
[23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 1
[24] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Infor- mation Processing Systems, pages 3104â3112, 2014. 1 [25] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabi- novich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. 1, 2, 7
[26] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face veriï¬ca- tion. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1701â1708. IEEE, 2014. 1 [27] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. 2, 5
[28] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014. 1
[29] M. D. Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012. 2, 5
# A. Instructions for adding CLR to Caffe
Modify SGDSolver<Dtype>::GetLearningRate(), which is in sgd_solver.cpp (near line 38):

} else if (lr_policy == "triangular") {
  int itr = this->iter_ - this->param_.start_lr_policy();
  if (itr > 0) {
    int cycle = itr / (2 * this->param_.stepsize());
    float x = (float) (itr - (2 * cycle + 1) * this->param_.stepsize());
    x = x / this->param_.stepsize();
    rate = this->param_.base_lr() + (this->param_.max_lr() - this->param_.base_lr())
           * std::max(double(0), (1.0 - fabs(x)));
  } else {
    rate = this->param_.base_lr();
  }
} else if (lr_policy == "triangular2") {
  int itr = this->iter_ - this->param_.start_lr_policy();
  if (itr > 0) {
    int cycle = itr / (2 * this->param_.stepsize());
    float x = (float) (itr - (2 * cycle + 1) * this->param_.stepsize());
    x = x / this->param_.stepsize();
    rate = this->param_.base_lr() + (this->param_.max_lr() - this->param_.base_lr())
           * std::max(double(0), (1.0 - fabs(x)) / pow(2.0, double(cycle)));
  } else {
    rate = this->param_.base_lr();
  }
}

Modify message SolverParameter, which is in caffe.proto (near line 100):

optional float start_lr_policy = 41;
optional float max_lr = 42;  // The maximum learning rate for CLR policies
# B. Instructions for adding CLR to Keras
Please see https://github.com/bckenstler/CLR. | {
"id": "1504.01716"
} |
1506.01066 | Visualizing and Understanding Neural Models in NLP | While neural networks have been successfully applied to many NLP tasks the
resulting vector-based models are very difficult to interpret. For example it's
not clear how they achieve {\em compositionality}, building sentence meaning
from the meanings of words and phrases. In this paper we describe four
strategies for visualizing compositionality in neural models for NLP, inspired
by similar work in computer vision. We first plot unit values to visualize
compositionality of negation, intensification, and concessive clauses, allow us
to see well-known markedness asymmetries in negation. We then introduce three
simple and straightforward methods for visualizing a unit's {\em salience}, the
amount it contributes to the final composed meaning: (1) gradient
back-propagation, (2) the variance of a token from the average word node, (3)
LSTM-style gates that measure information flow. We test our methods on
sentiment using simple recurrent nets and LSTMs. Our general-purpose methods
may have wide applications for understanding compositionality and other
semantic properties of deep networks , and also shed light on why LSTMs
outperform simple recurrent nets. | http://arxiv.org/pdf/1506.01066 | Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky | cs.CL | null | null | cs.CL | 20150602 | 20160108

arXiv:1506.01066v2 [cs.CL] 8 Jan 2016
# Visualizing and Understanding Neural Models in NLP
Jiwei Li1, Xinlei Chen2, Eduard Hovy2 and Dan Jurafsky1 1Computer Science Department, Stanford University, Stanford, CA 94305, USA 2Language Technology Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
{jiweil,jurafsky}@stanford.edu {xinleic,ehovy}@andrew.cmu.edu
# Abstract
While neural networks have been success- fully applied to many NLP tasks the re- sulting vector-based models are very difï¬- cult to interpret. For example itâs not clear how they achieve compositionality, build- ing sentence meaning from the meanings of words and phrases. In this paper we describe strategies for visualizing composi- tionality in neural models for NLP, inspired by similar work in computer vision. We ï¬rst plot unit values to visualize composi- tionality of negation, intensiï¬cation, and concessive clauses, allowing us to see well- known markedness asymmetries in nega- tion. We then introduce methods for visu- alizing a unitâs salience, the amount that it contributes to the ï¬nal composed meaning from ï¬rst-order derivatives. Our general- purpose methods may have wide applica- tions for understanding compositionality and other semantic properties of deep net- works.
# Introduction
Neural models match or outperform the performance of other state-of-the-art systems on a variety of NLP tasks. Yet unlike traditional feature-based classifiers that assign and optimize weights to varieties of human interpretable features (parts-of-speech, named entities, word shapes, syntactic parse features etc.) the behavior of deep learning models is much less easily interpreted. Deep learning models mainly operate on word embeddings (low-dimensional, continuous, real-valued vectors) through multi-layer neural architectures, each layer of which is characterized as an array of hidden neuron units. It is unclear how deep learning models deal with composition, implementing functions like negation or intensification, or combining meaning from different parts of the sentence, filtering away the informational chaff from the wheat, to build sentence meaning.
In this paper, we explore multiple strategies to interpret meaning composition in neural models. We employ traditional methods like representation plotting, and introduce simple strategies for measuring how much a neural unit contributes to meaning composition, its "salience" or importance, using first derivatives.
Visualization techniques/models presented in this work shed important light on how neural models work: for example, we illustrate that LSTM's success is due to its ability to maintain a much sharper focus on the important key words than other models; composition in multiple clauses works competitively; the models are able to capture negative asymmetry, an important property of semantic compositionality in natural language understanding; and there is sharp dimensional locality, with certain dimensions marking negation and quantification in a surprisingly localist manner. Though our attempts only touch superficial points in neural models, and each method has its pros and cons, together they may offer some insights into the behaviors of neural models in language based tasks, marking one initial step toward understanding how they achieve meaning composition in natural language processing.
The next section describes some visualization models in vision and NLP that have inspired this work. We describe datasets and the adopted neural models in Section 3. Different visualization strategies and corresponding analytical results are presented separately in Sections 4, 5 and 6, followed by a brief conclusion.
# 2 A Brief Review of Neural Visualization
Similarity is commonly visualized graphically, generally by projecting the embedding space into two dimensions and observing that similar words tend to be clustered together (e.g., Elman (1989), Ji and Eisenstein (2014), Faruqui and Dyer (2014)). Karpathy et al. (2015) attempt to interpret recurrent neural models from a statistical point of view but do not deeply touch the compositionality of meanings. Other relevant attempts include (Fyshe et al., 2015; Faruqui et al., 2015).
Methods for interpreting and visualizing neural models have been much more significantly explored in vision, especially for Convolutional Neural Networks (CNNs or ConvNets) (Krizhevsky et al., 2012), multi-layer neural networks in which the original matrix of image pixels is convolved and pooled as it is passed on to hidden layers. ConvNet visualizing techniques consist mainly in mapping the different layers of the network (or other features like SIFT (Lowe, 2004) and HOG (Dalal and Triggs, 2005)) back to the initial image input, thus capturing the human-interpretable information they represent in the input, and how units in these layers contribute to any final decisions (Simonyan et al., 2013; Mahendran and Vedaldi, 2014; Nguyen et al., 2014; Szegedy et al., 2013; Girshick et al., 2014; Zeiler and Fergus, 2014). Such methods include:
(1) Inversion: Inverting the representations by training an additional model to project outputs from different neural levels back to the initial input images (Mahendran and Vedaldi, 2014; Vondrick et al., 2013; Weinzaepfel et al., 2011). The intuition behind reconstruction is that the pixels that are reconstructable from the current representations are the content of the representation. The inverting algorithms allow the current representation to align with corresponding parts of the original images.
(2) Back-propagation (Erhan et al., 2009; Simonyan et al., 2013) and Deconvolutional Networks (Zeiler and Fergus, 2014): Errors are back-propagated from output layers to each intermediate layer and finally to the original image inputs. Deconvolutional Networks work in a similar way by projecting outputs back to initial inputs layer by layer, each layer associated with one supervised model for projecting upper ones to lower ones. These strategies make it possible to spot active regions, or ones that contribute the most to the final classification decision.
(3) Generation: This group of work generates images in a speciï¬c class from a sketch guided by already trained neural models (Szegedy et al., 2013; Nguyen et al., 2014). Models begin with an image whose pixels are randomly initialized and mutated at each step. The speciï¬c layers that are activated at different stages of image construction can help
in interpretation.
While the above strategies inspire the work we present in this paper, there are fundamental differences between vision and NLP. In NLP words function as basic units, and hence (word) vectors rather than single pixels are the basic units. Sequences of words (e.g., phrases and sentences) are also presented in a more structured way than arrangements of pixels. In parallel to our research, independent research (Karpathy et al., 2015) has been conducted to explore a similar direction from an error-analysis point of view, by analyzing predictions and errors from recurrent neural models. Other distantly relevant works include: Murphy et al. (2012) and Fyshe et al. (2015), who used a manual task to quantify the interpretability of semantic dimensions by presenting human users with a list of words and asking them to choose the one that does not belong to the list. A similar strategy is adopted in (Faruqui et al., 2015) by extracting top-ranked words in each vector dimension.
# 3 Datasets and Neural Models
We explored two datasets on which neural models are trained, one of which is of relatively small scale and the other of large scale.
# 3.1 Stanford Sentiment Treebank
Stanford Sentiment Treebank is a benchmark dataset widely used for neural model evaluations. The dataset contains gold-standard sentiment labels for every parse tree constituent, from sentences to phrases to individual words, for 215,154 phrases in 11,855 sentences. The task is to perform both fine-grained (very positive, positive, neutral, negative and very negative) and coarse-grained (positive vs negative) classification at both the phrase and sentence level. For more details about the dataset, please refer to Socher et al. (2013).
While many studies on this dataset use recursive parse-tree models, in this work we employ only standard sequence models (RNNs and LSTMs) since these are the most widely used current neural models, and sequential visualization is more straightforward. We therefore first transform each parse tree node to a sequence of tokens. The sequence is first mapped to a phrase/sentence representation and fed into a softmax classifier. Phrase/sentence representations are built with the following three models: Standard Recurrent Sequence with TANH activation functions, LSTMs and
Bidirectional LSTMs. For details about the three models, please refer to Appendix.
Training AdaGrad with mini-batch was used for training, with parameters (L2 penalty, learning rate, mini-batch size) tuned on the development set. The number of iterations is treated as a variable to tune, and parameters are harvested based on the best performance on the dev set. The number of dimensions for the word and hidden layer are set to 60 with a 0.1 dropout rate. The standard recurrent model achieves 0.429 (fine grained) and 0.850 (coarse grained) accuracy at the sentence level; LSTM achieves 0.469 and 0.870, and Bidirectional LSTM 0.488 and 0.878, respectively.
# 3.2 Sequence-to-Sequence Models
SEQ2SEQ models are neural models aiming at generating a sequence of output texts given inputs. Theoretically, SEQ2SEQ models can be adapted to NLP tasks that can be formalized as predicting outputs given inputs, serving different purposes depending on the inputs and outputs: machine translation, where inputs correspond to source sentences and outputs to target sentences (Sutskever et al., 2014; Luong et al., 2014); or conversational response generation, if inputs correspond to messages and outputs correspond to responses (Vinyals and Le, 2015; Li et al., 2015). SEQ2SEQ models need to be trained on massive amounts of data for implicit semantic and syntactic relations between pairs to be learned. SEQ2SEQ models map an input sequence to a vector representation using LSTM models and then sequentially predict tokens based on the pre-obtained representation. The model defines a distribution over outputs (Y) and sequentially predicts tokens given inputs (X) using a softmax function.
P(Y|X) = \prod_{t=1}^{n_y} p(y_t \mid x_1, \dots, x_{n_x}, y_1, \dots, y_{t-1}) = \prod_{t=1}^{n_y} \frac{\exp(f(h_{t-1}, e_{y_t}))}{\sum_{y'} \exp(f(h_{t-1}, e_{y'}))}
where f(h_{t-1}, e_{y_t}) denotes the activation function between h_{t-1} and e_{y_t}, where h_{t-1} is the representation output from the LSTM at time t - 1. For each time step in word prediction, SEQ2SEQ models combine the current token with previously built embeddings for next-step word prediction.
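To make the prediction step concrete, the following is a minimal NumPy sketch of one decoding step under the distribution above. It assumes, purely for illustration, that the score f(h_{t-1}, e_y) is a dot product between the previous hidden state and a candidate output embedding; the vocabulary size and dimensions are arbitrary toy values.

```python
import numpy as np

# One next-token prediction step of a SEQ2SEQ decoder (illustrative sketch).
rng = np.random.default_rng(0)
V, d = 10, 8
E_out = rng.normal(size=(V, d))   # output embeddings e_y, one row per word
h_prev = rng.normal(size=d)       # h_{t-1}: hidden state from the previous step

scores = E_out @ h_prev           # assumed f(h_{t-1}, e_y): a dot product
probs = np.exp(scores - scores.max())
probs /= probs.sum()              # softmax: p(y_t | x, y_<t)

y_t = int(np.argmax(probs))       # greedy choice of the next token
print(y_t, probs[y_t])
```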
For easy visualization purposes, we turn to the most straightforward task, the autoencoder, where
inputs and outputs are identical. The goal of an autoencoder is to reconstruct inputs from the pre-obtained representation. We would like to see how individual input tokens affect the overall sentence representation and each of the tokens to predict in outputs. We trained the autoencoder on a subset of the WMT'14 corpus containing 4 million English sentences with an average length of 22.5 words. We followed the training protocols described in (Sutskever et al., 2014).
# 4 Representation Plotting
We begin with simple plots of representations to shed light on local compositions using Stanford Sentiment Treebank.
Local Composition Figure 1 shows a 60d heatmap vector for the representation of selected words/phrases/sentences, with an emphasis on extent modifications (adverbial and adjectival) and negation. Embeddings for phrases or sentences are attained by composing word representations from the pretrained model.
The intensification part of Figure 1 shows suggestive patterns where values for a few dimensions are strengthened by modifiers like "a lot" (the red bar in the first example), "so much" (the red bar in the second example), and "incredibly". Though the patterns for negations are not as clear, there is still a consistent reversal for some dimensions, visible as a shift between blue and red for dimensions boxed on the left.
We then visualize words and phrases using t-SNE (Van der Maaten and Hinton, 2008) in Figure 2, deliberately adding in some random words for comparative purposes. As can be seen, neural models nicely learn the properties of local compositionality, clustering negation+positive words ("not nice", "not good") together with negative words. Note also the asymmetry of negation: "not bad" is clustered more with the negative than the positive words (as shown both in Figure 1 and 2). This asymmetry has been widely discussed in linguistics, for example as arising from markedness, since "good" is the unmarked direction of the scale (Clark and Clark, 1977; Horn, 1989; Fraenkel and Schul, 2008). This suggests that although the model does seem to focus on certain units for negation in Figure 1, the neural model is not just learning to apply a fixed transform for "not" but is able to capture the subtle differences in the composition of different words.
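As a hedged illustration of this kind of plot, the sketch below projects a handful of phrase vectors to 2D with scikit-learn's t-SNE. The vectors here are random stand-ins; in the paper they would be the composed 60d representations from the pretrained sentiment model.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Hypothetical phrase labels and stand-in vectors for a t-SNE scatter plot.
phrases = ["good", "not good", "bad", "not bad", "terrific", "awful"]
rng = np.random.default_rng(1)
X = rng.normal(size=(len(phrases), 60))   # 60d, matching the paper's setup

# perplexity must be smaller than the number of samples; tiny toy set here
Y = TSNE(n_components=2, perplexity=3, random_state=1).fit_transform(X)

plt.scatter(Y[:, 0], Y[:, 1])
for (x, y), label in zip(Y, phrases):
    plt.annotate(label, (x, y))
plt.show()
```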
Figure 2: t-SNE Visualization on latent representations for modiï¬cations and negations.
Figure 4: t-SNE Visualization for clause composition.
Concessive Sentences In concessive sentences, two clauses have opposite polarities, usually related by a contrary-to-expectation implicature. We plot evolving representations over time for two concessives in Figure 3. The plots suggest:
1. For tasks like sentiment analysis whose goal is to predict a specific semantic dimension (as opposed to general tasks like language model word prediction), too large a dimensionality leaves many dimensions non-functional (with values close to 0), causing two sentences of opposite sentiment to differ only in a few dimensions. This may explain why more dimensions don't necessarily lead to better performance on such tasks (for example, as reported in (Socher et al., 2013), optimal performance is achieved when word dimensionality is set to between 25 and 35).
2. Both sentences contain two clauses connected by the conjunction "though". Such two-clause sentences might either work collaboratively (models would remember the word "though" and make the second clause share the same sentiment orientation as the first) or competitively, with the stronger one dominating. The region within the dotted line in Figure 3(a) favors the second assumption: the difference between the two sentences is diluted when the final words ("interesting" and "boring") appear.
Clause Composition In Figure 4 we explore this clause composition in more detail. Representations move closer to the negative sentiment region by adding negative clauses like âalthough it had bad actingâ or âbut it is too longâ to the end of a simply positive âI like the movieâ. By contrast, adding a concessive clause to a negative clause does not move toward the positive; âI hate X but ...â is still very negative, not that different than âI hate Xâ. This difference again suggests the model is able to capture negative asymmetry (Clark and Clark, 1977; Horn, 1989; Fraenkel and Schul, 2008).
Figure 5: Saliency heatmap for "I hate the movie ." Each row corresponds to saliency scores for the correspondent word representation, with each grid representing each dimension.
Figure 6: Saliency heatmap for "I hate the movie I saw last night ."
Figure 7: Saliency heatmap for "I hate the movie though the plot is interesting ."
# 5 First-Derivative Saliency
In this section, we describe another strategy, inspired by the back-propagation strategy in vision (Erhan et al., 2009; Simonyan et al., 2013). It measures how much each input unit contributes to the final decision, which can be approximated by first derivatives.

More formally, for a classification model, an input E is associated with a gold-standard class label c. (Depending on the NLP task, an input could be the embedding for a word or a sequence of words, while labels could be POS tags, sentiment labels, the next word index to predict, etc.) Given embeddings E for input words with the associated gold class label c, the trained model associates the pair (E, c) with a score Sc(E). The goal is to decide which units of E make the most significant contribution to Sc(E), and thus to the decision, the choice of class label c.

In the case of deep neural models, the class score Sc(e) is a highly non-linear function. We approximate Sc(e) with a linear function of e by computing
[Figure 1 panels. Intensification: "I like it" / "I like it a lot", "I hate it" / "I hate it so much", "the movie is incredibly good". Negation: "good" / "not good", "not bad", "like" / "n't like".]

Figure 1: Visualizing intensification and negation. Each vertical bar shows the value of one dimension in the final sentence/phrase representation after composition. Embeddings for phrases or sentences are attained by composing word representations from the pretrained model.
the first-order Taylor expansion

S_c(e) \approx w(e)^T e + b   (1)
where w(e) is the derivative of Sc with respect to the embedding e.
w(e) = \left.\frac{\partial S_c}{\partial e}\right|_{e}   (2)
The magnitude (absolute value) of the derivative indicates the sensitiveness of the final decision to a change in one particular dimension, telling us how much one specific dimension of the word embedding contributes to the final decision. The saliency score is given by
S(e) = |w(e)| (3)
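A minimal PyTorch sketch of this first-derivative saliency follows. The linear scoring model is a toy stand-in for a trained classifier; only the autograd pattern (score, backward pass, absolute gradient) mirrors equations (1)-(3).

```python
import torch

# First-derivative saliency: |dS_c/de| for each embedding dimension.
torch.manual_seed(0)
seq_len, dim, n_classes = 5, 60, 2
E = torch.randn(seq_len, dim, requires_grad=True)   # word embeddings e
W = torch.randn(dim, n_classes)                     # toy classifier weights

S = E.sum(0) @ W            # class scores S_c(E) from a stand-in linear model
c = S.argmax()              # predicted (or gold) class label c
S[c].backward()             # back-propagate the class score to the embeddings

saliency = E.grad.abs()     # S(e) = |w(e)|, eq. (3); shape (seq_len, dim)
print(saliency.shape)
```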
# 5.1 Results on Stanford Sentiment Treebank
We first illustrate results on the Stanford Sentiment Treebank. We plot in Figures 5, 6 and 7 the saliency scores (the absolute value of the derivative of the loss function with respect to each dimension of all word inputs) for three sentences, applying the trained model to each sentence. Each row corresponds to the saliency scores for the correspondent word representation, with each grid representing each dimension. The examples are based on the clear sentiment indicator "hate" that lends them all negative sentiment.

Figure 3: Representations over time from LSTMs. Each column corresponds to outputs from the LSTM at each time-step (representations obtained after combining the current word embedding with previously built embeddings). Each grid in the column corresponds to one dimension of the current time-step representation. The last rows correspond to absolute differences for each time step between the two sequences.
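For reference, a small matplotlib sketch for rendering word-by-dimension saliency maps like those in Figures 5-7; the scores here are random placeholders for the derivative magnitudes computed above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Saliency heatmap: rows are words, columns are embedding dimensions.
words = ["I", "hate", "the", "movie"]
rng = np.random.default_rng(2)
saliency = rng.random((len(words), 60))   # placeholder for real scores

plt.imshow(saliency, aspect="auto", cmap="viridis")
plt.yticks(range(len(words)), words)
plt.xlabel("embedding dimension")
plt.colorbar(label="|dS_c/de|")
plt.show()
```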
"I hate the movie" All three models assign high saliency to "hate" and dampen the influence of other tokens. LSTM offers a clearer focus on "hate" than the standard recurrent model, but the bi-directional LSTM shows the clearest focus, attaching almost zero emphasis on words other than "hate". This is presumably due to the gate structures in LSTMs and Bi-LSTMs that control information flow, making these architectures better at filtering out less relevant information.
Figure 8: Variance visualization.
"I hate the movie that I saw last night" All three models assign the correct sentiment. The simple recurrent models again do poorly at filtering out irrelevant information, assigning too much salience to words unrelated to sentiment. However none of the models suffer from the gradient vanishing problem despite this sentence being longer; the salience of "hate" still stands out after 7-8 following convolutional operations.

"I hate the movie though the plot is interesting" The simple recurrent model emphasizes only the second clause "the plot is interesting", assigning no credit to the first clause "I hate the movie". This might seem to be caused by a vanishing gradient, yet the model correctly classifies the sentence as very negative, suggesting that it is successfully incorporating information from the first negative clause. We separately tested the individual clause "though the plot is interesting". The standard recurrent model confidently labels it as positive. Thus despite the lower saliency scores for words in the first clause, the simple recurrent system manages to rely on that clause and downplay the information from the latter positive clause, despite the higher saliency scores of the later words. This illustrates a limitation of saliency visualization: first-order derivatives don't capture all the information we would like to visualize, perhaps because they are only a rough approximation to individual contributions and might not suffice to deal with highly non-linear cases. By contrast, the LSTM emphasizes the first clause, sharply dampening the influence from the second clause, while the Bi-LSTM focuses on both "hate the movie" and "plot is interesting".

# 5.2 Results on Sequence-to-Sequence Autoencoder
Figure 9 shows the saliency heatmap for the autoencoder in terms of predicting the correspondent token at each time step. We compute first derivatives for each preceding word through back-propagation as decoding goes on. Each grid corresponds to the magnitude of the average saliency value for each 1000-dimensional word vector. The heatmaps give a clear overview of the behavior of neural models during decoding. Observations can be summarized as follows:

1. For each time step of word prediction, SEQ2SEQ models manage to link the word to predict back to the correspondent region of the inputs (automatically learning alignments); e.g., the input region centering around the token "hate" exerts more impact when the token "hate" is to be predicted, and similarly for the tokens "movie", "plot" and "boring".

2. Neural decoding combines the previously built representation with the word predicted at the current step. As decoding proceeds, the influence
Figure 9: Saliency heatmap for SEQ2SEQ auto-encoder in terms of predicting correspondent token at each time step.
of the initial input on decoding (i.e., tokens in source sentences) gradually diminishes as more previously-predicted words are encoded in the vector representations. Meanwhile, the influence of the language model gradually dominates: when the word "boring" is to be predicted, models attach more weight to the earlier predicted tokens "plot" and "is" but less to the correspondent regions in the inputs, i.e., the word "boring" in the inputs.
# 6 Average and Variance
For settings where word embeddings are treated as parameters to optimize from scratch (as opposed to using pre-trained embeddings), we propose a second, surprisingly easy and direct way to visualize important indicators. We first compute the average of the word embeddings for all the words within the sentence. The measure of salience or influence for a word is its deviation from this average. The idea is that during training, models would learn to render indicators different from non-indicator words, enabling them to stand out even after many layers of computation.
Figure 8 shows a map of variance; each grid corresponds to the value of \|e_{i,j} - \frac{1}{N_S}\sum_{i' \in N_S} e_{i',j}\|^2, where e_{i,j} denotes the value of the j-th dimension of word i and N_S denotes the number of tokens within the sentence.
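Before turning to the figure, here is a brief NumPy sketch of this variance measure, with a random embedding matrix standing in for learned word vectors.

```python
import numpy as np

# Variance-based salience: deviation of each word vector from the
# sentence-average embedding. E is a (num_words, dim) embedding matrix.
rng = np.random.default_rng(3)
E = rng.normal(size=(6, 60))

mean = E.mean(axis=0, keepdims=True)      # (1/N_S) * sum over words
deviation = (E - mean) ** 2               # per-dimension squared deviation
word_salience = deviation.sum(axis=1)     # one aggregate score per word
print(word_salience)
```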
As the figure shows, the variance-based salience measure also does a good job of emphasizing the relevant sentiment words. The method does have shortcomings: (1) it can only be used in scenarios where word embeddings are parameters to learn; (2) it is not clear how well the model is able to visualize local compositionality.
# 7 Conclusion
In this paper, we offer several methods to help visualize and interpret neural models, to understand how neural models are able to compose meanings, demonstrating asymmetries of negation and explaining some aspects of the strong performance of LSTMs at these tasks.
Though our attempts only touch superficial points in neural models, and each method has its pros and cons, together they may offer some insights into the behaviors of neural models in language based tasks, marking one initial step toward understanding how they achieve meaning composition in natural language processing. Our future work includes using the results of the visualization to perform error analysis, and understanding the strengths and limitations of different neural models.
# References
Herbert H. Clark and Eve V. Clark. 1977. Psychology and language: An introduction to psycholinguistics. Harcourt Brace Jovanovich.
Navneet Dalal and Bill Triggs. 2005. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 886-893. IEEE.
Jeffrey L. Elman. 1989. Representation and structure in connectionist models. Technical Report 8903,
Center for Research in Language, University of Cal- ifornia, San Diego.
Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2009. Visualizing higher-layer features of a deep network. Dept. IRO, Université de Montréal, Tech. Rep.
Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of EACL, volume 2014.
Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah Smith. 2015. Sparse overcom- plete word vector representations. arXiv preprint arXiv:1506.02004.
Tamar Fraenkel and Yaacov Schul. 2008. The mean- ing of negated adjectives. Intercultural Pragmatics, 5(4):517â540.
Alona Fyshe, Leila Wehbe, Partha P Talukdar, Brian Murphy, and Tom M Mitchell. 2015. A compo- sitional and interpretable semantic space. Proceed- ings of the NAACL-HLT, Denver, USA.
Ross Girshick, Jeff Donahue, Trevor Darrell, and Jiten- dra Malik. 2014. Rich feature hierarchies for accu- rate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580â587. IEEE.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Neural computation, Long short-term memory. 9(8):1735â1780.
Laurence R. Horn. 1989. A natural history of negation, volume 960. University of Chicago Press Chicago.
Yangfeng Ji and Jacob Eisenstein. 2014. Represen- tation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics, volume 1, pages 13â24.
Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classiï¬cation with deep con- volutional neural networks. In Advances in neural information processing systems, pages 1097â1105.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objec- tive function for neural conversation models. arXiv preprint arXiv:1510.03055.
David G Lowe. 2004. Distinctive image features from International journal of scale-invariant keypoints. computer vision, 60(2):91â110.
Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2014. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206.
Aravindh Mahendran and Andrea Vedaldi. 2014. Un- derstanding deep image representations by inverting them. arXiv preprint arXiv:1412.0035.
Brian Murphy, Partha Pratim Talukdar, and Tom M Mitchell. 2012. Learning effective and interpretable semantic models using non-negative sparse embed- ding. In COLING, pages 1933â1950.
Anh Nguyen, Jason Yosinski, and Jeff Clune. 2014. Deep neural networks are easily fooled: High conï¬- dence predictions for unrecognizable images. arXiv preprint arXiv:1412.1897.
Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673â2681.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisser- man. 2013. Deep inside convolutional networks: Visualising image classiï¬cation models and saliency maps. arXiv preprint arXiv:1312.6034.
Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), volume 1631, page 1642. Citeseer.
Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems, pages 3104â3112.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.
Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. arXiv preprint arXiv:1506.05869.
Carl Vondrick, Aditya Khosla, Tomasz Malisiewicz, and Antonio Torralba. 2013. Hoggles: Visualizing object detection features. In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 1-8. IEEE.
Philippe Weinzaepfel, Herv´e J´egou, and Patrick P´erez. 2011. Reconstructing an image from its local de- scriptors. In Computer Vision and Pattern Recogni- tion (CVPR), 2011 IEEE Conference on, pages 337â 344. IEEE.
Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Com- puter VisionâECCV 2014, pages 818â833. Springer.
# Appendix
Recurrent Models A recurrent network successively takes word w_t at step t, combines its vector representation e_t with the previously built hidden vector h_{t-1} from time t - 1, calculates the resulting current embedding h_t, and passes it to the next step. The embedding h_t for the current time t is thus:
ht = f (W · htâ1 + V · et) (4)
where W and V denote compositional matrices. If N_s denotes the length of the sequence, h_{N_s} represents the whole sequence S; h_{N_s} is used as input to a softmax function for classification tasks.
Multi-layer Recurrent Models Multi-layer recurrent models extend the one-layer recurrent structure by operating on a deep neural architecture that enables more expressivity and flexibility. The model associates each time step at each layer with a hidden representation h_{l,t}, where l ∈ [1, L] denotes the index of the layer and t denotes the index of the time step. h_{l,t} is given by:
ht,l = f (W · htâ1,l + V · ht,lâ1) (5)
where h_{t,0} = e_t, which is the original word embedding input at the current time step.
Long-short Term Memory The LSTM model, first proposed in (Hochreiter and Schmidhuber, 1997), maps an input sequence to a fixed-sized vector by sequentially convoluting the current representation with the output representation of the previous step. LSTM associates each time step with an input, control and memory gate, and tries to minimize the impact of unrelated information. i_t, f_t and o_t denote the gate states at time t; h_t denotes the hidden vector output from the LSTM model at time t, and e_t denotes the word embedding input at time t. We have
i_t = σ(W_i · e_t + V_i · h_{t-1})
f_t = σ(W_f · e_t + V_f · h_{t-1})
o_t = σ(W_o · e_t + V_o · h_{t-1})
l_t = tanh(W_l · e_t + V_l · h_{t-1})
c_t = f_t · c_{t-1} + i_t × l_t
h_t = o_t · c_t   (6)
where σ denotes the sigmoid function, i_t, f_t and o_t take values within the range [0, 1], and × denotes pairwise (element-wise) product.
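A self-contained NumPy sketch of one step of recurrence (6) follows. The weights are random stand-ins for trained parameters, the gates are computed as vectors (the usual convention), and h_t = o_t · c_t is used for the output line, matching the cell state defined above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One LSTM step per eq. (6); d is a toy hidden/embedding dimensionality.
d = 8
rng = np.random.default_rng(4)
Wi, Vi, Wf, Vf, Wo, Vo, Wl, Vl = (rng.normal(size=(d, d)) * 0.1 for _ in range(8))

def lstm_step(e_t, h_prev, c_prev):
    i_t = sigmoid(Wi @ e_t + Vi @ h_prev)   # input gate
    f_t = sigmoid(Wf @ e_t + Vf @ h_prev)   # forget gate
    o_t = sigmoid(Wo @ e_t + Vo @ h_prev)   # output gate
    l_t = np.tanh(Wl @ e_t + Vl @ h_prev)   # candidate update
    c_t = f_t * c_prev + i_t * l_t          # memory cell
    h_t = o_t * c_t                         # hidden state
    return h_t, c_t

h, c = np.zeros(d), np.zeros(d)
for e_t in rng.normal(size=(5, d)):         # a 5-token toy sequence
    h, c = lstm_step(e_t, h, c)
print(h)
```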
A multi-layer LSTM model works in the same way as multi-layer recurrent models by enabling multi-layer compositions.
Bidirectional Models (Schuster and Paliwal, 1997) add bidirectionality to the recurrent framework, where embeddings for each time step are calculated both forwardly and backwardly:
h_t^→ = f(W^→ · h_{t-1}^→ + V^→ · e_t)
h_t^← = f(W^← · h_{t+1}^← + V^← · e_t)   (7)
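The sketch below runs the two passes of equation (7) with a simple tanh cell and concatenates the final states, as the next paragraph describes; all weights are illustrative random stand-ins.

```python
import numpy as np

# Bidirectional recurrent pass per eq. (7) with a plain tanh cell.
d = 8
rng = np.random.default_rng(5)
Wf_, Vf_, Wb_, Vb_ = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
seq = rng.normal(size=(5, d))

h_fwd = np.zeros(d)
for e_t in seq:                       # left-to-right pass
    h_fwd = np.tanh(Wf_ @ h_fwd + Vf_ @ e_t)

h_bwd = np.zeros(d)
for e_t in seq[::-1]:                 # right-to-left pass
    h_bwd = np.tanh(Wb_ @ h_bwd + Vb_ @ e_t)

features = np.concatenate([h_fwd, h_bwd])   # fed to the classifier
print(features.shape)                        # (16,)
```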
Normally, bidirectional models feed the concatenation of the vectors calculated from both directions, [h_t^→, h_t^←], to the classifier. Bidirectional models can be similarly extended to both the multi-layer neural model and the LSTM version. | {
"id": "1510.03055"
} |
1506.02488 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z). We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 |
# On the Fuzzy Stability of an Aï¬ne Functional Equation
# Md. Nasiruzzaman
Department of Mathematics, Aligarh Muslim University, Aligarh 202002, India Email: nasir3489@gmail.com
Abstract: In this paper, we obtain the general solution of the following functional equation
f (3x + y + z) + f (x + 3y + z) + f (x + y + 3z) + f (x) + f (y) + f (z) = 6f (x + y + z).
We establish the Hyers-Ulam-Rassias stability of the above functional equation in the fuzzy normed spaces. Further we show the above functional equation is stable in the sense of Hyers and Ulam in fuzzy normed spaces. 1. Introduction
In modelling applied problems only partial information may be known, or there may be a degree of uncertainty in the parameters used in the model, or some measurements may be imprecise. Due to such features, we are tempted to consider the study of functional equations in the fuzzy setting. For the last 40 years, fuzzy theory has become a very active area of research and a lot of development has been made in the theory of fuzzy sets [1] to find the fuzzy analogues of the classical set theory. This branch finds a wide range of applications in the fields of science and engineering. A.K. Katsaras [2] introduced an idea of fuzzy norm on a linear space in 1984; in the same year Congxin Wu and Jinxuan Fang [3] introduced a notion of fuzzy normed space to give a generalization of the Kolmogoroff normalized theorem for fuzzy topological linear spaces. In 1991, R. Biswas [4] defined and studied fuzzy inner product spaces in linear spaces. In 1992, C. Felbin [5] introduced an alternative definition of a fuzzy norm on the linear topological structure of fuzzy normed linear spaces. In 2003, T. Bag and S.K. Samanta [6] modified the definition of S.C. Cheng and J.N. Mordeson [7] by removing a regular condition. In 1940, Ulam [8] raised a question concerning the stability of group homomorphisms as follows: Let G1 be a group and G2 a metric group with the metric d(., .). Given ε > 0, does there exist a δ > 0 such that if a function f : G1 → G2 satisfies the inequality
d(f (xy), f (x)f (y)) < δ for all x, y â G1,
then there exists a homomorphism h : G1 â G2 with
d(f (x), H(x)) < ε for all x â G1?
The concept of stability for a functional equation arises when we replace the functional equation by an inequality which acts as a perturbation of the equation. In 1941, the case of approximately additive mappings was solved by Hyers [9] under the assumption that G2 is a Banach space. In 1978, a generalized version of the theorem of Hyers for approximately linear mappings was given by Th.M. Rassias [10]. He proved that for a mapping f : E1 → E2 such that f(tx) is continuous in t ∈ R for each fixed x ∈ E1, if there exist a constant ε > 0 and p ∈ [0, 1) with
‖f(x + y) − f(x) − f(y)‖ ≤ ε(‖x‖^p + ‖y‖^p)   (1.1)
for all x, y ∈ E1, then there exists a unique R-linear mapping T : E1 → E2 such that
‖f(x) − T(x)‖ ≤ \frac{2ε}{2 − 2^p} ‖x‖^p   (x ∈ E1)   (1.2)
The result of Rassias has influenced the development of what is now called the Hyers-Ulam-Rassias stability theory for functional equations. In 1994, a generalization of Rassias' theorem was obtained by Gavruta [11] by replacing the bound ε(‖x‖^p + ‖y‖^p) by a general control function φ(x, y). During the last decades, the stability problems of several functional equations have been extensively investigated by a number of authors (cf. [12], [13], [14], [17] and [20]-[26]). In 1982-1989, J.M. Rassias [15, 16] replaced the sum appearing on the right hand side of equation (1.1) by the product of powers of norms. In fact, he proved the following theorem.
Theorem 1.1 Let f : E1 → E2 be a mapping from a normed vector space E1 into a Banach space E2 subject to the inequality

‖f(x + y) − f(x) − f(y)‖ ≤ ε(‖x‖^p ‖y‖^p)   (1.3)
for all x, y ∈ E1, where ε and p are constants with ε > 0 and 0 ≤ p < 1/2. Then the limit

L(x) = lim_{n→∞} f(2^n x)/2^n   (1.4)

exists for all x ∈ E1, and L : E1 → E2 is the unique additive mapping which satisfies

‖f(x) − L(x)‖ ≤ \frac{ε}{2 − 2^{2p}} ‖x‖^{2p}   (1.5)

for all x ∈ E1. If p > 1/2, the inequality (1.3) holds for all x, y ∈ E1 and the limit

A(x) = lim_{n→∞} 2^n f(x/2^n)   (1.6)

exists for all x ∈ E1, and A : E1 → E2 is the unique additive mapping which satisfies

‖f(x) − A(x)‖ ≤ \frac{ε}{2^{2p} − 2} ‖x‖^{2p}   (∀x ∈ E1)   (1.7)
Recently, Cadariu et al [19] studied the generalized Hyers-Ulam stability by using the direct method as well as the ï¬xed point method for the aï¬ne type functional equation
f (2x + y) + f (x + 2y) + f (x) + f (y) = 4f (x + y), for all x, y â G. (1.8)
In the present paper, we obtain the general solution of the following functional equation
f (3x + y + z) + f (x + 3y + z) + f (x + y + 3z) + f (x) + f (y) + f (z) = 6f (x + y + z). (1.9)
where f : X â Y , X and Y are normed spaces. Then, we establish the fuzzy Hyers- Ulam-Rassias stability of the above functional equation.
# 2. Preliminary Notes
Before we proceed to the main results, we will introduce a definition and some examples to illustrate the idea of a fuzzy norm.
Definition 2.1 Let X be a real linear space. A mapping N : X × R → [0, 1] (the so-called fuzzy subset) is said to be a fuzzy norm on X if for all x, y ∈ X and all s, t ∈ R,
(N1) N(x, t) = 0 for t ≤ 0;
(N2) x = 0 if and only if N(x, t) = 1 for all t > 0;
(N3) N(cx, t) = N(x, t/|c|) if c ≠ 0;
(N4) N(x + y, t + s) ≥ min{N(x, t), N(y, s)};
(N5) N(x, ·) is a non-decreasing function on R and lim_{t→∞} N(x, t) = 1;
(N6) for x ≠ 0, N(x, ·) is continuous on R.
The pair (X, N) is called a fuzzy normed linear space. One may regard N(x, t) as the truth value of the statement that the norm of x is less than or equal to the real number t.
Example 2.2 Let (X, ‖·‖) be a normed linear space. One can easily verify that for each p > 0,
N_p(x, t) = \begin{cases} \frac{t}{t + p\|x\|} & t > 0,\; x \in X \\ 0 & t \le 0,\; x \in X \end{cases}
is a fuzzy norm on X.
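As a quick numerical spot-check (not a proof), the Python sketch below samples random vectors and verifies axiom (N4) for this N_p with the Euclidean norm; p = 2 is an arbitrary choice.

```python
import numpy as np

# Spot-check (N4) for N_p(x, t) = t / (t + p * ||x||), t > 0, from Example 2.2.
rng = np.random.default_rng(0)
p = 2.0

def N(x, t):
    return t / (t + p * np.linalg.norm(x)) if t > 0 else 0.0

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    t, s = rng.uniform(0.1, 5, size=2)
    # (N4): N(x + y, t + s) >= min{N(x, t), N(y, s)}
    assert N(x + y, t + s) >= min(N(x, t), N(y, s)) - 1e-12
print("(N4) held on all sampled instances")
```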
Example 2.3 Let (X, ‖·‖) be a normed linear space. The mapping N : X × R → [0, 1] defined by
N(x, t) = \begin{cases} \frac{t^2 - \|x\|^2}{t^2 + \|x\|^2} & t > \|x\| \\ 0 & t \le \|x\| \end{cases}
is a fuzzy norm on X.
Definition 2.4 Let (X, N) be a fuzzy normed linear space. A sequence {x_n} in X is said to be convergent if there exists an x ∈ X such that lim_{n→∞} N(x_n − x, t) = 1 for all t > 0. In this case, x is called the limit of the sequence {x_n} and we denote it by

N−lim_{n→∞} x_n = x.
Deï¬nition 2.5 Let (X, N) be a fuzzy normed linear space. A sequence {xn} in X is said to be Cauchy if for each ε > 0 and each δ > 0 there exists an n0 â N such that
N(xm â xn, δ) > 1 â ε (m, n > n0).
It is well known that every convergent sequence in a fuzzy normed linear space is Cauchy. If each Cauchy sequence is convergent, then the fuzzy norm is said to be complete and the fuzzy normed vector space is called a fuzzy Banach space.
The remaining part of the paper is organized as follows: we discuss the general solution of functional equation (1.9) in Section 3; Section 4 is devoted to investigating the non-uniform version of stability of functional equation (1.9) in fuzzy normed spaces; and in Section 5 we show, under suitable conditions, that in fuzzy normed spaces functional equation (1.9) is stable uniformly. Now we proceed to find the general solution of the functional equation (1.9).

3. Solution of the Functional Equation (1.9)

Theorem 3.1 A mapping f : X → Y, where X and Y are normed spaces, is a solution of the functional equation (1.9) if and only if it is an affine mapping (i.e., it is the sum between a constant and an additive function).
Proof. It is easily seen that any affine function f is a solution of the equation (1.9). Conversely, we have two cases:
Case 1: f(0) = 0. If we take y = z = −x in (1.9), we obtain
2f(x) + 2f(−3x) + 2f(−x) = 6f(−x), for all x ∈ X.   (3.1)

Again, putting y = z = 0 in (1.9), we obtain

f(3x) = 3f(x), for all x ∈ X.   (3.2)

By (3.1) and (3.2), we have f(−x) = −f(x) for all x ∈ X. It follows that f is an odd mapping. Replacing z by −y in (1.9), we get

f(x + 2y) + f(x − 2y) = 2f(x)   (3.3)
If we replace x and y by (u + v)/2 and (u − v)/4, respectively, in (3.3) and use (3.2), we have
f(u + v) = f(u) + f(v), for all u, v ∈ X.
So, f is an additive mapping. Case 2: General case. Let us consider the function g(x) := f(x) − f(0). It is clear that g(0) = 0 and f(x) = g(x) + f(0). Replacing f by g in (1.9), we obtain
g(3x + y + z) + g(x + 3y + z) + g(x + y + 3z) + g(x) + g(y) + g(z) = 6g(x + y + z).
for all x, y, z ∈ X. Taking into account that g(0) = 0, from Case 1 we obtain that g is an additive mapping; hence f(x) = g(x) + f(0) is an affine function. This completes the proof.
For a given mapping f : X → Y, let us denote

Df(x, y, z) = f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) − 6f(x + y + z).
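As a quick numerical illustration of the "if" direction of Theorem 3.1, the sketch below checks that an affine map f(x) = ax + b makes the operator Df defined above vanish identically; a and b are arbitrary sample values.

```python
import numpy as np

# Numerical check: affine f satisfies Df(x, y, z) = 0 for all x, y, z.
a, b = 2.5, -1.0
f = lambda x: a * x + b

def Df(x, y, z):
    return (f(3*x + y + z) + f(x + 3*y + z) + f(x + y + 3*z)
            + f(x) + f(y) + f(z) - 6 * f(x + y + z))

rng = np.random.default_rng(0)
for x, y, z in rng.normal(size=(1000, 3)):
    assert abs(Df(x, y, z)) < 1e-9
print("Df vanished on all samples")
```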
# 4. Fuzzy Hyers-Ulam-Rassias Stability: non-uniform version
Theorem 4.1 Let X be a linear space and (Z, N â²) a fuzzy normed space. Let Ï : X 3 â Z be a mapping such that for some α 6= 0 with 0 < α < 3
N â²(Ï(3x, 0, 0), t) > N â²(αÏ(x, 0, 0), t) (4.1)
for all x â X, t > 0 and
lim_{n→∞} N'(φ(3^n x, 3^n y, 3^n z), 3^n t) = 1,
for all x, y, z â X and all t > 0. Suppose that (Y, N) be a fuzzy Banach space and an odd mapping f : X â Y satisï¬es the inequality
N(Df (x, y, z), t) > N â²(Ï(x, y, z), t) (4.2)
for all x, y, z â X and all t > 0. Then the limit
A(x) = N−lim_{n→∞} f(3^n x)/3^n
N(f (x) â A(x) â f (0), t) > N â²(Ï(x, 0, 0), (3 â α)t) (4.3)
for all x â X and all t > 0. Proof. Letting y = z = 0 in (4.2), we get
N(f (3x) â 3f (x) + 2f (0), t) > N â²(Ï(x, 0, 0), t) (4.4)
for all x â X and all t > 0. If we deï¬ne the mapping g : X â Y such that g(x) := f (x) â f (0) for all x â X. Indeed g(0) = 0. Then (4.4) implies
N(g(3x) â 3g(x), t) > N â²(Ï(x, 0, 0), t)
Replacing x by 3nx in the last inequality, we obtain
N(g(3"tta) â 3g(3"2), #) > Nâ(y(3"x, 0, 0), t) (â g(8"a ) t Jane (0.0.0).â) 3ntl 3n > 9n4+1 g(3"*tx) g(3"r) at n (2 SS) 2 NC. 0.0).1) (4.5)
# for alla ⬠X oore)
nâ1
g(3j+1x) 3j+1 â g(3j x) 3n â g(x) = 3j and (4.5)
# j=0 P
# that
n-1 . n-1 . g(3"x) alt g(3!*1x) â g(34x) alt v( ~ g(x), >) 3741) =N ae 3741 37° 37H) j=0 j-0 n-1 ; ; g(3ittxr) â g(3âx) alt > min Ute (4 arn 31 =} > N"(:p(@, 0,0), t). ce ceeeeeeeeeeseeseeseeeeeeeeeeeeeens (4.6)
for all x ∈ X and all t > 0. Replacing x by 3^m x in (4.6) and using (4.1), we get

N( g(3^{n+m} x)/3^{n+m} − g(3^m x)/3^m , Σ_{j=0}^{n−1} α^{j+m} t/3^{j+m+1} ) ≥ N'(φ(x, 0, 0), t),

and so

N( g(3^{n+m} x)/3^{n+m} − g(3^m x)/3^m , Σ_{j=m}^{n+m−1} α^j t/3^{j+1} ) ≥ N'(φ(x, 0, 0), t)   (4.7)
for all x ∈ X, t > 0 and m, n ≥ 0. Since 0 < α < 3 and Σ_{j=0}^{∞} (α/3)^j < ∞, the Cauchy
criterion for convergence and (N5) imply that {g(3^n x)/3^n} is a Cauchy sequence in (Y, N). Since (Y, N) is a fuzzy Banach space, this sequence converges to some point A(x) ∈ Y. Hence, we can define a mapping A : X → Y by A(x) := N−lim_{n→∞} g(3^n x)/3^n = N−lim_{n→∞} f(3^n x)/3^n for all x ∈ X. Since f is odd, A is odd. Letting m = 0 in (4.7), we get
N( g(3^n x)/3^n − g(x), Σ_{j=0}^{n−1} α^j t/3^{j+1} ) ≥ N'(φ(x, 0, 0), t).
Taking the limit as n â â and using (N6), we get
N(A(x) − g(x), t) ≥ N'( φ(x, 0, 0), t / Σ_{j=0}^{∞} α^j/3^{j+1} ) = N'(φ(x, 0, 0), (3 − α)t),

that is,

N(f(x) − A(x) − f(0), t) ≥ N'(φ(x, 0, 0), (3 − α)t)
for all x â X and all t > 0. Now we claim that A is aï¬ne. Replacing x, y, z by 3nx, 3ny, 3nz, respectively, in (4.2), we get
N( (1/3^n) Df(3^n x, 3^n y, 3^n z), t ) ≥ N'(φ(3^n x, 3^n y, 3^n z), 3^n t)

for all x, y, z ∈ X and all t > 0. Since
lim_{n→∞} N'(φ(3^n x, 3^n y, 3^n z), 3^n t) = 1,
A satisï¬es the functional equation (1.9). Hence A is aï¬ne. To prove the uniqueness of A, let Aâ² : X â Y be another aï¬ne mapping satisfying (4.3). Fix x â X. Clearly A(3nx) = 3nA(x) and Aâ²(3nx) = 3nAâ²(x) for all x â X and all n â N. It follows from (4.3) that
N(A(x) − A'(x), t) = N( A(3^n x)/3^n − A'(3^n x)/3^n , t )
≥ min{ N( A(3^n x)/3^n − g(3^n x)/3^n , t/2 ), N( g(3^n x)/3^n − A'(3^n x)/3^n , t/2 ) }
≥ N'( φ(3^n x, 0, 0), 3^n (3 − α) t/2 )
≥ N'( φ(x, 0, 0), 3^n (3 − α) t/(2α^n) )

for all x ∈ X and all t > 0. Since lim_{n→∞} 3^n (3 − α) t/(2α^n) = ∞, we obtain
lim_{n→∞} N'( φ(x, 0, 0), 3^n (3 − α) t/(2α^n) ) = 1.
Thus N(A(x) − A'(x), t) = 1 for all x ∈ X and all t > 0, and so A(x) = A'(x). This completes the proof.

5. Fuzzy Hyers-Ulam-Rassias Stability: uniform version
Theorem 5.1 Let X be a linear space and (Y, N) be a fuzzy Banach space. Let Ï : X 3 â [0, â) be a function such that
φ̃(x, y, z) = Σ_{n=0}^{∞} (1/3^n) φ(3^n x, 3^n y, 3^n z) < ∞   (5.1)
for all x, y, z â X. Let f : X â Y be a uniformly approximately aï¬ne mapping with respect to Ï in the sense that
lim tââ N(Df (x, y, z), tÏ(x, y, z)) = 1 (5.2)
uniformly on X 3. Then
A(x) := N−lim_{n→∞} f(3^n x)/3^n
N(Df (x, y, z), δÏ(x, y, z)) > α (5.3)
for all x, y, z â X, then
N( f(x) − A(x) − f(0), (δ/3) φ̃(0, 0, x) ) ≥ α
for all x â X. Proof. Let ε > 0, by (5.2), we can ï¬nd t0 > 0 such that
N(Df (x, y, z), tÏ(x, y, z)) > 1 â ε (5.4)
for all x, y, z â X and all t > t0. Deï¬ne g : X â Y such that g(x) := f (x) â f (0). It is clear that g(0) = 0 and f (x) = g(x) + f (0). Now (5.4) implies that
N(Dg(x, y, z), tÏ(x, y, z)) > 1 â ε (5.5)
for all x, y, z â X and all t > t0. By induction on n, we will show that
n-1 x(a" = 3"g(x),t 5° 3-1 0(0, 0, 3"0)) >l-e« (5.6) m=0
# m=0 X
8
for all x ∈ X, all t ≥ t0 and n ∈ N. Putting x = y = 0 and z = x in (5.5), we get (5.6) for n = 1. Let (5.6) hold for some positive integer n. Then
n N(g(3n+1x) â 3n+1g(x), t 3nâmÏ(0, 0, 3mx))
# m=0 X
> min{N(g(3n+1x) â 3g(3nx), tÏ(0, 0, 3nx)), n N(3g(3nx) â 3n+1g(x), t 3(nâm)Ï(0, 0, 3mx))} m=0 X > min{1 â ε, 1 â ε} = 1 â ε.
This completes the induction argument. Now let t = t0, write p in place of n, and replace x with 3^n x in (5.6); we obtain
N( g(3^{n+p} x) − 3^p g(3^n x), t0 Σ_{m=0}^{p−1} 3^{p−m−1} φ(0, 0, 3^{n+m} x) ) ≥ 1 − ε,
g(3"*Pa) (n+m-+1) nm, (oo ee a Lise y(0,0,3"t"xr) )>1-âe (5.7)
# m=0 X
for all integers n > 0, p > 0. The convergence of (5.1) and the equation
Σ_{m=0}^{p−1} 3^{−(n+m+1)} φ(0, 0, 3^{n+m} x) = (1/3) Σ_{m=n}^{n+p−1} 3^{−m} φ(0, 0, 3^m x)
guarantees that for given δ > 0, there exists n0 â N such that
(t0/3) Σ_{m=n}^{n+p−1} 3^{−m} φ(0, 0, 3^m x) < δ
N( g(3^{n+p} x)/3^{n+p} − g(3^n x)/3^n , δ ) ≥ N( g(3^{n+p} x)/3^{n+p} − g(3^n x)/3^n , t0 Σ_{m=0}^{p−1} 3^{−(n+m+1)} φ(0, 0, 3^{n+m} x) ) ≥ 1 − ε   (5.8)

for each n ≥ n0 and all p > 0. Hence {g(3^n x)/3^n} is a Cauchy sequence in Y. Since Y is a fuzzy Banach space, this sequence converges to some A(x) ∈ Y. Hence we can define a mapping A : X → Y by A(x) := N−lim_{n→∞} g(3^n x)/3^n = N−lim_{n→∞} f(3^n x)/3^n for all x ∈ X; namely, for each t > 0 and x ∈ X,

lim_{n→∞} N( A(x) − f(3^n x)/3^n , t ) = 1.
Now, let x, y, z ∈ X. Fix t > 0 and 0 < ε < 1. Since lim_{n→∞} (1/3^n) φ(3^n x, 3^n y, 3^n z) = 0, there is some n1 > n0 such that
N(DA(x, y, z), t) ≥ min{ N( A(3x+y+z) − f(3^n(3x+y+z))/3^n , t/8 ),
N( A(x+3y+z) − f(3^n(x+3y+z))/3^n , t/8 ),
N( A(x+y+3z) − f(3^n(x+y+3z))/3^n , t/8 ),
N( A(x) − f(3^n x)/3^n , t/8 ), N( A(y) − f(3^n y)/3^n , t/8 ), N( A(z) − f(3^n z)/3^n , t/8 ),
N( 6A(x+y+z) − 6f(3^n(x+y+z))/3^n , t/8 ),
N( Df(3^n x, 3^n y, 3^n z)/3^n , t/8 ) }
The first 7 terms on the right hand side of the above inequality tend to 1 as n → ∞, and the last term is greater than or equal to N(Df(3^n x, 3^n y, 3^n z), t0 φ(3^n x, 3^n y, 3^n z)), i.e., by (5.4), greater than or equal to 1 − ε. Thus N(DA(x, y, z), t) ≥ 1 − ε for all t > 0 and 0 < ε < 1. It follows that N(DA(x, y, z), t) = 1 for all t > 0, and by (N2) we have DA(x, y, z) = 0, i.e.,
A(3x + y + z) + A(x + 3y + z) + A(x + y + 3z) + A(x) + A(y) + A(z) = 6A(x + y + z). To end the proof, suppose that (5.3) holds for some positive α and δ. Let
φ_n(x, y, z) := Σ_{m=0}^{n−1} 3^{−(m+1)} φ(3^m x, 3^m y, 3^m z)
for all x, y, z ∈ X. Let x ∈ X. By a similar discussion as in the beginning of the proof, we can obtain from (5.3)
N( g(3^n x) − 3^n g(x), δ Σ_{m=0}^{n−1} 3^{n−m−1} φ(0, 0, 3^m x) ) ≥ α, that is, N( g(3^n x)/3^n − g(x), δ φ_n(0, 0, x) ) ≥ α   (5.9)
for all n â N. Let s > 0. We have
g(3"x) N(g(a)âA(2), 6¢n(0, 0, x) +s) > min{ Â¥ (g(a) 25 Bn (5.10) -5¢n(0,0,2)), (Sa(a,
Combining (5.8), (5.9) and the fact that
lim_{n→∞} N( g(3^n x)/3^n − A(x), s ) = lim_{n→∞} N( f(3^n x)/3^n − A(x), s ) = 1,
we obtain that
N( g(x) − A(x), δ φ_n(0, 0, x) + s ) ≥ α
for large enough n. By the (upper semi) continuity of real function N(g(x) â A(x), .), we obtain that
N( g(x) − A(x), (δ/3) φ̃(0, 0, x) + s ) ≥ α.
Taking the limit as s â 0, we conclude that
N( g(x) − A(x), (δ/3) φ̃(0, 0, x) ) ≥ α, and hence

N( f(x) − A(x) − f(0), (δ/3) φ̃(0, 0, x) ) ≥ α.
This completes the proof.
Theorem 5.2 Let X be a linear space and (Y, N) be a fuzzy Banach space. Let Ï : X 3 â [0, â) be a function satisfying (5.1). Let f : X â Y be a uniformly ap- proximately aï¬ne mapping with respect to Ï. Then there is a unique aï¬ne mapping A : X â Y such that
lim_{t→∞} N( f(x) − A(x) − f(0), t φ̃(0, 0, x) ) = 1   (5.11)
uniformly on X. Proof. The existence of the uniform limit (5.11) immediately follows from Theorem 5.1. It remains to prove the uniqueness assertion. Let A' be another affine mapping satisfying (5.11). Fix c > 0. Given ε > 0, by (5.11) for A and A', we can find some t0 > 0 such that
N( g(x) − A(x), (t/2) φ̃(0, 0, x) ) ≥ 1 − ε,
N( g(x) − A'(x), (t/2) φ̃(0, 0, x) ) ≥ 1 − ε
for all x ∈ X and all t ≥ t0. Fix some x ∈ X and find some integer n0 such that t0 Σ_{m=n}^{∞} 3^{−m} φ(0, 0, 3^m x) < c/2 for all n ≥ n0, and note that

Σ_{m=n}^{∞} 3^{−m} φ(0, 0, 3^m x) = (1/3^n) Σ_{m=n}^{∞} 3^{−(m−n)} φ(0, 0, 3^{m−n}(3^n x)) = (1/3^n) Σ_{j=0}^{∞} 3^{−j} φ(0, 0, 3^j (3^n x)) = (1/3^n) φ̃(0, 0, 3^n x).
We have
N(A'(x) − A(x), c) ≥ min{ N( g(3^n x)/3^n − A(x), c/2 ), N( A'(x) − g(3^n x)/3^n , c/2 ) }
≥ min{ N( g(3^n x) − A(3^n x), 3^n t0 Σ_{m=n}^{∞} 3^{−m} φ(0, 0, 3^m x) ), N( A'(3^n x) − g(3^n x), 3^n t0 Σ_{m=n}^{∞} 3^{−m} φ(0, 0, 3^m x) ) }
= min{ N( g(3^n x) − A(3^n x), t0 φ̃(0, 0, 3^n x) ), N( A'(3^n x) − g(3^n x), t0 φ̃(0, 0, 3^n x) ) }
≥ 1 − ε.
It follows that N(Aâ²(x) â A(x), c) = 1, for all c > 0. Thus A(x) = Aâ²(x) for all x â X. This completes the proof.
Considering the control function φ(x, y, z) = ε(‖x‖^p + ‖y‖^p + ‖z‖^p) for some ε > 0, we obtain the following:

Corollary 5.3 Let X be a normed linear space, let (Y, N) be a fuzzy Banach space, let ε > 0, and let 0 ≤ p < 1. Suppose that f : X → Y is a function such that
lim_{t→∞} N( Df(x, y, z), t ε(‖x‖^p + ‖y‖^p + ‖z‖^p) ) = 1
uniformly on X 3. Then there is a unique aï¬ne mapping A : X â Y such that
lim_{t→∞} N( f(x) − A(x) − f(0), \frac{ε t 3^{1−p} ‖x‖^p}{3^{1−p} − 1} ) = 1
uniformly on X.
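A short numerical check of the closed form used here: for the control function φ(x, y, z) = ε(‖x‖^p + ‖y‖^p + ‖z‖^p), the series (5.1) evaluated at (0, 0, x) is geometric, and its sum matches the ε 3^{1−p}‖x‖^p/(3^{1−p} − 1) factor in the corollary. The numeric values below are arbitrary.

```python
import numpy as np

# phi_tilde(0,0,x) = sum_n 3^{-n} * eps * (3^n * ||x||)^p, a geometric series
# when 0 <= p < 1; compare a long partial sum with the closed form.
eps, p, norm_x = 0.5, 0.4, 2.0

partial = sum(3.0**(-n) * eps * (3.0**n * norm_x)**p for n in range(200))
closed = eps * 3**(1 - p) * norm_x**p / (3**(1 - p) - 1)
print(partial, closed)            # the two values agree
assert abs(partial - closed) < 1e-9
```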
# References
[1] L.A. Zadeh, Fuzzy sets, Information and Control 8 (1965) 338-353.
[2] A.K. Katsaras, Fuzzy topological vector spaces II, Fuzzy Sets Syst., 12(1984), 143-154.
[3] C. Wu and J. Fang, Fuzzy generalization of Kolomogoroï¬s theorem, J.Harbin Inst. Technol., 1(1984), 1-7.
[4] R. Biswas, Fuzzy inner product space and fuzzy norm functions, Inform. Sci., 53(1991), 185-190.
[5] C. Felbin, Finite dimensional fuzzy normed space, Fuzzy Sets Syst., 48(1992), 239-248.
[6] T. Bag and S.K. Samanta, Finite dimensional fuzzy normed linear spaces, J. Fuzzy Math., 11:3(2003), 687-705.
[7] S.C. Cheng and J.N. Mordeson, Fuzzy linear operator and fuzzy normed linear spaces, Bull. Calcuta Math. Soc., 86(1994), 429-436.
[8] S.M. Ulam, Problems in Modern Mathematics, Science ed., John Wiley & Sons: New York; 1940 (Chapter VI, Some Questions in Analysis: Section 1, Stability).
[9] D.H. Hyers, On the stability of the linear functional equation, Proc. Natl. Acad. Sci., 27(1941) 222â224.
[10] Th. M. Rassias, On the stability of the linear mapping in Banach spaces, Proc. Amer. Math. Soc., 72(1978), 297-300.
[11] P. Gavruta, A generalization of the Hyers-Ulam-Rassias stability of approximately additive mappings, J. Math. Anal. Appl. 184 (1994) 431-436.
[12] S. Czerwik, Functional Equations and Inequalities in Several Variables, World Scientiï¬c Publishing Co., Inc., River Edge, NJ, 2002.
[13] D.H. Hyers, G. Isac and Th.M. Rassias, Stability of Functional Equations in Sev- eral Variables, Birkh¨auser, Basel; 1998.
[14] P. Kannappan, Functional Equations and Inequalities with Applications, Springer, 2009.
[15] J.M. Rassias, On approximation of approximately linear mappings by linear map- ping, J.Funct. Anal., 46:1(1982), 126-130.
[16] J.M. Rassias, On approximation of approximately linear mappings by linear map- pings, Bull.Sci. Math. (2), 108:4(1984), 445-446.
[17] M. Mursaleen, Khursheed J. Ansari, Stability results in intuitionistic fuzzy normed spaces for a cubic functional equation, Appl. Math. Inf. Sci. 7, No. 5, 1685-1692 (2013).
[18] S. Javadi, J. M. Rassias, Stability of General Cubic Mapping in Fuzzy Normed Spaces, An. S¸t. Univ. Ovidius Constant¸a, Vol. 20(1), 2012, 129-150.
[19] L.Cadariu, L. Gavruta, P. Gavruta, On the stability of an aï¬ne functional equa- tion, J. Nonlinear Sci. Appl, 6(2013) 60-67.
[20] S. A. Mohiuddine, Stability of Jensen functional equation in intuitionistic fuzzy normed space, Chaos, Solitons & Fract., 42 (2009) 2989â2996.
[21] S. A. Mohiuddine and M.A. Alghamdi, Stability of functional equation obtained through a ï¬xed-point alternative in intuitionistic fuzzy normed spaces, Adv. Dif- ference Equ. 2012, 2012:141.
[22] S. A. Mohiuddine and H. S¸evli, Stability of Pexiderized quadratic functional equa- tion in intuitionistic fuzzy normed space, J. Comput. Appl. Math., 235 (2011) 2137â2146.
[23] M. Mursaleen and K. J. Ansari, Stability results in intuitionistic fuzzy normed spaces for a cubic functional equation, Appl. Math. Inf. Sci., 7(5) (2013) 1685â 1692.
[24] M. Mursaleen and S. A. Mohiuddine, On stability of a cubic functional equation in intuitionistic fuzzy normed spaces, Chaos, Solitons Fract. 42 (2009) 2997â3005.
[25] S. A. Mohiuddine and A. Alotaibi, Fuzzy stability of a cubic functional equation via ï¬xed point technique, Adv. Diï¬erence Equ. 2012, 2012:48
[26] S. A. Mohiuddine and M. Cancan, H. S¸evli, Intuitionistic fuzzy stability of a Jensen functional equation via ï¬xed point technique, Math. Comput. Modelling, 54 (2011) 2403â2409.
| {
"id": "1506.02488"
} |
1505.00853 | Empirical Evaluation of Rectified Activations in Convolutional Network | In this paper we investigate the performance of different types of rectified
activation functions in convolutional neural network: standard rectified linear
unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified
linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU).
We evaluate these activation functions on a standard image classification task.
Our experiments suggest that incorporating a non-zero slope for the negative part
in rectified activation units could consistently improve the results. Thus our
findings are negative on the common belief that sparsity is the key of good
performance in ReLU. Moreover, on small scale datasets, using a deterministic
negative slope or learning it are both prone to overfitting. They are not as
effective as using their randomized counterpart. By using RReLU, we achieved
75.68\% accuracy on the CIFAR-100 test set without multiple test or ensemble. | http://arxiv.org/pdf/1505.00853 | Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li | cs.LG, cs.CV, stat.ML | null | null | cs.LG | 20150505 | 20151127 |
v o N 7 2 ] G L . s c [
2 v 3 5 8 0 0 . 5 0 5 1 : v i X r a
# Empirical Evaluation of Rectified Activations in Convolutional Network
# Bing Xu University of Alberta
antinucleon@gmail.com
# Naiyan Wang Hong Kong University of Science and Technology
winsty@gmail.com
# Tianqi Chen University of Washington
tqchen@cs.washington.edu
# Mu Li Carnegie Mellon University
muli@cs.cmu.edu
# Abstract
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural network: standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on the standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings challenge the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting. They are not as effective as using their randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
# 1. Introduction

Convolutional neural network (CNN) has made great success in various computer vision tasks, such as image classification (Krizhevsky et al., 2012; Szegedy et al., 2014), object detection (Girshick et al., 2014) and tracking (Wang et al., 2015). Despite its depth, one of the key characteristics of a modern deep learning system is to use a non-saturated activation function (e.g. ReLU) to replace its saturated counterpart (e.g. sigmoid, tanh). The advantage of using non-saturated activation functions lies in two aspects: the first is to solve the so-called "exploding/vanishing gradient" problem; the second is to accelerate the convergence speed.

In all of these non-saturated activation functions, the most notable one is the rectified linear unit (ReLU) (Nair & Hinton, 2010; Sun et al., 2014). Briefly speaking, it is a piecewise linear function which prunes the negative part to zero, and retains the positive part. It has a desirable property that the activations are sparse after passing ReLU. It is commonly believed that the superior performance of ReLU comes from the sparsity (Glorot et al., 2011; Sun et al., 2014). In this paper, we want to ask two questions: First, is sparsity the most important factor for a good performance? Second, can we design better non-saturated activation functions that could beat ReLU?
We consider a broader class of activation functions, namely the rectified unit family. In particular, we are interested in the leaky ReLU and its variants. In contrast to ReLU, in which the negative part is totally dropped, leaky ReLU assigns a non-zero slope to it. The first variant is called parametric rectified linear unit (PReLU) (He et al., 2015). In PReLU, the slopes of the negative part are learned from data rather than predefined. The authors claimed that PReLU is the key factor of surpassing human-level performance on the ImageNet classification (Russakovsky et al., 2015) task.
The second variant is called randomized rectified linear unit (RReLU). In RReLU, the slopes of the negative parts are randomized in a given range in the training, and then fixed in the testing. In a recent Kaggle National Data Science Bowl (NDSB) competition1, it is reported that RReLU could reduce overfitting due to its randomized nature.
In this paper, we empirically evaluate these four kinds of activation functions. Based on our experiments, we conclude that on small datasets, Leaky ReLU and its variants are consistently better than ReLU in convolutional neural networks. RReLU is favorable due to its randomness in training, which reduces the risk of overfitting. In the case of large datasets, more investigation should be done in the future.
# 2. Rectified Units

In this section, we introduce the four kinds of rectified units: rectified linear (ReLU), leaky rectified linear (Leaky ReLU), parametric rectified linear (PReLU) and randomized rectified linear (RReLU). We illustrate them in Fig. 1 for comparison. In the sequel, we use $x_{ji}$ to denote the input of the $i$th channel in the $j$th example, and $y_{ji}$ to denote the corresponding output after passing the activation function. In the following subsections, we introduce each rectified unit formally.
# 2.1. Rectified Linear Unit

Rectified Linear is first used in Restricted Boltzmann Machines (Nair & Hinton, 2010). Formally, rectified linear activation is defined as:

$$y_i = \begin{cases} x_i & \text{if } x_i \ge 0 \\ 0 & \text{if } x_i < 0. \end{cases} \quad (1)$$

# 2.2. Leaky Rectified Linear Unit

Leaky Rectified Linear activation was first introduced in an acoustic model (Maas et al., 2013). Mathematically, we have

$$y_i = \begin{cases} x_i & \text{if } x_i \ge 0 \\ \frac{x_i}{a_i} & \text{if } x_i < 0, \end{cases} \quad (2)$$

where $a_i$ is a fixed parameter in the range $(1, +\infty)$. In the original paper, the authors suggest setting $a_i$ to a large number like 100. In addition to this setting, we also experiment with a smaller $a_i = 5.5$ in our paper.

# 2.3. Parametric Rectified Linear Unit

Parametric rectified linear is proposed by (He et al., 2015). The authors reported that its performance is much better than ReLU in the large scale image classification task. It is the same as leaky ReLU (Eqn. 2) with the exception that $a_i$ is learned in the training via back propagation.

# 2.4. Randomized Leaky Rectified Linear Unit

Randomized Leaky Rectified Linear is the randomized version of leaky ReLU. It is first proposed and used in the Kaggle NDSB Competition. The highlight of RReLU is that in the training process, $a_{ji}$ is a random number sampled from a uniform distribution $U(l, u)$. Formally, we have:

$$y_{ji} = \begin{cases} x_{ji} & \text{if } x_{ji} \ge 0 \\ a_{ji} x_{ji} & \text{if } x_{ji} < 0, \end{cases} \quad (3)$$

where

$$a_{ji} \sim U(l, u), \quad l < u \ \text{and} \ l, u \in [0, 1) \quad (4)$$

In the test phase, we take the average of all the $a_{ji}$ in training, as in the method of dropout (Srivastava et al., 2014), and thus set $a_{ji}$ to $\frac{l+u}{2}$ to get a deterministic result. Suggested by the NDSB competition winner, $a_{ji}$ is sampled from $U(3, 8)$. We use the same configuration in this paper.

In test time, we use:

$$y_{ji} = \frac{x_{ji}}{(l+u)/2} \quad (5)$$

Figure 1: ReLU, Leaky ReLU, PReLU and RReLU. For PReLU, $a_i$ is learned and for Leaky ReLU $a_i$ is fixed. For RReLU, $a_{ji}$ is a random variable that keeps being sampled in a given range, and remains fixed in testing.
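To make the four definitions above concrete, the following is a minimal NumPy sketch of the forward passes. It follows the division convention of Eqn. 2 and the $U(3, 8)$ test-time rule of Eqn. 5; the per-element sampling, the default arguments, and the rng parameter are illustrative assumptions rather than the paper's CXXNET implementation.

```python
import numpy as np

def relu(x):
    # Eqn. 1: prune the negative part to zero.
    return np.maximum(x, 0.0)

def leaky_relu(x, a=5.5):
    # Eqn. 2: a is fixed in (1, +inf); the paper tries a = 100 and a = 5.5.
    return np.where(x >= 0, x, x / a)

def prelu(x, a):
    # Same forward pass as leaky ReLU, but a is learned; its gradient is
    # assumed to be handled by the surrounding framework.
    return np.where(x >= 0, x, x / a)

def rrelu(x, l=3.0, u=8.0, training=True, rng=np.random):
    if training:
        # Training: a_ji is sampled per element from U(l, u); the NDSB
        # configuration U(3, 8) is used in the paper's experiments.
        a = rng.uniform(l, u, size=x.shape)
        return np.where(x >= 0, x, x / a)
    # Eqn. 5: deterministic test-time slope with a_ji = (l + u) / 2.
    return np.where(x >= 0, x, x / ((l + u) / 2.0))
```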
# 3. Experiment Settings

We evaluate classification performance on the same convolutional network structure with different activation functions. Due to the large parameter searching space, we use two state-of-the-art convolutional network structures and the same hyper-parameters for different activation settings. All models are trained by using CXXNET2.
1Kaggle National Data Science Bowl Competition: https://www.kaggle.com/c/datasciencebowl
2CXXNET: https://github.com/dmlc/cxxnet
# 3.1. CIFAR-10 and CIFAR-100

The CIFAR-10 and CIFAR-100 datasets (Krizhevsky & Hinton, 2009) are tiny natural image datasets. CIFAR-10 contains images of 10 different classes and CIFAR-100 contains images of 100 different classes. Each image is an RGB image of size 32x32. There are 50,000 training images and 10,000 test images. We use raw images directly without any pre-processing or augmentation. The result is from a single-view test without any ensemble.

The network structure is shown in Table 1. It is taken from Network in Network (NIN) (Lin et al., 2013).

Input Size | NIN
32 × 32 | 5x5, 192
32 × 32 | 1x1, 160
32 × 32 | 1x1, 96
32 × 32 | 3x3 max pooling, /2
16 × 16 | dropout, 0.5
16 × 16 | 5x5, 192
16 × 16 | 1x1, 192
16 × 16 | 1x1, 192
16 × 16 | 3x3 avg pooling, /2
8 × 8 | dropout, 0.5
8 × 8 | 3x3, 192
8 × 8 | 1x1, 192
8 × 8 | 1x1, 10
8 × 8 | 8x8 avg pooling, /1
10 or 100 | softmax

Table 1. CIFAR-10/CIFAR-100 network structure. Each layer is a convolutional layer if not otherwise specified. An activation function follows each convolutional layer.

In the CIFAR-100 experiment, we also tested RReLU on a Batch Norm Inception Network (Ioffe & Szegedy, 2015). We use a subset of the Inception Network which starts from the inception-3a module. This network achieved 75.68% test accuracy without any ensemble or multiple view test3.

# 3.2. National Data Science Bowl Competition

The task for the National Data Science Bowl competition is to classify plankton animals from images, with an award of $170k. There are 30,336 labeled gray scale images in 121 classes and there are 130,400 test images. Since the test set is private, we divide the training set into two parts: 25,000 images for training and 5,336 images for validation. The competition uses multi-class log-loss to evaluate classification performance.

We adopt the network and augmentation settings from team AuroraXie4, one of the competition winners. The network structure is shown in Table 2. We only use a single-view test in our experiment, which is different from the original multi-view, multi-scale test.

Input Size | NDSB Net
70 × 70 | 3x3, 32
70 × 70 | 3x3, 32
70 × 70 | 3x3 max pooling, /2
35 × 35 | 3x3, 64
35 × 35 | 3x3, 64
35 × 35 | 3x3, 64
35 × 35 | 3x3 max pooling, /2
17 × 17 | split: branch1 | branch2
17 × 17 | 3x3, 96 | 3x3, 96
17 × 17 | 3x3, 96 | 3x3, 96
17 × 17 | 3x3, 96 | 3x3, 96
17 × 17 | 3x3, 96
17 × 17 | channel concat, 192
17 × 17 | 3x3 max pooling, /2
8 × 8 | 3x3, 256
8 × 8 | 3x3, 256
8 × 8 | 3x3, 256
8 × 8 | 3x3, 256
8 × 8 | 3x3, 256
8 × 8 | SPP (He et al., 2014) {1, 2, 4}
12544 × 1 | flatten
1024 × 1 | fc1
1024 × 1 | fc2
121 | softmax

Table 2. National Data Science Bowl Competition Network. All layers are convolutional layers if not otherwise specified. An activation function follows each convolutional layer.

3CIFAR-100 Reproduce code: https://github.com/dmlc/mxnet/blob/master/example/notebooks/cifar-100.ipynb
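As a reading aid for Table 1 above, here is a sketch of the NIN stack in PyTorch. The padding choices, the pluggable activation argument (shown with a leaky slope of 1/5.5), and leaving the softmax to the loss function are assumptions made for illustration; the paper's models are trained with CXXNET, not this code.

```python
import torch.nn as nn

def nin(num_classes=10, act=lambda: nn.LeakyReLU(1 / 5.5)):
    # Table 1: each convolution is followed by the chosen activation.
    return nn.Sequential(
        nn.Conv2d(3, 192, 5, padding=2), act(),
        nn.Conv2d(192, 160, 1), act(),
        nn.Conv2d(160, 96, 1), act(),
        nn.MaxPool2d(3, stride=2, padding=1),   # 32x32 -> 16x16
        nn.Dropout(0.5),
        nn.Conv2d(96, 192, 5, padding=2), act(),
        nn.Conv2d(192, 192, 1), act(),
        nn.Conv2d(192, 192, 1), act(),
        nn.AvgPool2d(3, stride=2, padding=1),   # 16x16 -> 8x8
        nn.Dropout(0.5),
        nn.Conv2d(192, 192, 3, padding=1), act(),
        nn.Conv2d(192, 192, 1), act(),
        nn.Conv2d(192, num_classes, 1), act(),
        nn.AvgPool2d(8),                        # 8x8 global average pooling
        nn.Flatten(),                           # logits; softmax lives in the loss
    )
```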
# 4. Result and Discussion
Tables 3 and 4 show the results on the CIFAR-10/CIFAR-100 datasets, respectively. Table 5 shows the NDSB result. We use the ReLU network as the baseline, and compare the convergence curves with the other three activations pairwise in Fig. 2, 3 and 4, respectively. All three leaky ReLU variants are better than the baseline on the test set. We have the following observations based on our experiments:
1. Not surprisingly, we find the performance of normal leaky ReLU (a = 100) is similar to that of ReLU, but very leaky ReLU with a larger slope (a = 5.5) is much better.
4Winning Doc of AuroraXie: https://github.com/auroraxie/Kaggle-NDSB
2. On the training set, the error of PReLU is always the lowest, and the errors of Leaky ReLU and RReLU are higher than that of ReLU. It indicates that PReLU may suffer from a severe overfitting issue on small scale datasets.

3. The superiority of RReLU on NDSB is more significant than that on CIFAR-10/CIFAR-100. We conjecture that it is because in the NDSB dataset, the training set is smaller than that of CIFAR-10/CIFAR-100, but the network we use is even bigger. This validates the effectiveness of RReLU when combating overfitting.

4. For RReLU, we still need to investigate how the randomness influences the network training and testing process.

Activation | Training Error | Test Error
ReLU | 0.00318 | 0.1245
Leaky ReLU, a = 100 | 0.0031 | 0.1266
Leaky ReLU, a = 5.5 | 0.00362 | 0.1120
PReLU | 0.00178 | 0.1179
RReLU (yji = xji / ((l+u)/2)) | 0.00550 | 0.1119

Table 3. Error rate of CIFAR-10 Network in Network with different activation functions.

Activation | Training Error | Test Error
ReLU | 0.1356 | 0.429
Leaky ReLU, a = 100 | 0.11552 | 0.4205
Leaky ReLU, a = 5.5 | 0.08536 | 0.4042
PReLU | 0.0633 | 0.4163
RReLU (yji = xji / ((l+u)/2)) | 0.1141 | 0.4025

Table 4. Error rate of CIFAR-100 Network in Network with different activation functions.

Activation | Train Log-Loss | Val Log-Loss
ReLU | 0.8092 | 0.7727
Leaky ReLU, a = 100 | 0.7846 | 0.7601
Leaky ReLU, a = 5.5 | 0.7831 | 0.7391
PReLU | 0.7187 | 0.7454
RReLU (yji = xji / ((l+u)/2)) | 0.8090 | 0.7292

Table 5. Multi-class Log-Loss of NDSB Network with different activation functions.

# 5. Conclusion

In this paper, we analyzed four rectified activation functions using various network architectures on three datasets. Our findings strongly suggest that the most popular activation function, ReLU, is not the end of the story: three types of (modified) leaky ReLU all consistently outperform the original ReLU. However, the reasons for their superior performance still lack rigorous justification from a theoretical aspect. Also, how the activations perform on large scale data still needs to be investigated. This is an open question worth pursuing in the future.

# Acknowledgement

We would like to thank Jason Rolfe from D-Wave Systems for a helpful discussion on the test network for randomized leaky ReLU.

# References

Girshick, Ross, Donahue, Jeff, Darrell, Trevor, and Malik, Jitendra. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pp. 580–587, 2014.

Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. JMLR W&CP Volume, volume 15, pp. 315–323, 2011.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, pp. 346–361, 2014.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. arXiv preprint arXiv:1502.01852, 2015.

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images. Computer Science Department, University of Toronto, Tech. Rep, 1(4):7, 2009.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105, 2012.

Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Maas, Andrew L, Hannun, Awni Y, and Ng, Andrew Y. Rectifier nonlinearities improve neural network acoustic models. In ICML, volume 30, 2013.
Figure 2: Convergence curves for training and test sets of different activations on CIFAR-10 Network in Network.
Figure 3: Convergence curves for training and test sets of different activations on CIFAR-100 Network in Network.
Figure 4: Convergence curves for training and test sets of different activations on NDSB Net.
Nair, Vinod and Hinton, Geoffrey E. Rectified linear units improve restricted Boltzmann machines. In ICML, pp. 807–814, 2010.

Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015. doi: 10.1007/s11263-015-0816-y.

Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.

Sun, Yi, Wang, Xiaogang, and Tang, Xiaoou. Deeply learned face representations are sparse, selective, and robust. arXiv preprint arXiv:1412.1265, 2014.

Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.

Wang, Naiyan, Li, Siyi, Gupta, Abhinav, and Yeung, Dit-Yan. Transferring rich feature hierarchies for robust visual tracking. arXiv preprint arXiv:1501.04587, 2015. | {
"id": "1502.03167"
} |
1505.00521 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine the feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 |

arXiv:1505.00521v3 [cs.LG] 12 Jan 2016
Under review as a conference paper at ICLR 2016
# REINFORCEMENT LEARNING NEURAL TURING MACHINES - REVISED
Wojciech Zaremba1,2 New York University Facebook AI Research woj.zaremba@gmail.com
Ilya Sutskever2 Google Brain ilyasu@google.com
# ABSTRACT
The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine the feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.
# 1 INTRODUCTION
Graves et al. (2014b)'s Neural Turing Machine (NTM) is a model that learns to interact with an external memory that is differentiable and continuous. An external memory extends the capabilities of the NTM, allowing it to solve tasks that were previously unsolvable by conventional machine learning methods. This is the source of the NTM's expressive power. In general, it appears that ML models become significantly more powerful if they are able to learn to interact with external interfaces.
There exists a vast number of Interfaces that could be used with our models. For example, the Google search engine is an example of such an Interface. The search engine consumes queries (which are actions), and outputs search results. However, the search engine is not differentiable, and the model interacts with the Interface using discrete actions. This work examines the feasibility of learning to interact with discrete Interfaces using the Reinforce algorithm.
Discrete Interfaces cannot be trained directly with standard backpropagation because they are not differentiable. It is most natural to learn to interact with discrete Interfaces using Reinforcement Learning methods. In this work, we consider an Input Tape and a Memory Tape interface with discrete access. Our concrete proposal is to use the Reinforce algorithm to learn where to access the discrete interfaces, and to use the backpropagation algorithm to determine what to write to the memory and to the output. We call this model the RL–NTM.
Discrete Interfaces are computationally attractive because the cost of accessing a discrete Interface is often independent of its size. It is not the case for continuous Interfaces, where the cost of access scales linearly with size. It is a significant disadvantage, since slow models cannot scale to large difficult problems that require intensive training on large datasets. In addition, an output Interface that lets the model decide when it wants to make a prediction allows the model's runtime to be in principle unbounded. If the model has an output interface of this kind together with an interface to an unbounded memory, the model becomes Turing complete.
We evaluate the RL-NTM on a number of simple algorithmic tasks. The RL-NTM succeeds on problems such as copying an input several times to the output tape (the "repeat copy" task from Graves et al. (2014b)), reversing a sequence, and a few more tasks of comparable difficulty. However, its success is highly dependent on the architecture of the "controller". We discuss this in more detail in Section 8.
1Work done while the author was at Google. 2Both authors contributed equally to this work.
Finally, we found it non-trivial to correctly implement the RL-NTM due to its large number of interacting components. We developed a simple procedure to numerically check the gradients of the Reinforce algorithm (Section 5). The procedure can be applied to problems unrelated to NTMs, and is of independent interest. The code for this work can be found at https://github.com/ilyasu123/rlntm.
# 2 THE MODEL
Many difficult tasks require a prolonged, multi-step interaction with an external environment. Examples of such environments include computer games (Mnih et al., 2013), the stock market, an advertisement system, or the physical world (Levine et al., 2015). A model can observe a partial state from the environment, and influence the environment through its actions. This can be seen as a general reinforcement learning problem. However, our setting departs from classical RL, i.e. we have the freedom to design the tools available to solve a given problem. Tools might cooperate with the model (i.e. backpropagation through memory), and the tools specify the actions over the environment. We formalize this concept under the name Interface–Controller interaction.
The external environment is exposed to the model through a number of Interfaces, each with its own API. For instance, a human perceives the world through its senses, which include the vision Interface and the touch Interface. The touch Interface provides methods for contracting the various muscles, and methods for sensing the current state of the muscles, pain level, temperature and a few others. In this work, we explore a number of simple Interfaces that allow the controller to access an input tape, a memory tape, and an output tape.
The part of the model that communicates with Interfaces is called the Controller, which is the only part of the system which learns. The Controller can have prior knowledge about the behavior of its Interfaces, but it is not the case in our experiments. The Controller learns to interact with Interfaces in a way that allows it to solve a given task. Fig. 1 illustrates the complete Interfaces–Controller abstraction.
Figure 1: (Left) The Interface–Controller abstraction, (Right) an instantiation of our model as an Interface–Controller. The bottom boxes are the read methods, and the top are the write methods. The RL–NTM makes discrete decisions regarding the move over the input tape, the memory tape, and whether to make a prediction at a given timestep. During training, the model's prediction is compared with the desired output, and is used to train the model when the RL-NTM chooses to advance its position on the output tape; otherwise it is ignored. The memory value vector is a vector of content that is stored in the memory cell.
We now describe the RL–NTM. As a controller, it uses either an LSTM or a direct access controller (see sec. 8.1 for a definition). It has a one-dimensional input tape, a one-dimensional memory, and a one-dimensional output tape as Interfaces. Both the input tape and the memory tape have a head that reads the Tape's content at the current location. The head of the input tape and the memory tape can move in any direction. However, the output tape is a write-only tape, and its head can either stay at the current position or move forward. Fig. 2 shows an example execution trace for the entire RL–NTM on the reverse task (sec. 6).
At the core of the RL–NTM is an LSTM controller which receives multiple inputs and has to generate multiple outputs at each timestep. Table 1 summarizes the controller's inputs and outputs, and the way in which the RL–NTM is trained to produce them. The objective function of the RL–NTM is the expected log probability of the desired outputs, where the expectation is taken over all possible sequences of actions, weighted with the probability of taking these actions. Both backpropagation and Reinforce maximize this objective. Backpropagation maximizes the log probabilities of the model's predictions, while the Reinforce algorithm influences the probabilities of action sequences.
Figure 2: Execution of the RL–NTM on the ForwardReverse task. At each timestep, the RL-NTM consumes the value of the current input tape, the value of the current memory cell, and a representation of all the actions that have been taken in the previous timestep (not marked on the figures). The RL-NTM then outputs a new value for the current memory cell (marked with a star), a prediction for the next target symbol, and discrete decisions for changing the positions of the heads on the various tapes. The RL-NTM learns to make discrete decisions using the Reinforce algorithm, and learns to produce continuous outputs using backpropagation.
The global objective can be written formally as:
$$\sum_{[a_1, a_2, \ldots, a_n] \in A^\dagger} p_{\text{reinforce}}(a_1, a_2, \ldots, a_n | \theta) \sum_{i=1}^{n} \log \big( p_{\text{bp}}(y_i | x_1, \ldots, x_i, a_1, \ldots, a_i, \theta) \big)$$

$A^\dagger$ represents the space of sequences of actions that lead to the end of an episode. The probabilities in the above equation are parametrized with a neural network (the Controller). We have marked with $p_{\text{reinforce}}$ the part of the equation which is learned with Reinforce. $p_{\text{bp}}$ indicates the part of the equation optimized with classical backpropagation.
Interface | Read | Write | Training Type
Input Tape, Head | window of values surrounding the current position | distribution over [−1, 0, 1] | Reinforce
Output Tape, Head | - | distribution over [0, 1] | Reinforce
Output Tape, Content | - | distribution over output vocabulary | Backpropagation
Memory Tape, Head | window of memory values surrounding the current address | distribution over [−1, 0, 1] | Reinforce
Memory Tape, Content | - | vector of real values to store | Backpropagation
Miscellaneous | all actions taken in the previous time step | - | -
The RLâNTM receives a direct learning signal only when it decides to make a prediction. If it chooses to not make a prediction at a given timestep, then it will not receive a direct learning signal. Theoretically, we can allow the RLâNTM to run for an arbitrary number of steps without making any prediction, hoping that after sufï¬ciently many steps, it would decide to make a prediction. Doing so will also provide the RLâNTM with arbitrary computational capability. However, this strategy is both unstable and computationally infeasible. Thus, we resort to limiting the total number of computational steps to a ï¬xed upper bound, and force the RLâNTM to predict the next desired output whenever the number of remaining desired outputs is equal to the number of remaining computational steps.
# 3 RELATED WORK
This work is most similar to the Neural Turing Machine of Graves et al. (2014b). The NTM is an ambitious, computationally universal model that can be trained (or "automatically programmed") with the backpropagation algorithm using only input-output examples.

Following the introduction of the NTM, several other memory-based models have been introduced. All of them can be seen as part of a larger community effort. These models are constructed according to the Interface–Controller abstraction (Section 2).
Neural Turing Machine (NTM) (Graves et al., 2014a) has a modified LSTM as the Controller, and the following three Interfaces: a sequential input, a delayed Output, and a differentiable Memory.
Weakly supervised Memory Network (Sukhbaatar et al., 2015) uses a feed forward network as the Controller, and has a differentiable soft-attention Input and a Delayed Output as Interfaces.

Stack RNN (Joulin & Mikolov, 2015) has an RNN as the Controller, and a sequential input, a differentiable memory stack, and a sequential output as Interfaces. It also uses search to improve its performance.

Neural DeQue (Grefenstette et al., 2015) has an LSTM as the Controller, and a Sequential Input, a differentiable Memory Queue, and a Sequential Output as Interfaces.

Our model fits into the Interfaces–Controller abstraction. It has a direct access LSTM as the Controller (or an LSTM or a feed forward network), and its three interfaces are the Input Tape, the Memory Tape, and the Output Tape. All three Interfaces of the RL–NTM are discrete and cannot be trained only with backpropagation.
This prior work investigates continuous and differentiable Interfaces, while we consider discrete Interfaces. Discrete Interfaces are more challenging to train because backpropagation cannot be used. However, many external Interfaces are inherently discrete, even though humans can easily use them (apparently without using continuous backpropagation). For instance, one interacts with the Google search engine with discrete actions. This work examines the possibility of learning models that interact with discrete Interfaces with the Reinforce algorithm.
The Reinforce algorithm (Williams, 1992) is a classical RL algorithm, which has been applied to the broad spectrum of planning problems (Peters & Schaal, 2006; Kohl & Stone, 2004; Aberdeen & Baxter, 2002). In addition, it has been applied in object recognition to implement visual attention (Mnih et al., 2014; Ba et al., 2014). This work uses Reinforce to train an attention mechanism: we use it to train how to access the various tapes provided to the model.
The RL–NTM can postpone prediction for an arbitrary number of timesteps, and in principle has access to unbounded memory. As a result, the RL-NTM is Turing complete in principle. There have been very few prior models that are Turing complete (Schmidhuber, 2012; 2004). Although our model is Turing complete, it is not very powerful because it is very difficult to train, and our model can solve only relatively simple problems. Moreover, the RL–NTM does not exploit Turing completeness, as none of the tasks that it solves require superlinear runtime to be solved.
# 4 THE REINFORCE ALGORITHM
Notation. Let $\mathcal{A}$ be a space of actions, and $A^\dagger$ be the space of all sequences of actions that cause an episode to end (so $A^\dagger \subseteq \mathcal{A}^*$). An action at time-step $t$ is denoted by $a_t$. We denote the time at the end of an episode by $T$ (this is not completely formal, as some episodes can vary in time). Let $a_{1:t}$ stand for a sequence of actions $[a_1, a_2, \ldots, a_t]$. Let $r(a_{1:t})$ denote the reward achieved at time $t$, having executed the sequence of actions $a_{1:t}$, and let $R(a_{1:T})$ be the cumulative reward, namely $R(a_{t:T}) = \sum_{i=t}^{T} r(a_{1:i})$. Let $p_\theta(a_t | a_{1:(t-1)})$ be a parametric conditional probability of an action $a_t$ given all previous actions $a_{1:(t-1)}$. Finally, $p_\theta$ is a policy parametrized by $\theta$.
This work relies on learning discrete actions with the Reinforce algorithm (Williams, 1992). We now describe this algorithm in detail. Moreover, the supplementary materials include descriptions of techniques for reducing the variance of the gradient estimators.
The goal of reinforcement learning is to maximize the sum of future rewards. The Reinforce algorithm (Williams, 1992) does so directly by optimizing the parameters of the policy $p_\theta(a_t | a_{1:(t-1)})$. Reinforce follows the gradient of the sum of the future rewards. The objective function for episodic Reinforce can be expressed as the sum over all sequences of valid actions that cause the episode to end:
$$J(\theta) = \sum_{[a_1, a_2, \ldots, a_T] \in A^\dagger} p_\theta(a_1, a_2, \ldots, a_T) R(a_1, a_2, \ldots, a_T) = \sum_{a_{1:T} \in A^\dagger} p_\theta(a_{1:T}) R(a_{1:T})$$
This sum iterates over sequences of all possible actions. This set is usually exponential or even infinite, so it cannot be computed exactly and cheaply for most problems. However, it can be written as an
expectation, which can be approximated with an unbiased estimator. We have that:
$$J(\theta) = \sum_{a_{1:T} \in A^\dagger} p_\theta(a_{1:T}) R(a_{1:T}) = \mathbb{E}_{a_{1:T} \sim p_\theta} \sum_{t=1}^{T} r(a_{1:t}) = \mathbb{E}_{a_1 \sim p_\theta(a_1)} \mathbb{E}_{a_2 \sim p_\theta(a_2 | a_1)} \cdots \mathbb{E}_{a_T \sim p_\theta(a_T | a_{1:T-1})} \sum_{t=1}^{T} r(a_{1:t})$$
The last expression suggests a procedure to estimate $J(\theta)$: simply sequentially sample each $a_t$ from the model distribution $p_\theta(a_t | a_{1:(t-1)})$ for $t$ from 1 to $T$. The unbiased estimator of $J(\theta)$ is the sum of $r(a_{1:t})$. This gives us an algorithm to estimate $J(\theta)$. However, the main interest is in training a model to maximize this quantity.
The Reinforce algorithm maximizes $J(\theta)$ by following its gradient:

$$\nabla_\theta J(\theta) = \sum_{a_{1:T} \in A^\dagger} \big[\nabla_\theta p_\theta(a_{1:T})\big] R(a_{1:T})$$
However, the above expression is a sum over the set of all possible action sequences, so it cannot be computed directly for most $A^\dagger$. Once again, the Reinforce algorithm rewrites this sum as an expectation that is approximated with sampling. It relies on the identity $\nabla_\theta f(\theta) = f(\theta) \frac{\nabla_\theta f(\theta)}{f(\theta)} = f(\theta) \nabla_\theta [\log f(\theta)]$. This identity is valid as long as $f(x) \neq 0$. As typical neural network parametrizations of distributions assign non-zero probability to every action, this condition holds for $f = p_\theta$. We have that:
$$\begin{aligned}
\nabla_\theta J(\theta) &= \sum_{a_{1:T} \in A^\dagger} \big[\nabla_\theta p_\theta(a_{1:T})\big] R(a_{1:T}) \\
&= \sum_{a_{1:T} \in A^\dagger} p_\theta(a_{1:T}) \big[\nabla_\theta \log p_\theta(a_{1:T})\big] R(a_{1:T}) \\
&= \sum_{a_{1:T} \in A^\dagger} p_\theta(a_{1:T}) \Big[\sum_{t=1}^{T} \nabla_\theta \log p_\theta(a_t | a_{1:(t-1)})\Big] R(a_{1:T}) \\
&= \mathbb{E}_{a_1 \sim p_\theta(a_1)} \mathbb{E}_{a_2 \sim p_\theta(a_2 | a_1)} \cdots \mathbb{E}_{a_T \sim p_\theta(a_T | a_{1:T-1})} \Big[\sum_{t=1}^{T} \nabla_\theta \log p_\theta(a_t | a_{1:(t-1)})\Big] \Big[\sum_{t=1}^{T} r(a_{1:t})\Big]
\end{aligned}$$
The last expression gives us an algorithm for estimating $\nabla_\theta J(\theta)$. We have sketched it on the left side of Figure 3. It is easiest to describe it with respect to the computational graph behind a neural network. Reinforce can be implemented as follows. A neural network outputs $l_t = \log p_\theta(a_t | a_{1:(t-1)})$. Sequentially sample action $a_t$ from the distribution $e^{l_t}$, and execute the sampled action $a_t$. Simultaneously, experience a reward $r(a_{1:t})$. Backpropagate the sum of the rewards $\sum_{t=1}^{T} r(a_{1:t})$ to every node $\nabla_\theta \log p_\theta(a_t | a_{1:(t-1)})$.
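The recipe above maps directly onto a few lines of code. Below is a minimal PyTorch sketch of one episodic Reinforce update; the policy network and the env object with reset/step methods are illustrative assumptions, and no variance-reduction tricks are applied yet.

```python
import torch

def reinforce_update(policy, env, optimizer):
    obs, done = env.reset(), False
    logps, rewards = [], []
    while not done:
        # The network outputs l_t = log p_theta(a_t | a_{1:t-1}).
        dist = torch.distributions.Categorical(logits=policy(obs))
        action = dist.sample()               # sample a_t from e^{l_t}
        logps.append(dist.log_prob(action))
        obs, reward, done = env.step(action.item())
        rewards.append(reward)
    episode_return = sum(rewards)            # R(a_{1:T})
    # Backpropagate the full-episode return into every log-probability node.
    loss = -episode_return * torch.stack(logps).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return episode_return
```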
We have derived an unbiased estimator for the sum of future rewards, and the unbiased estimator of its gradient. However, the derived gradient estimator has high variance, which makes learning difficult. The RL–NTM employs several techniques to reduce the gradient estimator variance: (1) future rewards backpropagation, (2) online baseline prediction, and (3) offline baseline prediction. All these techniques are crucial to solve our tasks. We provide a detailed description of these techniques in the Supplementary material.
Finally, we needed a way of verifying the correctness of our implementation. We discovered a technique that makes it possible to easily implement a gradient checker for nearly any model that uses Reinforce. Section 5 describes this technique.
# 5 GRADIENT CHECKING
The RL–NTM is complex, so we needed to find an automated way of verifying the correctness of our implementation. We discovered a technique that makes it possible to easily implement a gradient checker for nearly any model that uses Reinforce. This discovery is an independent contribution of this
Figure 3: The figure sketches two algorithms: (Left) the Reinforce algorithm, (Right) gradient checking for the Reinforce algorithm. The red color indicates the steps necessary to override Reinforce so that it becomes a gradient checker for Reinforce.
work. This Section describes gradient checking for any implementation of the Reinforce algorithm that uses a general function for sampling from a multinomial distribution.
The Reinforce gradient verification should ensure that the expected gradient over all sequences of actions matches the numerical derivative of the expected objective. However, even for a tiny problem, we would need to draw billions of samples to achieve estimates accurate enough to state whether there is a match or mismatch. Instead, we developed a technique which avoids sampling, and allows for gradient verification of Reinforce within seconds on a laptop.
First, we have to reduce the size of our task to make sure that the number of possible actions is manageable (e.g., $< 10^4$). This is similar to conventional gradient checkers, which can only be applied to small models. Next, we enumerate all possible sequences of actions that terminate the episode. By definition, these are precisely all the elements of $A^\dagger$.
The key idea is the following: we override the sampling function, which turns a multinomial distribution into a random sample, with a deterministic function that deterministically chooses actions from an appropriate action sequence from $A^\dagger$, while accumulating their probabilities. By calling the modified sampler, it will produce every possible action sequence from $A^\dagger$ exactly once. For efficiency, it is desirable to use a single minibatch whose size is $\#A^\dagger$. The sampling function needs to be adapted in such a way that it incrementally outputs the appropriate sequence from $A^\dagger$ as we repeatedly call it. At the end of the minibatch, the sampling function will have access to the total probability of each action sequence ($\prod_t p_\theta(a_t | a_{1:t-1})$), which in turn can be used to exactly compute $J(\theta)$ and its derivative. To compute the derivative, the Reinforce gradient produced by each sequence $a_{1:T} \in A^\dagger$ should be weighted by its probability $p_\theta(a_{1:T})$. We summarize this procedure in Figure 3.
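A sketch of this check for a toy fixed-horizon problem follows: it enumerates every sequence in $A^\dagger$, accumulates each sequence's probability, forms the probability-weighted Reinforce gradient, and compares it against a numerical derivative of $J(\theta)$. The two-action, history-independent softmax policy and the toy reward are illustrative assumptions that keep $\#A^\dagger$ small.

```python
import itertools
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def expected_J(theta, reward, T):
    # Exact J(theta): sum over all of A-dagger of p(seq) * R(seq).
    return sum(np.prod([softmax(theta)[a] for a in seq]) * reward(seq)
               for seq in itertools.product([0, 1], repeat=T))

def reinforce_gradient(theta, reward, T):
    grad = np.zeros_like(theta)
    probs = softmax(theta)
    for seq in itertools.product([0, 1], repeat=T):     # every element of A-dagger
        p = np.prod([probs[a] for a in seq])            # accumulated probability
        glogp = sum(np.eye(len(theta))[a] - probs for a in seq)
        grad += p * glogp * reward(seq)                 # weight Reinforce term by p
    return grad

theta, T, eps = np.array([0.3, -0.2]), 3, 1e-6
reward = sum                                            # toy reward: count of 1s
numeric = np.zeros_like(theta)
for i in range(len(theta)):
    up, down = theta.copy(), theta.copy()
    up[i] += eps
    down[i] -= eps
    numeric[i] = (expected_J(up, reward, T) - expected_J(down, reward, T)) / (2 * eps)
assert np.allclose(reinforce_gradient(theta, reward, T), numeric, atol=1e-5)
```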
The gradient checking is critical for ensuring the correctness of our implementation. While the basic Reinforce algorithm is conceptually simple, the RL–NTM is fairly complicated, as Reinforce is used to train several Interfaces of our model. Moreover, the RL–NTM uses three separate techniques for reducing the variance of the gradient estimators. The model's high complexity greatly increases the probability of a code error. In particular, our early implementations were incorrect, and we were able to fix them only after implementing gradient checking.
# 6 TASKS
This section deï¬nes tasks used in the experiments. Figure 4 shows exemplary instantiations of our tasks. Table 2 summarizes the Interfaces that are available for each task.
Task | Input Tape | Memory Tape
Copy | ✓ | ✗
DuplicatedInput | ✓ | ✗
Reverse | ✓ | ✗
RepeatCopy | ✓ | ✗
ForwardReverse | ✗ | ✓

Table 2: This table marks the available Interfaces for each task. The difficulty of a task is dependent on the type of Interfaces available to the model.
Copy DuplicatedInput Reverse RepeatCopy ForwardReverse
Figure 4: This Figure presents the initial state for every task. The yellow box indicates the starting position of the reading head over the Input Interface. The gray characters on the Output Tape represent the target symbols. Our tasks involve reordering symbols, and the symbols $x_i$ have been picked uniformly from a set of size 30.

Copy. A generic input is $x_1 x_2 x_3 \ldots x_C \varnothing$ and the desired output is $x_1 x_2 \ldots x_C \varnothing$. Thus the goal is to repeat the input. The length of the input sequence is variable and is allowed to change. The input sequence and the desired output both terminate with a special end-of-sequence symbol $\varnothing$.

DuplicatedInput. A generic input has the form $x_1 x_1 x_1 x_2 x_2 x_2 x_3 \ldots x_{C-1} x_C x_C x_C \varnothing$ while the desired output is $x_1 x_2 x_3 \ldots x_C \varnothing$. Thus each input symbol is replicated three times, so the RL-NTM must emit every third input symbol.

Reverse. A generic input is $x_1 x_2 \ldots x_{C-1} x_C \varnothing$ and the desired output is $x_C x_{C-1} \ldots x_2 x_1 \varnothing$.

RepeatCopy. A generic input is $m\, x_1 x_2 x_3 \ldots x_C \varnothing$ and the desired output is $x_1 x_2 \ldots x_C x_1 \ldots x_C x_1 \ldots x_C \varnothing$, where the number of copies is given by $m$. Thus the goal is to copy the input $m$ times, where $m$ can be only 2 or 3.

ForwardReverse. The task is identical to Reverse, but the RL-NTM is only allowed to move its input tape pointer forward. This means that a perfect solution must use the NTM's external memory.
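A sketch of how input/target pairs for these tasks can be generated is given below, assuming integer symbols 1..30 and 0 as the end-of-sequence marker $\varnothing$; the concrete encoding is an assumption, since the paper only fixes the symbol-set size.

```python
import random

EOS, VOCAB = 0, 30   # assumed encoding: 0 marks the end-of-sequence symbol

def make_example(task, C, m=2):
    xs = [random.randint(1, VOCAB) for _ in range(C)]
    if task == "Copy":
        return xs + [EOS], xs + [EOS]
    if task == "DuplicatedInput":
        return [x for x in xs for _ in range(3)] + [EOS], xs + [EOS]
    if task in ("Reverse", "ForwardReverse"):   # same data, different Interfaces
        return xs + [EOS], xs[::-1] + [EOS]
    if task == "RepeatCopy":
        return [m] + xs + [EOS], xs * m + [EOS]
    raise ValueError(task)
```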
# 7 CURRICULUM LEARNING
Humans and animals learn much better when the examples are not randomly presented but organized in a meaningful order which illustrates gradually more concepts, and gradually more complex ones. . . . and call them "curriculum learning".
Bengio et al. (2009)
We were unable to solve the tasks with the RL–NTM by training it on the difficult instances of the problems (where difficult usually means long). To succeed, we had to create a curriculum of tasks of increasing complexity. We verified that our tasks were completely unsolvable (in an all-or-nothing sense) for all but the shortest sequences when we did not use a curriculum. In our experiments, we measure the complexity c of a problem instance by the maximal length of the desired output for typical inputs. During training, we maintain a distribution over the task complexity. We shift the distribution over the task complexities whenever the performance of the RL–NTM exceeds a threshold. Then, our model focuses on more difficult problem instances as its performance improves.
Probability | Procedure to pick complexity d
10% | uniformly at random from the possible task complexities
25% | uniformly from [1, C + e]
65% | d = D + e

Table 3: The curriculum learning distribution, indexed by C. Here e is a sample from a geometric distribution whose success probability is 1/2, i.e., p(e = k) = 1/2^k.
The distribution over task complexities is indexed with an integer c, and is defined in Table 3. While we have not tuned the coefficients in the curriculum learning setup, we experimentally verified that it is critical to always maintain non-negligible mass over the hardest difficulty levels (Zaremba & Sutskever, 2014). Removing it makes the curriculum much less effective.
Whenever the average zero-one-loss (normalized by the length of the target sequence) of our RL–NTM decreases below 0.2, we increase c by 1. We kept doing so until c reaches its maximal allowable value. Finally, we enforced a refractory period to ensure that successive increments of c are separated by at least 100 parameter updates, since we encountered situations where c increased in rapid succession, which consistently caused learning to fail.
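A sketch of sampling from the Table 3 distribution follows; reading the table's D as the current curriculum index C, and clipping at a maximal allowed complexity max_c, are assumptions made for illustration.

```python
import random

def sample_complexity(C, max_c):
    e = 1
    while random.random() < 0.5:   # geometric: p(e = k) = 1 / 2**k
        e += 1
    u = random.random()
    if u < 0.10:                   # 10%: any possible task complexity
        return random.randint(1, max_c)
    if u < 0.35:                   # 25%: uniformly from [1, C + e]
        return random.randint(1, min(C + e, max_c))
    return min(C + e, max_c)       # 65%: d = C + e (D read as C; clipped)
```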
# 8 CONTROLLERS
The success of reinforcement learning training highly depends on the complexity of the controller, and its ease of training. It is common to either limit the number of parameters of the network, or to constrain it by initialization from a model pretrained on some other task (for instance, an object recognition network for robotics). Ideally, models should be generic enough to not need such "tricks". However, some tasks still require building task-specific architectures.
Figure 6: The direct access controller.
Figure 5: LSTM as a controller.
This work considers two controllers. The first is an LSTM (Fig. 5), and the second is a direct access controller (Fig. 6). LSTM is a generic controller that in principle should be powerful enough to solve any of the considered tasks. However, it has trouble solving many of them. The direct access controller is a much better fit for symbol rearrangement tasks; however, it is not a generic solution.
8.1 DIRECT ACCESS CONTROLLER
All the tasks that we consider involve rearranging the input symbols in some way. For example, a typical task is to reverse a sequence (section 6 lists the tasks). For such tasks, the controller would benefit from a built-in mechanism for directly copying an appropriate input to memory and to the output. Such a mechanism would free the LSTM controller from remembering the input symbol in its control variables ("registers"), and would shorten the backpropagation paths and therefore make learning easier. We implemented this mechanism by adding the input to the memory and the output, and also adding the memory to the output and to the adjacent memories (figure 6), while modulating these additive contributions by a dynamic scalar (sigmoid) which is computed from the controller's state. This way, the controller can decide to effectively not add the current input to the output at a given timestep. Unfortunately, the necessity of this architectural modification is a drawback of our implementation, since it is not domain independent and would therefore not improve the performance of the RL–NTM on many tasks of interest.
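One timestep of this gating can be sketched as below; the tensor names, the assumption that the input embedding, memory cell, and output share one dimensionality, and the three-scalar gate layer are illustrative choices rather than the exact RL–NTM wiring.

```python
import torch

def direct_access_step(h, x_emb, mem_cell, gate_layer, out_layer):
    # h: controller state [B, H]; x_emb, mem_cell: [B, D] (assumed same D).
    g = torch.sigmoid(gate_layer(h))        # [B, 3]: three dynamic scalars
    out = out_layer(h)                      # controller's own output contribution
    out = out + g[:, 0:1] * x_emb           # additive copy: input -> output
    out = out + g[:, 1:2] * mem_cell        # additive copy: memory -> output
    new_mem = mem_cell + g[:, 2:3] * x_emb  # additive copy: input -> memory
    return out, new_mem
```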
Task | LSTM | Direct Access
Copy | ✓ | ✓
DuplicatedInput | ✓ | ✓
Reverse | ✗ | ✓
ForwardReverse | ✗ | ✓
RepeatCopy | ✗ | ✓

Table 4: Success of training on various tasks for a given controller.
# 9 EXPERIMENTS
We present the results of training the RL–NTM on all the aforementioned tasks. The main drawback of our experiments is the lack of comparison to other models. However, the tasks that we consider have to be considered in conjunction with the available Interfaces, and other models haven't been considered with the same set of interfaces. The statement "this model solves addition" is difficult to assess, as the way that digits are delivered defines the task difficulty.
The closest model to ours is the NTM, and the shared task that they consider is copying. We are able to generalize with copying to an arbitrary length. However, our Interfaces make this task very simple. Table 4 summarizes the results.
We trained our model using SGD with a fixed learning rate of 0.05 and a fixed momentum of 0.9. We used a batch of size 200, which we found to work better than smaller batch sizes (such as 50 or 20). We normalized the gradient by batch size but not by sequence length. We independently clip the norm of the gradients w.r.t. the RL-NTM parameters to 5, and the gradient w.r.t. the baseline network to 2. We initialize the RL–NTM controller and the baseline model using a Gaussian with standard deviation 0.1. We used an inverse temperature of 0.01 for the different action distributions. Doing so reduced the effective learning rate of the Reinforce derivatives. The memory consists of 35 real values through which we backpropagate. The initial memory state and the controller's initial hidden states were set to the zero vector.
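As an illustration of the independent clipping, here is a sketch using PyTorch's clip_grad_norm_; treating the RL-NTM and the baseline network as two separate modules sharing one optimizer is an assumption of this sketch.

```python
import torch

def clip_and_step(rlntm, baseline, optimizer):
    # Clip the two parameter groups independently, as described above.
    torch.nn.utils.clip_grad_norm_(rlntm.parameters(), max_norm=5.0)
    torch.nn.utils.clip_grad_norm_(baseline.parameters(), max_norm=2.0)
    optimizer.step()
    optimizer.zero_grad()
```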
Figure 7: (Left) Trace of the ForwardReverse solution, (Right) trace of RepeatInput. The vertical axis depicts execution time. The rows show the input pointer, output pointer, and memory pointer (with the ∗ symbol) at each step of the RL-NTM's execution. Note that we represent the set {1, . . . , 30} with 30 distinct symbols, and lack of prediction with #.
The ForwardReverse task is particularly interesting. In order to solve the problem, the RL–NTM has to move to the end of the sequence without making any predictions. While doing so, it has to store the input sequence in its memory (encoded in real values), and use its memory when reversing the sequence (Fig. 7).
We have also experimented with a number of additional tasks, but with less empirical success. Tasks we found to be too difficult include sorting and long integer addition (in base 3 for simplicity), and RepeatCopy when the input tape is forced to only move forward. While we were able to achieve reasonable performance on the sorting task, the RL–NTM learned an ad-hoc algorithm and made excessive use of its controller memory in order to sort the sequence.
Empirically, we found all the components of the RL-NTM essential to successfully solving these problems. All our tasks are either solvable in under 20,000 parameter updates or fail in an arbitrary number of updates. We were completely unable to solve RepeatCopy, Reverse, and ForwardReverse with the LSTM controller, but with the direct access controller we succeeded. Moreover, we were also unable to solve any of these problems at all without a curriculum (except for short sequences of length 5). We present more traces for our tasks in the supplementary material (together with failure traces).
# 10 CONCLUSIONS
We have shown that the Reinforce algorithm is capable of training an NTM-style model to solve very simple algorithmic problems. While the Reinforce algorithm is very general and is easily applicable to a wide range of problems, it seems that learning memory access patterns with Reinforce is difficult.
Our gradient checking procedure for Reinforce can be applied to a wide variety of implementations. We also found it extremely useful: without it, we had no way of being sure that our gradient was correct, which made debugging and tuning much more difficult.
# 11 ACKNOWLEDGMENTS
We thank Christopher Olah for the LSTM figure that has been used in the paper, and Tencia Lee for revising the paper.
# REFERENCES
Aberdeen, Douglas and Baxter, Jonathan. Scaling internal-state policy-gradient methods for POMDPs. In Machine Learning: International Workshop then Conference, pp. 3–10, 2002.
Ba, Jimmy, Mnih, Volodymyr, and Kavukcuoglu, Koray. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014.
Bengio, Yoshua, Louradour, Jérôme, Collobert, Ronan, and Weston, Jason. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41–48. ACM, 2009.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014a.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014b.
Grefenstette, Edward, Hermann, Karl Moritz, Suleyman, Mustafa, and Blunsom, Phil. Learning to transduce with unbounded memory. arXiv preprint arXiv:1506.02516, 2015.
Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrent nets. arXiv preprint arXiv:1503.01007, 2015.
Kohl, Nate and Stone, Peter. Policy gradient reinforcement learning for fast quadrupedal locomotion. In Robotics and Automation, 2004. Proceedings. ICRA'04. 2004 IEEE International Conference on, volume 3, pp. 2619–2624. IEEE, 2004.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis, Wierstra, Daan, and Riedmiller, Martin. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
Mnih, Volodymyr, Heess, Nicolas, Graves, Alex, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204–2212, 2014.
Peters, Jan and Schaal, Stefan. Policy gradient methods for robotics. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pp. 2219–2225. IEEE, 2006.
Schmidhuber, Juergen. Self-delimiting neural networks. arXiv preprint arXiv:1210.0118, 2012.
Schmidhuber, Jürgen. Optimal ordered problem solver. Machine Learning, 54(3):211–254, 2004.
Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. Weakly supervised memory networks. arXiv preprint arXiv:1503.08895, 2015.
Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
Zaremba, Wojciech and Sutskever, Ilya. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
# APPENDIX A: DETAILED REINFORCE EXPLANATION
We present here several techniques to decrease the variance of the gradient estimation for Reinforce. We have employed all of these techniques in our RL–NTM implementation. We expand the notation introduced in Sec. 4. Let $A^\ddagger$ denote all valid subsequences of actions (i.e. $A^\dagger \subseteq A^\ddagger \subseteq \mathcal{A}^*$). Moreover, we define the set of sequences of actions that are valid after executing a sequence $a_{1:t}$, and that terminate. We denote such a set by $A^\dagger_{a_{1:t}}$; every sequence $a_{(t+1):T} \in A^\dagger_{a_{1:t}}$ terminates an episode.
# CAUSALITY OF ACTIONS
Actions at time $t$ cannot possibly influence rewards obtained in the past, because the past rewards are caused by actions prior to them. This idea allows us to derive an unbiased estimator of $\nabla_\theta J(\theta)$ with lower variance. Here, we formalize it:
$$\begin{aligned}
\nabla_\theta J(\theta) &= \sum_{a_{1:T} \in A^\dagger} p_\theta(a) \big[\nabla_\theta \log p_\theta(a)\big] R(a) \\
&= \sum_{a_{1:T} \in A^\dagger} p_\theta(a) \big[\nabla_\theta \log p_\theta(a)\big] \Big[\sum_{t=1}^{T} r(a_{1:t})\Big] \\
&= \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^{T} p_\theta(a) \Big[\nabla_\theta \log p_\theta(a_{1:t})\, r(a_{1:t}) + \nabla_\theta \log p_\theta(a_{(t+1):T} | a_{1:t})\, r(a_{1:t})\Big] \\
&= \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^{T} \Big[p_\theta(a_{1:t})\, \nabla_\theta \log p_\theta(a_{1:t})\, r(a_{1:t}) + p_\theta(a)\, \nabla_\theta \log p_\theta(a_{(t+1):T} | a_{1:t})\, r(a_{1:t})\Big] \\
&= \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^{T} \Big[p_\theta(a_{1:t})\, \nabla_\theta \log p_\theta(a_{1:t})\, r(a_{1:t}) + p_\theta(a_{1:t})\, r(a_{1:t})\, \nabla_\theta p_\theta(a_{(t+1):T} | a_{1:t})\Big] \\
&= \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^{T} \Big[p_\theta(a_{1:t})\, \nabla_\theta \log p_\theta(a_{1:t})\, r(a_{1:t})\Big] + \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^{T} \Big[p_\theta(a_{1:t})\, r(a_{1:t})\, \nabla_\theta p_\theta(a_{(t+1):T} | a_{1:t})\Big]
\end{aligned}$$
We will show that the right side of this equation is equal to zero. It is zero because the future actions $a_{(t+1):T}$ don't influence past rewards $r(a_{1:t})$. Here we formalize it; we use the identity $\sum_{a_{(t+1):T} \in A^\dagger_{a_{1:t}}} p_\theta(a_{(t+1):T} | a_{1:t}) = 1$:

$$\begin{aligned}
\sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^{T} \Big[p_\theta(a_{1:t})\, r(a_{1:t})\, \nabla_\theta p_\theta(a_{(t+1):T} | a_{1:t})\Big]
&= \sum_{a_{1:t} \in A^\ddagger} \Big[p_\theta(a_{1:t})\, r(a_{1:t}) \sum_{a_{(t+1):T} \in A^\dagger_{a_{1:t}}} \nabla_\theta p_\theta(a_{(t+1):T} | a_{1:t})\Big] \\
&= \sum_{a_{1:t} \in A^\ddagger} p_\theta(a_{1:t})\, r(a_{1:t})\, \nabla_\theta 1 = 0
\end{aligned}$$
We can therefore purge the vanishing term from the expression for ∇θJ(θ):
$$
\begin{aligned}
\nabla_\theta J(\theta) &= \sum_{t=1}^{T} \Big[\sum_{a_{1:t}} p_\theta(a_{1:t})\, \nabla_\theta \log p_\theta(a_{1:t})\, r(a_{1:t})\Big] \\
&= \mathbb{E}_{a_1 \sim p_\theta(a)}\, \mathbb{E}_{a_2 \sim p_\theta(a \mid a_1)} \cdots \mathbb{E}_{a_T \sim p_\theta(a \mid a_{1:(T-1)})} \Big[\sum_{t=1}^{T} \nabla_\theta \log p_\theta(a_t \mid a_{1:(t-1)}) \sum_{i=t}^{T} r(a_{1:i})\Big]
\end{aligned}
$$
The last line of the derived equations describes the learning algorithm. This can be implemented as follows. A neural network outputs l_t = log p_θ(a_t | a_{1:(t-1)}). We sequentially sample an action a_t from the distribution e^{l_t}, and execute the sampled action a_t. Simultaneously, we experience a reward r(a_{1:t}). We should backpropagate to the node ∇θ log p_θ(a_t | a_{1:(t-1)}) the sum of rewards starting from time step t: Σ_{i=t}^{T} r(a_{1:i}). The only difference in comparison to the initial algorithm is that we backpropagate the sum of rewards starting from the current time step, instead of the sum of rewards over the entire episode.
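As a concrete illustration, here is a minimal numpy sketch of the reward-to-go weights used by this update; the episode rewards are made up for the example, and the policy network and sampling loop are omitted:

```python
import numpy as np

def reward_to_go(rewards):
    """Per-step weights for the gradient estimator above: the quantity
    multiplied with grad log p_theta(a_t | a_{1:t-1}) is the sum of
    rewards from the current time step onwards, sum_{i>=t} r(a_{1:i})."""
    w = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running += rewards[t]
        w[t] = running
    return w

# Example episode with per-step rewards r(a_{1:t}).
print(reward_to_go(np.array([0.0, 1.0, -0.5, 1.0])))  # -> 1.5, 1.5, 0.5, 1.0
```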
# ONLINE BASELINE PREDICTION
Online baseline prediction is based on the idea that the importance of a reward is determined by its relation to other rewards. All the rewards could be shifted by a constant factor without changing this relation, so such a shift should not influence the expected gradient. However, it can decrease the variance of the gradient estimate.
The aforementioned shift is called the baseline, and it can be estimated separately for every time step. We have that:
$$
\sum_{a_{(t+1):T} \in A^\ddagger_{a_{1:t}}} p_\theta(a_{(t+1):T} \mid a_{1:t}) = 1,
\qquad
\nabla_\theta \sum_{a_{(t+1):T} \in A^\ddagger_{a_{1:t}}} p_\theta(a_{(t+1):T} \mid a_{1:t}) = 0.
$$
We are allowed to subtract the above quantity (multiplied by b_t) from our estimate of the gradient without changing its expected value:
$$
\nabla_\theta J(\theta) = \mathbb{E}_{a_1 \sim p_\theta(a)}\, \mathbb{E}_{a_2 \sim p_\theta(a \mid a_1)} \cdots \mathbb{E}_{a_T \sim p_\theta(a \mid a_{1:(T-1)})} \Big[\sum_{t=1}^{T} \nabla_\theta \log p_\theta(a_t \mid a_{1:(t-1)}) \sum_{i=t}^{T} \big(r(a_{1:i}) - b_t\big)\Big]
$$
The above statement holds for any sequence b_t. We aim to find the sequence b_t that yields the lowest-variance estimator of ∇θJ(θ). The variance of our estimator is:
$$
\begin{aligned}
\mathrm{Var} = {}& \mathbb{E}_{a_1 \sim p_\theta(a)} \cdots \mathbb{E}_{a_T \sim p_\theta(a \mid a_{1:(T-1)})} \Big[\Big(\sum_{t=1}^{T} \nabla_\theta \log p_\theta(a_t \mid a_{1:(t-1)}) \sum_{i=t}^{T} \big(r(a_{1:i}) - b_t\big)\Big)^2\Big] \\
&- \Big(\mathbb{E}_{a_1 \sim p_\theta(a)} \cdots \mathbb{E}_{a_T \sim p_\theta(a \mid a_{1:(T-1)})} \Big[\sum_{t=1}^{T} \nabla_\theta \log p_\theta(a_t \mid a_{1:(t-1)}) \sum_{i=t}^{T} \big(r(a_{1:i}) - b_t\big)\Big]\Big)^2
\end{aligned}
$$
The second term does not depend on b_t, and the variance is always positive. It is therefore sufficient to minimize the first term. The first term is minimal when its derivative with respect to b_t is zero. This implies
$$
\mathbb{E}_{a_1 \sim p_\theta(a)} \cdots \mathbb{E}_{a_T \sim p_\theta(a \mid a_{1:(T-1)})} \Big[\sum_{t=1}^{T} \nabla_\theta \log p_\theta(a_t \mid a_{1:(t-1)}) \sum_{i=t}^{T} \big(r(a_{1:i}) - b_t\big)\Big] = 0,
$$

which holds per time step when

$$
\nabla_\theta \log p_\theta(a_t \mid a_{1:(t-1)}) \sum_{i=t}^{T} \big(r(a_{1:i}) - b_t\big) = 0
\quad\Longrightarrow\quad
b_t = \frac{\nabla_\theta \log p_\theta(a_t \mid a_{1:(t-1)}) \sum_{i=t}^{T} r(a_{1:i})}{\nabla_\theta \log p_\theta(a_t \mid a_{1:(t-1)})},
$$

where the division is taken element-wise over the components of θ.
This gives us an estimate for a vector b_t ∈ R^{#θ}. However, it is common to use a single scalar b_t ∈ R, and to estimate it as E_{p_θ(a_{t:T} | a_{1:(t-1)})}[R(a_{t:T})].
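A small numpy sketch of the scalar variant, estimating b_t as the empirical mean of the reward-to-go over a batch of sampled episodes (the batch layout is illustrative):

```python
import numpy as np

def centered_reward_to_go(rtg_batch):
    """rtg_batch[k, t] holds sum_{i>=t} r(a_{1:i}) for episode k.
    The scalar baseline b_t is the batch mean at each time step; the
    centered values are what multiplies grad log p_theta(a_t | .)."""
    b = rtg_batch.mean(axis=0)   # b_t ~ E[R(a_{t:T} | a_{1:(t-1)})]
    return rtg_batch - b         # broadcasts b_t over episodes

rtg = np.array([[3.0, 2.0, 1.0],
                [1.0, 0.0, 1.0]])
print(centered_reward_to_go(rtg))  # each column is now zero-mean
```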
# OFFLINE BASELINE PREDICTION
The Reinforce algorithm works much better whenever it has accurate baselines. A separate LSTM can help in the baseline estimation. First, run the baseline LSTM on the entire input tape to produce a vector summarizing the input. Next, continue running the baseline LSTM in tandem with the controller LSTM,
Figure 8: The baseline LSTM computes a baseline bt for every computational step t of the RL-NTM. The baseline LSTM receives the same inputs as the RL-NTM, and it computes a baseline bt for time t before observing the chosen actions of time t. However, it is important to first provide the baseline LSTM with the entire input tape as a preliminary input, because doing so allows the baseline LSTM to accurately estimate the true difficulty of a given problem instance and therefore compute better baselines. For example, if a problem instance is unusually difficult, then we expect R1 to be large and negative. If the baseline LSTM is given the entire input tape as an auxiliary input, it could compute an appropriately large and negative b1.
so that the baseline LSTM receives precisely the same inputs as the controller LSTM, and outputs a baseline b_t at each timestep t. The baseline LSTM is trained to minimize Σ_t [R(a_{t:T}) − b_t]² (Fig. 8). This technique introduces a biased estimator; however, it works well in practice.
We found it important to first have the baseline LSTM go over the entire input before computing the baselines bt. It is especially beneficial whenever there is considerable variation in the difficulty of the examples. For example, if the baseline LSTM can recognize that the current instance is unusually difficult, it can output a large negative value for bt=1 in anticipation of a large and negative R1. In general, it is cheap and therefore worthwhile to provide the baseline network with all of the available information, even if this information would not be available at test time, because the baseline network is not needed at test time.
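A minimal sketch of the baseline regression objective, using a linear function of precomputed per-step features as a stand-in for the baseline LSTM (the feature vectors, which would summarize the full input tape plus the episode prefix, are assumed given):

```python
import numpy as np

def baseline_loss_and_grad(w, features, reward_to_go):
    """Squared-error objective sum_t (R(a_{t:T}) - b_t)^2 for a linear
    baseline b_t = features[t] @ w, a stand-in for the baseline LSTM."""
    b = features @ w                  # (T,) predicted baselines
    err = reward_to_go - b            # (T,) residuals R_t - b_t
    return np.sum(err ** 2), -2.0 * features.T @ err

# One gradient step on made-up data.
T, D = 5, 8
rng = np.random.default_rng(0)
feats, rtg, w = rng.normal(size=(T, D)), rng.normal(size=T), np.zeros(D)
loss, grad = baseline_loss_and_grad(w, feats, rtg)
w -= 0.01 * grad
```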
# APPENDIX B: EXECUTION TRACES
We present several execution traces of the RL-NTM. Each figure shows execution traces of the trained RL-NTM on each of the tasks. The first row shows the input tape and the desired output, while each subsequent row shows the RL-NTM's position on the input tape and its prediction for the output tape. In these examples, the RL-NTM solved each task perfectly, so the predictions made in the output tape perfectly match the desired outputs listed in the first row.
[Execution trace figures: the input tape, the desired output, and the RL-NTM's per-step tape positions and output predictions for two solved episodes.]
An RL-NTM successfully solving a small instance of the Reverse problem (where the external memory is not used).

An RL-NTM successfully solving a small instance of the ForwardReverse problem, where the external memory is used.
[Execution trace figures: input tape, memory, and output tape contents at each step for a RepeatCopy episode and a failure case.]
An RL-NTM successfully solving an instance of the RepeatCopy problem where the input is to be repeated three times.
An example of a failure on the RepeatCopy task, where the input tape is only allowed to move forward. The correct solution would have been to copy the input to the memory, and then solve the task using the memory. Instead, the memory pointer moves randomly.
| {
"id": "1503.01007"
} |
1505.00387 | Highway Networks | There is plenty of theoretical and empirical evidence that depth of neural
networks is a crucial ingredient for their success. However, network training
becomes more difficult with increasing depth and training of very deep networks
remains an open problem. In this extended abstract, we introduce a new
architecture designed to ease gradient-based training of very deep networks. We
refer to networks with this architecture as highway networks, since they allow
unimpeded information flow across several layers on "information highways". The
architecture is characterized by the use of gating units which learn to
regulate the flow of information through a network. Highway networks with
hundreds of layers can be trained directly using stochastic gradient descent
and with a variety of activation functions, opening up the possibility of
studying extremely deep and efficient architectures. | http://arxiv.org/pdf/1505.00387 | Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber | cs.LG, cs.NE, 68T01, I.2.6; G.1.6 | 6 pages, 2 figures. Presented at ICML 2015 Deep Learning workshop.
Full paper is at arXiv:1507.06228 | null | cs.LG | 20150503 | 20151103 |
# Highway Networks
# Rupesh Kumar Srivastava Klaus Greff Jürgen Schmidhuber
RUPESH@IDSIA.CH KLAUS@IDSIA.CH JUERGEN@IDSIA.CH
The Swiss AI Lab IDSIA Istituto Dalle Molle di Studi sull'Intelligenza Artificiale Università della Svizzera italiana (USI) Scuola universitaria professionale della Svizzera italiana (SUPSI) Galleria 2, 6928 Manno-Lugano, Switzerland
# Abstract
There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures.
instance, the top-5 image classification accuracy on the 1000-class ImageNet dataset has increased from ∼84% (Krizhevsky et al., 2012) to ∼95% (Szegedy et al., 2014; Simonyan & Zisserman, 2014) through the use of ensembles of deeper architectures and smaller receptive fields (Ciresan et al., 2011a;b; 2012) in just a few years.

On the theoretical side, it is well known that deep networks can represent certain function classes exponentially more efficiently than shallow ones (e.g. the work of Håstad (1987); Håstad & Goldmann (1991) and recently of Montufar et al. (2014)). As argued by Bengio et al. (2013), the use of deep networks can offer both computational and statistical efficiency for complex tasks.

However, training deeper networks is not as straightforward as simply adding layers. Optimization of deep networks has proven to be considerably more difficult, leading to research on initialization schemes (Glorot & Bengio, 2010; Saxe et al., 2013; He et al., 2015), techniques of training networks in multiple stages (Simonyan & Zisserman, 2014; Romero et al., 2014) or with temporary companion loss functions attached to some of the layers (Szegedy et al., 2014; Lee et al., 2015).

Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.
# 1. Introduction
Many recent empirical breakthroughs in supervised machine learning have been achieved through the application of deep neural networks. Network depth (referring to the number of successive computation layers) has played perhaps the most important role in these successes. For

In this extended abstract, we present a novel architecture that enables the optimization of networks with virtually arbitrary depth. This is accomplished through the use of a learned gating mechanism for regulating information flow which is inspired by Long Short Term Memory recurrent neural networks (Hochreiter & Schmidhuber, 1995). Due to this gating mechanism, a neural network can have paths along which information can flow across several layers without attenuation. We call such paths information highways, and such networks highway networks.
Presented at the Deep Learning Workshop, International Conference on Machine Learning, Lille, France, 2015. Copyright 2015 by the author(s).
In preliminary experiments, we found that highway networks as deep as 900 layers can be optimized using simple Stochastic Gradient Descent (SGD) with momentum. For

up to 100 layers we compare their training behavior to that of traditional networks with normalized initialization (Glorot & Bengio, 2010; He et al., 2015). We show that optimization of highway networks is virtually independent of depth, while for traditional networks it suffers significantly as the number of layers increases. We also show that architectures comparable to those recently presented by Romero et al. (2014) can be directly trained to obtain similar test set accuracy on the CIFAR-10 dataset without the need for a pre-trained teacher network.
# 1.1. Notation
We use boldface letters for vectors and matrices, and italicized capital letters to denote transformation functions. 0 and 1 denote vectors of zeros and ones respectively, and I denotes an identity matrix. The function σ(x) is defined as σ(x) = 1/(1 + e^{−x}), x ∈ R.
# 2. Highway Networks

A plain feedforward neural network typically consists of L layers where the lth layer (l ∈ {1, 2, ..., L}) applies a nonlinear transform H (parameterized by W_{H,l}) on its input x_l to produce its output y_l. Thus, x_1 is the input to the network and y_L is the network's output. Omitting the layer index and biases for clarity,

y = H(x, W_H). (1)

H is usually an affine transform followed by a non-linear activation function, but in general it may take other forms.

For a highway network, we additionally define two nonlinear transforms T(x, W_T) and C(x, W_C) such that

y = H(x, W_H) · T(x, W_T) + x · C(x, W_C). (2)

We refer to T as the transform gate and C as the carry gate, since they express how much of the output is produced by transforming the input and carrying it, respectively. For simplicity, in this paper we set C = 1 − T, giving

y = H(x, W_H) · T(x, W_T) + x · (1 − T(x, W_T)). (3)

The dimensionality of x, y, H(x, W_H) and T(x, W_T) must be the same for Equation (3) to be valid. Note that this re-parametrization of the layer transformation is much more flexible than Equation (1). In particular, observe that

$$
y =
\begin{cases}
\mathbf{x}, & \text{if } T(\mathbf{x}, \mathbf{W_T}) = \mathbf{0}, \\
H(\mathbf{x}, \mathbf{W_H}), & \text{if } T(\mathbf{x}, \mathbf{W_T}) = \mathbf{1}.
\end{cases}
\tag{4}
$$

Similarly, for the Jacobian of the layer transform,

$$
\frac{d\mathbf{y}}{d\mathbf{x}} =
\begin{cases}
\mathbf{I}, & \text{if } T(\mathbf{x}, \mathbf{W_T}) = \mathbf{0}, \\
H'(\mathbf{x}, \mathbf{W_H}), & \text{if } T(\mathbf{x}, \mathbf{W_T}) = \mathbf{1}.
\end{cases}
\tag{5}
$$

Thus, depending on the output of the transform gates, a highway layer can smoothly vary its behavior between that of a plain layer and that of a layer which simply passes its inputs through. Just as a plain layer consists of multiple computing units such that the ith unit computes y_i = H_i(x), a highway network consists of multiple blocks such that the ith block computes a block state H_i(x) and transform gate output T_i(x). Finally, it produces the block output y_i = H_i(x) · T_i(x) + x_i · (1 − T_i(x)), which is connected to the next layer.

# 2.1. Constructing Highway Networks

As mentioned earlier, Equation (3) requires that the dimensionality of x, y, H(x, W_H) and T(x, W_T) be the same. In cases when it is desirable to change the size of the representation, one can replace x with x̂ obtained by suitably sub-sampling or zero-padding x. Another alternative is to use a plain layer (without highways) to change dimensionality and then continue with stacking highway layers. This is the alternative we use in this study.

Convolutional highway layers are constructed similar to fully connected layers. Weight-sharing and local receptive fields are utilized for both H and T transforms. We use zero-padding to ensure that the block state and transform gate feature maps are the same size as the input.

# 2.2. Training Deep Highway Networks

For plain deep networks, training with SGD stalls at the beginning unless a specific weight initialization scheme is used such that the variance of the signals during forward and backward propagation is preserved initially (Glorot & Bengio, 2010; He et al., 2015). This initialization depends on the exact functional form of H.

For highway layers, we use the transform gate defined as T(x) = σ(W_T^T x + b_T), where W_T is the weight matrix and b_T the bias vector for the transform gates. This suggests a simple initialization scheme which is independent of the nature of H: b_T can be initialized with a negative value (e.g. -1, -3 etc.) such that the network is initially biased towards carry behavior. This scheme is strongly inspired by the proposal of Gers et al. (1999) to initially bias the gates in a Long Short-Term Memory recurrent network to help bridge long-term temporal dependencies early in learning. Note that σ(x) ∈ (0, 1), ∀x ∈ R, so the conditions in Equation (4) can never be exactly true.

In our experiments, we found that a negative bias initialization was sufficient for learning to proceed in very deep networks for various zero-mean initial distributions of W_H and different activation functions used by H. This is a significant property since in general it may not be possible to find effective initialization schemes for many choices of H.
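As an illustration of Equations (2)-(4) and the bias scheme above, here is a minimal numpy sketch of one fully connected highway layer; the tanh block nonlinearity and the weight scale are arbitrary choices for the sketch, not prescriptions from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class HighwayLayer:
    """y = H(x, W_H) * T(x, W_T) + x * (1 - T(x, W_T)), with the
    transform-gate bias b_T initialized to a negative value so that
    the layer is initially biased towards carry behavior."""

    def __init__(self, dim, gate_bias=-2.0, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(dim)
        self.W_H = rng.normal(0.0, scale, size=(dim, dim))
        self.W_T = rng.normal(0.0, scale, size=(dim, dim))
        self.b_H = np.zeros(dim)
        self.b_T = np.full(dim, gate_bias)  # e.g. -1 or -3 in the paper

    def forward(self, x):
        H = np.tanh(x @ self.W_H + self.b_H)   # block state H(x, W_H)
        T = sigmoid(x @ self.W_T + self.b_T)   # transform gate T(x, W_T)
        return H * T + x * (1.0 - T)

layer = HighwayLayer(50)
y = layer.forward(np.ones(50))  # close to the input at initialization
```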
# 3. Experiments
# 3.1. Optimization
Very deep plain networks become difficult to optimize even when using the variance-preserving initialization scheme from He et al. (2015). To show that highway networks do not suffer from depth in the same way, we run a series of experiments on the MNIST digit classification dataset. We measure the cross-entropy error on the training set to investigate optimization, without conflating it with generalization issues.
We train both plain networks and highway networks with the same architecture and varying depth. The first layer is always a regular fully-connected layer followed by 9, 19, 49, or 99 fully-connected plain or highway layers and a single softmax output layer. The number of units in each layer is kept constant and it is 50 for highways and 71 for plain networks. That way the number of parameters is roughly the same for both. To make the comparison fair we run a random search of 40 runs for both plain and highway networks to find good settings for the hyperparameters. We optimized the initial learning rate, momentum, learning rate decay rate, activation function for H (either ReLU or tanh) and, for highway networks, the value for the transform gate bias (between -1 and -10). All other weights were initialized following the scheme introduced by He et al. (2015).

The convergence plots for the best performing networks for each depth can be seen in Figure 1. While 10-layer plain networks show very good performance, their performance significantly degrades as depth increases. Highway networks, on the other hand, do not seem to suffer from an increase in depth at all. The final result of the 100-layer highway network is about 1 order of magnitude better than that of the 10-layer one, and is on par with the 10-layer plain network. In fact, we started training a similar 900-layer highway network on CIFAR-100, which is only at 80 epochs as of now, but so far has shown no signs of optimization difficulties. It is also worth pointing out that the highway networks always converge significantly faster than the plain ones.
# 3.2. Comparison to Fitnets
Deep highway networks are easy to optimize, but are they also beneficial for supervised learning, where we are interested in generalization performance on a test set? To address this question, we compared highway networks to the thin and deep architectures termed Fitnets proposed recently by Romero et al. (2014) on the CIFAR-10 dataset augmented with random translations. Results are summarized in Table 1.

Romero et al. (2014) reported that training using plain backpropagation was only possible for maxout networks with depth up to 5 layers when the number of parameters was limited to ∼250K and the number of multiplications to ∼30M. Training of deeper networks was only possible through the use of a two-stage training procedure and the addition of soft targets produced from a pre-trained shallow teacher network (hint-based training). Similarly, it was only possible to train 19-layer networks with a budget of 2.5M parameters using hint-based training.

We found that it was easy to train highway networks with a number of parameters and operations comparable to Fitnets directly using backpropagation. As shown in Table 1, Highway 1 and Highway 4, which are based on the architectures of Fitnet 1 and Fitnet 4 respectively, obtain similar or higher accuracy on the test set. We were also able to train thinner and deeper networks: a 19-layer highway network with ∼1.4M parameters and a 32-layer highway network with ∼1.25M parameters both perform similarly to the teacher network of Romero et al. (2014).
# 4. Analysis
In Figure 2 we show some inspections of the inner workings of the best¹ 50-hidden-layer fully-connected highway networks trained on MNIST (top row) and CIFAR-100 (bottom row). The first three columns show, for each transform gate, the bias, the mean activity over 10K random samples, and the activity for a single random sample respectively. The block outputs for the same single sample are displayed in the last column.

The transform gate biases of the two networks were initialized to -2 and -4 respectively. It is interesting to note that contrary to our expectations most biases actually decreased further during training. For the CIFAR-100 network the biases increase with depth, forming a gradient. Curiously this gradient is inversely correlated with the average activity of the transform gates, as seen in the second column. This indicates that the strong negative biases at low depths are not used to shut down the gates, but to make them more selective. This behavior is also suggested by the fact that the transform gate activity for a single example (column 3) is very sparse. This effect is more pronounced for the CIFAR-100 network, but can also be observed to a lesser extent in the MNIST network.
¹ Obtained via random search over hyperparameters to minimize the best training set error achieved using each configuration.
[Figure 1 plots: mean cross-entropy error (log scale) vs. number of epochs, for depths 10, 20, 50, and 100, comparing plain and highway networks.]
Figure 1. Comparison of optimization of plain networks and highway networks of various depths. All networks were optimized using SGD with momentum. The curves shown are for the best hyperparameter settings obtained for each configuration using a random search. Plain networks become much harder to optimize with increasing depth, while highway networks with up to 100 layers can still be optimized well.
Network | Number of Layers | Number of Parameters | Accuracy

Results reported by Romero et al. (2014):
Teacher | 5 | ∼9M | 90.18%
Fitnet 1 | 11 | ∼250K | 89.01%
Fitnet 2 | 11 | ∼862K | 91.06%
Fitnet 3 | 13 | ∼1.6M | 91.10%
Fitnet 4 | 19 | ∼2.5M | 91.61%

Highway networks:
Highway 1 (Fitnet 1) | 11 | ∼236K | 89.18%
Highway 2 (Fitnet 4) | 19 | ∼2.3M | 92.24%
Highway 3* | 19 | ∼1.4M | 90.68%
Highway 4* | 32 | ∼1.25M | 90.34%
Table 1. CIFAR-10 test set accuracy of convolutional highway networks with rectified linear activation and sigmoid gates. For comparison, results reported by Romero et al. (2014) using maxout networks are also shown. Fitnets were trained using a two-step training procedure using soft targets from the trained Teacher network, which was trained using backpropagation. We trained all highway networks directly using backpropagation. * indicates networks which were trained only on a set of 40K out of 50K examples in the training set.

The last column of Figure 2 displays the block outputs and clearly visualizes the concept of "information highways". Most of the outputs stay constant over many layers, forming a pattern of stripes. Most of the change in outputs happens in the early layers (≈ 10 for MNIST and ≈ 30 for CIFAR-100). We hypothesize that this difference is due to the higher complexity of the CIFAR-100 dataset.
# 5. Conclusion
Learning to route information through neural networks has helped to scale up their application to challenging problems by improving credit assignment and making training easier (Srivastava et al., 2015). Even so, training very deep networks has remained difficult, especially without considerably increasing total network size.

In summary it is clear that highway networks actually utilize the gating mechanism to pass information almost unchanged through many layers. This mechanism serves not just as a means for easier training, but is also heavily used to route information in a trained network. We observe very selective activity of the transform gates, varying strongly in reaction to the current input patterns.

Highway networks are novel neural network architectures which enable the training of extremely deep networks using simple SGD. While traditional plain neural architectures become increasingly difficult to train with increasing network depth (even with variance-preserving initialization), our experiments show that optimization of highway networks is not hampered even as network depth increases to a hundred layers.
The ability to train extremely deep networks opens up the possibility of studying the impact of depth on complex
[Figure 2 panels: transform gate biases, mean transform gate outputs, transform gate outputs for a single sample, and block outputs, for the MNIST (top) and CIFAR-100 (bottom) networks.]

Figure 2. Visualization of certain internals of the blocks in the best 50 hidden layer highway networks trained on MNIST (top row) and CIFAR-100 (bottom row). The first hidden layer is a plain layer which changes the dimensionality of the representation to 50. Each of the 49 highway layers (y-axis) consists of 50 blocks (x-axis). The first column shows the transform gate biases, which were initialized to -2 and -4 respectively. In the second column the mean output of the transform gate over 10,000 training examples is depicted. The third and fourth columns show the output of the transform gates and the block outputs for a single random training sample.
problems without restrictions. Various activation functions which may be more suitable for particular problems but for which robust initialization schemes are unavailable can be used in deep highway networks. Future work will also attempt to improve the understanding of learning in highway networks.
# Acknowledgments
This research was supported by the EU project "NASCENCE" (FP7-ICT-317662). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPUs used for this research.
# References
Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.

Ciresan, Dan, Meier, Ueli, Masci, Jonathan, and Schmidhuber, Jürgen. A committee of neural networks for traffic sign classification. In Neural Networks (IJCNN), The 2011 International Joint Conference on, pp. 1918–1921. IEEE, 2011a.

Ciresan, DC, Meier, Ueli, Masci, Jonathan, Gambardella, Luca M, and Schmidhuber, Jürgen. Flexible, high performance convolutional neural networks for image classification. In IJCAI, 2011b.

Ciresan, Dan, Meier, Ueli, and Schmidhuber, Jürgen. Multi-column deep neural networks for image classification. In IEEE Conference on Computer Vision and Pattern Recognition, 2012.

Gers, Felix A., Schmidhuber, Jürgen, and Cummins, Fred. Learning to forget: Continual prediction with LSTM. In ICANN, volume 2, pp. 850–855, 1999.
Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010.
Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 [cs], September 2014. URL http://arxiv.org/abs/1409.1556.
Håstad, Johan. Computational limitations of small-depth circuits. MIT Press, 1987.

Håstad, Johan and Goldmann, Mikael. On the power of small-depth threshold circuits. Computational Complexity, 1(2):113–129, 1991.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv:1502.01852 [cs], February 2015. URL http://arxiv.org/abs/1502.01852.
Srivastava, Rupesh Kumar, Masci, Jonathan, Gomez, Faustino, and Schmidhuber, Jürgen. Understanding locally competitive networks. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1410.1165.

Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv:1409.4842 [cs], September 2014. URL http://arxiv.org/abs/1409.4842.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short term memory. Technical Report FKI-207-95, Technische Universität München, München, August 1995.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.

Lee, Chen-Yu, Xie, Saining, Gallagher, Patrick, Zhang, Zhengyou, and Tu, Zhuowen. Deeply-supervised nets. pp. 562–570, 2015. URL http://jmlr.org/proceedings/papers/v38/lee15a.html.
Montufar, Guido F., Pascanu, Razvan, Cho, Kyunghyun, and Bengio, Yoshua. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems, 2014.
Romero, Adriana, Ballas, Nicolas, Kahou, Samira Ebrahimi, Chassang, Antoine, Gatta, Carlo, and Bengio, Yoshua. FitNets: Hints for thin deep nets. arXiv:1412.6550 [cs], December 2014. URL http://arxiv.org/abs/1412.6550.
Saxe, Andrew M., McClelland, James L., and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120 [cond-mat, q-bio, stat], December 2013. URL http://arxiv.org/abs/1312.6120. | {
"id": "1502.01852"
} |
1504.00702 | End-to-End Training of Deep Visuomotor Policies | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods. | http://arxiv.org/pdf/1504.00702 | Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | cs.LG, cs.CV, cs.RO | updating with revisions for JMLR final version | null | cs.LG | 20150402 | 20160419 |
Journal of Machine Learning Research 17 (2016) 1-40
Submitted 10/15; Published 4/16
# End-to-End Training of Deep Visuomotor Policies
Sergey Levine† Chelsea Finn† Trevor Darrell Pieter Abbeel Division of Computer Science University of California Berkeley, CA 94720-1776, USA † These authors contributed equally.
svlevine@eecs.berkeley.edu cbfinn@eecs.berkeley.edu trevor@eecs.berkeley.edu pabbeel@eecs.berkeley.edu
Editor: Jan Peters
# Abstract
Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods. Keywords: Reinforcement Learning, Optimal Control, Vision, Neural Networks
# 1. Introduction
Robots can perform impressive tasks under human control, including surgery (Lanfranco et al., 2004) and household chores (Wyrobek et al., 2008). However, designing the perception and control software for autonomous operation remains a major challenge, even for basic tasks. Policy search methods hold the promise of allowing robots to automatically learn new behaviors through experience (Kober et al., 2010b; Deisenroth et al., 2011; Kalakrishnan et al., 2011; Deisenroth et al., 2013). However, policies learned using such methods often rely on a number of hand-engineered components for perception and control, so as to present the policy with a more manageable and low-dimensional representation of observations and actions. The vision system in particular can be complex and prone to errors, and it is typically not improved during policy training, nor adapted to the goal of the task.
In this article, we aim to answer the following question: can we acquire more effective policies for sensorimotor control if the perception system is trained jointly with the control policy, rather than separately? In order to represent a policy that performs both
©2016 Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel.
[Figure 1 panels: hanger, cube, hammer, bottle.]
Figure 1: Our method learns visuomotor policies that directly use camera image observations (left) to set motor torques on a PR2 robot (right).
perception and control, we use deep neural networks. Deep neural network representations have recently seen widespread success in a variety of domains, such as computer vision and speech recognition, and even playing video games. However, using deep neural networks for real-world sensorimotor policies, such as robotic controllers that map image pixels and joint angles to motor torques, presents a number of unique challenges. Successful applications of deep neural networks typically rely on large amounts of data and direct supervision of the output, neither of which is available in robotic control. Real-world robot interaction data is scarce, and task completion is defined at a high level by means of a cost function, which means that the learning algorithm must determine on its own which action to take at each point. From the control perspective, a further complication is that observations from the robot's sensors do not provide us with the full state of the system. Instead, important state information, such as the positions of task-relevant objects, must be inferred from inputs such as camera images.

We address these challenges by developing a guided policy search algorithm for sensorimotor deep learning, as well as a novel CNN architecture designed for robotic control. Guided policy search converts policy search into supervised learning, by iteratively constructing the training data using an efficient model-free trajectory optimization procedure. We show that this can be formalized as an instance of Bregman ADMM (BADMM) (Wang and Banerjee, 2014), which can be used to show that the algorithm converges to a locally optimal solution. In our method, the full state of the system is observable at training time, but not at test time. For most tasks, providing the full state simply requires positioning objects in one of several known positions for each trial during training. At test time, the learned CNN policy can handle novel, unknown configurations, and no longer requires full state information. Since the policy is optimized with supervised learning, we can use standard methods like stochastic gradient descent for training. Our CNNs have 92,000 parameters and 7 layers, including a novel spatial feature point transformation that provides accurate spatial reasoning and reduces overfitting. This allows us to train our policies with relatively modest amounts of data and only tens of minutes of real-world interaction time. We evaluate our method by learning policies for inserting a block into a shape sorting cube, screwing a cap onto a bottle, fitting the claw of a toy hammer under a nail with various grasps, and placing a coat hanger on a rack with a PR2 robot (see Figure 1). These tasks require localization, visual tracking, and handling complex contact dynamics. Our results demonstrate improvements in consistency and generalization from training visuomotor policies end-to-end, when compared to training the vision and control components separately. We also present simulated comparisons that show that guided policy search outperforms a
number of prior methods when training high-dimensional neural network policies. Some of the material in this article has previously appeared in two conference papers (Levine and Abbeel, 2014; Levine et al., 2015), which we extend to introduce visual input into the policy.
# 2. Related Work
Reinforcement learning and policy search methods (Gullapalli, 1990; Williams, 1992) have been applied in robotics for playing games such as table tennis (Kober et al., 2010b), object manipulation (Gullapalli, 1995; Peters and Schaal, 2008; Kober et al., 2010a; Deisenroth et al., 2011; Kalakrishnan et al., 2011), locomotion (Benbrahim and Franklin, 1997; Kohl and Stone, 2004; Tedrake et al., 2004; Geng et al., 2006; Endo et al., 2008), and flight (Ng et al., 2004). Several recent papers provide surveys of policy search in robotics (Deisenroth et al., 2013; Kober et al., 2013). Such methods are typically applied to one component of the robot control pipeline, which often sits on top of a hand-designed controller, such as a PD controller, and accepts processed input, for example from an existing vision pipeline (Kalakrishnan et al., 2011). Our method learns policies that map visual input and joint encoder readings directly to the torques at the robot's joints. By learning the entire mapping from perception to control, the perception layers can be adapted to optimize task performance, and the motor control layers can be adapted to imperfect perception.

We represent our policies with convolutional neural networks (CNNs). CNNs have a long history in computer vision and deep learning (Fukushima, 1980; LeCun et al., 1989; Schmidhuber, 2015), and have recently gained prominence due to excellent results on a number of vision benchmarks (Ciresan et al., 2011; Krizhevsky et al., 2012; Ciresan et al., 2012; Girshick et al., 2014a; Tompson et al., 2014; LeCun et al., 2015; He et al., 2015). Most applications of CNNs focus on classification, where locational information is discarded by means of successive pooling layers to provide for invariance (Lee et al., 2009). Applications to localization typically either use a sliding window (Girshick et al., 2014a) or object proposals (Endres and Hoiem, 2010; Uijlings et al., 2013; Girshick et al., 2014b) to localize the object, reducing the task to classification, perform regression to a heatmap of manually labeled keypoints (Tompson et al., 2014), requiring precise knowledge of the object position in the image and camera calibration, or use 3D models to localize previously scanned objects (Pepik et al., 2012; Savarese and Fei-Fei, 2007). Many prior robotic applications of CNNs do not directly consider control, but employ CNNs for the perception component of a larger robotic system (Hadsell et al., 2009; Sung et al., 2015; Lenz et al., 2015b; Pinto and Gupta, 2015). We use a novel CNN architecture for our policies that automatically learns feature points that capture spatial information about the scene, without any supervision beyond the information from the robot's encoders and camera.

Applications of deep learning in robotic control have been less prevalent in recent years than in visual recognition. Backpropagation through the dynamics and the image formation process is typically impractical, since they are often non-differentiable, and such long-range backpropagation can lead to extreme numerical instability, since the linearization of a suboptimal policy is likely to be unstable. This issue has also been observed in the related context of recurrent neural networks (Hochreiter et al., 2001; Pascanu and Bengio, 2012). The high dimensionality of the network also makes reinforcement learning difficult (Deisenroth et al., 2013). Pioneering early work on neural network control used
small, simple networks (Pomerleau, 1989; Hunt et al., 1992; Bekey and Goldberg, 1992; Lewis et al., 1998; Bakker et al., 2003; Mayer et al., 2006), and has largely been supplanted by methods that use carefully designed policies that can be learned efficiently with reinforcement learning (Kober et al., 2013). More recent work on sensorimotor deep learning has tackled simple task-space motions (Lenz et al., 2015a; Lampe and Riedmiller, 2013) and used unsupervised learning to obtain low-dimensional state spaces from images (Lange et al., 2012). Such methods have been demonstrated on tasks with a low-dimensional underlying structure: Lenz et al. (2015a) controls the end-effector in 2D space, while Lange et al. (2012) controls a 2-dimensional slot car with 1-dimensional actions. Our experiments include full torque control of 7-DoF robotic arms interacting with objects, with 30-40 state dimensions. In simple synthetic environments, control from images has been addressed with image features (Jodogne and Piater, 2007), nonparametric methods (van Hoof et al., 2015), and unsupervised state-space learning (Böhmer et al., 2013; Jonschkowski and Brock, 2014). CNNs have also been trained to play video games with Q-learning, Monte Carlo tree search, and stochastic search (Mnih et al., 2013; Koutník et al., 2013; Guo et al., 2014), and have been applied to simple simulated control tasks (Watter et al., 2015; Lillicrap et al., 2015). However, such methods have only been demonstrated on synthetic domains that lack the visual complexity of the real world, and require an impractical number of samples for real-world robotic learning. Our method is sample efficient, requiring only minutes of interaction time. To the best of our knowledge, this is the first method that can train deep visuomotor policies for complex, high-dimensional manipulation skills with direct torque control.

Learning visuomotor policies on a real robot requires handling complex observations and high-dimensional policy representations. We tackle these challenges using guided policy search. In guided policy search, the policy is optimized using supervised learning, which scales gracefully with the dimensionality of the policy. The training set for supervised learning can be constructed using trajectory optimization under known dynamics (Levine and Koltun, 2013a,b, 2014; Mordatch and Todorov, 2014) and trajectory-centric reinforcement learning methods that operate under unknown dynamics (Levine and Abbeel, 2014; Levine et al., 2015), which is the approach taken in this work. In both cases, the supervision is adapted to the policy, to ensure that the final policy can reproduce the training data. The use of supervised learning in the inner loop of iterative policy search has also been proposed in the context of imitation learning (Ross et al., 2011, 2013). However, such methods typically do not address the question of how the supervision should be adapted to the policy.

The goal of our approach is also similar to visual servoing, which performs feedback control on feature points in a camera image (Espiau et al., 1992; Mohta et al., 2014; Wilson et al., 1996). However, our visuomotor policies are entirely learned from real-world data, and do not require feature points or feedback controllers to be specified by hand. This allows our method much more flexibility in choosing how to use the visual signal. Our approach also does not require any sort of camera calibration, in contrast to many visual servoing methods (though not all; see e.g. Jägersand et al. (1997); Yoshimi and Allen (1994)).
In this section, we define the visuomotor policy learning problem and present an overview of our approach. The core component of our approach is a guided policy search algorithm
that separates the problem of learning visuomotor policies into separate supervised learning and trajectory learning phases, each of which is easier than optimizing the policy directly. We also discuss a policy architecture suitable for end-to-end learning of vision and control, and a training setup that allows our method to be applied to real robotic platforms.
# 3.1 Definitions and Problem Formulation
In policy search, the goal is to learn a policy π_θ(u_t | o_t) that allows an agent to choose actions u_t in response to observations o_t to control a dynamical system, such as a robot. The policy comes from some parametric class parameterized by θ, which could be, for example, the weights of a neural network. The system is defined by states x_t, actions u_t, and observations o_t. For example, x_t might include the joint angles of the robot, the positions of objects in the world, and their time derivatives, u_t might consist of motor torque commands, and o_t might include an image from the robot's onboard camera. In this paper, we address finite-horizon episodic tasks with t ∈ [1, ..., T]. The states evolve in time according to the system dynamics p(x_{t+1} | x_t, u_t), and the observations are, in general, a stochastic consequence of the states, according to p(o_t | x_t). Neither the dynamics nor the observation distribution are assumed to be known in general. For notational convenience, we will use π_θ(u_t | x_t) to denote the distribution over actions under the policy conditioned on the state. However, since the policy is conditioned on the observation o_t, this distribution is in fact given by π_θ(u_t | x_t) = ∫ π_θ(u_t | o_t) p(o_t | x_t) do_t. The dynamics and π_θ(u_t | x_t) together induce a distribution over trajectories τ = {x_1, u_1, x_2, u_2, ..., x_T, u_T}:
$$
\pi_\theta(\tau) = p(\mathbf{x}_1) \prod_{t=1}^{T} \pi_\theta(\mathbf{u}_t \mid \mathbf{x}_t)\, p(\mathbf{x}_{t+1} \mid \mathbf{x}_t, \mathbf{u}_t).
$$
The goal of a task is given by a cost function ℓ(x_t, u_t), and the objective in policy search is to minimize the expectation E_{π_θ(τ)}[Σ_{t=1}^{T} ℓ(x_t, u_t)], which we will abbreviate as E_{π_θ(τ)}[ℓ(τ)]. A summary of the notation used in the paper is provided in Table 1.
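As a sketch, this expectation is estimated in practice from sampled rollouts; the data layout below is illustrative:

```python
import numpy as np

def estimate_expected_cost(rollouts, cost_fn):
    """Monte Carlo estimate of E_{pi_theta(tau)}[l(tau)]: average the
    summed per-step cost l(x_t, u_t) over sampled trajectories.
    Each rollout is a pair (states, actions) of length-T sequences."""
    totals = [sum(cost_fn(x, u) for x, u in zip(xs, us))
              for xs, us in rollouts]
    return float(np.mean(totals))

# Toy example: quadratic cost on a 2D state and 1D action.
cost = lambda x, u: float(x @ x + 0.1 * u @ u)
rollouts = [(np.zeros((10, 2)), np.ones((10, 1)))]
print(estimate_expected_cost(rollouts, cost))  # 10 steps * 0.1 = 1.0
```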
# 3.2 Approach Summary
Our method consists of two main components, which are illustrated in Figure 3. The first is a supervised learning algorithm that trains policies of the form π_θ(u_t | o_t) = N(μ^π(o_t), Σ^π(o_t)), where both μ^π(o_t) and Σ^π(o_t) are general nonlinear functions. In our implementation, μ^π(o_t) is a deep convolutional neural network, while Σ^π(o_t) is an observation-independent learned covariance, though other representations are possible. The second component is a trajectory-centric reinforcement learning (RL) algorithm that generates guiding distributions p_i(u_t | x_t) that provide the supervision used to train the policy. These two components form a policy search algorithm that can be used to learn complex robotic tasks using only a high-level cost function ℓ(x_t, u_t). During training, only samples from the guiding distributions p_i(u_t | x_t) are generated by running rollouts on the physical system, which avoids the need to execute partially trained neural network policies on physical hardware.
Supervised learning will not, in general, produce a policy with good long-horizon performance, since a small mistake on the part of the policy will place the system into states that are outside the distribution in the training data, causing compounding errors.
Symbol, definition, and example/details:

- x_t: Markovian system state at time step t ∈ [1, T] (e.g., joint angles, end-effector pose, object positions, and their velocities; dimensionality: 14 to 32)
- u_t: control or action at time step t ∈ [1, T] (e.g., joint motor torque commands; dimensionality: 7 for the PR2 robot)
- o_t: observation at time step t ∈ [1, T] (e.g., RGB camera image, joint encoder readings & velocities, end-effector pose; dimensionality: around 200,000)
- τ: trajectory, notational shorthand for a sequence of states and actions τ = {x_1, u_1, x_2, u_2, ..., x_T, u_T}
- ℓ(x_t, u_t): cost function that defines the goal of the task (e.g., distance between an object in the gripper and the target)
- p(x_{t+1} | x_t, u_t): unknown system dynamics (the physics that govern the robot and any objects it interacts with)
- p(o_t | x_t): unknown observation distribution (the stochastic process that produces camera images from the system state)
- π_θ(u_t | o_t): learned nonlinear global policy parameterized by weights θ (a convolutional neural network, such as the one in Figure 2)
- π_θ(u_t | x_t) = ∫ π_θ(u_t | o_t) p(o_t | x_t) do_t: notational shorthand for the observation-based policy conditioned on the state
- p_i(u_t | x_t): learned local time-varying linear-Gaussian controller for initial state x_1^i; has the form N(K_t x_t + k_t, C_t)
- π_θ(τ): trajectory distribution induced by the policy, p(x_1) ∏_{t=1}^{T} π_θ(u_t | x_t) p(x_{t+1} | x_t, u_t)
Table 1: Summary of the notation frequently used in this article.
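For concreteness, a small sketch of sampling from the local time-varying linear-Gaussian controller in Table 1; the gains K_t, k_t and covariance C_t below are placeholder values, whereas in the method they come from the trajectory-centric RL algorithm:

```python
import numpy as np

def sample_action(K_t, k_t, C_t, x_t, rng):
    """Draw u_t ~ N(K_t x_t + k_t, C_t) from p_i(u_t | x_t)."""
    return rng.multivariate_normal(K_t @ x_t + k_t, C_t)

dX, dU = 32, 7                  # state / torque dimensions from Table 1
rng = np.random.default_rng(0)
K_t = np.zeros((dU, dX))        # feedback gains (placeholder)
k_t = np.zeros(dU)              # feedforward term (placeholder)
C_t = 0.01 * np.eye(dU)         # exploration covariance (placeholder)
u_t = sample_action(K_t, k_t, C_t, np.zeros(dX), rng)
```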
To avoid this issue, the training data must come from the policy's own state distribution (Ross et al., 2011). We achieve this by alternating between trajectory-centric RL and supervised learning. The RL stage adapts to the current policy π_θ(u_t | o_t), providing supervision at states that are iteratively brought closer to the states visited by the policy. This is formalized as a variant of the BADMM algorithm (Wang and Banerjee, 2014) for constrained optimization, which can be used to show that, at convergence, the policy π_θ(u_t | o_t) and the guiding distributions p_i(u_t | x_t) will exhibit the same behavior. This algorithm is derived in Section 4. The guiding distributions are substantially easier to optimize than learning the policy parameters directly (e.g., using model-free reinforcement learning), because they use the full state of the system x_t, while the policy π_θ(u_t | o_t) only uses the observations. This means that the method requires the full state to be known during training, but not at test time. This makes it possible to efficiently learn complex visuomotor policies, but imposes additional assumptions on the observability of x_t during training that we discuss in Section 4.

When learning visuomotor tasks, the policy π_θ(u_t | o_t) is represented by a novel convolutional neural network (CNN) architecture, which we describe in Section 5.2. CNNs have enjoyed considerable success in computer vision (LeCun et al., 2015), but the most popular architectures rely on large datasets and focus on semantic tasks such as classification, often intentionally discarding spatial information.
[Figure 2 diagram: RGB image → 7×7 conv (stride 2, ReLU) → two further conv layers (ReLU) → spatial softmax → expected 2D positions (feature points) → concatenation with the robot configuration → three fully connected layers (ReLU, ReLU, linear) → motor torques.]
Figure 2: Visuomotor policy architecture. The network contains three convolutional layers, followed by a spatial softmax and an expected position layer that converts pixel-wise features to feature points, which are better suited for spatial computations. The points are concatenated with the robot configuration, then passed through three fully connected layers to produce the torques.
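A minimal numpy sketch of the spatial softmax and expected position computation in this architecture (the feature-map size and the [-1, 1] pixel coordinate convention are illustrative):

```python
import numpy as np

def spatial_softmax_points(feature_maps):
    """Turn conv feature maps (C, H, W) into C feature points: the
    softmax-weighted expected (x, y) position of each channel."""
    C, H, W = feature_maps.shape
    flat = feature_maps.reshape(C, -1)
    flat = flat - flat.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)
    probs = probs.reshape(C, H, W)
    ys, xs = np.meshgrid(np.linspace(-1, 1, H),
                         np.linspace(-1, 1, W), indexing="ij")
    fx = (probs * xs).sum(axis=(1, 2))                 # expected x per map
    fy = (probs * ys).sum(axis=(1, 2))                 # expected y per map
    return np.stack([fx, fy], axis=1)                  # (C, 2) points

points = spatial_softmax_points(np.random.rand(32, 109, 109))
```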
Our architecture, illustrated in Figure 2, uses a fixed transformation from the last convolutional layer to a set of spatial feature points, which form a concise representation of the visual scene suitable for feedback control. Our network has 7 layers and around 92,000 parameters, which presents a major challenge for standard policy search methods (Deisenroth et al., 2013). To reduce the amount of experience needed to train visuomotor policies, we also introduce a pretraining scheme that allows us to train effective policies with a relatively small number of iterations. The pretraining steps are illustrated in Figure 3. The intuition behind our pretraining is that, although we ultimately seek to obtain sensorimotor policies that combine both vision and control, low-level aspects of vision can be initialized independently. To that end, we pretrain the convolutional layers of our network by predicting elements of x_t that are not provided in the observation o_t, such as the positions of objects in the scene. We also initially train the guiding trajectory distributions p_i(u_t | x_t) independently of the convolutional network until the trajectories achieve a basic level of competence at the task, and then switch to full guided policy search with end-to-end training of π_θ(u_t | o_t). In our implementation, we also initialize the first layer filters from the model of Szegedy et al. (2014), which is trained on ImageNet (Deng et al., 2009) classification. The initialization and pretraining scheme is described in Section 5.2.
# 4. Guided Policy Search with BADMM
Guided policy search transforms policy search into a supervised learning problem, where the training set is generated by a simple trajectory-centric RL algorithm. This algorithm
optimizes linear-Gaussian controllers p_i(u_t | x_t), and is described in Section 4.2. We refer to the trajectory distribution induced by p_i(u_t | x_t) as p_i(τ). Each p_i(u_t | x_t) succeeds from different initial states. For example, in the task of placing a cap on a bottle, these initial states correspond to different positions of the bottle. By training on trajectories for multiple bottle positions, the final CNN policy can succeed from all initial states, and can generalize to other states from the same distribution.
The final policy π_θ(u_t | o_t) learned with guided policy search is only provided with observations o_t of the full state x_t, and the dynamics are assumed to be unknown. A diagram of this method, which corresponds to an expanded version of the guided policy search box in Figure 3, is shown below. In the outer loop, we draw sample trajectories {τ_i^j} for each initial state on the physical system by running the corresponding controller p_i(u_t | x_t). The samples are used to fit the dynamics p_i(x_{t+1} | x_t, u_t) that are used to improve p_i(u_t | x_t), and serve as training data for the policy. The inner loop alternates between optimizing each p_i(τ) and optimizing the policy to match these trajectory distributions. The policy is trained to predict the actions along each trajectory from the observations o_t, rather than the full state x_t. This allows the policy to directly use raw observations at test time. This alternating optimization can be framed as an instance of the BADMM algorithm (Wang and Banerjee, 2014), which converges to a solution where the trajectory distributions and the policy have the same state distribution. This allows greedy supervised training of the policy to produce a policy with good long-horizon performance.
[Algorithm diagram. Outer loop: run each p_i(u_t | x_t) on the robot to collect samples {τ_i^j}. Inner loop: fit dynamics, optimize each p_i(τ) w.r.t. L_p, and optimize π_θ w.r.t. L_θ.]
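A high-level Python sketch of this alternation; every function below is an illustrative stand-in rather than the authors' API, with the real components (rollouts on the robot, dynamics fitting, the trajectory update for L_p, and supervised training of the CNN policy for L_θ) described in the following sections:

```python
# Illustrative stand-ins for the real components.
def run_on_robot(controller):              return []   # sample trajectories
def fit_dynamics(samples):                 return None
def improve_controller(ctrl, dynamics):    return ctrl
def supervised_policy_step(policy, data):  return policy
def update_multipliers(data):              return None

def guided_policy_search(controllers, policy, n_outer=10, n_inner=4):
    for _ in range(n_outer):                        # outer loop
        samples = [run_on_robot(c) for c in controllers]
        dynamics = [fit_dynamics(s) for s in samples]
        for _ in range(n_inner):                    # inner loop
            controllers = [improve_controller(c, d)
                           for c, d in zip(controllers, dynamics)]
            policy = supervised_policy_step(policy, (samples, controllers))
            update_multipliers((samples, policy, controllers))
    return policy
```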
# 4.1 Algorithm Derivation
Policy search methods minimize the expected cost E_{π_θ}[ℓ(τ)], where τ = {x_1, u_1, ..., x_T, u_T} is a trajectory, and ℓ(τ) = Σ_{t=1}^{T} ℓ(x_t, u_t) is the cost of an episode. In the fully observed case, the expectation is taken under π_θ(τ) = p(x_1) ∏_{t=1}^{T} π_θ(u_t | x_t) p(x_{t+1} | x_t, u_t). The final policy π_θ(u_t | o_t) is conditioned on the observations o_t, but π_θ(u_t | x_t) can be recovered as π_θ(u_t | x_t) = ∫ π_θ(u_t | o_t) p(o_t | x_t) do_t. We will present the derivation in this section for π_θ(u_t | x_t), but we do not require knowledge of p(o_t | x_t) in the final algorithm. As discussed in Section 4.3, the integral will be evaluated with samples from the real system, which include both x_t and o_t. We begin by rewriting the expected cost minimization as a constrained problem:
$$\min_{p,\pi_\theta} E_p[\ell(\tau)] \quad \text{s.t.} \quad p(\mathbf{u}_t|\mathbf{x}_t) = \pi_\theta(\mathbf{u}_t|\mathbf{x}_t) \;\;\forall\, \mathbf{x}_t, \mathbf{u}_t, t, \qquad (1)$$
where we will refer to p(τ) as a guiding distribution. This formulation is equivalent to the original problem, since the constraint forces the two distributions to be identical. However, if we approximate the initial state distribution p(x1) with samples x1^i, we can choose p(τ) to be a class of distributions that is much easier to optimize than πθ, as we will show later. This will allow us to use simple local learning methods for p(τ), without needing to train the complex neural network policy πθ(ut|ot) directly with reinforcement learning, which would require a prohibitive amount of experience on real physical systems.
The constrained problem can be solved by a dual descent method, which alternates between minimizing the Lagrangian with respect to the primal variables, and incrementing
the Lagrange multipliers by their subgradient. Minimization of the Lagrangian with respect to p(τ) and θ is done in alternating fashion: minimizing with respect to θ corresponds to supervised learning (making πθ match p(τ)), and minimizing with respect to p(τ) consists of one or more trajectory optimization problems. The dual descent method we use is based on BADMM (Wang and Banerjee, 2014), a variant of ADMM (Boyd et al., 2011) that augments the Lagrangian with a Bregman divergence between the constrained variables. We use the KL-divergence as the Bregman constraint, which is particularly convenient for working with probability distributions. We will also modify the constraint p(ut|xt) = πθ(ut|xt) by multiplying both sides by p(xt), to get p(ut|xt)p(xt) = πθ(ut|xt)p(xt). This constraint is equivalent, but has the convenient property that we can express the Lagrangian in terms of expectations. The BADMM augmented Lagrangians for θ and p are therefore given by
$$\mathcal{L}_\theta(\theta,p) = \sum_{t=1}^{T} E_{p(\mathbf{x}_t,\mathbf{u}_t)}[\ell(\mathbf{x}_t,\mathbf{u}_t)] + E_{p(\mathbf{x}_t)\pi_\theta(\mathbf{u}_t|\mathbf{x}_t)}[\lambda_{\mathbf{x}_t,\mathbf{u}_t}] - E_{p(\mathbf{x}_t,\mathbf{u}_t)}[\lambda_{\mathbf{x}_t,\mathbf{u}_t}] + \nu_t \phi_t^\theta(\theta,p)$$

$$\mathcal{L}_p(p,\theta) = \sum_{t=1}^{T} E_{p(\mathbf{x}_t,\mathbf{u}_t)}[\ell(\mathbf{x}_t,\mathbf{u}_t)] + E_{p(\mathbf{x}_t)\pi_\theta(\mathbf{u}_t|\mathbf{x}_t)}[\lambda_{\mathbf{x}_t,\mathbf{u}_t}] - E_{p(\mathbf{x}_t,\mathbf{u}_t)}[\lambda_{\mathbf{x}_t,\mathbf{u}_t}] + \nu_t \phi_t^p(p,\theta),$$

where λ_{xt,ut} is the Lagrange multiplier for state xt and action ut at time t, and φ_t^θ(θ, p) and φ_t^p(p, θ) are expectations of the KL-divergences:

$$\phi_t^p(p,\theta) = E_{p(\mathbf{x}_t)}[D_{\mathrm{KL}}(p(\mathbf{u}_t|\mathbf{x}_t) \,\|\, \pi_\theta(\mathbf{u}_t|\mathbf{x}_t))] \qquad \phi_t^\theta(\theta,p) = E_{p(\mathbf{x}_t)}[D_{\mathrm{KL}}(\pi_\theta(\mathbf{u}_t|\mathbf{x}_t) \,\|\, p(\mathbf{u}_t|\mathbf{x}_t))].$$
Dual descent with alternating primal minimization is then described by the following steps:
$$\theta \leftarrow \arg\min_\theta \sum_{t=1}^{T} E_{p(\mathbf{x}_t)\pi_\theta(\mathbf{u}_t|\mathbf{x}_t)}[\lambda_{\mathbf{x}_t,\mathbf{u}_t}] + \nu_t \phi_t^\theta(\theta,p)$$

$$p \leftarrow \arg\min_p \sum_{t=1}^{T} E_{p(\mathbf{x}_t,\mathbf{u}_t)}[\ell(\mathbf{x}_t,\mathbf{u}_t) - \lambda_{\mathbf{x}_t,\mathbf{u}_t}] + \nu_t \phi_t^p(p,\theta)$$

$$\lambda_{\mathbf{x}_t,\mathbf{u}_t} \leftarrow \lambda_{\mathbf{x}_t,\mathbf{u}_t} + \alpha\nu_t \big(\pi_\theta(\mathbf{u}_t|\mathbf{x}_t)p(\mathbf{x}_t) - p(\mathbf{u}_t|\mathbf{x}_t)p(\mathbf{x}_t)\big).$$
This procedure is an instance of BADMM, and therefore inherits its convergence guarantees. Note that we drop terms that are independent of the optimization variables on each line. The parameter α is a step size. As with most augmented Lagrangian methods, the weight νt is set heuristically, as described in Appendix A.1.
The dynamics only affect the optimization with respect to p(τ). In order to make this optimization efficient, we choose p(τ) to be a mixture of N Gaussians pi(τ), one for each initial state sample x1^i. This makes the action conditionals pi(ut|xt) and the dynamics pi(xt+1|xt, ut) linear-Gaussian, as discussed in Section 4.2. This is a reasonable choice when the system is deterministic, or the noise is Gaussian or small, and we found that this approach is sufficiently tolerant to noise for use on real physical systems. Our choice of p also assumes that the policy πθ(ut|ot) is conditionally Gaussian. This is also reasonable, since the mean and covariance of πθ(ut|ot) can be any nonlinear function of the observations
ot, which themselves are a function of the unobserved state xt. In Section 4.2, we show how these assumptions enable each pi(τ) to be optimized very efficiently. We will refer to pi(τ) as guiding distributions, since they serve to focus the policy on good, low-cost behaviors.
Aside from learning pi(τ), we must choose a tractable way to represent the infinite set of constraints p(ut|xt)p(xt) = πθ(ut|xt)p(xt). One approximate approach proposed in prior work is to replace the exact constraints with expectations of features (Peters et al., 2010). When the features consist of linear, quadratic, or higher order monomial functions of the random variable, this can be viewed as a constraint on the moments of the distributions. If we only use the first moment, we get a constraint on the expected action: E_{p(ut|xt)p(xt)}[ut] = E_{πθ(ut|xt)p(xt)}[ut]. If the stochasticity in the dynamics is low, as we assumed previously, the optimal solution for each pi(τ) will have low entropy, making this first moment constraint a reasonable approximation. The KL-divergence terms in the augmented Lagrangians will still serve to softly enforce agreement between the higher moments. While this simplification is quite drastic, we found that it was more stable in practice than including higher moments, likely because these higher moments are harder to estimate accurately with a limited number of samples. The alternating optimization is now given by
$$\theta \leftarrow \arg\min_\theta \sum_{t=1}^{T} E_{p(\mathbf{x}_t)\pi_\theta(\mathbf{u}_t|\mathbf{x}_t)}[\mathbf{u}_t^T \lambda_{\mu t}] + \nu_t \phi_t^\theta(\theta,p) \qquad (2)$$

$$p \leftarrow \arg\min_p \sum_{t=1}^{T} E_{p(\mathbf{x}_t,\mathbf{u}_t)}[\ell(\mathbf{x}_t,\mathbf{u}_t) - \mathbf{u}_t^T \lambda_{\mu t}] + \nu_t \phi_t^p(p,\theta) \qquad (3)$$

$$\lambda_{\mu t} \leftarrow \lambda_{\mu t} + \alpha\nu_t \big(E_{\pi_\theta(\mathbf{u}_t|\mathbf{x}_t)p(\mathbf{x}_t)}[\mathbf{u}_t] - E_{p(\mathbf{u}_t|\mathbf{x}_t)p(\mathbf{x}_t)}[\mathbf{u}_t]\big),$$
where λµt is the Lagrange multiplier on the expected action at time t. In the rest of the paper, we will use Lθ(θ, p) and Lp(p, θ) to denote the two augmented Lagrangians in Equations (2) and (3), respectively. In the next two sections, we will describe how Lp(p, θ) can be optimized with respect to p under unknown dynamics, and how Lθ(θ, p) can be optimized for complex, high-dimensional policies. Implementation details of the BADMM optimization are presented in Appendix A.1.
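To make the order of operations concrete, the following Python sketch performs one iteration of the alternation in Equations (2) and (3), followed by the dual update. The four callables are hypothetical stand-ins for the optimizations described in Sections 4.2 and 4.3; only the update order and the subgradient step are spelled out here:

```python
import numpy as np

def badmm_gps_iteration(theta_step, p_step, policy_mean, traj_mean,
                        lam, nu, alpha=0.1):
    """One dual descent iteration of the alternation described above.

    theta_step(lam, nu): supervised policy optimization (the theta-step)
    p_step(lam, nu):     trajectory optimizations (the p-step)
    policy_mean(), traj_mean(): (T, du) arrays of E[u_t] under pi_theta
    and under p, estimated at the sampled states.
    lam: (T, du) multipliers on the expected action; nu: (T,) KL weights.
    All four callables are hypothetical stand-ins, not part of the
    authors' implementation; this is a sketch of the update order.
    """
    theta_step(lam, nu)    # make pi_theta match the trajectories
    p_step(lam, nu)        # re-optimize each local controller
    # subgradient step on the first-moment (expected action) constraint
    lam = lam + alpha * nu[:, None] * (policy_mean() - traj_mean())
    return lam
```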
# 4.2 Trajectory Optimization under Unknown Dynamics
Since the Lagrangian Lp(p, θ) in the previous section factorizes over the mixture elements in p(τ) = Σ_i pi(τ), we describe the trajectory optimization method for a single Gaussian pi(τ). When there are multiple mixture elements, this procedure is applied in parallel to each pi(τ). Since pi(τ) is Gaussian, the conditionals p(xt+1|xt, ut) and p(ut|xt), which correspond to the dynamics and the controller, are time-varying linear-Gaussian, and given by
$$p(\mathbf{u}_t|\mathbf{x}_t) = \mathcal{N}(\mathbf{K}_t\mathbf{x}_t + \mathbf{k}_t, \mathbf{C}_t) \qquad p(\mathbf{x}_{t+1}|\mathbf{x}_t,\mathbf{u}_t) = \mathcal{N}(f_{\mathbf{x}t}\mathbf{x}_t + f_{\mathbf{u}t}\mathbf{u}_t + f_{ct}, \mathbf{F}_t).$$
This type of controller can be learned efficiently with a small number of real-world samples, making it a good choice for optimizing the guiding distributions. Since a different set of time-varying linear-Gaussian dynamics is fitted for each initial state, this dynamics representation can model any continuous deterministic system that can be locally linearized. Stochastic dynamics can violate the local linearity assumption in principle, but we found that in practice this representation was well suited for a wide variety of noisy real-world tasks.
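As an illustration of how cheaply such a representation can be obtained, the sketch below fits the time-varying linear-Gaussian dynamics at each step by ordinary least squares on sampled transitions. It is a minimal version that omits the Gaussian mixture model prior of Appendix A.3:

```python
import numpy as np

def fit_linear_gaussian_dynamics(X, U, X_next):
    """Fit x_{t+1} ~ N(fx x_t + fu u_t + fc, F_t) at each time step.

    X, U, X_next: arrays of shape (N, T, dx), (N, T, du), (N, T, dx)
    holding N sampled trajectories. Returns a list of (fx, fu, fc, F)
    tuples, one per time step. A minimal sketch without the GMM prior.
    """
    N, T, dx = X.shape
    du = U.shape[2]
    params = []
    for t in range(T):
        # regress the next state on [x_t, u_t, 1] by least squares
        Z = np.concatenate([X[:, t], U[:, t], np.ones((N, 1))], axis=1)
        W, _, _, _ = np.linalg.lstsq(Z, X_next[:, t], rcond=None)
        fx, fu, fc = W[:dx].T, W[dx:dx + du].T, W[-1]
        resid = X_next[:, t] - Z @ W
        F = resid.T @ resid / max(N - 1, 1)   # residual covariance
        params.append((fx, fu, fc, F))
    return params
```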
The dynamics are determined by the environment. If they are known, p(ut|xt) can be optimized with a variant of the iterative linear-quadratic-Gaussian regulator (iLQG) (Li and Todorov, 2004; Levine and Koltun, 2013a), which is a variant of DDP (Jacobson and Mayne, 1970). In the case of unknown dynamics, we can fit p(xt+1|xt, ut) to sample trajectories drawn from the trajectory distribution at the previous iteration, denoted p̂(τ). If p(τ) is too different from p̂(τ), these samples will not give a good estimate of p(xt+1|xt, ut), and the optimization will diverge. To avoid this, we can bound the change from p̂(τ) to p(τ) in terms of their KL-divergence by a step size ε, producing the following constrained problem:
$$\min_{p} \mathcal{L}_p(p,\theta) \quad \text{s.t.} \quad D_{\mathrm{KL}}(p(\tau)\,\|\,\hat{p}(\tau)) \leq \epsilon.$$
This type of policy update has previously been proposed by several authors in the context of policy search (Bagnell and Schneider, 2003; Peters and Schaal, 2008; Peters et al., 2010; Levine and Abbeel, 2014). In the case when p(τ) is Gaussian, this problem can be solved efficiently using dual gradient descent, while the dynamics p(xt+1|xt, ut) are fitted to samples gathered by running the previous controller p̂(ut|xt) on the robot. Fitting a global Gaussian mixture model to tuples (xt, ut, xt+1) and using it as a prior for fitting the dynamics p(xt+1|xt, ut) serves to greatly reduce the sample complexity. We describe the dynamics fitting procedure in detail in Appendix A.3.
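For intuition, the KL step-size constraint can be enforced by a simple dual search on the penalty multiplier: solve the penalized problem, check the resulting KL-divergence, and adjust the multiplier. The sketch below assumes hypothetical callables `solve_lqr` and `kl_divergence` standing in for the LQR machinery of Appendix A.4, and uses geometric-mean bracketing, which is illustrative rather than the exact procedure:

```python
def kl_constrained_update(solve_lqr, kl_divergence, epsilon,
                          eta_min=1e-4, eta_max=1e4, iters=20):
    """Dual search on the KL step-size constraint, sketched via bracketing.

    solve_lqr(eta): returns the controller minimizing cost + eta * KL term
    kl_divergence(controller): returns D_KL(p(tau) || p_hat(tau))
    Both callables are assumptions; this is not the authors' exact code.
    """
    controller = None
    for _ in range(iters):
        eta = (eta_min * eta_max) ** 0.5   # geometric-mean bracketing
        controller = solve_lqr(eta)
        kl = kl_divergence(controller)
        if abs(kl - epsilon) < 0.1 * epsilon:
            break
        if kl > epsilon:
            eta_min = eta    # KL too large: need a stronger penalty
        else:
            eta_max = eta    # KL below the bound: penalty can relax
    return controller
```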
Note that the trajectory optimization cost function Lp(p, θ) also depends on the policy πθ(ut|xt), while we only have access to πθ(ut|ot). In order to compute a local quadratic expansion of the KL-divergence term D_KL(p(ut|xt) ∥ πθ(ut|xt)) inside Lp(p, θ) for iLQG, we also estimate a linearization of the mean of the conditionally Gaussian policy πθ(ut|ot) with respect to the state xt, using the same procedure that we use to linearize the dynamics. The data for this estimation consists of tuples {x_t^i, E_{πθ(ut|o_t^i)}[ut]}, which we can obtain because both the states x_t^i and the observations o_t^i are available for all of the samples evaluated on the real physical system.
This constrained optimization is performed in the "inner loop" of the optimization described in the previous section, and the KL-divergence constraint D_KL(p(τ) ∥ p̂(τ)) ≤ ε imposes a step size on the trajectory update. The overall algorithm then becomes an instance of generalized BADMM (Wang and Banerjee, 2014). Note that the augmented Lagrangian Lp(p, θ) consists of an expectation under p(τ) of a quantity that is independent of p. We can locally approximate this quantity with a quadratic by using a quadratic expansion of ℓ(xt, ut), and fitting a linear-Gaussian to πθ(ut|xt) with the same method we used for the dynamics. We can then solve the primal optimization in the dual gradient descent procedure with a standard LQR backward pass. This is significantly simpler and much faster than the forward-backward dynamic programming procedure employed in previous work (Levine and Abbeel, 2014; Levine and Koltun, 2014). This improvement is enabled by the use of BADMM, which allows us to always formulate the KL-divergence term in the Lagrangian with the distribution being optimized as the first argument. Since the KL-divergence is convex in its first argument, this makes the corresponding optimization significantly easier. The details of this LQR-based dual gradient descent algorithm are derived in Appendix A.4. We can further improve the efficiency of the method by allowing samples from multiple trajectories pi(τ) to be used to fit a shared dynamics p(xt+1|xt, ut), while the controllers pi(ut|xt) are allowed to vary. This makes sense when the initial states of these trajectories
are similar, and they therefore visit similar regions. This allows us to draw just a single sample from each pi(τ) at each iteration, allowing us to handle many more initial states.
# 4.3 Supervised Policy Optimization
Since the policy parameters θ participate only in the constraints of the optimization problem in Equation (1), optimizing the policy corresponds to minimizing the KL-divergence between the policy and trajectory distribution, as well as the expectation of λ_{μt}^T ut. For a conditional Gaussian policy of the form πθ(ut|ot) = N(μ^π(ot), Σ^π(ot)), the objective is
$$\mathcal{L}_\theta(\theta,p) = \frac{1}{2N}\sum_{i=1}^{N}\sum_{t=1}^{T} E_{p_i(\mathbf{x}_t,\mathbf{o}_t)}\Big[\mathrm{tr}[\mathbf{C}_{ti}^{-1}\Sigma^\pi(\mathbf{o}_t)] - \log|\Sigma^\pi(\mathbf{o}_t)| + (\mu^\pi(\mathbf{o}_t)-\mu_{ti}^p(\mathbf{x}_t))^T \mathbf{C}_{ti}^{-1} (\mu^\pi(\mathbf{o}_t)-\mu_{ti}^p(\mathbf{x}_t)) + 2\lambda_{\mu t}^T \mu^\pi(\mathbf{o}_t)\Big],$$
where μ_{ti}^p(xt) is the mean of pi(ut|xt) and C_{ti} is the covariance, and the expectation is evaluated using samples from each pi(τ) with corresponding observations ot. The observations are sampled from p(ot|xt) by recording camera images on the real system. Since the input to μ^π(ot) and Σ^π(ot) is not the state xt, but only an observation ot, we can train the policy to directly use raw observations. Note that Lθ(θ, p) is simply a weighted quadratic loss on the difference between the policy mean and the mean action of the trajectory distribution, offset by the Lagrange multiplier. The weighting is the precision matrix of the conditional in the trajectory distribution, which is equal to the curvature of its cost-to-go function (Levine and Koltun, 2013a). This has an intuitive interpretation: Lθ(θ, p) penalizes deviation from the trajectory distribution, with a penalty that is locally proportional to its cost-to-go. At convergence, when the policy πθ(ut|ot) takes the same actions as pi(ut|xt), their Q-functions are equal, and the supervised policy objective becomes equivalent to the policy iteration objective (Levine and Koltun, 2014).
In this work, we optimize Lθ(θ, p) with respect to θ using stochastic gradient descent (SGD), a standard method for neural network training. The covariance of the Gaussian policy does not depend on the observation in our implementation, though adding this dependence would be straightforward. Since training complex neural networks requires a substantial number of samples, we found it beneficial to include sampled observations from previous iterations into the policy optimization, evaluating the action μ_{ti}^p(xt) at their corresponding states using the current trajectory distributions. Since these samples come from the wrong state distribution, we use importance sampling and weight them according to the ratio of their probability under the current distribution p(xt) and the one they were sampled from, which is straightforward to evaluate under the estimated linear-Gaussian dynamics (Levine and Koltun, 2013b).
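In sketch form, the mean terms of this objective reduce to a precision-weighted quadratic on the action means plus the Lagrange multiplier offset. The minimal NumPy version below takes importance weights as a given input and omits the covariance terms:

```python
import numpy as np

def supervised_policy_loss(mu_pi, mu_p, prec, lam, iw):
    """Mean terms of the supervised objective L_theta from this section.

    mu_pi: (M, du) policy means at the sampled observations
    mu_p:  (M, du) trajectory-controller means at the matching states
    prec:  (M, du, du) precision matrices C_ti^{-1}
    lam:   (M, du) multipliers lambda_mu_t for each sample's time step
    iw:    (M,) importance weights for samples from earlier iterations
    A sketch of the mean terms only; the covariance terms are omitted.
    """
    diff = mu_pi - mu_p
    quad = np.einsum('mi,mij,mj->m', diff, prec, diff)   # Mahalanobis term
    offset = 2.0 * np.einsum('mi,mi->m', lam, mu_pi)     # multiplier offset
    return float(np.mean(iw * (quad + offset)) / 2.0)
```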
# 4.4 Comparison with Prior Guided Policy Search Methods
We presented a guided policy search method where the policy is trained on observations, while the trajectories are trained on the full state. The BADMM formulation of guided policy search is new to this work, though several prior guided policy search methods based on constrained optimization have been proposed. Levine and Koltun (2014) proposed a formulation similar to Equation (1), but with a constraint on the KL-divergence between
p(τ) and πθ. This results in a more complex, non-convex forward-backward trajectory optimization phase. Since the BADMM formulation solves a convex problem during the trajectory optimization phase, it is substantially faster and easier to implement and use, especially when the number of trajectories pi(τ) is large.
The use of ADMM for guided policy search was also proposed by Mordatch and Todorov (2014) for deterministic policies under known dynamics. This approach requires known, deterministic dynamics and trains deterministic policies. Furthermore, because this approach uses a simple quadratic augmented Lagrangian term, it further requires penalty terms on the gradient of the policy to account for local feedback. Our approach enforces this feedback behavior due to the higher moments included in the KL-divergence term, but does not require computing the second derivative of the policy.
# 5. End-to-End Visuomotor Policies
Guided policy search allows us to optimize complex, high-dimensional policies with raw observations, such as when the input to the policy consists of images from a robot's onboard camera. However, leveraging this capability to directly learn policies for visuomotor control requires designing a policy representation that is both data-efficient and capable of learning complex control strategies directly from raw visual inputs. In this section, we describe a deep convolutional neural network (CNN) model that is uniquely suited to this task. Our approach combines a novel spatial soft-argmax layer with a pretraining procedure that provides for flexibility and data-efficiency.
# 5.1 Visuomotor Policy Architecture
Our visuomotor policy runs at 20 Hz on the robot, mapping monocular RGB images and the robot configurations to joint torques on a 7 DoF arm. The configuration includes the angles of the joints and the pose of the end-effector (defined by 3 points in the space of the end-effector), as well as their velocities, but does not include the position of the target object or goal, which must be determined from the image. CNNs often use pooling to discard the locational information that is necessary to determine positions, since it is an irrelevant distractor for tasks such as object classification (Lee et al., 2009). Because locational information is important for control, our policy does not use pooling. Additionally, CNNs built for spatial tasks such as human pose estimation often also rely on the availability of location labels in image-space, such as hand-labeled keypoints (Tompson et al., 2014). We propose a novel CNN architecture capable of estimating spatial information from an image without direct supervision in image space. Our pose estimation experiments, discussed in Section 5.2, show that this network can learn useful visual features using only 3D position information provided by the robot, and no camera calibration. Further training the network with guided policy search to directly output motor torques causes it to acquire task-specific visual features. Our experiments in Section 6.4 show that this improves performance beyond the level achieved with features trained only for pose estimation.
Our network architecture is shown in Figure 2. The visual processing layers of the network consist of three convolutional layers, each of which learns a bank of filters that are applied to patches centered on every pixel of its input. These filters form a hierarchy of local image features. Each convolutional layer is followed by a rectifying nonlinearity of
the form a_cij = max(0, z_cij) for each channel c and each pixel coordinate (i, j). The third convolutional layer contains 32 response maps with resolution 109 × 109. These response maps are passed through a spatial softmax function of the form s_cij = exp(a_cij) / Σ_{i′j′} exp(a_ci′j′). Each output channel of the softmax is a probability distribution over the location of a feature in the image. To convert from this distribution to a coordinate representation (f_cx, f_cy), the network calculates the expected image position of each feature, yielding a 2D coordinate for each channel: f_cx = Σ_ij s_cij x_ij and f_cy = Σ_ij s_cij y_ij, where (x_ij, y_ij) is the image-space position of the point (i, j) in the response map. Since this is a linear operation, it corresponds to a fixed, sparse fully connected layer with weights W_cijx = x_ij and W_cijy = y_ij. The combination of the spatial softmax and expectation operator implements a kind of soft-argmax. The spatial feature points (f_cx, f_cy) are concatenated with the robot's configuration and fed into two fully connected layers, each with 40 rectified units, followed by linear connections to the torques. The full network contains about 92,000 parameters, of which 86,000 are in the convolutional layers.
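The spatial soft-argmax just described can be written in a few lines. The NumPy sketch below normalizes image coordinates to [-1, 1], which is a convention choice for illustration rather than a detail fixed by the architecture:

```python
import numpy as np

def spatial_soft_argmax(acts):
    """Spatial softmax plus expected 2D position, as described above.

    acts: (C, H, W) response maps from the last convolutional layer.
    Returns a (C, 2) array of expected (x, y) feature coordinates.
    """
    C, H, W = acts.shape
    flat = acts.reshape(C, -1)
    flat = flat - flat.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(flat)
    s = (e / e.sum(axis=1, keepdims=True)).reshape(C, H, W)
    # image-space coordinate grids: x varies along width, y along height
    ys, xs = np.meshgrid(np.linspace(-1.0, 1.0, H),
                         np.linspace(-1.0, 1.0, W), indexing='ij')
    fx = (s * xs).sum(axis=(1, 2))   # f_cx = sum_ij s_cij x_ij
    fy = (s * ys).sum(axis=(1, 2))   # f_cy = sum_ij s_cij y_ij
    return np.stack([fx, fy], axis=1)
```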
The spatial softmax and the expected position computation serve to convert pixel-wise representations in the convolutional layers to spatial coordinate representations, which can be manipulated by the fully connected layers into 3D positions or motor torques. The softmax also provides lateral inhibition, which suppresses low, erroneous activations, only keeping strong activations that are more likely to be accurate. This makes our policy more robust to distractors, providing generalization to novel visual variation. We compare our architecture with more standard alternatives in Section 6.3 and evaluate robustness to visual distractors in Section 6.4. However, the proposed architecture is also in some sense more specialized for visuomotor control, in contrast to more general standard convolutional networks. For example, not all perception tasks require information that can be coherently summarized by a set of spatial locations.
# 5.2 Visuomotor Policy Training
The guided policy search trajectory optimization phase uses the full state of the system, though the final policy only uses the observations. This type of instrumented training is a natural choice for many robotics tasks, where the robot is trained under controlled conditions, but must then act intelligently in uncontrolled, real-world situations. In our tasks, the unobserved variables are the pose of a target object (e.g. the bottle on which a cap must be placed). During training, this target object is typically held in the robot's left gripper, while the robot's right arm performs the task, as shown to the right. This allows the robot to move the target through a range of known positions. The final visuomotor policy does not receive this position as input, but must instead use the camera images. Due to the modest amount of training data, distractors that are correlated with task-relevant variables can hamper generalization. For this reason, the left arm is covered with cloth to prevent the policy from associating its appearance with the object's position.
While we can train the visuomotor policy entirely from scratch, the algorithm would spend a large number of iterations learning basic visual features and arm motions that can more efficiently be learned by themselves, before being incorporated into the policy. To speed up learning, we initialize both the vision layers in the policy and the trajectory distributions for guided policy search by leveraging the fully observed training setup. To initialize the vision layers, the robot moves the target object through a range of random positions, recording camera images and the object's pose, which is computed automatically from the pose of the gripper. This dataset is used to train a pose regression CNN, which consists of the same vision layers as the policy, followed by a fully connected layer that outputs the 3D points that define the target. Since the training set is still small (we use 1000 images collected from random arm motions), we initialize the filters in the first layer with weights from the model of Szegedy et al. (2014), which is trained on ImageNet (Deng et al., 2009) classification. After training on pose regression, the weights in the convolutional layers are transferred to the policy CNN. This enables the robot to learn the appearance of the objects prior to learning the behavior.
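For orientation, the pose regression step amounts to supervised training of the vision layers with a squared Euclidean loss on the three 3D target points. The sketch below uses a hypothetical model interface (`forward`, `backward`) standing in for the actual CNN training machinery; it is illustrative only:

```python
import numpy as np

def pose_pretraining_epoch(vision_cnn, images, target_points, lr=1e-3):
    """One epoch of pose regression pretraining, sketched generically.

    vision_cnn: hypothetical object with forward(image) -> (9,) predicted
    3D points and backward(grad, lr) for a gradient step.
    images: (N, H, W, 3) camera images; target_points: (N, 9) labels
    computed by forward kinematics from the gripper pose.
    """
    total = 0.0
    for img, target in zip(images, target_points):
        pred = vision_cnn.forward(img)
        err = pred - target
        total += 0.5 * float(err @ err)   # squared Euclidean loss
        vision_cnn.backward(err, lr)      # gradient step on the loss
    return total / len(images)
```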
To initialize the linear-Gaussian controllers for each of the initial states, we take 15 iterations of guided policy search without optimizing the visuomotor policy. This allows for much faster training in the early iterations, when the trajectories are not yet successful, and optimizing the full visuomotor policy is unnecessarily time consuming. Since we still want the trajectories to arrive at compatible strategies for each target position, we replace the visuomotor policy during these iterations with a small network that receives the full state, which consisted of two layers with 40 rectified linear hidden units in our experiments. This network serves only to constrain the trajectories and avoid divergent behaviors from emerging for similar initial states, which would make subsequent policy learning difficult.
After initialization, we train the full visuomotor policy with guided policy search. During the supervised policy optimization phase, the fully connected motor control layers are first optimized by themselves, since they are not initialized with pretraining. This can be done very quickly because these layers are small. Then, the entire network is further optimized end-to-end. We found that first training the upper layers before end-to-end optimization prevented the convolutional layers from forgetting useful features learned during pretraining, when the error signal due to the untrained upper layers is very large. The entire pretraining scheme is summarized in the diagram on the right. Note that the trajectories can be pretrained in parallel with the vision layer pretraining, which does not require access to the physical system. Furthermore, the entire initialization procedure does not use any additional information that is not already available from the robot.
[Diagram: pretraining pipeline. Collecting visual pose data and pretraining trajectories both require the robot; the pose data trains the pose CNN, yielding initial visual features, and trajectory pretraining yields initial trajectories; both feed into end-to-end training of the policy.]
# 6. Experimental Evaluation
In this section, we present a series of experiments aimed at evaluating our approach and answering the following questions:
1. How does the guided policy search algorithm compare to other policy search methods for training complex, high-dimensional policies, such as neural networks?
2. Does our trajectory optimization algorithm work on a real robotic platform with unknown dynamics, for a range of different tasks?
3. How does our spatial softmax architecture compare to other, more standard convolutional neural network architectures?
4. Does training the perception and control systems in a visuomotor policy jointly end-to-end provide better performance than training each component separately?
Evaluating a wide range of policy search algorithms on a real robot would be extremely time consuming, particularly for methods that require a large number of samples. We therefore answer question (1) by using a physical simulator and simpler policies that do not use vision. This also allows us to test the generality of guided policy search on tasks that include manipulation, walking, and swimming. To answer question (2), we present a wide range of experiments on a PR2 robot. These experiments allow us to evaluate the sample efficiency of our trajectory optimization algorithm. To address question (3), we compare a range of different policy architectures on the task of localizing a target object (the cube in the shape sorting cube task). Since localizing the target object is a prerequisite for completing the shape sorting cube task, this serves as a good proxy for evaluating different architectures. Finally, we answer the last and most important question (4) by training visuomotor policies for hanging a coat hanger on a clothes rack, inserting a block into a shape sorting cube, fitting the claw of a toy hammer under a nail with various grasps, and screwing on a bottle cap. These tasks are illustrated in Figure 8.
# 6.1 Simulated Comparisons to Prior Policy Search Methods
In this section, we compare our method against prior policy search techniques on a range of simulated robotic control tasks. These results previously appeared in our conference paper that introduced the trajectory optimization procedure with local linear models (Levine and Abbeel, 2014). In these tasks, the state xt consists of the joint angles and velocities of each robot, and the actions ut consist of the torques at each joint. The neural network policies used one hidden layer and soft rectifier nonlinearities of the form a = log(1 + exp(z)). Since these policies use the state as input, they only have a few hundred parameters, far fewer than our visuomotor policies. However, even this number of parameters can pose a major challenge for prior policy search methods (Deisenroth et al., 2013).
Experimental tasks. We simulated 2D and 3D peg insertion, octopus arm control, and planar swimming and walking. The difficulty in the peg insertion tasks stems from the need to align the peg with the slot and the complex contacts between the peg and the walls, which result in discontinuous dynamics. Octopus arm control involves moving the tip of a flexible arm to a goal position (Engel et al., 2005). The challenge in this task stems from its high dimensionality: the arm has 25 degrees of freedom, corresponding to 50 state dimensions. The swimming task requires controlling a three-link snake, and the walking task requires a seven-link biped to maintain a target velocity. The challenge in these tasks comes from underactuation. Details of the simulation and cost for each task are in Appendix B.1.
Figure 4: Results for learning linear-Gaussian controllers for 2D and 3D insertion, octopus arm, and swimming. Our approach uses fewer samples and finds better solutions than prior methods, and the GMM further reduces the required sample count. Images in the lower-right show the last time step for each system at several iterations of our method, with red lines indicating end effector trajectories.
Prior methods. We compare to REPS (Peters et al., 2010), reward-weighted regression (RWR) (Peters and Schaal, 2007; Kober and Peters, 2009), the cross-entropy method (CEM) (Rubinstein and Kroese, 2004), and PILCO (Deisenroth and Rasmussen, 2011). We also use iLQG (Li and Todorov, 2004) with a known model as a baseline, shown as a black horizontal line in all plots. REPS is a model-free method that, like our approach, enforces a KL-divergence constraint between the new and old policy. We compare to a variant of REPS that also fits linear dynamics to generate 500 pseudo-samples (Lioutikov et al., 2014), which we label "REPS (20 + 500)." RWR is an EM algorithm that fits the policy to previous samples weighted by the exponential of their reward, and CEM fits the policy to the best samples in each batch. With Gaussian trajectories, CEM and RWR only differ in the weights. These methods represent a class of RL algorithms that fit the policy to weighted samples, including PoWER and PI2 (Kober and Peters, 2009; Theodorou et al., 2010; Stulp and Sigaud, 2012). PILCO is a model-based method that uses a Gaussian process to learn a global dynamics model that is used to optimize the policy. We used the open-source implementation of PILCO provided by the authors. Both REPS and PILCO require solving large nonlinear optimizations at each iteration, while our method does not. Our method used 5 rollouts with the Gaussian mixture model prior, and 20 without. Due to its computational cost, PILCO was provided with 5 rollouts per iteration, while other prior methods used 20 and 100. For all prior methods with free hyperparameters (such as the fraction of elites for CEM), we performed hyperparameter sweeps and chose the most successful settings for the comparison.
Gaussian trajectory distributions. In the first set of comparisons, we evaluate only the trajectory optimization procedure for training linear-Gaussian controllers under unknown dynamics to determine its sample-efficiency and applicability to complex, high-dimensional problems. The results of this comparison for the peg insertion, octopus arm, and swimming
Figure 5: Comparison on neural network policies. For insertion, the policy was trained to search for an unknown slot position on four slot positions (shown above). Generalization to new positions is graphed with dashed lines. Note how the end effector (red) follows the surface to find the slot, and how the swimming gait is smoother due to the stationary policy.
tasks appears in Figure 4. The horizontal axis shows the total number of samples, and the vertical axis shows the minimum distance between the end of the peg and the bottom of the slot, the distance to the target for the octopus arm, or the total distance travelled by the swimmer. Since the peg is 0.5 units long, distances above this amount correspond to controllers that cannot perform an insertion. Our method learned much more effective controllers with fewer samples, especially when using the Gaussian mixture model prior. On 3D insertion, it outperformed the iLQG baseline, which used a known model. Contact discontinuities cause problems for derivative-based methods like iLQG, as well as methods like PILCO that learn a smooth global dynamics model. We use a time-varying local model, which preserves more detail, and fitting the model to samples has a smoothing effect that mitigates discontinuity issues. Prior policy search methods could servo to the hole, but were unable to insert the peg. On the octopus arm, our method succeeded despite the high dimensionality of the state and action spaces.1 Our method also successfully learned a swimming gait, while prior model-free methods could not initiate forward motion. PILCO also learned an effective gait due to the smooth dynamics of this task, but its GP-based optimization required orders of magnitude more computation time than our method, taking about 50 minutes per iteration. In the case of prior model-free methods, the high dimensionality of the time-varying linear-Gaussian controllers likely caused considerable difficulty (Deisenroth et al., 2013), while our approach exploits the structure of linear-Gaussian controllers for efficient learning.
1. The high dimensionality of the octopus arm made it difficult to run PILCO, though in principle, such methods should perform well on this task given the arm's smooth dynamics.
Neural network policies. In the second set of comparisons, shown in Figure 5, we compare guided policy search to RWR and CEM2 on the challenging task of training high-dimensional neural network policies for the peg insertion and locomotion tasks. The variant of guided policy search used in this comparison differs somewhat from the version described in Section 4, in that it used a simpler dual gradient descent formulation, rather than BADMM. In practice, we found the performance of these methods to be very similar, though the BADMM variant was substantially faster and easier to implement.
On swimming, our method achieved similar performance to the linear-Gaussian case, but since the neural network policy was stationary, the resulting gait was much smoother. Previous methods could only solve this task with 100 samples per iteration, with RWR eventually obtaining a distance of 0.5m after 4000 samples, and CEM reaching 2.1m after 3000. Our method was able to reach such distances with many fewer samples. Following prior work (Levine and Koltun, 2013a), the walker trajectory was initialized from a demonstration, which was stabilized with simple linear feedback. The RWR and CEM policies were initialized with samples from this controller to provide a fair comparison. The graph shows the average distance travelled on rollouts that did not fall, and shows that only our method was able to learn walking policies that succeeded consistently.
On peg insertion, the neural network was trained to insert the peg without precise knowledge of the position of the hole, resulting in a partially observed problem. The holes were placed in a region of radius 0.2 units in 2D and 0.1 units in 3D. The policies were trained on four different hole positions, and then tested on four new hole positions to evaluate generalization. The hole position was not provided to the neural network, and the policies therefore had to search for the hole, with only joint angles and velocities as input. Only our method could acquire a successful strategy to locate both the training and test holes, although RWR was eventually able to insert the peg into one of the four holes in 2D. These comparisons show that training even medium-sized neural network policies for continuous control tasks with a limited number of samples is very difficult for many prior policy search algorithms. Indeed, it is generally known that model-free policy search methods struggle with policies that have over 100 parameters (Deisenroth et al., 2013). In subsequent sections, we will evaluate our method on real robotic tasks, showing that it can scale from these simulated tasks all the way up to end-to-end learning of visuomotor control.
# 6.2 Learning Linear-Gaussian Controllers on a PR2 Robot
In this section, we demonstrate the range of manipulation tasks that can be learned using our trajectory optimization algorithm on a real PR2 robot. These experiments previously appeared in our conference paper on guided policy search (Levine et al., 2015). Since performing trajectory optimization is a prerequisite for guided policy search to learn effective visuomotor policies, it is important to evaluate that our trajectory optimization can learn a wide variety of robotic manipulation tasks under unknown dynamics. The tasks in these experiments are shown in Figure 6, while Figure 7 shows the learning curves for each task. For all robotic experiments in this article, the tasks were learned entirely from scratch,
2. PILCO cannot optimize neural network policies, and we could not obtain reasonable results with REPS. Prior applications of REPS generally focus on simpler, lower-dimensional policy classes (Peters et al., 2010; Lioutikov et al., 2014).
Figure 6: Tasks for linear-Gaussian controller evaluation: (a) stacking lego blocks on a fixed base, (b) onto a free-standing block, (c) held in both grippers; (d) threading wooden rings onto a peg; (e) attaching the wheels to a toy airplane; (f) inserting a shoe tree into a shoe; (g,h) screwing caps onto pill bottles and (i) onto a water bottle.
with the initialization of the controllers p(ut|xt) described in Appendix B.2. The number of samples required to learn each controller is around 20-25, substantially lower than many prior policy search methods in the literature (Peters and Schaal, 2008; Kober et al., 2010b; Theodorou et al., 2010; Deisenroth et al., 2013). Total learning time was about ten minutes for each task, of which only 3-4 minutes involved system interaction. The rest of the time was spent resetting the robot to the initial state and on computation.
[Plot: linear-Gaussian controller learning curves, showing distance to target (cm) versus number of samples for the lego block (fixed, free, hand), ring on peg, toy airplane, shoe tree, pill bottle, and water bottle tasks.]
Figure 7: Distance to target point during training of linear-Gaussian controllers. The actual target may differ due to perturbations. Error bars indicate one standard deviation.
The linear-Gaussian controllers are optimized for a specific condition, e.g., a specific position of the target lego block. To evaluate their robustness to errors in the specified target position, we conducted experiments on the lego block and ring tasks where the target object (the lower block and the peg) was perturbed at each trial during training, and then tested with various perturbations. For each task, controllers were trained with Gaussian perturbations with standard deviations of 0, 1, and 2 cm in the position of the target object, and each controller was tested with perturbations of radius 0, 1, 2, and 3 cm. Note that with a radius of 2 cm, the peg would be placed about one ring-width away from the expected position. The results are shown in Table 2. All controllers were robust to perturbations of 1 cm, and would often succeed at 2 cm. Robustness increased slightly when more noise was injected during training, but even controllers trained without noise exhibited considerable robustness, since the linear-Gaussian controllers themselves add noise during sampling. We also evaluated a kinematic baseline for each perturbation level, which planned a straight path from a point 5 cm above the target to the expected (unperturbed) target location. This baseline was only able to place the lego block in the absence of perturbations. The rounded top of the peg provided an easier condition for the baseline, with occasional successes at higher perturbation levels. Our controllers outperformed the baseline by a wide margin.
All of the robotic experiments discussed in this section may be viewed in the corresponding supplementary video, available online: http://rll.berkeley.edu/icra2015gps. A video illustration of the visuomotor policies, discussed in the following sections, is also available: http://sites.google.com/site/visuomotorpolicy.
                        lego block, test perturbation       ring on peg, test perturbation
training perturbation    0 cm   1 cm   2 cm   3 cm           0 cm   1 cm   2 cm   3 cm
0 cm                     5/5    5/5    3/5    2/5            5/5    5/5    0/5    0/5
1 cm                     5/5    5/5    3/5    2/5            5/5    5/5    3/5    0/5
2 cm                     5/5    5/5    5/5    3/5            5/5    5/5    3/5    0/5
kinematic baseline       5/5    0/5    0/5    0/5            5/5    3/5    0/5    0/5

Table 2: Success rates of linear-Gaussian controllers under target object perturbation.

# 6.3 Spatial Softmax CNN Architecture Evaluation
In this section, we evaluate the neural network architecture that we propose in Section 5.1 in comparison to more standard convolutional networks. To isolate the architectures from other confounding factors, we measure their accuracy on the pose estimation pretraining task described in Section 5.2. This is a reasonable proxy for evaluating how well the network can overcome two major challenges in visuomotor learning: the ability to handle relatively small datasets without overfitting, and the capability to learn tasks that are inherently spatial. We compare to a network where the expectation operator after the softmax is replaced with a learned fully connected layer, as is standard in the literature, a network where both the softmax and the expectation operators are replaced with a fully connected layer, and a version of this network that also uses 3 × 3 max pooling with stride 2 at the first two layers. These alternative architectures have many more parameters, since the fully connected layer takes the entire bank of response maps from the third convolutional layer as input. Pooling helps to reduce the number of parameters, but not to the same degree as the spatial softmax and expectation operators in our architecture.
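To see why the dense alternatives are so much larger, consider a rough parameter count. The 64-unit output size below is an assumption chosen for illustration, not a detail from the experiments:

```python
# Rough weight counts for the heads compared in Table 3, assuming
# 32 response maps at 109x109 and a hypothetical 64-unit dense layer.
C, H, W, out_units = 32, 109, 109, 64
feature_point_head = 0              # soft-argmax is a fixed mapping
fc_head = C * H * W * out_units     # dense layer over all response maps
print(fc_head)                      # about 24.3 million weights vs. zero
```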
The results in Table 3 indicate that using the softmax and expectation operators improves pose estimation accuracy substantially. Our network is able to outperform the more standard architectures because it is forced by the softmax and expectation operators to learn feature points, which provide a concise representation suitable for spatial inference. Since most of the parameters in this architecture are in the convolutional layers, which benefit from extensive weight sharing, overfitting is also greatly reduced. By removing pooling, our network also maintains higher resolution in the convolutional layers, improving spatial accuracy. Although we did attempt to regularize the larger standard architectures with higher weight decay and dropout, we did not observe a significant improvement on this dataset. We also did not extensively optimize the parameters of this network, such as filter size and number of channels, and investigating these design decisions further would be valuable future work.
network architecture                 test error (cm)
softmax + feature points (ours)      1.30 ± 0.73
softmax + fully connected layer      2.59 ± 1.19
fully connected layer                4.75 ± 2.29
max-pooling + fully connected        3.71 ± 1.73
Table 3: Average pose estimation accuracy and standard deviation with various architectures, measured as average Euclidean error for the three target points in 3D, with ground truth determined by forward kinematics from the left arm.
(a) hanger (b) cube (c) hammer (d) bottle
Figure 8: Illustration of the tasks in our visuomotor policy experiments, showing the variation in the position of the target for the hanger, cube, and bottle tasks, as well as two of the three grasps for the hammer, which also included variation in position (not shown).
# 6.4 Deep Visuomotor Policy Evaluation
In this section, we present an evaluation of our full visuomotor policy training algorithm on a PR2 robot. The aim of this evaluation is to answer the following question: does training the perception and control systems in a visuomotor policy jointly end-to-end provide better performance than training each component separately?
Experimental tasks. We trained policies for hanging a coat hanger on a clothes rack, inserting a block into a shape sorting cube, fitting the claw of a toy hammer under a nail with various grasps, and screwing on a bottle cap. The cost function for these tasks encourages low distance between three points on the end-effector and corresponding target points, low torques, and, for the bottle task, spinning the wrist. The equations for these cost functions and the details of each task are presented in Appendix B.2. The tasks are illustrated in Figure 8. Each task involved variation of 10-20 cm in each direction in the position of the target object (the rack, shape sorting cube, nail, and bottle). In addition, the coat hanger and hammer tasks were trained with two and three grasps, respectively. The current angle of the grasp was not provided to the policy, but had to be inferred from observing the robot's gripper in the camera images. All tasks used the same policy architecture and model parameters.
Experimental conditions. We evaluated the visuomotor policies in three conditions: (1) the training target positions and grasps, (2) new target positions not seen during training and, for the hammer, new grasps (spatial test), and (3) training positions with visual distractors (visual test). A selection of these experiments is shown in the supplementary video.3 For the visual test, the shape sorting cube was placed on a table rather than held in
3. The video can be viewed at http://sites.google.com/site/visuomotorpolicy
the gripper, the coat hanger was placed on a rack with clothes, and the bottle and hammer tasks were done in the presence of clutter. Illustrations of this test are shown in Figure 9.
Comparison. The success rates for each test are shown in Figure 9. We compared to two baselines, both of which train the vision layers in advance for pose prediction, instead of training the entire policy end-to-end. The features baseline discards the last layer of the pose predictor and uses the feature points, resulting in the same architecture as our policy, while the prediction baseline feeds the predicted pose into the control layers. The pose prediction baseline is analogous to a standard modular approach to policy learning, where the vision system is first trained to localize the target, and the policy is trained on top of it. This variant achieves poor performance. As discussed in Section 6.3, the pose estimate is accurate to about 1 cm. However, unlike the tasks in Section 6.2, where robust controllers could succeed even with inaccurate perception, many of these tasks have tolerances of just a few millimeters. In fact, the pose prediction baseline is only successful on the coat hanger, which requires comparatively little accuracy. Millimeter accuracy is difficult to achieve even with calibrated cameras and checkerboards. Indeed, prior work has reported that the PR2 can maintain a camera to end effector accuracy of about 2 cm during open loop motion (Meeussen et al., 2010). This suggests that the failure of this baseline is not atypical, and that our visuomotor policies are learning visual features and control strategies that improve the robot's accuracy. When provided with pose estimation features, the policy has more freedom in how it uses the visual information, and achieves somewhat higher success rates. However, full end-to-end training performs significantly better, achieving high accuracy even on the challenging bottle task, and successfully adapting to the variety of grasps on the hammer task. This suggests that, although the vision layer pretraining is clearly beneficial for reducing computation time, it is not sufficient by itself for discovering good features for visuomotor policies.
Visual distractors. The policies exhibit moderate tolerance to distractors that are visually separated from the target object. This is enabled in part by the spatial softmax, which has a lateral inhibition effect that suppresses non-maximal activations. Since distractors are unlikely to activate each feature as much as the true object, their activations are therefore suppressed. However, as expected, the learned policies tend to perform poorly under drastic changes to the backdrop, or when the distractors are adjacent to or occluding the manipulated objects, as shown in the supplementary video. A standard solution to this issue is to expose the policy to a greater variety of visual situations during training. This issue could also be mitigated by artificially augmenting the image samples with synthetic transformations, as discussed in prior work in computer vision (Simard et al., 2003), or even incorporating ideas from transfer and semi-supervised learning.
# 6.5 Features Learned with End-to-End Training
The visual processing layers of our architecture automatically learn feature points using the spatial softmax and expectation operators. These feature points encapsulate all of the visual information received by the motor layers of the policy. In Figure 10, we show the feature points discovered by our visuomotor policy through guided policy search. Each policy learns features on the target object and the robot manipulator, both clearly relevant
[Table: success rates for end-to-end training, pose features, and pose prediction on training positions, spatial test positions, and visual test positions (trials per test in parentheses: coat hanger 18/24/18, toy hammer 45/60/60). Surviving entries: coat hanger, training: end-to-end 100%, pose features 88.9%, pose prediction 55.6%; toy hammer, training: end-to-end 91.1%, pose features 62.2%, pose prediction 8.9%; shape cube: pose prediction 0%.]
Success rates on training positions, on novel test positions, and in the presence of visual distractors. The number of trials per test is shown in parentheses.
Figure 9: Training and visual test scenes as seen by the policy (left), and experimental results (right). The hammer and bottle images were cropped for visualization only.
to task execution. The policy tends to pick out robust, distinctive features on the objects, such as the left pole of the clothes rack, the left corners of the shape-sorting cube and the bottom-left corner of the toy tool bench. In the bottle task, the end-to-end trained policy outputs points on both sides of the bottle, including one on the cap, while the pose prediction network only finds points on the right edge of the bottle.
In Figure 11, we compare the feature points learned through guided policy search to those learned by a CNN trained for pose prediction. After end-to-end training, the policy acquired a distinctly different set of feature points compared to the pose prediction CNN used for initialization. The end-to-end trained model finds more feature points on task-relevant objects and fewer points on background objects. This suggests that the policy improves its performance by acquiring goal-driven visual features that differ from those learned for object localization.
The feature point representation is very simple, since it assumes that the learned features are present at all times, and only one instance of each feature is ever present in the image. While this is a drastic simplification, both the pose predictor and the policy still achieve good results. A more flexible architecture that still learns a concise feature point representation could further improve policy performance. We hope to explore this in future work.
# 6.6 Computational Performance and Sample Efficiency
We used the Caffe deep learning library (Jia et al., 2014) for CNN training. Each visuomotor policy required a total of 3-4 hours of training time: 20-30 minutes for the pose prediction data collection on the robot, 40-60 minutes for the fully observed trajectory pretraining on
(a) hanger (b) cube (c) hammer (d) bottle
Figure 10: Feature points tracked by the policy during task execution for each of the four tasks. Each feature point is displayed in a different random color, with consistent coloring across images. The policy finds features on the target object and the robot gripper and arm. In the bottle cap task, note that the policy correctly ignores the distractor bottle in the background, even though it was not present during training.
(a) hanger (b) cube (c) hammer (d) bottle
Figure 11: Feature points learned for each task. For each input image, the feature points produced by the policy are shown in blue, while the feature points of the pose prediction network are shown in red. The end-to-end trained policy tends to discover more feature points on the target object and the robot arm than the pose prediction network.
the robot and offline pose pretraining (which can be done in parallel), and between 1.5 and 2.5 hours for end-to-end training with guided policy search. The coat hanger task required two iterations of guided policy search, the shape sorting cube and the hammer required three, and the bottle task required four. Only about 15 minutes of the training time consisted of executing trials on the robot. Since training was dominated by computation, we expect significant speedup from a more efficient implementation. The number of samples for training each policy is shown in Table 4. Each trial was five seconds in length, and the numbers do not include the time needed to collect about 1000 images for pretraining the visual processing layers of the policy.
                        number of trials
task          trajectory pretraining   end-to-end training   total
coat hanger          120                       36             156
shape cube            90                       81             171
toy hammer           150                       90             240
bottle cap           180                      108             288
Table 4: Total number of trials used for learning each visuomotor policy.
# 7. Discussion and Future Work
In this paper, we presented a method for learning robotic control policies that use raw input from a monocular camera. These policies are represented by a novel convolutional neural network architecture, and can be trained end-to-end using our guided policy search algorithm, which decomposes the policy search problem into a trajectory optimization phase that uses full state information and a supervised learning phase that only uses the observations. This decomposition allows us to leverage state-of-the-art tools from supervised learning, making it straightforward to optimize extremely high-dimensional policies. Our experimental results show that our method can execute complex manipulation skills, and that end-to-end training produces significant improvements in policy performance compared to using fixed vision layers trained for pose prediction.
Although we demonstrate moderate generalization over variations in the scene, our current method does not generalize to dramatically different settings, especially when visual distractors occlude the manipulated object or break up its silhouette in ways that differ from the training. The success of CNNs on exceedingly challenging vision tasks suggests that this class of models is capable of learning invariance to irrelevant distractor features (LeCun et al., 2015), and in principle this issue can be addressed by training the policy in a variety of environments, though this poses certain logistical challenges. More practical alternatives that could be explored in future work include simultaneously training the policy on multiple robots, each of which is located in a different environment, developing more sophisticated regularization and pretraining techniques to avoid overfitting, and introducing artificial data augmentation to encourage the policy to be invariant to irrelevant clutter. However, even without these improvements, our method has numerous applications in, for example, an industrial setting where the robot must repeatedly and efficiently perform a task that requires visual feedback under moderate variation in background and clutter conditions.
Our method takes advantage of a known, fully observed state space during training. This is both a weakness and a strength. It allows us to train linear-Gaussian controllers
for guided policy search using a very small number of samples, far more efficiently than standard policy search methods. However, the requirement to observe the full state during training limits the tasks to which the method can be applied. In many cases, this limitation is minor, and the only "instrumentation" required at training is to position the objects in the scene at consistent positions. However, tasks that require, for example, manipulating freely moving objects require more extensive instrumentation, such as motion capture. A promising direction for addressing this limitation is to combine our method with unsupervised state-space learning, as proposed in several recent works, including our own (Lange et al., 2012; Watter et al., 2015; Finn et al., 2015).
In future work, we hope to explore more complex policy architectures, such as recurrent policies that can deal with extensive occlusions by keeping a memory of past observations. We also hope to extend our method to a wider range of tasks that can benefit from visual input, as well as a variety of other rich sensory modalities, including haptic input from pressure sensors and auditory input. With a wider range of sensory modalities, end-to-end training of sensorimotor policies will become increasingly important: while it is often straightforward to imagine how vision might help to localize the position of an object in the scene, it is much less apparent how sound can be integrated into robotic control. A learned sensorimotor policy would be able to naturally integrate a wide range of modalities and utilize them to directly aid in control.
# Acknowledgements
This research was funded in part by DARPA through a Young Faculty Award, the Army Research Office through the MAST program, NSF awards IIS-1427425 and IIS-1212798, the Berkeley Vision and Learning Center, and a Berkeley EECS Department Fellowship.
# Appendix A. Guided Policy Search Algorithm Details
In this appendix, we describe a number of implementation details of our BADMM-based guided policy search algorithm and our linear-Gaussian controller optimization method.
# A.1 BADMM Dual Variables and Weight Adjustment
Recall that the inner loop alternating optimization is given by
θ ← arg min_θ Σ_{t=1}^T E_{p(x_t) π_θ(u_t|x_t)}[u_t^T λ_{μt}] + ν_t φ_t^θ(θ, p)

p ← arg min_p Σ_{t=1}^T E_{p(x_t) p(u_t|x_t)}[ℓ(x_t, u_t) − u_t^T λ_{μt}] + ν_t φ_t^p(p, θ)

λ_{μt} ← λ_{μt} + α ν_t (E_{π_θ(u_t|x_t) p(x_t)}[u_t] − E_{p(u_t|x_t) p(x_t)}[u_t]).
We use a step size of α = 0.1 in all of our experiments, which we found to be more stable than α = 1.0. The weights ν_t are initialized to 0.01 and incremented based on the following schedule: at every iteration, we compute the average KL-divergence between p(u_t|x_t) and π_θ(u_t|x_t) at each time step, as well as its standard deviation over time steps.
The weights ν_t corresponding to time steps where the KL-divergence is higher than the average are increased by a factor of 2, and the weights corresponding to time steps where the KL-divergence is two standard deviations or more below the average are decreased by a factor of 2. The rationale behind this schedule is to adjust the KL-divergence penalty to keep the policy and trajectory in agreement by roughly the same amount at all time steps. Increasing ν_t too quickly can lead to the policy and trajectory becoming "locked" together, which makes it difficult for the trajectory to decrease its cost, while leaving it too low requires more iterations for convergence. We found this schedule to work well across all tasks, both during trajectory pretraining and while training the visuomotor policy.
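As a concrete illustration, this schedule can be written in a few lines. The sketch below uses our own function and variable names; it is not code from the actual implementation.

```python
import numpy as np

def adjust_dual_weights(nu, kl_per_step):
    """One step of the nu_t adjustment schedule described above.

    nu:          array of shape (T,), current per-time-step weights.
    kl_per_step: array of shape (T,), average KL(p(u_t|x_t) || pi(u_t|x_t))
                 at each time step, estimated from the latest samples.
    """
    mean, std = kl_per_step.mean(), kl_per_step.std()
    nu = nu.copy()
    nu[kl_per_step > mean] *= 2.0              # KL above average: penalize more
    nu[kl_per_step < mean - 2.0 * std] /= 2.0  # KL far below average: relax
    return nu

# Example: nu_t initialized to 0.01 for a 100-step trajectory
nu = np.full(100, 0.01)
kl = np.abs(np.random.randn(100)) * 0.1  # placeholder KL estimates
nu = adjust_dual_weights(nu, kl)
```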
To update the dual variables λ_{μt}, we evaluate the expectations over p(x_t) by using the latest batch of sampled trajectories. For each state x_t^i along these sampled trajectories, we evaluate the expectations over u_t under π_θ(u_t|x_t) and p(u_t|x_t), which correspond simply to the means of these conditional Gaussian distributions, in closed form.
# A.2 Policy Variance Optimization
As discussed in Section 4, the variance of the Gaussian policy π_θ(u_t|o_t) does not depend on the observation, though this dependence would be straightforward to add. Analyzing the objective L_θ(θ, p), we can write out only the terms that depend on Σ^π:
L_θ(θ, p) = (1/2N) Σ_{i=1}^N Σ_{t=1}^T E_{p_i(x_t, o_t)}[ tr(C_{ti}^{-1} Σ^π) − log |Σ^π| ].
Differentiating and setting the derivative to zero, we obtain the following equation for Σ^π:

Σ^π = ( (1/NT) Σ_{i=1}^N Σ_{t=1}^T C_{ti}^{-1} )^{-1},

where the expectation under p_i(x_t) is omitted, since C_{ti} does not depend on x_t.
# A.3 Dynamics Fitting
Optimizing the linear-Gaussian controllers p_i(u_t|x_t) that induce the trajectory distributions p_i(τ) requires fitting the system dynamics p_i(x_{t+1}|x_t, u_t) at each iteration to samples generated on the physical system from the previous controller p̂_i(u_t|x_t). In this section, we describe how these dynamics are fitted. As in Section 4, we drop the subscript i, since the dynamics are fitted the same way for all of the trajectory distributions.
The linear-Gaussian dynamics are defined as p(x_{t+1}|x_t, u_t) = N(f_{xt} x_t + f_{ut} u_t + f_{ct}, F_t), and the data that we obtain from the robot can be viewed as tuples {x_t^i, u_t^i, x_{t+1}^i}. A simple way to fit these linear-Gaussian dynamics is to use linear regression to determine f_x, f_u, and f_c, and fit F_t based on the errors. However, the sample complexity of linear regression scales with the dimensionality of x_t. For a high-dimensional robotic system, we might need an impractically large number of samples at each iteration to obtain a good fit. However, we can observe that the dynamics at nearby time steps are strongly correlated, and we can dramatically reduce the sample complexity of the dynamics fitting by bringing in information from other time steps, and even prior iterations. We will bring in this
information by fitting a global model to all of the transitions {x_t^i, u_t^i, x_{t+1}^i} for all t and all tuples from several prior iterations (we use three prior iterations in our implementation), and then using this model as a prior for fitting the dynamics at each time step. Note that this global model does not itself need to be a good forward dynamics model; it just needs to serve as a good prior to reduce the sample complexity of linear regression.
To make it more convenient to incorporate a data-driven prior, we will first reformulate this linear regression fit and view it as fitting a Gaussian model to the dataset {x_t^i, u_t^i, x_{t+1}^i} at each time step t, and then conditioning this Gaussian to obtain p(x_{t+1}|x_t, u_t). While this is equivalent to linear regression, it allows us to easily incorporate a normal-inverse-Wishart prior on this Gaussian in order to bring in prior information. Let Σ̂ be the empirical covariance of our dataset, and let μ̂ be the empirical mean. The normal-inverse-Wishart prior is defined by prior parameters Φ, μ₀, m, and n₀. Under this prior, the maximum a posteriori estimates for the covariance Σ and mean μ are given by
Σ = (Φ + N Σ̂ + (N m / (N + m)) (μ̂ − μ₀)(μ̂ − μ₀)^T) / (N + n₀)

μ = (m μ₀ + n₀ μ̂) / (m + n₀).
Having obtained Σ and µ, we can obtain an estimate of the dynamics p(xt+1|xt, ut) by conditioning the distribution N (µ, Σ) on [xt; ut], which produces linear-Gaussian dynamics p(xt+1|xt, ut) = N (fxtxt + futut + fct, Ft). The parameters of the normal-inverse-Wishart prior are obtained from the global model of the dynamics which, as described previously, is ï¬tted to all available tuples {xi
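The following sketch illustrates this procedure: fit the empirical moments, apply the normal-inverse-Wishart MAP estimates above, and condition the resulting Gaussian on [x_t; u_t]. All names are ours, and the code assumes the prior parameters (mu0, Phi, m, n0) are supplied by the global model.

```python
import numpy as np

def fit_lg_dynamics(X, U, Xn, mu0, Phi, m, n0):
    """MAP fit of linear-Gaussian dynamics p(x' | x, u) at one time step,
    under a normal-inverse-Wishart prior. X, U, Xn are (N, dx), (N, du),
    (N, dx) arrays of sampled states, actions, and next states."""
    D = np.hstack([X, U, Xn])          # dataset of [x; u; x'] vectors
    N, dx, du = X.shape[0], X.shape[1], U.shape[1]
    mu_hat = D.mean(axis=0)
    Sigma_hat = np.cov(D, rowvar=False, bias=True)
    # MAP estimates under the NIW prior (equations above)
    diff = (mu_hat - mu0)[:, None]
    Sigma = (Phi + N * Sigma_hat + (N * m) / (N + m) * diff @ diff.T) / (N + n0)
    mu = (m * mu0 + n0 * mu_hat) / (m + n0)
    # Condition N(mu, Sigma) on [x; u] to get x' = fxu [x; u] + fc, covariance F
    k = dx + du
    S_aa, S_ab = Sigma[:k, :k], Sigma[:k, k:]
    fxu = np.linalg.solve(S_aa, S_ab).T            # (dx, dx+du) linear map
    fc = mu[k:] - fxu @ mu[:k]                     # constant offset
    F = Sigma[k:, k:] - fxu @ S_ab                 # conditional covariance
    return fxu[:, :dx], fxu[:, dx:], fc, F         # fx, fu, fc, Ft
```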
The simplest prior can be obtained by fitting a Gaussian distribution to vectors [x; u; x′]. If the mean and covariance of this data are given by μ̂ and Σ̂, the prior is given by Φ = n₀Σ̂ and μ₀ = μ̂, while n₀ and m should be set to the number of data points in the dataset. In practice, setting n₀ and m to 1 tends to produce better results, since the prior is fitted to many more samples than are available for linear regression at each time step. While this prior is simple, we can obtain a better prior by employing a nonlinear model.
The particular global model we use in this work is a Gaussian mixture model over vectors [x; u; x′]. Systems of articulated rigid bodies undergoing contact dynamics, such as robots interacting with their environment, can be coarsely modeled as having piecewise linear dynamics. The Gaussian mixture model provides a good approximation for such piecewise linear systems, with each mixture element corresponding to a different linear mode (Khansari-Zadeh and Billard, 2010). Under this model, the state transition tuple is assumed to come from a distribution that depends on some hidden state h, which corresponds to the mixture element identity. In practice, this hidden state might correspond to the type of contact profile experienced by a robotic arm at step t. The prior for the dynamics fit at time step t is then obtained by inferring the hidden state distribution for the transition dataset {x_t^i, u_t^i, x_{t+1}^i}, and using the mean and covariance of the corresponding mixture elements (weighted by their probabilities) to obtain μ̂ and Σ̂. The prior parameters can then be obtained as described above.
In our experiments, we set the number of mixture elements for the Gaussian mixture model prior such that there were at least 40 samples per mixture element, or 20 total mixture elements, whichever was lower. In general, we did not find the performance of the method to be sensitive to this parameter, though overfitting did tend to occur in the early iterations, when the number of samples is low, if the number of mixtures was too high.
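A hypothetical version of this prior construction is sketched below, using scikit-learn's GaussianMixture in place of whatever EM implementation was actually used; taking responsibility-weighted moments of the mixture components is a simplification of the weighting described above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_prior(transitions, samples_per_cluster=40, max_clusters=20):
    """Fit the global GMM over [x; u; x'] vectors, choosing the number of
    components per the rule above (>= 40 samples per element, <= 20 total)."""
    K = min(max_clusters, max(1, len(transitions) // samples_per_cluster))
    return GaussianMixture(n_components=K, covariance_type='full').fit(transitions)

def prior_moments(gmm, query):
    """Return prior mean mu0 and scatter Phi for a batch of query transitions,
    as responsibility-weighted moments of the mixture components."""
    resp = gmm.predict_proba(query).mean(axis=0)       # average responsibilities
    mu0 = resp @ gmm.means_
    Phi = np.einsum('k,kij->ij', resp, gmm.covariances_)
    return mu0, Phi
```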
# A.4 Trajectory Optimization
In this section, we show how the LQR backward pass can be used to optimize the constrained objective in Section 4.2. The constrained trajectory optimization problem is given by
min_{p(τ)∈N(τ)} L_p(p, θ) s.t. D_KL(p(τ) ‖ p̂(τ)) ≤ ε.
The augmented Lagrangian L_p(p, θ) consists of an entropy term and an expectation under p(τ) of a quantity that is independent of p. We can locally approximate this quantity with a quadratic by using a quadratic expansion of ℓ(x_t, u_t), and fitting a linear-Gaussian to π_θ(u_t|x_t) with the same method we used for the dynamics. We can then solve the primal optimization in the dual gradient descent procedure with a standard LQR backward pass. As discussed in Section 4, L_p(p, θ) can be written as the expectation of some function c(τ) that is independent of p, such that L_p(p, θ) = E_{p(τ)}[c(τ)] − ν_t H(p(τ)). Specifically,
c(x_t, u_t) = ℓ(x_t, u_t) − u_t^T λ_{μt} − ν_t log π_θ(u_t|x_t).
Writing the Lagrangian of the constrained optimization, we have
L(p) = E_{p(τ)}[c(τ) − η log p̂(τ)] − (η + ν_t) H(p(τ)) − ηε,
where η is the Lagrange multiplier. Note that L(p) is the Lagrangian of the constrained trajectory optimization, which is not related to the augmented Lagrangian L_p(p, θ). Grouping the terms in the expectation and omitting constants, we can rewrite the minimization of the Lagrangian with respect to the primal variables as
min_{p(τ)∈N(τ)} E_{p(τ)}[ (1/(η + ν_t)) c(τ) − (η/(η + ν_t)) log p̂(τ) ] − H(p(τ)). (4)
Let c̃(τ) = (1/(η + ν_t)) c(τ) − (η/(η + ν_t)) log p̂(τ). The above optimization corresponds to minimizing E_{p(τ)}[c̃(τ)] − H(p(τ)). This type of maximum entropy problem can be solved using the LQR algorithm, and the solution is given by
p(u_t|x_t) = N(K_t x_t + k_t, Q_{u,ut}^{-1}),
where K_t and k_t are the feedback and open-loop terms of the optimal linear feedback controller corresponding to the cost c̃(x_t, u_t) and the dynamics p(x_{t+1}|x_t, u_t), and Q_{u,ut} is the quadratic term in the Q-function at time step t. All of these terms can be obtained from a standard LQR backward pass (Li and Todorov, 2004), which we summarize below.

Recall that the estimated linear-Gaussian dynamics have the form p(x_{t+1}|x_t, u_t) = N(f_{xt} x_t + f_{ut} u_t + f_{ct}, F_t). The quadratic cost approximation has the form
c̃(x_t, u_t) ≈ (1/2) [x_t; u_t]^T c̃_{xu,xut} [x_t; u_t] + [x_t; u_t]^T c̃_{xut} + const,
where subscripts denote derivatives, e.g. c̃_{xut} is the gradient of c̃ with respect to [x_t; u_t], while c̃_{xu,xut} is the Hessian.⁴ Under this model of the dynamics and cost function, the
4. We assume that all Taylor expansions here are recentered around zero. Otherwise, the point around which the derivatives are computed must be subtracted from xt and ut in all of these equations.
optimal controller can be computed by recursively computing the quadratic Q-function and value function, starting with the last time step. These functions are given by
V(x_t) = (1/2) x_t^T V_{x,xt} x_t + x_t^T V_{xt} + const

Q(x_t, u_t) = (1/2) [x_t; u_t]^T Q_{xu,xut} [x_t; u_t] + [x_t; u_t]^T Q_{xut} + const
We can express them with the following recurrence, which is computed starting at the last time step t = T and moving backward through time:
Q_{xu,xut} = c̃_{xu,xut} + f_{xut}^T V_{x,xt+1} f_{xut}
Q_{xut} = c̃_{xut} + f_{xut}^T V_{xt+1} + f_{xut}^T V_{x,xt+1} f_{ct}
V_{x,xt} = Q_{x,xt} − Q_{u,xt}^T Q_{u,ut}^{-1} Q_{u,xt}
V_{xt} = Q_{xt} − Q_{u,xt}^T Q_{u,ut}^{-1} Q_{ut},
and the optimal control law is then given by g(x_t) = K_t x_t + k_t, where K_t = −Q_{u,ut}^{-1} Q_{u,xt} and k_t = −Q_{u,ut}^{-1} Q_{ut}. If, instead of simply minimizing the expected cost, we wish to optimize the maximum entropy objective in Equation (4), the optimal controller is instead linear-Gaussian, with the solution given by p(u_t|x_t) = N(K_t x_t + k_t, Q_{u,ut}^{-1}), as shown in prior work (Levine and Koltun, 2013a).
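For reference, the backward pass above can be sketched as follows. Argument names and shapes are our own conventions, and the code assumes the per-time-step cost expansion and fitted dynamics are precomputed.

```python
import numpy as np

def lqr_backward(fxu, fc, cxuxu, cxu, dx, du, T):
    """Standard LQR backward pass implementing the recurrence above.
    fxu[t]: (dx, dx+du) dynamics matrix [fx, fu]; fc[t]: (dx,) offset.
    cxuxu[t]: (dx+du, dx+du) cost Hessian; cxu[t]: (dx+du,) cost gradient.
    Returns feedback gains K_t, open-loop terms k_t, and Quu_t (whose
    inverse is the max-entropy policy covariance)."""
    Vxx, Vx = np.zeros((dx, dx)), np.zeros(dx)
    K, k, Quu_all = [None] * T, [None] * T, [None] * T
    for t in reversed(range(T)):
        Qxuxu = cxuxu[t] + fxu[t].T @ Vxx @ fxu[t]
        Qxu = cxu[t] + fxu[t].T @ (Vx + Vxx @ fc[t])
        Qxx, Quu = Qxuxu[:dx, :dx], Qxuxu[dx:, dx:]
        Qux = Qxuxu[dx:, :dx]
        Qx, Qu = Qxu[:dx], Qxu[dx:]
        K[t] = -np.linalg.solve(Quu, Qux)   # feedback term
        k[t] = -np.linalg.solve(Quu, Qu)    # open-loop term
        # Value function recurrence
        Vxx = Qxx - Qux.T @ np.linalg.solve(Quu, Qux)
        Vx = Qx - Qux.T @ np.linalg.solve(Quu, Qu)
        Quu_all[t] = Quu
    return K, k, Quu_all
```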
# Appendix B. Experimental Setup Details
In this appendix, we present a detailed summary of the experimental setup for our simulated and real-world experiments.
# B.1 Simulated Experiment Details
All of the simulated experiments used the MuJoCo simulation package (Todorov et al., 2012), with simulated frictional contacts and torque motors at the joints used for actuation. Although no control or state noise was added during simulation, noise was injected naturally by the linear-Gaussian controllers. The linear-Gaussian controllers p_i(u_t|x_t) were initialized to stay near the initial state x_1 using linear feedback based on a proportional-derivative control law for all tasks, except for the octopus arm, where p_i(u_t|x_t) was initialized to be zero mean with a fixed spherical covariance, and the walker, which was initialized to track a demonstration trajectory with proportional-derivative feedback. The walker was the only task that used a demonstration, as described previously. We describe the details of each system below.
Peg insertion: The 2D peg insertion task has 6 state dimensions (joint angles and angular velocities) and 2 action dimensions. The 3D version of the task has 12 state dimensions, since the arm has 3 degrees of freedom at the shoulder, 1 at the elbow, and 2 at the wrist. Trials were 8 seconds in length and simulated at 100 Hz, resulting in 800 time steps per rollout. The cost function is given by
ℓ(x_t, u_t) = (1/2) w_u ‖u_t‖² + w_p ℓ₁₂(p_{x_t} − p*),
where p_{x_t} is the position of the end effector for state x_t, p* is the desired end effector position at the bottom of the slot, and the norm ℓ₁₂(z) is given by (1/2)‖z‖² + √(α + ‖z‖²), which corresponds to the sum of an ℓ₂ and soft ℓ₁ norm. We use this norm to encourage the peg to precisely reach the target position at the bottom of the hole, but to also receive a larger penalty when far away. The task also works well in 2D with a simple ℓ₂ penalty, though we found that the 3D version of the task takes longer to insert the peg all the way into the hole without the ℓ₁-like square root term. The weights were set to w_u = 10⁻⁶ and w_p = 1. Initial states were chosen by moving the shoulder of the arm relative to the hole, with four equally spaced starting states in a 20 cm region for the 2D arm, and four random starting states in a 10 cm radius for the 3D arm.
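A minimal sketch of this cost follows; α stands in for the small constant inside the square root, whose value is not stated in this excerpt.

```python
import numpy as np

def l12_norm(z, alpha=1e-5):
    """The l12(z) = 0.5*||z||^2 + sqrt(alpha + ||z||^2) penalty above;
    alpha is an assumed value, keeping the square root smooth at zero."""
    sq = np.dot(z, z)
    return 0.5 * sq + np.sqrt(alpha + sq)

def peg_cost(u, p, p_star, wu=1e-6, wp=1.0):
    """Peg-insertion cost l(x_t, u_t) from the equation above."""
    return 0.5 * wu * np.dot(u, u) + wp * l12_norm(p - p_star)
```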
Octopus arm: The octopus arm consists of six four-sided chambers. Each edge of each chamber is a simulated muscle, and actions correspond to contracting or relaxing the muscle. The state space consists of the positions and velocities of the chamber vertices. The midpoint of one edge of the first chamber is fixed, resulting in a total of 25 degrees of freedom: the 2D positions of the 12 unconstrained points, and the orientation of the first edge. Including velocities, the total dimensionality of the state space is 50. The cost function depends on the activation of the muscles and distance between the tip of the arm and the target point, in the same way as for peg insertion. The weights are set to w_u = 10⁻³ and w_p = 1.
Swimmer: The swimmer consists of 3 links and 5 degrees of freedom, including the global position and orientation which, together with the velocities, produces a 10 dimensional state space. The swimmer has 2 action dimensions corresponding to the torques between joints. The simulation applied drag on each link of the swimmer to roughly simulate a fluid, allowing it to propel itself. The rollouts were 20 seconds in length at 20 Hz, resulting in 400 time steps per rollout. The cost function for the swimmer is given by
ℓ(x_t, u_t) = (1/2) w_u ‖u_t‖² + (1/2) w_v (v_{x_t} − v*)²,
where v_{x_t} is the horizontal velocity, v* = 2.0 m/s, and the weights were w_u = 2·10⁻⁵ and w_v = 1.
Walker: The bipedal walker consists of a torso and two legs, each with three links, for a total of 9 degrees of freedom and 18 dimensions, with velocity, and 6 action dimensions. The simulation ran for 5 seconds at 100 Hz, for a total of 500 time steps. The cost function is given by
ℓ(x_t, u_t) = (1/2) w_u ‖u_t‖² + (1/2) w_v (v_{x_t} − v*)² + (1/2) w_h (p_{y,x_t} − p_y*)²,
where v_{x_t} is again the horizontal velocity, p_{y,x_t} is the vertical position of the root, v* = 2.1 m/s, p_y* = 1.1 m, and the weights were set to w_u = 10⁻⁴, w_v = 1, and w_h = 1.
# B.2 Robotic Experiment Details
All of the robotic experiments were conducted on a PR2 robot. The robot was controlled at 20 Hz via direct effort control,⁵ and camera images were recorded using the RGB camera on a PrimeSense Carmine sensor. The images were downsampled to 240 × 240 × 3. The learned policies controlled one 7 DoF arm of the robot, while the other arm was used to move objects in the scene to automatically vary the initial conditions. The camera was kept fixed in each experiment. Each episode was 5 seconds in length. For each task, the cost function required placing the object held in the gripper at a particular location (which might require, for example, inserting a shape into a shape sorting cube). The cost was given by the following equation:
ℓ(x_t, u_t) = w_{ℓ2} d_t² + w_{log} log(d_t² + α) + w_u ‖u_t‖²,
where d_t is the distance between three points in the space of the end-effector and their target positions,⁶ and the weights are set to w_{ℓ2} = 10⁻³, w_{log} = 1.0, and w_u = 10⁻². The quadratic term encourages moving the end-effector toward the target when it is far, while the logarithm term encourages placing it precisely at the target location, as discussed in prior work (Levine et al., 2015); a sketch of this cost appears after the task descriptions below. The bottle cap task used an additional cost term consisting of a quadratic penalty on the difference between the wrist angular velocity and a target velocity. For all of the tasks, we initialized all of the linear-Gaussian controllers p_i(u_t|x_t) to stay near the initial state x_1, with a diagonal noise covariance. The covariance of the noise was chosen to be proportional to a diagonal approximation of the inverse effective mass at each joint, as provided by the manufacturer of the PR2 robot, and the feedback controller was constructed using LQR, with an approximate linear model obtained from the same diagonal inverse mass matrix. The role of this initial controller was primarily to avoid dangerous actions during the first iteration. We discuss the particular setup for each experiment below:
Coat hanger: The coat hanger task required the robot to hang a coat hanger on a clothes rack. The coat hanger was grasped at one of two angles, about 35° apart, and the rack was positioned at three different distances from the robot during training, with differences of about 10 cm between each position. The rack was moved manually between these positions during training. A trial was considered successful if, when the coat hanger was released, it remained hanging on the rack rather than dropping to the ground.
Shape sorting cube: The shape sorting cube task required the robot to insert a red trapezoid into a trapezoidal hole on a shape sorting cube. During training, the cube was positioned at nine different positions, situated at the corners, edges, and middle of a rectangular region 16 cm × 10 cm in size. During training, the shape sorting cube was moved through the training positions by using the left arm. A trial was considered successful if the bottom face of the trapezoid was completely inside the shape sorting cube, such that if the robot were to release the trapezoid, it would fall inside the cube.
5. The PR2 robot does not provide for closed loop torque control, but instead supports an effort control interface that directly sets feedforward motor voltages. In practice, these voltages are roughly proportional to feedforward torques, but are also affected by friction and damping.
6. Three points fully define the pose of the end-effector. For the bottle cap task, which is radially symmetric, we use only two points.
Toy hammer: The hammer task required the robot to insert the claw of a toy hammer underneath a toy plastic nail, placing the claw around the base of the nail. The hammer was grasped at one of three angles, each 22.5° apart, for a total variation of 45°, and the nail was positioned at five positions, at the corners and center of a rectangular region 10 cm × 7 cm in size. During training, the toy tool bench containing the nail was moved using the left arm. A trial was considered successful if the tip of the claw of the hammer was at least under the centerline of the nail.
Bottle cap: The bottle cap task required the robot to screw a cap onto a bottle at various positions. The bottle was located at nine different positions, situated at the corners, edges, and middle of a rectangular region 16 cm × 10 cm in size, and the left arm was used to move the bottle through the training positions. A trial was considered successful if, after completion, the cap could not be removed from the bottle simply by pulling vertically.
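The manipulation cost referenced above can be sketched as follows; α is again an assumed constant, since its value is not given in this excerpt.

```python
import numpy as np

def robot_cost(d, u, w_l2=1e-3, w_log=1.0, w_u=1e-2, alpha=1e-5):
    """Manipulation cost from the equation above: a quadratic term for
    coarse guidance, a log term for precise placement, and an action
    penalty. d is the distance between the end-effector points and their
    targets; u is the action vector."""
    return w_l2 * d**2 + w_log * np.log(d**2 + alpha) + w_u * np.dot(u, u)
```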
# References
J. A. Bagnell and J. Schneider. Covariant policy search. In International Joint Conference on Artificial Intelligence (IJCAI), 2003.

B. Bakker, V. Zhumatiy, G. Gruener, and J. Schmidhuber. A robot that reinforcement-learns to identify and memorize important previous observations. In International Conference on Intelligent Robots and Systems (IROS), 2003.

G. Bekey and K. Goldberg. Neural Networks in Robotics. Springer US, 1992.

H. Benbrahim and J. A. Franklin. Biped dynamic walking using reinforcement learning. Robotics and Autonomous Systems, 22:283–302, 1997.

W. Böhmer, S. Grünewälder, Y. Shen, M. Musial, and K. Obermayer. Construction of approximation spaces for reinforcement learning. Journal of Machine Learning Research, 14(1):2067–2118, January 2013.

S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.

D. Ciresan, U. Meier, J. Masci, L. Gambardella, and J. Schmidhuber. Flexible, high performance convolutional neural networks for image classification. In International Joint Conference on Artificial Intelligence (IJCAI), 2011.

D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition (CVPR), 2012.

M. Deisenroth and C. Rasmussen. PILCO: a model-based and data-efficient approach to policy search. In International Conference on Machine Learning (ICML), 2011.

M. Deisenroth, C. Rasmussen, and D. Fox. Learning to control a low-cost manipulator using data-efficient reinforcement learning. In Robotics: Science and Systems (RSS), 2011.
M. Deisenroth, G. Neumann, and J. Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1–142, 2013.

J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), 2009.

G. Endo, J. Morimoto, T. Matsubara, J. Nakanishi, and G. Cheng. Learning CPG-based biped locomotion with a policy gradient method: Application to a humanoid robot. International Journal of Robotic Research, 27(2):213–228, 2008.

I. Endres and D. Hoiem. Category independent object proposals. In European Conference on Computer Vision (ECCV), 2010.

Y. Engel, P. Szabó, and D. Volkinshtein. Learning to control an octopus arm with Gaussian process temporal difference methods. In Advances in Neural Information Processing Systems (NIPS), 2005.

B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3), 1992.

C. Finn, X. Tan, Y. Duan, T. Darrell, S. Levine, and P. Abbeel. Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. arXiv preprint arXiv:1509.06113, 2015.

K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36:193–202, 1980.

T. Geng, B. Porr, and F. Wörgötter. Fast biped walking with a reflexive controller and realtime policy searching. In Advances in Neural Information Processing Systems (NIPS), 2006.

R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Conference on Computer Vision and Pattern Recognition (CVPR), 2014a.

R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014b.

V. Gullapalli. A stochastic reinforcement learning algorithm for learning real-valued functions. Neural Networks, 3(6):671–692, 1990.

V. Gullapalli. Skillful control under uncertainty via direct reinforcement learning. Reinforcement Learning and Robotics, 15(4):237–246, 1995.

X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems (NIPS), 2014.
R. Hadsell, P. Sermanet, J. B. A. Erkan, and M. Scoffier. Learning long-range vision for autonomous off-road driving. Journal of Field Robotics, pages 120–144, 2009.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In A Field Guide to Dynamic Recurrent Neural Networks. IEEE Press, 2001.

K. J. Hunt, D. Sbarbaro, R. Żbikowski, and P. J. Gawthrop. Neural networks for control systems: A survey. Automatica, 28(6):1083–1112, November 1992.

D. Jacobson and D. Mayne. Differential Dynamic Programming. Elsevier, 1970.

M. Jägersand, O. Fuentes, and R. C. Nelson. Experimental evaluation of uncalibrated visual servoing for precision manipulation. In International Conference on Robotics and Automation (ICRA), 1997.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

S. Jodogne and J. H. Piater. Closed-loop learning of visual control policies. Journal of Artificial Intelligence Research, 28:349–391, 2007.

R. Jonschkowski and O. Brock. State representation learning in robotics: Using prior knowledge about physical interaction. In Proceedings of Robotics: Science and Systems, 2014.

M. Kalakrishnan, L. Righetti, P. Pastor, and S. Schaal. Learning force control policies for compliant manipulation. In International Conference on Intelligent Robots and Systems (IROS), 2011.

S. M. Khansari-Zadeh and A. Billard. BM: An iterative algorithm to learn stable non-linear dynamical systems with Gaussian mixture models. In International Conference on Robotics and Automation (ICRA), 2010.

J. Kober and J. Peters. Learning motor primitives for robotics. In International Conference on Robotics and Automation (ICRA), 2009.

J. Kober, K. Muelling, O. Kroemer, C.H. Lampert, B. Schoelkopf, and J. Peters. Movement templates for learning of hitting and batting. In International Conference on Robotics and Automation (ICRA), 2010a.

J. Kober, E. Oztop, and J. Peters. Reinforcement learning to adjust robot movements to new situations. In Robotics: Science and Systems (RSS), 2010b.

J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. International Journal of Robotic Research, 32(11):1238–1274, 2013.
N. Kohl and P. Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion. In International Conference on Robotics and Automation (ICRA), 2004.

J. Koutník, G. Cuccu, J. Schmidhuber, and F. Gomez. Evolving large-scale neural networks for vision-based reinforcement learning. In Conference on Genetic and Evolutionary Computation, GECCO '13, 2013.

A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2012.

T. Lampe and M. Riedmiller. Acquiring visual servoing reaching and grasping skills using neural reinforcement learning. In International Joint Conference on Neural Networks (IJCNN), 2013.

A. Lanfranco, A. Castellanos, J. Desai, and W. Meyers. Robotic surgery: a current perspective. Annals of Surgery, 239(1):14, 2004.

S. Lange, M. Riedmiller, and A. Voigtlaender. Autonomous reinforcement learning on raw visual input data in a real world application. In International Joint Conference on Neural Networks, 2012.

Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems (NIPS), 1989.

Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521:436–444, May 2015.

H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In International Conference on Machine Learning (ICML), 2009.

Ian Lenz, Ross Knepper, and Ashutosh Saxena. DeepMPC: Learning deep latent features for model predictive control. In RSS, 2015a.

Ian Lenz, Honglak Lee, and Ashutosh Saxena. Deep learning for detecting robotic grasps. IJRR, 2015b.

S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems (NIPS), 2014.

S. Levine and V. Koltun. Guided policy search. In International Conference on Machine Learning (ICML), 2013a.

S. Levine and V. Koltun. Variational policy search via trajectory optimization. In Advances in Neural Information Processing Systems (NIPS), 2013b.

S. Levine and V. Koltun. Learning complex neural network policies with trajectory optimization. In International Conference on Machine Learning (ICML), 2014.

S. Levine, N. Wagener, and P. Abbeel. Learning contact-rich manipulation skills with guided policy search. In International Conference on Robotics and Automation (ICRA), 2015.
F. L. Lewis, A. Yesildirak, and S. Jagannathan. Neural Network Control of Robot Manipulators and Nonlinear Systems. Taylor & Francis, Inc., 1998.

W. Li and E. Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pages 222–229, 2004.

T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

R. Lioutikov, A. Paraschos, G. Neumann, and J. Peters. Sample-based information-theoretic stochastic optimal control. In International Conference on Robotics and Automation, 2014.

H. Mayer, F. Gomez, D. Wierstra, I. Nagy, A. Knoll, and J. Schmidhuber. A system for robotic heart surgery that learns to tie knots using recurrent neural networks. In International Conference on Intelligent Robots and Systems (IROS), 2006.

W. Meeussen, M. Wise, S. Glaser, S. Chitta, C. McGann, P. Mihelich, E. Marder-Eppstein, M. Muja, Victor Eruhimov, T. Foote, J. Hsu, R.B. Rusu, B. Marthi, G. Bradski, K. Konolige, B. Gerkey, and E. Berger. Autonomous door opening and plugging in with a personal robot. In International Conference on Robotics and Automation (ICRA), 2010.

V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. NIPS '13 Workshop on Deep Learning, 2013.

K. Mohta, V. Kumar, and K. Daniilidis. Vision based control of a quadrotor for perching on planes and lines. In International Conference on Robotics and Automation (ICRA), 2014.

I. Mordatch and E. Todorov. Combining the benefits of function approximation and trajectory optimization. In Robotics: Science and Systems (RSS), 2014.

A. Y. Ng, H. J. Kim, M. I. Jordan, and S. Sastry. Inverted autonomous helicopter flight via reinforcement learning. In International Symposium on Experimental Robotics, 2004.

R. Pascanu and Y. Bengio. On the difficulty of training recurrent neural networks. Technical Report arXiv:1211.5063, Universite de Montreal, 2012.

B. Pepik, M. Stark, P. Gehler, and B. Schiele. Teaching 3D geometry to deformable part models. In Computer Vision and Pattern Recognition (CVPR), 2012.

J. Peters and S. Schaal. Applying the episodic natural actor-critic architecture to motor primitive learning. In European Symposium on Artificial Neural Networks (ESANN), 2007.

J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008.
J. Peters, K. Mülling, and Y. Altün. Relative entropy policy search. In AAAI Conference on Artificial Intelligence, 2010.

Lerrel Pinto and Abhinav Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. CoRR, abs/1509.06825, 2015.

D. Pomerleau. ALVINN: an autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems (NIPS), 1989.

S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. Journal of Machine Learning Research, 15:627–635, 2011.

S. Ross, N. Melik-Barkhudarov, K. Shaurya Shankar, A. Wendel, D. Dey, J. A. Bagnell, and M. Hebert. Learning monocular reactive UAV control in cluttered natural environments. In International Conference on Robotics and Automation (ICRA), 2013.

R. Rubinstein and D. Kroese. The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning. Springer, 2004.

S. Savarese and L. Fei-Fei. 3D generic object categorization, localization and pose estimation. In International Conference on Computer Vision (ICCV), 2007.

J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.

P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Seventh International Conference on Document Analysis and Recognition, 2003.

F. Stulp and O. Sigaud. Path integral policy improvement with covariance matrix adaptation. In International Conference on Machine Learning (ICML), 2012.

Jaeyong Sung, Seok Hyun Jin, and Ashutosh Saxena. Robobarista: Object part based transfer of manipulation trajectories from crowd-sourcing in 3d pointclouds. CoRR, abs/1504.03071, 2015.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.

R. Tedrake, T. Zhang, and H. Seung. Stochastic policy gradient reinforcement learning on a simple 3d biped. In International Conference on Intelligent Robots and Systems (IROS), 2004.

E. Theodorou, J. Buchli, and S. Schaal. Reinforcement learning of motor skills in high dimensions. In International Conference on Robotics and Automation (ICRA), 2010.

E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In Advances in Neural Information Processing Systems (NIPS), 2014.

J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. International Journal of Computer Vision, 2013.

H. van Hoof, J. Peters, and G. Neumann. Learning of non-parametric control policies with high-dimensional state features. In International Conference on Artificial Intelligence and Statistics, 2015.

H. Wang and A. Banerjee. Bregman alternating direction method of multipliers. In Advances in Neural Information Processing Systems (NIPS), 2014.

M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems (NIPS), 2015.

R. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, May 1992.

W. J. Wilson, C. W. Williams Hulls, and G. S. Bell. Relative end-effector control using Cartesian position based visual servoing. IEEE Transactions on Robotics and Automation, 12(5), 1996.

K.A. Wyrobek, E.H. Berger, H.F.M. Van der Loos, and K. Salisbury. Towards a personal robotics development platform: Rationale and design of an intrinsically safe personal robot. In International Conference on Robotics and Automation (ICRA), 2008.

B. H. Yoshimi and P. K. Allen. Active, uncalibrated visual servoing. In International Conference on Robotics and Automation (ICRA), 1994.
| {
"id": "1509.06113"
} |
1504.00325 | Microsoft COCO Captions: Data Collection and Evaluation Server | In this paper we describe the Microsoft COCO Caption dataset and evaluation
server. When completed, the dataset will contain over one and a half million
captions describing over 330,000 images. For the training and validation
images, five independent human generated captions will be provided. To ensure
consistency in evaluation of automatic caption generation algorithms, an
evaluation server is used. The evaluation server receives candidate captions
and scores them using several popular metrics, including BLEU, METEOR, ROUGE
and CIDEr. Instructions for using the evaluation server are provided. | http://arxiv.org/pdf/1504.00325 | Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick | cs.CV, cs.CL | arXiv admin note: text overlap with arXiv:1411.4952 | null | cs.CV | 20150401 | 20150403 |
# Microsoft COCO Captions: Data Collection and Evaluation Server
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, C. Lawrence Zitnick
Abstract: In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
# 1 INTRODUCTION

The automatic generation of captions for images is a long-standing and challenging problem in artificial intelligence [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]. Research in this area spans numerous domains, such as computer vision, natural language processing, and machine learning. Recently there has been a surprising resurgence of interest in this area [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], due to the renewed interest in neural network learning techniques [31], [32] and increasingly large datasets [33], [34], [35], [7], [36], [37], [38].
In this paper, we describe our process of collecting captions for the Microsoft COCO Caption dataset, and the evaluation server we have set up to evaluate performance of different algorithms. The MS COCO caption dataset contains human generated captions for images contained in the Microsoft Common Objects in COntext (COCO) dataset [38]. Similar to previous datasets [7], [36], we collect our captions using Amazon's Mechanical Turk (AMT). Upon completion of the dataset it will contain over a million captions.
Fig. 1: Example images and captions from the Microsoft COCO Caption dataset. Example captions: "A large bus sitting next to a very tall building."; "The man at bat readies to swing at the pitch while the umpire looks on."; "Bunk bed with a narrow shelf sitting underneath it."; "A horse carrying a large load of hay and two people sitting on it."
When evaluating image caption generation algorithms, it is essential that a consistent evaluation protocol is used. Comparing results from different approaches can be difficult since numerous evaluation metrics exist [39], [40], [41], [42]. To further complicate matters, the implementations of these metrics often differ. To help alleviate these issues, we have built an evaluation server to enable consistency in evaluation of different caption generation approaches. Using the testing data, our evaluation server evaluates captions output by different approaches using numerous automatic metrics: BLEU [39], METEOR [41],
ROUGE [40] and CIDEr [42]. We hope to augment these results with human evaluations on an annual basis.
This paper is organized as follows: First we describe the data collection process. Next, we describe the caption evaluation server and the various metrics used. Human performance using these metrics is provided. Finally, the annotation format and instructions for using the evaluation server are described for those who wish to submit results. We conclude by discussing future directions and known issues.
- Xinlei Chen is with Carnegie Mellon University.
- Hao Fang is with the University of Washington.
- T.Y. Lin is with Cornell NYC Tech.
- Ramakrishna Vedantam is with Virginia Tech.
- Saurabh Gupta is with the University of California, Berkeley.
- P. Dollár is with Facebook AI Research.
- C. L. Zitnick is with Microsoft Research, Redmond.
# 2 DATA COLLECTION
In this section we describe how the data is gathered for the MS COCO captions dataset. For images, we use the dataset collected by Microsoft COCO [38]. These images are split into training, validation and testing sets.
The images were gathered by searching for pairs of 80 object categories and various scene types on Flickr. The goal of the MS COCO image collection process was to gather images containing multiple objects in their natural context. Given the visual complexity of most images in the dataset, they pose an interesting and difficult challenge for image captioning.
For generating a dataset of image captions, the same training, validation and testing sets were used as in the original MS COCO dataset. Two datasets were collected. The first dataset, MS COCO c5, contains five reference captions for every image in the MS COCO training, validation and testing datasets. The second dataset, MS COCO c40, contains 40 reference sentences for a randomly chosen 5,000 images from the MS COCO testing dataset. MS COCO c40 was created since many automatic evaluation metrics achieve higher correlation with human judgement when given more reference sentences [42]. MS COCO c40 may be expanded to include the MS COCO validation dataset in the future.
Our process for gathering captions received significant inspiration from the work of Young et al. [36] and Hodosh et al. [7], who collected captions on Flickr images using Amazon's Mechanical Turk (AMT). Each of our captions is also generated using human subjects on AMT. Each subject was shown the user interface in Figure 2. The subjects were instructed to:
- Describe all the important parts of the scene.
- Do not start the sentences with "There is".
- Do not describe unimportant details.
- Do not describe things that might have happened in the future or past.
- Do not describe what a person might say.
- Do not give people proper names.
- The sentences should contain at least 8 words.
The number of captions gathered is 413,915 captions for 82,783 images in training, 202,520 captions for 40,504 images in validation and 379,249 captions for 40,775 images in testing, including 179,189 for MS COCO c5 and 200,060 for MS COCO c40. For each testing image, we collected one additional caption to compute the scores of human performance for comparing scores of machine generated captions. The total number of collected captions is 1,026,459. We plan to collect captions for the MS COCO 2015 dataset when it is released, which should approximately double the size of the caption dataset. The AMT interface may be obtained from the MS COCO website.
# 3 CAPTION EVALUATION

In this section we describe the MS COCO caption evaluation server. Instructions for using the evaluation server are provided in Section 5. As input, the evaluation server receives candidate captions for both the validation and testing datasets in the format specified in Section 5. The validation and test images are provided to the submitter. However, the human generated reference sentences
Fig. 2: Example user interface for the caption gathering task.
are only provided for the validation set. The reference sentences for the testing set are kept private to reduce the risk of overfitting.
Numerous evaluation metrics are computed on both MS COCO c5 and MS COCO c40. These include BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, METEOR and CIDEr-D. The details of these metrics are described next.
# 3.1 Tokenization and preprocessing
Both the candidate captions and the reference captions are pre-processed by the evaluation server. To tokenize the captions, we use the Stanford PTBTokenizer in Stanford CoreNLP tools (version 3.4.1) [43], which mimics Penn Treebank 3 tokenization. In addition, punctuations¹ are removed from the tokenized captions.
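For illustration, the preprocessing can be approximated as below; NLTK's Treebank tokenizer stands in for the Stanford PTBTokenizer, and the punctuation set mirrors the footnote's list, so this is an approximation rather than the server's exact pipeline.

```python
from nltk.tokenize import TreebankWordTokenizer

# Punctuation tokens dropped after tokenization (PTB bracket codes included)
PUNCT = {"''", "``", "'", "'", "-LRB-", "-RRB-", "-LCB-", "-RCB-",
         ".", "?", "!", ",", ":", "-", "--", "...", ";"}

def preprocess(caption):
    """Tokenize a caption PTB-style and drop punctuation tokens."""
    tokens = TreebankWordTokenizer().tokenize(caption.lower())
    return [t for t in tokens if t not in PUNCT]

print(preprocess("A man rides a horse, holding an umbrella."))
# ['a', 'man', 'rides', 'a', 'horse', 'holding', 'an', 'umbrella']
```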
# 3.2 Evaluation metrics

Our goal is to automatically evaluate, for an image I_i, the quality of a candidate caption c_i given a set of reference captions S_i = {s_{i1}, . . . , s_{im}} ∈ S. The caption sentences are represented using sets of n-grams, where an n-gram ω_k ∈ Ω is a set of one or more ordered words. In this paper we explore n-grams with one to four words. No stemming is performed on the words. The number of times an n-gram ω_k occurs in a sentence s_{ij} is denoted h_k(s_{ij}), or h_k(c_i) for the candidate sentence c_i ∈ C.
# 3.3 BLEU
BLEU [39] is a popular machine translation metric that analyzes the co-occurrences of n-grams between the candidate and reference sentences. It computes a corpus-level clipped n-gram precision between sentences as follows:
CP_n(C, S) = Σ_i Σ_k min(h_k(c_i), max_j h_k(s_{ij})) / Σ_i Σ_k h_k(c_i) (1)
1. The full list of punctuations: {“, ”, ‘, ’, -LRB-, -RRB-, -LCB-, -RCB-, ., ?, !, ,, :, -, --, ..., ;}.
where k indexes the set of possible n-grams of length n. The clipped precision metric limits the number of times an n-gram may be counted to the maximum number of times it is observed in a single reference sentence. Note that CP_n is a precision score and it favors short sentences. So a brevity penalty is also used:
b(C, S) = 1 if l_C > l_S; e^{1 − l_S/l_C} if l_C ≤ l_S (2)
where l_C is the total length of the candidate sentences c_i and l_S is the corpus-level effective reference length. When there are multiple references for a candidate sentence, we choose to use the closest reference length for the brevity penalty.
The overall BLEU score is computed using a weighted geometric mean of the individual n-gram precision:
BLEU_N(C, S) = b(C, S) exp( Σ_{n=1}^N w_n log CP_n(C, S) ) (3)
where N = 1, 2, 3, 4 and w_n is typically held constant for all n.
BLEU has shown good performance for corpus-level comparisons over which a high number of n-gram matches exist. However, at a sentence level the n-gram matches for higher n rarely occur. As a result, BLEU performs poorly when comparing individual sentences.
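A sketch of the corpus-level clipped precision in Eq. (1), written for illustration rather than as the evaluation server's implementation:

```python
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def clipped_precision(cands, refs_list, n):
    """Corpus-level clipped n-gram precision CP_n. cands is a list of token
    lists; refs_list[i] is the list of reference token lists for candidate i."""
    num = den = 0
    for cand, refs in zip(cands, refs_list):
        h_c = ngrams(cand, n)
        max_ref = Counter()
        for ref in refs:                       # max_j h_k(s_ij) per n-gram
            for g, c in ngrams(ref, n).items():
                max_ref[g] = max(max_ref[g], c)
        num += sum(min(c, max_ref[g]) for g, c in h_c.items())
        den += sum(h_c.values())
    return num / max(den, 1)
```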
# 3.4 ROUGE
ROUGE [40] is a set of evaluation metrics designed to evaluate text summarization algorithms.
1) ROUGE_N: The first ROUGE metric computes a simple n-gram recall over all reference summaries given a candidate sentence:

ROUGE_N(c_i, S_i) = Σ_j Σ_k min(h_k(c_i), h_k(s_{ij})) / Σ_j Σ_k h_k(s_{ij}) (4)

2) ROUGE_L: ROUGE_L uses a measure based on the Longest Common Subsequence (LCS). An LCS is a set of words shared by two sentences which occur in the same order. However, unlike n-grams, there may be words in between the words that create the LCS. Given the length l(c_i, s_{ij}) of the LCS between a pair of sentences, ROUGE_L is found by computing an F-measure:

R_l = max_j l(c_i, s_{ij}) / |s_{ij}| (5)

P_l = max_j l(c_i, s_{ij}) / |c_i| (6)

ROUGE_L(c_i, S_i) = (1 + β²) R_l P_l / (R_l + β² P_l) (7)
R_l and P_l are the recall and precision of the LCS. β is usually set to favor recall (β = 1.2). Since n-grams are implicit in this measure due to the use of the LCS, they need not be specified.

3) ROUGE_S: The final ROUGE metric uses skip bi-grams instead of the LCS or n-grams. Skip bi-grams are pairs of ordered words in a sentence. However, similar to the LCS, words may be skipped between pairs of words. Thus, a sentence with 4 words would have C(4,2) = 6 skip bi-grams. Precision and recall are again incorporated to compute an F-measure score. If f_k(s_{ij}) is the skip bi-gram count for sentence s_{ij}, ROUGE_S is computed as:
R_s = max_j Σ_k min(f_k(c_i), f_k(s_{ij})) / Σ_k f_k(s_{ij}) (8)

P_s = max_j Σ_k min(f_k(c_i), f_k(s_{ij})) / Σ_k f_k(c_i) (9)

ROUGE_S(c_i, S_i) = (1 + β²) R_s P_s / (R_s + β² P_s) (10)
Skip bi-grams are capable of capturing long range sentence structure. In practice, skip bi-grams are computed so that the component words occur at a distance of at most 4 from each other.
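The skip bi-gram counting and the F-measure of Eqs. (8)-(10) can be sketched as follows; the distance-4 restriction is implemented with a simple index gap, which is our interpretation of the constraint.

```python
from collections import Counter
from itertools import combinations

def skip_bigrams(tokens, max_gap=4):
    """Ordered word pairs with component words at most max_gap apart."""
    return Counter((tokens[i], tokens[j])
                   for i, j in combinations(range(len(tokens)), 2)
                   if j - i <= max_gap)

def rouge_s(cand, refs, beta=1.2, max_gap=4):
    """ROUGE_S F-measure from Eqs. (8)-(10); a sketch, not the official code."""
    f_c = skip_bigrams(cand, max_gap)
    r_s = p_s = 0.0
    for ref in refs:
        f_r = skip_bigrams(ref, max_gap)
        overlap = sum(min(c, f_r[g]) for g, c in f_c.items())
        r_s = max(r_s, overlap / max(sum(f_r.values()), 1))
        p_s = max(p_s, overlap / max(sum(f_c.values()), 1))
    if r_s + p_s == 0:
        return 0.0
    return (1 + beta**2) * r_s * p_s / (r_s + beta**2 * p_s)
```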
# 3.5 METEOR
METEOR [41] is calculated by generating an alignment between the words in the candidate and reference sentences, with an aim of 1:1 correspondence. This alignment is computed while minimizing the number of chunks, ch, of contiguous and identically ordered tokens in the sentence pair. The alignment is based on exact token matching, followed by WordNet synonyms [44], stemmed tokens and then paraphrases. Given a set of alignments, m, the METEOR score is the harmonic mean of precision P_m and recall R_m between the best scoring reference and candidate:
Pen = γ (ch / m)^θ (11)

F_mean = P_m R_m / (α P_m + (1 − α) R_m) (12)

P_m = |m| / Σ_k h_k(c_i) (13)

R_m = |m| / Σ_k h_k(s_{ij}) (14)

METEOR = (1 − Pen) F_mean (15)
Thus, the final METEOR score includes a penalty Pen based on the chunkiness of resolved matches and a harmonic mean term that gives the quality of the resolved matches. The default parameters α, γ and θ are used for this evaluation. Note that, similar to BLEU, statistics of precision and recall are first aggregated over the entire corpus, which are then combined to give the corpus-level METEOR score.
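Since the full METEOR aligner (synonyms, stems, paraphrases, minimum-chunk search) is involved, the sketch below keeps only exact matches and a greedy alignment; it illustrates Eqs. (11)-(15) but will not reproduce official METEOR scores.

```python
def meteor_exact(cand, ref, alpha=0.9, gamma=0.5, theta=3.0):
    """Greatly simplified METEOR on token lists, using only exact-token
    alignment. alpha/gamma/theta follow the metric's usual defaults."""
    # Greedy 1:1 alignment on exact matches, in candidate order
    used, align = set(), []
    for i, tok in enumerate(cand):
        for j, rtok in enumerate(ref):
            if j not in used and tok == rtok:
                used.add(j)
                align.append((i, j))
                break
    m = len(align)
    if m == 0:
        return 0.0
    # Chunks: maximal runs that are contiguous and in order on both sides
    ch = 1
    for (i0, j0), (i1, j1) in zip(align, align[1:]):
        if i1 != i0 + 1 or j1 != j0 + 1:
            ch += 1
    p_m, r_m = m / len(cand), m / len(ref)
    f_mean = p_m * r_m / (alpha * p_m + (1 - alpha) * r_m)
    pen = gamma * (ch / m) ** theta
    return (1 - pen) * f_mean
```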
# 3.6 CIDEr
The CIDEr metric [42] measures consensus in image captions by performing a Term Frequency Inverse Document Frequency (TF-IDF) weighting for each n-gram. The number of times an n-gram ω_k occurs in a reference sentence s_{ij} is denoted by h_k(s_{ij}), or h_k(c_i) for the candidate sentence c_i. CIDEr computes the TF-IDF weighting g_k(s_{ij}) for each n-gram ω_k using:
g_k(s_{ij}) = [ h_k(s_{ij}) / Σ_{ω_l∈Ω} h_l(s_{ij}) ] log( |I| / Σ_{I_p∈I} min(1, Σ_q h_k(s_{pq})) ) (16)
where Ω is the vocabulary of all n-grams and I is the set of all images in the dataset. The first term measures the TF of each n-gram ω_k, and the second term measures the rarity of ω_k using its IDF. Intuitively, TF places higher weight on n-grams that frequently occur in the reference sentences describing an image, while IDF reduces the weight of n-grams that commonly occur across all descriptions. That is, the IDF provides a measure of word saliency by discounting popular words that are likely to be less visually informative. The IDF is computed using the logarithm of the number of images in the dataset |I| divided by the number of images for which ω_k occurs in any of its reference sentences.
The CIDEr_n score for n-grams of length n is computed using the average cosine similarity between the candidate sentence and the reference sentences, which accounts for both precision and recall:
CIDEr_n(c_i, S_i) = (1/m) Σ_j [ g^n(c_i) · g^n(s_{ij}) ] / [ ‖g^n(c_i)‖ ‖g^n(s_{ij})‖ ] (17)
where g^n(c_i) is a vector formed by g_k(c_i) corresponding to all n-grams of length n and ‖g^n(c_i)‖ is the magnitude of the vector g^n(c_i). Similarly for g^n(s_{ij}).
Higher order (longer) n-grams are used to capture grammatical properties as well as richer semantics. Scores from n-grams of varying lengths are combined as follows:
CIDEr(c_i, S_i) = Σ_{n=1}^N w_n CIDEr_n(c_i, S_i) (18)

Uniform weights w_n = 1/N are used, with N = 4.
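A sketch of CIDEr_n and the combination in Eq. (18); the document-frequency table and image count are assumed to be precomputed over the full dataset, and this is an illustration rather than the official implementation.

```python
import numpy as np
from collections import Counter

def cider_n(cand, refs, doc_freq, num_images, n):
    """CIDEr_n for one image: TF-IDF-weighted n-gram vectors, averaged
    cosine similarity against each reference. doc_freq maps an n-gram to
    the number of images whose references contain it."""
    def tfidf(tokens):
        h = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
        total = max(sum(h.values()), 1)
        return {g: (c / total) * np.log(num_images / max(doc_freq.get(g, 1), 1))
                for g, c in h.items()}
    g_c = tfidf(cand)
    norm_c = np.sqrt(sum(v * v for v in g_c.values()))
    score = 0.0
    for ref in refs:
        g_r = tfidf(ref)
        norm_r = np.sqrt(sum(v * v for v in g_r.values()))
        dot = sum(v * g_r.get(g, 0.0) for g, v in g_c.items())
        if norm_c > 0 and norm_r > 0:
            score += dot / (norm_c * norm_r)
    return score / len(refs)

def cider(cand, refs, doc_freq, num_images, N=4):
    """Combine CIDEr_n scores with uniform weights w_n = 1/N."""
    return sum(cider_n(cand, refs, doc_freq, num_images, n)
               for n in range(1, N + 1)) / N
```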
CIDEr-D is a modification to CIDEr to make it more robust to gaming. Gaming refers to the phenomenon where a sentence that is poorly judged by humans tends to score highly with an automated metric. To defend the CIDEr metric against gaming effects, [42] add clipping and a length-based Gaussian penalty to the CIDEr metric described above. This results in the following equations for CIDEr-D:
TABLE 1: Human Agreement for Image Captioning: Various metrics when benchmarking a human generated caption against ground truth captions.

| Metric  | MS COCO c5 | MS COCO c40 |
|---------|------------|-------------|
| BLEU-1  | 0.663      | 0.880       |
| BLEU-2  | 0.469      | 0.744       |
| BLEU-3  | 0.321      | 0.603       |
| BLEU-4  | 0.217      | 0.471       |
| METEOR  | 0.252      | 0.335       |
| ROUGE-L | 0.484      | 0.626       |
| CIDEr-D | 0.854      | 0.910       |
CIDEr-D_n(c_i, S_i) = (10/m) Σ_j e^{−(l(c_i) − l(s_{ij}))²/(2σ²)} [ min(g^n(c_i), g^n(s_{ij})) · g^n(s_{ij}) ] / [ ‖g^n(c_i)‖ ‖g^n(s_{ij})‖ ] (19)
where l(c_i) and l(s_{ij}) denote the lengths of the candidate and reference sentences respectively, and σ = 6 is used. A factor of 10 is used in the numerator to make the CIDEr-D scores numerically similar to the other metrics.
The final CIDEr-D metric is computed in a similar manner to CIDEr (analogous to Eqn. 18):
CIDEr-D(c_i, S_i) = Σ_{n=1}^N w_n CIDEr-D_n(c_i, S_i) (20)
Note that, just like the BLEU and ROUGE metrics, CIDEr-D does not use stemming. We adopt the CIDEr-D metric for the evaluation server.
# 4 HUMAN PERFORMANCE

In this section, we study the agreement among humans at this task. We start by analyzing the inter-human agreement for image captioning (Section 4.1), and then analyze human agreement for the word prediction sub-task and provide a simple model which explains human agreement for this sub-task (Section 4.2).
# 4.1 Human Agreement for Image Captioning
When examining human agreement on captions, it becomes clear that there are many equivalent ways to say essentially the same thing. We quantify this by conducting the following experiment: we collect one additional human caption for each image in the test set and treat this caption as the prediction. Using the MS COCO caption evaluation server we compute the various metrics. The results are tabulated in Table 1.
# 4.2 Human Agreement for Word Prediction
We can do a similar analysis for human agreement at the sub-task of word prediction. Consider the task of tagging the image with words that occur in the captions. For this task, we can compute the human precision and recall for
TABLE 2: Model definitions.

o = object or visual concept
w = word associated with o
n = total number of images
k = number of captions per image
q = P(o = 1)
p = P(w = 1|o = 1)
a given word w by benchmarking words used in the (k+1)st human caption with respect to words used in the first k reference captions. Note that we use weighted versions of precision and recall, where each negative image has a weight of 1 and each positive image has a weight equal to the number of captions containing the word w. Human precision (H_p) and human recall (H_r) can be computed from the counts of how many subjects out of k use the word w to describe a given image over the whole dataset.
We plot H_p versus H_r for a set of nouns, verbs and adjectives, and all 1000 words considered, in Figure 3. Nouns referring to animals like "elephant" have a high recall, which means that if an "elephant" exists in the image, a subject is likely to talk about it (which makes intuitive sense, given "elephant" images are somewhat rare, and there are no alternative words that could be used instead of "elephant"). On the other hand, an adjective like "bright" is used inconsistently and hence has low recall. Interestingly, words with high recall also have high precision. Indeed, all the points of human agreement appear to lie on a one-dimensional curve in the two-dimensional precision-recall space.
This observation motivates us to propose a simple model for when subjects use a particular word w to describe an image. Let o denote an object or visual concept associated with word w, n be the total number of images, and k be the number of reference captions. Next, let q = P(o = 1) be the probability that object o exists in an image. For clarity these definitions are summarized in Table 2. We make two simplifications. First, we ignore image-level saliency and instead focus on word-level saliency. Specifically, we only model p = P(w = 1|o = 1), the probability a subject uses w given that o is in the image, without conditioning on the image itself. Second, we assume that P(w = 1|o = 0) = 0, i.e. that a subject does not use w unless o is in the image. As we will show, even with these simplifications our model suffices to explain the empirical observations in Figure 3 to a reasonable degree of accuracy.
Given these assumptions, we can model human precision Hp and recall Hr for a word w given only p and k. First, given k captions per image, we need to compute the expected number of (1) captions containing w (cw), (2) true positives (tp), and (3) false positives (fp). Note that in our definition there can be up to k true positives per image (if cw = k, i.e. each of the k captions contains word w) but at most 1 false positive (if none of the k captions contains w). The expectations, in terms of k, p,
and q are:
$$
\begin{aligned}
E[c_w] &= \sum_{i=1}^{k} P(w_i = 1) = \sum_i P(w_i = 1 \mid o = 1)P(o = 1) + \sum_i P(w_i = 1 \mid o = 0)P(o = 0) \\
       &= kpq + 0 = kpq \\
E[tp]  &= \sum_{i=1}^{k} P(w_i = 1 \wedge w_{k+1} = 1) = \sum_i P(w_i = 1 \wedge w_{k+1} = 1 \mid o = 1)P(o = 1) + \sum_i P(w_i = 1 \wedge w_{k+1} = 1 \mid o = 0)P(o = 0) \\
       &= kp^2q + 0 = kp^2q \\
E[fp]  &= P(w_1 \ldots w_k = 0 \wedge w_{k+1} = 1) = P(o = 1 \wedge w_1 \ldots w_k = 0 \wedge w_{k+1} = 1) + P(o = 0 \wedge w_1 \ldots w_k = 0 \wedge w_{k+1} = 1) \\
       &= q(1-p)^k p + 0 = q(1-p)^k p
\end{aligned}
$$
In the above, wi = 1 denotes that w appeared in the ith caption. Note that we are also assuming independence between subjects conditioned on o. We can now define model precision and recall as:
$$
H_p = \frac{nE[tp]}{nE[tp] + nE[fp]} = \frac{pk}{pk + (1-p)^k}, \qquad
H_r = \frac{nE[tp]}{nE[c_w]} = p
$$
Note that these expressions are independent of q and only depend on p. Interestingly, because of the use of weighted precision and recall, the recall for a category comes out to be exactly equal to p, the probability a subject uses w given that o is in the image.
We set k = 4 and vary p to plot Hp versus Hr, obtaining the curve shown in blue in Figure 3 (bottom left). The curve explains the observed data quite well, closely matching the precision-recall tradeoffs of the empirical data (although not perfectly). We can also reduce the number of captions from four, and look at how the empirical and predicted precision and recall change. Figure 3 (bottom right) shows this variation as we reduce the number of reference captions per image from four annotations to one. We see that the points of human agreement remain at the same recall value, but decrease in their precision, which is consistent with what the model predicts. Also, the human precision with infinite subjects will approach one, which is again reasonable given that a subject will only use the word w if the corresponding object is in the image (and in the presence of infinite subjects someone else will also use the word w).
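The closed-form curve itself is a one-liner; here is a sketch for tracing it (the function and variable names are ours):

```python
def model_pr(p, k=4):
    # Hp = kp / (kp + (1-p)^k) and Hr = p, independent of q
    return k * p / (k * p + (1.0 - p) ** k), p

# sweep p to trace the model curve of Figure 3 (bottom left)
curve = [model_pr(p / 100.0) for p in range(1, 100)]
```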
In fact, the fixed recall value can help us recover p, the probability that a subject will use the word w in describing the image given the object is present. Nouns like "elephant" and "tennis" have large p, which is reasonable. Verbs and adjectives, on the other hand, have smaller p values, which can be justified by the fact that a) subjects are less likely to describe attributes
Fig. 3: Precision-recall points for human agreement: we compute precision and recall by treating one human caption as the prediction and benchmarking it against the others to obtain points on the precision-recall curve. We plot these points for example nouns (top left), adjectives (top center), and verbs (top right), and for all words (bottom left). We also plot the fit of our model for human agreement against the empirical data (bottom left) and show how the human agreement changes with different numbers of captions being used (bottom right). We see that the human agreement point remains at the same recall value but dips in precision when using fewer captions.
of objects and b) subjects might use a different word (synonym) to describe the same attribute.
This analysis of human agreement also motivates using a different metric for measuring performance. We propose Precision at Human Recall (PHR) as a metric for measuring the performance of a vision system performing this task. Given that human recall for a particular word is fixed and precision varies with the number of annotations, we can look at system precision at human recall and compare it with human precision to report the performance of the vision system.
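A minimal sketch of reading PHR off a system's precision-recall curve; the interpolation choice (taking the first operating point that reaches human recall) is our assumption, not a specification from the text.

```python
def precision_at_human_recall(sys_prec, sys_rec, human_rec):
    # sys_prec/sys_rec: the system's precision-recall curve for word w,
    # with sys_rec sorted ascending; human_rec: the word's fixed Hr.
    for prec, rec in zip(sys_prec, sys_rec):
        if rec >= human_rec:
            return prec
    return 0.0  # the system never reaches human recall for this word
```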
5 EVALUATION SERVER INSTRUCTIONS

Directions on how to use the MS COCO caption evaluation server can be found on the MS COCO website. The evaluation server is hosted by CodaLab. To participate, a user account on CodaLab must be created. Participants need to generate results on both the validation and testing datasets. When training for the generation of results on the test dataset, the training and validation datasets may be used as the participant sees fit. That is, the validation dataset may be used for training if desired. However, when generating results on the validation set, we ask participants to only train on the training dataset, and only use the validation dataset
for tuning meta-parameters. Two JSON files should be created, corresponding to results on each dataset, in the following format:
[{
  "image_id" : int,
  "caption"  : str,
}]
The results may then be placed into a zip file and uploaded to the server for evaluation. Code is also provided on GitHub to evaluate results on the validation dataset without having to upload to the server. The number of submissions per user is limited to a fixed amount.
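For illustration, a result file in the required schema can be written as follows (the image ids and captions here are made up):

```python
import json

# toy stand-in for a captioning model's output; ids are hypothetical
predictions = {42: "a man riding a wave on a surfboard",
               73: "a plate of food on a wooden table"}

results = [{"image_id": image_id, "caption": caption}
           for image_id, caption in predictions.items()]

with open("captions_val_results.json", "w") as f:
    json.dump(results, f)
```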
# 6 DISCUSSION
Many challenges exist when creating an image caption dataset. As stated in [7], [42], [45] the captions generated by human subjects can vary significantly. However, even though two captions may be very different, they may be judged equally "good" by human subjects. Designing effective automatic evaluation metrics that are highly correlated with human judgment remains a difficult challenge [7], [42], [45], [46]. We hope that by releasing
results on the validation data, we can help enable future research in this area.
Since automatic evaluation metrics do not always correspond to human judgment, we hope to conduct experiments using human subjects to judge the quality of automatically generated captions: which are most similar to human captions, and whether they are grammatically correct [45], [42], [7], [4], [5]. This is essential to determining whether future algorithms are indeed improving, or whether they are merely overfitting to a specific metric. These human experiments will also allow us to evaluate the automatic evaluation metrics themselves, and see which ones are correlated with human judgment.
# REFERENCES
[1] K. Barnard and D. Forsyth, "Learning the semantics of words and pictures," in ICCV, vol. 2, 2001, pp. 408-415.

[2] K. Barnard, P. Duygulu, D. Forsyth, N. De Freitas, D. M. Blei, and M. I. Jordan, "Matching words and pictures," JMLR, vol. 3, pp. 1107-1135, 2003.

[3] V. Lavrenko, R. Manmatha, and J. Jeon, "A model for learning the semantics of pictures," in NIPS, 2003.

[4] G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg, "Baby talk: Understanding and generating simple image descriptions," in CVPR, 2011.

[5] M. Mitchell, X. Han, J. Dodge, A. Mensch, A. Goyal, A. Berg, K. Yamaguchi, T. Berg, K. Stratos, and H. Daumé III, "Midge: Generating image descriptions from computer vision detections," in EACL, 2012.

[6] A. Farhadi, M. Hejrati, M. A. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. Forsyth, "Every picture tells a story: Generating sentences from images," in ECCV, 2010.

[7] M. Hodosh, P. Young, and J. Hockenmaier, "Framing image description as a ranking task: Data, models and evaluation metrics," JAIR, vol. 47, pp. 853-899, 2013.

[8] P. Kuznetsova, V. Ordonez, A. C. Berg, T. L. Berg, and Y. Choi, "Collective generation of natural image descriptions," in ACL, 2012.

[9] Y. Yang, C. L. Teo, H. Daumé III, and Y. Aloimonos, "Corpus-guided sentence generation of natural images," in EMNLP, 2011.

[10] A. Gupta, Y. Verma, and C. Jawahar, "Choosing linguistics over vision to describe images," in AAAI, 2012.
[11] E. Bruni, G. Boleda, M. Baroni, and N.-K. Tran, "Distributional semantics in technicolor," in ACL, 2012.

[12] Y. Feng and M. Lapata, "Automatic caption generation for news images," TPAMI, vol. 35, no. 4, pp. 797-812, 2013.

[13] D. Elliott and F. Keller, "Image description using visual dependency representations," in EMNLP, 2013, pp. 1292-1302.

[14] A. Karpathy, A. Joulin, and F.-F. Li, "Deep fragment embeddings for bidirectional image sentence mapping," in NIPS, 2014.

[15] Y. Gong, L. Wang, M. Hodosh, J. Hockenmaier, and S. Lazebnik, "Improving image-sentence embeddings using large weakly annotated photo collections," in ECCV, 2014, pp. 529-545.

[16] R. Mason and E. Charniak, "Nonparametric method for data-driven image captioning," in ACL, 2014.

[17] P. Kuznetsova, V. Ordonez, T. Berg, and Y. Choi, "TreeTalk: Composition and compression of trees for image descriptions," TACL, vol. 2, pp. 351-362, 2014.

[18] K. Ramnath, S. Baker, L. Vanderwende, M. El-Saban, S. N. Sinha, A. Kannan, N. Hassan, M. Galley, Y. Yang, D. Ramanan, A. Bergamo, and L. Torresani, "AutoCaption: Automatic caption generation for personal photos," in WACV, 2014.

[19] A. Lazaridou, E. Bruni, and M. Baroni, "Is this a wampimuk? Cross-modal mapping between distributional semantics and the visual world," in ACL, 2014.

[20] R. Kiros, R. Salakhutdinov, and R. Zemel, "Multimodal neural language models," in ICML, 2014.
[21] J. Mao, W. Xu, Y. Yang, J. Wang, and A. L. Yuille, "Explain images with multimodal recurrent neural networks," arXiv preprint arXiv:1410.1090, 2014.

[22] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, "Show and tell: A neural image caption generator," arXiv preprint arXiv:1411.4555, 2014.

[23] A. Karpathy and L. Fei-Fei, "Deep visual-semantic alignments for generating image descriptions," arXiv preprint arXiv:1412.2306, 2014.

[24] R. Kiros, R. Salakhutdinov, and R. S. Zemel, "Unifying visual-semantic embeddings with multimodal neural language models," arXiv preprint arXiv:1411.2539, 2014.

[25] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, "Long-term recurrent convolutional networks for visual recognition and description," arXiv preprint arXiv:1411.4389, 2014.

[26] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. Platt et al., "From captions to visual concepts and back," arXiv preprint arXiv:1411.4952, 2014.

[27] X. Chen and C. L. Zitnick, "Learning a recurrent visual representation for image caption generation," arXiv preprint arXiv:1411.5654, 2014.

[28] R. Lebret, P. O. Pinheiro, and R. Collobert, "Phrase-based image captioning," arXiv preprint arXiv:1502.03671, 2015.

[29] R. Lebret, P. O. Pinheiro, and R. Collobert, "Simple image description generator via a linear phrase-based approach," arXiv preprint arXiv:1412.8419, 2014.

[30] A. Lazaridou, N. T. Pham, and M. Baroni, "Combining language and vision with a multimodal skip-gram model," arXiv preprint arXiv:1501.02598, 2015.
[31] A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, 2012.

[32] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.

[33] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in CVPR, 2009.

[34] M. Grubinger, P. Clough, H. Müller, and T. Deselaers, "The IAPR TC-12 benchmark: A new evaluation resource for visual information systems," in LREC Workshop on Language Resources for Content-based Image Retrieval, 2006.

[35] V. Ordonez, G. Kulkarni, and T. Berg, "Im2Text: Describing images using 1 million captioned photographs," in NIPS, 2011.

[36] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier, "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions," TACL, vol. 2, pp. 67-78, 2014.

[37] J. Chen, P. Kuznetsova, D. Warren, and Y. Choi, "Déjà image-captions: A corpus of expressive image descriptions in repetition," in NAACL, 2015.

[38] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," in ECCV, 2014.
[39] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: A method for automatic evaluation of machine translation," in ACL, 2002.

[40] C.-Y. Lin, "ROUGE: A package for automatic evaluation of summaries," in ACL Workshop, 2004.

[41] M. Denkowski and A. Lavie, "Meteor Universal: Language specific translation evaluation for any target language," in EACL Workshop on Statistical Machine Translation, 2014.
[42] R. Vedantam, C. L. Zitnick, and D. Parikh, "CIDEr: Consensus-based image description evaluation," arXiv preprint arXiv:1411.5726, 2014.

[43] C. D. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. J. Bethard, and D. McClosky, "The Stanford CoreNLP natural language processing toolkit," in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2014, pp. 55-60. [Online]. Available: http://www.aclweb.org/anthology/P/P14/P14-5010
[44] G. A. Miller, "WordNet: A lexical database for English," Communications of the ACM, vol. 38, no. 11, pp. 39-41, 1995.

[45] D. Elliott and F. Keller, "Comparing automatic evaluation measures for image description," in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, vol. 2, 2014, pp. 452-457.

[46] C. Callison-Burch, M. Osborne, and P. Koehn, "Re-evaluating the role of BLEU in machine translation research," in EACL, vol. 6, 2006, pp. 249-256.
1502.05698 | Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks | One long-term goal of machine learning research is to produce methods that
are applicable to reasoning and natural language, in particular building an
intelligent dialogue agent. To measure progress towards that goal, we argue for
the usefulness of a set of proxy tasks that evaluate reading comprehension via
question answering. Our tasks measure understanding in several ways: whether a
system is able to answer questions via chaining facts, simple induction,
deduction and many more. The tasks are designed to be prerequisites for any
system that aims to be capable of conversing with a human. We believe many
existing learning systems can currently not solve them, and hence our aim is to
classify these tasks into skill sets, so that researchers can identify (and
then rectify) the failings of their systems. We also extend and improve the
recently introduced Memory Networks model, and show it is able to solve some,
but not all, of the tasks. | http://arxiv.org/pdf/1502.05698 | Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, Tomas Mikolov | cs.AI, cs.CL, stat.ML | null | null | cs.AI | 20150219 | 20151231 | arXiv:1502.05698v10 [cs.AI] 31 Dec 2015
# Under review as a conference paper at ICLR 2016
TOWARDS AI-COMPLETE QUESTION ANSWERING: A SET OF PREREQUISITE TOY TASKS
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin & Tomas Mikolov Facebook AI Research 770 Broadway New York, USA {jase,abordes,spchopra,tmikolov,sashar,bartvm}@fb.com
# ABSTRACT
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
# 1 INTRODUCTION
There is a rich history of the use of synthetic tasks in machine learning, from the XOR problem which helped motivate neural networks (Minsky & Papert, 1969; Rumelhart et al., 1985), to circle and ring datasets that helped motivate some of the most well-known clustering and semi-supervised learning algorithms (Ng et al., 2002; Zhu et al., 2003), Mackey-Glass equations for time series (Müller et al., 1997), and so on. In fact some of the well known UCI datasets (Bache & Lichman, 2013) are synthetic as well (e.g., waveform). Recent work continues this trend. For example, in the area of developing learning algorithms with a memory component, synthetic datasets were used to help develop both the Neural Turing Machine of Graves et al. (2014) and the Memory Networks of Weston et al. (2014), the latter of which is relevant to this work.
One of the reasons for the interest in synthetic data is that it can be easier to develop new techniques using it. It is well known that working with large amounts of real data ("big data") tends to lead researchers to simpler models as "simple models and a lot of data trump more elaborate models based on less data" (Halevy et al., 2009). For example, N-grams for language modeling work well relative to existing competing methods, but are far from being a model that truly understands text. As researchers we can become stuck in local minima in algorithm space; development of synthetic data is one way to try and break out of that.
In this work we propose a framework and a set of synthetic tasks for the goal of helping to develop learning algorithms for text understanding and reasoning. While it is relatively difficult to automatically evaluate the performance of an agent in general dialogue, a long-term goal of AI, it is relatively easy to evaluate responses to input questions, i.e., the task of question answering (QA). Question answering is incredibly broad: more or less any task one can think of can be cast into this setup. This enables us to propose a wide-ranging set of different tasks, that test different capabilities of learning algorithms, under a common framework.
Our tasks are built with a uniï¬ed underlying simulation of a physical world, akin to a classic text adventure game (Montfort, 2005) whereby actors move around manipulating objects and interacting
with each other. As the simulation runs, grounded text and question answer pairs are simultaneously generated. Our goal is to categorize different kinds of questions into skill sets, which become our tasks. Our hope is that the analysis of performance on these tasks will help expose weaknesses of current models and help motivate new algorithm designs that alleviate these weaknesses. We further envision this as a feedback loop where new tasks can then be designed in response, perhaps in an adversarial fashion, in order to break the new models.
The tasks we design are detailed in Section 3, and the simulation used to generate them in Section 4. In Section 5 we give benchmark results of standard methods on our tasks, and analyse their successes and failures. In order to exemplify the kind of feedback loop between algorithm development and task development we envision, in Section A we propose a set of improvements to the recent Memory Network method, which has shown to give promising performance in QA. We show our proposed approach does indeed give improved performance on some tasks, but is still unable to solve some of them, which we consider as open problems.
# 2 RELATED WORK
Several projects targeting language understanding using QA-based strategies have recently emerged. Unlike tasks like dialogue or summarization, QA is easy to evaluate (especially in true/false or multiple choice scenarios) and hence makes it an appealing research avenue. The difficulty lies in the definition of questions: they must be unambiguously answerable by adult humans (or children), but still require some thinking. The Allen Institute for AI's flagship project ARISTO1 is organized around a collection of QA tasks derived from increasingly difficult science exams, at the 4th, 8th, and 12th grade levels. Richardson et al. (2013) proposed MCTest2, a set of 660 stories and associated questions intended for research on the machine comprehension of text. Each question requires the reader to understand different aspects of the story.
These two initiatives go in a promising direction, but interpreting the results on these benchmarks remains complicated. Indeed, no system has yet been able to fully solve the proposed tasks, and since many sub-tasks need to be solved to answer any of their questions (coreference, deduction, use of common sense, etc.), it is difficult to clearly identify capabilities and limitations of these systems and hence to propose improvements and modifications. As a result, conclusions drawn from these projects are not much clearer than those coming from more traditional works on QA over large-scale Knowledge Bases (Berant et al., 2013; Fader et al., 2014). Besides, the best performing systems are based on hand-crafted patterns and features, and/or statistics acquired on very large corpora. It is difficult to argue that such systems actually understand language and are not simply light upgrades of traditional information extraction methods (Yao et al., 2014). The system of Berant et al. (2014) is more evolved since it builds a structured representation of a text and of a question to answer. Despite its potential this method remains highly domain specific and relies on a lot of prior knowledge.
Based on these observations, we chose to conceive a collection of much simpler QA tasks, with the main objective that failure or success of a system on any of them can unequivocally provide feedback on its capabilities. In that, we are close to the Winograd Schema Challenge (Levesque et al., 2011), which is organized around simple statements followed by a single binary choice question such as: "Joan made sure to thank Susan for all the help she had received. Who had received the help? Joan or Susan?". In this challenge, and our tasks, it is straightforward to interpret results. Yet, where the Winograd Challenge is mostly centered around evaluating if systems can acquire and make use of background knowledge that is not expressed in the words of the statement, our tasks are self-contained and are more diverse. By self-contained we mean our tasks come with both training data and evaluation data, rather than just the latter as in the case of ARISTO and the Winograd Challenge. MCTest has a train/test split but the training set is likely too small to capture all the reasoning needed to do well on the test set. In our setup one can assess the amount of training examples needed to perform well (which can be increased as desired) and the commonsense knowledge and reasoning required for the test set should be contained in the training set. In terms of diversity, some of our tasks are related to existing setups but we also propose many additional ones; tasks 8 and 9 are inspired by previous work on lambda dependency-based compositional semantics (Liang et al., 2013; Liang, 2013) for instance. For us, each task checks one skill that the system must
1 http://allenai.org/aristo.html
2 http://research.microsoft.com/mct
have and we postulate that performing well on all of them is a prerequisite for any system aiming at full text understanding and reasoning.
# 3 THE TASKS
Principles Our main idea is to provide a set of tasks, in a similar way to how software testing is built in computer science. Ideally each task is a "leaf" test case, as independent from others as possible, and tests in the simplest way possible one aspect of intended behavior. Subsequent ("non-leaf") tests can build on these by testing combinations as well. The tasks are publicly available at http://fb.ai/babi. Source code to generate the tasks is available at https://github.com/facebook/bAbI-tasks.
Each task provides a set of training and test data, with the intention that a successful model performs well on test data. Following Weston et al. (2014), the supervision in the training set is given by the true answers to questions, and the set of relevant statements for answering a given question, which may or may not be used by the learner. We set up the tasks so that correct answers are limited to a single word (Q: Where is Mark? A: bathroom), or else a list of words (Q: What is Mark holding?) as evaluation is then clear-cut, and is measured simply as right or wrong.
All of the tasks are noiseless and a human able to read that language can potentially achieve 100% accuracy. We tried to choose tasks that are natural to a human: they are based on simple usual situ- ations and no background in areas such as formal semantics, machine learning, logic or knowledge representation is required for an adult to solve them.
The data itself is produced using a simple simulation of characters and objects moving around and interacting in locations, described in Section 4. The simulation allows us to generate data in many different scenarios where the true labels are known by grounding to the simulation. For each task, we describe it by giving a small sample of the dataset including statements, questions and the true labels (in red) in Tables 1 and 2.
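For concreteness, a minimal parser for the released story files is sketched below. The assumed layout (statement lines of the form "ID text"; question lines of the form "ID question<TAB>answer<TAB>supporting fact IDs", with IDs restarting at 1 for each new story) is our reading of the data format, not a specification from the paper.

```python
def parse_babi(lines):
    # Returns (story_so_far, question, answer, supporting_ids) tuples.
    examples, story = [], []
    for line in lines:
        idx, _, rest = line.strip().partition(" ")
        if int(idx) == 1:          # IDs restart: a new story begins
            story = []
        if "\t" in rest:           # question line: question \t answer \t supports
            question, answer, supports = rest.split("\t")
            examples.append((list(story), question.strip(), answer,
                             [int(s) for s in supports.split()]))
        else:                      # statement line: add it to the story
            story.append((int(idx), rest))
    return examples
```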
Single Supporting Fact Task 1 consists of questions where a previously given single supporting fact, potentially amongst a set of other irrelevant facts, provides the answer. We first test one of the simplest cases of this, by asking for the location of a person, e.g. "Mary travelled to the office. Where is Mary?". This kind of task was already employed in Weston et al. (2014). It can be considered the simplest case of some real world QA datasets such as in Fader et al. (2013).
Two or Three Supporting Facts A harder task is to answer questions where two supporting statements have to be chained to answer the question, as in task 2, where to answer the question "Where is the football?" one has to combine information from two sentences, "John is in the playground" and "John picked up the football". Again, this kind of task was already used in Weston et al. (2014). Similarly, one can make a task with three supporting facts, given in task 3, whereby the first three statements are all required to answer the question "Where was the apple before the kitchen?".
Two or Three Argument Relations To answer questions, the ability to differentiate and recognize subjects and objects is crucial. In task 4 we consider the extreme case where sentences feature reordered words, i.e. a bag-of-words will not work. For example, the questions "What is north of the bedroom?" and "What is the bedroom north of?" have exactly the same words, but in a different order, and have different answers. A step further, sometimes one needs to differentiate three separate arguments. Task 5 involves statements like "Jeff was given the milk by Bill" and then queries who is the giver, who is the receiver, or which object is involved.
Yes/No Questions Task 6 tests, on some of the simplest questions possible (specifically, ones with a single supporting fact), the ability of a model to answer true/false type questions like "Is John in the playground?".
Counting and Lists/Sets Task 7 tests the ability of the QA system to perform simple counting operations, by asking about the number of objects with a certain property, e.g. "How many objects is Daniel holding?". Similarly, task 8 tests the ability to produce a set of single-word answers in the form of a list, e.g. "What is Daniel holding?". These tasks can be seen as QA tasks related to basic database search operations.
Table 1: Sample statements and questions from tasks 1 to 10.
Task 1: Single Supporting Fact
Mary went to the bathroom.
John moved to the hallway.
Mary travelled to the office.
Where is Mary? A: office

Task 2: Two Supporting Facts
John is in the playground.
John picked up the football.
Bob went to the kitchen.
Where is the football? A: playground

Task 3: Three Supporting Facts
John picked up the apple.
John went to the office.
John went to the kitchen.
John dropped the apple.
Where was the apple before the kitchen? A: office

Task 4: Two Argument Relations
The office is north of the bedroom.
The bedroom is north of the bathroom.
The kitchen is west of the garden.
What is north of the bedroom? A: office
What is the bedroom north of? A: bathroom

Task 5: Three Argument Relations
Mary gave the cake to Fred.
Fred gave the cake to Bill.
Jeff was given the milk by Bill.
Who gave the cake to Fred? A: Mary
Who did Fred give the cake to? A: Bill

Task 6: Yes/No Questions
John moved to the playground.
Daniel went to the bathroom.
John went back to the hallway.
Is John in the playground? A: no
Is Daniel in the bathroom? A: yes

Task 7: Counting
Daniel picked up the football.
Daniel dropped the football.
Daniel got the milk.
Daniel took the apple.
How many objects is Daniel holding? A: two

Task 8: Lists/Sets
Daniel picks up the football.
Daniel drops the newspaper.
Daniel picks up the milk.
John took the apple.
What is Daniel holding? A: milk, football

Task 9: Simple Negation
Sandra travelled to the office.
Fred is no longer in the office.
Is Fred in the office? A: no
Is Sandra in the office? A: yes

Task 10: Indefinite Knowledge
John is either in the classroom or the playground.
Sandra is in the garden.
Is John in the classroom? A: maybe
Is John in the office? A: no
Simple Negation and Indefinite Knowledge Tasks 9 and 10 test slightly more complex natural language constructs. Task 9 tests one of the simplest forms of negation, that of supporting facts that imply a statement is false, e.g. "Fred is no longer in the office" rather than "Fred travelled to the office". (In this case, task 6 (yes/no questions) is a prerequisite to the task.) Task 10 tests if we can model statements that describe possibilities rather than certainties, e.g. "John is either in the classroom or the playground.", where in that case the answer is "maybe" to the question "Is John in the classroom?".
Basic Coreference, Conjunctions and Compound Coreference Task 11 tests the simplest type of coreference, that of detecting the nearest referent, e.g. "Daniel was in the kitchen. Then he went to the studio.". Real-world data typically addresses this as a labeling problem and studies more sophisticated phenomena (Soon et al., 2001), whereas we evaluate it, as in all our other tasks, as a question answering problem. Task 12 (conjunctions) tests referring to multiple subjects in a single statement, e.g. "Mary and Jeff went to the kitchen.". Task 13 tests coreference in the case where the pronoun can refer to multiple actors, e.g. "Daniel and Sandra journeyed to the office. Then they went to the garden".
Time Reasoning While our tasks so far have included time implicitly in the order of the statements, task 14 tests understanding the use of time expressions within the statements, e.g. "In the afternoon Julie went to the park. Yesterday Julie was at school.", followed by questions about the order of events such as "Where was Julie before the park?". Real-world datasets typically address the task of evaluating time expressions as a labeling, rather than a QA, task, see e.g. UzZaman et al. (2012).
Basic Deduction and Induction Task 15 tests basic deduction via inheritance of properties, e.g. "Sheep are afraid of wolves. Gertrude is a sheep. What is Gertrude afraid of?". Task 16 similarly
Table 2: Sample statements and questions from tasks 11 to 20.
Task 11: Basic Coreference
Daniel was in the kitchen.
Then he went to the studio.
Sandra was in the office.
Where is Daniel? A: studio

Task 12: Conjunction
Mary and Jeff went to the kitchen.
Then Jeff went to the park.
Where is Mary? A: kitchen
Where is Jeff? A: park

Task 13: Compound Coreference
Daniel and Sandra journeyed to the office.
Then they went to the garden.
Sandra and John travelled to the kitchen.
After that they moved to the hallway.
Where is Daniel? A: garden

Task 14: Time Reasoning
In the afternoon Julie went to the park.
Yesterday Julie was at school.
Julie went to the cinema this evening.
Where did Julie go after the park? A: cinema
Where was Julie before the park? A: school

Task 15: Basic Deduction
Sheep are afraid of wolves.
Cats are afraid of dogs.
Mice are afraid of cats.
Gertrude is a sheep.
What is Gertrude afraid of? A: wolves

Task 16: Basic Induction
Lily is a swan.
Lily is white.
Bernhard is green.
Greg is a swan.
What color is Greg? A: white

Task 17: Positional Reasoning
The triangle is to the right of the blue square.
The red square is on top of the blue square.
The red sphere is to the right of the blue square.
Is the red sphere to the right of the blue square? A: yes
Is the red square to the left of the triangle? A: yes

Task 18: Size Reasoning
The football fits in the suitcase.
The suitcase fits in the cupboard.
The box is smaller than the football.
Will the box fit in the suitcase? A: yes
Will the cupboard fit in the box? A: no
tests basic induction via inheritance of properties. A full analysis of induction and deduction is clearly beyond the scope of this work, and future tasks should analyse further, deeper aspects.
Positional and Size Reasoning Task 17 tests spatial reasoning, one of many components of the classical SHRDLU system (Winograd, 1972) by asking questions about the relative positions of colored blocks. Task 18 requires reasoning about the relative size of objects and is inspired by the commonsense reasoning examples in the Winograd schema challenge (Levesque et al., 2011).
Path Finding The goal of task 19 is to find the path between locations: given the description of various locations, it asks: how do you get from one to another? This is related to the work of Chen & Mooney (2011) and effectively involves a search problem.
Agent's Motivations Finally, task 20 questions, in the simplest way possible, why an agent performs an action. It addresses the case of actors being in a given state (hungry, thirsty, tired, ...) and the actions they then take, e.g. it should learn that hungry people might go to the kitchen, and so on.
As already stated, these tasks are meant to foster the development and understanding of machine learning algorithms. A single model should be evaluated across all the tasks (not tuning per task) and then the same model should be tested on additional real-world tasks.
In our data release, in addition to providing the above 20 tasks in English, we also provide them (i) in Hindi; and (ii) with shuffled English words so they are no longer readable by humans. A good learning algorithm should perform similarly on all three, which would likely not be the case for a method using external resources; this setting is intended to mimic a learner being first presented a language and having to learn from scratch.
# 4 SIMULATION
All our tasks are generated with a simulation which behaves like a classic text adventure game. The idea is that generating text within this simulation allows us to ground the language used into a coherent and controlled (artificial) world. Our simulation follows those of Bordes et al. (2010); Weston et al. (2014) but is somewhat more complex.
The simulated world is composed of entities of various types (locations, objects, persons, etc.) and of various actions that operate on these entities. Entities have internal states: their location, whether they carry objects on top or inside them (e.g., tables and boxes), the mental state of actors (e.g. hungry), as well as properties such as size, color, and edibility. For locations, the nearby places that are connected (e.g. what lies to the east, or above) are encoded. For actors, a set of pre-specified rules per actor can also be specified to control their behavior, e.g. if they are hungry they may try to find food. Random valid actions can also be executed if no rule is set, e.g. walking around randomly.
The actions an actor can execute in the simulation consist of the following: go <location>, get <object>, get <object1> from <object2>, put <object1> in/on <object2>, give <object> to <actor>, drop <object>, set <entity> <state>, look, inventory and examine <object>. A set of universal constraints is imposed on those actions to enforce coherence in the simulation. For example an actor cannot get something that they or someone else already has, they cannot go to a place that is not connected to the current location, cannot drop something they do not already have, and so on. Using the underlying actions, rules for actors, and their constraints, defines how actors act. For each task we limit the actions needed for that task, e.g. task 1 only needs go whereas task 2 uses go, get and drop. If we write the commands down this gives us a very simple "story" which is executable by the simulation, e.g., joe go playground; bob go office; joe get football. This example corresponds to task 2. The system can then ask questions about the state of the simulation, e.g., where john?, where football? and so on. It is easy to calculate the true answers for these questions as we have access to the underlying world.
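A toy version of this machinery, with a couple of the universal constraints and ground-truth answers read off the world state, is sketched below; the class and method names are ours, not those of the released simulator.

```python
class World:
    def __init__(self, places):
        self.places = places
        self.loc = {}      # actor/object -> place
        self.holds = {}    # object -> actor carrying it

    def go(self, actor, place):
        assert place in self.places, "cannot go to an unknown location"
        self.loc[actor] = place

    def get(self, actor, obj):
        assert obj not in self.holds, "cannot get what someone already has"
        self.holds[obj] = actor

    def drop(self, actor, obj):
        assert self.holds.get(obj) == actor, "cannot drop what you do not hold"
        del self.holds[obj]
        self.loc[obj] = self.loc[actor]

    def where(self, entity):
        # a carried object is wherever its carrier is
        if entity in self.holds:
            return self.loc[self.holds[entity]]
        return self.loc.get(entity)

# the task-2 story from above: joe go playground; bob go office; joe get football
w = World({"playground", "office", "kitchen"})
w.go("joe", "playground"); w.go("bob", "office"); w.get("joe", "football")
assert w.where("football") == "playground"   # ground truth for "where football?"
```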
To produce more natural looking text with lexical variety from statements and questions we employ a simple automated grammar. Each verb is assigned a set of synonyms, e.g., the simulation command get is replaced with either picked up, got, grabbed or took, and drop is replaced with either dropped, left, discarded or put down. Similarly, each object and actor can have a set of replacement synonyms as well, e.g. replacing Daniel with he in task 11. Adverbs are crucial for some tasks such as the time reasoning task 14.
There are a great many aspects of language not yet modeled. For example, all sentences are so far relatively short and contain little nesting. Further, the entities and the vocabulary size are small (150 words, and typically 4 actors, 6 locations and 3 objects used per task). The hope is that defining a set of well-defined tasks will help evaluate models in a controlled way within the simulated environment, which is hard to do with real data. That is, these tasks are not a substitute for real data, but should complement them, especially when developing and analysing algorithms.
# 5 EXPERIMENTS
We compared the following methods on our tasks (on the English dataset): (i) an N-gram classifier baseline, (ii) LSTMs (long short-term memory recurrent neural networks) (Hochreiter & Schmidhuber, 1997), (iii) Memory Networks (MemNNs) (Weston et al., 2014), (iv) some extensions of Memory Networks we will detail; and (v) a structured SVM that incorporates external labeled data from existing NLP tasks. These models belong to three separate tracks. Weakly supervised models are only given question answer pairs at training time, whereas strong supervision provides the set of supporting facts at training time (but not testing time) as well. Strongly supervised ones give accuracy upper bounds for weakly supervised models, i.e. the performance should be superior given the same model class. Methods in the last, external resources, track can use labeled data from other sources rather than just the training set provided, e.g. coreference and semantic role labeling tasks, as well as strong supervision. For each task we use 1000 questions for training, and 1000 for testing, and report the test accuracy. We consider a task successfully passed if ≥ 95% accuracy is obtained.3
3 The choice of 95% (and 1000 training examples) is arbitrary.
Table 3: Test accuracy (%) on our 20 tasks for various methods (1000 training examples each). Our proposed extensions to MemNNs are in columns 5-9: with adaptive memory (AM), N-grams (NG), nonlinear matching function (NL), and combinations thereof. Bold numbers indicate tasks where our extensions achieve ≥ 95% accuracy but the original MemNN model of Weston et al. (2014) did not. The last two columns (10-11) give extra analysis of the MemNN AM+NG+NL method: column 10 gives the amount of training data needed for each task to obtain ≥ 95% accuracy, or FAIL if this is not achievable with 1000 training examples; the final column gives the accuracy when training on all tasks at once, rather than separately.
(Weakly supervised: N-gram classifier, LSTM. Strong supervision, using supporting facts: MemNN of Weston et al. (2014) and its AM/NG/NL variants. Uses external resources: structured SVM with +COREF and +SRL features.)

TASK                          N-gram  LSTM  MemNN   MemNN  MemNN  MemNN  MemNN     Struct.  No. of ex.   Multitask
                                            (2014)  AM     AM+NG  AM+NL  AM+NG+NL  SVM      req. >= 95%  Training
1 - Single Supporting Fact      36     50    100     100    100    100    100        99     250 ex.       100
2 - Two Supporting Facts         2     20    100     100    100    100    100        74     500 ex.       100
3 - Three Supporting Facts       7     20     20     100     99    100    100        17     500 ex.        98
4 - Two Arg. Relations          50     61     71      69    100     73    100        98     500 ex.        80
5 - Three Arg. Relations        20     70     83      83     86     86     98        83     1000 ex.       99
6 - Yes/No Questions            49     48     47      52     53    100    100        99     500 ex.       100
7 - Counting                    52     49     68      78     86     83     85        69     FAIL           86
8 - Lists/Sets                  40     45     77      90     88     94     91        70     FAIL           93
9 - Simple Negation             62     64     65      71     63    100    100       100     500 ex.       100
10 - Indefinite Knowledge       45     44     59      57     54     97     98        99     1000 ex.       98
11 - Basic Coreference          29     72    100     100    100    100    100       100     250 ex.       100
12 - Conjunction                 9     74    100     100    100    100    100        96     250 ex.       100
13 - Compound Coref.            26     94    100     100    100    100    100        99     250 ex.       100
14 - Time Reasoning             19     27     99     100     99    100     99        99     500 ex.        99
15 - Basic Deduction            20     21     74      73    100     77    100        96     100 ex.       100
16 - Basic Induction            43     23     27     100    100    100    100        24     100 ex.        94
17 - Positional Reasoning       46     51     54      46     49     57     65        61     FAIL           72
18 - Size Reasoning             52     52     57      50     74     54     95        62     1000 ex.       93
19 - Path Finding                0      8      0       9      3     15     36        49     FAIL           19
20 - Agent's Motivations        76     91    100     100    100    100    100        95     250 ex.       100
Mean Performance                34     49     75      79     83     87     93        79       -            92
Methods The N-gram classifier baseline is inspired by the baselines in Richardson et al. (2013) but applied to the case of producing a 1-word answer rather than a multiple choice question: we construct a bag-of-N-grams for all sentences in the story that share at least one word with the question, and then learn a linear classifier to predict the answer using those features.4
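A plausible rendering of this baseline with scikit-learn is sketched below; the exact featurization, e.g. whether the question text is appended to the filtered story, is our guess rather than the paper's specification.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def filtered_story(story, question):
    # keep only story sentences sharing at least one word with the question
    q_words = set(question.lower().split())
    return " ".join(s for s in story if q_words & set(s.lower().split()))

def train_ngram_baseline(stories, questions, answers):
    # bag of 1-3 grams over the filtered story plus the question,
    # fed to a linear classifier over single-word answers
    texts = [filtered_story(s, q) + " " + q for s, q in zip(stories, questions)]
    clf = make_pipeline(CountVectorizer(ngram_range=(1, 3)),
                        LogisticRegression(max_iter=1000))
    clf.fit(texts, answers)
    return clf
```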
LSTMs are a popular method for sequence prediction (Sutskever et al., 2014) and outperform standard RNNs (recurrent neural networks) for tasks similar to ours (Weston et al., 2014). They work by reading the story until the point they reach a question and then have to output an answer. Note that they are weakly supervised, by answers only, and are hence at a disadvantage compared to strongly supervised methods or methods that use external resources.
MemNNs (Weston et al., 2014) are a recently proposed class of models that have been shown to perform well at QA. They work by a "controller" neural network performing inference over the stored memories that consist of the previous statements in the story. The original proposed model performs 2 hops of inference: finding the first supporting fact with the maximum match score with the question, and then the second supporting fact with the maximum match score with both the question and the first fact that was found. The matching function consists of mapping the bag-of-words for the question and facts into an embedding space by summing word embeddings. The word embeddings are learnt using strong supervision to optimize the QA task. After finding supporting facts, a final ranking is performed to rank possible responses (answer words) given those facts. We also consider some extensions to this model (a sketch of the basic two-hop lookup follows the list below):
• Adaptive memories perform a variable number of hops rather than 2; the model is trained to predict a hop or the special "STOP" class. A similar procedure can be applied to output multiple tokens as well.
4 Constructing N-grams from all sentences rather than using the filtered set gave worse results.
• N-grams We tried using a bag of 3-grams rather than a bag-of-words to represent the text. In both cases the first step of the MemNN is to convert these into vectorial embeddings.
• Nonlinearity We apply a classical 2-layer neural network with tanh nonlinearity in the matching function.
More details of these variants are given in Sec. A of the appendix.
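To fix ideas, here is a stripped-down sketch of the two-hop lookup described above. Real MemNNs learn separate embeddings for the different roles and train them end-to-end against the supporting facts; this sketch shares a single embedding matrix E and assumes at least two stored facts.

```python
import numpy as np

def embed(text, E, vocab):
    # bag-of-words embedding: sum of word vectors (rows of E)
    return sum((E[vocab[w]] for w in text.split() if w in vocab),
               np.zeros(E.shape[1]))

def two_hop_lookup(question, memories, E, vocab):
    q = embed(question, E, vocab)
    m = [embed(f, E, vocab) for f in memories]
    f1 = max(range(len(m)), key=lambda i: q @ m[i])    # hop 1: match the question
    q2 = q + m[f1]                                     # add the found fact
    f2 = max((i for i in range(len(m)) if i != f1),
             key=lambda i: q2 @ m[i])                  # hop 2: match question + fact 1
    return memories[f1], memories[f2]
```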
Finally, we built a classical cascade NLP system baseline using a structured support vector machine (SVM), which incorporates coreference resolution and semantic role labeling (SRL) preprocessing steps, which are themselves trained on large amounts of costly labeled data. The Stanford coreference system (Raghunathan et al., 2010) and the SENNA semantic role labeling (SRL) system (Collobert et al., 2011) are used to build features for the input to the SVM, trained with strong supervision to find the supporting facts, e.g. features based on words, word pairs, and the SRL verb and verb-argument pairs. After finding the supporting facts, we build a similar structured SVM for the response stage, with features tuned for that goal as well. More details are in Sec. B of the appendix.
Learning rates and other hyperparameters for all methods are chosen using the training set. The summary of our experimental results on the tasks is given in Table 3. We give results for each of the 20 tasks separately, as well as mean performance and the number of failed tasks in the final two rows.
Results Standard MemNNs generally outperform the N-gram and LSTM baselines, which is consistent with the results in Weston et al. (2014). However they still "fail" at a number of tasks; that is, test accuracy is less than 95%. Some of these failures are expected due to insufficient modeling power, as described in more detail in Sec. A.1: e.g. k = 2 facts, single word answers and bag-of-words do not succeed on tasks 3, 4, 5, 7, 8 and 18. However, there were also failures on tasks we did not at first expect, for example yes/no questions (6) and indefinite knowledge (10). Given hindsight, we realize that the linear scoring function of standard MemNNs cannot model the match between query, supporting fact and a yes/no answer, as this requires three-way interactions.
Columns 5-9 of Table 3 give the results for our MemNN extensions: adaptive memories (AM), N-grams (NG) and nonlinearities (NL), plus combinations thereof. The adaptive approach gives a straightforward improvement in tasks 3 and 16 because they both require more than two supporting facts, and also gives (small) improvements in 8 and 19 because they require multi-word outputs (but still remain difficult). We hence use the AM model in combination with all our other extensions in the subsequent experiments.
MemNNs with N-gram modeling yield clear improvements when word order matters, e.g. tasks 4 and 15. However, N-grams do not seem to be a substitute for nonlinearities in the embedding function, as the NL model outperforms N-grams on average, especially in the yes/no (6) and indefinite (10) tasks, as explained before. On the other hand, the NL method cannot model word order and so fails, e.g., on task 4. The obvious step is thus to combine these complementary approaches: indeed AM+NG+NL (column 9) gives improved results over both, with a total of 9 tasks that have been upgraded from failure to success compared to the original MemNN model.
The structured SVM, despite having access to external resources, does not perform better, still failing at 9 tasks. It does perform better than vanilla MemNNs (without extensions) on tasks 6, 9 and 10, where the hand-built feature conjunctions capture the necessary nonlinearities. However, compared to MemNN (AM+NG+NL) it seems to do significantly worse on tasks requiring three (and sometimes, two) supporting facts (e.g. tasks 3, 16 and 2), presumably as ranking over so many possibilities introduces more mistakes. However, its non-greedy search does seem to help on other tasks, such as path finding (task 19), where search is very important. Since it relies on external resources specifically designed for English, it is unclear whether it would perform as well on other languages, like Hindi, where such external resources might be of worse quality.
The final two columns (10-11) give further analysis of the AM+NG+NL MemNN method. The second to last column (10) shows the minimum number of training examples required to achieve ≥ 95% accuracy, or FAIL if this is not achieved with 1000 examples. This is important as it is not only desirable to perform well on a task, but also to do so using the fewest number of examples (to generalize well, quickly). Most succeeding tasks require 100-500 examples. Task 8 requires 5000 examples and task 7 requires 10000, hence they are labeled as FAIL. The latter task can presumably be solved by adding all the times an object is picked up, and subtracting the times it is dropped, which seems
possible for a MemNN, but it does not do this perfectly. Two tasks, positional reasoning (17) and path finding (19), cannot be solved even with 10000 examples; it seems those (and indeed more advanced forms of induction and deduction, which we plan to build) require a general search algorithm to be built into the inference procedure, which MemNNs (and the other approaches tried) are lacking.
The last column shows the performance of AM+NG+NL MemNNs when training on all the tasks jointly, rather than just on a single one. The performance is generally encouragingly similar, showing such a model can learn many aspects of text understanding and reasoning simultaneously. The main issues are that these models still fail on several of the tasks, and use a far stronger form of supervision (using supporting facts) than is typically realistic.
# 6 DISCUSSION
A prerequisite set We developed a set of tasks that we believe are a prerequisite to full language understanding and reasoning. While any learner that can solve these tasks is not necessarily close to full reasoning, if a learner fails on any of our tasks then there are likely real-world tasks that it will fail on too (i.e., real-world tasks that require the same kind of reasoning). Even if the situations and the language of the tasks are artificial, we believe that the mechanisms required to learn how to solve them are part of the key towards text understanding and reasoning.
A flexible framework This set of tasks is not a definitive set. The purpose of a simulation-based approach is to provide flexibility and control of the tasks' construction. We grounded the tasks in language because it is then easier to understand the usefulness of the tasks and to interpret their results. However, our primary goal is to find models able to learn to detect and combine patterns in symbolic sequences. One might even want to decrease the intrinsic difficulty by removing any lexical variability and ambiguity and reason only over bare symbols, stripped of their linguistic meaning. One could also decorrelate the long-term memory from the reasoning capabilities of systems by, for instance, arranging the supporting facts closer to the questions. In the opposing view, one could instead want to transform the tasks into more realistic stories using annotators or more complex grammars. The set of 20 tasks presented here is a subset of what can be achieved with a simulation. We chose them because they offer a variety of skills that we would like a text reasoning model to have, but we hope researchers from the community will develop more tasks of varying complexity in order to develop and analyze models that try to solve them. Transfer learning across tasks is also a very important goal, beyond the scope of this paper. We have thus made the simulator and code for the tasks publicly available for those purposes.
Testing learning methods Our tasks are designed as a test-bed for learning methods: we provide training and test sets because we intend to evaluate the capability of models to discover how to reason from patterns hidden within them. It could be tempting to hand-code solutions for them or to use existing large-scale QA systems like Cyc (Curtis et al., 2005). They might succeed at solving them, even if our structured SVM results (a cascaded NLP system with hand-built features) show that this is not straightforward; however, this is not the tasks' purpose, since those approaches would not be learning to solve them. Our experiments show that some existing machine learning methods are successful on some of the tasks, in particular Memory Networks, for which we introduced some useful extensions (in Sec. A). However, those models still fail on several of the tasks, and use a far stronger form of supervision (using supporting facts) than is typically realistic.
These datasets are not yet solved. Future research should aim to minimize the amount of required supervision, as well as the number of training examples needed to solve a new task, to move closer to the task transfer capabilities of humans. That is, in the weakly supervised case with only 1000 training examples or less there is no known general (i.e. non-hand engineered) method that solves the tasks. Further, importantly, our hope is that a feedback loop of developing more challenging tasks, and then algorithms that can solve them, leads us to fruitful research directions.
Note that these tasks are not a substitute for real data, but should complement them, especially when developing and analysing algorithms. There are many complementary real-world datasets, see for example Hermann et al. (2015); Bordes et al. (2015); Hill et al. (2015). That is, even if a method works well on our 20 tasks, it should be shown to be useful on real data as well.
Impact Since being put online, the bAbI tasks have already directly influenced the development of several promising new algorithms, including the weakly supervised end-to-end Memory Networks (MemN2N) of Sukhbaatar et al. (2015), the Dynamic Memory Networks of Kumar et al. (2015), and the Neural Reasoner (Peng et al., 2015). MemN2N has since been shown to perform well on some real-world tasks (Hill et al., 2015).
# REFERENCES
Bache, K. and Lichman, M. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.

Berant, Jonathan, Chou, Andrew, Frostig, Roy, and Liang, Percy. Semantic parsing on Freebase from question-answer pairs. In EMNLP, pp. 1533-1544, 2013.

Berant, Jonathan, Srikumar, Vivek, Chen, Pei-Chun, Huang, Brad, Manning, Christopher D, Vander Linden, Abby, Harding, Brittany, and Clark, Peter. Modeling biological processes for reading comprehension. In Proc. EMNLP, 2014.
Bordes, Antoine, Usunier, Nicolas, Collobert, Ronan, and Weston, Jason. Towards understanding situated natural language. In AISTATS, 2010.
Bordes, Antoine, Usunier, Nicolas, Chopra, Sumit, and Weston, Jason. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.
Chen, David L and Mooney, Raymond J. Learning to interpret natural language navigation instructions from observations. San Francisco, CA, pp. 859-865, 2011.

Collobert, Ronan, Weston, Jason, Bottou, Léon, Karlen, Michael, Kavukcuoglu, Koray, and Kuksa, Pavel. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493-2537, 2011.

Curtis, Jon, Matthews, Gavin, and Baxter, David. On the effective use of Cyc in a question answering system. In IJCAI Workshop on Knowledge and Reasoning for Answering Questions, pp. 61-70, 2005.

Fader, Anthony, Zettlemoyer, Luke, and Etzioni, Oren. Paraphrase-driven learning for open question answering. In ACL, pp. 1608-1618, 2013.

Fader, Anthony, Zettlemoyer, Luke, and Etzioni, Oren. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1156-1165. ACM, 2014.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
Halevy, Alon, Norvig, Peter, and Pereira, Fernando. The unreasonable effectiveness of data. Intelligent Systems, IEEE, 24(2):8–12, 2009.

Hermann, Karl Moritz, Kočiský, Tomáš, Grefenstette, Edward, Espeholt, Lasse, Kay, Will, Suleyman, Mustafa, and Blunsom, Phil. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS), 2015. URL http://arxiv.org/abs/1506.03340.

Hill, Felix, Bordes, Antoine, Chopra, Sumit, and Weston, Jason. The goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.

Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.

Kumar, Ankit, Irsoy, Ozan, Su, Jonathan, Bradbury, James, English, Robert, Pierce, Brian, Ondruska, Peter, Gulrajani, Ishaan, and Socher, Richard. Ask me anything: Dynamic memory networks for natural language processing. http://arxiv.org/abs/1506.07285, 2015.
Levesque, Hector J, Davis, Ernest, and Morgenstern, Leora. The winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, 2011.
Liang, Percy. Lambda dependency-based compositional semantics. arXiv preprint arXiv:1309.4408, 2013.
Liang, Percy, Jordan, Michael I, and Klein, Dan. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389–446, 2013.

Minsky, Marvin and Papert, Seymour. Perceptrons: An Introduction to Computational Geometry. The MIT Press, Cambridge, expanded edition, 1969.

Montfort, Nick. Twisty Little Passages: an approach to interactive fiction. MIT Press, 2005.

Müller, K.-R., Smola, Alex J., Rätsch, Gunnar, Schölkopf, Bernhard, Kohlmorgen, Jens, and Vapnik, Vladimir. Predicting time series with support vector machines. In Artificial Neural Networks - ICANN'97, pp. 999–1004. Springer, 1997.

Ng, Andrew Y, Jordan, Michael I, Weiss, Yair, et al. On spectral clustering: Analysis and an algorithm. Advances in neural information processing systems, 2:849–856, 2002.

Peng, Baolin, Lu, Zhengdong, Li, Hang, and Wong, Kam-Fai. Towards neural network-based reasoning. arXiv preprint arXiv:1508.05508, 2015.

Raghunathan, Karthik, Lee, Heeyoung, Rangarajan, Sudarshan, Chambers, Nathanael, Surdeanu, Mihai, Jurafsky, Dan, and Manning, Christopher. A multi-pass sieve for coreference resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 492–501. Association for Computational Linguistics, 2010.

Richardson, Matthew, Burges, Christopher JC, and Renshaw, Erin. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, pp. 193–203, 2013.
Rumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning internal representations by error propagation. Technical report, DTIC Document, 1985.
Soon, Wee Meng, Ng, Hwee Tou, and Lim, Daniel Chung Yong. A machine learning approach to coreference resolution of noun phrases. Computational linguistics, 27(4):521–544, 2001.

Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. End-to-end memory networks. Proceedings of NIPS, 2015.

Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.

UzZaman, Naushad, Llorens, Hector, Allen, James, Derczynski, Leon, Verhagen, Marc, and Pustejovsky, James. Tempeval-3: Evaluating events, time expressions, and temporal relations. arXiv preprint arXiv:1206.5333, 2012.
Weston, Jason, Chopra, Sumit, and Bordes, Antoine. Memory networks. CoRR, abs/1410.3916, 2014.
Winograd, Terry. Understanding natural language. Cognitive psychology, 3(1):1–191, 1972.
Yao, Xuchen, Berant, Jonathan, and Van Durme, Benjamin. Freebase qa: Information extraction or semantic parsing? ACL 2014, pp. 82, 2014.
Yu, Mo, Gormley, Matthew R, and Dredze, Mark. Factor-based compositional embedding models. NIPS 2014 workshop on Learning Semantics, 2014.
Zhu, Xiaojin, Ghahramani, Zoubin, Lafferty, John, et al. Semi-supervised learning using gaussian fields and harmonic functions. In ICML, volume 3, pp. 912–919, 2003.
# A EXTENSIONS TO MEMORY NETWORKS
Memory Networks (Weston et al., 2014) are a promising class of models, shown to perform well at QA, that we can apply to our tasks. They consist of a memory m (an array of objects indexed by $m_i$) and four potentially learnable components I, G, O and R that are executed given an input:
• I: (input feature map) converts the input sentence x to an internal feature representation I(x).

• G: (generalization) updates the current memory state m given the new input: $m_i = G(m_i, I(x), m), \forall i$.

• O: (output feature map) computes the output o given the new input and the memory: o = O(I(x), m).

• R: (response) finally decodes the output features o to give the final textual response to the user: r = R(o).
Potentially, component I can make use of standard pre-processing, e.g., parsing and entity resolution, but the simplest form is to do no processing at all. The simplest form of G is to store the new incoming example in an empty memory slot, and leave the rest of the memory untouched. Thus, in Weston et al. (2014) the actual implementation used is exactly this simple form, where the bulk of the work is in the O and R components. The former is responsible for reading from memory and performing inference, e.g., calculating which memories are relevant to answer a question, and the latter for producing the actual wording of the answer given O.
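To make the control flow concrete, here is a minimal sketch of the four components in Python. It is not the authors' implementation; the class name, the `score_output`/`score_response` callables and the trivial tokenizer are illustrative stand-ins for the learned components described below.

```python
# Hypothetical skeleton of the I/G/O/R loop; the scoring functions are
# assumed trained separately (see the embedding model sketched further down).

class MemNNSketch:
    def __init__(self, score_output, score_response, vocab):
        self.memory = []                      # m: array of stored facts
        self.score_output = score_output      # s_O(list_of_sentences, memory_slot)
        self.score_response = score_response  # s_R(list_of_sentences, word)
        self.vocab = vocab

    def I(self, sentence):                    # input feature map: bag of tokens
        return sentence.lower().split()

    def G(self, features):                    # store in the next empty slot
        self.memory.append(features)

    def O(self, x, k=2):                      # greedily pick k supporting memories
        picked = []
        for _ in range(k):
            best = max(self.memory,
                       key=lambda m: self.score_output([x] + picked, m))
            picked.append(best)
        return picked

    def R(self, x, supports):                 # rank single-word answers (eq. 3)
        return max(self.vocab,
                   key=lambda w: self.score_response([x] + supports, w))

    def answer(self, question):
        x = self.I(question)
        return self.R(x, self.O(x))
```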
The O module produces output features by finding k supporting memories given x. They use k = 2. For k = 1 the highest scoring supporting memory is retrieved with:
$$o_1 = O_1(x, m) = \arg\max_{i=1,\ldots,N} s_O(x, m_i) \quad (1)$$
where $s_O$ is a function that scores the match between the pair of sentences x and $m_i$. For the case k = 2 they then find a second supporting memory given the first found in the previous iteration:
$$o_2 = O_2(x, m) = \arg\max_{i=1,\ldots,N} s_O([x, m_{o_1}], m_i) \quad (2)$$

where the candidate supporting memory $m_i$ is now scored with respect to both the original input and the first supporting memory, where square brackets denote a list. The final output o is $[x, m_{o_1}, m_{o_2}]$, which is input to the module R.
Finally, R needs to produce a textual response r. While the authors also consider Recurrent Neural Networks (RNNs), their standard setup limits responses to be a single word (out of all the words seen by the model) by ranking them:
$$r = \arg\max_{w \in W} s_R([x, m_{o_1}, m_{o_2}], w) \quad (3)$$
where $W$ is the set of all words in the dictionary, and $s_R$ is a function that scores the match.
The scoring functions $s_O$ and $s_R$ have the same form, that of an embedding model:
$$s(x, y) = \Phi_x(x)^\top U^\top U\, \Phi_y(y). \quad (4)$$
where U is an $n \times D$ matrix, where D is the number of features and n is the embedding dimension. The role of $\Phi_x$ and $\Phi_y$ is to map the original text to the D-dimensional feature space. They choose a bag of words representation, and $D = 3|W|$ for $s_O$, i.e., every word in the dictionary has three different representations: one for $\Phi_y(\cdot)$ and two for $\Phi_x(\cdot)$, depending on whether the words of the input arguments are from the actual input x or from the supporting memories, so that they can be modeled differently.
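Under the bag-of-words assumption above, the scoring model is a handful of lines. The sketch below uses a single feature block rather than the paper's three-block D = 3|W| scheme, and the helper names are our own.

```python
import numpy as np

def bow(word_indices, D):
    """Bag-of-words feature vector Phi(.) of dimension D."""
    phi = np.zeros(D)
    for i in word_indices:
        phi[i] += 1.0
    return phi

def score(x_indices, y_indices, U, D):
    """s(x, y) = Phi_x(x)^T U^T U Phi_y(y), with U of shape (n, D)."""
    return bow(x_indices, D) @ U.T @ U @ bow(y_indices, D)
```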
They consider various extensions of their model, in particular modeling write time and modeling unseen words. Here we only discuss the former, which we also use. In order for the model to work on QA tasks over stories it needs to know in which order the sentences were uttered, which is not available in the model directly. They thus add extra write-time features to $s_O$ which take on the value 0 or 1, indicating which sentence is older than another being compared, and compare triples of pairs of sentences and the question itself. Training is carried out by stochastic gradient descent using supervision from both the question answer pairs and the supporting memories (to select $o_1$ and $o_2$). See Weston et al. (2014) for more details.
A.1 SHORTCOMINGS OF THE EXISTING MEMNNS
The Memory Networks models defined in (Weston et al., 2014) are one possible technique to try on our tasks; however there are several tasks which they are likely to fail on:
• They model sentences with a bag of words so are likely to fail on tasks such as the 2-argument (task 4) and 3-argument (task 5) relation problems.

• They perform only two max operations (k = 2) so they cannot handle questions involving more than two supporting facts such as tasks 3 and 7.

• Unless an RNN is employed in the R module, they are unable to provide multiple answers in the standard setting using eq. (3). This is required for the list (8) and path finding (19) tasks.
We therefore propose improvements to their model in the following section.
A.2 IMPROVING MEMORY NETWORKS
A.2.1 ADAPTIVE MEMORIES (AND RESPONSES)
We consider a variable number of supporting facts that is automatically adapted dependent on the question being asked. To do this we consider scoring a special fact $m_\emptyset$. Computation of supporting memories then becomes:

$i \leftarrow 1$
$o_i = O(x, m)$
while $o_i \neq m_\emptyset$ do
    $i \leftarrow i + 1$
    $o_i = O([x, m_{o_1}, \ldots, m_{o_{i-1}}], m)$
end while

That is, we keep predicting supporting facts $i$, conditioning at each step on the previously found facts, until $m_\emptyset$ is predicted, at which point we stop. $m_\emptyset$ has its own unique embedding vector, which is also learned. In practice we still impose a hard maximum number of loops in our experiments to avoid fail cases where the computation never stops (in our experiments we use a limit of 10).
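A minimal sketch of this adaptive loop, assuming a learned `score_output` function and a distinguished stop fact `M_STOP` with its own embedding (both names illustrative):

```python
def adaptive_hops(x, memory, score_output, M_STOP, max_hops=10):
    """Retrieve supporting facts until the stop fact scores highest."""
    supports = []
    while len(supports) < max_hops:        # hard cap so computation always stops
        candidates = memory + [M_STOP]
        best = max(candidates, key=lambda m: score_output([x] + supports, m))
        if best is M_STOP:
            break
        supports.append(best)
    return supports
```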
Multiple Answers. We use a similar trick for the response module as well, in order to output multiple words. That is, we add a special word $w_\emptyset$ to the dictionary and predict word $w_i$ on each iteration $i$ conditional on the previous words, i.e., $w_i = R([x, m_{o_1}, \ldots, m_{o_{|o|}}, w_1, \ldots, w_{i-1}], w)$, until we predict $w_\emptyset$.
A.2.2 NONLINEAR SENTENCE MODELING
There are several ways of modeling sentences that go beyond a bag-of-words, and we explore three variants here. The simplest is a bag-of-N-grams; we consider N = 1, 2 and 3 in the bag. The main disadvantage of such a method is that the dictionary grows rapidly with N. We therefore consider an alternative neural network approach, which we call a multilinear map. Each word in a sentence is binned into one of $P_{sz}$ positions with $p(i, l) = \lceil (i\, P_{sz}) / l \rceil$ where $i$ is the position of the word in a sentence of length $l$, and for each position we employ an $n \times n$ matrix $P_{p(i,l)}$. We then model the matching score with:

$$s(q, d) = E(q) \cdot E(d); \qquad E(x) = \tanh\Big( \sum_{i=1,\ldots,l} P_{p(i,l)}\, \Phi_x(x_i)^\top U \Big) \quad (5)$$
whereby we apply a linear map for each word dependent on its position, followed by a tanh nonlinearity on the sum of mappings. Note that this is related to the model of (Yu et al., 2014), who consider tags rather than positions. While the results of this method are not shown in the main paper due to space restrictions, it performs similarly well to N-grams and may be useful in real-world cases where N-grams cause the dictionary to be too large. Comparing to Table 3, MemNN with adaptive memories (AM) + multilinear obtains a mean performance of 93, the same as MemNNs with AM+NG+NL (i.e., using N-grams instead).
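A sketch of the multilinear map under these definitions; `U`, the list `P` of position matrices and the word-id inputs are assumptions, and for a one-hot word the term $\Phi_x(x_i)^\top U$ reduces to the embedding column `U[:, w]`:

```python
import numpy as np

def multilinear_embed(word_ids, U, P, P_sz):
    """E(x) of eq. (5): position-binned linear maps, summed, then tanh.
    U: (n, D) embedding matrix; P: list of P_sz matrices, each (n, n)."""
    l = len(word_ids)
    total = np.zeros(U.shape[0])
    for i, w in enumerate(word_ids, start=1):
        p = int(np.ceil(i * P_sz / l)) - 1   # bin position i of l into 1..P_sz
        total += P[p] @ U[:, w]
    return np.tanh(total)

def match_score(q_ids, d_ids, U, P, P_sz):
    """s(q, d) = E(q) . E(d)."""
    return multilinear_embed(q_ids, U, P, P_sz) @ multilinear_embed(d_ids, U, P, P_sz)
```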
Finally, to assess the performance of nonlinear maps that do not model word position at all we also consider the following nonlinear embedding:
$$E(x) = \tanh(W \tanh(\Phi_x(x)^\top U)). \quad (6)$$

where $W$ is an $n \times n$ matrix. This is similar to a classical two-layer neural network, but applied to both sides $q$ and $d$ of $s(q, d)$. We also consider the straightforward combination of bag-of-N-grams followed by this nonlinearity.
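A corresponding sketch of eq. (6), with the parameter matrices assumed:

```python
import numpy as np

def nonlinear_embed(word_ids, U, W):
    """E(x) = tanh(W tanh(Phi_x(x)^T U)); U is (n, D), W is (n, n)."""
    h = np.tanh(sum(U[:, w] for w in word_ids))  # bag-of-words embedding, then tanh
    return np.tanh(W @ h)
```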
# B BASELINE USING EXTERNAL RESOURCES
We also built a classical cascade NLP system baseline using a structured SVM, which incorporates coreference resolution and semantic role labeling preprocessing steps, which are themselves trained on large amounts of costly labeled data. We first run the Stanford coreference system (Raghunathan et al., 2010) on the stories and each mention is then replaced with the first mention of its entity class. Second, the SENNA semantic role labeling system (SRL) (Collobert et al., 2011) is run, and we collect the set of arguments for each verb. We then define a ranking task for finding the supporting facts (trained using strong supervision):
$$o_1, o_2, o_3 = \arg\max_{o \in O} S_O(x, f_{o_1}, f_{o_2}, f_{o_3}; \Theta)$$
where given the question x we find at most three supporting facts with indices $o_i$ from the set of facts f in the story (we also consider selecting an "empty fact" for the case of fewer than three), and $S_O$ is a linear scoring function with parameters Θ. Computing the argmax requires doing exhaustive search, unlike e.g. the MemNN method which is greedy. For scalability, we thus prune the set of possible matches by requiring that facts share one common non-determiner word with each other match or with x. $S_O$ is constructed as a set of indicator features. For simplicity each of the features only looks at pairs of sentences, i.e. $S_O(x, f_{o_1}, f_{o_2}, f_{o_3}; \Theta) = \Theta \cdot (g(x, f_{o_1}), g(x, f_{o_2}), g(x, f_{o_3}), g(f_{o_1}, f_{o_2}), g(f_{o_2}, f_{o_3}), g(f_{o_1}, f_{o_3}))$. The feature function g is made up of the following feature types, shown here for $g(f_{o_1}, f_{o_2})$: (1) Word pairs: one indicator variable for each pair of words in $f_{o_1}$ and $f_{o_2}$. (2) Pair distance: indicator for the distance between the sentences, i.e. $o_1 - o_2$. (3) Pair order: indicator for the order of the sentences, i.e. $o_1 > o_2$. (4) SRL Verb Pair: indicator variables for each pair of SRL verbs in $f_{o_1}$ and $f_{o_2}$. (5) SRL Verb-Arg Pair: indicator variables for each pair of SRL arguments in $f_{o_1}$, $f_{o_2}$ and their corresponding verbs. After finding the supporting facts, we build a similar structured SVM for the response stage, also with features tuned for that goal: Words, an indicator for each word in x; Word Pairs, an indicator for each pair of words in x and supporting facts; and similar SRL Verb and SRL Verb-Arg Pair features as before.
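A sketch of the pairwise indicator features (1)-(3) as a sparse dictionary; the feature-key tuples are illustrative, and the SRL-based features (4)-(5) would be built analogously from the SENNA output:

```python
def pair_features(f1, f2, idx1, idx2):
    """g(f_o1, f_o2): sparse indicator features over a pair of facts."""
    feats = {}
    for w1 in f1:                           # (1) word pairs
        for w2 in f2:
            feats[("wp", w1, w2)] = 1.0
    feats[("dist", idx1 - idx2)] = 1.0      # (2) pair distance
    feats[("order", idx1 > idx2)] = 1.0     # (3) pair order
    return feats

# S_O scores a candidate triple as the dot product of Theta with the
# concatenation of g(.) over all six (question, fact) and (fact, fact) pairs.
```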
Results are given in Table 3. The structured SVM, despite having access to external resources, does not perform better than MemNNs overall, still failing at 9 tasks. It does perform well on tasks 6, 9 and 10, where the hand-built feature conjunctions capture the necessary nonlinearities that the original MemNNs do not. However, it seems to do significantly worse on tasks requiring three (and sometimes, two) supporting facts (e.g. tasks 3, 16 and 2), presumably as ranking over so many possibilities introduces more mistakes. However, its non-greedy search does seem to help on other tasks, such as path finding (task 19) where search is very important.
| {
"id": "1511.02301"
} |
1502.03167 | Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | Training Deep Neural Networks is complicated by the fact that the
distribution of each layer's inputs changes during training, as the parameters
of the previous layers change. This slows down the training by requiring lower
learning rates and careful parameter initialization, and makes it notoriously
hard to train models with saturating nonlinearities. We refer to this
phenomenon as internal covariate shift, and address the problem by normalizing
layer inputs. Our method draws its strength from making normalization a part of
the model architecture and performing the normalization for each training
mini-batch. Batch Normalization allows us to use much higher learning rates and
be less careful about initialization. It also acts as a regularizer, in some
cases eliminating the need for Dropout. Applied to a state-of-the-art image
classification model, Batch Normalization achieves the same accuracy with 14
times fewer training steps, and beats the original model by a significant
margin. Using an ensemble of batch-normalized networks, we improve upon the
best published result on ImageNet classification: reaching 4.9% top-5
validation error (and 4.8% test error), exceeding the accuracy of human raters. | http://arxiv.org/pdf/1502.03167 | Sergey Ioffe, Christian Szegedy | cs.LG | null | null | cs.LG | 20150211 | 20150302 | arXiv:1502.03167v3 [cs.LG] Mar 2015
# Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe Google Inc., sioffe@google.com
Christian Szegedy Google Inc., szegedy@google.com
# Abstract
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.

# 1 Introduction

Deep learning has dramatically advanced the state of the art in vision, speech, and many other areas. Stochastic gradient descent (SGD) has proved to be an effective way of training deep networks, and SGD variants such as momentum (Sutskever et al., 2013) and Adagrad (Duchi et al., 2011) have been used to achieve state of the art performance. SGD optimizes the parameters Θ of the network, so as to minimize the loss

$$\Theta = \arg\min_\Theta \frac{1}{N} \sum_{i=1}^{N} \ell(x_i, \Theta)$$

where $x_{1\ldots N}$ is the training data set. With SGD, the training proceeds in steps, and at each step we consider a mini-batch $x_{1\ldots m}$ of size m. The mini-batch is used to approximate the gradient of the loss function with respect to the parameters, by computing

$$\frac{1}{m} \sum_{i=1}^{m} \frac{\partial \ell(x_i, \Theta)}{\partial \Theta}.$$

Using mini-batches of examples, as opposed to one example at a time, is helpful in several ways. First, the gradient of the loss over a mini-batch is an estimate of the gradient over the training set, whose quality improves as the batch size increases. Second, computation over a batch can be much more efficient than m computations for individual examples, due to the parallelism afforded by the modern computing platforms.

While stochastic gradient is simple and effective, it requires careful tuning of the model hyper-parameters, specifically the learning rate used in optimization, as well as the initial values for the model parameters. The training is complicated by the fact that the inputs to each layer are affected by the parameters of all preceding layers, so that small changes to the network parameters amplify as the network becomes deeper.

The change in the distributions of layers' inputs presents a problem because the layers need to continuously adapt to the new distribution. When the input distribution to a learning system changes, it is said to experience covariate shift (Shimodaira, 2000). This is typically handled via domain adaptation (Jiang, 2008). However, the notion of covariate shift can be extended beyond the learning system as a whole, to apply to its parts, such as a sub-network or a layer. Consider a network computing

$$\ell = F_2(F_1(u, \Theta_1), \Theta_2)$$

where $F_1$ and $F_2$ are arbitrary transformations, and the parameters $\Theta_1, \Theta_2$ are to be learned so as to minimize the loss $\ell$. Learning $\Theta_2$ can be viewed as if the inputs $x = F_1(u, \Theta_1)$ are fed into the sub-network

$$\ell = F_2(x, \Theta_2).$$

For example, a gradient descent step

$$\Theta_2 \leftarrow \Theta_2 - \frac{\alpha}{m} \sum_{i=1}^{m} \frac{\partial F_2(x_i, \Theta_2)}{\partial \Theta_2}$$

(for batch size m and learning rate α) is exactly equivalent to that for a stand-alone network $F_2$ with input x. Therefore, the input distribution properties that make training more efficient, such as having the same distribution between the training and test data, apply to training the sub-network as well. As such it is advantageous for the distribution of x to remain fixed over time. Then, $\Theta_2$ does not have to readjust to compensate for the change in the distribution of x.
Fixed distribution of inputs to a sub-network would have positive consequences for the layers outside the sub-network, as well. Consider a layer with a sigmoid activation function $z = g(Wu + b)$ where u is the layer input, the weight matrix W and bias vector b are the layer parameters to be learned, and $g(x) = \frac{1}{1+\exp(-x)}$. As $|x|$ increases, $g'(x)$ tends to zero. This means that for all dimensions of $x = Wu + b$ except those with small absolute values, the gradient flowing down to u will vanish and the model will train slowly. However, since x is affected by W, b and the parameters of all the layers below, changes to those parameters during training will likely move many dimensions of x into the saturated regime of the nonlinearity and slow down the convergence. This effect is amplified as the network depth increases. In practice, the saturation problem and the resulting vanishing gradients are usually addressed by using Rectified Linear Units (Nair & Hinton, 2010) ReLU(x) = max(x, 0), careful initialization (Bengio & Glorot, 2010; Saxe et al., 2013), and small learning rates. If, however, we could ensure that the distribution of nonlinearity inputs remains more stable as the network trains, then the optimizer would be less likely to get stuck in the saturated regime, and the training would accelerate.

We refer to the change in the distributions of internal nodes of a deep network, in the course of training, as Internal Covariate Shift. Eliminating it offers a promise of faster training. We propose a new mechanism, which we call Batch Normalization, that takes a step towards reducing internal covariate shift, and in doing so dramatically accelerates the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows us to use much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for Dropout (Srivastava et al., 2014). Finally, Batch Normalization makes it possible to use saturating nonlinearities by preventing the network from getting stuck in the saturated modes.

In Sec. 4.2, we apply Batch Normalization to the best-performing ImageNet classification network, and show that we can match its performance using only 7% of the training steps, and can further exceed its accuracy by a substantial margin. Using an ensemble of such networks trained with Batch Normalization, we achieve the top-5 error rate that improves upon the best known results on ImageNet classification.
# 2 Towards Reducing Internal Covariate Shift
We define Internal Covariate Shift as the change in the distribution of network activations due to the change in network parameters during training. To improve the training, we seek to reduce the internal covariate shift. By fixing the distribution of the layer inputs x as the training progresses, we expect to improve the training speed. It has been long known (LeCun et al., 1998b; Wiesler & Ney, 2011) that the network training converges faster if its inputs are whitened, i.e., linearly transformed to have zero means and unit variances, and decorrelated. As each layer observes the inputs produced by the layers below, it would be advantageous to achieve the same whitening of the inputs of each layer. By whitening the inputs to each layer, we would take a step towards achieving the fixed distributions of inputs that would remove the ill effects of the internal covariate shift.

We could consider whitening activations at every training step or at some interval, either by modifying the network directly or by changing the parameters of the optimization algorithm to depend on the network activation values (Wiesler et al., 2014; Raiko et al., 2012; Povey et al., 2014; Desjardins & Kavukcuoglu). However, if these modifications are interspersed with the optimization steps, then the gradient descent step may attempt to update the parameters in a way that requires the normalization to be updated, which reduces the effect of the gradient step. For example, consider a layer with the input u that adds the learned bias b, and normalizes the result by subtracting the mean of the activation computed over the training data: $\hat{x} = x - E[x]$, where $x = u + b$, $\mathcal{X} = \{x_{1\ldots N}\}$ is the set of values of x over the training set, and $E[x] = \frac{1}{N} \sum_{i=1}^{N} x_i$. If a gradient descent step ignores the dependence of E[x] on b, then it will update $b \leftarrow b + \Delta b$, where $\Delta b \propto -\partial \ell / \partial \hat{x}$. Then $u + (b + \Delta b) - E[u + (b + \Delta b)] = u + b - E[u + b]$. Thus, the combination of the update to b and the subsequent change in normalization led to no change in the output of the layer nor, consequently, the loss. As the training continues, b will grow indefinitely while the loss remains fixed. This problem can get worse if the normalization not only centers but also scales the activations. We have observed this empirically in initial experiments, where the model blows up when the normalization parameters are computed outside the gradient descent step.
The issue with the above approach is that the gradient descent optimization does not take into account the fact that the normalization takes place. To address this issue, we would like to ensure that, for any parameter values, the network always produces activations with the desired distribution. Doing so would allow the gradient of the loss with respect to the model parameters to account for the normalization, and for its dependence on the model parameters Θ. Let again x be a layer input, treated as a vector, and $\mathcal{X}$ be the set of these inputs over the training data set. The normalization can then be written as a transformation

$$\hat{x} = \text{Norm}(x, \mathcal{X})$$
which depends not only on the given training example x but on all examples $\mathcal{X}$, each of which depends on Θ if x is generated by another layer. For backpropagation, we would need to compute the Jacobians

$$\frac{\partial\, \text{Norm}(x, \mathcal{X})}{\partial x} \quad \text{and} \quad \frac{\partial\, \text{Norm}(x, \mathcal{X})}{\partial \mathcal{X}};$$

ignoring the latter term would lead to the explosion described above. Within this framework, whitening the layer inputs is expensive, as it requires computing the covariance matrix $\text{Cov}[x] = E_{x \in \mathcal{X}}[x x^T] - E[x] E[x]^T$ and its inverse square root, to produce the whitened activations $\text{Cov}[x]^{-1/2}(x - E[x])$, as well as the derivatives of these transforms for backpropagation. This motivates us to seek an alternative that performs input normalization in a way that is differentiable and does not require the analysis of the entire training set after every parameter update.
Some previous approaches (e.g. (Lyu & Simoncelli, 2008)) use statistics computed over a single training example, or, in the case of image networks, over different feature maps at a given location. However, this changes the representation ability of a network by discarding the absolute scale of activations. We want to preserve the information in the network, by normalizing the activations in a training example relative to the statistics of the entire training data.
# 3 Normalization via Mini-Batch Statistics
Since the full whitening of each layer's inputs is costly and not everywhere differentiable, we make two necessary simplifications. The first is that instead of whitening the features in layer inputs and outputs jointly, we will normalize each scalar feature independently, by making it have the mean of zero and the variance of 1. For a layer with d-dimensional input $x = (x^{(1)} \ldots x^{(d)})$, we will normalize each dimension

$$\hat{x}^{(k)} = \frac{x^{(k)} - E[x^{(k)}]}{\sqrt{\text{Var}[x^{(k)}]}}$$

where the expectation and variance are computed over the training data set. As shown in (LeCun et al., 1998b), such normalization speeds up convergence, even when the features are not decorrelated.
Note that simply normalizing each input of a layer may change what the layer can represent. For instance, normalizing the inputs of a sigmoid would constrain them to the linear regime of the nonlinearity. To address this, we make sure that the transformation inserted in the network can represent the identity transform. To accomplish this,
we introduce, for each activation $x^{(k)}$, a pair of parameters $\gamma^{(k)}, \beta^{(k)}$, which scale and shift the normalized value:

$$y^{(k)} = \gamma^{(k)} \hat{x}^{(k)} + \beta^{(k)}.$$

These parameters are learned along with the original model parameters, and restore the representation power of the network. Indeed, by setting $\gamma^{(k)} = \sqrt{\text{Var}[x^{(k)}]}$ and $\beta^{(k)} = E[x^{(k)}]$, we could recover the original activations, if that were the optimal thing to do.
In the batch setting where each training step is based on the entire training set, we would use the whole set to normalize activations. However, this is impractical when using stochastic optimization. Therefore, we make the second simplification: since we use mini-batches in stochastic gradient training, each mini-batch produces estimates of the mean and variance of each activation. This way, the statistics used for normalization can fully participate in the gradient backpropagation. Note that the use of mini-batches is enabled by computation of per-dimension variances rather than joint covariances; in the joint case, regularization would be required since the mini-batch size is likely to be smaller than the number of activations being whitened, resulting in singular covariance matrices.
Consider a mini-batch $\mathcal{B}$ of size m. Since the normalization is applied to each activation independently, let us focus on a particular activation $x^{(k)}$ and omit k for clarity. We have m values of this activation in the mini-batch,

$$\mathcal{B} = \{x_{1\ldots m}\}.$$

Let the normalized values be $\hat{x}_{1\ldots m}$, and their linear transformations be $y_{1\ldots m}$. We refer to the transform

$$\text{BN}_{\gamma,\beta} : x_{1\ldots m} \rightarrow y_{1\ldots m}$$

as the Batch Normalizing Transform. We present the BN Transform in Algorithm 1. In the algorithm, $\epsilon$ is a constant added to the mini-batch variance for numerical stability.
Input: Values of x over a mini-batch: $\mathcal{B} = \{x_{1\ldots m}\}$; Parameters to be learned: $\gamma, \beta$
Output: $\{y_i = \text{BN}_{\gamma,\beta}(x_i)\}$

$\mu_\mathcal{B} \leftarrow \frac{1}{m} \sum_{i=1}^{m} x_i$   // mini-batch mean
$\sigma_\mathcal{B}^2 \leftarrow \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu_\mathcal{B})^2$   // mini-batch variance
$\hat{x}_i \leftarrow \frac{x_i - \mu_\mathcal{B}}{\sqrt{\sigma_\mathcal{B}^2 + \epsilon}}$   // normalize
$y_i \leftarrow \gamma \hat{x}_i + \beta \equiv \text{BN}_{\gamma,\beta}(x_i)$   // scale and shift
Algorithm 1: Batch Normalizing Transform, applied to activation x over a mini-batch.
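As a minimal NumPy sketch of Algorithm 1 for a batch of shape (m, d), with our own function name and a cache kept for the backward pass sketched in the next subsection:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Per-dimension batch normalization of x with shape (m, d)."""
    mu = x.mean(axis=0)                     # mini-batch mean
    var = x.var(axis=0)                     # mini-batch variance (1/m convention)
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize
    y = gamma * x_hat + beta                # scale and shift
    cache = (x_hat, mu, var, eps, gamma)
    return y, cache
```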
The BN transform can be added to a network to manipulate any activation. In the notation $y = \text{BN}_{\gamma,\beta}(x)$, we indicate that the parameters γ and β are to be learned, but it should be noted that the BN transform does not independently process the activation in each training example. Rather, $\text{BN}_{\gamma,\beta}(x)$ depends both on the training example and the other examples in the mini-batch. The scaled and shifted values y are passed to other network layers. The normalized activations $\hat{x}$ are internal to our transformation, but their presence is crucial. The distributions of values of any $\hat{x}$ have the expected value of 0 and the variance of 1, as long as the elements of each mini-batch are sampled from the same distribution, and if we neglect $\epsilon$. This can be seen by observing that $\sum_{i=1}^{m} \hat{x}_i = 0$ and $\frac{1}{m} \sum_{i=1}^{m} \hat{x}_i^2 = 1$, and taking expectations. Each normalized activation $\hat{x}^{(k)}$ can be viewed as an input to a sub-network composed of the linear transform $y^{(k)} = \gamma^{(k)} \hat{x}^{(k)} + \beta^{(k)}$, followed by the other processing done by the original network. These sub-network inputs all have fixed means and variances, and although the joint distribution of these normalized $\hat{x}^{(k)}$ can change over the course of training, we expect that the introduction of normalized inputs accelerates the training of the sub-network and, consequently, the network as a whole.
During training we need to backpropagate the gradient of loss $\ell$ through this transformation, as well as compute the gradients with respect to the parameters of the BN transform. We use the chain rule, as follows (before simplification):
$$\frac{\partial \ell}{\partial \hat{x}_i} = \frac{\partial \ell}{\partial y_i} \cdot \gamma$$

$$\frac{\partial \ell}{\partial \sigma_\mathcal{B}^2} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial \hat{x}_i} \cdot (x_i - \mu_\mathcal{B}) \cdot \frac{-1}{2} (\sigma_\mathcal{B}^2 + \epsilon)^{-3/2}$$

$$\frac{\partial \ell}{\partial \mu_\mathcal{B}} = \left( \sum_{i=1}^{m} \frac{\partial \ell}{\partial \hat{x}_i} \cdot \frac{-1}{\sqrt{\sigma_\mathcal{B}^2 + \epsilon}} \right) + \frac{\partial \ell}{\partial \sigma_\mathcal{B}^2} \cdot \frac{\sum_{i=1}^{m} -2 (x_i - \mu_\mathcal{B})}{m}$$

$$\frac{\partial \ell}{\partial x_i} = \frac{\partial \ell}{\partial \hat{x}_i} \cdot \frac{1}{\sqrt{\sigma_\mathcal{B}^2 + \epsilon}} + \frac{\partial \ell}{\partial \sigma_\mathcal{B}^2} \cdot \frac{2 (x_i - \mu_\mathcal{B})}{m} + \frac{\partial \ell}{\partial \mu_\mathcal{B}} \cdot \frac{1}{m}$$

$$\frac{\partial \ell}{\partial \gamma} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial y_i} \cdot \hat{x}_i$$

$$\frac{\partial \ell}{\partial \beta} = \sum_{i=1}^{m} \frac{\partial \ell}{\partial y_i}$$
Thus, BN transform is a differentiable transformation that introduces normalized activations into the network. This ensures that as the model is training, layers can continue learning on input distributions that exhibit less internal covariate shift, thus accelerating the training. Furthermore, the learned affine transform applied to these normalized activations allows the BN transform to represent the identity transformation and preserves the network capacity.
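Continuing the forward sketch above, the gradients translate directly into code; this is a sketch of the equations, not a reference implementation:

```python
def batchnorm_backward(dy, x, cache):
    """Backward pass for batchnorm_forward; dy and x have shape (m, d)."""
    x_hat, mu, var, eps, gamma = cache
    m = x.shape[0]
    inv_std = 1.0 / np.sqrt(var + eps)

    dgamma = (dy * x_hat).sum(axis=0)
    dbeta = dy.sum(axis=0)

    dx_hat = dy * gamma
    dvar = (dx_hat * (x - mu)).sum(axis=0) * (-0.5) * inv_std**3
    dmu = (dx_hat * -inv_std).sum(axis=0) + dvar * (-2.0 * (x - mu)).sum(axis=0) / m
    dx = dx_hat * inv_std + dvar * 2.0 * (x - mu) / m + dmu / m
    return dx, dgamma, dbeta
```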
# 3.1 Training and Inference with Batch-Normalized Networks
To Batch-Normalize a network, we specify a subset of activations and insert the BN transform for each of them, according to Alg. 1. Any layer that previously received x as the input, now receives BN(x). A model employing Batch Normalization can be trained using batch gradient descent, or Stochastic Gradient Descent with a mini-batch size m > 1, or with any of its variants such as Adagrad
(Duchi et al., 2011). The normalization of activations that depends on the mini-batch allows efficient training, but is neither necessary nor desirable during inference; we want the output to depend only on the input, deterministically. For this, once the network has been trained, we use the normalization

$$\hat{x} = \frac{x - E[x]}{\sqrt{\text{Var}[x] + \epsilon}}$$

using the population, rather than mini-batch, statistics. Neglecting $\epsilon$, these normalized activations have the same mean 0 and variance 1 as during training. We use the unbiased variance estimate $\text{Var}[x] = \frac{m}{m-1} \cdot E_\mathcal{B}[\sigma_\mathcal{B}^2]$, where the expectation is over training mini-batches of size m and $\sigma_\mathcal{B}^2$ are their sample variances. Using moving averages instead, we can track the accuracy of a model as it trains. Since the means and variances are fixed during inference, the normalization is simply a linear transform applied to each activation. It may further be composed with the scaling by γ and shift by β, to yield a single linear transform that replaces BN(x). Algorithm 2 summarizes the procedure for training batch-normalized networks.
Input: Network N with trainable parameters Θ; subset of activations $\{x^{(k)}\}_{k=1}^{K}$
Output: Batch-normalized network for inference, $N_{BN}^{inf}$

1: $N_{BN}^{tr} \leftarrow N$   // Training BN network
2: for k = 1 . . . K do
3:   Add transformation $y^{(k)} = \text{BN}_{\gamma^{(k)},\beta^{(k)}}(x^{(k)})$ to $N_{BN}^{tr}$ (Alg. 1)
4:   Modify each layer in $N_{BN}^{tr}$ with input $x^{(k)}$ to take $y^{(k)}$ instead
5: end for
6: Train $N_{BN}^{tr}$ to optimize the parameters $\Theta \cup \{\gamma^{(k)}, \beta^{(k)}\}_{k=1}^{K}$
7: $N_{BN}^{inf} \leftarrow N_{BN}^{tr}$   // Inference BN network with frozen parameters
8: for k = 1 . . . K do
9:   // For clarity, $x \equiv x^{(k)}$, $\gamma \equiv \gamma^{(k)}$, $\mu_\mathcal{B} \equiv \mu_\mathcal{B}^{(k)}$, etc.
10:  Process multiple training mini-batches $\mathcal{B}$, each of size m, and average over them:
      $E[x] \leftarrow E_\mathcal{B}[\mu_\mathcal{B}]$
      $\text{Var}[x] \leftarrow \frac{m}{m-1} E_\mathcal{B}[\sigma_\mathcal{B}^2]$
11:  In $N_{BN}^{inf}$, replace the transform $y = \text{BN}_{\gamma,\beta}(x)$ with $y = \frac{\gamma}{\sqrt{\text{Var}[x] + \epsilon}} \cdot x + \left( \beta - \frac{\gamma\, E[x]}{\sqrt{\text{Var}[x] + \epsilon}} \right)$
12: end for

Algorithm 2: Training a Batch-Normalized Network
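Step 11 of Algorithm 2, folding the frozen statistics into a single affine map, can be sketched as:

```python
import numpy as np

def fold_bn(gamma, beta, pop_mean, pop_var, eps=1e-5):
    """Collapse inference-time BN into y = scale * x + shift."""
    scale = gamma / np.sqrt(pop_var + eps)
    shift = beta - scale * pop_mean
    return scale, shift
```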
# 3.2 Batch-Normalized Convolutional Networks
Batch Normalization can be applied to any set of activations in the network. Here, we focus on transforms
that consist of an affine transformation followed by an element-wise nonlinearity:

$$z = g(Wu + b)$$

where W and b are learned parameters of the model, and $g(\cdot)$ is the nonlinearity such as sigmoid or ReLU. This formulation covers both fully-connected and convolutional layers. We add the BN transform immediately before the nonlinearity, by normalizing $x = Wu + b$. We could have also normalized the layer inputs u, but since u is likely the output of another nonlinearity, the shape of its distribution is likely to change during training, and constraining its first and second moments would not eliminate the covariate shift. In contrast, $Wu + b$ is more likely to have a symmetric, non-sparse distribution, that is "more Gaussian" (Hyvärinen & Oja, 2000); normalizing it is likely to produce activations with a stable distribution.
Note that, since we normalize $Wu + b$, the bias b can be ignored since its effect will be canceled by the subsequent mean subtraction (the role of the bias is subsumed by β in Alg. 1). Thus, $z = g(Wu + b)$ is replaced with

$$z = g(\text{BN}(Wu))$$

where the BN transform is applied independently to each dimension of $x = Wu$, with a separate pair of learned parameters $\gamma^{(k)}, \beta^{(k)}$ per dimension.
For convolutional layers, we additionally want the normalization to obey the convolutional property, so that different elements of the same feature map, at different locations, are normalized in the same way. To achieve this, we jointly normalize all the activations in a mini-batch, over all locations. In Alg. 1, we let $\mathcal{B}$ be the set of all values in a feature map across both the elements of a mini-batch and spatial locations, so for a mini-batch of size m and feature maps of size $p \times q$, we use the effective mini-batch of size $m' = |\mathcal{B}| = m \cdot pq$. We learn a pair of parameters $\gamma^{(k)}$ and $\beta^{(k)}$ per feature map, rather than per activation. Alg. 2 is modified similarly, so that during inference the BN transform applies the same linear transformation to each activation in a given feature map.
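A sketch of the convolutional variant for inputs in (m, c, p, q) layout, with statistics pooled over the batch and spatial axes as described:

```python
import numpy as np

def batchnorm_conv_forward(x, gamma, beta, eps=1e-5):
    """One (gamma, beta) pair per feature map; effective batch size m*p*q."""
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)
```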
# 3.3 Batch Normalization enables higher learning rates
In traditional deep networks, a too-high learning rate may result in gradients that explode or vanish, as well as getting stuck in poor local minima. Batch Normalization helps address these issues. By normalizing activations throughout the network, it prevents small changes to the parameters from amplifying into larger and suboptimal changes in activations and gradients; for instance, it prevents the training from getting stuck in the saturated regimes of nonlinearities.
Batch Normalization also makes training more resilient to the parameter scale. Normally, large learning rates may increase the scale of layer parameters, which then amplify
the gradient during backpropagation and lead to the model explosion. However, with Batch Normalization, backpropagation through a layer is unaffected by the scale of its parameters. Indeed, for a scalar a,
$$\text{BN}(Wu) = \text{BN}((aW)u)$$

and we can show that

$$\frac{\partial\, \text{BN}((aW)u)}{\partial u} = \frac{\partial\, \text{BN}(Wu)}{\partial u}, \qquad \frac{\partial\, \text{BN}((aW)u)}{\partial (aW)} = \frac{1}{a} \cdot \frac{\partial\, \text{BN}(Wu)}{\partial W}$$
The scale does not affect the layer Jacobian nor, consequently, the gradient propagation. Moreover, larger weights lead to smaller gradients, and Batch Normalization will stabilize the parameter growth.
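This scale invariance is easy to verify numerically with the forward-pass sketch given after Algorithm 1; the values below are arbitrary, and the tiny residual comes from the $\epsilon$ inside the square root:

```python
import numpy as np  # batchnorm_forward as sketched after Algorithm 1

rng = np.random.default_rng(0)
u = rng.normal(size=(64, 32))
W = rng.normal(size=(32, 16))
gamma, beta = np.ones(16), np.zeros(16)

y1, _ = batchnorm_forward(u @ W, gamma, beta)
y2, _ = batchnorm_forward(u @ (5.0 * W), gamma, beta)
print(np.max(np.abs(y1 - y2)))  # near zero, up to the eps term
```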
We further conjecture that Batch Normalization may lead the layer Jacobians to have singular values close to 1, which is known to be beneficial for training (Saxe et al., 2013). Consider two consecutive layers with normalized inputs, and the transformation between these normalized vectors: $\hat{z} = F(\hat{x})$. If we assume that $\hat{x}$ and $\hat{z}$ are Gaussian and uncorrelated, and that $F(\hat{x}) = J \hat{x}$ is a linear transformation for the given model parameters, then both $\hat{x}$ and $\hat{z}$ have unit covariances, and $I = \text{Cov}[\hat{z}] = J\, \text{Cov}[\hat{x}]\, J^T = J J^T$. Thus, $J J^T = I$, and so all singular values of J are equal to 1, which preserves the gradient magnitudes during backpropagation. In reality, the transformation is not linear, and the normalized values are not guaranteed to be Gaussian nor independent, but we nevertheless expect Batch Normalization to help make gradient propagation better behaved. The precise effect of Batch Normalization on gradient propagation remains an area of further study.
# 3.4 Batch Normalization regularizes the model
When training with Batch Normalization, a training example is seen in conjunction with other examples in the mini-batch, and the training network no longer produces deterministic values for a given training example. In our experiments, we found this effect to be advantageous to the generalization of the network. Whereas Dropout (Srivastava et al., 2014) is typically used to reduce overfitting, in a batch-normalized network we found that it can be either removed or reduced in strength.
# 4 Experiments
# 4.1 Activations over time
To verify the effects of internal covariate shift on training, and the ability of Batch Normalization to combat it, we considered the problem of predicting the digit class on the MNIST dataset (LeCun et al., 1998a). We used a very simple network, with a 28x28 binary image as input, and
Figure 1: (a) The test accuracy of the MNIST network trained with and without Batch Normalization, vs. the number of training steps. Batch Normalization helps the network train faster and achieve higher accuracy. (b, c) The evolution of input distributions to a typical sigmoid, over the course of training, shown as {15, 50, 85}th percentiles. Batch Normalization makes the distribution more stable and reduces the internal covariate shift.
3 fully-connected hidden layers with 100 activations each. Each hidden layer computes $y = g(Wu + b)$ with sigmoid nonlinearity, and the weights W initialized to small random Gaussian values. The last hidden layer is followed by a fully-connected layer with 10 activations (one per class) and cross-entropy loss. We trained the network for 50000 steps, with 60 examples per mini-batch. We added Batch Normalization to each hidden layer of the network, as in Sec. 3.1. We were interested in the comparison between the baseline and batch-normalized networks, rather than achieving the state of the art performance on MNIST (which the described architecture does not).

Figure 1(a) shows the fraction of correct predictions by the two networks on held-out test data, as training progresses. The batch-normalized network enjoys the higher test accuracy. To investigate why, we studied inputs to the sigmoid, in the original network N and batch-normalized network $N_{BN}^{tr}$ (Alg. 2) over the course of training. In Fig. 1(b,c) we show, for one typical activation from the last hidden layer of each network, how its distribution evolves. The distributions in the original network change significantly over time, both in their mean and the variance, which complicates the training of the subsequent layers. In contrast, the distributions in the batch-normalized network are much more stable as training progresses, which aids the training.
# 4.2 ImageNet classification
We applied Batch Normalization to a new variant of the Inception network (Szegedy et al., 2014), trained on the ImageNet classification task (Russakovsky et al., 2014). The network has a large number of convolutional and pooling layers, with a softmax layer to predict the image class, out of 1000 possibilities. Convolutional layers use ReLU as the nonlinearity. The main difference to the network described in (Szegedy et al., 2014) is that the 5×5 convolutional layers are replaced by two consecutive layers of 3×3 convolutions with up to 128 filters. The network contains 13.6·10^6 parameters, and, other than the top softmax layer, has no fully-connected layers. More
details are given in the Appendix. We refer to this model as Inception in the rest of the text. The model was trained using a version of Stochastic Gradient Descent with momentum (Sutskever et al., 2013), using the mini-batch size of 32. The training was performed using a large-scale, distributed architecture (similar to (Dean et al., 2012)). All networks are evaluated as training progresses by computing the validation accuracy @1, i.e. the probability of predicting the correct label out of 1000 possibilities, on a held-out set, using a single crop per image.
In our experiments, we evaluated several modifications of Inception with Batch Normalization. In all cases, Batch Normalization was applied to the input of each nonlinearity, in a convolutional way, as described in section 3.2, while keeping the rest of the architecture constant.
# 4.2.1 Accelerating BN Networks
Simply adding Batch Normalization to a network does not take full advantage of our method. To do so, we further changed the network and its training parameters, as follows:
Increase learning rate. In a batch-normalized model, we have been able to achieve a training speedup from higher learning rates, with no ill side effects (Sec. 3.3).
Remove Dropout. As described in Sec. 3.4, Batch Normalization fulfills some of the same goals as Dropout. Removing Dropout from Modified BN-Inception speeds up training, without increasing overfitting.

Reduce the L2 weight regularization. While in Inception an L2 loss on the model parameters controls overfitting, in Modified BN-Inception the weight of this loss is reduced by a factor of 5. We find that this improves the accuracy on the held-out validation data.

Accelerate the learning rate decay. In training Inception, the learning rate was decayed exponentially. Because our network trains faster than Inception, we lower the learning rate 6 times faster.

Remove Local Response Normalization. While Inception and other networks (Srivastava et al., 2014) benefit from it, we found that with Batch Normalization it is not necessary.

Shuffle training examples more thoroughly. We enabled within-shard shuffling of the training data, which prevents the same examples from always appearing in a mini-batch together. This led to about 1% improvements in the validation accuracy, which is consistent with the view of Batch Normalization as a regularizer (Sec. 3.4): the randomization inherent in our method should be most beneficial when it affects an example differently each time it is seen.

Reduce the photometric distortions. Because batch-normalized networks train faster and observe each training example fewer times, we let the trainer focus on more "real" images by distorting them less.
Figure 2: Single crop validation accuracy of Inception and its batch-normalized variants, vs. the number of training steps.
# 4.2.2 Single-Network Classification
We evaluated the following networks, all trained on the LSVRC2012 training data, and tested on the validation data:
Inception: the network described at the beginning of Section 4.2, trained with the initial learning rate of 0.0015.

BN-Baseline: Same as Inception with Batch Normalization before each nonlinearity.
BN-x5: Inception with Batch Normalization and the modifications in Sec. 4.2.1. The initial learning rate was increased by a factor of 5, to 0.0075. The same learning rate increase with original Inception caused the model parameters to reach machine infinity.
BN-x30: Like BN-x5, but with the initial learning rate 0.045 (30 times that of Inception).
BN-x5-Sigmoid: Like BN-x5, but with sigmoid nonlinearity $g(x) = \frac{1}{1+\exp(-x)}$ instead of ReLU. We also attempted to train the original Inception with sigmoid, but the model remained at the accuracy equivalent to chance. In Figure 2, we show the validation accuracy of the networks, as a function of the number of training steps. Inception reached the accuracy of 72.2% after 31·10^6 training steps. Figure 3 shows, for each network, the number of training steps required to reach the same 72.2% accuracy, as well as the maximum validation accuracy reached by the network and the number of steps to reach it.
By only using Batch Normalization (BN-Baseline), we match the accuracy of Inception in less than half the number of training steps. By applying the modifications in Sec. 4.2.1, we significantly increase the training speed of the network. BN-x5 needs 14 times fewer steps than Inception to reach the 72.2% accuracy. Interestingly, increasing the learning rate further (BN-x30) causes the model to train somewhat slower initially, but allows it to reach a higher final accuracy. It reaches 74.8% after 6·10^6 steps, i.e. 5 times fewer steps than required by Inception to reach 72.2%.
We also verified that the reduction in internal covariate shift allows deep networks with Batch Normalization
| Model | Steps to 72.2% | Max accuracy |
|---|---|---|
| Inception | 31.0·10^6 | 72.2% |
| BN-Baseline | 13.3·10^6 | 72.7% |
| BN-x5 | 2.1·10^6 | 73.0% |
| BN-x30 | 2.7·10^6 | 74.8% |
| BN-x5-Sigmoid | - | 69.8% |
Figure 3: For Inception and the batch-normalized variants, the number of training steps required to reach the maximum accuracy of Inception (72.2%), and the maximum accuracy achieved by the network.
to be trained when sigmoid is used as the nonlinearity, despite the well-known difficulty of training such networks. Indeed, BN-x5-Sigmoid achieves the accuracy of 69.8%. Without Batch Normalization, Inception with sigmoid never achieves better than 1/1000 accuracy.
# 4.2.3 Ensemble Classification
The current reported best results on the ImageNet Large Scale Visual Recognition Competition are reached by the Deep Image ensemble of traditional models (Wu et al., 2015) and the ensemble model of (He et al., 2015). The latter reports the top-5 error of 4.94%, as evaluated by the ILSVRC server. Here we report a top-5 validation error of 4.9%, and test error of 4.82% (according to the ILSVRC server). This improves upon the previous best result, and exceeds the estimated accuracy of human raters according to (Russakovsky et al., 2014).
For our ensemble, we used 6 networks. Each was based on BN-x30, modified via some of the following: increased initial weights in the convolutional layers; using Dropout (with the Dropout probability of 5% or 10%, vs. 40% for the original Inception); and using non-convolutional, per-activation Batch Normalization with the last hidden layers of the model. Each network achieved its maximum accuracy after about 6·10^6 training steps. The ensemble prediction was based on the arithmetic average of class probabilities predicted by the constituent networks. The details of ensemble and multicrop inference are similar to (Szegedy et al., 2014).
We demonstrate in Fig. 4 that batch normalization allows us to set a new state of the art by a healthy margin on the ImageNet classification challenge benchmarks.
# 5 Conclusion
We have presented a novel mechanism for dramatically accelerating the training of deep networks. It is based on the premise that covariate shift, which is known to complicate the training of machine learning systems, also
| Model | Resolution | Crops | Top-1 error |
|---|---|---|---|
| GoogLeNet ensemble | 224 | 144 | - |
| Deep Image low-res | 256 | - | - |
| Deep Image high-res | 512 | - | 24.88 |
| Deep Image ensemble | variable | - | - |
| BN-Inception single crop | 224 | 1 | 25.2% |
| BN-Inception multicrop | 224 | 144 | 21.99% |
| BN-Inception ensemble | 224 | 144 | 20.1% |

Figure 4: Batch-Normalized Inception comparison with previous state of the art on the provided validation set comprising 50000 images. *BN-Inception ensemble has reached 4.82% top-5 error on the 100000 images of the test set of the ImageNet as reported by the test server.
applies to sub-networks and layers, and removing it from internal activations of the network may aid in training. Our proposed method draws its power from normalizing activations, and from incorporating this normalization in the network architecture itself. This ensures that the normalization is appropriately handled by any optimization method that is being used to train the network. To enable stochastic optimization methods commonly used in deep network training, we perform the normalization for each mini-batch, and backpropagate the gradients through the normalization parameters. Batch Normalization adds only two extra parameters per activation, and in doing so preserves the representation ability of the network. We presented an algorithm for constructing, training, and performing inference with batch-normalized networks. The resulting networks can be trained with saturating nonlinearities, are more tolerant to increased training rates, and often do not require Dropout for regularization.

Merely adding Batch Normalization to a state-of-the-art image classification model yields a substantial speedup in training. By further increasing the learning rates, removing Dropout, and applying other modifications afforded by Batch Normalization, we reach the previous state of the art with only a small fraction of training steps, and then beat the state of the art in single-network image classification. Furthermore, by combining multiple models trained with Batch Normalization, we perform better than the best known system on ImageNet, by a significant margin.

Interestingly, our method bears similarity to the standardization layer of (Gülçehre & Bengio, 2013), though the two methods stem from very different goals, and perform different tasks. The goal of Batch Normalization is to achieve a stable distribution of activation values throughout training, and in our experiments we apply it before the nonlinearity since that is where matching the first and second moments is more likely to result in a stable distribution. On the contrary, (Gülçehre & Bengio, 2013) apply the standardization layer to the output of the nonlinearity, which results in sparser activations. In our large-scale image classification experiments, we have not observed the nonlinearity inputs to be sparse, neither with nor without Batch Normalization. Other notable differentiating characteristics of Batch Normalization include the learned scale and shift that allow the BN transform to represent identity (the standardization layer did not require this since it was followed by the learned linear transform that, conceptually, absorbs the necessary scale and shift), handling of convolutional layers, deterministic inference that does not depend on the mini-batch, and batch-normalizing each convolutional layer in the network.

In this work, we have not explored the full range of possibilities that Batch Normalization potentially enables. Our future work includes applications of our method to Recurrent Neural Networks (Pascanu et al., 2013), where the internal covariate shift and the vanishing or exploding gradients may be especially severe, and which would allow us to more thoroughly test the hypothesis that normalization improves gradient propagation (Sec. 3.3). We plan to investigate whether Batch Normalization can help with domain adaptation, in its traditional sense, i.e. whether the normalization performed by the network would allow it to more easily generalize to new data distributions, perhaps with just a recomputation of the population means and variances (Alg. 2). Finally, we believe that further theoretical analysis of the algorithm would allow still more improvements and applications.
# References
Bengio, Yoshua and Glorot, Xavier. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of AISTATS 2010, volume 9, pp. 249–256, May 2010.

Dean, Jeffrey, Corrado, Greg S., Monga, Rajat, Chen, Kai, Devin, Matthieu, Le, Quoc V., Mao, Mark Z., Ranzato, Marc'Aurelio, Senior, Andrew, Tucker, Paul, Yang, Ke, and Ng, Andrew Y. Large scale distributed deep networks. In NIPS, 2012.
Desjardins, Guillaume and Kavukcuoglu, Koray. Natural neural networks. (unpublished).
Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic
optimization. J. Mach. Learn. Res., 12:2121–2159, July 2011. ISSN 1532-4435.
Gülçehre, Çağlar and Bengio, Yoshua. Knowledge matters: Importance of prior information for optimization. CoRR, abs/1301.4083, 2013.

He, K., Zhang, X., Ren, S., and Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. ArXiv e-prints, February 2015.

Hyvärinen, A. and Oja, E. Independent component analysis: Algorithms and applications. Neural Netw., 13(4-5):411–430, May 2000.

Jiang, Jing. A literature survey on domain adaptation of statistical classifiers, 2008.

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998a.

LeCun, Y., Bottou, L., Orr, G., and Muller, K. Efficient backprop. In Orr, G. and K., Muller (eds.), Neural Networks: Tricks of the trade. Springer, 1998b.
Lyu, S and Simoncelli, E P. Nonlinear image representa- tion using divisive normalization. In Proc. Computer Vision and Pattern Recognition, pp. 1â8. IEEE Com- puter Society, Jun 23-28 2008. doi: 10.1109/CVPR. 2008.4587821.
Nair, Vinod and Hinton, Geoffrey E. Rectiï¬ed linear units improve restricted boltzmann machines. In ICML, pp. 807â814. Omnipress, 2010.
Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pp. 1310–1318, 2013.

Povey, Daniel, Zhang, Xiaohui, and Khudanpur, Sanjeev. Parallel training of deep neural networks with natural gradient and parameter averaging. CoRR, abs/1410.7455, 2014.

Raiko, Tapani, Valpola, Harri, and LeCun, Yann. Deep learning made easier by linear transformations in perceptrons. In International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 924–932, 2012.

Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge, 2014.
Saxe, Andrew M., McClelland, James L., and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. CoRR, abs/1312.6120, 2013.
Shimodaira, Hidetoshi. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, October 2000.
Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958, January 2014.

Sutskever, Ilya, Martens, James, Dahl, George E., and Hinton, Geoffrey E. On the importance of initialization and momentum in deep learning. In ICML (3), volume 28 of JMLR Proceedings, pp. 1139–1147. JMLR.org, 2013.

Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.

Wiesler, Simon and Ney, Hermann. A convergence analysis of log-linear training. In Shawe-Taylor, J., Zemel, R.S., Bartlett, P., Pereira, F.C.N., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 24, pp. 657–665, Granada, Spain, December 2011.

Wiesler, Simon, Richard, Alexander, Schlüter, Ralf, and Ney, Hermann. Mean-normalized stochastic gradient for large-scale deep learning. In IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 180–184, Florence, Italy, May 2014.
Wu, Ren, Yan, Shengen, Shan, Yi, Dang, Qingqing, and Sun, Gang. Deep image: Scaling up image recognition, 2015.
# Appendix
# Variant of the Inception Model Used
Figure 5 documents the changes that were performed with respect to the GoogLeNet architecture. For the interpretation of this table, please consult (Szegedy et al., 2014). The notable architecture changes compared to the GoogLeNet model include:
• The 5×5 convolutional layers are replaced by two consecutive 3×3 convolutional layers. This increases the maximum depth of the network by 9 weight layers. It also increases the number of parameters by 25% and the computational cost is increased by about 30% (a rough per-layer parameter count is sketched after this list).
⢠The number 28 from 2 to 3. à 28 inception modules is increased
• Inside the modules, sometimes average, sometimes maximum-pooling is employed. This is indicated in the entries corresponding to the pooling layers of the table.
• There are no across-the-board pooling layers between any two Inception modules, but stride-2 convolution/pooling layers are employed before the filter concatenation in the modules 3c, 4e.
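As a rough check on the first change, the following sketch compares per-layer weight counts for a single 5×5 convolution versus two stacked 3×3 convolutions at an equal, hypothetical channel width c; the quoted network-wide 25% parameter increase also reflects the accompanying changes to filter-bank sizes, so this is only illustrative.

```python
def conv_weights(k, c_in, c_out):
    # Weight count of a k x k convolution, ignoring biases.
    return k * k * c_in * c_out

c = 64  # hypothetical channel count, equal input and output width
single_5x5 = conv_weights(5, c, c)      # 25 * c^2 weights
two_3x3 = 2 * conv_weights(3, c, c)     # 18 * c^2 weights, same receptive field
print(two_3x3 / single_5x5)             # 0.72: the substitution itself is cheaper
```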
Our model employed separable convolution with depth multiplier 8 on the first convolutional layer. This reduces the computational cost while increasing the memory consumption at training time.
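A minimal sketch of why a depthwise-separable first layer is cheaper, using the first layer's shape from the table (7×7 kernel, 112×112×64 output) and assuming a 3-channel RGB input; the function names are illustrative.

```python
def standard_conv_macs(k, c_in, c_out, h, w):
    # Multiply-accumulates of a dense k x k convolution.
    return k * k * c_in * c_out * h * w

def separable_conv_macs(k, c_in, depth_mult, c_out, h, w):
    depthwise = k * k * c_in * depth_mult * h * w   # per-channel spatial filtering
    pointwise = c_in * depth_mult * c_out * h * w   # 1x1 channel mixing
    return depthwise + pointwise

dense = standard_conv_macs(7, 3, 64, 112, 112)
sep = separable_conv_macs(7, 3, 8, 64, 112, 112)
print(sep / dense)  # ~0.29: roughly 3.5x fewer multiply-accumulates
```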
[Table: Inception architecture — for each layer (convolution, max/avg pooling, and inception modules 3a–5b) the columns give type, patch size/stride, output size (112×112×64 through 1×1×1024), depth, and the filter-bank sizes #1×1, #3×3 reduce, #3×3, double #3×3 reduce, double #3×3, and pool+proj.]
Figure 5: Inception architecture
| {
"id": "1502.03167"
} |
1502.02251 | From Pixels to Torques: Policy Learning with Deep Dynamical Models | Data-efficient learning in continuous state-action spaces using very
high-dimensional observations remains a key challenge in developing fully
autonomous systems. In this paper, we consider one instance of this challenge,
the pixels to torques problem, where an agent must learn a closed-loop control
policy from pixel information only. We introduce a data-efficient, model-based
reinforcement learning algorithm that learns such a closed-loop policy directly
from pixel information. The key ingredient is a deep dynamical model that uses
deep auto-encoders to learn a low-dimensional embedding of images jointly with
a predictive model in this low-dimensional feature space. Joint learning
ensures that not only static but also dynamic properties of the data are
accounted for. This is crucial for long-term predictions, which lie at the core
of the adaptive model predictive control strategy that we use for closed-loop
control. Compared to state-of-the-art reinforcement learning methods for
continuous states and actions, our approach learns quickly, scales to
high-dimensional state spaces and is an important step toward fully autonomous
learning from pixels to torques. | http://arxiv.org/pdf/1502.02251 | Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth | stat.ML, cs.LG, cs.RO, cs.SY | 9 pages | null | stat.ML | 20150208 | 20150618
# From Pixels to Torques: Policy Learning with Deep Dynamical Models
# Niklas Wahlström Division of Automatic Control, Linköping University, Linköping, Sweden
NIKWA@ISY.LIU.SE
# Thomas B. Schön Department of Information Technology, Uppsala University, Sweden

THOMAS.SCHON@IT.UU.SE
# Marc Peter Deisenroth Department of Computing, Imperial College London, United Kingdom
M.DEISENROTH@IMPERIAL.AC.UK
# Abstract
Data-efficient learning in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. In this paper, we consider one instance of this challenge, the pixels to torques problem, where an agent must learn a closed-loop control policy from pixel information only. We introduce a data-efficient, model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information. The key ingredient is a deep dynamical model that uses deep auto-encoders to learn a low-dimensional embedding of images jointly with a predictive model in this low-dimensional feature space. Joint learning ensures that not only static but also dynamic properties of the data are accounted for. This is crucial for long-term predictions, which lie at the core of the adaptive model predictive control strategy that we use for closed-loop control. Compared to state-of-the-art reinforcement learning methods for continuous states and actions, our approach learns quickly, scales to high-dimensional state spaces and is an important step toward fully autonomous learning from pixels to torques.
# 1. Introduction

The vision of fully autonomous and intelligent systems that learn by themselves has influenced AI and robotics research for many decades. To devise fully autonomous systems, it is necessary to (1) process perceptual data (e.g., images) to summarize knowledge about the surrounding environment and the system's behavior in this environment, (2) make decisions based on uncertain and incomplete information, (3) take new information into account for learning and adaptation. Effectively, any fully autonomous system has to close this perception-action-learning loop without relying on specific human expert knowledge. The pixels to torques problem (Brock, 2011) identifies key aspects of an autonomous system: autonomous thinking and decision making using sensor measurements only, intelligent exploration and learning from mistakes.

We consider the problem of learning closed-loop policies ("torques") from pixel information end-to-end. A possible scenario is a scene in which a robot is moving about. The only available sensor information is provided by a camera, i.e., no direct information of the robot's joint configuration is available. The objective is to learn a continuous-valued policy that allows the robotic agent to solve a task in this continuous environment in a data-efficient way, i.e., we want to keep the number of trials small. To date, there is no fully autonomous system that convincingly closes the perception-action-learning loop and solves the pixels to torques problem in continuous state-action spaces, the natural domains in robotics.

A promising approach toward solving the pixels to torques problem is Reinforcement Learning (RL) (Sutton & Barto, 1998), a principled mathematical framework that deals with fully autonomous learning from trial and error. However, one practical shortcoming of many existing RL algorithms is that they require many trials to learn good policies, which is prohibitive when working with real-world mechanical plants or robots.
Figure 1. Illustration of our idea of combining deep learning architectures for feature learning and prediction models in feature space. A camera observes a robot approaching an object. A good low-dimensional feature representation of an image is important for learning a predictive model if the camera is the only sensor available.

One way of using data efficiently (and therefore keeping the number of experiments small) is to learn forward models of the underlying dynamical system, which are then used for internal simulations and policy learning. These ideas have been successfully applied to RL, control and robotics in (Schmidhuber, 1990; Atkeson & Schaal, 1997; Bagnell & Schneider, 2001; Contardo et al., 2013; Pan & Theodorou, 2014; Deisenroth et al., 2015; van Hoof et al., 2015; Levine et al., 2015), for instance. However, these methods use heuristic or engineered low-dimensional features, and they do not easily scale to data-efficient RL using pixel information only because even "small" images possess thousands of dimensions.
A common way of dealing with high-dimensional data is to learn low-dimensional feature representations. Deep learning architectures, such as deep neural networks (Hinton & Salakhutdinov, 2006), stacked auto-encoders (Bengio et al., 2007; Vincent et al., 2008), or convolutional neural networks (LeCun et al., 1998), are the current state of the art in learning parsimonious representations of high-dimensional data. Deep learning has been successfully applied to image, text and speech data in commercial products, e.g., by Google, Amazon and Facebook.

Deep learning has been used to produce first promising results in the context of model-free RL on images: For instance, (Mnih et al., 2015) present an approach based on Deep-Q-learning, in which human-level game strategies are learned autonomously, purely based on pixel information. Moreover, (Lange et al., 2012) presented an approach that learns good discrete actions to control a slot car based on raw images, employing deep architectures for finding compact low-dimensional representations. Other examples of deep learning in the context of RL on image data include (Cuccu et al., 2011; Koutnik et al., 2013). These approaches have in common that they try to estimate the value function from which the policy is derived. However, neither of these algorithms learns a predictive model, and they are, therefore, prone to data inefficiency, either requiring data collection from millions of experiments or relying on discretization and very low-dimensional feature spaces, limiting their applicability to mechanical systems.

To increase data efficiency, we therefore introduce a model-based approach to learning from pixels to torques. In particular, we exploit results from (Wahlström et al., 2015) and jointly learn a lower-dimensional embedding of images and a transition function in this lower-dimensional space that we can use for internal simulation of the dynamical system. For this purpose, we employ deep auto-encoders for the lower-dimensional embedding and a multi-layer feed-forward neural network for the transition function. We use this deep dynamical model to predict trajectories and apply an adaptive model-predictive-control (MPC) algorithm (Mayne, 2014) for online closed-loop control, which is practically based on pixel information only.
MPC has been well explored in the control community. However, adaptive MPC has so far not received much attention in the literature (Mayne, 2014). An exception is (Sha, 2008), where the authors advocate a neural network approach similar to ours. However, they do not consider high-dimensional data but assume direct access to low-dimensional measurements.
Our approach benefits from the application of model-based optimal control principles within a machine learning framework. Along these lines, (Deisenroth et al., 2009; Abramova et al., 2012; Boedecker et al., 2014; Pan & Theodorou, 2014; Levine et al., 2015) suggested to first learn a transition model and then use optimal control methods to solve RL problems. Unlike these methods, our approach does not need to estimate value functions and scales to high-dimensional problems.
Similar to our approach, (Boots et al., 2014; Levine et al., 2015; van Hoof et al., 2015) recently proposed model-based RL methods that learn policies directly from visual information. Unlike these methods, we exploit a low-dimensional feature representation that allows for fast predictions and online control learning via MPC.
# Problem Set-up and Objective
We consider a classical N-step finite-horizon RL setting in which an agent attempts to solve a particular task by trial and error. In particular, our objective is to find a closed-loop policy $\pi^*$ that minimizes the long-term cost $V = \sum_{t=1}^{N} f_0(x_t, u_t)$, where $f_0$ denotes an immediate cost, $x_t \in \mathbb{R}^D$ is the continuous-valued system state and $u_t \in \mathbb{R}^F$ are continuous control inputs.
Figure 2. Auto-encoder that consists of an encoder $g^{-1}$ and a decoder $g$. The encoder maps the original image $y_t \in \mathbb{R}^M$ onto its low-dimensional representation $z_t = g^{-1}(y_t) \in \mathbb{R}^m$, where $m \ll M$; the decoder maps this feature back to a high-dimensional representation $\hat{y}_t = g(z_t)$. The gray color represents high-dimensional observations.
Figure 3. Prediction model: Each feature $z_t$ is computed from high-dimensional data $y_t$ via the encoder $g^{-1}$. The transition model predicts the feature $\hat{z}_{t+1|h_t}$ at the next time step based on the n-step history of past features $z_{t-n+1}, \ldots, z_t$ and control inputs $u_{t-n+1}, \ldots, u_t$. The predicted feature $\hat{z}_{t+1|h_t}$ can be mapped to a high-dimensional prediction $\hat{y}_{t+1}$ via the decoder $g$. The gray color represents high-dimensional observations.
The learning agent faces the following additional challenges: (a) the agent has no access to the true state, but perceives the environment only through high-dimensional pixel information (images), (b) a good control policy is required in only a few trials. This setting is practically relevant, e.g., when the agent is a robot that is monitored by a video camera based on which the robot has to learn to solve tasks fully autonomously. Therefore, this setting is an instance of the pixels to torques problem.

# 2. Deep Dynamical Model

Our approach to solving the pixels-to-torques problem is based on a deep dynamical model (DDM), which jointly (i) embeds high-dimensional images in a low-dimensional feature space via deep auto-encoders and (ii) learns a predictive forward model in this feature space (Wahlström et al., 2015). In particular, we consider a DDM with control inputs $u$ and high-dimensional observations $y$. We assume that the relevant properties of $y$ can be compactly represented by a feature variable $z$. The two components of the DDM, i.e., the low-dimensional embedding and the prediction model, which predicts future observations $y_{t+1}$ based on past observations and control inputs, are detailed in the following. Throughout this paper, $y_t$ denotes the high-dimensional measurements, $z_t$ the corresponding low-dimensional encoded features and $\hat{y}_t$ the reconstructed high-dimensional measurement. Further, $\hat{z}_{t+1}$ and $\hat{y}_{t+1}$ denote a predicted feature and measurement at time $t+1$, respectively.

# 2.1. Deep Auto-Encoder

We use a deep auto-encoder for embedding images in a low-dimensional feature space, where both the encoder $g^{-1}$ and the decoder $g$ are modeled with deep neural networks. Each layer $k$ of the encoder neural network $g^{-1}$ computes $y_t^{(k+1)} = \sigma(A_k y_t^{(k)} + b_k)$, where $\sigma$ is a sigmoidal activation function (we used arctan) and $A_k$ and $b_k$ are free parameters. The input to the first layer is the image, i.e., $y_t^{(1)} = y_t$. The last layer is the low-dimensional feature representation of the image $z_t(\theta_E) = g^{-1}(y_t; \theta_E)$, where $\theta_E = [\ldots, A_k, b_k, \ldots]$ are the parameters of all neural network layers. The decoder $g$ consists of the same number of layers in reverse order, see Fig. 2, and approximately inverts the encoder $g^{-1}$, such that $\hat{y}_t(\theta_E, \theta_D) = g(g^{-1}(y_t; \theta_E); \theta_D) \approx y_t$ is the reconstructed version of $y_t$ with an associated reconstruction error
$$\varepsilon_t^{R}(\theta_E, \theta_D) = y_t - \hat{y}_t(\theta_E, \theta_D). \quad (1)$$
The main purpose of the deep auto-encoder is to keep this reconstruction error and the associated compression loss negligible, such that the features $z_t$ are a compact representation of the images $y_t$.
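A minimal numpy sketch of this encoder/decoder pair and the reconstruction error (1); the parameter layout (a list of (A_k, b_k) pairs per network) and the arctan output layer of the decoder are illustrative assumptions, not specified by the paper.

```python
import numpy as np

def encode(y, enc_params):
    # enc_params: list of (A_k, b_k) pairs; arctan is the paper's activation.
    z = y
    for A, b in enc_params:
        z = np.arctan(A @ z + b)
    return z

def decode(z, dec_params):
    # Same number of layers in reverse order, approximately inverting encode.
    y_hat = z
    for A, b in dec_params:
        y_hat = np.arctan(A @ y_hat + b)
    return y_hat

def reconstruction_error(y, enc_params, dec_params):
    # Eq. (1): eps_R = y - g(g^{-1}(y))
    return y - decode(encode(y, enc_params), dec_params)
```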
# 2.2. Prediction Model
We now turn the static auto-encoder into a dynamical model that can predict future features $\hat{z}_{t+1}$ and images $\hat{y}_{t+1}$. The encoder $g^{-1}$ allows us to map high-dimensional observations $y_t$ onto low-dimensional features $z_t$. For predicting we assume that future features $\hat{z}_{t+1|h_t}$ depend on an n-step history $h_t$ of past features and control inputs, i.e.,

$$\hat{z}_{t+1|h_t}(\theta_P) = f(z_t, u_t, \ldots, z_{t-n+1}, u_{t-n+1}; \theta_P), \quad (2)$$
where $f$ is a nonlinear transition function, in our case a feed-forward neural network, and $\theta_P$ are the corresponding model parameters. This is a nonlinear autoregressive exogenous model (NARX) (Ljung, 1999). The predictive performance of the model will be important for model predictive control (see Section 3) and for model learning based on the prediction error (Ljung, 1999).
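A sketch of this NARX transition, continuing the earlier snippets; feeding the stacked history through a small feed-forward network f is the paper's choice, while the flat concatenation order is an assumption (features and controls stored as 1-D arrays).

```python
import numpy as np

def predict_feature(f, z_hist, u_hist):
    # Eq. (2): next feature from the n-step history of features and controls.
    # z_hist = [z_t, ..., z_{t-n+1}], u_hist = [u_t, ..., u_{t-n+1}].
    x = np.concatenate(list(z_hist) + list(u_hist))
    return f(x)  # f: feed-forward network mapping the history to z_{t+1}
```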
To predict future observations $\hat{y}_{t+1|h_t}$, we exploit the decoder, such that $\hat{y}_{t+1|h_t} = g(\hat{z}_{t+1|h_t}; \theta_D)$. The deep decoder $g$ maps features $z$ to high-dimensional observations $y$, parameterized by $\theta_D$.
Now, we are ready to put the pieces together: With feature prediction model (2) and the deep auto-encoder, the DDM predicts future features and images according to
$$z_t(\theta_E) = g^{-1}(y_t; \theta_E), \quad (3a)$$
$$\hat{z}_{t+1|h_t}(\theta_E, \theta_P) = f(z_t, u_t, \ldots, z_{t-n+1}, u_{t-n+1}; \theta_P), \quad (3b)$$
$$\hat{y}_{t+1|h_t}(\theta_E, \theta_D, \theta_P) = g(\hat{z}_{t+1|h_t}; \theta_D), \quad (3c)$$

which is illustrated in Fig. 3. With this prediction model we define the prediction error

$$\varepsilon_{t+1}^{P}(\theta_E, \theta_D, \theta_P) = y_{t+1} - \hat{y}_{t+1|h_t}(\theta_E, \theta_D, \theta_P), \quad (4)$$

where $y_{t+1}$ is the observed image at time $t+1$.
# 2.3. Training
The DDM is parameterized by the encoder parameters $\theta_E$, the decoder parameters $\theta_D$ and the prediction model parameters $\theta_P$. In the DDM, we train both the prediction model and the deep auto-encoder jointly by finding parameters $(\hat{\theta}_E, \hat{\theta}_D, \hat{\theta}_P)$, such that

$$(\hat{\theta}_E, \hat{\theta}_D, \hat{\theta}_P) = \arg\min_{\theta_E, \theta_D, \theta_P} V_R(\theta_E, \theta_D) + V_P(\theta_E, \theta_D, \theta_P), \quad (5a)$$
$$V_P(\theta_E, \theta_D, \theta_P) = \sum_{t=1}^{N} \|\varepsilon_t^{P}(\theta_E, \theta_D, \theta_P)\|^2, \quad (5b)$$
$$V_R(\theta_E, \theta_D) = \sum_{t=1}^{N} \|\varepsilon_t^{R}(\theta_E, \theta_D)\|^2, \quad (5c)$$

which minimizes the sums of squared reconstruction (1) and prediction (4) errors.
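Building on the earlier snippets, a sketch of the joint objective (5a)–(5c) for a recorded image sequence Y and control sequence U; how the data are batched and how parameters would be packed for an optimizer are assumptions.

```python
import numpy as np

def joint_loss(Y, U, enc_params, dec_params, f, n=2):
    # Eq. (5a): sum of squared reconstruction (5c) and prediction (5b) errors.
    V_R, V_P = 0.0, 0.0
    Z = [encode(y, enc_params) for y in Y]
    for t in range(n - 1, len(Y) - 1):
        V_R += np.sum((Y[t] - decode(Z[t], dec_params)) ** 2)
        z_hist = [Z[t - i] for i in range(n)]
        u_hist = [U[t - i] for i in range(n)]
        z_next = predict_feature(f, z_hist, u_hist)
        V_P += np.sum((Y[t + 1] - decode(z_next, dec_params)) ** 2)
    return V_R + V_P
```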
We learn all model parameters $\theta_E, \theta_D, \theta_P$ jointly by solving (5a).¹ The required gradients with respect to the parameters are computed efficiently by back-propagation, and the cost function is minimized by the BFGS algorithm (Nocedal & Wright, 2006). Note that in (5a) it is crucial to include not only the prediction error $V_P$, but also the reconstruction error $V_R$. Without this term the multi-step ahead prediction performance will decrease because predicted features are not consistent with features obtained from the encoder. Since we consider a control problem in this paper, multi-step ahead predictive performance is crucial.

Initialization. With a linear activation function the auto-encoder and PCA are identical (Bourlard & Kamp, 1988), which we exploit to initialize the parameters of the auto-encoder: The auto-encoder network is unfolded, each pair of layers in the encoder and the decoder are combined, and the corresponding PCA solution is computed for each of these pairs. We start with high-dimensional image data at the top layer and use the principal components from that pair of layers as input to the next pair of layers. Thereby, we recursively compute a good initialization for all parameters of the auto-encoder. Similar pre-training routines are found in (Hinton & Salakhutdinov, 2006), in which a restricted Boltzmann machine is used instead of PCA.

¹Normally when features are used for learning dynamical models, they are first extracted from the data in a pre-processing step by minimizing (5c) with respect to the auto-encoder parameters $\theta_E, \theta_D$. In a second step, the prediction model parameters $\theta_P$ are estimated based on these features by minimizing (5b) conditioned on the estimated $\hat{\theta}_E$ and $\hat{\theta}_D$. In our experience, a problem with this approach is that the learned features might have a small reconstruction error, but this representation will not be ideal for learning a transition model. The supplementary material discusses this in more detail.
In this section, we have presented a DDM that facilitates fast predictions of high-dimensional observations via a low-dimensional embedded time series. The property of fast predictions will be exploited by the online feedback control strategy presented in the following. More details on the proposed model are given in (Wahlström et al., 2015).
# 3. Learning Closed-Loop Policies from Images
We use the DDM for learning a closed-loop policy by means of nonlinear model predictive control (MPC). We start off with an introduction to classical MPC, before moving on to MPC on images in Section 3.1. MPC finds an optimal sequence of control signals that minimizes a K-step loss function, where K is typically smaller than the full horizon. In general, MPC relies on (a) a reference trajectory $x_{\mathrm{ref}} = x_1^*, \ldots, x_K^*$ (which can be a constant reference signal) and (b) a dynamics model
$$x_{t+1} = f(x_t, u_t), \quad (6)$$

which, assuming that the current state is denoted by $x_0$, can be used to compute/predict a state trajectory $\hat{x}_1, \ldots, \hat{x}_K$ for a given sequence $u_0, \ldots, u_{K-1}$ of control signals. Using the dynamics model, MPC determines an optimal (open-loop) control sequence $u_0^*, \ldots, u_{K-1}^*$, such that the predicted trajectory $\hat{x}_1, \ldots, \hat{x}_K$ gets as close to the reference
trajectory $x_{\mathrm{ref}}$ as possible, such that

$$u_0^*, \ldots, u_{K-1}^* \in \arg\min_{u_{0:K-1}} \sum_{i=0}^{K-1} \|\hat{x}_i - x_i^*\|^2 + \lambda \|u_i\|^2, \quad (7)$$

where $\|\hat{x}_i - x_i^*\|^2$ is a cost associated with the deviation of the predicted state trajectory $\hat{x}_{0:K}$ from the reference trajectory $x_{\mathrm{ref}}$, and $\|u_i\|^2$ penalizes the amplitude of the control signals. Note that the predicted $\hat{x}_i$ depends on all previous $u_{0:i-1}$. When the control sequence $u_0^*, \ldots, u_{K-1}^*$ is determined, the first control $u_0^*$ is applied to the system. After observing the next state, MPC repeats the entire optimization and turns the overall policy into a closed-loop (feedback) control strategy.
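A sketch of this receding-horizon loop using scipy's BFGS optimizer (the paper also uses BFGS); the step-function interface and the hyperparameter defaults are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_first_action(x0, step, x_ref, K=15, lam=0.01, u_dim=1):
    # Eq. (7): optimize a K-step open-loop control sequence, then
    # apply only the first control (receding horizon).
    def cost(u_flat):
        u = u_flat.reshape(K, u_dim)
        x, c = x0, 0.0
        for i in range(K):
            x = step(x, u[i])  # model rollout, Eq. (6)
            c += np.sum((x - x_ref) ** 2) + lam * np.sum(u[i] ** 2)
        return c
    res = minimize(cost, np.zeros(K * u_dim), method="BFGS")
    return res.x.reshape(K, u_dim)[0]
```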
# 3.1. MPC on Images
We now turn the classical MPC procedure into MPC on images by exploiting some convenient properties of the DDM. The DDM allows us to predict features $\hat{z}_1, \ldots, \hat{z}_K$ based on a sequence of controls $u_0, \ldots, u_{K-1}$. By comparing (6) with (2), we define the state $x_0$ as the present and past $n-1$ features and the past $n-1$ control inputs, such that

$$x_0 = [z_0, \ldots, z_{-n+1}, u_{-1}, \ldots, u_{-n+1}]. \quad (8)$$

The DDM computes the present and past features with the encoder $z_t = g^{-1}(y_t, \theta_E)$, such that $x_0$ is known at the current time, which matches the MPC requirement. Our objective is to control the system towards a desired reference image frame $y_{\mathrm{ref}}$. This reference frame $y_{\mathrm{ref}}$ can also be encoded to a corresponding reference feature $z_{\mathrm{ref}} = g^{-1}(y_{\mathrm{ref}}, \theta_E)$, which results in the MPC objective

$$u_0^*, \ldots, u_{K-1}^* \in \arg\min_{u_{0:K-1}} \sum_{i=0}^{K-1} \|\hat{z}_i - z_{\mathrm{ref}}\|^2 + \lambda \|u_i\|^2, \quad (9)$$

where $x_0$, defined in (8), is the current state. The gradients of the cost function (9) with respect to the control signals $u_0, \ldots, u_{K-1}$ are computed in closed form, and we use BFGS to find the optimal sequence of control signals. Note that the objective function depends on $u_0, \ldots, u_{K-1}$ not only via the control penalty $\|u_i\|^2$ but also via the feature predictions $\hat{z}_{1:K}$ of the DDM via (2). Overall, we now have an online MPC algorithm that, given a trained DDM, works indirectly on images by exploiting their feature representation. In the following, we will turn this into an iterative algorithm that learns predictive models from images and good controllers from scratch.
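A sketch of the feature-space cost (9), rolling the transition model forward while shifting the history; the history bookkeeping and the argument ordering passed to f are assumptions, and a numerical optimizer would stand in for the paper's closed-form gradients.

```python
import numpy as np

def image_mpc_cost(u_seq, z_hist, u_past, z_ref, f, lam=0.01):
    # Eq. (9): roll the transition model f forward in feature space and
    # penalize the distance of predicted features to the encoded reference.
    z_h, u_h, c = list(z_hist), list(u_past), 0.0   # newest element first
    for u in u_seq:
        x = np.concatenate(z_h + [u] + u_h)          # n-step history, Eq. (2)
        z_next = f(x)
        c += np.sum((z_next - z_ref) ** 2) + lam * np.sum(u ** 2)
        z_h = [z_next] + z_h[:-1]                    # shift feature history
        u_h = [u] + u_h[:-1]                         # shift control history
    return c
```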
Algorithm 1 Adaptive MPC in feature space
Follow a random control strategy and record data
loop
  Update DDM with all data collected so far
  for t = 0 to N−1 do
    Get state x_t via auto-encoder
    u_t ← ε-greedy MPC policy using DDM prediction
    Apply u_t and record data
  end for
end loop
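A sketch of one inner loop of Algorithm 1; env and ddm are hypothetical interfaces (an environment that applies controls and returns images, and the learned model with its encoder and transition function).

```python
import numpy as np

def run_trial(env, ddm, z_ref, eps=0.2, N=100):
    # One trial of Algorithm 1: epsilon-greedy MPC in feature space.
    data = []
    for t in range(N):
        x = ddm.current_state(env.image())  # features via the encoder, Eq. (8)
        if np.random.rand() < eps:
            u = env.random_action()                       # explore
        else:
            u = mpc_first_action(x, ddm.step, z_ref)      # exploit the DDM
        y_next = env.apply(u)                             # observe next image
        data.append((x, u, y_next))
    return data
```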
# 3.2. Adaptive MPC for Learning from Scratch
We will now describe how (adaptive) MPC can be used together with our DDM to address the pixels to torques problem and to learn from scratch. At the core of our MPC formulation lies the DDM, which is used to predict future states (8) from a sequence of control inputs. The quality of the MPC controller is inherently bound to the prediction quality of the dynamical model, which is typical in model-based RL (Schneider, 1997; Schaal, 1997; Deisenroth et al., 2015).

To learn models and controllers from scratch, we apply a control scheme that allows us to update the DDM as new data arrives. In particular, we use the MPC controller in an adaptive fashion to gradually improve the model with collected data in the feedback loop, without any specific prior knowledge of the system at hand. Data collection is performed in closed loop (online MPC), and it is divided into multiple sequential trials. After each trial, we add the data of the most recent trajectory to the data set, and the model is re-trained using all data that has been collected so far.

Simply applying the MPC controller based on a randomly initialized model would make the closed-loop system very likely to converge to a point far away from the desired reference value, due to the poor model that cannot extrapolate well to unseen states. This would in turn imply that no data is collected in unexplored regions, including the region that we actually are interested in. There are two solutions to this problem: Either we use a probabilistic dynamics model as suggested in (Schneider, 1997; Deisenroth et al., 2015) to explicitly account for model uncertainty and the implied natural exploration, or we follow an explicit exploration strategy to ensure proper excitation of the system. In this paper, we follow the latter approach. In particular, we choose an ε-greedy exploration strategy where the optimal feedback $u_t^*$ at each time step is selected with probability $1-\varepsilon$, and a random action is selected with probability $\varepsilon$.

Algorithm 1 summarizes our adaptive online MPC scheme. We initialize the DDM with a random trial. We use the learned DDM to find an ε-greedy policy using predicted features within MPC. This happens online. The collected data is added to the data set and the DDM is updated after each trial.
Figure 4. Long-term (up to eight steps) predictive performance of the DDM: True (upper plot) and predicted (lower plot) video frames on test data.
# 4. Experimental Results
In the following, we empirically assess the components of our proposed methodology for autonomous learning from high-dimensional synthetic image data: (a) the quality of the learned DDM and (b) the overall learning framework.
In both cases, we consider a sequence of images ($51 \times 51 = 2601$ pixels) and a control input associated with these images. Each pixel $y_t^{(i)}$ is a component of the measurement $y_t \in \mathbb{R}^{2601}$ and assumes a continuous gray-value in the interval $[0, 1]$. No access to the underlying dynamics or the state (angle $\varphi$ and angular velocity $\dot\varphi$) was available, i.e., we are dealing with a high-dimensional continuous state space. The challenge was to learn (a) a good dynamics model and (b) a good controller from pixel information only. We used a sampling frequency of 0.2 s and a time horizon of 25 s, which corresponds to 100 frames per trial.
The input dimension was reduced to dim(y_t) = 50 prior to model learning using PCA. With these 50-dimensional inputs, a four-layer auto-encoder network was used with dimensions 50-25-12-6-2, such that the features were of dimension dim(z_t) = 2, which is optimal to model the periodic angle of the pendulum. The order of the dynamics was selected to be n = 2 (i.e., we consider two consecutive image frames) to capture velocity information, such that $z_{t+1} = f(z_t, u_t, z_{t-1}, u_{t-1})$. For the prediction model $f$ we used a feed-forward neural network with a 6-4-2 architecture. Note that the dimension of the first layer is given by $n(\dim(z_t) + \dim(u_t)) = 2(2+1) = 6$.
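A sketch of this preprocessing and network sizing; scikit-learn's PCA is one way to obtain the 50-dimensional inputs (Y_pixels is a hypothetical (T, 2601) array of flattened frames), and the random initialization shown here merely stands in for the paper's PCA-based pre-training.

```python
import numpy as np
from sklearn.decomposition import PCA

pca = PCA(n_components=50)
Y50 = pca.fit_transform(Y_pixels)  # (T, 2601) pixel rows -> (T, 50)

rng = np.random.default_rng(0)
enc_dims = [50, 25, 12, 6, 2]      # the paper's 50-25-12-6-2 encoder
enc_params = [(0.1 * rng.standard_normal((d_out, d_in)), np.zeros(d_out))
              for d_in, d_out in zip(enc_dims[:-1], enc_dims[1:])]
dec_params = [(0.1 * rng.standard_normal((d_out, d_in)), np.zeros(d_out))
              for d_in, d_out in zip(enc_dims[::-1][:-1], enc_dims[::-1][1:])]
```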
Figure 5. Feature space for both joint (a) and sequential training (b) of auto-encoder and prediction model. The feature space is divided into grid points. For each grid point the decoded high-dimensional image is displayed and the feature values for the training data (red) and validation data (yellow) are overlain. For the joint training the feature values reside on a two-dimensional manifold that corresponds to the two-dimensional position of the tile. For the separate training the feature values are scattered without structure.
# 4.1. Learning Predictive Models from Pixels
To assess the predictive performance of the DDM, we took 601 screenshots of a moving tile, see Fig. 4. The control inputs are the (random) increments in position in horizontal and vertical directions.
We evaluate the performance of the learned DDM in terms of long-term predictions, which play a central role in MPC for autonomous learning. Long-term predictions are obtained by concatenating multiple 1-step ahead predictions.
The performance of the DDM is illustrated in Fig. 4 on a test data set. The top row shows the ground truth images and the bottom row shows the DDM's long-term predictions. The model predicts future frames of the tile with high accuracy, both one step ahead and multiple steps ahead.
In Fig. 5(a), the feature representation of the data is displayed. The features reside on a two-dimensional manifold that encodes the two-dimensional position of the moving tile. By inspecting the decoded images we can see that each corner of the manifold corresponds to a corner position of the tile. Due to this structure a relatively simple prediction model is sufficient to describe the dynamics. Had the auto-encoder and the prediction model been learned sequentially (first training the auto-encoder, and then training the prediction model on the resulting feature values), such a structure would not have been enforced. In Fig. 5(b) the corresponding feature representation is displayed where only the auto-encoder has been trained. Clearly, these features do not exhibit such a structure.
Figure 7. Control performance after 1st to 15th trial evaluated with ε = 0 for 16 different experiments. The objective was to reach an angle of ±Ï.
Figure 6. The feature space $z \in [-1, 1] \times [-1, 1]$ is divided into $9 \times 9$ grid points for illustration purposes. For each grid point the decoded high-dimensional image is displayed. Green: Feature values that correspond to collected experience in previous trials. Cyan: Feature value that corresponds to the current time step. Red: Desired reference value. Yellow: 15-steps-ahead prediction after optimizing for the optimal control inputs.
# 4.2. Closed-Loop Policy Learning from Pixels
In this section, we report results on learning a policy that moves a pendulum (1-link robot arm with length 1 m, weight 1 kg and friction coefficient 1 Nsm/rad) from a start position $\varphi = 0$ to a target position $\varphi = \pm\pi$. The reference signal was the screenshot of the pendulum in the target position. For the MPC controller, we used a planning horizon of $P = 15$ steps and a control penalty $\lambda = 0.01$. For the ε-greedy exploration strategy we used $\varepsilon = 0.2$. We conducted 50 independent experiments with different random initializations. The learning algorithm was run for 15 trials (plus an initial random trial). After each trial, we retrained the DDM using all data collected so far, where we also include the reference image while learning the auto-encoder.
Fig. 6 displays the decoded images corresponding to learned latent representations in $[-1, 1]^2$. The learned feature values of the training data (green) line up in a circular shape, such that a relatively simple prediction model is sufficient to describe the dynamics. Had we not optimized for both the prediction error and the reconstruction error, such an advantageous structure of the feature values would not have been obtained. The DDM extracts features that can also model the dynamic behavior compactly. The figure also shows the predictions produced by the MPC controller (yellow), starting from the current time step (cyan) and targeting the reference feature (red) where the pendulum is in the target position.
To assess the controller performance after each trial, we applied a greedy policy ($\varepsilon = 0$). In Fig. 7, angle trajectories for 15 of the 50 experiments at different learning stages are displayed. In the first trial, the controller managed only in a few cases to drive the pendulum toward the reference value $\pm\pi$. The control performance increased gradually with the number of trials, and after the 15th trial, it managed in most cases to reach the upright position.
To assess the data efficiency of our approach, we compared it with the PILCO RL framework (Deisenroth et al., 2015) for learning closed-loop control policies for the pendulum task above. PILCO is a current state-of-the-art model-based RL algorithm for data-efficient learning of control policies in continuous state-control spaces. Using collected data, PILCO learns a probabilistic model of the system dynamics, implemented as a Gaussian process (GP) (Rasmussen & Williams, 2006). Subsequently, this model is used to compute a distribution over trajectories and the corresponding expected cost, which is used for gradient-based optimization of the controller parameters.
Figure 8. Average learning success (with standard errors) as a function of the number of frames (100 per trial). Blue: PILCO ground-truth RL baseline using the true state $(\varphi, \dot\varphi)$. Red: PILCO with learned auto-encoder features from image pixels. Cyan: PILCO on 20D features determined by PCA. Black: Our proposed MPC solution using the DDM.
Although PILCO uses data very efficiently, its computational demand makes its direct application impractical for high-dimensional (≥ 20D) problems and many data points, such that we had to make suitable adjustments to apply PILCO to the pixels-to-torques problem. In particular, we performed the following experiments: (1) PILCO applied to 20D PCA features, (2) PILCO applied to 2D features learned by deep auto-encoders, (3) an optimal baseline where we applied PILCO to the standard RL setting with access to the "true" state $(\varphi, \dot\varphi)$ (Deisenroth et al., 2015).

Fig. 8 displays the average success rate of PILCO (including standard error) and our proposed method using deep dynamical models together with a tailored MPC (DDM+MPC). We define "success" if the pendulum's angle is stabilized within 10° around the target state.² The baseline (PILCO trained on the ground-truth 2D state $(\varphi, \dot\varphi)$) is shown in blue and solves the task very quickly. The graph shows that our proposed algorithm (black), which learns torques directly from pixels, is not too far behind the ground-truth RL solution, achieving an almost 90% success rate after 15 trials (1500 image frames). However, PILCO trained on the 2D auto-encoder features (red) and 20D PCA features failed consistently in all experiments. We explain PILCO's failure by the fact that we trained the auto-encoder and the transition dynamics in feature space separately. The auto-encoder finds good features that minimize the reconstruction error. However, these features are not good for modeling the dynamic behavior of the system,³ and lead to bad long-term predictions.

²Since we consider a continuous setting, we have to define a target region.

³When we inspected the latent-space embedding of the auto-encoder, the pendulum angles do not nicely line up along an "easy" manifold as in Fig. 6. See the supplementary material for more details.

Computation times of PILCO and our method are vastly different: While PILCO spends most time optimizing policy parameters, our model spends most of the time on learning the DDM. Computing the optimal nonparametric MPC policy happens online and does not require significant computational overhead. To put this into context, PILCO required a few days of learning time for 10 trials (in a 20D feature space). In a 2D feature space, running PILCO for 10 trials and 1000 data points requires about 10 hours.
Overall, our DDM+MPC approach to learning closed-loop policies from high-dimensional observations exploits the learned Deep Dynamical Model to learn good policies fairly data efficiently.
# 5. Conclusion
We have proposed a data-efficient model-based RL algorithm that learns closed-loop policies in continuous state and action spaces directly from pixel information. The key components of our solution are (1) a deep dynamical model (DDM) that is used for long-term predictions in a compact feature space and (2) an MPC controller that uses the predictions of the DDM to determine optimal actions on the fly without the need for value function estimation. For the success of this RL algorithm it is crucial that the DDM learns the feature mapping and the predictive model in feature space jointly to capture dynamic behavior for high-quality long-term predictions. Compared to state-of-the-art RL, our algorithm learns fairly quickly, scales to high-dimensional state spaces and facilitates learning from pixels to torques.
# Acknowledgments
This work was supported by the Swedish Foundation for Strategic Research under the project Cooperative Localization and the Swedish Research Council under the project Probabilistic modeling of dynamical systems (Contract number: 621-2013-5524). MPD was supported by an Imperial College Junior Research Fellowship.
# References
Abramova, Ekatarina, Dickens, Luke, Kuhn, Daniel, and Faisal, A. Aldo. Hierarchical, heterogeneous control using reinforcement learning. In EWRL, 2012.
Atkeson, Christopher G. and Schaal, S. Learning tasks from a single demonstration. In ICRA, 1997.
LeCun, Y, Bottou, L, Bengio, Y, and Haffner, P. Gradient-based learning applied to document recognition. Proc. of the IEEE, 86(11):2278–2324, 1998.
Bagnell, James A. and Schneider, Jeff G. Autonomous helicopter control using reinforcement learning policy search methods. In ICRA, 2001.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
Bengio, Yoshua, Lamblin, Pascal, Popovici, Dan, and Larochelle, Hugo. Greedy layer-wise training of deep networks. In NIPS, 2007.
Ljung, L. System Identification: Theory for the User. Prentice Hall, 1999.

Boedecker, Joschka, Springenberg, Jost Tobias, Wülfing, Jan, and Riedmiller, Martin. Approximate real-time optimal control based on sparse Gaussian process models. In ADPRL, 2014.

Boots, Byron, Byravan, Arunkumar, and Fox, Dieter. Learning predictive models of a depth camera & manipulator from raw execution traces. In ICRA, 2014.

Mayne, David Q. Model predictive control: Recent developments and future promise. Automatica, 50(12):2967–2986, 2014.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Bourlard, Hervé and Kamp, Yves. Auto-association by multilayer perceptrons and singular value decomposition. Biological Cybernetics, 59(4-5):291–294, 1988.
Nocedal, J. and Wright, S. J. Numerical Optimization. Springer, 2006.
Brock, Oliver. Berlin Summit on Robotics: Conference Report, chapter Is Robotics in Need of a Paradigm Shift?, pp. 1–10. 2011.
Contardo, Gabriella, Denoyer, Ludovic, Artieres, Thierry, and Gallinari, Patrick. Learning states representations in POMDP. arXiv preprint arXiv:1312.6042, 2013.
Cuccu, Giuseppe, Luciw, Matthew, Schmidhuber, Jürgen, and Gomez, Faustino. Intrinsically motivated neuroevolution for vision-based reinforcement learning. In ICDL, 2011.

Pan, Yunpeng and Theodorou, Evangelos. Probabilistic differential dynamic programming. In NIPS, 2014.

Rasmussen, Carl E. and Williams, Christopher K. I. Gaussian Processes for Machine Learning. The MIT Press, 2006.
Schaal, Stefan. Learning from demonstration. In NIPS. 1997.
Schmidhuber, Jürgen. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In IJCNN, 1990.

Deisenroth, Marc P., Rasmussen, Carl E., and Peters, Jan. Gaussian process dynamic programming. Neurocomputing, 72(7–9):1508–1524, 2009.

Deisenroth, Marc P., Fox, Dieter, and Rasmussen, Carl E. Gaussian processes for data-efficient learning in robotics and control. IEEE-TPAMI, 37(2):408–423, 2015.

Hinton, G and Salakhutdinov, R. Reducing the dimensionality of data with neural networks. Science, 313:504–507, 2006.

Koutnik, Jan, Cuccu, Giuseppe, Schmidhuber, Jürgen, and Gomez, Faustino. Evolving large-scale neural networks for vision-based reinforcement learning. In GECCO, 2013.
Schneider, Jeff G. Exploiting model uncertainty estimates for safe dynamic control learning. In NIPS. 1997.
Sha, Daohang. A new neural networks based adaptive model predictive control for unknown multiple variable non-linear systems. IJAMS, 1(2):146–155, 2008.
Sutton, Richard S. and Barto, Andrew G. Reinforcement Learning: An Introduction. The MIT Press, 1998.
van Hoof, Herke, Peters, Jan, and Neumann, Gerhard. Learning of non-parametric control policies with high-dimensional state features. In AISTATS, 2015.

Vincent, P, Larochelle, H, Bengio, Y, and Manzagol, Pierre-Antoine. Extracting and composing robust features with denoising autoencoders. In ICML, 2008.

Lange, Sascha, Riedmiller, Martin, and Voigtländer, Arne. Autonomous reinforcement learning on raw visual input data in a real-world application. In IJCNN, 2012.

Wahlström, Niklas, Schön, Thomas B., and Deisenroth, Marc P. Learning deep dynamical models from image pixels. In SYSID, 2015.
"id": "1504.00702"
} |