Microsoft COCO Captions: Data Collection and Evaluation Server
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
# Under review as a conference paper at ICLR 2016

TOWARDS AI-COMPLETE QUESTION ANSWERING: A SET OF PREREQUISITE TOY TASKS

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin & Tomas Mikolov
Facebook AI Research, 770 Broadway, New York, USA
{jase,abordes,spchopra,tmikolov,sashar,bartvm}@fb.com

# ABSTRACT

One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems cannot currently solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
# 1 INTRODUCTION

There is a rich history of the use of synthetic tasks in machine learning, from the XOR problem which helped motivate neural networks (Minsky & Papert, 1969; Rumelhart et al., 1985), to circle and ring datasets that helped motivate some of the most well-known clustering and semi-supervised learning algorithms (Ng et al., 2002; Zhu et al., 2003), Mackey-Glass equations for time series (Müller et al., 1997), and so on; in fact some of the well-known UCI datasets (Bache & Lichman, 2013) are synthetic as well (e.g., waveform). Recent work continues this trend. For example, in the area of developing learning algorithms with a memory component, synthetic datasets were used to help develop both the Neural Turing Machine of Graves et al. (2014) and the Memory Networks of Weston et al. (2014), the latter of which is relevant to this work.

One of the reasons for the interest in synthetic data is that it can be easier to develop new techniques using it. It is well known that working with large amounts of real data ("big data") tends to lead researchers to simpler models, as "simple models and a lot of data trump more elaborate models based on less data" (Halevy et al., 2009). For example, N-grams for language modeling work well relative to existing competing methods, but are far from being a model that truly understands text. As researchers we can become stuck in local minima in algorithm space; development of synthetic data is one way to try and break out of that. In this work we propose a framework and a set of synthetic tasks with the goal of helping to develop learning algorithms for text understanding and reasoning.
While it is relatively difficult to automatically evaluate the performance of an agent in general dialogue (a long-term goal of AI), it is relatively easy to evaluate responses to input questions, i.e., the task of question answering (QA). Question answering is incredibly broad: more or less any task one can think of can be cast into this setup. This enables us to propose a wide-ranging set of different tasks that test different capabilities of learning algorithms, under a common framework.

Our tasks are built with a unified underlying simulation of a physical world, akin to a classic text adventure game (Montfort, 2005), whereby actors move around manipulating objects and interacting
with each other. As the simulation runs, grounded text and question-answer pairs are simultaneously generated. Our goal is to categorize different kinds of questions into skill sets, which become our tasks. Our hope is that the analysis of performance on these tasks will help expose weaknesses of current models and help motivate new algorithm designs that alleviate these weaknesses. We further envision this as a feedback loop where new tasks can then be designed in response, perhaps in an adversarial fashion, in order to break the new models.

The tasks we design are detailed in Section 3, and the simulation used to generate them in Section 4. In Section 5 we give benchmark results of standard methods on our tasks, and analyse their successes and failures. In order to exemplify the kind of feedback loop between algorithm development and task development we envision, in Section A we propose a set of improvements to the recent Memory Network method, which has been shown to give promising performance in QA. We show our proposed approach does indeed give improved performance on some tasks, but is still unable to solve some of them, which we consider as open problems.

# 2 RELATED WORK

Several projects targeting language understanding using QA-based strategies have recently emerged. Unlike tasks like dialogue or summarization, QA is easy to evaluate (especially in true/false or multiple-choice scenarios), which makes it an appealing research avenue.
The difficulty lies in the definition of questions: they must be unambiguously answerable by adult humans (or children), but still require some thinking. The Allen Institute for AI's flagship project ARISTO (http://allenai.org/aristo.html) is organized around a collection of QA tasks derived from increasingly difficult science exams, at the 4th, 8th, and 12th grade levels. Richardson et al. (2013) proposed MCTest (http://research.microsoft.com/mct), a set of 660 stories and associated questions intended for research on the machine comprehension of text. Each question requires the reader to understand different aspects of the story. These two initiatives go in a promising direction, but interpreting the results on these benchmarks remains complicated. Indeed, no system has yet been able to fully solve the proposed tasks, and since many sub-tasks need to be solved to answer any of their questions (coreference, deduction, use of common sense, etc.), it is difficult to clearly identify the capabilities and limitations of these systems and hence to propose improvements and modifications. As a result, conclusions drawn from these projects are not much clearer than those coming from more traditional work on QA over large-scale knowledge bases (Berant et al., 2013; Fader et al., 2014). Besides, the best-performing systems are based on hand-crafted patterns and features, and/or statistics acquired on very large corpora. It is difficult to argue that such systems actually understand language and are not simply light upgrades of traditional information extraction methods (Yao et al., 2014). The system of Berant et al. (2014) is more evolved since it builds a structured representation of a text and of a question to answer. Despite its potential, this method remains highly domain-specific and relies on a lot of prior knowledge.

Based on these observations, we chose to conceive a collection of much simpler QA tasks, with the main objective that failure or success of a system on any of them can unequivocally provide feedback on its capabilities. In that, we are close to the Winograd Schema Challenge (Levesque et al., 2011), which is organized around simple statements followed by a single binary-choice question such as:
"Joan made sure to thank Susan for all the help she had received. Who had received the help? Joan or Susan?". In this challenge, and in our tasks, it is straightforward to interpret results. Yet, where the Winograd Challenge is mostly centered around evaluating whether systems can acquire and make use of background knowledge that is not expressed in the words of the statement, our tasks are self-contained and more diverse. By self-contained we mean our tasks come with both training data and evaluation data, rather than just the latter as in the case of ARISTO and the Winograd Challenge. MCTest has a train/test split, but the training set is likely too small to capture all the reasoning needed to do well on the test set. In our setup one can assess the amount of training examples needed to perform well (which can be increased as desired), and the commonsense knowledge and reasoning required for the test set should be contained in the training set. In terms of diversity, some of our tasks are related to existing setups, but we also propose many additional ones; tasks 8 and 9 are inspired by previous work on lambda dependency-based compositional semantics (Liang et al., 2013; Liang, 2013), for instance. For us, each task checks one skill that the system must have, and we postulate that performing well on all of them is a prerequisite for any system aiming at full text understanding and reasoning.

# 3 THE TASKS

Principles Our main idea is to provide a set of tasks, in a similar way to how software testing is built in computer science. Ideally each task is a
"leaf" test case, as independent from others as possible, and tests in the simplest way possible one aspect of intended behavior. Subsequent ("non-leaf") tests can build on these by testing combinations as well. The tasks are publicly available at http://fb.ai/babi. Source code to generate the tasks is available at https://github.com/facebook/bAbI-tasks.

Each task provides a set of training and test data, with the intention that a successful model performs well on test data. Following Weston et al. (2014), the supervision in the training set is given by the true answers to questions, and by the set of relevant statements for answering a given question, which may or may not be used by the learner. We set up the tasks so that correct answers are limited to a single word (Q:
Where is Mark? A: bathroom), or else a list of words (Q: What is Mark holding?), as evaluation is then clear-cut and is measured simply as right or wrong. All of the tasks are noiseless, and a human able to read that language can potentially achieve 100% accuracy. We tried to choose tasks that are natural to a human: they are based on simple, usual situations, and no background in areas such as formal semantics, machine learning, logic or knowledge representation is required for an adult to solve them.

The data itself is produced using a simple simulation of characters and objects moving around and interacting in locations, described in Section 4. The simulation allows us to generate data in many different scenarios where the true labels are known by grounding to the simulation. For each task, we describe it by giving a small sample of the dataset including statements, questions and the true labels (in red) in Tables 1 and 2.

Single Supporting Fact Task 1 consists of questions where a previously given single supporting fact, potentially amongst a set of other irrelevant facts, provides the answer.
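Because answers are restricted to a single word (or a short word list), scoring reduces to exact match. A minimal sketch of reading a story in this style and scoring predictions, assuming the released plain-text layout of numbered statements with tab-separated question / answer / supporting-fact lines (treat that layout, and the helper names below, as assumptions rather than the official format specification):

```python
def parse_story(lines):
    """Parse one bAbI-style story into statements and questions."""
    statements, questions = {}, []
    for line in lines:
        idx, _, rest = line.strip().partition(" ")
        if "\t" in rest:
            # A question line: question \t answer \t supporting-fact ids.
            question, answer, support = rest.split("\t")
            questions.append((question, answer, [int(s) for s in support.split()]))
        else:
            statements[int(idx)] = rest
    return statements, questions

def exact_match_accuracy(predictions, questions):
    """Single-word (or word-list) answers make scoring simply right or wrong."""
    correct = sum(pred == answer for pred, (_, answer, _) in zip(predictions, questions))
    return correct / len(questions)

story = [
    "1 Mary travelled to the office.",
    "2 John moved to the hallway.",
    "3 Where is Mary?\toffice\t1",
]
statements, questions = parse_story(story)
print(exact_match_accuracy(["office"], questions))  # 1.0
```

The supporting-fact ids returned by the parser correspond to the extra supervision described above, which a learner may or may not use.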
We first test one of the simplest cases of this, by asking for the location of a person, e.g. "Mary travelled to the office. Where is Mary?". This kind of task was already employed in Weston et al. (2014). It can be considered the simplest case of some real-world QA datasets such as in Fader et al. (2013).

Two or Three Supporting Facts A harder task is to answer questions where two supporting statements have to be chained to answer the question, as in task 2, where to answer the question
"Where is the football?", one has to combine information from two sentences, "John is in the playground" and "John picked up the football". Again, this kind of task was already used in Weston et al. (2014). Similarly, one can make a task with three supporting facts, given in task 3, whereby the first three statements are all required to answer the question "Where was the apple before the kitchen?".

Two or Three Argument Relations To answer questions, the ability to differentiate and recognize subjects and objects is crucial. In task 4 we consider the extreme case where sentences feature reordered words, i.e. a bag-of-words will not work. For example, the questions "What is north of the bedroom?" and "What is the bedroom north of?" have exactly the same words, but a different order, with different answers. A step further, sometimes one needs to differentiate three separate arguments. Task 5 involves statements like "Jeff was given the milk by Bill" and then queries who is the giver, who is the receiver, or which object is involved.

Yes/No Questions Task 6 tests, on some of the simplest questions possible (specifically, ones with a single supporting fact), the ability of a model to answer true/false type questions like "Is John in the playground?".

Counting and Lists/Sets Task 7 tests the ability of the QA system to perform simple counting operations, by asking about the number of objects with a certain property, e.g. "How many objects is Daniel holding?". Similarly, task 8 tests the ability to produce a set of single-word answers in the form of a list, e.g. "What is Daniel holding?".
These tasks can be seen as QA tasks related to basic database search operations.

Table 1: Sample statements and questions from tasks 1 to 10.

Task 1: Single Supporting Fact
Mary went to the bathroom.
John moved to the hallway.
Mary travelled to the office.
Where is Mary? A: office

Task 2: Two Supporting Facts
John is in the playground.
John picked up the football.
Bob went to the kitchen.
Where is the football? A: playground

Task 3: Three Supporting Facts
John picked up the apple.
John went to the office.
John went to the kitchen.
John dropped the apple.
Where was the apple before the kitchen? A: office

Task 4: Two Argument Relations
The office is north of the bedroom.
The bedroom is north of the bathroom.
The kitchen is west of the garden.
What is north of the bedroom?
A: office
What is the bedroom north of? A: bathroom

Task 5: Three Argument Relations
Mary gave the cake to Fred.
Fred gave the cake to Bill.
Jeff was given the milk by Bill.
Who gave the cake to Fred? A: Mary
Who did Fred give the cake to? A: Bill

Task 6: Yes/No Questions
John moved to the playground.
Daniel went to the bathroom.
John went back to the hallway.
Is John in the playground? A: no
Is Daniel in the bathroom? A: yes

Task 7: Counting
Daniel picked up the football.
Daniel dropped the football.
Daniel got the milk.
Daniel took the apple.
How many objects is Daniel holding? A: two

Task 8: Lists/Sets
Daniel picks up the football.
Daniel drops the newspaper.
Daniel picks up the milk.
John took the apple.
What is Daniel holding? A: milk, football

Task 9:
Simple Negation
Sandra travelled to the office.
Fred is no longer in the office.
Is Fred in the office? A: no
Is Sandra in the office? A: yes

Task 10: Indefinite Knowledge
John is either in the classroom or the playground.
Sandra is in the garden.
Is John in the classroom? A: maybe
Is John in the office? A: no

Simple Negation and Indefinite Knowledge Tasks 9 and 10 test slightly more complex natural language constructs. Task 9 tests one of the simplest forms of negation, that of supporting facts that imply a statement is false, e.g.
"Fred is no longer in the office" rather than "Fred travelled to the office". (In this case, task 6 (yes/no questions) is a prerequisite to the task.) Task 10 tests if we can model statements that describe possibilities rather than certainties, e.g. "John is either in the classroom or the playground.", where in that case the answer is "maybe" to the question "Is John in the classroom?".

Basic Coreference, Conjunctions and Compound Coreference Task 11 tests the simplest type of coreference, that of detecting the nearest referent, e.g.
"Daniel was in the kitchen. Then he went to the studio.". Real-world data typically addresses this as a labeling problem and studies more sophisticated phenomena (Soon et al., 2001), whereas we evaluate it, as in all our other tasks, as a question answering problem. Task 12 (conjunctions) tests referring to multiple subjects in a single statement, e.g. "Mary and Jeff went to the kitchen.". Task 13 tests coreference in the case where the pronoun can refer to multiple actors, e.g. "Daniel and Sandra journeyed to the office. Then they went to the garden".

Time Reasoning While our tasks so far have included time implicitly in the order of the statements, task 14 tests understanding the use of time expressions within the statements, e.g.
"In the afternoon Julie went to the park. Yesterday Julie was at school.", followed by questions about the order of events, such as "Where was Julie before the park?". Real-world datasets typically address the task of evaluating time expressions as a labeling rather than a QA task; see e.g. UzZaman et al. (2012).

Basic Deduction and Induction Task 15 tests basic deduction via inheritance of properties, e.g. "Sheep are afraid of wolves. Gertrude is a sheep. What is Gertrude afraid of?"
. Task 16 similarly

Table 2: Sample statements and questions from tasks 11 to 20.

Task 11: Basic Coreference
Daniel was in the kitchen.
Then he went to the studio.
Sandra was in the office.
Where is Daniel? A: studio

Task 12: Conjunction
Mary and Jeff went to the kitchen.
Then Jeff went to the park.
Where is Mary? A: kitchen
Where is Jeff? A: park

Task 13:
Compound Coreference
Daniel and Sandra journeyed to the office.
Then they went to the garden.
Sandra and John travelled to the kitchen.
After that they moved to the hallway.
Where is Daniel? A: garden

Task 14: Time Reasoning
In the afternoon Julie went to the park.
Yesterday Julie was at school.
Julie went to the cinema this evening.
Where did Julie go after the park? A: cinema
Where was Julie before the park? A: school

Task 15:
Basic Deduction
Sheep are afraid of wolves.
Cats are afraid of dogs.
Mice are afraid of cats.
Gertrude is a sheep.
What is Gertrude afraid of? A: wolves

Task 16: Basic Induction
Lily is a swan.
Lily is white.
Bernhard is green.
Greg is a swan.
What color is Greg? A: white

Task 17: Positional Reasoning
The triangle is to the right of the blue square.
The red square is on top of the blue square.
The red sphere is to the right of the blue square.
Is the red sphere to the right of the blue square? A: yes
Is the red square to the left of the triangle? A: yes

Task 18: Size Reasoning
The football fits in the suitcase.
The suitcase fits in the cupboard.
The box is smaller than the football.
Will the box fit in the suitcase? A: yes
Will the cupboard fit in the box? A: no

tests basic induction via inheritance of properties. A full analysis of induction and deduction is clearly beyond the scope of this work, and future tasks should analyse further, deeper aspects.

Positional and Size Reasoning Task 17 tests spatial reasoning, one of many components of the classical SHRDLU system (Winograd, 1972), by asking questions about the relative positions of colored blocks. Task 18 requires reasoning about the relative size of objects and is inspired by the commonsense reasoning examples in the Winograd schema challenge (Levesque et al., 2011).

Path Finding The goal of task 19 is to find the path between locations: given the description of various locations, it asks how to get from one to another. This is related to the work of Chen & Mooney (2011) and effectively involves a search problem.
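Cast as search, task 19 can be solved directly once the location layout is recovered from the statements. A hedged breadth-first-search sketch (the edge-list encoding and function names are illustrative assumptions, not code from this work):

```python
from collections import deque

def find_path(edges, start, goal):
    """Breadth-first search returning the shortest sequence of directions."""
    graph = {}
    for src, direction, dst in edges:
        graph.setdefault(src, []).append((direction, dst))
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        place, path = frontier.popleft()
        if place == goal:
            return path
        for direction, nxt in graph.get(place, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [direction]))
    return None  # goal unreachable from start

# Layout in the style of the task statements, e.g. "The hallway is south of the kitchen."
layout = [
    ("kitchen", "south", "hallway"),
    ("hallway", "east", "garden"),
    ("kitchen", "east", "office"),
]
print(find_path(layout, "kitchen", "garden"))  # ['south', 'east']
```

Breadth-first search guarantees the shortest answer, which matters since the task expects one specific direction sequence.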
Agent's Motivations Finally, task 20 questions, in the simplest way possible, why an agent performs an action. It addresses the case of actors being in a given state (hungry, thirsty, tired, ...) and the actions they then take; e.g. it should learn that hungry people might go to the kitchen, and so on.

As already stated, these tasks are meant to foster the development and understanding of machine learning algorithms. A single model should be evaluated across all the tasks (not tuned per task) and then the same model should be tested on additional real-world tasks. In our data release, in addition to providing the above 20 tasks in English, we also provide them (i) in Hindi; and (ii) with shuffled English words so they are no longer readable by humans. A good learning algorithm should perform similarly on all three, which would likely not be the case for a method using external resources; this setting is intended to mimic a learner being first presented with a language and having to learn it from scratch.
# 4 SIMULATION

All our tasks are generated with a simulation which behaves like a classic text adventure game. The idea is that generating text within this simulation allows us to ground the language used into a coherent and controlled (artificial) world. Our simulation follows those of Bordes et al. (2010) and Weston et al. (2014) but is somewhat more complex.

The simulated world is composed of entities of various types (locations, objects, persons, etc.) and of various actions that operate on these entities. Entities have internal states: their location, whether they carry objects on top of or inside them (e.g., tables and boxes), the mental state of actors (e.g. hungry), as well as properties such as size, color, and edibility. For locations, the nearby places that are connected (e.g. what lies to the east, or above) are encoded. For actors, a set of pre-specified rules per actor can also be specified to control their behavior, e.g. if they are hungry they may try to find food.
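The entity-and-state design above can be sketched as a tiny world model; this is a minimal illustration under our own naming assumptions (not the authors' simulator), including one example of the kind of coherence constraint the simulation enforces, namely that an object cannot be picked up while someone already holds it:

```python
class World:
    """Entities with internal state, plus actions that mutate that state."""

    def __init__(self, actors, objects, locations):
        self.location = {a: None for a in actors}   # actor -> current location
        self.holder = {o: None for o in objects}    # object -> holding actor
        self.locations = set(locations)
        self.log = []                               # grounded text of each event

    def go(self, actor, place):
        assert place in self.locations, "unknown location"
        self.location[actor] = place
        self.log.append(f"{actor} went to the {place}.")

    def get(self, actor, obj):
        # Coherence constraint: nobody may already hold the object.
        assert self.holder[obj] is None, f"{obj} is already held"
        self.holder[obj] = actor
        self.log.append(f"{actor} picked up the {obj}.")

    def where(self, obj):
        # Ground-truth answer: an object is wherever its holder is.
        holder = self.holder[obj]
        return self.location[holder] if holder else None

w = World(["joe", "bob"], ["football"], ["playground", "office"])
w.go("joe", "playground")
w.go("bob", "office")
w.get("joe", "football")
print(w.where("football"))  # playground
```

Because the world state is fully known, true answers to questions like "where is the football?" fall out of the simulation for free, which is exactly what makes label generation easy.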
Random valid actions can also be executed if no rule is set, e.g. walking around randomly. The actions an actor can execute in the simulation consist of the following: go <location>, get <object>, get <object1> from <object2>, put <object1> in/on <object2>, give <object> to <actor>, drop <object>, set <entity> <state>, look, inventory and examine <object>. A set of universal constraints is imposed on those actions to enforce coherence in the simulation. For example, an actor cannot get something that they or someone else already has, cannot go to a place that is not connected to the current location, cannot drop something they do not already have, and so on. The underlying actions, the rules for actors, and the constraints together define how actors act. For each task we limit the actions needed for that task, e.g. task 1 only needs go whereas task 2 uses go, get and drop.

If we write the commands down, this gives us a very simple "story" which is executable by the simulation, e.g., joe go playground; bob go office; joe get football. This example corresponds to task 2. The system can then ask questions about the state of the simulation, e.g., where john?, where football?, and so on. It is easy to calculate the true answers for these questions as we have access to the underlying world.

To produce more natural-looking text with lexical variety from statements and questions we employ a simple automated grammar. Each verb is assigned a set of synonyms, e.g., the simulation command get is replaced with either picked up, got, grabbed or took, and drop is replaced with either dropped, left, discarded or put down. Similarly, each object and actor can have a set of replacement synonyms as well, e.g. replacing Daniel with he in task 11. Adverbs are crucial for some tasks such as the time reasoning task 14. There are a great many aspects of language not yet modeled. For example, all sentences are so far relatively short and contain little nesting. Further, the set of entities and the vocabulary are small (150 words, and typically 4 actors, 6 locations and 3 objects used per task).
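The synonym-substitution step can be sketched as follows. This is a toy illustration, not the released generator: the get/drop synonym lists echo the examples in the text, while the go synonyms and the surface template are assumptions of the sketch.

```python
import random

# Verb synonym lists; object/actor substitutions would work the same way.
VERB_SYNONYMS = {
    "get":  ["picked up", "got", "grabbed", "took"],
    "drop": ["dropped", "left", "discarded", "put down"],
    "go":   ["went to", "moved to", "travelled to"],   # assumed for the sketch
}

def render(command, rng):
    """Turn a simulation command like 'joe get football' into surface text."""
    actor, verb, *args = command.split()
    surface_verb = rng.choice(VERB_SYNONYMS.get(verb, [verb]))
    return f"{actor.capitalize()} {surface_verb} the {' '.join(args)}."

rng = random.Random(0)   # seeded only for reproducibility
story = [render(c, rng) for c in
         ["joe go playground", "bob go office", "joe get football"]]
```

Each run over a fresh seed yields a different but equally valid surface rendering of the same underlying, machine-checkable story.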
The hope is that defining a set of well-defined tasks will help evaluate models in a controlled way within the simulated environment, which is hard to do with real data. That is, these tasks are not a substitute for real data, but should complement them, especially when developing and analysing algorithms.

# 5 EXPERIMENTS

We compared the following methods on our tasks (on the English dataset): (i) an N-gram classifier baseline, (ii) LSTMs (long short-term memory Recurrent Neural Networks) (Hochreiter & Schmidhuber, 1997), (iii) Memory Networks (MemNNs) (Weston et al., 2014), (iv) some extensions of Memory Networks we will detail; and (v) a structured SVM that incorporates external labeled data from existing NLP tasks. These models belong to three separate tracks. Weakly supervised models are only given question answer pairs at training time, whereas strong supervision provides the set of supporting facts at training time (but not testing time) as well. Strongly supervised models give accuracy upper bounds for weakly supervised ones, i.e. the performance should be superior given the same model class. Methods in the last track, external resources, can use labeled data from other sources rather than just the training set provided, e.g. coreference and semantic role labeling tasks, as well as strong supervision. For each task we use 1000 questions for training, and 1000 for testing, and report the test accuracy. We consider a task successfully passed if ≥ 95% accuracy is obtained.³

³ The choice of 95% (and 1000 training examples) is arbitrary.

Table 3: Test accuracy (%) on our 20 tasks for various methods (1000 training examples each). Our proposed extensions to MemNNs use adaptive memory (AM), N-grams (NG), a nonlinear matching function (NL), and combinations thereof. Bold numbers indicate tasks where our extensions achieve ≥ 95% accuracy but the original MemNN model of Weston et al. (2014) did not. The last two columns give extra analysis of the MemNN AM + NG + NL method. The second-to-last gives the amount of training data for each task needed to
obtain ≥ 95% accuracy, or FAIL if this is not achievable with 1000 training examples. The final column gives the accuracy when training on all data at once, rather than separately. Column groups: the N-gram and LSTM baselines are weakly supervised; all MemNN variants use strong supervision (supporting facts at training time); the structured SVM (with coreference and SRL features) additionally uses external resources.

| TASK | N-gram | LSTM | MemNN (Weston et al., 2014) | MemNN AM | MemNN AM+NG | MemNN AM+NL | MemNN AM+NG+NL | Structured SVM (+coref +SRL) | No. of ex. for ≥95% | Multitask training |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 - Single Supporting Fact | 36 | 50 | 100 | 100 | 100 | 100 | 100 | 99 | 250 | 100 |
| 2 - Two Supporting Facts | 2 | 20 | 100 | 100 | 100 | 100 | 100 | 74 | 500 | 100 |
| 3 - Three Supporting Facts | 7 | 20 | 20 | **100** | **99** | **100** | **100** | 17 | 500 | 98 |
| 4 - Two Arg. Relations | 50 | 61 | 71 | 69 | **100** | 73 | **100** | 98 | 500 | 80 |
| 5 - Three Arg. Relations | 20 | 70 | 83 | 83 | 86 | 86 | **98** | 83 | 1000 | 99 |
| 6 - Yes/No Questions | 49 | 48 | 47 | 52 | 53 | **100** | **100** | 99 | 500 | 100 |
| 7 - Counting | 52 | 49 | 68 | 78 | 86 | 83 | 85 | 69 | FAIL | 86 |
| 8 - Lists/Sets | 40 | 45 | 77 | 90 | 88 | 94 | 91 | 70 | FAIL | 93 |
| 9 - Simple Negation | 62 | 64 | 65 | 71 | 63 | **100** | **100** | 100 | 500 | 100 |
| 10 - Indefinite Knowledge | 45 | 44 | 59 | 57 | 54 | **97** | **98** | 99 | 1000 | 98 |
| 11 - Basic Coreference | 29 | 72 | 100 | 100 | 100 | 100 | 100 | 100 | 250 | 100 |
| 12 - Conjunction | 9 | 74 | 100 | 100 | 100 | 100 | 100 | 96 | 250 | 100 |
| 13 - Compound Coref. | 26 | 94 | 100 | 100 | 100 | 100 | 100 | 99 | 250 | 100 |
| 14 - Time Reasoning | 19 | 27 | 99 | 100 | 99 | 100 | 99 | 99 | 500 | 99 |
| 15 - Basic Deduction | 20 | 21 | 74 | 73 | **100** | 77 | **100** | 96 | 100 | 100 |
| 16 - Basic Induction | 43 | 23 | 27 | **100** | **100** | **100** | **100** | 24 | 100 | 94 |
| 17 - Positional Reasoning | 46 | 51 | 54 | 46 | 49 | 57 | 65 | 61 | FAIL | 72 |
| 18 - Size Reasoning | 52 | 52 | 57 | 50 | 74 | 54 | **95** | 62 | 1000 | 93 |
| 19 - Path Finding | 0 | 8 | 0 | 9 | 3 | 15 | 36 | 49 | FAIL | 19 |
| 20 - Agent's Motivations | 76 | 91 | 100 | 100 | 100 | 100 | 100 | 95 | 250 | 100 |
| Mean performance | 34 | 49 | 75 | 79 | 83 | 87 | 93 | 79 | - | 92 |
| Failed tasks (acc. < 95%) | 20 | 20 | 13 | 11 | 9 | 8 | 4 | 9 | - | 7 |

Methods The N-gram classifier baseline is inspired by the baselines in Richardson et al. (2013), but applied to the case of producing a 1-word answer rather than answering a multiple-choice question: we construct a bag-of-N-grams for all sentences in the story that share at least one word with the question, and then learn a linear classifier to predict the answer using those features.⁴
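The feature-extraction half of that baseline can be sketched as follows (a simplified illustration, not the paper's implementation; the linear classifier itself, e.g. a perceptron or logistic regression, would then be trained on these feature dictionaries):

```python
def ngram_features(story_sentences, question, n_max=2):
    """Bag-of-N-grams over story sentences sharing >= 1 word with the question."""
    q_words = set(question.lower().split())
    feats = {}
    for sent in story_sentences:
        words = sent.lower().split()
        if not q_words & set(words):       # keep only sentences overlapping the question
            continue
        for n in range(1, n_max + 1):
            for i in range(len(words) - n + 1):
                gram = " ".join(words[i:i + n])
                feats[gram] = feats.get(gram, 0) + 1
    return feats

story = ["john went to the kitchen", "mary picked up the football"]
f = ngram_features(story, "where is john")
```

The filtering step matters: footnote 4 notes that constructing N-grams from all sentences, rather than only those sharing a word with the question, gave worse results.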
LSTMs are a popular method for sequence prediction (Sutskever et al., 2014) and outperform standard RNNs (Recurrent Neural Networks) for tasks similar to ours (Weston et al., 2014). They work by reading the story until the point they reach a question and then have to output an answer. Note that they are weakly supervised by answers only, and are hence at a disadvantage compared to strongly supervised methods or methods that use external resources.

MemNNs (Weston et al., 2014) are a recently proposed class of models that have been shown to perform well at QA. They work by having a "controller" neural network perform inference over the stored memories, which consist of the previous statements in the story. The originally proposed model performs 2 hops of inference: finding the first supporting fact with the maximum match score with the question, and then the second supporting fact with the maximum match score with both the question and the first fact that was found. The matching function consists of mapping the bag-of-words for the question and facts into an embedding space by summing word embeddings. The word embeddings are learnt using strong supervision to optimize the QA task.
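The two-hop retrieval just described can be sketched as follows. This toy version is only an illustration: in place of learned embeddings it scores a match by raw word overlap, i.e. the sum-of-word-embeddings matching function of the real model is replaced by a hand-crafted stand-in.

```python
def score(query_words, fact):
    """Toy stand-in for the learned matching function: word overlap."""
    return len(set(query_words) & set(fact.split()))

def two_hop(question, memories):
    """Hop 1 scores facts against the question; hop 2 against question + first fact."""
    o1 = max(range(len(memories)),
             key=lambda i: score(question.split(), memories[i]))
    rest = [i for i in range(len(memories)) if i != o1]
    o2 = max(rest,
             key=lambda i: score(question.split() + memories[o1].split(), memories[i]))
    return o1, o2

facts = ["john grabbed the football",
         "mary went to the office",
         "john travelled to the garden"]
hops = two_hop("where is the football ?", facts)
```

Here the first hop selects the fact mentioning the football, and the second hop, conditioned on that fact, picks out where its carrier john went.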
After finding supporting facts, a final ranking is performed to rank possible responses (answer words) given those facts. We also consider some extensions to this model:

• Adaptive memories: performing a variable number of hops rather than 2, the model is trained to predict a hop or the special "STOP" class. A similar procedure can be applied to output multiple tokens as well.

⁴ Constructing N-grams from all sentences rather than using the filtered set gave worse results.
• N-grams: we tried using a bag of 3-grams rather than a bag-of-words to represent the text. In both cases the first step of the MemNN is to convert these into vectorial embeddings.

• Nonlinearity: we apply a classical 2-layer neural network with a tanh nonlinearity in the matching function.

More details of these variants are given in Sec. A of the appendix.

Finally, we built a classical cascade NLP system baseline using a structured support vector machine (SVM), which incorporates coreference resolution and semantic role labeling (SRL) preprocessing steps, which are themselves trained on large amounts of costly labeled data. The Stanford coreference system (Raghunathan et al., 2010) and the SENNA semantic role labeling (SRL) system (Collobert et al., 2011) are used to build features for the input to the SVM, trained with strong supervision to find the supporting facts, e.g. features based on words, word pairs, and the SRL verb and verb-argument pairs. After finding the supporting facts, we build a similar structured SVM for the response stage, with features tuned for that goal as well. More details are in Sec. B of the appendix.

Learning rates and other hyperparameters for all methods are chosen using the training set. The summary of our experimental results on the tasks is given in Table 3. We give results for each of the 20 tasks separately, as well as mean performance and the number of failed tasks in the final two rows.

Results Standard MemNNs generally outperform the N-gram and LSTM baselines, which is consistent with the results in Weston et al. (2014).
However they still "fail" at a number of tasks; that is, test accuracy is less than 95%. Some of these failures are expected due to insufficient modeling power, as described in more detail in Sec. A.1, e.g. k = 2 facts, single-word answers and bag-of-words representations do not succeed on tasks 3, 4, 5, 7, 8 and 18. However, there were also failures on tasks we did not at first expect, for example yes/no questions (6) and indefinite knowledge (10). With hindsight, we realize that the linear scoring function of standard MemNNs cannot model the match between query, supporting fact and a yes/no answer, as this requires three-way interactions.

Table 3 gives the results for our MemNN extensions: adaptive memories (AM), N-grams (NG) and nonlinearities (NL), plus combinations thereof. The adaptive approach gives a straightforward improvement in tasks 3 and 16 because they both require more than two supporting facts, and also gives (small) improvements in 8 and 19 because they require multi-word outputs (but these still remain difficult). We hence use the AM model in combination with all our other extensions in the subsequent experiments.

MemNNs with N-gram modeling yield clear improvements when word order matters, e.g. tasks 4 and 15. However, N-grams do not seem to be a substitute for nonlinearities in the embedding function, as the NL model outperforms N-grams on average, especially in the yes/no (6) and indefinite knowledge (10) tasks, as explained before. On the other hand, the NL method cannot model word order and so fails, e.g., on task 4. The obvious step is thus to combine these complementary approaches: indeed AM+NG+NL gives improved results over both, with a total of 9 tasks that have been upgraded from failure to success compared to the original MemNN model.

The structured SVM, despite having access to external resources, does not perform better, still failing at 9 tasks. It does perform better than vanilla MemNNs (without extensions) on tasks 6, 9 and 10, where the hand-built feature conjunctions capture the necessary nonlinearities. However, compared to MemNN (AM+NG+NL) it seems to do significantly worse on tasks requiring three (and sometimes two) supporting facts (e.g. tasks 3, 16 and 2), presumably as ranking over so many possibilities introduces more mistakes. However, its non-greedy search does seem to help on other tasks, such as path finding (task 19), where search is very important. Since it relies on external resources specifically designed for English, it is unclear whether it would perform as well on other languages, like Hindi, where such external resources might be of worse quality.

The final two columns of Table 3 give further analysis of the AM+NG+NL MemNN method. The second-to-last column shows the minimum number of training examples required to achieve ≥ 95% accuracy, or FAIL if this is not achieved with 1000 examples. This matters because it is desirable not only to perform well on a task, but also to do so using the fewest possible examples (to generalize well, and quickly). Most succeeding tasks require 100-500 examples.
Task 8 requires 5000 examples and task 7 requires 10000, hence they are labeled as FAIL. The latter task can presumably be solved by adding up all the times an object is picked up and subtracting the times it is dropped, which seems possible for a MemNN, but it does not do this perfectly. Two tasks, positional reasoning (17) and path finding (19), cannot be solved even with 10000 examples; it seems those (and indeed more advanced forms of induction and deduction, which we plan to build) require a general search algorithm to be built into the inference procedure, which MemNN (and the other approaches tried) are lacking.

The last column shows the performance of AM+NG+NL MemNNs when training on all the tasks jointly, rather than just on a single one. The performance is generally encouragingly similar, showing such a model can learn many aspects of text understanding and reasoning simultaneously. The main issues are that these models still fail on several of the tasks, and use a far stronger form of supervision (using supporting facts) than is typically realistic.

# 6 DISCUSSION

A prerequisite set We developed a set of tasks that we believe are a prerequisite to full language understanding and reasoning. While any learner that can solve these tasks is not necessarily close to full reasoning, if a learner fails on any of our tasks then there are likely real-world tasks that it will fail on too (i.e., real-world tasks that require the same kind of reasoning). Even if the situations and the language of the tasks are artificial, we believe that the mechanisms required to learn how to solve them are part of the key towards text understanding and reasoning.

A flexible framework This set of tasks is not a definitive set. The purpose of a simulation-based approach is to provide flexibility and control over the tasks' construction. We grounded the tasks in language because it is then easier to understand the usefulness of the tasks and to interpret their results. However, our primary goal is to find models able to learn to detect and combine patterns in symbolic sequences. One might even want to decrease the intrinsic difficulty by removing any lexical variability and ambiguity and reason only over bare symbols, stripped of their linguistic meaning.
One could also decorrelate the long-term memory from the reasoning capabilities of systems by, for instance, arranging the supporting facts closer to the questions. Taking the opposite view, one could instead want to transform the tasks into more realistic stories using annotators or more complex grammars. The set of 20 tasks presented here is a subset of what can be achieved with a simulation. We chose them because they cover a variety of skills that we would like a text reasoning model to have, but we hope researchers from the community will develop more tasks of varying complexity in order to develop and analyze models that try to solve them. Transfer learning across tasks is also a very important goal, beyond the scope of this paper. We have thus made the simulator and code for the tasks publicly available for those purposes.

Testing learning methods Our tasks are designed as a test-bed for learning methods: we provide training and test sets because we intend to evaluate the capability of models to discover how to reason from patterns hidden within them. It could be tempting to hand-code solutions for them or to use existing large-scale QA systems like Cyc (Curtis et al., 2005). They might succeed at solving them, even if our structured SVM results (a cascaded NLP system with hand-built features) show that this is not straightforward; however, this is not the tasks' purpose, since those approaches would not be learning to solve them. Our experiments show that some existing machine learning methods are successful on some of the tasks, in particular Memory Networks, for which we introduced some useful extensions (in Sec. A). However, those models still fail on several of the tasks, and use a far stronger form of supervision (using supporting facts) than is typically realistic. These datasets are not yet solved.

Future research should aim to minimize the amount of required supervision, as well as the number of training examples needed to solve a new task, to move closer to the task-transfer capabilities of humans. That is, in the weakly supervised case with only 1000 training examples or fewer, there is no known general (i.e. non-hand-engineered) method that solves the tasks. Further, and importantly, our hope is that a feedback loop of developing more challenging tasks, and then algorithms that can solve them, leads us to fruitful research directions. Note that these tasks are not a substitute for real data, but should complement them, especially when developing and analysing algorithms.
There are many complementary real-world datasets, see for example Hermann et al. (2015); Bordes et al. (2015); Hill et al. (2015). That is, even if a method works well on our 20 tasks, it should be shown to be useful on real data as well.

Impact Since being put online, the bAbI tasks have already directly influenced the development of several promising new algorithms, including the weakly supervised end-to-end Memory Networks (MemN2N) of Sukhbaatar et al. (2015), the Dynamic Memory Networks of Kumar et al. (2015), and the Neural Reasoner (Peng et al., 2015). MemN2N has since been shown to perform well on some real-world tasks (Hill et al., 2015).

# REFERENCES

Bache, K. and Lichman, M. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.

Berant, Jonathan, Chou, Andrew, Frostig, Roy, and Liang, Percy. Semantic parsing on Freebase from question-answer pairs. In EMNLP, pp. 1533–1544, 2013.

Berant, Jonathan, Srikumar, Vivek, Chen, Pei-Chun, Huang, Brad, Manning, Christopher D., Vander Linden, Abby, Harding, Brittany, and Clark, Peter. Modeling biological processes for reading comprehension. In Proc. EMNLP, 2014.

Bordes, Antoine, Usunier, Nicolas, Collobert, Ronan, and Weston, Jason. Towards understanding situated natural language. In AISTATS, 2010.

Bordes, Antoine, Usunier, Nicolas, Chopra, Sumit, and Weston, Jason. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Chen, David L. and Mooney, Raymond J. Learning to interpret natural language navigation instructions from observations. San Francisco, CA, pp. 859–865, 2011.

Collobert, Ronan, Weston, Jason, Bottou, Léon, Karlen, Michael, Kavukcuoglu, Koray, and Kuksa, Pavel. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537, 2011.

Curtis, Jon, Matthews, Gavin, and Baxter, David. On the effective use of Cyc in a question answering system. In IJCAI Workshop on Knowledge and Reasoning for Answering Questions, pp. 61–70, 2005.

Fader, Anthony, Zettlemoyer, Luke, and Etzioni, Oren. Paraphrase-driven learning for open question answering. In ACL, pp. 1608–1618, 2013.

Fader, Anthony, Zettlemoyer, Luke, and Etzioni, Oren. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1156–1165. ACM, 2014.

Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.

Halevy, Alon, Norvig, Peter, and Pereira, Fernando. The unreasonable effectiveness of data. Intelligent Systems, IEEE, 24(2):8–12, 2009.

Hermann, Karl Moritz, Kočiský, Tomáš, Grefenstette, Edward, Espeholt, Lasse, Kay, Will, Suleyman, Mustafa, and Blunsom, Phil. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS), 2015. URL http://arxiv.org/abs/1506.03340.

Hill, Felix, Bordes, Antoine, Chopra, Sumit, and Weston, Jason. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.

Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Kumar, Ankit, Irsoy, Ozan, Su, Jonathan, Bradbury, James, English, Robert, Pierce, Brian, Ondruska, Peter, Gulrajani, Ishaan, and Socher, Richard. Ask me anything: Dynamic memory networks for natural language processing. http://arxiv.org/abs/1506.07285, 2015.

Levesque, Hector J., Davis, Ernest, and Morgenstern, Leora. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, 2011.

Liang, Percy. Lambda dependency-based compositional semantics. arXiv preprint arXiv:1309.4408, 2013.

Liang, Percy, Jordan, Michael I., and Klein, Dan. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389–446, 2013.

Minsky, Marvin and Papert, Seymour. Perceptrons: an introduction to computational geometry. The MIT Press, Cambridge, expanded edition, 19:88, 1969.

Montfort, Nick. Twisty Little Passages: an approach to interactive fiction. MIT Press, 2005.

Müller, K.-R., Smola, Alex J., Rätsch, Gunnar, Schölkopf, Bernhard, Kohlmorgen, Jens, and Vapnik, Vladimir. Predicting time series with support vector machines. In Artificial Neural Networks, ICANN'97, pp. 999–1004. Springer, 1997.

Ng, Andrew Y., Jordan, Michael I., Weiss, Yair, et al. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.

Peng, Baolin, Lu, Zhengdong, Li, Hang, and Wong, Kam-Fai. Towards neural network-based reasoning. arXiv preprint arXiv:1508.05508, 2015.

Raghunathan, Karthik, Lee, Heeyoung, Rangarajan, Sudarshan, Chambers, Nathanael, Surdeanu, Mihai, Jurafsky, Dan, and Manning, Christopher. A multi-pass sieve for coreference resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 492–501. Association for Computational Linguistics, 2010.

Richardson, Matthew, Burges, Christopher J.C., and Renshaw, Erin. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, pp. 193–203, 2013.

Rumelhart, David E., Hinton, Geoffrey E., and Williams, Ronald J. Learning internal representations by error propagation. Technical report, DTIC Document, 1985.

Soon, Wee Meng, Ng, Hwee Tou, and Lim, Daniel Chung Yong. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544, 2001.

Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. End-to-end memory networks. Proceedings of NIPS, 2015.

Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.

UzZaman, Naushad, Llorens, Hector, Allen, James, Derczynski, Leon, Verhagen, Marc, and Pustejovsky, James. TempEval-3: Evaluating events, time expressions, and temporal relations. arXiv preprint arXiv:1206.5333, 2012.

Weston, Jason, Chopra, Sumit, and Bordes, Antoine. Memory networks. CoRR, abs/1410.3916, 2014.

Winograd, Terry. Understanding natural language. Cognitive Psychology, 3(1):1–191, 1972.

Yao, Xuchen, Berant, Jonathan, and Van Durme, Benjamin. Freebase QA: Information extraction or semantic parsing? ACL 2014, pp. 82, 2014.

Yu, Mo, Gormley, Matthew R., and Dredze, Mark. Factor-based compositional embedding models. NIPS 2014 Workshop on Learning Semantics, 2014.

Zhu, Xiaojin, Ghahramani, Zoubin, Lafferty, John, et al. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, volume 3, pp. 912–919, 2003.
# A EXTENSIONS TO MEMORY NETWORKS

Memory Networks (Weston et al., 2014) are a promising class of models, shown to perform well at QA, that we can apply to our tasks. They consist of a memory m (an array of objects indexed by mi) and four potentially learnable components I, G, O and R that are executed given an input:

I: (input feature map) convert input sentence x to an internal feature representation I(x).

G: (generalization) update the current memory state m given the new input: mi = G(mi, I(x), m), ∀i.

O: (output feature map) compute output o given the new input and the memory: o = O(I(x), m).

R: (response) finally, decode output features o to give the final textual response to the user: r = R(o).

Potentially, component I can make use of standard pre-processing, e.g., parsing and entity resolution, but the simplest form is to do no processing at all. The simplest form of G is to store the new incoming example in an empty memory slot and leave the rest of the memory untouched. Thus, in Weston et al. (2014) the actual implementation used is exactly this simple form, where the bulk of the work is in the O and R components. The former is responsible for reading from memory and performing inference, e.g., calculating which memories are relevant to answer a question, and the latter for producing the actual wording of the answer given O.

The O module produces output features by finding k supporting memories given x. They use k = 2. For k = 1 the highest scoring supporting memory is retrieved with:

o1 = O1(x, m) = argmax_{i=1,...,N} sO(x, mi)    (1)

where sO is a function that scores the match between the pair of sentences x and mi. For the case k = 2 they then find a second supporting memory given the first found in the previous iteration:

o2 = O2(q, m) = argmax_{i=1,...,N} sO([x, mo1], mi)    (2)

where the candidate supporting memory mi is now scored with respect to both the original input and the first supporting memory, and square brackets denote a list.
The final output o is [x, mo1, mo2], which is input to the module R. Finally, R needs to produce a textual response r. While the authors also consider Recurrent Neural Networks (RNNs), their standard setup limits responses to a single word (out of all the words seen by the model) by ranking them:

r = R(q, w) = argmax_{w ∈ W} sR([x, mo1, mo2], w)    (3)

where W is the set of all words in the dictionary, and sR is a function that scores the match. The scoring functions sO and sR have the same form, that of an embedding model:

s(x, y) = Φx(x)ᵀ Uᵀ U Φy(y)    (4)

where U is an n × D matrix, D is the number of features and n is the embedding dimension. The role of Φx and Φy is to map the original text to the D-dimensional feature space. They choose a bag-of-words representation, and D = 3|W| for sO, i.e., every word in the dictionary has three different representations: one for Φy(·) and two for Φx(·), depending on whether the words of the input arguments are from the actual input x or from the supporting memories, so that they can be modeled differently.

They consider various extensions of their model, in particular modeling write time and modeling unseen words. Here we only discuss the former, which we also use. In order for the model to work on QA tasks over stories it needs to know in which order the sentences were uttered, which is not available in the model directly. They thus add extra write-time features to sO which take on the value 0 or 1 indicating which sentence is older than another being compared, and compare triples of pairs of sentences and the question itself. Training is carried out by stochastic gradient descent using supervision from both the question answer pairs and the supporting memories (to select o1 and o2). See Weston et al. (2014) for more details.
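The embedding score of Eq. (4) can be sketched in a few lines. This toy version uses plain Python lists and a tiny fixed matrix U; in the real model U is learned and Φ uses the three-way word representations described above, so the dimensions and values below are illustrative only.

```python
def phi(text, vocab):
    """Bag-of-words feature vector over a fixed vocabulary."""
    words = text.split()
    return [float(words.count(w)) for w in vocab]

def matvec(U, v):
    return [sum(u_ij * v_j for u_ij, v_j in zip(row, v)) for row in U]

def score(x, y, U, vocab):
    """s(x, y) = Phi(x)^T U^T U Phi(y): a dot product in embedding space."""
    ex, ey = matvec(U, phi(x, vocab)), matvec(U, phi(y, vocab))
    return sum(a * b for a, b in zip(ex, ey))

vocab = ["john", "garden", "kitchen", "went"]
U = [[1.0, 1.0, 0.0, 0.0],   # n = 2 embedding dims, D = 4 features
     [0.0, 0.0, 1.0, 1.0]]
s = score("john went garden", "john garden", U, vocab)
```

Writing s as (UΦx)·(UΦy) makes clear why this function is linear in each argument, which is exactly the limitation (no three-way interactions) discussed in the experiments.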
# Under review as a conference paper at ICLR 2016

A.1 SHORTCOMINGS OF THE EXISTING MEMNNS

The Memory Networks models defined in (Weston et al., 2014) are one possible technique to try on our tasks; however, there are several tasks which they are likely to fail on:

• They model sentences with a bag of words, so are likely to fail on tasks such as the 2-argument (task 4) and 3-argument (task 5) relation problems.
• They perform only two max operations (k = 2), so they cannot handle questions involving more than two supporting facts, such as tasks 3 and 7.
• Unless an RNN is employed in the R module, they are unable to provide multiple answers in the standard setting using eq. (3). This is required for the list (8) and path finding (19) tasks.
We therefore propose improvements to their model in the following section.

A.2 IMPROVING MEMORY NETWORKS

A.2.1 ADAPTIVE MEMORIES (AND RESPONSES)

We consider a variable number of supporting facts that is automatically adapted dependent on the question being asked. To do this we consider scoring a special fact m∅. Computation of supporting memories then becomes:

  i = 1
  oi = O(x, m)
  while oi ≠ m∅ do
    i ← i + 1
    oi = O([x, mo1, . . . , moi−1], m)
  end while

That is, we keep predicting supporting facts i, conditioning at each step on the previously found facts, until m∅ is predicted, at which point we stop. m∅
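The adaptive loop above can be sketched as follows. The scorer O and the embedding of the stop fact m∅ are illustrative stand-ins (random vectors, dot-product scoring), not the trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M_STOP = rng.normal(size=n)          # stand-in for the learned embedding of m_emptyset

def O(query_vecs, memories):
    # Hypothetical one-hop scorer: summed query dotted with each memory,
    # with the stop fact appended as one extra candidate.
    q = np.sum(query_vecs, axis=0)
    cands = memories + [M_STOP]
    return int(np.argmax([q @ c for c in cands]))

def adaptive_hops(x, memories, max_hops=10):
    """Keep selecting supporting facts until the stop fact wins.
    max_hops is the hard cap on loops mentioned in the text."""
    stop_idx = len(memories)
    query, picked = [x], []
    for _ in range(max_hops):
        i = O(query, memories)
        if i == stop_idx:            # m_emptyset predicted: stop
            break
        picked.append(i)
        query.append(memories[i])    # condition on the found fact
    return picked

memories = [rng.normal(size=n) for _ in range(6)]
facts = adaptive_hops(rng.normal(size=n), memories)
```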
has its own unique embedding vector, which is also learned. In practice we still impose a hard maximum number of loops in our experiments to avoid fail cases where the computation never stops (in our experiments we use a limit of 10).

Multiple Answers We use a similar trick for the response module as well, in order to output multiple words. That is, we add a special word w∅ to the dictionary and predict word wi on each iteration i conditional on the previous words, i.e., wi = R([x, mo1, . . . , mo|o|, w1, . . . , wi−1], w), until we predict w∅.

A.2.2 NONLINEAR SENTENCE MODELING

There are several ways of modeling sentences that go beyond a bag-of-words, and we explore three variants here. The simplest is a bag-of-N-grams; we consider N = 1, 2 and 3 in the bag. The main disadvantage of such a method is that the dictionary grows rapidly with N. We therefore consider an alternative neural network approach, which we call a multilinear map. Each word in a sentence is binned into one of Psz positions with p(i, l) = ⌈(i Psz)/l⌉, where i is the position of the word in a sentence of length l, and for each position we employ an n × n matrix Pp(i,l). We then model the matching score with:

s(q, d) = E(q) · E(d);   E(x) = tanh( Σ_{i=1,...,l} Pp(i,l) U Φx(xi) )   (5)

whereby we apply a linear map for each word dependent on its position, followed by a tanh nonlinearity on the sum of mappings. Note that this is related to the model of (Yu et al., 2014), who consider tags rather than positions. While the results of this method are not shown in the main paper due to space restrictions, it performs similarly well to N-grams and may be useful in real-world cases where N-grams cause the dictionary to be too large. Comparing to Table 3, MemNN with adaptive memories (AM) + multilinear obtains a mean performance of 93, the same as MemNNs with AM+NG+NL (i.e., using N-grams instead).
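A sketch of the position-binned multilinear map of eq. (5), assuming ceiling binning p(i, l) = ⌈i·Psz/l⌉ and random parameters; all names and sizes are illustrative:

```python
import math
import numpy as np

def position_bin(i, l, P_sz):
    # p(i, l) = ceil(i * P_sz / l): word i of an l-word sentence -> one of P_sz bins.
    return math.ceil(i * P_sz / l)

def multilinear_embed(word_vecs, P, U):
    # E(x) = tanh(sum_i P_{p(i,l)} U Phi_x(x_i)): one n x n map per position bin.
    l, P_sz = len(word_vecs), P.shape[0]
    total = sum(P[position_bin(i, l, P_sz) - 1] @ (U @ w)
                for i, w in enumerate(word_vecs, start=1))
    return np.tanh(total)

rng = np.random.default_rng(2)
n, D, P_sz = 3, 7, 4
U = rng.normal(size=(n, D))          # shared embedding matrix
P = rng.normal(size=(P_sz, n, n))    # one linear map per position bin
q = multilinear_embed([rng.normal(size=D) for _ in range(5)], P, U)
d = multilinear_embed([rng.normal(size=D) for _ in range(3)], P, U)
score = float(q @ d)                 # s(q, d) = E(q) . E(d)
```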
Finally, to assess the performance of nonlinear maps that do not model word position at all, we also consider the following nonlinear embedding:

E(x) = tanh(W tanh(U Φx(x)))   (6)

where W is an n × n matrix. This is similar to a classical two-layer neural network, but applied to both sides q and d of s(q, d). We also consider the straightforward combination of bag-of-N-grams followed by this nonlinearity.
# B BASELINE USING EXTERNAL RESOURCES

We also built a classical cascade NLP system baseline using a structured SVM, which incorporates coreference resolution and semantic role labeling preprocessing steps, which are themselves trained on large amounts of costly labeled data. We first run the Stanford coreference system (Raghunathan et al., 2010) on the stories, and each mention is then replaced with the first mention of its entity class. Second, the SENNA semantic role labeling system (SRL) (Collobert et al., 2011) is run, and we collect the set of arguments for each verb. We then define a ranking task for finding the supporting facts (trained using strong supervision):

o1, o2, o3 = arg max_{o ∈ O} SO(x, fo1, fo2, fo3; Θ)

where given the question x we find at most three supporting facts with indices oi from the set of facts f in the story (we also consider selecting an "empty fact" for the case of fewer than three), and SO is a linear scoring function with parameters Θ.
Computing the arg max requires exhaustive search, unlike e.g. the MemNN method, which is greedy. For scalability, we thus prune the set of possible matches by requiring that facts share one common non-determiner word with each other match or with x. SO is constructed as a set of indicator features. For simplicity, each of the features only looks at pairs of sentences, i.e.

SO(x, fo1, fo2, fo3; Θ) = Θ⊤ (g(x, fo1), g(x, fo2), g(x, fo3), g(fo1, fo2), g(fo2, fo3), g(fo1, fo3)).

The feature function g is made up of the following feature types, shown here for g(fo1, fo2):

(1) Word pairs: one indicator variable for each pair of words in fo1 and fo2.
(2) Pair distance: indicator for the distance between the sentences, i.e. o1 − o2.
(3) Pair order: indicator for the order of the sentences, i.e. o1 > o2.
(4) SRL Verb Pair: indicator variables for each pair of SRL verbs in fo1 and fo2.
(5) SRL Verb-Arg Pair: indicator variables for each pair of SRL arguments in fo1, fo2 and their corresponding verbs.

After finding the supporting facts, we build a similar structured SVM for the response stage, also with features tuned for that goal: Words, an indicator for each word in x; Word Pairs, an indicator for each pair of words in x and the supporting facts; and similar SRL Verb and SRL Verb-Arg Pair features as before.
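The first three feature types can be sketched as a dictionary-valued feature map. The function and example sentences below are illustrative, not the paper's actual feature extractor (which also includes the SRL features):

```python
def pair_features(f1, f2, idx1, idx2):
    # Sketch of the indicator feature map g(f_o1, f_o2) for two fact sentences
    # with story indices idx1, idx2; only feature types (1)-(3) are shown.
    feats = {}
    for w1 in f1.split():
        for w2 in f2.split():
            feats[("word_pair", w1, w2)] = 1.0   # (1) word pairs
    feats[("pair_dist", idx1 - idx2)] = 1.0      # (2) distance between sentences
    feats[("pair_order", idx1 > idx2)] = 1.0     # (3) order of the sentences
    return feats

g = pair_features("john moved kitchen", "john grabbed apple", 1, 3)
```

A linear score is then the dot product of a weight vector Θ with the concatenation of such feature maps over all six sentence pairs.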
Results are given in Table 3. The structured SVM, despite having access to external resources, does not perform better than MemNNs overall, still failing at 9 tasks. It does perform well on tasks 6, 9 and 10, where the hand-built feature conjunctions capture the necessary nonlinearities that the original MemNNs do not. However, it seems to do significantly worse on tasks requiring three (and sometimes, two) supporting facts (e.g. tasks 3, 16 and 2), presumably as ranking over so many possibilities introduces more mistakes. However, its non-greedy search does seem to help on other tasks, such as path finding (task 19), where search is very important.
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
arXiv:1502.03167v3 [cs.LG] 2 Mar 2015

# Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Sergey Ioffe, Google Inc., sioffe@google.com
Christian Szegedy, Google Inc., szegedy@google.com
# Abstract

Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.

Using mini-batches of examples, as opposed to one example at a time, is helpful in several ways. First, the gradient of the loss over a mini-batch is an estimate of the gradient over the training set, whose quality improves as the batch size increases. Second, computation over a batch can be much more efficient than m computations for individual examples, due to the parallelism afforded by the modern computing platforms. While stochastic gradient is simple and effective, it requires careful tuning of the model hyper-parameters, specifically the learning rate used in optimization, as well as the initial values for the model parameters. The training is complicated by the fact that the inputs to each layer are affected by the parameters of all preceding layers, so that small changes to the network parameters amplify as the network becomes deeper.

The change in the distributions of layers' inputs presents a problem because the layers need to continuously adapt to the new distribution. When the input distribution to a learning system changes, it is said to experience covariate shift (Shimodaira, 2000). This is typically handled via domain adaptation (Jiang, 2008). However, the notion of covariate shift can be extended beyond the learning system as a whole, to apply to its parts, such as a sub-network or a layer. Consider a network computing

ℓ = F2(F1(u, Θ1), Θ2)

# 1 Introduction
Deep learning has dramatically advanced the state of the art in vision, speech, and many other areas. Stochastic gradient descent (SGD) has proved to be an effective way of training deep networks, and SGD variants such as momentum (Sutskever et al., 2013) and Adagrad (Duchi et al., 2011) have been used to achieve state of the art performance. SGD optimizes the parameters Θ of the network, so as to minimize the loss

Θ = arg min_Θ (1/N) Σ_{i=1}^N ℓ(xi, Θ).

Here F1 and F2 are arbitrary transformations, and the parameters Θ1, Θ2 are to be learned so as to minimize the loss ℓ. Learning Θ2 can be viewed as if the inputs x = F1(u, Θ1) are fed into the sub-network ℓ = F2(x, Θ2). For example, a gradient descent step

Θ2 ← Θ2 − (α/m) Σ_{i=1}^m ∂F2(xi, Θ2)/∂Θ2
where x1...N is the training data set. With SGD, the training proceeds in steps, and at each step we consider a mini-batch x1...m of size m. The mini-batch is used to approximate the gradient of the loss function with respect to the parameters, by computing

(1/m) Σ_{i=1}^m ∂ℓ(xi, Θ)/∂Θ.

The gradient descent step above (for batch size m and learning rate α) is exactly equivalent to that for a stand-alone network F2 with input x. Therefore, the input distribution properties that make training more efficient, such as having the same distribution between the training and test data, apply to training the sub-network as well. As such it is advantageous for the distribution of x to remain fixed over time. Then, Θ2 does
not have to readjust to compensate for the change in the distribution of x.

Fixed distribution of inputs to a sub-network would have positive consequences for the layers outside the sub-network, as well. Consider a layer with a sigmoid activation function z = g(Wu + b), where u is the layer input, the weight matrix W and bias vector b are the layer parameters to be learned, and g(x) = 1/(1 + exp(−x)). As |x| increases, g′(x) tends to zero. This means that for all dimensions of x = Wu + b except those with small absolute values, the gradient flowing down to u will vanish and the model will train slowly. However, since x is affected by W, b and the parameters of all the layers below, changes to those parameters during training will likely move many dimensions of x into the saturated regime of the nonlinearity and slow down the convergence. This effect is amplified as the network depth increases. In practice, the saturation problem and the resulting vanishing gradients are usually addressed by using Rectified Linear Units (Nair & Hinton, 2010), ReLU(x) = max(x, 0), careful initialization (Bengio & Glorot, 2010; Saxe et al., 2013), and small learning rates. If, however, we could ensure that the distribution of nonlinearity inputs remains more stable as the network trains, then the optimizer would be less likely to get stuck in the saturated regime, and the training would accelerate.

We refer to the change in the distributions of internal nodes of a deep network, in the course of training, as Internal Covariate Shift. Eliminating it offers a promise of faster training. We propose a new mechanism, which we call Batch Normalization, that takes a step towards reducing internal covariate shift, and in doing so dramatically accelerates the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs.
Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows us to use much higher learning rates without the risk of divergence.
Furthermore, batch normalization regularizes the model and reduces the need for Dropout (Srivastava et al., 2014). Finally, Batch Normalization makes it possible to use saturating nonlinearities by preventing the network from getting stuck in the saturated modes.

In Sec. 4.2, we apply Batch Normalization to the best-performing ImageNet classification network, and show that we can match its performance using only 7% of the training steps, and can further exceed its accuracy by a substantial margin. Using an ensemble of such networks trained with Batch Normalization, we achieve the top-5 error rate that improves upon the best known results on ImageNet classification.

# 2 Towards Reducing Internal Covariate Shift

We define Internal Covariate Shift as the change in the distribution of network activations due to the change in network parameters during training. To improve the training, we seek to reduce the internal covariate shift. By fixing the distribution of the layer inputs x as the training progresses, we expect to improve the training speed. It has been long known (LeCun et al., 1998b; Wiesler & Ney, 2011) that the network training converges faster if its inputs are whitened, i.e., linearly transformed to have zero means and unit variances, and decorrelated. As each layer observes the inputs produced by the layers below, it would be advantageous to achieve the same whitening of the inputs of each layer. By whitening the inputs to each layer, we would take a step towards achieving the fixed distributions of inputs that would remove the ill effects of the internal covariate shift.

We could consider whitening activations at every training step or at some interval, either by modifying the network directly or by changing the parameters of the optimization algorithm to depend on the network activation values (Wiesler et al., 2014; Raiko et al., 2012; Povey et al., 2014; Desjardins & Kavukcuoglu). However, if these modifications are interspersed with the optimization steps, then the gradient descent step may attempt to update the parameters in a way that requires the normalization to be updated, which reduces the effect of the gradient step. For example, consider a layer with the input u that adds the learned bias b, and normalizes the result by subtracting the mean of the activation computed over the training data: x̂ = x − E[x], where x = u + b, X = {x1...N} is the set of values of x over the training set, and E[x] = (1/N) Σ_{i=1}^N xi. If a gradient descent step ignores the dependence of E[x] on b, then it will update b ← b + Δb, where Δb ∝ −∂ℓ/∂x̂. Then

u + (b + Δb) − E[u + (b + Δb)] = u + b − E[u + b].

Thus, the combination of the update to b and subsequent change in normalization led to no change in the output of the layer nor, consequently, the loss. As the training continues, b will grow indefinitely while the loss remains fixed. This problem can get worse if the normalization not only centers but also scales the activations. We have observed this empirically in initial experiments, where the model blows up when the normalization parameters are computed outside the gradient descent step.

The issue with the above approach is that the gradient descent optimization does not take into account the fact that the normalization takes place. To address this issue, we would like to ensure that, for any parameter values, the network always produces activations with the desired distribution. Doing so would allow the gradient of the loss with respect to the model parameters to account for the normalization, and for its dependence on the model parameters Θ.
Let again x be a layer input, treated as a vector, and X be the set of these inputs over the training data set. The normalization can then be written as a transformation

x̂ = Norm(x, X)

which depends not only on the given training example x but on all examples X, each of which depends on Θ if x is generated by another layer. For backpropagation, we would need to compute the Jacobians

∂Norm(x, X)/∂x and ∂Norm(x, X)/∂X;

ignoring the latter term would lead to the explosion described above. Within this framework, whitening the layer inputs is expensive, as it requires computing the covariance matrix Cov[x] = E_{x∈X}[xx⊤] − E[x]E[x]⊤ and its inverse square root, to produce the whitened activations Cov[x]^{−1/2}(x − E[x]), as well as the derivatives of these transforms for backpropagation. This motivates us to seek an alternative that performs input normalization in a way that is differentiable and does not require the analysis of the entire training set after every parameter update.

Some of the previous approaches (e.g. (Lyu & Simoncelli, 2008)) use statistics computed over a single training example, or, in the case of image networks, over different feature maps at a given location. However, this changes the representation ability of a network by discarding the absolute scale of activations. We want to preserve the information in the network, by normalizing the activations in a training example relative to the statistics of the entire training data.

# 3 Normalization via Mini-Batch Statistics
Since the full whitening of each layer's inputs is costly and not everywhere differentiable, we make two necessary simplifications. The first is that instead of whitening the features in layer inputs and outputs jointly, we will normalize each scalar feature independently, by making it have the mean of zero and the variance of 1. For a layer with d-dimensional input x = (x(1) . . . x(d)), we will normalize each dimension

x̂(k) = (x(k) − E[x(k)]) / √Var[x(k)]

where the expectation and variance are computed over the training data set. As shown in (LeCun et al., 1998b), such normalization speeds up convergence, even when the features are not decorrelated.

Note that simply normalizing each input of a layer may change what the layer can represent. For instance, normalizing the inputs of a sigmoid would constrain them to the linear regime of the nonlinearity. To address this, we make sure that the transformation inserted in the network can represent the identity transform. To accomplish this, we introduce, for each activation x(k), a pair of parameters γ(k), β(k), which scale and shift the normalized value:

y(k) = γ(k) x̂(k) + β(k).

These parameters are learned along with the original model parameters, and restore the representation power of the network. Indeed, by setting γ(k) = √Var[x(k)] and β(k) = E[x(k)], we could recover the original activations, if that were the optimal thing to do.

In the batch setting where each training step is based on the entire training set, we would use the whole set to normalize activations. However, this is impractical when using stochastic optimization. Therefore, we make the second simplification: since we use mini-batches in stochastic gradient training, each mini-batch produces estimates of the mean and variance of each activation. This way, the statistics used for normalization can fully participate in the gradient backpropagation.
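The identity-recovery claim is easy to check numerically. A minimal sketch (ε omitted for clarity; the data values are arbitrary):

```python
import numpy as np

# With gamma = sqrt(Var[x]) and beta = E[x], normalize-then-scale-shift
# recovers the original activations exactly (ignoring eps).
x = np.array([0.2, 1.4, -0.7, 3.1])
mean, std = x.mean(), x.std()

x_hat = (x - mean) / std          # normalized: zero mean, unit variance
gamma, beta = std, mean           # the identity-recovering setting
y = gamma * x_hat + beta          # equals x
```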
Note that the use of mini-batches is enabled by computation of per-dimension variances rather than joint covariances; in the joint case, regularization would be required since the mini-batch size is likely to be smaller than the number of activations being whitened, resulting in singular covariance matrices.

Consider a mini-batch B of size m. Since the normalization is applied to each activation independently, let us focus on a particular activation x(k) and omit k for clarity. We have m values of this activation in the mini-batch, B = {x1...m}. Let the normalized values be x̂1...m, and their linear transformations be y1...m. We refer to the transform

BN_{γ,β}: x1...m → y1...m

as the Batch Normalizing Transform. We present the BN Transform in Algorithm 1. In the algorithm, ε is a constant added to the mini-batch variance for numerical stability.
Input: Values of x over a mini-batch: B = {x1...m}; Parameters to be learned: γ, β
Output: {yi = BN_{γ,β}(xi)}

  μ_B ← (1/m) Σ_{i=1}^m xi                 // mini-batch mean
  σ²_B ← (1/m) Σ_{i=1}^m (xi − μ_B)²       // mini-batch variance
  x̂i ← (xi − μ_B) / √(σ²_B + ε)            // normalize
  yi ← γ x̂i + β ≡ BN_{γ,β}(xi)             // scale and shift

Algorithm 1: Batch Normalizing Transform, applied to activation x over a mini-batch.

The BN transform can be added to a network to manipulate any activation. In the notation y = BN_{γ,β}(x), we
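Algorithm 1 can be sketched directly in NumPy for a single activation. The data and parameter values below are arbitrary:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Algorithm 1 for one activation: x has shape (m,), a mini-batch of scalars."""
    mu = x.mean()                          # mini-batch mean
    var = x.var()                          # mini-batch (biased) variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize
    y = gamma * x_hat + beta               # scale and shift
    return y, x_hat, mu, var

x = np.array([1.0, 2.0, 3.0, 4.0])
y, x_hat, mu, var = batch_norm_forward(x, gamma=2.0, beta=0.5)
```

The normalized values x̂ have (up to ε) zero mean and unit variance within the batch, while y is re-centered at β and re-scaled by γ.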
indicate that the parameters γ and β are to be learned, but it should be noted that the BN transform does not independently process the activation in each training example. Rather, BN_{γ,β}(x) depends both on the training example and the other examples in the mini-batch. The scaled and shifted values y are passed to other network layers. The normalized activations x̂ are internal to our transformation, but their presence is crucial. The distribution of values of any x̂ has the expected value of 0 and the variance of 1, as long as the elements of each mini-batch are sampled from the same distribution, and if we neglect ε. This can be seen by observing that Σ_{i=1}^m x̂i = 0 and (1/m) Σ_{i=1}^m x̂i² = 1, and taking expectations. Each normalized activation x̂(k) can be viewed as an input to a sub-network composed of the linear transform y(k) = γ(k) x̂(k) + β(k), followed by the other processing done by the original network. These sub-network inputs all have fixed means and variances, and although the joint distribution of these normalized x̂(k) can change over the course of training, we expect that the introduction of normalized inputs accelerates the training of the sub-network and, consequently, the network as a whole.

During training we need to backpropagate the gradient of loss ℓ through this transformation, as well as compute the gradients with respect to the parameters of the BN transform. We use the chain rule, as follows (before simplification):

∂ℓ/∂x̂i = ∂ℓ/∂yi · γ
∂ℓ/∂σ²_B = Σ_{i=1}^m ∂ℓ/∂x̂i · (xi − μ_B) · (−1/2)(σ²_B + ε)^{−3/2}
∂ℓ/∂μ_B = (Σ_{i=1}^m ∂ℓ/∂x̂i · (−1/√(σ²_B + ε))) + ∂ℓ/∂σ²_B · (1/m) Σ_{i=1}^m (−2(xi − μ_B))
∂ℓ/∂xi = ∂ℓ/∂x̂i · (1/√(σ²_B + ε)) + ∂ℓ/∂σ²_B · (2(xi − μ_B)/m) + ∂ℓ/∂μ_B · (1/m)
∂ℓ/∂γ = Σ_{i=1}^m ∂ℓ/∂yi · x̂i
∂ℓ/∂β = Σ_{i=1}^m ∂ℓ/∂yi
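These chain-rule formulas translate directly into code; a sketch with a finite-difference check against the analytic gradient (data values arbitrary):

```python
import numpy as np

def bn_forward(x, gamma, beta, eps=1e-5):
    # Forward pass of Algorithm 1 (one activation over a batch of size m).
    mu, var = x.mean(), x.var()
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta, x_hat, var

def bn_backward(dy, x, x_hat, var, gamma, eps=1e-5):
    # The chain-rule gradients listed in the text, before simplification.
    m = x.shape[0]
    mu = x.mean()
    inv_std = 1.0 / np.sqrt(var + eps)
    dx_hat = dy * gamma
    dvar = np.sum(dx_hat * (x - mu)) * (-0.5) * inv_std**3
    dmu = np.sum(dx_hat * -inv_std) + dvar * np.mean(-2.0 * (x - mu))
    dx = dx_hat * inv_std + dvar * 2.0 * (x - mu) / m + dmu / m
    dgamma = np.sum(dy * x_hat)
    dbeta = np.sum(dy)
    return dx, dgamma, dbeta

# Finite-difference check of dx[0] for the loss L = sum(w * y):
x = np.array([0.5, -1.0, 2.0, 0.0])
w = np.array([0.3, -0.2, 0.1, 0.4])
y, x_hat, var = bn_forward(x, 1.5, 0.2)
dx, dgamma, dbeta = bn_backward(w, x, x_hat, var, 1.5)
h = 1e-6
xp = x.copy(); xp[0] += h
yp, _, _ = bn_forward(xp, 1.5, 0.2)
fd = (np.sum(w * yp) - np.sum(w * y)) / h   # numerical dL/dx[0]
```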
Thus, the BN transform is a differentiable transformation that introduces normalized activations into the network. This ensures that as the model is training, layers can continue learning on input distributions that exhibit less internal covariate shift, thus accelerating the training. Furthermore, the learned affine transform applied to these normalized activations allows the BN transform to represent the identity transformation and preserves the network capacity.

# 3.1 Training and Inference with Batch-Normalized Networks

To Batch-Normalize a network, we specify a subset of activations and insert the BN transform for each of them, according to Alg. 1. Any layer that previously received x as the input, now receives BN(x). A model employing Batch Normalization can be trained using batch gradient descent, or Stochastic Gradient Descent with a mini-batch size m > 1, or with any of its variants such as Adagrad
(Duchi et al., 2011). The normalization of activations that depends on the mini-batch allows efficient training, but is neither necessary nor desirable during inference; we want the output to depend only on the input, deterministically. For this, once the network has been trained, we use the normalization

x̂ = (x − E[x]) / √(Var[x] + ε)

using the population, rather than mini-batch, statistics. Neglecting ε, these normalized activations have the same mean 0 and variance 1 as during training. We use the unbiased variance estimate Var[x] = (m/(m−1)) · E_B[σ²_B], where the expectation is over training mini-batches of size m and σ²_B are their sample variances. Using moving averages instead, we can track the accuracy of a model as it trains. Since the means and variances are fixed during inference, the normalization is simply a linear transform applied to each activation. It may further be composed with the scaling by γ and shift by β, to yield a single linear transform that replaces BN(x). Algorithm 2 summarizes the procedure for training batch-normalized networks.
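A sketch of the inference path: estimate population statistics from mini-batches, then fold normalization, scale and shift into one affine map. The batch values are arbitrary toy data:

```python
import numpy as np

def bn_inference(x, gamma, beta, pop_mean, pop_var, eps=1e-5):
    # Inference-time BN: a fixed transform using population statistics.
    return gamma * (x - pop_mean) / np.sqrt(pop_var + eps) + beta

# Population statistics from training mini-batches of size m:
batches = [np.array([1.0, 3.0]), np.array([2.0, 4.0]), np.array([0.0, 2.0])]
m = 2
pop_mean = np.mean([b.mean() for b in batches])                 # E_B[mu_B]
pop_var = (m / (m - 1)) * np.mean([b.var() for b in batches])   # m/(m-1) E_B[sigma^2_B]

# Fold into a single linear transform y = a*x + c that replaces BN(x):
gamma, beta = 1.5, -0.3
a = gamma / np.sqrt(pop_var + 1e-5)
c = beta - a * pop_mean
x = np.array([0.1, 0.9, -2.0])
y_folded = a * x + c
y_direct = bn_inference(x, gamma, beta, pop_mean, pop_var)      # identical output
```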
Input: Network N with trainable parameters Θ; subset of activations {x(k)}_{k=1}^K
Output: Batch-normalized network for inference, N_BN^inf

1: N_BN^tr ← N   // Training BN network
2: for k = 1 . . . K do
3:   Add transformation y(k) = BN_{γ(k),β(k)}(x(k)) to N_BN^tr (Alg. 1)
4:   Modify each layer in N_BN^tr with input x(k) to take y(k) instead
5: end for
6: Train N_BN^tr to optimize the parameters Θ ∪ {γ(k), β(k)}_{k=1}^K
7: N_BN^inf ← N_BN^tr   // Inference BN network with frozen parameters
8: for k = 1 . . . K do
9:   // For clarity, x ≡ x(k), γ ≡ γ(k), μ_B ≡ μ_B^(k), etc.
10:  Process multiple training mini-batches B, each of size m, and average over them:
       E[x] ← E_B[μ_B]
       Var[x] ← (m/(m−1)) E_B[σ²_B]
11:  In N_BN^inf, replace the transform y = BN_{γ,β}(x) with
       y = (γ / √(Var[x] + ε)) · x + (β − γ E[x] / √(Var[x] + ε))
12: end for

Algorithm 2: Training a Batch-Normalized Network

# 3.2 Batch-Normalized Convolutional Networks

Batch Normalization can be applied to any set of activations in the network. Here, we focus on transforms
4 that consist of an afï¬ ne transformation followed by an element-wise nonlinearity: z = g(W u + b) where W and b are learned parameters of the model, and ) is the nonlinearity such as sigmoid or ReLU. This for- g( · mulation covers both fully-connected and convolutional layers. We add the BN transform immediately before the nonlinearity, by normalizing x = W u + b. We could have also normalized the layer inputs u, but since u is likely the output of another nonlinearity, the shape of its distri- bution is likely to change during training, and constraining its ï¬ rst and second moments would not eliminate the co- variate shift. In contrast, W u + b is more likely to have a symmetric, non-sparse distribution, that is â more Gaus- sianâ (Hyv¨arinen & Oja, 2000); normalizing it is likely to produce activations with a stable distribution. Note that, since we normalize W u+b, the bias b can be ignored since its effect will be canceled by the subsequent mean subtraction (the role of the bias is subsumed by β in Alg. 1). Thus, z = g(W u + b) is replaced with z = g(BN(W u)) where the BN transform is applied independently to each dimension of x = W u, with a separate pair of learned parameters γ(k), β(k) per dimension. For convolutional layers, we additionally want the nor- malization to obey the convolutional property â so that different elements of the same feature map, at different locations, are normalized in the same way. To achieve this, we jointly normalize all the activations in a mini- be the set of batch, over all locations. In Alg. 1, we let all values in a feature map across both the elements of a mini-batch and spatial locations â so for a mini-batch of q, we use the effec- size m and feature maps of size p tive mini-batch of size mâ ² = p q. We learn a pair of parameters γ(k) and β(k) per feature map, rather than per activation. Alg. 
2 is modified similarly, so that during inference the BN transform applies the same linear transformation to each activation in a given feature map.
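As a concrete illustration of the convolutional case described above, the following numpy sketch (illustrative, not the paper's implementation; the NHWC tensor layout and shapes are assumptions) normalizes a conv activation over the mini-batch and all spatial locations, with one (γ, β) pair per feature map:

```python
import numpy as np

def batchnorm_conv(x, gamma, beta, eps=1e-5):
    """Batch-normalize a conv activation x of shape (N, H, W, C).

    Statistics are shared across the mini-batch and all spatial
    locations (effective mini-batch size N*H*W), with one learned
    (gamma, beta) pair per feature map, as described in the text.
    """
    mean = x.mean(axis=(0, 1, 2), keepdims=True)   # per-channel mean
    var = x.var(axis=(0, 1, 2), keepdims=True)     # per-channel variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta                    # broadcast over channels

rng = np.random.default_rng(0)
x = rng.normal(3.0, 2.0, size=(8, 5, 5, 4))        # N=8, 5x5 maps, C=4
y = batchnorm_conv(x, gamma=np.ones(4), beta=np.zeros(4))
print(np.allclose(y.mean(axis=(0, 1, 2)), 0, atol=1e-6))  # True
print(np.allclose(y.var(axis=(0, 1, 2)), 1, atol=1e-3))   # True
```

With γ = 1 and β = 0, each feature map of the output has (approximately) zero mean and unit variance, regardless of the input's per-channel statistics.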
1502.03167#16
1502.03167#18
1502.03167
[ "1502.03167" ]
1502.03167#18
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
# 3.3 Batch Normalization enables higher learning rates

In traditional deep networks, too high a learning rate may result in gradients that explode or vanish, as well as in getting stuck in poor local minima. Batch Normalization helps address these issues. By normalizing activations throughout the network, it prevents small changes to the parameters from amplifying into larger and suboptimal changes in activations in gradients; for instance, it prevents the training from getting stuck in the saturated regimes of nonlinearities.

Batch Normalization also makes training more resilient to the parameter scale. Normally, large learning rates may increase the scale of layer parameters, which then amplify
1502.03167#17
1502.03167#19
1502.03167
[ "1502.03167" ]
1502.03167#19
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
the gradient during backpropagation and lead to the model explosion. However, with Batch Normalization, backpropagation through a layer is unaffected by the scale of its parameters. Indeed, for a scalar a,

BN(Wu) = BN((aW)u)

and we can show that

∂BN((aW)u)/∂u = ∂BN(Wu)/∂u
∂BN((aW)u)/∂(aW) = (1/a) · ∂BN(Wu)/∂W
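The scale-invariance identity BN(Wu) = BN((aW)u) can be checked numerically. In this small numpy sketch (an illustration, not the paper's code), each pre-activation dimension is normalized over the mini-batch, and rescaling W by an arbitrary scalar leaves the normalized output essentially unchanged:

```python
import numpy as np

def bn(z, eps=1e-5):
    # Mini-batch normalization of pre-activations z, shape (m, d).
    return (z - z.mean(0)) / np.sqrt(z.var(0) + eps)

rng = np.random.default_rng(1)
U = rng.normal(size=(32, 10))      # mini-batch of layer inputs
W = rng.normal(size=(10, 6))       # layer weights
a = 7.0                            # arbitrary scalar rescaling

out1 = bn(U @ W)
out2 = bn(U @ (a * W))
print(np.allclose(out1, out2, atol=1e-3))  # True
```

The tiny discrepancy that remains comes only from the ε term inside the square root, which does not rescale with a.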
1502.03167#18
1502.03167#20
1502.03167
[ "1502.03167" ]
1502.03167#20
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
The scale does not affect the layer Jacobian nor, consequently, the gradient propagation. Moreover, larger weights lead to smaller gradients, and Batch Normalization will stabilize the parameter growth.

We further conjecture that Batch Normalization may lead the layer Jacobians to have singular values close to 1, which is known to be beneficial for training (Saxe et al., 2013). Consider two consecutive layers with normalized inputs, and the transformation between these normalized vectors: ẑ = F(x̂). If we assume that x̂ and ẑ are Gaussian and uncorrelated, and that F(x̂) = Jx̂ is a linear transformation for the given model parameters, then both x̂ and ẑ have unit covariances, and I = Cov[ẑ] = J Cov[x̂] Jᵀ = JJᵀ. Thus JJᵀ = I, and so all singular values of J are equal to 1, which preserves the gradient magnitudes during backpropagation. In reality, the transformation is not linear, and the normalized values are not guaranteed to be Gaussian nor independent, but we nevertheless expect Batch Normalization to help make gradient propagation better behaved. The precise effect of Batch Normalization on gradient propagation remains an area of further study.

# 3.4 Batch Normalization regularizes the model

When training with Batch Normalization, a training example is seen in conjunction with other examples in the mini-batch, and the training network no longer produces deterministic values for a given training example. In our experiments, we found this effect to be advantageous to the generalization of the network. Whereas Dropout (Srivastava et al., 2014) is typically used to reduce overfitting, in a batch-normalized network we found that it can either be removed or reduced in strength.

# 4 Experiments

# 4.1 Activations over time
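The non-determinism behind this regularizing effect is easy to see directly. In the numpy sketch below (illustrative, not the paper's code), the same example receives different normalized activations depending on which other examples happen to share its mini-batch:

```python
import numpy as np

def bn(z, eps=1e-5):
    # Mini-batch normalization over the batch dimension, shape (m, d).
    return (z - z.mean(0)) / np.sqrt(z.var(0) + eps)

rng = np.random.default_rng(2)
x = rng.normal(size=(1, 4))                       # one fixed training example
batch_a = np.vstack([x, rng.normal(size=(31, 4))])
batch_b = np.vstack([x, rng.normal(size=(31, 4))])

# The same example yields different normalized values depending on its
# mini-batch companions: the stochasticity the text credits with a
# regularizing influence.
print(np.allclose(bn(batch_a)[0], bn(batch_b)[0]))  # False
```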
1502.03167#19
1502.03167#21
1502.03167
[ "1502.03167" ]
1502.03167#21
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
To verify the effects of internal covariate shift on training, and the ability of Batch Normalization to combat it, we considered the problem of predicting the digit class on the MNIST dataset (LeCun et al., 1998a). We used a very simple network, with a 28x28 binary image as input, and 3 fully-connected hidden layers with 100 activations each. Each hidden layer computes y = g(Wu + b) with sigmoid nonlinearity, and the weights W initialized to small random Gaussian values. The last hidden layer is followed by a fully-connected layer with 10 activations (one per class) and cross-entropy loss. We trained the network for 50000 steps, with 60 examples per mini-batch. We added Batch Normalization to each hidden layer of the network, as in Sec. 3.1. We were interested in the comparison between the baseline and batch-normalized networks, rather than in achieving state-of-the-art performance on MNIST (which the described architecture does not).

Figure 1: (a) The test accuracy of the MNIST network trained with and without Batch Normalization, vs. the number of training steps. Batch Normalization helps the network train faster and achieve higher accuracy. (b, c) The evolution of input distributions to a typical sigmoid, over the course of training, shown as {15, 50, 85}th percentiles. Batch Normalization makes the distribution more stable and reduces the internal covariate shift.

Figure 1(a) shows the fraction of correct predictions by the two networks on held-out test data, as training progresses. The batch-normalized network enjoys the higher test accuracy. To investigate why, we studied inputs to the sigmoid, in the original network N and the batch-normalized network N_BN^tr (Alg. 2), over the course of training. In Fig. 1(b,c) we show, for one typical activation from the last hidden layer of each network, how its distribution evolves.
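The MNIST architecture described above can be sketched as a numpy forward pass. This is a minimal illustration under stated assumptions (random weights, training loop and cross-entropy loss omitted), with batch normalization inserted before each sigmoid:

```python
import numpy as np

rng = np.random.default_rng(4)

def bn(z, gamma, beta, eps=1e-5):
    # Normalize each activation over the mini-batch, then scale and shift.
    zhat = (z - z.mean(0)) / np.sqrt(z.var(0) + eps)
    return gamma * zhat + beta

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes from the text: 784 input -> three hidden layers of 100 -> 10.
sizes = [28 * 28, 100, 100, 100, 10]
params = [(rng.normal(0, 0.1, (a, b)), np.ones(b), np.zeros(b))
          for a, b in zip(sizes, sizes[1:])]

def forward(x):
    h = x
    for W, gamma, beta in params[:-1]:
        h = sigmoid(bn(h @ W, gamma, beta))  # BN before each sigmoid
    W, _, _ = params[-1]                     # final layer: plain linear
    return h @ W                             # logits (loss omitted)

x = rng.normal(size=(60, 28 * 28))           # one mini-batch of 60 "images"
print(forward(x).shape)  # (60, 10)
```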
The distributions in the original network change signifi-
1502.03167#20
1502.03167#22
1502.03167
[ "1502.03167" ]
1502.03167#22
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
cantly over time, both in their mean and the variance, which complicates the training of the subsequent layers. In contrast, the distributions in the batch-normalized network are much more stable as training progresses, which aids the training.

# 4.2 ImageNet classification

We applied Batch Normalization to a new variant of the Inception network (Szegedy et al., 2014), trained on the ImageNet classification task (Russakovsky et al., 2014). The network has a large number of convolutional and pooling layers, with a softmax layer to predict the image class, out of 1000 possibilities. Convolutional layers use ReLU as the nonlinearity. The main difference to the network described in (Szegedy et al., 2014) is that the 5×5 convolutional layers are replaced by two consecutive layers of 3×3 convolutions with up to 128 fi
1502.03167#21
1502.03167#23
1502.03167
[ "1502.03167" ]
1502.03167#23
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
lters. The network contains 13.6·10^6 parameters and, other than the top softmax layer, has no fully-connected layers. More details are given in the Appendix. We refer to this model as Inception in the rest of the text. The model was trained using a version of Stochastic Gradient Descent with momentum (Sutskever et al., 2013), with a mini-batch size of 32. The training was performed using a large-scale, distributed architecture (similar to (Dean et al., 2012)). All networks are evaluated as training progresses by computing the validation accuracy @1, i.e. the probability of predicting the correct label out of 1000 possibilities, on a held-out set, using a single crop per image.

In our experiments, we evaluated several modifications of Inception with Batch Normalization. In all cases, Batch Normalization was applied to the input of each nonlinearity, in a convolutional way, as described in section 3.2, while keeping the rest of the architecture constant.

# 4.2.1 Accelerating BN Networks

Simply adding Batch Normalization to a network does not take full advantage of our method. To do so, we further changed the network and its training parameters, as follows:

Increase learning rate. In a batch-normalized model, we have been able to achieve a training speedup from higher learning rates, with no ill side effects (Sec. 3.3).

Remove Dropout. As described in Sec. 3.4, Batch Normalization fulfills some of the same goals as Dropout. Removing Dropout from Modified BN-Inception speeds up training, without increasing overfitting.

Reduce the L2 weight regularization. While in Inception an L2 loss on the model parameters controls overfi
1502.03167#22
1502.03167#24
1502.03167
[ "1502.03167" ]
1502.03167#24
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
tting, in Modified BN-Inception the weight of this loss is reduced by a factor of 5. We find that this improves the accuracy on the held-out validation data.

Accelerate the learning rate decay. In training Inception, the learning rate was decayed exponentially. Because our network trains faster than Inception, we lower the learning rate 6 times faster.

Remove Local Response Normalization. While Inception and other networks (Srivastava et al., 2014) benefit from it, we found that with Batch Normalization it is not necessary.
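Accelerating an exponential decay schedule by a constant factor, as described above, amounts to scaling the exponent. In this sketch the decay rate, step interval, and base rate are illustrative assumptions, not values from the paper:

```python
# Exponential learning-rate decay, sped up by a constant factor as in the
# "accelerate the learning rate decay" change. All parameter values here
# (base_lr, decay, decay_steps) are illustrative assumptions.
def exp_decay(step, base_lr=0.0075, decay=0.94, decay_steps=10_000, speedup=6.0):
    # speedup=6.0 makes the schedule reach any given rate 6x sooner.
    return base_lr * decay ** (speedup * step / decay_steps)

print(exp_decay(0))                      # 0.0075
print(exp_decay(60_000) < exp_decay(0))  # True
```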
1502.03167#23
1502.03167#25
1502.03167
[ "1502.03167" ]
1502.03167#25
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Shuffle training examples more thoroughly. We enabled within-shard shuffling of the training data, which prevents the same examples from always appearing in a mini-batch together. This led to about 1% improvement in the validation accuracy, which is consistent with the view of Batch Normalization as a regularizer (Sec. 3.4): the randomization inherent in our method should be most beneficial when it affects an example differently each time it is seen.

Reduce the photometric distortions. Because batch-normalized networks train faster and observe each training example fewer times, we let the trainer focus on more "
1502.03167#24
1502.03167#26
1502.03167
[ "1502.03167" ]
1502.03167#26
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
real" images by distorting them less.

Figure 2: Single crop validation accuracy of Inception and its batch-normalized variants, vs. the number of training steps.

# 4.2.2 Single-Network Classifi
1502.03167#25
1502.03167#27
1502.03167
[ "1502.03167" ]
1502.03167#27
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
cation

We evaluated the following networks, all trained on the LSVRC2012 training data and tested on the validation data:

Inception: the network described at the beginning of Section 4.2, trained with the initial learning rate of 0.0015.

BN-Baseline: Same as Inception with Batch Normalization before each nonlinearity.

BN-x5: Inception with Batch Normalization and the modifications in Sec. 4.2.1. The initial learning rate was increased by a factor of 5, to 0.0075. The same learning rate increase with the original Inception caused the model parameters to reach machine infi
1502.03167#26
1502.03167#28
1502.03167
[ "1502.03167" ]