Dataset column schema: doi (string, length 10); chunk-id (int64, 0-936); chunk (string, 401-2.02k chars); id (string, 12-14); title (string, 8-162); summary (string, 228-1.92k); source (string, 31); authors (string, 7-6.97k); categories (string, 5-107); comment (string, 4-398, nullable); journal_ref (string, 8-194, nullable); primary_category (string, 5-17); published (string, 8); updated (string, 8); references (list).

doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1502.04623 | 22 | where w_t is the N × N writing patch. For colour images each point in the input and error image (and hence in the reading and writing patches) is an RGB triple. In this case the same reading and writing filters are used for all three channels.
# 4. Experimental Results
Our model consists of an LSTM recurrent network that receives a 12 × 12 "glimpse" from the input image at each time-step, using the selective read operation defined in Section 3.2. After a fixed number of glimpses the network uses a softmax layer to classify the MNIST digit. The network is similar to the recently introduced Recurrent Attention Model (RAM) (Mnih et al., 2014), except that our attention method is differentiable; we therefore refer to it as "Differentiable RAM".
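The paragraph above fully specifies the classifier: an LSTM consumes one attention glimpse per time-step and a linear-plus-softmax layer reads the final hidden state. The sketch below illustrates that loop in PyTorch; the `read_glimpse` placeholder (standing in for the selective read of Section 3.2) and all sizes are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class DifferentiableRAMClassifier(nn.Module):
    """Minimal sketch: LSTM over attention glimpses, then softmax classification."""

    def __init__(self, glimpse_size=12, hidden=256, n_classes=10, n_glimpses=8):
        super().__init__()
        self.glimpse_size = glimpse_size
        self.hidden = hidden
        self.n_glimpses = n_glimpses
        self.lstm = nn.LSTMCell(glimpse_size * glimpse_size, hidden)
        self.classifier = nn.Linear(hidden, n_classes)

    def read_glimpse(self, image, h):
        # Placeholder for the differentiable read of Section 3.2: in DRAW the
        # state h parameterises a grid of Gaussian filters over the image.
        # Here we simply take a fixed centre crop so the sketch runs.
        b, _, H, W = image.shape
        g = self.glimpse_size
        top, left = (H - g) // 2, (W - g) // 2
        return image[:, 0, top:top + g, left:left + g].reshape(b, -1)

    def forward(self, image):
        b = image.size(0)
        h = image.new_zeros(b, self.hidden)
        c = image.new_zeros(b, self.hidden)
        for _ in range(self.n_glimpses):
            glimpse = self.read_glimpse(image, h)  # 12 x 12 patch, flattened
            h, c = self.lstm(glimpse, (h, c))
        return self.classifier(h)  # class logits; apply softmax / cross-entropy outside
```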
We assess the ability of DRAW to generate realistic-looking images by training on three datasets of progressively increasing visual complexity: MNIST (LeCun et al., 1998), Street View House Numbers (SVHN) (Netzer et al., 2011) and CIFAR-10 (Krizhevsky, 2009). The images
The results in Table 1 demonstrate a significant improvement in test error over the original RAM network. Moreover our model had only a single attention patch at each | 1502.04623#22 | DRAW: A Recurrent Neural Network For Image Generation | This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye. | http://arxiv.org/pdf/1502.04623 | Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra | cs.CV, cs.LG, cs.NE | null | null | cs.CV | 20150216 | 20150520 | [] |
1502.04623 | 24 | Figure 5. Cluttered MNIST classification with attention (time runs left to right). Each sequence shows a succession of four glimpses taken by the network while classifying cluttered translated MNIST. The green rectangle indicates the size and location of the attention patch, while the line width represents the variance of the filters.
Table 1. Classification test error on 100 × 100 Cluttered Translated MNIST.

| Model | Error |
|---|---|
| Convolutional, 2 layers | 14.35% |
| RAM, 4 glimpses, 12 × 12, 4 scales | 9.41% |
| RAM, 8 glimpses, 12 × 12, 4 scales | 8.11% |
| Differentiable RAM, 4 glimpses, 12 × 12 | 4.18% |
| Differentiable RAM, 8 glimpses, 12 × 12 | 3.36% |
time-step, whereas RAM used four, at different zooms.
# 4.2. MNIST Generation | 1502.04623#24 |
1502.04623 | 25 | Table 2. Negative log-likelihood (in nats) per test-set example on the binarised MNIST data set. The right hand column, where present, gives an upper bound (Eq. 12) on the negative log-likelihood. The previous results are from [1] (Salakhutdinov & Hinton, 2009), [2] (Murray & Salakhutdinov, 2009), [3] (Uria et al., 2014), [4] (Raiko et al., 2014), [5] (Rezende et al., 2014), [6] (Salimans et al., 2014), [7] (Gregor et al., 2014). | 1502.04623#25 |
1502.04623 | 26 |

| Model | − log p | ≤ |
|---|---|---|
| DBM 2hl [1] | ≈ 84.62 | |
| DBN 2hl [2] | ≈ 84.55 | |
| NADE [3] | 88.33 | |
| EoNADE 2hl (128 orderings) [3] | 85.10 | |
| EoNADE-5 2hl (128 orderings) [4] | 84.68 | |
| DLGM [5] | ≈ 86.60 | |
| DLGM 8 leapfrog steps [6] | ≈ 85.51 | 88.30 |
| DARN 1hl [7] | ≈ 84.13 | 88.30 |
| DARN 12hl [7] | - | 87.72 |
| DRAW without attention | - | 87.40 |
| DRAW | - | 80.97 |
Figure 6. Generated MNIST images. All digits were generated by DRAW except those in the rightmost column, which shows the training set images closest to those in the column second to the right (pixelwise L2 is the distance measure). Note that the network was trained on binary samples, while the generated images are mean probabilities. | 1502.04623#26 |
1502.04623 | 27 | We trained the full DRAW network as a generative model on the binarized MNIST dataset (Salakhutdinov & Murray, 2008). This dataset has been widely studied in the literature, allowing us to compare the numerical performance (measured in average nats per image on the test set) of DRAW with existing methods. Table 2 shows that DRAW without selective attention performs comparably to other recent generative models such as DARN, NADE and DBMs, and that DRAW with attention considerably improves on the state of the art.
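The upper bounds reported for DRAW in Table 2 come from the variational objective optimised during training: a reconstruction term (negative log-likelihood of the binary image under the final canvas) plus a KL term summed over time-steps. Below is a minimal sketch of that bound, assuming diagonal-Gaussian posteriors N(mu_t, sigma_t^2) per step and a standard-normal prior; tensor names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def draw_variational_bound(x, canvas_T, mus, logvars):
    """Per-image bound L = Lx + Lz in nats (summed over pixels and latents).

    x        : (B, D) binary targets
    canvas_T : (B, D) final canvas, treated as pre-sigmoid logits
    mus, logvars : lists of (B, Z) posterior parameters, one pair per time-step
    """
    # Reconstruction term: Bernoulli negative log-likelihood of x under sigmoid(canvas_T).
    lx = F.binary_cross_entropy_with_logits(canvas_T, x, reduction="none").sum(dim=1)
    # Latent term: sum over time-steps of KL( N(mu_t, sigma_t^2) || N(0, I) ).
    lz = torch.zeros_like(lx)
    for mu, logvar in zip(mus, logvars):
        lz = lz + 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1)
    return (lx + lz).mean()  # average nats per image
```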
Once the DRAW network was trained, we generated MNIST digits following the method in Section 2.3, examples of which are presented in Fig. 6. Fig. 7 illustrates the image generation sequence for a DRAW network without selective attention (see Section 3.1). It is interesting to compare this with the generation sequence for DRAW with attention, as depicted in Fig. 1. Whereas without attention it progressively sharpens a blurred image in a global way,
Figure 7. MNIST generation sequences for DRAW without attention (time runs left to right). Notice how the network first generates a very blurry image that is subsequently refined. | 1502.04623#27 |
1502.04623 | 28 | Figure 8. Generated MNIST images with two digits.
with attention it constructs the digit by tracing the lines, much like a person with a pen.
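For reference, the generation method mentioned above (Section 2.3 of the paper) amounts to sampling latents from the prior, running the decoder one step per latent, and accumulating additive writes on a canvas. The sketch below shows that loop; `decoder_rnn` and `write` are placeholders for trained network components, so names and shapes are assumptions.

```python
import torch

def generate(decoder_rnn, write, T, z_dim, img_dim, batch=1):
    """Sample images from a trained DRAW-style decoder (illustrative sketch)."""
    canvas = torch.zeros(batch, img_dim)
    state = None
    for _ in range(T):
        z = torch.randn(batch, z_dim)          # latent sample from the prior
        h_dec, state = decoder_rnn(z, state)   # one decoder step
        canvas = canvas + write(h_dec)         # additive update to the canvas
    return torch.sigmoid(canvas)               # mean of the Bernoulli output distribution
```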
# 4.3. MNIST Generation with Two Digits
The main motivation for using an attention-based generative model is that large images can be built up iteratively, by adding to a small part of the image at a time. To test this capability in a controlled fashion, we trained DRAW to generate images with two 28 × 28 MNIST images chosen at random and placed at random locations in a 60 × 60 black background. In cases where the two digits overlap, the pixel intensities were added together at each point and clipped to be no greater than one. Examples of generated data are shown in Fig. 8. The network typically generates one digit and then the other, suggesting an ability to recreate composite scenes from simple pieces.
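Building the two-digit training data as described above is a few lines of array manipulation; a sketch (assuming MNIST digits already scaled to [0, 1]) follows.

```python
import numpy as np

def make_two_digit_image(digits, rng=None):
    """Compose two random 28x28 MNIST digits onto a 60x60 black canvas.

    digits: array of shape (N, 28, 28) with values in [0, 1]. Overlapping
    intensities are added and clipped at 1, as described in the text.
    """
    rng = rng or np.random.default_rng()
    canvas = np.zeros((60, 60), dtype=np.float32)
    for idx in rng.integers(0, len(digits), size=2):
        top = rng.integers(0, 60 - 28 + 1)
        left = rng.integers(0, 60 - 28 + 1)
        canvas[top:top + 28, left:left + 28] += digits[idx]
    return np.clip(canvas, 0.0, 1.0)
```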
# 4.4. Street View House Number Generation | 1502.04623#28 |
1502.04623 | 29 |
MNIST digits are very simplistic in terms of visual structure, and we were keen to see how well DRAW performed on natural images. Our first natural image generation experiment used the multi-digit Street View House Numbers dataset (Netzer et al., 2011). We used the same preprocessing as (Goodfellow et al., 2013), yielding a 64 × 64 house number image for each training example. The network was then trained using 54 × 54 patches extracted at random locations from the preprocessed images. The SVHN training set contains 231,053 images, and the validation set contains 4,701 images.
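Extracting the random 54 × 54 training patches from the preprocessed 64 × 64 images is straightforward; a small sketch (array layout assumed to be height × width × channels) is shown below.

```python
import numpy as np

def random_patch(image, patch=54, rng=None):
    """Crop a random patch x patch window from a preprocessed SVHN image."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    top = rng.integers(0, h - patch + 1)
    left = rng.integers(0, w - patch + 1)
    return image[top:top + patch, left:left + patch]
```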
Figure 9. Generated SVHN images. The rightmost column shows the training images closest (in L2 distance) to the generated images beside them. Note that the two columns are visually similar, but the numbers are generally different.
The house number images generated by the network are highly realistic, as shown in Figs. 9 and 10. Fig. 11 reveals that, despite the long training time, the DRAW network underfit the SVHN training data.
# 4.5. Generating CIFAR Images
The most challenging dataset we applied DRAW to was the CIFAR-10 collection of natural images (Krizhevsky, | 1502.04623#29 |
1502.04623 | 30 |
Table 3. Experimental Hyper-Parameters.

| Task | #glimpses | LSTM #h | #z | Read Size | Write Size |
|---|---|---|---|---|---|
| 100 × 100 MNIST Classification | 8 | 256 | - | 12 × 12 | - |
| MNIST Model | 64 | 256 | 100 | 2 × 2 | 5 × 5 |
| SVHN Model | 32 | 800 | 100 | 12 × 12 | 12 × 12 |
| CIFAR Model | 64 | 400 | 200 | 5 × 5 | 5 × 5 |

Figure 10. SVHN Generation Sequences (time runs left to right). The red rectangle indicates the attention patch. Notice how the network draws the digits one at a time, and how it moves and scales the writing patch to produce numbers with different slopes and sizes.
[Figure 11 plot: cost per example (y-axis, roughly 5060-5220) against minibatch number in thousands (x-axis, 0-350), with training and validation curves.]
Figure 12. Generated CIFAR images. The rightmost column shows the nearest training examples to the column beside it. | 1502.04623#30 |
1502.04623 | 31 |
looking objects without overfitting (in other words, without copying from the training set). Nonetheless the images in Fig. 12 demonstrate that DRAW is able to capture much of the shape, colour and composition of real photographs.
# 5. Conclusion
This paper introduced the Deep Recurrent Attentive Writer (DRAW) neural network architecture, and demonstrated its ability to generate highly realistic natural images such as photographs of house numbers, as well as improving on the best known results for binarized MNIST generation. We also established that the two-dimensional differentiable attention mechanism embedded in DRAW is beneficial not only to image generation, but also to image classification.
Figure 11. Training and validation cost on SVHN. The validation cost is consistently lower because the validation set patches were extracted from the image centre (rather than from random locations, as in the training set). The network was never able to overfit on the training data.
# Acknowledgments
Of the many who assisted in creating this paper, we are especially thankful to Koray Kavukcuoglu, Volodymyr Mnih, Jimmy Ba, Yaroslav Bulatov, Greg Wayne, Andrei Rusu and Shakir Mohamed. | 1502.04623#31 |
1502.04623 | 32 | 2009). CIFAR-10 is very diverse, and with only 50,000 training examples it is very difficult to generate realistic-
# References
Ba, Jimmy, Mnih, Volodymyr, and Kavukcuoglu, Koray. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014.
Dayan, Peter, Hinton, Geoffrey E, Neal, Radford M, and Zemel, Richard S. The Helmholtz machine. Neural Computation, 7(5):889-904, 1995.
Larochelle, Hugo and Murray, Iain. The neural autoregressive distribution estimator. Journal of Machine Learning Research, 15:29-37, 2011.
LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Denil, Misha, Bazzani, Loris, Larochelle, Hugo, and de Freitas, Nando. Learning where to attend with deep architectures for image tracking. Neural Computation, 24(8):2151-2184, 2012. | 1502.04623#32 |
1502.04623 | 33 | Mnih, Andriy and Gregor, Karol. Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning, 2014.
Gers, Felix A, Schmidhuber, Jürgen, and Cummins, Fred. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451-2471, 2000.
Mnih, Volodymyr, Heess, Nicolas, Graves, Alex, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204-2212, 2014.
Goodfellow, Ian J., Bulatov, Yaroslav, Ibarz, Julian, Arnoud, Sacha, and Shet, Vinay. Multi-digit number recognition from street view imagery using deep convolutional neural networks. arXiv preprint arXiv:1312.6082, 2013.
Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
Murray, Iain and Salakhutdinov, Ruslan. Evaluating probabilities under high-dimensional latent variable models. In Advances in Neural Information Processing Systems, pp. 1137-1144, 2009. | 1502.04623#33 |
1502.04623 | 34 | Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. 2011.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Gregor, Karol, Danihelka, Ivo, Mnih, Andriy, Blundell, Charles, and Wierstra, Daan. Deep autoregressive networks. In Proceedings of the 31st International Conference on Machine Learning, 2014.
Raiko, Tapani, Li, Yao, Cho, Kyunghyun, and Bengio, Yoshua. Iterative neural autoregressive distribution estimator NADE-k. In Advances in Neural Information Processing Systems, pp. 325-333, 2014.
Ranzato, Marc'Aurelio. On learning where to look. arXiv preprint arXiv:1405.5488, 2014.
Hinton, Geoffrey E and Salakhutdinov, Ruslan R. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006. | 1502.04623#34 |
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Rezende, Danilo J, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, pp. 1278-1286, 2014.
Salakhutdinov, Ruslan and Hinton, Geoffrey E. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pp. 448-455, 2009.
Kingma, Diederik P and Welling, Max. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.
Salakhutdinov, Ruslan and Murray, Iain. On the quantitative analysis of Deep Belief Networks. In Proceedings of the 25th Annual International Conference on Machine Learning, pp. 872-879. Omnipress, 2008.
Krizhevsky, Alex. Learning multiple layers of features from tiny images. 2009. | 1502.04623#35 |
1502.04623 | 36 | Salimans, Tim, Kingma, Diederik P, and Welling, Max. Markov chain Monte Carlo and variational inference: Bridging the gap. arXiv preprint arXiv:1410.6460, 2014.
Larochelle, Hugo and Hinton, Geoffrey E. Learning to combine foveal glimpses with a third-order Boltzmann machine. In Advances in Neural Information Processing Systems, pp. 1243-1251, 2010.
Sermanet, Pierre, Frome, Andrea, and Real, Esteban. Attention for fine-grained categorization. arXiv preprint arXiv:1412.7054, 2014.
Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc VV. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.
Tang, Yichuan, Srivastava, Nitish, and Salakhutdinov, Ruslan. Learning generative models with visual attention. arXiv preprint arXiv:1312.6110, 2013. | 1502.04623#36 |
1502.04623 | 37 | Tieleman, Tijmen. Optimizing Neural Networks that Generate Images. PhD thesis, University of Toronto, 2014.
Uria, Benigno, Murray, Iain, and Larochelle, Hugo. A deep and tractable density estimator. In Proceedings of the 31st International Conference on Machine Learning, pp. 467-475, 2014.
Zheng, Yin, Zemel, Richard S, Zhang, Yu-Jin, and Larochelle, Hugo. A neural autoregressive approach to attention-based recognition. International Journal of Computer Vision, pp. 1-13, 2014. | 1502.04623#37 |
1502.03167 | 1 | Sergey Ioffe Google Inc., sioffe@google.com
Christian Szegedy Google Inc., szegedy@google.com
# Abstract
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters. | 1502.03167#1 | Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters. | http://arxiv.org/pdf/1502.03167 | Sergey Ioffe, Christian Szegedy | cs.LG | null | null | cs.LG | 20150211 | 20150302 | [{"id": "1502.03167"}] |
1502.03167 | 2 | Using mini-batches of examples, as opposed to one example at a time, is helpful in several ways. First, the gradient of the loss over a mini-batch is an estimate of the gradient over the training set, whose quality improves as the batch size increases. Second, computation over a batch can be much more efficient than m computations for individual examples, due to the parallelism afforded by the modern computing platforms.
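As a concrete illustration of the mini-batch gradient estimate described above, the step below averages per-example gradients over a batch of size m and applies a single update; `grad_fn` and the learning rate are illustrative placeholders, not anything from the paper.

```python
import numpy as np

def sgd_minibatch_step(theta, grad_fn, batch, lr=0.01):
    """One SGD step using the mini-batch gradient (1/m) * sum_i d loss(x_i) / d theta."""
    m = len(batch)
    grad = sum(grad_fn(x, theta) for x in batch) / m  # estimate of the full-data gradient
    return theta - lr * grad
```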
While stochastic gradient is simple and effective, it requires careful tuning of the model hyper-parameters, specifically the learning rate used in optimization, as well as the initial values for the model parameters. The training is complicated by the fact that the inputs to each layer are affected by the parameters of all preceding layers, so that small changes to the network parameters amplify as the network becomes deeper.
The change in the distributions of layers' inputs presents a problem because the layers need to continuously adapt to the new distribution. When the input distribution to a learning system changes, it is said to experience covariate shift (Shimodaira, 2000). This is typically handled via domain adaptation (Jiang, 2008). However, the notion of covariate shift can be extended beyond the learning system as a whole, to apply to its parts, such as a sub-network or a layer. Consider a network computing
# 1 Introduction | 1502.03167#2 |
1502.03167 | 3 | Deep learning has dramatically advanced the state of the art in vision, speech, and many other areas. Stochastic gradient descent (SGD) has proved to be an effective way of training deep networks, and SGD variants such as momentum (Sutskever et al., 2013) and Adagrad (Duchi et al., 2011) have been used to achieve state of the art performance. SGD optimizes the parameters Θ of the network, so as to minimize the loss
Θ = arg min_Θ (1/N) Σ_{i=1}^{N} ℓ(x_i, Θ)
ℓ = F2(F1(u, Θ1), Θ2)
where F1 and F2 are arbitrary transformations, and the parameters Θ1, Θ2 are to be learned so as to minimize the loss ℓ. Learning Θ2 can be viewed as if the inputs x = F1(u, Θ1) are fed into the sub-network
ℓ = F2(x, Θ2).
For example, a gradient descent step | 1502.03167#3 |
1502.03167 | 5 | Θ2 ← Θ2 − (α/m) Σ_{i=1}^{m} ∂F2(x_i, Θ2)/∂Θ2
(for batch size m and learning rate α) is exactly equivalent to that for a stand-alone network F2 with input x. Therefore, the input distribution properties that make training more efficient, such as having the same distribution between the training and test data, apply to training the sub-network as well. As such it is advantageous for the distribution of x to remain fixed over time. Then, Θ2 does not have to readjust to compensate for the change in the distribution of x. | 1502.03167#5 |
1502.03167 | 6 | Fixed distribution of inputs to a sub-network would have positive consequences for the layers outside the sub-network, as well. Consider a layer with a sigmoid activation function z = g(W u + b) where u is the layer input, the weight matrix W and bias vector b are the layer parameters to be learned, and g(x) = 1/(1 + exp(−x)). As |x| increases, g′(x) tends to zero. This means that for all dimensions of x = W u + b except those with small absolute values, the gradient flowing down to u will vanish and the model will train slowly. However, since x is affected by W, b and the parameters of all the layers below, changes to those parameters during training will likely move many dimensions of x into the saturated regime of the nonlinearity and slow down the convergence. This effect is amplified as the network depth increases. In practice, the saturation problem and the resulting vanishing gradients are usually addressed by using Rectified Linear Units (Nair & Hinton, 2010) ReLU(x) = max(x, 0), careful initialization (Bengio & Glorot, 2010; Saxe et al., 2013), and small learning | 1502.03167#6 |
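The vanishing-gradient behaviour described above is easy to check numerically: the sigmoid's derivative g′(x) = g(x)(1 − g(x)) collapses toward zero as |x| grows. The snippet below is a small illustration, not anything from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.0, 2.0, 5.0, 10.0])
grad = sigmoid(x) * (1.0 - sigmoid(x))  # g'(x) for the sigmoid nonlinearity
print(grad)  # ~[0.25, 0.105, 0.0066, 4.5e-05]: saturated units pass almost no gradient
```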
1502.03167 | 8 | We refer to the change in the distributions of internal nodes of a deep network, in the course of training, as Internal Covariate Shift. Eliminating it offers a promise of faster training. We propose a new mechanism, which we call Batch Normalization, that takes a step towards reducing internal covariate shift, and in doing so dramatically accelerates the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows us to use much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for Dropout (Srivastava et al., 2014). Finally, Batch Normalization makes it possible to use saturating nonlinearities by preventing the network from getting stuck in the saturated modes. | 1502.03167#8 |
1502.03167 | 9 | In Sec. 4.2, we apply Batch Normalization to the best-performing ImageNet classification network, and show that we can match its performance using only 7% of the training steps, and can further exceed its accuracy by a substantial margin. Using an ensemble of such networks trained with Batch Normalization, we achieve the top-5 error rate that improves upon the best known results on ImageNet classification.
# 2 Towards Reducing Internal Covariate Shift | 1502.03167#9 |
1502.03167 | 10 | We define Internal Covariate Shift as the change in the distribution of network activations due to the change in network parameters during training. To improve the training, we seek to reduce the internal covariate shift. By fixing the distribution of the layer inputs x as the training progresses, we expect to improve the training speed. It has been long known (LeCun et al., 1998b; Wiesler & Ney, 2011) that the network training converges faster if its inputs are whitened, i.e., linearly transformed to have zero means and unit variances, and decorrelated. As each layer observes the inputs produced by the layers below, it would be advantageous to achieve the same whitening of the inputs of each layer. By whitening the inputs to each layer, we would take a step towards achieving the fixed distributions of inputs that would remove the ill effects of the internal covariate shift. | 1502.03167#10 |
1502.03167 | 11 | We could consider whitening activations at every training step or at some interval, either by modifying the network directly or by changing the parameters of the optimization algorithm to depend on the network activation values (Wiesler et al., 2014; Raiko et al., 2012; Povey et al., 2014; Desjardins & Kavukcuoglu). However, if these modifications are interspersed with the optimization steps, then the gradient descent step may attempt to update the parameters in a way that requires the normalization to be updated, which reduces the effect of the gradient step. For example, consider a layer with the input u that adds the learned bias b, and normalizes the result by subtracting the mean of the activation computed over the training data: x̂ = x − E[x], where x = u + b, X = {x_1...N} is the set of values of x over the training set, and E[x] = (1/N) Σ_{i=1}^{N} x_i. If a gradient descent step ignores the dependence of E[x] on b, then it will update b ← b + Δb, where Δb ∝ −∂ℓ/∂x̂. Then u + (b + Δb) − E[u + (b + Δb)] = u + b − E[u + b]. | 1502.03167#11 |
1502.03167 | 12 | Thus, the combination of the update to b and subsequent change in normalization led to no change in the output of the layer nor, consequently, the loss. As the training continues, b will grow indefinitely while the loss remains fixed. This problem can get worse if the normalization not only centers but also scales the activations. We have observed this empirically in initial experiments, where the model blows up when the normalization parameters are computed outside the gradient descent step. | 1502.03167#12 |
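A tiny numeric illustration of the effect described above (not from the paper): because (u + b) − E[u + b] does not depend on b, an update that ignores the dependence of E[x] on b changes b without changing the normalized output, so b can drift without bound.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=128)               # a fixed layer input
b, lr = 0.0, 0.1

for step in range(3):
    x_hat = (u + b) - (u + b).mean()   # normalization applied outside the gradient step
    b += lr * 1.0                      # the apparent gradient keeps pushing b upward ...
    same = np.allclose(x_hat, (u + b) - (u + b).mean())
    print(step, round(b, 2), same)     # ... yet the normalized output never changes (True)
```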
1502.03167 | 13 | The issue with the above approach is that the gradient descent optimization does not take into account the fact that the normalization takes place. To address this issue, we would like to ensure that, for any parameter values, the network always produces activations with the desired distribution. Doing so would allow the gradient of the loss with respect to the model parameters to account for the normalization, and for its dependence on the model parameters Θ. Let again x be a layer input, treated as a vector, and X be the set of these inputs over the training data set. The normalization can then be written as a transformation
x̂ = Norm(x, X)
which depends not only on the given training example x but on all examples X, each of which depends on Θ if x is generated by another layer. For backpropagation, we would need to compute the Jacobians
∂Norm(x, X)/∂x and ∂Norm(x, X)/∂X ; | 1502.03167#13 |
1502.03167 | 14 | ignoring the latter term would lead to the explosion described above. Within this framework, whitening the layer inputs is expensive, as it requires computing the covariance matrix Cov[x] = E_{x∈X}[x x^T] − E[x] E[x]^T and its inverse square root, to produce the whitened activations Cov[x]^{−1/2} (x − E[x]), as well as the derivatives of these transforms for backpropagation. This motivates us to seek an alternative that performs input normalization in a way that is differentiable and does not require the analysis of the entire training set after every parameter update.
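Whitening as written above requires the covariance matrix and its inverse square root; a small sketch of that computation over a matrix of activations (one row per example) makes the cost concrete. This is purely illustrative, and the eps term is an assumption added for numerical stability.

```python
import numpy as np

def whiten(X, eps=1e-5):
    """Compute Cov[x]^(-1/2) (x - E[x]) for activations X of shape (N, d)."""
    Xc = X - X.mean(axis=0)                # subtract E[x]
    cov = Xc.T @ Xc / X.shape[0]           # Cov[x] = E[x x^T] - E[x] E[x]^T
    eigval, eigvec = np.linalg.eigh(cov)   # symmetric eigendecomposition
    inv_sqrt = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return Xc @ inv_sqrt                   # whitened activations
```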
Some of the previous approaches (e.g. (Lyu & Simoncelli, 2008)) use statistics computed over a single training example, or, in the case of image networks, over different feature maps at a given location. However, this changes the representation ability of a network by discarding the absolute scale of activations. We want to preserve the information in the network, by normalizing the activations in a training example relative to the statistics of the entire training data.
# 3 Normalization via Mini-Batch Statistics | 1502.03167#14 |
1502.03167 | 15 | # 3 Normalization via Mini-Batch Statistics
Since the full whitening of each layerâs inputs is costly and not everywhere differentiable, we make two neces- sary simpliï¬cations. The ï¬rst is that instead of whitening the features in layer inputs and outputs jointly, we will normalize each scalar feature independently, by making it have the mean of zero and the variance of 1. For a layer with d-dimensional input x = (x(1) . . . x(d)), we will nor- malize each dimension
x̂(k) = (x(k) − E[x(k)]) / √Var[x(k)]
1502.03167 | 16 | where the expectation and variance are computed over the training data set. As shown in (LeCun et al., 1998b), such normalization speeds up convergence, even when the features are not decorrelated.
Note that simply normalizing each input of a layer may change what the layer can represent. For instance, normalizing the inputs of a sigmoid would constrain them to the linear regime of the nonlinearity. To address this, we make sure that the transformation inserted in the network can represent the identity transform. To accomplish this,
we introduce, for each activation x(k), a pair of parameters γ(k), β(k), which scale and shift the normalized value:
y(k) = γ(k) x̂(k) + β(k).
These parameters are learned along with the original model parameters, and restore the representation power of the network. Indeed, by setting γ(k) = √Var[x(k)] and β(k) = E[x(k)], we could recover the original activations, if that were the optimal thing to do.
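As a small illustration of this identity-recovery property (our own NumPy sketch, not code from the paper; the array shapes and seed are arbitrary), choosing γ = √Var[x] and β = E[x] undoes the per-dimension normalization exactly when ǫ is neglected:

```python
import numpy as np

# Sketch: per-dimension normalization followed by the "identity" choice of
# gamma and beta recovers the original activations (epsilon neglected).
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(1000, 4))   # 1000 examples, 4 features

mean = x.mean(axis=0)
var = x.var(axis=0)
x_hat = (x - mean) / np.sqrt(var)                    # normalized activations

gamma, beta = np.sqrt(var), mean                     # gamma = sqrt(Var[x]), beta = E[x]
y = gamma * x_hat + beta                             # scale and shift

assert np.allclose(y, x)                             # original activations recovered
```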
1502.03167 | 17 | In the batch setting where each training step is based on the entire training set, we would use the whole set to normalize activations. However, this is impractical when using stochastic optimization. Therefore, we make the second simplification: since we use mini-batches in stochastic gradient training, each mini-batch produces estimates of the mean and variance of each activation. This way, the statistics used for normalization can fully participate in the gradient backpropagation. Note that the use of mini-batches is enabled by computation of per-dimension variances rather than joint covariances; in the joint case, regularization would be required since the mini-batch size is likely to be smaller than the number of activations being whitened, resulting in singular covariance matrices.
Consider a mini-batch B of size m. Since the normalization is applied to each activation independently, let us focus on a particular activation x(k) and omit k for clarity. We have m values of this activation in the mini-batch, B = {x1...m}. Let the normalized values be x̂1...m, and their linear transformations be y1...m. We refer to the transform

BNγ,β : x1...m → y1...m
1502.03167 | 18 | BNγ,β : x1...m → y1...m
as the Batch Normalizing Transform. We present the BN Transform in Algorithm 1. In the algorithm, ǫ is a constant added to the mini-batch variance for numerical stability.
Input: Values of x over a mini-batch: B = {x1...m}; Parameters to be learned: γ, β
Output: {yi = BNγ,β(xi)}

  µB ← (1/m) Σi=1..m xi                 // mini-batch mean
  σ²B ← (1/m) Σi=1..m (xi − µB)²        // mini-batch variance
  x̂i ← (xi − µB) / √(σ²B + ǫ)           // normalize
  yi ← γ x̂i + β ≡ BNγ,β(xi)             // scale and shift
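For concreteness, here is a NumPy sketch of the BN transform above in training mode (our own code; the function name, the (m, d) batch layout and the value of ǫ are assumptions of this illustration):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """BN transform of Algorithm 1 for a mini-batch x of shape (m, d)."""
    mu = x.mean(axis=0)                       # mini-batch mean, per activation
    var = x.var(axis=0)                       # mini-batch variance, per activation
    x_hat = (x - mu) / np.sqrt(var + eps)     # normalize
    y = gamma * x_hat + beta                  # scale and shift
    cache = (x_hat, mu, var, gamma, eps)      # saved for the backward pass
    return y, cache

# Example: 64 examples of a 100-dimensional activation.
x = np.random.randn(64, 100) * 5.0 + 2.0
y, _ = batch_norm_forward(x, gamma=np.ones(100), beta=np.zeros(100))
print(y.mean(axis=0)[:3], y.var(axis=0)[:3])  # roughly 0 and 1 per dimension
```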
1502.03167 | 20 | indicate that the parameters γ and β are to be learned, but it should be noted that the BN transform does not independently process the activation in each training example. Rather, BNγ,β(x) depends both on the training example and the other examples in the mini-batch. The scaled and shifted values y are passed to other network layers. The normalized activations x̂ are internal to our transformation, but their presence is crucial. The distribution of values of any x̂ has the expected value of 0 and the variance of 1, as long as the elements of each mini-batch are sampled from the same distribution, and if we neglect ǫ. This can be seen by observing that Σi x̂i = 0 and (1/m) Σi x̂i² = 1, and taking expectations. Each normalized activation x̂(k) can be viewed as an input to a sub-network composed of the linear transform y(k) = γ(k) x̂(k) + β(k), followed by the other processing done by the original network. These sub-network inputs all have fixed means and variances, and although the joint distribution of these normalized x̂(k) can change over the course of training, we expect that the introduction of normalized inputs accelerates the training of the sub-network and, consequently, the network as a whole.
1502.03167 | 21 | During training we need to backpropagate the gradient of the loss ℓ through this transformation, as well as compute the gradients with respect to the parameters of the BN transform. We use the chain rule, as follows (before simplification):

∂ℓ/∂x̂i = ∂ℓ/∂yi · γ
∂ℓ/∂σ²B = Σi=1..m ∂ℓ/∂x̂i · (xi − µB) · (−1/2) (σ²B + ǫ)^(−3/2)
∂ℓ/∂µB = (Σi=1..m ∂ℓ/∂x̂i · (−1/√(σ²B + ǫ))) + ∂ℓ/∂σ²B · (Σi=1..m −2(xi − µB)) / m
∂ℓ/∂xi = ∂ℓ/∂x̂i · 1/√(σ²B + ǫ) + ∂ℓ/∂σ²B · 2(xi − µB)/m + ∂ℓ/∂µB · 1/m
∂ℓ/∂γ = Σi=1..m ∂ℓ/∂yi · x̂i
∂ℓ/∂β = Σi=1..m ∂ℓ/∂yi
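A direct NumPy transcription of these gradients might look as follows (our own sketch; `dout` stands for ∂ℓ/∂y over the mini-batch, and `cache` holds the quantities saved by a forward pass like the one sketched after Algorithm 1):

```python
import numpy as np

def batch_norm_backward(dout, cache):
    """Gradients of the loss w.r.t. x, gamma and beta for one BN layer."""
    x_hat, mu, var, gamma, eps = cache        # mu is unused here; kept for clarity
    m = dout.shape[0]
    std_inv = 1.0 / np.sqrt(var + eps)
    x_mu = x_hat / std_inv                    # equals x - mu

    dx_hat = dout * gamma                                            # dl/dx_hat
    dvar = np.sum(dx_hat * x_mu, axis=0) * -0.5 * std_inv**3         # dl/dvar
    dmu = np.sum(dx_hat * -std_inv, axis=0) + dvar * np.mean(-2.0 * x_mu, axis=0)
    dx = dx_hat * std_inv + dvar * 2.0 * x_mu / m + dmu / m          # dl/dx_i
    dgamma = np.sum(dout * x_hat, axis=0)                            # dl/dgamma
    dbeta = np.sum(dout, axis=0)                                     # dl/dbeta
    return dx, dgamma, dbeta
```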
1502.03167 | 22 | Thus, the BN transform is a differentiable transformation that introduces normalized activations into the network. This ensures that as the model is training, layers can continue learning on input distributions that exhibit less internal covariate shift, thus accelerating the training. Furthermore, the learned affine transform applied to these normalized activations allows the BN transform to represent the identity transformation and preserves the network capacity.
# 3.1 Training and Inference with Batch-Normalized Networks
To Batch-Normalize a network, we specify a subset of activations and insert the BN transform for each of them, according to Alg. 1. Any layer that previously received x as the input, now receives BN(x). A model employing Batch Normalization can be trained using batch gradient descent, or Stochastic Gradient Descent with a mini-batch size m > 1, or with any of its variants such as Adagrad
(Duchi et al., 2011). The normalization of activations that depends on the mini-batch allows efficient training, but is neither necessary nor desirable during inference; we want the output to depend only on the input, deterministically. For this, once the network has been trained, we use the normalization
x̂ = (x − E[x]) / √(Var[x] + ǫ)
1502.03167 | 23 | using the population, rather than mini-batch, statistics. Neglecting ǫ, these normalized activations have the same mean 0 and variance 1 as during training. We use the unbiased variance estimate Var[x] = m/(m−1) · EB[σ²B], where the expectation is over training mini-batches of size m and σ²B are their sample variances. Using moving averages instead, we can track the accuracy of a model as it trains. Since the means and variances are fixed during inference, the normalization is simply a linear transform applied to each activation. It may further be composed with the scaling by γ and shift by β, to yield a single linear transform that replaces BN(x). Algorithm 2 summarizes the procedure for training batch-normalized networks.
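A sketch of this inference-time treatment (our own code, with hypothetical helper names): population moments are estimated by averaging mini-batch statistics with the unbiased variance correction, and the frozen BN transform is folded into a single per-activation scale and shift.

```python
import numpy as np

def population_stats(batches):
    """Estimate E[x] and unbiased Var[x] from a list of (m, d) mini-batches."""
    m = batches[0].shape[0]
    mus = np.stack([b.mean(axis=0) for b in batches])        # per-batch mu_B
    sig2 = np.stack([b.var(axis=0) for b in batches])        # per-batch sigma^2_B
    e_x = mus.mean(axis=0)                                   # E[x] = E_B[mu_B]
    var_x = m / (m - 1.0) * sig2.mean(axis=0)                # m/(m-1) * E_B[sigma^2_B]
    return e_x, var_x

def fold_bn(gamma, beta, e_x, var_x, eps=1e-5):
    """Return (scale, shift) so that the frozen BN(x) equals scale * x + shift."""
    scale = gamma / np.sqrt(var_x + eps)
    shift = beta - scale * e_x
    return scale, shift

# Example: fold the statistics of three 32-example mini-batches of a 10-d activation.
rng = np.random.default_rng(0)
batches = [rng.normal(2.0, 3.0, size=(32, 10)) for _ in range(3)]
scale, shift = fold_bn(np.ones(10), np.zeros(10), *population_stats(batches))
```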
1502.03167 | 24 | Input: Network N with trainable parameters Θ; subset of activations {x(k)}, k = 1...K
Output: Batch-normalized network for inference, N_BN^inf
1: N_BN^tr ← N   // Training BN network
2: for k = 1...K do
3:   Add the transformation y(k) = BNγ(k),β(k)(x(k)) to N_BN^tr (Alg. 1)
4:   Modify each layer in N_BN^tr with input x(k) to take y(k) instead
5: end for
6: Train N_BN^tr to optimize the parameters Θ ∪ {γ(k), β(k), k = 1...K}
7: N_BN^inf ← N_BN^tr   // Inference BN network with frozen parameters
8: for k = 1...K do
9:   // For clarity, x ≡ x(k), γ ≡ γ(k), µB ≡ µB(k), etc.
10:  Process multiple training mini-batches B, each of size m, and average over them:
       E[x] ← EB[µB]
       Var[x] ← m/(m−1) · EB[σ²B]
11:  In N_BN^inf, replace the transform y = BNγ,β(x) with
       y = γ/√(Var[x] + ǫ) · x + (β − γ E[x]/√(Var[x] + ǫ))
12: end for
1502.03167 | 25 | # 12: end for
Algorithm 2: Training a Batch-Normalized Network
# 3.2 Batch-Normalized Convolutional Networks
Batch Normalization can be applied to any set of activations in the network. Here, we focus on transforms
that consist of an affine transformation followed by an element-wise nonlinearity:
z = g(W u + b)
where W and b are learned parameters of the model, and g(·) is the nonlinearity such as sigmoid or ReLU. This formulation covers both fully-connected and convolutional layers. We add the BN transform immediately before the nonlinearity, by normalizing x = W u + b. We could have also normalized the layer inputs u, but since u is likely the output of another nonlinearity, the shape of its distribution is likely to change during training, and constraining its first and second moments would not eliminate the covariate shift. In contrast, W u + b is more likely to have a symmetric, non-sparse distribution, that is "more Gaussian" (Hyvärinen & Oja, 2000); normalizing it is likely to produce activations with a stable distribution.
1502.03167 | 26 | Note that, since we normalize W u+b, the bias b can be ignored since its effect will be canceled by the subsequent mean subtraction (the role of the bias is subsumed by β in Alg. 1). Thus, z = g(W u + b) is replaced with
z = g(BN(W u))
where the BN transform is applied independently to each dimension of x = W u, with a separate pair of learned parameters γ(k), β(k) per dimension.
For convolutional layers, we additionally want the normalization to obey the convolutional property – so that different elements of the same feature map, at different locations, are normalized in the same way. To achieve this, we jointly normalize all the activations in a mini-batch, over all locations. In Alg. 1, we let B be the set of all values in a feature map across both the elements of a mini-batch and spatial locations – so for a mini-batch of size m and feature maps of size p × q, we use the effective mini-batch of size m′ = |B| = m · pq. We learn a pair of parameters γ(k) and β(k) per feature map, rather than per activation. Alg. 2 is modified similarly, so that during inference the BN transform applies the same linear transformation to each activation in a given feature map.
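For concreteness, a NumPy sketch of this convolutional variant follows (our own illustration; the NCHW layout, the function name and ǫ are assumptions, not prescribed by the paper):

```python
import numpy as np

def batch_norm_conv(x, gamma, beta, eps=1e-5):
    """x: (N, C, H, W); gamma, beta: (C,). Statistics are shared over N, H, W."""
    mu = x.mean(axis=(0, 2, 3), keepdims=True)    # per-feature-map mean
    var = x.var(axis=(0, 2, 3), keepdims=True)    # per-feature-map variance
    x_hat = (x - mu) / np.sqrt(var + eps)         # effective mini-batch size: N*H*W
    return gamma[None, :, None, None] * x_hat + beta[None, :, None, None]

x = np.random.randn(8, 16, 12, 12)                # N=8, C=16, 12x12 feature maps
y = batch_norm_conv(x, np.ones(16), np.zeros(16))
print(y.mean(axis=(0, 2, 3))[:3])                 # roughly 0 per feature map
```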
1502.03167 | 27 | # 3.3 Batch Normalization enables higher learning rates
In traditional deep networks, a too-high learning rate may result in gradients that explode or vanish, as well as in getting stuck in poor local minima. Batch Normalization helps address these issues. By normalizing activations throughout the network, it prevents small changes to the parameters from amplifying into larger and suboptimal changes in activations and gradients; for instance, it prevents the training from getting stuck in the saturated regimes of nonlinearities.
Batch Normalization also makes training more resilient to the parameter scale. Normally, large learning rates may increase the scale of layer parameters, which then amplify
the gradient during backpropagation and lead to the model explosion. However, with Batch Normalization, backpropagation through a layer is unaffected by the scale of its parameters. Indeed, for a scalar a,
BN(W u) = BN((aW )u)
and we can show that
∂BN((aW)u)/∂u = ∂BN(Wu)/∂u
∂BN((aW)u)/∂(aW) = (1/a) · ∂BN(Wu)/∂W
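A quick numerical check of this scale-invariance (our own sketch, with γ = 1 and β = 0 and the mini-batch definition of BN):

```python
import numpy as np

def bn(x, eps=1e-12):
    # Mini-batch normalization per output dimension (gamma = 1, beta = 0).
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

rng = np.random.default_rng(1)
u = rng.normal(size=(256, 32))        # mini-batch of layer inputs
W = rng.normal(size=(32, 64))         # layer weights
a = 10.0                              # arbitrary scalar rescaling of the weights

assert np.allclose(bn(u @ W), bn(u @ (a * W)), atol=1e-6)
```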
The scale does not affect the layer Jacobian nor, consequently, the gradient propagation. Moreover, larger weights lead to smaller gradients, and Batch Normalization will stabilize the parameter growth.
We further conjecture that Batch Normalization may lead the layer Jacobians to have singular values close to 1, which is known to be beneficial for training (Saxe et al., 2013). Consider two consecutive layers with normalized inputs, and the transformation between these normalized vectors: ẑ = F(x̂). If we assume that x̂ and ẑ are Gaussian and uncorrelated, and that F(x̂) ≈ J x̂ is a linear transformation for the given model parameters, then both x̂ and ẑ have unit covariances, and I = Cov[ẑ] = J Cov[x̂] Jᵀ = J Jᵀ. Thus, J Jᵀ = I, and so all singular values of J are equal to 1, which preserves the gradient magnitudes during backpropagation. In reality, the transformation is not linear, and the normalized values are not guaranteed to be Gaussian nor independent, but we nevertheless expect Batch Normalization to help make gradient propagation better behaved. The precise effect of Batch Normalization on gradient propagation remains an area of further study.
# 3.4 Batch Normalization regularizes the model
1502.03167 | 29 | # 3.4 Batch Normalization regularizes the model
When training with Batch Normalization, a training example is seen in conjunction with other examples in the mini-batch, and the training network no longer produces deterministic values for a given training example. In our experiments, we found this effect to be advantageous to the generalization of the network. Whereas Dropout (Srivastava et al., 2014) is typically used to reduce overfitting, in a batch-normalized network we found that it can be either removed or reduced in strength.
# 4 Experiments
# 4.1 Activations over time
To verify the effects of internal covariate shift on training, and the ability of Batch Normalization to combat it, we considered the problem of predicting the digit class on the MNIST dataset (LeCun et al., 1998a). We used a very simple network, with a 28x28 binary image as input, and
(Figure 1 panels: (a) test accuracy over 10K–50K training steps, Without BN vs. With BN; (b) Without BN; (c) With BN — plot data not reproduced.)
1502.03167 | 30 | Figure 1: (a) The test accuracy of the MNIST network trained with and without Batch Normalization, vs. the number of training steps. Batch Normalization helps the network train faster and achieve higher accuracy. (b, c) The evolution of input distributions to a typical sigmoid, over the course of training, shown as {15, 50, 85}th percentiles. Batch Normalization makes the distribution more stable and reduces the internal covariate shift.
3 fully-connected hidden layers with 100 activations each. Each hidden layer computes y = g(W u + b) with sigmoid nonlinearity, and the weights W initialized to small random Gaussian values. The last hidden layer is followed by a fully-connected layer with 10 activations (one per class) and cross-entropy loss. We trained the network for 50000 steps, with 60 examples per mini-batch. We added Batch Normalization to each hidden layer of the network, as in Sec. 3.1. We were interested in the comparison between the baseline and batch-normalized networks, rather than achieving the state of the art performance on MNIST (which the described architecture does not).
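A forward-pass sketch of this setup (our own illustration, not the authors' code); biases are omitted in the hidden layers since, as Sec. 3.2 notes, they are subsumed by β:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bn(x, gamma, beta, eps=1e-5):
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_hat + beta

# Three 100-unit sigmoid hidden layers with BN, then a 10-way output layer.
sizes = [28 * 28, 100, 100, 100, 10]
weights = [rng.normal(scale=0.01, size=(n_in, n_out))
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]
gammas = [np.ones(n) for n in sizes[1:-1]]
betas = [np.zeros(n) for n in sizes[1:-1]]

def forward(x):
    h = x
    for W, g, b in zip(weights[:-1], gammas, betas):
        h = sigmoid(bn(h @ W, g, b))      # BN inserted before each sigmoid
    return h @ weights[-1]                # 10 activations, fed to cross-entropy loss

batch = rng.integers(0, 2, size=(60, 28 * 28)).astype(float)  # 60 binary "images"
print(forward(batch).shape)                                    # (60, 10)
```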
1502.03167 | 31 | Figure 1(a) shows the fraction of correct predictions by the two networks on held-out test data, as training progresses. The batch-normalized network enjoys the higher test accuracy. To investigate why, we studied inputs to the sigmoid, in the original network N and the batch-normalized network N_BN^tr (Alg. 2), over the course of training. In Fig. 1(b,c) we show, for one typical activation from the last hidden layer of each network, how its distribution evolves. The distributions in the original network change significantly over time, both in their mean and the variance, which complicates the training of the subsequent layers. In contrast, the distributions in the batch-normalized network are much more stable as training progresses, which aids the training.
# 4.2 ImageNet classification
1502.03167 | 32 | # 4.2 ImageNet classification
We applied Batch Normalization to a new variant of the Inception network (Szegedy et al., 2014), trained on the ImageNet classification task (Russakovsky et al., 2014). The network has a large number of convolutional and pooling layers, with a softmax layer to predict the image class, out of 1000 possibilities. Convolutional layers use ReLU as the nonlinearity. The main difference to the network described in (Szegedy et al., 2014) is that the 5×5 convolutional layers are replaced by two consecutive layers of 3×3 convolutions with up to 128 filters. The network contains 13.6 × 10⁶ parameters, and, other than the top softmax layer, has no fully-connected layers. More
1502.03167 | 33 | details are given in the Appendix. We refer to this model as Inception in the rest of the text. The model was trained using a version of Stochastic Gradient Descent with momentum (Sutskever et al., 2013), using the mini-batch size of 32. The training was performed using a large-scale, distributed architecture (similar to (Dean et al., 2012)). All networks are evaluated as training progresses by computing the validation accuracy @1, i.e. the probability of predicting the correct label out of 1000 possibilities, on a held-out set, using a single crop per image.
In our experiments, we evaluated several modifications of Inception with Batch Normalization. In all cases, Batch Normalization was applied to the input of each nonlinearity, in a convolutional way, as described in section 3.2, while keeping the rest of the architecture constant.
# 4.2.1 Accelerating BN Networks
Simply adding Batch Normalization to a network does not take full advantage of our method. To do so, we further changed the network and its training parameters, as follows:
Increase learning rate. In a batch-normalized model, we have been able to achieve a training speedup from higher learning rates, with no ill side effects (Sec. 3.3).
1502.03167 | 34 | Increase learning rate. In a batch-normalized model, we have been able to achieve a training speedup from higher learning rates, with no ill side effects (Sec. 3.3).
Remove Dropout. As described in Sec. 3.4, Batch Normalization fulfills some of the same goals as Dropout. Removing Dropout from Modified BN-Inception speeds up training, without increasing overfitting.
Reduce the L2 weight regularization. While in Inception an L2 loss on the model parameters controls overfitting, in Modified BN-Inception the weight of this loss is reduced by a factor of 5. We find that this improves the accuracy on the held-out validation data.
Accelerate the learning rate decay. In training Inception, learning rate was decayed exponentially. Because our network trains faster than Inception, we lower the learning rate 6 times faster.
Remove Local Response Normalization. While Inception and other networks (Srivastava et al., 2014) benefit from it, we found that with Batch Normalization it is not necessary.
1502.03167 | 35 | Shuffle training examples more thoroughly. We enabled within-shard shuffling of the training data, which prevents the same examples from always appearing in a mini-batch together. This led to about 1% improvements in the validation accuracy, which is consistent with the view of Batch Normalization as a regularizer (Sec. 3.4): the randomization inherent in our method should be most beneficial when it affects an example differently each time it is seen.
Reduce the photometric distortions. Because batch-normalized networks train faster and observe each training example fewer times, we let the trainer focus on more "real" images by distorting them less.
Figure 2: Single crop validation accuracy of Inception and its batch-normalized variants, vs. the number of training steps.
# 4.2.2 Single-Network Classification
We evaluated the following networks, all trained on the LSVRC2012 training data, and tested on the validation data:
1502.03167 | 36 | # 4.2.2 Single-Network Classification
We evaluated the following networks, all trained on the LSVRC2012 training data, and tested on the validation data:
Inception: the network described at the beginning of Section 4.2, trained with the initial learning rate of 0.0015. BN-Baseline: Same as Inception with Batch Normalization before each nonlinearity.
BN-x5: Inception with Batch Normalization and the modifications in Sec. 4.2.1. The initial learning rate was increased by a factor of 5, to 0.0075. The same learning rate increase with original Inception caused the model parameters to reach machine infinity.
BN-x30: Like BN-x5, but with the initial learning rate 0.045 (30 times that of Inception).
1502.03167 | 37 | BN-x30: Like BN-x5, but with the initial learning rate 0.045 (30 times that of Inception).
BN-x5-Sigmoid: Like BN-x5, but with the sigmoid nonlinearity g(t) = 1/(1 + exp(−t)) instead of ReLU. We also attempted to train the original Inception with sigmoid, but the model remained at the accuracy equivalent to chance. In Figure 2, we show the validation accuracy of the networks, as a function of the number of training steps. Inception reached the accuracy of 72.2% after 31 × 10⁶ training steps. Figure 3 shows, for each network, the number of training steps required to reach the same 72.2% accuracy, as well as the maximum validation accuracy reached by the network and the number of steps to reach it.
1502.03167 | 38 | By only using Batch Normalization (BN-Baseline), we match the accuracy of Inception in less than half the number of training steps. By applying the modifications in Sec. 4.2.1, we significantly increase the training speed of the network. BN-x5 needs 14 times fewer steps than Inception to reach the 72.2% accuracy. Interestingly, increasing the learning rate further (BN-x30) causes the model to train somewhat slower initially, but allows it to reach a higher final accuracy. It reaches 74.8% after 6 × 10⁶ steps, i.e. 5 times fewer steps than required by Inception to reach 72.2%.
We also verified that the reduction in internal covariate shift allows deep networks with Batch Normalization
Model            Steps to 72.2%   Max accuracy
Inception        31.0 × 10⁶       72.2%
BN-Baseline      13.3 × 10⁶       72.7%
BN-x5             2.1 × 10⁶       73.0%
BN-x30            2.7 × 10⁶       74.8%
BN-x5-Sigmoid        –            69.8%
1502.03167 | 39 | Figure 3: For Inception and the batch-normalized variants, the number of training steps required to reach the maximum accuracy of Inception (72.2%), and the maximum accuracy achieved by the network.
to be trained when sigmoid is used as the nonlinearity, despite the well-known difficulty of training such networks. Indeed, BN-x5-Sigmoid achieves the accuracy of 69.8%. Without Batch Normalization, Inception with sigmoid never achieves better than 1/1000 accuracy.
# 4.2.3 Ensemble Classification
The current reported best results on the ImageNet Large Scale Visual Recognition Competition are reached by the Deep Image ensemble of traditional models (Wu et al., 2015) and the ensemble model of (He et al., 2015). The latter reports the top-5 error of 4.94%, as evaluated by the ILSVRC server. Here we report a top-5 validation error of 4.9%, and test error of 4.82% (according to the ILSVRC server). This improves upon the previous best result, and exceeds the estimated accuracy of human raters according to (Russakovsky et al., 2014).
For our ensemble, we used 6 networks. Each was based on BN-x30, modified via some of the following: increased initial weights in the convolutional layers; using Dropout (with the Dropout probability of 5% or 10%, versus 40% for the original Inception); and using non-convolutional, per-activation Batch Normalization with the last hidden layers of the model. Each network achieved its maximum accuracy after about 6 · 10^6 training steps. The ensemble prediction was based on the arithmetic average of class probabilities predicted by the constituent networks. The details of ensemble and multicrop inference are similar to (Szegedy et al., 2014).
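To make the averaging step concrete, here is a minimal NumPy sketch of ensemble prediction by arithmetic averaging of class probabilities; the shapes, the softmax over raw logits, and the toy inputs are assumptions for illustration, not the authors' pipeline (which additionally uses multicrop inference).

```python
import numpy as np

def softmax(logits):
    # logits: (num_classes,) -> class probabilities
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def ensemble_predict(per_network_logits):
    # per_network_logits: list of (num_classes,) arrays, one per constituent network
    probs = np.stack([softmax(l) for l in per_network_logits])  # (n_networks, num_classes)
    avg = probs.mean(axis=0)   # arithmetic average of class probabilities
    return int(np.argmax(avg)) # predicted class index

# toy usage: 6 networks, 1000 ImageNet classes
rng = np.random.default_rng(0)
logits = [rng.normal(size=1000) for _ in range(6)]
print(ensemble_predict(logits))
```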
We demonstrate in Fig. 4 that batch normalization allows us to set a new state of the art by a healthy margin on the ImageNet classification challenge benchmarks.

# 5 Conclusion
We have presented a novel mechanism for dramatically accelerating the training of deep networks. It is based on the premise that covariate shift, which is known to complicate the training of machine learning systems, also applies to sub-networks and layers, and removing it from internal activations of the network may aid in training. Our proposed method draws its power from normalizing activations, and from incorporating this normalization in the network architecture itself. This ensures that the normalization is appropriately handled by any optimization method that is being used to train the network. To enable stochastic optimization methods commonly used in deep network training, we perform the normalization for each mini-batch, and backpropagate the gradients through the normalization parameters. Batch Normalization adds only two extra parameters per activation, and in doing so preserves the representation ability of the network. We presented an algorithm for constructing, training, and performing inference with batch-normalized networks. The resulting networks can be trained with saturating nonlinearities, are more tolerant to increased training rates, and often do not require Dropout for regularization.

Figure 4: Batch-Normalized Inception comparison with previous state of the art on the provided validation set comprising 50000 images. *BN-Inception ensemble has reached 4.82% top-5 error on the 100000 images of the test set of ImageNet as reported by the test server. [Table comparing GoogLeNet ensemble, Deep Image low-res / high-res / ensemble, and BN-Inception single crop / multicrop / ensemble by image resolution, number of crops, number of models, and top-5 validation error.]
Merely adding Batch Normalization to a state-of-the-art image classification model yields a substantial speedup in training. By further increasing the learning rates, removing Dropout, and applying other modifications afforded by Batch Normalization, we reach the previous state of the art with only a small fraction of training steps, and then beat the state of the art in single-network image classification. Furthermore, by combining multiple models trained with Batch Normalization, we perform better than the best known system on ImageNet, by a significant margin.
Interestingly, our method bears similarity to the standardization layer of (Gülçehre & Bengio, 2013), though the two methods stem from very different goals, and perform different tasks. The goal of Batch Normalization is to achieve a stable distribution of activation values throughout training, and in our experiments we apply it before the nonlinearity since that is where matching the first and second moments is more likely to result in a stable distribution. On the contrary, (Gülçehre & Bengio, 2013) apply the standardization layer to the output of the nonlinearity, which results in sparser activations. In our large-scale image classification experiments, we have not observed the nonlinearity inputs to be sparse, neither with nor without Batch Normalization. Other notable differentiating characteristics of Batch Normalization include the learned scale and shift that allow the BN transform to represent identity (the standardization layer did not require this since it was followed by the learned linear transform that, conceptually, absorbs the necessary scale and shift), handling of convolutional layers, deterministic inference that does not depend on the mini-batch, and batch-normalizing each convolutional layer in the network.
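As a minimal sketch of the learned scale and shift mentioned here (my own illustration, not the paper's code), the mini-batch normalization is followed by the learned parameters gamma and beta, so the transform can represent the identity when gamma matches the batch standard deviation and beta the batch mean; the epsilon value and the (batch, features) layout are assumptions.

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    # x: (batch, features) mini-batch of activations for one layer
    mu = x.mean(axis=0)                    # mini-batch mean, per activation
    var = x.var(axis=0)                    # mini-batch variance, per activation
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized activations
    y = gamma * x_hat + beta               # learned scale and shift
    return y, mu, var

# toy usage: with gamma = 1, beta = 0 the outputs have roughly zero mean and unit variance
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 10))
gamma, beta = np.ones(10), np.zeros(10)
y, mu, var = batch_norm_train(x, gamma, beta)
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))
```

Setting gamma = sqrt(var + eps) and beta = mu would make the transform reduce to the identity, which is the representational point made above.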
In this work, we have not explored the full range of possibilities that Batch Normalization potentially enables. Our future work includes applications of our method to Recurrent Neural Networks (Pascanu et al., 2013), where the internal covariate shift and the vanishing or exploding gradients may be especially severe, and which would allow us to more thoroughly test the hypothesis that normalization improves gradient propagation (Sec. 3.3). We plan to investigate whether Batch Normalization can help with domain adaptation, in its traditional sense, i.e. whether the normalization performed by the network would allow it to more easily generalize to new data distributions, perhaps with just a recomputation of the population means and variances (Alg. 2). Finally, we believe that further theoretical analysis of the algorithm would allow still more improvements and applications.
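A rough sketch of the recomputation idea mentioned above, under the assumption of a simple (batch, features) layout: population means and variances are re-estimated on data from the new distribution and plugged into the deterministic inference transform while the learned gamma and beta are kept. The helper names are hypothetical and the pooled-variance estimate is a simplification of the paper's unbiased estimator.

```python
import numpy as np

def estimate_population_stats(batches):
    # batches: iterable of (batch, features) arrays drawn from the target data distribution
    mean_sum, var_sum, n = 0.0, 0.0, 0
    for b in batches:
        mean_sum = mean_sum + b.mean(axis=0) * len(b)
        var_sum = var_sum + b.var(axis=0) * len(b)   # average of per-mini-batch variances;
        n += len(b)                                   # the paper adds an m/(m-1) correction
    return mean_sum / n, var_sum / n

def batch_norm_infer(x, gamma, beta, mu_pop, var_pop, eps=1e-5):
    # deterministic transform: does not depend on the current mini-batch
    return gamma * (x - mu_pop) / np.sqrt(var_pop + eps) + beta
```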
# References
Bengio, Yoshua and Glorot, Xavier. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of AISTATS 2010, volume 9, pp. 249–256, May 2010.
Dean, Jeffrey, Corrado, Greg S., Monga, Rajat, Chen, Kai, Devin, Matthieu, Le, Quoc V., Mao, Mark Z., Ranzato, Marc'Aurelio, Senior, Andrew, Tucker, Paul, Yang, Ke, and Ng, Andrew Y. Large scale distributed deep networks. In NIPS, 2012.

Desjardins, Guillaume and Kavukcuoglu, Koray. Natural neural networks. (unpublished).

Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159, July 2011. ISSN 1532-4435.

Gülçehre, Çağlar and Bengio, Yoshua. Knowledge matters: Importance of prior information for optimization. CoRR, abs/1301.4083, 2013.

He, K., Zhang, X., Ren, S., and Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. ArXiv e-prints, February 2015.
Hyvärinen, A. and Oja, E. Independent component analysis: Algorithms and applications. Neural Netw., 13(4-5):411–430, May 2000.

Jiang, Jing. A literature survey on domain adaptation of statistical classifiers, 2008.

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998a.

LeCun, Y., Bottou, L., Orr, G., and Muller, K. Efficient backprop. In Orr, G. and Muller, K. (eds.), Neural Networks: Tricks of the trade. Springer, 1998b.

Lyu, S and Simoncelli, E P. Nonlinear image representation using divisive normalization. In Proc. Computer Vision and Pattern Recognition, pp. 1–8. IEEE Computer Society, Jun 23-28 2008. doi: 10.1109/CVPR.2008.4587821.

Nair, Vinod and Hinton, Geoffrey E. Rectified linear units improve restricted Boltzmann machines. In ICML, pp. 807–814. Omnipress, 2010.
Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pp. 1310–1318, 2013.

Povey, Daniel, Zhang, Xiaohui, and Khudanpur, Sanjeev. Parallel training of deep neural networks with natural gradient and parameter averaging. CoRR, abs/1410.7455, 2014.

Raiko, Tapani, Valpola, Harri, and LeCun, Yann. Deep learning made easier by linear transformations in perceptrons. In International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 924–932, 2012.

Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge, 2014.
Saxe, Andrew M., McClelland, James L., and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. CoRR, abs/1312.6120, 2013.
Shimodaira, Hidetoshi. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, October 2000.

Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958, January 2014.

Sutskever, Ilya, Martens, James, Dahl, George E., and Hinton, Geoffrey E. On the importance of initialization and momentum in deep learning. In ICML (3), volume 28 of JMLR Proceedings, pp. 1139–1147. JMLR.org, 2013.
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.

Wiesler, Simon and Ney, Hermann. A convergence analysis of log-linear training. In Shawe-Taylor, J., Zemel, R.S., Bartlett, P., Pereira, F.C.N., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 24, pp. 657–665, Granada, Spain, December 2011.

Wiesler, Simon, Richard, Alexander, Schlüter, Ralf, and Ney, Hermann. Mean-normalized stochastic gradient for large-scale deep learning. In IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 180–184, Florence, Italy, May 2014.

Wu, Ren, Yan, Shengen, Shan, Yi, Dang, Qingqing, and Sun, Gang. Deep image: Scaling up image recognition, 2015.

# Appendix

# Variant of the Inception Model Used
Figure 5 documents the changes that were performed with respect to the GoogLeNet architecture. For the interpretation of this table, please consult (Szegedy et al., 2014). The notable architecture changes compared to the GoogLeNet model include:

• The 5×5 convolutional layers are replaced by two consecutive 3×3 convolutional layers. This increases the maximum depth of the network by 9 weight layers. Also it increases the number of parameters by 25% and the computational cost is increased by about 30%.

• The number of 28×28 inception modules is increased from 2 to 3.

• Inside the modules, sometimes average-, sometimes maximum-pooling is employed. This is indicated in the entries corresponding to the pooling layers of the table.

• There are no across-the-board pooling layers between any two Inception modules, but stride-2 convolution/pooling layers are employed before the filter concatenation in the modules 3c, 4e.

Our model employed separable convolution with depth multiplier 8 on the first convolutional layer. This reduces the computational cost while increasing the memory consumption at training time.
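To make "separable convolution with depth multiplier 8" concrete, here is a hedged NumPy sketch of a depthwise-separable convolution; the valid padding, missing bias terms, and toy shapes are assumptions for illustration, not the model's actual implementation.

```python
import numpy as np

def separable_conv2d(x, depthwise_k, pointwise_k, stride=1):
    # x: (H, W, C_in); depthwise_k: (kh, kw, C_in, depth_multiplier)
    # pointwise_k: (C_in * depth_multiplier, C_out)
    H, W, C = x.shape
    kh, kw, _, dm = depthwise_k.shape
    out_h = (H - kh) // stride + 1
    out_w = (W - kw) // stride + 1
    dw = np.zeros((out_h, out_w, C * dm))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i*stride:i*stride+kh, j*stride:j*stride+kw, :]   # (kh, kw, C)
            # each input channel is convolved with its own depth_multiplier filters
            prod = np.einsum('abc,abcd->cd', patch, depthwise_k)       # (C, dm)
            dw[i, j, :] = prod.reshape(-1)
    return dw @ pointwise_k  # 1x1 pointwise projection -> (out_h, out_w, C_out)

# toy usage mirroring a 7x7/2 first layer with depth multiplier 8 on RGB input
rng = np.random.default_rng(0)
x = rng.normal(size=(15, 15, 3))
dw_k = rng.normal(size=(7, 7, 3, 8))
pw_k = rng.normal(size=(3 * 8, 64))
print(separable_conv2d(x, dw_k, pw_k, stride=2).shape)  # (5, 5, 64)
```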
1502.03167 | 52 | #3Ã3 reduce double #3Ã3 reduce patch size/ stride 7Ã7/2 3Ã3/2 3Ã3/1 3Ã3/2 output size 112Ã112Ã64 56Ã56Ã64 56Ã56Ã192 28Ã28Ã192 28Ã28Ã256 28Ã28Ã320 28Ã28Ã576 14Ã14Ã576 14Ã14Ã576 14Ã14Ã576 14Ã14Ã576 14Ã14Ã1024 7Ã7Ã1024 7Ã7Ã1024 1Ã1Ã1024 double #3Ã3 #3Ã3 depth #1Ã1 Pool +proj type convolution* max pool convolution max pool inception (3a) inception (3b) inception (3c) inception (4a) inception (4b) inception (4c) inception (4d) inception (4e) inception (5a) inception (5b) avg pool 1 0 1 0 3 3 3 3 3 3 3 3 3 3 0 192 64 64 96 160 96 128 160 192 192 320 320 64 64 0 224 192 160 96 0 352 352 64 64 64 96 96 128 160 192 160 192 64 64 128 64 96 128 128 128 192 192 96 96 96 128 128 160 192 256 224 224 avg + 32 avg + 64 max + pass through avg + 128 avg + 128 avg + 128 avg | 1502.03167#52 | Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | Training Deep Neural Networks is complicated by the fact that the
# Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio
KELVIN.XU@UMONTREAL.CA JIMMY@PSI.UTORONTO.CA RKIROS@CS.TORONTO.EDU KYUNGHYUN.CHO@UMONTREAL.CA AARON.COURVILLE@UMONTREAL.CA RSALAKHU@CS.TORONTO.EDU ZEMEL@CS.TORONTO.EDU FIND-ME@THE.WEB
# Abstract
Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.

Figure 1. Our model learns a words/image alignment. The visualized attentional maps (3) are explained in section 3.1 & 5.4. [Pipeline: 1. Input image → 2. Convolutional feature extraction (14×14 feature map) → 3. RNN with attention over the image → 4. Word-by-word generation.]
# 1. Introduction

Automatically generating captions of an image is a task very close to the heart of scene understanding, one of the primary goals of computer vision. Not only must caption generation models be powerful enough to solve the computer vision challenges of determining which objects are in an image, but they must also be capable of capturing and expressing their relationships in a natural language. For this reason, caption generation has long been viewed as a difficult problem. It is a very important challenge for machine learning algorithms, as it amounts to mimicking the remarkable human ability to compress huge amounts of salient visual information into descriptive language.

Despite the challenging nature of this task, there has been a recent surge of research interest in attacking the image caption generation problem. Aided by advances in training neural networks (Krizhevsky et al., 2012) and large classification datasets (Russakovsky et al., 2014), recent work has significantly improved the quality of caption generation using a combination of convolutional neural networks (convnets) to obtain vectorial representation of images and recurrent neural networks to decode those representations into natural language sentences (see Sec. 2).
One of the most curious facets of the human visual system is the presence of attention (Rensink, 2000; Corbetta & Shulman, 2002). Rather than compress an entire image into a static representation, attention allows for salient features to dynamically come to the forefront as needed. This is especially important when there is a lot of clutter in an image. Using representations (such as those from the top layer of a convnet) that distill information in an image down to the most salient objects is one effective solution that has been widely adopted in previous work. Unfortunately, this has one potential drawback of losing information which could be useful for richer, more descriptive captions. Using a more low-level representation can help preserve this information. However, working with these features necessitates a powerful mechanism to steer the model to information important to the task at hand.

Figure 2. Attention over time. As the model generates each word, its attention changes to reflect the relevant parts of the image. "soft" (top row) vs "hard" (bottom row) attention. (Note that both models generated the same captions in this example.) [Example caption: A bird flying over a body of water.]

Figure 3. Examples of attending to the correct object (white indicates the attended regions, underlines indicate the corresponding word). Example captions: A woman is throwing a frisbee in a park. A little girl sitting on a bed with a teddy bear. A dog is standing on a hardwood floor. A group of people sitting on a boat in the water. A stop sign is on a road with a mountain in the background. A giraffe standing in a forest with trees in the background.

In this paper, we describe approaches to caption generation that attempt to incorporate a form of attention with two variants: a "hard" attention mechanism and a "soft" attention mechanism. We also show how one advantage of including attention is the ability to visualize what the model "sees". Encouraged by recent advances in caption generation and inspired by recent success in employing attention in machine translation (Bahdanau et al., 2014) and object recognition (Ba et al., 2014; Mnih et al., 2014), we investigate models that can attend to salient parts of an image while generating its caption.

The contributions of this paper are the following:
⢠We introduce two attention-based image caption gen- erators under a common framework (Sec. 3.1): 1) a âsoftâ deterministic attention mechanism trainable by standard back-propagation methods and 2) a âhardâ stochastic attention mechanism trainable by maximiz- ing an approximate variational lower bound or equiv- alently by REINFORCE (Williams, 1992).
⢠We show how we can gain insight and interpret the results of this framework by visualizing âwhereâ and âwhatâ the attention focused on. (see Sec. 5.4)
⢠Finally, we quantitatively validate the usefulness of attention in caption generation with state of the art performance (Sec. 5.3) on three benchmark datasets: Flickr8k (Hodosh et al., 2013) , Flickr30k (Young et al., 2014) and the MS COCO dataset (Lin et al., 2014).
# 2. Related Work | 1502.03044#5 | Show, Attend and Tell: Neural Image Caption Generation with Visual Attention | Inspired by recent work in machine translation and object detection, we
In this section we provide relevant background on previous work on image caption generation and attention. Recently, several methods have been proposed for generating image descriptions. Many of these methods are based on recurrent neural networks and inspired by the successful use of sequence to sequence training with neural networks for machine translation (Cho et al., 2014; Bahdanau et al., 2014; Sutskever et al., 2014). One major reason image caption generation is well suited to the encoder-decoder framework (Cho et al., 2014) of machine translation is because it is analogous to "translating" an image to a sentence.
The first approach to use neural networks for caption generation was Kiros et al. (2014a), who proposed a multimodal log-bilinear model that was biased by features from the image. This work was later followed by Kiros et al. (2014b) whose method was designed to explicitly allow a natural way of doing both ranking and generation. Mao et al. (2014) took a similar approach to generation but replaced a feed-forward neural language model with a recurrent one. Both Vinyals et al. (2014) and Donahue et al. (2014) use LSTM RNNs for their models. Unlike Kiros et al. (2014a) and Mao et al. (2014) whose models see the image at each time step of the output word sequence, Vinyals et al. (2014) only show the image to the RNN at the beginning. Along
with images, Donahue et al. (2014) also apply LSTMs to videos, allowing their model to generate video descriptions.
All of these works represent images as a single feature vector from the top layer of a pre-trained convolutional network. Karpathy & Li (2014) instead proposed to learn a joint embedding space for ranking and generation whose model learns to score sentence and image similarity as a function of R-CNN object detections with outputs of a bidirectional RNN. Fang et al. (2014) proposed a three-step pipeline for generation by incorporating object detections. Their model first learns detectors for several visual concepts based on a multi-instance learning framework. A language model trained on captions was then applied to the detector outputs, followed by rescoring from a joint image-text embedding space. Unlike these models, our proposed attention framework does not explicitly use object detectors but instead learns latent alignments from scratch. This allows our model to go beyond "objectness" and learn to attend to abstract concepts.
Prior to the use of neural networks for generating captions, two main approaches were dominant. The first involved generating caption templates which were filled in based on the results of object detections and attribute discovery (Kulkarni et al. (2013), Li et al. (2011), Yang et al. (2011), Mitchell et al. (2012), Elliott & Keller (2013)). The second approach was based on first retrieving similar captioned images from a large database then modifying these retrieved captions to fit the query (Kuznetsova et al., 2012; 2014). These approaches typically involved an intermediate "generalization" step to remove the specifics of a caption that are only relevant to the retrieved image, such as the name of a city. Both of these approaches have since fallen out of favour to the now dominant neural network methods.

There has been a long line of previous work incorporating attention into neural networks for vision related tasks. Some that share the same spirit as our work include Larochelle & Hinton (2010); Denil et al. (2012); Tang et al. (2014). In particular however, our work directly extends the work of Bahdanau et al. (2014); Mnih et al. (2014); Ba et al. (2014).

# 3. Image Caption Generation with Attention Mechanism
Figure 4. A LSTM cell, lines with bolded squares imply projections with a learnt weight vector. Each cell learns how to weigh its input components (input gate), while learning how to modulate that contribution to the memory (input modulator). It also learns weights which erase the memory cell (forget gate), and weights which control how this memory should be emitted (output gate).
# 3.1. Model Details

In this section, we describe the two variants of our attention-based model by first describing their common framework. The main difference is the definition of the φ function which we describe in detail in Section 4. We denote vectors with bolded font and matrices with capital letters. In our description below, we suppress bias terms for readability.

3.1.1. ENCODER: CONVOLUTIONAL FEATURES

Our model takes a single raw image and generates a caption y encoded as a sequence of 1-of-K encoded words.

y = {y_1, . . . , y_C}, y_i ∈ R^K

where K is the size of the vocabulary and C is the length of the caption.

We use a convolutional neural network in order to extract a set of feature vectors which we refer to as annotation vectors. The extractor produces L vectors, each of which is a D-dimensional representation corresponding to a part of the image.

a = {a_1, . . . , a_L}, a_i ∈ R^D

In order to obtain a correspondence between the feature vectors and portions of the 2-D image, we extract features from a lower convolutional layer, unlike previous work which instead used a fully connected layer. This allows the decoder to selectively focus on certain parts of an image by selecting a subset of all the feature vectors.
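As a small illustration of how annotation vectors arise from a lower convolutional layer: a 14×14 feature map with 512 channels (the shape suggested by Figure 1; the extractor and the random values here are placeholders, not the authors' code) is flattened into L = 196 vectors of dimension D = 512.

```python
import numpy as np

# Pretend output of a lower convolutional layer for one image: a 14x14 grid of 512-d features.
feature_map = np.random.default_rng(0).normal(size=(14, 14, 512))

# Flatten the spatial grid into L = 196 annotation vectors a_i, each of dimension D = 512.
L, D = 14 * 14, 512
a = feature_map.reshape(L, D)

print(a.shape)  # (196, 512): one D-dimensional vector per image location
```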
3.1.2. DECODER: LONG SHORT-TERM MEMORY NETWORK
We use a long short-term memory (LSTM) network (Hochreiter & Schmidhuber, 1997) that produces a caption by generating one word at every time step conditioned on a context vector, the previous hidden state and the previously generated words. Our implementation of LSTM
closely follows the one used in Zaremba et al. (2014) (see Fig. 4). Using T_{s,t} : R^s → R^t to denote a simple affine transformation with parameters that are learned,
[i_t; f_t; o_t; g_t] = [σ; σ; σ; tanh] T_{D+m+n,n} [E y_{t-1}; h_{t-1}; ẑ_t]    (1)

c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t    (2)

h_t = o_t ⊙ tanh(c_t)    (3)

Here, i_t, f_t, c_t, o_t, h_t are the input, forget, memory, output and hidden state of the LSTM, respectively. The vector ẑ_t ∈ R^D is the context vector, capturing the visual information associated with a particular input location, as explained below. E ∈ R^{m×K} is an embedding matrix. Let m and n denote the embedding and LSTM dimensionality respectively and σ and ⊙ be the logistic sigmoid activation and element-wise multiplication respectively.

The initial memory state and hidden state of the LSTM are predicted by an average of the annotation vectors fed through two separate MLPs (init_c and init_h):

c_0 = f_{init,c}((1/L) Σ_i a_i),    h_0 = f_{init,h}((1/L) Σ_i a_i)

In this work, we use a deep output layer (Pascanu et al., 2014) to compute the output word probability given the LSTM state, the context vector and the previous word:
p(y_t \mid \mathbf{a}, y_1^{t-1}) \propto \exp\!\left(\mathbf{L}_o(\mathbf{E}\mathbf{y}_{t-1} + \mathbf{L}_h\mathbf{h}_t + \mathbf{L}_z\hat{\mathbf{z}}_t)\right) \tag{7}
where L_o ∈ R^{K×m}, L_h ∈ R^{m×n}, L_z ∈ R^{m×D}, and E are learned parameters initialized randomly.
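To make the decoder step concrete, the following is a minimal NumPy sketch of Equations (1)–(3) and the deep output layer of Equation (7). The dimensions, the random initialization and the exact stacking of the four gates inside T are illustrative assumptions, not the authors' Theano implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative sizes: vocabulary K, embedding m, LSTM state n, annotation dim D.
K, m, n, D = 10000, 512, 1024, 512
rng = np.random.RandomState(0)

# Learned parameters (randomly initialized here only for the sketch).
E   = rng.randn(m, K) * 0.01               # word embedding matrix
T   = rng.randn(4 * n, D + m + n) * 0.01   # affine map T_{D+m+n,n}, four gates stacked
b   = np.zeros(4 * n)
L_o = rng.randn(K, m) * 0.01
L_h = rng.randn(m, n) * 0.01
L_z = rng.randn(m, D) * 0.01

def decoder_step(y_prev_id, h_prev, c_prev, z_hat):
    """One LSTM step conditioned on the previous word, state and context vector."""
    Ey = E[:, y_prev_id]                           # embedding of the previous word
    x = np.concatenate([Ey, h_prev, z_hat])        # [E y_{t-1}; h_{t-1}; z_hat_t]
    pre = T @ x + b
    i, f, o, g = np.split(pre, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)   # Eq. (1)
    c = f * c_prev + i * g                         # Eq. (2)
    h = o * np.tanh(c)                             # Eq. (3)
    p = softmax(L_o @ (Ey + L_h @ h + L_z @ z_hat))  # deep output layer, Eq. (7)
    return h, c, p

# Usage: one step from zero states and a random context vector.
h0, c0 = np.zeros(n), np.zeros(n)
p_word = decoder_step(y_prev_id=3, h_prev=h0, c_prev=c0, z_hat=rng.randn(D))[2]
print(p_word.shape)  # (10000,)
```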
# 4. Learning Stochastic "Hard" vs Deterministic "Soft" Attention
In simple terms, the context vector ẑ_t (equations (1)–(3)) is a dynamic representation of the relevant part of the image input at time t. We define a mechanism φ that computes ẑ_t from the annotation vectors a_i, i = 1, . . . , L corresponding to the features extracted at different image locations. For each location i, the mechanism generates a positive weight α_i which can be interpreted either as the probability that location i is the right place to focus for producing the next word (the "hard" but stochastic attention mechanism), or as the relative importance to give to location i in blending the a_i's together. The weight α_i of each annotation vector a_i is computed by an attention model f_att for which we use a multilayer perceptron conditioned on the previous hidden state h_{t−1}. The soft version of this attention mechanism was introduced by Bahdanau et al. (2014). For emphasis, we note that the hidden state varies as the output RNN advances in its output sequence: "where" the network looks next depends on the sequence of words that has already been generated.
e_{ti} = f_{\text{att}}(\mathbf{a}_i, \mathbf{h}_{t-1}) \tag{4}
\alpha_{ti} = \frac{\exp(e_{ti})}{\sum_{k=1}^{L} \exp(e_{tk})} \tag{5}
Once the weights (which sum to one) are computed, the context vector ẑ_t is computed by

\hat{\mathbf{z}}_t = \phi\left(\{\mathbf{a}_i\}, \{\alpha_i\}\right) \tag{6}

where φ is a function that returns a single vector given the set of annotation vectors and their corresponding weights. The details of the φ function are discussed in Sec. 4.
In this section we discuss two alternative mechanisms for the attention model f_att: stochastic attention and deterministic attention.
# 4.1. Stochastic "Hard" Attention
We represent the location variable s_t as where the model decides to focus attention when generating the t-th word. s_{t,i} is an indicator one-hot variable which is set to 1 if the i-th location (out of L) is the one used to extract visual features. By treating the attention locations as intermediate latent variables, we can assign a multinoulli distribution parametrized by {α_i}, and view ẑ_t as a random variable:
p(s_{t,i} = 1 \mid s_{j<t}, \mathbf{a}) = \alpha_{t,i} \tag{8}

\hat{\mathbf{z}}_t = \sum_i s_{t,i}\,\mathbf{a}_i \tag{9}
We define a new objective function L_s that is a variational lower bound on the marginal log-likelihood log p(y | a) of observing the sequence of words y given image features a. The learning algorithm for the parameters W of the models can be derived by directly optimizing L_s:
L_s = \sum_s p(s \mid \mathbf{a}) \log p(\mathbf{y} \mid s, \mathbf{a}) \;\leq\; \log \sum_s p(s \mid \mathbf{a})\, p(\mathbf{y} \mid s, \mathbf{a}) = \log p(\mathbf{y} \mid \mathbf{a}) \tag{10}
\frac{\partial L_s}{\partial W} = \sum_s p(s \mid \mathbf{a}) \left[ \frac{\partial \log p(\mathbf{y} \mid s, \mathbf{a})}{\partial W} + \log p(\mathbf{y} \mid s, \mathbf{a}) \frac{\partial \log p(s \mid \mathbf{a})}{\partial W} \right] \tag{11}
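For readers reconstructing Equation 11 from Equation 10, the only extra ingredient is the standard log-derivative (score-function) identity,

\frac{\partial p(s \mid \mathbf{a})}{\partial W} = p(s \mid \mathbf{a}) \, \frac{\partial \log p(s \mid \mathbf{a})}{\partial W},

applied inside the sum over s.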
Figure 5. Examples of mistakes where we can use attention to gain intuition into what the model saw. Generated captions for the panels: "A large white bird standing in a forest." "A person is standing on a beach with a surfboard." "A woman holding a clock in her hand." "A woman is sitting at a table with a large pizza." "A man wearing a hat and a hat on a skateboard." "A man is talking on his cell phone while another man watches."
Equation 11 suggests a Monte Carlo based sampling approximation of the gradient with respect to the model parameters. This can be done by sampling the location s_t from a multinoulli distribution defined by Equation 8:

\tilde{s}^n \sim \text{Multinoulli}_L(\{\alpha_i\})

\frac{\partial L_s}{\partial W} \approx \frac{1}{N} \sum_{n=1}^{N} \left[ \frac{\partial \log p(\mathbf{y} \mid \tilde{s}^n, \mathbf{a})}{\partial W} + \log p(\mathbf{y} \mid \tilde{s}^n, \mathbf{a}) \frac{\partial \log p(\tilde{s}^n \mid \mathbf{a})}{\partial W} \right] \tag{12}
A moving average baseline is used to reduce the variance in the Monte Carlo estimator of the gradient, following Weaver & Tao (2001). Similar, but more complicated variance reduction techniques have previously been used by Mnih et al. (2014) and Ba et al. (2014). Upon seeing the k-th mini-batch, the moving average baseline is estimated as an accumulated sum of the previous log likelihoods with exponential decay:

b_k = 0.9 \times b_{k-1} + 0.1 \times \log p(\mathbf{y} \mid \tilde{s}_k, \mathbf{a})

To further reduce the estimator variance, an entropy term on the multinoulli distribution, H[s], is added. Also, with probability 0.5 for a given image, we set the sampled attention location s̃ to its expected value α. Both techniques improve the robustness of the stochastic attention learning algorithm. The final learning rule for the model is then the following:

\frac{\partial L_s}{\partial W} \approx \frac{1}{N} \sum_{n=1}^{N} \left[ \frac{\partial \log p(\mathbf{y} \mid \tilde{s}^n, \mathbf{a})}{\partial W} + \lambda_r \left(\log p(\mathbf{y} \mid \tilde{s}^n, \mathbf{a}) - b\right) \frac{\partial \log p(\tilde{s}^n \mid \mathbf{a})}{\partial W} + \lambda_e \frac{\partial H[\tilde{s}^n]}{\partial W} \right]

where λ_r and λ_e are two hyper-parameters set by cross-validation. As pointed out and used in Ba et al. (2014) and Mnih et al. (2014), this formulation is equivalent to the REINFORCE learning rule (Williams, 1992), where the reward for the attention choosing a sequence of actions is a real value proportional to the log likelihood of the target sentence under the sampled attention trajectory.

In making a hard choice at every point, φ({a_i}, {α_i}) from Equation 6 is a function that returns a sampled a_i at every point in time based upon a multinoulli distribution parameterized by α.
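A minimal sketch of this sampling-based estimator is given below. It assumes the caller supplies gradient callables for log p(y | s, a), log p(s | a) and the entropy term (e.g., from an autodiff framework); the λ values are placeholders, and the trick of replacing the sample by its expectation with probability 0.5 is noted but not implemented.

```python
import numpy as np

rng = np.random.RandomState(0)

def sample_locations(alpha):
    """Draw one attention location per time step from Multinoulli({alpha_t})."""
    return np.array([rng.choice(len(a), p=a) for a in alpha])

def update_baseline(b_prev, logpy_sampled):
    """Exponentially decayed moving average of past log-likelihoods (b_k)."""
    return 0.9 * b_prev + 0.1 * logpy_sampled

def reinforce_gradient(alpha, logpy, grad_logpy, grad_logps, grad_entropy,
                       baseline, lam_r=1.0, lam_e=0.002, n_samples=1):
    """Monte Carlo estimate of dL_s/dW following the learning rule above.

    alpha          : (T, L) attention weights parameterizing the sampling
    logpy(s)       : scalar log p(y | s, a) for a sampled trajectory s
    grad_logpy(s)  : gradient of log p(y | s, a) w.r.t. the parameters
    grad_logps(s)  : gradient of log p(s | a) w.r.t. the parameters
    grad_entropy(s): gradient of the entropy term H[s]
    (The paper additionally replaces the sample by its expectation with
     probability 0.5; that refinement is omitted from this sketch.)
    """
    grad = 0.0
    for _ in range(n_samples):
        s = sample_locations(alpha)            # s~^n ~ Multinoulli_L({alpha_i})
        reward = logpy(s) - baseline           # variance-reduced reward
        grad = grad + grad_logpy(s) \
                    + lam_r * reward * grad_logps(s) \
                    + lam_e * grad_entropy(s)
    return grad / n_samples
```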
# 4.2. Deterministic "Soft" Attention
Learning stochastic attention requires sampling the attention location s_t each time; instead we can take the expectation of the context vector ẑ_t directly,

\mathbb{E}_{p(s_t \mid \mathbf{a})}[\hat{\mathbf{z}}_t] = \sum_{i=1}^{L} \alpha_{t,i}\,\mathbf{a}_i \tag{13}

and formulate a deterministic attention model by computing a soft attention weighted annotation vector \phi\left(\{\mathbf{a}_i\}, \{\alpha_i\}\right) = \sum_i^{L} \alpha_i \mathbf{a}_i as introduced by Bahdanau et al. (2014). This corresponds to feeding in a soft
α-weighted context into the system. The whole model is smooth and differentiable under the deterministic attention, so learning end-to-end is trivial by using standard back-propagation.
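As a concrete illustration of the deterministic mechanism, here is a minimal NumPy sketch of Equations (4)–(6) using the expectation of Equation (13). The single-hidden-layer form of f_att and all weight shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention(a, h_prev, params):
    """Compute alpha_{t,i} and the expected context vector z_hat_t.

    a      : (L, D) annotation vectors from the CNN
    h_prev : (n,)   previous LSTM hidden state
    params : dict holding an MLP f_att conditioned on h_prev (illustrative shapes)
    """
    Wa, Wh, w = params["Wa"], params["Wh"], params["w"]
    e = np.tanh(a @ Wa + h_prev @ Wh) @ w        # scores e_{t,i}, Eq. (4)
    alpha = softmax(e)                           # weights sum to one, Eq. (5)
    z_hat = alpha @ a                            # expected context, Eqs. (6)/(13)
    return alpha, z_hat

# Usage with random inputs (L=196 locations, D=512 features, n=1024 state).
rng = np.random.RandomState(0)
L, D, n, k = 196, 512, 1024, 256
params = {"Wa": rng.randn(D, k) * 0.01,
          "Wh": rng.randn(n, k) * 0.01,
          "w":  rng.randn(k) * 0.01}
alpha, z_hat = soft_attention(rng.randn(L, D), rng.randn(n), params)
print(alpha.shape, z_hat.shape)   # (196,) (512,)
```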
Learning the deterministic attention can also be understood as approximately optimizing the marginal likelihood in Equation 10 under the attention location random variable s_t from Sec. 4.1. The hidden activation of the LSTM, h_t, is a linear projection of the stochastic context vector ẑ_t followed by a tanh non-linearity. To the first-order Taylor approximation, the expected value E_{p(s_t|a)}[h_t] is equal to computing h_t using a single forward prop with the expected context vector E_{p(s_t|a)}[ẑ_t]. Considering Eq. 7, let n_t = L_o(E y_{t−1} + L_h h_t + L_z ẑ_t), and let n_{t,i} denote n_t computed by setting the random variable ẑ value to a_i. We define the normalized weighted geometric mean for the softmax k-th word prediction:
\text{NWGM}[p(y_t = k \mid \mathbf{a})] = \frac{\prod_i \exp(n_{t,k,i})^{p(s_{t,i}=1 \mid \mathbf{a})}}{\sum_j \prod_i \exp(n_{t,j,i})^{p(s_{t,i}=1 \mid \mathbf{a})}} = \frac{\exp\!\left(\mathbb{E}_{p(s_t \mid \mathbf{a})}[n_{t,k}]\right)}{\sum_j \exp\!\left(\mathbb{E}_{p(s_t \mid \mathbf{a})}[n_{t,j}]\right)}
The equation above shows that the normalized weighted geometric mean of the caption prediction can be approximated well by using the expected context vector, where E[n_t] = L_o(E y_{t−1} + L_h E[h_t] + L_z E[ẑ_t]). It shows that the NWGM of a softmax unit is obtained by applying softmax to the expectations of the underlying linear projections. Also, from the results in (Baldi & Sadowski, 2014), NWGM[p(y_t = k | a)] ≈ E[p(y_t = k | a)] under softmax activation. That means the expectation of the outputs over all possible attention locations induced by the random variable s_t is computed by simple feedforward propagation with the expected context vector E[ẑ_t]. In other words, the deterministic attention model is an approximation to the marginal likelihood over the attention locations.
Concretely, the model is trained end-to-end by minimizing the following penalized negative log-likelihood:

L_d = -\log\left(p(\mathbf{y} \mid \mathbf{x})\right) + \lambda \sum_i^{L} \left(1 - \sum_t^{C} \alpha_{t,i}\right)^2 \tag{14}
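A small sketch of the penalized objective in Equation (14), given the per-step attention maps produced during decoding; λ and the toy shapes below are placeholders rather than the values used in the paper.

```python
import numpy as np

def doubly_stochastic_loss(log_p_words, alphas, lam=1.0):
    """Negative log-likelihood plus the attention coverage penalty (Eq. 14).

    log_p_words : (C,)   log-probabilities of the C ground-truth words
    alphas      : (C, L) attention weights; each row already sums to one
    """
    nll = -np.sum(log_p_words)
    coverage = alphas.sum(axis=0)                  # sum over time for each location i
    penalty = lam * np.sum((1.0 - coverage) ** 2)  # encourage sum_t alpha_{t,i} ~ 1
    return nll + penalty

# Usage: a caption of 10 words attending over 196 locations.
rng = np.random.RandomState(0)
alphas = rng.dirichlet(np.ones(196), size=10)
print(doubly_stochastic_loss(np.log(rng.uniform(0.1, 1.0, size=10)), alphas))
```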
# 4.3. Training Procedure
# 4.3. Training Procedure
Both variants of our attention model were trained with stochastic gradient descent using adaptive learning rate algorithms. For the Flickr8k dataset, we found that RMSProp (Tieleman & Hinton, 2012) worked best, while for the Flickr30k/MS COCO datasets we used the recently proposed Adam algorithm (Kingma & Ba, 2014).
To create the annotations a_i used by our decoder, we used the Oxford VGGnet (Simonyan & Zisserman, 2014) pretrained on ImageNet without finetuning. In principle, however, any encoding function could be used. In addition, with enough data, we could also train the encoder from scratch (or fine-tune) with the rest of the model. In our experiments we use the 14×14×512 feature map of the fourth convolutional layer before max pooling. This means our decoder operates on the flattened 196 × 512 (i.e. L × D) encoding.
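For clarity, a toy sketch of how a 14×14×512 convolutional feature map is flattened into the L × D annotation matrix the decoder attends over; the feature map here is random rather than a real VGG output.

```python
import numpy as np

# Stand-in for the 14x14x512 feature map of the fourth convolutional block.
feature_map = np.random.randn(14, 14, 512)

# Flatten the spatial grid into L = 196 annotation vectors of dimension D = 512.
L, D = 14 * 14, 512
annotations = feature_map.reshape(L, D)     # a_i in R^D, i = 1..L
print(annotations.shape)                    # (196, 512)
```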
As our implementation requires time proportional to the length of the longest sentence per update, we found training on a random group of captions to be computationally wasteful. To mitigate this problem, in preprocessing we build a dictionary mapping the length of a sentence to the corresponding subset of captions. Then, during training we randomly sample a length and retrieve a mini-batch of size 64 of that length. We found that this greatly improved convergence speed with no noticeable diminishment in performance. On our largest dataset (MS COCO), our soft attention model took less than 3 days to train on an NVIDIA Titan Black GPU.
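A small sketch of the preprocessing just described: captions are bucketed by length, and each training step samples a length and then a mini-batch of 64 captions of that length (with replacement here, for simplicity). The data below are synthetic placeholders.

```python
import random
from collections import defaultdict

def build_length_buckets(captions):
    """Map caption length -> list of caption indices."""
    buckets = defaultdict(list)
    for idx, caption in enumerate(captions):
        buckets[len(caption)].append(idx)
    return buckets

def sample_minibatch(buckets, batch_size=64, rng=random.Random(0)):
    """Pick a random length, then draw a mini-batch of captions of that length."""
    length = rng.choice(list(buckets.keys()))
    pool = buckets[length]
    return [pool[rng.randrange(len(pool))] for _ in range(batch_size)]

# Usage with synthetic tokenized captions of varying length.
captions = [["a"] * random.randint(5, 20) for _ in range(1000)]
batch = sample_minibatch(build_length_buckets(captions))
print(len(batch))   # 64
```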
4.2.1. DOUBLY STOCHASTIC ATTENTION

By construction, Σ_i α_{t,i} = 1 as they are the output of a softmax. In training the deterministic version of our model we introduce a form of doubly stochastic regularization, where we also encourage Σ_t α_{t,i} ≈ 1. This can be interpreted as encouraging the model to pay equal attention to every part of the image over the course of generation. In our experiments, we observed that this penalty was important quantitatively to improving the overall BLEU score and that qualitatively it leads to richer and more descriptive captions. In addition, the soft attention model predicts a gating scalar β from the previous hidden state h_{t−1} at each time step t, such that φ({a_i}, {α_i}) = β Σ_i^L α_i a_i, where β_t = σ(f_β(h_{t−1})). We notice our attention weights put more emphasis on the objects in the images by including the scalar β.
In addition to dropout (Srivastava et al., 2014), the only other regularization strategy we used was early stopping on BLEU score. We observed a breakdown in correlation between the validation set log-likelihood and BLEU in the later stages of training during our experiments. Since BLEU is the most commonly reported metric, we used BLEU on our validation set for model selection.
In our experiments with soft attention, we also used Whetlab1 (Snoek et al., 2012; 2014) in our Flickr8k experiments. Some of the intuitions we gained from hyperparameter regions it explored were especially important in our Flickr30k and COCO experiments.
We make our code for these models, based in Theano (Bergstra et al., 2010), publicly available upon publication to encourage future research in this area.
1https://www.whetlab.com/
Table 1. BLEU-1,2,3,4/METEOR metrics compared to other methods. † indicates a different split, (—) indicates an unknown metric, ◦ indicates the authors kindly provided missing metrics by personal communication, Σ indicates an ensemble, a indicates using AlexNet.
1502.03044 | 26 | BLEU Dataset Flickr8k Flickr30k COCO Model Google NIC(Vinyals et al., 2014)â Σ Log Bilinear (Kiros et al., 2014a)⦠Soft-Attention Hard-Attention Google NICâ â¦Î£ Log Bilinear Soft-Attention Hard-Attention CMU/MS Research (Chen & Zitnick, 2014)a MS Research (Fang et al., 2014)â a BRNN (Karpathy & Li, 2014)⦠Google NICâ â¦Î£ Log Bilinear⦠Soft-Attention Hard-Attention BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR 27 27.7 29.9 31.4 27.7 25.4 28.8 29.6 â â 30.4 32.9 34.4 34.4 35.7 63 65.6 67 67 66.3 60.0 66.7 66.9 â â 64.2 66.6 70.8 70.7 71.8 41 42.4 44.8 45.7 42.3 38 43.4 43.9 â â 45.1 46.1 48.9 49.2 50.4 â 17.7 19.5 21.3 18.3 17.1 19.1 19.9 â â 20.3 | 1502.03044#26 | Show, Attend and Tell: Neural Image Caption Generation with Visual Attention | Inspired by recent work in machine translation and object detection, we
# 5. Experiments
There has been, however, criticism of BLEU, so in addition we report another common metric, METEOR (Denkowski & Lavie, 2014), and compare whenever possible.
We describe our experimental methodology and quantitative results which validate the effectiveness of our model for caption generation.
# 5.1. Data
We report results on the popular Flickr8k and Flickr30k datasets, which have 8,000 and 30,000 images respectively, as well as the more challenging Microsoft COCO dataset, which has 82,783 images. The Flickr8k and Flickr30k datasets both come with 5 reference sentences per image; for the MS COCO dataset, some images have more than 5 references, which we discard for consistency across our datasets. We applied only basic tokenization to MS COCO so that it is consistent with the tokenization present in Flickr8k and Flickr30k. For all our experiments, we used a fixed vocabulary size of 10,000.
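A brief sketch of this dataset preparation: keep at most five references per image and build a fixed 10,000-word vocabulary from the most frequent tokens. Whitespace tokenization stands in for the basic tokenization used; the corpus below is a toy placeholder.

```python
from collections import Counter

def prepare_captions(image_to_captions, max_refs=5, vocab_size=10000):
    """Truncate references per image and build a frequency-ranked vocabulary."""
    truncated = {img: caps[:max_refs] for img, caps in image_to_captions.items()}
    counts = Counter(tok for caps in truncated.values()
                         for cap in caps
                         for tok in cap.lower().split())
    vocab = ["<unk>"] + [tok for tok, _ in counts.most_common(vocab_size - 1)]
    return truncated, {tok: i for i, tok in enumerate(vocab)}

# Usage with a toy corpus.
data = {"img0": ["A dog runs on the beach"] * 7, "img1": ["Two people ride bikes"]}
caps, word_to_id = prepare_captions(data)
print(len(caps["img0"]), len(word_to_id))   # 5 11
```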
# 5.2. Evaluation Procedures
A few challenges exist for comparison, which we explain here. The first is a difference in the choice of convolutional feature extractor. For identical decoder architectures, using more recent architectures such as GoogLeNet or Oxford VGG (Szegedy et al., 2014; Simonyan & Zisserman, 2014) can give a boost in performance over using AlexNet (Krizhevsky et al., 2012). In our evaluation, we compare directly only with results which use the comparable GoogLeNet/Oxford VGG features, but for METEOR comparison we note some results that use AlexNet.
The second challenge is a single model versus ensemble comparison. While other methods have reported performance boosts by using ensembling, in our results we report a single model performance.
Results for our attention-based architecture are reported in Table 1. We report results with the frequently used BLEU metric2, which is the standard in the caption generation literature. We report BLEU from 1 to 4 without a brevity penalty.

2 We verified that our BLEU evaluation code matches the authors of Vinyals et al. (2014), Karpathy & Li (2014) and Kiros et al. (2014b). For fairness, we only compare against results for which we have verified that our BLEU evaluation code is the same. With the upcoming release of the COCO evaluation server, we will include comparison results with all other recent image captioning models.
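One way to compute BLEU-1 through BLEU-4 is shown below using NLTK's corpus-level BLEU; this is not the authors' evaluation script (which they verified against other groups' code), only a hedged stand-in for readers who want to reproduce the metric.

```python
from nltk.translate.bleu_score import corpus_bleu

def bleu_1_to_4(references, hypotheses):
    """Corpus-level BLEU-1..4.

    references : list over images of lists of reference token lists
    hypotheses : list over images of generated token lists
    """
    scores = []
    for n in range(1, 5):
        weights = tuple(1.0 / n for _ in range(n))
        scores.append(corpus_bleu(references, hypotheses, weights=weights))
    return scores

# Usage with a toy example (real evaluation uses the held-out captions).
refs = [[["a", "man", "rides", "a", "horse"], ["a", "person", "on", "a", "horse"]]]
hyps = [["a", "man", "rides", "a", "horse"]]
print(bleu_1_to_4(refs, hyps))
```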
Finally, there is a challenge due to differences between dataset splits. In our reported results, we use the pre-defined splits of Flickr8k. However, one challenge for the Flickr30k and COCO datasets is the lack of standardized splits. As a result, we report with the publicly available splits3 used in previous work (Karpathy & Li, 2014). In our experience, differences in splits do not make a substantial difference in overall performance, but we note the differences where they exist.

3 http://cs.stanford.edu/people/karpathy/deepimagesent/
# 5.3. Quantitative Analysis
In Table 1, we provide a summary of the experiments validating the quantitative effectiveness of attention. We obtain state-of-the-art performance on Flickr8k, Flickr30k and MS COCO. In addition, we note that in our experiments we are able to significantly improve the state-of-the-art METEOR score on MS COCO, which we speculate is connected to some of the regularization techniques we used (Sec. 4.2.1) and our lower-level representation. Finally, we also note that we are able to obtain this performance using a single model without an ensemble.
# Acknowledgments
The authors would like to thank the developers of Theano (Bergstra et al., 2010; Bastien et al., 2012). We acknowledge the support of the following organizations for research funding and computing support: the Nuance Foundation, NSERC, Samsung, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. The authors would also like to thank Nitish Srivastava for assistance with his ConvNet package as well as preparing the Oxford convolutional network, and Relu Patrascu for helping with numerous infrastructure-related problems.
# 5.4. Qualitative Analysis: Learning to attend
By visualizing the attention component learned by the model, we are able to add an extra layer of interpretability to the output of the model (see Fig. 1). Other systems that have done this rely on object detection systems to produce candidate alignment targets (Karpathy & Li, 2014). Our approach is much more flexible, since the model can attend to "non object" salient regions.

The 19-layer OxfordNet uses stacks of 3x3 filters, meaning the only time the feature maps decrease in size is due to the max pooling layers. The input image is resized so that the shortest side is 256 pixels with preserved aspect ratio. The input to the convolutional network is the center-cropped 224x224 image. Consequently, with 4 max pooling layers, we get an output dimension of the top convolutional layer of 14x14. Thus, in order to visualize the attention weights for the soft model, we simply upsample the weights by a factor of 2^4 = 16 and apply a Gaussian filter. We note that the receptive fields of each of the 14x14 units are highly overlapping.

As we can see in Figures 2 and 3, the model learns alignments that correspond very strongly with human intuition. Especially in the examples of mistakes, we see that it is possible to exploit such visualizations to get an intuition as to why those mistakes were made. We provide a more extensive list of visualizations in Appendix A for the reader.
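A small sketch of this visualization procedure: the 14×14 soft attention map is upsampled by a factor of 2^4 = 16 to the 224×224 input resolution and smoothed with a Gaussian filter. The filter width and the random attention map are arbitrary placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def upsample_attention(alpha, factor=16, sigma=8.0):
    """Turn a (196,) attention vector into a smooth 224x224 heat map."""
    grid = alpha.reshape(14, 14)
    upsampled = np.kron(grid, np.ones((factor, factor)))   # nearest-neighbour upsample
    return gaussian_filter(upsampled, sigma=sigma)

# Usage: overlay `heatmap` on the 224x224 centre crop to produce plots like Figs. 2-3.
heatmap = upsample_attention(np.random.dirichlet(np.ones(196)))
print(heatmap.shape)   # (224, 224)
```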
# References
Ba, Jimmy Lei, Mnih, Volodymyr, and Kavukcuoglu, Koray. Multiple object recognition with visual attention. arXiv:1412.7755, December 2014.
Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473, September 2014.
Baldi, Pierre and Sadowski, Peter. The dropout learning algorithm. Artificial Intelligence, 210:78–122, 2014.
Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Bergstra, James, Goodfellow, Ian, Bergeron, Arnaud, Bouchard, Nicolas, Warde-Farley, David, and Bengio, Yoshua. Theano: new features and speed improvements. Submitted to the Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
Bergstra, James, Breuleux, Olivier, Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Desjardins, Guillaume, Turian, Joseph, Warde-Farley, David, and Bengio, Yoshua. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), 2010.
Chen, Xinlei and Zitnick, C Lawrence. Learning a recurrent visual representation for image caption generation. arXiv preprint arXiv:1411.5654, 2014.
Cho, Kyunghyun, van Merrienboer, Bart, Gulcehre, Caglar, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, October 2014.
# 6. Conclusion
We propose an attention based approach that gives state of the art performance on three benchmark datasets using the BLEU and METEOR metrics. We also show how the learned attention can be exploited to give more interpretability into the model's generation process, and demonstrate that the learned alignments correspond very well to human intuition. We hope that the results of this paper will encourage future work in using visual attention. We also expect that the modularity of the encoder-decoder approach combined with attention will have useful applications in other domains.
Corbetta, Maurizio and Shulman, Gordon L. Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3(3):201–215, 2002.
1502.03044 | 36 | Denil, Misha, Bazzani, Loris, Larochelle, Hugo, and de Freitas, Nando. Learning where to attend with deep architectures for image tracking. Neural Computation, 2012.
Denkowski, Michael and Lavie, Alon. Meteor Universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation, 2014.
Donahue, Jeff, Hendricks, Lisa Anne, Guadarrama, Sergio, Rohrbach, Marcus, Venugopalan, Subhashini, Saenko, Kate, and Darrell, Trevor. Long-term recurrent convolutional networks for visual recognition and description. arXiv:1411.4389v2, November 2014.
Lin, Tsung-Yi, Maire, Michael, Belongie, Serge, Hays, James, Perona, Pietro, Ramanan, Deva, Dollár, Piotr, and Zitnick, C Lawrence. Microsoft COCO: Common objects in context. In ECCV, pp. 740–755, 2014.
Elliott, Desmond and Keller, Frank. Image description using visual dependency representations. In EMNLP, 2013.