Rethinking the Inception Architecture for Computer Vision

Christian Szegedy, Google Inc., szegedy@google.com; Vincent Vanhoucke, vanhoucke@google.com; Sergey Ioffe, sioffe@google.com; Jonathon Shlens, shlens@google.com; Zbigniew Wojna, University College London, zbigniewwojna@gmail.com

Abstract

Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim at utilizing the added computation as efficiently as possible through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error.

1. Introduction

Since the 2012 ImageNet competition [16] winning entry by Krizhevsky et al. [9], their network "AlexNet" has been successfully applied to a larger variety of computer vision tasks, for example to object detection [5], segmentation [12], human pose estimation [22], video classification [8], object tracking [23], and super-resolution [3]. These successes spurred a new line of research that focused on finding higher performing convolutional neural networks. Starting in 2014, the quality of network architectures significantly improved by utilizing deeper and wider networks. VGGNet [18] and GoogLeNet [20] yielded similarly high performance in the 2014 ILSVRC [16] classification challenge. One interesting observation was that gains in classification performance tend to transfer to significant quality gains in a wide variety of application domains. This means that architectural improvements in deep convolutional architectures can be utilized for improving performance for most other computer vision tasks that are increasingly reliant on high quality, learned visual features. Also, improvements in network quality resulted in new application domains for convolutional networks in cases where AlexNet features could not compete with hand engineered, crafted solutions, e.g. proposal generation in detection [4].

Although VGGNet [18] has the compelling feature of architectural simplicity, this comes at a high cost: evaluating the network requires a lot of computation. On the other hand, the Inception architecture of GoogLeNet [20] was also designed to perform well even under strict constraints on memory and computational budget. For example, GoogLeNet employed only 5 million parameters, which represented a 12x reduction with respect to its predecessor AlexNet, which used 60 million parameters. Furthermore, VGGNet employed about 3x more parameters than AlexNet. The computational cost of Inception is also much lower than that of VGGNet or its higher performing successors [6].
This has made it feasible to utilize Inception networks in big-data scenarios [17], [13], where huge amounts of data need to be processed at reasonable cost, or in scenarios where memory or computational capacity is inherently limited, for example in mobile vision settings. It is certainly possible to mitigate parts of these issues by applying specialized solutions to target memory use [2], [15] or by optimizing the execution of certain operations via computational tricks [10]. However, these methods add extra complexity. Furthermore, these methods could be applied to optimize the Inception architecture as well, widening the efficiency gap again. Still, the complexity of the Inception architecture makes
it more difficult to make changes to the network. If the architecture is scaled up naively, large parts of the computational gains can be immediately lost. Also, [20] does not provide a clear description of the contributing factors that lead to the various design decisions of the GoogLeNet architecture. This makes it much harder to adapt it to new use-cases while maintaining its efficiency. For example, if it is deemed necessary to increase the capacity of some Inception-style model, the simple transformation of just doubling the number of all filter bank sizes will lead to a 4x increase in both computational cost and number of parameters. This might prove prohibitive or unreasonable in a lot of practical scenarios, especially if the associated gains are modest. In this paper, we start by describing a few general principles and optimization ideas that proved to be useful for scaling up convolutional networks in efficient ways. Although our principles are not limited to Inception-type networks, they are easier to observe in that context, as the generic structure of the Inception-style building blocks is flexible enough to incorporate those constraints naturally. This is enabled by the generous use of dimensional reduction and parallel structures of the Inception modules, which allows for mitigating the impact of structural changes on nearby components. Still, one needs to be cautious about doing so, as some guiding principles should be observed to maintain high quality of the models.

2. General Design Principles

Here we will describe a few design principles based on large-scale experimentation with various architectural choices with convolutional networks. At this point, the utility of the principles below is speculative and additional experimental evidence will be necessary to assess their accuracy and domain of validity. Still, grave deviations from these principles tended to result in deterioration in the quality of the networks, and fixing situations where those deviations were detected resulted in improved architectures in general.

1. Avoid representational bottlenecks, especially early in the network. Feed-forward networks can be represented by an acyclic graph from the input layer(s) to the classifier or regressor. This defines a clear direction for the information flow. For any cut separating the inputs from the outputs, one can assess the amount of information passing through the cut. One should avoid bottlenecks with extreme compression. In general, the representation size should gently decrease from the inputs to the outputs before reaching the final representation used for the task at hand. Theoretically, information content cannot be assessed merely by the dimensionality of the representation, as it discards important factors like correlation structure; the dimensionality merely provides a rough estimate of information content.

2. Higher dimensional representations are easier to process locally within a network. Increasing the activations per tile in a convolutional network allows for more disentangled features. The resulting networks will train faster.

3. Spatial aggregation can be done over lower dimensional embeddings without much or any loss in representational power. For example, before performing a more spread out (e.g. 3x3) convolution, one can reduce the dimension of the input representation before the spatial aggregation without expecting serious adverse effects.
We hypothesize that the reason for this is that the strong correlation between adjacent units results in much less loss of information during dimension reduction if the outputs are used in a spatial aggregation context. Given that these signals should be easily compressible, the dimension reduction even promotes faster learning.

4. Balance the width and depth of the network. Optimal performance of the network can be reached by balancing the number of filters per stage and the depth of the network. Increasing both the width and the depth of the network can contribute to higher quality networks. However, the optimal improvement for a constant amount of computation can be reached if both are increased in parallel. The computational budget should therefore be distributed in a balanced way between the depth and width of the network.

Although these principles might make sense, it is not straightforward to use them to improve the quality of networks out of the box. The idea is to use them judiciously in ambiguous situations only.

3. Factorizing Convolutions with Large Filter Size

Much of the original gains of the GoogLeNet network [20] arise from a very generous use of dimension reduction. This can be viewed as a special case of factorizing convolutions in a computationally efficient manner. Consider for example the case of a 1x1 convolutional layer followed by a 3x3 convolutional layer. In a vision network, it is expected that the outputs of nearby activations are highly correlated. Therefore, we can expect that their activations can be reduced before aggregation and that this should result in similarly expressive local representations. Here we explore other ways of factorizing convolutions in various settings, especially in order to increase the computational efficiency of the solution. Since Inception networks are fully convolutional, each weight corresponds to
one multiplication per activation. Therefore, any reduction in computational cost results in a reduced number of parameters. This means that with suitable factorization, we can end up with more disentangled parameters and therefore with faster training. Also, we can use the computational and memory savings to increase the filter-bank sizes of our network while maintaining our ability to train each model replica on a single computer.

Figure 1. Mini-network replacing the 5x5 convolutions.

Figure 2. One of several control experiments between two Inception models; one of them uses factorization into linear + ReLU layers, the other uses two ReLU layers. After 3.86 million operations, the former settles at 76.2%, while the latter reaches 77.2% top-1 accuracy on the validation set.

3.1. Factorization into smaller convolutions

Convolutions with larger spatial filters (e.g. 5x5 or 7x7) tend to be disproportionally expensive in terms of computation. For example, a 5x5 convolution with n filters over a grid with m filters is 25/9 = 2.78 times more computationally expensive than a 3x3 convolution with the same number of filters. Of course, a 5x5 filter can capture dependencies between signals of activations of units further away in the earlier layers, so a reduction of the geometric size of the filters comes at a large cost of expressiveness. However, we can ask whether a 5x5 convolution could be replaced by a multi-layer network with fewer parameters, with the same input size and output depth. If we zoom into the computation graph of the 5x5 convolution, we see that each output looks like a small fully-connected network sliding over 5x5 tiles over its input (see Figure 1). Since we are constructing a vision network, it seems natural to exploit translation invariance again and replace the fully connected component by a two-layer convolutional architecture: the first layer is a 3x3 convolution, the second is a fully connected layer on top of the 3x3 output grid of the first layer (see Figure 1). Sliding this small network over the input activation grid boils down to replacing the 5x5 convolution with two layers of 3x3 convolution (compare Figure 4 with 5). This setup clearly reduces the parameter count by sharing the weights between adjacent tiles.

To analyze the expected computational cost savings, we will make a few simplifying assumptions that apply to the typical situations: we can assume that n = αm, that is, that we want to change the number of activations per unit by a constant factor α. Since the 5x5 convolution is aggregating, α is typically slightly larger than one (around 1.5 in the case of GoogLeNet). Having a two-layer replacement for the 5x5 layer, it seems reasonable to reach this expansion in two steps: increasing the number of filters by √α in both steps. In order to simplify our estimate, we choose α = 1 (no expansion). If we would naively slide a network without reusing the computation between neighboring grid tiles, we would increase the computational cost. Sliding this network can instead be represented by two 3x3 convolutional layers which reuse the activations between adjacent tiles. This way, we end up with a net (9+9)/25 reduction of computation, resulting in a relative gain of 28% by this factorization.
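To make the arithmetic above concrete, the following minimal Python sketch (ours, not from the paper) counts multiply-adds for a single 5x5 convolution versus two stacked 3x3 convolutions under the α = 1 assumption; the grid size and channel count are illustrative, not values used in the paper.

```python
# Rough multiply-add count for replacing one 5x5 convolution with two stacked
# 3x3 convolutions; the parameter count scales the same way (drop the grid^2).

def conv_cost(grid, in_ch, out_ch, k):
    """Multiply-adds for a k x k convolution over a grid x grid feature map
    (stride 1, same padding)."""
    return grid * grid * in_ch * out_ch * k * k

grid, ch = 17, 192                      # illustrative 17x17 grid, 192 channels
single_5x5 = conv_cost(grid, ch, ch, 5)
two_3x3 = 2 * conv_cost(grid, ch, ch, 3)

print(single_5x5 / two_3x3)             # 25 / 18, about 1.39x more expensive
print(1 - two_3x3 / single_5x5)         # 0.28, i.e. the 28% relative saving
```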
The exact same saving holds for the parameter count, as each parameter is used exactly once in the computation of the activation of each unit. Still, this setup raises two general questions: Does this replacement result in any loss of expressiveness? If our main goal is to factorize the linear part of the computation, would it not suggest keeping linear activations in the first layer? We have run several control experiments (for example see Figure 2) and using linear activation was always inferior to using rectified linear units in all stages of the factorization. We attribute this gain to the enhanced space of variations that the network can learn, especially if we batch-normalize [7] the output activations. One can see similar effects when using linear activations for the dimension reduction components.

3.2. Spatial Factorization into Asymmetric Convolutions

The above results suggest that convolutions with filters larger than 3x3 might not be generally useful, as they can always be reduced into a sequence of 3x3 convolutional
layers. Still we can ask the question whether one should factorize them into smaller, for example 2x2, convolutions. However, it turns out that one can do even better than 2x2 by using asymmetric convolutions, e.g. nx1. For example, using a 3x1 convolution followed by a 1x3 convolution is equivalent to sliding a two-layer network with the same receptive field as a 3x3 convolution (see Figure 3). Still, the two-layer solution is 33% cheaper for the same number of output filters, if the number of input and output filters is equal. By comparison, factorizing a 3x3 convolution into two 2x2 convolutions represents only an 11% saving of computation.

Figure 3. Mini-network replacing the 3x3 convolutions. The lower layer of this network consists of a 3x1 convolution with 3 output units.

Figure 4. Original Inception module as described in [20].

Figure 5. Inception modules where each 5x5 convolution is replaced by two 3x3 convolutions, as suggested by principle 3 of Section 2.

In theory, we could go even further and argue that one can replace any nxn convolution by a 1xn convolution followed by an nx1 convolution, and the computational cost saving increases dramatically as n grows (see Figure 6). In practice, we have found that employing this factorization does not work well on early layers, but it gives very good results on medium grid-sizes (on mxm feature maps, where m ranges between 12 and 20). On that level, very good results can be achieved by using 1x7 convolutions followed by 7x1 convolutions.

4. Utility of Auxiliary Classifiers

[20] has introduced the notion of auxiliary classifiers to improve the convergence of very deep networks. The original motivation was to push useful gradients to the lower layers to make them immediately useful and to improve convergence during training by combating the vanishing gradient problem in very deep networks. Also, Lee et al. [11] argue that auxiliary classifiers promote more stable learning and better convergence. Interestingly, we found that auxiliary classifiers did not result in improved convergence early in the training: the training progression of the network with and without the side head looks virtually identical before both models reach high accuracy. Near the end of training, the network with the auxiliary branches starts to overtake the accuracy of the network without any auxiliary branch and reaches a slightly higher plateau. Also, [20] used two side-heads at different stages in the network. The removal of the lower auxiliary branch did not have any adverse effect on the final quality of the network. Together with the earlier observation in the previous
paragraph, this means that the original hypothesis of [20] that these branches help evolving the low-level features is most likely misplaced. Instead, we argue that the auxiliary classifiers act as a regularizer. This is supported by the fact that the main classifier of the network performs better if the side branch is batch-normalized [7] or has a dropout layer. This also gives weak supporting evidence for the conjecture that batch normalization acts as a regularizer.

Figure 6. Inception modules after the factorization of the nxn convolutions. In our proposed architecture, we chose n = 7 for the 17x17 grid. (The filter sizes are picked using principle 3.)

Figure 7. Inception modules with expanded filter bank outputs. This architecture is used on the coarsest (8x8) grids to promote high dimensional representations, as suggested by principle 2 of Section 2. We are using this solution only on the coarsest grid, since that is the place where producing high dimensional sparse representations is the most critical, as the ratio of local processing (by 1x1 convolutions) is increased compared to the spatial aggregation.

Figure 8. Auxiliary classifier on top of the last 17x17 layer. Batch normalization [7] of the layers in the side head results in a 0.4% absolute gain in top-1 accuracy. The lower axis shows the number of iterations performed, each with batch size 32.

5. Efficient Grid Size Reduction

Traditionally, convolutional networks used some pooling operation to decrease the grid size of the feature maps. In order to avoid a representational bottleneck, the activation dimension of the network filters is expanded before applying maximum or average pooling. For example, starting with a d x d grid with k filters, if we would like to arrive at a (d/2) x (d/2) grid with 2k filters, we first need to compute a stride-1 convolution with 2k filters and then apply an additional pooling step. This means that the overall computational cost is dominated by the expensive convolution on the larger grid, using 2d²k² operations. One possibility would be to switch to pooling followed by convolution, resulting in 2(d/2)²k² operations and reducing the computational cost by a quarter. However, this creates a representational bottleneck, as the overall dimensionality of the representation drops to (d/2)²k, resulting in less expressive networks (see Figure 9). Instead of doing so, we suggest another variant that reduces the computational cost even further while removing the representational bottleneck (see Figure 10). We can use two parallel stride-2 blocks: P and C. P is a pooling layer (either average or maximum pooling) of the activation; both blocks have stride 2, and their filter banks are concatenated as in Figure 10.
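As an illustration of this parallel reduction scheme, here is a minimal tf.keras sketch (our own, not the authors' code). The branch structure follows Figure 10, but the exact filter counts are assumptions chosen only to reproduce the 35x35x320 to 17x17x640 example; the original modules use different per-branch sizes.

```python
# Grid-reduction block: stride-2 convolution branches in parallel with a
# stride-2 pooling branch, concatenated so the grid shrinks while the filter
# bank expands (avoiding the bottleneck of pooling followed by convolution).
import tensorflow as tf
from tensorflow.keras import layers

def reduction_block(x, conv_filters=160):
    # Branch 1: 1x1 reduction, then 3x3 with stride 2.
    b1 = layers.Conv2D(conv_filters, 1, activation='relu')(x)
    b1 = layers.Conv2D(conv_filters, 3, strides=2, padding='valid',
                       activation='relu')(b1)
    # Branch 2: 1x1 reduction, 3x3 stride 1, then 3x3 with stride 2.
    b2 = layers.Conv2D(conv_filters, 1, activation='relu')(x)
    b2 = layers.Conv2D(conv_filters, 3, padding='same', activation='relu')(b2)
    b2 = layers.Conv2D(conv_filters, 3, strides=2, padding='valid',
                       activation='relu')(b2)
    # Branch 3: stride-2 max pooling simply keeps the existing filters.
    b3 = layers.MaxPooling2D(pool_size=3, strides=2, padding='valid')(x)
    return layers.Concatenate()([b1, b2, b3])

inputs = tf.keras.Input(shape=(35, 35, 320))
outputs = reduction_block(inputs)
# With the defaults above: (None, 17, 17, 640) = 320 pooled + 2 * 160 conv.
print(tf.keras.Model(inputs, outputs).output_shape)
```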
Figure 9. Two alternative ways of reducing the grid size. The solution on the left violates principle 1 of Section 2 by introducing a representational bottleneck. The version on the right is 3 times more expensive computationally.

Figure 10. Inception module that reduces the grid size while expanding the filter banks. It is both cheap and avoids the representational bottleneck, as suggested by principle 1. The diagram on the right represents the same solution, but from the perspective of grid sizes rather than the operations.

6. Inception-v2

Here we are connecting the dots from above and propose a new architecture with improved performance on the ILSVRC 2012 classification benchmark. The layout of our network is given in Table 1. Note that we have factorized the traditional 7x7 convolution into three 3x3 convolutions based on the same ideas as described in Section 3.1. For the Inception part of the network, we have 3 traditional Inception modules at the 35x35 level with 288 filters each. This is reduced to a 17x17 grid with 768 filters using the grid reduction technique described in Section 5. This is followed by 5 instances of the factorized Inception modules as depicted in Figure 5. This is reduced to an 8x8x1280 grid with the grid reduction technique depicted in Figure 10. At the coarsest 8x8 level, we have two Inception modules as depicted in Figure 6, with a concatenated output filter bank size of 2048 for each tile. The detailed structure of the network, including the sizes of filter banks inside the Inception modules, is given in the supplementary material, in the model.txt that is in the tar-file of this submission.

type | patch size/stride or remarks | input size
conv | 3x3/2 | 299x299x3
conv | 3x3/1 | 149x149x32
conv padded | 3x3/1 | 147x147x32
pool | 3x3/2 | 147x147x64
conv | 3x3/1 | 73x73x64
conv | 3x3/2 | 71x71x80
conv | 3x3/1 | 35x35x192
3x Inception | As in Figure 5 | 35x35x288
5x Inception | As in Figure 6 | 17x17x768
2x Inception | As in Figure 7 | 8x8x1280
pool | 8x8 | 8x8x2048
linear | logits | 1x1x2048
softmax | classifier | 1x1x1000

Table 1. The outline of the proposed network architecture. The output size of each module is the input size of the next one. We are using variations of the reduction technique depicted in Figure 10 to reduce the grid sizes between the Inception blocks whenever applicable. We have marked the convolution with 0-padding, which is used to maintain the grid size. 0-padding is also used inside those Inception modules that do not reduce the grid size. All other layers do not use padding. The various filter bank sizes are chosen to observe principle 4 from Section 2.

However, we have observed that the quality of the network is relatively stable to variations as long as the principles from Section 2 are observed. Although our network is 42 layers deep, our computational cost is only about 2.5 times higher than that of GoogLeNet, and it is still much more efficient than VGGNet.

7. Model Regularization via Label Smoothing

Here we propose a mechanism to regularize the classifier layer by estimating the marginalized effect of label-dropout during training. For each training example x, our model computes the probability of each label k ∈ {1...K}: p(k|x) = exp(z_k) / Σ_{i=1}^{K} exp(z_i). Here, the z_i are the logits or unnormalized log-probabilities.
Consider the ground-truth distribution over labels q(k|x) for this training example, normalized so that Σ_k q(k|x) = 1. For brevity, let us omit the dependence of p and q on example x. We define the loss for the example as the cross entropy: ℓ = −Σ_{k=1}^{K} log(p(k)) q(k). Minimizing this is equivalent to maximizing the expected log-likelihood of a label, where the label is selected according to its ground-truth distribution q(k). Cross-entropy loss is differentiable with respect to the logits z_k and thus can be used for gradient training of deep models. The gradient has a rather simple form: ∂ℓ/∂z_k = p(k) − q(k), which is bounded between −1 and 1. Consider the case of a single ground-truth label y, so that q(y) = 1 and q(k) = 0 for all k ≠ y. In this case,
minimizing the cross entropy is equivalent to maximizing the log-likelihood of the correct label. For a particular example x with label y, the log-likelihood is maximized for q(k) = δ_{k,y}, where δ_{k,y} is the Dirac delta, which equals 1 for k = y and 0 otherwise. This maximum is not achievable for finite z_k but is approached if z_y ≫ z_k for all k ≠ y, that is, if the logit corresponding to the ground-truth label is much greater than all other logits. This, however, can cause two problems. First, it may result in over-fitting: if the model learns to assign full probability to the ground-truth label for each training example, it is not guaranteed to generalize. Second, it encourages the differences between the largest logit and all others to become large, and this, combined with the bounded gradient ∂ℓ/∂z_k, reduces the ability of the model to adapt. Intuitively, this happens because the model becomes too confident about its predictions.

We propose a mechanism for encouraging the model to be less confident. While this may not be desired if the goal is to maximize the log-likelihood of training labels, it does regularize the model and makes it more adaptable. The method is very simple. Consider a distribution over labels u(k), independent of the training example x, and a smoothing parameter ε. For a training example with ground-truth label y, we replace the label distribution q(k|x) = δ_{k,y} with

q'(k|x) = (1 − ε) δ_{k,y} + ε u(k),

which is a mixture of the original ground-truth distribution q(k|x) and the fixed distribution u(k), with weights 1 − ε and ε, respectively. This can be seen as the distribution of the label k obtained as follows: first, set it to the ground-truth label k = y; then, with probability ε, replace k with a sample drawn from the distribution u(k). We propose to use the prior distribution over labels as u(k). In our experiments, we used the uniform distribution u(k) = 1/K, so that

q'(k) = (1 − ε) δ_{k,y} + ε/K.

We refer to this change in the ground-truth label distribution as label-smoothing regularization, or LSR. Note that LSR achieves the desired goal of preventing the largest logit from becoming much larger than all others. Indeed, if this were to happen, then a single p(k) would approach 1 while all others would approach 0. This would result in a large cross-entropy with q'(k) because, unlike q(k) = δ_{k,y}, all q'(k) have a positive lower bound.

Another interpretation of LSR can be obtained by considering the cross entropy:

H(q', p) = −Σ_{k=1}^{K} log p(k) q'(k) = (1 − ε) H(q, p) + ε H(u, p).

Thus, LSR is equivalent to replacing a single cross-entropy loss H(q, p) with a pair of such losses H(q, p) and H(u, p). The second loss penalizes the deviation of the predicted label distribution p from the prior u, with relative weight ε/(1 − ε). Note that this deviation could be equivalently captured by the KL divergence, since H(u, p) = D_KL(u‖p) + H(u) and H(u) is fixed. When u is the uniform distribution, H(u, p) is a measure of how dissimilar the predicted distribution p is to uniform, which could also be measured (but not equivalently) by the negative entropy −H(p); we have not experimented with this approach.

In our ImageNet experiments with K = 1000 classes, we used u(k) = 1/1000 and ε = 0.1. For ILSVRC 2012, we have found a consistent improvement of about 0.2% absolute both for the top-1 error and the top-5 error (cf. Table 3).

8. Training Methodology

We have trained our networks with stochastic gradient descent, utilizing the TensorFlow [1] distributed machine learning system with 50 replicas, each running on an NVidia Kepler GPU, with batch size 32 for 100 epochs.
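Before continuing with the training details, the label-smoothing loss of Section 7 can be made concrete with a minimal NumPy sketch (ours, not the authors' implementation). It uses the uniform prior u(k) = 1/K and ε = 0.1 from the ImageNet setting; all function names and the toy logits are illustrative.

```python
# Label-smoothing regularization: replace the one-hot target delta_{k,y}
# with (1 - eps) * delta_{k,y} + eps / K and train with cross entropy.
import numpy as np

def smoothed_targets(y, num_classes, epsilon=0.1):
    q = np.full(num_classes, epsilon / num_classes)
    q[y] += 1.0 - epsilon
    return q

def cross_entropy(logits, q):
    """H(q, p) with p = softmax(logits)."""
    z = logits - logits.max()                 # shift for numerical stability
    log_p = z - np.log(np.exp(z).sum())
    return -(q * log_p).sum()

logits = np.array([4.0, 1.0, 0.5, 0.2])       # toy 4-class example, true label 0
print(cross_entropy(logits, smoothed_targets(0, 4)))   # smoothed loss
print(cross_entropy(logits, np.eye(4)[0]))             # standard one-hot loss
```

Because every smoothed target q'(k) has a positive lower bound, the smoothed loss keeps growing if one logit dominates all others, which is exactly the over-confidence penalty described above.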
Our earlier experiments used momentum [19] with a decay of 0.9, while our best models were achieved using RMSProp [21] with a decay of 0.9 and ε = 1.0. We used a learning rate of 0.045, decayed every two epochs using an exponential rate of 0.94. In addition, gradient clipping [14] with threshold 2.0 was found to be useful to stabilize the training. Model evaluations are performed using a running average of the parameters computed over time.

9. Performance on Lower Resolution Input

A typical use-case of vision networks is the post-classification of detections, for example in the Multibox [4] context. This includes the analysis of a relatively small patch of the image containing a single object with some context. The task is to decide whether the center part of the patch corresponds to some object and to determine the class of the object if it does. The challenge is that objects tend to be relatively small and low-resolution. This raises the question of how to properly deal with lower resolution input.

The common wisdom is that models employing higher resolution receptive fields tend to result in significantly improved recognition performance. However, it is important to distinguish between the effect of the increased resolution of the first layer receptive field and the effects of larger model capacity and computation. If we just change the resolution of the input without further adjustment to the model, then we end up using computationally much cheaper models to solve more difficult tasks. Of course, it is natural that these solutions already lose out because of the reduced computational effort. In order to make an accurate assessment, the model needs to analyze vague hints to be able to "hallucinate" the fine details. This is computationally costly. The question remains, therefore: how much
does higher input resolution help if the computational effort is kept constant? One simple way to ensure constant effort is to reduce the strides of the first two layers in the case of lower resolution input, or to simply remove the first pooling layer of the network. For this purpose we have performed the following three experiments:

1. 299x299 receptive field with stride 2 and maximum pooling after the first layer.
2. 151x151 receptive field with stride 1 and maximum pooling after the first layer.
3. 79x79 receptive field with stride 1 and without pooling after the first layer.

All three networks have almost identical computational cost. Although the third network is slightly cheaper, the cost of the pooling layer is marginal (within 1% of the total cost of the network). In each case, the networks were trained until convergence and their quality was measured on the validation set of the ImageNet ILSVRC 2012 classification benchmark. The results can be seen in Table 2. Although the lower-resolution networks take longer to train, the quality of the final result is quite close to that of their higher resolution counterparts. However, if one would just naively reduce the network size according to the input resolution, then the network would perform much more poorly. However, this would be an unfair comparison, as we would be comparing a 16 times cheaper model on a more difficult task. The results of Table 2 also suggest that one might consider using dedicated high-cost low resolution networks for smaller objects in the R-CNN [5] context.

Receptive Field Size | Top-1 Accuracy (single frame)
79x79 | 75.2%
151x151 | 76.4%
299x299 | 76.6%

Table 2. Comparison of recognition performance when the size of the receptive field varies, but the computational cost is constant.

10. Experimental Results and Comparisons

Table 3 shows the experimental results for the recognition performance of our proposed architecture (Inception-v2) as described in Section 6. Each Inception-v2 line shows the result of the cumulative changes including the highlighted new modification plus all the earlier ones. Label Smoothing refers to the method described in Section 7. Factorized 7x7 includes a change that factorizes the first 7x7 convolutional layer into a sequence of 3x3 convolutional layers. BN-auxiliary refers to the version in which
the fully connected layer of the auxiliary classifier is also batch-normalized, not just the convolutions. We are referring to the model in the last row of Table 3 as Inception-v3 and evaluate its performance in the multi-crop and ensemble settings.

Network | Top-1 Error | Top-5 Error | Cost (Bn Ops)
GoogLeNet [20] | 29% | 9.2% | 1.5
BN-GoogLeNet | 26.8% | - | 1.5
BN-Inception [7] | 25.2% | 7.8% | 2.0
Inception-v2 | 23.4% | - | 3.8
Inception-v2 RMSProp | 23.1% | 6.3% | 3.8
Inception-v2 Label Smoothing | 22.8% | 6.1% | 3.8
Inception-v2 Factorized 7x7 | 21.6% | 5.8% | 4.8
Inception-v2 BN-auxiliary | 21.2% | 5.6% | 4.8

Table 3. Single crop experimental results comparing the cumulative effects of the various contributing factors. We compare our numbers with the best published single-crop inference results of Ioffe et al. [7]. For the "Inception-v2" lines, the changes are cumulative and each subsequent line includes the new change in addition to the previous ones. The last line, referring to all the changes, is what we refer to as "Inception-v3" below. Unfortunately, He et al. [6] report only 10-crop evaluation results, but not single crop results, which are reported in Table 4 below.

Network | Crops Evaluated | Top-1 Error | Top-5 Error
GoogLeNet [20] | 10 | - | 9.15%
GoogLeNet [20] | 144 | - | 7.89%
VGG [18] | - | 24.4% | 6.8%
BN-Inception [7] | 144 | 22% | 5.82%
PReLU [6] | 10 | 24.27% | 7.38%
PReLU [6] | - | 21.59% | 5.71%
Inception-v3 | 12 | 19.47% | 4.48%
Inception-v3 | 144 | 18.77% | 4.2%

Table 4. Single-model, multi-crop experimental results comparing the cumulative effects of the various contributing factors. We compare our numbers with the best published single-model inference results on the ILSVRC 2012 classification benchmark.

All our evaluations are done on the 48238 non-blacklisted examples of the ILSVRC 2012 validation set, as suggested by [16]. We have evaluated all the 50000 examples as well, and the results were roughly 0.1% worse in top-5 error and around 0.2% in top-1 error. In the upcoming version of this paper, we will verify our ensemble result on the test set, but our last evaluation of BN-Inception in spring [7] indicates that the test and validation set errors tend to correlate very well.
Network | Models Evaluated | Crops Evaluated | Top-1 Error | Top-5 Error
VGGNet [18] | 2 | - | 23.7% | 6.8%
GoogLeNet [20] | 7 | 144 | - | 6.67%
PReLU [6] | - | - | - | 4.94%
BN-Inception [7] | 6 | 144 | 20.1% | 4.9%
Inception-v3 | 4 | 144 | 17.2% | 3.58%*

Table 5. Ensemble evaluation results comparing multi-model, multi-crop reported results. Our numbers are compared with the best published ensemble inference results on the ILSVRC 2012 classification benchmark. *All results, but the top-5 ensemble result reported, are on the validation set. The ensemble yielded 3.46% top-5 error on the validation set.

11. Conclusions

We have provided several design principles to scale up convolutional networks and studied them in the context of the Inception architecture. This guidance can lead to high performance vision networks that have a relatively modest computation cost compared to simpler, more monolithic architectures. Our highest quality version of Inception-v3 reaches 21.2% top-1 and 5.6% top-5 error for single crop evaluation on the ILSVRC 2012 classification benchmark, setting a new state of the art. This is achieved with a relatively modest (2.5x) increase in computational cost compared to the network described in Ioffe et al. [7]. Still, our solution uses much less computation than the best published results based on denser networks: our model outperforms the results of He et al. [6], cutting the top-5 (top-1) error by 25% (14%) relative, while being six times cheaper computationally and using at least five times fewer parameters (estimated). Our ensemble of four Inception-v3 models reaches 3.5% top-5 error with multi-crop evaluation, which represents an over 25% reduction relative to the best published results and is almost half of the error of the winning GoogLeNet ensemble of ILSVRC 2014.

We have also demonstrated that high quality results can be reached with receptive field resolution as low as 79x79. This might prove to be helpful in systems for detecting relatively small objects. We have studied how factorizing convolutions and aggressive dimension reductions inside neural networks can result in networks with relatively low computational cost while maintaining high quality. The combination of lower parameter count and additional regularization with batch-normalized auxiliary classifiers and label-smoothing allows for training high quality networks on relatively modest sized training sets.

References

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. [2] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In Proceedings of The 32nd International Conference on Machine Learning, 2015. [3] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In Computer Vision - ECCV 2014, pages 184-199. Springer, 2014. [4] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2155-2162.
IEEE, 2014. [5] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich fea-ture hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014. [6] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. ar Xiv preprint ar Xiv:1502. 01852, 2015. [7] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Ma-chine Learning, pages 448-456, 2015. [8] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with con-volutional neural networks. In Computer Vision and Pat-tern Recognition (CVPR), 2014 IEEE Conference on, pages 1725-1732. IEEE, 2014. [9] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105, 2012. [10] A. Lavin. Fast algorithms for convolutional neural networks. ar Xiv preprint ar Xiv:1509. 09308, 2015. [11] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. ar Xiv preprint ar Xiv:1409. 5185, 2014. [12] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni-tion, pages 3431-3440, 2015. [13] Y. Movshovitz-Attias, Q. Yu, M. C. Stumpe, V. Shet, S. Arnoud, and L. Yatziv. Ontological supervision for fine grained classification of street view storefronts. In Proceed-ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1693-1702, 2015. [14] R. Pascanu, T. Mikolov, and Y. Bengio. On the diffi-culty of training recurrent neural networks. ar Xiv preprint ar Xiv:1211. 5063, 2012. [15] D. C. Psichogios and L. H. Ungar. Svd-net: an algorithm that automatically selects network structure. IEEE transac-tions on neural networks/a publication of the IEEE Neural Networks Council, 5(3):513-515, 1993.
[16] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. 2014. [17] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A uni-fied embedding for face recognition and clustering. ar Xiv preprint ar Xiv:1503. 03832, 2015. [18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. ar Xiv preprint ar Xiv:1409. 1556, 2014. [19] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Ma-chine Learning (ICML-13), volume 28, pages 1139-1147. JMLR Workshop and Conference Proceedings, May 2013. [20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015. [21] T. Tieleman and G. Hinton. Divide the gradient by a run-ning average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. Accessed: 2015-11-05. [22] A. Toshev and C. Szegedy. Deeppose: Human pose estima-tion via deep neural networks. In Computer Vision and Pat-tern Recognition (CVPR), 2014 IEEE Conference on, pages 1653-1660. IEEE, 2014. [23] N. Wang and D.-Y. Yeung. Learning a deep compact image representation for visual tracking. In Advances in Neural Information Processing Systems, pages 809-817, 2013.
Under review as a conference paper at ICLR 2017

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size

Forrest N. Iandola1, Song Han2, Matthew W. Moskewicz1, Khalid Ashraf1, William J. Dally2, Kurt Keutzer1
1DeepScale* & UC Berkeley, 2Stanford University
{forresti, moskewcz, kashraf, keutzer}@eecs.berkeley.edu, {songhan, dally}@stanford.edu

Abstract

Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet

1 Introduction and Motivation

Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accuracy, a CNN architecture with fewer parameters has several advantages:

More efficient distributed training. Communication among servers is the limiting factor to the scalability of distributed CNN training. For distributed data-parallel training, communication overhead is directly proportional to the number of parameters in the model (Iandola et al., 2016). In short, small models train faster due to requiring less communication.

Less overhead when exporting new models to clients. For autonomous driving, companies such as Tesla periodically copy new models from their servers to customers' cars. This practice is often referred to as an over-the-air update. Consumer Reports has found that the safety of Tesla's Autopilot semi-autonomous driving functionality has incrementally improved with recent over-the-air updates (Consumer Reports, 2016). However, over-the-air updates of today's typical CNN/DNN models can require large data transfers. With AlexNet, this would require 240MB of communication from the server to the car. Smaller models require less communication, making frequent updates more feasible.

Feasible FPGA and embedded deployment. FPGAs often have less than 10MB1 of on-chip memory and no off-chip memory or storage. For inference, a sufficiently small model could be stored directly on the FPGA instead of being bottlenecked by memory bandwidth (Qiu et al., 2016), while video frames stream through the FPGA in real time. Further, when deploying CNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model could be stored directly on-chip, and smaller models may enable the ASIC to fit on a smaller die.

* http://deepscale.ai
1 For example, the Xilinx Vertex-7 FPGA has a maximum of 8.5 MBytes (i.e. 68 Mbits) of on-chip memory and does not provide off-chip memory.
Under review as a conference paper at ICLR 2017 As you can see, there are several advantages of smaller CNN architectures. With this in mind, we focus directly on the problem of identifying a CNN architecture with fewer parameters but equivalent accuracy compared to a well-known model. We have discovered such an architecture, which we call Squeeze Net. In addition, we present our attempt at a more disciplined approach to searching the design space for novel CNN architectures. The rest of the paper is organized as follows. In Section 2 we review the related work. Then, in Sections 3 and 4 we describe and evaluate the Squeeze Net architecture. After that, we turn our attention to understanding how CNN architectural design choices impact model size and accuracy. We gain this understanding by exploring the design space of Squeeze Net-like architectures. In Section 5, we do design space exploration on the CNN microarchitecture, which we define as the organization and dimensionality of individual layers and modules. In Section 6, we do design space exploration on the CNN macroarchitecture, which we define as high-level organization of layers in a CNN. Finally, we conclude in Section 7. In short, Sections 3 and 4 are useful for CNN researchers as well as practitioners who simply want to apply Squeeze Net to a new application. The remaining sections are aimed at advanced researchers who intend to design their own CNN architectures. 2 R ELATED WORK 2. 1 M ODEL COMPRESSION The overarching goal of our work is to identify a model that has very few parameters while preserv-ing accuracy. To address this problem, a sensible approach is to take an existing CNN model and compress it in a lossy fashion. In fact, a research community has emerged around the topic of model compression, and several approaches have been reported. A fairly straightforward approach by Den-tonet al. is to apply singular value decomposition (SVD) to a pretrained CNN model (Denton et al., 2014). Han et al. developed Network Pruning, which begins with a pretrained model, then replaces parameters that are below a certain threshold with zeros to form a sparse matrix, and finally performs a few iterations of training on the sparse CNN (Han et al., 2015b). Recently, Han et al. extended their work by combining Network Pruning with quantization (to 8 bits or less) and huffman encoding to create an approach called Deep Compression (Han et al., 2015a), and further designed a hardware accelerator called EIE (Han et al., 2016a) that operates directly on the compressed model, achieving substantial speedups and energy savings. 2. 2 CNN M ICROARCHITECTURE Convolutions have been used in artificial neural networks for at least 25 years; Le Cun et al. helped to popularize CNNs for digit recognition applications in the late 1980s (Le Cun et al., 1989). In neural networks, convolution filters are typically 3D, with height, width, and channels as the key dimensions. When applied to images, CNN filters typically have 3 channels in their first layer (i. e. RGB), and in each subsequent layer Lithe filters have the same number of channels as Li-1has filters. The early work by Le Cun et al. (Le Cun et al., 1989) uses 5x5x Channels2filters, and the recent VGG (Simonyan & Zisserman, 2014) architectures extensively use 3x3 filters. Models such as Network-in-Network (Lin et al., 2013) and the Goog Le Net family of architectures (Szegedy et al., 2014; Ioffe & Szegedy, 2015; Szegedy et al., 2015; 2016) use 1x1 filters in some layers. 
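As a small illustration of how these filter dimensions compose, the following Python sketch (ours, not from the paper) counts the weights of a toy three-layer stack in which each layer's filters have as many channels as the previous layer has filters. The specific kernel sizes and filter counts are made up for the example, and bias terms are ignored.

```python
# Per-layer weight count for a stack of convolution layers: each filter is
# k x k x in_channels, and the next layer's in_channels equals this layer's
# number of filters.

def conv_params(in_channels, num_filters, k):
    return k * k * in_channels * num_filters

stack = [
    (3, 96, 7),     # RGB input -> 96 filters of 7x7x3
    (96, 128, 3),   # 128 filters of 3x3x96
    (128, 128, 1),  # 128 filters of 1x1x128 (9x fewer weights than 3x3x128)
]

for in_ch, filters, k in stack:
    print(f"{k}x{k}x{in_ch} x {filters} filters -> {conv_params(in_ch, filters, k):,} weights")
```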
With the trend of designing very deep CNNs, it becomes cumbersome to manually select filter di-mensions for each layer. To address this, various higher level building blocks, or modules, comprised of multiple convolution layers with a specific fixed organization have been proposed. For example, the Goog Le Net papers propose Inception modules, which are comprised of a number of different di-mensionalities of filters, usually including 1x1 and 3x3, plus sometimes 5x5 (Szegedy et al., 2014) and sometimes 1x3 and 3x1 (Szegedy et al., 2015). Many such modules are then combined, perhaps with additional ad-hoc layers, to form a complete network. We use the term CNN microarchitecture to refer to the particular organization and dimensions of the individual modules. 2. 3 CNN M ACROARCHITECTURE While the CNN microarchitecture refers to individual layers and modules, we define the CNN macroarchitecture as the system-level organization of multiple modules into an end-to-end CNN architecture. 2From now on, we will simply abbreviate Hx Wx Channels to Hx W. 2
Under review as a conference paper at ICLR 2017 Perhaps the mostly widely studied CNN macroarchitecture topic in the recent literature is the impact ofdepth (i. e. number of layers) in networks. Simoyan and Zisserman proposed the VGG (Simonyan & Zisserman, 2014) family of CNNs with 12 to 19 layers and reported that deeper networks produce higher accuracy on the Image Net-1k dataset (Deng et al., 2009). K. He et al. proposed deeper CNNs with up to 30 layers that deliver even higher Image Net accuracy (He et al., 2015a). The choice of connections across multiple layers or modules is an emerging area of CNN macroar-chitectural research. Residual Networks (Res Net) (He et al., 2015b) and Highway Networks (Sri-vastava et al., 2015) each propose the use of connections that skip over multiple layers, for example additively connecting the activations from layer 3 to the activations from layer 6. We refer to these connections as bypass connections. The authors of Res Net provide an A/B comparison of a 34-layer CNN with and without bypass connections; adding bypass connections delivers a 2 percentage-point improvement on Top-5 Image Net accuracy. 2. 4 N EURAL NETWORK DESIGN SPACE EXPLORATION Neural networks (including deep and convolutional NNs) have a large design space, with numerous options for microarchitectures, macroarchitectures, solvers, and other hyperparameters. It seems natural that the community would want to gain intuition about how these factors impact a NN's accuracy (i. e. the shape of the design space). Much of the work on design space exploration (DSE) of NNs has focused on developing automated approaches for finding NN architectures that deliver higher accuracy. These automated DSE approaches include bayesian optimization (Snoek et al., 2012), simulated annealing (Ludermir et al., 2006), randomized search (Bergstra & Bengio, 2012), and genetic algorithms (Stanley & Miikkulainen, 2002). To their credit, each of these papers pro-vides a case in which the proposed DSE approach produces a NN architecture that achieves higher accuracy compared to a representative baseline. However, these papers make no attempt to provide intuition about the shape of the NN design space. Later in this paper, we eschew automated ap-proaches-instead, we refactor CNNs in such a way that we can do principled A/B comparisons to investigate how CNN architectural decisions influence model size and accuracy. In the following sections, we first propose and evaluate the Squeeze Net architecture with and with-out model compression. Then, we explore the impact of design choices in microarchitecture and macroarchitecture for Squeeze Net-like CNN architectures. 3 S QUEEZE NET:PRESERVING ACCURACY WITH FEW PARAMETERS In this section, we begin by outlining our design strategies for CNN architectures with few param-eters. Then, we introduce the Fire module, our new building block out of which to build CNN architectures. Finally, we use our design strategies to construct Squeeze Net, which is comprised mainly of Fire modules. 3. 1 A RCHITECTURAL DESIGN STRATEGIES Our overarching objective in this paper is to identify CNN architectures that have few parameters while maintaining competitive accuracy. To achieve this, we employ three main strategies when designing CNN architectures: Strategy 1. Replace 3x3 filters with 1x1 filters. Given a budget of a certain number of convolution filters, we will choose to make the majority of these filters 1x1, since a 1x1 filter has 9X fewer parameters than a 3x3 filter. Strategy 2. 
Decrease the number of input channels to 3x3 filters. Consider a convolution layer that is comprised entirely of 3x3 filters. The total quantity of parameters in this layer is (number of input channels) * (number of filters) * (3*3). So, to maintain a small total number of parameters in a CNN, it is important not only to decrease the number of 3x3 filters (see Strategy 1 above), but also to decrease the number of input channels to the 3x3 filters. We decrease the number of input channels to 3x3 filters using squeeze layers, which we describe in the next section.

Strategy 3. Downsample late in the network so that convolution layers have large activation maps. In a convolutional network, each convolution layer produces an output activation map with a spatial resolution that is at least 1x1 and often much larger than 1x1. The height and width of these activation maps are controlled by: (1) the size of the input data (e.g. 256x256 images) and (2)
the choice of layers in which to downsample in the CNN architecture. Most commonly, downsampling is engineered into CNN architectures by setting the stride to be greater than 1 in some of the convolution or pooling layers (e.g. (Szegedy et al., 2014; Simonyan & Zisserman, 2014; Krizhevsky et al., 2012)). If early3 layers in the network have large strides, then most layers will have small activation maps. Conversely, if most layers in the network have a stride of 1, and the strides greater than 1 are concentrated toward the end4 of the network, then many layers in the network will have large activation maps. Our intuition is that large activation maps (due to delayed downsampling) can lead to higher classification accuracy, with all else held equal. Indeed, K. He and H. Sun applied delayed downsampling to four different CNN architectures, and in each case delayed downsampling led to higher classification accuracy (He & Sun, 2015).

Strategies 1 and 2 are about judiciously decreasing the quantity of parameters in a CNN while attempting to preserve accuracy. Strategy 3 is about maximizing accuracy on a limited budget of parameters. Next, we describe the Fire module, which is our building block for CNN architectures that enables us to successfully employ Strategies 1, 2, and 3.

Figure 1: Microarchitectural view: Organization of convolution filters in the Fire module. In this example, s1x1 = 3, e1x1 = 4, and e3x3 = 4. We illustrate the convolution filters but not the activations.

3.2 The Fire Module

We define the Fire module as follows. A Fire module is comprised of: a squeeze convolution layer (which has only 1x1 filters), feeding into an expand layer that has a mix of 1x1 and 3x3 convolution filters; we illustrate this in Figure 1. The liberal use of 1x1 filters in Fire modules is an application of Strategy 1 from Section 3.1. We expose three tunable dimensions (hyperparameters) in a Fire module: s1x1, e1x1, and e3x3. In a Fire module, s1x1 is the number of filters in the squeeze layer (all 1x1), e1x1 is the number of 1x1 filters in the expand layer, and e3x3 is the number of 3x3 filters in the expand layer. When we use Fire modules, we set s1x1 to be less than (e1x1 + e3x3), so the squeeze layer helps to limit the number of input channels to the 3x3 filters, as per Strategy 2 from Section 3.1.

3.3 The SqueezeNet Architecture

We now describe the SqueezeNet CNN architecture. We illustrate in Figure 2 that SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), ending with a final conv layer (conv10). We gradually increase the number of filters per Fire module from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10; these relatively late placements of pooling are per Strategy 3 from Section 3.1. We present the full SqueezeNet architecture in Table 1.

3 In our terminology, an "early" layer is close to the input data.
4 In our terminology, the "end" of the network is the classifier.
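As an illustration of this definition, here is a minimal tf.keras sketch of a Fire module (our own code, not the released Caffe implementation). The expand layer is built from two parallel convolutions whose outputs are concatenated along the channel axis, and the hyperparameter values in the example follow the fire2 row of Table 1.

```python
# Fire module: 1x1 "squeeze" layer feeding an "expand" layer that mixes 1x1
# and 3x3 filters, with the two expand branches concatenated channel-wise.
import tensorflow as tf
from tensorflow.keras import layers

def fire_module(x, s1x1, e1x1, e3x3):
    squeeze = layers.Conv2D(s1x1, 1, activation='relu')(x)
    expand_1x1 = layers.Conv2D(e1x1, 1, activation='relu')(squeeze)
    # 'same' padding provides the 1-pixel zero border so both expand branches
    # produce outputs of equal height and width.
    expand_3x3 = layers.Conv2D(e3x3, 3, padding='same', activation='relu')(squeeze)
    return layers.Concatenate(axis=-1)([expand_1x1, expand_3x3])

inputs = tf.keras.Input(shape=(55, 55, 96))                 # conv1/maxpool1 output
outputs = fire_module(inputs, s1x1=16, e1x1=64, e3x3=64)    # -> (55, 55, 128)
print(tf.keras.Model(inputs, outputs).count_params())       # 11920, as in Table 1
```

With these settings the squeeze layer limits the 3x3 filters to 16 input channels, which is exactly how Strategy 2 keeps the parameter count of the module small.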
Figure 2: Macroarchitectural view of our SqueezeNet architecture. Left: SqueezeNet (Section 3.3); Middle: SqueezeNet with simple bypass (Section 6); Right: SqueezeNet with complex bypass (Section 6).
3.3.1 OTHER SQUEEZENET DETAILS
For brevity, we have omitted a number of details and design choices about SqueezeNet from Table 1 and Figure 2. We provide these design choices in the following. The intuition behind these choices may be found in the papers cited below. So that the output activations from 1x1 and 3x3 filters have the same height and width, we add a 1-pixel border of zero-padding in the input data to 3x3 filters of expand modules. ReLU (Nair & Hinton, 2010) is applied to activations from squeeze and expand layers. Dropout (Srivastava et al., 2014) with a ratio of 50% is applied after the fire9 module. Note the lack of fully-connected layers in SqueezeNet; this design choice was inspired by the NiN (Lin et al., 2013) architecture. When training SqueezeNet, we begin with a learning rate of 0.04, and we linearly decrease the learning rate throughout training, as described in (Mishkin et al., 2016). For details on the training protocol (e.g. batch size, learning rate, parameter initialization), please refer to our Caffe-compatible configuration files located here: https://github.com/DeepScale/SqueezeNet. The Caffe framework does not natively support a convolution layer that contains multiple filter resolutions (e.g. 1x1 and 3x3) (Jia et al., 2014). To get around this, we implement our expand layer with two separate convolution layers: a layer with 1x1 filters, and a layer with 3x3 filters. Then, we concatenate the outputs of these layers together in the channel dimension. This is numerically equivalent to implementing one layer that contains both 1x1 and 3x3 filters. We released the SqueezeNet configuration files in the format defined by the Caffe CNN framework. However, in addition to Caffe, several other CNN frameworks have emerged, including MXNet (Chen et al., 2015a), Chainer (Tokui et al., 2015), Keras (Chollet, 2016), and Torch (Collobert et al., 2011). Each of these has its own native format for representing a CNN architecture. That said, most of these libraries use the same underlying computational back-ends such as cuDNN (Chetlur et al., 2014) and MKL-DNN (Das et al., 2016).
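Putting together the Fire module definition from Section 3.2 with the two-convolution expand-layer implementation just described, here is a minimal PyTorch-style sketch of a Fire module. It is an independent illustration rather than the authors' Caffe configuration; the class name is ours, and only the fire2 example dimensions at the end are taken from Table 1.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Minimal Fire module: squeeze (1x1) -> parallel expand (1x1 and 3x3) -> concat."""
    def __init__(self, in_channels, s1x1, e1x1, e3x3):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, s1x1, kernel_size=1)
        self.expand1x1 = nn.Conv2d(s1x1, e1x1, kernel_size=1)
        # 1-pixel zero-padding keeps the 3x3 branch the same height/width as the 1x1 branch.
        self.expand3x3 = nn.Conv2d(s1x1, e3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Two separate convolutions concatenated along the channel dimension are
        # numerically equivalent to one layer with mixed 1x1 and 3x3 filters.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example with the fire2 dimensions from Table 1: 96 input channels, s1x1=16, e1x1=e3x3=64.
module = Fire(96, 16, 64, 64)
out = module(torch.randn(1, 96, 55, 55))   # -> shape (1, 128, 55, 55)
```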
The research community has ported the SqueezeNet CNN architecture for compatibility with a number of other CNN software frameworks:
MXNet (Chen et al., 2015a) port of SqueezeNet: (Haria, 2016)
Chainer (Tokui et al., 2015) port of SqueezeNet: (Bell, 2016)
Keras (Chollet, 2016) port of SqueezeNet: (DT42, 2016)
Torch (Collobert et al., 2011) port of SqueezeNet's Fire modules: (Waghmare, 2016)
4 EVALUATION OF SQUEEZENET
We now turn our attention to evaluating SqueezeNet. In each of the CNN model compression papers reviewed in Section 2.1, the goal was to compress an AlexNet (Krizhevsky et al., 2012) model that was trained to classify images using the ImageNet (Deng et al., 2009) (ILSVRC 2012) dataset. Therefore, we use AlexNet5 and the associated model compression results as a basis for comparison when evaluating SqueezeNet.
Table 1: SqueezeNet architectural dimensions. (The formatting of this table was inspired by the Inception2 paper (Ioffe & Szegedy, 2015).)
layer name/type | output size | filter size / stride (if not a fire layer) | depth | s1x1 (#1x1 squeeze) | e1x1 (#1x1 expand) | e3x3 (#3x3 expand) | s1x1 sparsity | e1x1 sparsity | e3x3 sparsity | # bits | #parameter before pruning | #parameter after pruning
input image | 224x224x3 | - | | | | | | | | | |
conv1 | 111x111x96 | 7x7/2 (x96) | 1 | | | | 100% (7x7) | | | 6bit | 14,208 | 14,208
maxpool1 | 55x55x96 | 3x3/2 | 0 | | | | | | | | |
fire2 | 55x55x128 | | 2 | 16 | 64 | 64 | 100% | 100% | 33% | 6bit | 11,920 | 5,746
fire3 | 55x55x128 | | 2 | 16 | 64 | 64 | 100% | 100% | 33% | 6bit | 12,432 | 6,258
fire4 | 55x55x256 | | 2 | 32 | 128 | 128 | 100% | 100% | 33% | 6bit | 45,344 | 20,646
maxpool4 | 27x27x256 | 3x3/2 | 0 | | | | | | | | |
fire5 | 27x27x256 | | 2 | 32 | 128 | 128 | 100% | 100% | 33% | 6bit | 49,440 | 24,742
fire6 | 27x27x384 | | 2 | 48 | 192 | 192 | 100% | 50% | 33% | 6bit | 104,880 | 44,700
fire7 | 27x27x384 | | 2 | 48 | 192 | 192 | 50% | 100% | 33% | 6bit | 111,024 | 46,236
fire8 | 27x27x512 | | 2 | 64 | 256 | 256 | 100% | 50% | 33% | 6bit | 188,992 | 77,581
maxpool8 | 13x13x512 | 3x3/2 | 0 | | | | | | | | |
fire9 | 13x13x512 | | 2 | 64 | 256 | 256 | 50% | 100% | 30% | 6bit | 197,184 | 77,581
conv10 | 13x13x1000 | 1x1/1 (x1000) | 1 | | | | 20% (3x3) | | | 6bit | 513,000 | 103,400
avgpool10 | 1x1x1000 | 13x13/1 | 0 | | | | | | | | |
(totals) | | | | | | | | | | | 1,248,424 (total) | 421,098 (total)
In Table 2, we review SqueezeNet in the context of recent model compression results. The SVD-based approach is able to compress a pretrained AlexNet model by a factor of 5x, while diminishing top-1 accuracy to 56.0% (Denton et al., 2014). Network Pruning achieves a 9x reduction in model size while maintaining the baseline of 57.2% top-1 and 80.3% top-5 accuracy on ImageNet (Han et al., 2015b). Deep Compression achieves a 35x reduction in model size while still maintaining the baseline accuracy level (Han et al., 2015a). Now, with SqueezeNet, we achieve a 50x reduction in model size compared to AlexNet, while meeting or exceeding the top-1 and top-5 accuracy of AlexNet. We summarize all of the aforementioned results in Table 2.
It appears that we have surpassed the state-of-the-art results from the model compression community: even when using uncompressed 32-bit values to represent the model, SqueezeNet has a 1.4× smaller model size than the best efforts from the model compression community while maintaining or exceeding the baseline accuracy. Until now, an open question has been: are small models amenable to compression, or do small models "need" all of the representational power afforded by dense floating-point values?
5 Our baseline is bvlc_alexnet from the Caffe codebase (Jia et al., 2014).
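Before turning to the compression results, note that the per-layer parameter counts in Table 1 can be reproduced directly from the layer dimensions, assuming each count includes one bias per filter. The helper functions below are our own sketch of that bookkeeping, not code from the paper.

```python
def conv_params(in_ch, filters, k):
    # k x k convolution with one bias per filter.
    return in_ch * filters * k * k + filters

def fire_params(in_ch, s1x1, e1x1, e3x3):
    squeeze = conv_params(in_ch, s1x1, 1)
    expand = conv_params(s1x1, e1x1, 1) + conv_params(s1x1, e3x3, 3)
    return squeeze + expand

print(conv_params(3, 96, 7))           # conv1:  14,208
print(fire_params(96, 16, 64, 64))     # fire2:  11,920
print(fire_params(128, 32, 128, 128))  # fire4:  45,344
print(conv_params(512, 1000, 1))       # conv10: 513,000
```

Summing every layer this way recovers the 1,248,424-parameter total in Table 1; at 4 bytes per parameter that is roughly 4.8MB (counting 2^20 bytes per MB), the uncompressed SqueezeNet size quoted in Table 2.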
Table 2: Comparing SqueezeNet to model compression approaches. By model size, we mean the number of bytes required to store all of the parameters in the trained model.
CNN architecture | Compression Approach | Data Type | Original → Compressed Model Size | Reduction in Model Size vs. AlexNet | Top-1 ImageNet Accuracy | Top-5 ImageNet Accuracy
AlexNet | None (baseline) | 32 bit | 240MB | 1x | 57.2% | 80.3%
AlexNet | SVD (Denton et al., 2014) | 32 bit | 240MB → 48MB | 5x | 56.0% | 79.4%
AlexNet | Network Pruning (Han et al., 2015b) | 32 bit | 240MB → 27MB | 9x | 57.2% | 80.3%
AlexNet | Deep Compression (Han et al., 2015a) | 5-8 bit | 240MB → 6.9MB | 35x | 57.2% | 80.3%
SqueezeNet (ours) | None | 32 bit | 4.8MB | 50x | 57.5% | 80.3%
SqueezeNet (ours) | Deep Compression | 8 bit | 4.8MB → 0.66MB | 363x | 57.5% | 80.3%
SqueezeNet (ours) | Deep Compression | 6 bit | 4.8MB → 0.47MB | 510x | 57.5% | 80.3%
To find out, we applied Deep Compression (Han et al., 2015a) to SqueezeNet, using 33% sparsity6 and 8-bit quantization. This yields a 0.66MB model (363× smaller than 32-bit AlexNet) with equivalent accuracy to AlexNet. Further, applying Deep Compression with 6-bit quantization and 33% sparsity on SqueezeNet, we produce a 0.47MB model (510× smaller than 32-bit AlexNet) with equivalent accuracy. Our small model is indeed amenable to compression.
In addition, these results demonstrate that Deep Compression (Han et al., 2015a) not only works well on CNN architectures with many parameters (e.g. AlexNet and VGG), but it is also able to compress the already compact, fully convolutional SqueezeNet architecture. Deep Compression compressed SqueezeNet by 10× while preserving the baseline accuracy. In summary: by combining CNN architectural innovation (SqueezeNet) with state-of-the-art compression techniques (Deep Compression), we achieved a 510× reduction in model size with no decrease in accuracy compared to the baseline.
Finally, note that Deep Compression (Han et al., 2015a) uses a codebook as part of its scheme for quantizing CNN parameters to 6 or 8 bits of precision. Therefore, on most commodity processors, it is not trivial to achieve a speedup of 32/8 = 4x with 8-bit quantization or 32/6 ≈ 5.3x with 6-bit quantization using the scheme developed in Deep Compression. However, Han et al. developed custom hardware, the Efficient Inference Engine (EIE), that can compute codebook-quantized CNNs more efficiently (Han et al., 2016a). In addition, in the months since we released SqueezeNet, P. Gysel developed a strategy called Ristretto for linearly quantizing SqueezeNet to 8 bits (Gysel, 2016). Specifically, Ristretto does computation in 8 bits, and it stores parameters and activations in 8-bit data types. Using the Ristretto strategy for 8-bit computation in SqueezeNet inference, Gysel observed less than 1 percentage-point of drop in accuracy when using 8-bit instead of 32-bit data types.
5 CNN MICROARCHITECTURE DESIGN SPACE EXPLORATION
So far, we have proposed architectural design strategies for small models, followed these principles to create SqueezeNet, and discovered that SqueezeNet is 50x smaller than AlexNet with equivalent accuracy. However, SqueezeNet and other models reside in a broad and largely unexplored design space of CNN architectures. Now, in Sections 5 and 6, we explore several aspects of the design space. We divide this architectural exploration into two main topics: microarchitectural exploration (per-module layer dimensions and configurations) and macroarchitectural exploration (high-level end-to-end organization of modules and other layers).
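As a quick sanity check on the size ratios in Table 2 and the quantization speedup bounds above, the arithmetic can be reproduced in a few lines (a back-of-the-envelope script of ours, using the rounded model sizes quoted in the table):

```python
alexnet_mb = 240.0   # 32-bit AlexNet model size from Table 2

for name, size_mb in [("SqueezeNet (32 bit)", 4.8),
                      ("SqueezeNet + Deep Compression (8 bit)", 0.66),
                      ("SqueezeNet + Deep Compression (6 bit)", 0.47)]:
    print(name, round(alexnet_mb / size_mb, 1), "x smaller than 32-bit AlexNet")
# -> 50.0x, 363.6x, 510.6x; Table 2 reports 50x, 363x, and 510x, presumably computed
#    from exact byte counts rather than these rounded MB figures.

# Upper bounds on the speedup available from narrower weights alone; with codebook
# quantization these are not trivial to realize on commodity processors.
print(32 / 8, 32 / 6)   # 4.0 and ~5.33
```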
In this section, we design and execute experiments with the goal of providing intuition about the shape of the microarchitectural design space with respect to the design strategies that we proposed in Section 3.1. Note that our goal here is not to maximize accuracy in every experiment, but rather to understand the impact of CNN architectural choices on model size and accuracy.
6 Note that, due to the overhead of storing sparse matrix indices, 33% sparsity leads to somewhat less than a 3× decrease in model size.
Figure 3: Microarchitectural design space exploration. (a) Exploring the impact of the squeeze ratio (SR) on model size and accuracy. (b) Exploring the impact of the ratio of 3x3 filters in expand layers (pct_3x3) on model size and accuracy.
5.1 CNN MICROARCHITECTURE METAPARAMETERS
In SqueezeNet, each Fire module has three dimensional hyperparameters that we defined in Section 3.2: s1x1, e1x1, and e3x3. SqueezeNet has 8 Fire modules with a total of 24 dimensional hyperparameters. To do broad sweeps of the design space of SqueezeNet-like architectures, we define the following set of higher level metaparameters which control the dimensions of all Fire modules in a CNN. We define base_e as the number of expand filters in the first Fire module in a CNN. After every freq Fire modules, we increase the number of expand filters by incr_e. In other words, for Fire module i, the number of expand filters is e_i = base_e + (incr_e * ⌊i/freq⌋). In the expand layer of a Fire module, some filters are 1x1 and some are 3x3; we define e_i = e_i,1x1 + e_i,3x3 with pct_3x3 (in the range [0,1], shared over all Fire modules) as the percentage of expand filters that are 3x3. In other words, e_i,3x3 = e_i * pct_3x3, and e_i,1x1 = e_i * (1 − pct_3x3). Finally, we define the number of filters in the squeeze layer of a Fire module using a metaparameter called the squeeze ratio (SR) (again, in the range [0,1], shared by all Fire modules): s_i,1x1 = SR * e_i (or equivalently s_i,1x1 = SR * (e_i,1x1 + e_i,3x3)). SqueezeNet (Table 1) is an example architecture that we generated with the aforementioned set of metaparameters. Specifically, SqueezeNet has the following metaparameters: base_e = 128, incr_e = 128, pct_3x3 = 0.5, freq = 2, and SR = 0.125.
5.2 SQUEEZE RATIO
In Section 3.1, we proposed decreasing the number of parameters by using squeeze layers to decrease the number of input channels seen by 3x3 filters. We defined the squeeze ratio (SR) as the ratio between the number of filters in squeeze layers and the number of filters in expand layers. We now design an experiment to investigate the effect of the squeeze ratio on model size and accuracy. In these experiments, we use SqueezeNet (Figure 2) as a starting point. As in SqueezeNet, these experiments use the following metaparameters: base_e = 128, incr_e = 128, pct_3x3 = 0.5, and freq = 2. We train multiple models, where each model has a different squeeze ratio (SR)7 in the range [0.125, 1.0]. In Figure 3(a), we show the results of this experiment, where each point on the graph is an independent model that was trained from scratch. SqueezeNet is the SR=0.125 point in this figure.8 From this figure, we learn that increasing SR beyond 0.125 can further increase ImageNet top-5 accuracy from 80.3% (i.e. AlexNet-level) with a 4.8MB model to 86.0% with a 19MB model. Accuracy plateaus at 86.0% with SR=0.75 (a 19MB model), and setting SR=1.0 further increases model size without improving accuracy.
7 Note that, for a given model, all Fire layers share the same squeeze ratio.
8 Note that we named it SqueezeNet because it has a low squeeze ratio (SR). That is, the squeeze layers in SqueezeNet have 0.125x the number of filters as the expand layers.
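The metaparameter definitions in Section 5.1 are easy to check mechanically. The sketch below, a helper of our own rather than code from the paper, generates per-module Fire dimensions from (base_e, incr_e, freq, pct_3x3, SR); with SqueezeNet's settings it reproduces the fire2-fire9 dimensions listed in Table 1.

```python
def fire_dims(num_modules, base_e, incr_e, freq, pct_3x3, sr):
    dims = []
    for i in range(num_modules):             # i = 0 corresponds to fire2
        e_i = base_e + incr_e * (i // freq)  # e_i = base_e + incr_e * floor(i / freq)
        e_3x3 = int(e_i * pct_3x3)
        e_1x1 = e_i - e_3x3
        s_1x1 = int(sr * e_i)
        dims.append((s_1x1, e_1x1, e_3x3))
    return dims

# SqueezeNet's metaparameters: base_e=128, incr_e=128, pct_3x3=0.5, freq=2, SR=0.125.
for name, d in zip([f"fire{i}" for i in range(2, 10)],
                   fire_dims(8, 128, 128, 2, 0.5, 0.125)):
    print(name, d)   # fire2 (16, 64, 64) ... fire9 (64, 256, 256)
```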
5.3 TRADING OFF 1X1 AND 3X3 FILTERS
In Section 3.1, we proposed decreasing the number of parameters in a CNN by replacing some 3x3 filters with 1x1 filters. An open question is, how important is spatial resolution in CNN filters? The VGG (Simonyan & Zisserman, 2014) architectures have 3x3 spatial resolution in most layers' filters; GoogLeNet (Szegedy et al., 2014) and Network-in-Network (NiN) (Lin et al., 2013) have 1x1 filters in some layers. In GoogLeNet and NiN, the authors simply propose a specific quantity of 1x1 and 3x3 filters without further analysis.9 Here, we attempt to shed light on how the proportion of 1x1 and 3x3 filters affects model size and accuracy. We use the following metaparameters in this experiment: base_e = incr_e = 128, freq = 2, SR = 0.500, and we vary pct_3x3 from 1% to 99%. In other words, each Fire module's expand layer has a predefined number of filters partitioned between 1x1 and 3x3, and here we turn the knob on these filters from "mostly 1x1" to "mostly 3x3". As in the previous experiment, these models have 8 Fire modules, following the same organization of layers as in Figure 2. We show the results of this experiment in Figure 3(b). Note that the 13MB models in Figure 3(a) and Figure 3(b) are the same architecture: SR = 0.500 and pct_3x3 = 50%. We see in Figure 3(b) that the top-5 accuracy plateaus at 85.6% using 50% 3x3 filters, and further increasing the percentage of 3x3 filters leads to a larger model size but provides no improvement in accuracy on ImageNet.
6 CNN MACROARCHITECTURE DESIGN SPACE EXPLORATION
So far we have explored the design space at the microarchitecture level, i.e. the contents of individual modules of the CNN. Now, we explore design decisions at the macroarchitecture level concerning the high-level connections among Fire modules. Inspired by ResNet (He et al., 2015b), we explored three different architectures:
- Vanilla SqueezeNet (as per the prior sections).
- SqueezeNet with simple bypass connections between some Fire modules. (Inspired by (Srivastava et al., 2015; He et al., 2015b).)
- SqueezeNet with complex bypass connections between the remaining Fire modules.
We illustrate these three variants of SqueezeNet in Figure 2. Our simple bypass architecture adds bypass connections around Fire modules 3, 5, 7, and 9, requiring these modules to learn a residual function between input and output. As in ResNet, to implement a bypass connection around Fire3, we set the input to Fire4 equal to (output of Fire2 + output of Fire3), where the + operator is elementwise addition. This changes the regularization applied to the parameters of these Fire modules, and, as per ResNet, can improve the final accuracy and/or ability to train the full model. One limitation is that, in the straightforward case, the number of input channels and the number of output channels have to be the same; as a result, only half of the Fire modules can have simple bypass connections, as shown in the middle diagram of Figure 2. When the "same number of channels" requirement can't be met, we use a complex bypass connection, as illustrated on the right of Figure 2. While a simple bypass is "just a wire," we define a complex bypass as a bypass that includes a 1x1 convolution layer with the number of filters set equal to the number of output channels that are needed. Note that complex bypass connections add extra parameters to the model, while simple bypass connections do not. In addition to changing the regularization, it is intuitive to us that adding bypass connections would help to alleviate the representational bottleneck introduced by squeeze layers.
In SqueezeNet, the squeeze ratio (SR) is 0.125, meaning that every squeeze layer has 8x fewer output channels than the accompanying expand layer. Due to this severe dimensionality reduction, a limited amount of information can pass through squeeze layers. However, by adding bypass connections to SqueezeNet, we open up avenues for information to flow around the squeeze layers. We trained SqueezeNet with the three macroarchitectures in Figure 2 and compared the accuracy and model size in Table 3. We fixed the microarchitecture to match SqueezeNet as described in Table 1 throughout the macroarchitecture exploration. Complex and simple bypass connections both yielded an accuracy improvement over the vanilla SqueezeNet architecture. Interestingly, the simple bypass enabled a larger accuracy improvement than the complex bypass.
9 To be clear, each filter is 1x1xChannels or 3x3xChannels, which we abbreviate to 1x1 and 3x3.
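To make the two bypass types concrete, here is a minimal PyTorch-style sketch that reuses the Fire class from the sketch in Section 3.3.1. It is our own illustration of the idea described above, not the authors' implementation; the channel counts follow Table 1.

```python
import torch.nn as nn

class SimpleBypass(nn.Module):
    """Simple bypass: 'just a wire'. Requires input/output channel counts to match."""
    def __init__(self, fire):
        super().__init__()
        self.fire = fire

    def forward(self, x):
        return self.fire(x) + x            # elementwise addition, no extra parameters

class ComplexBypass(nn.Module):
    """Complex bypass: a 1x1 convolution on the skip path matches the channel count."""
    def __init__(self, fire, in_channels, out_channels):
        super().__init__()
        self.fire = fire
        self.skip = nn.Conv2d(in_channels, out_channels, kernel_size=1)  # adds parameters

    def forward(self, x):
        return self.fire(x) + self.skip(x)

# fire3 keeps 128 channels, so a simple bypass fits; fire4 grows 128 -> 256 channels,
# so it needs a complex bypass (as in the right-hand diagram of Figure 2).
fire3 = Fire(128, 16, 64, 64)      # Fire class from the earlier sketch
fire4 = Fire(128, 32, 128, 128)
block3 = SimpleBypass(fire3)
block4 = ComplexBypass(fire4, 128, 256)
```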
Table 3: SqueezeNet accuracy and model size using different macroarchitecture configurations
Architecture | Top-1 Accuracy | Top-5 Accuracy | Model Size
Vanilla SqueezeNet | 57.5% | 80.3% | 4.8MB
SqueezeNet + Simple Bypass | 60.4% | 82.5% | 4.8MB
SqueezeNet + Complex Bypass | 58.8% | 82.0% | 7.7MB
Adding the simple bypass connections yielded an increase of 2.9 percentage-points in top-1 accuracy and 2.2 percentage-points in top-5 accuracy without increasing model size.
7 CONCLUSIONS
In this paper, we have proposed steps toward a more disciplined approach to the design-space exploration of convolutional neural networks. Toward this goal we have presented SqueezeNet, a CNN architecture that has 50× fewer parameters than AlexNet and maintains AlexNet-level accuracy on ImageNet. We also compressed SqueezeNet to less than 0.5MB, or 510× smaller than AlexNet without compression. Since we released this paper as a technical report in 2016, Song Han and his collaborators have experimented further with SqueezeNet and model compression. Using a new approach called Dense-Sparse-Dense (DSD) (Han et al., 2016b), Han et al. use model compression during training as a regularizer to further improve accuracy, producing a compressed set of SqueezeNet parameters that is 1.2 percentage-points more accurate on ImageNet-1k, and also producing an uncompressed set of SqueezeNet parameters that is 4.3 percentage-points more accurate, compared to our results in Table 2.
We mentioned near the beginning of this paper that small models are more amenable to on-chip implementations on FPGAs. Since we released the SqueezeNet model, Gschwend has developed a variant of SqueezeNet and implemented it on an FPGA (Gschwend, 2016). As we anticipated, Gschwend was able to store the parameters of a SqueezeNet-like model entirely within the FPGA and eliminate the need for off-chip memory accesses to load model parameters.
In the context of this paper, we focused on ImageNet as a target dataset. However, it has become common practice to apply ImageNet-trained CNN representations to a variety of applications such as fine-grained object recognition (Zhang et al., 2013; Donahue et al., 2013), logo identification in images (Iandola et al., 2015), and generating sentences about images (Fang et al., 2015). ImageNet-trained CNNs have also been applied to a number of applications pertaining to autonomous driving, including pedestrian and vehicle detection in images (Iandola et al., 2014; Girshick et al., 2015; Ashraf et al., 2016) and videos (Chen et al., 2015b), as well as segmenting the shape of the road (Badrinarayanan et al., 2015). We think SqueezeNet will be a good candidate CNN architecture for a variety of applications, especially those in which small model size is of importance.
SqueezeNet is one of several new CNNs that we have discovered while broadly exploring the design space of CNN architectures. We hope that SqueezeNet will inspire the reader to consider and explore the broad range of possibilities in the design space of CNN architectures and to perform that exploration in a more systematic manner.
REFERENCES
Khalid Ashraf, Bichen Wu, Forrest N. Iandola, Matthew W. Moskewicz, and Kurt Keutzer. Shallow networks for high-accuracy road object-detection. arXiv:1606.01561, 2016.
Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv:1511.00561, 2015.
Eddie Bell.
An implementation of SqueezeNet in Chainer. https://github.com/ejlb/squeezenet-chainer, 2016.
J. Bergstra and Y. Bengio. An optimization methodology for neural network weights and architectures. JMLR, 2012.
Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274, 2015a.
Under review as a conference paper at ICLR 2017 Xiaozhi Chen, Kaustav Kundu, Yukun Zhu, Andrew G Berneshawi, Huimin Ma, Sanja Fidler, and Raquel Urtasun. 3d object proposals for accurate object class detection. In NIPS, 2015b. Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catan-zaro, and Evan Shelhamer. cu DNN: efficient primitives for deep learning. ar Xiv:1410. 0759, 2014. Francois Chollet. Keras: Deep learning library for theano and tensorflow. https://keras. io, 2016. Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environment for machine learning. In NIPS Big Learn Workshop, 2011. Consumer Reports. Teslas new autopilot: Better but still needs improvement. http://www. consumerreports. org/tesla/ tesla-new-autopilot-better-but-needs-improvement, 2016. Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, Karthikeyan Vaidyanathan, Srinivas Srid-haran, Dhiraj D. Kalamkar, Bharat Kaul, and Pradeep Dubey. Distributed deep learning using synchronous stochastic gradient descent. ar Xiv:1602. 06709, 2016. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Image Net: A large-scale hierarchical image database. In CVPR, 2009. E. L Denton, W. Zaremba, J. Bruna, Y. Le Cun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, 2014. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. ar Xiv:1310. 1531, 2013. DT42. Squeezenet keras implementation. https://github. com/DT42/squeezenet_ demo, 2016. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. From captions to visual concepts and back. In CVPR, 2015. Ross B. Girshick, Forrest N. Iandola, Trevor Darrell, and Jitendra Malik. Deformable part models are convolutional neural networks. In CVPR, 2015. David Gschwend. Zynqnet: An fpga-accelerated embedded convolutional neural network. Master's thesis, Swiss Federal Institute of Technology Zurich (ETH-Zurich), 2016. Philipp Gysel. Ristretto: Hardware-oriented approximation of convolutional neural networks. ar Xiv:1605. 06402, 2016. S. Han, H. Mao, and W. Dally. Deep compression: Compressing DNNs with pruning, trained quantization and huffman coding. arxiv:1510. 00149v3, 2015a. S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural networks. In NIPS, 2015b. Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. Eie: Efficient inference engine on compressed deep neural network. International Sympo-sium on Computer Architecture (ISCA), 2016a. Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Shijian Tang, Erich Elsen, Bryan Catanzaro, John Tran, and William J. Dally. Dsd: Regularizing deep neural networks with dense-sparse-dense training flow. ar Xiv:1607. 04381, 2016b. Guo Haria. convert squeezenet to mxnet. https://github. com/haria/Squeeze Net/ commit/0cf57539375fd5429275af36fc94c774503427c3, 2016. 11
Under review as a conference paper at ICLR 2017 K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level perfor-mance on imagenet classification. In ICCV, 2015a. Kaiming He and Jian Sun. Convolutional neural networks at constrained time cost. In CVPR, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. ar Xiv:1512. 03385, 2015b. Forrest N. Iandola, Matthew W. Moskewicz, Sergey Karayev, Ross B. Girshick, Trevor Darrell, and Kurt Keutzer. Densenet: Implementing efficient convnet descriptor pyramids. ar Xiv:1404. 1869, 2014. Forrest N. Iandola, Anting Shen, Peter Gao, and Kurt Keutzer. Deep Logo: Hitting logo recognition with the deep neural network hammer. ar Xiv:1510. 02131, 2015. Forrest N. Iandola, Khalid Ashraf, Matthew W. Moskewicz, and Kurt Keutzer. Fire Caffe: near-linear acceleration of deep neural network training on compute clusters. In CVPR, 2016. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. JMLR, 2015. Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Ser-gio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embed-ding. ar Xiv:1408. 5093, 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Image Net Classification with Deep Con-volutional Neural Networks. In NIPS, 2012. Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Back-propagation applied to handwritten zip code recognition. Neural Computation, 1989. Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. ar Xiv:1312. 4400, 2013. T. B. Ludermir, A. Yamazaki, and C. Zanchettin. An optimization methodology for neural network weights and architectures. IEEE Trans. Neural Networks, 2006. Dmytro Mishkin, Nikolay Sergievskiy, and Jiri Matas. Systematic evaluation of cnn advances on the imagenet. ar Xiv:1606. 02228, 2016. Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010. Jiantao Qiu, Jie Wang, Song Yao, Kaiyuan Guo, Boxun Li, Erjin Zhou, Jincheng Yu, Tianqi Tang, Ningyi Xu, Sen Song, Yu Wang, and Huazhong Yang. Going deeper with embedded fpga platform for convolutional neural network. In ACM International Symposium on FPGA, 2016. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. ar Xiv:1409. 1556, 2014. J. Snoek, H. Larochelle, and R. P. Adams. Practical bayesian optimization of machine learning algorithms. In NIPS, 2012. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014. R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. In ICML Deep Learning Workshop, 2015. K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Neu-rocomputing, 2002. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du-mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. ar Xiv:1409. 4842, 2014. 12
Under review as a conference paper at ICLR 2017 Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Re-thinking the inception architecture for computer vision. ar Xiv:1512. 00567, 2015. Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. ar Xiv:1602. 07261, 2016. S. Tokui, K. Oono, S. Hido, and J. Clayton. Chainer: a next-generation open source framework for deep learning. In NIPS Workshop on Machine Learning Systems (Learning Sys), 2015. Sagar M Waghmare. Fire Module. lua. https://github. com/Element-Research/dpnn/ blob/master/Fire Module. lua, 2016. Ning Zhang, Ryan Farrell, Forrest Iandola, and Trevor Darrell. Deformable part descriptors for fine-grained recognition and attribute prediction. In ICCV, 2013. 13
arXiv:1409.1556v6 [cs.CV] 10 Apr 2015
Published as a conference paper at ICLR 2015
VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION
Karen Simonyan∗ & Andrew Zisserman+
Visual Geometry Group, Department of Engineering Science, University of Oxford
{karen,az}@robots.ox.ac.uk
ABSTRACT
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3×3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
1 INTRODUCTION
Convolutional networks (ConvNets) have recently enjoyed a great success in large-scale image and video recognition (Krizhevsky et al., 2012; Zeiler & Fergus, 2013; Sermanet et al., 2014; Simonyan & Zisserman, 2014) which has become possible due to the large public image repositories, such as ImageNet (Deng et al., 2009), and high-performance computing systems, such as GPUs or large-scale distributed clusters (Dean et al., 2012). In particular, an important role in the advance of deep visual recognition architectures has been played by the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al., 2014), which has served as a testbed for a few generations of large-scale image classification systems, from high-dimensional shallow feature encodings (Perronnin et al., 2010) (the winner of ILSVRC-2011) to deep ConvNets (Krizhevsky et al., 2012) (the winner of ILSVRC-2012).
With ConvNets becoming more of a commodity in the computer vision field, a number of attempts have been made to improve the original architecture of Krizhevsky et al. (2012) in a bid to achieve better accuracy. For instance, the best-performing submissions to ILSVRC-2013 (Zeiler & Fergus, 2013; Sermanet et al., 2014) utilised smaller receptive window size and smaller stride of the first convolutional layer. Another line of improvements dealt with training and testing the networks densely over the whole image and over multiple scales (Sermanet et al., 2014; Howard, 2014). In this paper, we address another important aspect of ConvNet architecture design: its depth. To this end, we fix other parameters of the architecture, and steadily increase the depth of the network by adding more convolutional layers, which is feasible due to the use of very small (3×3) convolution filters in all layers.
As a result, we come up with significantly more accurate ConvNet architectures, which not only achieve the state-of-the-art accuracy on ILSVRC classification and localisation tasks, but are also applicable to other image recognition datasets, where they achieve excellent performance even when used as a part of relatively simple pipelines (e.g. deep features classified by a linear SVM without fine-tuning). We have released our two best-performing models1 to facilitate further research.
The rest of the paper is organised as follows. In Sect. 2, we describe our ConvNet configurations.
The details of the image classification training and evaluation are then presented in Sect. 3, and the
∗ current affiliation: Google DeepMind
+ current affiliation: University of Oxford and Google DeepMind
1 http://www.robots.ox.ac.uk/~vgg/research/very_deep/
Publishedasa conferencepaperat ICLR2015 configurations are compared on the ILSVRC classification tas k in Sect. 4. Sect. 5 concludes the paper. For completeness,we also describeand assess our ILS VRC-2014object localisationsystem in Appendix A,anddiscussthegeneralisationofverydeepfe aturestootherdatasetsin Appendix B. Finally,Appendix Ccontainsthelist ofmajorpaperrevisio ns. 2 CONVNETCONFIGURATIONS To measure the improvement brought by the increased Conv Net depth in a fair setting, all our Conv Net layer configurations are designed using the same pri nciples, inspired by Ciresan etal. (2011); Krizhevskyet al. (2012). In this section, we first de scribe a generic layout of our Conv Net configurations(Sect. 2. 1)andthendetailthespecificconfig urationsusedintheevaluation(Sect. 2. 2). Ourdesignchoicesarethendiscussedandcomparedtothepri orart in Sect. 2. 3. 2. 1 A RCHITECTURE During training, the input to our Conv Nets is a fixed-size 224×224RGB image. The only pre-processingwedoissubtractingthemean RGBvalue,computed onthetrainingset,fromeachpixel. Theimageispassedthroughastackofconvolutional(conv. ) layers,whereweusefilterswithavery small receptive field: 3×3(which is the smallest size to capture the notion of left/rig ht, up/down, center). In one of the configurationswe also utilise 1×1convolutionfilters, which can be seen as a linear transformationof the input channels (followed by n on-linearity). The convolutionstride is fixedto1pixel;thespatialpaddingofconv. layerinputissuchthatt hespatialresolutionispreserved afterconvolution,i. e. the paddingis 1pixel for3×3conv. layers. Spatial poolingis carriedoutby fivemax-poolinglayers,whichfollowsomeoftheconv. layer s(notalltheconv. layersarefollowed bymax-pooling). Max-poolingisperformedovera 2×2pixelwindow,withstride 2. Astackofconvolutionallayers(whichhasadifferentdepth indifferentarchitectures)isfollowedby three Fully-Connected(FC) layers: the first two have4096ch annelseach,the thirdperforms1000-way ILSVRC classification and thus contains1000channels(o ne foreach class). The final layer is thesoft-maxlayer. Theconfigurationofthefullyconnected layersis thesameinall networks. Allhiddenlayersareequippedwiththerectification(Re LU( Krizhevskyetal.,2012))non-linearity. We note that none of our networks (except for one) contain Loc al Response Normalisation (LRN) normalisation (Krizhevskyet al., 2012): as will be sh own in Sect. 4, such normalisation does not improve the performance on the ILSVRC dataset, but l eads to increased memory con-sumption and computation time. Where applicable, the param eters for the LRN layer are those of(Krizhevskyetal., 2012). 2. 2 C ONFIGURATIONS The Conv Net configurations, evaluated in this paper, are out lined in Table 1, one per column. In the following we will refer to the nets by their names (A-E). A ll configurationsfollow the generic design presented in Sect. 2. 1, and differ only in the depth: f rom 11 weight layers in the network A (8conv. and3FClayers)to19weightlayersinthenetwork E(1 6conv. and3FClayers). Thewidth of conv. layers (the number of channels) is rather small, sta rting from 64in the first layer and then increasingbyafactorof 2aftereachmax-poolinglayer,untilit reaches 512. In Table 2 we reportthe numberof parametersfor each configur ation. In spite of a large depth, the numberof weights in our netsis not greater thanthe numberof weightsin a moreshallow net with largerconv. layerwidthsandreceptivefields(144Mweights in(Sermanetet al., 2014)). 2. 
3 DISCUSSION
Our ConvNet configurations are quite different from the ones used in the top-performing entries of the ILSVRC-2012 (Krizhevsky et al., 2012) and ILSVRC-2013 competitions (Zeiler & Fergus, 2013; Sermanet et al., 2014). Rather than using relatively large receptive fields in the first conv. layers (e.g. 11×11 with stride 4 in (Krizhevsky et al., 2012), or 7×7 with stride 2 in (Zeiler & Fergus, 2013; Sermanet et al., 2014)), we use very small 3×3 receptive fields throughout the whole net, which are convolved with the input at every pixel (with stride 1). It is easy to see that a stack of two 3×3 conv. layers (without spatial pooling in between) has an effective receptive field of 5×5; three
Publishedasa conferencepaperat ICLR2015 Table 1:Conv Net configurations (shown in columns). The depth of the configurations increase s fromtheleft(A)totheright(E),asmorelayersareadded(th eaddedlayersareshowninbold). The convolutional layer parameters are denoted as “conv ⟨receptive field size ⟩-⟨number of channels ⟩”. The Re LU activationfunctionisnotshownforbrevity. Conv Net Configuration A A-LRN B C D E 11weight 11weight 13 weight 16weight 16weight 19 weight layers layers layers layers layers layers input (224×224RGBimage) conv3-64 conv3-64 conv3-64 conv3-64 conv3-64 conv3-64 LRN conv3-64 conv3-64 conv3-64 conv3-64 maxpool conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 maxpool conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv1-256 conv3-256 conv3-256 conv3-256 maxpool conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv1-512 conv3-512 conv3-512 conv3-512 maxpool conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv1-512 conv3-512 conv3-512 conv3-512 maxpool FC-4096 FC-4096 FC-1000 soft-max Table2:Number ofparameters (inmillions). Network A,A-LRN BCDE Number of parameters 133 133134138144 such layers have a 7×7effectivereceptive field. So what have we gainedby using, fo r instance, a stackofthree 3×3conv. layersinsteadofasingle 7×7layer? First,weincorporatethreenon-linear rectification layers instead of a single one, which makes the decision functionmore discriminative. Second, we decrease the number of parameters: assuming that both the input and the output of a three-layer 3×3convolutionstack has Cchannels,the stack is parametrisedby 3( 32C2) = 27C2 weights; at the same time, a single 7×7conv. layer would require 72C2= 49C2parameters, i. e. 81%more. Thiscan be seen as imposinga regularisationon the 7×7conv. filters, forcingthemto haveadecompositionthroughthe 3×3filters(withnon-linearityinjectedin between). The incorporation of 1×1conv. layers (configuration C, Table 1) is a way to increase th e non-linearity of the decision function without affecting the re ceptive fields of the conv. layers. Even thoughinourcasethe 1×1convolutionisessentiallyalinearprojectionontothespa ceofthesame dimensionality(thenumberofinputandoutputchannelsist hesame),anadditionalnon-linearityis introducedbytherectificationfunction. Itshouldbenoted that1×1conv. layershaverecentlybeen utilisedin the“Networkin Network”architectureof Linet a l. (2014). Small-size convolution filters have been previously used by Ciresan etal. (2011), but their nets are significantly less deep than ours, and they did not evalua te on the large-scale ILSVRC dataset. Goodfellowet al. (2014) applied deep Conv Nets ( 11weight layers) to the task of street number recognition, and showed that the increased de pth led to better performance. Goog Le Net(Szegedyet al., 2014), a top-performingentryof the ILSVRC-2014classification task, was developed independentlyof our work, but is similar in th at it is based on very deep Conv Nets 3
Publishedasa conferencepaperat ICLR2015 (22 weight layers) and small convolution filters (apart from 3×3, they also use 1×1and5×5 convolutions). Their network topology is, however, more co mplex than ours, and the spatial reso-lution of the feature maps is reduced more aggressively in th e first layers to decrease the amount of computation. As will be shown in Sect. 4. 5, our model is out performing that of Szegedyetal. (2014)intermsofthesingle-networkclassificationaccura cy. 3 CLASSIFICATION FRAMEWORK In the previous section we presented the details of our netwo rk configurations. In this section, we describethe detailsofclassification Conv Nettrainingand evaluation. 3. 1 T RAINING The Conv Net training procedure generally follows Krizhevs kyetal. (2012) (except for sampling theinputcropsfrommulti-scaletrainingimages,asexplai nedlater). Namely,thetrainingiscarried out by optimising the multinomial logistic regression obje ctive using mini-batch gradient descent (based on back-propagation(Le Cunet al., 1989)) with momen tum. The batch size was set to 256, momentum to 0. 9. The training was regularised by weight decay (the L2penalty multiplier set to 5·10-4)anddropoutregularisationforthefirsttwofully-connect edlayers(dropoutratiosetto 0. 5). Thelearningrate wasinitially setto 10-2,andthendecreasedbyafactorof 10whenthevalidation set accuracy stopped improving. In total, the learning rate was decreased 3 times, and the learning was stopped after 370K iterations (74 epochs). We conjecture that in spite of the l arger number of parametersandthegreaterdepthofournetscomparedto(Kri zhevskyetal.,2012),thenetsrequired lessepochstoconvergedueto(a)implicitregularisationi mposedbygreaterdepthandsmallerconv. filter sizes; (b)pre-initialisationofcertainlayers. The initialisation of the networkweightsis important,sin ce bad initialisation can stall learningdue to the instability of gradient in deep nets. To circumvent th is problem, we began with training the configuration A (Table 1), shallow enoughto be trained wi th randominitialisation. Then,when trainingdeeperarchitectures,weinitialisedthefirstfou rconvolutionallayersandthelastthreefully-connectedlayerswiththelayersofnet A(theintermediatel ayerswereinitialisedrandomly). Wedid notdecreasethelearningrateforthepre-initialisedlaye rs,allowingthemtochangeduringlearning. For random initialisation (where applicable), we sampled t he weights from a normal distribution with thezeromeanand 10-2variance. The biaseswere initialisedwith zero. It isworth notingthat after the paper submission we found that it is possible to ini tialise the weights without pre-training byusingthe randominitialisationprocedureof Glorot&Ben gio(2010). Toobtainthefixed-size 224×224Conv Netinputimages,theywererandomlycroppedfromresca led training images (one crop per image per SGD iteration). To fu rther augment the training set, the cropsunderwentrandomhorizontalflippingandrandom RGBco lourshift(Krizhevskyet al.,2012). Trainingimagerescalingisexplainedbelow. Training image size. Let Sbe the smallest side of an isotropically-rescaledtraining image, from which the Conv Net input is cropped (we also refer to Sas the training scale). While the crop size is fixed to 224×224, in principle Scan take on any value not less than 224: for S= 224the crop will capture whole-image statistics, completely spanning the smallest side of a training image; for S≫224thecropwillcorrespondtoasmallpartoftheimage,contain ingasmallobjectoranobject part. We considertwoapproachesforsettingthetrainingscale S. 
The first is to fix S, which corresponds to single-scale training (note that image content within the sampled crops can still represent multi-scale image statistics). In our experiments, we evaluated models trained at two fixed scales: S = 256 (which has been widely used in the prior art (Krizhevsky et al., 2012; Zeiler & Fergus, 2013; Sermanet et al., 2014)) and S = 384. Given a ConvNet configuration, we first trained the network using S = 256. To speed up training of the S = 384 network, it was initialised with the weights pre-trained with S = 256, and we used a smaller initial learning rate of 10^-3. The second approach to setting S is multi-scale training, where each training image is individually rescaled by randomly sampling S from a certain range [Smin, Smax] (we used Smin = 256 and Smax = 512). Since objects in images can be of different size, it is beneficial to take this into account during training. This can also be seen as training set augmentation by scale jittering, where a single
Publishedasa conferencepaperat ICLR2015 model is trained to recognise objects over a wide range of sca les. For speed reasons, we trained multi-scale models by fine-tuning all layers of a single-sca le model with the same configuration, pre-trainedwithfixed S= 384. 3. 2 T ESTING Attest time,givena trained Conv Netandaninputimage,itis classified inthefollowingway. First, it is isotropically rescaled to a pre-defined smallest image side, denoted as Q(we also refer to it as the test scale). We note that Qis not necessarily equal to the training scale S(as we will show in Sect. 4, usingseveralvaluesof Qforeach Sleadsto improvedperformance). Then,the network is applied densely overthe rescaled test image in a way simil ar to (Sermanetet al., 2014). Namely, the fully-connected layers are first converted to convoluti onal layers (the first FC layer to a 7×7 conv. layer, the last two FC layers to 1×1conv. layers). The resulting fully-convolutional net is then applied to the whole (uncropped) image. The result is a c lass score map with the number of channels equal to the number of classes, and a variable spati al resolution, dependent on the input imagesize. Finally,toobtainafixed-sizevectorofclasssc oresfortheimage,theclassscoremapis spatially averaged(sum-pooled). We also augmentthe test s et by horizontalflippingof the images; thesoft-maxclassposteriorsoftheoriginalandflippedima gesareaveragedtoobtainthefinalscores fortheimage. Since the fully-convolutionalnetwork is applied over the w hole image, there is no need to sample multiple crops at test time (Krizhevskyetal., 2012), which is less efficient as it requires network re-computationforeachcrop. Atthesametime,usingalarge setofcrops,asdoneby Szegedyetal. (2014),canleadtoimprovedaccuracy,asit resultsin afiner samplingoftheinputimagecompared tothefully-convolutionalnet. Also,multi-cropevaluati oniscomplementarytodenseevaluationdue to different convolution boundary conditions: when applyi ng a Conv Net to a crop, the convolved feature mapsare paddedwith zeros, while in the case of dense evaluationthe paddingfor the same crop naturally comes from the neighbouring parts of an image (due to both the convolutions and spatial pooling), which substantially increases the overa ll network receptive field, so more context iscaptured. Whilewebelievethatinpracticetheincreased computationtimeofmultiplecropsdoes notjustifythepotentialgainsinaccuracy,forreferencew ealsoevaluateournetworksusing 50crops perscale( 5×5regulargridwith 2flips),foratotalof 150cropsover 3scales,whichiscomparable to144cropsover 4scalesusedby Szegedyetal. (2014). 3. 3 IMPLEMENTATION DETAILS Ourimplementationisderivedfromthepubliclyavailable C ++ Caffetoolbox(Jia,2013)(branched out in December 2013), but contains a number of significant mo difications, allowing us to perform trainingandevaluationonmultiple GPUsinstalledinasing lesystem,aswellastrainandevaluateon full-size (uncropped) images at multiple scales (as descri bed above). Multi-GPU training exploits data parallelism, and is carried out by splitting each batch of training images into several GPU batches, processed in parallel on each GPU. After the GPU bat ch gradientsare computed, they are averaged to obtain the gradient of the full batch. Gradient c omputation is synchronous across the GPUs, sothe resultisexactlythesame aswhentrainingona si ngle GPU. 
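Section 3.2 above describes converting the fully-connected layers into convolutional layers so the trained network can be applied densely to whole images. Below is a rough PyTorch-style sketch of that conversion, our own illustration rather than the paper's Caffe-based code; it assumes the standard 512×7×7 feature map entering the first FC layer and that features were flattened in channel-height-width order.

```python
import torch
import torch.nn as nn

# Fully-connected classifier layers as trained on 224x224 crops.
fc6 = nn.Linear(512 * 7 * 7, 4096)
fc7 = nn.Linear(4096, 4096)
fc8 = nn.Linear(4096, 1000)

# Equivalent convolutional layers for dense evaluation.
conv6 = nn.Conv2d(512, 4096, kernel_size=7)   # first FC layer -> 7x7 conv
conv7 = nn.Conv2d(4096, 4096, kernel_size=1)  # second FC layer -> 1x1 conv
conv8 = nn.Conv2d(4096, 1000, kernel_size=1)  # classifier FC layer -> 1x1 conv

with torch.no_grad():
    conv6.weight.copy_(fc6.weight.view(4096, 512, 7, 7))
    conv6.bias.copy_(fc6.bias)
    conv7.weight.copy_(fc7.weight.view(4096, 4096, 1, 1))
    conv7.bias.copy_(fc7.bias)
    conv8.weight.copy_(fc8.weight.view(1000, 4096, 1, 1))
    conv8.bias.copy_(fc8.bias)

# On a 224x224 input the converted stack produces a 1x1 class score map; on a larger
# test image it produces a spatial score map, which is then spatially averaged.
```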
While more sophisticated methods of speeding up ConvNet training have been recently proposed (Krizhevsky, 2014), which employ model and data parallelism for different layers of the net, we have found that our conceptually much simpler scheme already provides a speedup of 3.75 times on an off-the-shelf 4-GPU system, as compared to using a single GPU. On a system equipped with four NVIDIA Titan Black GPUs, training a single net took 2-3 weeks depending on the architecture.
4 CLASSIFICATION EXPERIMENTS
Dataset. In this section, we present the image classification results achieved by the described ConvNet architectures on the ILSVRC-2012 dataset (which was used for the ILSVRC 2012-2014 challenges). The dataset includes images of 1000 classes, and is split into three sets: training (1.3M images), validation (50K images), and testing (100K images with held-out class labels). The classification performance is evaluated using two measures: the top-1 and top-5 error. The former is a multi-class classification error, i.e. the proportion of incorrectly classified images; the latter is the
Publishedasa conferencepaperat ICLR2015 main evaluation criterion used in ILSVRC, and is computed as the proportion of images such that theground-truthcategoryisoutsidethetop-5predictedca tegories. Forthemajorityofexperiments,weusedthevalidationseta sthetestset. Certainexperimentswere also carried out on the test set and submitted to the official I LSVRC server as a “VGG” team entry tothe ILSVRC-2014competition(Russakovskyet al., 2014). 4. 1 SINGLESCALEEVALUATION We begin with evaluating the performanceof individual Conv Net models at a single scale with the layerconfigurationsdescribedin Sect. 2. 2. The test images ize was set as follows: Q=Sforfixed S,and Q= 0. 5(Smin+Smax)forjittered S∈[Smin,Smax]. Theresultsofareshownin Table3. First, we note that using local response normalisation (A-L RN network) does not improve on the model A without any normalisation layers. We thus do not empl oy normalisation in the deeper architectures(B-E). Second, we observe that the classification error decreases w ith the increased Conv Net depth: from 11 layers in A to 19 layers in E. Notably, in spite of the same de pth, the configuration C (which containsthree 1×1conv. layers),performsworsethantheconfiguration D,whic huses3×3conv. layersthroughoutthenetwork. Thisindicatesthatwhileth e additionalnon-linearitydoeshelp(Cis better than B), it is also important to capture spatial conte xt by using conv. filters with non-trivial receptive fields (D is better than C). The error rate of our arc hitecture saturates when the depth reaches19layers,butevendeepermodelsmightbebeneficialforlarger datasets. Wealsocompared the net B with a shallow net with five 5×5conv. layers, which was derived from B by replacing eachpairof 3×3conv. layerswithasingle 5×5conv. layer(whichhasthesamereceptivefieldas explained in Sect. 2. 3). The top-1 error of the shallow net wa s measured to be 7%higher than that of B (on a center crop),which confirmsthat a deepnet with smal l filters outperformsa shallow net withlargerfilters. Finally, scale jittering at training time ( S∈[256;512] ) leads to significantly better results than training on images with fixed smallest side ( S= 256or S= 384), even though a single scale is usedattesttime. Thisconfirmsthattrainingsetaugmentati onbyscalejitteringisindeedhelpfulfor capturingmulti-scaleimagestatistics. Table3:Conv Netperformanceatasingle testscale. Conv Net config. (Table 1) smallest image side top-1 val. error (%) top-5 val. error (%) train(S)test (Q) A 256 256 29. 6 10. 4 A-LRN 256 256 29. 7 10. 5 B 256 256 28. 7 9. 9 C256 256 28. 1 9. 4 384 384 28. 1 9. 3 [256;512] 384 27. 3 8. 8 D256 256 27. 0 8. 8 384 384 26. 8 8. 7 [256;512] 384 25. 6 8. 1 E256 256 27. 3 9. 0 384 384 26. 9 8. 7 [256;512] 384 25. 5 8. 0 4. 2 M ULTI-SCALEEVALUATION Havingevaluatedthe Conv Netmodelsatasinglescale,wenow assesstheeffectofscalejitteringat testtime. Itconsistsofrunningamodeloverseveralrescal edversionsofatestimage(corresponding to different values of Q), followed by averaging the resulting class posteriors. Co nsidering that a large discrepancy between training and testing scales lead s to a drop in performance, the models trained with fixed Swere evaluated over three test image sizes, close to the trai ning one: Q= {S-32,S,S+ 32}. At the same time, scale jittering at training time allows th e network to be appliedto a widerrangeofscales at test time,so the modeltr ainedwithvariable S∈[Smin;Smax] wasevaluatedoveralargerrangeofsizes Q={Smin,0. 5(Smin+Smax),Smax}. 6
Publishedasa conferencepaperat ICLR2015 Theresults,presentedin Table4,indicatethatscalejitte ringattest timeleadstobetterperformance (as compared to evaluating the same model at a single scale, s hown in Table 3). As before, the deepest configurations(D and E) perform the best, and scale j ittering is better than training with a fixed smallest side S. Our best single-network performance on the validation set is24. 8%/7. 5% top-1/top-5error(highlightedinboldin Table4). Onthete stset,theconfiguration Eachieves 7. 3% top-5error. Table4:Conv Netperformanceatmultiple test scales. Conv Net config. (Table 1) smallest image side top-1val. error (%) top-5val. error (%) train(S)test(Q) B 256 224,256,288 28. 2 9. 6 C256 224,256,288 27. 7 9. 2 384 352,384,416 27. 8 9. 2 [256;512] 256,384,512 26. 3 8. 2 D256 224,256,288 26. 6 8. 6 384 352,384,416 26. 5 8. 6 [256;512] 256,384,512 24. 8 7. 5 E256 224,256,288 26. 9 8. 7 384 352,384,416 26. 7 8. 6 [256;512] 256,384,512 24. 8 7. 5 4. 3 M ULTI-CROP EVALUATION In Table 5 we compare dense Conv Net evaluation with mult-cro p evaluation (see Sect. 3. 2 for de-tails). We also assess the complementarityof thetwo evalua tiontechniquesbyaveragingtheirsoft-max outputs. As can be seen, using multiple crops performs sl ightly better than dense evaluation, andthe two approachesareindeedcomplementary,astheir co mbinationoutperformseach ofthem. As noted above, we hypothesize that this is due to a different treatment of convolution boundary conditions. Table 5:Conv Netevaluationtechniques comparison. Inall experimentsthe trainingscale Swas sampledfrom [256;512],andthreetest scales Qwereconsidered: {256,384,512}. Conv Net config. (Table 1) Evaluationmethod top-1 val. error(%) top-5 val. error (%) Ddense 24. 8 7. 5 multi-crop 24. 6 7. 5 multi-crop &dense 24. 4 7. 2 Edense 24. 8 7. 5 multi-crop 24. 6 7. 4 multi-crop &dense 24. 4 7. 1 4. 4 C ONVNETFUSION Upuntilnow,weevaluatedtheperformanceofindividual Con v Netmodels. Inthispartoftheexper-iments,wecombinetheoutputsofseveralmodelsbyaveragin gtheirsoft-maxclassposteriors. This improvesthe performancedueto complementarityof the mode ls, andwas used in the top ILSVRC submissions in 2012 (Krizhevskyet al., 2012) and 2013 (Zeil er&Fergus, 2013; Sermanetet al., 2014). The results are shown in Table 6. By the time of ILSVRC submiss ion we had only trained the single-scale networks, as well as a multi-scale model D (by fi ne-tuning only the fully-connected layers rather than all layers). The resulting ensemble of 7 n etworks has 7. 3%ILSVRC test error. After the submission, we considered an ensemble of only two b est-performing multi-scale models (configurations D and E), which reduced the test error to 7. 0%using dense evaluation and 6. 8% using combined dense and multi-crop evaluation. For refere nce, our best-performingsingle model achieves7. 1%error(model E, Table5). 4. 5 C OMPARISON WITH THE STATE OF THE ART Finally, we compare our results with the state of the art in Ta ble 7. In the classification task of ILSVRC-2014 challenge (Russakovskyet al., 2014), our “VGG ” team secured the 2nd place with 7
Publishedasa conferencepaperat ICLR2015 Table6:Multiple Conv Netfusion results. Combined Conv Net models Error top-1 val top-5val top-5test ILSVRCsubmission (D/256/224,256,288), (D/384/352,384,416), (D/[256;512 ]/256,384,512) (C/256/224,256,288), (C/384/352,384,416) (E/256/224,256,288), (E/384/352,384,416)24. 7 7. 5 7. 3 post-submission (D/[256;512]/256,384,512), (E/[256;512]/256,384,512),dense eval. 24. 0 7. 1 7. 0 (D/[256;512]/256,384,512), (E/[256;512]/256,384,512),multi-crop 23. 9 7. 2-(D/[256;512]/256,384,512), (E/[256;512]/256,384,512),multi-crop &dense eval. 23. 7 6. 8 6. 8 7. 3%test errorusinganensembleof7 models. Afterthesubmissio n,we decreasedtheerrorrateto 6. 8%usinganensembleof2models. As can be seen from Table 7, our very deep Conv Netssignificant ly outperformthe previousgener-ation of models, which achieved the best results in the ILSVR C-2012 and ILSVRC-2013 competi-tions. Our result is also competitivewith respect to the cla ssification task winner(Goog Le Netwith 6. 7%error) and substantially outperforms the ILSVRC-2013 winn ing submission Clarifai, which achieved 11. 2%with outside training data and 11. 7%without it. This is remarkable, considering that our best result is achievedby combiningjust two models-significantly less than used in most ILSVRC submissions. In terms of the single-net performance, our architecture achieves the best result (7. 0%test error), outperforming a single Goog Le Net by 0. 9%. Notably, we did not depart from the classical Conv Net architecture of Le Cunetal. (198 9), but improved it by substantially increasingthedepth. Table 7:Comparison with the state of the art in ILSVRC classification. Our methodis denoted as“VGG”. Onlytheresultsobtainedwithoutoutsidetrainin gdataarereported. Method top-1 val. error(%) top-5val. error (%) top-5testerror (%) VGG(2nets, multi-crop& dense eval. ) 23. 7 6. 8 6. 8 VGG(1net, multi-crop& dense eval. ) 24. 4 7. 1 7. 0 VGG(ILSVRCsubmission, 7nets, dense eval. ) 24. 7 7. 5 7. 3 Goog Le Net (Szegedy et al., 2014) (1net)-7. 9 Goog Le Net (Szegedy et al., 2014) (7nets)-6. 7 MSRA(He et al., 2014) (11nets)--8. 1 MSRA(He et al., 2014) (1net) 27. 9 9. 1 9. 1 Clarifai(Russakovsky et al., 2014) (multiplenets)--11. 7 Clarifai(Russakovsky et al., 2014) (1net)--12. 5 Zeiler& Fergus (Zeiler&Fergus, 2013) (6nets) 36. 0 14. 7 14. 8 Zeiler& Fergus (Zeiler&Fergus, 2013) (1net) 37. 5 16. 0 16. 1 Over Feat (Sermanetet al.,2014) (7nets) 34. 0 13. 2 13. 6 Over Feat (Sermanetet al.,2014) (1net) 35. 7 14. 2-Krizhevsky et al. (Krizhevsky et al., 2012) (5nets) 38. 1 16. 4 16. 4 Krizhevsky et al. (Krizhevsky et al., 2012) (1net) 40. 7 18. 2-5 CONCLUSION In this work we evaluated very deep convolutional networks ( up to 19 weight layers) for large-scale image classification. It was demonstrated that the rep resentation depth is beneficial for the classificationaccuracy,andthatstate-of-the-artperfor manceonthe Image Netchallengedatasetcan beachievedusingaconventional Conv Netarchitecture(Le C unet al.,1989;Krizhevskyet al.,2012) withsubstantiallyincreaseddepth. Intheappendix,weals oshowthatourmodelsgeneralisewellto a wide range of tasks and datasets, matchingor outperformin gmore complexrecognitionpipelines builtaroundlessdeepimagerepresentations. Ourresultsy etagainconfirmtheimportanceof depth invisualrepresentations. ACKNOWLEDGEMENTS Thisworkwassupportedby ERCgrant Vis Recno. 228180. Wegra tefullyacknowledgethesupport of NVIDIACorporationwiththedonationofthe GPUsusedfort hisresearch. 8
VGG-16 layer image recognition model.pdf
REFERENCES

Bell, S., Upchurch, P., Snavely, N., and Bala, K. Material recognition in the wild with the materials in context database. CoRR, abs/1412.0623, 2014.
Chatfield, K., Simonyan, K., Vedaldi, A., and Zisserman, A. Return of the devil in the details: Delving deep into convolutional nets. In Proc. BMVC., 2014.
Cimpoi, M., Maji, S., and Vedaldi, A. Deep convolutional filter banks for texture recognition and segmentation. CoRR, abs/1411.6836, 2014.
Ciresan, D. C., Meier, U., Masci, J., Gambardella, L. M., and Schmidhuber, J. Flexible, high performance convolutional neural networks for image classification. In IJCAI, pp. 1237-1242, 2011.
Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., Ranzato, M., Senior, A., Tucker, P., Yang, K., Le, Q. V., and Ng, A. Y. Large scale distributed deep networks. In NIPS, pp. 1232-1240, 2012.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proc. CVPR, 2009.
Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., and Darrell, T. DeCAF: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531, 2013.
Everingham, M., Eslami, S. M. A., Van Gool, L., Williams, C., Winn, J., and Zisserman, A. The Pascal visual object classes challenge: A retrospective. IJCV, 111(1):98-136, 2015.
Fei-Fei, L., Fergus, R., and Perona, P. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In IEEE CVPR Workshop of Generative Model Based Vision, 2004.
Girshick, R. B., Donahue, J., Darrell, T., and Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2524v5, 2014. Published in Proc. CVPR, 2014.
Gkioxari, G., Girshick, R., and Malik, J. Actions and attributes from wholes and parts. CoRR, abs/1412.2604, 2014.
Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proc. AISTATS, volume 9, pp. 249-256, 2010.
Goodfellow, I. J., Bulatov, Y., Ibarz, J., Arnoud, S., and Shet, V. Multi-digit number recognition from street view imagery using deep convolutional neural networks. In Proc. ICLR, 2014.
Griffin, G., Holub, A., and Perona, P. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007.
He, K., Zhang, X., Ren, S., and Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. CoRR, abs/1406.4729v2, 2014.
Hoai, M. Regularized max pooling for image categorization. In Proc. BMVC., 2014.
Howard, A. G. Some improvements on deep convolutional neural network based image classification. In Proc. ICLR, 2014.
Jia, Y. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/, 2013.
Karpathy, A. and Fei-Fei, L. Deep visual-semantic alignments for generating image descriptions. CoRR, abs/1412.2306, 2014.
Kiros, R., Salakhutdinov, R., and Zemel, R. S. Unifying visual-semantic embeddings with multimodal neural language models. CoRR, abs/1411.2539, 2014.
Krizhevsky, A. One weird trick for parallelizing convolutional neural networks. CoRR, abs/1404.5997, 2014.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1106-1114, 2012.
LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989.
Lin, M., Chen, Q., and Yan, S. Network in network. In Proc. ICLR, 2014.
Long, J., Shelhamer, E., and Darrell, T. Fully convolutional networks for semantic segmentation. CoRR, abs/1411.4038, 2014.
Oquab, M., Bottou, L., Laptev, I., and Sivic, J. Learning and transferring mid-level image representations using convolutional neural networks. In Proc. CVPR, 2014.
Perronnin, F., Sánchez, J., and Mensink, T. Improving the Fisher kernel for large-scale image classification. In Proc. ECCV, 2010.
Razavian, A., Azizpour, H., Sullivan, J., and Carlsson, S. CNN features off-the-shelf: an astounding baseline for recognition. CoRR, abs/1403.6382, 2014.
VGG-16 layer image recognition model.pdf
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. ImageNet large scale visual recognition challenge. CoRR, abs/1409.0575, 2014.
Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., and LeCun, Y. OverFeat: Integrated recognition, localization and detection using convolutional networks. In Proc. ICLR, 2014.
Simonyan, K. and Zisserman, A. Two-stream convolutional networks for action recognition in videos. CoRR, abs/1406.2199, 2014. Published in Proc. NIPS, 2014.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.
Wei, Y., Xia, W., Huang, J., Ni, B., Dong, J., Zhao, Y., and Yan, S. CNN: Single-label to multi-label. CoRR, abs/1406.5726, 2014.
Zeiler, M. D. and Fergus, R. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901, 2013. Published in Proc. ECCV, 2014.

A LOCALISATION

In the main body of the paper we have considered the classification task of the ILSVRC challenge, and performed a thorough evaluation of ConvNet architectures of different depth. In this section, we turn to the localisation task of the challenge, which we won in 2014 with 25.3% error. It can be seen as a special case of object detection, where a single object bounding box should be predicted for each of the top-5 classes, irrespective of the actual number of objects of the class. For this we adopt the approach of Sermanet et al. (2014), the winners of the ILSVRC-2013 localisation challenge, with a few modifications. Our method is described in Sect. A.1 and evaluated in Sect. A.2.

A.1 LOCALISATION CONVNET

To perform object localisation, we use a very deep ConvNet, where the last fully connected layer predicts the bounding box location instead of the class scores. A bounding box is represented by a 4-D vector storing its center coordinates, width, and height. There is a choice of whether the bounding box prediction is shared across all classes (single-class regression, SCR (Sermanet et al., 2014)) or is class-specific (per-class regression, PCR). In the former case, the last layer is 4-D, while in the latter it is 4000-D (since there are 1000 classes in the dataset). Apart from the last bounding box prediction layer, we use the ConvNet architecture D (Table 1), which contains 16 weight layers and was found to be the best-performing in the classification task (Sect. 4).

Training. Training of localisation ConvNets is similar to that of the classification ConvNets (Sect. 3.1). The main difference is that we replace the logistic regression objective with a Euclidean loss, which penalises the deviation of the predicted bounding box parameters from the ground truth. We trained two localisation models, each on a single scale: S = 256 and S = 384 (due to the time constraints, we did not use training scale jittering for our ILSVRC-2014 submission). Training was initialised with the corresponding classification models (trained on the same scales), and the initial learning rate was set to 10⁻³. We explored both fine-tuning all layers and fine-tuning only the first two fully-connected layers, as done in (Sermanet et al., 2014). The last fully-connected layer was initialised randomly and trained from scratch.

Testing. We consider two testing protocols. The first is used for comparing different network modifications on the validation set, and considers only the bounding box prediction for the ground truth class (to factor out the classification errors). The bounding box is obtained by applying the network only to the central crop of the image.

The second, fully-fledged, testing procedure is based on the dense application of the localisation ConvNet to the whole image, similarly to the classification task (Sect. 3.2). The difference is that instead of the class score map, the output of the last fully-connected layer is a set of bounding box predictions. To come up with the final prediction, we utilise the greedy merging procedure of Sermanet et al. (2014), which first merges spatially close predictions (by averaging their coordinates), and then rates them based on the class scores, obtained from the classification ConvNet. When several localisation ConvNets are used, we first take the union of their sets of bounding box predictions, and then run the merging procedure on the union. We did not use the multiple pooling offsets technique of Sermanet et al. (2014), which increases the spatial resolution of the bounding box predictions and can further improve the results.
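As a concrete illustration of the per-class regression (PCR) setup described above, the sketch below replaces the 1000-way classification layer with a 4000-D bounding-box regression layer and trains it with a Euclidean (L2) loss. It is a minimal PyTorch sketch, not the authors' Caffe implementation; in particular, restricting the loss to the ground-truth class's box is an assumption made here for concreteness.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 1000

class PCRLocalisationHead(nn.Module):
    """Per-class regression head: one 4-D box (cx, cy, w, h) per class."""
    def __init__(self, in_features=4096, num_classes=NUM_CLASSES):
        super().__init__()
        # Replaces the 1000-way FC classification layer; output is 4000-D.
        self.fc = nn.Linear(in_features, 4 * num_classes)

    def forward(self, penultimate_features):
        # penultimate_features: (batch, 4096) activations of the second FC layer.
        boxes = self.fc(penultimate_features)
        return boxes.view(-1, NUM_CLASSES, 4)

def euclidean_box_loss(pred_boxes, gt_boxes, gt_labels):
    """Euclidean loss on the box of the ground-truth class (an assumption of this sketch).

    pred_boxes: (batch, 1000, 4); gt_boxes: (batch, 4); gt_labels: (batch,) class indices.
    """
    selected = pred_boxes[torch.arange(pred_boxes.size(0)), gt_labels]
    return ((selected - gt_boxes) ** 2).sum(dim=1).mean()
```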
VGG-16 layer image recognition model.pdf
A.2 LOCALISATION EXPERIMENTS

In this section we first determine the best-performing localisation setting (using the first test protocol), and then evaluate it in a fully-fledged scenario (the second protocol). The localisation error is measured according to the ILSVRC criterion (Russakovsky et al., 2014), i.e. the bounding box prediction is deemed correct if its intersection over union ratio with the ground-truth bounding box is above 0.5.

Settings comparison. As can be seen from Table 8, per-class regression (PCR) outperforms the class-agnostic single-class regression (SCR), which differs from the findings of Sermanet et al. (2014), where PCR was outperformed by SCR. We also note that fine-tuning all layers for the localisation task leads to noticeably better results than fine-tuning only the fully-connected layers (as done in (Sermanet et al., 2014)). In these experiments, the smallest image side was set to S = 384; the results with S = 256 exhibit the same behaviour and are not shown for brevity.

Table 8: Localisation error for different modifications with the simplified testing protocol: the bounding box is predicted from a single central image crop, and the ground-truth class is used. All ConvNet layers (except for the last one) have the configuration D (Table 1), while the last layer performs either single-class regression (SCR) or per-class regression (PCR).

Fine-tuned layers | Regression type | GT class localisation error (%)
1st and 2nd FC | SCR | 36.4
1st and 2nd FC | PCR | 34.3
all | PCR | 33.1

Fully-fledged evaluation. Having determined the best localisation setting (PCR, fine-tuning of all layers), we now apply it in the fully-fledged scenario, where the top-5 class labels are predicted using our best-performing classification system (Sect. 4.5), and multiple densely-computed bounding box predictions are merged using the method of Sermanet et al. (2014). As can be seen from Table 9, application of the localisation ConvNet to the whole image substantially improves the results compared to using a center crop (Table 8), despite using the top-5 predicted class labels instead of the ground truth. Similarly to the classification task (Sect. 4), testing at several scales and combining the predictions of multiple networks further improves the performance.

Table 9: Localisation error.

train (S) | test (Q) | top-5 localisation error, val. (%) | top-5 localisation error, test (%)
256 | 256 | 29.5 | -
384 | 384 | 28.2 | 26.7
384 | 352,384 | 27.5 | -
fusion: 256/256 and 384/352,384 | 26.9 | 25.3

Comparison with the state of the art. We compare our best localisation result with the state of the art in Table 10. With 25.3% test error, our "VGG" team won the localisation challenge of ILSVRC-2014 (Russakovsky et al., 2014). Notably, our results are considerably better than those of the ILSVRC-2013 winner OverFeat (Sermanet et al., 2014), even though we used fewer scales and did not employ their resolution enhancement technique. We envisage that better localisation performance can be achieved if this technique is incorporated into our method. This indicates the performance advancement brought by our very deep ConvNets: we got better results with a simpler localisation method, but a more powerful representation.

Table 10: Comparison with the state of the art in ILSVRC localisation. Our method is denoted as "VGG".

Method | top-5 val. error (%) | top-5 test error (%)
VGG | 26.9 | 25.3
GoogLeNet (Szegedy et al., 2014) | - | 26.7
OverFeat (Sermanet et al., 2014) | 30.0 | 29.9
Krizhevsky et al. (Krizhevsky et al., 2012) | - | 34.2
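The correctness criterion used in the localisation experiments above (intersection over union with the ground-truth box above 0.5) can be written out in a few lines. The sketch below assumes boxes in (x_min, y_min, x_max, y_max) format; since the networks regress centre coordinates plus width and height, a conversion helper is included. It is an illustrative check, not the official ILSVRC evaluation code.

```python
def centre_to_corners(box):
    # (cx, cy, w, h) -> (x_min, y_min, x_max, y_max)
    cx, cy, w, h = box
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def iou(box_a, box_b):
    """Intersection-over-union of two corner-format boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def localisation_correct(pred_cxcywh, gt_corners, threshold=0.5):
    # A prediction counts as correct if its IoU with the ground truth exceeds 0.5.
    return iou(centre_to_corners(pred_cxcywh), gt_corners) > threshold

# Example: IoU is roughly 0.71 here, so the prediction counts as correct.
print(localisation_correct((100, 100, 80, 60), (70, 75, 145, 135)))
```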
VGG-16 layer image recognition model.pdf
B GENERALISATION OF VERY DEEP FEATURES

In the previous sections we have discussed training and evaluation of very deep ConvNets on the ILSVRC dataset. In this section, we evaluate our ConvNets, pre-trained on ILSVRC, as feature extractors on other, smaller, datasets, where training large models from scratch is not feasible due to over-fitting. Recently, there has been a lot of interest in such a use case (Zeiler & Fergus, 2013; Donahue et al., 2013; Razavian et al., 2014; Chatfield et al., 2014), as it turns out that deep image representations, learnt on ILSVRC, generalise well to other datasets, where they have outperformed hand-crafted representations by a large margin. Following that line of work, we investigate if our models lead to better performance than more shallow models utilised in the state-of-the-art methods. In this evaluation, we consider the two models with the best classification performance on ILSVRC (Sect. 4): configurations "Net-D" and "Net-E" (which we made publicly available).

To utilise the ConvNets, pre-trained on ILSVRC, for image classification on other datasets, we remove the last fully-connected layer (which performs 1000-way ILSVRC classification), and use the 4096-D activations of the penultimate layer as image features, which are aggregated across multiple locations and scales. The resulting image descriptor is L2-normalised and combined with a linear SVM classifier, trained on the target dataset. For simplicity, pre-trained ConvNet weights are kept fixed (no fine-tuning is performed).

Aggregation of features is carried out in a similar manner to our ILSVRC evaluation procedure (Sect. 3.2). Namely, an image is first rescaled so that its smallest side equals Q, and then the network is densely applied over the image plane (which is possible when all weight layers are treated as convolutional). We then perform global average pooling on the resulting feature map, which produces a 4096-D image descriptor. The descriptor is then averaged with the descriptor of a horizontally flipped image. As was shown in Sect. 4.2, evaluation over multiple scales is beneficial, so we extract features over several scales Q. The resulting multi-scale features can be either stacked or pooled across scales. Stacking allows a subsequent classifier to learn how to optimally combine image statistics over a range of scales; this, however, comes at the cost of increased descriptor dimensionality. We return to the discussion of this design choice in the experiments below. We also assess late fusion of features, computed using two networks, which is performed by stacking their respective image descriptors.

Table 11: Comparison with the state of the art in image classification on VOC-2007, VOC-2012, Caltech-101, and Caltech-256. Our models are denoted as "VGG". Results marked with * were achieved using ConvNets pre-trained on the extended ILSVRC dataset (2000 classes).

Method | VOC-2007 (mean AP) | VOC-2012 (mean AP) | Caltech-101 (mean class recall) | Caltech-256 (mean class recall)
Zeiler & Fergus (Zeiler & Fergus, 2013) | - | 79.0 | 86.5±0.5 | 74.2±0.3
Chatfield et al. (Chatfield et al., 2014) | 82.4 | 83.2 | 88.4±0.6 | 77.6±0.1
He et al. (He et al., 2014) | 82.4 | - | 93.4±0.5 | -
Wei et al. (Wei et al., 2014) | 81.5 (85.2*) | 81.7 (90.3*) | - | -
VGG Net-D (16 layers) | 89.3 | 89.0 | 91.8±1.0 | 85.0±0.2
VGG Net-E (19 layers) | 89.3 | 89.0 | 92.3±0.5 | 85.1±0.3
VGG Net-D & Net-E | 89.7 | 89.3 | 92.7±0.5 | 86.2±0.3

Image Classification on VOC-2007 and VOC-2012. We begin with the evaluation on the image classification task of the PASCAL VOC-2007 and VOC-2012 benchmarks (Everingham et al., 2015). These datasets contain 10K and 22.5K images respectively, and each image is annotated with one or several labels, corresponding to 20 object categories. The VOC organisers provide a pre-defined split into training, validation, and test data (the test data for VOC-2012 is not publicly available; instead, an official evaluation server is provided). Recognition performance is measured using mean average precision (mAP) across classes.
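The feature-extraction recipe described above maps almost line for line onto code: run the fully-convolutional net densely over the rescaled image, globally average-pool the 4096-D penultimate activations, average with the descriptor of the horizontally flipped image, aggregate over several scales Q, and L2-normalise before the linear SVM. The PyTorch sketch below assumes a hypothetical `trunk` module that returns the penultimate 4096-channel map for an arbitrary-sized input; it illustrates the procedure rather than reproducing the released models.

```python
import torch
import torch.nn.functional as F

def image_descriptor(trunk, image, scales=(256, 384, 512), aggregate="average"):
    """4096-D descriptor from a fully-convolutional trunk (penultimate layer).

    image: (3, H, W) tensor; trunk(x) is assumed to return (1, 4096, H', W').
    """
    per_scale = []
    for q in scales:
        # Isotropic rescale so that the smallest image side equals the scale Q.
        scale = q / min(image.shape[1:])
        x = F.interpolate(image[None], scale_factor=scale, mode="bilinear",
                          align_corners=False)
        descs = []
        for img in (x, torch.flip(x, dims=[3])):       # original + horizontal flip
            fmap = trunk(img)                          # dense application
            descs.append(fmap.mean(dim=(2, 3)))        # global average pooling
        per_scale.append(torch.stack(descs).mean(dim=0).squeeze(0))  # flip average
    agg = (torch.stack(per_scale).mean(dim=0) if aggregate == "average"
           else torch.cat(per_scale))                  # stack across scales instead
    return F.normalize(agg, dim=0)                     # L2-normalise for the SVM
```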
VGG-16 layer image recognition model.pdf
Notably, by examining the performance on the validation sets of VOC-2007 and VOC-2012, we found that aggregating image descriptors, computed at multiple scales, by averaging performs similarly to the aggregation by stacking. We hypothesize that this is due to the fact that in the VOC dataset the objects appear over a variety of scales, so there is no particular scale-specific semantics which a classifier could exploit. Since averaging has the benefit of not inflating the descriptor dimensionality, we were able to aggregate image descriptors over a wide range of scales: Q ∈ {256, 384, 512, 640, 768}. It is worth noting though that the improvement over a smaller range of {256, 384, 512} was rather marginal (0.3%).

The test set performance is reported and compared with other approaches in Table 11. Our networks "Net-D" and "Net-E" exhibit identical performance on the VOC datasets, and their combination slightly improves the results. Our methods set the new state of the art across image representations pre-trained on the ILSVRC dataset, outperforming the previous best result of Chatfield et al. (2014) by more than 6%. It should be noted that the method of Wei et al. (2014), which achieves 1% better mAP on VOC-2012, is pre-trained on an extended 2000-class ILSVRC dataset, which includes additional 1000 categories, semantically close to those in the VOC datasets. It also benefits from the fusion with an object detection-assisted classification pipeline.

Image Classification on Caltech-101 and Caltech-256. In this section we evaluate very deep features on the Caltech-101 (Fei-Fei et al., 2004) and Caltech-256 (Griffin et al., 2007) image classification benchmarks. Caltech-101 contains 9K images labelled into 102 classes (101 object categories and a background class), while Caltech-256 is larger with 31K images and 257 classes. A standard evaluation protocol on these datasets is to generate several random splits into training and test data and report the average recognition performance across the splits, which is measured by the mean class recall (which compensates for a different number of test images per class). Following Chatfield et al. (2014); Zeiler & Fergus (2013); He et al. (2014), on Caltech-101 we generated 3 random splits into training and test data, so that each split contains 30 training images per class, and up to 50 test images per class. On Caltech-256 we also generated 3 splits, each of which contains 60 training images per class (and the rest is used for testing). In each split, 20% of training images were used as a validation set for hyper-parameter selection.

We found that unlike VOC, on the Caltech datasets the stacking of descriptors, computed over multiple scales, performs better than averaging or max-pooling. This can be explained by the fact that in Caltech images objects typically occupy the whole image, so multi-scale image features are semantically different (capturing the whole object vs. object parts), and stacking allows a classifier to exploit such scale-specific representations. We used three scales Q ∈ {256, 384, 512}.

Our models are compared to each other and the state of the art in Table 11. As can be seen, the deeper 19-layer Net-E performs better than the 16-layer Net-D, and their combination further improves the performance. On Caltech-101, our representations are competitive with the approach of He et al. (2014), which, however, performs significantly worse than our nets on VOC-2007. On Caltech-256, our features outperform the state of the art (Chatfield et al., 2014) by a large margin (8.6%).

Action Classification on VOC-2012. We also evaluated our best-performing image representation (the stacking of Net-D and Net-E features) on the PASCAL VOC-2012 action classification task (Everingham et al., 2015), which consists in predicting an action class from a single image, given a bounding box of the person performing the action. The dataset contains 4.6K training images, labelled into 11 classes. Similarly to the VOC-2012 object classification task, the performance is measured using the mAP. We considered two training settings: (i) computing the ConvNet features on the whole image and ignoring the provided bounding box; (ii) computing the features on the whole image and on the provided bounding box, and stacking them to obtain the final representation. The results are compared to other approaches in Table 12.

Table 12: Comparison with the state of the art in single-image action classification on VOC-2012. Our models are denoted as "VGG". Results marked with * were achieved using ConvNets pre-trained on the extended ILSVRC dataset (1512 classes).

Method | VOC-2012 (mean AP)
(Oquab et al., 2014) | 70.2*
(Gkioxari et al., 2014) | 73.6
(Hoai, 2014) | 76.3
VGG Net-D & Net-E, image-only | 79.2
VGG Net-D & Net-E, image and bounding box | 84.0

Our representation achieves the state of the art on the VOC action classification task even without using the provided bounding boxes, and the results are further improved when using both images and bounding boxes. Unlike other approaches, we did not incorporate any task-specific heuristics, but relied on the representation power of very deep convolutional features.

Other Recognition Tasks. Since the public release of our models, they have been actively used by the research community for a wide range of image recognition tasks, consistently outperforming more shallow representations. For instance, Girshick et al. (2014) achieve the state-of-the-art object detection results by replacing the ConvNet of Krizhevsky et al. (2012) with our 16-layer model. Similar gains over the more shallow architecture of Krizhevsky et al. (2012) have been observed in semantic segmentation (Long et al., 2014), image caption generation (Kiros et al., 2014; Karpathy & Fei-Fei, 2014), and texture and material recognition (Cimpoi et al., 2014; Bell et al., 2014).
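For the action classification setting (ii) above, the whole-image descriptor and the bounding-box descriptor are simply stacked before the linear SVM. A brief sketch with random placeholder descriptors is given below; scikit-learn's LinearSVC stands in for the linear SVM, the array sizes mirror the 4.6K training images and 11 classes quoted above, and none of this is the authors' evaluation code.

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

# Hypothetical pre-computed 4096-D descriptors (e.g. from the sketch in Appendix B):
# one per whole training image and one per provided person bounding box.
n_train = 4600
image_desc = np.random.randn(n_train, 4096).astype(np.float32)
box_desc = np.random.randn(n_train, 4096).astype(np.float32)
action_labels = np.random.randint(0, 11, size=n_train)   # 11 VOC action classes

# Setting (ii): stack the whole-image and bounding-box descriptors, then
# L2-normalise and train a linear classifier on the result.
features = normalize(np.hstack([image_desc, box_desc]))
clf = LinearSVC(C=1.0).fit(features, action_labels)
```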
VGG-16 layer image recognition model.pdf
C PAPER REVISIONS

Here we present the list of major paper revisions, outlining the substantial changes for the convenience of the reader.

v1 Initial version. Presents the experiments carried out before the ILSVRC submission.
v2 Adds post-submission ILSVRC experiments with training set augmentation using scale jittering, which improves the performance.
v3 Adds generalisation experiments (Appendix B) on PASCAL VOC and Caltech image classification datasets. The models used for these experiments are publicly available.
v4 The paper is converted to ICLR-2015 submission format. Also adds experiments with multiple crops for classification.
v6 Camera-ready ICLR-2015 conference paper. Adds a comparison of the net B with a shallow net and the results on the PASCAL VOC action classification benchmark.
VGG-16 layer image recognition model.pdf
arXiv:1409.1556v6 [cs.CV] 10 Apr 2015. Published as a conference paper at ICLR 2015.

VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION

Karen Simonyan* & Andrew Zisserman+
Visual Geometry Group, Department of Engineering Science, University of Oxford
{karen,az}@robots.ox.ac.uk

ABSTRACT

In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3×3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

1 INTRODUCTION

Convolutional networks (ConvNets) have recently enjoyed a great success in large-scale image and video recognition (Krizhevsky et al., 2012; Zeiler & Fergus, 2013; Sermanet et al., 2014; Simonyan & Zisserman, 2014), which has become possible due to the large public image repositories, such as ImageNet (Deng et al., 2009), and high-performance computing systems, such as GPUs or large-scale distributed clusters (Dean et al., 2012). In particular, an important role in the advance of deep visual recognition architectures has been played by the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al., 2014), which has served as a testbed for a few generations of large-scale image classification systems, from high-dimensional shallow feature encodings (Perronnin et al., 2010) (the winner of ILSVRC-2011) to deep ConvNets (Krizhevsky et al., 2012) (the winner of ILSVRC-2012).

With ConvNets becoming more of a commodity in the computer vision field, a number of attempts have been made to improve the original architecture of Krizhevsky et al. (2012) in a bid to achieve better accuracy. For instance, the best-performing submissions to the ILSVRC-2013 (Zeiler & Fergus, 2013; Sermanet et al., 2014) utilised smaller receptive window size and smaller stride of the first convolutional layer. Another line of improvements dealt with training and testing the networks densely over the whole image and over multiple scales (Sermanet et al., 2014; Howard, 2014). In this paper, we address another important aspect of ConvNet architecture design: its depth. To this end, we fix other parameters of the architecture, and steadily increase the depth of the network by adding more convolutional layers, which is feasible due to the use of very small (3×3) convolution filters in all layers.

As a result, we come up with significantly more accurate ConvNet architectures, which not only achieve the state-of-the-art accuracy on ILSVRC classification and localisation tasks, but are also applicable to other image recognition datasets, where they achieve excellent performance even when used as a part of a relatively simple pipeline (e.g. deep features classified by a linear SVM without fine-tuning). We have released our two best-performing models (http://www.robots.ox.ac.uk/~vgg/research/very_deep/) to facilitate further research.

The rest of the paper is organised as follows. In Sect. 2, we describe our ConvNet configurations. The details of the image classification training and evaluation are then presented in Sect. 3, and the configurations are compared on the ILSVRC classification task in Sect. 4. Sect. 5 concludes the paper. For completeness, we also describe and assess our ILSVRC-2014 object localisation system in Appendix A, and discuss the generalisation of very deep features to other datasets in Appendix B. Finally, Appendix C contains the list of major paper revisions.

* current affiliation: Google DeepMind; + current affiliation: University of Oxford and Google DeepMind
VGG-19 layer image recognition model.pdf
2 CONVNET CONFIGURATIONS

To measure the improvement brought by the increased ConvNet depth in a fair setting, all our ConvNet layer configurations are designed using the same principles, inspired by Ciresan et al. (2011); Krizhevsky et al. (2012). In this section, we first describe a generic layout of our ConvNet configurations (Sect. 2.1) and then detail the specific configurations used in the evaluation (Sect. 2.2). Our design choices are then discussed and compared to the prior art in Sect. 2.3.

2.1 ARCHITECTURE

During training, the input to our ConvNets is a fixed-size 224×224 RGB image. The only pre-processing we do is subtracting the mean RGB value, computed on the training set, from each pixel. The image is passed through a stack of convolutional (conv.) layers, where we use filters with a very small receptive field: 3×3 (which is the smallest size to capture the notion of left/right, up/down, center). In one of the configurations we also utilise 1×1 convolution filters, which can be seen as a linear transformation of the input channels (followed by non-linearity). The convolution stride is fixed to 1 pixel; the spatial padding of the conv. layer input is such that the spatial resolution is preserved after convolution, i.e. the padding is 1 pixel for 3×3 conv. layers. Spatial pooling is carried out by five max-pooling layers, which follow some of the conv. layers (not all the conv. layers are followed by max-pooling). Max-pooling is performed over a 2×2 pixel window, with stride 2.

A stack of convolutional layers (which has a different depth in different architectures) is followed by three Fully-Connected (FC) layers: the first two have 4096 channels each, the third performs 1000-way ILSVRC classification and thus contains 1000 channels (one for each class). The final layer is the soft-max layer. The configuration of the fully connected layers is the same in all networks.

All hidden layers are equipped with the rectification (ReLU (Krizhevsky et al., 2012)) non-linearity. We note that none of our networks (except for one) contain Local Response Normalisation (LRN) (Krizhevsky et al., 2012): as will be shown in Sect. 4, such normalisation does not improve the performance on the ILSVRC dataset, but leads to increased memory consumption and computation time. Where applicable, the parameters for the LRN layer are those of (Krizhevsky et al., 2012).

2.2 CONFIGURATIONS

The ConvNet configurations, evaluated in this paper, are outlined in Table 1, one per column. In the following we will refer to the nets by their names (A-E). All configurations follow the generic design presented in Sect. 2.1, and differ only in the depth: from 11 weight layers in the network A (8 conv. and 3 FC layers) to 19 weight layers in the network E (16 conv. and 3 FC layers). The width of conv. layers (the number of channels) is rather small, starting from 64 in the first layer and then increasing by a factor of 2 after each max-pooling layer, until it reaches 512.

In Table 2 we report the number of parameters for each configuration. In spite of a large depth, the number of weights in our nets is not greater than the number of weights in a more shallow net with larger conv. layer widths and receptive fields (144M weights in (Sermanet et al., 2014)).

2.3 DISCUSSION

Our ConvNet configurations are quite different from the ones used in the top-performing entries of the ILSVRC-2012 (Krizhevsky et al., 2012) and ILSVRC-2013 competitions (Zeiler & Fergus, 2013; Sermanet et al., 2014). Rather than using relatively large receptive fields in the first conv. layers (e.g. 11×11 with stride 4 in (Krizhevsky et al., 2012), or 7×7 with stride 2 in (Zeiler & Fergus, 2013; Sermanet et al., 2014)), we use very small 3×3 receptive fields throughout the whole net, which are convolved with the input at every pixel (with stride 1). It is easy to see that a stack of two 3×3 conv. layers (without spatial pooling in between) has an effective receptive field of 5×5; three such layers have a 7×7 effective receptive field.
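To make the generic layout of Sect. 2.1 concrete, the following PyTorch sketch assembles a network from a layer list in the style of Table 1 (below): 3×3 convolutions with stride 1 and padding 1, a ReLU after every hidden layer, 2×2 max-pooling with stride 2, and the fixed FC-4096/FC-4096/FC-1000 classifier. It is an illustrative reimplementation under those assumptions, not the released Caffe models; the layer list for network A is transcribed from Table 1.

```python
import torch.nn as nn

# Network A from Table 1: 8 conv. layers ('M' marks a 2x2, stride-2 max-pool).
CFG_A = [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M']

def make_vgg(cfg, num_classes=1000):
    layers, in_ch = [], 3
    for v in cfg:
        if v == 'M':
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            # 3x3 conv, stride 1, padding 1 preserves the spatial resolution.
            layers.append(nn.Conv2d(in_ch, v, kernel_size=3, stride=1, padding=1))
            layers.append(nn.ReLU(inplace=True))
            in_ch = v
    classifier = nn.Sequential(
        nn.Flatten(),
        nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
        nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
        nn.Linear(4096, num_classes),  # the soft-max is applied outside the model
    )
    return nn.Sequential(*layers, classifier)

net_a = make_vgg(CFG_A)  # expects a 224x224 RGB input -> 1000 class scores
```

Five max-pooling stages reduce the 224×224 input to a 7×7 map of 512 channels, which is why the first FC layer has 512·7·7 inputs.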
VGG-19 layer image recognition model.pdf
Table 1: ConvNet configurations. The depth of the configurations increases from A (11 weight layers) to E (19 weight layers). The convolutional layer parameters are denoted as "conv⟨receptive field size⟩-⟨number of channels⟩"; the ReLU activation function is not shown for brevity. All configurations take a 224×224 RGB image as input and end with FC-4096, FC-4096, FC-1000 and a soft-max layer; "maxpool" denotes 2×2 max-pooling with stride 2.

A (11 weight layers): conv3-64 | maxpool | conv3-128 | maxpool | conv3-256, conv3-256 | maxpool | conv3-512, conv3-512 | maxpool | conv3-512, conv3-512 | maxpool
A-LRN (11 weight layers): as A, with an LRN layer after the first conv3-64
B (13 weight layers): conv3-64 ×2 | maxpool | conv3-128 ×2 | maxpool | conv3-256 ×2 | maxpool | conv3-512 ×2 | maxpool | conv3-512 ×2 | maxpool
C (16 weight layers): conv3-64 ×2 | maxpool | conv3-128 ×2 | maxpool | conv3-256 ×2, conv1-256 | maxpool | conv3-512 ×2, conv1-512 | maxpool | conv3-512 ×2, conv1-512 | maxpool
D (16 weight layers): conv3-64 ×2 | maxpool | conv3-128 ×2 | maxpool | conv3-256 ×3 | maxpool | conv3-512 ×3 | maxpool | conv3-512 ×3 | maxpool
E (19 weight layers): conv3-64 ×2 | maxpool | conv3-128 ×2 | maxpool | conv3-256 ×4 | maxpool | conv3-512 ×4 | maxpool | conv3-512 ×4 | maxpool

Table 2: Number of parameters (in millions).

Network | A, A-LRN | B | C | D | E
Number of parameters | 133 | 133 | 134 | 138 | 144

So what have we gained by using, for instance, a stack of three 3×3 conv. layers instead of a single 7×7 layer? First, we incorporate three non-linear rectification layers instead of a single one, which makes the decision function more discriminative. Second, we decrease the number of parameters: assuming that both the input and the output of a three-layer 3×3 convolution stack has C channels, the stack is parametrised by 3(3²C²) = 27C² weights; at the same time, a single 7×7 conv. layer would require 7²C² = 49C² parameters, i.e. 81% more. This can be seen as imposing a regularisation on the 7×7 conv. filters, forcing them to have a decomposition through the 3×3 filters (with non-linearity injected in between).

The incorporation of 1×1 conv. layers (configuration C, Table 1) is a way to increase the non-linearity of the decision function without affecting the receptive fields of the conv. layers. Even though in our case the 1×1 convolution is essentially a linear projection onto the space of the same dimensionality (the number of input and output channels is the same), an additional non-linearity is introduced by the rectification function. It should be noted that 1×1 conv. layers have recently been utilised in the "Network in Network" architecture of Lin et al. (2014).

Small-size convolution filters have been previously used by Ciresan et al. (2011), but their nets are significantly less deep than ours, and they did not evaluate on the large-scale ILSVRC dataset. Goodfellow et al. (2014) applied deep ConvNets (11 weight layers) to the task of street number recognition, and showed that the increased depth led to better performance. GoogLeNet (Szegedy et al., 2014), a top-performing entry of the ILSVRC-2014 classification task, was developed independently of our work, but is similar in that it is based on very deep ConvNets (22 weight layers) and small convolution filters (apart from 3×3, they also use 1×1 and 5×5 convolutions). Their network topology is, however, more complex than ours, and the spatial resolution of the feature maps is reduced more aggressively in the first layers to decrease the amount of computation. As will be shown in Sect. 4.5, our model outperforms that of Szegedy et al. (2014) in terms of the single-network classification accuracy.
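The parameter bookkeeping above is easy to check numerically. The short script below reproduces the 27C² versus 49C² comparison for a stack of three 3×3 layers against one 7×7 layer, and tallies the weights of configuration D, which comes out at roughly the 138M reported in Table 2 (biases are ignored in this sketch, so the figure differs slightly from an exact count).

```python
# Weight-count check for the 3x3-stack argument and for configuration D (Tables 1-2).

def conv_weights(c_in, c_out, k):
    return c_in * c_out * k * k

# Three stacked 3x3 layers vs. one 7x7 layer, C channels in and out.
C = 512
stack_3x3 = 3 * conv_weights(C, C, 3)   # 27 * C^2
single_7x7 = conv_weights(C, C, 7)      # 49 * C^2
print(stack_3x3, single_7x7, f"{single_7x7 / stack_3x3 - 1:.0%} more")  # ~81% more

# Configuration D: 13 conv. layers followed by 3 FC layers.
CFG_D = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
         512, 512, 512, 'M', 512, 512, 512, 'M']
total, c_in = 0, 3
for v in CFG_D:
    if v == 'M':
        continue
    total += conv_weights(c_in, v, 3)
    c_in = v
total += 512 * 7 * 7 * 4096 + 4096 * 4096 + 4096 * 1000  # FC-4096, FC-4096, FC-1000
print(f"config D weights: {total / 1e6:.1f}M")  # roughly 138M, matching Table 2
```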
VGG-19 layer image recognition model.pdf
3 CLASSIFICATION FRAMEWORK

In the previous section we presented the details of our network configurations. In this section, we describe the details of classification ConvNet training and evaluation.

3.1 TRAINING

The ConvNet training procedure generally follows Krizhevsky et al. (2012) (except for sampling the input crops from multi-scale training images, as explained later). Namely, the training is carried out by optimising the multinomial logistic regression objective using mini-batch gradient descent (based on back-propagation (LeCun et al., 1989)) with momentum. The batch size was set to 256, momentum to 0.9. The training was regularised by weight decay (the L2 penalty multiplier set to 5·10⁻⁴) and dropout regularisation for the first two fully-connected layers (dropout ratio set to 0.5). The learning rate was initially set to 10⁻², and then decreased by a factor of 10 when the validation set accuracy stopped improving. In total, the learning rate was decreased 3 times, and the learning was stopped after 370K iterations (74 epochs). We conjecture that in spite of the larger number of parameters and the greater depth of our nets compared to (Krizhevsky et al., 2012), the nets required fewer epochs to converge due to (a) implicit regularisation imposed by greater depth and smaller conv. filter sizes; (b) pre-initialisation of certain layers.

The initialisation of the network weights is important, since bad initialisation can stall learning due to the instability of gradient in deep nets. To circumvent this problem, we began with training the configuration A (Table 1), shallow enough to be trained with random initialisation. Then, when training deeper architectures, we initialised the first four convolutional layers and the last three fully-connected layers with the layers of net A (the intermediate layers were initialised randomly). We did not decrease the learning rate for the pre-initialised layers, allowing them to change during learning. For random initialisation (where applicable), we sampled the weights from a normal distribution with zero mean and 10⁻² variance. The biases were initialised with zero. It is worth noting that after the paper submission we found that it is possible to initialise the weights without pre-training by using the random initialisation procedure of Glorot & Bengio (2010).

To obtain the fixed-size 224×224 ConvNet input images, they were randomly cropped from rescaled training images (one crop per image per SGD iteration). To further augment the training set, the crops underwent random horizontal flipping and random RGB colour shift (Krizhevsky et al., 2012). Training image rescaling is explained below.

Training image size. Let S be the smallest side of an isotropically-rescaled training image, from which the ConvNet input is cropped (we also refer to S as the training scale). While the crop size is fixed to 224×224, in principle S can take on any value not less than 224: for S = 224 the crop will capture whole-image statistics, completely spanning the smallest side of a training image; for S ≫ 224 the crop will correspond to a small part of the image, containing a small object or an object part.

We consider two approaches for setting the training scale S. The first is to fix S, which corresponds to single-scale training (note that image content within the sampled crops can still represent multi-scale image statistics). In our experiments, we evaluated models trained at two fixed scales: S = 256 (which has been widely used in the prior art (Krizhevsky et al., 2012; Zeiler & Fergus, 2013; Sermanet et al., 2014)) and S = 384. Given a ConvNet configuration, we first trained the network using S = 256. To speed up training of the S = 384 network, it was initialised with the weights pre-trained with S = 256, and we used a smaller initial learning rate of 10⁻³.

The second approach to setting S is multi-scale training, where each training image is individually rescaled by randomly sampling S from a certain range [Smin, Smax] (we used Smin = 256 and Smax = 512). Since objects in images can be of different size, it is beneficial to take this into account during training. This can also be seen as training set augmentation by scale jittering, where a single model is trained to recognise objects over a wide range of scales. For speed reasons, we trained multi-scale models by fine-tuning all layers of a single-scale model with the same configuration, pre-trained with fixed S = 384.
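A minimal sketch of the multi-scale crop sampling described above: pick a training scale S from [256, 512], isotropically rescale the image so its smallest side equals S, take a random 224×224 crop, and apply a random horizontal flip. Pillow is used purely for illustration; the exact distribution used to sample S is an assumption (the paper only specifies the range), and the RGB colour shift of Krizhevsky et al. (2012) is omitted.

```python
import random
from PIL import Image

def sample_training_crop(img: Image.Image, s_min=256, s_max=512, crop=224):
    # Scale jittering: pick a training scale S for this crop (uniform sampling is
    # an assumption of this sketch; the paper only specifies the range [256; 512]).
    s = random.randint(s_min, s_max)
    # Isotropic rescale so that the smallest image side equals S.
    w, h = img.size
    scale = s / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    # Random 224x224 crop.
    w, h = img.size
    left = random.randint(0, w - crop)
    top = random.randint(0, h - crop)
    img = img.crop((left, top, left + crop, top + crop))
    # Random horizontal flip; the random RGB colour shift is omitted here.
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
    return img
```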
VGG-19 layer image recognition model.pdf
3.2 TESTING

At test time, given a trained ConvNet and an input image, it is classified in the following way. First, it is isotropically rescaled to a pre-defined smallest image side, denoted as Q (we also refer to it as the test scale). We note that Q is not necessarily equal to the training scale S (as we will show in Sect. 4, using several values of Q for each S leads to improved performance). Then, the network is applied densely over the rescaled test image in a way similar to (Sermanet et al., 2014). Namely, the fully-connected layers are first converted to convolutional layers (the first FC layer to a 7×7 conv. layer, the last two FC layers to 1×1 conv. layers). The resulting fully-convolutional net is then applied to the whole (uncropped) image. The result is a class score map with the number of channels equal to the number of classes, and a variable spatial resolution, dependent on the input image size. Finally, to obtain a fixed-size vector of class scores for the image, the class score map is spatially averaged (sum-pooled). We also augment the test set by horizontal flipping of the images; the soft-max class posteriors of the original and flipped images are averaged to obtain the final scores for the image.

Since the fully-convolutional network is applied over the whole image, there is no need to sample multiple crops at test time (Krizhevsky et al., 2012), which is less efficient as it requires network re-computation for each crop. At the same time, using a large set of crops, as done by Szegedy et al. (2014), can lead to improved accuracy, as it results in a finer sampling of the input image compared to the fully-convolutional net. Also, multi-crop evaluation is complementary to dense evaluation due to different convolution boundary conditions: when applying a ConvNet to a crop, the convolved feature maps are padded with zeros, while in the case of dense evaluation the padding for the same crop naturally comes from the neighbouring parts of an image (due to both the convolutions and spatial pooling), which substantially increases the overall network receptive field, so more context is captured. While we believe that in practice the increased computation time of multiple crops does not justify the potential gains in accuracy, for reference we also evaluate our networks using 50 crops per scale (a 5×5 regular grid with 2 flips), for a total of 150 crops over 3 scales, which is comparable to the 144 crops over 4 scales used by Szegedy et al. (2014).

3.3 IMPLEMENTATION DETAILS

Our implementation is derived from the publicly available C++ Caffe toolbox (Jia, 2013) (branched out in December 2013), but contains a number of significant modifications, allowing us to perform training and evaluation on multiple GPUs installed in a single system, as well as train and evaluate on full-size (uncropped) images at multiple scales (as described above). Multi-GPU training exploits data parallelism, and is carried out by splitting each batch of training images into several GPU batches, processed in parallel on each GPU. After the GPU batch gradients are computed, they are averaged to obtain the gradient of the full batch. Gradient computation is synchronous across the GPUs, so the result is exactly the same as when training on a single GPU.

While more sophisticated methods of speeding up ConvNet training have been recently proposed (Krizhevsky, 2014), which employ model and data parallelism for different layers of the net, we have found that our conceptually much simpler scheme already provides a speedup of 3.75 times on an off-the-shelf 4-GPU system, as compared to using a single GPU. On a system equipped with four NVIDIA Titan Black GPUs, training a single net took 2-3 weeks depending on the architecture.
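The FC-to-convolution conversion used for the dense evaluation of Sect. 3.2 can be written down directly: the first FC layer becomes a 7×7 convolution over the 512-channel feature map, and the remaining two become 1×1 convolutions, so the same weights can be slid over test images larger than 224×224. The PyTorch sketch below copies weights from hypothetical fc6/fc7/fc8 linear layers into convolutional counterparts; it illustrates the idea rather than reproducing the authors' Caffe code, and it assumes the usual channel-major flattening order.

```python
import torch
import torch.nn as nn

def fc_to_conv(fc: nn.Linear, kernel: int, in_channels: int) -> nn.Conv2d:
    """Copy the weights of a fully-connected layer into an equivalent convolution."""
    conv = nn.Conv2d(in_channels, fc.out_features, kernel_size=kernel)
    conv.weight.data.copy_(
        fc.weight.data.view(fc.out_features, in_channels, kernel, kernel))
    conv.bias.data.copy_(fc.bias.data)
    return conv

# Hypothetical FC layers of a trained net (fc6: 512*7*7 -> 4096, fc7, fc8).
fc6, fc7, fc8 = nn.Linear(512 * 7 * 7, 4096), nn.Linear(4096, 4096), nn.Linear(4096, 1000)

dense_head = nn.Sequential(
    fc_to_conv(fc6, kernel=7, in_channels=512), nn.ReLU(inplace=True),
    fc_to_conv(fc7, kernel=1, in_channels=4096), nn.ReLU(inplace=True),
    fc_to_conv(fc8, kernel=1, in_channels=4096),
)

# Applied to the conv features of an uncropped image, the head yields a class
# score map (1000 x H' x W'), which is then spatially averaged to 1000 scores.
features = torch.randn(1, 512, 12, 15)      # e.g. from a larger-than-224 test image
score_map = dense_head(features)            # shape (1, 1000, 6, 9)
class_scores = score_map.mean(dim=(2, 3))   # spatial average over the score map
```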
VGG-19 layer image recognition model.pdf
4 CLASSIFICATION EXPERIMENTS

Dataset. In this section, we present the image classification results achieved by the described ConvNet architectures on the ILSVRC-2012 dataset (which was used for the ILSVRC 2012-2014 challenges). The dataset includes images of 1000 classes, and is split into three sets: training (1.3M images), validation (50K images), and testing (100K images with held-out class labels). The classification performance is evaluated using two measures: the top-1 and top-5 error. The former is a multi-class classification error, i.e. the proportion of incorrectly classified images; the latter is the main evaluation criterion used in ILSVRC, and is computed as the proportion of images such that the ground-truth category is outside the top-5 predicted categories.

For the majority of experiments, we used the validation set as the test set. Certain experiments were also carried out on the test set and submitted to the official ILSVRC server as a "VGG" team entry to the ILSVRC-2014 competition (Russakovsky et al., 2014).

4.1 SINGLE SCALE EVALUATION

We begin with evaluating the performance of individual ConvNet models at a single scale with the layer configurations described in Sect. 2.2. The test image size was set as follows: Q = S for fixed S, and Q = 0.5(Smin + Smax) for jittered S ∈ [Smin, Smax]. The results are shown in Table 3.

First, we note that using local response normalisation (A-LRN network) does not improve on the model A without any normalisation layers. We thus do not employ normalisation in the deeper architectures (B-E).

Second, we observe that the classification error decreases with the increased ConvNet depth: from 11 layers in A to 19 layers in E. Notably, in spite of the same depth, the configuration C (which contains three 1×1 conv. layers) performs worse than the configuration D, which uses 3×3 conv. layers throughout the network. This indicates that while the additional non-linearity does help (C is better than B), it is also important to capture spatial context by using conv. filters with non-trivial receptive fields (D is better than C). The error rate of our architecture saturates when the depth reaches 19 layers, but even deeper models might be beneficial for larger datasets. We also compared the net B with a shallow net with five 5×5 conv. layers, which was derived from B by replacing each pair of 3×3 conv. layers with a single 5×5 conv. layer (which has the same receptive field as explained in Sect. 2.3). The top-1 error of the shallow net was measured to be 7% higher than that of B (on a center crop), which confirms that a deep net with small filters outperforms a shallow net with larger filters.

Finally, scale jittering at training time (S ∈ [256;512]) leads to significantly better results than training on images with a fixed smallest side (S = 256 or S = 384), even though a single scale is used at test time. This confirms that training set augmentation by scale jittering is indeed helpful for capturing multi-scale image statistics.

Table 3: ConvNet performance at a single test scale.

ConvNet config. (Table 1) | train (S) | test (Q) | top-1 val. error (%) | top-5 val. error (%)
A | 256 | 256 | 29.6 | 10.4
A-LRN | 256 | 256 | 29.7 | 10.5
B | 256 | 256 | 28.7 | 9.9
C | 256 | 256 | 28.1 | 9.4
C | 384 | 384 | 28.1 | 9.3
C | [256;512] | 384 | 27.3 | 8.8
D | 256 | 256 | 27.0 | 8.8
D | 384 | 384 | 26.8 | 8.7
D | [256;512] | 384 | 25.6 | 8.1
E | 256 | 256 | 27.3 | 9.0
E | 384 | 384 | 26.9 | 8.7
E | [256;512] | 384 | 25.5 | 8.0

4.2 MULTI-SCALE EVALUATION

Having evaluated the ConvNet models at a single scale, we now assess the effect of scale jittering at test time. It consists of running a model over several rescaled versions of a test image (corresponding to different values of Q), followed by averaging the resulting class posteriors. Considering that a large discrepancy between training and testing scales leads to a drop in performance, the models trained with fixed S were evaluated over three test image sizes, close to the training one: Q = {S-32, S, S+32}. At the same time, scale jittering at training time allows the network to be applied to a wider range of scales at test time, so the model trained with variable S ∈ [Smin; Smax] was evaluated over a larger range of sizes Q = {Smin, 0.5(Smin + Smax), Smax}.
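The top-1 and top-5 measures defined at the start of this section are straightforward to compute from predicted class scores. The NumPy sketch below returns the top-k error as the fraction of images whose ground-truth label is not among the k highest-scoring classes; it is an illustrative helper, not the official ILSVRC evaluation code.

```python
import numpy as np

def top_k_error(scores, labels, k):
    """Fraction of images whose ground-truth label is outside the top-k predictions.

    scores: (num_images, num_classes) class scores or posteriors
    labels: (num_images,) ground-truth class indices
    """
    topk = np.argsort(-scores, axis=1)[:, :k]       # k best classes per image
    hit = (topk == labels[:, None]).any(axis=1)     # is the ground truth among them?
    return 1.0 - hit.mean()

# Toy usage with random scores over the 1000 ILSVRC classes.
rng = np.random.default_rng(0)
scores = rng.standard_normal((50, 1000))
labels = rng.integers(0, 1000, size=50)
print(top_k_error(scores, labels, k=1), top_k_error(scores, labels, k=5))
```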
VGG-19 layer image recognition model.pdf
Publishedasa conferencepaperat ICLR2015 Theresults,presentedin Table4,indicatethatscalejitte ringattest timeleadstobetterperformance (as compared to evaluating the same model at a single scale, s hown in Table 3). As before, the deepest configurations(D and E) perform the best, and scale j ittering is better than training with a fixed smallest side S. Our best single-network performance on the validation set is24. 8%/7. 5% top-1/top-5error(highlightedinboldin Table4). Onthete stset,theconfiguration Eachieves 7. 3% top-5error. Table4:Conv Netperformanceatmultiple test scales. Conv Net config. (Table 1) smallest image side top-1val. error (%) top-5val. error (%) train(S)test(Q) B 256 224,256,288 28. 2 9. 6 C256 224,256,288 27. 7 9. 2 384 352,384,416 27. 8 9. 2 [256;512] 256,384,512 26. 3 8. 2 D256 224,256,288 26. 6 8. 6 384 352,384,416 26. 5 8. 6 [256;512] 256,384,512 24. 8 7. 5 E256 224,256,288 26. 9 8. 7 384 352,384,416 26. 7 8. 6 [256;512] 256,384,512 24. 8 7. 5 4. 3 M ULTI-CROP EVALUATION In Table 5 we compare dense Conv Net evaluation with mult-cro p evaluation (see Sect. 3. 2 for de-tails). We also assess the complementarityof thetwo evalua tiontechniquesbyaveragingtheirsoft-max outputs. As can be seen, using multiple crops performs sl ightly better than dense evaluation, andthe two approachesareindeedcomplementary,astheir co mbinationoutperformseach ofthem. As noted above, we hypothesize that this is due to a different treatment of convolution boundary conditions. Table 5:Conv Netevaluationtechniques comparison. Inall experimentsthe trainingscale Swas sampledfrom [256;512],andthreetest scales Qwereconsidered: {256,384,512}. Conv Net config. (Table 1) Evaluationmethod top-1 val. error(%) top-5 val. error (%) Ddense 24. 8 7. 5 multi-crop 24. 6 7. 5 multi-crop &dense 24. 4 7. 2 Edense 24. 8 7. 5 multi-crop 24. 6 7. 4 multi-crop &dense 24. 4 7. 1 4. 4 C ONVNETFUSION Upuntilnow,weevaluatedtheperformanceofindividual Con v Netmodels. Inthispartoftheexper-iments,wecombinetheoutputsofseveralmodelsbyaveragin gtheirsoft-maxclassposteriors. This improvesthe performancedueto complementarityof the mode ls, andwas used in the top ILSVRC submissions in 2012 (Krizhevskyet al., 2012) and 2013 (Zeil er&Fergus, 2013; Sermanetet al., 2014). The results are shown in Table 6. By the time of ILSVRC submiss ion we had only trained the single-scale networks, as well as a multi-scale model D (by fi ne-tuning only the fully-connected layers rather than all layers). The resulting ensemble of 7 n etworks has 7. 3%ILSVRC test error. After the submission, we considered an ensemble of only two b est-performing multi-scale models (configurations D and E), which reduced the test error to 7. 0%using dense evaluation and 6. 8% using combined dense and multi-crop evaluation. For refere nce, our best-performingsingle model achieves7. 1%error(model E, Table5). 4. 5 C OMPARISON WITH THE STATE OF THE ART Finally, we compare our results with the state of the art in Ta ble 7. In the classification task of ILSVRC-2014 challenge (Russakovskyet al., 2014), our “VGG ” team secured the 2nd place with 7
VGG-19 layer image recognition model.pdf
Publishedasa conferencepaperat ICLR2015 Table6:Multiple Conv Netfusion results. Combined Conv Net models Error top-1 val top-5val top-5test ILSVRCsubmission (D/256/224,256,288), (D/384/352,384,416), (D/[256;512 ]/256,384,512) (C/256/224,256,288), (C/384/352,384,416) (E/256/224,256,288), (E/384/352,384,416)24. 7 7. 5 7. 3 post-submission (D/[256;512]/256,384,512), (E/[256;512]/256,384,512),dense eval. 24. 0 7. 1 7. 0 (D/[256;512]/256,384,512), (E/[256;512]/256,384,512),multi-crop 23. 9 7. 2-(D/[256;512]/256,384,512), (E/[256;512]/256,384,512),multi-crop &dense eval. 23. 7 6. 8 6. 8 7. 3%test errorusinganensembleof7 models. Afterthesubmissio n,we decreasedtheerrorrateto 6. 8%usinganensembleof2models. As can be seen from Table 7, our very deep Conv Netssignificant ly outperformthe previousgener-ation of models, which achieved the best results in the ILSVR C-2012 and ILSVRC-2013 competi-tions. Our result is also competitivewith respect to the cla ssification task winner(Goog Le Netwith 6. 7%error) and substantially outperforms the ILSVRC-2013 winn ing submission Clarifai, which achieved 11. 2%with outside training data and 11. 7%without it. This is remarkable, considering that our best result is achievedby combiningjust two models-significantly less than used in most ILSVRC submissions. In terms of the single-net performance, our architecture achieves the best result (7. 0%test error), outperforming a single Goog Le Net by 0. 9%. Notably, we did not depart from the classical Conv Net architecture of Le Cunetal. (198 9), but improved it by substantially increasingthedepth. Table 7:Comparison with the state of the art in ILSVRC classification. Our methodis denoted as“VGG”. Onlytheresultsobtainedwithoutoutsidetrainin gdataarereported. Method top-1 val. error(%) top-5val. error (%) top-5testerror (%) VGG(2nets, multi-crop& dense eval. ) 23. 7 6. 8 6. 8 VGG(1net, multi-crop& dense eval. ) 24. 4 7. 1 7. 0 VGG(ILSVRCsubmission, 7nets, dense eval. ) 24. 7 7. 5 7. 3 Goog Le Net (Szegedy et al., 2014) (1net)-7. 9 Goog Le Net (Szegedy et al., 2014) (7nets)-6. 7 MSRA(He et al., 2014) (11nets)--8. 1 MSRA(He et al., 2014) (1net) 27. 9 9. 1 9. 1 Clarifai(Russakovsky et al., 2014) (multiplenets)--11. 7 Clarifai(Russakovsky et al., 2014) (1net)--12. 5 Zeiler& Fergus (Zeiler&Fergus, 2013) (6nets) 36. 0 14. 7 14. 8 Zeiler& Fergus (Zeiler&Fergus, 2013) (1net) 37. 5 16. 0 16. 1 Over Feat (Sermanetet al.,2014) (7nets) 34. 0 13. 2 13. 6 Over Feat (Sermanetet al.,2014) (1net) 35. 7 14. 2-Krizhevsky et al. (Krizhevsky et al., 2012) (5nets) 38. 1 16. 4 16. 4 Krizhevsky et al. (Krizhevsky et al., 2012) (1net) 40. 7 18. 2-5 CONCLUSION In this work we evaluated very deep convolutional networks ( up to 19 weight layers) for large-scale image classification. It was demonstrated that the rep resentation depth is beneficial for the classificationaccuracy,andthatstate-of-the-artperfor manceonthe Image Netchallengedatasetcan beachievedusingaconventional Conv Netarchitecture(Le C unet al.,1989;Krizhevskyet al.,2012) withsubstantiallyincreaseddepth. Intheappendix,weals oshowthatourmodelsgeneralisewellto a wide range of tasks and datasets, matchingor outperformin gmore complexrecognitionpipelines builtaroundlessdeepimagerepresentations. Ourresultsy etagainconfirmtheimportanceof depth invisualrepresentations. ACKNOWLEDGEMENTS Thisworkwassupportedby ERCgrant Vis Recno. 228180. Wegra tefullyacknowledgethesupport of NVIDIACorporationwiththedonationofthe GPUsusedfort hisresearch. 8
VGG-19 layer image recognition model.pdf
Publishedasa conferencepaperat ICLR2015 REFERENCES Bell, S., Upchurch, P.,Snavely, N., and Bala, K. Material re cognition inthe wild withthe materials in context database. Co RR,abs/1412. 0623, 2014. Chatfield, K., Simonyan, K., Vedaldi, A., and Zisserman, A. R eturn of the devil in the details: Delving deep intoconvolutional nets. In Proc. BMVC.,2014. Cimpoi,M.,Maji,S.,and Vedaldi,A. Deepconvolutionalfilt erbanksfortexturerecognitionandsegmentation. Co RR,abs/1411. 6836, 2014. Ciresan, D. C., Meier, U., Masci, J., Gambardella, L. M., and Schmidhuber, J. Flexible, high performance convolutional neural networks for image classification. In IJCAI,pp. 1237-1242, 2011. Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., Ranzato, M., Senior, A., Tucker, P., Yang, K.,Le,Q. V.,and Ng, A. Y. Large scale distributeddeepnetwo rks. In NIPS,pp. 1232-1240, 2012. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proc. CVPR,2009. Donahue,J.,Jia,Y.,Vinyals,O.,Hoffman,J.,Zhang,N.,Tz eng,E.,and Darrell,T. Decaf: Adeepconvolutional activation feature for generic visual recognition. Co RR,abs/1310. 1531, 2013. Everingham, M., Eslami, S. M. A., Van Gool, L., Williams,C., Winn, J., and Zisserman, A. The Pascal visual object classes challenge: Aretrospective. IJCV,111(1):98-136, 2015. Fei-Fei, L., Fergus, R., and Perona, P. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categor ies. In IEEE CVPR Workshop of Generative Model Based Vision, 2004. Girshick, R. B., Donahue, J., Darrell, T., and Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. Co RR,abs/1311. 2524v5, 2014. Publishedin Proc. CVPR,2014. Gkioxari, G.,Girshick, R.,and Malik, J. Actions and attrib utes from wholes and parts. Co RR,abs/1412. 2604, 2014. Glorot, X. and Bengio, Y. Understanding the difficultyof tra iningdeep feedforward neural networks. In Proc. AISTATS,volume 9, pp. 249-256, 2010. Goodfellow, I. J., Bulatov, Y., Ibarz, J., Arnoud, S., and Sh et, V. Multi-digit number recognition from street view imagery usingdeep convolutional neural networks. In Proc. ICLR,2014. Griffin, G., Holub, A., and Perona, P. Caltech-256 object cat egory dataset. Technical Report 7694, California Institute of Technology, 2007. He, K., Zhang, X., Ren, S., and Sun, J. Spatial pyramid poolin g in deep convolutional networks for visual recognition. Co RR,abs/1406. 4729v2, 2014. Hoai, M. Regularizedmax pooling forimage categorization. In Proc. BMVC.,2014. Howard, A. G. Someimprovements ondeepconvolutional neura l networkbasedimageclassification. In Proc. ICLR,2014. Jia, Y. Caffe: An open source convolutional architecture fo r fast feature embedding. http://caffe. berkeleyvision. org/,2013. Karpathy, A. and Fei-Fei, L. Deep visual-semantic alignmen ts for generating image descriptions. Co RR, abs/1412. 2306, 2014. Kiros, R., Salakhutdinov, R., and Zemel, R. S. Unifying visu al-semantic embeddings with multimodal neural language models. Co RR,abs/1411. 2539, 2014. Krizhevsky, A. One weirdtrickfor parallelizingconvoluti onal neural networks. Co RR,abs/1404. 5997, 2014. Krizhevsky, A., Sutskever, I., and Hinton, G. E. Image Net cl assification with deep convolutional neural net-works. In NIPS,pp. 1106-1114, 2012. Le Cun,Y.,Boser, B.,Denker, J. S.,Henderson, D.,Howard, R. E.,Hubbard, W.,and Jackel, L. D. Backpropa-gationapplied tohandwrittenzipcode recognition. 
Lin, M., Chen, Q., and Yan, S. Network in network. In Proc. ICLR, 2014.
Long, J., Shelhamer, E., and Darrell, T. Fully convolutional networks for semantic segmentation. CoRR, abs/1411.4038, 2014.
Oquab, M., Bottou, L., Laptev, I., and Sivic, J. Learning and transferring mid-level image representations using convolutional neural networks. In Proc. CVPR, 2014.
Perronnin, F., Sánchez, J., and Mensink, T. Improving the Fisher kernel for large-scale image classification. In Proc. ECCV, 2010.
Razavian, A., Azizpour, H., Sullivan, J., and Carlsson, S. CNN features off-the-shelf: An astounding baseline for recognition. CoRR, abs/1403.6382, 2014.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. ImageNet large scale visual recognition challenge. CoRR, abs/1409.0575, 2014.
Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., and LeCun, Y. OverFeat: Integrated recognition, localization and detection using convolutional networks. In Proc. ICLR, 2014.
Simonyan, K. and Zisserman, A. Two-stream convolutional networks for action recognition in videos. CoRR, abs/1406.2199, 2014. Published in Proc. NIPS, 2014.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.
Wei, Y., Xia, W., Huang, J., Ni, B., Dong, J., Zhao, Y., and Yan, S. CNN: Single-label to multi-label. CoRR, abs/1406.5726, 2014.
Zeiler, M. D. and Fergus, R. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901, 2013. Published in Proc. ECCV, 2014.

A LOCALISATION

In the main body of the paper we have considered the classification task of the ILSVRC challenge, and performed a thorough evaluation of ConvNet architectures of different depth. In this section, we turn to the localisation task of the challenge, which we have won in 2014 with 25.3% error. It can be seen as a special case of object detection, where a single object bounding box should be predicted for each of the top-5 classes, irrespective of the actual number of objects of the class. For this we adopt the approach of Sermanet et al. (2014), the winners of the ILSVRC-2013 localisation challenge, with a few modifications. Our method is described in Sect. A.1 and evaluated in Sect. A.2.

A.1 LOCALISATION CONVNET

To perform object localisation, we use a very deep ConvNet, where the last fully connected layer predicts the bounding box location instead of the class scores. A bounding box is represented by a 4-D vector storing its centre coordinates, width, and height. There is a choice of whether the bounding box prediction is shared across all classes (single-class regression, SCR (Sermanet et al., 2014)) or is class-specific (per-class regression, PCR). In the former case, the last layer is 4-D, while in the latter it is 4000-D (since there are 1000 classes in the dataset). Apart from the last bounding box prediction layer, we use the ConvNet architecture D (Table 1), which contains 16 weight layers and was found to be the best-performing in the classification task (Sect. 4).

Training. Training of localisation ConvNets is similar to that of the classification ConvNets (Sect. 3.1). The main difference is that we replace the logistic regression objective with a Euclidean loss, which penalises the deviation of the predicted bounding box parameters from the ground truth. We trained two localisation models, each on a single scale: S = 256 and S = 384 (due to the time constraints, we did not use training scale jittering for our ILSVRC-2014 submission). Training was initialised with the corresponding classification models (trained on the same scales), and the initial learning rate was set to 10^-3. We explored both fine-tuning all layers and fine-tuning only the first two fully-connected layers, as done in (Sermanet et al., 2014). The last fully-connected layer was initialised randomly and trained from scratch.
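As a rough illustration of the per-class regression head and Euclidean loss described above, the sketch below uses PyTorch-style code; the paper's own implementation is not reproduced here, and the names PCRHead and euclidean_loss are hypothetical.

```python
import torch
import torch.nn as nn

class PCRHead(nn.Module):
    """Per-class regression (PCR) head: one 4-D box
    (centre_x, centre_y, width, height) per class, i.e. a 4000-D output."""

    def __init__(self, in_features=4096, num_classes=1000):
        super().__init__()
        self.num_classes = num_classes
        # A single-class regression (SCR) head would instead use nn.Linear(in_features, 4).
        self.fc = nn.Linear(in_features, 4 * num_classes)

    def forward(self, features):
        # features: (batch, 4096) activations of the penultimate layer
        return self.fc(features).view(-1, self.num_classes, 4)

def euclidean_loss(pred_boxes, gt_boxes, gt_classes):
    """Squared-error (Euclidean) loss on the box predicted for the ground-truth class."""
    idx = torch.arange(pred_boxes.size(0))
    selected = pred_boxes[idx, gt_classes]      # (batch, 4)
    return ((selected - gt_boxes) ** 2).sum(dim=1).mean()
```

In this reading, the head simply replaces the final 1000-way classification layer, while the rest of configuration D is initialised from the classification model, as stated above.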
Testing. We consider two testing protocols. The first is used for comparing different network modifications on the validation set, and considers only the bounding box prediction for the ground-truth class (to factor out the classification errors). The bounding box is obtained by applying the network only to the central crop of the image.

The second, fully-fledged, testing procedure is based on the dense application of the localisation ConvNet to the whole image, similarly to the classification task (Sect. 3.2). The difference is that instead of the class score map, the output of the last fully-connected layer is a set of bounding box predictions. To come up with the final prediction, we utilise the greedy merging procedure of Sermanet et al. (2014), which first merges spatially close predictions (by averaging their coordinates), and then rates them based on the class scores, obtained from the classification ConvNet. When several localisation ConvNets are used, we first take the union of their sets of bounding box predictions, and then run the merging procedure on the union. We did not use the multiple pooling offsets technique of Sermanet et al. (2014), which increases the spatial resolution of the bounding box predictions and can further improve the results.
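The greedy merging procedure is only summarised above; the following is a minimal sketch of one plausible reading, not the exact published algorithm. In particular, the spatial-proximity test (centre distance relative to box size) and the summing of member scores are assumptions for illustration.

```python
import numpy as np

def merge_box_predictions(boxes, scores, dist_frac=0.5):
    """Greedily merge spatially close box predictions by averaging their
    coordinates, then rate each merged box by the summed class scores of
    its members (highest-rated first).

    boxes:  (N, 4) array of (cx, cy, w, h) predictions from dense evaluation.
    scores: (N,) class scores from the classification ConvNet.
    dist_frac: centres closer than dist_frac * max(w, h) count as "close"
               (this threshold is an assumption, not a published value).
    """
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    used = np.zeros(len(boxes), dtype=bool)
    merged = []
    for i in range(len(boxes)):
        if used[i]:
            continue
        centre_dist = np.linalg.norm(boxes[:, :2] - boxes[i, :2], axis=1)
        group = (~used) & (centre_dist < dist_frac * max(boxes[i, 2], boxes[i, 3]))
        used |= group
        merged.append((boxes[group].mean(axis=0), scores[group].sum()))
    merged.sort(key=lambda box_score: -box_score[1])
    return merged
```

When several localisation ConvNets are used, the same routine would simply be run on the union of their prediction sets, as described above.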
A.2 LOCALISATION EXPERIMENTS

In this section we first determine the best-performing localisation setting (using the first test protocol), and then evaluate it in a fully-fledged scenario (the second protocol). The localisation error is measured according to the ILSVRC criterion (Russakovsky et al., 2014), i.e. the bounding box prediction is deemed correct if its intersection-over-union ratio with the ground-truth bounding box is above 0.5.

Settings comparison. As can be seen from Table 8, per-class regression (PCR) outperforms the class-agnostic single-class regression (SCR), which differs from the findings of Sermanet et al. (2014), where PCR was outperformed by SCR. We also note that fine-tuning all layers for the localisation task leads to noticeably better results than fine-tuning only the fully-connected layers (as done in (Sermanet et al., 2014)). In these experiments, the smallest image side was set to S = 384; the results with S = 256 exhibit the same behaviour and are not shown for brevity.

Table 8: Localisation error for different modifications with the simplified testing protocol: the bounding box is predicted from a single central image crop, and the ground-truth class is used. All ConvNet layers (except for the last one) have the configuration D (Table 1), while the last layer performs either single-class regression (SCR) or per-class regression (PCR).

Fine-tuned layers   regression type   GT class localisation error
1st and 2nd FC      SCR               36.4
1st and 2nd FC      PCR               34.3
all                 PCR               33.1

Fully-fledged evaluation. Having determined the best localisation setting (PCR, fine-tuning of all layers), we now apply it in the fully-fledged scenario, where the top-5 class labels are predicted using our best-performing classification system (Sect. 4.5), and multiple densely-computed bounding box predictions are merged using the method of Sermanet et al. (2014). As can be seen from Table 9, application of the localisation ConvNet to the whole image substantially improves the results compared to using a centre crop (Table 8), despite using the top-5 predicted class labels instead of the ground truth. Similarly to the classification task (Sect. 4), testing at several scales and combining the predictions of multiple networks further improves the performance.

Table 9: Localisation error.

smallest image side                  top-5 localisation error (%)
train (S)   test (Q)                 val.    test
256         256                      29.5    -
384         384                      28.2    26.7
384         352,384                  27.5    -
fusion: 256/256 and 384/352,384      26.9    25.3

Comparison with the state of the art. We compare our best localisation result with the state of the art in Table 10. With 25.3% test error, our "VGG" team won the localisation challenge of ILSVRC-2014 (Russakovsky et al., 2014). Notably, our results are considerably better than those of the ILSVRC-2013 winner OverFeat (Sermanet et al., 2014), even though we used fewer scales and did not employ their resolution enhancement technique. We envisage that better localisation performance can be achieved if this technique is incorporated into our method. This indicates the performance advancement brought by our very deep ConvNets: we got better results with a simpler localisation method, but a more powerful representation.

Table 10: Comparison with the state of the art in ILSVRC localisation. Our method is denoted as "VGG".

Method                                        top-5 val. error (%)   top-5 test error (%)
VGG                                           26.9                   25.3
GoogLeNet (Szegedy et al., 2014)              -                      26.7
OverFeat (Sermanet et al., 2014)              30.0                   29.9
Krizhevsky et al. (Krizhevsky et al., 2012)   -                      34.2
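For reference, the ILSVRC localisation criterion used throughout this appendix (a prediction is correct when its intersection over union with the ground truth exceeds 0.5) can be computed as below. This is a standard IoU routine rather than code from the paper, and it assumes boxes given as (x_min, y_min, x_max, y_max).

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A localisation prediction counts as correct when iou(pred, gt) > 0.5.
```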
B GENERALISATION OF VERY DEEP FEATURES

In the previous sections we have discussed training and evaluation of very deep ConvNets on the ILSVRC dataset. In this section, we evaluate our ConvNets, pre-trained on ILSVRC, as feature extractors on other, smaller, datasets, where training large models from scratch is not feasible due to over-fitting. Recently, there has been a lot of interest in such a use case (Zeiler & Fergus, 2013; Donahue et al., 2013; Razavian et al., 2014; Chatfield et al., 2014), as it turns out that deep image representations, learnt on ILSVRC, generalise well to other datasets, where they have outperformed hand-crafted representations by a large margin. Following that line of work, we investigate if our models lead to better performance than more shallow models utilised in the state-of-the-art methods. In this evaluation, we consider two models with the best classification performance on ILSVRC (Sect. 4): configurations "Net-D" and "Net-E" (which we made publicly available).

To utilise the ConvNets, pre-trained on ILSVRC, for image classification on other datasets, we remove the last fully-connected layer (which performs 1000-way ILSVRC classification) and use the 4096-D activations of the penultimate layer as image features, which are aggregated across multiple locations and scales. The resulting image descriptor is L2-normalised and combined with a linear SVM classifier, trained on the target dataset. For simplicity, pre-trained ConvNet weights are kept fixed (no fine-tuning is performed).

Aggregation of features is carried out in a similar manner to our ILSVRC evaluation procedure (Sect. 3.2). Namely, an image is first rescaled so that its smallest side equals Q, and then the network is densely applied over the image plane (which is possible when all weight layers are treated as convolutional). We then perform global average pooling on the resulting feature map, which produces a 4096-D image descriptor. The descriptor is then averaged with the descriptor of a horizontally flipped image. As was shown in Sect. 4.2, evaluation over multiple scales is beneficial, so we extract features over several scales Q. The resulting multi-scale features can be either stacked or pooled across scales. Stacking allows a subsequent classifier to learn how to optimally combine image statistics over a range of scales; this, however, comes at the cost of increased descriptor dimensionality. We return to the discussion of this design choice in the experiments below. We also assess late fusion of features, computed using two networks, which is performed by stacking their respective image descriptors.
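A minimal sketch of the per-scale descriptor computation described above is given below (PyTorch-style, not the paper's original implementation). The `net` argument, the use of bilinear rescaling, and the exact point at which L2-normalisation is applied are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def scale_descriptor(net, image, q):
    """4096-D image descriptor at a single scale Q.

    `net` is assumed to be a pre-trained ConvNet whose fully-connected layers
    are treated as convolutions, so that net(x) returns a (1, 4096, H', W')
    feature map when applied densely to an input of shape (1, 3, H, W).
    """
    h, w = image.shape[-2:]
    x = F.interpolate(image, scale_factor=q / min(h, w),
                      mode="bilinear", align_corners=False)  # smallest side -> Q
    desc = 0.0
    for view in (x, torch.flip(x, dims=[-1])):       # original + horizontal flip
        fmap = net(view)                              # dense application
        desc = desc + fmap.mean(dim=(-2, -1))         # global average pooling
    desc = (desc / 2).squeeze(0)                      # average the two views
    return desc / desc.norm()                         # L2-normalised, shape (4096,)
```

Per-scale descriptors computed this way would then be stacked or pooled across scales (see the sketch in the experiments below) and fed to a linear SVM trained on the target dataset, with the ConvNet weights kept fixed.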
Table 11: Comparison with the state of the art in image classification on VOC-2007, VOC-2012, Caltech-101, and Caltech-256. Our models are denoted as "VGG". Results marked with * were achieved using ConvNets pre-trained on the extended ILSVRC dataset (2000 classes).

Method                                      VOC-2007 (mean AP)   VOC-2012 (mean AP)   Caltech-101 (mean class recall)   Caltech-256 (mean class recall)
Zeiler & Fergus (Zeiler & Fergus, 2013)     -                    79.0                 86.5±0.5                          74.2±0.3
Chatfield et al. (Chatfield et al., 2014)   82.4                 83.2                 88.4±0.6                          77.6±0.1
He et al. (He et al., 2014)                 82.4                 -                    93.4±0.5                          -
Wei et al. (Wei et al., 2014)               81.5 (85.2*)         81.7 (90.3*)         -                                 -
VGG Net-D (16 layers)                       89.3                 89.0                 91.8±1.0                          85.0±0.2
VGG Net-E (19 layers)                       89.3                 89.0                 92.3±0.5                          85.1±0.3
VGG Net-D & Net-E                           89.7                 89.3                 92.7±0.5                          86.2±0.3

Image Classification on VOC-2007 and VOC-2012. We begin with the evaluation on the image classification task of the PASCAL VOC-2007 and VOC-2012 benchmarks (Everingham et al., 2015). These datasets contain 10K and 22.5K images respectively, and each image is annotated with one or several labels, corresponding to 20 object categories. The VOC organisers provide a pre-defined split into training, validation, and test data (the test data for VOC-2012 is not publicly available; instead, an official evaluation server is provided). Recognition performance is measured using mean average precision (mAP) across classes.
Notably, by examining the performance on the validation sets of VOC-2007 and VOC-2012, we found that aggregating image descriptors, computed at multiple scales, by averaging performs similarly to the aggregation by stacking. We hypothesize that this is due to the fact that in the VOC dataset the objects appear over a variety of scales, so there is no particular scale-specific semantics which a classifier could exploit. Since averaging has the benefit of not inflating the descriptor dimensionality, we were able to aggregate image descriptors over a wide range of scales: Q ∈ {256, 384, 512, 640, 768}. It is worth noting though that the improvement over a smaller range of {256, 384, 512} was rather marginal (0.3%).

The test set performance is reported and compared with other approaches in Table 11. Our networks "Net-D" and "Net-E" exhibit identical performance on the VOC datasets, and their combination slightly improves the results. Our methods set the new state of the art across image representations pre-trained on the ILSVRC dataset, outperforming the previous best result of Chatfield et al. (2014) by more than 6%. It should be noted that the method of Wei et al. (2014), which achieves 1% better mAP on VOC-2012, is pre-trained on an extended 2000-class ILSVRC dataset, which includes additional 1000 categories semantically close to those in the VOC datasets. It also benefits from the fusion with an object detection-assisted classification pipeline.

Image Classification on Caltech-101 and Caltech-256. In this section we evaluate very deep features on the Caltech-101 (Fei-Fei et al., 2004) and Caltech-256 (Griffin et al., 2007) image classification benchmarks. Caltech-101 contains 9K images labelled into 102 classes (101 object categories and a background class), while Caltech-256 is larger with 31K images and 257 classes. A standard evaluation protocol on these datasets is to generate several random splits into training and test data and report the average recognition performance across the splits, which is measured by the mean class recall (which compensates for a different number of test images per class). Following Chatfield et al. (2014); Zeiler & Fergus (2013); He et al. (2014), on Caltech-101 we generated 3 random splits into training and test data, so that each split contains 30 training images per class, and up to 50 test images per class. On Caltech-256 we also generated 3 splits, each of which contains 60 training images per class (and the rest is used for testing). In each split, 20% of training images were used as a validation set for hyper-parameter selection.

We found that, unlike VOC, on the Caltech datasets the stacking of descriptors, computed over multiple scales, performs better than averaging or max-pooling. This can be explained by the fact that in Caltech images objects typically occupy the whole image, so multi-scale image features are semantically different (capturing the whole object vs. object parts), and stacking allows a classifier to exploit such scale-specific representations. We used three scales Q ∈ {256, 384, 512}.

Our models are compared to each other and the state of the art in Table 11. As can be seen, the deeper 19-layer Net-E performs better than the 16-layer Net-D, and their combination further improves the performance. On Caltech-101, our representations are competitive with the approach of He et al. (2014), which, however, performs significantly worse than our nets on VOC-2007. On Caltech-256, our features outperform the state of the art (Chatfield et al., 2014) by a large margin (8.6%).
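The two aggregation strategies compared above (averaging, which sufficed on VOC, versus stacking, which helped on Caltech) amount to the following sketch, assuming per-scale descriptors produced as in the earlier sketch; the function name is hypothetical.

```python
import torch

def aggregate_scales(per_scale_descriptors, mode="stack"):
    """Combine L2-normalised 4096-D descriptors computed over several scales Q.

    mode="average" keeps the descriptor at 4096-D, while mode="stack"
    concatenates the per-scale descriptors so that a linear classifier can
    exploit scale-specific statistics, at the cost of a num_scales-times
    larger descriptor.
    """
    d = torch.stack(per_scale_descriptors)            # (num_scales, 4096)
    return d.reshape(-1) if mode == "stack" else d.mean(dim=0)
```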
Action Classification on VOC-2012. We also evaluated our best-performing image representation (the stacking of Net-D and Net-E features) on the PASCAL VOC-2012 action classification task (Everingham et al., 2015), which consists in predicting an action class from a single image, given a bounding box of the person performing the action. The dataset contains 4.6K training images, labelled into 11 classes. Similarly to the VOC-2012 object classification task, the performance is measured using the mAP. We considered two training settings: (i) computing the ConvNet features on the whole image and ignoring the provided bounding box; (ii) computing the features on the whole image and on the provided bounding box, and stacking them to obtain the final representation. The results are compared to other approaches in Table 12.

Our representation achieves the state of the art on the VOC action classification task even without using the provided bounding boxes, and the results are further improved when using both images and bounding boxes. Unlike other approaches, we did not incorporate any task-specific heuristics, but relied on the representation power of very deep convolutional features.

Table 12: Comparison with the state of the art in single-image action classification on VOC-2012. Our models are denoted as "VGG". Results marked with * were achieved using ConvNets pre-trained on the extended ILSVRC dataset (1512 classes).

Method                                       VOC-2012 (mean AP)
(Oquab et al., 2014)                         70.2*
(Gkioxari et al., 2014)                      73.6
(Hoai, 2014)                                 76.3
VGG Net-D & Net-E, image-only                79.2
VGG Net-D & Net-E, image and bounding box    84.0

Other Recognition Tasks. Since the public release of our models, they have been actively used by the research community for a wide range of image recognition tasks, consistently outperforming more shallow representations. For instance, Girshick et al. (2014) achieve state-of-the-art object detection results by replacing the ConvNet of Krizhevsky et al. (2012) with our 16-layer model. Similar gains over the more shallow architecture of Krizhevsky et al. (2012) have been
observed in semantic segmentation (Long et al., 2014), image caption generation (Kiros et al., 2014; Karpathy & Fei-Fei, 2014), texture and material recognition (Cimpoi et al., 2014; Bell et al., 2014).

C PAPER REVISIONS

Here we present the list of major paper revisions, outlining the substantial changes for the convenience of the reader.

v1 Initial version. Presents the experiments carried out before the ILSVRC submission.

v2 Adds post-submission ILSVRC experiments with training set augmentation using scale jittering, which improves the performance.

v3 Adds generalisation experiments (Appendix B) on PASCAL VOC and Caltech image classification datasets. The models used for these experiments are publicly available.

v4 The paper is converted to ICLR-2015 submission format. Also adds experiments with multiple crops for classification.

v6 Camera-ready ICLR-2015 conference paper. Adds a comparison of the net B with a shallow net and the results on the PASCAL VOC action classification benchmark.