Rethinking the Inception Architecture for Computer Vision
Christian Szegedy, Google Inc. (szegedy@google.com); Vincent Vanhoucke (vanhoucke@google.com); Sergey Ioffe (sioffe@google.com); Jonathon Shlens (shlens@google.com); Zbigniew Wojna, University College London (zbigniewwojna@gmail.com)

Abstract. Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014, very deep convolutional networks have become mainstream, yielding substantial gains on various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim to utilize the added computation as efficiently as possible through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error.

1. Introduction
Since the 2012 ImageNet competition [16] winning entry by Krizhevsky et al. [9], their network "AlexNet" has been successfully applied to a large variety of computer vision tasks, for example object detection [5], segmentation [12], human pose estimation [22], video classification [8], object tracking [23], and super-resolution [3]. These successes spurred a new line of research focused on finding higher performing convolutional neural networks. Starting in 2014, the quality of network architectures significantly improved by utilizing deeper and wider networks. VGGNet [18] and GoogLeNet [20] yielded similarly high performance in the 2014 ILSVRC [16] classification challenge. One interesting observation was that gains in classification performance tend to transfer to significant quality gains in a wide variety of application domains. This means that architectural improvements in deep convolutional architectures can be utilized to improve performance for most other computer vision tasks that are increasingly reliant on high-quality, learned visual features. Also, improvements in network quality resulted in new application domains for convolutional networks in cases where AlexNet features could not compete with hand-engineered, crafted solutions, e.g. proposal generation in detection [4].
Although VGGNet [18] has the compelling feature of architectural simplicity, this comes at a high cost: evaluating the network requires a lot of computation. On the other hand, the Inception architecture of GoogLeNet [20] was also designed to perform well even under strict constraints on memory and computational budget. For example, GoogLeNet employed only 5 million parameters, which represented a 12x reduction with respect to its predecessor AlexNet, which used 60 million parameters. Furthermore, VGGNet employed about 3x more parameters than AlexNet. The computational cost of Inception is also much lower than that of VGGNet or its higher performing successors [6].
This has made it feasible to utilize Inception networks in big-data scenarios [17], [13], where huge amounts of data need to be processed at reasonable cost, or in scenarios where memory or computational capacity is inherently limited, for example in mobile vision settings. It is certainly possible to mitigate parts of these issues by applying specialized solutions to target memory use [2], [15] or by optimizing the execution of certain operations via computational tricks [10]. However, these methods add extra complexity. Furthermore, these methods could be applied to optimize the Inception architecture as well, widening the efficiency gap again. Still, the complexity of the Inception architecture makes
it more difficult to make changes to the network. If the architecture is scaled up naively, large parts of the computational gains can be immediately lost. Also, [20] does not provide a clear description of the contributing factors that lead to the various design decisions of the GoogLeNet architecture. This makes it much harder to adapt it to new use cases while maintaining its efficiency. For example, if it is deemed necessary to increase the capacity of some Inception-style model, the simple transformation of just doubling the number of all filter bank sizes will lead to a 4x increase in both computational cost and number of parameters. This might prove prohibitive or unreasonable in many practical scenarios, especially if the associated gains are modest. In this paper, we start by describing a few general principles and optimization ideas that proved to be useful for scaling up convolutional networks in efficient ways. Although our principles are not limited to Inception-type networks, they are easier to observe in that context, as the generic structure of the Inception-style building blocks is flexible enough to incorporate those constraints naturally. This is enabled by the generous use of dimensional reduction and parallel structures in the Inception modules, which allows for mitigating the impact of structural changes on nearby components. Still, one needs to be cautious about doing so, as some guiding principles should be observed to maintain high model quality.

2. General Design Principles
Here we describe a few design principles based on large-scale experimentation with various architectural choices for convolutional networks. At this point, the utility of the principles below is speculative, and additional future experimental evidence will be necessary to assess their accuracy and domain of validity. Still, grave deviations from these principles tended to result in deterioration in the quality of the networks, and fixing situations where those deviations were detected resulted in improved architectures in general.

1. Avoid representational bottlenecks, especially early in the network. Feed-forward networks can be represented by an acyclic graph from the input layer(s) to the classifier or regressor. This defines a clear direction for the information flow. For any cut separating the inputs from the outputs, one can assess the amount of information passing through the cut. One should avoid bottlenecks with extreme compression. In general, the representation size should gently decrease from the inputs to the outputs before reaching the final representation used for the task at hand. Theoretically, information content cannot be assessed merely by the dimensionality of the representation, as it discards important factors like correlation structure; the dimensionality merely provides a rough estimate of information content.

2. Higher dimensional representations are easier to process locally within a network. Increasing the activations per tile in a convolutional network allows for more disentangled features. The resulting networks will train faster.

3. Spatial aggregation can be done over lower dimensional embeddings without much or any loss in representational power. For example, before performing a more spread out (e.g. 3×3) convolution, one can reduce the dimension of the input representation before the spatial aggregation without expecting serious adverse effects.
We hypothesize that the reason for this is that the strong correlation between adjacent units results in much less loss of information during dimension reduction if the outputs are used in a spatial aggregation context. Given that these signals should be easily compressible, the dimension reduction even promotes faster learning.

4. Balance the width and depth of the network. Optimal performance of the network can be reached by balancing the number of filters per stage and the depth of the network. Increasing both the width and the depth of the network can contribute to higher quality networks. However, the optimal improvement for a constant amount of computation can be reached if both are increased in parallel. The computational budget should therefore be distributed in a balanced way between the depth and width of the network.

Although these principles might make sense, it is not straightforward to use them to improve the quality of networks out of the box. The idea is to use them judiciously in ambiguous situations only.

3. Factorizing Convolutions with Large Filter Size
Much of the original gains of the GoogLeNet network [20] arise from a very generous use of dimension reduction. This can be viewed as a special case of factorizing convolutions in a computationally efficient manner. Consider for example the case of a 1×1 convolutional layer followed by a 3×3 convolutional layer. In a vision network, it is expected that the outputs of nearby activations are highly correlated. Therefore, we can expect that their activations can be reduced before aggregation and that this should result in similarly expressive local representations.
Here we explore other ways of factorizing convolutions in various settings, especially in order to increase the computational efficiency of the solution. Since Inception networks are fully convolutional, each weight corresponds to one multiplication per activation. Therefore, any reduction in computational cost results in a reduced number of parameters. This means that with suitable factorization, we can end up with more disentangled parameters and therefore with faster training. Also, we can use the computational and memory savings to increase the filter-bank sizes of our network while maintaining our ability to train each model replica on a single computer.
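To make the arithmetic behind this kind of dimension reduction concrete, the following short sketch compares the multiply-add count of a direct 3×3 convolution with that of a 1×1 reduction followed by a 3×3 convolution. The grid and channel sizes are illustrative only and not taken from the text.

```python
# Rough cost comparison: direct 3x3 convolution vs. 1x1 reduction followed by 3x3.
# Layer sizes below are hypothetical, chosen only to illustrate the effect.

def conv_cost(h, w, c_in, c_out, k):
    """Multiply-adds of a k x k convolution over an h x w grid (stride 1, same padding)."""
    return h * w * c_in * c_out * k * k

h = w = 35
c_in = c_out = 288

direct = conv_cost(h, w, c_in, c_out, 3)
# Reduce to 96 channels with a 1x1 convolution, then apply the 3x3 convolution.
reduced = conv_cost(h, w, c_in, 96, 1) + conv_cost(h, w, 96, c_out, 3)

print(f"direct 3x3:   {direct / 1e6:.1f}M multiply-adds")
print(f"1x1 then 3x3: {reduced / 1e6:.1f}M multiply-adds")
print(f"relative cost: {reduced / direct:.2f}")
```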
Figure 1. Mini-network replacing the 5×5 convolutions.

3.1. Factorization into smaller convolutions
Convolutions with larger spatial filters (e.g. 5×5 or 7×7) tend to be disproportionally expensive in terms of computation. For example, a 5×5 convolution with n filters over a grid with m filters is 25/9 = 2.78 times more computationally expensive than a 3×3 convolution with the same number of filters. Of course, a 5×5 filter can capture dependencies between signals of units further away in the earlier layers, so a reduction of the geometric size of the filters comes at a large cost of expressiveness. However, we can ask whether a 5×5 convolution could be replaced by a multi-layer network with fewer parameters, the same input size, and the same output depth. If we zoom into the computation graph of the 5×5 convolution, we see that each output looks like a small fully-connected network sliding over 5×5 tiles of its input (see Figure 1). Since we are constructing a vision network, it seems natural to exploit translation invariance again and replace the fully connected component by a two-layer convolutional architecture: the first layer is a 3×3 convolution, the second is a fully connected layer on top of the 3×3 output grid of the first layer (see Figure 1). Sliding this small network over the input activation grid boils down to replacing the 5×5 convolution with two layers of 3×3 convolution (compare Figure 4 with Figure 5).
This setup clearly reduces the parameter count by sharing the weights between adjacent tiles.

Figure 2. One of several control experiments between two Inception models, one of which uses factorization into linear + ReLU layers while the other uses two ReLU layers. After 3.86 million operations, the former settles at 76.2% while the latter reaches 77.2% top-1 accuracy on the validation set. (The plot shows top-1 accuracy versus training iteration.)

To analyze the expected computational cost savings, we will make a few simplifying assumptions that apply to typical situations. We can assume that n = αm, that is, that we want to change the number of activations per unit by a constant factor α. Since the 5×5 convolution is aggregating, α is typically slightly larger than one (around 1.5 in the case of GoogLeNet). Having a two-layer replacement for the 5×5 layer, it seems reasonable to reach this expansion in two steps, increasing the number of filters by √α in both steps. To simplify our estimate, we choose α = 1 (no expansion). If we would naively slide a network without reusing the computation between neighboring grid tiles, we would increase the computational cost. Instead, sliding this network can be represented by two 3×3 convolutional layers that reuse the activations between adjacent tiles. This way, we end up with a net (9+9)/25 reduction of computation, resulting in a relative gain of 28% by this factorization.
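A minimal sketch of this factorization, written in PyTorch with an illustrative channel count (not a value from the paper); it only checks that two stacked 3×3 layers match the 5×5 layer's output shape and use roughly 28% fewer parameters.

```python
import torch
from torch import nn

# Sketch of the factorization in Figure 1: one 5x5 convolution versus two stacked
# 3x3 convolutions with the same receptive field. The channel count is illustrative.
c = 192

conv5x5 = nn.Sequential(
    nn.Conv2d(c, c, kernel_size=5, padding=2),
    nn.ReLU(inplace=True),
)

factorized = nn.Sequential(
    nn.Conv2d(c, c, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),   # the control experiments favor ReLU after both layers
    nn.Conv2d(c, c, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)

params = lambda m: sum(p.numel() for p in m.parameters())
print(params(factorized) / params(conv5x5))   # roughly 18/25 = 0.72, i.e. the 28% saving

x = torch.randn(1, c, 35, 35)
assert conv5x5(x).shape == factorized(x).shape
```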
The exact same saving holds for the parameter count, as each parameter is used exactly once in the computation of the activation of each unit. Still, this setup raises two general questions: does this replacement result in any loss of expressiveness? And, if our main goal is to factorize the linear part of the computation, would it not suggest keeping linear activations in the first layer? We have run several control experiments (for example, see Figure 2), and using linear activation was always inferior to using rectified linear units in all stages of the factorization. We attribute this gain to the enhanced space of variations that the network can learn, especially if we batch-normalize [7] the output activations. One can see similar effects when using linear activations for the dimension reduction components.

3.2. Spatial Factorization into Asymmetric Convolutions
The above results suggest that convolutions with filters larger than 3×3 might not be generally useful, as they can always be reduced into a sequence of 3×3 convolutional layers.
Figure 3. Mini-network replacing the 3×3 convolutions. The lower layer of this network consists of a 3×1 convolution with 3 output units.

Figure 4. Original Inception module as described in [20].

Still, we can ask whether one should factorize them into smaller, for example 2×2, convolutions. However, it turns out that one can do even better than 2×2 by using asymmetric convolutions, e.g. n×1. For example, using a 3×1 convolution followed by a 1×3 convolution is equivalent to sliding a two-layer network with the same receptive field as a 3×3 convolution (see Figure 3). The two-layer solution is still 33% cheaper for the same number of output filters, if the number of input and output filters is equal. By comparison, factorizing a 3×3 convolution into two 2×2 convolutions represents only an 11% saving of computation.

Figure 5. Inception module where each 5×5 convolution is replaced by two 3×3 convolutions, as suggested by principle 3 of Section 2.

In theory, we could go even further and argue that one can replace any n×n convolution by a 1×n convolution followed by an n×1 convolution, and the computational cost saving increases dramatically as n grows (see Figure 6). In practice, we have found that employing this factorization does not work well on early layers, but it gives very good results on medium grid sizes (on m×m feature maps, where m ranges between 12 and 20). On that level, very good results can be achieved by using 1×7 convolutions followed by 7×1 convolutions.
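A small sketch of the asymmetric factorization, again in PyTorch with an illustrative channel count; it contrasts a full n×n convolution with the 1×n followed by n×1 pair for n = 7, the setting used on the 17×17 grids.

```python
import torch
from torch import nn

# Sketch of the asymmetric factorization from Section 3.2: an n x n convolution
# replaced by a 1 x n convolution followed by an n x 1 convolution.
# The channel count is illustrative; the paper uses n = 7 on 17x17 feature maps.
def asymmetric_conv(channels: int, n: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=(1, n), padding=(0, n // 2)),
        nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, kernel_size=(n, 1), padding=(n // 2, 0)),
        nn.ReLU(inplace=True),
    )

c, n = 192, 7
full = nn.Conv2d(c, c, kernel_size=n, padding=n // 2)
asym = asymmetric_conv(c, n)

params = lambda m: sum(p.numel() for p in m.parameters())
print(params(asym) / params(full))  # roughly 2n / n^2 = 2/7 of the parameters for n = 7

x = torch.randn(1, c, 17, 17)
assert full(x).shape == asym(x).shape
```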
Figure 6. Inception module after the factorization of the n×n convolutions. In our proposed architecture, we chose n = 7 for the 17×17 grid. (The filter sizes are picked using principle 3.)

4. Utility of Auxiliary Classifiers
[20] introduced the notion of auxiliary classifiers to improve the convergence of very deep networks. The original motivation was to push useful gradients to the lower layers to make them immediately useful and to improve convergence during training by combating the vanishing gradient problem in very deep networks. Lee et al. [11] also argue that auxiliary classifiers promote more stable learning and better convergence. Interestingly, we found that auxiliary classifiers did not result in improved convergence early in training: the training progression of networks with and without side heads looks virtually identical before both models reach high accuracy. Near the end of training, the network with the auxiliary branches starts to overtake the accuracy of the network without any auxiliary branch and reaches a slightly higher plateau.
Also, [20] used two side heads at different stages in the network. The removal of the lower auxiliary branch did not have any adverse effect on the final quality of the network. Together with the earlier observation in the previous paragraph, this means that the original hypothesis of [20], namely that these branches help evolve the low-level features, is most likely misplaced. Instead, we argue that the auxiliary classifiers act as a regularizer. This is supported by the fact that the main classifier of the network performs better if the side branch is batch-normalized [7] or has a dropout layer. This also gives weak supporting evidence for the conjecture that batch normalization acts as a regularizer.

5. Efficient Grid Size Reduction
Traditionally, convolutional networks have used some pooling operation to decrease the grid size of the feature maps. In order to avoid a representational bottleneck, the activation dimension of the network filters is expanded before applying maximum or average pooling. For example, starting from a d×d grid with k filters, if we would like to arrive at a d/2 × d/2 grid with 2k filters, we first need to compute a stride-1 convolution with 2k filters and then apply an additional pooling step. This means that the overall computational cost is dominated by the expensive convolution on the larger grid, using 2d²k² operations. One possibility would be to switch to pooling followed by convolution, resulting in 2(d/2)²k² operations, reducing the computational cost by a quarter. However, this creates a representational bottleneck, as the overall dimensionality of the representation drops to (d/2)²k, resulting in less expressive networks (see Figure 9). Instead, we suggest another variant that reduces the computational cost even further while removing the representational bottleneck (see Figure 10): we can use two parallel stride-2 blocks, P and C, where P is a pooling layer (either average or maximum pooling) and C is a convolutional block; both of them use stride 2, and their filter banks are concatenated as in Figure 10.

Figure 7. Inception module with expanded filter bank outputs. This architecture is used on the coarsest (8×8) grids to promote high dimensional representations, as suggested by principle 2 of Section 2. We are using this solution only on the coarsest grid, since that is the place where producing high dimensional sparse representations is the most critical, as the ratio of local processing (by 1×1 convolutions) increases compared to the spatial aggregation.

Figure 8. Auxiliary classifier on top of the last 17×17 layer (5×5 average pooling with stride 3, a 1×1 convolution, and fully connected layers). Batch normalization [7] of the layers in the side head results in a 0.4% absolute gain in top-1 accuracy. (The plot shows accuracy versus the number of iterations performed, each with batch size 32.)
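A compact sketch of this parallel stride-2 reduction (pooling branch and convolution branch concatenated, in the spirit of Figure 10). The channel counts follow the 35×35×320 to 17×17×640 example in the figure, while the internal layer widths and branch structure are illustrative rather than the exact released configuration.

```python
import torch
from torch import nn

# Sketch of the grid-reduction idea: a stride-2 convolution branch and a stride-2
# pooling branch run in parallel and their outputs are concatenated, halving the
# grid while expanding the filter bank and avoiding a representational bottleneck.
class ReductionBlock(nn.Module):
    def __init__(self, in_ch: int, conv_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, conv_ch, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(conv_ch, conv_ch, kernel_size=3, stride=2),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        return torch.cat([self.conv(x), self.pool(x)], dim=1)

block = ReductionBlock(in_ch=320, conv_ch=320)
x = torch.randn(1, 320, 35, 35)
print(block(x).shape)  # torch.Size([1, 640, 17, 17])
```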
Figure 9. Two alternative ways of reducing the grid size (e.g. from 35×35×320 to 17×17×640). The solution on the left (pooling followed by Inception) violates principle 1 of Section 2 by introducing a representational bottleneck. The version on the right (Inception followed by pooling) is 3 times more expensive computationally.

Figure 10. Inception module that reduces the grid size while expanding the filter banks. It is both cheap and avoids the representational bottleneck, as suggested by principle 1. The diagram on the right represents the same solution from the perspective of grid sizes (35×35×320 reduced to 17×17×640 by concatenating a pooling and a convolution branch) rather than operations.

6. Inception-v2
Here we connect the dots from above and propose a new architecture with improved performance on the ILSVRC 2012 classification benchmark. The layout of our network is given in Table 1. Note that we have factorized the traditional 7×7 convolution into three 3×3 convolutions based on the same ideas as described in Section 3.1. For the Inception part of the network, we have 3 traditional Inception modules at the 35×35 level, with 288 filters each. This is reduced to a 17×17 grid with 768 filters using the grid reduction technique described in Section 5. This is followed by 5 instances of the factorized Inception modules as depicted in Figure 5. This is reduced to an 8×8×1280 grid with the grid reduction technique depicted in Figure 10. At the coarsest 8×8 level, we have two Inception modules as depicted in Figure 6, with a concatenated output filter bank size of 2048 for each tile. The detailed structure of the network, including the sizes of the filter banks inside the Inception modules, is given in the supplementary material (the model.txt file in the tar file of this submission).

Table 1. The outline of the proposed network architecture. The output size of each module is the input size of the next one. We use variations of the reduction technique depicted in Figure 10 to reduce the grid sizes between the Inception blocks whenever applicable. We have marked the convolution that uses 0-padding to maintain the grid size; 0-padding is also used inside those Inception modules that do not reduce the grid size. All other layers do not use padding. The various filter bank sizes are chosen to observe principle 4 from Section 2.

type | patch size / stride or remarks | input size
conv | 3×3 / 2 | 299×299×3
conv | 3×3 / 1 | 149×149×32
conv padded | 3×3 / 1 | 147×147×32
pool | 3×3 / 2 | 147×147×64
conv | 3×3 / 1 | 73×73×64
conv | 3×3 / 2 | 71×71×80
conv | 3×3 / 1 | 35×35×192
3 x Inception | as in Figure 5 | 35×35×288
5 x Inception | as in Figure 6 | 17×17×768
2 x Inception | as in Figure 7 | 8×8×1280
pool | 8×8 | 8×8×2048
linear | logits | 1×1×2048
softmax | classifier | 1×1×1000

However, we have observed that the quality of the network is relatively stable to variations as long as the principles from Section 2 are observed. Although our network is 42 layers deep, our computational cost is only about 2.5 times higher than that of GoogLeNet, and it is still much more efficient than VGGNet.

7. Model Regularization via Label Smoothing
Here we propose a mechanism to regularize the classifier layer by estimating the marginalized effect of label-dropout during training. For each training example x, our model computes the probability of each label k ∈ {1...K}: p(k|x) = exp(z_k) / Σ_{i=1}^{K} exp(z_i). Here, the z_i are the logits or unnormalized log-probabilities.
Consider the ground-truth distribution over labels q(k|x) for this training example, normalized so that Σ_k q(k|x) = 1. For brevity, let us omit the dependence of p and q on the example x. We define the loss for the example as the cross entropy: ℓ = -Σ_{k=1}^{K} q(k) log p(k). Minimizing this is equivalent to maximizing the expected log-likelihood of a label, where the label is selected according to its ground-truth distribution q(k). Cross-entropy loss is differentiable with respect to the logits z_k and thus can be used for gradient training of deep models. The gradient has a rather simple form: ∂ℓ/∂z_k = p(k) - q(k), which is bounded between -1 and 1.
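For reference, this gradient follows directly from the softmax definition; a short standard derivation (not specific to this paper):

```latex
% Cross-entropy in terms of the logits, using \sum_k q(k) = 1 and
% p(k) = e^{z_k} / \sum_i e^{z_i}:
\ell = -\sum_{k=1}^{K} q(k)\, z_k + \log\sum_{i=1}^{K} e^{z_i}
\qquad\Longrightarrow\qquad
\frac{\partial \ell}{\partial z_k} = -q(k) + \frac{e^{z_k}}{\sum_{i=1}^{K} e^{z_i}} = p(k) - q(k).
```

Since both p(k) and q(k) lie in [0, 1], the gradient is indeed bounded between -1 and 1.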
Consider the case of a single ground-truth label y, so that q(y) = 1 and q(k) = 0 for all k ≠ y. In this case, minimizing the cross entropy is equivalent to maximizing the log-likelihood of the correct label. For a particular example x with label y, the log-likelihood is maximized for q(k) = δ_{k,y}, where δ_{k,y} is the Dirac delta, which equals 1 for k = y and 0 otherwise. This maximum is not achievable for finite z_k but is approached if z_y ≫ z_k for all k ≠ y, that is, if the logit corresponding to the ground-truth label is much greater than all other logits. This, however, can cause two problems. First, it may result in over-fitting: if the model learns to assign full probability to the ground-truth label for each training example, it is not guaranteed to generalize. Second, it encourages the differences between the largest logit and all others to become large, and this, combined with the bounded gradient ∂ℓ/∂z_k, reduces the ability of the model to adapt. Intuitively, this happens because the model becomes too confident about its predictions.
We propose a mechanism for encouraging the model to be less confident. While this may not be desired if the goal is to maximize the log-likelihood of training labels, it does regularize the model and makes it more adaptable. The method is very simple. Consider a distribution over labels u(k), independent of the training example x, and a smoothing parameter ϵ. For a training example with ground-truth label y, we replace the label distribution q(k|x) = δ_{k,y} with

q'(k|x) = (1 - ϵ) δ_{k,y} + ϵ u(k),

which is a mixture of the original ground-truth distribution q(k|x) and the fixed distribution u(k), with weights 1 - ϵ and ϵ, respectively. This can be seen as the distribution of the label k obtained as follows: first, set it to the ground-truth label k = y; then, with probability ϵ, replace k with a sample drawn from the distribution u(k). We propose to use the prior distribution over labels as u(k). In our experiments, we used the uniform distribution u(k) = 1/K, so that

q'(k) = (1 - ϵ) δ_{k,y} + ϵ/K.

We refer to this change in ground-truth label distribution as label-smoothing regularization, or LSR. Note that LSR achieves the desired goal of preventing the largest logit from becoming much larger than all others. Indeed, if this were to happen, then a single q(k) would approach 1 while all others would approach 0. This would result in a large cross-entropy with q'(k) because, unlike q(k) = δ_{k,y}, all q'(k) have a positive lower bound.
Another interpretation of LSR can be obtained by considering the cross entropy:

H(q', p) = -Σ_{k=1}^{K} q'(k) log p(k) = (1 - ϵ) H(q, p) + ϵ H(u, p).

Thus, LSR is equivalent to replacing the single cross-entropy loss H(q, p) with a pair of such losses H(q, p) and H(u, p). The second loss penalizes the deviation of the predicted label distribution p from the prior u, with relative weight ϵ/(1 - ϵ). Note that this deviation could be equivalently captured by the KL divergence, since H(u, p) = D_KL(u∥p) + H(u) and H(u) is fixed. When u is the uniform distribution, H(u, p) is a measure of how dissimilar the predicted distribution p is to uniform, which could also be measured (but not equivalently) by the negative entropy -H(p); we have not experimented with this approach.
In our ImageNet experiments with K = 1000 classes, we used u(k) = 1/1000 and ϵ = 0.1. For ILSVRC 2012, we have found a consistent improvement of about 0.2% absolute both for top-1 error and top-5 error (cf. Table 3).
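A minimal sketch of LSR with a uniform prior, written as a drop-in loss function; the batch shapes in the usage example are hypothetical and the implementation is illustrative rather than the paper's released code.

```python
import torch
import torch.nn.functional as F

# Label-smoothing regularization (LSR) as described in Section 7: the one-hot
# target is mixed with the uniform prior u(k) = 1/K using weight epsilon.
def smoothed_cross_entropy(logits: torch.Tensor, target: torch.Tensor,
                           epsilon: float = 0.1) -> torch.Tensor:
    num_classes = logits.size(-1)
    log_p = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(target, num_classes).float()
    q_prime = (1.0 - epsilon) * one_hot + epsilon / num_classes
    return -(q_prime * log_p).sum(dim=-1).mean()

# Hypothetical usage with K = 1000 ImageNet classes and epsilon = 0.1.
logits = torch.randn(32, 1000)
labels = torch.randint(0, 1000, (32,))
print(smoothed_cross_entropy(logits, labels, epsilon=0.1).item())
```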
8. Training Methodology
We have trained our networks with stochastic gradient descent utilizing the TensorFlow [1] distributed machine learning system, using 50 replicas each running on an NVidia Kepler GPU, with batch size 32 for 100 epochs. Our earlier experiments used momentum [19] with a decay of 0.9, while our best models were achieved using RMSProp [21] with a decay of 0.9 and ϵ = 1.0. We used a learning rate of 0.045, decayed every two epochs using an exponential rate of 0.94. In addition, gradient clipping [14] with threshold 2.0 was found to be useful to stabilize the training. Model evaluations are performed using a running average of the parameters computed over time.
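A hedged sketch of these settings in a common framework. The paper's setup is TensorFlow-based and RMSProp hyperparameter conventions differ between frameworks, so the mapping below is approximate; the model is a stand-in and the loss function is left to the caller.

```python
import torch
from torch import nn

# Approximate mapping of the Section 8 settings: RMSProp with decay 0.9 and eps 1.0,
# learning rate 0.045 decayed by 0.94 every two epochs, gradient clipping at 2.0.
# The model below is a placeholder, not the Inception-v2 network.
model = nn.Linear(2048, 1000)

optimizer = torch.optim.RMSprop(model.parameters(), lr=0.045, alpha=0.9, eps=1.0)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.94)

def training_step(batch, labels, loss_fn):
    optimizer.zero_grad()
    loss = loss_fn(model(batch), labels)
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=2.0)
    optimizer.step()
    return loss

# scheduler.step() would be called once per epoch so the 0.94 decay fires every two epochs.
```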
9. Performance on Lower Resolution Input
A typical use case of vision networks is the post-classification of detection, for example in the Multibox [4] context. This includes the analysis of a relatively small patch of the image containing a single object with some context. The task is to decide whether the center part of the patch corresponds to some object and to determine the class of the object if it does. The challenge is that objects tend to be relatively small and low-resolution. This raises the question of how to properly deal with lower resolution input.
The common wisdom is that models employing higher resolution receptive fields tend to result in significantly improved recognition performance. However, it is important to distinguish between the effect of the increased resolution of the first layer receptive field and the effects of larger model capacity and computation. If we just change the resolution of the input without further adjustment to the model, then we end up using computationally much cheaper models to solve more difficult tasks. Of course, it is natural that these solutions lose out already because of the reduced computational effort. In order to make an accurate assessment, the model needs to analyze vague hints in order to be able to "hallucinate" the fine details. This is computationally costly. The question remains, therefore: how much does higher input resolution help if the computational effort is kept constant?

Receptive Field Size | Top-1 Accuracy (single frame)
79×79 | 75.2%
151×151 | 76.4%
299×299 | 76.6%
Table 2. Comparison of recognition performance when the size of the receptive field varies but the computational cost is constant.

One simple way to ensure constant effort is to reduce the strides of the first two layers in the case of lower resolution input, or to simply remove the first pooling layer of the network. For this purpose we have performed the following three experiments:
1. 299×299 receptive field with stride 2 and maximum pooling after the first layer.
2. 151×151 receptive field with stride 1 and maximum pooling after the first layer.
3. 79×79 receptive field with stride 1 and without pooling after the first layer.
All three networks have almost identical computational cost. Although the third network is slightly cheaper, the cost of the pooling layer is marginal (within 1% of the total cost of the network). In each case, the networks were trained until convergence and their quality was measured on the validation set of the ImageNet ILSVRC 2012 classification benchmark. The results can be seen in Table 2. Although the lower-resolution networks take longer to train, the quality of the final result is quite close to that of their higher resolution counterparts.
However, if one would just naively reduce the network size according to the input resolution, then the network would perform much more poorly. This would, however, be an unfair comparison, as we would be comparing a 16 times cheaper model on a more difficult task. As the results of Table 2 also suggest, one might consider using dedicated high-cost low-resolution networks for smaller objects in the R-CNN [5] context.

10. Experimental Results and Comparisons
Table 3 shows the experimental results on the recognition performance of our proposed architecture (Inception-v2) as described in Section 6. Each Inception-v2 line shows the result of the cumulative changes including the highlighted new modification plus all the earlier ones. Label Smoothing refers to the method described in Section 7. Factorized 7×7 includes a change that factorizes the first 7×7 convolutional layer into a sequence of 3×3 convolutional layers. BN-auxiliary refers to the version in which the fully connected layer of the auxiliary classifier is also batch-normalized, not just the convolutions.

Network | Top-1 Error | Top-5 Error | Cost (Bn Ops)
GoogLeNet [20] | 29% | 9.2% | 1.5
BN-GoogLeNet | 26.8% | - | 1.5
BN-Inception [7] | 25.2% | 7.8% | 2.0
Inception-v2 | 23.4% | - | 3.8
Inception-v2 RMSProp | 23.1% | 6.3% | 3.8
Inception-v2 Label Smoothing | 22.8% | 6.1% | 3.8
Inception-v2 Factorized 7×7 | 21.6% | 5.8% | 4.8
Inception-v2 BN-auxiliary | 21.2% | 5.6% | 4.8
Table 3. Single-crop experimental results comparing the cumulative effects of the various contributing factors. We compare our numbers with the best published single-crop inference for Ioffe et al. [7]. For the "Inception-v2" lines, the changes are cumulative and each subsequent line includes the new change in addition to the previous ones. The last line, with all the changes, is what we refer to as "Inception-v3" below. Unfortunately, He et al. [6] report only 10-crop evaluation results, not single-crop results; these are reported in Table 4 below.

Network | Crops Evaluated | Top-1 Error | Top-5 Error
GoogLeNet [20] | 10 | - | 9.15%
GoogLeNet [20] | 144 | - | 7.89%
VGG [18] | - | 24.4% | 6.8%
BN-Inception [7] | 144 | 22% | 5.82%
PReLU [6] | 10 | 24.27% | 7.38%
PReLU [6] | - | 21.59% | 5.71%
Inception-v3 | 12 | 19.47% | 4.48%
Inception-v3 | 144 | 18.77% | 4.2%
Table 4. Single-model, multi-crop experimental results comparing the effects of the various contributing factors. We compare our numbers with the best published single-model inference results on the ILSVRC 2012 classification benchmark.

We refer to the model in the last row of Table 3 as Inception-v3 and evaluate its performance in the multi-crop and ensemble settings.
All our evaluations are done on the 48238 non-blacklisted examples of the ILSVRC 2012 validation set, as suggested by [16]. We have evaluated all 50000 examples as well, and the results were roughly 0.1% worse in top-5 error and around 0.2% worse in top-1 error. In the upcoming version of this paper, we will verify our ensemble result on the test set, but our last evaluation of BN-Inception in spring [7] indicates that test and validation set error tend to correlate very well.
Network | Models Evaluated | Crops Evaluated | Top-1 Error | Top-5 Error
VGGNet [18] | 2 | - | 23.7% | 6.8%
GoogLeNet [20] | 7 | 144 | - | 6.67%
PReLU [6] | - | - | - | 4.94%
BN-Inception [7] | 6 | 144 | 20.1% | 4.9%
Inception-v3 | 4 | 144 | 17.2% | 3.58%*
Table 5. Ensemble evaluation results comparing multi-model, multi-crop reported results. Our numbers are compared with the best published ensemble inference results on the ILSVRC 2012 classification benchmark. *All results except the reported top-5 ensemble result are on the validation set. The ensemble yielded 3.46% top-5 error on the validation set.

11. Conclusions
We have provided several design principles for scaling up convolutional networks and studied them in the context of the Inception architecture. This guidance can lead to high performance vision networks that have a relatively modest computational cost compared to simpler, more monolithic architectures. Our highest quality version of Inception-v3 reaches 21.2% top-1 and 5.6% top-5 error for single-crop evaluation on the ILSVRC 2012 classification benchmark, setting a new state of the art. This is achieved with a relatively modest (2.5x) increase in computational cost compared to the network described in Ioffe et al. [7]. Still, our solution uses much less computation than the best published results based on denser networks: our model outperforms the results of He et al. [6], cutting the top-5 (top-1) error by 25% (14%) relative, respectively, while being six times cheaper computationally and using at least five times fewer parameters (estimated). Our ensemble of four Inception-v3 models reaches 3.5% top-5 error with multi-crop evaluation, which represents an over 25% reduction compared to the best published results and is almost half the error of the ILSVRC 2014 winning GoogLeNet ensemble.
We have also demonstrated that high quality results can be reached with receptive field resolution as low as 79×79. This might prove to be helpful in systems for detecting relatively small objects. We have studied how factorizing convolutions and aggressive dimension reduction inside the neural network can result in networks with relatively low computational cost while maintaining high quality. The combination of a lower parameter count and additional regularization with batch-normalized auxiliary classifiers and label smoothing allows for training high quality networks on relatively modest sized training sets.

References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In Proceedings of The 32nd International Conference on Machine Learning, 2015.
[3] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In Computer Vision - ECCV 2014, pages 184-199. Springer, 2014.
[4] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2155-2162. IEEE, 2014.
[5] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[6] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv preprint arXiv:1502.01852, 2015.
[7] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448-456, 2015.
[8] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1725-1732. IEEE, 2014.
[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[10] A. Lavin. Fast algorithms for convolutional neural networks. arXiv preprint arXiv:1509.09308, 2015.
[11] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply-supervised nets. arXiv preprint arXiv:1409.5185, 2014.
[12] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.
[13] Y. Movshovitz-Attias, Q. Yu, M. C. Stumpe, V. Shet, S. Arnoud, and L. Yatziv. Ontological supervision for fine grained classification of street view storefronts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1693-1702, 2015.
[14] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063, 2012.
[15] D. C. Psichogios and L. H. Ungar. SVD-NET: an algorithm that automatically selects network structure. IEEE Transactions on Neural Networks / A Publication of the IEEE Neural Networks Council, 5(3):513-515, 1993.
[16] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. 2014.
[17] F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. arXiv preprint arXiv:1503.03832, 2015.
[18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1139-1147. JMLR Workshop and Conference Proceedings, May 2013.
[20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015.
[21] T. Tieleman and G. Hinton. Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. Accessed: 2015-11-05.
[22] A. Toshev and C. Szegedy. DeepPose: Human pose estimation via deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1653-1660. IEEE, 2014.
[23] N. Wang and D.-Y. Yeung. Learning a deep compact image representation for visual tracking. In Advances in Neural Information Processing Systems, pages 809-817, 2013.
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
Forrest N. Iandola¹, Song Han², Matthew W. Moskewicz¹, Khalid Ashraf¹, William J. Dally², Kurt Keutzer¹
¹DeepScale* and UC Berkeley; ²Stanford University
{forresti, moskewcz, kashraf, keutzer}@eecs.berkeley.edu; {songhan, dally}@stanford.edu
Under review as a conference paper at ICLR 2017.

Abstract. Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet

1 Introduction and Motivation
Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accuracy, a CNN architecture with fewer parameters has several advantages:
More efficient distributed training. Communication among servers is the limiting factor to the scalability of distributed CNN training. For distributed data-parallel training, communication overhead is directly proportional to the number of parameters in the model (Iandola et al., 2016). In short, small models train faster due to requiring less communication.
Less overhead when exporting new models to clients. For autonomous driving, companies such as Tesla periodically copy new models from their servers to customers' cars. This practice is often referred to as an over-the-air update. Consumer Reports has found that the safety of Tesla's Autopilot semi-autonomous driving functionality has incrementally improved with recent over-the-air updates (Consumer Reports, 2016). However, over-the-air updates of today's typical CNN/DNN models can require large data transfers. With AlexNet, this would require 240MB of communication from the server to the car. Smaller models require less communication, making frequent updates more feasible.
Feasible FPGA and embedded deployment. FPGAs often have less than 10MB¹ of on-chip memory and no off-chip memory or storage. For inference, a sufficiently small model could be stored directly on the FPGA instead of being bottlenecked by memory bandwidth (Qiu et al., 2016), while video frames stream through the FPGA in real time. Further, when deploying CNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model could be stored directly on-chip, and smaller models may enable the ASIC to fit on a smaller die.
* http://deepscale.ai
¹ For example, the Xilinx Virtex-7 FPGA has a maximum of 8.5 MBytes (i.e. 68 Mbits) of on-chip memory and does not provide off-chip memory.
As you can see, there are several advantages of smaller CNN architectures. With this in mind, we focus directly on the problem of identifying a CNN architecture with fewer parameters but equivalent accuracy compared to a well-known model. We have discovered such an architecture, which we call SqueezeNet. In addition, we present our attempt at a more disciplined approach to searching the design space for novel CNN architectures.
The rest of the paper is organized as follows. In Section 2 we review the related work. Then, in Sections 3 and 4 we describe and evaluate the SqueezeNet architecture. After that, we turn our attention to understanding how CNN architectural design choices impact model size and accuracy. We gain this understanding by exploring the design space of SqueezeNet-like architectures. In Section 5, we do design space exploration on the CNN microarchitecture, which we define as the organization and dimensionality of individual layers and modules. In Section 6, we do design space exploration on the CNN macroarchitecture, which we define as the high-level organization of layers in a CNN. Finally, we conclude in Section 7. In short, Sections 3 and 4 are useful for CNN researchers as well as practitioners who simply want to apply SqueezeNet to a new application. The remaining sections are aimed at advanced researchers who intend to design their own CNN architectures.

2 Related Work
2.1 Model Compression
The overarching goal of our work is to identify a model that has very few parameters while preserving accuracy. To address this problem, a sensible approach is to take an existing CNN model and compress it in a lossy fashion. In fact, a research community has emerged around the topic of model compression, and several approaches have been reported. A fairly straightforward approach by Denton et al. is to apply singular value decomposition (SVD) to a pretrained CNN model (Denton et al., 2014). Han et al. developed Network Pruning, which begins with a pretrained model, then replaces parameters that are below a certain threshold with zeros to form a sparse matrix, and finally performs a few iterations of training on the sparse CNN (Han et al., 2015b). Recently, Han et al. extended their work by combining Network Pruning with quantization (to 8 bits or less) and Huffman encoding to create an approach called Deep Compression (Han et al., 2015a), and further designed a hardware accelerator called EIE (Han et al., 2016a) that operates directly on the compressed model, achieving substantial speedups and energy savings.

2.2 CNN Microarchitecture
Convolutions have been used in artificial neural networks for at least 25 years; LeCun et al. helped to popularize CNNs for digit recognition applications in the late 1980s (LeCun et al., 1989). In neural networks, convolution filters are typically 3D, with height, width, and channels as the key dimensions. When applied to images, CNN filters typically have 3 channels in their first layer (i.e. RGB), and in each subsequent layer Li the filters have the same number of channels as Li-1 has filters. The early work by LeCun et al. (LeCun et al., 1989) uses 5x5xChannels² filters, and the recent VGG (Simonyan & Zisserman, 2014) architectures extensively use 3x3 filters. Models such as Network-in-Network (Lin et al., 2013) and the GoogLeNet family of architectures (Szegedy et al., 2014; Ioffe & Szegedy, 2015; Szegedy et al., 2015; 2016) use 1x1 filters in some layers.
With the trend of designing very deep CNNs, it becomes cumbersome to manually select filter dimensions for each layer. To address this, various higher level building blocks, or modules, comprised of multiple convolution layers with a specific fixed organization have been proposed. For example, the GoogLeNet papers propose Inception modules, which are comprised of a number of different dimensionalities of filters, usually including 1x1 and 3x3, plus sometimes 5x5 (Szegedy et al., 2014) and sometimes 1x3 and 3x1 (Szegedy et al., 2015). Many such modules are then combined, perhaps with additional ad-hoc layers, to form a complete network. We use the term CNN microarchitecture to refer to the particular organization and dimensions of the individual modules.

2.3 CNN Macroarchitecture
While the CNN microarchitecture refers to individual layers and modules, we define the CNN macroarchitecture as the system-level organization of multiple modules into an end-to-end CNN architecture.
² From now on, we will simply abbreviate HxWxChannels to HxW.
Perhaps the most widely studied CNN macroarchitecture topic in the recent literature is the impact of depth (i.e. number of layers) in networks. Simonyan and Zisserman proposed the VGG (Simonyan & Zisserman, 2014) family of CNNs with 12 to 19 layers and reported that deeper networks produce higher accuracy on the ImageNet-1k dataset (Deng et al., 2009). K. He et al. proposed deeper CNNs with up to 30 layers that deliver even higher ImageNet accuracy (He et al., 2015a).
The choice of connections across multiple layers or modules is an emerging area of CNN macroarchitectural research. Residual Networks (ResNet) (He et al., 2015b) and Highway Networks (Srivastava et al., 2015) each propose the use of connections that skip over multiple layers, for example additively connecting the activations from layer 3 to the activations from layer 6. We refer to these connections as bypass connections. The authors of ResNet provide an A/B comparison of a 34-layer CNN with and without bypass connections; adding bypass connections delivers a 2 percentage-point improvement on top-5 ImageNet accuracy.

2.4 Neural Network Design Space Exploration
Neural networks (including deep and convolutional NNs) have a large design space, with numerous options for microarchitectures, macroarchitectures, solvers, and other hyperparameters. It seems natural that the community would want to gain intuition about how these factors impact a NN's accuracy (i.e. the shape of the design space). Much of the work on design space exploration (DSE) of NNs has focused on developing automated approaches for finding NN architectures that deliver higher accuracy. These automated DSE approaches include Bayesian optimization (Snoek et al., 2012), simulated annealing (Ludermir et al., 2006), randomized search (Bergstra & Bengio, 2012), and genetic algorithms (Stanley & Miikkulainen, 2002). To their credit, each of these papers provides a case in which the proposed DSE approach produces a NN architecture that achieves higher accuracy compared to a representative baseline. However, these papers make no attempt to provide intuition about the shape of the NN design space. Later in this paper, we eschew automated approaches; instead, we refactor CNNs in such a way that we can do principled A/B comparisons to investigate how CNN architectural decisions influence model size and accuracy.
In the following sections, we first propose and evaluate the SqueezeNet architecture with and without model compression. Then, we explore the impact of design choices in microarchitecture and macroarchitecture for SqueezeNet-like CNN architectures.

3 SqueezeNet: Preserving Accuracy with Few Parameters
In this section, we begin by outlining our design strategies for CNN architectures with few parameters. Then, we introduce the Fire module, our new building block out of which to build CNN architectures. Finally, we use our design strategies to construct SqueezeNet, which is comprised mainly of Fire modules.

3.1 Architectural Design Strategies
Our overarching objective in this paper is to identify CNN architectures that have few parameters while maintaining competitive accuracy. To achieve this, we employ three main strategies when designing CNN architectures:
Strategy 1. Replace 3x3 filters with 1x1 filters. Given a budget of a certain number of convolution filters, we will choose to make the majority of these filters 1x1, since a 1x1 filter has 9X fewer parameters than a 3x3 filter.
Strategy 2. Decrease the number of input channels to 3x3 filters. Consider a convolution layer that is comprised entirely of 3x3 filters. The total quantity of parameters in this layer is (number of input channels) * (number of filters) * (3*3). So, to maintain a small total number of parameters in a CNN, it is important not only to decrease the number of 3x3 filters (see Strategy 1 above), but also to decrease the number of input channels to the 3x3 filters. We decrease the number of input channels to 3x3 filters using squeeze layers, which we describe in the next section.
Strategy 3. Downsample late in the network so that convolution layers have large activation maps. In a convolutional network, each convolution layer produces an output activation map with a spatial resolution that is at least 1x1 and often much larger than 1x1. The height and width of these activation maps are controlled by: (1) the size of the input data (e.g. 256x256 images) and
(2) the choice of layers in which to downsample in the CNN architecture. Most commonly, downsampling is engineered into CNN architectures by setting the stride to greater than 1 in some of the convolution or pooling layers (e.g. (Szegedy et al., 2014; Simonyan & Zisserman, 2014; Krizhevsky et al., 2012)). If early³ layers in the network have large strides, then most layers will have small activation maps. Conversely, if most layers in the network have a stride of 1, and the strides greater than 1 are concentrated toward the end⁴ of the network, then many layers in the network will have large activation maps. Our intuition is that large activation maps (due to delayed downsampling) can lead to higher classification accuracy, with all else held equal. Indeed, K. He and H. Sun applied delayed downsampling to four different CNN architectures, and in each case delayed downsampling led to higher classification accuracy (He & Sun, 2015).
Strategies 1 and 2 are about judiciously decreasing the quantity of parameters in a CNN while attempting to preserve accuracy. Strategy 3 is about maximizing accuracy on a limited budget of parameters. Next, we describe the Fire module, which is our building block for CNN architectures that enables us to successfully employ Strategies 1, 2, and 3.
³ In our terminology, an "early" layer is close to the input data.
⁴ In our terminology, the "end" of the network is the classifier.

3.2 The Fire Module
We define the Fire module as follows. A Fire module is comprised of: a squeeze convolution layer (which has only 1x1 filters), feeding into an expand layer that has a mix of 1x1 and 3x3 convolution filters; we illustrate this in Figure 1. The liberal use of 1x1 filters in Fire modules is an application of Strategy 1 from Section 3.1. We expose three tunable dimensions (hyperparameters) in a Fire module: s1x1, e1x1, and e3x3. In a Fire module, s1x1 is the number of filters in the squeeze layer (all 1x1), e1x1 is the number of 1x1 filters in the expand layer, and e3x3 is the number of 3x3 filters in the expand layer. When we use Fire modules we set s1x1 to be less than (e1x1 + e3x3), so the squeeze layer helps to limit the number of input channels to the 3x3 filters, as per Strategy 2 from Section 3.1.

Figure 1: Microarchitectural view: organization of convolution filters in the Fire module. In this example, s1x1 = 3, e1x1 = 4, and e3x3 = 4. We illustrate the convolution filters but not the activations.

3.3 The SqueezeNet Architecture
We now describe the SqueezeNet CNN architecture. We illustrate in Figure 2 that SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), ending with a final conv layer (conv10). We gradually increase the number of filters per Fire module from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10; these relatively late placements of pooling are per Strategy 3 from Section 3.1. We present the full SqueezeNet architecture in Table 1.
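A minimal sketch of a Fire module under these definitions; the hyperparameter values follow the fire2 row of Table 1, and the PyTorch formulation is illustrative rather than the released Caffe implementation.

```python
import torch
from torch import nn

# Sketch of a Fire module (Section 3.2): a 1x1 "squeeze" layer feeding an "expand"
# layer made of 1x1 and 3x3 filters whose outputs are concatenated along channels.
class Fire(nn.Module):
    def __init__(self, in_ch: int, s1x1: int, e1x1: int, e3x3: int):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, s1x1, kernel_size=1)
        self.expand1x1 = nn.Conv2d(s1x1, e1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(s1x1, e3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Values matching the fire2 row of Table 1: s1x1=16, e1x1=64, e3x3=64.
fire2 = Fire(in_ch=96, s1x1=16, e1x1=64, e3x3=64)
x = torch.randn(1, 96, 55, 55)
print(fire2(x).shape)  # torch.Size([1, 128, 55, 55])
```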
[Figure 2: Macroarchitectural view of our SqueezeNet architecture. Left: SqueezeNet (Section 3.3); Middle: SqueezeNet with simple bypass (Section 6); Right: SqueezeNet with complex bypass (Section 6).]

3.3.1 OTHER SQUEEZENET DETAILS
For brevity, we have omitted a number of details and design choices about SqueezeNet from Table 1 and Figure 2. We provide these design choices in the following. The intuition behind these choices may be found in the papers cited below.

So that the output activations from 1x1 and 3x3 filters have the same height and width, we add a 1-pixel border of zero-padding in the input data to the 3x3 filters of expand modules. ReLU (Nair & Hinton, 2010) is applied to activations from squeeze and expand layers. Dropout (Srivastava et al., 2014) with a ratio of 50% is applied after the fire9 module. Note the lack of fully-connected layers in SqueezeNet; this design choice was inspired by the NiN (Lin et al., 2013) architecture. When training SqueezeNet, we begin with a learning rate of 0.04, and we linearly decrease the learning rate throughout training, as described in (Mishkin et al., 2016). For details on the training protocol (e.g. batch size, learning rate, parameter initialization), please refer to our Caffe-compatible configuration files located here: https://github.com/DeepScale/SqueezeNet.

The Caffe framework does not natively support a convolution layer that contains multiple filter resolutions (e.g. 1x1 and 3x3) (Jia et al., 2014). To get around this, we implement our expand layer with two separate convolution layers: a layer with 1x1 filters, and a layer with 3x3 filters. Then, we concatenate the outputs of these layers together in the channel dimension. This is numerically equivalent to implementing one layer that contains both 1x1 and 3x3 filters.

We released the SqueezeNet configuration files in the format defined by the Caffe CNN framework. However, in addition to Caffe, several other CNN frameworks have emerged, including MXNet (Chen et al., 2015a), Chainer (Tokui et al., 2015), Keras (Chollet, 2016), and Torch (Collobert et al., 2011). Each of these has its own native format for representing a CNN architecture. That said, most of these libraries use the same underlying computational back-ends such as cuDNN (Chetlur et al., 2014) and MKL-DNN (Das et al., 2016).
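Returning briefly to the training details in Section 3.3.1 above: the linearly decaying learning rate (initial value 0.04) is simple to reproduce. A small sketch, where the total number of steps and the final learning rate are placeholders rather than values taken from the paper or its configuration files:

```python
def linear_lr(step, total_steps, base_lr=0.04, final_lr=0.0):
    """Linearly decay the learning rate from base_lr to final_lr over training,
    mirroring the schedule in Section 3.3.1 (base_lr = 0.04 comes from the paper;
    total_steps and final_lr are placeholders, not values from the paper)."""
    frac = min(step / float(total_steps), 1.0)
    return base_lr + frac * (final_lr - base_lr)

# Example: learning rate at the start, midpoint, and end of a hypothetical 170,000-step run.
for step in (0, 85_000, 170_000):
    print(step, round(linear_lr(step, total_steps=170_000), 4))
# 0 0.04
# 85000 0.02
# 170000 0.0
```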
The research community has ported the SqueezeNet CNN architecture for compatibility with a number of other CNN software frameworks:
MXNet (Chen et al., 2015a) port of SqueezeNet: (Haria, 2016)
Chainer (Tokui et al., 2015) port of SqueezeNet: (Bell, 2016)
Keras (Chollet, 2016) port of SqueezeNet: (DT42, 2016)
Torch (Collobert et al., 2011) port of SqueezeNet's Fire modules: (Waghmare, 2016)

4 EVALUATION OF SQUEEZENET
We now turn our attention to evaluating SqueezeNet. In each of the CNN model compression papers reviewed in Section 2.1, the goal was to compress an AlexNet (Krizhevsky et al., 2012) model that was trained to classify images using the ImageNet (Deng et al., 2009) (ILSVRC 2012) dataset. Therefore, we use AlexNet [5] and the associated model compression results as a basis for comparison when evaluating SqueezeNet.

Table 1: SqueezeNet architectural dimensions. (The formatting of this table was inspired by the Inception2 paper (Ioffe & Szegedy, 2015).)

layer name/type | output size | filter size / stride (if not a fire layer) | depth | s1x1 (#1x1 squeeze) | e1x1 (#1x1 expand) | e3x3 (#3x3 expand) | sparsity (s1x1 / e1x1 / e3x3) | # bits | # parameters before pruning | # parameters after pruning
input image | 224x224x3 | - | | | | | | | |
conv1 | 111x111x96 | 7x7/2 (x96) | 1 | | | | 100% (7x7) | 6 bit | 14,208 | 14,208
maxpool1 | 55x55x96 | 3x3/2 | 0 | | | | | | |
fire2 | 55x55x128 | | 2 | 16 | 64 | 64 | 100% / 100% / 33% | 6 bit | 11,920 | 5,746
fire3 | 55x55x128 | | 2 | 16 | 64 | 64 | 100% / 100% / 33% | 6 bit | 12,432 | 6,258
fire4 | 55x55x256 | | 2 | 32 | 128 | 128 | 100% / 100% / 33% | 6 bit | 45,344 | 20,646
maxpool4 | 27x27x256 | 3x3/2 | 0 | | | | | | |
fire5 | 27x27x256 | | 2 | 32 | 128 | 128 | 100% / 100% / 33% | 6 bit | 49,440 | 24,742
fire6 | 27x27x384 | | 2 | 48 | 192 | 192 | 100% / 50% / 33% | 6 bit | 104,880 | 44,700
fire7 | 27x27x384 | | 2 | 48 | 192 | 192 | 50% / 100% / 33% | 6 bit | 111,024 | 46,236
fire8 | 27x27x512 | | 2 | 64 | 256 | 256 | 100% / 50% / 33% | 6 bit | 188,992 | 77,581
maxpool8 | 13x13x512 | 3x3/2 | 0 | | | | | | |
fire9 | 13x13x512 | | 2 | 64 | 256 | 256 | 50% / 100% / 30% | 6 bit | 197,184 | 77,581
conv10 | 13x13x1000 | 1x1/1 (x1000) | 1 | | | | 20% (3x3) | 6 bit | 513,000 | 103,400
avgpool10 | 1x1x1000 | 13x13/1 | 0 | | | | | | |
| | | | | | | | | 1,248,424 (total) | 421,098 (total)

In Table 2, we review SqueezeNet in the context of recent model compression results. The SVD-based approach is able to compress a pretrained AlexNet model by a factor of 5x, while diminishing top-1 accuracy to 56.0% (Denton et al., 2014). Network Pruning achieves a 9x reduction in model size while maintaining the baseline of 57.2% top-1 and 80.3% top-5 accuracy on ImageNet (Han et al., 2015b). Deep Compression achieves a 35x reduction in model size while still maintaining the baseline accuracy level (Han et al., 2015a). Now, with SqueezeNet, we achieve a 50x reduction in model size compared to AlexNet, while meeting or exceeding the top-1 and top-5 accuracy of AlexNet. We summarize all of the aforementioned results in Table 2.

It appears that we have surpassed the state-of-the-art results from the model compression community: even when using uncompressed 32-bit values to represent the model, SqueezeNet has a 1.4x smaller model size than the best efforts from the model compression community while maintaining or exceeding the baseline accuracy. Until now, an open question has been: are small models amenable to compression, or do small models "need" all of the representational power afforded by dense floating-point values?

[5] Our baseline is bvlc_alexnet from the Caffe codebase (Jia et al., 2014).
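The pre-pruning parameter counts in Table 1 follow directly from each Fire module's dimensions and the number of input channels it sees (biases included). A quick sketch of that arithmetic, using only the dimensions listed in the table:

```python
def fire_params(in_ch, s1x1, e1x1, e3x3):
    """Parameter count of one Fire module, biases included:
    squeeze 1x1, expand 1x1, and expand 3x3 convolutions."""
    squeeze = in_ch * s1x1 * 1 * 1 + s1x1
    expand1 = s1x1 * e1x1 * 1 * 1 + e1x1
    expand3 = s1x1 * e3x3 * 3 * 3 + e3x3
    return squeeze + expand1 + expand3

# fire2 takes 96 channels from conv1/maxpool1; fire3 takes the 128 channels fire2 emits.
print(fire_params(96, 16, 64, 64))     # 11920, matching the fire2 row of Table 1
print(fire_params(128, 16, 64, 64))    # 12432, matching the fire3 row
print(fire_params(128, 32, 128, 128))  # 45344, matching the fire4 row
```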
Table 2: Comparing SqueezeNet to model compression approaches. By model size, we mean the number of bytes required to store all of the parameters in the trained model.

CNN architecture | Compression Approach | Data Type | Original -> Compressed Model Size | Reduction in Model Size vs. AlexNet | Top-1 ImageNet Accuracy | Top-5 ImageNet Accuracy
AlexNet | None (baseline) | 32 bit | 240MB | 1x | 57.2% | 80.3%
AlexNet | SVD (Denton et al., 2014) | 32 bit | 240MB -> 48MB | 5x | 56.0% | 79.4%
AlexNet | Network Pruning (Han et al., 2015b) | 32 bit | 240MB -> 27MB | 9x | 57.2% | 80.3%
AlexNet | Deep Compression (Han et al., 2015a) | 5-8 bit | 240MB -> 6.9MB | 35x | 57.2% | 80.3%
SqueezeNet (ours) | None | 32 bit | 4.8MB | 50x | 57.5% | 80.3%
SqueezeNet (ours) | Deep Compression | 8 bit | 4.8MB -> 0.66MB | 363x | 57.5% | 80.3%
SqueezeNet (ours) | Deep Compression | 6 bit | 4.8MB -> 0.47MB | 510x | 57.5% | 80.3%

To find out, we applied Deep Compression (Han et al., 2015a) to SqueezeNet, using 33% sparsity [6] and 8-bit quantization. This yields a 0.66 MB model (363x smaller than 32-bit AlexNet) with equivalent accuracy to AlexNet. Further, applying Deep Compression with 6-bit quantization and 33% sparsity on SqueezeNet, we produce a 0.47MB model (510x smaller than 32-bit AlexNet) with equivalent accuracy. Our small model is indeed amenable to compression.

In addition, these results demonstrate that Deep Compression (Han et al., 2015a) not only works well on CNN architectures with many parameters (e.g. AlexNet and VGG), but it is also able to compress the already compact, fully convolutional SqueezeNet architecture. Deep Compression compressed SqueezeNet by 10x while preserving the baseline accuracy. In summary: by combining CNN architectural innovation (SqueezeNet) with state-of-the-art compression techniques (Deep Compression), we achieved a 510x reduction in model size with no decrease in accuracy compared to the baseline.

Finally, note that Deep Compression (Han et al., 2015a) uses a codebook as part of its scheme for quantizing CNN parameters to 6 or 8 bits of precision. Therefore, on most commodity processors, it is not trivial to achieve a speedup of 32/8 = 4x with 8-bit quantization or 32/6 = 5.3x with 6-bit quantization using the scheme developed in Deep Compression. However, Han et al. developed custom hardware, the Efficient Inference Engine (EIE), that can compute codebook-quantized CNNs more efficiently (Han et al., 2016a). In addition, in the months since we released SqueezeNet, P. Gysel developed a strategy called Ristretto for linearly quantizing SqueezeNet to 8 bits (Gysel, 2016). Specifically, Ristretto does computation in 8 bits, and it stores parameters and activations in 8-bit data types. Using the Ristretto strategy for 8-bit computation in SqueezeNet inference, Gysel observed less than 1 percentage-point of drop in accuracy when using 8-bit instead of 32-bit data types.

5 CNN MICROARCHITECTURE DESIGN SPACE EXPLORATION
So far, we have proposed architectural design strategies for small models, followed these principles to create SqueezeNet, and discovered that SqueezeNet is 50x smaller than AlexNet with equivalent accuracy. However, SqueezeNet and other models reside in a broad and largely unexplored design space of CNN architectures. Now, in Sections 5 and 6, we explore several aspects of the design space. We divide this architectural exploration into two main topics: microarchitectural exploration (per-module layer dimensions and configurations) and macroarchitectural exploration (high-level end-to-end organization of modules and other layers).
In this section, we design and execute experiments with the goal of providing intuition about the shape of the microarchitectural design space with respect to the design strategies that we proposed in Section 3.1. Note that our goal here is not to maximize accuracy in every experiment, but rather to understand the impact of CNN architectural choices on model size and accuracy.

[6] Note that, due to the storage overhead of storing sparse matrix indices, 33% sparsity leads to somewhat less than a 3x decrease in model size.
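The footnote just above is easy to make concrete: a dense model needs roughly (parameters x bits per weight) of storage, while a pruned model must also store an index for each surviving weight, so 33% density buys somewhat less than a 3x reduction. A rough sketch of that arithmetic, where the per-index bit cost is an assumed placeholder rather than the exact Deep Compression storage format:

```python
def dense_size_mb(params, bits_per_weight=32):
    """Approximate dense model size in MB (1 MB = 2**20 bytes)."""
    return params * bits_per_weight / 8 / 2**20

def sparse_size_mb(params, density, bits_per_weight, bits_per_index=4):
    """Rough size of a pruned model: surviving weights plus one index each.
    bits_per_index is a placeholder, not the exact Deep Compression format."""
    kept = params * density
    return kept * (bits_per_weight + bits_per_index) / 8 / 2**20

total_params = 1_248_424  # SqueezeNet, Table 1
print(round(dense_size_mb(total_params), 2))            # ~4.76 MB, i.e. the 4.8MB in Table 2
print(round(sparse_size_mb(total_params, 0.33, 6), 2))  # ~0.49 MB: near, not exactly, the 0.47MB figure
```

The dense estimate recovers the 4.8MB figure in Table 2; the sparse estimate lands close to the reported 0.47MB but not exactly on it, because Deep Compression's codebook and index encoding differ from this simplification.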
[Figure 3: Microarchitectural design space exploration. (a) Exploring the impact of the squeeze ratio (SR) on model size and accuracy. (b) Exploring the impact of the ratio of 3x3 filters in expand layers (pct_3x3) on model size and accuracy.]

5.1 CNN MICROARCHITECTURE METAPARAMETERS
In SqueezeNet, each Fire module has three dimensional hyperparameters that we defined in Section 3.2: s1x1, e1x1, and e3x3. SqueezeNet has 8 Fire modules with a total of 24 dimensional hyperparameters. To do broad sweeps of the design space of SqueezeNet-like architectures, we define the following set of higher level metaparameters which control the dimensions of all Fire modules in a CNN. We define base_e as the number of expand filters in the first Fire module in a CNN. After every freq Fire modules, we increase the number of expand filters by incr_e. In other words, for Fire module i, the number of expand filters is e_i = base_e + (incr_e * floor(i / freq)). In the expand layer of a Fire module, some filters are 1x1 and some are 3x3; we define e_i = e_i,1x1 + e_i,3x3, with pct_3x3 (in the range [0, 1], shared over all Fire modules) as the percentage of expand filters that are 3x3. In other words, e_i,3x3 = e_i * pct_3x3, and e_i,1x1 = e_i * (1 - pct_3x3). Finally, we define the number of filters in the squeeze layer of a Fire module using a metaparameter called the squeeze ratio (SR) (again, in the range [0, 1], shared by all Fire modules): s_i,1x1 = SR * e_i (or equivalently s_i,1x1 = SR * (e_i,1x1 + e_i,3x3)). SqueezeNet (Table 1) is an example architecture that we generated with the aforementioned set of metaparameters. Specifically, SqueezeNet has the following metaparameters: base_e = 128, incr_e = 128, pct_3x3 = 0.5, freq = 2, and SR = 0.125.

5.2 SQUEEZE RATIO
In Section 3.1, we proposed decreasing the number of parameters by using squeeze layers to decrease the number of input channels seen by 3x3 filters. We defined the squeeze ratio (SR) as the ratio between the number of filters in squeeze layers and the number of filters in expand layers. We now design an experiment to investigate the effect of the squeeze ratio on model size and accuracy.

In these experiments, we use SqueezeNet (Figure 2) as a starting point. As in SqueezeNet, these experiments use the following metaparameters: base_e = 128, incr_e = 128, pct_3x3 = 0.5, and freq = 2. We train multiple models, where each model has a different squeeze ratio (SR) [7] in the range [0.125, 1.0]. In Figure 3(a), we show the results of this experiment, where each point on the graph is an independent model that was trained from scratch. SqueezeNet is the SR = 0.125 point in this figure. [8] From this figure, we learn that increasing SR beyond 0.125 can further increase ImageNet top-5 accuracy from 80.3% (i.e. AlexNet-level) with a 4.8MB model to 86.0% with a 19MB model. Accuracy plateaus at 86.0% with SR = 0.75 (a 19MB model), and setting SR = 1.0 further increases model size without improving accuracy.

5.3 TRADING OFF 1X1 AND 3X3 FILTERS
In Section 3.1, we proposed decreasing the number of parameters in a CNN by replacing some 3x3 filters with 1x1 filters. An open question is, how important is spatial resolution in CNN filters?

[7] Note that, for a given model, all Fire layers share the same squeeze ratio.
[8] Note that we named it SqueezeNet because it has a low squeeze ratio (SR). That is, the squeeze layers in SqueezeNet have 0.125x the number of filters as the expand layers.
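The Section 5.1 metaparameter scheme is compact enough to write out directly. The sketch below regenerates the per-module Fire dimensions from those metaparameters; indexing the modules from 0 is our own assumption, chosen because it reproduces the fire2-fire9 rows of Table 1 under SqueezeNet's settings.

```python
def fire_dims(n_modules=8, base_e=128, incr_e=128, freq=2, pct_3x3=0.5, sr=0.125):
    """Generate (s1x1, e1x1, e3x3) for each Fire module from the Section 5.1
    metaparameters: e_i = base_e + incr_e * floor(i / freq), split by pct_3x3,
    with s1x1 = SR * e_i."""
    dims = []
    for i in range(n_modules):
        e_i = base_e + incr_e * (i // freq)
        e3x3 = int(e_i * pct_3x3)
        e1x1 = e_i - e3x3
        s1x1 = int(sr * e_i)
        dims.append((s1x1, e1x1, e3x3))
    return dims

# SqueezeNet's settings reproduce the fire2..fire9 rows of Table 1:
for name, d in zip([f"fire{k}" for k in range(2, 10)], fire_dims()):
    print(name, d)
# fire2 (16, 64, 64), fire3 (16, 64, 64), fire4 (32, 128, 128), ..., fire9 (64, 256, 256)
```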
Under review as a conference paper at ICLR 2017 The VGG (Simonyan & Zisserman, 2014) architectures have 3x3 spatial resolution in most layers' filters; Goog Le Net (Szegedy et al., 2014) and Network-in-Network (Ni N) (Lin et al., 2013) have 1x1 filters in some layers. In Goog Le Net and Ni N, the authors simply propose a specific quantity of 1x1 and 3x3 filters without further analysis. 9Here, we attempt to shed light on how the proportion of 1x1 and 3x3 filters affects model size and accuracy. We use the following metaparameters in this experiment: base e=incre= 128,freq = 2,SR= 0. 500, and we vary pct3x3from 1% to 99%. In other words, each Fire module's expand layer has a predefined number of filters partitioned between 1x1 and 3x3, and here we turn the knob on these filters from “mostly 1x1” to “mostly 3x3”. As in the previous experiment, these models have 8 Fire modules, following the same organization of layers as in Figure 2. We show the results of this experiment in Figure 3(b). Note that the 13MB models in Figure 3(a) and Figure 3(b) are the same architecture: SR= 0. 500andpct3x3= 50%. We see in Figure 3(b) that the top-5 accuracy plateaus at 85. 6% using 50% 3x3 filters, and further increasing the percentage of 3x3 filters leads to a larger model size but provides no improvement in accuracy on Image Net. 6 CNN M ACROARCHITECTURE DESIGN SPACE EXPLORATION So far we have explored the design space at the microarchitecture level, i. e. the contents of individual modules of the CNN. Now, we explore design decisions at the macroarchitecture level concerning the high-level connections among Fire modules. Inspired by Res Net (He et al., 2015b), we explored three different architectures: Vanilla Squeeze Net (as per the prior sections). Squeeze Net with simple bypass connections between some Fire modules. (Inspired by (Sri-vastava et al., 2015; He et al., 2015b). ) Squeeze Net with complex bypass connections between the remaining Fire modules. We illustrate these three variants of Squeeze Net in Figure 2. Oursimple bypass architecture adds bypass connections around Fire modules 3, 5, 7, and 9, requiring these modules to learn a residual function between input and output. As in Res Net, to implement a bypass connection around Fire3, we set the input to Fire4 equal to (output of Fire2 + output of Fire3), where the + operator is elementwise addition. This changes the regularization applied to the parameters of these Fire modules, and, as per Res Net, can improve the final accuracy and/or ability to train the full model. One limitation is that, in the straightforward case, the number of input channels and number of output channels has to be the same; as a result, only half of the Fire modules can have simple bypass connections, as shown in the middle diagram of Fig 2. When the “same number of channels” requirement can't be met, we use a complex bypass connection, as illustrated on the right of Figure 2. While a simple bypass is “just a wire,” we define a complex bypass as a bypass that includes a 1x1 convolution layer with the number of filters set equal to the number of output channels that are needed. Note that complex bypass connections add extra parameters to the model, while simple bypass connections do not. In addition to changing the regularization, it is intuitive to us that adding bypass connections would help to alleviate the representational bottleneck introduced by squeeze layers. In Squeeze Net, the squeeze ratio (SR) is 0. 
125, meaning that every squeeze layer has 8x fewer output channels than the accompanying expand layer. Due to this severe dimensionality reduction, a limited amount of information can pass through squeeze layers. However, by adding bypass connections to SqueezeNet, we open up avenues for information to flow around the squeeze layers.

We trained SqueezeNet with the three macroarchitectures in Figure 2 and compared the accuracy and model size in Table 3. We fixed the microarchitecture to match SqueezeNet as described in Table 1 throughout the macroarchitecture exploration. Complex and simple bypass connections both yielded an accuracy improvement over the vanilla SqueezeNet architecture. Interestingly, the simple bypass enabled a higher accuracy improvement than the complex bypass.

[9] To be clear, each filter is 1x1xChannels or 3x3xChannels, which we abbreviate to 1x1 and 3x3.
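The difference between the two bypass types of Section 6 is easiest to see in code. The sketch below is our own PyTorch rendering, not the released Caffe definition: a simple bypass is elementwise addition around a channel-preserving module, while a complex bypass routes the skip path through a 1x1 convolution to match channel counts, at the cost of extra parameters.

```python
import torch
import torch.nn as nn

class SimpleBypass(nn.Module):
    """Simple bypass: output = module(x) + x. Only valid when the wrapped module
    preserves the channel count (e.g. Fire3, Fire5, Fire7, Fire9)."""
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, x):
        return self.module(x) + x  # "just a wire": elementwise add, no extra parameters

class ComplexBypass(nn.Module):
    """Complex bypass: a 1x1 convolution on the skip path matches the channel
    count when input and output channels differ, adding extra parameters."""
    def __init__(self, module, in_channels, out_channels):
        super().__init__()
        self.module = module
        self.skip = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.module(x) + self.skip(x)

# Stand-in blocks (any module with matching spatial output works here):
y = SimpleBypass(nn.Conv2d(128, 128, 3, padding=1))(torch.randn(1, 128, 55, 55))
z = ComplexBypass(nn.Conv2d(96, 128, 3, padding=1), 96, 128)(torch.randn(1, 96, 55, 55))
print(y.shape, z.shape)  # both (1, 128, 55, 55)
```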
Table 3: SqueezeNet accuracy and model size using different macroarchitecture configurations

Architecture | Top-1 Accuracy | Top-5 Accuracy | Model Size
Vanilla SqueezeNet | 57.5% | 80.3% | 4.8MB
SqueezeNet + Simple Bypass | 60.4% | 82.5% | 4.8MB
SqueezeNet + Complex Bypass | 58.8% | 82.0% | 7.7MB

Adding the simple bypass connections yielded an increase of 2.9 percentage-points in top-1 accuracy and 2.2 percentage-points in top-5 accuracy without increasing model size.

7 CONCLUSIONS
In this paper, we have proposed steps toward a more disciplined approach to the design-space exploration of convolutional neural networks. Toward this goal we have presented SqueezeNet, a CNN architecture that has 50x fewer parameters than AlexNet and maintains AlexNet-level accuracy on ImageNet. We also compressed SqueezeNet to less than 0.5MB, or 510x smaller than AlexNet without compression. Since we released this paper as a technical report in 2016, Song Han and his collaborators have experimented further with SqueezeNet and model compression. Using a new approach called Dense-Sparse-Dense (DSD) (Han et al., 2016b), Han et al. use model compression during training as a regularizer to further improve accuracy, producing a compressed set of SqueezeNet parameters that is 1.2 percentage-points more accurate on ImageNet-1k, and also producing an uncompressed set of SqueezeNet parameters that is 4.3 percentage-points more accurate, compared to our results in Table 2.

We mentioned near the beginning of this paper that small models are more amenable to on-chip implementations on FPGAs. Since we released the SqueezeNet model, Gschwend has developed a variant of SqueezeNet and implemented it on an FPGA (Gschwend, 2016). As we anticipated, Gschwend was able to store the parameters of a SqueezeNet-like model entirely within the FPGA and eliminate the need for off-chip memory accesses to load model parameters.

In the context of this paper, we focused on ImageNet as a target dataset. However, it has become common practice to apply ImageNet-trained CNN representations to a variety of applications such as fine-grained object recognition (Zhang et al., 2013; Donahue et al., 2013), logo identification in images (Iandola et al., 2015), and generating sentences about images (Fang et al., 2015). ImageNet-trained CNNs have also been applied to a number of applications pertaining to autonomous driving, including pedestrian and vehicle detection in images (Iandola et al., 2014; Girshick et al., 2015; Ashraf et al., 2016) and videos (Chen et al., 2015b), as well as segmenting the shape of the road (Badrinarayanan et al., 2015). We think SqueezeNet will be a good candidate CNN architecture for a variety of applications, especially those in which small model size is of importance.

SqueezeNet is one of several new CNNs that we have discovered while broadly exploring the design space of CNN architectures. We hope that SqueezeNet will inspire the reader to consider and explore the broad range of possibilities in the design space of CNN architectures and to perform that exploration in a more systematic manner.

REFERENCES
Khalid Ashraf, Bichen Wu, Forrest N. Iandola, Matthew W. Moskewicz, and Kurt Keutzer. Shallow networks for high-accuracy road object-detection. arXiv:1606.01561, 2016.
Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv:1511.00561, 2015.
Eddie Bell.
An implementation of squeezenet in chainer. https://github.com/ejlb/squeezenet-chainer, 2016.
J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. JMLR, 2012.
Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274, 2015a.
Under review as a conference paper at ICLR 2017 Xiaozhi Chen, Kaustav Kundu, Yukun Zhu, Andrew G Berneshawi, Huimin Ma, Sanja Fidler, and Raquel Urtasun. 3d object proposals for accurate object class detection. In NIPS, 2015b. Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catan-zaro, and Evan Shelhamer. cu DNN: efficient primitives for deep learning. ar Xiv:1410. 0759, 2014. Francois Chollet. Keras: Deep learning library for theano and tensorflow. https://keras. io, 2016. Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environment for machine learning. In NIPS Big Learn Workshop, 2011. Consumer Reports. Teslas new autopilot: Better but still needs improvement. http://www. consumerreports. org/tesla/ tesla-new-autopilot-better-but-needs-improvement, 2016. Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, Karthikeyan Vaidyanathan, Srinivas Srid-haran, Dhiraj D. Kalamkar, Bharat Kaul, and Pradeep Dubey. Distributed deep learning using synchronous stochastic gradient descent. ar Xiv:1602. 06709, 2016. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Image Net: A large-scale hierarchical image database. In CVPR, 2009. E. L Denton, W. Zaremba, J. Bruna, Y. Le Cun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, 2014. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. ar Xiv:1310. 1531, 2013. DT42. Squeezenet keras implementation. https://github. com/DT42/squeezenet_ demo, 2016. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. From captions to visual concepts and back. In CVPR, 2015. Ross B. Girshick, Forrest N. Iandola, Trevor Darrell, and Jitendra Malik. Deformable part models are convolutional neural networks. In CVPR, 2015. David Gschwend. Zynqnet: An fpga-accelerated embedded convolutional neural network. Master's thesis, Swiss Federal Institute of Technology Zurich (ETH-Zurich), 2016. Philipp Gysel. Ristretto: Hardware-oriented approximation of convolutional neural networks. ar Xiv:1605. 06402, 2016. S. Han, H. Mao, and W. Dally. Deep compression: Compressing DNNs with pruning, trained quantization and huffman coding. arxiv:1510. 00149v3, 2015a. S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural networks. In NIPS, 2015b. Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. Eie: Efficient inference engine on compressed deep neural network. International Sympo-sium on Computer Architecture (ISCA), 2016a. Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Shijian Tang, Erich Elsen, Bryan Catanzaro, John Tran, and William J. Dally. Dsd: Regularizing deep neural networks with dense-sparse-dense training flow. ar Xiv:1607. 04381, 2016b. Guo Haria. convert squeezenet to mxnet. https://github. com/haria/Squeeze Net/ commit/0cf57539375fd5429275af36fc94c774503427c3, 2016. 11
Under review as a conference paper at ICLR 2017 K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level perfor-mance on imagenet classification. In ICCV, 2015a. Kaiming He and Jian Sun. Convolutional neural networks at constrained time cost. In CVPR, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. ar Xiv:1512. 03385, 2015b. Forrest N. Iandola, Matthew W. Moskewicz, Sergey Karayev, Ross B. Girshick, Trevor Darrell, and Kurt Keutzer. Densenet: Implementing efficient convnet descriptor pyramids. ar Xiv:1404. 1869, 2014. Forrest N. Iandola, Anting Shen, Peter Gao, and Kurt Keutzer. Deep Logo: Hitting logo recognition with the deep neural network hammer. ar Xiv:1510. 02131, 2015. Forrest N. Iandola, Khalid Ashraf, Matthew W. Moskewicz, and Kurt Keutzer. Fire Caffe: near-linear acceleration of deep neural network training on compute clusters. In CVPR, 2016. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. JMLR, 2015. Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Ser-gio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embed-ding. ar Xiv:1408. 5093, 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Image Net Classification with Deep Con-volutional Neural Networks. In NIPS, 2012. Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Back-propagation applied to handwritten zip code recognition. Neural Computation, 1989. Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. ar Xiv:1312. 4400, 2013. T. B. Ludermir, A. Yamazaki, and C. Zanchettin. An optimization methodology for neural network weights and architectures. IEEE Trans. Neural Networks, 2006. Dmytro Mishkin, Nikolay Sergievskiy, and Jiri Matas. Systematic evaluation of cnn advances on the imagenet. ar Xiv:1606. 02228, 2016. Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010. Jiantao Qiu, Jie Wang, Song Yao, Kaiyuan Guo, Boxun Li, Erjin Zhou, Jincheng Yu, Tianqi Tang, Ningyi Xu, Sen Song, Yu Wang, and Huazhong Yang. Going deeper with embedded fpga platform for convolutional neural network. In ACM International Symposium on FPGA, 2016. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. ar Xiv:1409. 1556, 2014. J. Snoek, H. Larochelle, and R. P. Adams. Practical bayesian optimization of machine learning algorithms. In NIPS, 2012. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014. R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. In ICML Deep Learning Workshop, 2015. K. O. Stanley and R. Miikkulainen. Evolving neural networks through augmenting topologies. Neu-rocomputing, 2002. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du-mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. ar Xiv:1409. 4842, 2014. 12
Under review as a conference paper at ICLR 2017 Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Re-thinking the inception architecture for computer vision. ar Xiv:1512. 00567, 2015. Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. ar Xiv:1602. 07261, 2016. S. Tokui, K. Oono, S. Hido, and J. Clayton. Chainer: a next-generation open source framework for deep learning. In NIPS Workshop on Machine Learning Systems (Learning Sys), 2015. Sagar M Waghmare. Fire Module. lua. https://github. com/Element-Research/dpnn/ blob/master/Fire Module. lua, 2016. Ning Zhang, Ryan Farrell, Forrest Iandola, and Trevor Darrell. Deformable part descriptors for fine-grained recognition and attribute prediction. In ICCV, 2013. 13
arXiv:1409.1556v6 [cs.CV] 10 Apr 2015
Published as a conference paper at ICLR 2015

VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION
Karen Simonyan* & Andrew Zisserman+
Visual Geometry Group, Department of Engineering Science, University of Oxford
{karen,az}@robots.ox.ac.uk

ABSTRACT
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

1 INTRODUCTION
Convolutional networks (ConvNets) have recently enjoyed a great success in large-scale image and video recognition (Krizhevsky et al., 2012; Zeiler & Fergus, 2013; Sermanet et al., 2014; Simonyan & Zisserman, 2014), which has become possible due to the large public image repositories, such as ImageNet (Deng et al., 2009), and high-performance computing systems, such as GPUs or large-scale distributed clusters (Dean et al., 2012). In particular, an important role in the advance of deep visual recognition architectures has been played by the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al., 2014), which has served as a testbed for a few generations of large-scale image classification systems, from high-dimensional shallow feature encodings (Perronnin et al., 2010) (the winner of ILSVRC-2011) to deep ConvNets (Krizhevsky et al., 2012) (the winner of ILSVRC-2012).

With ConvNets becoming more of a commodity in the computer vision field, a number of attempts have been made to improve the original architecture of Krizhevsky et al. (2012) in a bid to achieve better accuracy. For instance, the best-performing submissions to the ILSVRC-2013 (Zeiler & Fergus, 2013; Sermanet et al., 2014) utilised smaller receptive window size and smaller stride of the first convolutional layer. Another line of improvements dealt with training and testing the networks densely over the whole image and over multiple scales (Sermanet et al., 2014; Howard, 2014). In this paper, we address another important aspect of ConvNet architecture design: its depth. To this end, we fix other parameters of the architecture, and steadily increase the depth of the network by adding more convolutional layers, which is feasible due to the use of very small (3x3) convolution filters in all layers.

As a result, we come up with significantly more accurate ConvNet architectures, which not only achieve the state-of-the-art accuracy on ILSVRC classification and localisation tasks, but are also applicable to other image recognition datasets, where they achieve excellent performance even when used as a part of relatively simple pipelines (e.g. deep features classified by a linear SVM without fine-tuning). We have released our two best-performing models [1] to facilitate further research.

The rest of the paper is organised as follows. In Sect. 2, we describe our ConvNet configurations.
* current affiliation: Google DeepMind. + current affiliation: University of Oxford and Google DeepMind.
[1] http://www.robots.ox.ac.uk/~vgg/research/very_deep/
The details of the image classification training and evaluation are then presented in Sect. 3, and the
Publishedasa conferencepaperat ICLR2015 configurations are compared on the ILSVRC classification tas k in Sect. 4. Sect. 5 concludes the paper. For completeness,we also describeand assess our ILS VRC-2014object localisationsystem in Appendix A,anddiscussthegeneralisationofverydeepfe aturestootherdatasetsin Appendix B. Finally,Appendix Ccontainsthelist ofmajorpaperrevisio ns. 2 CONVNETCONFIGURATIONS To measure the improvement brought by the increased Conv Net depth in a fair setting, all our Conv Net layer configurations are designed using the same pri nciples, inspired by Ciresan etal. (2011); Krizhevskyet al. (2012). In this section, we first de scribe a generic layout of our Conv Net configurations(Sect. 2. 1)andthendetailthespecificconfig urationsusedintheevaluation(Sect. 2. 2). Ourdesignchoicesarethendiscussedandcomparedtothepri orart in Sect. 2. 3. 2. 1 A RCHITECTURE During training, the input to our Conv Nets is a fixed-size 224×224RGB image. The only pre-processingwedoissubtractingthemean RGBvalue,computed onthetrainingset,fromeachpixel. Theimageispassedthroughastackofconvolutional(conv. ) layers,whereweusefilterswithavery small receptive field: 3×3(which is the smallest size to capture the notion of left/rig ht, up/down, center). In one of the configurationswe also utilise 1×1convolutionfilters, which can be seen as a linear transformationof the input channels (followed by n on-linearity). The convolutionstride is fixedto1pixel;thespatialpaddingofconv. layerinputissuchthatt hespatialresolutionispreserved afterconvolution,i. e. the paddingis 1pixel for3×3conv. layers. Spatial poolingis carriedoutby fivemax-poolinglayers,whichfollowsomeoftheconv. layer s(notalltheconv. layersarefollowed bymax-pooling). Max-poolingisperformedovera 2×2pixelwindow,withstride 2. Astackofconvolutionallayers(whichhasadifferentdepth indifferentarchitectures)isfollowedby three Fully-Connected(FC) layers: the first two have4096ch annelseach,the thirdperforms1000-way ILSVRC classification and thus contains1000channels(o ne foreach class). The final layer is thesoft-maxlayer. Theconfigurationofthefullyconnected layersis thesameinall networks. Allhiddenlayersareequippedwiththerectification(Re LU( Krizhevskyetal.,2012))non-linearity. We note that none of our networks (except for one) contain Loc al Response Normalisation (LRN) normalisation (Krizhevskyet al., 2012): as will be sh own in Sect. 4, such normalisation does not improve the performance on the ILSVRC dataset, but l eads to increased memory con-sumption and computation time. Where applicable, the param eters for the LRN layer are those of(Krizhevskyetal., 2012). 2. 2 C ONFIGURATIONS The Conv Net configurations, evaluated in this paper, are out lined in Table 1, one per column. In the following we will refer to the nets by their names (A-E). A ll configurationsfollow the generic design presented in Sect. 2. 1, and differ only in the depth: f rom 11 weight layers in the network A (8conv. and3FClayers)to19weightlayersinthenetwork E(1 6conv. and3FClayers). Thewidth of conv. layers (the number of channels) is rather small, sta rting from 64in the first layer and then increasingbyafactorof 2aftereachmax-poolinglayer,untilit reaches 512. In Table 2 we reportthe numberof parametersfor each configur ation. In spite of a large depth, the numberof weights in our netsis not greater thanthe numberof weightsin a moreshallow net with largerconv. layerwidthsandreceptivefields(144Mweights in(Sermanetet al., 2014)). 2. 
3 D ISCUSSION Our Conv Net configurations are quite different from the ones used in the top-performing entries of the ILSVRC-2012 (Krizhevskyetal., 2012) and ILSVRC-201 3 competitions (Zeiler& Fergus, 2013;Sermanetet al.,2014). Ratherthanusingrelativelyl argereceptivefieldsinthefirstconv. lay-ers(e. g. 11×11withstride 4in(Krizhevskyet al.,2012),or 7×7withstride 2in(Zeiler& Fergus, 2013; Sermanetet al., 2014)), we use very small 3×3receptive fields throughout the whole net, whichareconvolvedwiththeinputateverypixel(withstrid e1). Itiseasytoseethatastackoftwo 3×3conv. layers(withoutspatialpoolinginbetween)hasaneff ectivereceptivefieldof 5×5;three 2
Publishedasa conferencepaperat ICLR2015 Table 1:Conv Net configurations (shown in columns). The depth of the configurations increase s fromtheleft(A)totheright(E),asmorelayersareadded(th eaddedlayersareshowninbold). The convolutional layer parameters are denoted as “conv ⟨receptive field size ⟩-⟨number of channels ⟩”. The Re LU activationfunctionisnotshownforbrevity. Conv Net Configuration A A-LRN B C D E 11weight 11weight 13 weight 16weight 16weight 19 weight layers layers layers layers layers layers input (224×224RGBimage) conv3-64 conv3-64 conv3-64 conv3-64 conv3-64 conv3-64 LRN conv3-64 conv3-64 conv3-64 conv3-64 maxpool conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 conv3-128 maxpool conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv3-256 conv1-256 conv3-256 conv3-256 conv3-256 maxpool conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv1-512 conv3-512 conv3-512 conv3-512 maxpool conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv3-512 conv1-512 conv3-512 conv3-512 conv3-512 maxpool FC-4096 FC-4096 FC-1000 soft-max Table2:Number ofparameters (inmillions). Network A,A-LRN BCDE Number of parameters 133 133134138144 such layers have a 7×7effectivereceptive field. So what have we gainedby using, fo r instance, a stackofthree 3×3conv. layersinsteadofasingle 7×7layer? First,weincorporatethreenon-linear rectification layers instead of a single one, which makes the decision functionmore discriminative. Second, we decrease the number of parameters: assuming that both the input and the output of a three-layer 3×3convolutionstack has Cchannels,the stack is parametrisedby 3( 32C2) = 27C2 weights; at the same time, a single 7×7conv. layer would require 72C2= 49C2parameters, i. e. 81%more. Thiscan be seen as imposinga regularisationon the 7×7conv. filters, forcingthemto haveadecompositionthroughthe 3×3filters(withnon-linearityinjectedin between). The incorporation of 1×1conv. layers (configuration C, Table 1) is a way to increase th e non-linearity of the decision function without affecting the re ceptive fields of the conv. layers. Even thoughinourcasethe 1×1convolutionisessentiallyalinearprojectionontothespa ceofthesame dimensionality(thenumberofinputandoutputchannelsist hesame),anadditionalnon-linearityis introducedbytherectificationfunction. Itshouldbenoted that1×1conv. layershaverecentlybeen utilisedin the“Networkin Network”architectureof Linet a l. (2014). Small-size convolution filters have been previously used by Ciresan etal. (2011), but their nets are significantly less deep than ours, and they did not evalua te on the large-scale ILSVRC dataset. Goodfellowet al. (2014) applied deep Conv Nets ( 11weight layers) to the task of street number recognition, and showed that the increased de pth led to better performance. Goog Le Net(Szegedyet al., 2014), a top-performingentryof the ILSVRC-2014classification task, was developed independentlyof our work, but is similar in th at it is based on very deep Conv Nets 3
Publishedasa conferencepaperat ICLR2015 (22 weight layers) and small convolution filters (apart from 3×3, they also use 1×1and5×5 convolutions). Their network topology is, however, more co mplex than ours, and the spatial reso-lution of the feature maps is reduced more aggressively in th e first layers to decrease the amount of computation. As will be shown in Sect. 4. 5, our model is out performing that of Szegedyetal. (2014)intermsofthesingle-networkclassificationaccura cy. 3 CLASSIFICATION FRAMEWORK In the previous section we presented the details of our netwo rk configurations. In this section, we describethe detailsofclassification Conv Nettrainingand evaluation. 3. 1 T RAINING The Conv Net training procedure generally follows Krizhevs kyetal. (2012) (except for sampling theinputcropsfrommulti-scaletrainingimages,asexplai nedlater). Namely,thetrainingiscarried out by optimising the multinomial logistic regression obje ctive using mini-batch gradient descent (based on back-propagation(Le Cunet al., 1989)) with momen tum. The batch size was set to 256, momentum to 0. 9. The training was regularised by weight decay (the L2penalty multiplier set to 5·10-4)anddropoutregularisationforthefirsttwofully-connect edlayers(dropoutratiosetto 0. 5). Thelearningrate wasinitially setto 10-2,andthendecreasedbyafactorof 10whenthevalidation set accuracy stopped improving. In total, the learning rate was decreased 3 times, and the learning was stopped after 370K iterations (74 epochs). We conjecture that in spite of the l arger number of parametersandthegreaterdepthofournetscomparedto(Kri zhevskyetal.,2012),thenetsrequired lessepochstoconvergedueto(a)implicitregularisationi mposedbygreaterdepthandsmallerconv. filter sizes; (b)pre-initialisationofcertainlayers. The initialisation of the networkweightsis important,sin ce bad initialisation can stall learningdue to the instability of gradient in deep nets. To circumvent th is problem, we began with training the configuration A (Table 1), shallow enoughto be trained wi th randominitialisation. Then,when trainingdeeperarchitectures,weinitialisedthefirstfou rconvolutionallayersandthelastthreefully-connectedlayerswiththelayersofnet A(theintermediatel ayerswereinitialisedrandomly). Wedid notdecreasethelearningrateforthepre-initialisedlaye rs,allowingthemtochangeduringlearning. For random initialisation (where applicable), we sampled t he weights from a normal distribution with thezeromeanand 10-2variance. The biaseswere initialisedwith zero. It isworth notingthat after the paper submission we found that it is possible to ini tialise the weights without pre-training byusingthe randominitialisationprocedureof Glorot&Ben gio(2010). Toobtainthefixed-size 224×224Conv Netinputimages,theywererandomlycroppedfromresca led training images (one crop per image per SGD iteration). To fu rther augment the training set, the cropsunderwentrandomhorizontalflippingandrandom RGBco lourshift(Krizhevskyet al.,2012). Trainingimagerescalingisexplainedbelow. Training image size. Let Sbe the smallest side of an isotropically-rescaledtraining image, from which the Conv Net input is cropped (we also refer to Sas the training scale). While the crop size is fixed to 224×224, in principle Scan take on any value not less than 224: for S= 224the crop will capture whole-image statistics, completely spanning the smallest side of a training image; for S≫224thecropwillcorrespondtoasmallpartoftheimage,contain ingasmallobjectoranobject part. We considertwoapproachesforsettingthetrainingscale S. 
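The optimisation recipe described in Sect. 3.1 above translates almost line for line into a modern framework. A sketch in PyTorch, purely for illustration (the paper used its own multi-GPU, Caffe-derived C++ code, and the plateau patience below is our placeholder, not a value from the paper):

```python
import torch
import torch.nn as nn

def configure_training(model: nn.Module):
    """SGD as in Sect. 3.1: momentum 0.9, weight decay 5e-4, initial LR 1e-2
    dropped by 10x when validation accuracy stops improving (batch size 256 is
    handled by the data loader). The patience value is a placeholder."""
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2,
                                momentum=0.9, weight_decay=5e-4)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.1, patience=3)
    # usage: call scheduler.step(val_accuracy) after each validation pass
    return optimizer, scheduler

def init_weights(m):
    """Random initialisation as described: weights ~ N(0, variance 1e-2), zero biases."""
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.normal_(m.weight, mean=0.0, std=0.1)  # variance 1e-2 -> std 0.1
        if m.bias is not None:
            nn.init.zeros_(m.bias)
```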
Thefirst istofix S,whichcorresponds to single-scale training (note that image content within th e sampled crops can still represent multi-scale image statistics). In our experiments, we evaluated m odels trained at two fixed scales: S= 256(which has been widely used in the prior art (Krizhevskyet al., 2012; Zeiler&Fergus, 2013; Sermanetet al., 2014)) and S= 384. Given a Conv Net configuration,we first trained the network using S= 256. To speed-up training of the S= 384network, it was initialised with the weights pre-trainedwith S= 256,andwe useda smallerinitiallearningrateof 10-3. The second approachto setting Sis multi-scale training, where each training image is indiv idually rescaled by randomly sampling Sfrom a certain range [Smin,Smax](we used Smin= 256and Smax= 512). Sinceobjectsinimagescanbeofdifferentsize,itisbene ficialtotakethisintoaccount duringtraining. Thiscanalso beseen astrainingset augmen tationbyscale jittering,wherea single 4
Publishedasa conferencepaperat ICLR2015 model is trained to recognise objects over a wide range of sca les. For speed reasons, we trained multi-scale models by fine-tuning all layers of a single-sca le model with the same configuration, pre-trainedwithfixed S= 384. 3. 2 T ESTING Attest time,givena trained Conv Netandaninputimage,itis classified inthefollowingway. First, it is isotropically rescaled to a pre-defined smallest image side, denoted as Q(we also refer to it as the test scale). We note that Qis not necessarily equal to the training scale S(as we will show in Sect. 4, usingseveralvaluesof Qforeach Sleadsto improvedperformance). Then,the network is applied densely overthe rescaled test image in a way simil ar to (Sermanetet al., 2014). Namely, the fully-connected layers are first converted to convoluti onal layers (the first FC layer to a 7×7 conv. layer, the last two FC layers to 1×1conv. layers). The resulting fully-convolutional net is then applied to the whole (uncropped) image. The result is a c lass score map with the number of channels equal to the number of classes, and a variable spati al resolution, dependent on the input imagesize. Finally,toobtainafixed-sizevectorofclasssc oresfortheimage,theclassscoremapis spatially averaged(sum-pooled). We also augmentthe test s et by horizontalflippingof the images; thesoft-maxclassposteriorsoftheoriginalandflippedima gesareaveragedtoobtainthefinalscores fortheimage. Since the fully-convolutionalnetwork is applied over the w hole image, there is no need to sample multiple crops at test time (Krizhevskyetal., 2012), which is less efficient as it requires network re-computationforeachcrop. Atthesametime,usingalarge setofcrops,asdoneby Szegedyetal. (2014),canleadtoimprovedaccuracy,asit resultsin afiner samplingoftheinputimagecompared tothefully-convolutionalnet. Also,multi-cropevaluati oniscomplementarytodenseevaluationdue to different convolution boundary conditions: when applyi ng a Conv Net to a crop, the convolved feature mapsare paddedwith zeros, while in the case of dense evaluationthe paddingfor the same crop naturally comes from the neighbouring parts of an image (due to both the convolutions and spatial pooling), which substantially increases the overa ll network receptive field, so more context iscaptured. Whilewebelievethatinpracticetheincreased computationtimeofmultiplecropsdoes notjustifythepotentialgainsinaccuracy,forreferencew ealsoevaluateournetworksusing 50crops perscale( 5×5regulargridwith 2flips),foratotalof 150cropsover 3scales,whichiscomparable to144cropsover 4scalesusedby Szegedyetal. (2014). 3. 3 IMPLEMENTATION DETAILS Ourimplementationisderivedfromthepubliclyavailable C ++ Caffetoolbox(Jia,2013)(branched out in December 2013), but contains a number of significant mo difications, allowing us to perform trainingandevaluationonmultiple GPUsinstalledinasing lesystem,aswellastrainandevaluateon full-size (uncropped) images at multiple scales (as descri bed above). Multi-GPU training exploits data parallelism, and is carried out by splitting each batch of training images into several GPU batches, processed in parallel on each GPU. After the GPU bat ch gradientsare computed, they are averaged to obtain the gradient of the full batch. Gradient c omputation is synchronous across the GPUs, sothe resultisexactlythesame aswhentrainingona si ngle GPU. 
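Returning to the dense evaluation of Sect. 3.2: it hinges on converting the fully-connected layers into convolutions so that the whole network can be applied to an uncropped image. A minimal sketch of that conversion, assuming the channel-major flattening order that PyTorch uses by default; the shapes below follow the VGG head (512x7x7 -> 4096 -> 4096 -> 1000), but the weights are random placeholders:

```python
import torch
import torch.nn as nn

def fc_to_conv(fc: nn.Linear, kernel_size: int, in_channels: int) -> nn.Conv2d:
    """Turn a fully-connected layer into an equivalent convolution by reshaping its
    weight matrix: the first FC layer becomes a 7x7 conv, the last two become 1x1 convs."""
    conv = nn.Conv2d(in_channels, fc.out_features, kernel_size)
    conv.weight.data = fc.weight.data.view(fc.out_features, in_channels,
                                           kernel_size, kernel_size)
    conv.bias.data = fc.bias.data.clone()
    return conv

fc1, fc2, fc3 = nn.Linear(512 * 7 * 7, 4096), nn.Linear(4096, 4096), nn.Linear(4096, 1000)
head = nn.Sequential(fc_to_conv(fc1, 7, 512), nn.ReLU(inplace=True),
                     fc_to_conv(fc2, 1, 4096), nn.ReLU(inplace=True),
                     fc_to_conv(fc3, 1, 4096))

# Applied to a larger-than-224 test image's feature map, the head yields a class score
# map with a variable spatial size, which is then spatially averaged to a fixed-size vector.
scores = head(torch.randn(1, 512, 9, 11)).mean(dim=(2, 3))
print(scores.shape)  # torch.Size([1, 1000])
```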
While more sophisticated methods of speeding up Conv Net tra ining have been recently pro-posed (Krizhevsky, 2014), which employmodeland data paral lelism for differentlayersof the net, wehavefoundthatourconceptuallymuchsimplerschemealre adyprovidesaspeedupof 3. 75times on an off-the-shelf4-GPU system, as comparedto using a sing le GPU. On a system equippedwith four NVIDIATitan Black GPUs,trainingasinglenettook2-3w eeksdependingonthearchitecture. 4 CLASSIFICATION EXPERIMENTS Dataset. In this section, we present the image classification results achieved by the described Conv Netarchitecturesonthe ILSVRC-2012dataset(whichwa susedfor ILSVRC2012-2014chal-lenges). The dataset includes images of 1000 classes, and is split into three sets: training ( 1. 3M images), validation ( 50K images), and testing ( 100K images with held-out class labels). The clas-sification performanceis evaluated using two measures: the top-1 and top-5 error. The former is a multi-class classification error, i. e. the proportion of in correctly classified images; the latter is the 5
Publishedasa conferencepaperat ICLR2015 main evaluation criterion used in ILSVRC, and is computed as the proportion of images such that theground-truthcategoryisoutsidethetop-5predictedca tegories. Forthemajorityofexperiments,weusedthevalidationseta sthetestset. Certainexperimentswere also carried out on the test set and submitted to the official I LSVRC server as a “VGG” team entry tothe ILSVRC-2014competition(Russakovskyet al., 2014). 4. 1 SINGLESCALEEVALUATION We begin with evaluating the performanceof individual Conv Net models at a single scale with the layerconfigurationsdescribedin Sect. 2. 2. The test images ize was set as follows: Q=Sforfixed S,and Q= 0. 5(Smin+Smax)forjittered S∈[Smin,Smax]. Theresultsofareshownin Table3. First, we note that using local response normalisation (A-L RN network) does not improve on the model A without any normalisation layers. We thus do not empl oy normalisation in the deeper architectures(B-E). Second, we observe that the classification error decreases w ith the increased Conv Net depth: from 11 layers in A to 19 layers in E. Notably, in spite of the same de pth, the configuration C (which containsthree 1×1conv. layers),performsworsethantheconfiguration D,whic huses3×3conv. layersthroughoutthenetwork. Thisindicatesthatwhileth e additionalnon-linearitydoeshelp(Cis better than B), it is also important to capture spatial conte xt by using conv. filters with non-trivial receptive fields (D is better than C). The error rate of our arc hitecture saturates when the depth reaches19layers,butevendeepermodelsmightbebeneficialforlarger datasets. Wealsocompared the net B with a shallow net with five 5×5conv. layers, which was derived from B by replacing eachpairof 3×3conv. layerswithasingle 5×5conv. layer(whichhasthesamereceptivefieldas explained in Sect. 2. 3). The top-1 error of the shallow net wa s measured to be 7%higher than that of B (on a center crop),which confirmsthat a deepnet with smal l filters outperformsa shallow net withlargerfilters. Finally, scale jittering at training time ( S∈[256;512] ) leads to significantly better results than training on images with fixed smallest side ( S= 256or S= 384), even though a single scale is usedattesttime. Thisconfirmsthattrainingsetaugmentati onbyscalejitteringisindeedhelpfulfor capturingmulti-scaleimagestatistics. Table3:Conv Netperformanceatasingle testscale. Conv Net config. (Table 1) smallest image side top-1 val. error (%) top-5 val. error (%) train(S)test (Q) A 256 256 29. 6 10. 4 A-LRN 256 256 29. 7 10. 5 B 256 256 28. 7 9. 9 C256 256 28. 1 9. 4 384 384 28. 1 9. 3 [256;512] 384 27. 3 8. 8 D256 256 27. 0 8. 8 384 384 26. 8 8. 7 [256;512] 384 25. 6 8. 1 E256 256 27. 3 9. 0 384 384 26. 9 8. 7 [256;512] 384 25. 5 8. 0 4. 2 M ULTI-SCALEEVALUATION Havingevaluatedthe Conv Netmodelsatasinglescale,wenow assesstheeffectofscalejitteringat testtime. Itconsistsofrunningamodeloverseveralrescal edversionsofatestimage(corresponding to different values of Q), followed by averaging the resulting class posteriors. Co nsidering that a large discrepancy between training and testing scales lead s to a drop in performance, the models trained with fixed Swere evaluated over three test image sizes, close to the trai ning one: Q= {S-32,S,S+ 32}. At the same time, scale jittering at training time allows th e network to be appliedto a widerrangeofscales at test time,so the modeltr ainedwithvariable S∈[Smin;Smax] wasevaluatedoveralargerrangeofsizes Q={Smin,0. 5(Smin+Smax),Smax}. 6
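For reference, the two error measures reported throughout these tables can be pinned down in a few lines; the logits and labels below are synthetic placeholders:

```python
import torch

def topk_error(logits: torch.Tensor, labels: torch.Tensor, k: int) -> float:
    """Fraction of examples whose ground-truth class is outside the top-k predictions:
    k=1 gives the top-1 error, k=5 the main ILSVRC top-5 error."""
    topk = logits.topk(k, dim=1).indices           # (N, k) predicted classes
    hit = (topk == labels.unsqueeze(1)).any(dim=1)  # ground truth among the top k?
    return 1.0 - hit.float().mean().item()

# Synthetic example: 4 images, 10 classes.
logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(topk_error(logits, labels, k=1), topk_error(logits, labels, k=5))
```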
Publishedasa conferencepaperat ICLR2015 Theresults,presentedin Table4,indicatethatscalejitte ringattest timeleadstobetterperformance (as compared to evaluating the same model at a single scale, s hown in Table 3). As before, the deepest configurations(D and E) perform the best, and scale j ittering is better than training with a fixed smallest side S. Our best single-network performance on the validation set is24. 8%/7. 5% top-1/top-5error(highlightedinboldin Table4). Onthete stset,theconfiguration Eachieves 7. 3% top-5error. Table4:Conv Netperformanceatmultiple test scales. Conv Net config. (Table 1) smallest image side top-1val. error (%) top-5val. error (%) train(S)test(Q) B 256 224,256,288 28. 2 9. 6 C256 224,256,288 27. 7 9. 2 384 352,384,416 27. 8 9. 2 [256;512] 256,384,512 26. 3 8. 2 D256 224,256,288 26. 6 8. 6 384 352,384,416 26. 5 8. 6 [256;512] 256,384,512 24. 8 7. 5 E256 224,256,288 26. 9 8. 7 384 352,384,416 26. 7 8. 6 [256;512] 256,384,512 24. 8 7. 5 4. 3 M ULTI-CROP EVALUATION In Table 5 we compare dense Conv Net evaluation with mult-cro p evaluation (see Sect. 3. 2 for de-tails). We also assess the complementarityof thetwo evalua tiontechniquesbyaveragingtheirsoft-max outputs. As can be seen, using multiple crops performs sl ightly better than dense evaluation, andthe two approachesareindeedcomplementary,astheir co mbinationoutperformseach ofthem. As noted above, we hypothesize that this is due to a different treatment of convolution boundary conditions. Table 5:Conv Netevaluationtechniques comparison. Inall experimentsthe trainingscale Swas sampledfrom [256;512],andthreetest scales Qwereconsidered: {256,384,512}. Conv Net config. (Table 1) Evaluationmethod top-1 val. error(%) top-5 val. error (%) Ddense 24. 8 7. 5 multi-crop 24. 6 7. 5 multi-crop &dense 24. 4 7. 2 Edense 24. 8 7. 5 multi-crop 24. 6 7. 4 multi-crop &dense 24. 4 7. 1 4. 4 C ONVNETFUSION Upuntilnow,weevaluatedtheperformanceofindividual Con v Netmodels. Inthispartoftheexper-iments,wecombinetheoutputsofseveralmodelsbyaveragin gtheirsoft-maxclassposteriors. This improvesthe performancedueto complementarityof the mode ls, andwas used in the top ILSVRC submissions in 2012 (Krizhevskyet al., 2012) and 2013 (Zeil er&Fergus, 2013; Sermanetet al., 2014). The results are shown in Table 6. By the time of ILSVRC submiss ion we had only trained the single-scale networks, as well as a multi-scale model D (by fi ne-tuning only the fully-connected layers rather than all layers). The resulting ensemble of 7 n etworks has 7. 3%ILSVRC test error. After the submission, we considered an ensemble of only two b est-performing multi-scale models (configurations D and E), which reduced the test error to 7. 0%using dense evaluation and 6. 8% using combined dense and multi-crop evaluation. For refere nce, our best-performingsingle model achieves7. 1%error(model E, Table5). 4. 5 C OMPARISON WITH THE STATE OF THE ART Finally, we compare our results with the state of the art in Ta ble 7. In the classification task of ILSVRC-2014 challenge (Russakovskyet al., 2014), our “VGG ” team secured the 2nd place with 7
Publishedasa conferencepaperat ICLR2015 Table6:Multiple Conv Netfusion results. Combined Conv Net models Error top-1 val top-5val top-5test ILSVRCsubmission (D/256/224,256,288), (D/384/352,384,416), (D/[256;512 ]/256,384,512) (C/256/224,256,288), (C/384/352,384,416) (E/256/224,256,288), (E/384/352,384,416)24. 7 7. 5 7. 3 post-submission (D/[256;512]/256,384,512), (E/[256;512]/256,384,512),dense eval. 24. 0 7. 1 7. 0 (D/[256;512]/256,384,512), (E/[256;512]/256,384,512),multi-crop 23. 9 7. 2-(D/[256;512]/256,384,512), (E/[256;512]/256,384,512),multi-crop &dense eval. 23. 7 6. 8 6. 8 7. 3%test errorusinganensembleof7 models. Afterthesubmissio n,we decreasedtheerrorrateto 6. 8%usinganensembleof2models. As can be seen from Table 7, our very deep Conv Netssignificant ly outperformthe previousgener-ation of models, which achieved the best results in the ILSVR C-2012 and ILSVRC-2013 competi-tions. Our result is also competitivewith respect to the cla ssification task winner(Goog Le Netwith 6. 7%error) and substantially outperforms the ILSVRC-2013 winn ing submission Clarifai, which achieved 11. 2%with outside training data and 11. 7%without it. This is remarkable, considering that our best result is achievedby combiningjust two models-significantly less than used in most ILSVRC submissions. In terms of the single-net performance, our architecture achieves the best result (7. 0%test error), outperforming a single Goog Le Net by 0. 9%. Notably, we did not depart from the classical Conv Net architecture of Le Cunetal. (198 9), but improved it by substantially increasingthedepth. Table 7:Comparison with the state of the art in ILSVRC classification. Our methodis denoted as“VGG”. Onlytheresultsobtainedwithoutoutsidetrainin gdataarereported. Method top-1 val. error(%) top-5val. error (%) top-5testerror (%) VGG(2nets, multi-crop& dense eval. ) 23. 7 6. 8 6. 8 VGG(1net, multi-crop& dense eval. ) 24. 4 7. 1 7. 0 VGG(ILSVRCsubmission, 7nets, dense eval. ) 24. 7 7. 5 7. 3 Goog Le Net (Szegedy et al., 2014) (1net)-7. 9 Goog Le Net (Szegedy et al., 2014) (7nets)-6. 7 MSRA(He et al., 2014) (11nets)--8. 1 MSRA(He et al., 2014) (1net) 27. 9 9. 1 9. 1 Clarifai(Russakovsky et al., 2014) (multiplenets)--11. 7 Clarifai(Russakovsky et al., 2014) (1net)--12. 5 Zeiler& Fergus (Zeiler&Fergus, 2013) (6nets) 36. 0 14. 7 14. 8 Zeiler& Fergus (Zeiler&Fergus, 2013) (1net) 37. 5 16. 0 16. 1 Over Feat (Sermanetet al.,2014) (7nets) 34. 0 13. 2 13. 6 Over Feat (Sermanetet al.,2014) (1net) 35. 7 14. 2-Krizhevsky et al. (Krizhevsky et al., 2012) (5nets) 38. 1 16. 4 16. 4 Krizhevsky et al. (Krizhevsky et al., 2012) (1net) 40. 7 18. 2-5 CONCLUSION In this work we evaluated very deep convolutional networks ( up to 19 weight layers) for large-scale image classification. It was demonstrated that the rep resentation depth is beneficial for the classificationaccuracy,andthatstate-of-the-artperfor manceonthe Image Netchallengedatasetcan beachievedusingaconventional Conv Netarchitecture(Le C unet al.,1989;Krizhevskyet al.,2012) withsubstantiallyincreaseddepth. Intheappendix,weals oshowthatourmodelsgeneralisewellto a wide range of tasks and datasets, matchingor outperformin gmore complexrecognitionpipelines builtaroundlessdeepimagerepresentations. Ourresultsy etagainconfirmtheimportanceof depth invisualrepresentations. ACKNOWLEDGEMENTS Thisworkwassupportedby ERCgrant Vis Recno. 228180. Wegra tefullyacknowledgethesupport of NVIDIACorporationwiththedonationofthe GPUsusedfort hisresearch. 8
Publishedasa conferencepaperat ICLR2015 REFERENCES Bell, S., Upchurch, P.,Snavely, N., and Bala, K. Material re cognition inthe wild withthe materials in context database. Co RR,abs/1412. 0623, 2014. Chatfield, K., Simonyan, K., Vedaldi, A., and Zisserman, A. R eturn of the devil in the details: Delving deep intoconvolutional nets. In Proc. BMVC.,2014. Cimpoi,M.,Maji,S.,and Vedaldi,A. Deepconvolutionalfilt erbanksfortexturerecognitionandsegmentation. Co RR,abs/1411. 6836, 2014. Ciresan, D. C., Meier, U., Masci, J., Gambardella, L. M., and Schmidhuber, J. Flexible, high performance convolutional neural networks for image classification. In IJCAI,pp. 1237-1242, 2011. Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., Ranzato, M., Senior, A., Tucker, P., Yang, K.,Le,Q. V.,and Ng, A. Y. Large scale distributeddeepnetwo rks. In NIPS,pp. 1232-1240, 2012. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proc. CVPR,2009. Donahue,J.,Jia,Y.,Vinyals,O.,Hoffman,J.,Zhang,N.,Tz eng,E.,and Darrell,T. Decaf: Adeepconvolutional activation feature for generic visual recognition. Co RR,abs/1310. 1531, 2013. Everingham, M., Eslami, S. M. A., Van Gool, L., Williams,C., Winn, J., and Zisserman, A. The Pascal visual object classes challenge: Aretrospective. IJCV,111(1):98-136, 2015. Fei-Fei, L., Fergus, R., and Perona, P. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categor ies. In IEEE CVPR Workshop of Generative Model Based Vision, 2004. Girshick, R. B., Donahue, J., Darrell, T., and Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. Co RR,abs/1311. 2524v5, 2014. Publishedin Proc. CVPR,2014. Gkioxari, G.,Girshick, R.,and Malik, J. Actions and attrib utes from wholes and parts. Co RR,abs/1412. 2604, 2014. Glorot, X. and Bengio, Y. Understanding the difficultyof tra iningdeep feedforward neural networks. In Proc. AISTATS,volume 9, pp. 249-256, 2010. Goodfellow, I. J., Bulatov, Y., Ibarz, J., Arnoud, S., and Sh et, V. Multi-digit number recognition from street view imagery usingdeep convolutional neural networks. In Proc. ICLR,2014. Griffin, G., Holub, A., and Perona, P. Caltech-256 object cat egory dataset. Technical Report 7694, California Institute of Technology, 2007. He, K., Zhang, X., Ren, S., and Sun, J. Spatial pyramid poolin g in deep convolutional networks for visual recognition. Co RR,abs/1406. 4729v2, 2014. Hoai, M. Regularizedmax pooling forimage categorization. In Proc. BMVC.,2014. Howard, A. G. Someimprovements ondeepconvolutional neura l networkbasedimageclassification. In Proc. ICLR,2014. Jia, Y. Caffe: An open source convolutional architecture fo r fast feature embedding. http://caffe. berkeleyvision. org/,2013. Karpathy, A. and Fei-Fei, L. Deep visual-semantic alignmen ts for generating image descriptions. Co RR, abs/1412. 2306, 2014. Kiros, R., Salakhutdinov, R., and Zemel, R. S. Unifying visu al-semantic embeddings with multimodal neural language models. Co RR,abs/1411. 2539, 2014. Krizhevsky, A. One weirdtrickfor parallelizingconvoluti onal neural networks. Co RR,abs/1404. 5997, 2014. Krizhevsky, A., Sutskever, I., and Hinton, G. E. Image Net cl assification with deep convolutional neural net-works. In NIPS,pp. 1106-1114, 2012. Le Cun,Y.,Boser, B.,Denker, J. S.,Henderson, D.,Howard, R. E.,Hubbard, W.,and Jackel, L. D. Backpropa-gationapplied tohandwrittenzipcode recognition. 
Neural Computation, 1(4):541-551, 1989. Lin,M., Chen, Q.,and Yan, S. Networkinnetwork. In Proc. ICLR,2014. Long, J., Shelhamer, E., and Darrell, T. Fully convolutiona l networks for semantic segmentation. Co RR, abs/1411. 4038, 2014. Oquab, M., Bottou, L., Laptev, I., and Sivic, J. Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks. In Proc. CVPR,2014. Perronnin, F.,S´ anchez, J.,and Mensink, T. Improving the F isherkernel forlarge-scale image classification. In Proc. ECCV,2010. Razavian, A.,Azizpour, H.,Sullivan, J.,and Carlsson,S. C NNFeaturesoff-the-shelf: an Astounding Baseline for Recognition. Co RR,abs/1403. 6382, 2014. 9
Publishedasa conferencepaperat ICLR2015 Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. Image Net large sc ale visual recognition challenge. Co RR, abs/1409. 0575, 2014. Sermanet,P.,Eigen, D.,Zhang, X.,Mathieu, M.,Fergus,R., and Le Cun,Y. Over Feat: Integrated Recognition, Localizationand Detectionusing Convolutional Networks. In Proc. ICLR,2014. Simonyan, K. and Zisserman, A. Two-stream convolutional ne tworks for action recognition in videos. Co RR, abs/1406. 2199, 2014. Published in Proc. NIPS,2014. Szegedy, C., Liu, W.,Jia, Y., Sermanet, P.,Reed, S.,Anguel ov, D.,Erhan, D., Vanhoucke, V., and Rabinovich, A. Goingdeeper withconvolutions. Co RR,abs/1409. 4842, 2014. Wei, Y., Xia, W., Huang, J., Ni, B., Dong, J., Zhao, Y., and Yan, S. CNN: Single-label to multi-label. Co RR, abs/1406. 5726, 2014. Zeiler, M. D. and Fergus, R. Visualizing and understanding c onvolutional networks. Co RR, abs/1311. 2901, 2013. Publishedin Proc. ECCV,2014. A LOCALISATION In the main bodyof the paper we have consideredthe classifica tion task of the ILSVRC challenge, and performed a thorough evaluation of Conv Net architectur es of different depth. In this section, we turn to the localisation task of the challenge, which we ha ve won in 2014 with 25. 3%error. It can be seen as a special case of object detection, where a sing le object bounding box should be predictedforeach ofthe top-5classes, irrespectiveof the actual numberofobjectsof the class. For thiswe adoptthe approachof Sermanetet al. (2014), the winn ersof the ILSVRC-2013localisation challenge,withafewmodifications. Ourmethodisdescribed in Sect. A. 1andevaluatedin Sect. A. 2. A. 1 L OCALISATION CONVNET To perform object localisation, we use a very deep Conv Net, w here the last fully connected layer predicts the bounding box location instead of the class scor es. A bounding box is represented by a 4-D vector storing its center coordinates, width, and heig ht. There is a choice of whether the boundingbox prediction is shared across all classes (singl e-class regression, SCR (Sermanetet al., 2014))orisclass-specific(per-classregression,PCR). In theformercase,thelastlayeris4-D,while in the latter it is 4000-D (since there are 1000 classes in the dataset). Apart from the last bounding boxpredictionlayer,weuse the Conv Netarchitecture D (Tab le1),whichcontains16weightlayers andwasfoundtobe thebest-performingin theclassification task (Sect. 4). Training. Training of localisation Conv Nets is similar to that of the c lassification Conv Nets (Sect. 3. 1). Themaindifferenceisthatwereplacethelogis ticregressionobjectivewitha Euclidean loss,whichpenalisesthedeviationofthepredictedboundi ngboxparametersfromtheground-truth. We trainedtwo localisation models, each on a single scale: S= 256and S= 384(due to the time constraints,we didnot use trainingscale jitteringforour ILSVRC-2014submission). Trainingwas initialised with the correspondingclassification models ( trained on the same scales), and the initial learning rate was set to 10-3. We exploredboth fine-tuningall layers and fine-tuningonly the first two fully-connected layers, as done in (Sermanetetal., 201 4). The last fully-connected layer was initialisedrandomlyandtrainedfromscratch. Testing. We consider two testing protocols. 
The first is used for comparing different network modifications on the validation set, and considers only the bounding box prediction for the ground-truth class (to factor out the classification errors). The bounding box is obtained by applying the network only to the central crop of the image.

The second, fully-fledged, testing procedure is based on the dense application of the localisation ConvNet to the whole image, similarly to the classification task (Sect. 3.2). The difference is that instead of the class score map, the output of the last fully-connected layer is a set of bounding box predictions. To come up with the final prediction, we utilise the greedy merging procedure of Sermanet et al. (2014), which first merges spatially close predictions (by averaging their coordinates), and then rates them based on the class scores obtained from the classification ConvNet. When several localisation ConvNets are used, we first take the union of their sets of bounding box predictions, and then run the merging procedure on the union.
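Sect. A.1 above amounts to swapping the 1000-way classification layer for a box-regression layer and training it with a Euclidean loss. A minimal sketch of the per-class regression (PCR) variant is given below, assuming PyTorch and 4096-D penultimate features; the layer and loss are illustrative reconstructions, not the original implementation.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 1000

class LocalisationHead(nn.Module):
    """Per-class regression (PCR): the last FC layer predicts a 4-D box
    (center x, center y, width, height) for every class, i.e. a 4000-D output."""
    def __init__(self, in_features=4096, num_classes=NUM_CLASSES):
        super().__init__()
        self.fc = nn.Linear(in_features, 4 * num_classes)

    def forward(self, features):                 # features: (N, 4096)
        return self.fc(features).view(-1, NUM_CLASSES, 4)

def euclidean_box_loss(pred_boxes, gt_boxes, gt_classes):
    """Euclidean loss between predicted and ground-truth box parameters,
    taken for the ground-truth class of each image.
    pred_boxes: (N, 1000, 4), gt_boxes: (N, 4), gt_classes: (N,) long tensor."""
    idx = torch.arange(pred_boxes.size(0))
    selected = pred_boxes[idx, gt_classes]       # (N, 4) box for the true class
    return ((selected - gt_boxes) ** 2).sum(dim=1).mean()
```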
We did not use the multiple pooling offsets technique of Sermanet et al. (2014), which increases the spatial resolution of the bounding box predictions and can further improve the results.

A.2 LOCALISATION EXPERIMENTS

In this section we first determine the best-performing localisation setting (using the first test protocol), and then evaluate it in a fully-fledged scenario (the second protocol). The localisation error is measured according to the ILSVRC criterion (Russakovsky et al., 2014), i.e. the bounding box prediction is deemed correct if its intersection-over-union ratio with the ground-truth bounding box is above 0.5 (a minimal sketch of this criterion is given at the end of this appendix section).

Settings comparison. As can be seen from Table 8, per-class regression (PCR) outperforms the class-agnostic single-class regression (SCR), which differs from the findings of Sermanet et al. (2014), where PCR was outperformed by SCR. We also note that fine-tuning all layers for the localisation task leads to noticeably better results than fine-tuning only the fully-connected layers (as done in (Sermanet et al., 2014)). In these experiments, the smallest image side was set to S = 384; the results with S = 256 exhibit the same behaviour and are not shown for brevity.

Table 8: Localisation error for different modifications with the simplified testing protocol: the bounding box is predicted from a single central image crop, and the ground-truth class is used. All ConvNet layers (except for the last one) have the configuration D (Table 1), while the last layer performs either single-class regression (SCR) or per-class regression (PCR).
Fine-tuned layers | Regression type | GT-class localisation error
1st and 2nd FC | SCR | 36.4
1st and 2nd FC | PCR | 34.3
all | PCR | 33.1

Fully-fledged evaluation. Having determined the best localisation setting (PCR, fine-tuning of all layers), we now apply it in the fully-fledged scenario, where the top-5 class labels are predicted using our best-performing classification system (Sect. 4.5), and multiple densely-computed bounding box predictions are merged using the method of Sermanet et al. (2014). As can be seen from Table 9, application of the localisation ConvNet to the whole image substantially improves the results compared to using a center crop (Table 8), despite using the top-5 predicted class labels instead of the ground truth. Similarly to the classification task (Sect. 4), testing at several scales and combining the predictions of multiple networks further improves the performance.

Table 9: Localisation error.
train (S) | test (Q) | top-5 localisation error, val. (%) | top-5 localisation error, test (%)
256 | 256 | 29.5 | -
384 | 384 | 28.2 | 26.7
384 | 352,384 | 27.5 | -
fusion: 256/256 and 384/352,384 | | 26.9 | 25.3

Comparison with the state of the art. We compare our best localisation result with the state of the art in Table 10. With 25.3% test error, our "VGG" team won the localisation challenge of ILSVRC-2014 (Russakovsky et al., 2014). Notably, our results are considerably better than those of the ILSVRC-2013 winner OverFeat (Sermanet et al., 2014), even though we used less scales and did not employ their resolution enhancement technique. We envisage that better localisation performance can be achieved if this technique is incorporated into our method. This indicates the performance advancement brought by our very deep ConvNets: we got better results with a simpler localisation method, but a more powerful representation.

Table 10: Comparison with the state of the art in ILSVRC localisation. Our method is denoted as "VGG".
Method | top-5 val. error (%) | top-5 test error (%)
VGG | 26.9 | 25.3
GoogLeNet (Szegedy et al., 2014) | - | 26.7
OverFeat (Sermanet et al., 2014) | 30.0 | 29.9
Krizhevsky et al. (Krizhevsky et al., 2012) | - | 34.2
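The localisation criterion used throughout Sect. A.2 counts a prediction as correct when its intersection-over-union (IoU) with the ground-truth box exceeds 0.5. A minimal, self-contained sketch follows; the corner-format (x_min, y_min, x_max, y_max) box representation is an assumption for illustration, whereas the networks of Sect. A.1 regress center/width/height parameters.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_correct_localisation(pred_box, gt_box, threshold=0.5):
    # ILSVRC criterion: the predicted box must overlap the ground truth by more than 0.5 IoU.
    return iou(pred_box, gt_box) > threshold
```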
B GENERALISATION OF VERY DEEP FEATURES

In the previous sections we have discussed training and evaluation of very deep ConvNets on the ILSVRC dataset. In this section, we evaluate our ConvNets, pre-trained on ILSVRC, as feature extractors on other, smaller, datasets, where training large models from scratch is not feasible due to over-fitting. Recently, there has been a lot of interest in such a use case (Zeiler & Fergus, 2013; Donahue et al., 2013; Razavian et al., 2014; Chatfield et al., 2014), as it turns out that deep image representations, learnt on ILSVRC, generalise well to other datasets, where they have outperformed hand-crafted representations by a large margin. Following that line of work, we investigate if our models lead to better performance than more shallow models utilised in the state-of-the-art methods. In this evaluation, we consider two models with the best classification performance on ILSVRC (Sect. 4): configurations "Net-D" and "Net-E" (which we made publicly available).

To utilise the ConvNets, pre-trained on ILSVRC, for image classification on other datasets, we remove the last fully-connected layer (which performs 1000-way ILSVRC classification), and use the 4096-D activations of the penultimate layer as image features, which are aggregated across multiple locations and scales. The resulting image descriptor is L2-normalised and combined with a linear SVM classifier, trained on the target dataset. For simplicity, pre-trained ConvNet weights are kept fixed (no fine-tuning is performed).

Aggregation of features is carried out in a similar manner to our ILSVRC evaluation procedure (Sect. 3.2). Namely, an image is first rescaled so that its smallest side equals Q, and then the network is densely applied over the image plane (which is possible when all weight layers are treated as convolutional). We then perform global average pooling on the resulting feature map, which produces a 4096-D image descriptor. The descriptor is then averaged with the descriptor of a horizontally flipped image. As was shown in Sect. 4.2, evaluation over multiple scales is beneficial, so we extract features over several scales Q. The resulting multi-scale features can be either stacked or pooled across scales. Stacking allows a subsequent classifier to learn how to optimally combine image statistics over a range of scales; this, however, comes at the cost of the increased descriptor dimensionality. We return to the discussion of this design choice in the experiments below. We also assess late fusion of features, computed using two networks, which is performed by stacking their respective image descriptors.

Table 11: Comparison with the state of the art in image classification on VOC-2007, VOC-2012, Caltech-101, and Caltech-256. Our models are denoted as "VGG". Results marked with * were achieved using ConvNets pre-trained on the extended ILSVRC dataset (2000 classes).
Method | VOC-2007 (mean AP) | VOC-2012 (mean AP) | Caltech-101 (mean class recall) | Caltech-256 (mean class recall)
Zeiler & Fergus (Zeiler & Fergus, 2013) | - | 79.0 | 86.5±0.5 | 74.2±0.3
Chatfield et al. (Chatfield et al., 2014) | 82.4 | 83.2 | 88.4±0.6 | 77.6±0.1
He et al. (He et al., 2014) | 82.4 | - | 93.4±0.5 | -
Wei et al. (Wei et al., 2014) | 81.5 (85.2*) | 81.7 (90.3*) | - | -
VGG Net-D (16 layers) | 89.3 | 89.0 | 91.8±1.0 | 85.0±0.2
VGG Net-E (19 layers) | 89.3 | 89.0 | 92.3±0.5 | 85.1±0.3
VGG Net-D & Net-E | 89.7 | 89.3 | 92.7±0.5 | 86.2±0.3

Image Classification on VOC-2007 and VOC-2012.
We beginwith the evaluationon the image classification task of PASCAL VOC-2007 and VOC-2012 benchma rks (Everinghametal., 2015). These datasets contain 10K and 22. 5K images respectively, a nd each image is annotated with one or several labels, correspondingto 20 object categories. T he VOC organisersprovidea pre-defined split into training, validation, and test data (the test dat a for VOC-2012 is not publicly available; instead,anofficialevaluationserverisprovided). Recogn itionperformanceismeasuredusingmean averageprecision(m AP)acrossclasses. Notably, by examining the performance on the validation set s of VOC-2007 and VOC-2012, we foundthat aggregatingimage descriptors,computedat mult iple scales, by averagingperformssim-12
Publishedasa conferencepaperat ICLR2015 ilarly to the aggregation by stacking. We hypothesize that t his is due to the fact that in the VOC dataset the objects appear over a variety of scales, so there is no particular scale-specific seman-tics which a classifier could exploit. Since averaging has a b enefit of not inflating the descrip-tor dimensionality, we were able to aggregated image descri ptors over a wide range of scales: Q∈ {256,384,512,640,768}. It is worth noting though that the improvement over a smalle r rangeof{256,384,512}wasrathermarginal( 0. 3%). Thetestsetperformanceisreportedandcomparedwithother approachesin Table11. Ournetworks “Net-D”and“Net-E”exhibitidenticalperformanceon VOCda tasets,andtheircombinationslightly improves the results. Our methods set the new state of the art across image representations, pre-trained on the ILSVRC dataset, outperformingthe previousb est result of Chatfieldet al. (2014) by more than 6%. It should be noted that the method of Wei et al. (2014), which achieves1%better m AP on VOC-2012, is pre-trained on an extended 2000-class IL SVRC dataset, which includes additional 1000 categories, semantically close to those in VOC datasets. It also benefits from the fusionwith anobjectdetection-assistedclassification pi peline. Image Classificationon Caltech-101and Caltech-256. Inthissectionweevaluateverydeepfea-tureson Caltech-101(Fei-Feiet al.,2004)and Caltech-256 (Griffinet al.,2007)imageclassification benchmarks. Caltech-101contains9Kimageslabelledinto1 02classes(101objectcategoriesanda backgroundclass), while Caltech-256 is larger with 31K ima ges and 257 classes. A standard eval-uation protocolon these datasets is to generateseveral ran domsplits into training and test data and report the average recognition performance across the spli ts, which is measured by the mean class recall(whichcompensatesforadifferentnumberoftestima gesperclass). Following Chatfield etal. (2014); Zeiler&Fergus(2013); He etal. (2014),on Caltech-101we generated3 randomsplits into training and test data, so that each split contains 30 traini ng images per class, and up to 50 test images per class. On Caltech-256 we also generated 3 splits, each of which contains 60 training images per class (and the rest is used for testing). In each sp lit, 20% of training images were used asa validationset forhyper-parameterselection. We found that unlike VOC, on Caltech datasets the stacking of descriptors, computed over multi-ple scales, performs better than averaging or max-pooling. This can be explained by the fact that in Caltech images objects typically occupy the whole image, so multi-scale image features are se-manticallydifferent(capturingthe wholeobject vs. object parts), andstacking allows a classifier to exploitsuchscale-specificrepresentations. We usedthree scales Q∈ {256,384,512}. Ourmodelsarecomparedtoeachotherandthestateofthearti n Table11. Ascanbeseen,thedeeper 19-layer Net-Eperformsbetterthanthe16-layer Net-D,and theircombinationfurtherimprovesthe performance. On Caltech-101, our representations are comp etitive with the approach of He etal. (2014),which,however,performssignificantlyworsethano urnetson VOC-2007. On Caltech-256, ourfeaturesoutperformthestate oftheart (Chatfieldetal., 2014)byalargemargin( 8. 6%). Action Classification on VOC-2012. 
We also evaluated our best-performing image representation (the stacking of Net-D and Net-E features) on the PASCAL VOC-2012 action classification task (Everingham et al., 2015), which consists in predicting an action class from a single image, given a bounding box of the person performing the action. The dataset contains 4.6K training images, labelled into 11 classes. Similarly to the VOC-2012 object classification task, the performance is measured using the mAP. We considered two training settings: (i) computing the ConvNet features on the whole image and ignoring the provided bounding box; (ii) computing the features on the whole image and on the provided bounding box, and stacking them to obtain the final representation. The results are compared to other approaches in Table 12.

Our representation achieves the state of the art on the VOC action classification task even without using the provided bounding boxes, and the results are further improved when using both images and bounding boxes. Unlike other approaches, we did not incorporate any task-specific heuristics, but relied on the representation power of very deep convolutional features.

Other Recognition Tasks. Since the public release of our models, they have been actively used by the research community for a wide range of image recognition tasks, consistently outperforming more shallow representations. For instance, Girshick et al. (2014) achieve state-of-the-art object detection results by replacing the ConvNet of Krizhevsky et al. (2012) with our 16-layer model. Similar gains over the more shallow architecture of Krizhevsky et al. (2012) have been observed in semantic segmentation (Long et al., 2014), image caption generation (Kiros et al., 2014; Karpathy & Fei-Fei, 2014), and texture and material recognition (Cimpoi et al., 2014; Bell et al., 2014).
Table 12: Comparison with the state of the art in single-image action classification on VOC-2012. Our models are denoted as "VGG". Results marked with * were achieved using ConvNets pre-trained on the extended ILSVRC dataset (1512 classes).
Method | VOC-2012 (mean AP)
(Oquab et al., 2014) | 70.2*
(Gkioxari et al., 2014) | 73.6
(Hoai, 2014) | 76.3
VGG Net-D & Net-E, image-only | 79.2
VGG Net-D & Net-E, image and bounding box | 84.0

C PAPER REVISIONS

Here we present the list of major paper revisions, outlining the substantial changes for the convenience of the reader.

v1: Initial version. Presents the experiments carried out before the ILSVRC submission.
v2: Adds post-submission ILSVRC experiments with training set augmentation using scale jittering, which improves the performance.
v3: Adds generalisation experiments (Appendix B) on PASCAL VOC and Caltech image classification datasets. The models used for these experiments are publicly available.
v4: The paper is converted to ICLR-2015 submission format. Also adds experiments with multiple crops for classification.
v6: Camera-ready ICLR-2015 conference paper. Adds a comparison of the net B with a shallow net and the results on the PASCAL VOC action classification benchmark.
ar Xiv:1409. 1556v6 [cs. CV] 10 Apr 2015Publishedasa conferencepaperat ICLR2015 VERYDEEPCONVOLUTIONAL NETWORKS FORLARGE-SCALEIMAGERECOGNITION Karen Simonyan∗& Andrew Zisserman+ Visual Geometry Group,Departmentof Engineering Science, Universityof Oxford {karen,az }@robots. ox. ac. uk ABSTRACT In this work we investigate the effect of the convolutional n etwork depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with verysmall( 3×3)convolutionfilters,whichshowsthatasignificantimprove ment on the prior-art configurations can be achieved by pushing th e depth to 16-19 weight layers. These findings were the basis of our Image Net C hallenge 2014 submission,whereourteamsecuredthefirstandthesecondpl acesinthelocalisa-tion and classification tracks respectively. We also show th at our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing Conv Net models publicly a vailable to facili-tate furtherresearchontheuse ofdeepvisualrepresentati onsincomputervision. 1 INTRODUCTION Convolutional networks (Conv Nets) have recently enjoyed a great success in large-scale im-age and video recognition (Krizhevskyetal., 2012; Zeiler& Fergus, 2013; Sermanetet al., 2014; Simonyan& Zisserman, 2014) which has become possible due to the large public image reposito-ries,suchas Image Net(Denget al.,2009),andhigh-perform ancecomputingsystems,suchas GPUs orlarge-scaledistributedclusters(Deanet al., 2012). In particular,animportantroleintheadvance ofdeepvisualrecognitionarchitectureshasbeenplayedby the Image Net Large-Scale Visual Recog-nition Challenge (ILSVRC) (Russakovskyet al., 2014), whic h has served as a testbed for a few generationsof large-scale image classification systems, f rom high-dimensionalshallow feature en-codings(Perronninetal.,2010)(thewinnerof ILSVRC-2011 )todeep Conv Nets(Krizhevskyet al., 2012)(thewinnerof ILSVRC-2012). With Conv Nets becoming more of a commodity in the computer vi sion field, a number of at-tempts have been made to improve the original architecture o f Krizhevskyet al. (2012) in a bid to achieve better accuracy. For instance, the best-perf orming submissions to the ILSVRC-2013 (Zeiler&Fergus, 2013; Sermanetetal., 2014) utilised smaller receptive window size and smaller stride of the first convolutional layer. Another lin e of improvements dealt with training and testing the networks densely over the whole image and ove r multiple scales (Sermanetet al., 2014; Howard, 2014). In this paper, we address another impor tant aspect of Conv Net architecture design-itsdepth. Tothisend,we fixotherparametersofthea rchitecture,andsteadilyincreasethe depth of the network by adding more convolutionallayers, wh ich is feasible due to the use of very small (3×3)convolutionfiltersinall layers. As a result, we come up with significantly more accurate Conv N et architectures, which not only achieve the state-of-the-art accuracy on ILSVRC classifica tion and localisation tasks, but are also applicabletootherimagerecognitiondatasets,wherethey achieveexcellentperformanceevenwhen usedasa partofa relativelysimple pipelines(e. g. deepfea turesclassified byalinear SVM without fine-tuning). We havereleasedourtwobest-performingmode ls1tofacilitatefurtherresearch. The rest of the paper is organised as follows. In Sect. 2, we de scribe our Conv Net configurations. 
The details of the image classification trainingand evaluat ionare then presented in Sect. 3, and the ∗current affiliation: Google Deep Mind+current affiliation: Universityof Oxfordand Google Deep Mi nd 1http://www. robots. ox. ac. uk/ ˜vgg/research/very_deep/ 1
Publishedasa conferencepaperat ICLR2015 configurations are compared on the ILSVRC classification tas k in Sect. 4. Sect. 5 concludes the paper. For completeness,we also describeand assess our ILS VRC-2014object localisationsystem in Appendix A,anddiscussthegeneralisationofverydeepfe aturestootherdatasetsin Appendix B. Finally,Appendix Ccontainsthelist ofmajorpaperrevisio ns. 2 CONVNETCONFIGURATIONS To measure the improvement brought by the increased Conv Net depth in a fair setting, all our Conv Net layer configurations are designed using the same pri nciples, inspired by Ciresan etal. (2011); Krizhevskyet al. (2012). In this section, we first de scribe a generic layout of our Conv Net configurations(Sect. 2. 1)andthendetailthespecificconfig urationsusedintheevaluation(Sect. 2. 2). Ourdesignchoicesarethendiscussedandcomparedtothepri orart in Sect. 2. 3. 2. 1 A RCHITECTURE During training, the input to our Conv Nets is a fixed-size 224×224RGB image. The only pre-processingwedoissubtractingthemean RGBvalue,computed onthetrainingset,fromeachpixel. Theimageispassedthroughastackofconvolutional(conv. ) layers,whereweusefilterswithavery small receptive field: 3×3(which is the smallest size to capture the notion of left/rig ht, up/down, center). In one of the configurationswe also utilise 1×1convolutionfilters, which can be seen as a linear transformationof the input channels (followed by n on-linearity). The convolutionstride is fixedto1pixel;thespatialpaddingofconv. layerinputissuchthatt hespatialresolutionispreserved afterconvolution,i. e. the paddingis 1pixel for3×3conv. layers. Spatial poolingis carriedoutby fivemax-poolinglayers,whichfollowsomeoftheconv. layer s(notalltheconv. layersarefollowed bymax-pooling). Max-poolingisperformedovera 2×2pixelwindow,withstride 2. Astackofconvolutionallayers(whichhasadifferentdepth indifferentarchitectures)isfollowedby three Fully-Connected(FC) layers: the first two have4096ch annelseach,the thirdperforms1000-way ILSVRC classification and thus contains1000channels(o ne foreach class). The final layer is thesoft-maxlayer. Theconfigurationofthefullyconnected layersis thesameinall networks. Allhiddenlayersareequippedwiththerectification(Re LU( Krizhevskyetal.,2012))non-linearity. We note that none of our networks (except for one) contain Loc al Response Normalisation (LRN) normalisation (Krizhevskyet al., 2012): as will be sh own in Sect. 4, such normalisation does not improve the performance on the ILSVRC dataset, but l eads to increased memory con-sumption and computation time. Where applicable, the param eters for the LRN layer are those of(Krizhevskyetal., 2012). 2. 2 C ONFIGURATIONS The Conv Net configurations, evaluated in this paper, are out lined in Table 1, one per column. In the following we will refer to the nets by their names (A-E). A ll configurationsfollow the generic design presented in Sect. 2. 1, and differ only in the depth: f rom 11 weight layers in the network A (8conv. and3FClayers)to19weightlayersinthenetwork E(1 6conv. and3FClayers). Thewidth of conv. layers (the number of channels) is rather small, sta rting from 64in the first layer and then increasingbyafactorof 2aftereachmax-poolinglayer,untilit reaches 512. In Table 2 we reportthe numberof parametersfor each configur ation. In spite of a large depth, the numberof weights in our netsis not greater thanthe numberof weightsin a moreshallow net with largerconv. layerwidthsandreceptivefields(144Mweights in(Sermanetet al., 2014)). 2. 
3 DISCUSSION

Our ConvNet configurations are quite different from the ones used in the top-performing entries of the ILSVRC-2012 (Krizhevsky et al., 2012) and ILSVRC-2013 competitions (Zeiler & Fergus, 2013; Sermanet et al., 2014). Rather than using relatively large receptive fields in the first conv. layers (e.g. 11×11 with stride 4 in (Krizhevsky et al., 2012), or 7×7 with stride 2 in (Zeiler & Fergus, 2013; Sermanet et al., 2014)), we use very small 3×3 receptive fields throughout the whole net, which are convolved with the input at every pixel (with stride 1). It is easy to see that a stack of two 3×3 conv. layers (without spatial pooling in between) has an effective receptive field of 5×5; three such layers have a 7×7 effective receptive field.
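As a quick check of this claim: for stride-1 convolutions stacked without pooling, each k×k layer grows the effective receptive field by k−1. The small helper below is only an illustrative calculation, not code from the paper.

```python
def effective_receptive_field(kernel_sizes):
    """Effective receptive field of stacked stride-1 conv. layers.

    Each additional k x k layer (stride 1, no pooling in between) extends
    the receptive field by k - 1 pixels along each axis.
    """
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

assert effective_receptive_field([3, 3]) == 5      # two 3x3 layers see a 5x5 region
assert effective_receptive_field([3, 3, 3]) == 7   # three 3x3 layers see a 7x7 region
```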
Table 1: ConvNet configurations (shown in columns). The depth of the configurations increases from the left (A) to the right (E), as more layers are added. The convolutional layer parameters are denoted as "conv<receptive field size>-<number of channels>". The ReLU activation function is not shown for brevity.
A (11 weight layers) | A-LRN (11 weight layers) | B (13 weight layers) | C (16 weight layers) | D (16 weight layers) | E (19 weight layers)
input: 224×224 RGB image (all configurations)
conv3-64 | conv3-64, LRN | conv3-64, conv3-64 | conv3-64, conv3-64 | conv3-64, conv3-64 | conv3-64, conv3-64
maxpool
conv3-128 | conv3-128 | conv3-128, conv3-128 | conv3-128, conv3-128 | conv3-128, conv3-128 | conv3-128, conv3-128
maxpool
conv3-256, conv3-256 | conv3-256, conv3-256 | conv3-256, conv3-256 | conv3-256, conv3-256, conv1-256 | conv3-256, conv3-256, conv3-256 | conv3-256, conv3-256, conv3-256, conv3-256
maxpool
conv3-512, conv3-512 | conv3-512, conv3-512 | conv3-512, conv3-512 | conv3-512, conv3-512, conv1-512 | conv3-512, conv3-512, conv3-512 | conv3-512, conv3-512, conv3-512, conv3-512
maxpool
conv3-512, conv3-512 | conv3-512, conv3-512 | conv3-512, conv3-512 | conv3-512, conv3-512, conv1-512 | conv3-512, conv3-512, conv3-512 | conv3-512, conv3-512, conv3-512, conv3-512
maxpool
all configurations: FC-4096, FC-4096, FC-1000, soft-max

Table 2: Number of parameters (in millions).
Network | A, A-LRN | B | C | D | E
Number of parameters | 133 | 133 | 134 | 138 | 144

So what have we gained by using, for instance, a stack of three 3×3 conv. layers instead of a single 7×7 layer? First, we incorporate three non-linear rectification layers instead of a single one, which makes the decision function more discriminative. Second, we decrease the number of parameters: assuming that both the input and the output of a three-layer 3×3 convolution stack have C channels, the stack is parametrised by 3(3²C²) = 27C² weights; at the same time, a single 7×7 conv. layer would require 7²C² = 49C² parameters, i.e. 81% more. This can be seen as imposing a regularisation on the 7×7 conv. filters, forcing them to have a decomposition through the 3×3 filters (with non-linearity injected in between).

The incorporation of 1×1 conv. layers (configuration C, Table 1) is a way to increase the non-linearity of the decision function without affecting the receptive fields of the conv. layers. Even though in our case the 1×1 convolution is essentially a linear projection onto the space of the same dimensionality (the number of input and output channels is the same), an additional non-linearity is introduced by the rectification function. It should be noted that 1×1 conv. layers have recently been utilised in the "Network in Network" architecture of Lin et al. (2014).

Small-size convolution filters have been previously used by Ciresan et al. (2011), but their nets are significantly less deep than ours, and they did not evaluate on the large-scale ILSVRC dataset. Goodfellow et al. (2014) applied deep ConvNets (11 weight layers) to the task of street number recognition, and showed that the increased depth led to better performance. GoogLeNet (Szegedy et al., 2014), a top-performing entry of the ILSVRC-2014 classification task, was developed independently of our work, but is similar in that it is based on very deep ConvNets (22 weight layers) and small convolution filters (apart from 3×3, they also use 1×1 and 5×5 convolutions). Their network topology is, however, more complex than ours, and the spatial resolution of the feature maps is reduced more aggressively in the first layers to decrease the amount of computation. As will be shown in Sect. 4.5, our model is outperforming that of Szegedy et al. (2014) in terms of the single-network classification accuracy.
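Column D of Table 1 (the 16-layer network) can be written down directly as a stack of 3×3 convolutions with 2×2 max-pooling, following the layer plan above and the design rules of Sect. 2.1 (stride 1, padding 1, ReLU after every conv. layer, dropout on the first two FC layers). The sketch below is an illustrative PyTorch reconstruction, not the released Caffe model; for reference, torchvision.models.vgg16 follows the same layer plan.

```python
import torch
import torch.nn as nn

# Channel plan of configuration D (Table 1): integers are conv3-<channels>,
# "M" marks a 2x2 max-pooling layer with stride 2.
CONFIG_D = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
            512, 512, 512, "M", 512, 512, 512, "M"]

def make_config_d(num_classes=1000):
    layers, in_ch = [], 3
    for v in CONFIG_D:
        if v == "M":
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            # 3x3 conv, stride 1, padding 1 preserves the spatial resolution (Sect. 2.1).
            layers.append(nn.Conv2d(in_ch, v, kernel_size=3, padding=1))
            layers.append(nn.ReLU(inplace=True))
            in_ch = v
    classifier = nn.Sequential(
        nn.Flatten(),
        nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
        nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
        nn.Linear(4096, num_classes),  # soft-max is applied outside, e.g. inside the loss
    )
    return nn.Sequential(*layers, classifier)

# A 224x224 input passes through five 2x2 poolings, giving a 7x7x512 map before the FC layers.
model = make_config_d()
out = model(torch.randn(1, 3, 224, 224))
assert out.shape == (1, 1000)
```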
3 CLASSIFICATION FRAMEWORK

In the previous section we presented the details of our network configurations. In this section, we describe the details of classification ConvNet training and evaluation.

3.1 TRAINING

The ConvNet training procedure generally follows Krizhevsky et al. (2012) (except for sampling the input crops from multi-scale training images, as explained later). Namely, the training is carried out by optimising the multinomial logistic regression objective using mini-batch gradient descent (based on back-propagation (LeCun et al., 1989)) with momentum. The batch size was set to 256, momentum to 0.9. The training was regularised by weight decay (the L2 penalty multiplier set to 5·10^-4) and dropout regularisation for the first two fully-connected layers (dropout ratio set to 0.5). The learning rate was initially set to 10^-2, and then decreased by a factor of 10 when the validation set accuracy stopped improving. In total, the learning rate was decreased 3 times, and the learning was stopped after 370K iterations (74 epochs). We conjecture that in spite of the larger number of parameters and the greater depth of our nets compared to (Krizhevsky et al., 2012), the nets required fewer epochs to converge due to (a) implicit regularisation imposed by greater depth and smaller conv. filter sizes; (b) pre-initialisation of certain layers.

The initialisation of the network weights is important, since bad initialisation can stall learning due to the instability of gradients in deep nets. To circumvent this problem, we began with training the configuration A (Table 1), shallow enough to be trained with random initialisation. Then, when training deeper architectures, we initialised the first four convolutional layers and the last three fully-connected layers with the layers of net A (the intermediate layers were initialised randomly). We did not decrease the learning rate for the pre-initialised layers, allowing them to change during learning. For random initialisation (where applicable), we sampled the weights from a normal distribution with zero mean and 10^-2 variance. The biases were initialised with zero. It is worth noting that after the paper submission we found that it is possible to initialise the weights without pre-training by using the random initialisation procedure of Glorot & Bengio (2010).

To obtain the fixed-size 224×224 ConvNet input images, they were randomly cropped from rescaled training images (one crop per image per SGD iteration). To further augment the training set, the crops underwent random horizontal flipping and random RGB colour shift (Krizhevsky et al., 2012). Training image rescaling is explained below.

Training image size. Let S be the smallest side of an isotropically-rescaled training image, from which the ConvNet input is cropped (we also refer to S as the training scale). While the crop size is fixed to 224×224, in principle S can take on any value not less than 224: for S = 224 the crop will capture whole-image statistics, completely spanning the smallest side of a training image; for S ≫ 224 the crop will correspond to a small part of the image, containing a small object or an object part.

We consider two approaches for setting the training scale S.
The first is to fix S, which corresponds to single-scale training (note that image content within the sampled crops can still represent multi-scale image statistics). In our experiments, we evaluated models trained at two fixed scales: S = 256 (which has been widely used in the prior art (Krizhevsky et al., 2012; Zeiler & Fergus, 2013; Sermanet et al., 2014)) and S = 384. Given a ConvNet configuration, we first trained the network using S = 256. To speed up training of the S = 384 network, it was initialised with the weights pre-trained with S = 256, and we used a smaller initial learning rate of 10^-3.

The second approach to setting S is multi-scale training, where each training image is individually rescaled by randomly sampling S from a certain range [Smin, Smax] (we used Smin = 256 and Smax = 512). Since objects in images can be of different size, it is beneficial to take this into account during training. This can also be seen as training set augmentation by scale jittering, where a single model is trained to recognise objects over a wide range of scales. For speed reasons, we trained multi-scale models by fine-tuning all layers of a single-scale model with the same configuration, pre-trained with fixed S = 384.
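A minimal sketch of the scale-jittered crop sampling described above, assuming PIL for image handling; the random RGB colour shift of Krizhevsky et al. (2012) is omitted, and the function is an illustrative reconstruction rather than the authors' pipeline.

```python
import random
from PIL import Image

def sample_training_crop(img, s_min=256, s_max=512, crop=224):
    """Scale-jittered training crop (Sect. 3.1): isotropically rescale the image so
    its smallest side equals a random S in [s_min, s_max], then take a random
    224x224 crop and apply a random horizontal flip."""
    s = random.randint(s_min, s_max)
    w, h = img.size
    scale = s / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    w, h = img.size
    left = random.randint(0, w - crop)
    top = random.randint(0, h - crop)
    img = img.crop((left, top, left + crop, top + crop))
    if random.random() < 0.5:
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
    return img
```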
3.2 TESTING

At test time, given a trained ConvNet and an input image, it is classified in the following way. First, it is isotropically rescaled to a pre-defined smallest image side, denoted as Q (we also refer to it as the test scale). We note that Q is not necessarily equal to the training scale S (as we will show in Sect. 4, using several values of Q for each S leads to improved performance). Then, the network is applied densely over the rescaled test image in a way similar to (Sermanet et al., 2014). Namely, the fully-connected layers are first converted to convolutional layers (the first FC layer to a 7×7 conv. layer, the last two FC layers to 1×1 conv. layers). The resulting fully-convolutional net is then applied to the whole (uncropped) image. The result is a class score map with the number of channels equal to the number of classes, and a variable spatial resolution, dependent on the input image size. Finally, to obtain a fixed-size vector of class scores for the image, the class score map is spatially averaged (sum-pooled). We also augment the test set by horizontal flipping of the images; the soft-max class posteriors of the original and flipped images are averaged to obtain the final scores for the image.

Since the fully-convolutional network is applied over the whole image, there is no need to sample multiple crops at test time (Krizhevsky et al., 2012), which is less efficient as it requires network re-computation for each crop. At the same time, using a large set of crops, as done by Szegedy et al. (2014), can lead to improved accuracy, as it results in a finer sampling of the input image compared to the fully-convolutional net. Also, multi-crop evaluation is complementary to dense evaluation due to different convolution boundary conditions: when applying a ConvNet to a crop, the convolved feature maps are padded with zeros, while in the case of dense evaluation the padding for the same crop naturally comes from the neighbouring parts of an image (due to both the convolutions and spatial pooling), which substantially increases the overall network receptive field, so more context is captured. While we believe that in practice the increased computation time of multiple crops does not justify the potential gains in accuracy, for reference we also evaluate our networks using 50 crops per scale (a 5×5 regular grid with 2 flips), for a total of 150 crops over 3 scales, which is comparable to the 144 crops over 4 scales used by Szegedy et al. (2014).

3.3 IMPLEMENTATION DETAILS

Our implementation is derived from the publicly available C++ Caffe toolbox (Jia, 2013) (branched out in December 2013), but contains a number of significant modifications, allowing us to perform training and evaluation on multiple GPUs installed in a single system, as well as train and evaluate on full-size (uncropped) images at multiple scales (as described above). Multi-GPU training exploits data parallelism, and is carried out by splitting each batch of training images into several GPU batches, processed in parallel on each GPU. After the GPU batch gradients are computed, they are averaged to obtain the gradient of the full batch. Gradient computation is synchronous across the GPUs, so the result is exactly the same as when training on a single GPU.
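The gradient arithmetic behind this synchronous scheme is easy to state in code: splitting a batch into sub-batches and accumulating their gradients reproduces the full-batch gradient exactly. The sketch below processes the sub-batches sequentially on one device purely for illustration; it is not the paper's multi-GPU Caffe implementation.

```python
import torch
import torch.nn as nn

def data_parallel_step(model, images, labels, optimizer, num_chunks=4):
    """Conceptual sketch of synchronous data parallelism: the batch is split into
    sub-batches, per-sub-batch gradients are computed and summed (equivalently,
    averaged), which yields exactly the same update as processing the full batch.
    In the paper, the sub-batches run in parallel, one per GPU."""
    criterion = nn.CrossEntropyLoss(reduction="sum")
    optimizer.zero_grad()
    batch_size = images.size(0)
    for img_chunk, lbl_chunk in zip(images.chunk(num_chunks), labels.chunk(num_chunks)):
        loss = criterion(model(img_chunk), lbl_chunk) / batch_size
        loss.backward()  # gradients accumulate across chunks, i.e. the full-batch average
    optimizer.step()

# In practice the same effect on multiple GPUs is obtained with built-in wrappers,
# e.g. torch.nn.DataParallel(model) or DistributedDataParallel.
```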
While more sophisticated methods of speeding up ConvNet training have been recently proposed (Krizhevsky, 2014), which employ model and data parallelism for different layers of the net, we have found that our conceptually much simpler scheme already provides a speedup of 3.75 times on an off-the-shelf 4-GPU system, as compared to using a single GPU. On a system equipped with four NVIDIA Titan Black GPUs, training a single net took 2-3 weeks depending on the architecture.

4 CLASSIFICATION EXPERIMENTS

Dataset. In this section, we present the image classification results achieved by the described ConvNet architectures on the ILSVRC-2012 dataset (which was used for the ILSVRC 2012-2014 challenges). The dataset includes images of 1000 classes, and is split into three sets: training (1.3M images), validation (50K images), and testing (100K images with held-out class labels). The classification performance is evaluated using two measures: the top-1 and top-5 error. The former is a multi-class classification error, i.e. the proportion of incorrectly classified images; the latter is the main evaluation criterion used in ILSVRC, and is computed as the proportion of images such that the ground-truth category is outside the top-5 predicted categories.
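A minimal sketch of the two evaluation measures, assuming PyTorch tensors of class scores and integer labels; this is an illustration of the definitions above, not official evaluation code.

```python
import torch

def topk_error(logits, targets, k=5):
    """Top-k error as used in ILSVRC: the fraction of images whose ground-truth
    class is not among the k highest-scoring predictions.
    logits: (N, num_classes) class scores, targets: (N,) ground-truth labels."""
    topk = logits.topk(k, dim=1).indices              # (N, k) predicted classes
    hit = (topk == targets.unsqueeze(1)).any(dim=1)   # True if the ground truth is in the top k
    return 1.0 - hit.float().mean().item()

# Top-1 error is the ordinary multi-class classification error:
# topk_error(logits, targets, k=1)
```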
For the majority of experiments, we used the validation set as the test set. Certain experiments were also carried out on the test set and submitted to the official ILSVRC server as a "VGG" team entry to the ILSVRC-2014 competition (Russakovsky et al., 2014).

4.1 SINGLE SCALE EVALUATION

We begin with evaluating the performance of individual ConvNet models at a single scale with the layer configurations described in Sect. 2.2. The test image size was set as follows: Q = S for fixed S, and Q = 0.5(Smin + Smax) for jittered S ∈ [Smin, Smax]. The results are shown in Table 3.

First, we note that using local response normalisation (the A-LRN network) does not improve on the model A without any normalisation layers. We thus do not employ normalisation in the deeper architectures (B-E).

Second, we observe that the classification error decreases with the increased ConvNet depth: from 11 layers in A to 19 layers in E. Notably, in spite of the same depth, the configuration C (which contains three 1×1 conv. layers) performs worse than the configuration D, which uses 3×3 conv. layers throughout the network. This indicates that while the additional non-linearity does help (C is better than B), it is also important to capture spatial context by using conv. filters with non-trivial receptive fields (D is better than C). The error rate of our architecture saturates when the depth reaches 19 layers, but even deeper models might be beneficial for larger datasets. We also compared the net B with a shallow net with five 5×5 conv. layers, which was derived from B by replacing each pair of 3×3 conv. layers with a single 5×5 conv. layer (which has the same receptive field as explained in Sect. 2.3). The top-1 error of the shallow net was measured to be 7% higher than that of B (on a center crop), which confirms that a deep net with small filters outperforms a shallow net with larger filters.

Finally, scale jittering at training time (S ∈ [256;512]) leads to significantly better results than training on images with a fixed smallest side (S = 256 or S = 384), even though a single scale is used at test time. This confirms that training set augmentation by scale jittering is indeed helpful for capturing multi-scale image statistics.

Table 3: ConvNet performance at a single test scale.
ConvNet config. (Table 1) | train (S) | test (Q) | top-1 val. error (%) | top-5 val. error (%)
A | 256 | 256 | 29.6 | 10.4
A-LRN | 256 | 256 | 29.7 | 10.5
B | 256 | 256 | 28.7 | 9.9
C | 256 | 256 | 28.1 | 9.4
C | 384 | 384 | 28.1 | 9.3
C | [256;512] | 384 | 27.3 | 8.8
D | 256 | 256 | 27.0 | 8.8
D | 384 | 384 | 26.8 | 8.7
D | [256;512] | 384 | 25.6 | 8.1
E | 256 | 256 | 27.3 | 9.0
E | 384 | 384 | 26.9 | 8.7
E | [256;512] | 384 | 25.5 | 8.0

4.2 MULTI-SCALE EVALUATION

Having evaluated the ConvNet models at a single scale, we now assess the effect of scale jittering at test time. It consists of running a model over several rescaled versions of a test image (corresponding to different values of Q), followed by averaging the resulting class posteriors. Considering that a large discrepancy between training and testing scales leads to a drop in performance, the models trained with fixed S were evaluated over three test image sizes, close to the training one: Q = {S-32, S, S+32}. At the same time, scale jittering at training time allows the network to be applied to a wider range of scales at test time, so the model trained with variable S ∈ [Smin; Smax] was evaluated over a larger range of sizes Q = {Smin, 0.5(Smin+Smax), Smax}.
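A minimal sketch of this multi-scale test-time evaluation: it assumes a model that accepts variable-sized inputs (e.g. the fully-convolutional form of Sect. 3.2 followed by spatial averaging) and returns one score vector per image. The helper only illustrates the choice of Q and the averaging of class posteriors; it is not the authors' evaluation code.

```python
import torch
import torch.nn.functional as F

def test_scales(s_train, jittered=False, s_min=256, s_max=512):
    """Test scales Q as described in Sect. 4.2."""
    if jittered:
        return [s_min, (s_min + s_max) // 2, s_max]   # {Smin, 0.5(Smin+Smax), Smax}
    return [s_train - 32, s_train, s_train + 32]       # {S-32, S, S+32}

def multi_scale_predict(model, image, scales):
    """Rescale the image so its smallest side equals each Q, run the network,
    and average the resulting soft-max class posteriors."""
    _, h, w = image.shape                               # image: (3, H, W)
    posteriors = []
    with torch.no_grad():
        for q in scales:
            scale = q / min(h, w)
            resized = F.interpolate(image.unsqueeze(0), scale_factor=scale,
                                    mode="bilinear", align_corners=False)
            posteriors.append(F.softmax(model(resized), dim=1))
    return torch.stack(posteriors).mean(dim=0)          # (1, num_classes)
```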
VGG-19 layer image recognition model.pdf
Publishedasa conferencepaperat ICLR2015 Theresults,presentedin Table4,indicatethatscalejitte ringattest timeleadstobetterperformance (as compared to evaluating the same model at a single scale, s hown in Table 3). As before, the deepest configurations(D and E) perform the best, and scale j ittering is better than training with a fixed smallest side S. Our best single-network performance on the validation set is24. 8%/7. 5% top-1/top-5error(highlightedinboldin Table4). Onthete stset,theconfiguration Eachieves 7. 3% top-5error. Table4:Conv Netperformanceatmultiple test scales. Conv Net config. (Table 1) smallest image side top-1val. error (%) top-5val. error (%) train(S)test(Q) B 256 224,256,288 28. 2 9. 6 C256 224,256,288 27. 7 9. 2 384 352,384,416 27. 8 9. 2 [256;512] 256,384,512 26. 3 8. 2 D256 224,256,288 26. 6 8. 6 384 352,384,416 26. 5 8. 6 [256;512] 256,384,512 24. 8 7. 5 E256 224,256,288 26. 9 8. 7 384 352,384,416 26. 7 8. 6 [256;512] 256,384,512 24. 8 7. 5 4. 3 M ULTI-CROP EVALUATION In Table 5 we compare dense Conv Net evaluation with mult-cro p evaluation (see Sect. 3. 2 for de-tails). We also assess the complementarityof thetwo evalua tiontechniquesbyaveragingtheirsoft-max outputs. As can be seen, using multiple crops performs sl ightly better than dense evaluation, andthe two approachesareindeedcomplementary,astheir co mbinationoutperformseach ofthem. As noted above, we hypothesize that this is due to a different treatment of convolution boundary conditions. Table 5:Conv Netevaluationtechniques comparison. Inall experimentsthe trainingscale Swas sampledfrom [256;512],andthreetest scales Qwereconsidered: {256,384,512}. Conv Net config. (Table 1) Evaluationmethod top-1 val. error(%) top-5 val. error (%) Ddense 24. 8 7. 5 multi-crop 24. 6 7. 5 multi-crop &dense 24. 4 7. 2 Edense 24. 8 7. 5 multi-crop 24. 6 7. 4 multi-crop &dense 24. 4 7. 1 4. 4 C ONVNETFUSION Upuntilnow,weevaluatedtheperformanceofindividual Con v Netmodels. Inthispartoftheexper-iments,wecombinetheoutputsofseveralmodelsbyaveragin gtheirsoft-maxclassposteriors. This improvesthe performancedueto complementarityof the mode ls, andwas used in the top ILSVRC submissions in 2012 (Krizhevskyet al., 2012) and 2013 (Zeil er&Fergus, 2013; Sermanetet al., 2014). The results are shown in Table 6. By the time of ILSVRC submiss ion we had only trained the single-scale networks, as well as a multi-scale model D (by fi ne-tuning only the fully-connected layers rather than all layers). The resulting ensemble of 7 n etworks has 7. 3%ILSVRC test error. After the submission, we considered an ensemble of only two b est-performing multi-scale models (configurations D and E), which reduced the test error to 7. 0%using dense evaluation and 6. 8% using combined dense and multi-crop evaluation. For refere nce, our best-performingsingle model achieves7. 1%error(model E, Table5). 4. 5 C OMPARISON WITH THE STATE OF THE ART Finally, we compare our results with the state of the art in Ta ble 7. In the classification task of ILSVRC-2014 challenge (Russakovskyet al., 2014), our “VGG ” team secured the 2nd place with 7
VGG-19 layer image recognition model.pdf
Publishedasa conferencepaperat ICLR2015 Table6:Multiple Conv Netfusion results. Combined Conv Net models Error top-1 val top-5val top-5test ILSVRCsubmission (D/256/224,256,288), (D/384/352,384,416), (D/[256;512 ]/256,384,512) (C/256/224,256,288), (C/384/352,384,416) (E/256/224,256,288), (E/384/352,384,416)24. 7 7. 5 7. 3 post-submission (D/[256;512]/256,384,512), (E/[256;512]/256,384,512),dense eval. 24. 0 7. 1 7. 0 (D/[256;512]/256,384,512), (E/[256;512]/256,384,512),multi-crop 23. 9 7. 2-(D/[256;512]/256,384,512), (E/[256;512]/256,384,512),multi-crop &dense eval. 23. 7 6. 8 6. 8 7. 3%test errorusinganensembleof7 models. Afterthesubmissio n,we decreasedtheerrorrateto 6. 8%usinganensembleof2models. As can be seen from Table 7, our very deep Conv Netssignificant ly outperformthe previousgener-ation of models, which achieved the best results in the ILSVR C-2012 and ILSVRC-2013 competi-tions. Our result is also competitivewith respect to the cla ssification task winner(Goog Le Netwith 6. 7%error) and substantially outperforms the ILSVRC-2013 winn ing submission Clarifai, which achieved 11. 2%with outside training data and 11. 7%without it. This is remarkable, considering that our best result is achievedby combiningjust two models-significantly less than used in most ILSVRC submissions. In terms of the single-net performance, our architecture achieves the best result (7. 0%test error), outperforming a single Goog Le Net by 0. 9%. Notably, we did not depart from the classical Conv Net architecture of Le Cunetal. (198 9), but improved it by substantially increasingthedepth. Table 7:Comparison with the state of the art in ILSVRC classification. Our methodis denoted as“VGG”. Onlytheresultsobtainedwithoutoutsidetrainin gdataarereported. Method top-1 val. error(%) top-5val. error (%) top-5testerror (%) VGG(2nets, multi-crop& dense eval. ) 23. 7 6. 8 6. 8 VGG(1net, multi-crop& dense eval. ) 24. 4 7. 1 7. 0 VGG(ILSVRCsubmission, 7nets, dense eval. ) 24. 7 7. 5 7. 3 Goog Le Net (Szegedy et al., 2014) (1net)-7. 9 Goog Le Net (Szegedy et al., 2014) (7nets)-6. 7 MSRA(He et al., 2014) (11nets)--8. 1 MSRA(He et al., 2014) (1net) 27. 9 9. 1 9. 1 Clarifai(Russakovsky et al., 2014) (multiplenets)--11. 7 Clarifai(Russakovsky et al., 2014) (1net)--12. 5 Zeiler& Fergus (Zeiler&Fergus, 2013) (6nets) 36. 0 14. 7 14. 8 Zeiler& Fergus (Zeiler&Fergus, 2013) (1net) 37. 5 16. 0 16. 1 Over Feat (Sermanetet al.,2014) (7nets) 34. 0 13. 2 13. 6 Over Feat (Sermanetet al.,2014) (1net) 35. 7 14. 2-Krizhevsky et al. (Krizhevsky et al., 2012) (5nets) 38. 1 16. 4 16. 4 Krizhevsky et al. (Krizhevsky et al., 2012) (1net) 40. 7 18. 2-5 CONCLUSION In this work we evaluated very deep convolutional networks ( up to 19 weight layers) for large-scale image classification. It was demonstrated that the rep resentation depth is beneficial for the classificationaccuracy,andthatstate-of-the-artperfor manceonthe Image Netchallengedatasetcan beachievedusingaconventional Conv Netarchitecture(Le C unet al.,1989;Krizhevskyet al.,2012) withsubstantiallyincreaseddepth. Intheappendix,weals oshowthatourmodelsgeneralisewellto a wide range of tasks and datasets, matchingor outperformin gmore complexrecognitionpipelines builtaroundlessdeepimagerepresentations. Ourresultsy etagainconfirmtheimportanceof depth invisualrepresentations. ACKNOWLEDGEMENTS Thisworkwassupportedby ERCgrant Vis Recno. 228180. Wegra tefullyacknowledgethesupport of NVIDIACorporationwiththedonationofthe GPUsusedfort hisresearch. 8
VGG-19 layer image recognition model.pdf
REFERENCES

Bell, S., Upchurch, P., Snavely, N., and Bala, K. Material recognition in the wild with the materials in context database. CoRR, abs/1412.0623, 2014.
Chatfield, K., Simonyan, K., Vedaldi, A., and Zisserman, A. Return of the devil in the details: Delving deep into convolutional nets. In Proc. BMVC., 2014.
Cimpoi, M., Maji, S., and Vedaldi, A. Deep convolutional filter banks for texture recognition and segmentation. CoRR, abs/1411.6836, 2014.
Ciresan, D. C., Meier, U., Masci, J., Gambardella, L. M., and Schmidhuber, J. Flexible, high performance convolutional neural networks for image classification. In IJCAI, pp. 1237-1242, 2011.
Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., Ranzato, M., Senior, A., Tucker, P., Yang, K., Le, Q. V., and Ng, A. Y. Large scale distributed deep networks. In NIPS, pp. 1232-1240, 2012.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proc. CVPR, 2009.
Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., and Darrell, T. DeCAF: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531, 2013.
Everingham, M., Eslami, S. M. A., Van Gool, L., Williams, C., Winn, J., and Zisserman, A. The Pascal visual object classes challenge: A retrospective. IJCV, 111(1):98-136, 2015.
Fei-Fei, L., Fergus, R., and Perona, P. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. In IEEE CVPR Workshop of Generative Model Based Vision, 2004.
Girshick, R. B., Donahue, J., Darrell, T., and Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2524v5, 2014. Published in Proc. CVPR, 2014.
Gkioxari, G., Girshick, R., and Malik, J. Actions and attributes from wholes and parts. CoRR, abs/1412.2604, 2014.
Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proc. AISTATS, volume 9, pp. 249-256, 2010.
Goodfellow, I. J., Bulatov, Y., Ibarz, J., Arnoud, S., and Shet, V. Multi-digit number recognition from street view imagery using deep convolutional neural networks. In Proc. ICLR, 2014.
Griffin, G., Holub, A., and Perona, P. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007.
He, K., Zhang, X., Ren, S., and Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. CoRR, abs/1406.4729v2, 2014.
Hoai, M. Regularized max pooling for image categorization. In Proc. BMVC., 2014.
Howard, A. G. Some improvements on deep convolutional neural network based image classification. In Proc. ICLR, 2014.
Jia, Y. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/, 2013.
Karpathy, A. and Fei-Fei, L. Deep visual-semantic alignments for generating image descriptions. CoRR, abs/1412.2306, 2014.
Kiros, R., Salakhutdinov, R., and Zemel, R. S. Unifying visual-semantic embeddings with multimodal neural language models. CoRR, abs/1411.2539, 2014.
Krizhevsky, A. One weird trick for parallelizing convolutional neural networks. CoRR, abs/1404.5997, 2014.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1106-1114, 2012.
LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989.
Lin, M., Chen, Q., and Yan, S. Network in network. In Proc. ICLR, 2014.
Long, J., Shelhamer, E., and Darrell, T. Fully convolutional networks for semantic segmentation. CoRR, abs/1411.4038, 2014.
Oquab, M., Bottou, L., Laptev, I., and Sivic, J. Learning and transferring mid-level image representations using convolutional neural networks. In Proc. CVPR, 2014.
Perronnin, F., Sánchez, J., and Mensink, T. Improving the Fisher kernel for large-scale image classification. In Proc. ECCV, 2010.
Razavian, A., Azizpour, H., Sullivan, J., and Carlsson, S. CNN features off-the-shelf: An astounding baseline for recognition. CoRR, abs/1403.6382, 2014.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. ImageNet large scale visual recognition challenge. CoRR, abs/1409.0575, 2014.
Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., and LeCun, Y. OverFeat: Integrated recognition, localization and detection using convolutional networks. In Proc. ICLR, 2014.
Simonyan, K. and Zisserman, A. Two-stream convolutional networks for action recognition in videos. CoRR, abs/1406.2199, 2014. Published in Proc. NIPS, 2014.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.
Wei, Y., Xia, W., Huang, J., Ni, B., Dong, J., Zhao, Y., and Yan, S. CNN: Single-label to multi-label. CoRR, abs/1406.5726, 2014.
Zeiler, M. D. and Fergus, R. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901, 2013. Published in Proc. ECCV, 2014.

A LOCALISATION

In the main body of the paper we have considered the classification task of the ILSVRC challenge, and performed a thorough evaluation of ConvNet architectures of different depth. In this section, we turn to the localisation task of the challenge, which we won in 2014 with 25.3% error. It can be seen as a special case of object detection, where a single object bounding box should be predicted for each of the top-5 classes, irrespective of the actual number of objects of the class. For this we adopt the approach of Sermanet et al. (2014), the winners of the ILSVRC-2013 localisation challenge, with a few modifications. Our method is described in Sect. A.1 and evaluated in Sect. A.2.

A.1 LOCALISATION CONVNET

To perform object localisation, we use a very deep ConvNet where the last fully connected layer predicts the bounding box location instead of the class scores. A bounding box is represented by a 4-D vector storing its center coordinates, width, and height. There is a choice of whether the bounding box prediction is shared across all classes (single-class regression, SCR (Sermanet et al., 2014)) or is class-specific (per-class regression, PCR). In the former case, the last layer is 4-D, while in the latter it is 4000-D (since there are 1000 classes in the dataset). Apart from the last bounding box prediction layer, we use the ConvNet architecture D (Table 1), which contains 16 weight layers and was found to be the best-performing in the classification task (Sect. 4).

Training. Training of localisation ConvNets is similar to that of the classification ConvNets (Sect. 3.1). The main difference is that we replace the logistic regression objective with a Euclidean loss, which penalises the deviation of the predicted bounding box parameters from the ground truth. We trained two localisation models, each on a single scale: S = 256 and S = 384 (due to time constraints, we did not use training scale jittering for our ILSVRC-2014 submission). Training was initialised with the corresponding classification models (trained on the same scales), and the initial learning rate was set to 10^-3. We explored both fine-tuning all layers and fine-tuning only the first two fully-connected layers, as done in (Sermanet et al., 2014). The last fully-connected layer was initialised randomly and trained from scratch.
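To make the head modification concrete, here is a minimal sketch in Keras (not the framework used for the paper's models): it reuses the publicly available ImageNet VGG16 weights as a stand-in for configuration D, replaces the 1000-way classifier with a 4-D (SCR) or 4000-D (PCR) box-regression layer, and substitutes Keras' squared-error loss for the Euclidean loss.

    import tensorflow as tf

    def localisation_net(per_class=True, n_classes=1000):
        # ImageNet VGG16 stands in for configuration D (16 weight layers).
        base = tf.keras.applications.VGG16(weights="imagenet", include_top=True)
        penultimate = base.layers[-2].output          # 4096-D fc activations
        out_dim = 4 * n_classes if per_class else 4   # PCR: 4000-D, SCR: 4-D
        boxes = tf.keras.layers.Dense(out_dim, name="bbox")(penultimate)
        model = tf.keras.Model(base.input, boxes)
        # Optionally freeze the convolutional layers, which corresponds to the
        # "fine-tune only the fully-connected layers" setting:
        # for layer in base.layers[:-3]: layer.trainable = False
        model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
                      loss="mse")   # squared error in place of the Euclidean loss
        return model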
Testing. We consider two testing protocols. The first is used for comparing different network modifications on the validation set, and considers only the bounding box prediction for the ground truth class (to factor out the classification errors). The bounding box is obtained by applying the network only to the central crop of the image.

The second, fully-fledged, testing procedure is based on the dense application of the localisation ConvNet to the whole image, similarly to the classification task (Sect. 3.2). The difference is that instead of the class score map, the output of the last fully-connected layer is a set of bounding box predictions. To come up with the final prediction, we utilise the greedy merging procedure of Sermanet et al. (2014), which first merges spatially close predictions (by averaging their coordinates), and then rates them based on the class scores obtained from the classification ConvNet. When several localisation ConvNets are used, we first take the union of their sets of bounding box predictions, and then run the merging procedure on the union. We did not use the multiple pooling offsets technique of Sermanet et al. (2014), which increases the spatial resolution of the bounding box predictions and can further improve the results.
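The merging step can be pictured with the NumPy sketch below. It is a simplified reading of the published procedure rather than a reimplementation of it: the centre-distance threshold and the averaging of scores are illustrative choices.

    import numpy as np

    def greedy_merge(boxes, scores, centre_thresh=0.1):
        """boxes: (n, 4) array of (cx, cy, w, h) in normalised image coordinates;
        scores: (n,) class scores from the classification ConvNet."""
        boxes = np.asarray(boxes, dtype=float)
        scores = np.asarray(scores, dtype=float)
        merged, used = [], np.zeros(len(boxes), dtype=bool)
        for i in np.argsort(-scores):                  # strongest prediction first
            if used[i]:
                continue
            # Group still-unmerged predictions whose centres lie close to box i.
            close = ~used & (np.linalg.norm(boxes[:, :2] - boxes[i, :2], axis=1)
                             < centre_thresh)
            merged.append((boxes[close].mean(axis=0), scores[close].mean()))
            used |= close
        # Rate the merged boxes by their (averaged) class scores.
        return sorted(merged, key=lambda pair: -pair[1])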
A.2 LOCALISATION EXPERIMENTS

In this section we first determine the best-performing localisation setting (using the first test protocol), and then evaluate it in a fully-fledged scenario (the second protocol). The localisation error is measured according to the ILSVRC criterion (Russakovsky et al., 2014), i.e. the bounding box prediction is deemed correct if its intersection over union ratio with the ground-truth bounding box is above 0.5.

Settings comparison. As can be seen from Table 8, per-class regression (PCR) outperforms the class-agnostic single-class regression (SCR), which differs from the findings of Sermanet et al. (2014), where PCR was outperformed by SCR. We also note that fine-tuning all layers for the localisation task leads to noticeably better results than fine-tuning only the fully-connected layers (as done in (Sermanet et al., 2014)). In these experiments, the smallest image side was set to S = 384; the results with S = 256 exhibit the same behaviour and are not shown for brevity.

Table 8: Localisation error for different modifications with the simplified testing protocol: the bounding box is predicted from a single central image crop, and the ground-truth class is used. All ConvNet layers (except for the last one) have the configuration D (Table 1), while the last layer performs either single-class regression (SCR) or per-class regression (PCR).

    Fine-tuned layers    Regression type    GT-class localisation error
    1st and 2nd FC       SCR                36.4
    1st and 2nd FC       PCR                34.3
    all                  PCR                33.1

Fully-fledged evaluation. Having determined the best localisation setting (PCR, fine-tuning of all layers), we now apply it in the fully-fledged scenario, where the top-5 class labels are predicted using our best-performing classification system (Sect. 4.5), and multiple densely-computed bounding box predictions are merged using the method of Sermanet et al. (2014). As can be seen from Table 9, application of the localisation ConvNet to the whole image substantially improves the results compared to using a center crop (Table 8), despite using the top-5 predicted class labels instead of the ground truth. Similarly to the classification task (Sect. 4), testing at several scales and combining the predictions of multiple networks further improves the performance.

Table 9: Localisation error.

    Smallest image side                   Top-5 localisation error (%)
    train (S)    test (Q)                 val.     test
    256          256                      29.5     -
    384          384                      28.2     26.7
    384          352,384                  27.5     -
    fusion: 256/256 and 384/352,384       26.9     25.3

Comparison with the state of the art. We compare our best localisation result with the state of the art in Table 10. With 25.3% test error, our "VGG" team won the localisation challenge of ILSVRC-2014 (Russakovsky et al., 2014). Notably, our results are considerably better than those of the ILSVRC-2013 winner OverFeat (Sermanet et al., 2014), even though we used fewer scales and did not employ their resolution enhancement technique. We envisage that better localisation performance can be achieved if this technique is incorporated into our method. This indicates the performance advancement brought by our very deep ConvNets: we got better results with a simpler localisation method, but a more powerful representation.

Table 10: Comparison with the state of the art in ILSVRC localisation. Our method is denoted as "VGG".

    Method                                        Top-5 val. error (%)    Top-5 test error (%)
    VGG                                           26.9                    25.3
    GoogLeNet (Szegedy et al., 2014)              -                       26.7
    OverFeat (Sermanet et al., 2014)              30.0                    29.9
    Krizhevsky et al. (Krizhevsky et al., 2012)   -                       34.2

B GENERALISATION OF VERY DEEP FEATURES

In the previous sections we have discussed training and evaluation of very deep ConvNets on the ILSVRC dataset. In this section, we evaluate our ConvNets, pre-trained on ILSVRC, as feature
extractors on other, smaller, datasets, where training large models from scratch is not feasible due to over-fitting. Recently, there has been a lot of interest in such a use case (Zeiler & Fergus, 2013; Donahue et al., 2013; Razavian et al., 2014; Chatfield et al., 2014), as it turns out that deep image representations, learnt on ILSVRC, generalise well to other datasets, where they have outperformed hand-crafted representations by a large margin. Following that line of work, we investigate if our models lead to better performance than the more shallow models utilised in the state-of-the-art methods. In this evaluation, we consider two models with the best classification performance on ILSVRC (Sect. 4): configurations "Net-D" and "Net-E" (which we made publicly available).

To utilise the ConvNets, pre-trained on ILSVRC, for image classification on other datasets, we remove the last fully-connected layer (which performs 1000-way ILSVRC classification), and use the 4096-D activations of the penultimate layer as image features, which are aggregated across multiple locations and scales. The resulting image descriptor is L2-normalised and combined with a linear SVM classifier, trained on the target dataset. For simplicity, pre-trained ConvNet weights are kept fixed (no fine-tuning is performed).

Aggregation of features is carried out in a similar manner to our ILSVRC evaluation procedure (Sect. 3.2). Namely, an image is first rescaled so that its smallest side equals Q, and then the network is densely applied over the image plane (which is possible when all weight layers are treated as convolutional). We then perform global average pooling on the resulting feature map, which produces a 4096-D image descriptor. The descriptor is then averaged with the descriptor of a horizontally flipped image. As was shown in Sect. 4.2, evaluation over multiple scales is beneficial, so we extract features over several scales Q. The resulting multi-scale features can be either stacked or pooled across scales. Stacking allows a subsequent classifier to learn how to optimally combine image statistics over a range of scales; this, however, comes at the cost of increased descriptor dimensionality. We return to the discussion of this design choice in the experiments below. We also assess late fusion of features, computed using two networks, which is performed by stacking their respective image descriptors.
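The feature-extraction pipeline just described can be summarised by the following sketch, with scikit-learn's LinearSVC standing in for the linear SVM; extract_4096d is a hypothetical helper for the dense ConvNet evaluation, which is not reproduced here.

    import numpy as np
    from sklearn.svm import LinearSVC

    def describe(image, scales, extract_4096d):
        """One descriptor per image from dense ConvNet features.

        extract_4096d(image, scale) is a hypothetical helper: it should return
        the globally average-pooled 4096-D penultimate-layer activations for the
        image rescaled so that its smallest side equals `scale`, already averaged
        with those of its horizontal flip, as described in the text."""
        per_scale = [extract_4096d(image, s) for s in scales]
        per_scale = [v / np.linalg.norm(v) for v in per_scale]  # L2-normalise
        return np.mean(per_scale, axis=0)   # averaging across scales; stacking
                                            # is the alternative discussed below

    def fit_classifier(images, labels, extract_4096d, scales=(256, 384, 512)):
        """Linear SVM on fixed pre-trained ConvNet descriptors (no fine-tuning)."""
        X = np.stack([describe(im, scales, extract_4096d) for im in images])
        return LinearSVC().fit(X, labels)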
Image Classification on VOC-2007 and VOC-2012. We begin with the evaluation on the image classification task of the PASCAL VOC-2007 and VOC-2012 benchmarks (Everingham et al., 2015). These datasets contain 10K and 22.5K images respectively, and each image is annotated with one or several labels, corresponding to 20 object categories. The VOC organisers provide a pre-defined split into training, validation, and test data (the test data for VOC-2012 is not publicly available; instead, an official evaluation server is provided). Recognition performance is measured using mean average precision (mAP) across classes.

Notably, by examining the performance on the validation sets of VOC-2007 and VOC-2012, we found that aggregating image descriptors, computed at multiple scales, by averaging performs
similarly to the aggregation by stacking. We hypothesize that this is due to the fact that in the VOC dataset the objects appear over a variety of scales, so there is no particular scale-specific semantics which a classifier could exploit. Since averaging has the benefit of not inflating the descriptor dimensionality, we were able to aggregate image descriptors over a wide range of scales: Q in {256, 384, 512, 640, 768}. It is worth noting, though, that the improvement over a smaller range of {256, 384, 512} was rather marginal (0.3%).

The test set performance is reported and compared with other approaches in Table 11. Our networks "Net-D" and "Net-E" exhibit identical performance on the VOC datasets, and their combination slightly improves the results. Our methods set the new state of the art across image representations pre-trained on the ILSVRC dataset, outperforming the previous best result of Chatfield et al. (2014) by more than 6%. It should be noted that the method of Wei et al. (2014), which achieves 1% better mAP on VOC-2012, is pre-trained on an extended 2000-class ILSVRC dataset, which includes an additional 1000 categories semantically close to those in the VOC datasets. It also benefits from fusion with an object detection-assisted classification pipeline.

Table 11: Comparison with the state of the art in image classification on VOC-2007, VOC-2012, Caltech-101, and Caltech-256. Our models are denoted as "VGG". Results marked with * were achieved using ConvNets pre-trained on the extended ILSVRC dataset (2000 classes).

    Method                                      VOC-2007      VOC-2012      Caltech-101            Caltech-256
                                                (mean AP)     (mean AP)     (mean class recall)    (mean class recall)
    Zeiler & Fergus (Zeiler & Fergus, 2013)     -             79.0          86.5 +/- 0.5           74.2 +/- 0.3
    Chatfield et al. (Chatfield et al., 2014)   82.4          83.2          88.4 +/- 0.6           77.6 +/- 0.1
    He et al. (He et al., 2014)                 82.4          -             93.4 +/- 0.5           -
    Wei et al. (Wei et al., 2014)               81.5 (85.2*)  81.7 (90.3*)  -                      -
    VGG Net-D (16 layers)                       89.3          89.0          91.8 +/- 1.0           85.0 +/- 0.2
    VGG Net-E (19 layers)                       89.3          89.0          92.3 +/- 0.5           85.1 +/- 0.3
    VGG Net-D & Net-E                           89.7          89.3          92.7 +/- 0.5           86.2 +/- 0.3

Image Classification on Caltech-101 and Caltech-256. In this section we evaluate very deep features on the Caltech-101 (Fei-Fei et al., 2004) and Caltech-256 (Griffin et al., 2007) image classification benchmarks. Caltech-101 contains 9K images labelled into 102 classes (101 object categories and a background class), while Caltech-256 is larger with 31K images and 257 classes. A standard evaluation protocol on these datasets is to generate several random splits into training and test data and report the average recognition performance across the splits, which is measured by the mean class recall (which compensates for a different number of test images per class). Following Chatfield et al. (2014); Zeiler & Fergus (2013); He et al. (2014), on Caltech-101 we generated 3 random splits into training and test data, so that each split contains 30 training images per class, and up to 50 test images per class. On Caltech-256 we also generated 3 splits, each of which contains 60 training images per class (and the rest is used for testing). In each split, 20% of the training images were used as a validation set for hyper-parameter selection.

We found that unlike VOC, on the Caltech datasets the stacking of descriptors, computed over multiple scales, performs better than averaging or max-pooling. This can be explained by the fact that in Caltech images objects typically occupy the whole image, so multi-scale image features are semantically different (capturing the whole object vs. object parts), and stacking allows a classifier to exploit such scale-specific representations. We used three scales Q in {256, 384, 512}.

Our models are compared to each other and the state of the art in Table 11. As can be seen, the deeper 19-layer Net-E performs better than the 16-layer Net-D, and their combination further improves the performance. On Caltech-101, our representations are competitive with the approach of He et al. (2014), which, however, performs significantly worse than our nets on VOC-2007. On Caltech-256, our features outperform the state of the art (Chatfield et al., 2014) by a large margin (8.6%).
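Isolating the aggregation step makes this design choice explicit. The small utility below is a sketch only; per_scale is assumed to hold one L2-normalised 4096-D descriptor per scale Q.

    import numpy as np

    def aggregate_scales(per_scale, mode="average"):
        """'stack' preserves scale-specific statistics (better on Caltech, where
        objects fill the image); 'average' or 'max' keep the descriptor at
        4096-D (sufficient on VOC, where object scale varies freely)."""
        per_scale = np.stack(per_scale)
        if mode == "stack":
            return per_scale.reshape(-1)
        if mode == "max":
            return per_scale.max(axis=0)
        return per_scale.mean(axis=0)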
Action Classification on VOC-2012. We also evaluated our best-performing image representation (the stacking of Net-D and Net-E features) on the PASCAL VOC-2012 action classification task (Everingham et al., 2015), which consists in predicting an action class from a single image, given a bounding box of the person performing the action. The dataset contains 4.6K training images, labelled into 11 classes. Similarly to the VOC-2012 object classification task, the performance is measured using the mAP. We considered two training settings: (i) computing the ConvNet features on the whole image and ignoring the provided bounding box; (ii) computing the features on the whole image and on the provided bounding box, and stacking them to obtain the final representation. The results are compared to other approaches in Table 12.

Table 12: Comparison with the state of the art in single-image action classification on VOC-2012. Our models are denoted as "VGG". Results marked with * were achieved using ConvNets pre-trained on the extended ILSVRC dataset (1512 classes).

    Method                                       VOC-2012 (mean AP)
    (Oquab et al., 2014)                         70.2*
    (Gkioxari et al., 2014)                      73.6
    (Hoai, 2014)                                 76.3
    VGG Net-D & Net-E, image-only                79.2
    VGG Net-D & Net-E, image and bounding box    84.0

Our representation achieves the state of the art on the VOC action classification task even without using the provided bounding boxes, and the results are further improved when using both images and bounding boxes. Unlike other approaches, we did not incorporate any task-specific heuristics, but relied on the representational power of very deep convolutional features.

Other Recognition Tasks. Since the public release of our models, they have been actively used by the research community for a wide range of image recognition tasks, consistently outperforming more shallow representations. For instance, Girshick et al. (2014) achieve the state-of-the-art object detection results by replacing the ConvNet of Krizhevsky et al. (2012) with our 16-layer model. Similar gains over the more shallow architecture of Krizhevsky et al. (2012) have been observed in semantic segmentation (Long et al., 2014), image caption generation (Kiros et al., 2014; Karpathy & Fei-Fei, 2014), and texture and material recognition (Cimpoi et al., 2014; Bell et al., 2014).
C PAPER REVISIONS

Here we present the list of major paper revisions, outlining the substantial changes for the convenience of the reader.

v1 Initial version. Presents the experiments carried out before the ILSVRC submission.
v2 Adds post-submission ILSVRC experiments with training set augmentation using scale jittering, which improves the performance.
v3 Adds generalisation experiments (Appendix B) on the PASCAL VOC and Caltech image classification datasets. The models used for these experiments are publicly available.
v4 The paper is converted to ICLR-2015 submission format. Also adds experiments with multiple crops for classification.
v6 Camera-ready ICLR-2015 conference paper. Adds a comparison of net B with a shallow net and the results on the PASCAL VOC action classification benchmark.
ARTICLE

Democratized image analytics by visual programming through integration of deep models and small-scale machine learning

Primož Godec 1,7, Matjaž Pančur 1,7, Nejc Ilenič 1,7, Andrej Čopar 1, Martin Stražar 1, Aleš Erjavec 1, Ajda Pretnar 1, Janez Demšar 1, Anže Starič 1, Marko Toplak 1, Lan Žagar 1, Jan Hartman 1, Hamilton Wang 2, Riccardo Bellazzi 3, Uroš Petrovič 4,5, Silvia Garagna 6, Maurizio Zuccotti 6, Dongsu Park 2, Gad Shaulsky 2 & Blaž Zupan 1,2*

Analysis of biomedical images requires computational expertize that is uncommon among biomedical scientists. Deep learning approaches for image analysis provide an opportunity to develop user-friendly tools for exploratory data analysis. Here, we use the visual programming toolbox Orange (http://orange.biolab.si) to simplify image analysis by integrating deep-learning embedding, machine learning procedures, and data visualization. Orange supports the construction of data analysis workflows by assembling components for data pre-processing, visualization, and modeling. We equipped Orange with components that use pre-trained deep convolutional networks to profile images with vectors of features. These vectors are used in image clustering and classification in a framework that enables mining of image sets for both novel and experienced users. We demonstrate the utility of the tool in image analysis of progenitor cells in mouse bone healing, identification of developmental competence in mouse oocytes, subcellular protein localization in yeast, and developmental morphology of social amoebae.

https://doi.org/10.1038/s41467-019-12397-x

1 Faculty of Computer and Information Science, University of Ljubljana, 1000 Ljubljana, Slovenia. 2 Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, TX 77030, USA. 3 Faculty of Engineering, University of Pavia, 27100 Pavia, Italy. 4 Biotechnical Faculty, University of Ljubljana, 1000 Ljubljana, Slovenia. 5 Department of Molecular and Biomedical Sciences, Jožef Stefan Institute, 1000 Ljubljana, Slovenia. 6 Department of Biology and Biotechnology, University of Pavia, 27100 Pavia, Italy. 7 These authors contributed equally: Primož Godec, Matjaž Pančur, Nejc Ilenič. *email: blaz.zupan@fri.uni-lj.si
Deep learning [1] has revolutionized the field of biomedical image analysis. Conventional approaches have used problem-specific algorithms to describe images with manually crafted features, such as cell morphology, count, intensity, and texture. Feature learning with deep convolutional neural networks is implicit, and training the network usually focuses on particular tasks, such as breast cancer detection in mammography [2], subcellular protein localization [3], or plant disease detection [4]. Training a deep network usually requires a large number of images, which limits its utility. For example, the classifier for plant disease detection by Mohanty et al. [4] was trained on 54,306 images of diseased and healthy plants, and the yeast protein localization model by Kraus et al. [3] was inferred from 22,000 annotated images, but not everyone who could benefit from image analysis has so many well-annotated images.

Machine learning on images does not always need training on a closely related set of images. Just like our visual cortex can adapt to the analysis of many scenes and images, a deep network pre-trained on a sufficiently large number of diverse images may infer useful features from a broad range of new image sets. This idea is based on transfer learning [5], a machine-learning technique that stores the knowledge obtained from one problem in a trained model and applies it to another problem, which may be quite different. A typical deep network for image analysis [6,7] contains convolutional layers that identify structural features of the images, followed by fully connected layers that combine the features and find interactions between them. When applied to classification, the network nodes of the penultimate layer contain information about the most informative combination of image features, and the final layer includes one node for each image class that reports on class probability. For transfer learning, we can repurpose a deep network trained on one set of images through retraining on another collection of images. In the purest form of knowledge transfer, we need to retrain only the last layer of the network: images from the new collection are represented with feature vectors inferred by the existing deep model, and their relations with the image class are inferred by applying a machine-learning method such as logistic regression.

A successful example of such deep network repurposing was, for instance, proposed for diagnosis in dermatology [8], where the authors started with an existing deep network and re-trained it to classify skin cancer. As a starting model that embeds images into feature space, the authors used Google's Inception V3 [6], a convolutional 48-layered neural network that was trained on 1.2 million images from the ImageNet repository. ImageNet includes images depicting real life objects, such as vehicles (locomotive, amphibian, minivan), tools (shovel, screwdriver), and animals (tick, tarantula, bee), most of which are not similar to images from the field of dermatology, or even to the molecular biology research images that we later use in the cases that demonstrate our proposed tool. To classify skin cancer, a part of Inception V3 was re-trained over 120,000 clinical images that included 3374 dermoscopy images. The resulting system achieved dermatologist-level classification accuracy.
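As a concrete illustration of this "retrain only the last layer" recipe, the sketch below embeds images with a pre-trained Inception V3 through the Keras applications API and fits a logistic regression on the resulting 2048-dimensional vectors. It is a minimal stand-alone example, not the code behind any of the cited systems, and the image paths and labels are placeholders.

    import numpy as np
    from tensorflow.keras.applications import InceptionV3
    from tensorflow.keras.applications.inception_v3 import preprocess_input
    from tensorflow.keras.preprocessing import image as keras_image
    from sklearn.linear_model import LogisticRegression

    # Pre-trained Inception V3 with its 1000-way ImageNet classifier removed;
    # global average pooling turns each image into a 2048-dimensional vector.
    embedder = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

    def embed(paths):
        """Return an (n_images, 2048) matrix of deep features for image files."""
        batch = np.stack([
            keras_image.img_to_array(keras_image.load_img(p, target_size=(299, 299)))
            for p in paths])
        return embedder.predict(preprocess_input(batch))

    def retrain_last_layer(train_paths, train_labels):
        """'Purest form' of transfer learning: only a new classifier is fitted."""
        return LogisticRegression(max_iter=1000).fit(embed(train_paths), train_labels)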
A similar approach was proposed to classify in situ hybridization images of the fruit fly [9], where an existing deep model trained on ImageNet images was repurposed for classification of over 23,000 fruit fly images at different stages of embryogenesis. Transfer learning has also been successfully used in other fields, such as the analysis of images from an electron microscope [10], X-ray computed tomography [11], and pathology [12]. Just like we, humans, can adapt our visual recognition to any new image classification task through additional training, the above examples show that artificial intelligence systems can adapt trained models to new tasks. Transfer learning was first proposed in the late 1990s [13,14], but it is being increasingly adopted in image analytics with the utility of deep models. The potential of transfer learning is well recognized in the biomedical sciences [15].

Here, we propose a visual programming approach to image analytics, where the users can combine image embedding by pre-trained deep models with clustering and classification. Our tool supports the execution of essential data mining functions on images in an easy-to-use framework, where common data analysis tasks can be conceived and executed within minutes. The proposed tool also makes image analytics accessible to anyone who can spare an hour for training, or even only minutes for watching the educational videos that we provide together with the tool. While the proposed framework is general and can consider any type or class of images, we focus here on biomedicine and demonstrate the utility of the tool for analysis of images from molecular and cell biology.

Results

Image analytics by visual programming. We have designed a toolbox for image analytics that features visual programming. The toolbox supports users in the assembly of analysis workflows that are comprised of components that load the images, embed them into a vector space, and analyze these image profiles to infer image clusters or classifications. The toolbox is based on Orange data mining [16,17], a general-purpose data analysis framework that already includes components for clustering, classification, and interactive data and model visualizations. The image-specific extension described in this paper is packaged in Orange's add-on for image analytics. Both Orange and the proposed extension are provided as open source and are freely available through Orange's home page at http://orange.biolab.si.

Data analysis in Orange is implemented through workflows. A workflow (see Fig. 1 for an example) consists of widgets: components that can process, model, or visualize the data. Widgets accept data as their input and display or send results as their output. Data analysis workflows in Orange are defined by the selection of widgets and the connections between them. For instance, the workflow in Fig. 1 loads a set of images from the chosen directory, represents images with feature vectors through embedding, estimates the distances between these vectors and hence between the images, and uses the computed distances in clustering and in visual depictions of image similarities in the multi-dimensional scaling plot. Users can monitor the execution of every step of the Orange workflow and inspect every intermediate result. For instance, they can check the images that have been loaded (Image Viewer connected to Load Images in the workflow from Fig. 1), visualize the result of hierarchical clustering in a dendrogram, and even visualize the selection of images from a specific branch of the dendrogram or from a section of points in the multi-dimensional scaling plot.
The users can also inspect the raw data coming from the embedders, or the feature profiles of the selected images in the dendrogram (achievable by connecting the Data Table widget to any of the widgets in the workflow; for brevity not shown in Fig. 1). The ability to check and inspect the results at every step of the analysis pipeline helps the users in gaining confidence in the results and familiarity with the analysis procedures. It also provides a tool for educators to explain different analysis steps to potential trainees.

Interactive data visualizations. Widgets in Orange are interactive, and they can immediately transmit the results upon any change in the widget parameters or any change in the selection of elements in their graphical presentations. For instance, the hierarchical clustering widget from Fig. 1 allows users to select a branch of the constructed dendrogram. Upon selection, the Hierarchical Clustering widget outputs the data corresponding to
the selected branch, which is fed into the image visualization widget (Image Viewer (2) in Fig. 1). Any change in the selection of branches in the dendrogram propagates through the workflow and updates the content of any of the downstream widgets, instructing, for instance, Image Viewer (2) to immediately display the images that are pertinent to the user's selection in the dendrogram.

Case studies. We illustrate the visual programming approach and the interactive visualizations through the analysis of highly diverse multi-color image sets that include stem/progenitor cells in bone healing at various mouse ages (3-12 months), identification of developmentally competent or incompetent mouse oocytes, subcellular protein localization in yeast, and developmental morphology of the social amoeba Dictyostelium discoideum (Fig. 2). The phenotypes reflected in the images were assessed by domain experts. For instance, the oocytes were classified as developmentally competent whenever a ring of Hoechst-positive chromatin was observed surrounding the nucleolus, or as developmentally incompetent when this ring was not evident and the chromatin appeared more diffuse [18]. The phenotypes of the developing social amoebae were determined by visual inspection based on morphological features, such as the presence of cell aggregates or streams, according to terms from the Dictyostelium phenotype ontology in dictyBase (http://dictybase.org). Subcellular localization of proteins in budding yeast was reported by curators of the YPL+ database (http://yplp.uni-graz.at). We considered the possibility that manual curation and classification in biomedicine might contain errors, so we used data mining methods that can handle noise and inconsistencies.

Typical tasks in biomedical data mining include clustering, data projection (unsupervised data mining), and development of prediction models (supervised data mining). Orange provides a simplified interface to these tasks that can be executed with workflows that consist of only a few data processing and visualization elements [16]. Figure 1 shows a workflow for unsupervised analysis of in vivo imaging of skeletal stem cells during mouse bone healing. The workflow loads the images, embeds them in vector space, computes distances between the image vectors, and performs unsupervised mining through clustering and data projection. Regardless of the in vivo morphological diversity of stem cells due to sex, age, and location, we consistently found good separation of image classes during their biological responses. This finding illustrates that the general morphology and cellular responses of stem/progenitor cells remain biologically interpretable and that the method can be used for analyzing images from vastly diverse domains of biology with high fidelity.
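Outside of Orange's graphical widgets, the same unsupervised pipeline (cosine distances, Ward-linkage hierarchical clustering, and multi-dimensional scaling) can be sketched in a few lines of SciPy/scikit-learn. This is a script-level analogue under those assumptions, not the widgets' own implementation.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.manifold import MDS

    def unsupervised_workflow(embeddings, n_clusters=2):
        """embeddings: (n_images, n_features) matrix from a pre-trained network."""
        condensed = pdist(embeddings, metric="cosine")        # pairwise distances
        tree = linkage(condensed, method="ward")              # dendrogram structure
        clusters = fcluster(tree, t=n_clusters, criterion="maxclust")
        projection = MDS(n_components=2, dissimilarity="precomputed",
                         random_state=0).fit_transform(squareform(condensed))
        return clusters, projection

    # e.g. on 37 hypothetical embedded bone-healing images:
    # clusters, xy = unsupervised_workflow(np.random.rand(37, 2048))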
Fig. 1 Unsupervised analysis of bone healing images. a The data analysis workflow starts with importing 37 images from a local folder. The images can be viewed in the Image Viewer widget (not shown) and are passed to the Image Embedder, which was set to use Google's Inception V3 deep network. We computed the distances between the embedded images and presented them as a dendrogram (b) with the Hierarchical Clustering widget. The clusters corresponded well to the time (days) post injury (D7 and D14), with a few exceptions. One such exception was a branch of two images highlighted in the dendrogram (b) and shown in the Image Viewer (2) (c). Image distances were also given to the multi-dimensional scaling widget (MDS), which also exhibits separation between bone healing samples at different times, as depicted in different colors. Three representative MDS points from D7 and D14 were selected manually (data points with orange boundaries) (d) and the images are shown in the Image Viewer (1) (e). The two images highlighted in the dendrogram (b) were also passed to the MDS widget as a data subset. They are visualized as filled dots in this data projection (d) and they appear close to each other because of their similarity. This figure illustrates how a biologist may explore the data after clustering: first focusing on the misclassified samples and looking at the images, and then selecting some of the best classified images as a point of reference for further exploration.
The workflow in Fig. 1 was also used to analyze the other three sets of images (Supplementary Note 1). The meta information on the timing of bone healing, the type of mouse oocyte, protein localization in yeast, and the developmental phase in Dictyostelium helped us interpret the results but was not used in model inference in the unsupervised data mining.

It is also possible to use this information explicitly to build models that predict these phenotype classes using supervised data mining. Before using the models for prediction (see the corresponding workflow in Supplementary Note 6), we can assess their accuracy by learning from a training set and testing the models on a separate test set. The workflow in Fig. 3 performs these tasks and uses cross-validation for accuracy assessment. We used logistic regression for modeling from the image embedding matrix, and cross-validation for estimating the accuracy. Only five out of the 131 images from the oocyte phenotyping were misclassified, resulting in a surprisingly high accuracy of 96% (see Supplementary Table 1). To compare our approach to manual analysis, we presented the same 131 oocyte images to three reproductive biologists during their training period. These biologists had different levels of training experience (i.e., beginner, medium, and good) and their classification accuracy, compared to that of the expert, ranged from 78.7 to 84.5% [19], considerably lower than the accuracy of the automated approach.

Fig. 2 Example images considered in our pilot study encompass diverse fields in biomedicine. a Bone-fracture repair involves skeletal stem cells. The images in this example are from mice that were the progeny of a cross between mice carrying Mx1/Tomato (red), which is a skeletal stem/progenitor cell marker, and mice carrying αSMA-GFP or Nestin-GFP (green), which are mesenchymal cell markers. The bones were injured and images were taken in vivo at 7 days and 14 days after injury, when critical events in the early repair process occur. b Chromatin organization (Hoechst staining) in the nucleus of mouse fully grown antral oocytes. Depending on their chromatin organization, oocytes are classified as surrounded nucleolus (SN) oocytes, with a ring of heterochromatin surrounding the nucleolus, or not surrounded nucleolus (NSN) oocytes, with a more dispersed chromatin not surrounding the nucleolus. SN oocytes are developmentally competent, whereas NSN oocytes are incompetent [18]. c Protein localization in budding yeast: fluorescence micrographs of GFP-fusion proteins localized to the cytoplasm, endosome, or endoplasmic reticulum (er) as indicated. d Images of Dictyostelium discoideum cells at different developmental stages: streaming (STR), loose aggregate (LAG), and tight aggregates (TAG). Scale bars are 100 μm (a), 10 µm (b), 5 µm (c), or 1 mm (d). See Supplementary Note 1 for a detailed description of the image sets.
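The cross-validated evaluation described above (logistic regression on the image embedding matrix) corresponds roughly to the following scikit-learn sketch. The labels are assumed to be binary (e.g. 1 for SN, 0 for NSN) and the variable names are placeholders.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_validate

    def test_and_score(embeddings, labels, folds=10):
        """Rough script-level analogue of the Test and Score widget: report
        cross-validated AUC, classification accuracy (CA) and F1."""
        scores = cross_validate(LogisticRegression(max_iter=1000),
                                embeddings, labels, cv=folds,
                                scoring=("roc_auc", "accuracy", "f1"))
        return {name: float(np.mean(values))
                for name, values in scores.items() if name.startswith("test_")}

    # e.g. test_and_score(oocyte_embeddings, is_sn)  # hypothetical 131 x 2048 data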
In addition to providing high accuracy, the workflow shown in Fig. 3 helps the biologist to explore and gain understanding through the analysis of misclassifications in an interactive confusion matrix. For example, the workflow can show the correctly classified SN oocytes in the MDS plot and in the image viewer. The same workflow was applied to the other three image sets. It resulted in high cross-validated accuracy of logistic regression for the phenotype classification of bone healing (F1 = 0.95), Dictyostelium development (F1 = 0.82), and yeast protein localization (F1 = 0.95) (see the corresponding workflows in Supplementary Table 1). We compared the performance of this workflow to image classification using features from more traditional image analytics pipelines. Features extracted by a pipeline in CellProfiler [20] and by the scale-invariant feature transform [21] were less informative and yielded cross-validation accuracies that were consistently below those obtained with image embedding by deep models (see Supplementary Table 3).

Access to different image embedders. Embedding is the process of passing the image through an existing deep network to acquire its vector representation. The Image Embedder widget in Orange can accommodate different embedders (see Supplementary Note 5 for a current list) and it can be extended with specialized embedders when those become available. For example, in Google's Inception V3 [6], a convolutional 48-layered neural network that was trained on 1.2 million images from the ImageNet repository, the penultimate layer consists of 2048 nodes, and thus every input image is embedded into a 2048-element vector. The actual embedding process is either carried out on a dedicated server (the default option, which uses Inception V3) or runs locally. For local embedding, we use SqueezeNet, a deep convolutional network that along with accuracy also optimizes network complexity [22]. The advantage of SqueezeNet is that the images stay local on the user's computer and are not sent to the server, thus also accommodating privacy. We have not observed major differences in accuracy between SqueezeNet's embedding and other, more complex networks like Inception V3 across our four case studies (see Supplementary Table 1 for the accuracy study). The use of server-based embedders may benefit from the speed of embedding in the case of larger image sets, as well as provide a way to compare SqueezeNet to other, bigger and better-known trained deep models.

Fig. 3 Supervised data analysis of 131 mouse oocyte images with surrounded (SN) or not surrounded (NSN) chromatin organization. a The data analysis workflow first imports the data from the local directory where images are stored in respective subdirectories named SN and NSN. Vector-based embedding passes the data matrix to a cross-validation widget (Test and Score) that accepts a machine-learning method (logistic regression) as an additional input. The Test and Score widget displays the cross-validated accuracy (area under the ROC curve, AUC; classification accuracy, CA; and the harmonic average of precision and recall, F1 score) (b) and sends the evaluation results to the Confusion Matrix widget (c). The Confusion Matrix widget provides information on misclassification. In this example, 65 of the 69 SN oocytes were classified correctly.
Selection of this particular cell in the Confusion Matrix triggers sending these images and their descriptors further down the workflow to an Image Viewer (d) and, as a subset of data points, to the MDS widget that performs multi-dimensional scaling (e). Just like in Fig. 1, the MDS widget shows a planar projection of data points (images) and highlights, in this case, the image points selected in the Confusion Matrix. Altogether, the components of this workflow are used to quantitatively evaluate the expected performance of machine-learning models through cross-validation and to support further exploration of correctly and incorrectly classified images.
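The interactive selection that the Confusion Matrix widget supports can be approximated in a script as follows; this is only an analogue of the widget's behaviour, with hypothetical inputs.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import cross_val_predict

    def correctly_classified(embeddings, labels, positive="SN"):
        """Print the cross-validated confusion matrix and return indices of
        correctly classified `positive` images, which the widget would forward
        to Image Viewer and MDS. labels: array of class names, e.g. "SN"/"NSN";
        rows and columns of the matrix follow the sorted class names."""
        labels = np.asarray(labels)
        predicted = cross_val_predict(LogisticRegression(max_iter=1000),
                                      embeddings, labels, cv=10)
        print(confusion_matrix(labels, predicted))
        return np.flatnonzero((labels == positive) & (predicted == positive))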
While we expect that specialized embedders will become available in the near future (e.g., for molecular and cell biology and pathology), our pilot study also suggests that general embedders might perform adequately well. For example, we found that cross-validated F1 accuracies were high when comparing expert annotations to image class predictions by logistic regression trained on features inferred from four different neural networks. The accuracy was around 0.95 for all cases except for the Dictyostelium phenotyping, which resulted in an F1 score around 0.82. Surprisingly accurate models were inferred from features by a deep network that was trained on 79,433 images of paintings [23]. These findings suggest that transfer learning, even in its most straightforward form, can be applied to a broad range of image analytics problems. They also suggest that the utility of domain-specific networks may be limited such that their marginal increase in accuracy may not justify the effort associated with generating them.

Discussion

We present a tool and a free, open-source working environment for image analytics. The solution builds upon the visual programming data mining framework Orange, using the framework's ability to construct workflows, develop data models, and engage in interactive visualizations. Modern image analytics is well supported in programming environments, such as those built around Python and enhanced with libraries for deep learning such as TensorFlow, PyTorch, and Keras. While these toolboxes should be preferred by advanced users or data scientists, Orange aims to complement them by providing an accessible and interactive environment that still offers a high degree of functionality and can adapt to specific needs through visual programming and the construction of problem-specific workflows.

Orange's image analytics is intended for the analysis of smaller image sets where the starting point is image embedding using a pre-trained deep network. Currently, we support a choice of frequently used deep network models, which provide good accuracy in the four cases we have studied in the paper, even though the embedders we use were not trained on images from molecular biology (see Supplementary Information). Orange embeddings rely on the penultimate layer of those deep networks, where transfer learning is achieved through encoding of images with features from this layer and is followed by using classical machine-learning methods, such as random forests or logistic regression. Transfer learning could be achieved by partial or complete retraining of an existing network, but this task would require suitable hardware and may have an impact on the computational time. In designing a framework that avoids retraining the embedding part of the deep neural network, we aimed at a solution that is fast and can execute on a common computer or laptop. With this choice, we have potentially sacrificed accuracy that could arise from more complex solutions, but those solutions would be computationally more challenging and would require specialized hardware.

The framework we propose deals with images as a whole and relies on their embedding to support a wealth of machine-learning tasks like clustering, classification, regression, outlier detection, and dimensionality reduction. Our current implementation is somewhat limited, as it does not support other important image analytics tasks, notably image segmentation.
These tasks would fit nicely within the framework of visual programming and interactive analytics and are good candidates for inclusion in further releases of our software.

The Orange framework with its extension for image analytics provides a user-friendly interface for unsupervised and supervised mining of images from various domains of biomedicine. It runs on standard computers and laptops and does not require specialized hardware. Through visual programming and the construction of intuitive workflows, Orange supports domain experts (e.g., biologists) in a field where knowledge of computer science and programming used to be essential. With easy access to machine learning and its combination with interactive visualizations, Orange aims to democratize data science.

Methods

Visual analytics. The proposed approach to image mining uses visual analytics [24], which combines interactive visualizations and automated data analysis, including machine learning. Orange addresses all essential phases of visual analysis frameworks [25], which include data loading and transformation, data visualization with user interaction, inference of data models, and model visualization. Orange implements components for visualization and data processing, and through visual programming supports data analysts in combining and linking data analysis components to construct data analysis workflows. A typical workflow component receives the data, processes it, visualizes the result of the processing, and outputs the results of the analysis or any selection of the user for further processing by a downstream component in the workflow. For instance, the widget for hierarchical clustering in Fig. 1 receives the pairwise distances between data items, constructs the corresponding clustering, visualizes it in a dendrogram, and outputs the data items that are associated with user-selected branches of the dendrogram. The outputs of Orange components are instantaneously modified upon any change in the input or by any selection made by the user, resulting in a visual analytics system where any change in a component triggers changes in the downstream components, which subsequently update their results and the corresponding visualizations of data and models.

Transfer learning and embedding. For embedding, images are represented with feature vectors inferred by pre-trained deep convolutional networks [6]. Orange provides an interface to several deep models for image classification from the Keras Python library (https://keras.io), including Inception V3, VGG16, and VGG19 (see Supplementary Note 5 for a complete list of deep models used), and represents images with features of the penultimate layer of these networks. In Inception V3, for example, an image is represented with 2048 features that are further processed by supervised or unsupervised machine learning in the workflows in Figs. 1 and 3.

Machine learning. The workflows in Figs. 1 and 3 employ several standard data mining procedures, such as computation of pairwise distances between data items, hierarchical clustering, and multi-dimensional scaling for model construction, and cross-validation and model scoring for model evaluation. Wherever possible, Orange embeds standard Python libraries for machine learning and data manipulation, including
org ) and scikit-learn 26, and wraps their functionality within building blocks of work-flows that provide an interface where the user can change the parameters of machine-learning methods or browse through results and related visualizations of the inferred models. In our work flows, we have used cosine distances between feature vectors, multi-dimensional scaling and hierarchal clustering with Wardlinkage, and logistic regression. Default parameters of these methods were used unless otherwise noted. Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Images of bone fraction repair, yeast protein localization, mouse oocytes, and development of social amoeba are available online at https:// figshare. com/articles/ Orange-Image-Analytics/9632276 (see Supplementary Note 1). These data can also be accessed through the Orange Datasets widget. Code availability We recommend using the newest version of Orange available through the Orangewebsite ( http://orange. biolab. si ). Orange version 3. 22. 0 used in this study is available at https://orange. biolab. si/download/ files. Orange 's source code is available on Git Hub (https://github. com/biolab/orange3 ). Components for image analytics are available in Image Analytics add-on; to install, use Options > Add-ons... and select Image Analytics. The source code for Image Analytics add-on is available on Git Hub ( https://github. com/ biolab/orange3-imageanalytics ); version 0. 3. 1 of the add-on was used to render data in the manuscript. The Supplementary Notes 4 and 6 include pointers to work flows that can be used to reproduce all the figures and results in this study. Received: 25 July 2018; Accepted: 3 September 2019; ARTICLE NATURE COMMUNICATIONS | https://doi. org/10. 1038/s41467-019-12397-x 6 NATURE COMMUNICATIONS | (2019) 10:4551 | https://doi. org/10. 1038/s41467-019-12397-x | www. nature. com/naturecommunications
References
1. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436-444 (2015).
2. Cruz-Roa, A. et al. Accurate and reproducible invasive breast cancer detection in whole-slide images: a deep learning approach for quantifying tumor extent. Sci. Rep. 7, 46450 (2017).
3. Kraus, O. Z. et al. Automated analysis of high-content microscopy data with deep learning. Mol. Syst. Biol. 13, 924 (2017).
4. Mohanty, S. P., Hughes, D. P. & Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 7, 1419 (2016).
5. Pan, S. J. & Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22, 1345-1359 (2010).
6. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the Inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2818-2826 (2016).
7. Webb, S. Deep learning for biology. Nature 554, 555-557 (2018).
8. Esteva, A. et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115-118 (2017).
9. Zhang, W. et al. Deep model based transfer and multi-task learning for biological image analysis. In Proc. of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 1475-1484 (2015).
10. Modarres, M. H. et al. Neural network for nanoscience scanning electron microscope image recognition. Sci. Rep. 7, 13282 (2017).
11. Abidin, A. Z. et al. Deep transfer learning for characterizing chondrocyte patterns in phase contrast X-ray computed tomography images of the human patellar cartilage. Comput. Biol. Med. 95, 24-33 (2018).
12. Khosravi, P., Kazemi, E., Imielinski, M., Elemento, O. & Hajirasouliha, I. Deep convolutional neural networks enable discrimination of heterogeneous digital pathology images. EBioMedicine 27, 317-328 (2018).
13. Pratt, L. Y. Discriminability-based transfer between neural networks. In NIPS: Advances in Neural Information Processing Systems 5, 204-211 (1993).
14. Thrun, S. & Pratt, L. Y. Special issue on inductive transfer. Mach. Learn. 28 (1997).
15. Angermueller, C., Pärnamaa, T., Parts, L. & Stegle, O. Deep learning for computational biology. Mol. Syst. Biol. 12, 878 (2016).
16. Curk, T. et al. Microarray data mining with visual programming. Bioinformatics 21, 396-398 (2005).
17. Demšar, J. et al. Orange: data mining toolbox in Python. J. Mach. Learn. Res. 14, 2349-2353 (2013).
18. Zuccotti, M., Merico, V., Cecconi, S., Redi, C. A. & Garagna, S. What does it take to make a developmentally competent mammalian egg? Hum. Reprod. Update 17, 525-540 (2011).
19. Bui, T. T. H. et al. Cytoplasmic movement profiles of mouse surrounding nucleolus and not-surrounding nucleolus antral oocytes during meiotic resumption. Mol. Reprod. Dev. 84, 356-362 (2017).
20. Carpenter, A. E. et al. CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biol. 7, R100 (2006).
21. Lowe, D. G. Object recognition from local scale-invariant features. In Proc. of the Seventh IEEE International Conference on Computer Vision (eds Tsotsos, J., Blake, A., Ohta, Y. & Zucker, S.) 1150-1157 (IEEE Computer Society, 1999).
22. Iandola, F. N. et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv (2016).
23. Ilenič, N. Deep Models of Painting Authorship. (University of Ljubljana, 2017).
24. Keim, D. A., Mansmann, F., Schneidewind, J., Thomas, J. & Ziegler, H. in Visual Data Mining. Lecture Notes in Computer Science, Vol. 4404 (eds Simoff, S. J., Böhlen, M. H. & Mazeika, A.) (Springer, Berlin, Heidelberg, 2008).
25. Sacha, D. et al. What you see is what you can change: human-centered machine learning by interactive visualization. Neurocomputing 268, 164-175 (2017).
26. Pedregosa, F. et al. Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825-2830 (2011).

Acknowledgements
This work was partially supported by Slovenian Research Agency grants P2-0209, BI-US/17-18-014 (P.G., M.P., N.I., A.Č., M.S., A.E., A.P., J.D., A.S., M.T., L.Ž., J.H., and B.Z.), P1-0207, and N1-0034 (U.P.). G.S. was partly supported by grant R35 GM118016 from the National Institutes of Health. D.P. was supported by the grant R01AR072018 from the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS) of the NIH. M.Z. and S.G. were supported by the Italian Ministry of Education, University and Research (MIUR), Dipartimenti di Eccellenza Program (2018-2022) to the Department of Biology and Biotechnology 'L. Spallanzani', University of Pavia. R.B. was partially supported by the grant 2015-0042 from Fondazione Regionale per la Ricerca Biomedica.

Author contributions
B.Z. and G.S. wrote the manuscript. B.Z. conceived the software architecture. P.G., N.I., A.C., A.E., J.D., A.S., M.T., L.Ž., and J.H. wrote the software. M.P., N.I., and P.G. designed and maintain the server services for image embedding. A.P. and M.S. tested the software. A.P. wrote the documentation and filmed video tutorials. G.S., D.P., M.Z., S.G., R.B., U.P., and H.W. provided the biomedical images and contributed domain expertize. P.G., M.P., and N.I. contributed equally to this work. All the authors read and approved the manuscript.

Competing interests
While Orange and its add-ons are free and available in open source, the Orange website solicits donations that are used to support further development. B.Z., J.D., and A.P. manage the website. The website also promotes courses that can be offered by and whose proceedings may benefit the employees of the University of Ljubljana, including P.G., M.P., N.I., A.Č., M.S., A.E., A.P., J.D., A.S., M.T., L.Ž., J.H., and B.Z. All other authors declare no competing interests.

Additional information
Supplementary information is available for this paper at https://doi.org/10.1038/s41467-019-12397-x.
Correspondence and requests for materials should be addressed to B.Z.
Peer review information: Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work.
Reprints and permission information is available at http://www.nature.com/reprints
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

© The Author(s) 2019