\documentclass{article} \usepackage{iclr2015,times}
\usepackage{hyperref}
\usepackage{url}

\usepackage{array}
\usepackage{subfigure}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{bbm}
\usepackage{epstopdf}
\usepackage{caption}
\usepackage{enumitem}
\usepackage{calc}
\usepackage{multirow}
\usepackage{xspace}

\newcommand{\figref}[1]{Fig\onedot~\ref{#1}}
\newcommand{\equref}[1]{Eq\onedot~\eqref{#1}}
\newcommand{\secref}[1]{Sec\onedot~\ref{#1}}
\newcommand{\tabref}[1]{Tab\onedot~\ref{#1}}
\newcommand{\thmref}[1]{Theorem~\ref{#1}}
\newcommand{\prgref}[1]{Program~\ref{#1}}
\newcommand{\algref}[1]{Alg\onedot~\ref{#1}}
\newcommand{\clmref}[1]{Claim~\ref{#1}}
\newcommand{\lemref}[1]{Lemma~\ref{#1}}
\newcommand{\ptyref}[1]{Property\onedot~\ref{#1}}

\newcommand{\by}[2]{\ensuremath{#1 \! \times \! #2}}

\renewcommand{\cite}[1]{\citep{#1}}

\makeatletter
\DeclareRobustCommand\onedot{\futurelet\@let@token\@onedot}
\def\@onedot{\ifx\@let@token.\else.\null\fi\xspace}
\def\eg{\emph{e.g}\onedot} \def\Eg{\emph{E.g}\onedot}
\def\ie{\emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot}
\def\cf{\emph{cf}\onedot} \def\Cf{\emph{Cf}\onedot}
\def\etc{\emph{etc}\onedot} \def\vs{\emph{vs}\onedot}
\def\wrt{w.r.t\onedot} \def\dof{d.o.f\onedot}
\def\etal{\emph{et al}\onedot}
\makeatother

\title{Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs}

\author{
Liang-Chieh Chen\\
Univ\onedot of California, Los Angeles\\
\texttt{lcchen@cs.ucla.edu}
\And
George Papandreou \thanks{Work initiated when G.P\onedot was with the Toyota
Technological Institute at Chicago. The first two authors contributed
equally to this work.}\\
Google Inc.\\
\texttt{gpapan@google.com}\\
\And
Iasonas Kokkinos\\
CentraleSup\'elec and INRIA\\
\texttt{iasonas.kokkinos@ecp.fr}\\
\And
Kevin Murphy\\
Google Inc.\\
\texttt{kpmurphy@google.com}\\
\And
Alan L. Yuille\\
Univ\onedot of California, Los Angeles\\
\texttt{yuille@stat.ucla.edu}
}

\newcommand{\fix}{\marginpar{FIX}}
\newcommand{\new}{\marginpar{NEW}}

\iclrfinalcopy
\iclrconference
\begin{document}

\maketitle

\begin{abstract}
Deep Convolutional Neural Networks (DCNNs) have recently shown state-of-the-art performance in high-level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called ``semantic image segmentation''). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high-level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our ``DeepLab'' system is able to localize segment boundaries at a level of accuracy that is beyond previous methods. Quantitatively, our method sets the new state-of-the-art on the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6\% IOU accuracy on the test set. We show how these results can be obtained efficiently: careful network re-purposing and a novel application of the `hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.
\end{abstract}

\section{Introduction}
\label{sec:intro}
Deep Convolutional Neural Networks (DCNNs) have been the method of choice for document recognition since \citet{LeCun1998}, but have only recently become the mainstream of high-level vision research. Over the past two years, DCNNs have pushed the performance of computer vision systems to soaring heights on a broad array of high-level problems, including image classification \citep{KrizhevskyNIPS2013, sermanet2013overfeat, simonyan2014very, szegedy2014going, papandreou2014untangling}, object detection \citep{girshick2014rcnn}, and fine-grained categorization \citep{zhang2014part}, among others. A common theme in these works is that DCNNs trained in an end-to-end manner deliver strikingly better results than systems relying on carefully engineered representations, such as SIFT or HOG features. This success can be partially attributed to the built-in invariance of DCNNs to local image transformations, which underpins their ability to learn hierarchical abstractions of data \citep{zeiler2014visualizing}. While this invariance is clearly desirable for high-level vision tasks, it can hamper low-level tasks, such as pose estimation \citep{chen2014articulated, tompson2014joint} and semantic segmentation, where we want precise localization rather than abstraction of spatial details.

There are two technical hurdles in the application of DCNNs to image labeling tasks: signal downsampling, and spatial `insensitivity' (invariance). The first problem relates to the reduction of signal resolution incurred by the repeated combination of max-pooling and downsampling (`striding') performed at every layer of standard DCNNs \citep{KrizhevskyNIPS2013, simonyan2014very, szegedy2014going}. Instead, as in \citet{papandreou2014untangling}, we employ the `atrous' (with holes) algorithm originally developed for efficiently computing the undecimated discrete wavelet transform \cite{Mall99}. This allows efficient dense computation of DCNN responses in a scheme substantially simpler than earlier solutions to this problem \cite{GCMG+13, sermanet2013overfeat}.

The second problem relates to the fact that obtaining object-centric decisions from a classifier requires invariance to spatial transformations, inherently limiting the spatial accuracy of the DCNN model. We boost our model's ability to capture fine details by employing a fully-connected Conditional Random Field (CRF). Conditional Random Fields have been broadly used in semantic segmentation to combine class scores computed by multi-way classifiers with the low-level information captured by the local interactions of pixels and edges \citep{rother2004grabcut, shotton2009textonboost} or superpixels \citep{lucchi2011spatial}. Even though works of increased sophistication have been proposed to model the hierarchical dependencies \citep{he2004multiscale, ladicky2009associative, lempitsky2011pylon} and/or high-order dependencies of segments \citep{delong2012fast, gonfaus2010harmony, kohli2009robust, CPY13, Wang15}, we use the fully connected pairwise CRF proposed by \citet{krahenbuhl2011efficient} for its efficient computation and its ability to capture fine edge details while also catering for long-range dependencies. That model was shown in \citet{krahenbuhl2011efficient} to largely improve the performance of a boosting-based pixel-level classifier, and in our work we demonstrate that it leads to state-of-the-art results when coupled with a DCNN-based pixel-level classifier.

The three main advantages of our ``DeepLab'' system are (i) speed: by virtue of the `atrous' algorithm, our dense DCNN operates at 8 fps, while Mean Field Inference for the fully-connected CRF requires 0.5 seconds, (ii) accuracy: we obtain state-of-the-art results on the PASCAL semantic segmentation challenge, outperforming the second-best approach of \citet{mostajabi2014feedforward} by a margin of 7.2$\%$, and (iii) simplicity: our system is composed of a cascade of two fairly well-established modules, DCNNs and CRFs.

\section{Related Work}

Our system works directly on the pixel representation, similarly to \citet{long2014fully}. This is in contrast to the two-stage approaches that are now most common in semantic segmentation with DCNNs: such techniques typically use a cascade of bottom-up image segmentation and DCNN-based region classification, which makes the system commit to potential errors of the front-end segmentation system. For instance, the bounding box proposals and masked regions delivered by \citep{arbelaez2014multiscale, Uijlings13} are used in \citet{girshick2014rcnn} and \cite{hariharan2014simultaneous} as inputs to a DCNN to introduce shape information into the classification process. Similarly, the authors of \citet{mostajabi2014feedforward} rely on a superpixel representation. A celebrated non-DCNN precursor to these works is the second-order pooling method of \citep{carreira2012semantic}, which also assigns labels to the region proposals delivered by \citep{carreira2012cpmc}. Understanding the perils of committing to a single segmentation, the authors of \citet{cogswell2014combining} build on \citep{yadollahpour2013discriminative} to explore a diverse set of CRF-based segmentation proposals, also computed by \citep{carreira2012cpmc}. These segmentation proposals are then re-ranked according to a DCNN trained specifically for this reranking task. Even though this approach explicitly tries to handle the temperamental nature of a front-end segmentation algorithm, there is still no explicit exploitation of the DCNN scores in the CRF-based segmentation algorithm: the DCNN is only applied post-hoc, while it would make sense to directly try to use its results {\em during} segmentation.

Moving towards works that lie closer to our approach, several other researchers have considered the use of convolutionally computed DCNN features for dense image labeling. Among the first were \citet{farabet2013learning}, who apply DCNNs at multiple image resolutions and then employ a segmentation tree to smooth the prediction results; more recently, \citet{hariharan2014hypercolumns} propose to concatenate the computed intermediate feature maps within the DCNNs for pixel classification, and \citet{dai2014convolutional} propose to pool the intermediate feature maps by region proposals. Even though these works still employ segmentation algorithms that are decoupled from the DCNN classifier's results, we believe it is advantageous that segmentation is only used at a later stage, avoiding the commitment to premature decisions.

More recently, the segmentation-free techniques of \citep{long2014fully, eigen2014predicting} directly apply DCNNs to the whole image in a sliding window fashion, replacing the last fully connected layers of a DCNN by convolutional layers. In order to deal with the spatial localization issues outlined in the beginning of the introduction, \citet{long2014fully} upsample and concatenate the scores from intermediate feature maps, while \citet{eigen2014predicting} refine the prediction result from coarse to fine by propagating the coarse results to another DCNN.

The main difference between our model and other state-of-the-art models is the combination of pixel-level CRFs and DCNN-based `unary terms'. Focusing on the closest works in this direction, \citet{cogswell2014combining} use CRFs as a proposal mechanism for a DCNN-based reranking system, while \citet{farabet2013learning} treat superpixels as nodes for a local pairwise CRF and use graph-cuts for discrete inference; as such their results can be limited by errors in superpixel computations, while ignoring long-range superpixel dependencies. Our approach instead treats every pixel as a CRF node, exploits long-range dependencies, and uses CRF inference to directly optimize a DCNN-driven cost function. We note that mean field had been extensively studied for traditional image segmentation/edge detection tasks, \eg, \citep{geiger1991parallel, geiger1991common, kokkinos2008computational}, but recently \citet{krahenbuhl2011efficient} showed that the inference can be very efficient for fully connected CRFs and particularly effective in the context of semantic segmentation.

After the first version of our manuscript was made publicly available, it came to our attention that two other groups have independently and concurrently pursued a very similar direction, combining DCNNs and densely connected CRFs \citep{bell2014material, zheng2015crfrnn}. There are several differences in technical aspects of the respective models. \citet{bell2014material} focus on the problem of material classification, while \citet{zheng2015crfrnn} unroll the CRF mean-field inference steps to convert the whole system into an end-to-end trainable feed-forward network.

We have updated our proposed ``DeepLab'' system with much improved methods and results in our latest work \cite{chen2016deeplab}. We refer the interested reader to that paper for details.

218
+ \section{Convolutional Neural Networks for Dense Image Labeling}
219
+ \label{sec:convnets}
220
+
221
+
222
+
223
+ Herein we describe how we have re-purposed and finetuned the publicly
224
+ available Imagenet-pretrained state-of-art 16-layer classification network of
225
+ \cite{simonyan2014very} (VGG-16) into an efficient and effective dense feature
226
+ extractor for our dense semantic image segmentation system.
227
+
228
+ \subsection{Efficient Dense Sliding Window Feature Extraction with the Hole Algorithm}
229
+ \label{sec:convnet-hole}
230
+
231
+ Dense spatial score evaluation is instrumental in the success of our dense CNN
232
+ feature extractor. As a first step to implement this, we convert the
233
+ fully-connected layers of VGG-16 into convolutional ones and run the network
234
+ in a convolutional fashion on the image at its original resolution. However
235
+ this is not enough as it yields very sparsely computed detection scores (with
236
+ a stride of 32 pixels). To compute scores more densely at our target stride of
237
+ 8 pixels, we develop a variation of the method previously employed by
238
+ \citet{GCMG+13, sermanet2013overfeat}. We skip subsampling after the last two
239
+ max-pooling layers in the network of \citet{simonyan2014very} and modify the
240
+ convolutional filters in the layers that follow them by introducing zeros to
241
+ increase their length (\by{2}{} in the last three convolutional layers and
242
+ \by{4}{} in the first fully connected layer). We can implement this more
243
+ efficiently by keeping the filters intact and instead sparsely sample the
244
+ feature maps on which they are applied on using an input stride of 2 or 4
245
+ pixels, respectively. This approach, illustrated in \figref{fig:hole} is
246
+ known as the `hole algorithm' (`atrous algorithm') and has been developed
247
+ before for efficient computation of the undecimated wavelet transform
248
+ \cite{Mall99}. We have implemented this within the Caffe framework
249
+ \citep{jia2014caffe} by adding to the \textsl{im2col} function (it converts
250
+ multi-channel feature maps to vectorized patches) the option to sparsely
251
+ sample the underlying feature map. This approach is generally applicable
252
+ and allows us to efficiently compute dense CNN feature maps at any target
253
+ subsampling rate without introducing any approximations.
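
As a concrete 1-D illustration of the input-stride view (a NumPy sketch we add for exposition; it is not the actual Caffe \textsl{im2col} modification, and all names are ours), the following verifies that reading the input with an input stride of $r$ gives the same dense responses as correlating with a filter that has $r-1$ zeros inserted between its taps:
\begin{verbatim}
import numpy as np

def dense_scores_dilated_filter(x, w, rate):
    # Insert rate-1 zeros between the taps of w (the `holes'), then
    # correlate it densely with the 1-D signal x ('valid' positions only).
    wd = np.zeros(rate * (len(w) - 1) + 1)
    wd[::rate] = w
    n = len(x) - len(wd) + 1
    return np.array([np.dot(x[i:i + len(wd)], wd) for i in range(n)])

def dense_scores_input_stride(x, w, rate):
    # Equivalent view: keep w intact and sparsely sample the input
    # feature map with the given input stride.
    span = rate * (len(w) - 1) + 1
    n = len(x) - span + 1
    return np.array([np.dot(x[i:i + span:rate], w) for i in range(n)])

x, w = np.random.randn(64), np.random.randn(3)
assert np.allclose(dense_scores_dilated_filter(x, w, 2),
                   dense_scores_input_stride(x, w, 2))
\end{verbatim}
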

\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{fig/atrous2.pdf}
\caption{Illustration of the hole algorithm in 1-D, when \textsl{kernel\_size = 3}, \textsl{input\_stride = 2}, and \textsl{output\_stride = 1}.}
\label{fig:hole}
\end{figure}

We finetune the model weights of the Imagenet-pretrained VGG-16 network to adapt it to the semantic segmentation task in a straightforward fashion, following the procedure of \citet{long2014fully}. We replace the 1000-way Imagenet classifier in the last layer of VGG-16 with a 21-way one. Our loss function is the sum of cross-entropy terms for each spatial position in the CNN output map (subsampled by 8 compared to the original image). All positions and labels are equally weighted in the overall loss function. Our targets are the ground truth labels (subsampled by 8). We optimize the objective function with respect to the weights at all network layers by the standard SGD procedure of \citet{KrizhevskyNIPS2013}.
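
In symbols (our added notation for clarity, not an equation from the original text), writing $m$ for a position of the subsampled output map, $y_m$ for the corresponding subsampled ground-truth label, and $P^{(m)}_\theta(\cdot \mid I)$ for the softmax output of the network with weights $\theta$ at that position, the objective is the unweighted sum of per-position cross-entropy terms:
\begin{align*}
J(\theta) = - \sum_{m} \log P^{(m)}_\theta(y_m \mid I).
\end{align*}
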

During testing, we need class score maps at the original image resolution. As illustrated in Figure~\ref{fig:score-maps} and further elaborated in Section~\ref{sec:local-chal}, the class score maps (corresponding to log-probabilities) are quite smooth, which allows us to use simple bilinear interpolation to increase their resolution by a factor of 8 at a negligible computational cost. Note that the method of \citet{long2014fully} does not use the hole algorithm and produces very coarse scores (subsampled by a factor of 32) at the CNN output. This forced them to use learned upsampling layers, significantly increasing the complexity and training time of their system: Fine-tuning our network on PASCAL VOC 2012 takes about 10 hours, while they report a training time of several days (both timings on a modern GPU).
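
A minimal sketch of this test-time step (using SciPy purely for illustration; the variable names and the choice of \texttt{scipy.ndimage.zoom} are our assumptions, not part of the original pipeline):
\begin{verbatim}
import numpy as np
from scipy.ndimage import zoom

# Coarse (H/8, W/8, 21) class score map, here a random stand-in.
scores = np.random.randn(40, 50, 21)

# order=1 gives (bi)linear interpolation; upsample only the spatial axes by 8.
full_res_scores = zoom(scores, (8, 8, 1), order=1)
labels = full_res_scores.argmax(axis=-1)   # per-pixel hard labels, if desired
\end{verbatim}
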

\subsection{Controlling the Receptive Field Size and Accelerating Dense Computation with Convolutional Nets}
\label{sec:convnet-field}

Another key ingredient in re-purposing our network for dense score computation is explicitly controlling the network's receptive field size. Most recent DCNN-based image recognition methods rely on networks pre-trained on the Imagenet large-scale classification task. These networks typically have a large receptive field size: in the case of the VGG-16 net we consider, its receptive field is \by{224}{224} (with zero-padding) and \by{404}{404} pixels if the net is applied convolutionally. After converting the network to a fully convolutional one, the first fully connected layer has 4,096 filters of large \by{7}{7} spatial size and becomes the computational bottleneck in our dense score map computation.

We have addressed this practical problem by spatially subsampling (by simple decimation) the first FC layer to \by{4}{4} (or \by{3}{3}) spatial size. This has reduced the receptive field of the network down to \by{128}{128} (with zero-padding) or \by{308}{308} (in convolutional mode) and has reduced computation time for the first FC layer by $2 - 3$ times. Using our Caffe-based implementation and a Titan GPU, the resulting VGG-derived network is very efficient: given a \by{306}{306} input image, it produces \by{39}{39} dense raw feature scores at the top of the network at a rate of about 8 frames/sec during testing. The speed during training is 3 frames/sec. We have also successfully experimented with reducing the number of channels at the fully connected layers from 4,096 down to 1,024, considerably further decreasing computation time and memory footprint without sacrificing performance, as detailed in Section~\ref{sec:experiments}. Using smaller networks such as that of \citet{KrizhevskyNIPS2013} could allow video-rate test-time dense feature computation even on light-weight GPUs.
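
The arithmetic behind the $2-3\times$ reduction is simple; a small sketch (the tensor shape for the converted first FC layer is our illustrative assumption):
\begin{verbatim}
import numpy as np

# fc6 of VGG-16 converted to convolution: 4096 filters over 512 channels
# with 7x7 spatial taps (shape chosen for illustration).
fc6 = np.random.randn(4096, 512, 7, 7)

# Simple spatial decimation of the filters down to 4x4 taps.
fc6_small = fc6[:, :, ::2, ::2]
print(fc6_small.shape)        # (4096, 512, 4, 4)

# Multiply-accumulates per output position scale with the number of taps:
print((7 * 7) / (4 * 4))      # ~3.06x fewer, consistent with the observed 2-3x
\end{verbatim}
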

\section{Detailed Boundary Recovery: Fully-Connected Conditional Random Fields and Multi-scale Prediction}
\label{sec:boundary-recovery}

\subsection{Deep Convolutional Networks and the Localization Challenge}
\label{sec:local-chal}

As illustrated in Figure~\ref{fig:score-maps}, DCNN score maps can reliably predict the presence and rough position of objects in an image but are less well suited for pin-pointing their exact outline. There is a natural trade-off between classification accuracy and localization accuracy with convolutional networks: deeper models with multiple max-pooling layers have proven most successful in classification tasks; however, their increased invariance and large receptive fields make the problem of inferring position from the scores at their top output levels more challenging.

Recent work has pursued two directions to address this localization challenge. The first approach is to harness information from multiple layers in the convolutional network in order to better estimate the object boundaries \citep{long2014fully, eigen2014predicting}. The second approach is to employ a super-pixel representation, essentially delegating the localization task to a low-level segmentation method. This route is followed by the very successful recent method of \citet{mostajabi2014feedforward}.

In Section~\ref{sec:dense-crf}, we pursue a novel alternative direction based on coupling the recognition capacity of DCNNs and the fine-grained localization accuracy of fully connected CRFs and show that it is remarkably successful in addressing the localization challenge, producing accurate semantic segmentation results and recovering object boundaries at a level of detail that is well beyond the reach of existing methods.

\subsection{Fully-Connected Conditional Random Fields for Accurate Localization}
\label{sec:dense-crf}

\begin{figure}[ht]
\centering
\begin{tabular}{c c c c c}
\includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/2007_007470.jpg} &
\includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Score_Class1_Itr0.pdf} &
\includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Score_Class1_Itr1.pdf} &
\includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Score_Class1_Itr2.pdf} &
\includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Score_Class1_Itr10.pdf} \\
\includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/2007_007470.png} &
\includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Belief_Class1_Itr0.pdf} &
\includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Belief_Class1_Itr1.pdf} &
\includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Belief_Class1_Itr2.pdf} &
\includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Belief_Class1_Itr10.pdf} \\
Image/G.T. & DCNN output & CRF Iteration 1 & CRF Iteration 2 & CRF Iteration 10 \\
\end{tabular}
\caption{Score map (input before the softmax function) and belief map (output of the softmax function) for Aeroplane. We show the score (1st row) and belief (2nd row) maps after each mean field iteration. The output of the last DCNN layer is used as input to the mean field inference. Best viewed in color.}
\label{fig:score-maps}
\end{figure}

Traditionally, conditional random fields (CRFs) have been employed to smooth noisy segmentation maps \cite{rother2004grabcut, kohli2009robust}. Typically these models contain energy terms that couple neighboring nodes, favoring same-label assignments to spatially proximal pixels. Qualitatively, the primary function of these short-range CRFs has been to clean up the spurious predictions of weak classifiers built on top of local hand-engineered features.

Compared to these weaker classifiers, modern DCNN architectures such as the one we use in this work produce score maps and semantic label predictions which are qualitatively different. As illustrated in Figure~\ref{fig:score-maps}, the score maps are typically quite smooth and produce homogeneous classification results. In this regime, using short-range CRFs can be detrimental, as our goal should be to recover detailed local structure rather than further smooth it. Using contrast-sensitive potentials \cite{rother2004grabcut} in conjunction with local-range CRFs can potentially improve localization but still misses thin structures and typically requires solving an expensive discrete optimization problem.

To overcome these limitations of short-range CRFs, we integrate into our system the fully connected CRF model of \citet{krahenbuhl2011efficient}. The model employs the energy function
\begin{align}
E(\boldsymbol{x}) = \sum_i \theta_i(x_i) + \sum_{ij} \theta_{ij}(x_i, x_j)
\end{align}
where $\boldsymbol{x}$ is the label assignment for pixels. We use as unary potential $\theta_i(x_i) = - \log P(x_i)$, where $P(x_i)$ is the label assignment probability at pixel $i$ as computed by the DCNN. The pairwise potential is $\theta_{ij}(x_i, x_j) = \mu(x_i,x_j)\sum_{m=1}^{K} w_m \cdot k^m(\boldsymbol{f}_i, \boldsymbol{f}_j)$, where $\mu(x_i,x_j)=1 \text{ if } x_i \neq x_j$, and zero otherwise (\ie, Potts model). There is one pairwise term for each pair of pixels $i$ and $j$ in the image, no matter how far from each other they lie, \ie, the model's factor graph is fully connected. Each $k^m$ is a Gaussian kernel that depends on features (denoted as $\boldsymbol{f}$) extracted for pixels $i$ and $j$ and is weighted by the parameter $w_m$. We adopt bilateral position and color terms; specifically, the kernels are
\begin{align}
\label{eq:fully_crf}
w_1 \exp \Big(-\frac{||p_i-p_j||^2}{2\sigma_\alpha^2} -\frac{||I_i-I_j||^2}{2\sigma_\beta^2} \Big) + w_2 \exp \Big(-\frac{||p_i-p_j||^2}{2\sigma_\gamma^2}\Big)
\end{align}
where the first kernel depends on both pixel positions (denoted as $p$) and pixel color intensities (denoted as $I$), and the second kernel only depends on pixel positions. The hyperparameters $\sigma_\alpha$, $\sigma_\beta$ and $\sigma_\gamma$ control the ``scale'' of the Gaussian kernels.
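
To make the energy concrete, the following toy NumPy sketch (added for illustration; it evaluates $E(\boldsymbol{x})$ by brute force and is only feasible for tiny images, unlike the filtering-based inference discussed next) scores a candidate labeling under the unary term and the two kernels of \equref{eq:fully_crf}:
\begin{verbatim}
import numpy as np

def crf_energy(x, probs, img, w1=5.0, w2=3.0,
               s_alpha=50.0, s_beta=5.0, s_gamma=3.0):
    # x: (H, W) candidate labeling; probs: (H, W, L) DCNN softmax;
    # img: (H, W, 3) RGB image; the keyword arguments play the roles of
    # w_1, w_2, sigma_alpha, sigma_beta, sigma_gamma.
    H, W = x.shape
    unary = -np.log(probs[np.arange(H)[:, None], np.arange(W)[None, :], x]).sum()
    pos = np.argwhere(np.ones((H, W))).astype(float)   # (H*W, 2) coordinates
    col = img.reshape(-1, 3).astype(float)
    lab = x.reshape(-1)
    pairwise = 0.0
    for i in range(H * W):                             # O(n^2): toy sizes only
        dp = ((pos[i] - pos) ** 2).sum(1)
        dc = ((col[i] - col) ** 2).sum(1)
        k = (w1 * np.exp(-dp / (2 * s_alpha**2) - dc / (2 * s_beta**2))
             + w2 * np.exp(-dp / (2 * s_gamma**2)))
        pairwise += (k * (lab != lab[i])).sum()        # Potts compatibility
    return unary + pairwise / 2.0                      # each unordered pair once
\end{verbatim}
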

Crucially, this model is amenable to efficient approximate probabilistic inference \citep{krahenbuhl2011efficient}. The message-passing updates under a fully decomposable mean field approximation $b(\boldsymbol{x}) = \prod_i b_i(x_i)$ can be expressed as convolutions with a Gaussian kernel in feature space. High-dimensional filtering algorithms \citep{adams2010fast} significantly speed up this computation, resulting in an algorithm that is very fast in practice, less than 0.5 sec on average for Pascal VOC images using the publicly available implementation of \citep{krahenbuhl2011efficient}.
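
For readers who wish to reproduce this step, the third-party \texttt{pydensecrf} wrapper of that public implementation can be used roughly as follows (a sketch under the assumption that the wrapper is installed; it is not the toolchain used in this paper, and the inputs and hyperparameter values below are placeholders):
\begin{verbatim}
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

H, W, n_labels = 240, 320, 21
logits = np.random.randn(n_labels, H, W).astype(np.float32)  # stand-in DCNN scores
probs = np.exp(logits)
probs /= probs.sum(axis=0, keepdims=True)                    # softmax over labels
img = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)   # stand-in RGB image

d = dcrf.DenseCRF2D(W, H, n_labels)
d.setUnaryEnergy(unary_from_softmax(probs))      # theta_i(x_i) = -log P(x_i)
d.addPairwiseGaussian(sxy=3, compat=3)           # second (position-only) kernel
d.addPairwiseBilateral(sxy=50, srgb=5, compat=5, # first (bilateral) kernel
                       rgbim=np.ascontiguousarray(img))
Q = d.inference(10)                              # 10 mean field iterations
labels = np.argmax(Q, axis=0).reshape(H, W)
\end{verbatim}
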

\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{fig/model_illustration3.pdf}
\caption{Model illustration. The coarse score map from the Deep Convolutional Neural Network (with fully convolutional layers) is upsampled by bi-linear interpolation. A fully connected CRF is applied to refine the segmentation result. Best viewed in color.}
\label{fig:ModelIllustration}
\end{figure}

\subsection{Multi-Scale Prediction}
\label{sec:multiscale}

Following the promising recent results of \cite{hariharan2014hypercolumns, long2014fully}, we have also explored a multi-scale prediction method to increase the boundary localization accuracy. Specifically, we attach to the input image and the output of each of the first four max pooling layers a two-layer MLP (first layer: 128 \by{3}{3} convolutional filters, second layer: 128 \by{1}{1} convolutional filters) whose feature map is concatenated to the main network's last-layer feature map. The aggregate feature map fed into the softmax layer is thus enhanced by $5 \times 128 = 640$ channels. We only adjust the newly added weights, keeping the other network parameters at the values learned by the method of Section~\ref{sec:convnets}. As discussed in the experimental section, introducing these extra direct connections from fine-resolution layers improves localization performance, yet the effect is not as dramatic as the one obtained with the fully-connected CRF.
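
A re-implementation sketch of one such branch (written with PyTorch purely for illustration; the original network is defined in Caffe, and the use of ReLU nonlinearities and same-padding here is our assumption):
\begin{verbatim}
import torch.nn as nn

class MSCBranch(nn.Module):
    """Two-layer MLP attached to the input image or a pooling output:
    128 3x3 filters followed by 128 1x1 filters (hypothetical module name)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

# Five branches: the raw image (3 channels) plus the first four VGG-16 pooling
# outputs (64, 128, 256, 512 channels).  Once their outputs are brought to the
# resolution of the main network's last feature map, they contribute
# 5 x 128 = 640 extra channels to the concatenation fed into the softmax layer.
branches = nn.ModuleList([MSCBranch(c) for c in (3, 64, 128, 256, 512)])
\end{verbatim}
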
\section{Experimental Evaluation}
\label{sec:experiments}

\paragraph{Dataset} We test our DeepLab model on the PASCAL VOC 2012 segmentation benchmark \citep{everingham2014pascal}, consisting of 20 foreground object classes and one background class. The original dataset contains $1,464$, $1,449$, and $1,456$ images for training, validation, and testing, respectively. The dataset is augmented by the extra annotations provided by \citet{hariharan2011semantic}, resulting in $10,582$ training images. The performance is measured in terms of pixel intersection-over-union (IOU) averaged across the 21 classes.

\paragraph{Training} We adopt the simplest form of piecewise training, decoupling the DCNN and CRF training stages, assuming the unary terms provided by the DCNN are fixed during CRF training.

For DCNN training we employ the VGG-16 network, which has been pre-trained on ImageNet. We fine-tune the VGG-16 network on the VOC 21-way pixel-classification task by stochastic gradient descent on the cross-entropy loss function, as described in Section~\ref{sec:convnet-hole}. We use a mini-batch of 20 images and an initial learning rate of $0.001$ ($0.01$ for the final classifier layer), multiplying the learning rate by 0.1 every 2000 iterations. We use a momentum of $0.9$ and a weight decay of $0.0005$.

After the DCNN has been fine-tuned, we cross-validate the parameters of the fully connected CRF model in \equref{eq:fully_crf} along the lines of \citet{krahenbuhl2011efficient}. We use the default values of $w_2 = 3$ and $\sigma_\gamma = 3$, and we search for the best values of $w_1$, $\sigma_\alpha$, and $\sigma_\beta$ by cross-validation on a small subset of the validation set (we use 100 images). We employ a coarse-to-fine search scheme. Specifically, the initial search range of the parameters is $w_1 \in [5, 10]$, $\sigma_\alpha \in [50:10:100]$, and $\sigma_\beta \in [3:1:10]$ (MATLAB notation), and we then refine the search step sizes around the first round's best values. We fix the number of mean field iterations to 10 for all reported experiments.
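
Expanding the MATLAB colon notation, the first search round amounts to a small grid; a sketch (the function \texttt{mean\_iou\_on\_subset} is a hypothetical stand-in, and reading $w_1 \in [5, 10]$ as the integers 5 through 10 is our assumption):
\begin{verbatim}
import itertools
import numpy as np

def mean_iou_on_subset(w1, sigma_alpha, sigma_beta):
    # Hypothetical placeholder: in practice, run DeepLab-CRF with these
    # CRF parameters on the 100 held-out val images and return mean IOU.
    return 0.0

w1_values    = range(5, 11)            # w_1 in [5, 10], read as integers
sigma_alphas = np.arange(50, 101, 10)  # 50:10:100 in MATLAB notation
sigma_betas  = np.arange(3, 11, 1)     # 3:1:10

best = max(itertools.product(w1_values, sigma_alphas, sigma_betas),
           key=lambda params: mean_iou_on_subset(*params))
\end{verbatim}
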

\begin{table}[t]
\centering
\begin{tabular}{c c}
\hspace{-0.7cm}
\raisebox{0cm}{
\begin{tabular}{l | c}
Method & mean IOU (\%) \\
\hline \hline
DeepLab & 59.80 \\
DeepLab-CRF & 63.74 \\
\hline
DeepLab-MSc & 61.30 \\
DeepLab-MSc-CRF & 65.21 \\
\hline \hline
DeepLab-7x7 & 64.38 \\
DeepLab-CRF-7x7 & 67.64 \\
\hline
DeepLab-LargeFOV & 62.25 \\
DeepLab-CRF-LargeFOV & 67.64 \\
\hline
DeepLab-MSc-LargeFOV & 64.21 \\
DeepLab-MSc-CRF-LargeFOV & 68.70 \\
\end{tabular}
}
&
\raisebox{0.4cm}{
\begin{tabular}{l | c}
Method & mean IOU (\%) \\
\hline \hline
MSRA-CFM & 61.8 \\
FCN-8s & 62.2 \\
TTI-Zoomout-16 & 64.4 \\
\hline \hline
DeepLab-CRF & 66.4 \\
DeepLab-MSc-CRF & 67.1 \\
DeepLab-CRF-7x7 & 70.3 \\
DeepLab-CRF-LargeFOV & 70.3 \\
DeepLab-MSc-CRF-LargeFOV & 71.6 \\
\end{tabular}
}
\\
(a) & (b)
\end{tabular}
\caption{(a) Performance of our proposed models on the PASCAL VOC 2012 `val' set (with training on the augmented `train' set). The best performance is achieved by exploiting both multi-scale features and a large field-of-view. (b) Performance of our proposed models (with training on the augmented `trainval' set) compared to other state-of-the-art methods on the PASCAL VOC 2012 `test' set.}
\label{tb:valIOU}
\end{table}

\paragraph{Evaluation on Validation set} We conduct the majority of our evaluations on the PASCAL `val' set, training our model on the augmented PASCAL `train' set. As shown in \tabref{tb:valIOU}~(a), incorporating the fully connected CRF into our model (denoted by DeepLab-CRF) yields a substantial performance boost, about a 4\% improvement over DeepLab. We note that the work of \citet{krahenbuhl2011efficient} improved the $27.6\%$ result of TextonBoost \citep{shotton2009textonboost} to $29.1\%$, which makes the improvement we report here (from $59.8\%$ to $63.7\%$) all the more impressive.

Turning to qualitative results, we provide visual comparisons between DeepLab and DeepLab-CRF in \figref{fig:ValResults}. Employing a fully connected CRF significantly improves the results, allowing the model to accurately capture intricate object boundaries.

\paragraph{Multi-Scale features} We also exploit the features from the intermediate layers, similarly to \citet{hariharan2014hypercolumns, long2014fully}. As shown in \tabref{tb:valIOU}~(a), adding the multi-scale features to our DeepLab model (denoted as DeepLab-MSc) improves performance by about $1.5\%$, and further incorporating the fully connected CRF (denoted as DeepLab-MSc-CRF) yields about a 4\% improvement. Qualitative comparisons between DeepLab and DeepLab-MSc are shown in \figref{fig:msBoundary}. Leveraging the multi-scale features can slightly refine the object boundaries.

\paragraph{Field of View} The `atrous algorithm' we employed allows us to arbitrarily control the Field-of-View (FOV) of the models by adjusting the input stride, as illustrated in \figref{fig:hole}. In \tabref{tab:fov}, we experiment with several kernel sizes and input strides at the first fully connected layer. The first variant, DeepLab-CRF-7x7, is a direct modification of the VGG-16 net, with kernel size = \by{7}{7} and input stride = 4. This model yields a performance of $67.64\%$ on the `val' set, but it is relatively slow ($1.44$ images per second during training). We have improved the model speed to $2.9$ images per second by reducing the kernel size to \by{4}{4}. We have experimented with two such network variants with different FOV sizes, DeepLab-CRF and DeepLab-CRF-4x4; the latter has a large FOV (\ie, large input stride) and attains better performance. Finally, we employ kernel size \by{3}{3} and input stride = 12, and further reduce the number of filters from 4096 to 1024 in the last two layers. Interestingly, the resulting model, DeepLab-CRF-LargeFOV, matches the performance of the expensive DeepLab-CRF-7x7. At the same time, it is $3.36$ times faster to run and has significantly fewer parameters (20.5M instead of 134.3M).

The performance of several model variants is summarized in \tabref{tb:valIOU}, showing the benefit of exploiting multi-scale features and a large FOV.

\begin{table}[t]\scriptsize
\centering
\begin{tabular}{l | c c c c || c c}
Method & kernel size & input stride & receptive field & \# parameters & mean IOU (\%) & Training speed (img/sec) \\
\hline \hline
DeepLab-CRF-7x7 & \by{7}{7} & 4 & 224 & 134.3M & 67.64 & 1.44 \\
\hline
DeepLab-CRF & \by{4}{4} & 4 & 128 & 65.1M & 63.74 & 2.90 \\
\hline
DeepLab-CRF-4x4 & \by{4}{4} & 8 & 224 & 65.1M & 67.14 & 2.90 \\
\hline
DeepLab-CRF-LargeFOV & \by{3}{3} & 12 & 224 & 20.5M & 67.64 & 4.84 \\
\end{tabular}
\caption{Effect of Field-Of-View. We show the performance (after CRF) and training speed on the PASCAL VOC 2012 `val' set as a function of (1) the kernel size of the first fully connected layer and (2) the input stride value employed in the atrous algorithm.}
\label{tab:fov}
\end{table}

\begin{figure}[ht]
\centering
\begin{tabular}{c c c c c}
\includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128noup_2007_003022.png} &
\includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128noup_2007_001284.png} &
\includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128noup_2007_001289.png} &
\includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128noup_2007_001311.png} &
\includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128noup_2009_000573.png} \\
\includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128ms_2007_003022.png} &
\includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128ms_2007_001284.png} &
\includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128ms_2007_001289.png} &
\includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128ms_2007_001311.png} &
\includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128ms_2009_000573.png} \\
\end{tabular}
\caption{Incorporating multi-scale features improves the boundary segmentation. We show the results obtained by DeepLab and DeepLab-MSc in the first and second row, respectively. Best viewed in color.}
\label{fig:msBoundary}
\end{figure}

\paragraph{Mean Pixel IOU along Object Boundaries}
To quantify the accuracy of the proposed model near object boundaries, we evaluate the segmentation accuracy with an experiment similar to those of \citet{kohli2009robust, krahenbuhl2011efficient}. Specifically, we use the `void' label annotated in the val set, which usually occurs around object boundaries. We compute the mean IOU for those pixels that are located within a narrow band (called a trimap) around `void' labels. As shown in \figref{fig:IOUBoundary}, exploiting the multi-scale features from the intermediate layers and refining the segmentation results by a fully connected CRF significantly improve the results around object boundaries.
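
One way to implement this measure (a sketch using SciPy; constructing the band via a distance transform and excluding the `void' pixels themselves are our assumptions about details not spelled out in the text):
\begin{verbatim}
import numpy as np
from scipy import ndimage

def trimap_mean_iou(pred, gt, band_width, void_label=255, n_classes=21):
    # Pixels within band_width of a `void' pixel, excluding void pixels.
    dist_to_void = ndimage.distance_transform_edt(gt != void_label)
    band = (dist_to_void <= band_width) & (gt != void_label)
    ious = []
    for c in range(n_classes):
        p = (pred == c) & band
        g = (gt == c) & band
        union = np.logical_or(p, g).sum()
        if union > 0:
            ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))
\end{verbatim}
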

\paragraph{Comparison with State-of-the-art} In \figref{fig:val_comparison}, we qualitatively compare our proposed model, DeepLab-CRF, with two state-of-the-art models, FCN-8s \citep{long2014fully} and TTI-Zoomout-16 \citep{mostajabi2014feedforward}, on the `val' set (the results are extracted from their papers). Our model is able to capture the intricate object boundaries.

\paragraph{Reproducibility} We have implemented the proposed methods by extending the excellent Caffe framework \citep{jia2014caffe}. We share our source code, configuration files, and trained models that allow reproducing the results in this paper at a companion web site \url{https://bitbucket.org/deeplab/deeplab-public}.

\begin{figure}[!tbp]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular} {c c c}
\raisebox{1.7cm} {
\begin{tabular}{c c}
\includegraphics[height=0.1\linewidth]{fig/trimap/2007_000363.jpg} &
\includegraphics[height=0.1\linewidth]{fig/trimap/2007_000363.png} \\
\includegraphics[height=0.1\linewidth]{fig/trimap/TrimapWidth2.pdf} &
\includegraphics[height=0.1\linewidth]{fig/trimap/TrimapWidth10.pdf} \\
\end{tabular} } &
\includegraphics[height=0.25\linewidth]{fig/SegPixelAccWithinTrimap.pdf} &
\includegraphics[height=0.25\linewidth]{fig/SegPixelIOUWithinTrimap.pdf} \\
(a) & (b) & (c) \\
\end{tabular}
}
\caption{(a) Some trimap examples (top-left: image; top-right: ground truth; bottom-left: trimap of 2 pixels; bottom-right: trimap of 10 pixels). (b)-(c) Quality of the segmentation within a band around the object boundaries for the proposed methods: (b) pixelwise accuracy and (c) pixel mean IOU.}
\label{fig:IOUBoundary}
\end{figure}

\begin{figure}[t]
\centering
\begin{tabular}{c c}
\includegraphics[height=0.55\linewidth]{fig/comparedWithFCN.pdf} &
\includegraphics[height=0.55\linewidth]{fig/comparedWithRoomOut.pdf} \\
(a) FCN-8s vs. DeepLab-CRF & (b) TTI-Zoomout-16 vs. DeepLab-CRF \\
\end{tabular}
\caption{Comparisons with state-of-the-art models on the val set. First row: images. Second row: ground truths. Third row: other recent models (Left: FCN-8s, Right: TTI-Zoomout-16). Fourth row: our DeepLab-CRF. Best viewed in color.}
\label{fig:val_comparison}
\end{figure}

\paragraph{Test set results} Having set our model choices on the validation set, we evaluate our model variants on the PASCAL VOC 2012 official `test' set. As shown in \tabref{tab:voc2012}, our DeepLab-CRF and DeepLab-MSc-CRF models achieve performance of $66.4\%$ and $67.1\%$ mean IOU\footnote{\url{http://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?challengeid=11&compid=6}}, respectively. Our models outperform all the other state-of-the-art models (specifically, TTI-Zoomout-16 \citep{mostajabi2014feedforward}, FCN-8s \citep{long2014fully}, and MSRA-CFM \citep{dai2014convolutional}). When we increase the FOV of the models, DeepLab-CRF-LargeFOV yields performance of $70.3\%$, the same as DeepLab-CRF-7x7, while its training speed is faster. Furthermore, our best model, DeepLab-MSc-CRF-LargeFOV, attains the best performance of $71.6\%$ by employing both multi-scale features and large FOV.

\begin{table*}[!tbp] \setlength{\tabcolsep}{3pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l||c*{20}{|c}||c|}
\hline
Method & bkg & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mean \\
\hline \hline
MSRA-CFM & - & 75.7 & 26.7 & 69.5 & 48.8 & 65.6 & 81.0 & 69.2 & 73.3 & 30.0 & 68.7 & 51.5 & 69.1 & 68.1 & 71.7 & 67.5 & 50.4 & 66.5 & 44.4 & 58.9 & 53.5 & 61.8 \\
FCN-8s & - & 76.8 & 34.2 & 68.9 & 49.4 & 60.3 & 75.3 & 74.7 & 77.6 & 21.4 & 62.5 & 46.8 & 71.8 & 63.9 & 76.5 & 73.9 & 45.2 & 72.4 & 37.4 & 70.9 & 55.1 & 62.2 \\
TTI-Zoomout-16 & 89.8 & 81.9 & 35.1 & 78.2 & 57.4 & 56.5 & 80.5 & 74.0 & 79.8 & 22.4 & 69.6 & 53.7 & 74.0 & 76.0 & 76.6 & 68.8 & 44.3 & 70.2 & 40.2 & 68.9 & 55.3 & 64.4 \\
\hline
DeepLab-CRF & 92.1 & 78.4 & 33.1 & 78.2 & 55.6 & 65.3 & 81.3 & 75.5 & 78.6 & 25.3 & 69.2 & 52.7 & 75.2 & 69.0 & 79.1 & 77.6 & 54.7 & 78.3 & 45.1 & 73.3 & 56.2 & 66.4 \\
DeepLab-MSc-CRF & 92.6 & 80.4 & 36.8 & 77.4 & 55.2 & 66.4 & 81.5 & 77.5 & 78.9 & 27.1 & 68.2 & 52.7 & 74.3 & 69.6 & 79.4 & 79.0 & 56.9 & 78.8 & 45.2 & 72.7 & 59.3 & 67.1 \\
\href{http://host.robots.ox.ac.uk:8080/anonymous/EKRH3N.html}{DeepLab-CRF-7x7} & 92.8 & 83.9 & 36.6 & 77.5 & 58.4 & {\bf 68.0} & 84.6 & {\bf 79.7} & 83.1 & 29.5 & {\bf 74.6} & 59.3 & 78.9 & 76.0 & 82.1 & 80.6 & {\bf 60.3} & 81.7 & 49.2 & {\bf 78.0} & 60.7 & 70.3 \\
DeepLab-CRF-LargeFOV & 92.6 & 83.5 & 36.6 & {\bf 82.5} & 62.3 & 66.5 & {\bf 85.4} & 78.5 & {\bf 83.7} & 30.4 & 72.9 & {\bf 60.4} & 78.5 & 75.5 & 82.1 & 79.7 & 58.2 & 82.0 & 48.8 & 73.7 & 63.3 & 70.3 \\
DeepLab-MSc-CRF-LargeFOV & {\bf 93.1} & {\bf 84.4} & {\bf 54.5} & 81.5 & {\bf 63.6} & 65.9 & 85.1 & 79.1 & 83.4 & {\bf 30.7} & 74.1 & 59.8 & {\bf 79.0} & {\bf 76.1} & {\bf 83.2} & {\bf 80.8} & 59.7 & {\bf 82.2} & {\bf 50.4} & 73.1 & {\bf 63.7} & {\bf 71.6} \\
\hline
\end{tabular}
}
\caption{Labeling IOU (\%) on the PASCAL VOC 2012 test set, using the trainval set for training.}
\label{tab:voc2012}
\end{table*}

\begin{figure}[!htbp]
\centering
\scalebox{0.82} {
\begin{tabular}{c c c | c c c}
\includegraphics[height=0.12\linewidth]{fig/img/2007_002094.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2007_002094.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2007_002094.png} &
\includegraphics[height=0.12\linewidth]{fig/img/2007_002719.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2007_002719.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2007_002719.png} \\
\includegraphics[height=0.12\linewidth]{fig/img/2007_003957.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2007_003957.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2007_003957.png} &
\includegraphics[height=0.12\linewidth]{fig/img/2007_003991.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2007_003991.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2007_003991.png} \\
\includegraphics[height=0.10\linewidth]{fig/img/2008_001439.jpg} &
\includegraphics[height=0.10\linewidth]{fig/res_none/2008_001439.png} &
\includegraphics[height=0.10\linewidth]{fig/res_crf/2008_001439.png} &
\includegraphics[height=0.12\linewidth]{fig/img/2008_004363.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2008_004363.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2008_004363.png} \\
\includegraphics[height=0.12\linewidth]{fig/img/2008_006229.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2008_006229.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2008_006229.png} &
\includegraphics[height=0.12\linewidth]{fig/img/2009_000412.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2009_000412.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2009_000412.png} \\
\includegraphics[height=0.12\linewidth]{fig/img/2009_000421.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2009_000421.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2009_000421.png} &
\includegraphics[height=0.12\linewidth]{fig/img/2010_001079.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2010_001079.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2010_001079.png} \\
\includegraphics[height=0.12\linewidth]{fig/img/2010_000038.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2010_000038.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2010_000038.png} &
\includegraphics[height=0.12\linewidth]{fig/img/2010_001024.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2010_001024.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2010_001024.png} \\
\includegraphics[height=0.24\linewidth]{fig/img/2007_005331.jpg} &
\includegraphics[height=0.24\linewidth]{fig/res_none/2007_005331.png} &
\includegraphics[height=0.24\linewidth]{fig/res_crf/2007_005331.png} &
\includegraphics[height=0.24\linewidth]{fig/img/2008_004654.jpg} &
\includegraphics[height=0.24\linewidth]{fig/res_none/2008_004654.png} &
\includegraphics[height=0.24\linewidth]{fig/res_crf/2008_004654.png} \\
\includegraphics[height=0.24\linewidth]{fig/img/2007_000129.jpg} &
\includegraphics[height=0.24\linewidth]{fig/res_none/2007_000129.png} &
\includegraphics[height=0.24\linewidth]{fig/res_crf/2007_000129.png} &
\includegraphics[height=0.24\linewidth]{fig/img/2007_002619.jpg} &
\includegraphics[height=0.24\linewidth]{fig/res_none/2007_002619.png} &
\includegraphics[height=0.24\linewidth]{fig/res_crf/2007_002619.png} \\
\includegraphics[height=0.12\linewidth]{fig/img/2007_002852.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2007_002852.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2007_002852.png} &
\includegraphics[height=0.12\linewidth]{fig/img/2010_001069.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2010_001069.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2010_001069.png} \\
\hline
\hline
\includegraphics[height=0.12\linewidth]{fig/img/2007_000491.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2007_000491.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2007_000491.png} &
\includegraphics[height=0.12\linewidth]{fig/img/2007_000529.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2007_000529.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2007_000529.png} \\
\includegraphics[height=0.12\linewidth]{fig/img/2007_000559.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2007_000559.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2007_000559.png} &
\includegraphics[height=0.12\linewidth]{fig/img/2007_000663.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2007_000663.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2007_000663.png} \\
\includegraphics[height=0.12\linewidth]{fig/img/2007_000452.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2007_000452.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2007_000452.png} &
\includegraphics[height=0.12\linewidth]{fig/img/2007_002268.jpg} &
\includegraphics[height=0.12\linewidth]{fig/res_none/2007_002268.png} &
\includegraphics[height=0.12\linewidth]{fig/res_crf/2007_002268.png} \\
\end{tabular}
}
\caption{Visualization results on VOC 2012-val. For each row, we show the input image, the segmentation result delivered by the DCNN (DeepLab), and the refined segmentation result of the Fully Connected CRF (DeepLab-CRF). We show our failure modes in the last three rows. Best viewed in color.}
\label{fig:ValResults}
\end{figure}

\section{Discussion}
\label{sec:discussion}

Our work combines ideas from deep convolutional neural networks and fully-connected conditional random fields, yielding a novel method able to produce semantically accurate predictions and detailed segmentation maps, while being computationally efficient. Our experimental results show that the proposed method significantly advances the state-of-the-art in the challenging PASCAL VOC 2012 semantic image segmentation task.

There are multiple aspects of our model that we intend to refine, such as fully integrating its two main components (CNN and CRF) and training the whole system in an end-to-end fashion, similarly to \citet{Koltun13, chen2014learning, zheng2015crfrnn}. We also plan to experiment with more datasets and apply our method to other sources of data such as depth maps or videos. Recently, we have pursued model training with weakly supervised annotations, in the form of bounding boxes or image-level labels \citep{papandreou15weak}.

At a higher level, our work lies at the intersection of convolutional neural networks and probabilistic graphical models. We plan to further investigate the interplay of these two powerful classes of methods and explore their synergistic potential for solving challenging computer vision tasks.

\subsection*{Acknowledgments}

This work was partly supported by ARO 62250-CS, NIH Grant 5R01EY022247-03, EU Project RECONFIG FP7-ICT-600825 and EU Project MOBOT FP7-ICT-2011-600796. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research. We would like to thank the anonymous reviewers for their detailed comments and constructive feedback.

\subsection*{Paper Revisions}

Here we list the major paper revisions for the convenience of the reader.

\paragraph{v1} Submission to ICLR 2015. Introduces the model DeepLab-CRF, which attains $66.4\%$ on the PASCAL VOC 2012 test set.

\paragraph{v2} Rebuttal for ICLR 2015. Adds the model DeepLab-MSc-CRF, which incorporates multi-scale features from the intermediate layers. DeepLab-MSc-CRF attains $67.1\%$ on the PASCAL VOC 2012 test set.

\paragraph{v3} Camera-ready for ICLR 2015. Experiments with a large Field-Of-View. On the PASCAL VOC 2012 test set, DeepLab-CRF-LargeFOV achieves $70.3\%$. When exploiting both multi-scale features and a large FOV, DeepLab-MSc-CRF-LargeFOV attains $71.6\%$.

\paragraph{v4} Reference to our updated ``DeepLab'' system \cite{chen2016deeplab} with much improved results.

\bibliography{egbib}
\bibliographystyle{iclr2015}

\end{document}