\documentclass[runningheads]{llncs}

\usepackage{graphicx}
\usepackage{amsmath, amssymb}
\usepackage{breqn}
\usepackage{tabularx}
\usepackage{multirow}
\usepackage{grffile}
\usepackage{color}
\usepackage{float}
\usepackage{url}
\usepackage{lineno}
\usepackage[abs]{overpic}
\usepackage{transparent}
\usepackage{cite}
\usepackage[normalem]{ulem}
\usepackage[colorlinks,linkcolor=red,anchorcolor=blue,citecolor=green]{hyperref}

\usepackage[width=122mm,left=12mm,paperwidth=146mm,height=193mm,top=12mm,paperheight=217mm]{geometry}

\newcommand{\etal}{\textit{et al}}
\begin{document}
\pagestyle{headings}
\mainmatter
\def\ECCV18SubNumber{1168}  

\title{Shift-Net: Image Inpainting via Deep Feature Rearrangement} 

\titlerunning{Shift-Net: Image Inpainting via Deep Feature Rearrangement}

\authorrunning{Zhaoyi Yan \etal}



\author{Zhaoyi Yan$^{1}$, Xiaoming Li$^{1}$, Mu Li$^{2}$, Wangmeng Zuo$^{1}$\thanks{Corresponding author.}, Shiguang Shan$^{3,4}$\\
{\tt\small yanzhaoyi@outlook.com, csxmli@hit.edu.cn, csmuli@comp.polyu.edu.hk,}\\
{\tt\small wmzuo@hit.edu.cn, sgshan@ict.ac.cn}
\small\institute{$^1$School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China\\
$^2$Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China\\
$^3$Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), \\
Institute of Computing Technology, CAS, Beijing 100190, China \\
$^4$CAS Center for Excellence in Brain Science and Intelligence Technology \\}
}


\maketitle

\begin{abstract}
    Deep convolutional networks (CNNs) have exhibited their potential in image inpainting for producing plausible results.
However, in most existing methods, e.g., the context encoder, the missing parts are predicted by propagating the surrounding convolutional features through a fully connected layer, which tends to produce semantically plausible but blurry results.
In this paper, we introduce a special shift-connection layer to the U-Net architecture, namely Shift-Net, for filling in missing regions of any shape with sharp structures and fine-detailed textures.
To this end, the encoder feature of the known region is shifted to serve as an estimation of the missing parts.
A guidance loss is introduced on the decoder feature to minimize the distance between the decoder feature after the fully connected layer and the ground-truth encoder feature of the missing parts.
With such a constraint, the decoder feature in the missing region can be used to guide the shift of the encoder feature in the known region.
An end-to-end learning algorithm is further developed to train the Shift-Net.
Experiments on the Paris StreetView and Places datasets demonstrate the efficiency and effectiveness of our Shift-Net in producing sharper, fine-detailed, and visually plausible results.
The code and pre-trained models are available at \url{https://github.com/Zhaoyi-Yan/Shift-Net}.
\keywords{Inpainting, feature rearrangement, deep learning}
\end{abstract}


\section{Introduction}\label{section1}
Image inpainting is the process of filling in missing regions with plausible hypotheses, and can be used in many real-world applications such as removing distracting objects, repairing corrupted or damaged parts, and completing occluded regions.
For example, when taking a photo, it is rarely the case that you are satisfied with what you get directly.
Distracting scene elements, such as irrelevant people or disturbing objects, are generally inevitable yet unwanted by the users.
In these cases, image inpainting can serve as a remedy to remove these elements and fill in with plausible content.

\begin{figure}[t]
  \center
\setlength\tabcolsep{0.5pt}
\begin{tabular}{cccc}
    \includegraphics[height=0.24\textwidth]{figs/fig1/1_input} &
    \includegraphics[height=0.24\textwidth]{figs/fig1/1_pm} &
    \includegraphics[height=0.24\textwidth]{figs/fig1/1_CE} &
    \includegraphics[height=0.24\textwidth]{figs/fig1/1_output} \\
(a) & (b) & (c) & (d) \\
\end{tabular}
\vspace{-1em}
 \caption{Qualitative comparison of inpainting methods. Given (a) an image with a missing region,
 we present the inpainting results by (b) Content-Aware Fill~\cite{Content-Aware-Fill},
 (c) {context encoder}~\cite{pathak2016context}, and (d) our Shift-Net.}
  \vspace{-5mm}
  \label{fig:teaser}
\end{figure}

Despite decades of studies, image inpainting remains a very challenging problem in computer vision and graphics.
In general, there are two requirements for the image inpainting result: (i) global semantic structure and (ii) fine detailed textures.
Classical exemplar-based inpainting methods, e.g., PatchMatch~\cite{barnes2009patchmatch}, gradually synthesize the content of missing parts by searching similar patches from known region.
Even though such methods are promising in filling in high-frequency texture details, they fail to capture the global structure of the image (See Fig.~\ref{fig:teaser}(b)).
In contrast, deep convolutional networks (CNNs) have also been suggested to predict the missing parts conditioned on their surroundings~\cite{pathak2016context,yang2017high}.
Benefiting from large-scale training data, they can produce semantically plausible inpainting results.
However, the existing CNN-based methods usually complete the missing parts by propagating the surrounding convolutional features through a fully connected layer (i.e., the bottleneck), making the inpainting results sometimes blurry and lacking in fine texture details.
The introduction of an adversarial loss is helpful in improving the sharpness of the result, but cannot essentially address this issue (see Fig.~\ref{fig:teaser}(c)).



In this paper, we present a novel CNN, namely Shift-Net, to take into account the advantages of both exemplar-based and CNN-based methods for image inpainting.
Our Shift-Net adopts the U-Net architecture by adding a special shift-connection layer.
In exemplar-based inpainting~\cite{criminisi2003object}, the patch-based replication and filling process are iteratively performed to grow the texture and structure from the known region to the missing parts.
The patch processing order plays a key role in yielding a plausible inpainting result~\cite{le2011examplar, xu2010image}.
We note that CNN is effective in predicting the image structure and semantics of the missing parts.
Guided by the salient structure produced by the CNN, the filling process in our Shift-Net can be finished concurrently by introducing a shift-connection layer to connect the encoder feature of the known region and the decoder feature of the missing parts.
Thus, our Shift-Net inherits the advantages of exemplar-based and CNN-based methods, and can produce inpainting result with both plausible semantics and fine detailed textures (See Fig.~\ref{fig:teaser}(d)).



Guidance loss, reconstruction loss, and adversarial learning are incorporated to guide the shift operation and to learn the model parameters of Shift-Net.
To ensure that the decoder feature can serve as a good guidance, a guidance loss is introduced to enforce the decoder feature to be close to the ground-truth encoder feature.
Moreover, $\ell_1$ and adversarial losses are also considered to reconstruct the missing parts and restore more detailed textures.
By minimizing the model objective, our Shift-Net can be end-to-end learned with a training set.
Experiments are conducted on the Paris StreetView dataset~\cite{doersch2012makes}, the Places dataset~\cite{zhou2017places}, and real world images.
The results show that our Shift-Net can handle missing regions with any shape, and is effective in producing sharper, fine-detailed, and visually plausible results (See Fig.~\ref{fig:teaser}(d)).



Besides, Yang \etal.~\cite{yang2017high} also suggest a multi-scale neural patch synthesis (MNPS) approach to combining CNN-based and exemplar-based methods.
Their method includes two stages, where an encoder-decoder network is used to generate an initial estimation in the first stage.
By considering both global content and texture losses, a joint optimization model on VGG-19~\cite{simonyan2014very} is minimized to generate the fine-detailed result in the second stage.
Even though MNPS~\cite{yang2017high} yields encouraging results, it is very time-consuming and takes about $40,000$ milliseconds (ms) to process an image of size $256 \times 256$.
In contrast, our Shift-Net can achieve comparable or better results (See Fig.~\ref{fig:Paris} and Fig.~\ref{fig:Places} for several examples) and only takes about $80$ ms.
Taking both effectiveness and efficiency into account, our Shift-Net can provide a favorable solution to combine exemplar-based and CNN-based inpainting for improving performance.



To sum up, the main contributions of this work are three-fold:
\begin{enumerate}
  \item By introducing the shift-connection layer to U-Net, a novel Shift-Net architecture is developed to efficiently combine CNN-based and exemplar-based inpainting.
\item The guidance, reconstruction, and adversarial losses are introduced to train our Shift-Net. Even with the deployment of shift operation, all the network parameters can be learned in an end-to-end manner.
\item Our Shift-Net achieves state-of-the-art results in comparison with~\cite{barnes2009patchmatch,pathak2016context,yang2017high} and performs favorably in generating fine-detailed textures and visually plausible results.

\end{enumerate}


\begin{figure*}[!t]
  \centering
\begin{overpic}[scale=.12]{Shift-Network-V7.pdf} 

   \put(78,18){\scriptsize {${\Phi_{l}(I)}$}}
   \put(212,22){\scriptsize {${\Phi_{L-l}(I)}$}}
   \put(256,8){\scriptsize {${\Phi_{L-l}^{\text{\emph{shift}}}(I)}$}}

\end{overpic}
\vspace{-1em}
   \caption{The architecture of our model. We add the shift-connection layer at the resolution of $32\times32$.}
   \label{fig:ShiftNetwork}
   \vspace{-1em}
\end{figure*}



\section{Related Work}\label{section2}
In this section, we briefly review the work on each of the three sub-fields, i.e., exemplar-based inpainting, CNN-based inpainting, and style transfer, and especially focus on the work most relevant to ours.



\vspace{-1em}
\subsection{Exemplar-based inpainting}
In exemplar-based inpainting~\cite{barnes2009patchmatch,barnes2010generalized,criminisi2003object,drori2003fragment,
efros1999texture,jia2003image,jia2004inference,komodakis2006image,komodakis2007image,le2011examplar,pritch2009shift,
simakov2008summarizing,sun2005image,wexler2004space,wexler2007space,xu2010image}, the completion is conducted from the exterior to the interior of the missing part by searching and copying best matching patches from the known region.
For fast patch search, Barnes \etal.~suggest a PatchMatch algorithm~\cite{barnes2009patchmatch} to exploit the image coherency, and generalize it for finding k-nearest neighbors~\cite{barnes2010generalized}.
Generally, exemplar-based inpainting is superior in synthesizing textures, but is not well suited for preserving edges and structures.
For better recovery of image structure, several patch priority measures have been proposed to fill in structural patches first~\cite{criminisi2003object,le2011examplar, xu2010image}.
Global image coherence has also been introduced to the Markov random field (MRF) framework for improving visual quality~\cite{komodakis2006image, pritch2009shift, wexler2004space}.
However, these methods only work well on images with simple structures, and may fail in handling images with complex objects and scenes.
Besides, in most exemplar-based inpainting methods~\cite{komodakis2006image, komodakis2007image, pritch2009shift}, the missing part is recovered as the shift representation of the known region in pixel/region level, which also motivates our shift operation on convolution feature representation.
\vspace{-1em}




\subsection{CNN-based inpainting}
Recently, deep CNNs have achieved great success in image inpainting.
Originally, CNN-based inpainting was confined to small and thin masks~\cite{kohler2014mask, ren2015shepard, xie2012image}.
Pathak \etal.~\cite{pathak2016context} present an encoder-decoder~(i.e., context encoder) network to predict the missing parts, where an adversarial loss is adopted in training to improve the visual quality of the inpainted image.
Even though the context encoder is effective in capturing image semantics and global structure, it completes the input image with only one forward pass and performs poorly in generating fine-detailed textures.
Semantic image inpainting is introduced to fill in the missing part conditioned on the known region for images from a specific semantic class~\cite{yeh2017semantic}.
In order to obtain globally consistent result with locally realistic details, global and local discriminators have been proposed in image inpainting~\cite{IizukaSIGGRAPH2017} and face completion~\cite{li2017generative}.
For better recovery of fine details, MNPS is presented to combine exemplar-based and CNN-based inpainting~\cite{yang2017high}.


\vspace{-1em}
\subsection{Style transfer}
Image inpainting can be treated as an extension of style transfer, where both the content and style (texture) of the missing part are estimated and transferred from the known region.
In recent years, style transfer~\cite{chen2016fast, dumoulin2016learned, gatys2015neural, gatys2016controlling, huang2017arbitrary, johnson2016perceptual, li2016combining, luan2017deep, ulyanov2016texture} has been an active research topic.
Gatys \etal.~\cite{gatys2015neural} show that one can transfer style and texture of the style image to the content image by solving an optimization objective defined on an existing CNN.
Instead of the Gram matrix, Li \etal.~\cite{li2016combining} apply the MRF regularizer to style transfer to suppress distortions and smears.
In~\cite{chen2016fast}, local matching is performed on the convolution layer of the pre-trained network to combine content and style, and an inverse network is then deployed to generate the image from feature representation.



\section{Method}\label{section3}
Given an input image $I$, image inpainting aims to restore the ground-truth image $I^{gt}$ by filling in the missing part.
To this end, we adopt U-Net~\cite{ronneberger2015u} as the baseline network.
By incorporating the guidance loss and the shift operation, we develop a novel Shift-Net for better recovery of semantic structure and fine-detailed textures.
In the following, we first introduce the guidance loss and Shift-Net, and then describe the model objective and learning algorithm.

\vspace{-1em}
\subsection{Guidance loss on decoder feature}\label{section3.1}
The U-Net consists of an encoder and a symmetric decoder, where skip connections are introduced to concatenate the features from each layer of the encoder with those of the corresponding layer of the decoder.
Such skip connections make it convenient to utilize the information before and after the bottleneck, which is valuable for image inpainting and other low-level vision tasks in capturing localized visual details~\cite{isola2016image, zhu2017unpaired}.
The architecture of the U-Net adopted in this work is shown in Fig.~\ref{fig:ShiftNetwork}. Please refer to the supplementary material for more details on network parameters.



Let $\Omega$ be the missing region and $\overline{\Omega}$ be the known region.
Given a U-Net of $L$ layers, $\Phi_{l}(I)$ is used to denote the encoder feature of the $l$-th layer, and $\Phi_{L-l}(I)$ the decoder feature of the $(L-l)$-th layer.
To recover $I^{gt}$, we expect that $\Phi_{l}(I)$ and $\Phi_{L-l}(I)$ convey almost all the information in $\Phi_{l}(I^{gt})$.
For any location $\mathbf{y} \in \Omega$, we have $\left( \Phi_{l}(I) \right)_{\mathbf{y}} \approx 0$.
Thus, $\left( \Phi_{L-l}(I) \right)_{\mathbf{y}}$ should convey equivalent information of $\left( \Phi_{l}(I^{gt}) \right)_{\mathbf{y}}$.



In this work, we suggest explicitly modeling the relationship between $\left( \Phi_{L-l}(I) \right)_{\mathbf{y}}$ and $\left( \Phi_{l}(I^{gt}) \right)_{\mathbf{y}}$ by introducing the following guidance loss,
\begin{equation}\label{loss_guidance}
\small
{\cal L}_g = \sum_{\mathbf{y} \in \Omega} \left\| \left( \Phi_{L-l}(I) \right)_{\mathbf{y}} - \left( \Phi_{l}(I^{gt}) \right)_{\mathbf{y}} \right\|_2^2.
\end{equation}
We note that $\left( \Phi_{l}(I) \right)_{\mathbf{x}} \approx \left( \Phi_{l}(I^{gt}) \right)_{\mathbf{x}}$ for any $\mathbf{x} \in \overline{\Omega}$.
Thus the guidance loss is only defined on $\mathbf{y} \in {\Omega}$ to make $\left( \Phi_{L-l}(I) \right)_{\mathbf{y}} \approx \left( \Phi_{l}(I^{gt}) \right)_{\mathbf{y}}$.
By concatenating $\Phi_{l}(I)$ and $\Phi_{L-l}(I)$, all information in $\Phi_{l}(I^{gt})$ can be approximately obtained.
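
For concreteness, a minimal NumPy sketch of Eqn.~(\ref{loss_guidance}) is given below; the array layout (features stored as $C \times H \times W$ arrays and a binary mask of $\Omega$) and the function name are illustrative only, not part of our implementation.

\begin{verbatim}
import numpy as np

def guidance_loss(dec_feat, enc_feat_gt, mask):
    """Sum of squared differences over the missing region Omega.

    dec_feat    : (C, H, W) decoder feature Phi_{L-l}(I)
    enc_feat_gt : (C, H, W) ground-truth encoder feature Phi_l(I^gt)
    mask        : (H, W) binary map, 1 inside Omega, 0 outside
    """
    diff = (dec_feat - enc_feat_gt) ** 2   # element-wise squared error
    per_loc = diff.sum(axis=0)             # sum over channels -> (H, W)
    return float((per_loc * mask).sum())   # keep only y in Omega
\end{verbatim}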


Experiment on deep feature visualization is further conducted to illustrate the relation between $\left( \Phi_{L-l}(I) \right)_{\mathbf{y}}$ and $\left( \Phi_{l}(I^{gt}) \right)_{\mathbf{y}}$.
For visualizing $\{ \left( \Phi_{l}(I^{gt}) \right)_{\mathbf{y}} | {\mathbf{y}} \in \Omega \}$, we adopt the method in~\cite{mahendran2015understanding} by solving the optimization problem
\begin{equation}\label{vis_gt}
\small
H^{gt} = \arg \min_{H} \sum_{{\mathbf{y}} \in \Omega} \left\| \left( \Phi_{l}(H) \right)_{\mathbf{y}} - \left( \Phi_{l}(I^{gt}) \right)_{\mathbf{y}}\right\|_2^2.
\end{equation}
Analogously, $\{ \left( \Phi_{L-l}(I) \right)_{\mathbf{y}} | {\mathbf{y}} \in \Omega \}$ is visualized by
\begin{equation}\label{vis_de}
\small
H^{de} = \arg \min_{H} \sum_{{\mathbf{y}} \in \Omega} \left\| \left( \Phi_{l}(H) \right)_{\mathbf{y}} - \left( \Phi_{L-l}(I) \right)_{\mathbf{y}} \right\|_2^2.
\end{equation}
Figs.~\ref{fig:Visualization}(b)(c) show the visualization results of $H^{gt}$ and $H^{de}$.
With the introduction of the guidance loss, $H^{de}$ can obviously serve as a reasonable estimation of $H^{gt}$, and U-Net works well in recovering image semantics and structures.
However, compared with $H^{gt}$ and $I^{gt}$, the result $H^{de}$ is blurry, which is consistent with the poor performance of CNN-based inpainting in recovering fine textures~\cite{yang2017high}.
Finally, we note that the guidance loss is helpful in constructing an explicit relation between $\left( \Phi_{L-l}(I) \right)_{\mathbf{y}}$ and $\left( \Phi_{l}(I^{gt}) \right)_{\mathbf{y}}$.
In the next section, we will explain how to utilize such a property to obtain a better estimation of $\left( \Phi_{l}(I^{gt}) \right)_{\mathbf{y}}$ and to enhance the inpainting result.
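
The sketch below illustrates one way to solve Eqn.~(\ref{vis_gt}) (and its counterpart for the decoder feature) by gradient descent on the image, in the spirit of~\cite{mahendran2015understanding}; the callable \texttt{phi\_l} standing for the $l$-th-layer feature extractor, the zero initialization, the step count, and the learning rate are all assumptions made for illustration.

\begin{verbatim}
import torch

def invert_features(phi_l, target, mask, shape, steps=200, lr=0.1):
    """Find an image H whose l-th-layer features match `target`
    inside Omega (given by `mask`), as in Eqns. (2)/(3)."""
    H = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([H], lr=lr)
    for _ in range(steps):
        loss = (((phi_l(H) - target) ** 2) * mask).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return H.detach()
\end{verbatim}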


\begin{figure}[t]
  \center
\setlength\tabcolsep{0.5pt}
\begin{tabular}{cccc}
    \includegraphics[height=0.24\textwidth]{figs/fig_v/1} &
    \includegraphics[height=0.24\textwidth]{figs/fig_v/2} &
    \includegraphics[height=0.24\textwidth]{figs/fig_v/3} &
    \includegraphics[height=0.24\textwidth]{figs/fig_v/4} \\
 (a) & (b) & (c) & (d) \\
\end{tabular}
  \vspace{-1mm}
\caption{Visualization of features learned by our model. Given (a) an input image, (b) is the visualization of $\left( \Phi_{l}(I^{gt})\right)_{\mathbf{y}}$ (i.e., $H^{gt}$),
(c) shows the result of $\left( \Phi_{L-l}(I) \right)_{\mathbf{y}}$ (i.e., $H^{de}$) and
(d) demonstrates the effect of $\left( \Phi_{L-l}^{\text{\emph{shift}}}(I) \right)_{\mathbf{y}}$.}
  \vspace{-5mm}
  \label{fig:Visualization}
\end{figure}



\vspace{-1em}
\subsection{Shift operation and Shift-Net}\label{section3.2} In exemplar-based inpainting, it is generally assumed that the missing part is the spatial rearrangement of the pixels/patches in the known region.
For each pixel/patch localized at $\mathbf{y}$ in the missing part, exemplar-based inpainting explicitly or implicitly finds a shift vector $\mathbf{u}_{\mathbf{y}}$, and recovers $(I)_{\mathbf{y}}$ with $(I)_{\mathbf{y} + \mathbf{u}_{\mathbf{y}} }$, where ${\mathbf{y}+\mathbf{u}_{\mathbf{y}} } \in \overline{\Omega}$ lies in the known region.
The pixel value $(I)_{\mathbf{y}}$ is unknown before inpainting.
Thus, the shift vectors are usually obtained progressively from the exterior to the interior of the missing part, or by solving an MRF model that considers global image coherence.
However, these methods may fail in recovering complex image semantics and structures.


We introduce a special shift-connection layer in U-Net, which takes $\Phi_{l}(I)$ and $\Phi_{L-l}(I)$ to obtain an updated estimation of $\Phi_{l}(I^{gt})$.

For each $\left( \Phi_{L-l}(I) \right)_{\mathbf{y}}$ with $\mathbf{y} \in \Omega$, its nearest neighbor in $\left( \Phi_{l}(I) \right)_{\mathbf{x}}$ ($\mathbf{x} \in \overline{\Omega}$) can be obtained independently by,
\begin{equation}\label{eqn:nn}
\mathbf{x}^*(\mathbf{y}) = \arg \max_{\mathbf{x} \in \overline{\Omega}}
\frac{\left \langle \left( \Phi_{L-l}(I) \right)_{\mathbf{y}}, \left( \Phi_{l}(I) \right)_{\mathbf{x}} \right \rangle}
{\|\left( \Phi_{L-l}(I) \right)_{\mathbf{y}}\|_2  \|\left( \Phi_{l}(I) \right)_{\mathbf{x}}\|_2},
\end{equation}
and the shift vector is defined as $\mathbf{u}_{\mathbf{y}} = \mathbf{x}^*(\mathbf{y}) - \mathbf{y}$.
Similar to~\cite{li2016combining}, the nearest neighbor searching can be computed as a convolutional layer.
Then, we update the estimation of $\left( \Phi_{l}(I^{gt}) \right)_{\mathbf{y}}$ as the spatial rearrangement of the encoder feature $\left( \Phi_{l}(I) \right)_{\mathbf{x}}$,
\begin{equation}\label{eqn:shift}
\left( \Phi_{L-l}^{\text{\emph{shift}}}(I) \right)_{\mathbf{y}} = \left( \Phi_{l}(I) \right)_{\mathbf{y} + \mathbf{u}_{\mathbf{y}}}.
\end{equation}
See Fig.~\ref{fig:Visualization}(d) for visualization.
Finally, as shown in Fig.~\ref{fig:ShiftNetwork}, the convolution features $\Phi_{L-l}(I)$, $\Phi_{l}(I)$ and $\Phi_{L-l}^{\text{\emph{shift}}}(I)$ are concatenated and taken as inputs to the $(L-l+1)$-th layer, resulting in our Shift-Net.
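
Although the nearest-neighbor searching is implemented as a convolutional layer in our network, a dense NumPy sketch of Eqns.~(\ref{eqn:nn}) and~(\ref{eqn:shift}) may help clarify the operation; the array layout and the names used below are illustrative only.

\begin{verbatim}
import numpy as np

def shift_feature(dec_feat, enc_feat, mask):
    """Compute Phi^shift_{L-l}(I) by nearest-neighbor search.

    dec_feat : (C, H, W) decoder feature Phi_{L-l}(I)
    enc_feat : (C, H, W) encoder feature Phi_l(I)
    mask     : (H, W) binary map, 1 inside Omega, 0 in known region
    """
    C, H, W = enc_feat.shape
    dec = dec_feat.reshape(C, -1).T             # rows of shape (C,)
    enc = enc_feat.reshape(C, -1).T
    miss = np.flatnonzero(mask.ravel() == 1)    # locations y in Omega
    known = np.flatnonzero(mask.ravel() == 0)   # locations x outside

    # l2-normalize so the inner product equals cosine similarity
    d = dec[miss] / (np.linalg.norm(dec[miss], axis=1,
                                    keepdims=True) + 1e-8)
    e = enc[known] / (np.linalg.norm(enc[known], axis=1,
                                     keepdims=True) + 1e-8)
    nn = known[np.argmax(d @ e.T, axis=1)]      # x*(y) for y in Omega

    shift = np.zeros_like(enc_feat).reshape(C, -1)
    shift[:, miss] = enc_feat.reshape(C, -1)[:, nn]  # Eqn. (5)
    return shift.reshape(C, H, W)
\end{verbatim}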




The shift operation differs from exemplar-based inpainting in several aspects.
(i) While exemplar-based inpainting operates on pixels/patches, the shift operation is performed in the deep encoder feature domain, which is learned end-to-end from training data.
(ii) In exemplar-based inpainting, the shift vectors are obtained either by solving an optimization problem or in a particular order. As for the shift operation, with the guidance of $\Phi_{L-l}(I)$, all the shift vectors can be computed in parallel.
(iii) For exemplar-based inpainting, neither patch processing orders nor global image coherence is sufficient for preserving complex structures and semantics. In contrast, in the shift operation $\Phi_{L-l}(I)$ is learned from large-scale data and is more powerful in capturing global semantics.
(iv) In exemplar-based inpainting, after obtaining the shift vectors, the completion result can be directly obtained as the shift representation of the known region. As for the shift operation, we take the shift representation $\Phi_{L-l}^{\text{\emph{shift}}}(I)$ together with $\Phi_{L-l}(I)$ and $\Phi_{l}(I)$ as inputs to the $(L-l+1)$-th layer of U-Net, and adopt a data-driven manner to learn an appropriate model for image inpainting.
Moreover, even with the introduction of the shift-connection layer, all the model parameters in our Shift-Net can be learned end-to-end from training data.
Thus, our Shift-Net naturally inherits the advantages of exemplar-based and CNN-based inpainting.


\vspace{-1em}
\subsection{Model objective and learning}\label{section3.3}

\subsubsection{Objective.}\label{section3.3.1}

Denote by $\Phi(I; \mathbf{W})$ the output of our Shift-Net, where $\mathbf{W}$ denotes the model parameters to be learned.
Besides the guidance loss, the $\ell_1$ loss and the adversarial loss are also included to train our Shift-Net.
The $\ell_1$ loss is defined as,
\begin{equation}\label{eqn:l2loss}
{\cal L}_{\ell_{1}} = \|\Phi(I; \mathbf{W}) - I^{gt}\|_1,
\end{equation}
which constrains the inpainting result to approximate the ground-truth image.



Recently, adversarial learning has been adopted in many low-level vision~\cite{ledig2016photo} and image generation tasks~\cite{isola2016image, radford2015unsupervised}, and exhibits its superiority in restoring high-frequency details and photo-realistic textures.
As for image inpainting, we use $p_{data}(I^{gt})$ to denote the distribution of ground-truth images, and $p_{miss}(I)$ to denote the distribution of input images.
The adversarial loss is then defined as,
\begin{align}\label{eqn:ganloss}
{\cal L}_{adv} = & \min_{\mathbf{W}} \max_{D}\, \mathbb{E}_{I^{gt} \sim p_{data}({I^{gt}})} [\log D({I^{gt}})] \nonumber \\
 & + \mathbb{E}_{I \sim p_{miss}({I})} [\log ( 1 - D(\Phi(I; \mathbf{W})) )],
\end{align}
where $D(\cdot)$ denotes the discriminator to predict the probability that an image is from the distribution $p_{data}(I^{gt})$.



Taking guidance, $\ell_1$, and adversarial losses into account, the overall objective of our Shift-Net is defined as,
\begin{equation}\label{eqn:objective}
{\cal L} = {\cal L}_{\ell_{1}} + \lambda_{g} {\cal L}_g + \lambda_{adv} {\cal L}_{adv},
\end{equation}
where $\lambda_{g}$  and $\lambda_{adv}$ are the tradeoff parameters for the guidance and adversarial losses, respectively.


\subsubsection{Learning.}\label{section3.3.2}

Given a training set $\{ (I, I^{gt}) \}$, the Shift-Net is trained by minimizing the objective in Eqn.~(\ref{eqn:objective}) via back-propagation.
We note that the Shift-Net and the discriminator are trained in an adversarial manner.
The Shift-Net $\Phi(I; \mathbf{W})$ is updated by minimizing the adversarial loss ${\cal L}_{adv}$, while the discriminator $D$ is updated by maximizing ${\cal L}_{adv}$.
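
A PyTorch-style sketch of one alternating update is given below. It is a simplified illustration rather than our exact training code: we assume (hypothetically) that the generator returns the inpainted image together with the decoder feature and the ground-truth encoder feature required by the guidance loss, and the generator is updated with the common non-saturating form of the adversarial loss.

\begin{verbatim}
import torch
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, I, I_gt, mask,
               lambda_g=0.01, lambda_adv=0.002):
    # Assumed generator interface: inpainted image, Phi_{L-l}(I),
    # and Phi_l(I^gt) used by the guidance loss.
    out, dec_feat, enc_feat_gt = G(I, I_gt)

    # Update the discriminator D (maximize L_adv).
    real, fake = D(I_gt), D(out.detach())
    loss_D = F.binary_cross_entropy(real, torch.ones_like(real)) \
           + F.binary_cross_entropy(fake, torch.zeros_like(fake))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Update the Shift-Net G (minimize the objective of Eqn. (8)).
    loss_l1 = F.l1_loss(out, I_gt)
    loss_g = ((dec_feat - enc_feat_gt) ** 2 * mask).sum()
    loss_adv = F.binary_cross_entropy(D(out), torch.ones_like(real))
    loss_G = loss_l1 + lambda_g * loss_g + lambda_adv * loss_adv
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_G.item(), loss_D.item()
\end{verbatim}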




Due to the introduction of the shift-connection layer, we should modify the computation of the gradient w.r.t. the $l$-th layer feature $F_l = \Phi_{l}(I)$.
To avoid confusion, we use $F_l^{skip}$ to denote the feature $F_l$ after skip connection, and of course we have $F_l^{skip} = F_l$.
According to Eqn.~(\ref{eqn:shift}), the relation between ${\Phi_{L-l}^{\text{\emph{shift}}}(I)}$ and $\Phi_{l}(I)$ can be written as,
\begin{equation}\label{BPEqn1}
{\Phi_{L-l}^{\text{\emph{shift}}}(I)} = \mathbf{P} \Phi_{l}(I),
\end{equation}
where $\mathbf{P}$ denotes the $\{0, 1\}$ shift matrix, in which each row contains exactly one element equal to 1.
Thus, the gradient with respect to $\Phi_{l}(I)$ consists of three terms,
\begin{equation}\label{eqn:gradient}
\frac{\partial {\cal L}}{\partial F_l} \!=\! \frac{\partial {\cal L}}{\partial F_l^{skip}}
\!+\! \frac{\partial {\cal L}}{\partial F_{l+1}} \frac{\partial F_{l+1}} {\partial F_l}
\!+\! \mathbf{P}^T \frac{\partial {\cal L}}{\partial \Phi_{L-l}^{\text{\emph{shift}}}(I)},
\end{equation}
where the computation of the first two terms is the same as in U-Net, and the gradient with respect to ${ \Phi_{L-l}^{\text{\emph{shift}}}(I)}$ can also be directly computed.
Thus, our Shift-Net can also be end-to-end trained to learn the model parameters $\mathbf{W}$.
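
The third term of Eqn.~(\ref{eqn:gradient}) simply scatters the gradient received by $\Phi_{L-l}^{\text{\emph{shift}}}(I)$ back to the encoder locations selected by the nearest-neighbor search. The following toy NumPy check, with hypothetical sizes and indices, verifies this equivalence.

\begin{verbatim}
import numpy as np

# If Phi_shift = P @ F with a {0,1} shift matrix P (one 1 per row),
# the gradient passed back to F is P^T @ dL/dPhi_shift, i.e. a
# scatter-add onto the selected source rows.
n_missing, n_total, C = 3, 6, 4
nn_index = np.array([5, 2, 2])              # hypothetical x*(y) per y

P = np.zeros((n_missing, n_total))
P[np.arange(n_missing), nn_index] = 1.0     # selection (shift) matrix

grad_shift = np.random.randn(n_missing, C)  # dL / d Phi_shift
grad_F = P.T @ grad_shift                   # third term of Eqn. (10)

ref = np.zeros((n_total, C))
np.add.at(ref, nn_index, grad_shift)        # equivalent scatter-add
assert np.allclose(grad_F, ref)
\end{verbatim}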



\section{Experiments}\label{section4}
We evaluate our method on two datasets: Paris StreetView~\cite{doersch2012makes} and six scenes from Places365-Standard dataset~\cite{zhou2017places}.
The Paris StreetView contains 14,900 training images and 100 test images.
There are 1.6 million training images from 365 scene categories in the Places365-Standard.
The scene categories selected from Places365-Standard are \emph{butte}, \emph{canyon}, \emph{field}, \emph{synagogue}, \emph{tundra} and \emph{valley}.
Each category has 5,000 training images, 900 test images and 100 validation images.
Our model is learned using the training set and tested on the validation set.
For both Paris StreetView and Places, we resize each training image so that its smaller dimension is 350 pixels, and randomly crop a subimage of size $256\times256$ as input to our model.
Moreover, our method is also tested on real-world images for removing objects and distractors.
Our Shift-Net is optimized using the Adam algorithm~\cite{kingma2015adam} with a learning rate of $2 \times {10^{ - 4}}$ and ${\beta _1} = 0.5$.
The batch size is $1$ and the training is stopped after $30$ epochs.
Data augmentation such as flipping is also adopted during training.
The tradeoff parameters are set as $\lambda_{g} = 0.01$ and $\lambda_{adv} = 0.002$.
It takes about one day to train our Shift-Net on an Nvidia Titan X Pascal GPU.
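
For reference, a hypothetical data pipeline and optimizer setup consistent with the settings above can be written as follows; \texttt{shift\_net} and \texttt{discriminator} are placeholders for the two networks, and $\beta_2$ is kept at the common default of 0.999 (an assumption, as only $\beta_1$ is specified above).

\begin{verbatim}
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(350),            # smaller side -> 350
    transforms.RandomCrop(256),        # random 256x256 sub-image
    transforms.RandomHorizontalFlip(), # flipping as augmentation
    transforms.ToTensor(),
])

def make_optimizers(shift_net, discriminator):
    # Adam with lr = 2e-4 and beta1 = 0.5
    opt_G = torch.optim.Adam(shift_net.parameters(),
                             lr=2e-4, betas=(0.5, 0.999))
    opt_D = torch.optim.Adam(discriminator.parameters(),
                             lr=2e-4, betas=(0.5, 0.999))
    return opt_G, opt_D
\end{verbatim}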

\vspace{-1em}



\begin{figure*}[!t]
  \center
\setlength\tabcolsep{1.5pt}
\begin{tabular}{ccccc}
  \includegraphics[width=.18\textwidth]{figs/fig2/hole_white/004_im} &
  \includegraphics[width=.18\textwidth]{figs/fig2/patchMatch/pM_0004} &
  \includegraphics[width=.18\textwidth]{figs/fig2/CE/fake_0004} &
  \includegraphics[width=.18\textwidth]{figs/fig2/highRes/hr_0004} &
  \includegraphics[width=.18\textwidth]{figs/fig2/ours/004_im}\\


  \includegraphics[width=.18\textwidth]{figs_supp/fig1_Paris/hole_white/input_0085} &
  \includegraphics[width=.18\textwidth]{figs_supp/fig1_Paris/patchMatch/pM_0085} &
  \includegraphics[width=.18\textwidth]{figs_supp/fig1_Paris/CE/fake_0085} &
  \includegraphics[width=.18\textwidth]{figs_supp/fig1_Paris/highRes/hr_0085} &
  \includegraphics[width=.18\textwidth]{figs_supp/fig1_Paris/ours/tc_g1_085_im}\\

  \includegraphics[width=.18\textwidth]{figs_supp/fig1_Paris/hole_white/input_0010} &
  \includegraphics[width=.18\textwidth]{figs_supp/fig1_Paris/patchMatch/pM_0010}  &
  \includegraphics[width=.18\textwidth]{figs_supp/fig1_Paris/CE/fake_0010} &
  \includegraphics[width=.18\textwidth]{figs_supp/fig1_Paris/highRes/hr_0010} &
  \includegraphics[width=.18\textwidth]{figs_supp/fig1_Paris/ours/g100_010_im}\\

(a)  & (b) & (c)  & (d) & (e)\\
\end{tabular}
\vspace{-.5em}
\caption{Qualitative comparisons on the Paris StreetView dataset. From the left to the right are:
(a) input, (b) Content-Aware Fill~\cite{Content-Aware-Fill}, (c) context encoder~\cite{pathak2016context}, (d) MNPS~\cite{yang2017high} and (e) Ours. All images are scaled to $256\times 256$.}
\label{fig:Paris}
\vspace{-1em}
\end{figure*}


\begin{figure}[!t]
\setlength\tabcolsep{1.5pt}
\centering
\small
\begin{tabular}{ccccc}
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/hole_white/fake_0092_hole} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/PatchMatch/fake_0092_pM} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/CE/fake_0092} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/MNPS/mnps_0092} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/Ours/0092}\\

  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/hole_white/fake_0099_hole} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/PatchMatch/fake_0099_pM} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/CE/fake_0099} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/MNPS/mnps_0099} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/Ours/0099}\\



  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/hole_white/fake_0216_hole} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/PatchMatch/fake_0216_pM} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/CE/fake_0216} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/MNPS/mnps_0216} &
  \includegraphics[width=.18\linewidth]{figs_rebuttal/Places/Ours/0216}\\
(a)  & (b) & (c)  & (d) & (e)\\
\end{tabular}
\vspace{-.5em}
\caption{Qualitative comparisons on the Places. From the left to the right are:
(a) input, (b) Content-Aware Fill~\cite{Content-Aware-Fill}, (c) context encoder~\cite{pathak2016context}, (d) MNPS~\cite{yang2017high} and (e) Ours. All images are scaled to $256\times 256$.}
\label{fig:Places}
\end{figure}




\subsection{Comparisons with state-of-the-arts}\label{section4.1}
We compare our results with Photoshop Content-Aware Fill~\cite{Content-Aware-Fill} based on~\cite{barnes2009patchmatch}, context encoder~\cite{pathak2016context}, and MNPS~\cite{yang2017high}.
As the context encoder only accepts $128 \times 128$ images, we upsample its results to $256 \times 256$.
For MNPS~\cite{yang2017high}, we set the pyramid level to 2 to obtain a resolution of $256 \times 256$.


\noindent\textbf{Evaluation on Paris StreetView and Places.}
Fig.~\ref{fig:Paris} shows the comparisons of our method with the three state-of-the-art approaches on Paris StreetView.
Content-Aware Fill~\cite{Content-Aware-Fill} is effective in recovering low-level textures, but performs slightly worse in handling occlusions with complex structures.
The context encoder~\cite{pathak2016context} is effective in semantic inpainting, but its results appear blurry and lack detail due to the effect of the bottleneck.
MNPS~\cite{yang2017high} adopts a multi-stage scheme to combine CNN-based and exemplar-based inpainting, and generally works better than Content-Aware Fill~\cite{Content-Aware-Fill} and the context encoder~\cite{pathak2016context}.
However, the multiple scales in MNPS~\cite{yang2017high} are not jointly trained, so some adverse effects produced in the first stage may not be eliminated by the subsequent stages.
In comparison with the competing methods, our Shift-Net combines CNN-based and exemplar-based inpainting in an end-to-end manner, and generally is able to generate visually pleasing results.
Moreover, we also note that our Shift-Net is much more efficient than MNPS~\cite{yang2017high}.
Our method consumes only about $80$ ms for a $256 \times 256$ image, which is about 500$\times$ faster than MNPS~\cite{yang2017high} (about $40$ seconds).
In addition, we also evaluate our method on the Places dataset (see Fig.~\ref{fig:Places}).
Again our Shift-Net performs favorably in generating fine-detailed, semantically plausible, and realistic images.


\noindent\textbf{Quantitative evaluation.}
We also compare our model quantitatively with the competing methods on the Paris StreetView dataset.
Table \ref{table:paris} lists the PSNR, SSIM and mean \(\ell_2\) loss of different methods.
Our Shift-Net achieves the best numerical performance.
We attribute this to the combination of CNN-based and exemplar-based inpainting as well as to the end-to-end training.
In comparison, MNPS~\cite{yang2017high} adopts a two-stage scheme and cannot be jointly trained.





\begin{table}[!t]
 \scriptsize
  \caption{Comparison of PSNR, SSIM and mean \(\ell_2\) loss on Paris StreetView dataset.}
  \vspace{-1.5em}
\begin{center}
\resizebox{.90\textwidth}{!}{\begin{tabular}{ l  c  c c}
    \hline
    Method & PSNR & SSIM & Mean \(\ell_2\) Loss\\ \hline
    Content-Aware Fill~\cite{Content-Aware-Fill} & 23.71 & 0.74 & 0.0617 \\ \hline
    context encoder~\cite{pathak2016context} (\(\ell_2\) + adversarial loss) & 24.16 & 0.87 &  0.0313 \\ \hline
    MNPS~\cite{yang2017high} & 25.98 & 0.89 & 0.0258 \\ \hline
    Ours & \textbf{26.51} & \textbf{0.90} & \textbf{0.0208} \\ \hline
  \end{tabular}}
  \end{center}
  \label{table:paris}
  \vspace{-3em} \end{table}



\noindent\textbf{Random mask completion.}
Our model can also be trained for arbitrary region completion.
Fig.~\ref{fig:Paris_random} shows the results by Content-Aware Fill~\cite{Content-Aware-Fill} and our Shift-Net.
For textured and smooth regions, both Content-Aware Fill~\cite{Content-Aware-Fill} and our Shift-Net perform favorably.
For structural regions, however, our Shift-Net is more effective in filling the cropped regions with content coherent with the global content and structures.
\begin{figure}[!t]
  \centering
\setlength\tabcolsep{1.5pt}
\begin{tabular}{ccccc}
  \includegraphics[width=.18\linewidth]{figs/fig4/input/012_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/input/033_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/input/048_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/input/085_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/input/080_im}\\

  \includegraphics[width=.18\linewidth]{figs/fig4/pm/012_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/pm/033_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/pm/048_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/pm/085_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/pm/080_im}\\


  \includegraphics[width=.18\linewidth]{figs/fig4/ours/012_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/ours/033_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/ours/048_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/ours/085_im}&
  \includegraphics[width=.18\linewidth]{figs/fig4/ours/080_im}\\
\end{tabular}
\vspace{-1em}
\caption{Random region completion. From top to bottom are: input, Content-Aware Fill~\cite{Content-Aware-Fill}, and Ours.}
\label{fig:Paris_random}
\end{figure}



\subsection{Inpainting of real world images}\label{section4.2}

We also evaluate our Shift-Net trained on Paris StreetView for the inpainting of real-world images by considering two types of missing regions: (i) central region, and (ii) object removal.
From the first row of Fig.~\ref{fig:realImgs}, one can see that our Shift-Net trained with a central mask can be generalized to handle real-world images.
From the second row of Fig.~\ref{fig:realImgs}, we show the feasibility of using our Shift-Net trained with random masks to remove unwanted objects from images.

\begin{figure}[!t]
\setlength\tabcolsep{1.5pt}
\centering
\begin{tabular}{cccc}

\includegraphics[width=.24\linewidth]{figs/fig5/input/4A6A2760}&
\includegraphics[width=.24\linewidth]{figs/fig5/ours/4A6A2760}&
\includegraphics[width=.24\linewidth]{figs/fig5/input/4A6A2954}&
\includegraphics[width=.24\linewidth]{figs/fig5/ours/4A6A2954}\\
\includegraphics[width=.24\linewidth]{figs/fig5/target/DSC_4257}&
\includegraphics[width=.24\linewidth]{figs/fig5/ours/DSC_4257}&
\includegraphics[width=.24\linewidth]{figs/fig5/target/14}&
\includegraphics[width=.24\linewidth]{figs/fig5/ours/14}\\

\end{tabular}
\vspace{-1em}
\caption{Results on real images. From the top to bottom are: central region inpainting, and object removal.}
\label{fig:realImgs}
\end{figure}








\section{Ablative Studies}\label{section5}
\begin{figure}[!t]
\vspace{-0em}
\setlength\tabcolsep{1.5pt}
\centering
\small
\begin{tabular}{cccc}
\includegraphics[width=.24\linewidth]{figs/fig6/Unet/004_im} &
\includegraphics[width=.24\linewidth]{figs/fig6/UnetGuide/004_im} &
\includegraphics[width=.24\linewidth]{figs/fig6/Ours_without_lambdaG/004_im} &
\includegraphics[width=.24\linewidth]{figs/fig6/Ours/004_im} \\
(a) U-Net & (b) U-Net  & (c) Ours  & (d) Ours\\
(w/o ${\cal L}_g$) & (w/ ${\cal L}_g$) & (w/o ${\cal L}_g$) & (w/ ${\cal L}_g$)
\end{tabular}
\vspace{-1em}
\caption{The effect of guidance loss ${\cal L}_g$ in U-Net and our Shift-Net. }
\label{fig:guidanceEffectiveness}
\vspace{-1em}
\end{figure}




\begin{figure}[!t]
\setlength\tabcolsep{1.5pt}
\centering
\small
\begin{tabular}{cccc}
\includegraphics[width=.24\linewidth]{figs/fig7/gW_1} &
\includegraphics[width=.24\linewidth]{figs/fig7/gW_0.1} &
\includegraphics[width=.24\linewidth]{figs/fig7/gW_0.01} &
\includegraphics[width=.24\linewidth]{figs/fig7/gW_0.001} \\
(a) $\lambda_{g}=1$ & (b) $\lambda_{g}=0.1$ & (c) $\lambda_{g}=0.01$ & (d) $\lambda_{g}=0.001$\\
\end{tabular}
\vspace{-1em}
\caption{The effect of the tradeoff parameter $\lambda_{g}$ of guidance loss. }
\label{fig:lambdaG}
\vspace{-1em}
\end{figure}


The main differences between our Shift-Net and the other methods are the introduction of the guidance loss and the shift-connection layer.
Thus, experiments are first conducted to analyze the effects of the guidance loss and the shift operation.
Then, we respectively zero out the corresponding weights of the $(L-l+1)$-th layer to verify the effectiveness of the shift feature $\Phi_{L-l}^{\text{\emph{shift}}}$ in generating fine-detailed results.
Moreover, the benefit of the shift-connection is not simply due to the increase in feature map size.
To illustrate this, we also compare Shift-Net with a baseline model obtained by substituting the nearest-neighbor searching with a random shift-connection.


\vspace{-1em}
\subsection{Effect of guidance loss}\label{section5.1}

Two groups of experiments are conducted to evaluate the effect of guidance loss.
In the first group of experiments, we add and remove the guidance loss ${\cal L}_g$ for U-Net and our Shift-Net to train the inpainting models.
Fig.~\ref{fig:guidanceEffectiveness} shows the inpainting results by these four methods.
It can be observed that, for both U-Net and Shift-Net, the guidance loss is helpful in suppressing artifacts and preserving salient structure.


In the second group of experiments, we evaluate the effect of the tradeoff parameter $\lambda_g$ for guidance loss.
For our Shift-Net, the guidance loss is introduced for both recovering the semantic structure of the missing region and guiding the shift of the encoder feature.
To this end, a proper tradeoff parameter $\lambda_g$ should be chosen, as too large or too small a value of $\lambda_g$ may be harmful to the inpainting results.
Fig.~\ref{fig:lambdaG} shows the results by setting different $\lambda_g$ values.
When $\lambda_{g}$ is small (e.g., $= 0.001$), the decoder feature may not serve as a suitable guidance to guarantee the correct shift of the encoder feature.
From Fig.~\ref{fig:lambdaG}(d), some artifacts can still be observed.
When $\lambda_{g}$ becomes too large (e.g., $\geq 0.1$), the constraint becomes excessive, and artifacts may also be introduced in the result (see Fig.~\ref{fig:lambdaG}(a)(b)).
Thus, we empirically set $\lambda_{g}=0.01$ in all our experiments.


\vspace{-1em}
\subsection{Effect of shift operation at different layers}\label{section5.2}

The superiority of Shift-Net over the context encoder~\cite{pathak2016context} has demonstrated the effectiveness of the shift operation.
By comparing the results by U-Net (w/${\cal L}_g$) and Shift-Net (w/${\cal L}_g$) in Fig.~\ref{fig:guidanceEffectiveness}(b)(d), one can see that shift operation does benefit the preserving of semantics and the recovery of detailed textures.
Note that the shift operation can be deployed at different layers of the decoder, e.g., the $(L-l)$-th layer.
When $l$ is smaller, the feature map size is larger, and more computation time is required to perform the shift operation.
When $l$ is larger, the feature map size becomes smaller, but more detailed information may be lost in the corresponding encoder layer, which may be harmful for recovering image details and semantics.
Thus, a proper $l$ should be chosen for a better tradeoff between computation time and inpainting performance.
Fig.~\ref{fig:Shift_in_layers} shows the results of Shift-Net by adding the shift-connection layer to each of the $(L-4)$-th, $(L-3)$-th, and $(L-2)$-th layers, respectively.
When the shift-connection layer is added to the $(L-2)$-th layer, Shift-Net generally works well in producing visually pleasing results, but takes more time (i.e., $\sim400$ ms per image) (See Fig.~\ref{fig:Shift_in_layers}(d)).
When the shift-connection layer is added to the $(L-4)$-th layer, Shift-Net becomes very efficient (i.e., $\sim40$ ms per image) but tends to generate results with fewer textures and coarser details (See Fig.~\ref{fig:Shift_in_layers}(b)).
By performing the shift operation in the $(L-3)$-th layer, Shift-Net achieves a better tradeoff between efficiency (i.e., $\sim80$ ms per image) and performance (See Fig.~\ref{fig:Shift_in_layers}(c)).




\begin{figure}[!t]
\setlength\tabcolsep{1.5pt}
\centering
\small
\begin{tabular}{cccc}
\includegraphics[width=.24\linewidth]{figs/fig8/gt_048_im} &
\includegraphics[width=.24\linewidth]{figs/fig8/16_048_im} &
\includegraphics[width=.24\linewidth]{figs/fig8/32_048_im} &
\includegraphics[width=.24\linewidth]{figs/fig8/64_048_im} \\
(a) ground-truth & (b) $L-4$ & (c) $L-3$ & (d) $L-2$\\
\end{tabular}
\vspace{-1em}
\caption{The effect of performing shift operation on different layers $L-l$. }
\label{fig:Shift_in_layers}
\vspace{-1.5em}
\end{figure}

\vspace{-1em}
\subsection{Effect of the shifted feature}

As we stack the convolutional features $\Phi_{L-l}(I)$, $\Phi_{l}(I)$ and $\Phi_{L-l}^{\text{\emph{shift}}}$ as inputs to the $(L-l+1)$-th layer of U-Net, we can respectively zero out the weights of the corresponding slice in the $(L-l+1)$-th layer.
Fig.~\ref{fig:Effect_of_shifted_feature} demonstrates the results of Shift-Net when the weights of each slice are zeroed out in turn.
When we abandon the decoder feature $\Phi_{L-l}(I)$, the central part fails to restore any structures (See Fig.~\ref{fig:Effect_of_shifted_feature}(b)), which indicates that the main structure and content are constructed by the subnet between the $l$-th and $(L-l)$-th layers.
However, if we ignore the encoder feature $\Phi_{l}(I)$, we obtain the general structure (See Fig.~\ref{fig:Effect_of_shifted_feature}(c)) but with quality inferior to the final result in Fig.~\ref{fig:Effect_of_shifted_feature}(e).
This shows that the encoder feature $\Phi_{l}(I)$ has no significant effect on recovering the missing content, which demonstrates that the guidance loss is effective in explicitly modeling the relationship between $\left( \Phi_{L-l}(I) \right)_{\mathbf{y}}$ and $\left( \Phi_{l}(I^{gt}) \right)_{\mathbf{y}}$ as illustrated in Sec.~\ref{section3.1}.
Finally, when we discard the shift feature $\Phi_{L-l}^{\text{\emph{shift}}}$, the result degenerates into a mixture of structures (See Fig.~\ref{fig:Effect_of_shifted_feature}(d)).
Therefore, we conclude that $\Phi_{L-l}^{\text{\emph{shift}}}$ plays a refinement and enhancement role in recovering clear and fine details in our Shift-Net.

\begin{figure}[!t]
\setlength\tabcolsep{1.5pt}
\centering
\small
\begin{tabular}{ccccc}
\includegraphics[width=.18\linewidth]{figs/fig9/input/085_im} &
\includegraphics[width=.18\linewidth]{figs/fig9/zeroFirst/085_im} &
\includegraphics[width=.18\linewidth]{figs/fig9/zeroSecond/085_im} &
\includegraphics[width=.18\linewidth]{figs/fig9/zeroThird/085_im} &
\includegraphics[width=.18\linewidth]{figs/fig9/output/085_im} \\
(a)  & (b) & (c) & (d)& (e) \\
\end{tabular}
\vspace{-1em}
\caption{Given (a) the input, (b), (c) and (d) are respectively the results when the 1st, 2nd, 3rd parts of weights in $(L-l+1)$-th layer are zeroed. (e) is the result of Ours.}
\label{fig:Effect_of_shifted_feature}
\end{figure}


\begin{figure}[!t]
  \centering
\setlength\tabcolsep{1.5pt}
\begin{tabular}{ccccc}
  \includegraphics[width=.18\linewidth]{figs_rebuttal/RandomReplace/random/004_im}&
  \includegraphics[width=.18\linewidth]{figs_rebuttal/RandomReplace/random/054_im}&
  \includegraphics[width=.18\linewidth]{figs_rebuttal/RandomReplace/random/085_im}&
  \includegraphics[width=.18\linewidth]{figs_rebuttal/RandomReplace/random/096_im}&
  \includegraphics[width=.18\linewidth]{figs_rebuttal/RandomReplace/random/097_im}\\

  \includegraphics[width=.18\linewidth]{figs_rebuttal/RandomReplace/nearest/4_im}&
  \includegraphics[width=.18\linewidth]{figs_rebuttal/RandomReplace/nearest/054_im}&
  \includegraphics[width=.18\linewidth]{figs_rebuttal/RandomReplace/nearest/85_im}&
  \includegraphics[width=.18\linewidth]{figs_rebuttal/RandomReplace/nearest/096_im}&
  \includegraphics[width=.18\linewidth]{figs_rebuttal/RandomReplace/nearest/97_im}\\

\end{tabular}
\vspace{-1em}
\caption{From top to bottom are: Shift-Net with random shift-connection and nearest neighbor searching.}
\label{fig:Comparison_randomReplace}
\vspace{-1em}
\end{figure}

\vspace{-1em}
\subsection{Comparison with random shift-connection}\label{setcion5.4}
Finally, we implement a baseline Shift-Net model by substituting the nearest neighbor searching with random shift-connection.
Fig.~\ref{fig:Comparison_randomReplace} shows five examples of inpainting results by Shift-Net and baseline model.
Compared with nearest-neighbor searching, the results obtained with random shift-connection exhibit more artifacts, distortions, and structure disconnections.
When training with random shift-connection, the randomly shifted feature continuously acts as a dummy and confusing input.
The network gradually learns to ignore $\Phi_{L-l}^{\text{\emph{shift}}}$ in order to minimize the total loss.
Thus, the favorable performance of Shift-Net should be attributed to the correct shift operation.



\vspace{-1em}
\section{Conclusion}\label{section6}

This paper has proposed a novel architecture, i.e., Shift-Net, for image completion that exhibits fast speed with promising fine details via deep feature rearrangement.
The guidance loss is introduced to enhance the explicit relation between the encoder feature in the known region and the decoder feature in the missing region.
By exploiting such relation, the shift operation can be efficiently performed and is effective in improving inpainting performance.
Experiments show that our Shift-Net performs favorably in comparison to the state-of-the-art methods, and is effective in generating sharp, fine-detailed and photo-realistic images.
In the future, more studies will be conducted to improve the speed of the nearest-neighbor searching in the shift operation, to introduce multiple shift-connection layers, and to extend the shift-connection to other low-level vision tasks.

\clearpage
\appendix

\section{Definition of masked region in feature maps}\label{sectionA}
As the shift-connection operates based on the boundary between the masked and unmasked regions in feature maps, we need to define the masked region in each feature map.
Denote by $\Omega^0$ the missing part of the input image; we then need to determine $\Omega^l$ for the $l$-th convolutional layer.
In our implementation, we introduce a mask image $M$ with $(M)_{\mathbf{y}} = 1$ ($\mathbf{y} \in \Omega$) and 0 otherwise.
Then, we define a CNN $\Psi(M)$ that has the same architecture as the encoder but with a network width of 1.
All the elements of the filters are $1/16$, and we remove all the nonlinearities.
Taking $M$ as input, we obtain the feature of the $l$-th layer as $\Psi_l(M)$.
Then, $\Omega^l$ is defined as $\Omega^l = \{ \mathbf{y} | (\Psi_l(M))_{\mathbf{y}} \geq T\}$, where $T$ is the threshold with $0 \leq T \leq 1$.
Fig.~\ref{fig:Threshold_in_shift} shows the results of Shift-Net by setting $T = 4/16, 5/16, 6/16$, respectively.
It can be seen that Shift-Net is robust to $T$, which may be attributed to the fact that the shifted, encoder, and decoder features are all taken as inputs to the $(L-l+1)$-th layer.
We empirically set $T=5/16$ in our experiments.
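For concreteness, the computation of $\Omega^l$ can be sketched as follows (a minimal PyTorch sketch under the setting above: $4 \times 4$ filters with stride 2 and padding 1, all weights $1/16$, no bias and no nonlinearity; the function name is ours).
\begin{verbatim}
import torch
import torch.nn.functional as F

def masked_region(mask, l, T=5.0 / 16):
    """mask: (1, 1, H, W) float tensor, 1 inside the missing region Omega^0.
    Returns a boolean map of Omega^l for the l-th encoder layer."""
    # Same spatial architecture as the encoder, but with width 1:
    # 4x4 filters, stride 2, padding 1, all weights 1/16, no bias, no nonlinearity.
    weight = torch.full((1, 1, 4, 4), 1.0 / 16)
    feat = mask
    for _ in range(l):
        feat = F.conv2d(feat, weight, stride=2, padding=1)
    return feat >= T   # Omega^l = {y | Psi_l(M)_y >= T}
\end{verbatim}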

\begin{figure}[!htb]
\setlength\tabcolsep{1.5pt}
\centering
\small
\begin{tabular}{cccc}
\includegraphics[width=.22\linewidth]{figs/fig11/gt_077_im} &
\includegraphics[width=.22\linewidth]{figs/fig11/4_16_077_im} &
\includegraphics[width=.22\linewidth]{figs/fig11/5_16_077_im} &
\includegraphics[width=.22\linewidth]{figs/fig11/6_16_077_im} \\
(a) Ground-truth & (b) $T = 4/16$ & (c) $T = 5/16$ & (d) $T = 6/16$\\
\end{tabular}
\caption{The effect of different thresholds in shift-connection. }
\vspace{-2em}
\label{fig:Threshold_in_shift}
\vspace{-0.5em}
\end{figure}


\section{Details on Shift-Net}\label{setionB}

\subsection{Architecture of generative model $G$}

For the generative model of our Shift-Net, we adopt the architecture of U-Net proposed in~\cite{isola2016image, radford2015unsupervised}.
Each convolution or deconvolution layer is followed by instance normalization~\cite{ulyanov2016instance}.
The encoder part of $G$ is stacked with Convolution-InstanceNorm-LeakyReLU layers, while the decoder part of $G$ consists of seven Deconvolution-InstanceNorm-ReLU layers followed by a final deconvolution with Tanh activation.
Following the code of pix2pix, we zero out the biases of all convolution and deconvolution layers of the generative model during training.
In this way, we can guarantee the correctness of \textbf{Line 208}.
$L$ denotes the total number of convolution/deconvolution layers in our model.
We add the guidance loss and shift operation at the $(L-3)$-th layer, which results in the concatenation of $\Phi_{L-3}(I)$, $\Phi_{3}(I)$ and $\Phi_{L-3}^{\text{\emph{shift}}}(I)$ as input to the subsequent deconvolution layer.
Details of the architecture of our generative model $G$ are given in Table~\ref{table:netG}.
Note that we do not apply InstanceNorm to the bottleneck layer.
The activation map of the bottleneck layer is $1 \times 1$, which means we only get one activation per convolutional filter.
As we train our network with a batch size of 1, the activations would be zeroed out if InstanceNorm were applied to the bottleneck layer. Please refer to pix2pix\footnote{https://github.com/phillipi/pix2pix/commit/b479b6b} for more explanation.
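For reference, the repeated encoder/decoder blocks in Table~\ref{table:netG} can be sketched in PyTorch roughly as follows; this is a simplified sketch that omits the skip concatenations, the guidance-loss layer, and the shift-connection layer, and the helper names are ours.
\begin{verbatim}
import torch.nn as nn

def enc_block(in_ch, out_ch, norm=True):
    """Encoder block: LReLU(0.2) -> 4x4 conv, stride 2 -> InstanceNorm (optional).
    Biases are disabled, matching the bias-free setting described above."""
    layers = [nn.LeakyReLU(0.2),
              nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1, bias=False)]
    if norm:
        layers.append(nn.InstanceNorm2d(out_ch))
    return nn.Sequential(*layers)

def dec_block(in_ch, out_ch, norm=True):
    """Decoder block: ReLU -> 4x4 deconv, stride 2 -> InstanceNorm (optional)."""
    layers = [nn.ReLU(),
              nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1, bias=False)]
    if norm:
        layers.append(nn.InstanceNorm2d(out_ch))
    return nn.Sequential(*layers)

# Bottleneck (Layer 8): no InstanceNorm, since its 1x1 activation map would be
# zeroed out by per-channel normalization when training with batch size 1.
bottleneck = enc_block(512, 512, norm=False)
\end{verbatim}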


\subsection{Architecture of discriminative network $D$}\label{sectionC}

$D$ shares a similar design pattern with the encoder part of $G$, but is only a 5-convolution-layer network.
We exclusively use convolution layers with $4 \times 4$ filters and varying strides to reduce the spatial dimension of the input down to $30 \times 30$, and append a sigmoid activation at the final output.
InstanceNorm is not applied to the first convolutional layer, and we use leaky ReLU with a slope of 0.2 for all activations except the sigmoid in the last layer.
See Table~\ref{table:netD} for more details.
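A compact sketch of this discriminator (in the same PatchGAN style as pix2pix) is given below; again, this is only an illustrative sketch, not the training code.
\begin{verbatim}
import torch.nn as nn

# Sketch of the 5-layer discriminator D: 4x4 convolutions, LReLU(0.2),
# InstanceNorm on all but the first and last layers, yielding a 30x30 map
# of per-patch real/fake scores for a 256x256x3 input.
D = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1),
    nn.InstanceNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, stride=2, padding=1),
    nn.InstanceNorm2d(256), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 512, 4, stride=1, padding=1),
    nn.InstanceNorm2d(512), nn.LeakyReLU(0.2),
    nn.Conv2d(512, 1, 4, stride=1, padding=1), nn.Sigmoid(),
)
\end{verbatim}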





\section{More comparisons and object removals}\label{setionD}

\subsection{Comparisons on Paris StreetView and Places datasets}

We conduct more comparisons with context encoder~\cite{pathak2016context}, Content-Aware Fill~\cite{Content-Aware-Fill}, pix2pix~\cite{isola2016image} and MNPS~\cite{yang2017high} on both the Paris StreetView~\cite{doersch2012makes} and
Places~\cite{zhou2017places} datasets.
Please refer to Fig.~\ref{fig:comparison_on_paris_1} and~\ref{fig:comparison_on_paris_2} for more results on Paris StreetView. For comparison on Places, please refer to Fig.~\ref{fig:comparison_on_place_1}.
Our Shift-Net outperforms the state-of-the-art approaches in both structural consistency and detail richness.
Our model preserves both global structure and fine details, whereas the other methods either perform poorly in generating clear, realistic details or lack global structural consistency.




\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{The architecture of the $G$ network. ``IN'' represents InstanceNorm and ``LReLU''
denotes leaky ReLU with a slope of 0.2.}
\vspace{-0.5em}
\centering
\begin{tabular}{l}
\hline
\ \ \ \ \ \ \ \ \ \ \ \ {\bf The architecture of generative model} $G$\\
\hline
{\bf Input}: Image ($256 \times 256 \times 3$)\\
\hline
[Layer \ \ 1]    \ \ \  Conv. (4, 4, \ 64), stride=2; \\  \hline
[Layer \ \ 2]   \ \ \ \emph{LReLU}; Conv. (4, 4, 128), stride=2; IN; \\ \hline
[Layer \ \ 3]    \ \ \ \emph{LReLU}; Conv. (4, 4, 256), stride=2; IN; \\ \hline
[Layer \ \ 4]    \ \ \ \emph{LReLU}; Conv. (4, 4, 512), stride=2; IN; \\ \hline
[Layer \ \ 5]    \ \ \ \emph{LReLU}; Conv. (4, 4, 512), stride=2; IN;\\ \hline
[Layer \ \ 6]    \ \ \ \emph{LReLU}; Conv. (4, 4, 512), stride=2; IN;\\ \hline
[Layer \ \ 7]    \ \ \ \emph{LReLU}; Conv. (4, 4, 512), stride=2; IN;\\ \hline
[Layer \ \ 8]    \ \ \ \emph{LReLU}; Conv. (4, 4, 512), stride=2; \\ \hline
[Layer \ \ 9]    \ \ \  \emph{ReLU}; DeConv. (4, 4, 512), stride=2; IN; \ \\ \hline
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Concatenate(Layer \ \ 9, Layer \ 7);\\
\hline
[Layer 10]    \ \ \  DeConv. (4, 4, 512), stride=2; IN; \ \\ \hline
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Concatenate(Layer 10, Layer \ 6); \emph{ReLU};\\
\hline
[Layer 11]    \ \ \  DeConv. (4, 4, 512), stride=2; IN; \ \\ \hline
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Concatenate(Layer 11, Layer \ 5); \emph{ReLU};\\
\hline
[Layer 12]    \ \ \  DeConv. (4, 4, 512), stride=2; IN; \ \\ \hline
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Concatenate(Layer 12, Layer \ 4); \emph{ReLU};\\
\hline
[Layer 13]    \ \ \  DeConv. (4, 4, 256), stride=2; IN; \ \\ \hline
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Concatenate(Layer 13, Layer \ 3); \emph{ReLU};\\
\hline
[Layer 14]    \ \ \  {\bf Guidance loss layer}; \ \\
\hline
[Layer 15]    \ \ \  {\bf Shift-connection layer}; \ \\
\hline
[Layer 16]    \ \ \  DeConv. (4, 4, 128), stride=2; IN; \ \\ \hline
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Concatenate(Layer 16, Layer \ 2); \emph{ReLU};\\
\hline
[Layer 17]    \ \ \  DeConv. (4, 4, \ \ 64), stride=2; IN; \\  \hline
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Concatenate(Layer 17, Layer \ 1); \emph{ReLU};\\
\hline
[Layer 18]    \ \ \  \emph{ReLU}; DeConv. (4, 4, 3), stride=2; \emph{Tanh}; \ \ \  \\
\hline
{\bf Output}: Final result ($256 \times 256 \times 3$)\\
\hline
\end{tabular}
\label{table:netG}
\end{table}




\begin{table}[H]
\renewcommand{\arraystretch}{1.3}
\caption{The architecture of the discriminative network. ``IN'' represents InstanceNorm and ``LReLU''
denotes leaky ReLU with a slope of 0.2.}
\vspace{-0.5em}
\centering
\begin{tabular}{l}
\hline
\ \ \ \ \  \ \ \  {\bf The architecture of discriminative model} $D$ \ \ \ \ \ \  \ \\
\hline
{\bf Input}: Image ($256 \times 256 \times 3$) \\
\hline
[layer 1]  \ \  \   Conv. (4, 4, \ \ 64), stride=2; \emph{LReLU}; \\
\hline
[layer 2] \   \ \  Conv. (4, 4, 128), stride=2; IN; \emph{LReLU};   \\
\hline
[layer 3] \  \ \    Conv. (4, 4, 256), stride=2; IN; \emph{LReLU};   \\
\hline
[layer 4]  \ \ \  Conv. (4, 4, 512), stride=1; IN; \emph{LReLU};\\
\hline
[layer 5] \  \  \    Conv. (4, 4, 1), stride=1; \emph{Sigmoid};    \\
\hline
{\bf Output}: Real or Fake ($30 \times 30 \times 1$)\\
\hline
\end{tabular}
\label{table:netD}
\end{table}

\begin{figure*}[t]
  \center
\setlength\tabcolsep{0.5pt}
\begin{tabular}{cccccc}

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0097}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0097}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0097}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/097_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0097}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/acc_g5_097_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0047}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0047}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0047}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/047_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0047}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/g1_20_047_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0007}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0007}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0007}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/007_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0007}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/acc_g10_again_007_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0033}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0033}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0033}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/033_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0033}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/acc_g5_033_im}\\

    \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0028}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0028}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0028}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/028_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0028}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/g1_30_028_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0049}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0049}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0049}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/049_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0049}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/g100_049_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0074}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0074}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0074}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/074_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0074}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/g1_20_074_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0012}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0012}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0012}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/012_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0012}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/012_im}\\

(a)  & (b) & (c)  & (d) & (e) & (f)\\
\end{tabular}
\vspace{-.5em}
\caption{Qualitative comparisons on Paris StreetView. From left to right:
(a) input, (b) Content-Aware Fill~\cite{Content-Aware-Fill}, (c) context encoder~\cite{pathak2016context}, (d) pix2pix~\cite{isola2016image}, (e) MNPS~\cite{yang2017high} and (f) Ours. All images are scaled to $256\times 256$.}

\label{fig:comparison_on_paris_1}
\vspace{-.5em}
\end{figure*}






\begin{figure*}[!htbp]
  \center
\setlength\tabcolsep{0.5pt}
\begin{tabular}{cccccc}


  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0094}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0094}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0094}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/094_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0094}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/g10_TH4_16_094_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0054}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0054}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0054}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/054_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0054}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/054_im}\\

    \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0001}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0001}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0001}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/001_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0001}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/001_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0002}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0002}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0002}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/002_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0002}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/002_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0070}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0070}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0070}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/070_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0070}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/070_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0071}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0071}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0071}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/071_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0071}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/071_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0095}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0095}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0095}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/095_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0095}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/095_im}\\

  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/hole_white/input_0077}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/patchMatch/pM_0077}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/CE/fake_0077}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/pix2pix/077_im}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/highRes/hr_0077}&
  \includegraphics[width=.160\textwidth]{figs/fig1_Paris/ours/077_im}\\

(a)  & (b) & (c)  & (d) & (e) & (f)\\
\end{tabular}
\vspace{-.5em}
\caption{Qualitative comparisons on Paris StreetView. From left to right:
(a) input, (b) Content-Aware Fill~\cite{Content-Aware-Fill}, (c) context encoder~\cite{pathak2016context}, (d) pix2pix~\cite{isola2016image}, (e) MNPS~\cite{yang2017high} and (f) Ours. All images are scaled to $256\times 256$.}

\label{fig:comparison_on_paris_2}
\vspace{-.5em}
\end{figure*}



\begin{figure*}[t]
  \center
\setlength\tabcolsep{0.5pt}
\begin{tabular}{cccccc}
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/hole_white/input_0170}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/patchMatch/pM_170}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/CE/fake_0002}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/pix2pix/170}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/highRes/result_0002}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/ours/170}\\

  \includegraphics[width=.160\textwidth]{figs/fig2_Places/hole_white/input_0224}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/patchMatch/pM_224}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/CE/fake_0003}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/pix2pix/224}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/highRes/result_0003}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/ours/224}\\


  \includegraphics[width=.160\textwidth]{figs/fig2_Places/hole_white/input_0241}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/patchMatch/pM_241}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/CE/fake_0004}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/pix2pix/241}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/highRes/result_0004}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/ours/241}\\

  \includegraphics[width=.160\textwidth]{figs/fig2_Places/hole_white/input_0270}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/patchMatch/pM_270}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/CE/fake_0005}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/pix2pix/270}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/highRes/result_0005}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/ours/270}\\

  \includegraphics[width=.160\textwidth]{figs/fig2_Places/hole_white/input_6039}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/patchMatch/pM_6039}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/CE/fake_0006}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/pix2pix/6039}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/highRes/result_0006}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/ours/6039}\\

  \includegraphics[width=.160\textwidth]{figs/fig2_Places/hole_white/input_24563}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/patchMatch/pM_24563}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/CE/fake_0013}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/pix2pix/24563}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/highRes/result_0013}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/ours/24563}\\

  \includegraphics[width=.160\textwidth]{figs/fig2_Places/hole_white/input_8613}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/patchMatch/pM_8613}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/CE/fake_0017}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/pix2pix/OT_8613}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/highRes/result_0017}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/ours/8613}\\

  \includegraphics[width=.160\textwidth]{figs/fig2_Places/hole_white/input_25214}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/patchMatch/pM_25214}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/CE/fake_0014}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/pix2pix/25214}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/highRes/result_0014}&
  \includegraphics[width=.160\textwidth]{figs/fig2_Places/ours/25214}\\

(a)  & (b) & (c)  & (d) & (e) & (f)\\
\end{tabular}
\vspace{-.5em}
\caption{Qualitative comparisons on Places. From left to right:
(a) input, (b) Content-Aware Fill~\cite{Content-Aware-Fill}, (c) context encoder~\cite{pathak2016context}, (d) pix2pix~\cite{isola2016image}, (e) MNPS~\cite{yang2017high} and (f) Ours. All images are scaled to $256\times 256$.}

\label{fig:comparison_on_place_1}
\vspace{-.5em}
\end{figure*}


\clearpage
\subsection{More object removals on real-world images by our Shift-Net}
We apply our model trained on Paris StreetView~\cite{doersch2012makes} or Places~\cite{zhou2017places} to perform object removal on real-world images; the results are shown in Fig.~\ref{fig:realImgs}.
These real-world images are challenging due to the large areas occupied by distractors and their complicated backgrounds.
Even so, our model handles them well, which indicates its effectiveness, applicability, and generality.



\begin{figure}[!h]
\setlength\tabcolsep{1.5pt}
\centering
\begin{tabular}{cccc}

\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/gt/m17_gt.jpg}&
\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/ours/m17.jpg}&
\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/gt/m21_gt.jpg}&
\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/ours/m21.jpg}\\
\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/gt/m22_gt.jpg}&
\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/ours/m22.jpg}&
\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/gt/m30_gt.jpg}&
\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/ours/m30.jpg}\\

\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/gt/m44_gt.jpg}&
\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/ours/m44.jpg}&
\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/gt/m33_gt.jpg}&
\includegraphics[width=.24\linewidth]{figs/fig3_RealImgs/ours/m33.jpg}\\


\end{tabular}
\caption{Object removal on real images.}
\label{fig:realImgs}
\end{figure}

\clearpage
\bibliographystyle{splncs}
\bibliography{egbib}
\end{document}