\documentclass[journal,comsoc]{IEEEtran}
\usepackage{amsmath}
\usepackage{newtxmath}
\usepackage[T1]{fontenc}
\usepackage[latin9]{inputenc}
\usepackage{array}
\usepackage{url}
\usepackage{multirow}
\usepackage{graphicx}
\usepackage[unicode=true,
 bookmarks=false,
 breaklinks=false,pdfborder={0 0 1},backref=section,colorlinks=false]
 {hyperref}

\makeatletter

\newcommand{\lyxmathsym}[1]{\ifmmode\begingroup\def\b@ld{bold}
  \text{\ifx\math@version\b@ld\bfseries\fi#1}\endgroup\else#1\fi}

\providecommand{\tabularnewline}{\\}



































\ifCLASSINFOpdf
\else
\fi






\interdisplaylinepenalty=2500



















































\hyphenation{op-tical net-works semi-conduc-tor}





\makeatother

\begin{document}

\title{Recent Advances in Zero-shot Recognition}

\author{Yanwei~Fu, ~Tao~Xiang, ~Yu-Gang~Jiang, Xiangyang~Xue,~Leonid~Sigal,
and~Shaogang~Gong \thanks{Yanwei Fu and Xiangyang Xue are with the School of Data Science, Fudan
University, Shanghai, 200433, China. E-mail: \{yanweifu,xyxue\}@fudan.edu.cn;

Yu-Gang Jiang is with the School of Computer Science, Shanghai Key
Lab of Intelligent Information Processing, Fudan University. Email:
ygj@fudan.edu.cn; Yu-Gang Jiang is the corresponding author.

Leonid Sigal is with the Department of Computer Science, University
of British Columbia, BC, Canada. Email: lsigal@cs.ubc.ca;

Tao Xiang and Shaogang Gong are with the School of Electronic Engineering
and Computer Science, Queen Mary University of London, E1 4NS, UK.
Email: \{t.xiang, s.gong\}@qmul.ac.uk. } }
\maketitle
\begin{abstract}
With the recent renaissance of deep convolutional neural networks, encouraging
breakthroughs have been achieved on supervised recognition tasks,
where each class has sufficient and fully annotated training data.
However, scaling recognition to a large number of classes with few
or no training samples for each class remains an unsolved problem.
One approach to scaling up recognition is to develop models capable
of recognizing unseen categories without any training instances, or
zero-shot recognition/learning. This article provides a comprehensive
review of existing zero-shot recognition techniques, covering various
aspects ranging from representations and models to datasets and
evaluation settings. We also overview related recognition tasks,
including one-shot and open set recognition, which can be used as
natural extensions of zero-shot recognition when a limited number of
class samples becomes available or when zero-shot recognition is
implemented in a real-world setting. Importantly, we highlight the
limitations of existing approaches and point out future research
directions in this exciting new research area. 
\end{abstract}

\begin{IEEEkeywords}
life-long learning, zero-shot recognition, one-shot learning, open-set
recognition. 
\end{IEEEkeywords}


\section{Introduction}

Humans can distinguish at least 30,000 basic object categories \cite{object_cat_1987}
and many more subordinate ones (e.g., breeds of dogs). They can also
create new categories dynamically from few examples or purely based
on high-level description. In contrast, most existing computer vision
techniques require hundreds, if not thousands, of labelled samples
for each object class in order to learn a recognition model. Inspired
by humans' ability to recognize without seeing examples, the research
area of \emph{learning to learn} or \emph{lifelong learning} \cite{chen_iccv13,lifelonglearning,Tom1995lifelong}
has received increasing interest.

These studies aim to intelligently apply previously learned knowledge
to help future recognition tasks. In particular, a major topic in
this research area is building recognition models capable of recognizing
novel visual categories that have no associated labelled training
samples (\emph{i.e.}, zero-shot learning), few training examples (\emph{i.e.}
one-shot learning), and recognizing the visual categories under an
`open-set' setting where the testing instance could belong to either
seen or unseen/novel categories.

These problems can be solved under the setting of transfer learning.
Typically, transfer learning emphasizes the transfer of knowledge
across domains, tasks, and distributions that are similar but not
the same. Transfer learning \cite{pan2009transfer_survey} refers
to the problem of applying the knowledge learned in one or more auxiliary
tasks/domains/sources to develop an effective model for a target task/domain.

To recognize zero-shot categories in the target domain, one has to
utilize the information learned from the source domain. Unfortunately,
existing domain adaptation methods \cite{visual_domain_adapt} are
difficult to apply directly to these tasks, since only a few training
instances are available in the target domain. Thus the key challenge is to
learn domain-invariant and generalizable feature representations and/or
recognition models usable in the target domain.

The rest of this paper is organized as follows: We give an overview
of zero-shot recognition in Sec. \ref{sec:Overview-of-Zero-shot}.
The semantic representations and common models of zero-shot recognition
are reviewed in Sec. \ref{sec:Semantic-Representations-in}
and Sec. \ref{sec:Models-for-Zero-shot}, respectively. Next, we discuss
the recognition tasks beyond zero-shot recognition in Sec. \ref{sec:Beyond-Zero-shot-Recognition}
including generalized zero-shot recognition, open-set recognition
and one-shot recognition. The commonly used datasets are discussed
in Sec. \ref{sec:Datasets-in-Zero-shot}; and we also discuss the
problems of using these datasets to conduct zero-shot recognition.
Finally, we suggest some future research directions in Sec. \ref{sec:Future-Research-Directions}
and conclude the paper in Sec. \ref{sec:Conclusion}.

\section{Overview of Zero-shot Recognition\label{sec:Overview-of-Zero-shot}}

Zero-shot recognition can be used in a variety of research areas,
such as neural decoding from fMRI images \cite{palatucci2009zero_shot},
face verification \cite{kumar2009}, object recognition \cite{lampert13AwAPAMI},
video understanding \cite{emotion_0shot,fu2012attribsocial,liu2011action_attrib,yanweiPAMIlatentattrib},
and natural language processing \cite{Blitzer_zero-shotdomain}. The
task of identifying classes without any observed data is called zero-shot
learning. Specifically, in the setting of zero-shot recognition,
the recognition model should leverage training data from source/auxiliary
dataset/domain to identify the unseen target/testing dataset/domain.
Thus the main challenge of zero-shot recognition is how to generalize
the recognition models to identify the novel object categories without
accessing any labelled instances of these categories.

The key idea underpinning zero-shot recognition is to \emph{explore}
and \emph{exploit} the knowledge of how an unseen class (in target
domain) is semantically related to the seen classes (in the source
domain).  We \emph{explore} the relationship of seen and unseen classes
in Sec. \ref{sec:Semantic-Representations-in}, through the use of
intermediate-level semantic representations. These semantic representations
are typically encoded in a high-dimensional vector space. The common
semantic representations include semantic attributes (Sec. \ref{subsec:Semantic-Attributes})
and semantic word vectors (Sec. \ref{sec:Generalised-Semantic-Representat}),
encoding linguistic context. The semantic representation is assumed
to be shared between the auxiliary/source and target/test dataset.
Given a pre-defined semantic representation, each class name can be
represented by an attribute vector or a semantic word vector \textendash{}
a representation termed {\em class prototype}.

Because the semantic representations are universal and shared, they
can be \emph{exploited} for knowledge transfer between the source
and target datasets (Sec. \ref{sec:Models-for-Zero-shot}), in order
to enable recognition of novel unseen classes. A projection function
mapping visual features to the semantic representations is typically
learned from the auxiliary data, using an embedding model (Sec. \ref{subsec:Embedding-Models}).
Each unlabelled target class is represented in the same embedding
space using a class `prototype'. Each projected target instance is
then classified, using the recognition model, by measuring similarity
of projection to the class prototypes in the embedding space (Sec.
\ref{subsec:Recognition-models-in}). Additionally, under an open
set setting where the test instances could belong to either the source
or target categories, the instances of target sets can also be taken
as outliers of the source data; therefore novelty detection \cite{RichardNIPS13}
needs to be employed first to determine whether a testing instance
is on the manifold of source categories; and if it is not, it will
be further classified into one of the target categories.

Zero-shot recognition can be considered a type of life-long learning.
For example, when reading a description `flightless birds living
almost exclusively in Antarctica', most of us know and can recognize
that it is referring to a penguin, even though most people have never
seen a penguin in their life. In cognitive science \cite{Thrun96learningto},
studies explain that humans are able to learn new concepts by extracting
intermediate semantic representation or high-level descriptions ({\em
i.e.}, flightless, bird, living in Antarctica) and transferring knowledge
from known sources (other bird classes, {\em e.g.}, swan, canary,
cockatoo and so on) to the unknown target (penguin). That is the reason
why humans are able to understand new concepts with no (zero-shot
recognition) or only a few training samples (few-shot recognition).
This ability is termed ``learning to learn\textquotedblright . 

More interestingly, humans can recognize newly created categories
from few examples or merely based on high-level description, {\em
e.g.}, they are able to easily recognize the video event named ``Germany
World Cup winner celebrations 2014\textquotedblright{} which, by definition,
did not exist before July 2014. To teach machines to recognize the
numerous visual concepts dynamically created by combining a multitude
of existing concepts, one would require an exponential set of training
instances for a supervised learning approach. As such, the supervised
approach would struggle with the one-off and novel concepts such as
``Germany World Cup winner celebrations 2014\textquotedblright ,
because no positive video samples would be available before July 2014
when Germany finally beat Argentina to win the Cup. Therefore, zero-shot
recognition is crucial for recognizing dynamically created novel concepts
which are composed of new combinations of existing concepts. With
zero-shot learning, it is possible to construct a classifier for ``Germany
World Cup winner celebrations 2014\textquotedblright{} by transferring
knowledge from related visual concepts with ample training samples,
{\em e.g.}, ``FC Bayern Munich - Champions of Europe 2013\textquotedblright{}
and ``Spain World Cup winner celebrations 2010\textquotedblright .

\section{Semantic Representations in Zero-shot Recognition\label{sec:Semantic-Representations-in}}

In this section, we review the semantic representations used for zero-shot recognition. These representations fall into two broad categories, \emph{namely}, semantic attributes and representations beyond attributes.
We briefly review relevant papers in Table \ref{tab:Paper-summary-of}.

\subsection{Semantic Attributes\label{subsec:Semantic-Attributes}}

An attribute (\emph{e.g.}, has wings) refers to the intrinsic characteristic
that is possessed by an instance or a class (\emph{e.g.}, bird)~
(Fu \emph{et al.} \cite{fu2012attribsocial}), or indicates properties
(\emph{e.g.}, spotted) or annotations (\emph{e.g.}, has a head) of
an image or an object~(Lampert \emph{et al.} \cite{lampert13AwAPAMI}).
Attributes describe a class or an instance, in contrast to the typical
classification, which names an instance. Farhadi \emph{et al}. \cite{farhadi2009attrib_describe}
learned a richer set of attributes including parts, shapes, materials,
\emph{etc}. Another commonly used methodology (\emph{e.g.}, in
human action recognition (Liu \emph{et al.} \cite{liu2011action_attrib}),
and in attribute and object-based modeling (Wang \emph{et al.} \cite{wang2011clothesattrib}))
is to treat the attribute labels as latent variables on the training
dataset, \emph{e.g.}, in the form of a structured latent SVM model
whose objective is to minimize prediction loss. The attribute description
of an instance or a category is useful as a semantically meaningful
intermediate representation bridging the gap between low-level features
and high-level class concepts~(Palatucci \emph{et al.} \cite{palatucci2009zero_shot}).

Attribute learning approaches have emerged as a promising paradigm
for bridging the semantic gap and addressing data sparsity through
transferring attribute knowledge in image and video understanding
tasks. A key advantage of attribute learning is to provide an intuitive
mechanism for multi-task learning~(Salakhutdinov \emph{et al.} \cite{torralba2011app_share})
and transfer learning~(Hwang \emph{et al.} \cite{hwang2011obj_attrib}).
Particularly, attribute learning enables learning with few or
zero instances of each class via attribute sharing, \emph{i.e.}, zero-shot
and one-shot learning. Specifically, the challenge of zero-shot recognition
is to recognize unseen visual object categories without any training
exemplars of the unseen class. This requires the knowledge transfer
of semantic information from auxiliary (seen) classes, with example
images, to unseen target classes.

Later works~(Parikh \emph{et al}.\cite{parikh2011relativeattrib},
Kovashka \emph{et al.} \cite{whittlesearch} and Berg \emph{et al.}
\cite{attrbDiscovery12ECCV}) extended the unary/binary attributes
to compound attributes, which makes them extremely useful for information
retrieval (\emph{e.g.}, by allowing complex queries such as ``Asian
women with short hair, big eyes and high cheekbones'') and identification
(\emph{e.g.}, finding an actor whose name you forgot, or an image
that you have misplaced in a large collection).

In a broader sense, the attribute can be taken as one special type
of ``subjective visual property'' \cite{robust_0shot}, which indicates
the task of estimating continuous values representing visual properties
observed in an image/video. These properties are also examples of
attributes, including image/video interestingness~\cite{imginterestingnessICCV2013,yugangVideoInteresting2013},
memorability~\cite{Isola2011NIPS,Isola2011cvpr}, aesthetics~\cite{Dhar2011cvpr},
and human-face age estimation~\cite{fu2010ageSurvey,crowdcountingKE}.
Image interestingness was studied in Gygli \emph{et al.}~\cite{imginterestingnessICCV2013},
which showed that three cues contribute the most to interestingness:
aesthetics, unusualness/novelty and general preferences; the last
of which refers to the fact that people, in general, find certain
types of scenes more interesting than others, for example, outdoor-natural
vs.~indoor-manmade. Jiang \emph{et al.~}\cite{yugangVideoInteresting2013}
evaluated different features for video interestingness prediction
from crowdsourced pairwise comparisons. The ACM International Conference
on Multimedia Retrieval (ICMR) 2017 hosted a special session (``multimodal
understanding of subjective properties''\footnote{\url{http://www.icmr2017.ro/call-for-special-sessions-s1.php}})
on the applications of multimedia analysis for subjective property
understanding, detection and retrieval. These subjective visual properties
can be used as an intermediate representation for zero-shot recognition
as well as other visual recognition tasks, {\em e.g.}, people can
be recognized by the description of how pale their skin complexion
is and/or how chubby their face looks \cite{parikh2011relativeattrib}.
In the next subsections, we will briefly review different types of
attributes.

\subsubsection{User-defined Attributes\label{subsec:User-defined-Attributes}}

User-defined attributes are defined by human experts \cite{lampert2009zeroshot_dat,lampert13AwAPAMI},
or a concept ontology \cite{fu2012attribsocial}. Different tasks
may also necessitate their own distinctive attributes, such as facial
and clothes attributes \cite{wang2011clothesattrib,moon_attrb,rudd2016moon,wang2016walk,datta2011face_attrib,ehrlich2016facial},
attributes of biological traits (\emph{e.g.}, age and gender) \cite{survey_of_face,facial_attrb_icmr},
product attributes (\emph{e.g.}, size, color, price) \cite{multi_task_attrib}
and 3D shape attributes \cite{3D_shape_attribute}. Such attributes
transcend the specific learning tasks and are, typically, pre-learned
independently across different categories, thus allowing transference
of knowledge \cite{whittlesearch,vaquero2009attrib_surveil,wang2009attrib_class_sal}.
Essentially, these attributes can either serve as the intermediate
representations for knowledge transfer in zero-shot, one-shot and
multi-task learning \cite{multi_task_attrib}, or be directly employed
for advanced applications, such as clothes recommendation \cite{wang2011clothesattrib}.

Ferrari \emph{et al.} \cite{ferrari2007attrib_learn} studied some
elementary properties such as colour and/or geometric pattern. From
human annotations, they proposed a generative model for learning simple
color and texture attributes. The attribute can be either viewed as
unary (\emph{e.g.}, red colour, round texture), or binary (\emph{e.g.},
black/white stripes). The `unary' attributes are simple attributes,
whose characteristic properties are captured by individual image segments
(appearance for red, shape for round). In contrast, the `binary' attributes
are more complex attributes, whose basic element is a pair of segments
(\emph{e.g.}, black/white stripes).

\subsubsection{Relative Attributes}

The attributes discussed above use a single value to represent the strength
of an attribute being possessed by one instance/class; they can indicate
properties (\emph{e.g.}, spotted) or annotations of images or objects.
In contrast, relative information, in the form of relative attributes,
can be used as a more informative way to express richer semantic meaning
and thus better represent visual information. The relative attributes
can be directly used for zero-shot recognition \cite{parikh2011relativeattrib}.

Relative attributes (Parikh \emph{et al.} \cite{parikh2011relativeattrib})
were first proposed in order to learn a ranking function capable of
predicting the relative semantic strength of a given attribute. The
annotators give pairwise comparisons on images and a ranking function
is then learned to estimate relative attribute values for unseen images
as ranking scores. These relative attributes are learned as a form
of richer representation, corresponding to the strength of visual
properties, and used in a number of tasks including visual recognition
with sparse data, interactive image search (Kovashka \emph{et al.}
\cite{whittlesearch}), semi-supervised (Shrivastava \emph{et al.}
\cite{ShrivastavaECCV12}) and active learning (Biswas \emph{et al.}
\cite{BiswasCVPR13,attr_clas_feedback}) of visual categories. Kovashka
\emph{et al.} \cite{whittlesearch} proposed a novel model of feedback
for image search where users can interactively adjust the properties
of exemplar images by using relative attributes in order to best match
his/her ideal queries.
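
The ranking function at the heart of relative attributes can be sketched
with a simple RankSVM-style objective. The Python snippet below is a
minimal illustration only, assuming pre-computed image features and a
list of human pairwise comparisons; the data, hyper-parameters and the
plain subgradient optimizer are assumptions for exposition, not the
exact formulation of \cite{parikh2011relativeattrib}.
\begin{verbatim}
# Learn a linear ranking function w for one relative attribute from
# pairwise comparisons (i, j): image i shows the attribute more
# strongly than image j. Hinge-loss subgradient descent.
import numpy as np

def learn_ranker(X, pairs, lam=0.01, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = lam * w
        for i, j in pairs:
            if w @ (X[i] - X[j]) < 1.0:   # ranking margin violated
                grad -= (X[i] - X[j])
        w -= lr * grad / max(len(pairs), 1)
    return w  # attribute strength of a new image x is w @ x
\end{verbatim}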

Fu \emph{et al.} \cite{robust_0shot} extended the relative attributes
to ``subjective visual properties'' and proposed a learning-to-rank
model of pruning the annotation outliers/errors in crowdsourced pairwise
comparisons. Given only weakly-supervised pairwise image comparisons,
Singh \emph{et al.} \cite{relative_ranking_eccv16} developed an end-to-end
deep convolutional network to simultaneously localize and rank relative
visual attributes. The localization branch in \cite{relative_ranking_eccv16}
is adapted from the spatial transformer network \cite{jaderberg2015spatial}.

\begin{table*}
\centering{}\begin{tabular}{cc}
\hline 
Different Types of Attributes  & Papers\tabularnewline
\hline 
User-defined attributes  & \cite{lampert2009zeroshot_dat,lampert13AwAPAMI}\cite{fu2012attribsocial}\cite{vaquero2009attrib_surveil,wang2009attrib_class_sal}\cite{wang2011clothesattrib,moon_attrb,rudd2016moon,wang2016walk,datta2011face_attrib,ehrlich2016facial}\cite{multi_task_attrib}\cite{survey_of_face,facial_attrb_icmr}\cite{ferrari2007attrib_learn}\tabularnewline
Relative attributes  & \cite{parikh2011relativeattrib}\cite{relative_ranking_eccv16}\cite{robust_0shot}\cite{BiswasCVPR13,attr_clas_feedback}\cite{whittlesearch}\cite{ShrivastavaECCV12}\tabularnewline
Data-driven attributes  & \cite{parikh2011nameable_attribs}\cite{tang2009concepts_from_noisytags}\cite{liu2011action_attrib,farhadi2009attrib_describe}\cite{fu2012attribsocial,yanweiPAMIlatentattrib}\cite{video_story_1shot}\tabularnewline
Video attributes  & \cite{hauptmann2007semanticGapRetr}\cite{snoek2007semantic_retrieval}\cite{toderici2010youtube_tag}\cite{zuxuan_2016_CVPR,obj2action}\cite{tang2009annotation}\cite{qi2007corr_mlab}\tabularnewline
\hline 
\hline 
Concept ontology  & \cite{fergus2010label_share}\cite{RohrbachCVPR12,rohrbach2010semantic_transfer}\cite{costa_mlzsl}\cite{recog_action,concept_not_alone}\tabularnewline
Semantic word embedding  & \cite{ZSL_convex_optimization,zhang2016zero,zhang2016zeroshot,yanweiBMVC,DeviseNIPS13,RichardNIPS13}\cite{huang2012ACL}\cite{TAC_0shot,emotion_0shot}\tabularnewline
\hline 
\end{tabular}\caption{\label{tab:Paper-summary-of} Different types of semantic representations
for zero-shot recognition.}
\end{table*}


\subsubsection{Data-driven attributes}

The attributes are usually defined by extra knowledge of either expert
users or concept ontology. To better augment such user-defined attributes,
Parikh \emph{et al}. \cite{parikh2011nameable_attribs} proposed a
novel approach to actively augment the vocabulary of attributes to
both help resolve intra-class confusions of new attributes and coordinate
the ``name-ability'' and ``discriminativeness'' of candidate attributes.
However, such user-defined attributes are far from enough to model
the complex visual data. The definition process can be
inefficient (costing substantial expert effort) and/or insufficient
(descriptive properties may not be discriminative). To tackle such
problems, it is necessary to automatically discover more discriminative
intermediate representations from visual data, \emph{i.e.} data-driven
attributes. The data-driven attributes can be used in zero-shot recognition
tasks \cite{liu2011action_attrib,fu2012attribsocial}.

Despite previous efforts, an exhaustive space of attributes is unlikely
to be available, due to the expense of ontology creation, and a simple
fact that semantically obvious attributes, for humans, do not necessarily
correspond to the space of detectable and discriminative attributes.
One method of collecting labels for large scale problems is to use
Amazon Mechanical Turk (AMT) \cite{amazon_mechanical}. However, even
with excellent quality assurance, the results collected still exhibit
strong label noise. Thus label-noise \cite{tang2009concepts_from_noisytags}
is a serious issue in learning from either AMT, or existing social
meta-data. More subtly, even with an exhaustive ontology, only a subset
of concepts from the ontology are likely to have sufficient annotated
training examples, so the portion of the ontology which is effectively
usable for learning may be much smaller. This has inspired work
on automatically mining attributes from data.

Data-driven attributes have only been explored in a few previous works.
Liu \emph{et al}. \cite{liu2011action_attrib} employed an information
theoretic approach to infer the data-driven attributes from training
examples by building a framework based on a latent SVM formulation.
They directly extended the attribute concepts in images to comparable
``action attributes'' in order to better recognize human actions.
Attributes are used to represent human actions from videos and enable
the construction of more descriptive models for human action recognition.
They augmented user-defined attributes with data-driven attributes
to better differentiate existing classes. Farhadi \emph{et al.~}\cite{farhadi2009attrib_describe}
also learned user-defined and data-driven attributes.

The data-driven attribute works in \cite{liu2011action_attrib,farhadi2009attrib_describe,latent_semantic_attrb}
have two limitations. First, they learn the user-defined and data-driven attributes
separately, rather than jointly in the same framework; data-driven
attributes may therefore re-discover patterns that already exist in the
user-defined attributes. Second, data-driven attributes are mined from data,
so the corresponding semantic attribute names of the discovered attributes
are unknown. For these reasons, data-driven attributes usually cannot be
directly used in zero-shot learning. These limitations
inspired the works of \cite{fu2012attribsocial,yanweiPAMIlatentattrib}.
Fu \emph{et al.} \cite{fu2012attribsocial,yanweiPAMIlatentattrib}
addressed the tasks of understanding multimedia data with sparse and
incomplete labels. Particularly, they studied the videos of social
group activities by proposing a novel scalable probabilistic topic
model for learning a semi-latent attribute space. The learned multi-modal
semi-latent attributes can enable multi-task learning, one-shot learning
and zero-shot learning. Habibian \emph{et al.} \cite{video_story_1shot}
proposed a new type of video representation by learning the ``VideoStory''
embedding from videos and corresponding descriptions. This representation
can also be interpreted as data-driven attributes. The work won the
best paper award in ACM Multimedia 2014.

\subsubsection{Video Attributes}

Most existing studies on attributes focus on object classification
from static images. Another line of work instead investigates attributes
defined in videos, \emph{i.e.}, video attributes, which are very important
for corresponding video related tasks such as action recognition and
activity understanding. Video attributes can correspond to a wide
range of visual concepts such as objects (e.g., animal), indoor/outdoor
scenes (e.g., meeting, snow), actions (e.g. blowing candle) and events
(e.g., wedding ceremony), and so on. Compared to static image attributes,
many video attributes can only be computed from image sequences and
are more complex in that they often involve multiple objects.

Video attributes are closely related to video concept detection in
the multimedia community. The video concepts in a video ontology can be
taken as video attributes in zero-shot recognition. Depending on the
ontology and models used, many approaches on video concept detection
(Chang \emph{et al}. \cite{change_aaai,change_ijcai}, Snoek \emph{et
al.} \cite{snoek2007semantic_retrieval}, Hauptmann \emph{et al.}
\cite{hauptmann2007semanticGapRetr}, Gan \emph{et al}. \cite{recog_action}
and Qin \emph{et al.} \cite{zero_shot_action_cvpr2017}) can therefore
be seen as addressing a sub-task of video attribute learning to solve
zero-shot video event detection. Some works aim to automatically expand
(\emph{e.g.}, Hauptmann \emph{et al.} \cite{hauptmann2007semanticGapRetr}
and Tang \emph{et al.} \cite{tang2009annotation}) or enrich (Yang
\emph{et al.} \cite{yang2011tag_tagging}) the set of video tags \cite{hospedales2011video_tags,toderici2010youtube_tag,yang2011tag_tagging}
given a search query. In this case, the expanded/enriched tagging
space has to be constrained by a fixed concept ontology, which may
be very large and complex \cite{toderici2010youtube_tag,Aradhye2009,yang2011disc_subtag}.
For example, there is a vocabulary space of over $20,000$ tags in
\cite{toderici2010youtube_tag}.

Zero-shot video event detection has also attracted considerable research
attention recently. A video event is a higher-level semantic entity
and is typically composed of multiple concepts/video attributes. For
example, a ``birthday party\textquotedblright{} event consists of
multiple concepts, \emph{e.g.}, ``blowing candle\textquotedblright{}
and ``birthday cake\textquotedblright . The semantic correlation
of video concepts has also been utilized to help predict the video
event of interest, such as weakly supervised concepts \cite{multimodal_0shot},
pairwise relationships of concepts (Gan \emph{et al.} \cite{concept_not_alone})
and general video understanding by object and scene semantics attributes
\cite{zuxuan_2016_CVPR,obj2action}. Note, a full survey of recent
works on zero-shot video event detection is beyond the scope of this
paper.

\subsection{Semantic Representations Beyond Attributes\label{sec:Generalised-Semantic-Representat}}

Besides the attributes, there are many other types of semantic representations,
\emph{e.g.} semantic word vector and concept ontology. Representations
that are directly learned from textual descriptions of categories
have also been investigated, such as Wikipedia articles \cite{Elhoseiny_2013_ICCV,deep_0shot},
sentence descriptions \cite{deep_0shot_cvpr} or knowledge graphs
\cite{RohrbachCVPR12,rohrbach2010semantic_transfer}.

\subsubsection{Concept ontology}

A concept ontology can be directly used as a semantic representation alternative
to attributes. For example, WordNet~\cite{WordNet_1995Miller} is
one of the most widely studied concept ontologies. It is a large-scale
semantic ontology built from a large lexical dataset of English. Nouns,
verbs, adjectives and adverbs are grouped into sets of cognitive synonyms
(synsets) which indicate distinct concepts. The idea of semantic distance, defined by the WordNet ontology, is also
used by Rohrbach \emph{et al}.~\cite{RohrbachCVPR12,rohrbach2010semantic_transfer}
for transferring semantic information in zero-shot learning problems.
They thoroughly evaluated many alternatives of semantic links between
auxiliary and target classes by exploring linguistic bases such as
WordNet, Wikipedia, Yahoo Web, Yahoo Image, and Flickr Image. Additionally,
WordNet has been used for many vision problems. Fergus \emph{et al}.~\cite{fergus2010label_share}
leveraged the WordNet ontology hierarchy to define semantic distance
between any two categories for sharing labels in classification. The
COSTA \cite{costa_mlzsl} model exploits the co-occurrences of visual
concepts in images for knowledge transfer in zero-shot recognition.

\subsubsection{Semantic word vectors}

Recently, word vector approaches, based on distributed language representations,
have gained popularity in zero-shot recognition \cite{ZSL_convex_optimization,zhang2016zero,zhang2016zeroshot,yanweiBMVC,DeviseNIPS13,RichardNIPS13}.
A user-defined semantic attribute space is pre-defined and each dimension
of the space has a specific semantic meaning according to either human
experts or concept ontology ({\em e.g.}, one dimension could correspond
to `has fur', and another `has four legs')(Sec. \ref{subsec:User-defined-Attributes}).
In contrast, the semantic word vector space is trained from linguistic
knowledge bases such as Wikipedia and UMBCWebBase using natural language
processing models \cite{huang2012ACL,wordvectorICLR}. As a result,
although the relative positions of different visual concepts will
have semantic meaning, e.g., a cat would be closer to a dog than a
sofa, each dimension of the space does not have a specific semantic
meaning. The language model is used to project each class' textual
name into this space. These projections can be used as prototypes
for zero-shot learning. Socher \emph{et al}.~\cite{RichardNIPS13}
learned a neural network model to embed each image into a $50$-dimensional
word vector semantic space, which was obtained using an unsupervised
linguistic model~\cite{huang2012ACL} trained on Wikipedia text.
The images from either known or unknown classes could be mapped into
such word vectors and classified by finding the closest prototypical
linguistic word in the semantic space.

Distributed semantic word vectors have been widely used for zero-shot
recognition. The skip-gram and CBOW models~\cite{wordvectorICLR,distributedword2vec2013NIPS}
are trained on large-scale text corpora to construct the semantic
word space. Different from the unsupervised linguistic model~\cite{huang2012ACL},
distributed word vector representations facilitate modeling of syntactic
and semantic regularities in language and enable vector-oriented reasoning
and vector arithmetic. For example, $Vec(\lyxmathsym{\textquotedblleft}Moscow\lyxmathsym{\textquotedblright})$
should be much closer to $Vec(\lyxmathsym{\textquotedblleft}Russia\lyxmathsym{\textquotedblright})+Vec(\lyxmathsym{\textquotedblleft}capital\lyxmathsym{\textquotedblright})$
than $Vec(\lyxmathsym{\textquotedblleft}Russia\lyxmathsym{\textquotedblright})$
or $Vec(\lyxmathsym{\textquotedblleft}capital\lyxmathsym{\textquotedblright})$
in the semantic space. One possible explanation and intuition underlying
these syntactic and semantic regularities is the distributional hypothesis
\cite{Harris1981}, which states that a word's meaning is captured
by other words that co-occur with it. Frome \emph{et al}.~\cite{DeviseNIPS13}
further scaled such ideas to recognize large-scale datasets. They
proposed a deep visual-semantic embedding model to map images into
a rich semantic embedding space for large-scale zero-shot recognition.
Fu \emph{et al.}~\cite{yanweiBMVC} showed that such a reasoning
could be used to synthesize all different label combination prototypes
in the semantic space and thus is crucial for multi-label zero-shot
learning. More recent work of using semantic word embedding includes
\cite{ZSL_convex_optimization,zhang2016zero,zhang2016zeroshot}.
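
To make the vector-oriented reasoning above concrete, the toy Python
snippet below illustrates it with three hypothetical 3-D word vectors;
real systems would load vectors trained with skip-gram/CBOW on large
corpora, so the numbers here are placeholders rather than actual word2vec
outputs.
\begin{verbatim}
# Toy illustration of vector arithmetic with word vectors.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

vec = {  # hypothetical 3-D embeddings, for illustration only
    "russia":  np.array([0.9, 0.1, 0.0]),
    "capital": np.array([0.0, 0.8, 0.3]),
    "moscow":  np.array([0.8, 0.7, 0.2]),
}
query = vec["russia"] + vec["capital"]
# The composite query is closer to "moscow" than either term alone.
print(cosine(query, vec["moscow"]))          # ~0.997
print(cosine(vec["russia"], vec["moscow"]))  # ~0.81
\end{verbatim}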

More interestingly, the vector arithmetic of semantic emotion word
vectors matches psychological theories of emotion, such as
Ekman's six pan-cultural basic emotions or Plutchik's wheel of emotions. For
example, $Vec(\lyxmathsym{\textquotedblleft}Surprise\lyxmathsym{\textquotedblright})+Vec(\lyxmathsym{\textquotedblleft}Sadness\lyxmathsym{\textquotedblright})$
is very close to $Vec(\lyxmathsym{\textquotedblleft}Disappointment\lyxmathsym{\textquotedblright})$;
and $Vec(\lyxmathsym{\textquotedblleft}Joy\lyxmathsym{\textquotedblright})+Vec(\lyxmathsym{\textquotedblleft}Trust\lyxmathsym{\textquotedblright})$
is very close to $Vec(\lyxmathsym{\textquotedblleft}Love\lyxmathsym{\textquotedblright})$.
Since there are usually thousands of words that can describe emotions,
zero-shot emotion recognition has also been investigated in \cite{TAC_0shot}
and \cite{emotion_0shot}.

\section{Models for Zero-shot Recognition\label{sec:Models-for-Zero-shot}}

With the help of semantic representations, zero-shot recognition can
usually be solved by first learning an embedding model (Sec. \ref{subsec:Embedding-Models})
and then doing recognition (Sec. \ref{subsec:Recognition-models-in}).
To the best of our knowledge, a general `embedding' formulation of
zero-shot recognition was first introduced by Larochelle \emph{et
al.~}\cite{zero_data_AAAI2008}. They embedded handwritten characters
with a typed representation, which further helped to recognize unseen
classes.

The embedding models aim to establish connections between seen classes
and unseen classes by projecting the low-level features of images/videos
close to their corresponding semantic vectors (prototypes). Once the
embedding is learned, from known classes, novel classes can be recognized
based on the similarity of their prototype representations and predicted
representations of the instances in the embedding space. The recognition
model matches the projection of the image features against the unseen
class prototypes (in the embedding space). In addition to discussing
these models and recognition methods in Sec. \ref{subsec:Embedding-Models}
and Sec. \ref{subsec:Recognition-models-in}, respectively, we will
also discuss the potential problems encountered in zero-shot recognition
models in Sec. \ref{subsec:Problems-in-Existing}.

\subsection{Embedding Models \label{subsec:Embedding-Models}}

\subsubsection{Bayesian Models}

The embedding models can be learned using a Bayesian formulation,
which enables easy integration of prior knowledge of each type of
attribute to compensate for limited supervision of novel classes in
image and video understanding. A generative model was first proposed
by Ferrari and Zisserman \cite{ferrari2007attrib_learn} for learning
simple color and texture attributes.

Lampert \emph{et al}. \cite{lampert2009zeroshot_dat,lampert13AwAPAMI}
were the first to study the problem of recognizing object categories
for which no training examples are available. Direct Attribute Prediction
(DAP) and Indirect Attribute Prediction (IAP) were the first two models
for zero-shot recognition \cite{lampert2009zeroshot_dat,lampert13AwAPAMI}.
The DAP and IAP algorithms first learn attribute classifiers using
Support Vector Machines (SVMs) and then perform recognition using a
Bayesian formulation. DAP and IAP further inspired later works
that employ generative models to learn the embedding, including
topic models \cite{yanweiPAMIlatentattrib,fu2012attribsocial,yu2010attributetransfer}
and random forests \cite{Jayaraman2014}. We briefly describe the
DAP and IAP models as follows (a minimal implementation sketch of DAP
is given after the IAP equation below),
\begin{itemize}
\item \emph{DAP~Model.}\quad{}Assume the relation between known classes,
$y_{1},\ldots,y_{K}$, unseen classes, $z_{1},\ldots,z_{L}$, and descriptive
attributes $a_{1},\ldots,a_{M}$ is given by a matrix of binary association
values $a_{m}^{y}$ and $a_{m}^{z}$. Such a matrix encodes the presence/absence
of each attribute in a given class. Extra knowledge is applied to
define such an association matrix, for instance, by leveraging human
experts~(Lampert \emph{et al.} \cite{lampert2009zeroshot_dat,lampert13AwAPAMI}),
by consulting a concept ontology~(Fu \emph{et al.} \cite{yanweiPAMIlatentattrib}),
or by semantic relatedness measured between class and attribute concepts~(Rohrbach
\emph{et al.} \cite{RohrbachCVPR12}). In the training stage, the
attribute classifiers are trained from the attribute annotations of
the known classes $y_{1},\ldots,y_{K}$. At the test stage, the posterior
probability $p(a_{m}|x)$ can be inferred for an individual attribute
$a_{m}$ in an image $x$. To predict the label of an unseen class
$z$, 
\end{itemize}
\begin{align}
p(z|x) & =\sum_{a\in\left\{ 0,1\right\} ^{M}}p(z|a)\,p(a|x)\label{eq:DAPmodel}\\
 & =\frac{p(z)}{p(a^{z})}\prod_{m=1}^{M}p(a_{m}=a_{m}^{z}|x)
\end{align}

\begin{itemize}
\item \emph{IAP~Model.}\quad{}The DAP model directly learns attribute
classifiers from the known classes, while the IAP model builds attribute
classifiers by combining the probabilities of all associated known
classes. It was also introduced as a direct similarity-based model in
Rohrbach \emph{et al.} \cite{RohrbachCVPR12}. In the training step,
we learn a probabilistic multi-class classifier to estimate
$p(y_{k}|x)$ for all training classes $y_{1},\ldots,y_{K}$. Once $p(a|x)$
is estimated, we use it in the same way as for DAP in zero-shot
classification. In the testing step, we predict, 
\end{itemize}
\begin{equation}
p(a_{m}|x)=\sum_{k=1}^{K}p(a_{m}|y_{k})\,p(y_{k}|x)\label{eq:IAP model}
\end{equation}
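
A minimal sketch of the DAP prediction rule above is given below in
Python, assuming the per-attribute posteriors $p(a_{m}|x)$ are already
available from pre-trained attribute classifiers and that class priors
are uniform; variable names and shapes are illustrative.
\begin{verbatim}
# DAP-style MAP prediction over unseen classes (numpy sketch).
import numpy as np

def dap_predict(attr_post, class_attr, attr_prior):
    # attr_post : (M,)   p(a_m = 1 | x) for one test image x
    # class_attr: (L, M) binary attribute signatures a^z
    # attr_prior: (M,)   p(a_m = 1) estimated on seen classes
    p_x = np.where(class_attr == 1, attr_post, 1 - attr_post)
    p_a = np.where(class_attr == 1, attr_prior, 1 - attr_prior)
    scores = np.prod(p_x / p_a, axis=1)   # proportional to p(z|x)
    return int(np.argmax(scores))
\end{verbatim}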


\subsubsection{Semantic Embedding}

Semantic embedding learns the mapping from the visual feature space to
the semantic space, which can take various semantic representations. As
discussed in Sec. \ref{subsec:Semantic-Attributes}, attributes
are introduced to describe objects, and the learned attributes may
not be optimal for recognition tasks. To this end, Akata \emph{et
al.} \cite{labelembeddingcvpr13} proposed the idea of label embedding,
which casts attribute-based image classification as a label-embedding
problem by learning a compatibility function between an image
and a label embedding. In their work, a modified ranking objective
function was derived from the WSABIE model~\cite{WASABIE2010}. As
object-level attributes may suffer from partial
occlusions and scale changes of images, Li \emph{et al.} \cite{LiECCV2014}
proposed learning and extracting attributes on segments containing
the entire object, followed by joint learning for simultaneous object
classification and segment proposal ranking by attributes. They thus
learned the embedding by minimising the max-margin empirical risk over both the
class label and the segmentation quality. Other semantic embedding
algorithms have also been investigated, such as semi-supervised max-margin
learning frameworks \cite{max_margin_zsl_2015,sslzsl_0shot}, latent
SVM \cite{zhang2016zero} or multi-task learning \cite{hwang2011obj_attrib,decorrelated_cvpr14,unified_model}.
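
As a concrete illustration of label embedding, the snippet below scores
classes with a bilinear compatibility function
$F(x,z)=\theta(x)^{\top}W\,\varphi(z)$ and predicts the most compatible
class. The random matrices stand in for quantities that would be learned
with a ranking objective on seen classes; the dimensions are
illustrative assumptions.
\begin{verbatim}
# Prediction with a bilinear compatibility function (numpy sketch).
import numpy as np

rng = np.random.default_rng(0)
d_vis, d_attr, n_cls = 2048, 85, 10
W   = rng.normal(size=(d_vis, d_attr))  # learned compatibility matrix
phi = rng.normal(size=(n_cls, d_attr))  # class attribute signatures
x   = rng.normal(size=d_vis)            # image feature theta(x)

scores = x @ W @ phi.T                  # F(x, z) for every class z
pred = int(np.argmax(scores))           # highest-compatibility class
\end{verbatim}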

\subsubsection{Embedding into Common Spaces}

Besides the semantic embedding, the relationship of visual and semantic
space can be learned by jointly exploring and exploiting a common
intermediate space. Extensive efforts \cite{deep_0shot,unified_model,embedding_akata,romera2015embarrassingly,yanweiembedding,yang2014unified,mahajan2011joint_attrib}
have been made in this direction. Akata \emph{et al.} \cite{embedding_akata}
learned a joint embedding semantic space between attributes, text
and hierarchical relationships. Ba \emph{et al.} \cite{deep_0shot}
employed text features to predict the output weights of both the convolutional
and the fully connected layers in a deep convolutional neural network
(CNN).

On one dataset, there may exist many different types of semantic representations.
Each type of representation may contain complementary information.
Fusing them can potentially improve the recognition performance. Thus
several recent works studied different methods of multi-view embedding.
Fu \emph{et al.} \cite{semantic_graph} employed the semantic class
label graph to fuse the scores of different semantic representations.
Similarly label relation graphs have also been studied in \cite{Deng2014}
and significantly improved large-scale object classification in supervised
and zero-shot recognition scenarios.

A number of successful approaches to learning a semantic embedding
space rely on Canonical Correlation Analysis (CCA). Hardoon \emph{et
al.}~\cite{CCAoverview} proposed a general kernel CCA method for
learning semantic embedding of web images and their associated text.
Such embedding enables a direct comparison between text and images.
Many more works \cite{SocherFeiFeiCVPR2010,multiviewCCAIJCV,HwangIJCV,topicimgannot}
focused on modeling the images/videos and associated text (\emph{e.g.},
tags on Flickr/YouTube). Multi-view CCA is often exploited to provide
unsupervised fusion of different modalities. Gong \emph{et al. }\cite{multiviewCCAIJCV}
also investigated the problem of modeling Internet images and associated
text or tags and proposed a three-view CCA embedding framework for
retrieval tasks. The additional view allows their framework to outperform
a number of two-view baselines on retrieval tasks. Qi \emph{et al}.
\cite{jointly_zsl} proposed an embedding model for jointly exploring
the functional relationships between text and image features for transferring
inter-modal and intra-modal labels to help annotate the images. The
inter-modal label transfer can be generalized to zero-shot recognition.
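
A two-view CCA embedding of this kind can be sketched as follows with
scikit-learn; the random data, dimensions and number of components are
placeholders, and in practice the image view would be visual features
and the text view would be tags, attributes or word vectors.
\begin{verbatim}
# Two-view common-space embedding with CCA (scikit-learn sketch).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_img = rng.normal(size=(400, 512))  # image features (seen classes)
X_txt = rng.normal(size=(400, 300))  # paired semantic/text vectors

cca = CCA(n_components=10)
cca.fit(X_img, X_txt)

# Project 5 test images and 5 candidate class vectors into the
# shared space, then match them by cosine similarity.
Z_img, Z_txt = cca.transform(rng.normal(size=(5, 512)),
                             rng.normal(size=(5, 300)))
Z_img /= np.linalg.norm(Z_img, axis=1, keepdims=True)
Z_txt /= np.linalg.norm(Z_txt, axis=1, keepdims=True)
pred = (Z_img @ Z_txt.T).argmax(axis=1)
\end{verbatim}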

\subsubsection{Deep Embedding}

Most recent zero-shot recognition models rely on state-of-the-art
deep convolutional models to extract the image features. As one of
the first works, DeViSE \cite{DeviseNIPS13} extended the deep architecture
to learn the visual and semantic embedding; and it can identify visual
objects using both labeled image data and semantic information
gleaned from unannotated text. ConSE \cite{ZSL_convex_optimization}
constructed the image embedding approach by mapping images into the
semantic embedding space via convex combination of the class label
embedding vectors. Both DeViSE and ConSE are evaluated on large-scale
datasets \textendash{} the ImageNet (ILSVRC) 2012 1K and ImageNet 2011
21K datasets.
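
The ConSE construction can be sketched compactly: the image is mapped
into the word-vector space as a probability-weighted combination of the
embeddings of the top-$T$ seen classes predicted by a standard
classifier. The snippet below is a simplified sketch (e.g., in its final
normalization choice); shapes and names are illustrative.
\begin{verbatim}
# ConSE-style convex-combination embedding (numpy sketch).
import numpy as np

def conse_embed(class_probs, seen_class_vecs, T=10):
    # class_probs:     (K,)   softmax over seen classes for one image
    # seen_class_vecs: (K, D) word vectors of the seen classes
    top = np.argsort(class_probs)[::-1][:T]
    z = class_probs[top] @ seen_class_vecs[top]  # weighted combination
    return z / np.linalg.norm(z)
# The embedded image z is then matched to unseen-class word vectors
# by cosine similarity.
\end{verbatim}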

To combine the visual and textual branches in the deep embedding,
different loss functions can be considered, including margin-based
losses \cite{DeviseNIPS13,yang2014unified}, or Euclidean distance
loss \cite{szegedy2015going}, or least square loss \cite{deep_0shot}.
Zhang \emph{et al.} \cite{deep_0shot_recent} employed the visual
space as the embedding space and proposed an end-to-end deep learning
architecture for zero-shot recognition. Their networks have two branches:
a visual encoding branch, which uses a convolutional neural network to
encode the input image as a feature vector, and a semantic embedding
branch, which encodes the semantic representation vector of the class
to which the corresponding image belongs.
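
A two-branch deep embedding of this kind can be sketched in PyTorch as
below, trained with a DeViSE-style margin-based ranking loss; the layer
sizes, margin and single-linear-layer branches are simplifying
assumptions for illustration, not the architecture of \cite{deep_0shot_recent}.
\begin{verbatim}
# Two-branch visual/semantic embedding with a margin ranking loss
# (PyTorch sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchEmbedding(nn.Module):
    def __init__(self, d_vis=2048, d_sem=300, d_emb=512):
        super().__init__()
        self.visual = nn.Linear(d_vis, d_emb)    # CNN feature -> space
        self.semantic = nn.Linear(d_sem, d_emb)  # class vector -> space
    def forward(self, v_feat, s_vec):
        v = F.normalize(self.visual(v_feat), dim=-1)
        s = F.normalize(self.semantic(s_vec), dim=-1)
        return v, s

def ranking_loss(v, s_pos, s_neg, margin=0.2):
    # True class should score higher than a sampled wrong class.
    pos = (v * s_pos).sum(dim=-1)
    neg = (v * s_neg).sum(dim=-1)
    return torch.clamp(margin - pos + neg, min=0).mean()
\end{verbatim}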

\subsection{Recognition Models in the Embedding Space\label{subsec:Recognition-models-in}}

Once the embedding model is learned, the testing instances can be
projected into this embedding space. The recognition can be carried
out using different recognition models. The most commonly used one
is the nearest neighbour classifier, which classifies a testing instance
by assigning the class label of the prototype nearest to its projection
in the embedding space. Fu \emph{et al.} \cite{yanweiPAMIlatentattrib} proposed
a semi-latent zero-shot learning algorithm that updates the class prototypes
by one step of self-training.
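
This nearest-neighbour decision rule is simple enough to state directly;
the Python sketch below assumes the test instances have already been
projected into the embedding space and uses cosine similarity (Euclidean
distance is an equally common choice).
\begin{verbatim}
# Nearest-prototype classification in the embedding space (numpy).
import numpy as np

def nn_classify(projections, prototypes):
    # projections: (N, D) embedded test instances
    # prototypes : (L, D) unseen-class prototypes in the same space
    P = projections / np.linalg.norm(projections, axis=1, keepdims=True)
    C = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return np.argmax(P @ C.T, axis=1)  # index of nearest prototype
\end{verbatim}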

Manifold information can be used in the recognition models in the
embedding space. Fu \emph{et al.} \cite{transductiveEmbeddingJournal}
proposed a hyper-graph structure in their multi-view embedding space;
and zero-shot recognition can be addressed by label propagation from
unseen prototype instances to unseen testing instances. Changpinyo
\emph{et al.} \cite{synthesized_0shot} synthesized classifiers in
the embedding space for zero-shot recognition. For multi-label zero-shot
learning, the recognition models have to consider the co-occurrence/correlations
of different semantic labels \cite{costa_mlzsl,yanweiBMVC,fast_0shot}.

Latent SVM structures have also been used as recognition models
\cite{wang2010reg_tag_corr,hwang2011obj_attrib}. Wang \emph{et al.
}\cite{wang2010reg_tag_corr} treated the object attributes as latent
variables and learnt the correlations of attributes through an undirected
graphical model. Hwang \emph{et al.} \cite{hwang2011obj_attrib} utilized
a kernelized multi-task feature learning framework to learn the sharing
features between objects and their attributes. Additionally, Long
et al. \cite{shaoling_cvpr2017} employed the attributes to synthesize
unseen visual features at the training stage, so that zero-shot recognition
can be solved by conventional supervised classification models.

\subsection{Problems in Zero-shot Recognition \label{subsec:Problems-in-Existing}}

There are two intrinsic problems in zero-shot recognition, namely
the projection domain shift problem (Sec. \ref{subsec:Projection-domain-shift})
and the hubness problem (Sec. \ref{subsec:Hubness-Problem}).

\subsubsection{Projection Domain Shift Problems\label{subsec:Projection-domain-shift}}

\begin{figure}[t]
\centering{}\includegraphics[scale=0.26]{fig1}\caption{\label{fig:domain-shift:Low-level-feature-distribution}Illustration of
the projection domain shift problem. Zero-shot prototypes are annotated
as red stars and predicted semantic attribute projections are shown in
blue. Pig and Zebra share the same `hasTail' attribute, yet the visual
appearance of `Tail' is very different. Figure reproduced from
\cite{transductiveEmbeddingJournal}. }
\end{figure}

The projection domain shift problem in zero-shot recognition was first
identified by Fu \emph{et al.} \cite{transductiveEmbeddingJournal}.
The problem can be explained as follows: since the source and target
datasets have different classes, the underlying data distributions
of these classes may also differ. The projection functions learned
on the source dataset, from the visual space to the embedding space,
will therefore introduce an unknown shift/bias if applied to the target
dataset without any adaptation. Figure \ref{fig:domain-shift:Low-level-feature-distribution} from
\cite{transductiveEmbeddingJournal} gives a more intuitive illustration
of this problem. It plots the 85D attribute space spanned by the feature
projections learned from the source data, together with the class
prototypes, which are 85D binary attribute vectors. Zebra is one of the
auxiliary classes and Pig is one of the target classes; the same `hasTail'
semantic attribute corresponds to very different visual appearances for
the two. In the attribute space, directly applying the projection functions
learned from the source classes (\emph{e.g.}, Zebra) to the target classes
(\emph{e.g.}, Pig) leads to a large discrepancy between the class
prototype of the target class and the predicted semantic attribute
projections.

To alleviate this problem, transductive learning based approaches
were proposed to utilize the manifold information of the instances
from unseen classes \cite{transductiveEmbeddingJournal,Eylor_iccv2015,transferlearningNIPS,Li_CVPR2017,zsl_action_xu,yanweiembedding}.
Nevertheless, the transductive setting assumes that all the testing
data can be accessed at once, which is clearly invalid if new
unseen classes appear dynamically and are unavailable before the models
are learned. Thus inductive learning based approaches \cite{Eylor_iccv2015,synthesized_0shot,Jayaraman2014,semantic_graph,Li_CVPR2017}
have also been studied; these methods usually enforce additional
constraints on, or exploit extra information from, the training data.

\subsubsection{Hubness problem\label{subsec:Hubness-Problem}}

The hubness problem is another interesting phenomenon that may be
observed in zero-shot recognition. Essentially, the hubness problem can
be described as the presence of `universal' neighbours, or hubs, in
the space. Radovanovic \emph{et al.} \cite{marcobaronihubness} were
the first to study the hubness problem; they hypothesized that hubness
is an inherent property of data distributions in high-dimensional
vector spaces. Nevertheless,
Low \emph{et al.} \cite{Low2013} challenged this hypothesis and showed
evidence that hubness is rather a boundary effect or, more generally,
an effect of a density gradient in the process of data generation.
Interestingly, their experiments showed that the hubness phenomenon
can also occur in low-dimensional data.

While the causes of hubness are still under investigation, recent works
\cite{dinu2014improving,shigeto2015ridge} noticed that regression
based zero-shot learning methods do suffer from this problem. To alleviate
it, Dinu \emph{et al.} \cite{dinu2014improving} utilized
the global distribution of feature instances of unseen data, \emph{i.e.},
in a transductive manner. In contrast, Shigeto \emph{et al.} \cite{shigeto2015ridge}
addressed the problem in an inductive way by embedding the class
prototypes into the visual feature space.
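
The inductive remedy of mapping in the reverse direction can be
illustrated with a minimal ridge-regression sketch. The closed-form
solution and nearest-neighbour matching follow standard practice; the
regularization strength and the assumption of precomputed features are
illustrative, not the exact recipe of \cite{shigeto2015ridge}.
\begin{verbatim}
import numpy as np

def fit_sem_to_vis(S_train, X_train, lam=1.0):
    """Learn a mapping from semantic space to visual space.
       S_train: (N, a) semantic vector of each training image's class;
       X_train: (N, d) visual features. Returns W: (a, d)."""
    a = S_train.shape[1]
    return np.linalg.solve(S_train.T @ S_train + lam * np.eye(a),
                           S_train.T @ X_train)

def zsl_predict(X_test, S_unseen, W, unseen_labels):
    # Project unseen-class prototypes into the visual space, then match there,
    # which is reported to be less prone to hubness than the reverse direction.
    proto_vis = S_unseen @ W
    d = np.linalg.norm(X_test[:, None, :] - proto_vis[None, :, :], axis=2)
    return [unseen_labels[i] for i in d.argmin(axis=1)]
\end{verbatim}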

\section{Beyond Zero-shot Recognition\label{sec:Beyond-Zero-shot-Recognition}}

\subsection{Generalized Zero-shot Recognition and Open-set~Recognition}

In conventional supervised learning tasks, it is taken for granted
that the algorithms take a ``closed set'' form, where
all testing classes are known at training time. Zero-shot
recognition, in contrast, assumes that the source and target classes
are disjoint and that the testing data come only from the unseen
classes. This assumption, of course, greatly and unrealistically simplifies
the recognition task. To relax the zero-shot recognition setting
and investigate recognition tasks in a more generic setting, several
tasks have been advocated beyond conventional zero-shot recognition.
In particular, generalized zero-shot recognition \cite{wild_0shot}
and open set recognition tasks have been discussed recently \cite{Scheirer_2014_TPAMIb,Scheirer_2013_TPAMI,ssvoc_2016_CVPR,ssvoc_evl}.

The generalized zero-shot recognition proposed in \cite{wild_0shot}
breaks the restrictive nature of conventional zero-shot recognition
by also including the training classes among the testing data. Chao
\emph{et al.} \cite{wild_0shot} showed that it is nontrivial and
ineffective to directly extend current zero-shot learning approaches
to solve generalized zero-shot recognition. Such a generalized
setting, due to its more practical nature, is recommended as the evaluation
setting for zero-shot recognition tasks \cite{zsl_ugly}.
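
In this generalized setting, performance is commonly summarized by the
harmonic mean of the per-class accuracies on seen and unseen classes, as
advocated in \cite{zsl_ugly}, so that a method cannot score well by
favouring the seen classes alone. A small sketch of this metric:
\begin{verbatim}
import numpy as np

def per_class_accuracy(y_true, y_pred):
    classes = np.unique(y_true)
    return np.mean([(y_pred[y_true == c] == c).mean() for c in classes])

def harmonic_mean(acc_seen, acc_unseen):
    # H = 2 * acc_s * acc_u / (acc_s + acc_u); rewards balanced performance
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen + 1e-12)
\end{verbatim}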

Open-set recognition, in contrast, has been developed independently
of zero-shot recognition. Initially, open set recognition aimed at
breaking the limitation of the ``closed set'' recognition setup. Specifically,
the task of open set recognition is to identify the class name
of an image from a very large set of classes, which includes but is
not limited to the training classes. Open set recognition can be roughly
divided into two sub-groups.

\subsubsection{Conventional open set recognition}

First formulated in \cite{Bendale_2015_CVPR,Sattar_2015_CVPR,Scheirer_2013_TPAMI,Scheirer_2014_TPAMIb},
conventional open set recognition only identifies whether a
testing image comes from the training classes or from some unseen class.
This category of methods does not explicitly predict which of the
unseen classes a testing instance belongs to. In such a setting,
conventional open set recognition is also
known as incremental learning \cite{gomes2008inc_dpmm,diehl2003inc_svm,iCaRL}.

\subsubsection{Generalized open set recognition}

The key difference from conventional open set recognition is that
generalized open set recognition also needs to explicitly predict
the semantic meaning (class) of testing instances, even those from unseen
novel classes. This task was first defined and evaluated in \cite{ssvoc_2016_CVPR,ssvoc_evl}
on object categorization. Generalized open set recognition
can be taken as the most general version of zero-shot recognition, where
the classifiers are trained from training instances of a limited set of training
classes, whilst the learned classifiers are required to classify
testing instances over a very large open vocabulary, say, the 310K-class
vocabulary in \cite{ssvoc_2016_CVPR,ssvoc_evl}. Conceptually
similar variants of generalized open-set recognition
have been studied in other research communities, such as
open-vocabulary object retrieval \cite{Guadarrama14:OOR,open_vocab_description},
open-world person re-identification \cite{open_world_1shot} or target
search \cite{Sattar_2015_CVPR}, and open vocabulary scene parsing \cite{open_vocab_scen_parsing}.

\subsection{One-shot recognition}

A problem closely related to zero-shot learning is one-shot or few-shot
learning \textendash{} instead of, or in addition to, having only a textual
description of the new classes, one-shot learning assumes that there
are one or a few training samples for each class. Similar to zero-shot
recognition, one-shot recognition is inspired by the fact that humans
are able to learn new object categories from one or very few examples
\cite{Jankowski,compositional_1shot}. Existing one-shot learning
approaches can be divided into two groups: direct supervised learning
based approaches and transfer learning based approaches.

\subsubsection{Direct Supervised Learning-based Approaches}

Early approaches do not assume that there exists a set of auxiliary
classes which are related and/or have ample training samples, from which
transferable knowledge can be extracted to compensate for the lack
of training samples. Instead, the target classes are used to train
a standard classifier using supervised learning. The simplest method
is to employ nonparametric models such as kNN, which are not restricted
by the number of training samples. However, without any learning,
the distance metric used for kNN is often inaccurate. To overcome
this problem, a metric embedding can be learned and then used for kNN
classification \cite{NIPS2004_2566}. Other approaches attempt to
synthesize more training samples to augment the small training dataset
\cite{inverse_graphics,CAD_models,human_level_prob,compositional_1shot}.
However, without knowledge transfer from other classes, the performance
of direct supervised learning based approaches is typically weak.
Importantly, these models cannot meet the requirement of lifelong
learning, that is, when new unseen classes are added, the learned
classifier should still be able to recognize the existing seen classes.

\subsubsection{Transfer Learning-based One-shot Recognition}

This category of approaches follows a similar setting to zero-shot
learning, that is, they assume that an auxiliary set of training data
from different classes exists. They explore the paradigm of learning
to learn \cite{Thrun96learningto} or meta-learning \cite{JVilalta2002AIR},
and aim to transfer knowledge from the auxiliary dataset to the target
dataset with one or a few examples per class. These approaches differ
in (i) what knowledge is transferred and (ii) how the knowledge is
represented. Specifically, the knowledge can be extracted and shared
in the form of a model prior in a generative model \cite{feifei2003unsup_1s_objcat_learn,feifei2006one_shot,tommasi2009transfercat},
features \cite{bart2005cross_gen,hertz2016icml,Fleuret2005nips,amit2007icml,wolfc2005cvpr,torralba2005pami},
semantic attributes \cite{yanweiPAMIlatentattrib,lampert13AwAPAMI,transferlearningNIPS,rohrbach2010semantic_transfer},
or contextual information \cite{one_shot_TL_contexutal}. Many of
these approaches take a similar strategy to existing zero-shot
learning approaches and transfer knowledge via a shared embedding
space. The embedding space can typically be formulated using neural network
(\emph{e.g.}, siamese network \cite{Bromley1993ijcai,siamese_1shot}),
discriminative (\emph{e.g.}, Support Vector Regressor (SVR) \cite{farhadi2009attrib_describe,lampert13AwAPAMI,Kienzle2006icml}),
metric learning \cite{quattoni2008sparse_transfer,fink2005nips},
or kernel embedding \cite{wolf2009iccv,hertz2016icml} methods. In particular,
one of the most common embedding approaches is semantic embedding, which is normally
obtained by projecting the visual features and semantic entities into
a common {\em new} space. Such projections can take various forms
with corresponding loss functions, such as SJE \cite{embedding_akata},
WSABIE \cite{Weston:2011:WSU:2283696.2283856}, ALE \cite{labelembeddingcvpr13},
DeViSE \cite{DeviseNIPS13}, and CCA \cite{yanweiembedding}.

More recently, deep meta-learning has received increasing attention
for few-shot learning \cite{deep_1shot_recent,feedforward_1shot,siamese_1shot,video2vec_1shot,video_story_1shot,open_world_1shot,matchingnet_1shot,infield_1shot,compositional_1shot}.
Wang \emph{et al.} \cite{yuxiong2016eccv,yuxiong2016nips} proposed the idea
of one-shot adaptation by automatically learning a generic, category-agnostic
transformation from models learned from few samples to models
learned from large enough sample sets. A model-agnostic meta-learning
framework was proposed by Finn \emph{et al.} \cite{pmlr-v70-finn17a}, which
trains a deep model on the auxiliary dataset with the objective
that the learned model can be effectively updated/fine-tuned on the
new classes with one or a few gradient steps. Note that, similar to the
generalised zero-shot learning setting, the problem of adding
new classes to a deep neural network whilst keeping the ability to
recognise the old classes has recently been attempted \cite{rusu-progressive-2016}.
However, lifelong learning that progressively adds new classes with
few shots remains an unsolved problem.
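
A minimal sketch of this meta-learning objective is given below. The tiny
functional classifier, the task format, and the inner learning rate are
illustrative assumptions rather than the exact recipe of
\cite{pmlr-v70-finn17a}; the key idea shown is that the outer loss is
computed after a per-task inner gradient step (second-order via
\texttt{create\_graph}).
\begin{verbatim}
import torch
import torch.nn.functional as F

def forward(params, x):
    # A tiny two-layer classifier kept functional so adapted weights can be swapped in.
    w1, b1, w2, b2 = params
    h = F.relu(x @ w1 + b1)
    return h @ w2 + b2

def maml_outer_loss(params, tasks, inner_lr=0.01):
    """params: list of tensors with requires_grad=True;
       tasks: list of (x_support, y_support, x_query, y_query) tensors."""
    total = 0.0
    for xs, ys, xq, yq in tasks:
        # Inner loop: one gradient step on this task's few support samples
        loss_s = F.cross_entropy(forward(params, xs), ys)
        grads = torch.autograd.grad(loss_s, params, create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer objective: the adapted parameters should do well on the query set
        total = total + F.cross_entropy(forward(fast, xq), yq)
    return total / len(tasks)
\end{verbatim}
The returned loss is then backpropagated to the initial parameters with a
standard optimizer, so that the initialization itself becomes easy to
adapt with few shots.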

\section{Datasets in Zero-shot Recognition\label{sec:Datasets-in-Zero-shot}}

This section summarizes the datasets used for zero-shot recognition.
Recently, with the increasing number of proposed zero-shot recognition
algorithms, Xian \emph{et al.} \cite{zsl_ugly} compared and analyzed
a significant number of state-of-the-art methods in depth and
defined a new benchmark by unifying both the evaluation protocols
and the data splits. The details of these datasets are listed in Tab.
\ref{tab:Datasets-in-zero-shot}.

\subsection{Standard Datasets}

\subsubsection{Animals with Attributes (AwA) dataset \cite{lampert2009zeroshot_dat}}

AwA consists of images of the 50 Osherson/Kemp animal categories collected
online. There are $30,475$ images with at least $92$ examples of
each class. Seven different feature types are provided: RGB color
histograms, SIFT~\cite{sift}, rgSIFT~\cite{colorSIFT2008CVPR},
PHOG~\cite{PHOG2007CVIR}, SURF~\cite{bay2008surf}, local self-similarity
histograms~\cite{selfsimilarity2007CVPR} and DeCAF~\cite{decaf2014ICML}.
The AwA dataset defines $50$ classes of animals and $85$ associated
attributes (such as `furry' and `has claws'). For consistent evaluation
of attribute-based object classification methods, the AwA dataset
defines $10$ test classes: \emph{chimpanzee}, \emph{giant panda},
\emph{hippopotamus}, \emph{humpback whale}, \emph{leopard}, \emph{Persian cat},
\emph{pig}, \emph{raccoon}, \emph{rat}, and \emph{seal}. The $6,180$ images of these
classes are taken as the test data, whereas the $24,295$ images of
the remaining $40$ classes can be used for training. Since the images
in AwA are not available under a public license, Xian \emph{et al.}
\cite{zsl_ugly} introduced a new zero-shot learning dataset
\textendash{} the Animals with Attributes 2 (AwA2) dataset \textendash{} with 37,322
publicly licensed and released images from the same 50 classes and
85 attributes as AwA.

\subsubsection{aPascal-aYahoo dataset \cite{farhadi2009attrib_describe}}

aPascal-aYahoo consists of a 12,695-image subset of the PASCAL VOC 2008
dataset with $20$ object classes (aPascal), and 2,644 images of $12$ object
classes collected using the Yahoo image search engine (aYahoo).
Each image in this dataset has been annotated with $64$ binary
attributes that characterize the visible objects.

\subsubsection{CUB-200-2011 dataset \cite{WahCUB_200_2011}}

CUB-200-2011 contains $11,788$ images of $200$ bird classes. This
is a more challenging dataset than AwA \textendash{} it is designed
for fine-grained recognition and has more classes but fewer images.
All images are annotated with bounding boxes, part locations, and
attribute labels. Images and annotations were filtered by multiple
users of Amazon Mechanical Turk. CUB-200-2011 is used as a benchmark
dataset for multi-class categorization and part localization. Each
class is annotated with $312$ binary attributes derived from the
bird species ontology. A typical setting is to use $150$ classes
as auxiliary data and hold out $50$ as target data, which is the
setting adopted by Akata \emph{et al.} \cite{labelembeddingcvpr13}.

\subsubsection{Outdoor Scene Recognition (OSR) Dataset \cite{scene_OSR}}

OSR consists of $2,688$ images from $8$ categories with $6$ attributes
(`openness', `natural', \emph{etc.}) and an average of $426$ labelled
pairs per attribute from $240$ training images. The constructed graphs
are thus extremely sparse. The pairwise attribute annotation was collected
via Amazon Mechanical Turk (Kovashka \emph{et al.} \cite{whittlesearch}). Each pair was
labelled by $5$ workers and the comparisons were aggregated by majority voting.
Each image also belongs to a scene type.

\subsubsection{Public Figure Face Database (PubFig) \cite{kumar2009}}

PubFig is a large face dataset of $58,797$ images of $200$ people collected
from the internet. Parikh \emph{et al.} \cite{parikh2011relativeattrib}
selected a subset of PubFig consisting of $772$ images from $8$
people with $11$ attributes (`smiling', `round face', \emph{etc.}).
We denote this subset as PubFig-sub. The pairwise attribute annotation
was collected via Amazon Mechanical Turk \cite{whittlesearch}, with each
pair labelled by $5$ workers. A total of $241$ training images of
PubFig-sub were labelled, with an average of $418$ compared
pairs per attribute.

\subsubsection{SUN attribute dataset \cite{SUN_attrib}}

This is a subset of the SUN Database \cite{xiao2010sunscene} for
fine-grained scene categorization and it has $14,340$ images from
$717$ classes ($20$ images per class). Each image is annotated with
$102$ binary attributes that describe the scenes' material and surface
properties as well as lighting conditions, functions, affordances,
and general image layout.

\subsubsection{Unstructured Social Activity Attribute (USAA) dataset \cite{fu2012attribsocial}}

USAA is the first benchmark video attribute dataset for social activity
video classification and annotation. The ground-truth attributes are
annotated for videos of $8$ semantic classes of the Columbia Consumer Video
(CCV) dataset~\cite{jiang2011consumervideo}, with $100$ videos
per class selected for training and testing respectively. These classes were
selected as the most complex social group activities. By referring
to existing work on video ontology~\cite{Zha_ontology,jiang2011consumervideo},
the $69$ attributes can be divided into five broad classes: actions,
objects, scenes, sounds, and camera movement. Directly using the ground-truth
attributes as input to an SVM yields $86.9\%$ classification
accuracy. This illustrates the challenge of the USAA dataset: although the
attributes are informative, there is sufficient intra-class variability
in the attribute space that even perfect knowledge of the instance-level
attributes is insufficient for perfect classification.

\subsubsection{ImageNet datasets \cite{rohrbach2010semantic_transfer,RohrbachCVPR12,ssvoc_2016_CVPR,synthesized_0shot}}

ImageNet has been used in several different papers with relatively
different settings. The original ImageNet dataset was proposed
in \cite{deng2009imagenet}. The full set of ImageNet contains over
15 million labeled high-resolution images belonging to roughly 22,000
categories, labelled by human annotators using Amazon's Mechanical
Turk (AMT) crowd-sourcing tool. Starting in 2010, as part of the Pascal
Visual Object Challenge, an annual competition called the ImageNet
Large-Scale Visual Recognition Challenge (ILSVRC) has been held. ILSVRC
uses a subset of ImageNet with roughly 1,000 images in each of 1,000
categories. In \cite{transferlearningNIPS,RohrbachCVPR12}, Rohrbach
\emph{et al.} split the ILSVRC 2010 data into 800/200 classes for
source/target data. In \cite{ssvoc_2016_CVPR}, Fu \emph{et al.} employed
the training data of ILSVRC 2012 as the source data, and the testing
part of ILSVRC 2012 as well as the data of ILSVRC 2010 as the target
data. The full-sized ImageNet data has been used in \cite{synthesized_0shot,DeviseNIPS13,ZSL_convex_optimization}.

\subsubsection{Oxford 102 Flower dataset \cite{oxford_flower}}

Oxford 102 Flower is a collection of 102 groups of flowers, each with 40 to
256 flower images, for a total of 8,189 images. The flowers were
chosen from common flower species in the United Kingdom. Elhoseiny
\emph{et al.} \cite{Elhoseiny_2013_ICCV} generated textual descriptions
for each class of this dataset.

\begin{table*}
\centering{}\begin{tabular}{cccccc}
\hline 
 & Dataset  & \#instances  & \#classes  & \#attributes  & Annotation Level\tabularnewline
\hline 
\multirow{8}{*}{A } & AwA  & 30,475  & 50  & 85  & per class\tabularnewline
 & aPascal-aYahoo  & 15,339  & 32  & 64  & per image\tabularnewline
 & PubFig  & 58,797  & 200  & \textendash{}  & per image\tabularnewline
 & PubFig-sub  & 772  & 8  & 11  & per image pair\tabularnewline
 & OSR  & 2,688  & 8  & 6  & per image pair\tabularnewline
 & ImageNet  & 15 million  & 22,000  & \textendash{}  & per image\tabularnewline
 & ILSVRC 2010  & 1.2 million  & 1,000  & \textendash{}  & per image\tabularnewline
 & ILSVRC 2012  & 1.2 million  & 1,000  & \textendash{}  & per image\tabularnewline
\hline 
\hline 
\multirow{3}{*}{B } & Oxford 102 Flower  & 8,189  & 102  & \textendash{}  & \textendash{}\tabularnewline
 & CUB-200-2011  & 11,788  & 200  & 312  & per class\tabularnewline
 & SUN-attribute  & 14,340  & 717  & 102  & per image\tabularnewline
\hline 
\hline 
\multirow{4}{*}{C } & USAA  & 1,600  & 8  & 69  & per video\tabularnewline
 & UCF101  & 13,320  & 101  & \textendash{}  & per video\tabularnewline
 & ActivityNet  & 27,801  & 203  & \textendash{}  & per video\tabularnewline
 & FCVID  & 91,223  & 239  & \textendash{}  & per video\tabularnewline
\hline 
\end{tabular}\caption{\label{tab:Datasets-in-zero-shot}Datasets in zero-shot recognition.
The datasets are divided into three groups: general image classification
(A), fine-grained image classification (B) and video classification
datasets (C).}
\end{table*}


\subsubsection{UCF101 dataset \cite{ucf101}}

UCF101 is another popular benchmark for human action recognition in
videos, consisting of $13,320$ video clips (27 hours in total)
with 101 annotated classes. More recently, the THUMOS-2014 Action
Recognition Challenge \cite{THUMOS} created a benchmark by extending
the UCF101 dataset (used as the training set). Additional videos
were collected from the Internet, including $2,500$ background videos,
$1,000$ validation videos and $1,574$ test videos.

\subsubsection{Fudan-Columbia Video Dataset (FCVID) \cite{fcvid_2017}}

FCVID contains $91,223$ web videos manually annotated into $239$
categories. The categories cover a wide range of topics (not only activities),
such as social events (\emph{e.g.}, tailgate party), procedural events
(\emph{e.g.}, making a cake), object appearances (\emph{e.g.}, panda)
and scenic videos (\emph{e.g.}, beach). The standard split consists of
$45,611$ videos for training and $45,612$ videos for testing.

\subsubsection{ActivityNet dataset \cite{activitynet}}

ActivityNet is another large-scale video dataset for human activity
recognition and understanding, released in 2015. It consists of
27,801 video clips annotated into 203 activity classes, totaling 849
hours of video. Compared with existing datasets, ActivityNet has more
fine-grained action categories (\emph{e.g.}, ``drinking beer''
and ``drinking coffee''). ActivityNet provides settings for both trimmed
and untrimmed videos of its classes.

\subsection{Discussion of Datasets}

In Tab. \ref{tab:Datasets-in-zero-shot}, we roughly divide all the
datasets into three groups: general image classification, fine-grained
image classification and video classification datasets. These datasets
have been widely employed as benchmarks in many previous
works. However, we believe that when comparing with other existing
methods on these datasets, there are several issues
that should be considered.

\subsubsection{Features}

With the renaissance of deep convolutional neural networks, deep features
of images/videos have been used for zero-shot recognition. Note that
different types of deep features (\emph{e.g.}, OverFeat \cite{overfeat},
VGG-19 \cite{returnDevil2014BMVC}, or ResNet \cite{he2015deep}) have
varying levels of semantic abstraction and representational ability;
even the same type of deep feature, if fine-tuned on a different
dataset or with slightly different parameters, will have a different
representational ability. Thus, without using the same type of features,
it is not possible to conduct a fair comparison among different methods
or draw meaningful conclusions. Importantly, the improved performance of
a zero-shot recognition method could be largely attributable to the
better deep features used.

\subsubsection{Auxiliary data}

As mentioned, zero-shot recognition can be formulated in a transfer
learning setting. The size and quality of the auxiliary data can be very
important for the overall performance of zero-shot recognition. Note
that the auxiliary data include not only the auxiliary source
image/video dataset, but also the data used to extract or train the
concept ontology or semantic word vectors. For example, semantic
word vectors trained on large-scale linguistic articles are, in general,
better semantically distributed than those trained on a small
linguistic corpus. Similarly, GloVe \cite{GloVec} is reported to
be better than the skip-gram and CBOW models \cite{distributedword2vec2013NIPS}.
Therefore, to make a fair comparison with existing works, another
important factor is to use the same set of auxiliary data.

\subsubsection{Evaluation}

For many datasets, there are no agreed source/target splits for zero-shot
evaluation. Xian \emph{et al.} \cite{zsl_ugly} suggested a new benchmark
by unifying both the evaluation protocols and the data splits.

\section{Future Research Directions\label{sec:Future-Research-Directions}}

\subsubsection{More Generalized and Realistic Setting}

From the detailed review of existing zero-shot learning methods, it
is clear that the existing efforts have overall focused on a
rather restrictive and impractical setting: classification is required
for new object classes only, and the new unseen classes, though having
no training samples, are assumed to be known. In reality, one
wants to progressively add new classes to the existing classes. Importantly,
this needs to be achieved without jeopardizing the ability of the
model to recognize the existing seen classes. Furthermore, we cannot assume
that new samples will only come from a set of known unseen classes.
Rather, they can only be assumed to belong to either existing seen
classes, known unseen classes, or unknown unseen classes. We therefore
foresee that a more generalized setting will be adopted by future zero-shot
learning work.

\subsubsection{Combining Zero-shot with Few-shot Learning}

As mentioned earlier, the problems of zero-shot and few-shot learning
are closely related and, as a result, many existing methods use the
same or similar models. However, it is somewhat surprising that
no serious efforts have been made to address the two problems
jointly. In particular, zero-shot learning typically does not consider
the possibility of having a few training samples, while few-shot learning
ignores the fact that the textual description/human knowledge about
the new class is always there to be exploited. A few existing zero-shot
learning methods \cite{ssvoc_2016_CVPR,yanweiPAMIlatentattrib,latent_0shot,deep_0shot_cvpr}
have included few-shot learning experiments. However, they typically
use a naive \emph{kNN} approach, that is, each class prototype is
treated as a training sample and, together with the $k$ shots, this becomes
a $(k+1)$-shot recognition problem. However, as shown by existing zero-shot
learning methods \cite{transductiveEmbeddingJournal}, the prototype
is worth far more than one training sample and should thus be treated
differently. We therefore expect a future direction to be extending existing
few-shot learning methods by incorporating the prototype as a `super'-shot
to improve model learning.

\subsubsection{Beyond object categories}

So far, current zero-shot learning efforts are limited to recognizing
object categories. However, visual concepts can have far more complicated
relationships than object categories. In particular, beyond objects/nouns,
attributes/adjectives are important visual concepts. When combined
with objects, the same attribute often has a different meaning, \emph{e.g.},
the concept of `yellow' in a yellow face and a yellow banana clearly
differs. Zero-shot learning of attributes with their associated objects is
thus an interesting future research direction.

\subsubsection{Curriculum learning}

In a lifelong learning setting, a model incrementally learns to
recognise new classes whilst keeping the capacity to recognise existing classes.
A related problem is thus how to select the most suitable new classes
to learn given the existing classes. It has been shown \cite{iCaRL,lifelong_iid,curriculum_learning}
that the order in which different classes are added has a clear impact on
model performance. It is therefore useful to investigate how to incorporate
curriculum learning principles in designing a zero-shot learning
strategy.

\section{Conclusion\label{sec:Conclusion}}

In this paper, we have reviewed the recent advances in zero-shot recognition.
Firstly, different types of semantic representations were examined and
compared; the models used in zero-shot learning were also investigated.
Next, beyond zero-shot recognition, one-shot and open set recognition
were identified as two closely related topics and thus reviewed.
Finally, the commonly used datasets in zero-shot recognition were
reviewed, together with a number of issues in the existing evaluation of
zero-shot recognition methods. We also point out a number of research
directions which we believe will be the focus of future zero-shot
recognition studies.

\vspace{0.1in}
\noindent \textbf{Acknowledgments.}
This work is supported in part by grants from NSF China ($\#61702108$, $\#61622204$, $\#61572134$), and a European FP7 project (PIRSES-GA-$2013-612652$). Yanwei Fu is supported by The Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning.
\noindent  \bibliographystyle{IEEEtran}
\bibliography{egbib}



\begin{IEEEbiography}{Yanwei Fu}
received the BSc degree in information and computing sciences from Nanjing University of Technology in 2008, and the MEng degree from the Department of Computer Science \& Technology at Nanjing University, China, in 2011. He is now pursuing his PhD in the vision group of EECS, Queen Mary University of London. His research interests include attribute learning, topic models, learning to rank, video summarization and image segmentation.
\end{IEEEbiography}

\begin{IEEEbiography}{Tao Xiang}
received the Ph.D. degree in electrical and computer engineering from the National University of Singapore in 2002. He is currently a reader (associate professor) in the School of Electronic Engineering and Computer Science, Queen Mary University of London. His research interests include computer vision, machine learning, and data mining. He has published over 140 papers in international journals and conferences.
\end{IEEEbiography}


\begin{IEEEbiography}{Leonid Sigal}
 is an Associate Professor at the University of British Columbia. Prior to this he was a Senior Research Scientist at Disney Research. He completed his Ph.D. at Brown University in 2008; received his M.A. from Boston University in 1999, and M.Sc. from Brown University in 2003. Leonid's research interests lie in the areas of computer vision, machine learning, and computer graphics. Leonid's research emphasis is on machine learning and statistical approaches for visual recognition, understanding and analytics. He has published more than 70 papers in venues and journals in these fields (including TPAMI, IJCV, CVPR, ICCV and NIPS).
\end{IEEEbiography}

\begin{IEEEbiography}{Yu-Gang Jiang}
is a Professor in the School of Computer Science, Fudan University, China. His Lab for Big Video Data Analytics conducts research on all aspects of extracting high-level information from big video data, such as video event recognition, object/scene recognition and large-scale visual search. His work has led to many awards, including the inaugural ACM China Rising Star Award and the 2015 ACM SIGMM Rising Star Award.
\end{IEEEbiography}

\begin{IEEEbiography}{Xiangyang Xue}
received the B.S., M.S., and Ph.D. degrees in communication engineering from Xidian University, Xi'an, China, in 1989, 1992 and 1995, respectively. He is currently a Professor of Computer Science at Fudan University, Shanghai, China. His research interests include multimedia information processing and machine learning.
\end{IEEEbiography}

\begin{IEEEbiography}{Shaogang Gong}
received the DPhil degree in 1989 from Keble College, Oxford University. He has been Professor of Visual Computation at Queen Mary University of London since 2001, and is a fellow of the Institution of Electrical Engineers and a fellow of the British Computer Society. His research interests include computer vision, machine learning, and video analysis.
\end{IEEEbiography}



\end{document}