VictorSanh (HF staff) committed
Commit 61210a1
1 Parent(s): 94c1e74

basic dataset card

Files changed (1): README.md (+163, -0)
README.md ADDED
---
license: cc-by-2.0
---
# Dataset Card for NoCaps

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://nocaps.org/](https://nocaps.org/)
- **Paper:** [nocaps: novel object captioning at scale](https://openaccess.thecvf.com/content_ICCV_2019/papers/Agrawal_nocaps_novel_object_captioning_at_scale_ICCV_2019_paper.pdf)
- **Leaderboard:**
- **Point of Contact:** contact@nocaps.org

### Dataset Summary

NoCaps (novel object captioning at scale) consists of 166,100 human-generated captions describing 15,100 images from the Open Images validation and test sets.
The associated training data consists of COCO image-caption pairs, plus Open Images image-level labels and object bounding boxes.
Since Open Images contains many more classes than COCO, nearly 400 object classes seen in test images have no or very few associated training captions (hence, nocaps).
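
As a quick orientation, here is a minimal loading sketch using the `datasets` library. The repository id and split name below are assumptions; substitute the actual Hub id of this dataset.

```
from datasets import load_dataset

# Hypothetical Hub repository id for this dataset card; adjust to the actual id.
# Split names are assumed to mirror the Open Images validation/test splits.
nocaps = load_dataset("HuggingFaceM4/NoCaps", split="validation")

print(nocaps)                       # number of rows and column names
example = nocaps[0]                 # a single image with its reference captions
print(example["image_open_images_id"])
print(example["annotations_captions"][0])
```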

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

Each instance has the following structure:
```
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=732x1024 at 0x7F574A3A9B50>,
  'image_coco_url': 'https://s3.amazonaws.com/nocaps/val/0013ea2087020901.jpg',
  'image_date_captured': '2018-11-06 11:04:33',
  'image_file_name': '0013ea2087020901.jpg',
  'image_height': 1024,
  'image_width': 732,
  'image_id': 0,
  'image_license': 0,
  'image_open_images_id': '0013ea2087020901',
  'annotations_ids': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
  'annotations_captions': [
    'A baby is standing in front of a house.',
    'A little girl in a white jacket and sandals.',
    'A young child stands in front of a house.',
    'A child is wearing a white shirt and standing on a side walk. ',
    'A little boy is standing in his diaper with a white shirt on.',
    'A child wearing a diaper and shoes stands on the sidewalk.',
    'A child is wearing a light-colored shirt during the daytime.',
    'A little kid standing on the pavement in a shirt. ',
    'Black and white photo of a little girl smiling.',
    'a cute baby is standing alone with white shirt'
  ]
}
```

### Data Fields

- `image`: The image
- `image_coco_url`: URL for the image
- `image_date_captured`: Date at which the image was captured
- `image_file_name`: The file name for the image
- `image_height`: Height of the image, in pixels
- `image_width`: Width of the image, in pixels
- `image_id`: Id of the image
- `image_license`: License index associated with the image (appears to always be 0 in this dataset)
- `image_open_images_id`: Open Images id of the image
- `annotations_ids`: Unique ids for the captions (to use in conjunction with `annotations_captions`)
- `annotations_captions`: Captions for the image (to use in conjunction with `annotations_ids`); see the usage sketch below
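
The two annotation columns are parallel lists, so each caption id lines up with its caption text. A small sketch of pairing them per image, reusing the assumed repository id and split name from the loading example above:

```
from datasets import load_dataset

# Same assumptions as the loading sketch above (hypothetical repository id / split name).
nocaps = load_dataset("HuggingFaceM4/NoCaps", split="validation")
example = nocaps[0]

# `annotations_ids` and `annotations_captions` are parallel lists: zip them into id -> caption.
captions = dict(zip(example["annotations_ids"], example["annotations_captions"]))

print(example["image_file_name"], f'{example["image_width"]}x{example["image_height"]} px')
for caption_id, caption in sorted(captions.items()):
    print(caption_id, caption)

# `image` is decoded as a PIL image, so it can be shown or saved directly.
example["image"].save("example.jpg")
```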

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.