---
annotations_creators:
- crowdsourced
language:
- en
license: mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- image-classification
---
## Neglected Free Lunch – Learning Image Classifiers Using Annotation Byproducts | [Paper](https://arxiv.org/abs/2303.17595)

Dongyoon Han<sup>1*</sup>, Junsuk Choe<sup>2*</sup>, Seonghyeok Chun<sup>3</sup>, John Joon Young Chung<sup>4</sup>

Minsuk Chang<sup>5</sup>, Sangdoo Yun<sup>1</sup>, Jean Y. Song<sup>6</sup>, Seong Joon Oh<sup>7&dagger;</sup>

<sub>\* Equal contribution</sub> <sub>&dagger; Corresponding author</sub>

<sup>1</sup> <sub>NAVER AI LAB</sub> <sup>2</sup> <sub>Sogang University</sub> <sup>3</sup> <sub>Dante Company</sub> <sup>4</sup> <sub>University of Michigan</sub> <sup>5</sup> <sub>NAVER AI LAB, currently at Google</sub> <sup>6</sup> <sub>DGIST</sub> <sup>7</sup> <sub>University of T&uuml;bingen</sub>

Supervised learning of image classifiers distills human knowledge into a parametric model *f* through pairs of images and corresponding labels (*X*, *Y*). We argue that this simple and widely used representation of human knowledge neglects rich auxiliary information from the annotation procedure, such as the time series of mouse traces and clicks.

<p align=center>
<img src="https://user-images.githubusercontent.com/7447092/203720567-dc6e1277-84d2-439c-a9f8-879e31c04e6f.png" alt="imagenet-byproduct-sample" width=500px />
</p>

Our insight is that such **annotation byproducts** *Z* provide approximate human attention that weakly guides the model to focus on the foreground cues, reducing spurious correlations and discouraging shortcut learning.

We have created **ImageNet-AB** and **COCO-AB** to verify this. They are the ImageNet and COCO training sets enriched with sample-wise annotation byproducts, collected by replicating the respective original annotation tasks. We refer to the new paradigm of training models with annotation byproducts as **learning using annotation byproducts (LUAB)**.

<p align=center>
<img src="https://user-images.githubusercontent.com/7447092/203721515-2aea133d-1a77-4463-8372-5f0e0dbe4d2d.png" alt="luab" width=500px />
</p>

We show that a simple multitask loss for regressing *Z* together with *Y* already improves the generalisability and robustness of the learned models. Compared to the original supervised learning, LUAB does not require extra annotation costs.
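As a concrete illustration of this multitask setup, the sketch below pairs a classification head for *Y* with a small regression head for *Z*, assuming *Z* is a normalised 2-d click location. The backbone, head design, and loss weight `lambda_z` are illustrative placeholders, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LUABModel(nn.Module):
    """Two-headed model: class logits for Y, a 2-d regression for Z."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                          # any feature extractor
        self.cls_head = nn.Linear(feat_dim, num_classes)  # predicts the label Y
        self.z_head = nn.Linear(feat_dim, 2)              # predicts Z, e.g. an (x, y) click

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)
        return self.cls_head(feats), self.z_head(feats)

def luab_loss(logits, z_pred, y, z, lambda_z: float = 0.1):
    """Cross-entropy on the label plus a weighted MSE on the byproduct."""
    return F.cross_entropy(logits, y) + lambda_z * F.mse_loss(z_pred, z)
```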
### Dataloader for ImageNet-AB and COCO-AB

We provide example dataloaders for the annotation byproducts:

* Dataloader for ImageNet-AB: [imagenet_dataloader.ipynb](imagenet_dataloader.ipynb)
* Dataloader for COCO-AB: [coco_dataloader.ipynb](coco_dataloader.ipynb)
### Annotation tools for ImageNet and COCO

* Annotation tool for ImageNet: [github.com/naver-ai/imagenet-annotation-tool](https://github.com/naver-ai/imagenet-annotation-tool)
* Annotation tool for COCO: [github.com/naver-ai/coco-annotation-tool](https://github.com/naver-ai/coco-annotation-tool)
### License

```
MIT License

Copyright (c) 2023-present NAVER Cloud Corp.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
## General Information

**Title**: COCO-AB

**Description**:
The COCO-AB dataset is an extension of the COCO 2014 training set, enriched with additional annotation byproducts (AB).
It contains 82,765 reannotated images from the original COCO 2014 training set.
It is relevant to computer vision, specifically to object detection and localisation.
The aim of the dataset is to provide a richer understanding of the images, at no extra annotation cost, by recording additional actions and interactions from the annotation process.

**Links**:

- [ICCV'23 Paper](https://arxiv.org/abs/2303.17595)
- [Main Repository](https://github.com/naver-ai/NeglectedFreeLunch)
- [COCO Annotation Interface](https://github.com/naver-ai/coco-annotation-tool)
## Collection Process

**Collection Details**:
The additional annotations for COCO-AB were collected from Amazon Mechanical Turk (MTurk) workers in the US region, since the task was described in English.
The task was designed as a human intelligence task (HIT), and workers were required to have an approval rate of at least 90% to ensure quality.
Each HIT contained 20 pages of annotation tasks, with a single candidate image to be tagged per page.
We followed the original COCO annotation interface as closely as possible; see the [GitHub repository](https://github.com/naver-ai/coco-annotation-tool) and the [Paper](https://arxiv.org/abs/2303.17595) for further information.

A total of 4,140 HITs were completed, of which 365 were rejected based on criteria such as recall rate, accuracy of icon location, task completion rate, and verification against our database and a secret hash code.
**Annotator Compensation**:
Annotators were paid 2.0 USD per HIT.
The median time to complete a HIT was 12.1 minutes, which translates to an approximate hourly wage of 2.0 USD × (60 / 12.1) ≈ 9.92 USD, above the US federal minimum hourly wage.
A total of 8,280 USD (4,140 HITs × 2.0 USD) was paid to the MTurk annotators, with an additional 20% fee paid to Amazon.
**Annotation Rejection**:
We rejected a HIT under the following circumstances (a sketch of these checks follows the list):

- The recall rate was lower than 0.333.
- The accuracy of icon location was lower than 0.75.
- The annotator did not complete at least 16 out of the 20 pages of tasks.
- The annotation was not found in our database, and the secret hash code for confirming completion was incorrect.

In total, 365 out of 4,140 completed HITs (8.8%) were rejected.
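For illustration only, the rules above can be expressed as a single check. The field names below (`recall`, `icon_accuracy`, `pages_completed`, `found_in_db`, `hash_code_valid`) are hypothetical bookkeeping names, not fields of the released dataset.

```python
def should_reject(hit: dict) -> bool:
    """Apply the four rejection criteria listed above to one HIT."""
    if hit["recall"] < 0.333:            # recall rate too low
        return True
    if hit["icon_accuracy"] < 0.75:      # icon placement too inaccurate
        return True
    if hit["pages_completed"] < 16:      # fewer than 16 of the 20 pages done
        return True
    if not hit["found_in_db"] and not hit["hash_code_valid"]:
        return True                      # completion could not be verified
    return False
```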
**Collection Time**:
The entire annotation collection process took place between January 9, 2022, and January 12, 2022.
## Data Schema

```json
{
  "image_id": 459214,
  "originalImageHeight": 428,
  "originalImageWidth": 640,
  "categories": ["car", "bicycle"],
  "imageHeight": 450,
  "imageWidth": 450,
  "timeSpent": 22283,
  "actionHistories": [
    {"actionType": "add",
     "iconType": "car",
     "pointTo": {"x": 0.583, "y": 0.588},
     "timeAt": 16686},
    {"actionType": "add",
     "iconType": "bicycle",
     "pointTo": {"x": 0.592, "y": 0.639},
     "timeAt": 16723}
  ],
  "categoryHistories": [
    {"categoryIndex": 1,
     "categoryName": "Animal",
     "timeAt": 10815,
     "usingKeyboard": false},
    {"categoryIndex": 10,
     "categoryName": "IndoorObjects",
     "timeAt": 19415,
     "usingKeyboard": false}
  ],
  "mouseTracking": [
    {"x": 0.679, "y": 0.862, "timeAt": 15725},
    {"x": 0.717, "y": 0.825, "timeAt": 15731}
  ],
  "worker_id": "00AA3B5E80",
  "assignment_id": "3AMYWKA6YLE80HK9QYYHI2YEL2YO6L",
  "page_idx": 8
}
```
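As a quick illustration of reading a record, the snippet below prints the icon placements and category selections. The file name `coco_ab_train.json` and the assumption that it holds a list of records shaped as above are placeholders; the field names follow the schema.

```python
import json

# Placeholder path; point this at wherever the byproduct records live.
with open("coco_ab_train.json") as f:
    records = json.load(f)  # assumed: a list of records like the example above

record = records[0]
print(record["image_id"], record["categories"])

# Each "add" action carries the icon type, a normalised (x, y) point,
# and a timestamp (timeAt, apparently in milliseconds).
for action in record["actionHistories"]:
    if action["actionType"] == "add":
        print(action["iconType"], action["pointTo"], "at", action["timeAt"])

# Category selections made in the interface, in chronological order.
for cat in record["categoryHistories"]:
    print(cat["categoryName"], "at", cat["timeAt"])
```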
## Usage

The annotation byproducts can be used to improve model generalisability and robustness.
This is appealing, as the annotation byproducts incur no extra annotation cost for the annotators.
For more information, refer to our [ICCV'23 Paper](https://arxiv.org/abs/2303.17595).
## Dataset Statistics

Annotators reannotated 82,765 (99.98%) of the 82,783 training images in the COCO 2014 training set; for those images, we have recorded the annotation byproducts.
We found that each HIT recalls 61.9% of the classes present in an image, with a standard deviation of ±0.118%p.
The average localisation accuracy of icon placement is 92.3%, with a standard deviation of ±0.057%p.
## Ethics and Legalities

The crowdsourced annotators were fairly compensated for their time, at a rate well above the US federal minimum wage.
In terms of data privacy, the dataset maintains the same ethical standards as the original COCO dataset.
Worker identifiers were anonymised with a non-reversible hashing function to protect privacy.

Our data collection obtained IRB approval from an author's institute.
For future collections of annotation byproducts, we note that byproducts potentially carry private information about annotators, and data collectors may even attempt to harvest more private information as byproducts.
We urge data collectors not to collect or exploit private information from annotators and, whenever appropriate, to ask for the annotators' consent.
## Maintenance and Updates

This section will be updated as and when there are changes or updates to the dataset.

## Known Limitations

Given the budget constraint, we have not been able to acquire 8+ annotations per sample, as done in the original work.
## Citation Information

```bibtex
@inproceedings{han2023iccv,
  title = {Neglected Free Lunch – Learning Image Classifiers Using Annotation Byproducts},
  author = {Han, Dongyoon and Choe, Junsuk and Chun, Seonghyeok and Chung, John Joon Young and Chang, Minsuk and Yun, Sangdoo and Song, Jean Y. and Oh, Seong Joon},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = {2023}
}
```