harpreetsahota committed
Commit 988d049
Parent(s): b70b9e6

Update README.md

Files changed (1)
  1. README.md +24 -119
README.md CHANGED
@@ -17,8 +17,6 @@ dataset_summary: >
  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4838
  samples.

-
-
  ## Installation


@@ -30,8 +28,6 @@ dataset_summary: >
  pip install -U fiftyone

  ```
-
-
  ## Usage


@@ -60,9 +56,6 @@ dataset_summary: >

  ![image/png](imagenet-d.gif)

-
-
-
  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 4838 samples.

  ## Installation
@@ -81,141 +74,53 @@ import fiftyone.utils.huggingface as fouh

  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
- dataset = fouh.load_from_hub("harpreetsahota/ImageNet-D")

  # Launch the App
  session = fo.launch_app(dataset)
  ```

-
- ## Dataset Details
-
  ### Dataset Description

- <!-- Provide a longer summary of what this dataset is. -->
-
-
-
- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** en
- - **License:** [More Information Needed]
-
- ### Dataset Sources [optional]
-
- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the dataset is intended to be used. -->

- ### Direct Use

- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]
-
- ## Dataset Structure
-
- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]

  ### Source Data

- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

  #### Data Collection and Processing

- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
- [More Information Needed]
-
- #### Who are the source data producers?

- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

- [More Information Needed]

- ### Annotations [optional]

- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

- #### Annotation process

- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]
-
- #### Personal and Sensitive Information
-
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Dataset Card Authors [optional]
-
- [More Information Needed]
-
- ## Dataset Card Contact
-
- [More Information Needed]
 
  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
+ dataset = fouh.load_from_hub("Voxel51/ImageNet-D")

  # Launch the App
  session = fo.launch_app(dataset)
  ```
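Beyond launching the App, a few quick queries are a handy sanity check on the download. The snippet below is a minimal sketch, assuming the ground-truth classification is stored in a field named `ground_truth`; the actual field name in this dataset may differ, so inspect `dataset.get_field_schema()` (or the App sidebar) first.

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
from fiftyone import ViewField as F

dataset = fouh.load_from_hub("Voxel51/ImageNet-D")

# Inspect the schema to find the label field
# (assumed to be `ground_truth` below; adjust to the real name)
print(dataset.get_field_schema())

# Per-class sample counts, largest first
counts = dataset.count_values("ground_truth.label")
print(sorted(counts.items(), key=lambda kv: -kv[1])[:10])

# Restrict the view to a single category, e.g. "backpack"
backpacks = dataset.match(F("ground_truth.label") == "backpack")
print(len(backpacks))

# Browse only the filtered view
session = fo.launch_app(backpacks)
```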

  ### Dataset Description

+ ImageNet-D is a new benchmark created using diffusion models to generate realistic synthetic images with diverse backgrounds, textures, and materials [1]. The dataset contains 4,835 hard images that cause accuracy drops of up to 60% for a range of vision models, including ResNet, ViT, CLIP, LLaVA, and MiniGPT-4 [1].

+ To create ImageNet-D, a large pool of synthetic images is generated by combining object categories with various nuisance attributes using Stable Diffusion [1]. The most challenging images, those that cause shared failures across multiple surrogate models, are selected for the final dataset [1]. Human labeling via Amazon Mechanical Turk is used for quality control to ensure the images are valid and of high quality [1].

+ Experiments show that ImageNet-D reveals significant robustness gaps in current vision models [1]. The synthetic images transfer well to unseen models, uncovering common failure modes [1]. ImageNet-D provides a more diverse and challenging test set than prior synthetic benchmarks such as ImageNet-C, ImageNet-9, and Stylized-ImageNet [1].

+ Citations:
+ [1] https://arxiv.org/html/2403.18775v1

+ - **Funded by:** KAIST, University of Michigan, Ann Arbor, McGill University, MILA
+ - **License:** MIT License

  ### Source Data

+ See the [original repo](https://github.com/chenshuang-zhang/imagenet_d) for details.

  #### Data Collection and Processing

+ The ImageNet-D dataset was constructed using diffusion models to generate a large pool of realistic synthetic images covering various combinations of object categories and nuisance attributes. The key steps in the data collection and generation process were:

+ 1. **Image generation**: The Stable Diffusion model was used to generate high-fidelity images from user-defined text prompts specifying the desired object category (C) and nuisance attributes (N) such as background, material, and texture. Image generation is formulated as:

+ `Image(C, N) = StableDiffusion(Prompt(C, N))`

+ For example, to generate an image of a backpack, the prompt might specify "a backpack in a wheat field" to control both the object category and the background nuisance (see the sketch after this list).

+ 2. **Prompt design**: A set of prompts was designed to cover a matrix of object categories and nuisance attributes (see Table 1 in the paper for an overview). This allows images to be generated for a much broader range of category-nuisance combinations than existing test sets cover.

+ 3. **Labeling**: Each generated image is automatically labeled with the object category (C) specified in its generation prompt. This category label serves as the ground truth for evaluating classification models on ImageNet-D: a prediction is counted as incorrect if the model's predicted class does not match the ground-truth category.
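As a rough sketch of steps 1-3, the snippet below builds a small category-nuisance prompt matrix, renders each prompt with an off-the-shelf Stable Diffusion checkpoint from `diffusers`, and records the prompt's category as the ground-truth label. The checkpoint id, prompt template, and category/nuisance lists are illustrative assumptions rather than the exact configuration used for ImageNet-D, and the later surrogate-model filtering and human verification steps are omitted.

```python
from itertools import product

import torch
from diffusers import StableDiffusionPipeline

# Illustrative categories and nuisances; the real dataset covers a much
# larger matrix (see Table 1 of the paper)
categories = ["backpack", "mug", "umbrella"]
backgrounds = ["in a wheat field", "on a snowy street", "under water"]

# Any Stable Diffusion checkpoint works for a sketch; the exact model used
# for ImageNet-D may differ
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

records = []
for category, background in product(categories, backgrounds):
    # Prompt(C, N): combine the object category with a nuisance attribute
    prompt = f"a photo of a {category} {background}"

    # Image(C, N) = StableDiffusion(Prompt(C, N))
    image = pipe(prompt).images[0]

    filepath = f"{category}_{background.replace(' ', '_')}.png"
    image.save(filepath)

    # The category named in the prompt is the ground-truth label (step 3)
    records.append({"filepath": filepath, "label": category, "prompt": prompt})
```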

+ #### Who are the source data producers?

+ Chenshuang Zhang, Fei Pan, Junmo Kim, In So Kweon, Chengzhi Mao

+ ## Citation

  **BibTeX:**

+ @article{zhang2024imagenet_d,
+   author = {Zhang, Chenshuang and Pan, Fei and Kim, Junmo and Kweon, In So and Mao, Chengzhi},
+   title = {ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object},
+   journal = {CVPR},
+   year = {2024},
+ }