---
annotations_creators: []
language: en
size_categories:
- 1K<n<10K
task_categories:
- image-classification
task_ids: []
pretty_name: ImageNet-A
tags:
- fiftyone
- image
- image-classification
dataset_summary: '




  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 7450 samples.


  ## Installation


  If you haven''t already, install FiftyOne:


  ```bash

  pip install -U fiftyone

  ```


  ## Usage


  ```python

  import fiftyone as fo

  import fiftyone.utils.huggingface as fouh


  # Load the dataset

  # Note: other available arguments include ''max_samples'', etc

  dataset = fouh.load_from_hub("Voxel51/ImageNet-A")


  # Launch the App

  session = fo.launch_app(dataset)

  ```

  '
---

# Dataset Card for ImageNet-A

![image](ImageNet-A.gif)


This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 7450 samples.

The recipe notebook for creating this FiftyOne Dataset can be found [here](https://colab.research.google.com/drive/1GpM0AEr8YolZxVF5tp0L94YIuT1WP-I7?usp=sharing).

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("Voxel51/ImageNet-A")

# Launch the App
session = fo.launch_app(dataset)
```
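
If you just want a quick look at the data, `load_from_hub` also accepts the additional arguments mentioned in the comment above, such as `max_samples`. A minimal sketch that loads a small subset:

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load only a small subset for a quick look (max_samples is one of the
# additional arguments mentioned above)
subset = fouh.load_from_hub("Voxel51/ImageNet-A", max_samples=100)

print(subset)  # prints the dataset summary: fields, sample count, etc.
session = fo.launch_app(subset)
```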


## Dataset Details

### Dataset Description

ImageNet-A is a dataset of adversarially filtered images that reliably fool current ImageNet classifiers. 

It contains natural, unmodified real-world images whose misclassifications transfer to a wide range of unseen ImageNet models, demonstrating that these models share weaknesses that adversarially selected natural images can expose.

To create ImageNet-A, the authors first downloaded numerous candidate images for each ImageNet class. They then deleted every image that fixed (i.e., pretrained and frozen) ResNet-50 classifiers predicted correctly, and from the remaining misclassified images they manually selected visually clear examples.

The resulting ImageNet-A dataset has ~7,500 adversarially filtered images. 
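
The filtering criterion itself is easy to express in code. The following is an illustrative sketch, not the authors' pipeline: it keeps only candidate images that a fixed, pretrained ResNet-50 misclassifies (the candidate paths and class indices shown are hypothetical).

```python
import torch
from PIL import Image
from torchvision import models

# Fixed (pretrained, frozen) ResNet-50 used only as a filter
weights = models.ResNet50_Weights.IMAGENET1K_V1
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def fools_resnet50(image_path: str, true_class_idx: int) -> bool:
    """Return True if the fixed ResNet-50 misclassifies this image."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        pred = model(batch).argmax(dim=1).item()
    return pred != true_class_idx

# Example usage (hypothetical paths and ImageNet-1K class indices):
# candidates = [("img_0001.jpg", 107), ("img_0002.jpg", 107)]
# kept = [path for path, idx in candidates if fools_resnet50(path, idx)]
```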

ImageNet-A enables testing image classification performance when the input distribution shifts from the training distribution[1]: its challenging natural images provide a direct measure of model robustness to this kind of distribution shift.
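
As an example, one way to do this with FiftyOne is to apply a pretrained classifier from the model zoo and score its predictions against the ground-truth labels. The sketch below assumes the label field is named `ground_truth` and that the zoo model's class names line up with the dataset's labels; adjust field names as needed.

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
import fiftyone.zoo as foz

dataset = fouh.load_from_hub("Voxel51/ImageNet-A")

# Run a pretrained ImageNet classifier from the FiftyOne model zoo
model = foz.load_zoo_model("resnet50-imagenet-torch")
dataset.apply_model(model, label_field="resnet50")

# Score predictions against the ground-truth labels
# (assumes the label field is named "ground_truth")
results = dataset.evaluate_classifications(
    "resnet50",
    gt_field="ground_truth",
    eval_key="eval_resnet50",
)
results.print_report()  # per-class precision/recall and overall accuracy

# Browse the (many) misclassified samples in the App
view = dataset.match(fo.ViewField("eval_resnet50") == False)
session = fo.launch_app(view)
```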

- The images belong to 200 ImageNet classes selected to avoid overly fine-grained classes and classes with substantial overlap[1]. The classes span the broadest categories in ImageNet-1K.

- To create ImageNet-A, the authors downloaded images related to the 200 classes from sources like iNaturalist, Flickr, and DuckDuckGo[1]. 

- They then filtered out images that fixed ResNet-50 classifiers could correctly predict[1]. Images that fooled the classifiers were kept.

- The authors manually selected visually clear, single-class images from the remaining incorrectly classified images to include in the final dataset[1].

- The resulting dataset contains 7,500 natural, unmodified images that reliably transfer to and fool unseen models[1]. 

Citations:
[1] https://ar5iv.labs.arxiv.org/html/1907.07174



- **Curated by:** Jacob Steinhardt, Dawn Song
- **Funded by:** UC Berkeley
- **Shared by:** [Harpreet Sahota](https://twitter.com/DataScienceHarp), Hacker-in-Residence at Voxel51
- **Language(s) (NLP):** en
- **License:** MIT

### Dataset Sources

- **Repository:** https://github.com/hendrycks/natural-adv-examples
- **Paper:** https://ar5iv.labs.arxiv.org/html/1907.07174

## Citation

**BibTeX:**

```bibtex
@article{hendrycks2021nae,
  title={Natural Adversarial Examples},
  author={Dan Hendrycks and Kevin Zhao and Steven Basart and Jacob Steinhardt and Dawn Song},
  journal={CVPR},
  year={2021}
}
```