---
license: cc-by-sa-3.0
task_categories:
- image-classification
language:
- en
pretty_name: mnist_ambiguous
size_categories:
- 10K<n<100K
source_datasets:
- extended|mnist
annotations_creators:
- machine-generated
---


# Mnist-Ambiguous

This dataset contains MNIST-like images, but with an unclear ground truth: for each image, there are two classes that could be considered the true label.
Robust and uncertainty-aware DNNs should thus detect and flag such ambiguous inputs.

### Features
As in MNIST, the supervised dataset has an `image` (28x28 int array) and a `label` (int).

Additionally, the following features are exposed for your convenience:

- `text_label` (str): A textual representation of the probabilistic label, e.g. `p(0)=0.54, p(5)=0.46`
- `p_label` (list of floats): Ground-truth probability for each of the ten classes (exactly two nonzero values for our ambiguous images)
- `is_ambiguous` (bool): Flag indicating whether this is one of our ambiguous images (see 'Splits' below)
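
For example, the dataset can be loaded and inspected with the 🤗 `datasets` library. A minimal sketch, assuming the dataset is hosted under the hub id `mweiss/mnist_ambiguous` (adjust if your copy lives elsewhere):

```python
from datasets import load_dataset

# Hub id is an assumption; replace it if the dataset is hosted elsewhere.
ds = load_dataset("mweiss/mnist_ambiguous", split="test")

example = ds[0]
print(example["label"])         # int, as in plain MNIST
print(example["text_label"])    # e.g. "p(0)=0.54, p(5)=0.46"
print(example["p_label"])       # ten floats, two of them nonzero
print(example["is_ambiguous"])  # True for every image in the `test` split
```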

### Splits
We provide four splits:

- `test`: 10'000 ambiguous images
- `train`: 10'000 ambiguous images. Adding ambiguous images to the training set ensures that test-time ambiguous images are in-distribution.
- `test_mixed`: 20'000 images, consisting of the (shuffled) concatenation of our ambiguous `test` set and the nominal MNIST test set by LeCun et al.
- `train_mixed`: 70'000 images, consisting of the (shuffled) concatenation of our ambiguous `train` set and the nominal MNIST training set.

Note that the ambiguous test images are highly ambiguous (i.e., the two classes have very similar ground-truth likelihoods), 
whereas the training set images allow for more unbalanced ambiguity. 
This keeps the training set more closely connected to the nominal data, while still keeping the test set clearly ambiguous.
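
This difference can be checked directly from `p_label`, e.g. via the gap between the two nonzero class probabilities. A sketch (the hub id is again an assumption):

```python
import numpy as np
from datasets import load_dataset

def ambiguity_gap(p_label):
    """Gap between the two largest ground-truth probabilities; 0 = maximally ambiguous."""
    top2 = np.sort(np.asarray(p_label))[-2:]
    return float(top2[1] - top2[0])

# Hub id is an assumption; see above.
test = load_dataset("mweiss/mnist_ambiguous", split="test")
train = load_dataset("mweiss/mnist_ambiguous", split="train")

print("mean gap (test): ", np.mean([ambiguity_gap(ex["p_label"]) for ex in test]))
print("mean gap (train):", np.mean([ambiguity_gap(ex["p_label"]) for ex in train]))
# The test gap should be smaller, as the test set is kept clearly ambiguous.
```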

For research explicitly targeting aleatoric uncertainty, we recommend training the model on `train_mixed`; 
otherwise, our `test` set will induce both epistemic and aleatoric uncertainty. 
In related literature, such 'mixed' splits are sometimes denoted as *dirty* splits.
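
As an illustration of such a setup, the probabilistic `p_label` targets can be used directly with a soft-target cross-entropy loss. A sketch in PyTorch (whose `CrossEntropyLoss` accepts class probabilities as targets since version 1.10); the hub id remains an assumption:

```python
import numpy as np
import torch
import torch.nn as nn
from datasets import load_dataset

# Hub id is an assumption; replace if your copy lives elsewhere.
train = load_dataset("mweiss/mnist_ambiguous", split="train_mixed")

# Flatten images and normalize; use the probabilistic labels as soft targets.
x = torch.tensor(np.stack([np.asarray(ex["image"]) for ex in train]),
                 dtype=torch.float32).view(-1, 784) / 255.0
y = torch.tensor(np.stack([ex["p_label"] for ex in train]), dtype=torch.float32)

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # accepts soft (probabilistic) targets

for epoch in range(3):
    perm = torch.randperm(len(x))
    for i in range(0, len(x), 256):
        idx = perm[i:i + 256]
        opt.zero_grad()
        loss = loss_fn(model(x[idx]), y[idx])
        loss.backward()
        opt.step()
```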

### Assessment and Validity
For a brief discussion of the strengths and weaknesses of this dataset, 
including a quantitative comparison to the (only) other ambiguous datasets available in the literature, we refer to our paper.

### Paper
Pre-print here: [https://arxiv.org/abs/2207.10495](https://arxiv.org/abs/2207.10495)

Citation:
```
@misc{https://doi.org/10.48550/arxiv.2207.10495,
  doi = {10.48550/ARXIV.2207.10495},
  url = {https://arxiv.org/abs/2207.10495},
  author = {Weiss, Michael and Gómez, André García and Tonella, Paolo},
  title = {A Forgotten Danger in DNN Supervision Testing: Generating and Detecting True Ambiguity},
  publisher = {arXiv},
  year = {2022}
}
```

### License
As this is a derivative work of MNIST, which is licensed under CC BY-SA 3.0, our dataset is released under the same license.