---
license: mit
task_categories:
  - image-classification
language:
  - en
pretty_name: fashion_mnist_ambiguous
size_categories:
  - 10K<n<100K
source_datasets:
  - extended|fashion_mnist
annotations_creators:
  - machine-generated
---

# Fashion-MNIST-Ambiguous

This dataset contains fashion-mnist-like images with an intentionally unclear ground truth: for each image, there are two classes that could be considered true. Robust and uncertainty-aware DNNs should thus detect and flag this ambiguity.
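
As a minimal sketch of what such flagging could look like, one can threshold the entropy of a model's predictive distribution (the threshold and the example probabilities below are illustrative assumptions, not part of the dataset):

```python
import numpy as np

def flag_ambiguous(probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Flag inputs whose predictive distribution has high entropy.

    probs: (n_samples, n_classes) softmax outputs of some trained model.
    threshold: illustrative entropy cutoff; tune it for your model and task.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return entropy > threshold

# Example: a confident prediction vs. an ambiguous one.
probs = np.array([[0.97, 0.01, 0.02], [0.54, 0.46, 0.00]])
print(flag_ambiguous(probs))  # [False  True]
```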

## Features

As in fashion-mnist, each example in the supervised dataset consists of an image (a 28x28 int array) and a label (int).

Additionally, the following features are exposed for your convenience:

- `text_label` (str): A textual representation of the probabilistic label, e.g. `p(Pullover)=0.54, p(Shirt)=0.46`
- `p_label` (list of floats): Ground-truth probabilities for each class (two nonzero values for our ambiguous images)
- `is_ambiguous` (bool): Flag indicating whether this is one of our ambiguous images (see 'Splits' below)
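
A small sketch of how to inspect these fields with the `datasets` library (the repository id below is an assumption for illustration; substitute the dataset's actual hub path):

```python
from datasets import load_dataset

# Repository id is assumed for illustration; adjust to the actual hub path.
ds = load_dataset("mweiss/fashion_mnist_ambiguous", split="test")

example = ds[0]
print(example["text_label"])    # e.g. "p(Pullover)=0.54, p(Shirt)=0.46"
print(example["p_label"])       # ten class probabilities, two of them nonzero
print(example["is_ambiguous"])  # True for every image in this split
```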

## Splits

We provide four splits:

- `test`: 10'000 ambiguous images
- `train`: 10'000 ambiguous images. Adding ambiguous images to the training set ensures that test-time ambiguous images are in-distribution.
- `test_mixed`: 20'000 images, consisting of the (shuffled) concatenation of our ambiguous test set and the nominal, original fashion-mnist test set
- `train_mixed`: 70'000 images, consisting of the (shuffled) concatenation of our ambiguous training set and the nominal training set

Note that the ambiguous test images are highly ambiguous (i.e., the two classes have very similar ground-truth likelihoods), whereas the ambiguous training images allow for more unbalanced ambiguity. This makes the training set more closely connected to the nominal data, while still keeping the test set clearly ambiguous.

For research explicitly targeting aleatoric uncertainty, we recommend training the model on `train_mixed`; otherwise, our test set will induce both epistemic and aleatoric uncertainty. In the related literature, such 'mixed' splits are sometimes denoted as dirty splits.
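
As an illustration of how the probabilistic labels can be used, here is a minimal sketch that trains a small Keras classifier on `train_mixed` against the soft `p_label` targets. The architecture and hyperparameters are arbitrary assumptions, and the repository id is the same assumption as above:

```python
import numpy as np
import tensorflow as tf
from datasets import load_dataset

# Repository id is assumed for illustration; adjust to the actual hub path.
ds = load_dataset("mweiss/fashion_mnist_ambiguous", split="train_mixed")
x = np.array(ds["image"], dtype=np.float32) / 255.0  # (n, 28, 28), scaled to [0, 1]
y = np.array(ds["p_label"], dtype=np.float32)        # (n, 10) soft targets

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# Categorical cross-entropy accepts probabilistic (soft) targets directly,
# so the ambiguous ground truth needs no special handling.
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(x, y, epochs=5, batch_size=64)
```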

## Assessment and Validity

For a brief discussion of the strengths and weaknesses of this dataset, we refer to our paper. Please note that our images are not typical realistic images: while they represent multiple classes and thus have an ambiguous ground truth, they do not resemble real-world photographs.

## Paper

Pre-print here: https://arxiv.org/abs/2207.10495

Citation:

```bibtex
@misc{https://doi.org/10.48550/arxiv.2207.10495,
  doi = {10.48550/ARXIV.2207.10495},
  url = {https://arxiv.org/abs/2207.10495},
  author = {Weiss, Michael and Gómez, André García and Tonella, Paolo},
  title = {A Forgotten Danger in DNN Supervision Testing: Generating and Detecting True Ambiguity},
  publisher = {arXiv},
  year = {2022}
}
```

## Related Datasets