[Dataset viewer preview omitted. The audio configurations expose four columns: file_name (a string such as backward/0165e0e8_0.wav), label (one of 8 command classes), position (int64), and audio (1-second clips).]
Dataset Card for ChaosMining
ChaosMining is a synthetic dataset for evaluating post-hoc local attribution methods in low signal-to-noise ratio (SNR) environments. Post-hoc local attribution methods are explainable-AI techniques such as Saliency (SA), DeepLift (DL), Integrated Gradients (IG), and Feature Ablation (FA). The dataset evaluates the feature-selection ability of these methods when predictive signals are mixed with large amounts of noise.
Dataset Descriptions
The dataset covers three modalities:
- Symbolic Functional Data: Mathematical functions with noise, used to study regression tasks. Derived from human-designed symbolic functions with predictive and irrelevant features.
- Vision Data: Images combining foreground objects from the CIFAR-10 dataset and background noise or flower images. 224x224 images with 32x32 foreground objects and either Gaussian noise or structural flower backgrounds.
- Audio Data: Audio sequences with a mix of relevant (speech commands) and irrelevant (background noise) signals.
Dataset Sources
Please check out the following:
- Repository: [https://github.com/geshijoker/ChaosMining/tree/main] for data curation and evaluation.
- Paper: [https://arxiv.org/pdf/2406.12150] for details.
Dataset Details
Symbolic Functional Data
- Synthetic Generation: Data is derived from predefined mathematical functions, ensuring a clear ground truth for evaluation.
- Functions: Human-designed symbolic functions combining primitive mathematical operations (e.g., polynomial, trigonometric, exponential functions).
- Generation Process: Each feature is sampled from a normal distribution N(μ,σ^2) with μ=0 and σ=1. Predictive features are computed using the defined symbolic functions, while noise is added by including irrelevant features.
- Annotations: Ground truth annotations are generated based on the symbolic functions used to create the data.
- Normalization: Data values are normalized to ensure consistency across samples.
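As an illustration, the generation steps above can be sketched in a few lines. The symbolic function, feature counts, and helper name below are hypothetical stand-ins, not drawn from the dataset itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_symbolic_sample(n_predictive=3, n_noise=7):
    """Toy version of the generation process: sample all features from
    N(0, 1), compute the target from the predictive features only, and
    leave the remaining features as irrelevant noise.
    (The symbolic function below is illustrative, not one of the 15 in the dataset.)"""
    x = rng.normal(loc=0.0, scale=1.0, size=n_predictive + n_noise)
    # Example symbolic function combining primitive operations
    y = x[0] ** 2 + np.sin(x[1]) + np.exp(-abs(x[2]))
    # Ground-truth mask marking which features are predictive
    mask = np.zeros(n_predictive + n_noise, dtype=bool)
    mask[:n_predictive] = True
    return x, y, mask

x, y, mask = make_symbolic_sample()
```

Because the generating function is known exactly, the predictive-feature mask provides an unambiguous ground truth against which attribution methods can be scored.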
Vision Data
- Foreground Images: CIFAR-10 dataset, containing 32x32 pixel images of common objects.
- Background Images: Flower102 dataset and Gaussian noise images.
- Combination: Foreground images are overlaid onto background images to create synthetic samples. Foreground images are either centered or randomly placed.
- Noise Types: Backgrounds are generated using Gaussian noise for random noise conditions, or sampled from the Flower102 dataset for structured noise conditions.
- Annotations: Each image is annotated with the position of the foreground object and its class label.
- Splitting: The dataset is divided into training and validation sets to ensure no data leakage.
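A minimal sketch of the composition step for the random-background condition follows; the image sizes match the description above, while the helper name and noise parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def compose_image(foreground, random_position=True):
    """Overlay a 32x32 foreground onto a 224x224 Gaussian-noise background,
    as in the random-background conditions; returns the image and the
    top-left (x, y) position of the foreground."""
    bg = rng.normal(size=(224, 224, 3))
    if random_position:
        x = int(rng.integers(0, 224 - 32))
        y = int(rng.integers(0, 224 - 32))
    else:  # centered placement
        x = y = (224 - 32) // 2
    bg[y:y + 32, x:x + 32, :] = foreground
    return bg, (x, y)

fg = rng.uniform(size=(32, 32, 3))  # stand-in for a CIFAR-10 image
img, (px, py) = compose_image(fg)
```

For the structured-noise condition, the Gaussian background would be replaced by a resized Flower102 image, with the same position annotation recorded.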
Audio Data
- Foreground Audio: Speech Command dataset, containing audio clips of spoken commands.
- Background Audio: Random noise generated from a normal distribution and samples from the Rainforest Connection Species dataset.
- Combination: Each audio sample consists of multiple channels, with only one channel containing the foreground audio and the rest containing background noise.
- Noise Conditions: Background noise is either random (generated from a normal distribution) or structured (sampled from environmental sounds).
- Annotations: Each audio sample is annotated with the class label of the foreground audio and the position of the predictive channel.
- Normalization: Audio signals are normalized to a consistent range for uniform processing.
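The channel-composition step might be sketched as follows; the channel count and clip length match the loading example later in this card, while the helper name and the per-channel normalization details are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def compose_audio(foreground, n_channels=10):
    """Place a 1-second foreground clip in one randomly chosen channel and
    fill the remaining channels with N(0, 1) noise, matching the
    random-noise audio condition."""
    length = foreground.shape[0]
    sample = rng.normal(size=(n_channels, length))
    position = int(rng.integers(0, n_channels))
    sample[position] = foreground
    # Normalize every channel to [-1, 1]
    sample = sample / np.abs(sample).max(axis=1, keepdims=True)
    return sample, position

speech = rng.uniform(-1, 1, size=16000)  # stand-in for a Speech Commands clip
audio, pos = compose_audio(speech)
```

In the structured condition, the noise channels would instead hold environmental sounds sampled from the Rainforest Connection Species dataset.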
Benchmark Metrics:
The benchmark uses a Model × Attribution × Noise Condition triplet design to evaluate the performance of various post-hoc attribution methods across different scenarios.
- Uniform Score (UScore): Measures prediction accuracy normalized to a range of 0 to 1.
- Functional Precision (FPrec): Measures the overlap between top-k predicted features and actual predictive features.
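A plausible implementation of FPrec as described above, treating attribution magnitude as the ranking score and defaulting k to the number of ground-truth predictive features (both choices are assumptions, not taken from the paper):

```python
import numpy as np

def functional_precision(attributions, predictive_idx, k=None):
    """Fraction of the top-k attributed features that are truly predictive.
    By default, k equals the number of ground-truth predictive features."""
    predictive = set(predictive_idx)
    if k is None:
        k = len(predictive)
    # Rank features by absolute attribution, descending
    top_k = np.argsort(-np.abs(attributions))[:k]
    return len(predictive.intersection(top_k)) / k

# Example: 3 predictive features among 10; the method ranks two of them in its top 3.
attr = np.array([0.9, 0.05, 0.8, 0.02, 0.01, 0.7, 0.0, 0.0, 0.0, 0.0])
score = functional_precision(attr, predictive_idx=[0, 1, 2])  # 2 of top-3 are predictive
```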
Dataset Structure
The available configurations are ('symbolic_simulation', 'audio_RBFP', 'audio_RBRP', 'audio_SBFP', 'audio_SBRP', 'vision_RBFP', 'vision_RBRP', 'vision_SBFP', 'vision_SBRP'); pick one of them when loading. The 'symbolic_simulation' configuration has only a 'train' split, while the others have both 'train' and 'validation' splits.
Load Dataset
For general data-loading usage of the Hugging Face datasets API, including how to work with TensorFlow, PyTorch, JAX, and other frameworks, please refer to the general usage documentation. Below is template code for PyTorch users.
from datasets import load_dataset
from torch.utils.data import DataLoader
# Load the symbolic functional data from huggingface datasets
dataset = load_dataset('geshijoker/chaosmining', 'symbolic_simulation')
print(dataset)
Out: DatasetDict({
    train: Dataset({
        features: ['num_var', 'function'],
        num_rows: 15
    })
})
# Read the formulas as a list of (number_of_features, function_string) pairs
formulas = [[data_slice['num_var'], data_slice['function']] for data_slice in dataset['train']]
# Load the vision data from huggingface datasets
dataset = load_dataset('geshijoker/chaosmining', 'vision_RBFP', split='validation', streaming=True)
# Convert the huggingface Dataset to a PyTorch-compatible format
dataset = dataset.with_format('torch')
# Use a dataloader for minibatch loading
dataloader = DataLoader(dataset, batch_size=32)
next(iter(dataloader))
Out: {'image': torch.Size([32, 3, 224, 224]), 'foreground_label': torch.Size([32]), 'position_x': torch.Size([32]), 'position_y': torch.Size([32])}
# Load the audio data from huggingface datasets
dataset = load_dataset('geshijoker/chaosmining', 'audio_RBFP', split='validation', streaming=True)
# Define a transformation that flattens the nested 'audio' field
def transform_audio(example):
    # Remove the 'path' field
    del example['audio']['path']
    # Promote 'array' and 'sampling_rate' to top-level fields
    example['sampling_rate'] = example['audio']['sampling_rate']
    example['audio'] = example['audio']['array']
    return example
# Apply the transformation to the dataset
dataset = dataset.map(transform_audio)
dataset = dataset.with_format('torch')
# Use a dataloader for minibatch loading
dataloader = DataLoader(dataset, batch_size=32)
next(iter(dataloader))
Out: {'audio': torch.Size([32, 10, 16000]), 'sampling_rate': torch.Size([32]), 'label': list_of_32, 'file_name': list_of_32}
Curation Rationale
The dataset was created to provide controlled, low signal-to-noise ratio environments that test the efficacy of post-hoc local attribution methods.
- Purpose: To study the effectiveness of neural networks in regression and classification tasks where relevant features are mixed with noise.
- Challenges Addressed: Differentiating between predictive and irrelevant features in a controlled, low signal-to-noise ratio environment.
Source Data
Synthetic data derived from known public datasets (CIFAR-10, Flower102, Speech Commands, Rainforest Connection Species) and generated noise.
Citation
If you use this dataset or code in your research, please cite the paper as follows:
BibTeX:
@article{shi2024chaosmining,
  title={ChaosMining: A Benchmark to Evaluate Post-Hoc Local Attribution Methods in Low SNR Environments},
  author={Shi, Ge and Kan, Ziwen and Smucny, Jason and Davidson, Ian},
  journal={arXiv preprint arXiv:2406.12150},
  year={2024}
}
APA: Shi, G., Kan, Z., Smucny, J., & Davidson, I. (2024). ChaosMining: A Benchmark to Evaluate Post-Hoc Local Attribution Methods in Low SNR Environments. arXiv preprint arXiv:2406.12150.
Dataset Card Contact
Davidson Lab at UC Davis