---
license: cc-by-4.0
task_categories:
- audio-classification
tags:
- audio
dataset_info:
  features:
  - name: video_id
    dtype: string
  - name: audio
    dtype: audio
  - name: labels
    sequence: string
  - name: human_labels
    sequence: string
  splits:
  - name: train
    num_bytes: 26016210987
    num_examples: 18685
  - name: test
    num_bytes: 23763682278
    num_examples: 17142
  download_size: 49805654900
  dataset_size: 49779893265
---
# AudioSet data

This repository contains the balanced training set and evaluation set of
the [AudioSet data](https://research.google.com/audioset/dataset/index.html).
The YouTube videos were downloaded in March 2023, so not all of the
original audio clips are available.
The distribution of audio clips is as follows; in parentheses is the
dict key used with Hugging Face `datasets`:

- `bal_train` (`train`): 18685 audio clips out of 22160 originally.
- `eval` (`test`): 17142 audio clips out of 20371 originally.
You can use the `datasets` library to load this dataset, in which case
the raw audio is returned along with a sequence of one or more labels.
Note that the raw audio is returned without further processing, so you
will need to decode it and possibly downsample it for model training.
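A minimal loading sketch is below; the repository ID is a placeholder,
so substitute this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with this dataset's actual Hub path.
ds = load_dataset("user/audioset-balanced", split="train")

sample = ds[0]  # accessing an example decodes the FLAC audio
print(sample["video_id"], sample["human_labels"])
print(sample["audio"]["array"].shape, sample["audio"]["sampling_rate"])
```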
Example instance from the `train` subset:

```python
{
    'video_id': '--PJHxphWEs',
    'audio': {
        'path': 'audio/bal_train/--PJHxphWEs.flac',
        'array': array([-0.04364824, -0.05268681, -0.0568949 , ..., 0.11446512,
                         0.14912748,  0.13409865]),
        'sampling_rate': 48000
    },
    'labels': ['/m/09x0r', '/t/dd00088'],
    'human_labels': ['Speech', 'Gush']
}
```
Most audio is sampled at 48 kHz 24 bit, but about 10% is sampled at
44.1 kHz 24 bit. Audio files are stored in the FLAC format.
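Many audio models expect a lower, uniform sampling rate, so one option
(a sketch using the standard `datasets` API, continuing from the loading
example above; the 16 kHz target is an assumption, not a requirement of
this dataset) is to let the library resample on the fly by casting the
`audio` column:

```python
from datasets import Audio

# Resample to 16 kHz on access; pick whatever rate your model expects.
ds = ds.cast_column("audio", Audio(sampling_rate=16000))

sample = ds[0]
assert sample["audio"]["sampling_rate"] == 16000
```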
## Citation

```bibtex
@inproceedings{jort_audioset_2017,
  title     = {Audio Set: An ontology and human-labeled dataset for audio events},
  author    = {Jort F. Gemmeke and Daniel P. W. Ellis and Dylan Freedman and Aren Jansen and Wade Lawrence and R. Channing Moore and Manoj Plakal and Marvin Ritter},
  year      = {2017},
  booktitle = {Proc. IEEE ICASSP 2017},
  address   = {New Orleans, LA}
}
```