
BirdSet

Deep learning (DL) has greatly advanced audio classification, yet the field is limited by the scarcity of large-scale benchmark datasets that have propelled progress in other domains. While AudioSet aims to bridge this gap as a universal-domain dataset, its restricted accessibility and lack of diverse real-world evaluation use cases challenge its role as the only resource. Additionally, to maximize the potential of cost-effective and minimally invasive passive acoustic monitoring (PAM), models must analyze bird vocalizations across a wide range of species and environmental conditions. Therefore, we introduce BirdSet, a large-scale benchmark dataset for audio classification focusing on avian bioacoustics. BirdSet surpasses AudioSet with over 6,800 recording hours (17% increase) from nearly 10,000 classes (18x) for training and more than 400 hours (7x) across eight strongly labeled evaluation datasets. It serves as a versatile resource for use cases such as multi-label classification, covariate shift, or self-supervised learning.

Datasets

Disclaimer on sizes: The current dataset sizes reflect the extracted files, as the builder script automatically extracts these files but retains the original zipped versions. This results in approximately double the disk usage for each dataset. While it is possible to manually delete all files not contained in the extracted folder, we are actively working on updating the builder script to resolve this issue.

| | #train recordings | #test labels | #test_5s segments | size (GB) | #classes |
|---|---:|---:|---:|---:|---|
| PER (Amazon Basin + XCL Subset) | 16,802 | 14,798 | 15,120 | 10.5 | 132 |
| NES (Colombia Costa Rica + XCL Subset) | 16,117 | 6,952 | 24,480 | 14.2 | 89 |
| UHH (Hawaiian Islands + XCL Subset) | 3,626 | 59,583 | 36,637 | 4.92 | 25 tr, 27 te |
| HSN (High Sierras + XCL Subset) | 5,460 | 10,296 | 12,000 | 5.92 | 21 |
| NBP (NIPS4BPlus + XCL Subset) | 24,327 | 5,493 | 563 | 29.9 | 51 |
| POW (Powdermill Nature + XCL Subset) | 14,911 | 16,052 | 4,560 | 15.7 | 48 |
| SSW (Sapsucker Woods + XCL Subset) | 28,403 | 50,760 | 205,200 | 35.2 | 81 |
| SNE (Sierra Nevada + XCL Subset) | 19,390 | 20,147 | 23,756 | 20.8 | 56 |
| XCM (Xeno-Canto Subset M) | 89,798 | x | x | 89.3 | 409 (411) |
| XCL (Xeno-Canto Complete Snapshot) | 528,434 | x | x | 484 | 9,735 |
  • For each test dataset, we assemble a training dataset as a subset of a complete Xeno-Canto (XC) snapshot, extracting all recordings that contain vocalizations of the bird species appearing in that test dataset.
  • The focal training and soundscape test components of each dataset can be accessed individually via the identifiers NAME_xc and NAME_scape, respectively (e.g., HSN_xc for the focal part and HSN_scape for the soundscape).
  • We use the .ogg format and a sampling rate of 32 kHz for every recording.
  • Each sample in the training dataset is a recording that may contain more than one vocalization of the corresponding bird species.
  • Each recording in the training datasets has a unique recordist and the corresponding license from XC. We omit all recordings from XC that are licensed CC-ND.
  • The bird species are translated to ebird_codes.
  • Snapshot date of XC: 03/10/2024
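The focal/soundscape split above can be sketched in code. A minimal, offline sketch: the helper `birdset_config` is ours, purely illustrative, and not part of any library; the actual `load_dataset` calls are shown as comments.

```python
def birdset_config(name: str, part: str) -> str:
    """Build a BirdSet configuration identifier such as 'HSN_xc' or 'HSN_scape'.

    `part` must be 'xc' (focal training data) or 'scape' (soundscape test data).
    This helper is illustrative; the config strings can also be written by hand.
    """
    if part not in {"xc", "scape"}:
        raise ValueError("part must be 'xc' or 'scape'")
    return f"{name}_{part}"


# With the Hugging Face datasets library, the parts would be loaded as:
# from datasets import load_dataset
# focal = load_dataset("DBD-research-group/BirdSet", birdset_config("HSN", "xc"))
# scapes = load_dataset("DBD-research-group/BirdSet", birdset_config("HSN", "scape"))
print(birdset_config("HSN", "xc"))  # HSN_xc
```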

Each dataset (except for XCM and XCL, which only feature Train) comes with a dataset dictionary containing Train, Test_5s, and Test:

Train

  • Exclusively focal audio data as a subset of XCL with quality ratings A, B, and C, excluding all recordings that are licensed CC-ND.
  • Each dataset is tailored for specific target species identified in the corresponding test soundscape files.
  • We transform the scientific names of the birds into the corresponding ebird_code label.
  • We offer detected events and corresponding cluster assignments to identify bird sounds in each recording.
  • We provide the full recordings from XC. These can generate multiple samples from a single instance.
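Since a full recording can yield multiple samples, the detected events and cluster assignments can be used to cut training snippets out of it. A sketch under the assumption (suggested by the Example Metadata Train section) that `detected_events` holds start/end-time pairs in seconds, `event_cluster` holds matching cluster ids, and -1 marks events not assigned to a cluster:

```python
import numpy as np


def crop_events(array: np.ndarray, sr: int, events, clusters, keep_cluster: int = 0):
    """Cut event windows out of a full recording.

    `events` is a list of (start_s, end_s) pairs (the `detected_events` field) and
    `clusters` the matching `event_cluster` ids; events outside `keep_cluster`
    (e.g., the -1 "unassigned" id) are skipped.
    """
    snippets = []
    for (start_s, end_s), cluster in zip(events, clusters):
        if cluster != keep_cluster:
            continue
        start = max(int(start_s * sr), 0)
        end = min(int(end_s * sr), len(array))
        snippets.append(array[start:end])
    return snippets


# toy recording: 3 seconds of silence at 32 kHz
recording = np.zeros(3 * 32_000)
snips = crop_events(recording, 32_000, [(0.5, 1.0), (1.5, 2.5)], [0, -1])
print([len(s) for s in snips])  # [16000]
```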

Test_5s

  • Task: Processed for multilabel classification ("ebird_code_multilabel").
  • Only soundscape data from Zenodo, formatted according to the Kaggle evaluation scheme.
  • Each recording is segmented into 5-second intervals, and every ground-truth bird vocalization is assigned to the intervals it overlaps.
  • This split contains segments without any labels, which results in an all-zero ([0]) label vector.
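For training a multilabel classifier, the `ebird_code_multilabel` list of class indices is typically converted into a multi-hot target vector; a segment without any vocalization then yields an all-zero vector. A minimal sketch (the helper name is ours):

```python
def to_multi_hot(label_ids, num_classes: int):
    """Convert an `ebird_code_multilabel` index list into a multi-hot target vector.

    Segments without any vocalization have no label ids and yield an all-zero vector.
    """
    vec = [0.0] * num_classes
    for idx in label_ids:
        vec[idx] = 1.0
    return vec


print(to_multi_hot([1, 3], 5))  # [0.0, 1.0, 0.0, 1.0, 0.0]
print(to_multi_hot([], 5))      # [0.0, 0.0, 0.0, 0.0, 0.0]
```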

Test

  • Only soundscape data sourced from Zenodo.
  • Each sample points to the complete soundscape file in which the strongly labeled vocalization (with bounding boxes) appears.
  • This split also contains samples whose recordings do not include any bird calls.
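The strong labels with time bounds make it possible to collect, for any query window, all species vocalizing in it; this mirrors how the 5-second segments of Test_5s accumulate their labels. A sketch under assumed inputs: a hypothetical flattening of this split's samples into (start_time, end_time, ebird_code) tuples per soundscape file, with overlap checked on open intervals (the exact BirdSet assignment rule may differ).

```python
def labels_in_window(strong_labels, win_start: float, win_end: float):
    """Collect the ebird codes of all strong labels overlapping a time window.

    `strong_labels` is a list of (start_time, end_time, ebird_code) tuples for
    one soundscape file. A label overlaps the window if the intervals intersect.
    """
    return sorted({code for start, end, code in strong_labels
                   if start < win_end and end > win_start})


# toy annotations; the ebird codes are only example values
annotations = [(1.2, 2.0, "daejun"), (4.8, 6.1, "amerob"), (12.0, 13.0, "daejun")]
print(labels_in_window(annotations, 0.0, 5.0))   # ['amerob', 'daejun']
print(labels_in_window(annotations, 5.0, 10.0))  # ['amerob']
```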

How to

  • We recommend first exploring the README in our repository.
  • Additionally, you can refer to the intro notebook.
  • The BirdSet code package simplifies the data processing steps.
  • For multi-label, segment-based evaluation, use the test_5s split for testing.

We provide a very short example where no additional code is required. We load the first 5 seconds of each recording to quickly create an exemplary training dataset. We recommend starting with HSN: it is a medium-sized dataset with a low number of overlapping vocalizations within a segment.

from datasets import Audio, load_dataset

dataset = load_dataset("DBD-research-group/BirdSet", "HSN")

# slice example
dataset["train"] = dataset["train"].select(range(500))

# the dataset does not ship with automatic audio decoding; enable it via Hugging Face
# each time a sample is accessed, its audio is decoded (which may take a while if done for the complete dataset)
# in BirdSet, this is all done on the fly during training and testing, since the dataset would be too big if mapped and saved once
dataset = dataset.cast_column("audio", Audio(sampling_rate=32_000))

# extract the first five seconds of each sample in training (not utilizing event detection)
# a custom decoding with soundfile, stating start and end would be more efficient (see BirdSet Code)
def map_first_five(sample):
    max_length = 160_000  # 32,000 Hz * 5 s
    sample["audio"]["array"] = sample["audio"]["array"][:max_length]
    return sample

# train now contains arrays that can be transformed into spectrograms, for example
train = dataset["train"].map(map_first_five, num_proc=2)

# the test_5s split is already divided into 5-second chunks, where each sample can have zero, one, or multiple bird vocalizations (ebird_code labels)
test = dataset["test_5s"]
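The decoded arrays can then be turned into spectrogram inputs. A minimal NumPy-only magnitude-spectrogram sketch (the function and its parameters are our illustration, not BirdSet's preprocessing pipeline):

```python
import numpy as np


def magnitude_spectrogram(x: np.ndarray, n_fft: int = 1024, hop: int = 320) -> np.ndarray:
    """Compute a simple magnitude spectrogram of shape (freq_bins, frames)."""
    window = np.hanning(n_fft)
    # slide a Hann window over the signal and take the magnitude of the real FFT
    frames = [x[i:i + n_fft] * window for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T


# a 5-second segment at 32 kHz, as produced by map_first_five above
segment = np.random.randn(160_000)
spec = magnitude_spectrogram(segment)
print(spec.shape)  # (513, 497)
```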

Metadata

| | format | description |
|---|---|---|
| audio | Audio(sampling_rate=32_000, mono=True, decode=False) | audio object from HF |
| filepath | Value("string") | relative path where the recording is stored |
| start_time | Value("float64") | test data only: start time of a vocalization in s |
| end_time | Value("float64") | test data only: end time of a vocalization in s |
| low_freq | Value("int64") | test data only: lower frequency bound of a vocalization in Hz |
| high_freq | Value("int64") | test data only: upper frequency bound of a vocalization in Hz |
| ebird_code | ClassLabel(names=class_list) | assigned species label |
| ebird_code_secondary | Sequence(datasets.Value("string")) | train data only: possible secondary species in a recording |
| ebird_code_multilabel | Sequence(datasets.ClassLabel(names=class_list)) | assigned species label in a multilabel format |
| call_type | Sequence(datasets.Value("string")) | train data only: type of bird vocalization |
| sex | Value("string") | train data only: sex of the bird |
| lat | Value("float64") | latitude of vocalization/recording in WGS84 |
| long | Value("float64") | longitude of vocalization/recording in WGS84 |
| length | Value("int64") | length of the file in s |
| microphone | Value("string") | soundscape or focal recording, with the microphone string |
| license | Value("string") | license of the recording |
| source | Value("string") | source of the recording |
| local_time | Value("string") | local time of the recording |
| detected_events | Sequence(datasets.Sequence(datasets.Value("float64"))) | train data only: audio events detected in a recording with bambird, as start/end-time tuples |
| event_cluster | Sequence(datasets.Value("int64")) | train data only: detected audio events assigned to a cluster with bambird |
| peaks | Sequence(datasets.Value("float64")) | train data only: peak events detected with scipy peak detection |
| quality | Value("string") | train data only: quality rating of the recording (A, B, C) |
| recordist | Value("string") | train data only: recordist of the recording |
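Since training recordings carry a quality rating of A, B, or C, the metadata can be used to subset the training split, e.g. to the better-rated recordings. A sketch with a plain predicate; the `Dataset.filter` call is shown as a comment to keep the example offline, and the cutoff at A/B is our arbitrary choice.

```python
def is_high_quality(sample: dict) -> bool:
    """Keep only recordings with Xeno-Canto quality rating A or B."""
    return sample.get("quality") in {"A", "B"}


# With a loaded split this would be:
# train_hq = dataset["train"].filter(is_high_quality, num_proc=2)

toy = [{"quality": "A"}, {"quality": "C"}, {"quality": "B"}]
print([s["quality"] for s in toy if is_high_quality(s)])  # ['A', 'B']
```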

Example Metadata Train

{'audio': {'path': '.ogg',
  'array': array([ 0.0008485 ,  0.00128899, -0.00317163, ...,  0.00228528,
          0.00270796, -0.00120562]),
  'sampling_rate': 32000},
 'filepath': '.ogg',
 'start_time': None,
 'end_time': None,
 'low_freq': None,
 'high_freq': None,
 'ebird_code': 0,
 'ebird_code_multilabel': [0],
 'ebird_code_secondary': ['plaant1', 'blfnun1', 'butwoo1', 'whtdov', 'undtin1', 'gryhaw3'],
 'call_type': 'song',
 'sex': 'uncertain',
 'lat': -16.0538,
 'long': -49.604,
 'length': 46,
 'microphone': 'focal',
 'license': '//creativecommons.org/licenses/by-nc/4.0/',
 'source': 'xenocanto',
 'local_time': '18:37',
 'detected_events': [[0.736, 1.824],
  [9.936, 10.944],
  [13.872, 15.552],
  [19.552, 20.752],
  [24.816, 25.968],
  [26.528, 32.16],
  [36.112, 37.808],
  [37.792, 38.88],
  [40.048, 40.8],
  [44.432, 45.616]],
 'event_cluster': [0, 0, 0, 0, 0, -1, 0, 0, -1, 0],
 'peaks': [14.76479119037789, 41.16993396760847],
 'quality': 'A',
 'recordist': '...'}

Example Metadata Test_5s

{'audio': {'path': '.ogg',
  'array': array([-0.67190468, -0.9638235 , -0.99569213, ..., -0.01262935,
         -0.01533066, -0.0141047 ]),
  'sampling_rate': 32000},
 'filepath': '.ogg',
 'start_time': 0.0,
 'end_time': 5.0,
 'low_freq': 0,
 'high_freq': 3098,
 'ebird_code': None,
 'ebird_code_multilabel': [1, 10],
 'ebird_code_secondary': None,
 'call_type': None,
 'sex': None,
 'lat': 5.59,
 'long': -75.85,
 'length': None,
 'microphone': 'Soundscape',
 'license': 'Creative Commons Attribution 4.0 International Public License',
 'source': 'https://zenodo.org/record/7525349',
 'local_time': '4:30:29',
 'detected_events': None,
 'event_cluster': None,
 'peaks': None,
 'quality': None,
 'recordist': None}

Citation Information

@misc{rauch2024birdsetlargescaledatasetaudio,
      title={BirdSet: A Large-Scale Dataset for Audio Classification in Avian Bioacoustics}, 
      author={Lukas Rauch and Raphael Schwinger and Moritz Wirth and René Heinrich and Denis Huseljic and Marek Herde and Jonas Lange and Stefan Kahl and Bernhard Sick and Sven Tomforde and Christoph Scholz},
      year={2024},
      eprint={2403.10380},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2403.10380}, 
}

Licensing

  • Researchers shall use this dataset only for non-commercial research and educational purposes.
  • Each train recording in BirdSet taken from Xeno-Canto has its own CC license. Please refer to the metadata file to view the license for each recording.
  • We exclude all recordings with an SA license; every remaining recording carries an NC license.
  • Each test dataset is licensed under CC BY 4.0.
  • POW, as the validation dataset, is licensed under CC0 1.0.
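To review which licenses apply to a set of training recordings, the per-recording `license` metadata field can simply be tallied. A minimal sketch over an iterable of samples (the toy license strings are example values in the format shown in the train metadata above):

```python
from collections import Counter


def license_counts(samples) -> Counter:
    """Tally the `license` metadata field over an iterable of samples."""
    return Counter(s["license"] for s in samples)


toy = [
    {"license": "//creativecommons.org/licenses/by-nc/4.0/"},
    {"license": "//creativecommons.org/licenses/by-nc/4.0/"},
    {"license": "//creativecommons.org/licenses/by-nc/3.0/"},
]
print(license_counts(toy).most_common(1))
# [('//creativecommons.org/licenses/by-nc/4.0/', 2)]
```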

We have diligently selected and composed the contents of this dataset. Despite our careful review, if you believe that any content violates licensing agreements or infringes on intellectual property rights, please contact us immediately. Upon notification, we will promptly investigate the issue and remove the implicated data from our dataset if necessary. Users are responsible for ensuring that their use of the dataset complies with all licenses, applicable laws, regulations, and ethical guidelines. We make no representations or warranties of any kind and accept no responsibility in the case of violations.
