---
task_categories:
- audio-classification
license: cc-by-nc-4.0
tags:
- bird classification
- passive acoustic monitoring
---

# Dataset Description

- **Repository:** [https://github.com/DBD-research-group/BirdSet](https://github.com/DBD-research-group/BirdSet)
- **Paper:** [BirdSet](https://arxiv.org/abs/2403.10380)
- **Point of Contact:** [Lukas Rauch](mailto:lukas.rauch@uni-kassel.de)

## BirdSet

Deep learning models have emerged as a powerful tool in avian bioacoustics to assess environmental health. To maximize the potential of cost-effective and minimally invasive passive acoustic monitoring (PAM), models must analyze bird vocalizations across a wide range of species and environmental conditions. However, data fragmentation challenges a comprehensive evaluation of generalization performance. Therefore, we introduce the BirdSet dataset, comprising approximately 520,000 global bird recordings for training and over 400 hours of PAM recordings for testing in multi-label classification.

- **Complementary Code**: [https://github.com/DBD-research-group/BirdSet](https://github.com/DBD-research-group/BirdSet)
- **Complementary Paper**: [https://arxiv.org/abs/2403.10380](https://arxiv.org/abs/2403.10380)

## Datasets

| | #train recordings | #test labels | #test_5s segments | size (GB) | #classes |
|--------------------------------------------|--------:|-------:|--------:|------:|-------------:|
| [PER][1] (Amazon Basin + XCL Subset) | 16,802 | 14,798 | 15,120 | 10.5 | 132 |
| [NES][2] (Colombia Costa Rica + XCL Subset) | 16,117 | 6,952 | 24,480 | 14.2 | 89 |
| [UHH][3] (Hawaiian Islands + XCL Subset) | 3,626 | 59,583 | 36,637 | 4.92 | 25 tr, 27 te |
| [HSN][4] (High Sierras + XCL Subset) | 5,460 | 10,296 | 12,000 | 5.92 | 21 |
| [NBP][5] (NIPS4BPlus + XCL Subset) | 24,327 | 5,493 | 563 | 29.9 | 51 |
| [POW][6] (Powdermill Nature + XCL Subset) | 14,911 | 16,052 | 4,560 | 15.7 | 48 |
| [SSW][7] (Sapsucker Woods + XCL Subset) | 28,403 | 50,760 | 205,200 | 35.2 | 81 |
| [SNE][8] (Sierra Nevada + XCL Subset) | 19,390 | 20,147 | 23,756 | 20.8 | 56 |
| [XCM][9] (Xeno-Canto Subset M) | 89,798 | x | x | 89.3 | 409 (411) |
| [XCL][10] (Xeno-Canto Complete Snapshot) | 528,434 | x | x | 484 | 9,735 |

[1]: https://zenodo.org/records/7079124
[2]: https://zenodo.org/records/7525349
[3]: https://zenodo.org/records/7078499
[4]: https://zenodo.org/records/7525805
[5]: https://github.com/fbravosanchez/NIPS4Bplus
[6]: https://zenodo.org/records/4656848
[7]: https://zenodo.org/records/7018484
[8]: https://zenodo.org/records/7050014
[9]: https://xeno-canto.org/
[10]: https://xeno-canto.org

- We assemble a training dataset for each test dataset as a **subset of a complete Xeno-Canto (XC) snapshot**. We extract all recordings that contain vocalizations of the bird species appearing in the corresponding test dataset.
- The focal training and soundscape test components of each dataset can be accessed individually via the identifiers **NAME_xc** and **NAME_scape**, respectively (e.g., **HSN_xc** for the focal part and **HSN_scape** for the soundscape); see the sketch after this list.
- We use the .ogg format and a sampling rate of 32 kHz for every recording.
- Each sample in a training dataset is a recording that may contain more than one vocalization of the corresponding bird species.
- Each recording in the training datasets has a unique recordist and carries its corresponding license from XC. We omit all recordings from XC that are licensed CC-ND.
- The bird species are translated to ebird_codes.
- Snapshot date of XC: 03/10/2024
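
A minimal sketch of loading these components on their own, assuming **HSN_xc** and **HSN_scape** are exposed as `load_dataset` configuration names (check the code repository if the names differ):

```python
from datasets import load_dataset

# focal (Xeno-Canto) training recordings of High Sierras only
hsn_focal = load_dataset("DBD-research-group/BirdSet", "HSN_xc")

# soundscape test recordings of High Sierras only
hsn_scape = load_dataset("DBD-research-group/BirdSet", "HSN_scape")
```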

Each dataset (except XCM and XCL, which only feature **Train**) comes with a dataset dictionary featuring **Train**, **Test_5s**, and **Test**:

**Train**
- Exclusively uses _focal audio data as a subset of XCL_ with quality ratings A, B, and C, excluding all recordings licensed CC-ND.
- Each dataset is tailored to the specific target species identified in the corresponding test soundscape files.
- We transform the scientific names of the birds into the corresponding ebird_code labels.
- We offer detected events and corresponding cluster assignments to identify bird sounds in each recording.
- We provide the full recordings from XC; a single recording can thus generate multiple samples.

**Test_5s**
- Task: processed for multi-label classification (`ebird_code_multilabel`).
- Only soundscape data from Zenodo, formatted according to the Kaggle evaluation scheme.
- Each recording is segmented into 5-second intervals, and each ground-truth bird vocalization is assigned to the segments it falls into.
- Segments without any labels receive a [0] vector.

**Test**
- Only soundscape data sourced from Zenodo.
- Each sample points to the complete soundscape file in which the strong labels (with bounding boxes) appear.
- This split automatically includes samples whose recordings contain no bird calls.

# How to

- We recommend using the [intro notebook](https://github.com/DBD-research-group/BirdSet/blob/main/notebooks/tutorials/birdset-pipeline_tutorial.ipynb) in our code repository.
- The BirdSet code package simplifies the data processing steps.
- For segment-based multi-label evaluation, use the test_5s split for testing.

Below is a very short example that requires no additional code. We load only the first 5 seconds of each recording to quickly create an exemplary training dataset. We recommend starting with HSN, a medium-sized dataset with few overlapping vocalizations per segment.

```python
from datasets import load_dataset, Audio

dataset = load_dataset("DBD-research-group/BirdSet", "HSN")

# slice example
dataset["train"] = dataset["train"].select(range(500))

# the dataset comes without automatic audio decoding; it has to be enabled via Hugging Face.
# this means that each time a sample is accessed, it is decoded (which may take a while for the complete dataset).
# in BirdSet, this is all done on the fly during training and testing
# (the dataset would be too big if it were mapped and saved once).
dataset = dataset.cast_column("audio", Audio(sampling_rate=32_000))

# extract the first five seconds of each sample in training (not utilizing event detection).
# this is not very efficient, since each complete audio file must be decoded.
# a custom decoding with soundfile, stating start and end, would be more efficient (see BirdSet code).
def map_first_five(sample):
    max_length = 160_000  # 32,000 Hz * 5 s
    sample["audio"]["array"] = sample["audio"]["array"][:max_length]
    return sample

# train is now available as an array that can be transformed, e.g., into a spectrogram
train = dataset["train"].map(map_first_five, num_proc=2)

# the test_5s split is already divided into 5-second chunks where each sample
# can have zero, one, or multiple bird vocalizations (ebird_code labels)
test = dataset["test_5s"]
```
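
As the comment in the example notes, decoding whole files just to crop them is wasteful. A minimal sketch of the more efficient route, assuming the `soundfile` package is installed, the `audio` column is left uncast (`decode=False`), and `sample["audio"]["path"]` points to a local .ogg file; `load_event` is a hypothetical helper, not part of the BirdSet package:

```python
import soundfile as sf

def load_event(sample, event_idx=0):
    """Decode only one detected event window instead of the full recording."""
    # detected_events holds [start, end] pairs in seconds (train data only)
    start, end = sample["detected_events"][event_idx]
    with sf.SoundFile(sample["audio"]["path"]) as f:
        sr = f.samplerate
        f.seek(int(start * sr))          # jump to the event start
        array = f.read(int((end - start) * sr))  # read only the event window
    return array, sr
```

See the BirdSet code repository for the full on-the-fly variant used during training and testing.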
## Metadata

| | format | description |
|------------------------|------------------------------------------------------|----------------------------|
| audio | Audio(sampling_rate=32_000, mono=True, decode=False) | audio object from Hugging Face |
| filepath | Value("string") | relative path where the recording is stored |
| start_time | Value("float64") | test data only: start time of a vocalization in s |
| end_time | Value("float64") | test data only: end time of a vocalization in s |
| low_freq | Value("int64") | test data only: low frequency bound of a vocalization in Hz |
| high_freq | Value("int64") | test data only: high frequency bound of a vocalization in Hz |
| ebird_code | ClassLabel(names=class_list) | assigned species label |
| ebird_code_secondary | Sequence(datasets.Value("string")) | train data only: possible secondary species in a recording |
| ebird_code_multilabel | Sequence(datasets.ClassLabel(names=class_list)) | assigned species labels in multi-label format |
| call_type | Sequence(datasets.Value("string")) | train data only: type of bird vocalization |
| sex | Value("string") | train data only: sex of the bird |
| lat | Value("float64") | latitude of the vocalization/recording in WGS84 |
| long | Value("float64") | longitude of the vocalization/recording in WGS84 |
| length | Value("int64") | length of the file in s |
| microphone | Value("string") | soundscape or focal recording, including the microphone string |
| license | Value("string") | license of the recording |
| source | Value("string") | source of the recording |
| local_time | Value("string") | local time of the recording |
| detected_events | Sequence(datasets.Sequence(datasets.Value("float64"))) | train data only: audio events detected with bambird, as start/end-time tuples |
| event_cluster | Sequence(datasets.Value("int64")) | train data only: cluster assignment (via bambird) for each detected event |
| peaks | Sequence(datasets.Value("float64")) | train data only: event peaks detected with scipy peak detection |
| quality | Value("string") | train data only: recording quality (A, B, C) |
| recordist | Value("string") | train data only: recordist of the recording |
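
Since `ebird_code` and `ebird_code_multilabel` are stored as `ClassLabel` integers, the string codes can be recovered via `int2str`. A small sketch, assuming the `dataset` dictionary loaded in the example above:

```python
# decode the ClassLabel integer back into an ebird code string
train_labels = dataset["train"].features["ebird_code"]
print(train_labels.int2str(dataset["train"][0]["ebird_code"]))

# the multilabel annotations are a Sequence of ClassLabels; .feature yields the inner ClassLabel
multilabel = dataset["test_5s"].features["ebird_code_multilabel"].feature
print([multilabel.int2str(i) for i in dataset["test_5s"][0]["ebird_code_multilabel"]])
```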
#### Example Metadata Train

```python
{'audio': {'path': '.ogg',
  'array': array([ 0.0008485 ,  0.00128899, -0.00317163, ...,  0.00228528,
          0.00270796, -0.00120562]),
  'sampling_rate': 32000},
 'filepath': '.ogg',
 'start_time': None,
 'end_time': None,
 'low_freq': None,
 'high_freq': None,
 'ebird_code': 0,
 'ebird_code_multilabel': [0],
 'ebird_code_secondary': ['plaant1', 'blfnun1', 'butwoo1', 'whtdov', 'undtin1', 'gryhaw3'],
 'call_type': 'song',
 'sex': 'uncertain',
 'lat': -16.0538,
 'long': -49.604,
 'length': 46,
 'microphone': 'focal',
 'license': '//creativecommons.org/licenses/by-nc/4.0/',
 'source': 'xenocanto',
 'local_time': '18:37',
 'detected_events': [[0.736, 1.824], [9.936, 10.944], [13.872, 15.552],
                     [19.552, 20.752], [24.816, 25.968], [26.528, 32.16],
                     [36.112, 37.808], [37.792, 38.88], [40.048, 40.8],
                     [44.432, 45.616]],
 'event_cluster': [0, 0, 0, 0, 0, -1, 0, 0, -1, 0],
 'peaks': [14.76479119037789, 41.16993396760847],
 'quality': 'A',
 'recordist': '...'}
```

#### Example Metadata Test_5s

```python
{'audio': {'path': '.ogg',
  'array': array([-0.67190468, -0.9638235 , -0.99569213, ..., -0.01262935,
         -0.01533066, -0.0141047 ]),
  'sampling_rate': 32000},
 'filepath': '.ogg',
 'start_time': 0.0,
 'end_time': 5.0,
 'low_freq': 0,
 'high_freq': 3098,
 'ebird_code': None,
 'ebird_code_multilabel': [1, 10],
 'ebird_code_secondary': None,
 'call_type': None,
 'sex': None,
 'lat': 5.59,
 'long': -75.85,
 'length': None,
 'microphone': 'Soundscape',
 'license': 'Creative Commons Attribution 4.0 International Public License',
 'source': 'https://zenodo.org/record/7525349',
 'local_time': '4:30:29',
 'detected_events': None,
 'event_cluster': None,
 'peaks': None,
 'quality': None,
 'recordist': None}
```

### Citation Information

```
@misc{rauch2024birdsetdatasetbenchmarkclassification,
      title={BirdSet: A Dataset and Benchmark for Classification in Avian Bioacoustics},
      author={Lukas Rauch and Raphael Schwinger and Moritz Wirth and René Heinrich and Denis Huseljic and Jonas Lange and Stefan Kahl and Bernhard Sick and Sven Tomforde and Christoph Scholz},
      year={2024},
      eprint={2403.10380},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2403.10380},
}
```

### Licensing

- Researchers shall use this dataset only for non-commercial research and educational purposes.
- Each train recording in BirdSet taken from Xeno-Canto has its own CC license. Please refer to the metadata to view the license of each recording.
- We exclude all recordings with an SA license. Every recording is NC.
- Each test dataset is licensed under CC BY 4.0.
- POW, which serves as the validation dataset, is licensed under CC0 1.0.

We have diligently selected and composed the contents of this dataset. Despite our careful review, if you believe that any content violates licensing agreements or infringes on intellectual property rights, please contact us immediately. Upon notification, we will promptly investigate the issue and, if necessary, remove the implicated data from our dataset. Users are responsible for ensuring that their use of the dataset complies with all licenses, applicable laws, regulations, and ethical guidelines. We make no representations or warranties of any kind and accept no responsibility in the case of violations.