---
task_categories:
- audio-classification
license: cc
tags:
- bird classification
- passive acoustic monitoring
---
## Dataset Description
- **Repository:** [https://github.com/DBD-research-group/BirdSet](https://github.com/DBD-research-group/BirdSet)
- **Paper:** [BirdSet](https://arxiv.org/abs/2403.10380)
- **Point of Contact:** [Lukas Rauch](mailto:lukas.rauch@uni-kassel.de)
### Datasets
We present the BirdSet benchmark that covers a comprehensive range of (multi-label and multi-class) classification datasets in avian bioacoustics.
We offer a static set of evaluation datasets and a varied collection of training datasets, enabling the application of diverse methodologies.
We provide a complementary code base (https://github.com/DBD-research-group/BirdSet)
and an accompanying paper (work in progress): https://arxiv.org/abs/2403.10380
| | train | test | test_5s | size (GB) | #classes |
|--------------------------------|--------:|-----------:|--------:|-----------:|-------------:|
| [PER][1] (Amazon Basin) | 16,802 | 14,798 | 15,120 | 10.5 | 132 |
| [NES][2] (Colombia Costa Rica) | 16,117 | 6,952 | 24,480 | 14.2 | 89 |
| [UHH][3] (Hawaiian Islands) | 3,626 | 59,583 | 36,637 | 4.92 | 25 tr, 27 te |
| [HSN][4] (High Sierras)        | 5,460   | 10,296     | 12,000  | 5.92       | 21           |
| [NBP][5] (NIPS4BPlus) | 24,327 | 5,493 | 563 | 29.9 | 51 |
| [POW][6] (Powdermill Nature) | 14,911 | 16,052 | 4,560 | 15.7 | 48 |
| [SSW][7] (Sapsucker Woods) | 28,403 | 50,760 | 205,200| 35.2 | 81 |
| [SNE][8] (Sierra Nevada) | 19,390 | 20,147 | 23,756 | 20.8 | 56 |
| [XCM][9] (Xeno-Canto Subset M) | 89,798  | x          | x       | 89.3       | 409 (411)    |
| [XCL][10] (Xeno-Canto Complete)| 528,434 | x          | x       | 484        | 9,735        |
[1]: https://zenodo.org/records/7079124
[2]: https://zenodo.org/records/7525349
[3]: https://zenodo.org/records/7078499
[4]: https://zenodo.org/records/7525805
[5]: https://github.com/fbravosanchez/NIPS4Bplus
[6]: https://zenodo.org/records/4656848
[7]: https://zenodo.org/records/7018484
[8]: https://zenodo.org/records/7050014
[9]: https://xeno-canto.org/
[10]: https://xeno-canto.org
- For each test dataset, we assemble a training dataset that is a subset of a complete Xeno-Canto (XC) snapshot: we extract all recordings containing vocalizations of the bird species that appear in the test dataset.
- The focal training and soundscape test components of each dataset can be accessed individually via the identifiers **NAME_xc** and **NAME_scape**, respectively (e.g., **HSN_xc** for the focal part and **HSN_scape** for the soundscape); see the loading sketch after this list.
- We use the .ogg format for every recording and a sampling rate of 32 kHz.
- Each sample in the training dataset is a recording that may contain more than one vocalization of the corresponding bird species.
- Each recording in the training datasets carries its recordist and the corresponding license from XC. We omit all XC recordings that are licensed as CC-ND.
- Bird species names are translated to their corresponding eBird codes (`ebird_code`).
- Snapshot date of XC: 03/10/2024
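A minimal loading sketch, assuming the dataset is hosted under the repository id `DBD-research-group/BirdSet` and that the configuration names follow the scheme described above:
```python
from datasets import load_dataset

# Minimal loading sketch; the repository id is an assumption based on this card,
# and the configuration names follow the NAME / NAME_xc / NAME_scape scheme above.
hsn = load_dataset("DBD-research-group/BirdSet", "HSN")              # focal train + soundscape test
hsn_xc = load_dataset("DBD-research-group/BirdSet", "HSN_xc")        # focal (Xeno-Canto) part only
hsn_scape = load_dataset("DBD-research-group/BirdSet", "HSN_scape")  # soundscape part only

print(hsn)  # lists the available splits, e.g. train, test, test_5s
```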
**Train**
- We exclusively use focal audio data from XC with quality ratings A, B, or C, excluding all recordings that are CC-ND.
- Each dataset is tailored for specific target species identified in the corresponding test soundscape files.
- We transform the scientific names of the birds into the corresponding ebird_code label.
- We offer detected events and corresponding cluster assignments to identify bird sounds in each recording.
- We provide the full recordings from XC; a single recording can therefore yield multiple training samples (see the sketch below).
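A sketch of how the `detected_events` field could be used to cut several training samples out of one full recording. The slicing below is illustrative and not part of the dataset loader; it assumes `filepath` points to a locally available .ogg file:
```python
import soundfile as sf

def samples_from_events(example, max_events=5):
    """Cut one full Xeno-Canto recording into one clip per detected event.

    `detected_events` holds [start, end] pairs in seconds (see the metadata
    table below); how many events to keep is left to the user.
    """
    audio, sr = sf.read(example["filepath"])  # recordings are 32 kHz .ogg files
    clips = []
    for start, end in example["detected_events"][:max_events]:
        clips.append(audio[int(start * sr):int(end * sr)])
    return clips
```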
**Test_5s**
- Task: Multilabel ("ebird_code_multilabel")
- Only soundscape data from Zenodo, formatted according to the Kaggle evaluation scheme.
- Each recording is segmented into 5-second intervals, and every ground-truth bird vocalization is assigned to the interval(s) in which it occurs.
- The split also contains segments without any labels, which result in a [0] vector (see the multi-hot sketch below).
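For segment-based multilabel evaluation, the index list in `ebird_code_multilabel` can be turned into a multi-hot target vector. A small sketch; the number of classes is dataset-specific and the HSN value is used here as an assumption:
```python
import numpy as np

NUM_CLASSES = 21  # e.g. HSN; in practice, read this from the dataset's ClassLabel feature

def to_multi_hot(ebird_code_multilabel):
    """Turn the label index list of a 5-second segment into a multi-hot vector."""
    target = np.zeros(NUM_CLASSES, dtype=np.float32)
    target[ebird_code_multilabel] = 1.0
    return target

to_multi_hot([1, 10])  # marks classes 1 and 10 as present
```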
**Test**
- Task: Multiclass ("ebird_code")
- Only soundscape data sourced from Zenodo.
- We provide the full recordings with the complete label set and the specified bounding boxes (see the sketch after this list).
- This dataset excludes recordings that do not contain bird calls ("no_call").
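A sketch of iterating over the multiclass test split to recover the eBird code and the annotated time window of each vocalization; the repository id and configuration name are assumptions based on this card:
```python
from datasets import load_dataset

# Sketch: repository id and configuration name are assumed, not confirmed here.
test = load_dataset("DBD-research-group/BirdSet", "HSN", split="test")
label_feature = test.features["ebird_code"]  # ClassLabel with the dataset's species list

for example in test.select(range(3)):
    code = label_feature.int2str(example["ebird_code"])
    window = (example["start_time"], example["end_time"])  # seconds within the full recording
    print(code, window, example["filepath"])
```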
### Quick Use
- For multi-label, segment-based evaluation, use the **test_5s** split for testing.
- To quickly create a training dataset, you can load only the first 5 seconds or a given detected event per recording, as shown in the sketch below.
- We recommend starting with HSN. It is a medium-sized dataset with a low number of overlapping vocalizations within a segment.
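A quick-start sketch along these lines, keeping only the first 5 seconds of every focal training recording; the repository id is assumed as above, and decoding plus truncation are done on the fly with a `map`:
```python
from datasets import Audio, load_dataset

# Quick-start sketch (repository id assumed): decode the audio and keep only
# the first 5 seconds of every focal training recording.
ds = load_dataset("DBD-research-group/BirdSet", "HSN", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=32_000, mono=True, decode=True))

def first_five_seconds(example):
    sr = example["audio"]["sampling_rate"]
    return {"clip": example["audio"]["array"][: 5 * sr]}  # raw waveform of the first 5 s

train_5s = ds.map(first_five_seconds, remove_columns=["audio"])
```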
### Metadata
| | format | description |
|------------------------|-------------------------------------------------------:|-------------------------:|
| audio | Audio(sampling_rate=32_000, mono=True, decode=False) | audio object from hf |
| filepath | Value("string") | relative path where the recording is stored |
| start_time | Value("float64") | only testdata: start time of a vocalization in s |
| end_time                | Value("float64")                                        | only testdata: end time of a vocalization in s |
| low_freq                | Value("int64")                                          | only testdata: low frequency bound for a vocalization in Hz |
| high_freq               | Value("int64")                                          | only testdata: high frequency bound for a vocalization in Hz |
| ebird_code | ClassLabel(names=class_list) | assigned species label |
| ebird_code_secondary | Sequence(datasets.Value("string")) | only traindata: possible secondary species in a recording |
| ebird_code_multilabel | Sequence(datasets.ClassLabel(names=class_list)) | assigned species label in a multilabel format |
| call_type | Sequence(datasets.Value("string")) | only traindata: type of bird vocalization |
| sex | Value("string") | only traindata: sex of bird species |
| lat | Value("float64") | latitude of vocalization/recording in WGS84 |
| long                    | Value("float64")                                        | longitude of vocalization/recording in WGS84 |
| length | Value("int64") | length of the file in s |
| microphone | Value("string") | soundscape or focal recording with the microphone string |
| license | Value("string") | license of the recording |
| source | Value("string") | source of the recording |
| local_time | Value("string") | local time of the recording |
| detected_events | Sequence(datasets.Sequence(datasets.Value("float64")))| only traindata: detected audio events in a recording with bambird, tuples of start/end time |
| event_cluster | Sequence(datasets.Value("int64")) | only traindata: detected audio events assigned to a cluster with bambird |
| peaks | Sequence(datasets.Value("float64")) | only traindata: peak event detected with scipy peak detection |
| quality | Value("string") | only traindata: recording quality of the recording (A,B,C) |
| recordist | Value("string") | only traindata: recordist of the recording |
#### Example Metadata Train
```python
{'audio': {'path': '.ogg',
'array': array([ 0.0008485 , 0.00128899, -0.00317163, ..., 0.00228528,
0.00270796, -0.00120562]),
'sampling_rate': 32000},
'filepath': '.ogg',
'start_time': None,
'end_time': None,
'low_freq': None,
'high_freq': None,
'ebird_code': 0,
'ebird_code_multilabel': [0],
'ebird_code_secondary': ['plaant1', 'blfnun1', 'butwoo1', 'whtdov', 'undtin1', 'gryhaw3'],
'call_type': 'song',
'sex': 'uncertain',
'lat': -16.0538,
'long': -49.604,
'length': 46,
'microphone': 'focal',
'license': '//creativecommons.org/licenses/by-nc-sa/4.0/',
'source': 'xenocanto',
'local_time': '18:37',
'detected_events': [[0.736, 1.824],
[9.936, 10.944],
[13.872, 15.552],
[19.552, 20.752],
[24.816, 25.968],
[26.528, 32.16],
[36.112, 37.808],
[37.792, 38.88],
[40.048, 40.8],
[44.432, 45.616]],
'event_cluster': [0, 0, 0, 0, 0, -1, 0, 0, -1, 0],
'peaks': [14.76479119037789, 41.16993396760847],
'quality': 'A',
'recordist': '...'}
```
#### Example Metadata Test5s
```python
{'audio': {'path': '.ogg',
'array': array([-0.67190468, -0.9638235 , -0.99569213, ..., -0.01262935,
-0.01533066, -0.0141047 ]),
'sampling_rate': 32000},
'filepath': '.ogg',
'start_time': 0.0,
'end_time': 5.0,
'low_freq': 0,
'high_freq': 3098,
'ebird_code': None,
'ebird_code_multilabel': [1, 10],
'ebird_code_secondary': None,
'call_type': None,
'sex': None,
'lat': 5.59,
'long': -75.85,
'length': None,
'microphone': 'Soundscape',
'license': 'Creative Commons Attribution 4.0 International Public License',
'source': 'https://zenodo.org/record/7525349',
'local_time': '4:30:29',
'detected_events': None,
'event_cluster': None,
'peaks': None,
'quality': None,
'recordist': None}
```
### Citation Information
```
@misc{birdset,
title={BirdSet: A Multi-Task Benchmark for Classification in Avian Bioacoustics},
author={Lukas Rauch and Raphael Schwinger and Moritz Wirth and René Heinrich and Jonas Lange and Stefan Kahl and Bernhard Sick and Sven Tomforde and Christoph Scholz},
year={2024},
eprint={2403.10380},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
Note that each test subset in BirdSet has its own citation; please refer to the linked sources for the correct citation of each contained dataset. Each file in the training datasets also notes its recordist, and the licenses can be found in the metadata.