
Datasets

We present the BirdSet benchmark, which covers a comprehensive range of multi-label and multi-class classification datasets in avian bioacoustics. We offer a static set of evaluation datasets and a varied collection of training datasets, enabling the application of diverse methodologies.

We provide a complementary code base at https://github.com/DBD-research-group/BirdSet and a complementary paper (work in progress) at https://arxiv.org/abs/2403.10380.

| Dataset | train | test | test_5s | size (GB) | #classes |
|---|---|---|---|---|---|
| PER (Amazon Basin) | 16,802 | 14,798 | 15,120 | 10.5 | 132 |
| NES (Colombia Costa Rica) | 16,117 | 6,952 | 24,480 | 14.2 | 89 |
| UHH (Hawaiian Islands) | 3,626 | 59,583 | 36,637 | 4.92 | 25 tr, 27 te |
| HSN (High Sierras) | 5,460 | 10,296 | 12,000 | 5.92 | 21 |
| NBP (NIPS4BPlus) | 24,327 | 5,493 | 563 | 29.9 | 51 |
| POW (Powdermill Nature) | 14,911 | 16,052 | 4,560 | 15.7 | 48 |
| SSW (Sapsucker Woods) | 28,403 | 50,760 | 205,200 | 35.2 | 81 |
| SNE (Sierra Nevada) | 19,390 | 20,147 | 23,756 | 20.8 | 56 |
| XCM (Xenocanto Subset M) | 89,798 | x | x | 89.3 | 409 (411) |
| XCL (Xenocanto Complete) | 528,434 | x | x | 484 | 9,735 |
  • For each test dataset, we assemble a training dataset as a subset of a complete Xeno-Canto (XC) snapshot: we extract all recordings that contain vocalizations of the bird species appearing in the test dataset.
  • The focal training and soundscape test components of each dataset can be accessed individually via the identifiers NAME_xc and NAME_scape, respectively (e.g., HSN_xc for the focal part and HSN_scape for the soundscape part); see the loading sketch after this list.
  • We use the .ogg format for every recording and a sampling rate of 32 kHz.
  • Each sample in the training dataset is a recording that may contain more than one vocalization of the corresponding bird species.
  • Each recording in the training datasets lists its recordist and the corresponding license from XC. We omit all XC recordings that are licensed CC-ND.
  • The bird species are translated to eBird codes (ebird_code).
  • Snapshot date of XC: 03/10/2024
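
Since the dataset ships with a loading script, a minimal loading sketch could look like the following. The repository id DBD-research-group/BirdSet is assumed from the complementary code base, and trust_remote_code is required by recent versions of the datasets library for script-based datasets.

```python
from datasets import load_dataset

# Assumed repository id on the Hugging Face Hub.
REPO_ID = "DBD-research-group/BirdSet"

# Full HSN configuration: focal training data plus soundscape test data.
hsn = load_dataset(REPO_ID, "HSN", trust_remote_code=True)

# The focal and soundscape components can also be loaded on their own.
hsn_focal = load_dataset(REPO_ID, "HSN_xc", trust_remote_code=True)
hsn_scape = load_dataset(REPO_ID, "HSN_scape", trust_remote_code=True)

print(hsn)  # expected: a DatasetDict with train, test and test_5s splits
```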

Train

  • Exclusively focal audio data from XC with quality ratings A, B, or C; all CC-ND recordings are excluded.
  • Each dataset is tailored for specific target species identified in the corresponding test soundscape files.
  • We transform the scientific names of the birds into the corresponding ebird_code label.
  • We offer detected events and corresponding cluster assignments to identify bird sounds in each recording.
  • We provide the full recordings from XC, so multiple training samples can be generated from a single recording; see the event-slicing sketch after this list.
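
As a sketch of how the full recordings can be turned into per-event training samples, the snippet below slices one focal recording along its detected_events. Reading the undecoded .ogg bytes with soundfile and skipping events in cluster -1 are assumptions, not part of an official pipeline.

```python
import io

import soundfile as sf


def event_clips(example):
    """Cut one clip per detected event from a full focal training recording."""
    # With decode=False, the audio column exposes the raw .ogg bytes (and path).
    data, sr = sf.read(io.BytesIO(example["audio"]["bytes"]))
    clips = []
    for (start, end), cluster in zip(example["detected_events"], example["event_cluster"]):
        if cluster == -1:
            continue  # assumption: -1 marks events not assigned to a bird-sound cluster
        clips.append(data[int(start * sr):int(end * sr)])
    return clips
```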

Test_5s

  • Task: Multilabel ("ebird_code_multilabel")
  • Only soundscape data from Zenodo, formatted according to the Kaggle evaluation scheme.
  • Each recording is segmented into 5-second intervals, and every ground-truth bird vocalization is assigned to the intervals it overlaps.
  • Segments without any vocalization remain in the dataset and carry no label, which results in an all-zero target vector; the sketch after this list shows how to build such multi-hot targets.
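
The sketch below turns the label lists of a test_5s segment into multi-hot target vectors. It assumes that segments without a vocalization carry an empty label list, which yields the all-zero vector described above.

```python
import numpy as np


def to_multi_hot(example, num_classes):
    # Listed class indices become ones; segments without labels stay all-zero.
    target = np.zeros(num_classes, dtype=np.float32)
    target[example["ebird_code_multilabel"]] = 1.0
    return {"target": target}


# num_classes can be read from the ClassLabel feature of the split, e.g.:
# num_classes = hsn["test_5s"].features["ebird_code_multilabel"].feature.num_classes
# test_5s = hsn["test_5s"].map(lambda ex: to_multi_hot(ex, num_classes))
```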

Test

  • Task: Multiclass ("ebird_code")
  • Only soundscape data sourced from Zenodo.
  • We provide the full recordings with the complete label set and the specified bounding boxes; see the grouping sketch after this list.
  • This dataset excludes recordings that do not contain bird calls ("no_call").
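
The sketch below groups the rows of the multiclass test split back into per-recording annotation lists. It assumes each row describes exactly one bounding box, as the single-valued start_time/end_time and frequency fields in the metadata table suggest.

```python
from collections import defaultdict


def annotations_per_recording(test_split):
    """Collect all bounding-box annotations of the multiclass test split per file."""
    # Dropping the audio column avoids shuffling raw audio bytes around while iterating.
    rows = test_split.remove_columns("audio")
    boxes = defaultdict(list)
    for row in rows:
        boxes[row["filepath"]].append({
            "label": row["ebird_code"],
            "start_time": row["start_time"],
            "end_time": row["end_time"],
            "low_freq": row["low_freq"],
            "high_freq": row["high_freq"],
        })
    return boxes
```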

Quick Use

  • For segment-based multi-label evaluation, use the test_5s split for testing.
  • To quickly assemble a training set, you can load only the first 5 seconds of each recording or a single detected event per recording; see the sketch after this list.
  • We recommend starting with HSN: it is a medium-sized dataset with few overlapping vocalizations per segment.
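
A quick-start sketch along these lines could look as follows. The repository id, the trust_remote_code flag, and writing the truncated waveform back into the audio column are assumptions about current datasets behaviour rather than an official recipe.

```python
from datasets import Audio, load_dataset

# Assumed repository id; "HSN" follows the configuration scheme described above.
ds = load_dataset("DBD-research-group/BirdSet", "HSN", trust_remote_code=True)
ds = ds.cast_column("audio", Audio(sampling_rate=32_000, mono=True, decode=True))


def first_five_seconds(example, sample_rate=32_000):
    # Keep only the first 5 s of each focal training recording.
    example["audio"]["array"] = example["audio"]["array"][: 5 * sample_rate]
    return example


train = ds["train"].map(first_five_seconds)  # quick training set
test_5s = ds["test_5s"]                      # segment-based multi-label evaluation
```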

Metadata

| Field | Format | Description |
|---|---|---|
| audio | Audio(sampling_rate=32_000, mono=True, decode=False) | audio object from HF |
| filepath | Value("string") | relative path where the recording is stored |
| start_time | Value("float64") | test data only: start time of a vocalization in s |
| end_time | Value("float64") | test data only: end time of a vocalization in s |
| low_freq | Value("int64") | test data only: lower frequency bound of a vocalization in Hz |
| high_freq | Value("int64") | test data only: upper frequency bound of a vocalization in Hz |
| ebird_code | ClassLabel(names=class_list) | assigned species label |
| ebird_code_secondary | Sequence(datasets.Value("string")) | train data only: possible secondary species in a recording |
| ebird_code_multilabel | Sequence(datasets.ClassLabel(names=class_list)) | assigned species labels in multi-label format |
| call_type | Sequence(datasets.Value("string")) | train data only: type of bird vocalization |
| sex | Value("string") | train data only: sex of the bird |
| lat | Value("float64") | latitude of the vocalization/recording in WGS 84 |
| long | Value("float64") | longitude of the vocalization/recording in WGS 84 |
| length | Value("int64") | length of the file in s |
| microphone | Value("string") | soundscape or focal recording, including the microphone string |
| license | Value("string") | license of the recording |
| source | Value("string") | source of the recording |
| local_time | Value("string") | local time of the recording |
| detected_events | Sequence(datasets.Sequence(datasets.Value("float64"))) | train data only: audio events detected with bambird, as start/end-time tuples |
| event_cluster | Sequence(datasets.Value("int64")) | train data only: cluster assignment of each detected event (bambird) |
| peaks | Sequence(datasets.Value("float64")) | train data only: peak events detected with scipy peak detection |
| quality | Value("string") | train data only: recording quality (A, B, C) |
| recordist | Value("string") | train data only: recordist of the recording |
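
The integer labels in the examples below map back to eBird codes through the ClassLabel features; a small sketch (repository id again assumed):

```python
from datasets import load_dataset

# Assumed repository id (see the loading sketch above).
ds = load_dataset("DBD-research-group/BirdSet", "HSN", trust_remote_code=True)

# Integer labels map back to eBird codes through the ClassLabel feature.
class_names = ds["train"].features["ebird_code"].names
print(class_names[ds["train"][0]["ebird_code"]])

# The multi-label field is a Sequence(ClassLabel); its names sit one level deeper.
ml_names = ds["test_5s"].features["ebird_code_multilabel"].feature.names
print([ml_names[i] for i in ds["test_5s"][0]["ebird_code_multilabel"]])
```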

Example Metadata Train

{'audio': {'path': '.ogg',
  'array': array([ 0.0008485 ,  0.00128899, -0.00317163, ...,  0.00228528,
          0.00270796, -0.00120562]),
  'sampling_rate': 32000},
 'filepath': '.ogg',
 'start_time': None,
 'end_time': None,
 'low_freq': None,
 'high_freq': None,
 'ebird_code': 0,
 'ebird_code_multilabel': [0],
 'ebird_code_secondary': ['plaant1', 'blfnun1', 'butwoo1', 'whtdov', 'undtin1', 'gryhaw3'],
 'call_type': 'song',
 'sex': 'uncertain',
 'lat': -16.0538,
 'long': -49.604,
 'length': 46,
 'microphone': 'focal',
 'license': '//creativecommons.org/licenses/by-nc-sa/4.0/',
 'source': 'xenocanto',
 'local_time': '18:37',
 'detected_events': [[0.736, 1.824],
  [9.936, 10.944],
  [13.872, 15.552],
  [19.552, 20.752],
  [24.816, 25.968],
  [26.528, 32.16],
  [36.112, 37.808],
  [37.792, 38.88],
  [40.048, 40.8],
  [44.432, 45.616]],
 'event_cluster': [0, 0, 0, 0, 0, -1, 0, 0, -1, 0],
 'peaks': [14.76479119037789, 41.16993396760847],
 'quality': 'A',
 'recordist': '...'}

Example Metadata Test_5s

{'audio': {'path': '.ogg',
  'array': array([-0.67190468, -0.9638235 , -0.99569213, ..., -0.01262935,
         -0.01533066, -0.0141047 ]),
  'sampling_rate': 32000},
 'filepath': '.ogg',
 'start_time': 0.0,
 'end_time': 5.0,
 'low_freq': 0,
 'high_freq': 3098,
 'ebird_code': None,
 'ebird_code_multilabel': [1, 10],
 'ebird_code_secondary': None,
 'call_type': None,
 'sex': None,
 'lat': 5.59,
 'long': -75.85,
 'length': None,
 'microphone': 'Soundscape',
 'license': 'Creative Commons Attribution 4.0 International Public License',
 'source': 'https://zenodo.org/record/7525349',
 'local_time': '4:30:29',
 'detected_events': None,
 'event_cluster': None,
 'peaks': None,
 'quality': None,
 'recordist': None}

Citation Information

@misc{birdset,
      title={BirdSet: A Multi-Task Benchmark for Classification in Avian Bioacoustics}, 
      author={Lukas Rauch and Raphael Schwinger and Moritz Wirth and René Heinrich and Jonas Lange and Stefan Kahl and Bernhard Sick and Sven Tomforde and Christoph Scholz},
      year={2024},
      eprint={2403.10380},
      archivePrefix={arXiv},
      primaryClass={cs.SD}
}

Note that each test subset in BirdSet has its own citation; please refer to the respective source for the correct citation of each contained dataset. Each file in the training dataset also notes its recordist, and the licenses can be found in the metadata.