---
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 22050
    - name: genre
      dtype: string
    - name: label
      dtype:
        class_label:
          names:
            '0': blues
            '1': classical
            '2': country
            '3': disco
            '4': hiphop
            '5': jazz
            '6': metal
            '7': pop
            '8': reggae
            '9': rock
  splits:
    - name: train
      num_bytes: 586664927
      num_examples: 443
    - name: validation
      num_bytes: 260793810
      num_examples: 197
    - name: test
      num_bytes: 383984112
      num_examples: 290
  download_size: 1230811404
  dataset_size: 1231442849
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
task_categories:
  - audio-classification
tags:
  - audio
  - multiclass
  - music
---

# GTZAN Music Genre Classification

GTZAN consists of 100 30-second recording excerpts in each of 10 genre categories (1,000 excerpts in total) and is the most-used public dataset in music information retrieval (MIR) research. Following Kereliuk et al. (2015), we use the "fault-filtered" partitioning of GTZAN, which is constructed by hand and contains 443/197/290 excerpts in the train/validation/test splits. This version of the database can be found and downloaded here.
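
As a quick start, the splits described above can be loaded with the 🤗 `datasets` library. The sketch below assumes this card's repository ID (`yangwang825/gtzan-demo`) and that `datasets` is installed with audio support; adjust the ID if you host the data elsewhere.

```python
from datasets import load_dataset

# Load the fault-filtered GTZAN splits (repo ID assumed from this dataset card).
dataset = load_dataset("yangwang825/gtzan-demo")

train = dataset["train"]            # 443 excerpts
validation = dataset["validation"]  # 197 excerpts
test = dataset["test"]              # 290 excerpts

# Each example holds the decoded audio (22,050 Hz), the genre string, and an integer label.
example = train[0]
waveform = example["audio"]["array"]
sampling_rate = example["audio"]["sampling_rate"]  # 22050
print(example["genre"], example["label"], waveform.shape, sampling_rate)

# The integer label can be mapped back to its genre name via the ClassLabel feature.
print(train.features["label"].int2str(example["label"]))
```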

## Citations

```bibtex
@article{kereliuk2015deep,
  title={Deep learning and music adversaries},
  author={Kereliuk, Corey and Sturm, Bob L and Larsen, Jan},
  journal={IEEE Transactions on Multimedia},
  volume={17},
  number={11},
  pages={2059--2071},
  year={2015},
  publisher={IEEE}
}
@article{sturm2014state,
  title={The state of the art ten years after a state of the art: Future research in music information retrieval},
  author={Sturm, Bob L},
  journal={Journal of New Music Research},
  volume={43},
  number={2},
  pages={147--172},
  year={2014},
  publisher={Taylor \& Francis}
}
@article{tzanetakis2002musical,
  title={Musical genre classification of audio signals},
  author={Tzanetakis, George and Cook, Perry},
  journal={IEEE Transactions on Speech and Audio Processing},
  volume={10},
  number={5},
  pages={293--302},
  year={2002},
  publisher={IEEE}
}
```