---
license: mit
dataset_info:
  - config_name: chords
    features:
      - name: audio
        dtype:
          audio:
            sampling_rate: 44100
            mono: false
      - name: root_note_name
        dtype: string
      - name: chord_type
        dtype: string
      - name: inversion
        dtype: int64
      - name: root_note_is_accidental
        dtype: bool
      - name: root_note_pitch_class
        dtype: int64
      - name: midi_program_num
        dtype: int64
      - name: midi_program_name
        dtype: string
      - name: midi_category
        dtype: string
    splits:
      - name: train
        num_bytes: 18697466628.48
        num_examples: 13248
    download_size: 18637787206
    dataset_size: 18697466628.48
  - config_name: intervals
    features:
      - name: audio
        dtype:
          audio:
            sampling_rate: 44100
            mono: false
      - name: root_note_name
        dtype: string
      - name: root_note_pitch_class
        dtype: int64
      - name: interval
        dtype: int64
      - name: play_style
        dtype: int64
      - name: play_style_name
        dtype: string
      - name: midi_note_val
        dtype: int64
      - name: midi_program_num
        dtype: int64
      - name: midi_program_name
        dtype: string
      - name: midi_category
        dtype: string
    splits:
      - name: train
        num_bytes: 56093049925.056
        num_examples: 39744
    download_size: 56074987413
    dataset_size: 56093049925.056
  - config_name: notes
    features:
      - name: audio
        dtype:
          audio:
            sampling_rate: 44100
            mono: false
      - name: root_note_name
        dtype: string
      - name: root_note_pitch_class
        dtype: int64
      - name: octave
        dtype: int64
      - name: root_note_is_accidental
        dtype: bool
      - name: register
        dtype: int64
      - name: midi_note_val
        dtype: int64
      - name: midi_program_num
        dtype: int64
      - name: midi_program_name
        dtype: string
      - name: midi_category
        dtype: string
    splits:
      - name: train
        num_bytes: 14023184428.832
        num_examples: 9936
    download_size: 13804952340
    dataset_size: 14023184428.832
  - config_name: scales
    features:
      - name: audio
        dtype:
          audio:
            sampling_rate: 44100
            mono: false
      - name: root_note_name
        dtype: string
      - name: mode
        dtype: string
      - name: play_style
        dtype: int64
      - name: play_style_name
        dtype: string
      - name: midi_program_num
        dtype: int64
      - name: midi_program_name
        dtype: string
      - name: midi_category
        dtype: string
    splits:
      - name: train
        num_bytes: 21813743576.416
        num_examples: 15456
    download_size: 21806379646
    dataset_size: 21813743576.416
  - config_name: simple_progressions
    features:
      - name: audio
        dtype:
          audio:
            sampling_rate: 44100
            mono: false
      - name: key_note_name
        dtype: string
      - name: key_note_pitch_class
        dtype: int64
      - name: chord_progression
        dtype: string
      - name: midi_program_num
        dtype: int64
      - name: midi_program_name
        dtype: string
      - name: midi_category
        dtype: string
    splits:
      - name: train
        num_bytes: 29604485544.56
        num_examples: 20976
    download_size: 29509153369
    dataset_size: 29604485544.56
  - config_name: tempos
    features:
      - name: audio
        dtype:
          audio:
            sampling_rate: 44100
            mono: false
      - name: bpm
        dtype: int64
      - name: click_config_name
        dtype: string
      - name: midi_program_num
        dtype: int64
      - name: offset_time
        dtype: float64
    splits:
      - name: train
        num_bytes: 2840527084
        num_examples: 4025
    download_size: 1323717012
    dataset_size: 2840527084
  - config_name: time_signatures
    features:
      - name: audio
        dtype:
          audio:
            sampling_rate: 44100
            mono: false
      - name: time_signature
        dtype: string
      - name: time_signature_beats
        dtype: int64
      - name: time_signature_subdivision
        dtype: int64
      - name: is_compound
        dtype: int64
      - name: bpm
        dtype: int64
      - name: click_config_name
        dtype: string
      - name: midi_program_num
        dtype: int64
      - name: offset_time
        dtype: float64
      - name: reverb_level
        dtype: int64
    splits:
      - name: train
        num_bytes: 846915090
        num_examples: 1200
    download_size: 692431621
    dataset_size: 846915090
configs:
  - config_name: chords
    data_files:
      - split: train
        path: chords/train-*
  - config_name: intervals
    data_files:
      - split: train
        path: intervals/train-*
  - config_name: notes
    data_files:
      - split: train
        path: notes/train-*
  - config_name: scales
    data_files:
      - split: train
        path: scales/train-*
  - config_name: simple_progressions
    data_files:
      - split: train
        path: simple_progressions/train-*
  - config_name: tempos
    data_files:
      - split: train
        path: tempos/train-*
  - config_name: time_signatures
    data_files:
      - split: train
        path: time_signatures/train-*
task_categories:
  - audio-classification
  - feature-extraction
language:
  - en
tags:
  - audio
  - music
  - music information retrieval
size_categories:
  - 100K<n<1M
---

# Dataset Card for SynTheory

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
- [Data Statistics](#data-statistics)

## Dataset Description

### Dataset Summary

SynTheory is a synthetic dataset of music theory concepts, covering rhythmic concepts (tempos and time signatures) and tonal concepts (notes, intervals, scales, chords, and chord progressions).

Each of these 7 concepts has its own config.

`tempos` consists of 161 integer tempos (`bpm`) ranging from 50 BPM to 210 BPM (inclusive), 5 percussive instrument types (`click_config_name`), and 5 random start-time offsets (`offset_time`).

`time_signatures` consists of 8 time signatures (`time_signature`), 5 percussive instrument types (`click_config_name`), 10 random start-time offsets (`offset_time`), and 3 reverb levels (`reverb_level`). The 8 time signatures are 2/2, 2/4, 3/4, 3/8, 4/4, 6/8, 9/8, and 12/8.

`notes` consists of 12 pitch classes (`root_note_name`), 9 octaves (`octave`), and 92 instrument types (`midi_program_name`). The 12 pitch classes are C, C#, D, D#, E, F, F#, G, G#, A, A#, and B.

`intervals` consists of 12 interval sizes (`interval`), 12 root notes (`root_note_name`), 92 instrument types (`midi_program_name`), and 3 play styles (`play_style_name`). The 12 intervals are minor 2nd, Major 2nd, minor 3rd, Major 3rd, Perfect 4th, Tritone, Perfect 5th, minor 6th, Major 6th, minor 7th, Major 7th, and Perfect octave.

`scales` consists of 7 modes (`mode`), 12 root notes (`root_note_name`), 92 instrument types (`midi_program_name`), and 2 play styles (`play_style_name`). The 7 modes are Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian, and Locrian.

`chords` consists of 4 chord qualities (`chord_type`), 3 inversions (`inversion`), 12 root notes (`root_note_name`), and 92 instrument types (`midi_program_name`). The 4 chord qualities are major, minor, augmented, and diminished. The 3 inversions are root position, first inversion, and second inversion.

`simple_progressions` consists of 19 chord progressions (`chord_progression`), 12 key notes (`key_note_name`), and 92 instrument types (`midi_program_name`). The 19 chord progressions comprise 10 in major mode and 9 in natural minor mode. The major-mode progressions are (I–IV–V–I), (I–IV–vi–V), (I–V–vi–IV), (I–vi–IV–V), (ii–V–I–vi), (IV–I–V–vi), (IV–V–iii–vi), (V–IV–I–V), (V–vi–IV–I), and (vi–IV–I–V). The natural-minor-mode progressions are (i–ii◦–v–i), (i–III–iv–i), (i–iv–v–i), (i–VI–III–VII), (i–VI–VII–i), (i–VI–VII–III), (i–VII–VI–IV), (iv–VII–i–i), and (VII–vi–VII–i).
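
As a quick sanity check, the label fields listed above can be read directly from a streamed example. This is a minimal sketch using the `chords` config; any other config works the same way with its own fields.

```python
from datasets import load_dataset

# Stream a single example from the chords config without downloading the full split.
chords = load_dataset("meganwei/syntheory", "chords", split="train", streaming=True)
example = next(iter(chords))

# Label fields as defined in the dataset metadata above.
print(example["root_note_name"], example["chord_type"], example["inversion"],
      example["midi_program_name"])

# The audio field carries the decoded waveform and its 44.1 kHz sampling rate.
print(example["audio"]["sampling_rate"])
```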

### Supported Tasks and Leaderboards

- `audio-classification`: The dataset can be used for music theory classification tasks.
- `feature-extraction`: The samples can be fed into pretrained audio codecs to extract representations, which can then be used for downstream MIR tasks (see the sketch below).
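
A minimal feature-extraction sketch is shown below. The encoder choice (`facebook/wav2vec2-base`), the 16 kHz resampling, and the mean-pooling step are illustrative assumptions, not the models or setup used in the SynTheory paper.

```python
import torch
from transformers import AutoFeatureExtractor, AutoModel
from datasets import load_dataset, Audio

# Illustrative encoder; any pretrained audio model exposing hidden states works similarly.
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = AutoModel.from_pretrained("facebook/wav2vec2-base")

notes = load_dataset("meganwei/syntheory", "notes", split="train", streaming=True)
# Downmix to mono and resample from 44.1 kHz to the encoder's expected 16 kHz.
notes = notes.cast_column("audio", Audio(sampling_rate=16_000))

example = next(iter(notes))
inputs = feature_extractor(example["audio"]["array"], sampling_rate=16_000,
                           return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, time_frames, hidden_dim)

# One possible clip-level representation: mean-pool over time.
clip_embedding = hidden_states.mean(dim=1)
```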

## How to use

The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive with a single call to the `load_dataset` function.

For example, to download the notes config, simply specify the corresponding config name (i.e., `"notes"`):

```python
from datasets import load_dataset

notes = load_dataset("meganwei/syntheory", "notes")
```

Using the `datasets` library, you can also stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` call. Loading a dataset in streaming mode yields individual samples as you iterate over them, rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

notes = load_dataset("meganwei/syntheory", "notes", streaming=True)

# Index into the train split before iterating over individual samples.
print(next(iter(notes["train"])))
```

Bonus: you can create a PyTorch `DataLoader` directly from the dataset, whether loaded locally or streamed.

**Local:**

```python
from datasets import load_dataset
from torch.utils.data import BatchSampler, RandomSampler, DataLoader

# Load the train split so the DataLoader wraps a Dataset rather than a DatasetDict.
notes = load_dataset("meganwei/syntheory", "notes", split="train")
batch_sampler = BatchSampler(RandomSampler(notes), batch_size=32, drop_last=False)
dataloader = DataLoader(notes, batch_sampler=batch_sampler)
```

**Streaming:**

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

# Load the train split as an IterableDataset so it can be wrapped directly.
notes = load_dataset("meganwei/syntheory", "notes", split="train", streaming=True)
dataloader = DataLoader(notes, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).

### Example scripts

[More Information Needed]

## Dataset Structure

### Data Fields

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

For the notes music theory concept, there are 9,936 distinct note configurations. However, our dataset contains only 9,848 non-silent samples; the remaining 88 samples, at extreme registers, are silent because they are unvoiceable with our soundfont. With a more complete soundfont, all 9,936 configurations would be realizable as audio.

The silent samples are the following audio files: 0_0_C_10_Music_Box.wav, 0_0_C_56_Trumpet.wav, 0_0_C_68_Oboe.wav, 1_0_C#_10_Music_Box.wav, 1_0_C#_56_Trumpet.wav, 1_0_C#_68_Oboe.wav, 2_0_D_10_Music_Box.wav, 2_0_D_56_Trumpet.wav, 2_0_D_68_Oboe.wav, 3_0_D#_10_Music_Box.wav, 3_0_D#_56_Trumpet.wav, 3_0_D#_68_Oboe.wav, 4_0_E_10_Music_Box.wav, 4_0_E_56_Trumpet.wav, 4_0_E_68_Oboe.wav, 5_0_F_10_Music_Box.wav, 5_0_F_56_Trumpet.wav, 5_0_F_68_Oboe.wav, 6_0_F#_10_Music_Box.wav, 6_0_F#_56_Trumpet.wav, 6_0_F#_68_Oboe.wav, 7_0_G_10_Music_Box.wav, 7_0_G_56_Trumpet.wav, 7_0_G_68_Oboe.wav, 8_0_G#_10_Music_Box.wav, 8_0_G#_56_Trumpet.wav, 8_0_G#_68_Oboe.wav, 9_0_A_10_Music_Box.wav, 9_0_A_56_Trumpet.wav, 9_0_A_68_Oboe.wav, 10_0_A#_10_Music_Box.wav, 10_0_A#_56_Trumpet.wav, 10_0_A#_68_Oboe.wav, 11_0_B_10_Music_Box.wav, 11_0_B_56_Trumpet.wav, 11_0_B_68_Oboe.wav, 12_0_C_68_Oboe.wav, 13_0_C#_68_Oboe.wav, 14_0_D_68_Oboe.wav, 15_0_D#_68_Oboe.wav, 16_0_E_68_Oboe.wav, 17_0_F_68_Oboe.wav, 18_0_F#_68_Oboe.wav, 19_0_G_68_Oboe.wav, 20_0_G#_68_Oboe.wav, 21_0_A_68_Oboe.wav, 22_0_A#_68_Oboe.wav, 23_0_B_68_Oboe.wav, 24_0_C_68_Oboe.wav, 25_0_C#_68_Oboe.wav, 26_0_D_68_Oboe.wav, 27_0_D#_68_Oboe.wav, 28_0_E_68_Oboe.wav, 29_0_F_68_Oboe.wav, 30_0_F#_68_Oboe.wav, 31_0_G_68_Oboe.wav, 32_0_G#_68_Oboe.wav, 33_0_A_68_Oboe.wav, 34_0_A#_68_Oboe.wav, 35_0_B_68_Oboe.wav, 80_2_G#_67_Baritone_Sax.wav, 81_2_A_67_Baritone_Sax.wav, 82_2_A#_67_Baritone_Sax.wav, 83_2_B_67_Baritone_Sax.wav, 84_2_C_67_Baritone_Sax.wav, 85_2_C#_67_Baritone_Sax.wav, 86_2_D_67_Baritone_Sax.wav, 87_2_D#_67_Baritone_Sax.wav, 88_2_E_67_Baritone_Sax.wav, 89_2_F_67_Baritone_Sax.wav, 90_2_F#_67_Baritone_Sax.wav, 91_2_G_67_Baritone_Sax.wav, 92_2_G#_67_Baritone_Sax.wav, 93_2_A_67_Baritone_Sax.wav, 94_2_A#_67_Baritone_Sax.wav, 95_2_B_67_Baritone_Sax.wav, 96_2_C_67_Baritone_Sax.wav, 97_2_C#_67_Baritone_Sax.wav, 98_2_D_67_Baritone_Sax.wav, 99_2_D#_67_Baritone_Sax.wav, 100_2_E_67_Baritone_Sax.wav, 101_2_F_67_Baritone_Sax.wav, 102_2_F#_67_Baritone_Sax.wav, 103_2_G_67_Baritone_Sax.wav, 104_2_G#_67_Baritone_Sax.wav, 105_2_A_67_Baritone_Sax.wav, 106_2_A#_67_Baritone_Sax.wav, and 107_2_B_67_Baritone_Sax.wav.
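
If the silent files matter for your use case, one way to drop them is to filter on the decoded waveform. This is a minimal sketch; the amplitude threshold is an arbitrary assumption, and decoding every sample makes the filter slow on the full split.

```python
import numpy as np
from datasets import load_dataset

notes = load_dataset("meganwei/syntheory", "notes", split="train")

# Treat an example as silent if its peak absolute amplitude is (near) zero.
def is_voiced(example, eps=1e-6):
    return np.abs(np.asarray(example["audio"]["array"])).max() > eps

voiced_notes = notes.filter(is_voiced)
print(voiced_notes.num_rows)  # expected: 9,848
```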

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

SynTheory is released under the MIT License.

### Citation Information

```bibtex
@inproceedings{Wei2024-music,
  title={Do Music Generation Models Encode Music Theory?},
  author={Wei, Megan and Freeman, Michael and Donahue, Chris and Sun, Chen},
  booktitle={International Society for Music Information Retrieval},
  year={2024}
}
```

## Data Statistics

| Concept            | Number of Samples |
|--------------------|-------------------|
| Tempo              | 4,025             |
| Time Signatures    | 1,200             |
| Notes              | 9,936             |
| Intervals          | 39,744            |
| Scales             | 15,456            |
| Chords             | 13,248            |
| Chord Progressions | 20,976            |
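
These per-config counts can be reproduced from the hosted metadata without downloading any audio, e.g. with `load_dataset_builder`. A minimal sketch:

```python
from datasets import load_dataset_builder

configs = ["tempos", "time_signatures", "notes", "intervals",
           "scales", "chords", "simple_progressions"]

# Only the dataset metadata is fetched here, not the audio files.
for name in configs:
    info = load_dataset_builder("meganwei/syntheory", name).info
    print(f"{name}: {info.splits['train'].num_examples} samples")
```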