---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 378108023.375
      num_examples: 1581
  download_size: 373552088
  dataset_size: 378108023.375
---

# MDCT-1k

Over 1,000 audio clips from the Google music captions dataset, represented as 512x512 time-frequency images. More information is provided in the report.

The time-frequency images are created from the MDCT coefficients of the 0-12 kHz frequency band of 20-second audio clips.
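
As a rough illustration of the transform involved, the sketch below computes MDCT coefficients for an audio signal and crops them to a 512x512 array. The sample rate, window, hop, and coefficient-to-pixel scaling actually used for this dataset are specified in the report; the sine window, `N = 512`, and synthetic input below are assumptions for illustration only.

```python
# Illustrative sketch: MDCT analysis of an audio clip into a 512x512 coefficient
# array. The window, sample rate, and framing here are assumptions, not the
# exact settings used to build this dataset (see the report for those).
import numpy as np

def mdct(x, N=512):
    """MDCT with frame length 2N, hop N, and a sine (Princen-Bradley) window."""
    n = np.arange(2 * N)
    window = np.sin(np.pi / (2 * N) * (n + 0.5))
    k = np.arange(N)
    basis = np.cos(np.pi / N * np.outer(k + 0.5, n + 0.5 + N / 2))
    # zero-pad so the clip splits into an integer number of hops
    pad = (-len(x)) % N
    x = np.concatenate([np.zeros(N), x, np.zeros(N + pad)])
    frames = np.stack([x[i * N : i * N + 2 * N] for i in range(len(x) // N - 1)])
    return (frames * window) @ basis.T           # shape: (num_frames, N)

sr = 24000                                       # assumed; 0-12 kHz is then the full band
clip = np.random.randn(20 * sr)                  # stand-in for a 20-second clip
coeffs = mdct(clip)
image = coeffs[:512, :512]                       # crop to 512 frames x 512 bins for illustration
print(image.shape)                               # (512, 512)
```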

Other audio diffusion models operate in the space of the magnitude spectrogram or mel magnitude spectrogram. Since the phase is discarded, these approaches require a vocoder for audio generation. When operating in the mel-spectrogram space, high frequencies are represented with insufficient time resolution, leading to a noticeable loss of quality.

Operating in the MDCT space does not require a vocoder, nor does it oversample or undersample any range of frequencies.

Please see this notebook, which shows how to load the dataset and convert the MDCT images back to audio.
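
As a minimal sketch of the same idea, the example below loads one record with `datasets` and resynthesizes audio via an inverse MDCT with windowed overlap-add. The mapping from pixel values back to signed MDCT coefficients (scaling, companding, channel layout, and axis order) is defined in the report and the notebook; the grayscale conversion and linear rescaling below are assumptions for illustration only.

```python
# Minimal sketch: load one example and invert the MDCT image back to audio.
# The pixel-to-coefficient mapping below is an assumption (grayscale image with
# a simple linear rescaling, rows = time frames, columns = frequency bins); the
# dataset's actual encoding is described in the report and companion notebook.
import numpy as np
from datasets import load_dataset

ds = load_dataset("danjacobellis/MDCT-1k", split="train")
example = ds[0]
print(example["text"])                                   # caption for the clip

img = np.asarray(example["image"].convert("L"), dtype=np.float64)
coeffs = (img / 255.0 - 0.5) * 2.0                       # assumed mapping to [-1, 1]

# Inverse MDCT of each frame, then windowed overlap-add (time-domain alias cancellation).
num_frames, N = coeffs.shape
n = np.arange(2 * N)
window = np.sin(np.pi / (2 * N) * (n + 0.5))             # same sine window as analysis
k = np.arange(N)
basis = np.cos(np.pi / N * np.outer(k + 0.5, n + 0.5 + N / 2))

frames = (coeffs @ basis) / N * window                   # shape: (num_frames, 2N)
audio = np.zeros((num_frames + 1) * N)
for i, frame in enumerate(frames):
    audio[i * N : i * N + 2 * N] += frame
```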

Additionally, this notebook includes an example of the audio generated by fine-tuning on this dataset and shows how to use the inference pipeline.
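
For orientation only, generation from a model fine-tuned on these images might look roughly like the sketch below, assuming a standard unconditional `diffusers` image pipeline. The checkpoint name is a hypothetical placeholder, and the sampled image would still need the MDCT-to-audio conversion shown above; the linked notebook demonstrates the actual pipeline.

```python
# Hypothetical sketch of inference with an unconditional diffusers pipeline
# fine-tuned on MDCT-1k; "your-username/mdct-1k-ddpm" is a placeholder, not a
# real checkpoint. See the linked notebook for the actual inference pipeline.
import numpy as np
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("your-username/mdct-1k-ddpm")   # placeholder id
generated = pipe(num_inference_steps=1000).images[0]                # a 512x512 PIL image

# Map generated pixels back to MDCT coefficients, then to audio, using the same
# (assumed) rescaling and inverse MDCT shown earlier in this README.
coeffs = (np.asarray(generated.convert("L"), dtype=np.float64) / 255.0 - 0.5) * 2.0
```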