---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 378108023.375
num_examples: 1581
download_size: 373552088
dataset_size: 378108023.375
---
# MDCT-1k
Over 1000 audio clips from the [Google music captions dataset](https://huggingface.co/datasets/google/MusicCaps) represented as 512x512 time-frequency images.
The time-frequency images are created from the MDCT coefficients of the 0-12kHz frequency band for 20 second audio clips.
Please see [this notebook](load_dataset.ipynb), which shows how to load the dataset and convert the MDCT images back to audio.

Most other audio diffusion models operate on the magnitude spectrogram or mel magnitude spectrogram. Because the phase is discarded, these representations require a vocoder to synthesize audio. The mel spectrogram also represents high frequencies with insufficient time resolution, leading to a noticeable loss of quality.

Operating in the MDCT space requires no vocoder, and it neither oversamples nor undersamples any range of frequencies.
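
The dataset's exact analysis parameters and quantization are in the linked notebook. As a minimal sketch of why no vocoder is needed, here is a NumPy MDCT/inverse-MDCT pair with a sine window; the function names and frame parameters are illustrative, not the dataset's actual pipeline:

```python
import numpy as np

def mdct(x, N):
    """MDCT with 50% overlap and a sine window (frame length 2N, hop N).
    len(x) must be a multiple of N; the signal is padded with N zeros on
    each side so every sample is covered by two frames."""
    w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
    n = np.arange(2 * N)
    k = np.arange(N)
    # Cosine basis of the MDCT (a lapped, modulated transform)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    x = np.concatenate([np.zeros(N), x, np.zeros(N)])
    frames = np.lib.stride_tricks.sliding_window_view(x, 2 * N)[::N]
    return (frames * w) @ basis  # shape: (num_frames, N)

def imdct(X, N):
    """Inverse MDCT: per-frame inverse transform, synthesis window,
    then overlap-add, which cancels the time-domain aliasing exactly."""
    w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    frames = (2 / N) * (X @ basis.T) * w
    out = np.zeros(N * (len(X) + 1))
    for i, f in enumerate(frames):
        out[i * N : i * N + 2 * N] += f
    return out[N:-N]  # drop the padding added in mdct()
```

The sine window satisfies the Princen-Bradley condition, so `imdct(mdct(x, N), N)` reconstructs `x` to numerical precision; no phase estimation is involved, unlike magnitude-spectrogram representations.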