download_size: 373552088
dataset_size: 378108023.375
---

# MDCT-1k

Over 1000 audio clips from the [Google music captions dataset](https://huggingface.co/datasets/google/MusicCaps), represented as 512x512 time-frequency images.

The time-frequency images are created from the MDCT coefficients of the 0-12 kHz frequency band of 20-second audio clips.
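
As a rough illustration, an MDCT with 50% overlap and a sine window can be written directly in NumPy. The sample rate and frame length below are assumptions chosen for the example, not the parameters used to build this dataset.

```python
# Minimal sketch of computing MDCT coefficients for one clip (illustrative only).
# Sample rate and frame length are assumptions, not the dataset's actual settings.
import numpy as np

def mdct(signal, N):
    """Direct MDCT: frames of length 2N, hop N (50% overlap), sine window."""
    n = np.arange(2 * N)
    k = np.arange(N)
    window = np.sin(np.pi / (2 * N) * (n + 0.5))  # Princen-Bradley sine window
    basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    n_frames = len(signal) // N - 1
    frames = np.stack([signal[i * N : i * N + 2 * N] for i in range(n_frames)])
    return (frames * window) @ basis.T  # shape: (n_frames, N)

fs = 24_000                      # assumed sample rate; MDCT bins then span 0-12 kHz
clip = np.random.randn(20 * fs)  # stand-in for a 20-second audio clip
coeffs = mdct(clip, N=512)       # one row of 512 coefficients per frame
```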
20 |
+
|
21 |
+
Please see [this notebook showing how to load the dataset and convert from the MDCT images back to audio](load_dataset.ipynb)
|
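
For a quick start outside of the notebook, the images can also be pulled with the Hugging Face `datasets` library; the repository id and split name below are assumptions based on this page.

```python
# Quick-start sketch using the `datasets` library.
# Repository id and split name are assumed and may differ.
from datasets import load_dataset

ds = load_dataset("danjacobellis/MDCT-1k", split="train")
print(ds.features)  # inspect the available columns (e.g. the time-frequency image)
example = ds[0]     # one 512x512 MDCT image plus any associated metadata
```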
22 |
+
|
23 |
+
Most other audio diffusion models operate in the space of the magnitude spectrogram or mel magnitude spectrogram. Since the phase is discarded, this requires the use of a vocoder for audio generation. When operating in the space of the mel-spectrogram, high frequencies are represented with insufficient time resolution, leading to a noticable loss of quality.
|
24 |
+
|
25 |
+
Operating in the MDCT space does not require a vocoder, nor does it oversample or undersample any range of frequencies.
|
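
Because the MDCT is invertible, a waveform can be recovered directly from the coefficients by an inverse transform with overlap-add, with no learned vocoder in the loop. The sketch below pairs with the `mdct()` sketch above and uses the same assumed parameters; the linked notebook shows the actual conversion used for this dataset.

```python
# Sketch of inverse MDCT with 50% overlap-add (pairs with the mdct() sketch above).
import numpy as np

def imdct(coeffs, N):
    """Inverse MDCT: window each 2N-point frame and overlap-add with hop N."""
    n = np.arange(2 * N)
    k = np.arange(N)
    window = np.sin(np.pi / (2 * N) * (n + 0.5))  # same sine window as the forward transform
    basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    frames = (2.0 / N) * (coeffs @ basis) * window  # shape: (n_frames, 2N)
    out = np.zeros((frames.shape[0] + 1) * N)
    for i, frame in enumerate(frames):              # overlap-add adjacent frames
        out[i * N : i * N + 2 * N] += frame
    return out

# audio = imdct(coeffs, N=512)  # recovers the waveform (up to edge effects)
```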