danjacobellis committed
Commit bc13a8a
1 Parent(s): 9b83133

Update README.md

Files changed (1): README.md +1 -3
README.md CHANGED

```diff
@@ -14,7 +14,7 @@ dataset_info:
 ---
 # MDCT-1k
 
-Over 1000 audio clips from the [Google music captions dataset](https://huggingface.co/datasets/google/MusicCaps) represented as 512x512 time-frequency images.
+Over 1000 audio clips from the [Google music captions dataset](https://huggingface.co/datasets/google/MusicCaps) represented as 512x512 time-frequency images. More information is provided in the [report](MP3_diffusion.pdf).
 
 The time-frequency images are created from the MDCT coefficients of the 0-12kHz frequency band for 20 second audio clips.
 
@@ -22,8 +22,6 @@ Other audio diffusion models operate in the space of the magnitude spectrogram o
 
 Operating in the MDCT space does not require a vocoder, nor does it oversample or undersample any range of frequencies.
 
-More information is provided in the [report](MP3_diffusion.pdf).
-
 Please see [this notebook showing how to load the dataset and convert from the MDCT images back to audio](load_dataset.ipynb)
 
 Additionally, [this notebook includes an example of the audio generated by fine tuning on this dataset and shows how to use the inference pipeline](music_inference.ipynb)
```
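
For intuition about the representation described in the README, the following is a minimal NumPy sketch of a sine-windowed MDCT and its inverse (overlap-add with time-domain alias cancellation). The frame length `N=512`, the sine window, and the test sample rate are illustrative assumptions, not the dataset's actual pipeline; the [report](MP3_diffusion.pdf) and [load_dataset.ipynb](load_dataset.ipynb) define the real parameters, including the 0-12kHz band selection and any coefficient scaling.

```python
# Illustrative sketch only: a sine-windowed MDCT and its inverse via
# overlap-add. Frame length, sample rate, and scaling are assumptions;
# the dataset's real pipeline is documented in MP3_diffusion.pdf and
# load_dataset.ipynb.
import numpy as np

def mdct(x, N=512):
    """Forward MDCT: 2N-sample sine-windowed frames at hop N -> N coefficients per frame."""
    w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))  # Princen-Bradley sine window
    n = np.arange(2 * N)[:, None]
    k = np.arange(N)[None, :]
    C = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))   # (2N, N) MDCT basis
    n_frames = (len(x) - 2 * N) // N + 1
    frames = np.stack([w * x[i * N : i * N + 2 * N] for i in range(n_frames)])
    return frames @ C                                       # (n_frames, N)

def imdct(X, N=512):
    """Inverse MDCT: window each inverse-transformed frame and overlap-add at hop N."""
    w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
    n = np.arange(2 * N)[:, None]
    k = np.arange(N)[None, :]
    C = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    y = np.zeros((len(X) - 1) * N + 2 * N)
    for i, coeffs in enumerate(X):
        y[i * N : i * N + 2 * N] += (2.0 / N) * w * (C @ coeffs)
    return y

# Round trip on a test tone; interior samples reconstruct almost exactly
# (the first and last half-frames lack an overlapping partner).
x = np.sin(2 * np.pi * 440 / 24000 * np.arange(24000))
X = mdct(x)                 # each row is one time step of a time-frequency image
x_hat = imdct(X)
print(np.max(np.abs(x[512:len(x_hat) - 512] - x_hat[512:-512])))  # ~1e-13
```

Because the sine window satisfies the Princen-Bradley condition, overlap-add reconstruction is exact, which is the property behind the README's claim that no vocoder (i.e., no phase estimation) is needed: the MDCT is an invertible, critically sampled transform.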
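And for the loading step the notebooks cover, here is a hedged sketch using the Hugging Face `datasets` library. The repository id `danjacobellis/MDCT-1k`, the column name `image`, and the pixel-to-coefficient mapping are assumptions for illustration only; [load_dataset.ipynb](load_dataset.ipynb) shows the actual procedure.

```python
# Hedged sketch of loading the dataset with Hugging Face `datasets`.
# The repository id, the "image" column name, and the de-quantization
# below are assumptions; load_dataset.ipynb documents the real inversion.
import numpy as np
from datasets import load_dataset

ds = load_dataset("danjacobellis/MDCT-1k", split="train")
img = np.asarray(ds[0]["image"], dtype=np.float32)  # 512x512 time-frequency image

# Hypothetical de-quantization: map 8-bit pixels back to signed coefficients.
# The true scaling (and any companding) is defined in the notebook/report.
coeffs = (img / 255.0) * 2.0 - 1.0
audio = imdct(coeffs)  # imdct() from the sketch above
```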