---
task_categories:
- audio-to-audio
tags:
- music
pretty_name: YouTubeBigBand
size_categories:
- n<1K
---

# YouTubeBigBand Dataset

Inspired by the [YouTubeMix](https://huggingface.co/datasets/krandiash/youtubemix) dataset.

<br/>
<br/>

*Source*: [https://www.youtube.com/watch?v=I4KAKqF4mjE](https://www.youtube.com/watch?v=I4KAKqF4mjE) - a two-hour mix of jazz tracks played by a big band.

<br/>
<br/>

Used for pre-training a [SaShiMi model (see citation)](https://arxiv.org/abs/2202.09729) as part of the Tel Aviv University Deep Learning Workshop, 2024 Semester B (see the [project repository fork](https://github.com/galbezalel/s4-dl-workshop/)).

We include two versions of the dataset:

- `youtubebigband.zip` is a zip archive containing 129 one-minute audio clips (re)sampled at 16 kHz, generated by splitting the original audio track (a sketch of this process is shown below).
- `raw_bigband.wav` is the raw audio track from the YouTube video, sampled at 44.1 kHz.

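<br/>

The snippet below is a minimal sketch of how the two versions relate: resample `raw_bigband.wav` from 44.1 kHz to 16 kHz and slice it into one-minute segments, or load the pre-split clips from `youtubebigband.zip`. It is not the script actually used to build the archive; the `librosa`/`soundfile` dependencies, the output paths, the clip file names, and the assumption that the archive holds plain `.wav` files are all illustrative.

```python
# Minimal sketch (not the official preprocessing script). Assumes librosa and
# soundfile are installed and that the zip contains plain .wav clips; all
# file and directory names below are illustrative.
import zipfile
from pathlib import Path

import librosa          # pip install librosa
import soundfile as sf  # pip install soundfile

SR = 16_000        # target sampling rate (Hz)
CLIP_SECONDS = 60  # one-minute clips

# Option 1: regenerate 16 kHz one-minute clips from the raw 44.1 kHz track.
audio, _ = librosa.load("raw_bigband.wav", sr=SR, mono=True)  # resamples on load
out_dir = Path("clips")
out_dir.mkdir(exist_ok=True)
samples_per_clip = SR * CLIP_SECONDS
for i in range(len(audio) // samples_per_clip):
    clip = audio[i * samples_per_clip : (i + 1) * samples_per_clip]
    sf.write(out_dir / f"clip_{i:03d}.wav", clip, SR)

# Option 2: load the clips shipped in youtubebigband.zip.
with zipfile.ZipFile("youtubebigband.zip") as zf:
    zf.extractall("youtubebigband")
clips = [
    librosa.load(p, sr=SR, mono=True)[0]
    for p in sorted(Path("youtubebigband").rglob("*.wav"))
]
print(f"Loaded {len(clips)} clips of ~{CLIP_SECONDS} s each")
```

Slicing a roughly two-hour track into one-minute segments yields on the order of 120-130 clips, consistent with the 129 clips in the archive.
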
## Citations

```
@article{goel2022sashimi,
  title={It's Raw! Audio Generation with State-Space Models},
  author={Goel, Karan and Gu, Albert and Donahue, Chris and R\'{e}, Christopher},
  journal={arXiv preprint arXiv:2202.09729},
  year={2022}
}

@misc{deepsound,
  author={DeepSound},
  title={SampleRNN},
  year={2017},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/deepsound-project/samplernn-pytorch}}
}
```