---
tags:
  - music-generation
  - transformer
  - pytorch
  - audio
  - music
license: mit
---

# Compose & Embellish

Trained model weights and training datasets for the paper: *Compose & Embellish: Well-Structured Piano Performance Generation via A Two-Stage Approach* (Wu & Yang, ICASSP 2023).

## Model characteristics

### Stage 1: "Compose" model

Generates the melody and chord progression from scratch.

### Stage 2: "Embellish" model

Generates the accompaniment, timing, and dynamics conditioned on Stage 1 outputs, yielding a full piano performance. The two stages are chained at inference time, as sketched below.
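
The snippet below is only a minimal sketch of that two-stage data flow: Stage 1 samples a melody-and-chord token sequence, which Stage 2 takes as conditioning to produce the performance sequence. All function names are illustrative placeholders, and random draws stand in for the actual autoregressive Transformer sampling; this is not the project's real inference API.

```python
# Hypothetical sketch of the Compose & Embellish two-stage pipeline.
# Names and sampling logic are placeholders, not the actual codebase.
import torch


def compose_stage(vocab_size: int = 300, length: int = 64) -> torch.Tensor:
    """Stage 1 ("Compose"): sample a melody-and-chord token sequence from scratch."""
    return torch.randint(0, vocab_size, (length,))


def embellish_stage(lead_tokens: torch.Tensor, vocab_size: int = 400) -> torch.Tensor:
    """Stage 2 ("Embellish"): conditioned on the Stage 1 output, produce a
    performance sequence that adds accompaniment, timing, and dynamics."""
    performance = torch.randint(0, vocab_size, (4 * lead_tokens.numel(),))
    return torch.cat([lead_tokens, performance])


lead = compose_stage()
performance = embellish_stage(lead)
print(performance.shape)  # in practice, the token sequence would be decoded to MIDI
```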

## BibTeX

If you find the materials useful, please consider citing our work:

```bibtex
@inproceedings{wu2023compembellish,
  title={{Compose \& Embellish}: Well-Structured Piano Performance Generation via A Two-Stage Approach},
  author={Wu, Shih-Lun and Yang, Yi-Hsuan},
  booktitle={Proc. Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2023},
  url={https://arxiv.org/pdf/2209.08212.pdf}
}
```