arxiv:2312.08723

StemGen: A music generation model that listens

Published on Dec 14, 2023
· Featured in Daily Papers on Dec 15, 2023

Abstract

End-to-end generation of musical audio using deep learning techniques has seen an explosion of activity recently. However, most models concentrate on generating fully mixed music in response to abstract conditioning information. In this work, we present an alternative paradigm for producing music generation models that can listen and respond to musical context. We describe how such a model can be constructed using a non-autoregressive, transformer-based model architecture and present a number of novel architectural and sampling improvements. We train the described architecture on both an open-source and a proprietary dataset. We evaluate the produced models using standard quality metrics and a new approach based on music information retrieval descriptors. The resulting model reaches the audio quality of state-of-the-art text-conditioned models, as well as exhibiting strong musical coherence with its context.
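
As a rough illustration of the paradigm described above (not the authors' implementation, whose details are not given here), the sketch below shows how a non-autoregressive transformer can be trained to predict masked tokens of a target stem while attending to tokens of the musical context. All module names, hyperparameters, and the masked-token objective are assumptions for illustration; in a real system the token ids would come from a pretrained neural audio codec rather than random integers.

```python
# Minimal sketch (assumptions throughout) of a context-conditioned,
# non-autoregressive stem generator: a transformer fills in masked
# target-stem tokens while "listening" to context-mix tokens.
import torch
import torch.nn as nn

VOCAB = 1024          # assumed codec codebook size
MASK_ID = VOCAB       # extra id used as the [MASK] token
SEQ_LEN = 256         # tokens per audio segment (assumed)
D_MODEL = 512

class ContextualStemModel(nn.Module):
    """Predicts masked target-stem tokens conditioned on context tokens."""
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB + 1, D_MODEL)   # +1 for [MASK]
        self.pos_emb = nn.Embedding(2 * SEQ_LEN, D_MODEL)
        self.seg_emb = nn.Embedding(2, D_MODEL)           # 0 = context, 1 = target
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, context_tokens, target_tokens):
        # Concatenate context and (partially masked) target along time so the
        # model can attend across both with full, non-causal attention.
        tokens = torch.cat([context_tokens, target_tokens], dim=1)
        pos = torch.arange(tokens.shape[1], device=tokens.device).unsqueeze(0)
        seg = torch.cat([torch.zeros_like(context_tokens),
                         torch.ones_like(target_tokens)], dim=1)
        x = self.tok_emb(tokens) + self.pos_emb(pos) + self.seg_emb(seg)
        h = self.backbone(x)
        # Return logits only for the target-stem positions.
        return self.head(h[:, context_tokens.shape[1]:])

model = ContextualStemModel()

# Toy batch: in practice these ids would come from an audio tokenizer.
context = torch.randint(0, VOCAB, (2, SEQ_LEN))
target = torch.randint(0, VOCAB, (2, SEQ_LEN))

# Masked-token training step: hide a random subset of target tokens and
# ask the model to reconstruct them given the musical context.
mask = torch.rand(target.shape) < 0.5
logits = model(context, target.masked_fill(mask, MASK_ID))
loss = nn.functional.cross_entropy(logits[mask], target[mask])
loss.backward()
```

At inference time, a model of this kind would typically start from fully masked target positions and fill them in over several parallel decoding passes, re-masking low-confidence predictions between passes.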

Community

There are more demos available here:
https://julian-parker.github.io/stemgen/

Very cool! Are there plans to make some of this work open source?

No current plans, but I'd absolutely love to open-source the model code at least.

This would be greatly beneficial to (indie) game developers, not to mention many other domains. Looks good!

  1. Do you plan on releasing any pre-trained models?
  2. There is a sufficient use case for repurposing the model code to improvise on MIDI, in addition to audio signal data.

Models citing this paper 0

No model linking this paper

Datasets citing this paper 0

No dataset linking this paper

Spaces citing this paper 0

No Space linking this paper

Collections including this paper 19