The Jukebox model was proposed in Jukebox: A Generative Model for Music by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever.

The paper introduces a generative music model that can produce minute-long samples conditioned on artist, genre, and lyrics.

The abstract from the paper is the following:

We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.
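The key idea in the abstract, compressing raw audio into discrete codes with a multiscale VQ-VAE before modeling them autoregressively, rests on vector quantization: each continuous encoder output frame is replaced by the index of its nearest codebook vector. The sketch below illustrates only that quantization step with NumPy; the codebook size, feature dimension, and frame count are illustrative, not the paper's actual values.

```python
import numpy as np

# Toy vector-quantization step, as used inside a VQ-VAE:
# continuous encoder frames -> discrete codebook indices.
# Sizes here are illustrative, not Jukebox's real hyperparameters.

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))   # 16 code vectors of dimension 4
features = rng.normal(size=(10, 4))   # 10 encoder output frames

# Nearest-neighbour lookup: squared L2 distance to every codebook entry.
dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
codes = dists.argmin(axis=1)          # one discrete code per frame

# "Decoding" the codes maps each index back to its codebook vector;
# these discrete code sequences are what the Transformers then model.
quantized = codebook[codes]

print(codes.shape, quantized.shape)   # (10,) (10, 4)
```

In the full model this quantization is applied at several temporal resolutions (the "multiscale" part), and autoregressive Transformers are trained over the resulting code sequences rather than over raw waveform samples.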


The model is currently very slow: generating one minute of audio takes about 18 hours.

This model was contributed by Arthur Zucker. The original code can be found here.
