---
license: mit
tags:
- audio
- music
- generation
- tensorflow
---
# Musika Model: musika-grateful-dead-barton-hall
## Model provided by: benwakefield
Pretrained musika-grateful-dead-barton-hall model for Musika, a system for fast, infinite waveform music generation introduced in this paper.
## How to use
You can generate music from this pretrained musika-grateful-dead-barton-hall model using the notebook available here.
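If you prefer to work outside the notebook, the checkpoint files can also be fetched locally with `huggingface_hub` and then passed to the Musika generation code. This is a minimal sketch: the repository id and the idea of pointing the notebook at the downloaded folder are assumptions based on this model card, not official Musika instructions.

```python
# Sketch: download the pretrained checkpoint files locally.
# The repo id "benwakefield/musika-grateful-dead-barton-hall" is assumed
# from this model card; adjust it if the model is hosted elsewhere.
from huggingface_hub import snapshot_download

checkpoint_dir = snapshot_download(
    repo_id="benwakefield/musika-grateful-dead-barton-hall"
)
print(f"Checkpoint files downloaded to: {checkpoint_dir}")

# Point the Musika generation notebook/script at `checkpoint_dir`
# to generate audio with this model.
```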
## Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in switch.npy. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio. The generator has a context window of about 12 seconds of audio.
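To make that flow concrete, here is a toy sketch of the generation pipeline: a generator turns noise plus latent coordinates into a sequence of latent vectors for one window, a decoder converts those latents to waveform samples, and consecutive windows are concatenated for arbitrary-length output. All dimensions and the stand-in Keras models below are illustrative assumptions, not the real Musika networks.

```python
# Toy sketch of the generation flow described above, with stand-in networks.
# Every size and layer here is an illustrative assumption.
import tensorflow as tf

LATENT_DIM = 64        # assumed size of the generator's noise input
COORD_DIM = 2          # assumed size of the latent-coordinate conditioning
LATENT_LEN = 256       # assumed number of latent vectors per ~12 s window
LATENT_CH = 32         # assumed channels per latent vector
HOP_SAMPLES = 2048     # assumed waveform samples decoded per latent vector

# Stand-in generator: maps (noise, latent coordinates) to a latent sequence.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(LATENT_LEN * LATENT_CH),
    tf.keras.layers.Reshape((LATENT_LEN, LATENT_CH)),
])

# Stand-in decoder: upsamples each latent vector to waveform samples.
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(HOP_SAMPLES, activation="tanh"),
    tf.keras.layers.Reshape((LATENT_LEN * HOP_SAMPLES,)),
])

def generate(num_windows: int) -> tf.Tensor:
    """Chain several ~12 s windows into one arbitrarily long waveform."""
    windows = []
    for i in range(num_windows):
        noise = tf.random.normal([1, LATENT_DIM])
        # Latent coordinates condition each window on its position in time,
        # which is what allows consecutive windows to be stitched together.
        coords = tf.fill([1, COORD_DIM], float(i))
        latents = generator(tf.concat([noise, coords], axis=-1))
        windows.append(decoder(latents))
    return tf.concat(windows, axis=-1)  # shape: (1, total_samples)

audio = generate(num_windows=3)
print(audio.shape)
```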