---
license: mit
tags:
- audio
- music
- generation
- tensorflow
---
You can generate music with this halvany_oszi_rozsa model (fine-tuned from misc) using the notebook available here.
The gen_ema.h5 file is required to generate music. Place it in your
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of the gradient penalty regularization on the fly; the gradient penalty weighting term is stored in switch.npy. The generator is conditioned on a latent coordinate system, so it can produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder, which converts them into waveform audio. The generator has a context window of about 12 seconds of audio.
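The model card describes the coordinate conditioning only at a high level. As an illustration (not the author's actual code), a minimal NumPy sketch of pairing per-frame latent noise with a normalized coordinate channel, so the generator can be queried for any number of frames, might look like this; the function name, `z_dim`, and the linear 0-to-1 coordinate scheme are all assumptions:

```python
import numpy as np

def latent_with_coords(z_dim=64, n_frames=128, seed=0):
    # Hypothetical sketch: concatenate per-frame noise with a linear
    # coordinate channel, so sample length is set by n_frames rather
    # than being fixed by the architecture.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_frames, z_dim))
    coords = np.linspace(0.0, 1.0, n_frames)[:, None]
    return np.concatenate([z, coords], axis=1)  # shape (n_frames, z_dim + 1)
```

Doubling `n_frames` while keeping the same coordinate range is one simple way such a conditioning scheme can extend generation beyond a single context window.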
I trained this on Colab for 5 epochs (about 5 × 9000 iterations).
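The adaptive gradient-penalty weighting mentioned above is not spelled out in this card. As a hedged sketch (not the author's implementation), one common way to adapt such a weight during training is a simple multiplicative controller that strengthens the penalty when the discriminator loss suggests it is overpowering the generator, and relaxes it otherwise, persisting the current value to disk the way switch.npy is described; the target, rate, and clipping bounds here are invented for illustration:

```python
import numpy as np

def update_gp_weight(weight, d_loss, target=0.5, rate=1.02, path=None):
    # Hypothetical controller: raise the gradient-penalty weight when the
    # discriminator loss falls below a target (discriminator too strong),
    # lower it otherwise. Clipped to keep the weight in a sane range.
    weight = weight * rate if d_loss < target else weight / rate
    weight = float(np.clip(weight, 1e-3, 1e3))
    if path is not None:
        np.save(path, weight)  # persisted across runs, like switch.npy
    return weight
```

Calling this once per training step lets the penalty strength track training dynamics instead of staying fixed, which is the stability mechanism the card alludes to.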