DarkDude31 committed · Commit 1129140 · 1 Parent: a4b5f51

Update README.md

README.md
# Musika Model: halvany_oszi_rozsa

## Model provided by: DarkDude31

Fine-tuned (from misc) halvany_oszi_rozsa model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).

## How to use

You can generate music from this fine-tuned (from misc) halvany_oszi_rozsa model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r).

Only the `gen_ema.h5` file is needed to generate music. Place it in your `checkpoints` folder.
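As a minimal sketch of that step, the following copies the checkpoint into place. The `musika/checkpoints` path is an assumption; adjust it to wherever your local clone of the Musika repo keeps its `checkpoints` folder.

```python
from pathlib import Path
import shutil

# Hypothetical layout: change "musika/checkpoints" to match your local
# Musika clone. Only the gen_ema.h5 generator weights need to be copied.
src = Path("gen_ema.h5")               # the downloaded generator weights
dst_dir = Path("musika/checkpoints")
dst_dir.mkdir(parents=True, exist_ok=True)

if src.exists():
    shutil.copy(src, dst_dir / src.name)  # drop the weights into place
```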

### Model description

This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on the fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder, which converts them into waveform audio.

The generator has a context window of about 12 seconds of audio.
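The on-the-fly gradient penalty adaptation can be illustrated with a minimal sketch. This is not Musika's actual training loop; the update rule, threshold, and step values below are assumptions for illustration. It only shows the idea of a single scalar penalty weight that is adjusted during training and persisted to a file such as *switch.npy*.

```python
import numpy as np

# Illustrative sketch of an adaptive gradient penalty weight. The rule
# (raise the weight when the discriminator loss spikes, relax it
# otherwise) and all constants are assumptions, not Musika's schedule.
def update_gp_weight(gp_weight, d_loss, threshold=1.0, step=1.05):
    if abs(d_loss) > threshold:
        return gp_weight * step   # training looks unstable: regularize harder
    return gp_weight / step       # training looks calm: relax the penalty

w = 10.0
for d_loss in [0.4, 1.6, 2.1, 0.8]:   # fake per-iteration discriminator losses
    w = update_gp_weight(w, d_loss)

np.save("switch.npy", np.array(w))    # persist the scalar weighting term
```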

### Training description

I trained this model on Colab for 5 epochs (about 5 × 9000 = 45,000 iterations).