TomRB22 committed on
Commit 755eea9
1 Parent(s): fd15548

Adding images to README file

Files changed (1)
  1. README.md +10 -6
README.md CHANGED
@@ -11,7 +11,7 @@ tags:
 
 # Pivaenist
 
-Pivaenist is a two-minute, random piano music generator with a VAE architecture.
+Pivaenist is a random piano music generator with a VAE architecture.
 
 By the use of the aforementioned autoencoder, it allows the user to encode piano music pieces and to generate new ones.
 
@@ -19,9 +19,10 @@ By the use of the aforementioned autoencoder, it allows the user to encode piano
 
 ### Model Description
 
-<!-- Going to include a graph of the VAE, with a description below. -->
-
-
+<figure>
+<img src="https://huggingface.co/TomRB22/pivaenist/resolve/main/.images/architecture.png" style="width:100%">
+<figcaption align = "center"><b>Pivaenist's architecture.</b></figcaption>
+</figure>
 
 - **Developed by:** TomRB22
 - **Model type:** Variational autoencoder
@@ -101,10 +102,13 @@ The first one will clone the repository. Then, fluidsynth, a real-time MIDI synt
 
 ## Training Details
 
-[TODO: SONG MAP IMAGE]
-
 Pivaenist was trained on the [MAESTRO v2.0.0 dataset](https://magenta.tensorflow.org/datasets/maestro), which contains 1282 MIDI files [check it in colab]. Preprocessing splits each note into pitch, duration and step, which together form one column of a 3xN matrix (which we call a song map), where N is the number of notes and the three rows hold, in order, the pitches, durations and steps. The VAE's objective is to reconstruct these matrices, which then makes it possible to generate random maps by sampling from the latent distribution and converting them to MIDI files.
 
+<figure>
+<img src="https://huggingface.co/TomRB22/pivaenist/resolve/main/.images/map_example.png" style="width:50%">
+<figcaption align = "center"><b>A cropped example of a song map.</b></figcaption>
+</figure>
+
 ### Training Data
 
 <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
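To make the song-map representation added in this commit's Training Details section more concrete, here is a minimal sketch of how a MIDI file could be turned into such a 3xN matrix. It assumes `pretty_midi` for parsing, and the helper name `midi_to_song_map` is hypothetical; the repository's actual preprocessing code may differ.

```python
# Illustrative only: one way to build the 3xN "song map" described above
# (rows = pitch, duration, step). The use of pretty_midi and the helper
# name midi_to_song_map are assumptions, not the repository's own code.
import numpy as np
import pretty_midi

def midi_to_song_map(path: str) -> np.ndarray:
    pm = pretty_midi.PrettyMIDI(path)
    # Sort by onset time so "step" (time since the previous note) is well defined.
    notes = sorted(pm.instruments[0].notes, key=lambda note: note.start)

    pitches, durations, steps = [], [], []
    prev_start = notes[0].start if notes else 0.0
    for note in notes:
        pitches.append(note.pitch)                # MIDI pitch number
        durations.append(note.end - note.start)   # how long the note is held
        steps.append(note.start - prev_start)     # gap since the previous onset
        prev_start = note.start

    # Each note becomes one column of a (3, N) matrix.
    return np.stack([pitches, durations, steps]).astype(np.float32)
```

A decoded sample from the model would have the same shape, which is what allows a generated map to be written back out as a MIDI file.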
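The model card lists the model type as a variational autoencoder, and this commit adds a figure of its architecture. For orientation only, the sketch below shows the generic shape of a VAE that encodes song maps and samples new ones; the class name `SongMapVAE`, the layer sizes, and the constants `LATENT_DIM` and `MAP_SIZE` are assumptions for illustration and do not describe pivaenist's actual layers (those are shown in the architecture figure).

```python
# Illustrative only: a minimal VAE skeleton in TensorFlow/Keras, assuming
# 3xN song maps are flattened to fixed-length vectors. All names and sizes
# here are hypothetical, not pivaenist's real architecture.
import tensorflow as tf

LATENT_DIM = 128          # size of the latent space (assumed)
MAP_SIZE = 3 * 512        # flattened 3xN song map with N = 512 notes (assumed)

class SongMapVAE(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Encoder maps a song map to the parameters of a Gaussian posterior.
        self.encoder = tf.keras.Sequential([
            tf.keras.layers.Dense(512, activation="relu"),
            tf.keras.layers.Dense(2 * LATENT_DIM),  # mean and log-variance
        ])
        # Decoder reconstructs a song map from a latent vector.
        self.decoder = tf.keras.Sequential([
            tf.keras.layers.Dense(512, activation="relu"),
            tf.keras.layers.Dense(MAP_SIZE),
        ])

    def encode(self, x):
        mean, logvar = tf.split(self.encoder(x), num_or_size_splits=2, axis=1)
        return mean, logvar

    def reparameterize(self, mean, logvar):
        # Sample z = mean + sigma * epsilon so gradients can flow through.
        eps = tf.random.normal(shape=tf.shape(mean))
        return mean + tf.exp(0.5 * logvar) * eps

    def decode(self, z):
        return self.decoder(z)

    def generate(self, n=1):
        # New song maps come from decoding latent vectors drawn from the prior.
        z = tf.random.normal(shape=(n, LATENT_DIM))
        return self.decode(z)
```

Generation in this scheme is simply `decode(z)` for `z` drawn from the prior, which mirrors the README's description of sampling random song maps and converting them to MIDI.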