Update README.md
README.md (changed)
The training method used is outlined in a blog post by Juancopi81 [here](https://huggingface.co/blog/juancopi81/using-hugging-face-to-train-a-gpt-2-model-for-musi#showcasing-the-model-in-a-%F0%9F%A4%97-space).
While I didn't follow that post exactly, it was of great help when learning how to do this.
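The general shape of that approach is to treat the music tokens as plain text and fine-tune a GPT-2-style causal language model with the Trainer API. Below is a minimal, hedged sketch of that idea; the data file, base checkpoint, and hyperparameters are placeholders, not this project's actual training script.

```python
# Hedged sketch of causal-LM fine-tuning on tokenized music.
# "tokenized_music.txt" and all hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# One training example per line of music tokens.
dataset = load_dataset("text", data_files={"train": "tokenized_music.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="music-gpt2", num_train_epochs=3),
    train_dataset=tokenized["train"],
    # mlm=False gives standard next-token (causal) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```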

The final component to talk about is [Magenta's note_seq library](https://github.com/magenta/note-seq). This is how token sequences are converted into note sequences and played.
This library is much more powerful than my current use of it, and I plan on expanding this project in the future to incorporate more of its features.
The main method call for this can be found in the app.py file on the Hugging Face Space, but here is a snippet of the code for NOTE_ON:
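(The snippet below is an illustrative reconstruction rather than the verbatim app.py code: the exact token strings `NOTE_ON=<pitch>`, `TIME_DELTA=<steps>`, `NOTE_OFF=<pitch>` and the fixed step length are assumptions.)

```python
import note_seq

# Assumed step length in seconds; the real value depends on tempo/resolution.
SECONDS_PER_STEP = 0.25

def tokens_to_note_sequence(tokens):
    """Sketch: turn NOTE_ON/NOTE_OFF/TIME_DELTA tokens into a NoteSequence."""
    sequence = note_seq.NoteSequence()
    current_time = 0.0
    active = {}  # pitch -> start time of the note currently sounding

    for token in tokens:
        if token.startswith("NOTE_ON="):
            # A note starts sounding at the current time.
            active[int(token.split("=")[1])] = current_time
        elif token.startswith("NOTE_OFF="):
            # Close the matching NOTE_ON and emit a finished note.
            pitch = int(token.split("=")[1])
            start = active.pop(pitch, None)
            if start is not None:
                sequence.notes.add(
                    pitch=pitch,
                    start_time=start,
                    end_time=current_time,
                    velocity=80,  # arbitrary fixed velocity
                )
        elif token.startswith("TIME_DELTA="):
            # Advance the clock without emitting anything.
            current_time += int(token.split("=")[1]) * SECONDS_PER_STEP

    sequence.total_time = current_time
    return sequence

# Example: middle C for one beat, then E.
seq = tokens_to_note_sequence(
    ["NOTE_ON=60", "TIME_DELTA=4", "NOTE_OFF=60",
     "NOTE_ON=64", "TIME_DELTA=4", "NOTE_OFF=64"]
)
note_seq.note_sequence_to_midi_file(seq, "generation.mid")
```

From there, note_seq can write the sequence to MIDI (as above) or plot it with note_seq.plot_sequence for playback in the Space.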
## Experiments
## Limitations
The data this system is trained on does not make use of the "style" or "genre" labels. While they are included in the training examples, they are all filled with null data.
This means the system cannot create generations tailored to a particular style/genre of music. The system also only plays basic synth tones,
meaning that we can only hear a simple "chorale" style of music, with little variation. I'd love to explore this further and expand the system to play various instruments,
making the generations seem more natural.
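note_seq already has room for this: each note in a NoteSequence carries a General MIDI program number, so an expanded version could render tracks with different instrument sounds. A hedged sketch (all values here are arbitrary examples, not code from this project):

```python
import note_seq

# Sketch: assigning a General MIDI program so a note renders as a
# violin instead of a default synth tone.
sequence = note_seq.NoteSequence()
note = sequence.notes.add(pitch=60, start_time=0.0, end_time=0.5, velocity=80)
note.program = 40  # General MIDI program 41 ("Violin"); 0-indexed here
sequence.total_time = 0.5

# The MIDI writer picks up the per-note program assignment.
note_seq.note_sequence_to_midi_file(sequence, "violin_note.mid")
```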
There are also limited prompting options: a user cannot (easily) provide a melody or starting notes for the generation to be based on.
My idea is to create an interactive "piano"-style interface for users, so they can naturally enter some music as a basis for the generation.
Generations are also relatively similar to one another, and I believe this is due solely to the limited amount of data trained on.