---
tags:
- generated_from_keras_callback
- music
model-index:
- name: juancopi81/mutopia_guitar_mmm
  results: []
datasets:
- juancopi81/mutopia_guitar_dataset
widget:
- text: "PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START INST=0 DENSITY=2 BAR_START NOTE_ON=43"
  example_title: "Time signature 4/4, BPM=90, NOTE=G2"
---

# juancopi81/mutopia_guitar_mmm

Music generation can be approached much like language generation: there are many ways to represent music as text, and a language model trained on that text learns to generate music. To encode MIDI files as text, I use the excellent [implementation](https://github.com/AI-Guru/MMM-JSB) by Dr. Tristan Behrens of the paper [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048).

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) trained on the [Mutopia Guitar Dataset](https://huggingface.co/datasets/juancopi81/mutopia_guitar_dataset). Use the widget to generate a piece, and then use [this notebook](https://colab.research.google.com/drive/14vlJwCvDmNH6SFfVuYY0Y18qTbaHEJCY?usp=sharing) to listen to the results (work in progress). I created the notebook as an adaptation of [the one created by Dr. Tristan Behrens](https://huggingface.co/TristanBehrens/js-fakes-4bars).

The model achieves the following results on the evaluation set:
- Train Loss: 0.5365
- Validation Loss: 1.5482
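If you would rather generate locally than through the hosted widget, here is a minimal sketch (my own example, not the original inference code). It assumes the tokenizer ships with the model repository, which the working widget suggests:

```python
from transformers import AutoTokenizer, TFGPT2LMHeadModel

model_id = "juancopi81/mutopia_guitar_mmm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFGPT2LMHeadModel.from_pretrained(model_id)

# Prime the model with the same kind of prompt the widget uses.
prompt = "PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START INST=0 DENSITY=2 BAR_START NOTE_ON=43"
inputs = tokenizer(prompt, return_tensors="tf")

# Sample up to the model's full 256-token context window.
output = model.generate(
    inputs["input_ids"],
    max_length=256,
    do_sample=True,   # sampling gives more varied music than greedy decoding
    temperature=0.9,  # an arbitrary starting point worth tuning by ear
)
print(tokenizer.decode(output[0]))
```

The decoded string is an MMM-style event sequence, which the Colab notebook linked above converts back to MIDI.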

## Model description

The model is GPT-2 loaded with the `GPT2LMHeadModel` architecture from Hugging Face. The context size is 256, and the vocabulary size is 588. The model uses a `WhitespaceSplit` pre-tokenizer, so each whitespace-separated event is a single token. The [tokenizer](https://huggingface.co/juancopi81/mutopia_guitar_dataset_tokenizer) is also available on the Hugging Face Hub.
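A quick sketch for checking those numbers yourself, assuming the published config and tokenizer repos load with the standard `Auto*` classes:

```python
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("juancopi81/mutopia_guitar_mmm")
print(config.n_positions)  # context size: expected 256
print(config.vocab_size)   # vocabulary size: expected 588

# With a WhitespaceSplit pre-tokenizer, every event is one "word".
tokenizer = AutoTokenizer.from_pretrained("juancopi81/mutopia_guitar_dataset_tokenizer")
print(tokenizer.tokenize("PIECE_START TIME_SIGNATURE=4_4 BPM=90"))
# Expected: ['PIECE_START', 'TIME_SIGNATURE=4_4', 'BPM=90']
```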

## Intended uses & limitations

I built this model to learn more about how to use Hugging Face. I am implementing some parts of the [Hugging Face course](https://huggingface.co/course/chapter1/1) in a project that I find interesting. The main intention of this model is educational: I am creating a [series of notebooks](https://github.com/juancopi81/MMM_Mutopia_Guitar) that walks through every step of the process:
- Collecting the data
- Pre-processing the data
- Training a tokenizer from scratch
- Fine-tuning a GPT-2 model
- Building a Gradio app for the model (see the sketch below)
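As a preview of that last step, here is a minimal sketch of what such a Gradio app could look like (a hypothetical illustration I wrote for this card, not the actual app):

```python
import gradio as gr
from transformers import pipeline

# The pipeline wraps tokenization, generation, and decoding in one call.
generator = pipeline("text-generation", model="juancopi81/mutopia_guitar_mmm")

def generate_piece(prompt: str) -> str:
    return generator(prompt, max_length=256, do_sample=True)[0]["generated_text"]

demo = gr.Interface(
    fn=generate_piece,
    inputs=gr.Textbox(label="MMM prompt", value="PIECE_START"),
    outputs=gr.Textbox(label="Generated token sequence"),
)
demo.launch()
```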

I trained the model using the free version of Colab with a small dataset, and right now it is heavily overfitting. My plan is to build a more extensive dataset of guitar music from Latin America and use it, with more GPU resources, to train a new model similar to this one.

## Training and evaluation data

I am training the model with the [Mutopia Guitar Dataset](https://huggingface.co/datasets/juancopi81/mutopia_guitar_dataset), which consists of the solo guitar pieces of the [Mutopia Project](https://www.mutopiaproject.org/). The dataset mainly contains guitar music by Western classical composers such as Sor, Aguado, Carcassi, and Giuliani.

For the first epochs of training, I augmented the data by transposing each piece up and down through the twelve semitones of an octave. For the later rounds, I trained the model without transposition so that the generated output more closely resembles a real guitar piece.
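In code, the augmentation amounts to shifting every pitch-bearing token. Here is a minimal sketch, assuming the MMM-style format shown in the widget, where `NOTE_ON`/`NOTE_OFF` carry MIDI pitch numbers:

```python
def transpose(piece: str, semitones: int) -> str:
    """Shift every NOTE_ON/NOTE_OFF pitch in an encoded piece by `semitones`."""
    shifted = []
    for token in piece.split():
        if token.startswith(("NOTE_ON=", "NOTE_OFF=")):
            name, pitch = token.split("=")
            shifted.append(f"{name}={int(pitch) + semitones}")
        else:
            shifted.append(token)
    return " ".join(shifted)

# Each piece yields twelve additional training examples, one per semitone.
piece = "PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START INST=0 DENSITY=2 BAR_START NOTE_ON=43"
augmented = [transpose(piece, s) for s in range(-6, 7) if s != 0]
```

(The exact range of shifts is my assumption; the card only says the pitches were raised and lowered across the twelve semitones of an octave.)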

### Training hyperparameters

The following hyperparameters were used during training (with transposition):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-07, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-07, 'decay_steps': 5726, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}

All five rounds of training without transposition (the third, fourth, and fifth rounds with the new tokenizer) used the same configuration, differing from the one above only in 'decay_steps':
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-07, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-07, 'decay_steps': 350, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
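The serialized optimizer above corresponds to what `transformers.create_optimizer` builds for TensorFlow: `AdamWeightDecay` with a linear `WarmUp` into a `PolynomialDecay`. A sketch reconstructing the no-transposition settings from the config (in `create_optimizer`, `decay_steps` equals `num_train_steps - num_warmup_steps`, so 350 decay steps after 1000 warmup steps imply 1350 total steps); this is my reconstruction, not the original training script:

```python
import tensorflow as tf
from transformers import create_optimizer

# Matches "training_precision: mixed_float16" above.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

optimizer, lr_schedule = create_optimizer(
    init_lr=5e-7,            # 'initial_learning_rate' in the config above
    num_train_steps=1350,    # warmup (1000) + polynomial decay (350)
    num_warmup_steps=1000,
    weight_decay_rate=0.01,  # 'weight_decay_rate' in the config above
)
```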

### Training results

Using transposition:

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0705     | 1.3590          | 0     |
| 0.8889     | 1.3702          | 1     |
| 0.7588     | 1.3974          | 2     |
| 0.7294     | 1.4813          | 3     |
| 0.6263     | 1.5263          | 4     |
| 0.5841     | 1.5263          | 5     |
| 0.5844     | 1.5263          | 6     |
| 0.5837     | 1.5346          | 7     |
| 0.5798     | 1.5411          | 8     |
| 0.5773     | 1.5440          | 9     |

Without transposition (first round):

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5503     | 1.5436          | 0     |
| 0.5503     | 1.5425          | 1     |
| 0.5476     | 1.5425          | 2     |
| 0.5467     | 1.5425          | 3     |
| 0.5447     | 1.5431          | 4     |
| 0.5418     | 1.5447          | 5     |
| 0.5418     | 1.5451          | 6     |
| 0.5401     | 1.5472          | 7     |
| 0.5386     | 1.5479          | 8     |
| 0.5365     | 1.5482          | 9     |

Without transposition (second round):

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5368     | 1.5482          | 0     |
| 0.5355     | 1.5480          | 1     |
| 0.5326     | 1.5488          | 2     |
| 0.5363     | 1.5493          | 3     |
| 0.5346     | 1.5488          | 4     |
| 0.5329     | 1.5502          | 5     |
| 0.5329     | 1.5514          | 6     |
| 0.5308     | 1.5514          | 7     |
| 0.5292     | 1.5536          | 8     |
| 0.5272     | 1.5543          | 9     |

Without transposition (third round - new tokenizer):

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.1361     | 6.4569          | 0     |
| 5.6383     | 5.8249          | 1     |
| 4.9125     | 4.8956          | 2     |
| 4.2013     | 4.2778          | 3     |
| 3.8665     | 4.0330          | 4     |
| 3.7106     | 3.8956          | 5     |
| 3.6041     | 3.7995          | 6     |
| 3.5301     | 3.7485          | 7     |
| 3.4973     | 3.7323          | 8     |
| 3.4909     | 3.7323          | 9     |

Without transposition (fourth round - new tokenizer):

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4879     | 3.7206          | 0     |
| 3.4667     | 3.6874          | 1     |
| 3.4229     | 3.6373          | 2     |
| 3.3680     | 3.5751          | 3     |
| 3.2998     | 3.5026          | 4     |
| 3.2208     | 3.4240          | 5     |
| 3.1385     | 3.3397          | 6     |
| 3.0580     | 3.2587          | 7     |
| 2.9949     | 3.2118          | 8     |
| 2.9646     | 3.1958          | 9     |

Without transposition (fifth round - new tokenizer):

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.9562     | 3.1902          | 0     |
| 2.9457     | 3.1751          | 1     |
| 2.9266     | 3.1512          | 2     |
| 2.9039     | 3.1176          | 3     |
| 2.8705     | 3.0775          | 4     |
| 2.8291     | 3.0295          | 5     |
| 2.7872     | 2.9811          | 6     |
| 2.7394     | 2.9321          | 7     |
| 2.6996     | 2.9023          | 8     |
| 2.6819     | 2.8927          | 9     |

### Framework versions

- Transformers 4.22.1
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1