---
tags:
- generated_from_keras_callback
- music
model-index:
- name: juancopi81/mutopia_guitar_mmm
  results: []
datasets:
- juancopi81/mutopia_guitar_dataset
widget:
- text: "PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START INST=0 DENSITY=2 BAR_START NOTE_ON=43"
  example_title: "Time signature 4/4, BPM=90, NOTE=G2"
---

# juancopi81/mutopia_guitar_mmm

Music generation can be approached much like language generation: there are many ways to represent music as text and then train a language model to generate it. For encoding MIDI files as text, I am using the excellent [implementation](https://github.com/AI-Guru/MMM-JSB) by Dr. Tristan Behrens of the paper [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048).

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [Mutopia Guitar Dataset](https://huggingface.co/datasets/juancopi81/mutopia_guitar_dataset). Use the widget to generate your piece, and then use [this notebook](https://colab.research.google.com/drive/14vlJwCvDmNH6SFfVuYY0Y18qTbaHEJCY?usp=sharing) to listen to the results (work in progress). 
I created the notebook as an adaptation of [the one created by Dr. Tristan Behrens](https://huggingface.co/TristanBehrens/js-fakes-4bars).

It achieves the following results at the final epoch:
- Train Loss: 0.5837
- Validation Loss: 1.5346

## Model description

The model is GPT-2 loaded with the `GPT2LMHeadModel` architecture from Hugging Face, with a context size of 256 and a vocabulary size of 588. It uses a `WhitespaceSplit` pre-tokenizer; the [tokenizer](https://huggingface.co/juancopi81/mutopia_guitar_dataset_tokenizer) is also on the Hugging Face Hub.
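
As a quick illustration, here is a minimal generation sketch. It is untested and assumes the checkpoint loads with the standard TensorFlow auto classes; the sampling parameters are arbitrary:

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

checkpoint = "juancopi81/mutopia_guitar_mmm"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForCausalLM.from_pretrained(checkpoint)

# Prompt with MMM-style event tokens (same format as the widget example).
prompt = ("PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START "
          "INST=0 DENSITY=2 BAR_START NOTE_ON=43")
inputs = tokenizer(prompt, return_tensors="tf")

# Sample up to the model's 256-token context size.
output = model.generate(
    inputs["input_ids"],
    max_length=256,
    do_sample=True,
    temperature=0.9,
)
print(tokenizer.decode(output[0]))
```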

## Intended uses & limitations

I built this model to learn more about how to use Hugging Face: I am applying parts of the [Hugging Face course](https://huggingface.co/course/chapter1/1) to a project that I find interesting, so the main intention of this model is educational.
I am creating a [series of notebooks](https://github.com/juancopi81/MMM_Mutopia_Guitar) where I show every step of the process:
- Collecting the data
- Pre-processing the data
- Training a tokenizer from scratch (sketched below)
- Fine-tuning a GPT-2 model
- Building a Gradio app for the model
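
As referenced in the list, here is a rough sketch of the tokenizer-training step. The word-level setup is an assumption consistent with the `WhitespaceSplit` pre-tokenizer described above, and the corpus path is a placeholder:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Word-level vocabulary over whitespace-separated event tokens.
tokenizer = Tokenizer(models.WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.WhitespaceSplit()

trainer = trainers.WordLevelTrainer(special_tokens=["[UNK]", "[PAD]"])
tokenizer.train(files=["mutopia_guitar_corpus.txt"], trainer=trainer)  # placeholder path
tokenizer.save("tokenizer.json")
```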

I trained the model using the free version of Colab with a small dataset. Right now, it is heavily overfitting. My plan is to build a more extensive dataset of guitar music from Latin America and use it, with more GPU resources, to train a new model similar to the Mutopia Guitar Model.

## Training and evaluation data

I am training the model with the [Mutopia Guitar Dataset](https://huggingface.co/datasets/juancopi81/mutopia_guitar_dataset), which consists of the solo guitar pieces of the [Mutopia Project](https://www.mutopiaproject.org/).
The dataset mainly contains guitar music by Western classical composers such as Sor, Aguado, Carcassi, and Giuliani.
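
For reference, the dataset can be pulled straight from the Hub (a minimal sketch; the split names and features are whatever the dataset defines):

```python
from datasets import load_dataset

ds = load_dataset("juancopi81/mutopia_guitar_dataset")
print(ds)  # inspect the available splits and features
```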

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: `AdamWeightDecay` with beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False, and weight_decay_rate=0.01
- learning rate: `WarmUp` to 5e-05 over 1,000 steps, then `PolynomialDecay` (power=1.0, i.e. linear; cycle=False) down to 0.0 over 9,089 decay steps
- training_precision: mixed_float16
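
This configuration matches what `transformers.create_optimizer` produces for TensorFlow (AdamWeightDecay with linear warmup followed by polynomial decay), so an equivalent setup could be rebuilt roughly as follows. This is a sketch, not the exact training script; note that `create_optimizer` uses `num_train_steps - num_warmup_steps` as the decay span, so 10,089 total steps reproduce the 9,089 decay steps in the config above:

```python
from transformers import create_optimizer

# AdamWeightDecay with a 1,000-step linear warmup, then linear decay
# to 0.0 over the remaining 9,089 steps.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=10_089,   # warmup_steps + decay_steps
    num_warmup_steps=1_000,
    weight_decay_rate=0.01,
)
```
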
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0705     | 1.3590          | 0     |
| 0.8889     | 1.3702          | 1     |
| 0.7588     | 1.3974          | 2     |
| 0.7294     | 1.4813          | 3     |
| 0.6263     | 1.5263          | 5     |
| 0.5841     | 1.5263          | 6     |
| 0.5844     | 1.5263          | 7     |
| 0.5837     | 1.5346          | 8     |

### Framework versions
- Transformers 4.21.3
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1