---
tags:
  - generated_from_keras_callback
  - music
model-index:
  - name: juancopi81/mutopia_guitar_mmm
    results: []
datasets:
  - juancopi81/mutopia_guitar_dataset
widget:
  - text: >-
      PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START INST=0 DENSITY=2
      BAR_START NOTE_ON=43
    example_title: Time signature 4/4, BPM=90, NOTE=G2
---

# juancopi81/mutopia_guitar_mmm

Music generation can be approached much like language generation: there are many ways to represent music as text and then train a language model on that representation. To encode MIDI files as text, I use Dr. Tristan Behrens's excellent implementation of the paper MMM: Exploring Conditional Multi-Track Music Generation with the Transformer.

This model is a fine-tuned version of gpt2 on the Mutopia Guitar Dataset. Use the widget to generate your piece, and then use this notebook to listen to the results (work in progress). I created the notebook as an adaptation of the one created by Dr. Tristan Behrens.
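
A minimal generation sketch, assuming the Hub checkpoint ships TensorFlow weights and that the tokenizer is stored in the same repo (the sampling settings are illustrative, not the values used in the notebook):

```python
from transformers import AutoTokenizer, TFGPT2LMHeadModel

model_id = "juancopi81/mutopia_guitar_mmm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFGPT2LMHeadModel.from_pretrained(model_id)

# Same prompt format as the widget: MMM tokens separated by spaces.
prompt = (
    "PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START INST=0 DENSITY=2 "
    "BAR_START NOTE_ON=43"
)
inputs = tokenizer(prompt, return_tensors="tf")

# Sample up to the model's 256-token context window.
outputs = model.generate(
    **inputs,
    max_length=256,
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0]))
```

The output is a string of MMM tokens, not audio; the notebook mentioned above converts it to MIDI so you can listen to it.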

It achieves the following results on the evaluation set:

- Train Loss: 0.5837
- Validation Loss: 1.5346

## Model description

The model is GPT-2, loaded with the GPT2LMHeadModel architecture from Hugging Face. The context size is 256 and the vocabulary size is 588. The tokenizer uses a WhitespaceSplit pre-tokenizer and is also available on the Hugging Face Hub.
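
A quick way to sanity-check these numbers against the Hub, assuming a fast tokenizer is available in the same repo:

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "juancopi81/mutopia_guitar_mmm"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.n_positions)  # context size, expected: 256
print(config.vocab_size)   # vocabulary size, expected: 588
print(tokenizer.backend_tokenizer.pre_tokenizer)  # expected: a WhitespaceSplit pre-tokenizer
```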

## Intended uses & limitations

I built this model to learn more about how to use Hugging Face: I am applying parts of the Hugging Face course to a project that I find interesting. The main intention of this model is educational. I am creating a series of notebooks where I show every step of the process:

- Collecting the data
- Pre-processing the data
- Training a tokenizer from scratch (see the sketch after this list)
- Fine-tuning a GPT-2 model
- Building a Gradio app for the model

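For instance, the tokenizer-from-scratch step can be sketched with the `tokenizers` library. This is only an illustration under the assumption of a word-level vocabulary over whitespace-separated MMM tokens; the toy corpus and special-token names below are made up, not the project's actual code:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Toy corpus of MMM-style token strings (illustrative only).
corpus = [
    "PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START INST=0 DENSITY=2 "
    "BAR_START NOTE_ON=43 TIME_DELTA=4 NOTE_OFF=43 BAR_END TRACK_END PIECE_END",
]

# Word-level model: every whitespace-separated MMM token becomes one vocabulary entry.
tokenizer = Tokenizer(models.WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.WhitespaceSplit()

trainer = trainers.WordLevelTrainer(special_tokens=["[UNK]", "[PAD]"])
tokenizer.train_from_iterator(corpus, trainer=trainer)
tokenizer.save("tokenizer.json")
```
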
I trained the model using the free version of Colab with a small dataset, and right now it is heavily overfitting. My idea is to build a more extensive dataset of guitar music from Latin America and use more GPU resources to train a new model similar to the Mutopia Guitar Model.

## Training and evaluation data

I am training the model with the Mutopia Guitar Dataset, which consists of the solo guitar pieces of the Mutopia Project. The dataset mainly contains guitar music by Western classical composers such as Sor, Aguado, Carcassi, and Giuliani.
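
The dataset can be pulled directly from the Hub; the split name and column layout below are assumptions, so check the dataset card for the exact fields:

```python
from datasets import load_dataset

ds = load_dataset("juancopi81/mutopia_guitar_dataset", split="train")
print(ds)     # dataset features and number of pieces
print(ds[0])  # one guitar piece encoded as a string of MMM tokens
```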

## Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9089, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'passive_serialization': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
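
The same schedule can be reproduced with the TensorFlow helper in Transformers. This is a sketch in which `num_train_steps` is inferred as warmup steps plus decay steps from the dump above:

```python
import tensorflow as tf
from transformers import create_optimizer

# mixed_float16 training precision, as listed above.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Linear (polynomial, power=1.0) decay from 5e-5 to 0 over 9,089 steps after a
# 1,000-step warmup, with AdamWeightDecay and weight_decay_rate=0.01.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_warmup_steps=1_000,
    num_train_steps=1_000 + 9_089,  # assumption: warmup + decay steps
    weight_decay_rate=0.01,
)
```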

## Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0705     | 1.3590          | 0     |
| 0.8889     | 1.3702          | 1     |
| 0.7588     | 1.3974          | 2     |
| 0.7294     | 1.4813          | 3     |
| 0.6263     | 1.5263          | 5     |
| 0.5841     | 1.5263          | 6     |
| 0.5844     | 1.5263          | 7     |
| 0.5837     | 1.5346          | 8     |

## Framework versions

- Transformers 4.21.3
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1