juancopi81 committed on
Commit
e4b4f9f
1 Parent(s): 94a54b4

Add more information to the README file

Files changed (1)
  1. README.md +17 -8
README.md CHANGED
@@ -17,27 +17,36 @@ probably proofread and complete it, then remove this comment. -->
 
  # juancopi81/mutopia_guitar_mmm
 
- This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [Mutopia Guitar Dataset](https://huggingface.co/datasets/juancopi81/mutopia_guitar_dataset). Use the widget to generate your piece and then use [this notebook](https://colab.research.google.com/drive/14vlJwCvDmNH6SFfVuYY0Y18qTbaHEJCY?usp=sharing) to hear it (work in progress).
- The notebook is adapted from [the one created by Dr. Tristan Behrens](https://huggingface.co/TristanBehrens/js-fakes-4bars).
 
  It achieves the following results on the evaluation set:
  - Train Loss: 0.7588
  - Validation Loss: 1.3974
- - Epoch: 2
 
  ## Model description
 
- More information needed
 
  ## Intended uses & limitations
 
- More information needed
 
- ## Training and evaluation data
 
- More information needed
 
- ## Training procedure
 
  ### Training hyperparameters
 
 
  # juancopi81/mutopia_guitar_mmm
 
+ Music generation can be approached similarly to language generation: there are many ways to represent music as text, and a language model can then be trained on that text to generate music. For encoding MIDI files as text, I am using Dr. Tristan Behrens's excellent [implementation](https://github.com/AI-Guru/MMM-JSB) of the paper [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048).
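+
+ As a rough illustration (the token names below are taken from the MMM-JSB repository; the exact vocabulary this model uses may differ), an encoded piece is a plain-text sequence of events along these lines:
+
+ ```
+ PIECE_START TRACK_START INST=0 BAR_START NOTE_ON=64 TIME_DELTA=4 NOTE_OFF=64 NOTE_ON=60 TIME_DELTA=4 NOTE_OFF=60 BAR_END TRACK_END
+ ```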
+
+ This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [Mutopia Guitar Dataset](https://huggingface.co/datasets/juancopi81/mutopia_guitar_dataset). Use the widget to generate your piece, and then use [this notebook](https://colab.research.google.com/drive/14vlJwCvDmNH6SFfVuYY0Y18qTbaHEJCY?usp=sharing) to listen to the results (work in progress).
+ I created the notebook as an adaptation of [the one created by Dr. Tristan Behrens](https://huggingface.co/TristanBehrens/js-fakes-4bars).
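+
+ Below is a minimal sketch (not part of the original card) of generating a token sequence with the `transformers` library; the `PIECE_START` prompt is only an assumption, so check the tokenizer's vocabulary for the actual start token:
+
+ ```python
+ from transformers import pipeline
+
+ # Load the fine-tuned model from the Hugging Face Hub.
+ generator = pipeline("text-generation", model="juancopi81/mutopia_guitar_mmm")
+
+ # "PIECE_START" is assumed to be a valid MMM-style prompt token for this model.
+ result = generator("PIECE_START", max_length=256, do_sample=True, temperature=0.9)
+ print(result[0]["generated_text"])
+ ```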
 
  It achieves the following results on the evaluation set:
  - Train Loss: 0.7588
  - Validation Loss: 1.3974
 
  ## Model description
 
+ The model is GPT-2 loaded with the GPT2LMHeadModel architecture from Hugging Face. The context size is 256, and the vocabulary size is 588. The model uses a
+ `WhitespaceSplit` pre-tokenizer. The [tokenizer](https://huggingface.co/juancopi81/mutopia_guitar_dataset_tokenizer) is also available on the Hugging Face Hub.
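+
+ A small sketch (assumed usage, not from the original card) of loading the tokenizer and model from the Hub and checking the sizes mentioned above:
+
+ ```python
+ from transformers import AutoTokenizer, GPT2LMHeadModel
+
+ # Repositories linked in this card; loading the tokenizer repo with AutoTokenizer is an assumption.
+ tokenizer = AutoTokenizer.from_pretrained("juancopi81/mutopia_guitar_dataset_tokenizer")
+ model = GPT2LMHeadModel.from_pretrained("juancopi81/mutopia_guitar_mmm")
+
+ print(model.config.n_positions)  # context size (256 according to this card)
+ print(model.config.vocab_size)   # vocabulary size (588 according to this card)
+ ```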
 
  ## Intended uses & limitations
 
+ I built this model to learn more about how to use Hugging Face. I am implementing some of the parts of the [Hugging Face course](https://huggingface.co/course/chapter1/1) with a project that I find interesting.
+ The main intention of this model is educational. I am creating a [series of notebooks](https://github.com/juancopi81/MMM_Mutopia_Guitar) where I show every step of the process:
+ - Collecting the data
+ - Pre-processing the data
+ - Training a tokenizer from scratch
+ - Fine-tuning a GPT-2 model
+ - Building a Gradio app for the model
 
+ I trained the model using the free version of Colab with a small dataset, and right now it is heavily overfitting. My idea is to gather a more extensive dataset of guitar music from Latin America and use it, together with more GPU resources, to train a new model similar to the Mutopia Guitar Model.
 
+ ## Training and evaluation data
 
+ I am training the model with the [Mutopia Guitar Dataset](https://huggingface.co/datasets/juancopi81/mutopia_guitar_dataset), which consists of the solo guitar pieces of the [Mutopia Project](https://www.mutopiaproject.org/).
+ The dataset mainly contains guitar music from Western classical composers, such as Sor, Aguado, Carcassi, and Giuliani.
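+
+ A quick sketch (assumed usage) of loading the dataset from the Hub with the `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the Mutopia Guitar Dataset and inspect the splits and columns it exposes.
+ dataset = load_dataset("juancopi81/mutopia_guitar_dataset")
+ print(dataset)
+ ```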
 
  ### Training hyperparameters