juancopi81 committed on
Commit a7d67c8
1 Parent(s): e4b4f9f

Training in progress epoch 0

Files changed (4)
  1. README.md +12 -31
  2. config.json +1 -1
  3. tf_model.h5 +1 -1
  4. tokenizer_config.json +1 -1
README.md CHANGED
@@ -1,15 +1,9 @@
 ---
 tags:
 - generated_from_keras_callback
-- music
 model-index:
 - name: juancopi81/mutopia_guitar_mmm
   results: []
-datasets:
-- juancopi81/mutopia_guitar_dataset
-widget:
-- text: "PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START INST=0 DENSITY=2 BAR_START NOTE_ON=43"
-  example_title: "Time signature 4/4, BPM=90, NOTE=G2"
 ---
 
 <!-- This model card has been generated automatically according to the information Keras had access to. You should
@@ -17,55 +11,42 @@ probably proofread and complete it, then remove this comment. -->
 
 # juancopi81/mutopia_guitar_mmm
 
-Music generation could be approached similarly to language generation. There are many ways to represent music as text and then use a language model to create a model capable of music generation. For encoding MIDI files as text, I am using the excellent [implementation](https://github.com/AI-Guru/MMM-JSB) of Dr. Tristan Beheren of the paper: [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048).
-
-This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [Mutopia Guitar Dataset](https://huggingface.co/datasets/juancopi81/mutopia_guitar_dataset). Use the widget to generate your piece, and then use [this notebook](https://colab.research.google.com/drive/14vlJwCvDmNH6SFfVuYY0Y18qTbaHEJCY?usp=sharing) to listen to the results (work in progress).
-I created the notebook as an adaptation of [the one created by Dr. Tristan Behrens](https://huggingface.co/TristanBehrens/js-fakes-4bars).
-
+This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 0.7588
-- Validation Loss: 1.3974
+- Train Loss: 0.7294
+- Validation Loss: 1.4813
+- Epoch: 0
 
 ## Model description
 
-The model is GPT-2 loaded with the GPT2LMHeadModel architecture from Hugging Face. The context size is 256, and the vocabulary size is 588. The model uses a
-`WhitespaceSplit` pre-tokenizer. The [tokenizer](https://huggingface.co/juancopi81/mutopia_guitar_dataset_tokenizer) is also in the Hugging Face hub.
+More information needed
 
 ## Intended uses & limitations
 
-I built this model to learn more about how to use Hugging Face. I am implementing some of the parts of the [Hugging Face course](https://huggingface.co/course/chapter1/1) with a project that I find interesting.
-The main intention of this model is educational. I am creating a [series of notebooks](https://github.com/juancopi81/MMM_Mutopia_Guitar) where I show every step of the process:
-- Collecting the data
-- Pre-processing the data
-- Training a tokenizer from scratch
-- Fine-tuning a GPT-2 model
-- Building a Gradio app for the model
-
-I trained the model using the free version of Colab with a small dataset. Right now, it is heavily overfitting. My idea is to have a more extensive dataset of Guitar Music from Latinoamerica to train a new model similar to the Mutopia Guitar Model, using more GPU resources.
+More information needed
 
 ## Training and evaluation data
 
-I am training the model with [Mutopia Guitar Dataset](https://huggingface.co/datasets/juancopi81/mutopia_guitar_dataset). This dataset consists of the soloist guitar pieces of the [Mutopia Project](https://www.mutopiaproject.org/).
-The dataset mainly contains guitar music from western classical composers, such as Sor, Aguado, Carcassi, and Giuliani.
+More information needed
+
+## Training procedure
 
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9089, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 3e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 5726, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
 - training_precision: mixed_float16
 
 ### Training results
 
 | Train Loss | Validation Loss | Epoch |
 |:----------:|:---------------:|:-----:|
-| 1.0705 | 1.3590 | 0 |
-| 0.8889 | 1.3702 | 1 |
-| 0.7588 | 1.3974 | 2 |
+| 0.7294 | 1.4813 | 0 |
 
 
 ### Framework versions
 
-- Transformers 4.21.3
+- Transformers 4.22.0
 - TensorFlow 2.8.2
 - Datasets 2.4.0
 - Tokenizers 0.12.1
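Note: the model card removed in this commit documents a text-based MMM encoding and ships a widget prompt. A minimal generation sketch from that prompt, assuming the checkpoint loads with the standard `transformers` TensorFlow GPT-2 classes (the repo IDs come from the card; the sampling settings are illustrative, not taken from the commit):

```python
# Sketch: continue the widget prompt removed above.
from transformers import AutoTokenizer, TFGPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("juancopi81/mutopia_guitar_dataset_tokenizer")
model = TFGPT2LMHeadModel.from_pretrained("juancopi81/mutopia_guitar_mmm")

# Widget seed: 4/4 time, 90 BPM, one guitar track, first note G2 (NOTE_ON=43).
prompt = ("PIECE_START TIME_SIGNATURE=4_4 BPM=90 TRACK_START "
          "INST=0 DENSITY=2 BAR_START NOTE_ON=43")
inputs = tokenizer(prompt, return_tensors="tf")

# max_length=256 matches the context size named in the removed card;
# do_sample/temperature are illustrative choices.
output = model.generate(inputs["input_ids"], max_length=256,
                        do_sample=True, temperature=0.9)
print(tokenizer.decode(output[0]))
```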
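The removed model description names a `WhitespaceSplit` pre-tokenizer and a 588-token vocabulary. A sketch of how such a tokenizer could be trained with the `tokenizers` library; the word-level model, special tokens, and corpus file are assumptions, since the commit specifies only the pre-tokenizer and vocabulary size:

```python
# Sketch: a whitespace-split, word-level tokenizer like the one described
# in the removed card. WordLevel and the special tokens are assumptions.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tok = Tokenizer(models.WordLevel(unk_token="[UNK]"))
tok.pre_tokenizer = pre_tokenizers.WhitespaceSplit()  # split on whitespace only

trainer = trainers.WordLevelTrainer(
    vocab_size=588,  # vocabulary size from the removed model card
    special_tokens=["[UNK]", "<|endoftext|>"],
)
# corpus.txt is a hypothetical placeholder for the MIDI-as-text corpus.
tok.train(["corpus.txt"], trainer)
```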
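The serialized optimizer in both versions of the card (AdamWeightDecay wrapping a WarmUp schedule over a linear PolynomialDecay to zero) is consistent with what `transformers.create_optimizer` produces for TF training. A sketch using the new card's figures, on the assumption that this helper was indeed used (3e-5 peak learning rate, 1000 warmup steps, 5726 total steps, 0.01 weight decay):

```python
# Sketch: rebuild the optimizer serialized in the card's hyperparameters.
import tensorflow as tf
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,           # peak learning rate after warmup
    num_train_steps=5726,   # total decay_steps from the card
    num_warmup_steps=1000,  # linear warmup, as in the WarmUp config
    weight_decay_rate=0.01,
)

# "training_precision: mixed_float16" corresponds to the Keras global policy.
tf.keras.mixed_precision.set_global_policy("mixed_float16")
```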
config.json CHANGED
@@ -32,7 +32,7 @@
       "max_length": 350
     }
   },
-  "transformers_version": "4.21.3",
+  "transformers_version": "4.22.0",
   "use_cache": true,
   "vocab_size": 588
 }
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4a2ae568b21e34e4396cfa1fbd2eaed91b603b7385fa6f47d0ea954ee3604d30
+oid sha256:3c3b94a3d3d65b1f4aca2375deb0c2e971f7e2e17499901d6c02001302e6ad26
 size 345352296
tokenizer_config.json CHANGED
@@ -2,6 +2,6 @@
   "bos_token": "<|endoftext|>",
   "eos_token": "<|endoftext|>",
   "name_or_path": "juancopi81/mutopia_guitar_dataset_tokenizer",
-  "special_tokens_map_file": "/root/.cache/huggingface/transformers/10de8e72c2dd469b19ce869baf55faa96a0363b0e5a70e2e1899b7957c0cfcaa.2aeea123cb44d5212eff0235c69e12949b8eecab1a274afa3ca271d99aeb330d",
+  "special_tokens_map_file": "/root/.cache/huggingface/hub/models--juancopi81--mutopia_guitar_dataset_tokenizer/snapshots/5bf268ea43daff742db79857a624892893eb0185/special_tokens_map.json",
   "tokenizer_class": "PreTrainedTokenizerFast"
 }