lusciniaweldmou committed
Commit d7b7df5 · verified · 1 parent: 68f8c6d

Model save

Files changed (1)
  1. README.md +9 -8
README.md CHANGED
@@ -3,8 +3,6 @@ library_name: peft
 license: cc-by-nc-4.0
 base_model: facebook/musicgen-melody
 tags:
-- text-to-audio
-- ylacombe/tiny-punk
 - generated_from_trainer
 model-index:
 - name: musicgen-melody-lora-punk
@@ -16,10 +14,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # musicgen-melody-lora-punk
 
-This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on the YLACOMBE/TINY-PUNK - DEFAULT dataset.
+This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 5.4128
-- Clap: -0.0280
+- Loss: 6.4997
+- Clap: 0.1150
 
 ## Model description
 
@@ -39,11 +37,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 2
+- train_batch_size: 1
 - eval_batch_size: 1
 - seed: 456
-- gradient_accumulation_steps: 8
-- total_train_batch_size: 16
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 4
 - optimizer: Use adamw_torch with betas=(0.9,0.99) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 10.0
@@ -51,6 +49,9 @@ The following hyperparameters were used during training:
 
 ### Training results
 
+| Training Loss | Epoch  | Step | Validation Loss | Clap   |
+|:-------------:|:------:|:----:|:---------------:|:------:|
+| 6.9552        | 5.6061 | 50   | 6.4997          | 0.1150 |
 
 
 ### Framework versions
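
As a sanity check on the updated hyperparameters: the effective batch size is train_batch_size × gradient_accumulation_steps (× number of devices), i.e. 1 × 4 = 4, which matches the card's total_train_batch_size. A minimal sketch, assuming a single-device run (the device count is not stated in the card):

```python
# Effective batch size implied by the updated hyperparameters in this commit.
train_batch_size = 1             # per-device batch size (from the diff)
gradient_accumulation_steps = 4  # from the diff
num_devices = 1                  # assumption: single-GPU run, not stated in the card

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 4, matching total_train_batch_size in the card
```

This is why the diff changes total_train_batch_size from 16 to 4 in lockstep with the batch-size and accumulation changes: it is a derived value, not an independent setting.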