pedropauletti committed on
Commit 312ef50
1 Parent(s): c89d910

End of training

Files changed (1): README.md (+11 −10)
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: mit
-base_model: microsoft/speecht5_tts
+base_model: pedropauletti/speecht5_finetuned_common_voice_pt
 tags:
 - generated_from_trainer
 datasets:
@@ -15,9 +15,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # speecht5_finetuned_common_voice_pt
 
-This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_11_0 dataset.
+This model is a fine-tuned version of [pedropauletti/speecht5_finetuned_common_voice_pt](https://huggingface.co/pedropauletti/speecht5_finetuned_common_voice_pt) on the common_voice_11_0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4776
+- Loss: 0.4610
 
 ## Model description
 
@@ -37,23 +37,24 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 2
-- eval_batch_size: 1
+- train_batch_size: 4
+- eval_batch_size: 2
 - seed: 42
 - gradient_accumulation_steps: 8
-- total_train_batch_size: 16
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 3000
+- training_steps: 4000
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.5629 | 3.29 | 1000 | 0.5044 |
-| 0.5157 | 6.58 | 2000 | 0.4866 |
-| 0.5117 | 9.87 | 3000 | 0.4776 |
+| 0.4962 | 6.58 | 1000 | 0.4737 |
+| 0.4823 | 13.16 | 2000 | 0.4651 |
+| 0.4824 | 19.74 | 3000 | 0.4612 |
+| 0.4845 | 26.32 | 4000 | 0.4610 |
 
 
 ### Framework versions
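
The updated hyperparameters carry some derived values that can be sanity-checked in plain Python: `total_train_batch_size` is the per-device batch size times the accumulation steps, and the linear scheduler with warmup is fully determined by `learning_rate`, `lr_scheduler_warmup_steps`, and `training_steps`. A minimal sketch follows; the helper `linear_warmup_lr` is illustrative and not part of the actual training script.

```python
# Sanity-check the derived hyperparameters from the updated card.

train_batch_size = 4
gradient_accumulation_steps = 8

# Effective (total) train batch size = per-device batch * accumulation steps.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching total_train_batch_size in the card

def linear_warmup_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=4000):
    """Learning rate under a linear schedule with warmup (illustrative helper,
    mirroring lr_scheduler_type: linear, warmup_steps: 500, training_steps: 4000)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp linearly up to base_lr
    # then decay linearly down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(250))   # halfway through warmup: 5e-06
print(linear_warmup_lr(500))   # peak learning rate: 1e-05
print(linear_warmup_lr(4000))  # end of training: 0.0
```

This is also the reason the two batch-size edits in the hunk are linked: doubling `train_batch_size` from 2 to 4 with `gradient_accumulation_steps` fixed at 8 is what doubles `total_train_batch_size` from 16 to 32.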