zizzimars committed
Commit 087afb4
1 Parent(s): 6aa15e8

End of training

Files changed (1):
  1. README.md +16 -11
README.md CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5089
+- Loss: 0.5067
 
 ## Model description
 
@@ -35,26 +35,31 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 8
+- train_batch_size: 2
 - eval_batch_size: 2
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 16
+- gradient_accumulation_steps: 16
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 1000
-- training_steps: 5000
+- lr_scheduler_warmup_steps: 200
+- training_steps: 1000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.4536        | 100.0 | 1000 | 0.4595          |
-| 0.4175        | 200.0 | 2000 | 0.4916          |
-| 0.4024        | 300.0 | 3000 | 0.5040          |
-| 0.3917        | 400.0 | 4000 | 0.5080          |
-| 0.3909        | 500.0 | 5000 | 0.5089          |
+| 0.7136        | 0.03  | 100  | 0.6539          |
+| 0.6471        | 0.06  | 200  | 0.5934          |
+| 0.5851        | 0.08  | 300  | 0.5392          |
+| 0.5764        | 0.11  | 400  | 0.5275          |
+| 0.5666        | 0.14  | 500  | 0.5213          |
+| 0.5577        | 0.17  | 600  | 0.5138          |
+| 0.5605        | 0.2   | 700  | 0.5115          |
+| 0.5622        | 0.22  | 800  | 0.5088          |
+| 0.5603        | 0.25  | 900  | 0.5082          |
+| 0.558         | 0.28  | 1000 | 0.5067          |
 
 
 ### Framework versions
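The commit trades per-device batch size for more gradient accumulation (2 × 16 instead of 8 × 2), so the `total_train_batch_size` reported in the card is a derived value, not one set directly. A minimal sketch of that relationship, assuming a single device (the helper name is illustrative, not part of any library):

```python
# Hypothetical helper: compute the effective (total) train batch size that
# the optimizer step actually sees when gradients are accumulated across
# several smaller forward/backward passes.
def effective_batch_size(per_device_batch_size: int,
                         gradient_accumulation_steps: int,
                         num_devices: int = 1) -> int:
    """Each optimizer step consumes per_device * accumulation * devices samples."""
    return per_device_batch_size * gradient_accumulation_steps * num_devices

# New configuration from this commit: train_batch_size=2, accumulation=16.
print(effective_batch_size(2, 16))  # -> 32, matching total_train_batch_size
```

Note that both runs keep a comparable effective batch size (16 before, 32 after) while the smaller per-device batch lowers peak memory per forward pass.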