dammyogt committed
Commit 7cbf169
1 Parent(s): 77099ec

End of training

Files changed (1)
  1. README.md +19 -10
README.md CHANGED
@@ -1,24 +1,23 @@
  ---
- language:
- - ha
  license: mit
  base_model: microsoft/speecht5_tts
  tags:
- - hausa
  - generated_from_trainer
  datasets:
- - mozilla-foundation/common_voice_8_0
+ - common_voice_8_0
  model-index:
- - name: SpeechT5 TTS Hausa
+ - name: common_voice_8_0_ha
    results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # SpeechT5 TTS Hausa
+ # common_voice_8_0_ha

- This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
+ This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_8_0 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4741

  ## Model description

@@ -38,16 +37,26 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 1e-05
- - train_batch_size: 16
- - eval_batch_size: 8
+ - train_batch_size: 4
+ - eval_batch_size: 2
  - seed: 42
- - gradient_accumulation_steps: 2
+ - gradient_accumulation_steps: 8
  - total_train_batch_size: 32
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
  - training_steps: 4000

+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 0.5416        | 18.31 | 1000 | 0.4974          |
+ | 0.505         | 36.61 | 2000 | 0.4760          |
+ | 0.4898        | 54.92 | 3000 | 0.4758          |
+ | 0.5004        | 73.23 | 4000 | 0.4741          |
+
+
  ### Framework versions

  - Transformers 4.33.0.dev0
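For reference, below is a minimal sketch of how the updated hyperparameters might be expressed as `transformers` `Seq2SeqTrainingArguments`. The `output_dir` name is an assumption; the numeric values come from the card, and the total_train_batch_size of 32 follows from train_batch_size 4 × gradient_accumulation_steps 8.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: values mirror the updated card; output_dir is a hypothetical path.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_common_voice_8_0_ha",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    gradient_accumulation_steps=8,   # 4 * 8 = total_train_batch_size 32
    warmup_steps=500,                # lr_scheduler_warmup_steps: 500
    max_steps=4000,                  # training_steps: 4000
    lr_scheduler_type="linear",
    seed=42,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default
# optimizer configuration, so no extra arguments are needed for it.
```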
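Because the body of the card is still the auto-generated template, a hedged inference sketch may also be useful. It assumes the standard SpeechT5 text-to-speech API in `transformers`, a hypothetical repo id `dammyogt/common_voice_8_0_ha`, and a placeholder speaker embedding (a real 512-dim x-vector should be supplied in practice).

```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo_id = "dammyogt/common_voice_8_0_ha"  # hypothetical id for this checkpoint

processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Sannu, ina kwana?", return_tensors="pt")  # Hausa sample text

# Placeholder 512-dim speaker embedding; replace with a real x-vector.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 1-D waveform tensor at 16 kHz.
```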