nickdee96 committed
Commit ba9cce3 · 1 Parent(s): 1b98609

update model card README.md

Files changed (1)
  1. README.md +6 -20
README.md CHANGED
@@ -1,6 +1,4 @@
  ---
- license: mit
- base_model: microsoft/speecht5_tts
  tags:
  - generated_from_trainer
  datasets:
@@ -15,9 +13,7 @@ should probably proofread and complete it, then remove this comment. -->

  # speecht5_finetuned_voxpopuli_sw

- This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_11_0 dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.5298
+ This model was trained from scratch on the common_voice_11_0 dataset.

  ## Model description

@@ -37,25 +33,15 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 1e-05
- - train_batch_size: 4
- - eval_batch_size: 2
+ - train_batch_size: 8
+ - eval_batch_size: 4
  - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 32
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 128
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
- - training_steps: 4000
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 0.6196 | 1.65 | 1000 | 0.5635 |
- | 0.5928 | 3.29 | 2000 | 0.5452 |
- | 0.5803 | 4.94 | 3000 | 0.5339 |
- | 0.574 | 6.59 | 4000 | 0.5298 |
+ - training_steps: 19000

  ### Framework versions
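
For reference, the updated hyperparameters in this commit correspond roughly to the following Hugging Face Trainer configuration. This is a minimal sketch based only on the values listed in the card: the card's `generated_from_trainer` tag confirms the Trainer API was used, but the use of the `Seq2Seq` argument class, the `output_dir` name, and the single-GPU assumption behind the 8 × 16 = 128 effective batch size are assumptions, not taken from the training code.

```python
# Minimal sketch (not the author's training script) of the updated
# hyperparameters expressed as Hugging Face training arguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_voxpopuli_sw",  # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,  # 8 * 16 = 128 effective batch size, assuming one device
    max_steps=19000,                 # training_steps in the card
    warmup_steps=500,
    lr_scheduler_type="linear",
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer setting.
)
```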