jaymanvirk committed on
Commit 10f111c
1 Parent(s): 678c418

End of training

README.md CHANGED
@@ -1,10 +1,12 @@
 ---
+language:
+- lt
 license: mit
-base_model: jaymanvirk/speecht5_tts_finetuned_voxpopuli_lt
+base_model: microsoft/speecht5_tts
 tags:
 - generated_from_trainer
 datasets:
-- voxpopuli
+- facebook/voxpopuli
 model-index:
 - name: speecht5_tts_finetuned_voxpopuli_lt
   results: []
@@ -15,9 +17,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # speecht5_tts_finetuned_voxpopuli_lt
 
-This model is a fine-tuned version of [jaymanvirk/speecht5_tts_finetuned_voxpopuli_lt](https://huggingface.co/jaymanvirk/speecht5_tts_finetuned_voxpopuli_lt) on the voxpopuli dataset.
+This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4718
+- Loss: 0.4692
 
 ## Model description
 
@@ -40,8 +42,8 @@ The following hyperparameters were used during training:
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 16
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 50
@@ -52,10 +54,10 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.5879        | 3.06  | 100  | 0.5127          |
-| 0.5239        | 7.04  | 200  | 0.4850          |
-| 0.4996        | 11.04 | 300  | 0.4766          |
-| 0.4897        | 15.03 | 400  | 0.4718          |
+| 0.6225        | 7.02  | 100  | 0.5038          |
+| 0.5198        | 15.01 | 200  | 0.4784          |
+| 0.4946        | 23.0  | 300  | 0.4827          |
+| 0.4796        | 30.02 | 400  | 0.4692          |
 
 
 ### Framework versions
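The hyperparameter change above doubles gradient accumulation from 2 to 4 steps, which is what moves the total train batch size from 16 to 32 while the per-device batch size stays at 8. A minimal sketch of that relationship (the helper function below is hypothetical, not part of the training script):

```python
# Effective batch size per optimizer step: per-device batch size times
# gradient-accumulation steps (times device count, assumed 1 here).
def total_train_batch_size(per_device: int, accum_steps: int, num_devices: int = 1) -> int:
    return per_device * accum_steps * num_devices

old = total_train_batch_size(8, 2)  # before this commit
new = total_train_batch_size(8, 4)  # after this commit
print(old, new)  # 16 32
```

This is why the two config lines change together in the diff: `total_train_batch_size` is derived, not set independently.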
runs/Mar25_08-36-00_e34600ff0ef9/events.out.tfevents.1711355895.e34600ff0ef9.34.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1d47bb4f2c050899b20cd3f966897eda4a748f258a99e9327ca10542d8650a64
-size 8167
+oid sha256:25a7658f792abaea999d688b3b24b3f5dc2cb3c4f2ea944056ec7b19e195956b
+size 8521