binisha committed on
Commit 3129146 · verified · 1 Parent(s): 0473029

End of training

Files changed (2):
  1. README.md +22 -22
  2. generation_config.json +1 -1
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6198
+- Loss: 0.4253
 
 ## Model description
 
@@ -41,7 +41,7 @@ The following hyperparameters were used during training:
 - seed: 42
 - gradient_accumulation_steps: 8
 - total_train_batch_size: 32
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
 - training_steps: 1500
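
As a reading aid (not part of the commit): the hyperparameter list above maps directly onto `Seq2SeqTrainingArguments` in 🤗 Transformers. A minimal sketch follows, assuming the standard API; `output_dir`, `learning_rate`, and the per-device batch size of 4 (inferred from the listed total of 32 with 8 accumulation steps) sit outside this hunk and are placeholders.

```python
# Sketch only: reconstructs the listed hyperparameters as Seq2SeqTrainingArguments.
# learning_rate and output_dir are NOT shown in this diff; values here are placeholders.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned",    # placeholder name
    learning_rate=1e-5,                 # placeholder; real value not in this hunk
    per_device_train_batch_size=4,      # inferred: 4 x 8 accumulation steps = 32 total
    gradient_accumulation_steps=8,
    seed=42,
    optim="adamw_torch",                # betas (0.9, 0.999) and epsilon 1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1500,
    eval_strategy="steps",              # the results table below logs every 100 steps
    eval_steps=100,
)
```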
@@ -49,28 +49,28 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:---------:|:----:|:---------------:|
-| 0.5825 | 88.8889 | 100 | 0.5775 |
-| 0.4984 | 177.7778 | 200 | 0.6064 |
-| 0.4252 | 266.6667 | 300 | 0.5800 |
-| 0.3997 | 355.5556 | 400 | 0.5745 |
-| 0.366 | 444.4444 | 500 | 0.5863 |
-| 0.3521 | 533.3333 | 600 | 0.5969 |
-| 0.3308 | 622.2222 | 700 | 0.5716 |
-| 0.32 | 711.1111 | 800 | 0.5757 |
-| 0.3088 | 800.0 | 900 | 0.6095 |
-| 0.3006 | 888.8889 | 1000 | 0.6352 |
-| 0.2911 | 977.7778 | 1100 | 0.6207 |
-| 0.2849 | 1066.6667 | 1200 | 0.6181 |
-| 0.2869 | 1155.5556 | 1300 | 0.6321 |
-| 0.2853 | 1244.4444 | 1400 | 0.6271 |
-| 0.285 | 1333.3333 | 1500 | 0.6198 |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:--------:|:----:|:---------------:|
+| 0.648 | 7.9901 | 100 | 0.5922 |
+| 0.5721 | 15.9901 | 200 | 0.5445 |
+| 0.5337 | 23.9901 | 300 | 0.5103 |
+| 0.5057 | 31.9901 | 400 | 0.5052 |
+| 0.4894 | 39.9901 | 500 | 0.4869 |
+| 0.4765 | 47.9901 | 600 | 0.4804 |
+| 0.4577 | 55.9901 | 700 | 0.4770 |
+| 0.4462 | 63.9901 | 800 | 0.4561 |
+| 0.4275 | 71.9901 | 900 | 0.4445 |
+| 0.4143 | 79.9901 | 1000 | 0.4388 |
+| 0.4044 | 87.9901 | 1100 | 0.4363 |
+| 0.3929 | 95.9901 | 1200 | 0.4299 |
+| 0.3922 | 103.9901 | 1300 | 0.4276 |
+| 0.3915 | 111.9901 | 1400 | 0.4262 |
+| 0.3877 | 119.9901 | 1500 | 0.4253 |
 
 
 ### Framework versions
 
-- Transformers 4.44.2
-- Pytorch 2.5.0+cu121
+- Transformers 4.46.2
+- Pytorch 2.5.1+cu121
 - Datasets 3.1.0
-- Tokenizers 0.19.1
+- Tokenizers 0.20.3
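
For completeness, a hedged usage sketch of the resulting checkpoint, following the standard SpeechT5 text-to-speech recipe from the Transformers docs. The repo id `binisha/speecht5_finetuned` is a placeholder for wherever this commit lives, and the CMU ARCTIC x-vectors stand in for whatever speaker embedding the model was actually trained with.

```python
# Sketch only: standard SpeechT5 inference, not this repo's verified usage.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("binisha/speecht5_finetuned")    # placeholder repo id
model = SpeechT5ForTextToSpeech.from_pretrained("binisha/speecht5_finetuned")  # placeholder repo id
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello from the fine-tuned model.", return_tensors="pt")

# SpeechT5 needs a 512-dim speaker x-vector per utterance; this one comes from
# the CMU ARCTIC x-vector dataset used in the official SpeechT5 examples.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```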
 
generation_config.json CHANGED
@@ -5,5 +5,5 @@
   "eos_token_id": 2,
   "max_length": 1876,
   "pad_token_id": 1,
-  "transformers_version": "4.44.2"
+  "transformers_version": "4.46.2"
 }
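
The only change here is the `transformers_version` stamp, which Transformers rewrites whenever the config is saved; the behavioral fields are untouched. A small sketch of how these defaults are consumed at load time (repo id again a placeholder):

```python
# Sketch: generation_config.json is loaded automatically alongside the model,
# and its fields become the default generation parameters.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("binisha/speecht5_finetuned")  # placeholder repo id
print(gen_config.max_length)    # 1876
print(gen_config.eos_token_id)  # 2
print(gen_config.pad_token_id)  # 1
```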