divakaivan committed
Commit 96c2db5
1 Parent(s): 031f601

End of training

Files changed (2)
  1. README.md +26 -8
  2. model.safetensors +1 -1
README.md CHANGED
@@ -21,6 +21,20 @@ should probably proofread and complete it, then remove this comment. -->
  # GlaswegianTTS v0.1.0
 
  This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the glaswegian_tts_v0.1.0 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5090
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
 
  ## Training procedure
 
@@ -36,22 +50,26 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 1000
- - training_steps: 4000
+ - training_steps: 8000
  - mixed_precision_training: Native AMP
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 0.4348 | 125.0 | 1000 | 0.4666 |
- | 0.3783 | 250.0 | 2000 | 0.4955 |
- | 0.3715 | 375.0 | 3000 | 0.5152 |
- | 0.3564 | 500.0 | 4000 | 0.5275 |
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:--------:|:----:|:---------------:|
+ | 0.4421 | 52.6316 | 1000 | 0.4186 |
+ | 0.3878 | 105.2632 | 2000 | 0.4447 |
+ | 0.3775 | 157.8947 | 3000 | 0.4671 |
+ | 0.3639 | 210.5263 | 4000 | 0.4907 |
+ | 0.354 | 263.1579 | 5000 | 0.4884 |
+ | 0.356 | 315.7895 | 6000 | 0.4997 |
+ | 0.3451 | 368.4211 | 7000 | 0.5021 |
+ | 0.3514 | 421.0526 | 8000 | 0.5090 |
 
 
  ### Framework versions
 
  - Transformers 4.42.0.dev0
  - Pytorch 2.3.0+cu121
- - Datasets 2.19.1
+ - Datasets 2.19.2
  - Tokenizers 0.19.1
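The updated card lists the schedule and precision settings (linear scheduler, 1000 warmup steps, 8000 training steps, Native AMP, default Adam betas/epsilon). A rough sketch of how these map onto `Seq2SeqTrainingArguments` is shown below; the output directory, batch size and learning rate do not appear in this diff and are placeholder assumptions only.

```python
# Sketch only: reconstructs the listed hyperparameters with Seq2SeqTrainingArguments.
# output_dir, learning_rate and batch size are NOT in this diff -- placeholders.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="glaswegian_tts_v0.1.0",  # hypothetical output directory
    max_steps=8000,                      # training_steps: 8000
    warmup_steps=1000,                   # lr_scheduler_warmup_steps: 1000
    lr_scheduler_type="linear",          # lr_scheduler_type: linear
    fp16=True,                           # mixed_precision_training: Native AMP
    per_device_train_batch_size=4,       # assumption: not listed in this hunk
    learning_rate=1e-5,                  # assumption: not listed in this hunk
    evaluation_strategy="steps",
    eval_steps=1000,                     # evaluation every 1000 steps, matching the results table
    save_steps=1000,
)
# Adam with betas=(0.9,0.999) and epsilon=1e-08 corresponds to the Trainer's default optimizer settings.
```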
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d0d798aa491c0e1146e3d38405df5c0bd6b79226b4ec2131bdf1c446b0396a3d
+ oid sha256:f83923dfa1d272be8e473cc3441b016afcff883ea88e2827cbfd590adb82eaf9
  size 577789320
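With the new weights in place, a minimal inference sketch for the fine-tuned SpeechT5 checkpoint might look like the following; the repo id is a placeholder for this repository, and the zero vector stands in for a real 512-dimensional x-vector speaker embedding, which SpeechT5 requires.

```python
# Minimal sketch: load the fine-tuned SpeechT5 checkpoint and synthesise speech.
# The repo id is a placeholder; replace the zero speaker embedding with a real x-vector.
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo_id = "divakaivan/GlaswegianTTS-v0.1.0"  # placeholder repo id
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello fae Glasgow!", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # stand-in for a real x-vector embedding

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("glaswegian_tts_sample.wav", speech.numpy(), samplerate=16000)
```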