Shabdobhedi
committed on
End of training
README.md
CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
+- Loss: 0.4481

 ## Model description

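Although the diff itself carries no usage snippet, a minimal inference sketch for a SpeechT5 fine-tune like this one could look as follows. The repo id and the random speaker embedding are placeholders (the actual model id is not shown in this commit); real use would pass an x-vector for the target voice.

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo_id = "Shabdobhedi/speecht5_tts_finetuned"  # placeholder, not the confirmed repo id

processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello from the fine-tuned model.", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim speaker embedding (an x-vector).
# A random vector is used here only as a stand-in.
speaker_embeddings = torch.randn(1, 512)

with torch.no_grad():
    speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)

sf.write("output.wav", speech.numpy(), samplerate=16000)  # SpeechT5 generates 16 kHz audio
```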
@@ -44,38 +44,28 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
-- training_steps:
+- training_steps: 1000
 - mixed_precision_training: Native AMP
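The bullets above map onto a `Seq2SeqTrainingArguments` configuration roughly like the sketch below. Values not listed in this hunk (output directory, learning rate, batch size) are assumptions and are marked as such; the evaluation interval of 100 steps is read off the results table that follows.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned",   # assumption: not stated in this hunk
    learning_rate=1e-5,                # assumption: not stated in this hunk
    per_device_train_batch_size=4,     # assumption: not stated in this hunk
    lr_scheduler_type="linear",        # lr_scheduler_type: linear
    warmup_steps=100,                  # lr_scheduler_warmup_steps: 100
    max_steps=1000,                    # training_steps: 1000
    adam_beta1=0.9,                    # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,                # and epsilon=1e-08
    fp16=True,                         # mixed_precision_training: Native AMP
    eval_strategy="steps",
    eval_steps=100,                    # evaluation logged every 100 steps in the table below
)
```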

 ### Training results

 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.4786 | 3.9391 | 1100 | 0.4479 |
-| 0.4668 | 4.2972 | 1200 | 0.4504 |
-| 0.4714 | 4.6553 | 1300 | 0.4468 |
-| 0.4717 | 5.0134 | 1400 | 0.4463 |
-| 0.4662 | 5.3715 | 1500 | 0.4498 |
-| 0.4637 | 5.7296 | 1600 | 0.4461 |
-| 0.455 | 6.0877 | 1700 | 0.4474 |
-| 0.4551 | 6.4458 | 1800 | 0.4451 |
-| 0.462 | 6.8039 | 1900 | 0.4420 |
-| 0.4511 | 7.1620 | 2000 | 0.4435 |
+| 0.5889 | 0.3581 | 100 | 0.5117 |
+| 0.5419 | 0.7162 | 200 | 0.4928 |
+| 0.5203 | 1.0743 | 300 | 0.4762 |
+| 0.5079 | 1.4324 | 400 | 0.4699 |
+| 0.5022 | 1.7905 | 500 | 0.4580 |
+| 0.4952 | 2.1486 | 600 | 0.4601 |
+| 0.4886 | 2.5067 | 700 | 0.4579 |
+| 0.4853 | 2.8648 | 800 | 0.4520 |
+| 0.4762 | 3.2229 | 900 | 0.4477 |
+| 0.4774 | 3.5810 | 1000 | 0.4481 |


 ### Framework versions

 - Transformers 4.44.2
 - Pytorch 2.4.1+cu121
-- Datasets 3.0.
+- Datasets 3.0.2
 - Tokenizers 0.19.1
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:928ab5983d0ea71aecc7bf0f6882dc0a3548f584b56ac5236776fbd60d406f0e
 size 577789320
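The updated pointer records the SHA-256 and byte size of the new weights blob. After downloading, the local file can be checked against those values with a short script like the one below (the local path is an assumption; adjust it to wherever the file was saved).

```python
import hashlib
from pathlib import Path

path = Path("model.safetensors")  # assumed local path to the downloaded weights

expected_sha256 = "928ab5983d0ea71aecc7bf0f6882dc0a3548f584b56ac5236776fbd60d406f0e"
expected_size = 577789320  # bytes, from the pointer above

digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert path.stat().st_size == expected_size, "size mismatch"
assert digest.hexdigest() == expected_sha256, "sha256 mismatch"
print("model.safetensors matches the LFS pointer")
```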
runs/Oct23_14-53-05_27d36d51f32a/events.out.tfevents.1729695229.27d36d51f32a.4298.0
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:04b86f118d8ac2effce1197a7fc67485e98b61422317044e5f6a3e7c484893e8
+size 18153
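This tfevents blob contains the scalar logs behind the training-results table. With TensorBoard installed, the values can be read back from a local clone using its EventAccumulator; the `eval/loss` tag name is the one the Hugging Face Trainer usually emits, so treat it as an assumption.

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

run_dir = "runs/Oct23_14-53-05_27d36d51f32a"  # run directory from this commit, in a local clone

acc = EventAccumulator(run_dir)
acc.Reload()  # parse the events.out.tfevents.* file(s) in the directory

print(acc.Tags()["scalars"])  # list the available scalar tags

# "eval/loss" is assumed to be the validation-loss tag written by the Trainer.
for event in acc.Scalars("eval/loss"):
    print(event.step, event.value)
```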