Dawid511 committed on
Commit d527ded · verified · 1 Parent(s): ce9c7de

End of training

README.md ADDED
@@ -0,0 +1,99 @@
+ ---
+ library_name: transformers
+ base_model: dawid511/speecht5_finetuned_librispeech_polish_epo6_batch8_gas4
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: speecht5_finetuned_librispeech_polish_epo10_batch2_gas2
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # speecht5_finetuned_librispeech_polish_epo10_batch2_gas2
+
+ This model is a fine-tuned version of [dawid511/speecht5_finetuned_librispeech_polish_epo6_batch8_gas4](https://huggingface.co/dawid511/speecht5_finetuned_librispeech_polish_epo6_batch8_gas4) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3637
+
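+ Since the card does not yet include a usage example, here is a minimal inference sketch. It assumes the standard SpeechT5 text-to-speech API from `transformers` and that the processor files were uploaded with this checkpoint; the speaker embedding is a placeholder and should be replaced with a real 512-dimensional x-vector.
+
+ ```python
+ import torch
+ import soundfile as sf
+ from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
+
+ repo_id = "dawid511/speecht5_finetuned_librispeech_polish_epo10_batch2_gas2"
+ processor = SpeechT5Processor.from_pretrained(repo_id)
+ model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
+ vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
+
+ inputs = processor(text="Dzień dobry, jak się masz?", return_tensors="pt")
+
+ # SpeechT5 conditions generation on a 512-dim speaker embedding (x-vector).
+ # A zero vector is used here only as a placeholder.
+ speaker_embeddings = torch.zeros((1, 512))
+
+ speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
+ sf.write("output.wav", speech.numpy(), samplerate=16000)
+ ```
+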
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a `transformers` sketch of these settings follows the list):
+ - learning_rate: 0.0001
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 4
+ - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 100
+ - num_epochs: 10
+ - mixed_precision_training: Native AMP
+
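+ For reference, the list above maps onto `transformers` training arguments roughly as follows. This is a sketch, not the exact training script: the `output_dir` and anything not listed above are assumptions.
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="speecht5_finetuned_librispeech_polish_epo10_batch2_gas2",  # assumed
+     learning_rate=1e-4,
+     per_device_train_batch_size=2,
+     per_device_eval_batch_size=2,
+     gradient_accumulation_steps=2,  # effective train batch size: 2 * 2 = 4
+     seed=42,
+     optim="adamw_torch",            # AdamW; betas=(0.9, 0.999), eps=1e-08 are the defaults
+     lr_scheduler_type="linear",
+     warmup_steps=100,
+     num_train_epochs=10,
+     fp16=True,                      # "Native AMP" mixed precision
+ )
+ ```
+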
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.7279 | 0.2558 | 100 | 0.3734 |
+ | 0.7647 | 0.5115 | 200 | 0.3829 |
+ | 0.7646 | 0.7673 | 300 | 0.3793 |
+ | 0.7521 | 1.0230 | 400 | 0.3804 |
+ | 0.7673 | 1.2788 | 500 | 0.3817 |
+ | 0.7415 | 1.5345 | 600 | 0.3824 |
+ | 0.7721 | 1.7903 | 700 | 0.3960 |
+ | 0.7766 | 2.0460 | 800 | 0.3767 |
+ | 0.7529 | 2.3018 | 900 | 0.3756 |
+ | 0.757 | 2.5575 | 1000 | 0.3809 |
+ | 0.757 | 2.8133 | 1100 | 0.3808 |
+ | 0.746 | 3.0691 | 1200 | 0.3762 |
+ | 0.7424 | 3.3248 | 1300 | 0.3744 |
+ | 0.7409 | 3.5806 | 1400 | 0.3778 |
+ | 0.7453 | 3.8363 | 1500 | 0.3715 |
+ | 0.7409 | 4.0921 | 1600 | 0.3722 |
+ | 0.7441 | 4.3478 | 1700 | 0.3728 |
+ | 0.7304 | 4.6036 | 1800 | 0.3724 |
+ | 0.738 | 4.8593 | 1900 | 0.3710 |
+ | 0.7213 | 5.1151 | 2000 | 0.3730 |
+ | 0.7446 | 5.3708 | 2100 | 0.3721 |
+ | 0.7255 | 5.6266 | 2200 | 0.3684 |
+ | 0.7321 | 5.8824 | 2300 | 0.3671 |
+ | 0.7098 | 6.1381 | 2400 | 0.3673 |
+ | 0.7401 | 6.3939 | 2500 | 0.3735 |
+ | 0.7165 | 6.6496 | 2600 | 0.3679 |
+ | 0.714 | 6.9054 | 2700 | 0.3733 |
+ | 0.7035 | 7.1611 | 2800 | 0.3666 |
+ | 0.7089 | 7.4169 | 2900 | 0.3689 |
+ | 0.7118 | 7.6726 | 3000 | 0.3691 |
+ | 0.7064 | 7.9284 | 3100 | 0.3664 |
+ | 0.6994 | 8.1841 | 3200 | 0.3679 |
+ | 0.6958 | 8.4399 | 3300 | 0.3661 |
+ | 0.7087 | 8.6957 | 3400 | 0.3683 |
+ | 0.6968 | 8.9514 | 3500 | 0.3635 |
+ | 0.7035 | 9.2072 | 3600 | 0.3647 |
+ | 0.7045 | 9.4629 | 3700 | 0.3647 |
+ | 0.6982 | 9.7187 | 3800 | 0.3642 |
+ | 0.6996 | 9.9744 | 3900 | 0.3637 |
+
+
+ ### Framework versions
+
+ - Transformers 4.47.1
+ - Pytorch 2.5.1+cu121
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "eos_token_id": 2,
+   "pad_token_id": 1,
+   "transformers_version": "4.47.1",
+   "use_cache": false
+ }
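
The generation_config.json added above is picked up automatically when the model is loaded. A short sketch of inspecting it (the repo id is the model trained in this commit):

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained(
    "dawid511/speecht5_finetuned_librispeech_polish_epo10_batch2_gas2"
)
print(gen_config.decoder_start_token_id)  # 2, as in the file above
print(gen_config.use_cache)               # False
```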
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0f04cda1b52f8cf26854b3711bb35a552834c00a06bb757b4a2c0cbace8d1486
+ oid sha256:f428efbb94135d0a8383b3b65123aed4e64ef4f2df0c5ef1f19ef6277e867e35
  size 577789320
runs/Jan10_16-55-43_d31b3f597294/events.out.tfevents.1736528147.d31b3f597294.1037.8 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:aa9187a1b840a90450cb9519e6c6fe0bf1c6db80610697685a82b10adeca241a
- size 50290
+ oid sha256:c447f0087f2edfd46bd1cac610ea83183458d6324efc9ea56454284573c66660
+ size 50644