Edmon02 committed
Commit fcfea08 · verified · 1 Parent(s): 411dd64

End of training

README.md CHANGED
@@ -15,7 +15,12 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [Edmon02/speecht5_finetuned_voxpopuli_hy](https://huggingface.co/Edmon02/speecht5_finetuned_voxpopuli_hy) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4759
+ - eval_loss: 0.4599
+ - eval_runtime: 13.0209
+ - eval_samples_per_second: 37.094
+ - eval_steps_per_second: 9.293
+ - epoch: 14.7194
+ - step: 2000
 
  ## Model description
 
@@ -36,7 +41,7 @@ More information needed
  The following hyperparameters were used during training:
  - learning_rate: 2e-05
  - train_batch_size: 4
- - eval_batch_size: 2
+ - eval_batch_size: 4
  - seed: 42
  - gradient_accumulation_steps: 8
  - total_train_batch_size: 32
@@ -46,20 +51,6 @@ The following hyperparameters were used during training:
  - training_steps: 4000
  - mixed_precision_training: Native AMP
 
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-------:|:----:|:---------------:|
- | 0.5345 | 3.4602 | 500 | 0.4915 |
- | 0.5277 | 6.9204 | 1000 | 0.4881 |
- | 0.5173 | 10.3806 | 1500 | 0.4833 |
- | 0.5093 | 13.8408 | 2000 | 0.4807 |
- | 0.5135 | 17.3010 | 2500 | 0.4785 |
- | 0.5057 | 20.7612 | 3000 | 0.4793 |
- | 0.5016 | 24.2215 | 3500 | 0.4773 |
- | 0.5036 | 27.6817 | 4000 | 0.4759 |
-
-
  ### Framework versions
 
  - Transformers 4.43.3
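The hyperparameter hunk above lists both the per-device sizes and a `total_train_batch_size`, which are related by effective batch size = per-device train batch size × gradient accumulation steps × number of devices. A minimal sketch checking that the listed values are mutually consistent (plain Python, no Trainer dependency; the single-device assumption is ours, not stated in the README):

```python
# Hyperparameters as listed in the README diff above.
config = {
    "learning_rate": 2e-05,
    "train_batch_size": 4,
    "eval_batch_size": 4,  # value after this commit (was 2)
    "seed": 42,
    "gradient_accumulation_steps": 8,
    "total_train_batch_size": 32,
    "training_steps": 4000,
}

# Effective batch size = per-device batch x accumulation steps x devices.
num_devices = 1  # assumption: single GPU; not stated in the README
effective = (
    config["train_batch_size"]
    * config["gradient_accumulation_steps"]
    * num_devices
)

print(effective)  # -> 32, matching total_train_batch_size
```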
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ff7b7c9916704a0cae590a677302ddb0926da32eb4c8fbb97fd54c74bb3148e3
+ oid sha256:6e583990336dc40e8dc7dd9f12dbf12e2be88a5e49498943a063d8c5e7045b9e
  size 577887624
runs/Aug07_10-20-23_ip-10-192-12-40/events.out.tfevents.1723026026.ip-10-192-12-40.1518.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c5e8bbeae4429fa2871be781912b468b7e5976dece7e4758ca34e129c6a2244d
- size 16118
+ oid sha256:3bf0f73abf153ade09084a096c1c0d02720cbf7b59aa46e00868acc1ec7af587
+ size 18017
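Both binary blobs in this commit are Git LFS pointer files: the diff only changes the `oid`/`size` metadata, while the actual weights and event logs live in LFS storage. A minimal sketch of parsing that pointer format, using the new model.safetensors values shown above (`parse_lfs_pointer` is an illustrative helper, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (one "key value" pair per line)."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# New model.safetensors pointer contents from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:6e583990336dc40e8dc7dd9f12dbf12e2be88a5e49498943a063d8c5e7045b9e
size 577887624
"""

info = parse_lfs_pointer(pointer)
algo, _, digest = info["oid"].partition(":")
print(algo, info["size"])  # -> sha256 577887624
```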