Edmon02 committed
Commit 982f56c · verified · 1 Parent(s): 04d9fae

End of training

Files changed (2):
1. README.md +19 -12
2. generation_config.json +1 -1
README.md CHANGED
@@ -1,7 +1,10 @@
 ---
-base_model: Edmon02/speecht5_finetuned_voxpopuli_hy
+license: mit
+base_model: Edmon02/speecht5_finetuned_hy
 tags:
 - generated_from_trainer
+datasets:
+- common_voice_17_0
 model-index:
 - name: speecht5_finetuned_voxpopuli_hy
   results: []
@@ -12,9 +15,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # speecht5_finetuned_voxpopuli_hy
 
-This model is a fine-tuned version of [Edmon02/speecht5_finetuned_voxpopuli_hy](https://huggingface.co/Edmon02/speecht5_finetuned_voxpopuli_hy) on the None dataset.
+This model is a fine-tuned version of [Edmon02/speecht5_finetuned_hy](https://huggingface.co/Edmon02/speecht5_finetuned_hy) on the common_voice_17_0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5734
+- Loss: 0.6810
 
 ## Model description
 
@@ -41,23 +44,27 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 500
+- lr_scheduler_warmup_steps: 250
 - training_steps: 4000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch   | Step | Validation Loss |
-|:-------------:|:-------:|:----:|:---------------:|
-| 0.6866        | 9.8401  | 1000 | 0.6304          |
-| 0.6736        | 19.6802 | 2000 | 0.6190          |
-| 0.6386        | 29.5203 | 3000 | 0.5798          |
-| 0.6381        | 39.3604 | 4000 | 0.5734          |
+| Training Loss | Epoch    | Step | Validation Loss |
+|:-------------:|:--------:|:----:|:---------------:|
+| 0.7562        | 12.5786  | 500  | 0.7180          |
+| 0.732         | 25.1572  | 1000 | 0.7013          |
+| 0.7185        | 37.7358  | 1500 | 0.6943          |
+| 0.7064        | 50.3145  | 2000 | 0.6893          |
+| 0.7138        | 62.8931  | 2500 | 0.6849          |
+| 0.6973        | 75.4717  | 3000 | 0.6825          |
+| 0.6933        | 88.0503  | 3500 | 0.6817          |
+| 0.6939        | 100.6289 | 4000 | 0.6810          |
 
 
 ### Framework versions
 
-- Transformers 4.43.3
-- Pytorch 2.4.0+cu121
+- Transformers 4.42.4
+- Pytorch 2.3.1+cu121
 - Datasets 2.20.0
 - Tokenizers 0.19.1
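The updated card documents a SpeechT5 text-to-speech checkpoint fine-tuned for Armenian. As a usage aid, here is a minimal inference sketch using the standard transformers SpeechT5 API; the repo id is taken from the model-index name and the speaker x-vector source is the usual public demo set, both assumptions rather than anything stated in the diff.

```python
# Hedged sketch: assumes the checkpoint is published as
# "Edmon02/speecht5_finetuned_voxpopuli_hy" (the model-index name above)
# and exposes the standard SpeechT5 TTS interface.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo_id = "Edmon02/speecht5_finetuned_voxpopuli_hy"  # assumed repo id
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Sample Armenian greeting, transliterated; actual input handling depends on
# the tokenizer shipped with the checkpoint.
inputs = processor(text="Barev dzez", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim speaker x-vector; this public
# demo set is a stand-in, not necessarily the speaker used in training.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```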
generation_config.json CHANGED
@@ -5,5 +5,5 @@
   "eos_token_id": 2,
   "max_length": 1876,
   "pad_token_id": 1,
-  "transformers_version": "4.43.3"
+  "transformers_version": "4.42.4"
 }
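The only change here is the transformers_version stamp, which transformers writes automatically whenever a generation config is saved, so it now matches the Transformers 4.42.4 pin recorded in the README. A quick sanity check, assuming the same repo id as above:

```python
# Hedged sketch: reads the generation config back and checks the fields shown
# in this diff; the repo id is an assumption based on the model-index name.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("Edmon02/speecht5_finetuned_voxpopuli_hy")
print(gen_config.max_length)            # 1876 (unchanged context line)
print(gen_config.transformers_version)  # "4.42.4" after this commit
```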