ckandemir committed on
Commit 8804d07
1 Parent(s): 9e2bc76

End of training
README.md CHANGED
@@ -2,6 +2,7 @@
 language:
 - tr
 license: apache-2.0
+base_model: openai/whisper-tiny
 tags:
 - hf-asr-leaderboard
 - generated_from_trainer
@@ -9,13 +10,12 @@ datasets:
 - mozilla-foundation/common_voice_11_0
 metrics:
 - wer
-base_model: openai/whisper-tiny
 model-index:
 - name: Whisper Tiny Tr - Canberk Kandemir
   results:
   - task:
-      type: automatic-speech-recognition
       name: Automatic Speech Recognition
+      type: automatic-speech-recognition
     dataset:
       name: Common Voice 11.0
       type: mozilla-foundation/common_voice_11_0
@@ -23,9 +23,9 @@ model-index:
       split: None
       args: 'config: tr, split: test'
     metrics:
-    - type: wer
-      value: 49.26160951660166
-      name: Wer
+    - name: Wer
+      type: wer
+      value: 75.91546835885195
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -35,8 +35,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5012
-- Wer: 49.2616
+- Loss: 0.5366
+- Wer: 75.9155
 
 ## Model description
 
@@ -55,26 +55,27 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 16
-- eval_batch_size: 8
+- learning_rate: 3e-05
+- train_batch_size: 32
+- eval_batch_size: 16
 - seed: 42
 - gradient_accumulation_steps: 2
-- total_train_batch_size: 32
+- total_train_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine_with_restarts
 - lr_scheduler_warmup_steps: 500
-- training_steps: 2000
+- training_steps: 4000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.4527 | 0.44 | 500 | 0.6539 | 61.4814 |
-| 0.377 | 0.89 | 1000 | 0.5932 | 55.0348 |
-| 0.1962 | 1.33 | 1500 | 0.5298 | 52.0813 |
-| 0.1978 | 1.77 | 2000 | 0.5012 | 49.2616 |
+| 0.3923 | 0.89 | 500 | 0.5935 | 73.9613 |
+| 0.2697 | 1.77 | 1000 | 0.5414 | 53.0923 |
+| 0.1784 | 2.66 | 1500 | 0.5194 | 53.8744 |
+| 0.1081 | 3.54 | 2000 | 0.5317 | 64.3962 |
+| 0.0672 | 4.43 | 2500 | 0.5366 | 75.9155 |
 
 
 ### Framework versions
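
For orientation, the updated hyperparameters above map roughly onto the following `Seq2SeqTrainingArguments` configuration. This is a minimal sketch under assumptions, not the training script from this commit: `output_dir`, `predict_with_generate`, and the evaluation cadence are not stated in the card, and only the listed values are taken from it.

```python
# Minimal sketch: the card's hyperparameters expressed as transformers
# Seq2SeqTrainingArguments. output_dir and predict_with_generate are
# assumptions; the numeric values come from the updated card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-tr",        # assumed output directory
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,         # total train batch size: 64
    warmup_steps=500,
    max_steps=4000,
    lr_scheduler_type="cosine_with_restarts",
    seed=42,
    fp16=True,                             # "Native AMP" mixed precision
    predict_with_generate=True,            # assumed; needed to score WER during eval
)
```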
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:54a5607a407f5d973ea4b97518e791a9f664fe515536ccb1b914b053b8e8ad3b
+oid sha256:6f83a1384be750c80b25e4065ec71c05957c832a4fab754d0062d0fa1dd95ae3
 size 151061672
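
The new `model.safetensors` pointer above refers to the retrained weights. A minimal sketch of loading them from a local clone of the repository follows; the path is a placeholder, and it assumes the clone also contains the usual config and processor files, which this commit does not show.

```python
# Minimal sketch: load the retrained checkpoint from a local clone of this
# repo. The path is a placeholder; it should point at the directory that
# contains model.safetensors plus the config/processor files.
from transformers import WhisperForConditionalGeneration, WhisperProcessor

local_repo = "./whisper-tiny-tr-clone"   # placeholder path to the cloned repo
model = WhisperForConditionalGeneration.from_pretrained(local_repo)
processor = WhisperProcessor.from_pretrained(local_repo)
```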
runs/Feb16_20-36-22_b58981b54d24/events.out.tfevents.1708115783.b58981b54d24.1173.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cc123ba9ef80a17f19c78e55f70aa55f136679cde21afc92de7551a659980872
-size 22467
+oid sha256:f1565b11f71cdb5b1574842dd9e9ef9251fcd2b2a9074968967155df78c795ce
+size 22821
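
The updated event file above carries the TensorBoard logs for the evaluation runs shown in the results table. For context, the Wer column is a word error rate reported in percent; a minimal sketch of how such a score is typically computed with the `evaluate` library follows (the example strings are placeholders, not data from this run).

```python
# Minimal sketch: compute a word error rate the way the card reports it
# (as a percentage). The strings below are placeholders, not model output.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["merhaba dünya"]              # hypothetical transcription
references = ["merhaba dünya nasılsınız"]    # hypothetical reference text
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 33.3333 here: one of three reference words is missing
```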