AsemBadr committed
Commit 257d566
1 Parent(s): ad4ac6a

End of training

Files changed (4)
  1. README.md +4 -44
  2. config.json +2 -2
  3. generation_config.json +1 -1
  4. training_args.bin +2 -2
README.md CHANGED
@@ -8,24 +8,9 @@ tags:
  - generated_from_trainer
  datasets:
  - AsemBadr/GP
- metrics:
- - wer
  model-index:
  - name: Whisper Small for Quran Recognition
-   results:
-   - task:
-       name: Automatic Speech Recognition
-       type: automatic-speech-recognition
-     dataset:
-       name: Quran_Reciters
-       type: AsemBadr/GP
-       config: default
-       split: test
-       args: 'config: default, split: train'
-     metrics:
-     - name: Wer
-       type: wer
-       value: 3.163142513323019
+   results: []
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -34,9 +19,6 @@ should probably proofread and complete it, then remove this comment. -->
  # Whisper Small for Quran Recognition
 
  This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Quran_Reciters dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0192
- - Wer: 3.1631
 
  ## Model description
 
@@ -62,34 +44,12 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
- - training_steps: 8000
+ - training_steps: 12000
  - mixed_precision_training: Native AMP
 
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Wer    |
- |:-------------:|:-----:|:----:|:---------------:|:------:|
- | 0.0073        | 1.62  | 500  | 0.0249          | 5.0026 |
- | 0.0014        | 3.24  | 1000 | 0.0214          | 4.1086 |
- | 0.0008        | 4.85  | 1500 | 0.0221          | 3.9883 |
- | 0.0           | 6.47  | 2000 | 0.0180          | 2.9740 |
- | 0.0           | 8.09  | 2500 | 0.0177          | 3.0944 |
- | 0.0           | 9.71  | 3000 | 0.0178          | 3.0944 |
- | 0.0           | 11.33 | 3500 | 0.0179          | 3.1288 |
- | 0.0           | 12.94 | 4000 | 0.0179          | 3.1288 |
- | 0.0           | 14.56 | 4500 | 0.0181          | 2.8881 |
- | 0.0           | 16.18 | 5000 | 0.0184          | 2.9225 |
- | 0.0           | 17.8  | 5500 | 0.0186          | 3.0256 |
- | 0.0           | 19.42 | 6000 | 0.0188          | 3.1803 |
- | 0.0           | 21.04 | 6500 | 0.0190          | 3.1631 |
- | 0.0           | 22.65 | 7000 | 0.0191          | 3.1631 |
- | 0.0           | 24.27 | 7500 | 0.0192          | 3.1803 |
- | 0.0           | 25.89 | 8000 | 0.0192          | 3.1631 |
-
-
  ### Framework versions
 
- - Transformers 4.40.0.dev0
+ - Transformers 4.41.0.dev0
  - Pytorch 2.1.2
  - Datasets 2.17.1
- - Tokenizers 0.15.1
+ - Tokenizers 0.19.1
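The hyperparameter bullets in the README map almost one-to-one onto `Seq2SeqTrainingArguments` from `transformers`. The following is a minimal sketch of that mapping, assuming placeholder values for the output directory, batch size, and learning rate, which are not shown in this diff:

```python
# Minimal sketch only: maps the hyperparameters listed in the README diff onto
# transformers' Seq2SeqTrainingArguments. output_dir, batch size, and learning
# rate are placeholders; they are not part of this commit.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-quran",  # placeholder
    per_device_train_batch_size=16,    # assumed, not shown in the diff
    learning_rate=1e-5,                # assumed, not shown in the diff
    lr_scheduler_type="linear",        # lr_scheduler_type: linear
    warmup_steps=500,                  # lr_scheduler_warmup_steps: 500
    max_steps=12000,                   # training_steps: 12000 after this commit
    adam_beta1=0.9,                    # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # and epsilon=1e-08
    fp16=True,                         # mixed_precision_training: Native AMP
)
```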
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "/kaggle/input/whisper-small-final/whisper-small-final",
+ "_name_or_path": "AsemBadr/whisper-small-final-v2",
  "activation_dropout": 0.0,
  "activation_function": "gelu",
  "apply_spec_augment": false,
@@ -58,7 +58,7 @@
  "scale_embedding": false,
  "suppress_tokens": [],
  "torch_dtype": "float32",
- "transformers_version": "4.40.0.dev0",
+ "transformers_version": "4.41.0.dev0",
  "use_cache": true,
  "use_weighted_layer_sum": false,
  "vocab_size": 51865
generation_config.json CHANGED
@@ -261,5 +261,5 @@
  "transcribe": 50359,
  "translate": 50358
  },
- "transformers_version": "4.40.0.dev0"
+ "transformers_version": "4.41.0.dev0"
  }
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:81fdd69ed4230c843750246ce946cd9653ab2226cbd46ba20a262fd6ffd7c78b
- size 5112
+ oid sha256:8670b231ff99f1056ac1ca9eace30dd28237f9f9b5ed3d195a172585f77b1d92
+ size 5176
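`training_args.bin` is stored through Git LFS, so this diff only updates the pointer file (new sha256 and size); the blob itself is a torch-pickled `TrainingArguments` object written by the Trainer. A minimal sketch of inspecting it after the LFS object has been fetched, assuming PyTorch 2.1.x as listed in the card:

```python
# Sketch only: training_args.bin is written via torch.save, so it can be read
# back with torch.load once the git-lfs blob has been pulled locally.
import torch

args = torch.load("training_args.bin")
print(args.max_steps, args.warmup_steps, args.lr_scheduler_type)
```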