gigant committed on
Commit bcee0c0
1 Parent(s): 4ee5cbb

End of training

Files changed (2)
  1. README.md +28 -1
  2. generation_config.json +7 -1
README.md CHANGED
@@ -5,9 +5,24 @@ tags:
 - generated_from_trainer
 datasets:
 - PolyAI/minds14
+metrics:
+- wer
 model-index:
 - name: whisper-tiny-minds14-audio-course-v2
-  results: []
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: PolyAI/minds14
+      type: PolyAI/minds14
+      config: en-US
+      split: train[450:]
+      args: en-US
+    metrics:
+    - name: Wer
+      type: wer
+      value: 0.345926800472255
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,6 +31,10 @@ should probably proofread and complete it, then remove this comment. -->
 # whisper-tiny-minds14-audio-course-v2
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.6485
+- Wer Ortho: 0.3492
+- Wer: 0.3459
 
 ## Model description
 
@@ -43,6 +62,14 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 50
 - training_steps: 500
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer    |
+|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
+| 0.0089        | 8.93  | 250  | 0.5684          | 0.3677    | 0.3548 |
+| 0.0007        | 17.86 | 500  | 0.6485          | 0.3492    | 0.3459 |
+
+
 ### Framework versions
 
 - Transformers 4.32.1
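The new card metadata spells out the evaluation setup: WER on the en-US config of PolyAI/minds14, using the held-out split train[450:]. A minimal sketch of how that evaluation could be reproduced is below; the repo id gigant/whisper-tiny-minds14-audio-course-v2 and the transcription column name are assumptions, not taken from this commit, and the card's lower "Wer" value is presumably computed on normalized text rather than the raw transcripts used here.

```python
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

# Hypothetical repo id, assembled from the committer name and the model name in the card.
model_id = "gigant/whisper-tiny-minds14-audio-course-v2"

# Evaluation data as declared in the card metadata: en-US config, split train[450:].
minds = load_dataset("PolyAI/minds14", "en-US", split="train[450:]")
minds = minds.cast_column("audio", Audio(sampling_rate=16_000))  # Whisper expects 16 kHz input

asr = pipeline("automatic-speech-recognition", model=model_id)
wer = evaluate.load("wer")

predictions = [asr(sample["audio"]["array"])["text"] for sample in minds]
references = [sample["transcription"] for sample in minds]

# Orthographic WER over the raw transcripts (closest to the card's "Wer Ortho").
print(wer.compute(predictions=predictions, references=references))
```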
generation_config.json CHANGED
@@ -35,11 +35,15 @@
   "forced_decoder_ids": [
     [
       1,
-      null
+      50259
     ],
     [
       2,
       50359
+    ],
+    [
+      3,
+      50363
     ]
   ],
   "is_multilingual": true,
@@ -144,6 +148,7 @@
     "<|yo|>": 50325,
     "<|zh|>": 50260
   },
+  "language": "en",
   "max_initial_timestamp_index": 1,
   "max_length": 448,
   "no_timestamps_token_id": 50363,
@@ -239,6 +244,7 @@
     50361,
     50362
   ],
+  "task": "transcribe",
   "task_to_id": {
     "transcribe": 50359,
     "translate": 50358