evanarlian committed on
Commit 28bae1a
1 Parent(s): 82e87bd

update model card README.md

Files changed (1)
  1. README.md +30 -24
README.md CHANGED
@@ -1,41 +1,38 @@
 ---
-language:
-- id
 license: apache-2.0
 tags:
-- whisper-event
 - generated_from_trainer
 datasets:
-- mozilla-foundation/common_voice_11_0
+- common_voice_11_0
 metrics:
 - wer
 model-index:
-- name: Whisper Tiny Indonesian
+- name: whisper-tiny-id
   results:
   - task:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name: mozilla-foundation/common_voice_11_0 id
-      type: mozilla-foundation/common_voice_11_0
+      name: common_voice_11_0
+      type: common_voice_11_0
       config: id
-      split: train
+      split: test
       args: id
     metrics:
     - name: Wer
       type: wer
-      value: 145.78068800147562
+      value: 33.344092963202066
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Whisper Tiny Indonesian
+# whisper-tiny-id
 
-This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_11_0 id dataset.
+This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the common_voice_11_0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.8237
-- Wer: 145.7807
+- Loss: 0.7428
+- Wer: 33.3441
 
 ## Model description
 
@@ -55,25 +52,34 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 64
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 4
-- training_steps: 10
+- lr_scheduler_warmup_steps: 500
+- training_steps: 5000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer      |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.0002        | 0.02  | 6    | 1.8237          | 145.7807 |
+| Training Loss | Epoch | Step | Validation Loss | Wer     |
+|:-------------:|:-----:|:----:|:---------------:|:-------:|
+| 0.3823        | 4.95  | 500  | 0.5251          | 33.4732 |
+| 0.0495        | 9.9   | 1000 | 0.5700          | 33.3902 |
+| 0.0077        | 14.85 | 1500 | 0.6202          | 32.4218 |
+| 0.0031        | 19.8  | 2000 | 0.6616          | 32.5371 |
+| 0.0019        | 24.75 | 2500 | 0.6873          | 32.7954 |
+| 0.0014        | 29.7  | 3000 | 0.7056          | 33.5700 |
+| 0.0011        | 34.65 | 3500 | 0.7204          | 33.7960 |
+| 0.0009        | 39.6  | 4000 | 0.7327          | 33.7729 |
+| 0.0008        | 44.55 | 4500 | 0.7400          | 33.9113 |
+| 0.0007        | 49.5  | 5000 | 0.7428          | 33.3441 |
 
 
 ### Framework versions
 
-- Transformers 4.25.1
-- Pytorch 1.12.1
-- Datasets 2.7.1
-- Tokenizers 0.13.1
+- Transformers 4.26.0.dev0
+- Pytorch 1.13.0+cu117
+- Datasets 2.7.1.dev0
+- Tokenizers 0.13.2
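
The updated hyperparameters pair `lr_scheduler_type: linear` with 500 warmup steps over 5000 training steps. As a minimal sketch of that schedule shape — assuming the conventional linear-warmup/linear-decay behavior of `transformers`' `get_linear_schedule_with_warmup`; the function name and standalone form here are illustrative, not the trainer's actual code:

```python
def linear_lr_with_warmup(step: int, base_lr: float = 2e-05,
                          warmup_steps: int = 500, total_steps: int = 5000) -> float:
    """LR ramps linearly from 0 to base_lr over the warmup steps,
    then decays linearly back to 0 at the final training step."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr_with_warmup(500))   # peak LR at the end of warmup
print(linear_lr_with_warmup(5000))  # decays to zero at the last step
```

Raising warmup from 4 to 500 steps means the first 10% of training ramps the learning rate gradually, which typically stabilizes early fine-tuning compared with the previous near-instant jump to the peak rate.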
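
The Wer figures in the card are word error rates expressed as percentages. As context, a minimal stdlib-only sketch of the metric — word-level edit distance divided by reference length; the actual run computed it with the Hugging Face evaluation stack, and this helper is illustrative:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words, as a percentage."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over word sequences via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # match or substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("saya suka makan nasi", "saya suka minum nasi"))  # one substitution in four words -> 25.0
```

Because insertions count against the reference length, WER can exceed 100 — which is how the earlier revision of this card reported a Wer of 145.78 before the longer training run brought it down to 33.34.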