TSukiLen committed on
Commit 1a90a42 · verified · 1 Parent(s): 9ade956

End of training

Files changed (3)
  1. README.md +20 -17
  2. generation_config.json +1 -1
  3. model.safetensors +1 -1
README.md CHANGED
@@ -1,40 +1,42 @@
 ---
 library_name: transformers
+language:
+- zh
 license: apache-2.0
 base_model: openai/whisper-small
 tags:
 - generated_from_trainer
 datasets:
-- common_voice_11_0
+- mozilla-foundation/common_voice_11_0
 metrics:
 - wer
 model-index:
-- name: whisper-small-chinese-tw-minnan
+- name: Whisper Small chinese Test
   results:
   - task:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name: common_voice_11_0
-      type: common_voice_11_0
+      name: Common Voice 11.0
+      type: mozilla-foundation/common_voice_11_0
       config: nan-tw
       split: test
-      args: nan-tw
+      args: 'config: zh-tw, split: test'
     metrics:
     - name: Wer
       type: wer
-      value: 95.14713474445018
+      value: 94.0629839958699
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# whisper-small-chinese-tw-minnan
+# Whisper Small chinese Test
 
-This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
+This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9675
-- Wer: 95.1471
+- Loss: 0.9213
+- Wer: 94.0630
 
 ## Model description
 
@@ -54,28 +56,29 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 16
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 4000
+- training_steps: 5000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-------:|:----:|:---------------:|:-------:|
-| 0.032 | 7.2464 | 1000 | 0.8407 | 96.5927 |
-| 0.0013 | 14.4928 | 2000 | 0.9247 | 95.3020 |
-| 0.0005 | 21.7391 | 3000 | 0.9552 | 94.9923 |
-| 0.0004 | 28.9855 | 4000 | 0.9675 | 95.1471 |
+| 0.1069 | 3.6364 | 1000 | 0.7541 | 99.3289 |
+| 0.0117 | 7.2727 | 2000 | 0.8330 | 93.9597 |
+| 0.0015 | 10.9091 | 3000 | 0.8627 | 94.7858 |
+| 0.0004 | 14.5455 | 4000 | 0.9036 | 93.3918 |
+| 0.0002 | 18.1818 | 5000 | 0.9213 | 94.0630 |
 
 
 ### Framework versions
 
-- Transformers 4.46.2
+- Transformers 4.46.3
 - Pytorch 2.4.0+cu124
 - Datasets 3.1.0
 - Tokenizers 0.20.3
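
For reference, a minimal inference sketch for the checkpoint updated in this commit, using the `transformers` ASR pipeline. The repo ID and audio path below are assumptions (the repo ID is guessed from the committer name and the previous model-index name), and the language/task settings mirror the `zh` / `nan-tw` metadata declared in the new front matter.

```python
# Minimal sketch: transcribe one audio file with the fine-tuned checkpoint.
# Assumptions: "TSukiLen/whisper-small-chinese-tw-minnan" and "sample.wav"
# are placeholders; substitute the actual repo ID and a 16 kHz audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="TSukiLen/whisper-small-chinese-tw-minnan",  # assumed repo ID
    generate_kwargs={"language": "chinese", "task": "transcribe"},
)

print(asr("sample.wav")["text"])
```

Passing `language` and `task` through `generate_kwargs` fixes Whisper's decoder prompt instead of relying on automatic language detection, which matters for a low-resource config like `nan-tw`.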
generation_config.json CHANGED
@@ -250,5 +250,5 @@
     "transcribe": 50359,
     "translate": 50358
   },
-  "transformers_version": "4.46.2"
+  "transformers_version": "4.46.3"
 }
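
The only change here is the `transformers_version` stamp. As a quick sanity check, the serialized generation config can be loaded and inspected; the sketch below assumes, as in the upstream `openai/whisper-small` config, that the `"transcribe"`/`"translate"` entries shown in the hunk sit under Whisper's `task_to_id` mapping.

```python
# Sketch: load the generation config and inspect the fields around this hunk.
# "openai/whisper-small" is used because it is the confirmed base model; swap in
# the fine-tuned repo ID to check the exact file changed by this commit.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("openai/whisper-small")
print(gen_cfg.transformers_version)  # library version that wrote the config
print(gen_cfg.task_to_id)            # task name -> decoder token id (transcribe/translate)
```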
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:10bfeb8df0515ea970c549d4c77f6811ddb673fc7ed55d478bda013f357cad68
+oid sha256:e142936c541af623b5593c82ccf3e92068bdc346712a4820d20abbe11a2b1afd
 size 966995080
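
Only the LFS pointer for the weights changes: same size, new content hash. Since a git-lfs pointer's `oid` is the SHA-256 of the resolved file, a downloaded `model.safetensors` can be checked against the new value; the file path below is a placeholder.

```python
# Sketch: verify a downloaded model.safetensors against the LFS pointer in this commit.
import hashlib

EXPECTED = "e142936c541af623b5593c82ccf3e92068bdc346712a4820d20abbe11a2b1afd"  # new oid above

sha = hashlib.sha256()
with open("model.safetensors", "rb") as f:  # placeholder path to the resolved (non-pointer) file
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

print("match:", sha.hexdigest() == EXPECTED)
```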