noflm committed
Commit e6c60f5
1 Parent(s): 847e4b5

update model card README.md

Files changed (1)
  1. README.md +27 -21
README.md CHANGED
@@ -1,41 +1,38 @@
 ---
-language:
-- ja
-license: other
+license: apache-2.0
 tags:
-- whisper-event
 - generated_from_trainer
 datasets:
-- Elite35P-Server/EliteVoiceProject
+- elite_voice_project
 metrics:
 - wer
 model-index:
-- name: Whisper Base Japanese Elite
+- name: whisper-base-ja-elite
   results:
   - task:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name: Elite35P-Server/EliteVoiceProject twitter
-      type: Elite35P-Server/EliteVoiceProject
+      name: elite_voice_project
+      type: elite_voice_project
       config: twitter
       split: test
       args: twitter
     metrics:
     - name: Wer
       type: wer
-      value: 11.585365853658537
+      value: 17.073170731707318
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Whisper Base Japanese Elite
+# whisper-base-ja-elite
 
-This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Elite35P-Server/EliteVoiceProject twitter dataset.
+This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the elite_voice_project dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1459
-- Wer: 11.5854
+- Loss: 0.4385
+- Wer: 17.0732
 
 ## Model description
 
@@ -55,25 +52,34 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 8
-- eval_batch_size: 4
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant_with_warmup
-- lr_scheduler_warmup_steps: 50
-- training_steps: 1000
+- lr_scheduler_warmup_steps: 200
+- training_steps: 10000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer     |
-|:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.0009        | 29.01 | 1000 | 0.1459          | 11.5854 |
+| Training Loss | Epoch  | Step  | Validation Loss | Wer     |
+|:-------------:|:------:|:-----:|:---------------:|:-------:|
+| 0.0002        | 111.0  | 1000  | 0.2155          | 9.7561  |
+| 0.0001        | 222.0  | 2000  | 0.2448          | 12.1951 |
+| 0.0           | 333.0  | 3000  | 0.2674          | 13.4146 |
+| 0.0           | 444.0  | 4000  | 0.2943          | 15.8537 |
+| 0.0           | 555.0  | 5000  | 0.3182          | 17.0732 |
+| 0.0           | 666.0  | 6000  | 0.3501          | 18.9024 |
+| 0.0           | 777.0  | 7000  | 0.3732          | 16.4634 |
+| 0.0           | 888.0  | 8000  | 0.4025          | 17.0732 |
+| 0.0           | 999.0  | 9000  | 0.4178          | 20.1220 |
+| 0.0           | 1111.0 | 10000 | 0.4385          | 17.0732 |
 
 
 ### Framework versions
 
 - Transformers 4.26.0.dev0
-- Pytorch 1.13.0+cu117
+- Pytorch 1.13.1+cu117
 - Datasets 2.8.1.dev0
 - Tokenizers 0.13.2
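The card's headline metric is WER (word error rate): the minimum number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the number of reference words. A minimal pure-Python sketch of that computation (the Trainer typically computes it via the `evaluate`/`jiwer` packages; this standalone version is only for illustration):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over word sequences via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER of 1/6.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Note that the card reports WER as a percentage (17.0732 means a ratio of 0.170732), and that for Japanese, which is written without spaces, text is usually segmented (e.g. with MeCab) before word-level WER is meaningful.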
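The hyperparameters list `lr_scheduler_type: constant_with_warmup` with 200 warmup steps; in transformers this corresponds to `get_constant_schedule_with_warmup`. A hypothetical standalone sketch of the per-step learning rate that schedule produces, using the card's values:

```python
def constant_with_warmup(step: int, base_lr: float = 1e-05, warmup_steps: int = 200) -> float:
    """Learning rate at a given optimizer step: linear warmup to base_lr, then constant."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr

# The LR ramps linearly over the first 200 steps, then stays at 1e-05
# for the remaining 9800 of the 10000 training steps.
print(constant_with_warmup(100))   # halfway through warmup
print(constant_with_warmup(5000))  # constant phase
```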