---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-medium-en
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: myst-test
      type: asr
      config: en
      split: test
    metrics:
    - type: wer
      value: 8.85
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: cslu_scripted
      type: asr
      config: en
      split: test
    metrics:
    - type: wer
      value: 2.38
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: cslu_spontaneous
      type: asr
      config: en
      split: test
    metrics:
    - type: wer
      value: 16.53
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: librispeech
      type: asr
      config: en
      split: testclean
    metrics:
    - type: wer
      value: 3.52
      name: WER
---

# openai/whisper-medium-en

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the MyST dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2299
- WER: 7.9455
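
The checkpoint can be used for transcription through the transformers ASR pipeline. A minimal sketch, assuming the model is published under this repository's Hub ID; the repo path and audio file below are placeholders:

```python
from transformers import pipeline

# Placeholder Hub path -- replace with this checkpoint's actual repository ID.
asr = pipeline(
    "automatic-speech-recognition",
    model="aadel4/whisper-medium-en",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

# "sample.wav" is a placeholder; the pipeline resamples common audio formats.
print(asr("sample.wav")["text"])
```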

## Training and evaluation data

- Training data: MyST train (125 hours)
- Evaluation data: MyST dev (20.9 hours)
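
Word error rates like those reported above can be computed with the evaluate library. A minimal sketch with placeholder transcripts; in practice, predictions come from the model and references from the test sets:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder transcripts standing in for model outputs and gold references.
predictions = ["the water cycle starts with evaporation"]
references = ["the water cycle begins with evaporation"]

# compute() returns a fraction; multiply by 100 for the percentages shown above.
print(100 * wer_metric.compute(predictions=predictions, references=references))
```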

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
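
These settings map onto transformers training arguments roughly as follows. This is a sketch, not the author's original script: `output_dir` is an assumption, and the Adam betas and epsilon listed above are the transformers defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is an assumption.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-en",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
)
```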