marinone94 committed
Commit
f850b55
2 Parent(s): 5c64556 4de1fa4

Merge branch 'main' of https://huggingface.co/marinone94/whisper-tiny-sv

Files changed (1)
README.md +88 -0
README.md ADDED
@@ -0,0 +1,88 @@
+ ---
+ language:
+ - 'no'
+ - sv
+ - da
+ license: apache-2.0
+ tags:
+ - whisper-event
+ - generated_from_trainer
+ datasets:
+ - mozilla-foundation/common_voice_11_0
+ - mozilla-foundation/common_voice_11_0
+ - mozilla-foundation/common_voice_11_0
+ - babelbox/babelbox_voice
+ - NbAiLab/NST
+ - NbAiLab/NPSC
+ - google/fleurs
+ - google/fleurs
+ - google/fleurs
+ metrics:
+ - wer
+ model-index:
+ - name: Whisper Tiny Nordic
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     metrics:
+     - name: Wer
+       type: wer
+       value: 87.65957446808511
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Whisper Tiny Nordic
+
+ This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the following datasets:
+ - mozilla-foundation/common_voice_11_0 (sv-SE, da, nn-NO)
+ - babelbox/babelbox_voice (nst)
+ - NbAiLab/NST (no-distant)
+ - NbAiLab/NPSC (16K_mp3_nynorsk)
+ - google/fleurs (sv_se, da_dk, nb_no)
+
+ It achieves the following results on the evaluation set:
+ - Loss: 5.1226
+ - Wer: 87.6596
+
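+ A quick way to try the checkpoint is the `transformers` ASR pipeline. Below is a minimal inference sketch, assuming the model is published under the repo id `marinone94/whisper-tiny-sv` taken from the merge URL above:
+
+ ```python
+ from transformers import pipeline
+
+ # Load the fine-tuned checkpoint from the Hub.
+ asr = pipeline(
+     "automatic-speech-recognition",
+     model="marinone94/whisper-tiny-sv",
+ )
+
+ # Transcribe a local audio file (any format ffmpeg can decode).
+ print(asr("audio.mp3")["text"])
+ ```
+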
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
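+ The corpora and configs used are the dataset/config pairs listed above. A minimal loading sketch, assuming the `datasets` library; note that Common Voice 11 requires accepting its terms on the Hub and authenticating first:
+
+ ```python
+ from datasets import load_dataset
+
+ # One (dataset, config) pair from the list; the others follow the same pattern.
+ cv_sv = load_dataset(
+     "mozilla-foundation/common_voice_11_0",
+     "sv-SE",
+     split="train",
+     streaming=True,  # iterate without downloading the whole corpus
+ )
+
+ sample = next(iter(cv_sv))
+ print(sample["sentence"])  # transcript field in Common Voice
+ ```
+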
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 1
+ - mixed_precision_training: Native AMP
+
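+ Expressed as `Seq2SeqTrainingArguments`, these settings correspond roughly to the sketch below; the output directory is an assumption, and the Adam betas/epsilon listed above are the `transformers` defaults:
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="./whisper-tiny-nordic",  # assumption: not stated in the card
+     learning_rate=1e-5,
+     per_device_train_batch_size=1,
+     per_device_eval_batch_size=1,
+     seed=42,
+     lr_scheduler_type="linear",
+     warmup_steps=500,
+     max_steps=1,
+     fp16=True,  # "Native AMP" mixed precision
+ )
+ ```
+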
+ ### Training results
+
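+ No intermediate training results were recorded. The reported WER can be recomputed with the `evaluate` library; a minimal sketch with hypothetical strings standing in for model output and references:
+
+ ```python
+ import evaluate
+
+ wer = evaluate.load("wer")
+
+ # Hypothetical examples; in practice these come from running the model
+ # over the evaluation split.
+ predictions = ["hej världen"]
+ references = ["hej hela världen"]
+
+ print(100 * wer.compute(predictions=predictions, references=references))
+ ```
+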
+ ### Framework versions
+
+ - Transformers 4.26.0.dev0
+ - Pytorch 1.13.1+cu117
+ - Datasets 2.7.1.dev0
+ - Tokenizers 0.13.2