---
library_name: peft
language:
- zh
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- wft
- whisper
- automatic-speech-recognition
- audio
- speech
- generated_from_trainer
datasets:
- JacobLinCool/mozilla-foundation-common_voice_16_1-zh-TW-preprocessed
metrics:
- wer
model-index:
- name: whisper-large-v3-turbo-common_voice_16_1-zh-TW-pissa
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: JacobLinCool/mozilla-foundation-common_voice_16_1-zh-TW-preprocessed
      type: JacobLinCool/mozilla-foundation-common_voice_16_1-zh-TW-preprocessed
    metrics:
    - type: wer
      value: 63.665594855305464
      name: Wer
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# whisper-large-v3-turbo-common_voice_16_1-zh-TW-pissa

This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the [JacobLinCool/mozilla-foundation-common_voice_16_1-zh-TW-preprocessed](https://huggingface.co/datasets/JacobLinCool/mozilla-foundation-common_voice_16_1-zh-TW-preprocessed) dataset.
It achieves the following results on the evaluation set:

- Loss: 0.5133
- WER: 63.6656
- CER: 23.5752

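A minimal inference sketch (an assumption, not part of the original training setup): it loads this repository's adapter on top of the base model with `peft`, merges it into the base weights, and transcribes a 16 kHz audio file. The adapter id and the `sample.wav` path are placeholders to adjust.

```python
import torch
import librosa
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

BASE_ID = "openai/whisper-large-v3-turbo"
ADAPTER_ID = "JacobLinCool/whisper-large-v3-turbo-common_voice_16_1-zh-TW-pissa"  # assumed repo id

processor = WhisperProcessor.from_pretrained(BASE_ID)
model = WhisperForConditionalGeneration.from_pretrained(BASE_ID, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, ADAPTER_ID)
model = model.merge_and_unload()  # fold the adapter into the base weights for plain inference
model.to("cuda").eval()

# Whisper expects 16 kHz mono audio; librosa resamples on load.
audio, _ = librosa.load("sample.wav", sr=16000)
features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
features = features.to("cuda", dtype=torch.float16)

predicted_ids = model.generate(features, language="zh", task="transcribe")
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```
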
## Model description

This is a parameter-efficient (PEFT) adapter for [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo), fine-tuned for Taiwanese Mandarin (zh-TW) speech recognition. Judging by the model name and the `wft` tag, the adapter was initialized with PiSSA (Principal Singular values and Singular vectors Adaptation) and trained with the WFT Whisper fine-tuning toolkit.

## Intended uses & limitations

The adapter is intended for transcribing Taiwanese Mandarin speech. Note that word error rate is a coarse metric for Chinese, where segmentation into words is ambiguous; the character error rate (23.58) is the more informative evaluation figure here.

## Training and evaluation data

The model was trained and evaluated on [JacobLinCool/mozilla-foundation-common_voice_16_1-zh-TW-preprocessed](https://huggingface.co/datasets/JacobLinCool/mozilla-foundation-common_voice_16_1-zh-TW-preprocessed), a preprocessed zh-TW (Taiwanese Mandarin) subset of Mozilla Common Voice 16.1.

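A quick way to inspect the data (a sketch; the available splits and columns depend on the dataset repository):

```python
from datasets import load_dataset

# Pull the preprocessed Common Voice 16.1 zh-TW dataset from the Hub.
ds = load_dataset("JacobLinCool/mozilla-foundation-common_voice_16_1-zh-TW-preprocessed")
print(ds)  # shows the available splits and columns
```
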
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

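As a hedged reconstruction (the original training script is not included in this card), these settings map onto `transformers`' `Seq2SeqTrainingArguments` roughly as follows:

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-turbo-common_voice_16_1-zh-TW-pissa",
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,  # 4 x 8 = total train batch size of 32
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",  # AdamW with default betas=(0.9, 0.999) and eps=1e-8
)
```
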
### Training results

| Training Loss | Epoch  | Step | Validation Loss | WER (%) | CER (%) |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|
| No log        | 0      | 0    | 2.7520          | 77.6125 | 20.7783 |
| 7.6982        | 0.9987 | 377  | 0.8744          | 87.9421 | 41.2804 |
| 5.1677        | 2.0    | 755  | 0.7499          | 82.5965 | 36.6407 |
| 3.3647        | 2.9987 | 1132 | 0.6433          | 76.8087 | 31.6068 |
| 3.4711        | 4.0    | 1510 | 0.6397          | 76.2460 | 30.2862 |
| 1.5694        | 4.9987 | 1887 | 0.5779          | 71.5434 | 27.5471 |
| 0.7951        | 6.0    | 2265 | 0.5664          | 71.3223 | 27.0600 |
| 0.4709        | 6.9987 | 2642 | 0.5492          | 68.8706 | 26.0131 |
| 0.1160        | 8.0    | 3020 | 0.5427          | 66.7605 | 24.8104 |
| 0.0512        | 8.9987 | 3397 | 0.5298          | 66.1375 | 24.8632 |
| 0.0273        | 9.9868 | 3770 | 0.5133          | 63.6656 | 23.5752 |

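For context on the two metrics, here is a small sketch of how WER and CER can be computed with the `evaluate` library (an assumption; the evaluation script itself is not part of this card). The toy pair also shows why WER runs much higher than CER on unsegmented Chinese text:

```python
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

predictions = ["今天天氣很好"]
references = ["今天天氣真好"]

# WER splits on whitespace, so the unsegmented sentence is a single "word"
# and one wrong character makes the whole token count as an error.
print("WER:", 100 * wer.compute(predictions=predictions, references=references))  # 100.0
# CER compares characters: 1 substitution out of 6 characters.
print("CER:", 100 * cer.compute(predictions=predictions, references=references))  # ~16.7
```
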
### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- PyTorch 2.4.0
- Datasets 3.0.2
- Tokenizers 0.20.1