pere committed
Commit c953c33 (1 parent: 17bee3c)

Update stats.md

Files changed (1):
  1. stats.md +87 -0
stats.md ADDED
@@ -0,0 +1,87 @@
---
language:
- 'no'
license: apache-2.0
base_model: NbAiLabBeta/nb-whisper-large
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
model-index:
- name: nb-whisper-large-v0.8-vad3-verbatim
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# nb-whisper-large-v0.8-vad3-verbatim

This model is a fine-tuned version of [NbAiLabBeta/nb-whisper-large](https://huggingface.co/NbAiLabBeta/nb-whisper-large) on the NbAiLab/NPSC dataset.
It achieves the following results on the evaluation set:
- step: 249
- validation_loss: 0.5839
- train_loss: 0.4632
- validation_wer: 7.9358
- validation_cer: 2.5127
- validation_exact_wer: 8.0494
- validation_exact_cer: 2.5279
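
For reference, here is a minimal inference sketch using the Transformers `pipeline` API. The repository id and the audio filename are illustrative placeholders, since this card does not state the checkpoint's hub path:

```python
from transformers import pipeline

# Hypothetical hub path; replace with the actual checkpoint location.
asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLabBeta/nb-whisper-large-v0.8-vad3-verbatim",
)

# Whisper-style generation arguments: Norwegian transcription,
# with long-form audio processed in 30-second chunks.
result = asr(
    "sample.mp3",
    chunk_length_s=30,
    generate_kwargs={"task": "transcribe", "language": "no"},
)
print(result["text"])
```
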
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 7e-05
- lr_scheduler_type: linear
- per_device_train_batch_size: 8
- total_train_batch_size_per_node: 32
- total_train_batch_size: 1024
- total_optimization_steps: 250
- starting_optimization_step: None
- finishing_optimization_step: 250
- num_train_dataset_workers: 32
- num_hosts: 32
- total_num_training_examples: 256,000
- steps_per_epoch: 97
- num_beams: None
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.98
- adam_epsilon: 1e-06
- dropout: True
- bpe_dropout_probability: 0.2
- activation_dropout_probability: 0.1

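To make these settings concrete, below is a minimal optax sketch of the schedule and optimizer they describe, assuming a JAX/Flax training setup (the NB-Whisper family is JAX-based). Warmup is not listed, so none is modelled; this is an illustrative reconstruction, not the actual training code:

```python
import optax

total_steps = 250  # total_optimization_steps

# Linear decay from the peak learning rate to zero over all steps.
schedule = optax.linear_schedule(
    init_value=7e-5,    # learning_rate
    end_value=0.0,
    transition_steps=total_steps,
)

# AdamW with the betas, epsilon, and weight decay listed above.
optimizer = optax.adamw(
    learning_rate=schedule,
    b1=0.9,             # adam_beta1
    b2=0.98,            # adam_beta2
    eps=1e-6,           # adam_epsilon
    weight_decay=0.01,  # weight_decay
)
```

The list also implies the batch arithmetic: 8 examples per device suggests 4 devices per host (8 x 4 = 32 per node), and 32 hosts x 32 per node gives the global batch of 1024 examples per optimization step.
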
### Training results

| step | validation_loss | train_loss | validation_wer | validation_cer | validation_exact_wer | validation_exact_cer |
|:----:|:---------------:|:----------:|:--------------:|:--------------:|:--------------------:|:--------------------:|
| 0    | 1.2831          | 1.1864     | 18.9083        | 11.8409        | 33.9801              | 15.0322              |
| 40   | 0.5952          | 0.4958     | 8.9760         | 2.9212         | 9.1099               | 2.9390               |
| 80   | 0.5848          | 0.4761     | 8.3105         | 2.6432         | 8.4330               | 2.6621               |
| 120  | 0.5831          | 0.4492     | 8.1204         | 2.5679         | 8.2356               | 2.5821               |
| 160  | 0.5811          | 0.4678     | 7.9302         | 2.5051         | 8.0438               | 2.5193               |
| 200  | 0.5840          | 0.4692     | 7.9861         | 2.5346         | 8.0945               | 2.5498               |
| 240  | 0.5844          | 0.4543     | 7.9246         | 2.5051         | 8.0381               | 2.5193               |
| 249  | 0.5839          | 0.4632     | 7.9358         | 2.5127         | 8.0494               | 2.5279               |

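The WER/CER numbers above can be reproduced in form with the Hugging Face `evaluate` library; the sketch below uses toy placeholder strings rather than NPSC data, and the normalization behind the `exact` variants is not specified in this card:

```python
import evaluate

# Word- and character-error-rate metrics from the evaluate hub.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Toy placeholders; a real evaluation would use model transcripts
# and NPSC reference transcripts.
predictions = ["det er et eksempel"]
references = ["det var et eksempel"]

# Scaled by 100 to match the percent-style values in the table.
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```
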
### Framework versions

- Transformers 4.36.2
- Datasets 2.16.1
- Tokenizers 0.15.0