jonatasgrosman committed
Commit ddf9040
Parents: d7533a6 c004c7a

Merge branch 'main' of https://huggingface.co/jonatasgrosman/whisper-large-pt-cv11 into main

Files changed (1)
  1. README.md +0 -46
README.md CHANGED
@@ -39,49 +39,3 @@ This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingf
 It achieves the following results on the evaluation set:
 - WER: 4.816664144852979
 - CER: 1.6052355927195898
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 5e-06
-- train_batch_size: 16
-- eval_batch_size: 8
-- seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 32
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 2000
-- training_steps: 20000
-- mixed_precision_training: Native AMP
-
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    |
-|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
-| 0.1051        | 1.24  | 1000 | 0.1501          | 5.1922 | 1.5979 |
-| 0.0682        | 2.47  | 2000 | 0.1589          | 5.7523 | 1.8633 |
-| 0.0489        | 3.71  | 3000 | 0.1631          | 5.3588 | 1.6819 |
-| 0.0309        | 4.94  | 4000 | 0.1707          | 5.2831 | 1.6819 |
-
-### Framework versions
-
-- Transformers 4.26.0.dev0
-- Pytorch 1.13.1+cu117
-- Datasets 2.7.1.dev0
-- Tokenizers 0.13.2
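The deleted hyperparameter list maps naturally onto 🤗 Transformers training arguments. As a hedged sketch only (this is not the author's actual training script, and `output_dir` is a placeholder), the listed settings would correspond to roughly:

```python
# Sketch: the removed hyperparameter list expressed as Transformers
# Seq2SeqTrainingArguments. Assumption: the model was trained with the
# Trainer API; output_dir is a placeholder, not the real path.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-pt-cv11",  # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    max_steps=20000,
    fp16=True,  # mixed_precision_training: Native AMP
)
```

This reproduces every value in the deleted list; the `total_train_batch_size: 32` entry is derived (16 per device × 2 accumulation steps), not a separate argument.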
 