---
license: apache-2.0
base_model: openai/whisper-tiny.en
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper2
results: []
---
# whisper2
This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5233
- Wer: 31.1083
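The Wer figure above is the word error rate in percent. As a reference, a minimal stdlib-only sketch of the standard WER definition (word-level edit distance divided by the number of reference words):

```python
# Minimal word error rate (WER): word-level Levenshtein distance
# (substitutions + insertions + deletions) over reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> 16.67% WER.
print(round(100 * wer("the cat sat on the mat", "the cat sat on mat"), 2))  # → 16.67
```

Production evaluations typically use the `evaluate`/`jiwer` implementations, which also handle text normalization; this sketch only shows the core computation.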
## Model description
This model is a fine-tune of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en), the smallest English-only Whisper checkpoint (roughly 39M parameters), for automatic speech recognition. It was trained for 500 steps and reaches a word error rate of 31.11% on its evaluation set.
## Intended uses & limitations
The model transcribes English speech to text. Because the fine-tuning dataset is not documented, behavior on out-of-domain audio is unknown, and the final evaluation WER of about 31% means transcriptions should be reviewed before use in any accuracy-sensitive setting.
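A typical way to run inference is the `transformers` ASR pipeline. In this sketch, `openai/whisper-tiny.en` is a placeholder for this repository's model id, and the zero-filled array stands in for a real 16 kHz waveform:

```python
import numpy as np
from transformers import pipeline

# Load the checkpoint for English speech recognition.
# "openai/whisper-tiny.en" is a placeholder; substitute this
# repository's model id to use the fine-tuned weights.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")

# One second of 16 kHz audio (silence here, just to show the call shape);
# in practice pass a path to an audio file or a decoded waveform.
audio = {"array": np.zeros(16000, dtype=np.float32), "sampling_rate": 16000}
result = asr(audio)
print(result["text"])
```

The pipeline resamples and chunks input audio automatically; passing a file path instead of the dict works as well.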
## Training and evaluation data
The fine-tuning and evaluation dataset is not documented.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
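These hyperparameters roughly correspond to a `Seq2SeqTrainingArguments` configuration like the sketch below; the `output_dir` and the evaluation cadence are assumptions inferred from the card, not stated in it:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the training configuration implied by the card.
# output_dir and eval cadence are assumptions; the optimizer (Adam,
# betas=(0.9, 0.999), eps=1e-8) and linear scheduler are Trainer defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper2",          # assumed from the model name
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=500,                  # training_steps: 500
    evaluation_strategy="steps",
    eval_steps=10,                  # matches the 10-step cadence in the results table
)
```

Note that warmup spans all 500 training steps, so the learning rate never fully reaches 1e-05 before training ends.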
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 3.9553 | 0.1408 | 10 | 3.9646 | 74.8741 |
| 3.9548 | 0.2817 | 20 | 3.8794 | 77.6763 |
| 3.8127 | 0.4225 | 30 | 3.7405 | 76.4169 |
| 3.6178 | 0.5634 | 40 | 3.5547 | 75.3149 |
| 3.3992 | 0.7042 | 50 | 3.3235 | 70.2771 |
| 3.1416 | 0.8451 | 60 | 3.0402 | 67.8526 |
| 2.8052 | 0.9859 | 70 | 2.6852 | 65.9635 |
| 2.3513 | 1.1268 | 80 | 2.2235 | 68.3249 |
| 1.893 | 1.2676 | 90 | 1.6708 | 63.8224 |
| 1.2871 | 1.4085 | 100 | 1.1645 | 63.2557 |
| 0.9146 | 1.5493 | 110 | 0.8785 | 56.8325 |
| 0.8044 | 1.6901 | 120 | 0.7907 | 46.9773 |
| 0.6634 | 1.8310 | 130 | 0.7425 | 47.4811 |
| 0.6722 | 1.9718 | 140 | 0.7100 | 45.9068 |
| 0.6823 | 2.1127 | 150 | 0.6854 | 42.4118 |
| 0.5802 | 2.2535 | 160 | 0.6659 | 40.4282 |
| 0.6084 | 2.3944 | 170 | 0.6503 | 40.8375 |
| 0.6038 | 2.5352 | 180 | 0.6346 | 41.4987 |
| 0.5095 | 2.6761 | 190 | 0.6247 | 42.0340 |
| 0.5251 | 2.8169 | 200 | 0.6155 | 39.3577 |
| 0.5699 | 2.9577 | 210 | 0.6046 | 38.3501 |
| 0.4839 | 3.0986 | 220 | 0.5945 | 37.2796 |
| 0.4843 | 3.2394 | 230 | 0.5861 | 48.3942 |
| 0.4538 | 3.3803 | 240 | 0.5794 | 34.6662 |
| 0.4741 | 3.5211 | 250 | 0.5737 | 33.8161 |
| 0.4542 | 3.6620 | 260 | 0.5663 | 41.9710 |
| 0.4163 | 3.8028 | 270 | 0.5623 | 46.0957 |
| 0.3496 | 3.9437 | 280 | 0.5605 | 42.2544 |
| 0.3835 | 4.0845 | 290 | 0.5557 | 41.6562 |
| 0.3462 | 4.2254 | 300 | 0.5507 | 36.3980 |
| 0.3133 | 4.3662 | 310 | 0.5452 | 42.5693 |
| 0.3638 | 4.5070 | 320 | 0.5435 | 35.9572 |
| 0.3826 | 4.6479 | 330 | 0.5396 | 31.9584 |
| 0.3581 | 4.7887 | 340 | 0.5361 | 33.7846 |
| 0.3127 | 4.9296 | 350 | 0.5339 | 37.3426 |
| 0.2988 | 5.0704 | 360 | 0.5348 | 38.7280 |
| 0.2807 | 5.2113 | 370 | 0.5344 | 35.5164 |
| 0.2612 | 5.3521 | 380 | 0.5305 | 34.6662 |
| 0.2762 | 5.4930 | 390 | 0.5306 | 32.2733 |
| 0.299 | 5.6338 | 400 | 0.5267 | 36.8703 |
| 0.2718 | 5.7746 | 410 | 0.5232 | 41.6877 |
| 0.2618 | 5.9155 | 420 | 0.5208 | 34.0995 |
| 0.2121 | 6.0563 | 430 | 0.5220 | 28.0542 |
| 0.1929 | 6.1972 | 440 | 0.5256 | 35.7997 |
| 0.2504 | 6.3380 | 450 | 0.5296 | 32.8715 |
| 0.2064 | 6.4789 | 460 | 0.5265 | 35.3904 |
| 0.2044 | 6.6197 | 470 | 0.5267 | 38.3186 |
| 0.1844 | 6.7606 | 480 | 0.5231 | 35.1071 |
| 0.1867 | 6.9014 | 490 | 0.5235 | 31.5806 |
| 0.1562 | 7.0423 | 500 | 0.5233 | 31.1083 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1.dev0
- Tokenizers 0.19.1
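To approximate this environment with pip (Datasets was a `2.19.1.dev0` source build, so the nearest released version is substituted here as an assumption):

```shell
pip install "transformers==4.40.1" "datasets==2.19.1" "tokenizers==0.19.1"
pip install "torch==2.2.1" --index-url https://download.pytorch.org/whl/cu121
```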