---
license: apache-2.0
tags:
- generated_from_trainer
base_model: openai/whisper-medium
datasets:
- generator
metrics:
- wer
model-index:
- name: whisper-medium-ach
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: generator
type: generator
config: default
split: train
args: default
metrics:
- type: wer
value: 62.298387096774185
name: Wer
---
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/bakera-sunbird/huggingface/runs/pkvclqhs)
# whisper-medium-ach
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the generator dataset ("generator" is the placeholder name the Trainer records when training data is supplied through a Python generator rather than a named dataset).
It achieves the following results on the evaluation set:
- Loss: 0.3501
- WER: 62.2984
## Model description
More information needed
## Intended uses & limitations
More information needed
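In the absence of documented usage guidance, the sketch below shows one way to run transcription with the 🤗 Transformers `pipeline`. It is a minimal example, assuming the model is hosted at `akera/whisper-medium-ach` and that a local `audio.wav` file exists; both identifiers are illustrative placeholders, not values confirmed by this card.

```python
# Minimal transcription sketch. The repo id and audio path below are
# assumptions for illustration, not values recorded in this model card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="akera/whisper-medium-ach",  # assumed repository id
    chunk_length_s=30,                 # Whisper processes audio in 30 s windows
)

print(asr("audio.wav")["text"])
```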
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
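As a point of reference, the hyperparameters above map onto `Seq2SeqTrainingArguments` roughly as sketched below. This is a reconstruction from the logged values, not the original training script; the Adam betas and epsilon listed above are the optimizer defaults, and `output_dir` is a placeholder.

```python
# Approximate reconstruction of the logged hyperparameters.
# output_dir is a placeholder; Adam betas/epsilon match the defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-ach",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```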
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.081 | 0.05 | 200 | 0.4844 | 138.8609 |
| 0.592 | 1.0248 | 400 | 0.3859 | 154.4355 |
| 0.5445 | 1.0748 | 600 | 0.3434 | 146.1694 |
| 0.3446 | 2.0495 | 800 | 0.3272 | 163.9113 |
| 0.2614 | 3.0242 | 1000 | 0.3098 | 86.2903 |
| 0.2542 | 3.0743 | 1200 | 0.3414 | 91.4315 |
| 0.1972 | 4.049 | 1400 | 0.3289 | 89.3145 |
| 0.1172 | 5.0237 | 1600 | 0.3224 | 100.1008 |
| 0.1226 | 5.0738 | 1800 | 0.3377 | 72.3286 |
| 0.0721 | 6.0485 | 2000 | 0.3277 | 105.8972 |
| 0.0504 | 7.0232 | 2200 | 0.3483 | 80.1411 |
| 0.0503 | 7.0732 | 2400 | 0.3514 | 95.0101 |
| 0.0375 | 8.048 | 2600 | 0.3378 | 64.5665 |
| 0.0348 | 9.0228 | 2800 | 0.3492 | 122.5806 |
| 0.0338 | 9.0727 | 3000 | 0.3502 | 88.6089 |
| 0.0273 | 10.0475 | 3200 | 0.3554 | 88.2560 |
| 0.0194 | 11.0222 | 3400 | 0.3501 | 62.2984 |
| 0.0165 | 11.0723 | 3600 | 0.3478 | 73.3871 |
| 0.0117 | 12.047 | 3800 | 0.3618 | 74.2440 |
| 0.0125 | 13.0218 | 4000 | 0.3587 | 97.6815 |
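WER values here are percentages; figures above 100 are possible early in training, when the hypothesis contains more insertions and substitutions than the reference has words. A figure can be reproduced with the `evaluate` library as sketched below; the strings are illustrative placeholders, not samples from the evaluation set.

```python
# Word error rate, reported as a percentage to match the table above.
# The prediction/reference strings are illustrative placeholders.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["the transcribed hypothesis text"]
references = ["the reference transcript text"]
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```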
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.0
- Datasets 2.19.1.dev0
- Tokenizers 0.19.1