# whisper-v3-turbo-id

This model is a fine-tuned version of openai/whisper-v3-turbo on the common_voice_17_0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.1760
- Wer: 9.1737
## Model description

Fine-tuned from openai/whisper-v3-turbo.
## Intended uses & limitations

This model was trained only on Common Voice version 17.0.
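For reference, below is a minimal inference sketch using the `transformers` ASR pipeline; the audio file name, dtype, and chunking settings are illustrative assumptions, not values taken from this card.

```python
# Minimal inference sketch. Assumes the standard transformers ASR pipeline API;
# "sample.wav" is a hypothetical audio file, and half precision / GPU use are assumptions.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ayameRushia/whisper-v3-turbo-id",
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# chunk_length_s splits long recordings into Whisper's native 30-second windows
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```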
## Training procedure

### Preprocess data

Transcripts in the `sentence` column were normalized before training:
```python
import re

# characters to delete outright
chars_to_ignore_regex = '[\,\?\.\!\;\:\"\”\’\'\“\(\)\[\\\\&/!\‘]'
# characters to replace with a space
chars_to_space_regex = '[\–\—\-]'

def remove_special_characters(batch):
    # drop punctuation, lowercase, and turn dashes into spaces
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    batch["sentence"] = re.sub(chars_to_space_regex, ' ', batch["sentence"]) + " "
    # map a few accented characters to their ASCII equivalents
    batch["sentence"] = batch["sentence"].replace("é", "e").replace("á", "a").replace("ł", "l").replace("ń", "n").replace("ō", "o").strip()
    return batch

common_voice = common_voice.map(remove_special_characters)
```
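The `common_voice` object referenced above is assumed to be a `datasets` `DatasetDict`. A hedged sketch of how it might be built follows; the Indonesian (`"id"`) configuration, the split choice, and the 16 kHz resampling are assumptions based on the model name and Whisper's input requirements, not statements in this card. Note that the Common Voice dataset is gated, so loading it requires accepting its terms on the Hub.

```python
# Hedged sketch of building the `common_voice` object used above.
# The "id" (Indonesian) config and the split choice are assumptions, not stated in the card.
from datasets import Audio, DatasetDict, load_dataset

common_voice = DatasetDict()
common_voice["train"] = load_dataset(
    "mozilla-foundation/common_voice_17_0", "id", split="train+validation"
)
common_voice["test"] = load_dataset(
    "mozilla-foundation/common_voice_17_0", "id", split="test"
)

# Whisper's feature extractor expects 16 kHz audio
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
```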
### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto `Seq2SeqTrainingArguments`):
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 3000
- mixed_precision_training: Native AMP
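The sketch below shows one possible mapping of these values onto `transformers`' `Seq2SeqTrainingArguments`; `output_dir`, the evaluation cadence, and `predict_with_generate` are illustrative assumptions rather than values reported in this card.

```python
# Hedged mapping of the listed hyperparameters onto Seq2SeqTrainingArguments.
# output_dir, eval cadence, and predict_with_generate are assumptions, not from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-v3-turbo-id",  # assumption
    learning_rate=1.5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=3000,
    fp16=True,                           # "Native AMP" mixed precision
    eval_strategy="steps",               # assumption: evaluate every 1000 steps, matching the results table
    eval_steps=1000,
    predict_with_generate=True,          # assumption: needed to compute WER during evaluation
    # optimizer left at the default (Adam-style, betas=(0.9, 0.999), eps=1e-08)
)
```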
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0706        | 1.9231 | 1000 | 0.2361          | 18.0484 |
| 0.0099        | 3.8462 | 2000 | 0.1875          | 10.3607 |
| 0.0010        | 5.7692 | 3000 | 0.1760          | 9.1737  |
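For reference, WER values like those above are commonly computed with the `evaluate` library during evaluation. The sketch below is a generic Whisper-style `compute_metrics` function, an assumption about the setup rather than code from this training run.

```python
# Hedged sketch of a Whisper-style WER metric; an assumption, not code from this training run.
import evaluate
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("ayameRushia/whisper-v3-turbo-id")
wer_metric = evaluate.load("wer")

def compute_metrics(pred):
    pred_ids = pred.predictions
    label_ids = pred.label_ids
    # padded label positions are set to -100 during training; restore the pad token before decoding
    label_ids[label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = processor.batch_decode(label_ids, skip_special_tokens=True)
    return {"wer": 100 * wer_metric.compute(predictions=pred_str, references=label_str)}
```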
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1