Whisper Medium Romanian

This model is a fine-tuned version of openai/whisper-medium on the Common Voice 11.0 dataset and the Romanian speech synthesis corpus. It achieves the following results on the evaluation set (WER in percent; a sketch of how such a score is computed follows the list):

  • eval_loss: 0.06453
  • eval_wer: 4.717
  • epoch: 7.03
  • step: 3500
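A WER score like the one above can be computed with the 🤗 evaluate library. This is a minimal sketch with placeholder sentences, not the exact evaluation script used for this model:

import evaluate

# WER compares predicted transcripts against reference transcripts;
# the sentences below are placeholders, not real model outputs
wer_metric = evaluate.load("wer")
predictions = ["o propoziție transcrisă de model"]
references = ["o propoziție de referință"]
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.3f}%")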

Model description

The architecture is the same as openai/whisper-medium; fine-tuning only updated the weights (764M parameters, stored in float32).
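Since only the weights differ, the checkpoint's configuration still describes Whisper medium's dimensions. A quick check (assuming access to the Hugging Face Hub):

from transformers import WhisperConfig

# Whisper medium: 24 encoder layers, 24 decoder layers, hidden size 1024
config = WhisperConfig.from_pretrained("gigant/whisper-medium-romanian")
print(config.encoder_layers, config.decoder_layers, config.d_model)  # 24 24 1024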

Training and evaluation data

The model was trained on the Common Voice 11.0 dataset (train + validation + other splits) combined with the Romanian speech synthesis corpus, and evaluated on the test split of the Common Voice 11.0 dataset.
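The training splits could be assembled with 🤗 datasets roughly as follows; mozilla-foundation/common_voice_11_0 is the Hub id of Common Voice 11.0, and the exact preprocessing used for this run is not part of the card, so treat this as a sketch:

from datasets import load_dataset, concatenate_datasets

# merge the train, validation and other splits of Romanian Common Voice 11.0;
# the Romanian speech synthesis corpus would be concatenated the same way
splits = load_dataset("mozilla-foundation/common_voice_11_0", "ro",
                      split=["train", "validation", "other"])
train_data = concatenate_datasets(splits)

# the test split is held out for evaluation
test_data = load_dataset("mozilla-foundation/common_voice_11_0", "ro", split="test")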

Usage

Inference with 🤗 transformers

from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import Audio, load_dataset

# load model and processor
processor = WhisperProcessor.from_pretrained("gigant/whisper-medium-romanian")
model = WhisperForConditionalGeneration.from_pretrained("gigant/whisper-medium-romanian")

# stream the Romanian Common Voice test split and resample the audio to 16 kHz
ds = load_dataset("common_voice", "ro", split="test", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
input_speech = next(iter(ds))["audio"]["array"]

# force Romanian transcription (rather than language detection or translation)
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="ro", task="transcribe")

# extract log-Mel input features, generate token ids and decode them to text
input_features = processor(input_speech, return_tensors="pt", sampling_rate=16_000).input_features
predicted_ids = model.generate(input_features, max_length=448)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)

The code was adapted from the openai/whisper-medium model card.
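On recent versions of transformers, the same inference can be done with the high-level pipeline API, which also handles audio longer than Whisper's 30-second window and lets you pass the language and task directly instead of setting forced_decoder_ids. A minimal sketch ("audio.wav" is a placeholder path):

from transformers import pipeline

# the pipeline wraps feature extraction, generation and decoding in one call
asr = pipeline(
    "automatic-speech-recognition",
    model="gigant/whisper-medium-romanian",
    chunk_length_s=30,  # Whisper operates on 30-second windows
)
result = asr("audio.wav", generate_kwargs={"language": "ro", "task": "transcribe"})
print(result["text"])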

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after the list for the equivalent Seq2SeqTrainingArguments):

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
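These settings map onto transformers' Seq2SeqTrainingArguments roughly as sketched below; output_dir is an illustrative placeholder, and the Adam betas and epsilon listed above are the Trainer defaults, so they need not be set explicitly:

from transformers import Seq2SeqTrainingArguments

# a sketch matching the hyperparameters listed above;
# output_dir is a placeholder, not taken from the original run
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-romanian",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    seed=42,
    fp16=True,  # mixed-precision training (native AMP)
)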

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 1.13.0+cu117
  • Datasets 2.7.1.dev0
  • Tokenizers 0.13.2
