Whisper large-v3 fine-tuned with PEFT LoRA on a large collection of Vietnamese speech datasets.

TODO:

- finish training, then publish the checkpoint
- evaluate WER on Common Voice, FLEURS & VIVOS

Training setup: 3.6k steps, 5% warm-up, batch size 16×2 (Kaggle free tier, 2× T4 GPUs), training 3.6% of the 1.6B parameters.
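
For reference, a minimal sketch of a comparable LoRA setup with the `peft` library; the rank, alpha, dropout and target modules below are illustrative assumptions, not the exact values used for this checkpoint:

```python
# pip install peft transformers
from peft import LoraConfig, get_peft_model
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
lora_config = LoraConfig(  # hyper-parameters are illustrative guesses, not the actual training config
    r=32, lora_alpha=64, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common choice for whisper
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should report a few % of the 1.6B params as trainable
```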

WER manually evaluated on the Vietnamese part of each test set, at float16:

| model | CommonVoice v16.1 | FLEURS | VIVOS |
|---|---|---|---|
| original whisper-large-v3 | 16.2% | 8.3% | 12.3% |
| this LoRA | 14.7% | 14.7% | 9.4% |
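
The numbers above are word error rates; a minimal sketch of how WER can be computed with the `evaluate` library (the sample transcripts are made up for illustration):

```python
# pip install evaluate jiwer
import evaluate

wer_metric = evaluate.load("wer")
references = ["xin chào mọi người", "hôm nay trời đẹp"]  # ground-truth transcripts (made-up examples)
predictions = ["xin chào mọi người", "hôm nay trời đẹp quá"]  # model outputs (made-up examples)
print(f"WER: {100 * wer_metric.compute(references=references, predictions=predictions):.1f}%")
```

Note that text normalization (casing, punctuation) can noticeably change WER, so exact numbers depend on the evaluation script.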

All training & evaluation scripts are in my repo: https://github.com/phineas-pta/fine-tune-whisper-vi

Usage example:

```python
# pip install peft accelerate bitsandbytes
import torch
import torchaudio
from peft import PeftModel, PeftConfig
from transformers import WhisperForConditionalGeneration, WhisperFeatureExtractor, WhisperTokenizer

PEFT_MODEL_ID = "doof-ferb/whisper-large-peft-lora-vi"
BASE_MODEL_ID = PeftConfig.from_pretrained(PEFT_MODEL_ID).base_model_name_or_path

FEATURE_EXTRACTOR = WhisperFeatureExtractor.from_pretrained(BASE_MODEL_ID)
TOKENIZER = WhisperTokenizer.from_pretrained(BASE_MODEL_ID)

# load the base model in float16, apply the adapter,
# then merge the LoRA weights into the base model for faster inference
MODEL = PeftModel.from_pretrained(
    WhisperForConditionalGeneration.from_pretrained(BASE_MODEL_ID, torch_dtype=torch.float16).to("cuda:0"),
    PEFT_MODEL_ID
).merge_and_unload(progressbar=True)

# forced decoder prompt: transcribe Vietnamese, without timestamps
DECODER_ID = torch.tensor(
    TOKENIZER.convert_tokens_to_ids(["<|startoftranscript|>", "<|vi|>", "<|transcribe|>", "<|notimestamps|>"]),
    device=MODEL.device
).unsqueeze(dim=0)

waveform, sampling_rate = torchaudio.load("audio.mp3")
if waveform.size(0) > 1:  # downmix stereo to mono
    waveform = waveform.mean(dim=0, keepdim=True)
if sampling_rate != 16_000:  # whisper expects 16 kHz audio
    waveform = torchaudio.functional.resample(waveform, sampling_rate, 16_000)
    sampling_rate = 16_000

# the feature extractor expects a 1-D array, not a (channel, time) tensor
inputs = FEATURE_EXTRACTOR(waveform.squeeze(dim=0).numpy(), sampling_rate=sampling_rate, return_tensors="pt").to(MODEL.device)
with torch.inference_mode(), torch.autocast(device_type="cuda"):  # autocast bridges the fp32 features & fp16 weights
    predicted_ids = MODEL.generate(input_features=inputs.input_features, decoder_input_ids=DECODER_ID)

print(TOKENIZER.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```
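
Since `merge_and_unload()` folds the adapter into the base model, the merged model can optionally be saved once and reloaded later as a plain Whisper checkpoint, skipping the merge step; the output directory below is arbitrary:

```python
MERGED_DIR = "./whisper-large-v3-vi-merged"  # arbitrary local path
MODEL.save_pretrained(MERGED_DIR)
TOKENIZER.save_pretrained(MERGED_DIR)
FEATURE_EXTRACTOR.save_pretrained(MERGED_DIR)
# reload later with: WhisperForConditionalGeneration.from_pretrained(MERGED_DIR)
```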