---
license: apache-2.0
datasets:
  - google/fleurs
  - mozilla-foundation/common_voice_16_1
  - vivos
  - doof-ferb/vlsp2020_vinai_100h
  - doof-ferb/fpt_fosd
  - doof-ferb/infore1_25hours
language: ["vi"]
library_name: peft
base_model: openai/whisper-large-v3
pipeline_tag: automatic-speech-recognition
metrics: ["wer"]
model-index:
- name: doof-ferb/whisper-large-peft-lora-vi
  results:
  - task:
      type: automatic-speech-recognition
    dataset:
      type: mozilla-foundation/common_voice_16_1
      name: Mozilla CommonVoice (Vietnamese) v16.1
      config: vi
      split: test
    metrics:
      - type: wer
        value: 14.7
        verified: false
  - task:
      type: automatic-speech-recognition
    dataset:
      type: google/fleurs
      name: Google FLEURS (Vietnamese)
      config: vi_vn
      split: test
    metrics:
      - type: wer
        value: 14.7
        verified: false
  - task:
      type: automatic-speech-recognition
    dataset:
      type: vivos
      name: ĐHQG TPHCM VIVOS
      split: test
    metrics:
      - type: wer
        value: 9.4
        verified: false
---

whisper large-v3 PEFT LoRA fine-tuned on a large collection of Vietnamese speech datasets

TODO:
- [x] train, then publish checkpoint
- [x] evaluate WER on Common Voice, FLEURS & VIVOS

3.6k steps, 5% warm-up, batch size 16×2 (free Kaggle T4×2), training 3.6% of the 1.6B params
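
For reference, this roughly corresponds to a standard PEFT LoRA recipe for Whisper. A minimal sketch is below; the LoRA rank, alpha and target modules are illustrative assumptions, not the published hyper-parameters (the actual scripts are in the GitHub repo linked below):

```python
# a minimal sketch, NOT the actual training script;
# LoRA rank/alpha/target modules are illustrative assumptions
from peft import LoraConfig, get_peft_model
from transformers import Seq2SeqTrainingArguments, WhisperForConditionalGeneration

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
lora = LoraConfig(r=32, lora_alpha=64, lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only a few percent of the 1.6B params are trainable

args = Seq2SeqTrainingArguments(
	output_dir="./whisper-large-peft-lora-vi",
	max_steps=3_600,                 # ≈3.6k steps
	warmup_ratio=0.05,               # 5% warm-up
	per_device_train_batch_size=16,  # ×2 GPUs → "16×2"
	fp16=True,
	remove_unused_columns=False,     # keep pre-computed features when training a PEFT model
	label_names=["labels"],
)
```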

WER manually evaluated on the Vietnamese test splits:
| model (WER @ `float16`) | `CommonVoice v16.1` | `FLEURS` | `VIVOS` |
|---|---|---|---|
| original `whisper-large-v3` | 16.2% | 8.3% | 12.3% |
| this LoRA | 14.7% | 14.7% | 9.4% |
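
These scores come from the evaluation scripts in the repo linked below; a minimal sketch of the WER computation with the `evaluate` library (the transcripts here are placeholders, not real outputs):

```python
# a minimal sketch of the WER computation, assuming `predictions` / `references`
# hold the model outputs and the ground-truth transcripts of a test split
import evaluate  # pip install evaluate jiwer

wer_metric = evaluate.load("wer")
predictions = ["xin chào việt nam"]  # placeholder model output
references = ["xin chào Việt Nam"]   # placeholder reference transcript
print(f"WER = {100 * wer_metric.compute(predictions=predictions, references=references):.1f}%")
```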

all training & evaluation scripts are in my repo: https://github.com/phineas-pta/fine-tune-whisper-vi

usage example:
```python
# pip install transformers peft accelerate torchaudio
import torch
import torchaudio
from peft import PeftModel, PeftConfig
from transformers import WhisperForConditionalGeneration, WhisperFeatureExtractor, WhisperTokenizer

PEFT_MODEL_ID = "doof-ferb/whisper-large-peft-lora-vi"
BASE_MODEL_ID = PeftConfig.from_pretrained(PEFT_MODEL_ID).base_model_name_or_path

FEATURE_EXTRACTOR = WhisperFeatureExtractor.from_pretrained(BASE_MODEL_ID)
TOKENIZER = WhisperTokenizer.from_pretrained(BASE_MODEL_ID)

MODEL = PeftModel.from_pretrained(
	WhisperForConditionalGeneration.from_pretrained(BASE_MODEL_ID, torch_dtype=torch.float16).to("cuda:0"),
	PEFT_MODEL_ID
).merge_and_unload(progressbar=True)

DECODER_ID = torch.tensor(
	TOKENIZER.convert_tokens_to_ids(["<|startoftranscript|>", "<|vi|>", "<|transcribe|>", "<|notimestamps|>"]),
	device=MODEL.device
).unsqueeze(dim=0)

waveform, sampling_rate = torchaudio.load("audio.mp3")
if waveform.size(0) > 1:  # convert stereo to mono
	waveform = waveform.mean(dim=0, keepdim=True)
if sampling_rate != 16_000:  # whisper only accepts 16 kHz audio
	waveform = torchaudio.functional.resample(waveform, orig_freq=sampling_rate, new_freq=16_000)
	sampling_rate = 16_000

inputs = FEATURE_EXTRACTOR(waveform.squeeze().numpy(), sampling_rate=sampling_rate, return_tensors="pt").to(MODEL.device)
with torch.inference_mode(), torch.autocast(device_type="cuda"):  # autocast to match the float16 weights
	predicted_ids = MODEL.generate(input_features=inputs.input_features, decoder_input_ids=DECODER_ID)

print(TOKENIZER.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```
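
The snippet above handles a single clip of at most 30 seconds (the whisper feature extractor pads/truncates to one 30-second window). For longer recordings, a chunked `pipeline` can be wrapped around the merged model, roughly as follows (a sketch; decoding MP3 through the pipeline requires ffmpeg):

```python
# a minimal sketch: chunked long-form transcription reusing MODEL / TOKENIZER / FEATURE_EXTRACTOR from above
from transformers import pipeline

ASR = pipeline(
	"automatic-speech-recognition",
	model=MODEL, tokenizer=TOKENIZER, feature_extractor=FEATURE_EXTRACTOR,
	chunk_length_s=30, device="cuda:0",
)
print(ASR("long_audio.mp3", generate_kwargs={"language": "vi", "task": "transcribe"})["text"])
```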