
Malaysian Finetune Whisper Small

Finetuned Whisper Small on Malaysian datasets:

  1. IMDA STT, https://huggingface.co/datasets/mesolitica/IMDA-STT
  2. Pseudolabel Malaysian youtube videos, https://huggingface.co/datasets/mesolitica/pseudolabel-malaysian-youtube-whisper-large-v3
  3. Malay Conversational Speech Corpus, https://huggingface.co/datasets/malaysia-ai/malay-conversational-speech-corpus
  4. Haqkiem TTS Dataset, this is private, but you can request access from https://www.linkedin.com/in/haqkiem-daim/
  5. Pseudolabel Nusantara audiobooks, https://huggingface.co/datasets/mesolitica/nusantara-audiobook

Script at https://github.com/mesolitica/malaya-speech/tree/malaysian-speech/session/whisper

Wandb at https://wandb.ai/huseinzol05/malaysian-whisper-small?workspace=user-huseinzol05

Wandb report at https://wandb.ai/huseinzol05/malaysian-whisper-base/reports/Finetune-Whisper--Vmlldzo2Mzg2NDgx

Which languages did we finetune on?

  1. ms, Malay: both standard Malay and local (colloquial) Malay.
  2. en, English: both standard English and Manglish.

how-to

from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq, pipeline
from datasets import Audio
import requests

sr = 16000
audio = Audio(sampling_rate=sr)

processor = AutoProcessor.from_pretrained("mesolitica/malaysian-whisper-small")
model = AutoModelForSpeechSeq2Seq.from_pretrained("mesolitica/malaysian-whisper-small")

# Download a test audio file and decode it to a 16 kHz waveform.
r = requests.get('https://huggingface.co/datasets/huseinzol05/malaya-speech-stt-test-set/resolve/main/test.mp3')
y = audio.decode_example(audio.encode_example(r.content))['array']

# Extract log-mel features and transcribe in Malay.
inputs = processor([y], sampling_rate=sr, return_tensors='pt')
r = model.generate(inputs['input_features'], language='ms', return_timestamps=True)
processor.tokenizer.decode(r[0])
'<|startoftranscript|><|ms|><|transcribe|> Zamily On Aging di Vener Australia, Australia yang telah diadakan pada tahun 1982 dan berasaskan unjuran tersebut maka jabatan perangkaan Malaysia menganggarkan menjelang tahun 2005 sejumlah 15% penduduk kita adalah daripada kalangan warga emas. Untuk makluman Tuan Yang Pertua dan juga Alian Bohon, pembangunan sistem pendafiran warga emas ataupun kita sebutkan event adalah usaha kerajaan ke arah merealisasikan objektif yang telah digangkatkan<|endoftext|>'
# Transcribe the same audio in English.
r = model.generate(inputs['input_features'], language='en', return_timestamps=True)
processor.tokenizer.decode(r[0])
'<|startoftranscript|><|en|><|transcribe|> Assembly on Aging, Divina Australia, Australia, which has been provided in 1982 and the operation of the transportation of Malaysia's implementation to prevent the tourism of the 25th, 15% of our population is from the market. For the information of the President and also the respected, the development of the market system or we have made an event.<|endoftext|>'
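
The pipeline import above can also be used as a higher-level alternative to calling generate() directly. The snippet below is a minimal sketch, reusing the loaded model, processor, and the waveform y from the example; the parameter choices are assumptions, not part of the card.

# Minimal sketch (assumed usage): wrap the same model and processor
# in the transformers automatic-speech-recognition pipeline.
asr = pipeline(
    'automatic-speech-recognition',
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
# `y` is the 16 kHz waveform decoded above; `language` is forwarded to generate().
out = asr(y, generate_kwargs={'language': 'ms'})
print(out['text'])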

how to predict longer audio?

You need to chunk the audio into 30-second segments and transcribe each segment separately, as sketched below.
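
A minimal sketch of that approach, reusing processor, model, sr, and the waveform y from the how-to above; joining the chunk transcriptions with spaces is an assumption.

# Minimal sketch: split the waveform into 30-second windows and
# transcribe each window separately, then join the pieces.
chunk_size = 30 * sr  # 30 seconds at 16 kHz
texts = []
for start in range(0, len(y), chunk_size):
    chunk = y[start:start + chunk_size]
    inputs = processor([chunk], sampling_rate=sr, return_tensors='pt')
    out = model.generate(inputs['input_features'], language='ms')
    texts.append(processor.tokenizer.decode(out[0], skip_special_tokens=True))
print(' '.join(texts))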
