
Example of how to use this model with WhisperX (https://github.com/m-bain/whisperX):

```python
import whisperx

device = "cuda"
audio_file = "oma_nauhoitus_16kHz.wav"
batch_size = 16        # reduce if low on GPU memory
compute_type = "float16"  # change to "int8" if low on GPU memory (may reduce accuracy)

# 1. Transcribe with original whisper (batched)
model = whisperx.load_model("Finnish-NLP/whisper-large-finnish-v3-ct2", device, compute_type=compute_type)

audio = whisperx.load_audio(audio_file)
result = model.transcribe(audio, batch_size=batch_size)
print(result["segments"])  # before alignment
```
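If you also want word-level timestamps, WhisperX can align the transcript in a second pass. The snippet below is a minimal sketch following the WhisperX README; `whisperx.load_align_model` picks a default alignment model for the detected language, and the exact choice for Finnish is left to the library.

```python
# 2. Align whisper output (optional, gives word-level timestamps)
model_a, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
result = whisperx.align(result["segments"], model_a, metadata, audio, device, return_char_alignments=False)

print(result["segments"])  # after alignment
```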

Example of how to use this model in Python with faster-whisper (https://github.com/SYSTRAN/faster-whisper):

```python
import faster_whisper

audio_path = "oma_nauhoitus_16kHz.wav"

model = faster_whisper.WhisperModel("Finnish-NLP/whisper-large-finnish-v3-ct2")
print("model loaded")

segments, info = model.transcribe(audio_path, word_timestamps=True, beam_size=5, language="fi")

for segment in segments:
    for word in segment.words:
        print("[%.2fs -> %.2fs] %s" % (word.start, word.end, word.word))
```