Extremely slow, feels like running large-v2

#22
by ThanhHuyLe

The process is extremely slow, even for English transcription. Previously, with large-v2 and the legacy code, I could watch the transcribed text appear step by step with timestamps; with the pipeline I don't know how to get that behaviour. I run on CPU, not a GPU, so I'm not sure whether that is the root cause. My pipeline code is below, after a short sketch of the legacy mode I mean.
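(For reference: the legacy step-by-step behaviour presumably refers to openai-whisper's verbose mode, which prints each segment with its timestamps as it is decoded; a minimal sketch, reusing "C:/a.mp3" as a placeholder input:)

import whisper

# legacy openai-whisper: verbose=True prints each segment, with its
# timestamps, while decoding is still in progress
legacy_model = whisper.load_model("large-v2")
legacy_model.transcribe("C:/a.mp3", verbose=True)

Here is my code to transcribe or translate with the pipeline: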

import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset


# fall back to CPU and float32 when no CUDA device is available
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "openai/whisper-large-v3-turbo"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    torch_dtype=torch_dtype,
    device=device,
)

generate_kwargs = {
    "max_new_tokens": 400,
    "num_beams": 1,  # greedy search
    "condition_on_prev_tokens": False,
    "compression_ratio_threshold": 1.35,  # zlib compression ratio threshold (in token space)
    "temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),  # fallback schedule: segments failing the thresholds are re-decoded at higher temperatures, each retry costing another pass
    "logprob_threshold": -1.0,
    "no_speech_threshold": 0.6,
    "return_timestamps": True,
    "language": "english",
    "task": "transcribe",
}
# NOTE: this example dataset is loaded but `sample` is never used below,
# so it only adds startup time
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe("C:/a.mp3", generate_kwargs=generate_kwargs)
print(result["chunks"])

Any ideas to improve this code? Thanks a lot.
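One commonly suggested lever for speed here is the chunked long-form mode, which splits the audio into fixed windows and batches them instead of decoding strictly sequentially. A minimal sketch, reusing the model and processor objects from the code above (the chunk and batch sizes are illustrative, not tuned values):

# chunked long-form decoding: 30 s windows transcribed in batches
pipe_chunked = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    chunk_length_s=30,  # window length in seconds (illustrative)
    batch_size=8,       # chunks per forward pass; lower this on CPU if memory is tight
    torch_dtype=torch_dtype,
    device=device,
)

result = pipe_chunked("C:/a.mp3", return_timestamps=True)
for chunk in result["chunks"]:
    # each chunk carries (start, end) timestamps alongside its text
    print(chunk["timestamp"], chunk["text"])

Whether this helps much on CPU is a separate question, but it at least yields chunk-level timestamps comparable to the legacy step-by-step output.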

One more thing: large-v3-turbo doesn't work for translating Japanese to English when I test with this command:

whisper --model turbo --task translate --language Japanese --output_format srt --output_dir "<folder_path>"
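(For comparison, the equivalent translation request through the Transformers pipeline goes via generate_kwargs; a minimal sketch, assuming the pipe object from above and a placeholder Japanese audio file:)

# minimal sketch: Japanese -> English translation via the same pipeline
# ("japanese_sample.mp3" is a hypothetical placeholder, not a file from this post)
result = pipe(
    "japanese_sample.mp3",
    generate_kwargs={"task": "translate", "language": "japanese"},
)
print(result["text"])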
