Usage
Whisper large-v3 is supported in Hugging Face 🤗 Transformers via the main branch of the Transformers repository. To run the model, first install Transformers from the GitHub repo, along with 🤗 Accelerate and 🤗 Datasets (used below to load example audio):
```bash
pip install --upgrade pip
pip install --upgrade git+https://github.com/huggingface/transformers.git accelerate datasets[audio]
```
The model can be used with the `pipeline` class to transcribe audio files of arbitrary length. Transformers uses a chunked algorithm to transcribe long-form audio files, which in practice is 9x faster than the sequential algorithm proposed by OpenAI (see Table 7 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430)). The batch size should be set based on the specifications of your device:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

# Use the GPU and half precision when available, otherwise fall back to CPU
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "zwan074/maori_ASR"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=30,  # chunked long-form algorithm: 30-second windows
    batch_size=16,      # reduce if you run out of GPU memory
    return_timestamps=True,
    torch_dtype=torch_dtype,
    device=device,
)
```
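To try the pipeline end to end, you can load a sample clip from any Hugging Face dataset with an `audio` column (which is why `load_dataset` is imported above). The dataset below is a common English demo set from the Transformers docs; substitute your own Māori audio where available:

```python
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```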
To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:
```python
result = pipe("audio.mp3")
print(result["text"])
```
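Because the pipeline above was created with `return_timestamps=True`, the result also contains chunk-level timestamps alongside the full transcription. A minimal sketch of reading them, using the standard pipeline output format:

```python
result = pipe("audio.mp3")

# Each chunk carries a (start, end) pair in seconds plus its text;
# the end timestamp may be None if the final segment is cut off.
for chunk in result["chunks"]:
    start, end = chunk["timestamp"]
    print(f"[{start} -> {end}] {chunk['text']}")
```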
Whisper predicts the language of the source audio automatically. If the source audio language is known a priori, it can be passed as an argument to the pipeline:
```python
result = pipe(sample, generate_kwargs={"language": "maori"})
```
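Other standard Whisper generation arguments can be passed the same way. For example, the `task` argument switches between transcribing in the source language and translating the speech to English; a short sketch using the standard Transformers Whisper kwargs:

```python
# Transcribe in the source language (the default task)
result = pipe(sample, generate_kwargs={"language": "maori", "task": "transcribe"})

# Translate the speech to English instead
result = pipe(sample, generate_kwargs={"task": "translate"})
print(result["text"])
```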