
# kotoba-whisper-v1.1-mlx

This repository contains a conversion of kotoba-whisper-v1.1 to the mlx-whisper format, suitable for running on Apple Silicon. Since kotoba-whisper-v1.1 is derived from distil-large-v3, this model is significantly faster than mlx-community/whisper-large-v3-mlx while losing little accuracy on Japanese transcription.

CAUTION: The original model ships with a custom pipeline implementation, which this repository does NOT include. As a result, features such as stable_ts and punctuator may NOT work.
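Segment-level timestamps are still available from mlx-whisper's standard transcription output, just without the stable_ts refinement of the original pipeline. A minimal sketch (the file name is only a placeholder; installation is covered under Usage below):

```python
import mlx_whisper

result = mlx_whisper.transcribe("audio.wav", path_or_hf_repo="kaiinui/kotoba-whisper-v1.1-mlx")

# Each segment carries approximate start/end times in seconds plus its text,
# even without the stable_ts post-processing of the original pipeline.
for segment in result["segments"]:
    print(f"[{segment['start']:.2f} -> {segment['end']:.2f}] {segment['text']}")
```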

## Usage

First, install mlx-whisper:

```bash
pip install mlx-whisper
```

Then pass this repository as `path_or_hf_repo` to `mlx_whisper.transcribe`:

```python
import mlx_whisper

speech_file = "audio.wav"  # path to the audio file to transcribe

result = mlx_whisper.transcribe(speech_file, path_or_hf_repo="kaiinui/kotoba-whisper-v1.1-mlx")
print(result["text"])
```
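Standard Whisper decode options are forwarded by `mlx_whisper.transcribe` as keyword arguments; since this model targets Japanese, it can help to set the language explicitly and skip automatic language detection. A minimal sketch (`language="ja"` is a generic Whisper decode option, not something specific to this repository):

```python
import mlx_whisper

result = mlx_whisper.transcribe(
    "audio.wav",  # placeholder path to the audio file
    path_or_hf_repo="kaiinui/kotoba-whisper-v1.1-mlx",
    language="ja",  # skip automatic language detection for Japanese audio
)
print(result["text"])
```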

## Related Links
