---
library_name: mlx
---

# whisper-large-v3-turbo-german-f16

This model was converted to MLX format from primeline/whisper-large-v3-turbo-german using a [custom script for converting safetensors Whisper models](https://github.com/CrispStrobe/mlx-examples/blob/main/whisper/convert_safetensors.py). The weights are stored in float16 and the model works well; quantization should also be possible, but has not been tested yet.

## Use with MLX

```bash
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/whisper/
pip install -r requirements.txt
```

```python
import mlx_whisper

result = mlx_whisper.transcribe(
    "test.mp3",
    path_or_hf_repo="mlx-community/whisper-large-v3-turbo-german-f16",
)
print(result)
```
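The `transcribe` call returns a dictionary that, as in the reference Whisper implementation, holds the full transcript under `"text"` and a list of timed segments under `"segments"`. As a minimal sketch (assuming that standard result layout; the sample dictionary below is illustrative, not real model output), the segments can be rendered as subtitle-style lines:

```python
def format_timestamp(seconds: float) -> str:
    """Format seconds as HH:MM:SS.mmm for subtitle-style output."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def segments_to_lines(result: dict) -> list[str]:
    """Render each segment as '[start --> end] text'."""
    return [
        f"[{format_timestamp(seg['start'])} --> {format_timestamp(seg['end'])}] {seg['text'].strip()}"
        for seg in result.get("segments", [])
    ]

# Illustrative result shaped like a Whisper transcription output (not real model output):
example = {
    "text": " Guten Tag. Wie geht es Ihnen?",
    "segments": [
        {"start": 0.0, "end": 1.2, "text": " Guten Tag."},
        {"start": 1.2, "end": 2.8, "text": " Wie geht es Ihnen?"},
    ],
}
for line in segments_to_lines(example):
    print(line)  # e.g. [00:00:00.000 --> 00:00:01.200] Guten Tag.
```

In practice you would pass the dictionary returned by `mlx_whisper.transcribe` in place of `example`.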