whisper-large-v3-turbo-german-f16-q4
This model was converted to MLX format from primeline/whisper-large-v3-turbo-german and quantized to 4-bit with float16 precision.
The conversion was done with a custom script for converting safetensors Whisper models.
An unquantized float16 version is also available.
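For illustration only, the sketch below shows how 4-bit quantization of an MLX model's linear layers works in principle using mlx.nn.quantize; it is not the author's custom conversion script, and the toy layer sizes and the group-size/bit defaults shown are assumptions.

# Illustrative sketch only -- not the custom conversion script used for this model.
# mlx.nn.quantize replaces supported layers (e.g. Linear) with quantized equivalents.
import mlx.nn as nn

# Toy stand-in for a projection stack; the 1280-wide dimensions are placeholders.
model = nn.Sequential(nn.Linear(1280, 1280), nn.GELU(), nn.Linear(1280, 1280))

# Quantize the weights to 4 bits with a group size of 64.
nn.quantize(model, group_size=64, bits=4)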
Use with MLX
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/whisper/
pip install -r requirements.txt
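Alternatively, if you do not need the full examples repository, the mlx_whisper package can be installed directly from PyPI (assuming a release compatible with your platform is available):

pip install mlx-whisper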
import mlx_whisper
# Transcribe an audio file with this quantized model
result = mlx_whisper.transcribe("test.mp3", path_or_hf_repo="mlx-community/whisper-large-v3-turbo-german-f16-q4")
print(result)
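mlx_whisper.transcribe returns a dict containing the transcription text and its segments. The snippet below is a sketch that assumes transcribe forwards the usual Whisper decode options (keyword names such as language and word_timestamps are taken from openai-whisper and are an assumption here); it forces German decoding and prints per-segment timestamps.

import mlx_whisper

# Assumption: decode options like `language` and `word_timestamps` are forwarded
# as in openai-whisper; adjust if your mlx_whisper version differs.
result = mlx_whisper.transcribe(
    "test.mp3",
    path_or_hf_repo="mlx-community/whisper-large-v3-turbo-german-f16-q4",
    language="de",
    word_timestamps=True,
)

# The result dict mirrors openai-whisper: "text" plus a list of "segments".
print(result["text"])
for segment in result["segments"]:
    print(f'{segment["start"]:.2f}-{segment["end"]:.2f}: {segment["text"]}')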