OpenAI's new Whisper "turbo": 8x faster, uses ~40% less VRAM, with minimal accuracy loss.
Run it locally in-browser for private transcriptions! Transcribe interviews, audio & video.
⚡️ 40 tokens/sec on my MacBook
Try it: webml-community/whisper-large-v3-turbo-webgpu
Model: https://huggingface.co/ylacombe/whisper-large-v3-turbo
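If you want to wire up in-browser transcription yourself rather than use the demo Space, here is a minimal sketch with transformers.js on the WebGPU backend. It assumes transformers.js v3 (the @huggingface/transformers package), a WebGPU-capable browser, an ES module context (for top-level await), and the ONNX-converted checkpoint onnx-community/whisper-large-v3-turbo; the audio URL is a placeholder.

```ts
import { pipeline } from '@huggingface/transformers';

// Load the automatic-speech-recognition pipeline on WebGPU.
// Model id is an assumption (ONNX-converted turbo checkpoint); adjust if needed.
const transcriber = await pipeline(
  'automatic-speech-recognition',
  'onnx-community/whisper-large-v3-turbo',
  { device: 'webgpu' },
);

// Transcribe any audio the browser can decode (placeholder URL; a local File/Blob works too).
const result = await transcriber('https://example.com/interview.wav', {
  chunk_length_s: 30,       // split long recordings into 30 s chunks
  return_timestamps: true,  // also return per-segment timestamps
});

console.log(result.text);
```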