small, tiny, base models

#7
by eschmidbauer - opened

Thank you for sharing this code!
Any plans to release small, tiny or base models?
I'm testing the medium model in-browser using ONNX + transformers.js, and it's still too slow.
Wondering if any of the smaller models will be made available for further testing of in-browser inference.

Whisper Distillation org

I found whisper.cpp to be very performant: https://huggingface.co/distil-whisper/distil-medium.en#whispercpp

Not sure if you can run it in-browser, but the CPU-only performance is great.

Training 2-decoder-layer versions of small.en now!

eschmidbauer changed discussion status to closed
