request: non-static version for running private HF Space on better hardware

#2
by rawwerks - opened

this demo is so powerful!

the only limit of the 'static' deployment type is that i can't run it on more powerful hardware. (maybe HF can consider that a feature request for Spaces.)

would you consider making a non-static version that can be duplicated to a private HF space, where i can pay to run this on better hardware? i don't really care if it's gradio or one of the other "dynamic" HF Space types, i just want to be able to use this demo to transcribe long audio files for myself on more powerful HF infrastructure.

thank you for considering this request.

Hi there. The whisper-web project is open-source, and you can find the source code here:

So, feel free to fork it and make your own modifications there. :) If I understand correctly, you wish to move the execution of the model from the client (in-browser) to the server to take advantage of the hardware hosting the space (instead of relying on whatever the user has). You can follow this Next.js tutorial I created a while ago which shows you how to do server-side inference:
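The tutorial uses Next.js with Transformers.js, but the idea is the same in any server-side stack. Purely as a rough illustration of the client-to-server move (not code from whisper-web; the checkpoint and file name below are placeholders), server-side transcription boils down to something like:

```python
# Rough sketch of server-side inference: the model runs on the machine hosting
# the Space (GPU if available), not in the visitor's browser.
# The model checkpoint and audio file name are placeholders.
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",  # any Whisper / Distil-Whisper checkpoint
    device=0,                      # GPU on the host; use device=-1 for CPU
)

# chunk_length_s lets long audio files be processed in pieces
result = transcriber("long_audio.mp3", chunk_length_s=30, return_timestamps=True)
print(result["text"])
```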

Hope that helps! 🤗

@sanchit-gandhi - is this something you'd consider building? i think it would take you 5 minutes and would be a hugely popular HF Space.

what i'm imagining: the user interface of https://huggingface.co/spaces/Xenova/distil-whisper-web/, but built with gradio like https://huggingface.co/spaces/distil-whisper/whisper-vs-distil-whisper (so that users like myself can click "duplicate this space" and pay HF to run it privately on faster hardware). a rough sketch of what that app could look like is below.
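something like this (a sketch only, with placeholder model and labels, not an existing space):

```python
# sketch of a duplicable gradio space: upload audio, transcribe on whatever
# hardware the (private) space is running on. the model choice is a placeholder.
import gradio as gr
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="distil-whisper/distil-large-v2")

def transcribe(audio_path: str) -> str:
    # gradio passes the uploaded file as a path when type="filepath"
    out = asr(audio_path, chunk_length_s=30, return_timestamps=True)
    return out["text"]

demo = gr.Interface(
    fn=transcribe,
    inputs=gr.Audio(type="filepath", label="audio file"),
    outputs=gr.Textbox(label="transcription"),
    title="whisper transcription (duplicate this space for faster hardware)",
)

if __name__ == "__main__":
    demo.launch()
```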
