120 s limit

#1
by Greencapabara - opened

Hi! Thanks! Works great, amazing! What causes the 120-second limit? I mean, I see the variable in your code, but why did you put it there?

Greencapabara changed discussion status to closed
Greencapabara changed discussion status to open

Well, limits I guess. Sorry for asking stupid questions.

It's just because the model is only running on 8 vCPUs (no GPUs), so I figured long audio files would take too long to process, considering that this space is shared. For instance, it takes about 2.5 minutes to process a 2-minute audio clip on the "medium" model, and about 4 minutes on the "large" model.
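A duration cap like this can be enforced before transcription even starts. Here is a minimal sketch using only Python's standard-library `wave` module; the names (`MAX_AUDIO_SECONDS`, `check_duration`) are hypothetical, and the actual space may measure duration differently (e.g. via ffmpeg) and support formats beyond WAV:

```python
import wave

# Hypothetical constant mirroring the space's default limit.
MAX_AUDIO_SECONDS = 120

def audio_duration_seconds(path: str) -> float:
    """Return the duration of a WAV file in seconds (frames / frame rate)."""
    with wave.open(path, "rb") as wav:
        return wav.getnframes() / wav.getframerate()

def check_duration(path: str) -> None:
    """Reject clips longer than the shared-CPU limit before transcribing."""
    duration = audio_duration_seconds(path)
    if duration > MAX_AUDIO_SECONDS:
        raise ValueError(
            f"Audio is {duration:.1f}s long; the limit is {MAX_AUDIO_SECONDS}s"
        )
```

Rejecting the file up front avoids tying up the shared CPUs for several minutes on a clip that would be cut off anyway.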

Though perhaps 2 minutes is a bit on the short side. Still, it's easy to fork it to your own space or use Google Colab and run it much faster on a GPU.

Thank you! I forked it. They don't have limits for this? Amazing! 🙂

At least on the free tier, it is slower for me on Google Colab.

Interesting - perhaps you happened to be allocated a mediocre/slow GPU in your Google Colab instance? I just tried it on a 2-minute audio clip, and it took about 33 seconds to run the predictions (roughly 4x real-time speed). This was after I'd run the model once, as the first execution is always slower because you need to download a 2.6 GB model file and load it into GPU memory.

The instance was running a Tesla T4:

!nvidia-smi -L
GPU 0: Tesla T4 (UUID: GPU-56fa4907-6c6a-728b-04db-385942c9520b)
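The speed comparison above boils down to a simple real-time factor. A quick sketch using the figures quoted in this thread (assumed from the posts, not re-measured; the helper name is hypothetical):

```python
def real_time_factor(audio_seconds: float, processing_seconds: float) -> float:
    """Seconds of audio transcribed per second of compute (higher is faster)."""
    return audio_seconds / processing_seconds

# Figures from this thread: a 2-minute clip took ~33 s on a Colab Tesla T4,
# versus ~150 s on the space's 8 vCPUs with the "medium" model.
gpu_rtf = real_time_factor(120, 33)   # roughly 3.6x real time
cpu_rtf = real_time_factor(120, 150)  # roughly 0.8x real time
```

A factor above 1.0 means transcription keeps up with playback; below 1.0 (as on the shared CPUs) it falls behind, which is why the space caps uploads at 120 seconds.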

I see. Very interesting. Thank you!

Oh yes, now I have it, much faster. Thanks.

aadnk changed discussion status to closed
