No description provided.

Using such a long input is not recommended, as it would take a long time to generate and exceed the limits of the Hugging Face Space.

limitation = os.getenv("SYSTEM") == "spaces"  # limit text and audio length in huggingface spaces

Outside of Hugging Face Spaces, there are no limits.
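
For context, this is roughly how such a flag could gate input length in the app's handler; the 100-character cap and the tts_fn name below are illustrative assumptions, not the Space's actual code:

import os

limitation = os.getenv("SYSTEM") == "spaces"  # True only inside a Hugging Face Space

def tts_fn(text):
    # Hypothetical guard: enforce the length cap only when hosted on Spaces
    if limitation and len(text) > 100:
        return "Error: text is too long for the hosted demo", None
    # ... run TTS inference here and return (message, audio) ...
    return "Success", None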


Would it be better to build this locally?


Yes, and you can use the GPU for inference by running:

python app.py --device cuda
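
For reference, a minimal sketch of how a --device flag like this might be parsed and applied, assuming a PyTorch-based model (argparse here is an assumption about the app's implementation):

import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--device", type=str, default="cpu")  # pass "cuda" to run on GPU
args = parser.parse_args()

device = torch.device(args.device)
# model = model.to(device)  # move the TTS model (and its input tensors) to the chosen device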
zomehwh changed pull request status to closed
