Query on Audio File Length in Duplicated Space
Hi there,
I've duplicated the seamless-expressive Space for a private run on an Nvidia T4. However, I'm facing an issue with an audio file: it's 2 minutes and 30 seconds long, but I'm getting a message that only the first 10 seconds will be processed. Is there a way to address this?
Thanks!
Michel
Hey everyone,
I've successfully adjusted the MAX_INPUT_AUDIO_LENGTH in my HuggingFace space (check the steps below), but now I'm encountering an issue. After uploading the audio file and hitting the RUN button, I get errors in both the audio output and the translation text output.
Here are the steps I followed:
Clone the HuggingFace repo locally:
$ brew install git-lfs
$ git lfs install
$ git clone https://huggingface.co/spaces/<username>/<spacename>
Open the file app.py:
$ cd <spacename>
$ vim app.py
Change the variable "MAX_INPUT_AUDIO_LENGTH = 10 # in seconds" to the desired length in seconds (see the sketch after these steps).
Commit and Push the Changes:
$ git add -A
$ git commit -m "Increased MAX_INPUT_AUDIO_LENGTH"
$ git push origin main
HuggingFace will automatically build a new copy incorporating the changes.
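For context, my understanding is that app.py applies the cap roughly like the sketch below — the function and variable names here are illustrative, not the Space's actual code — which is why only the first N seconds of a longer file get processed:

# Sketch only: how a max-length cap typically truncates input audio in app.py.
# preprocess_audio and the value 60 are assumptions, not the Space's real code.
import torchaudio

MAX_INPUT_AUDIO_LENGTH = 60  # in seconds (raised from the default 10)

def preprocess_audio(input_audio_path):
    waveform, sample_rate = torchaudio.load(input_audio_path)
    max_samples = int(MAX_INPUT_AUDIO_LENGTH * sample_rate)
    if waveform.shape[1] > max_samples:
        # Anything past the cap is dropped, so only the first
        # MAX_INPUT_AUDIO_LENGTH seconds are translated.
        waveform = waveform[:, :max_samples]
    return waveform, sample_rate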
Now, when I try to process the data, I'm running into errors. Any insights or suggestions would be greatly appreciated.
Thanks in advance!
Michel
Seems like we're still grappling with memory issues, even after upgrading to T4 Medium (8 vCPU, 30 GB RAM, 16 GB VRAM). The CUDA out-of-memory error persists:
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 6.28 GiB. GPU 0 has a total capacity of 14.58 GiB, with 3.42 GiB free. Process 199325 is using 11.15 GiB memory, out of which 10.94 GiB is allocated by PyTorch, and 69.17 MiB is reserved but unallocated. If reserved but unallocated memory is substantial, consider setting max_split_size_mb to avoid fragmentation. See the documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/queueing.py", line 497, in process_events
response = await self.call_prediction(awake_events, batch)
File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/gradio/queueing.py", line 468, in call_prediction
raise Exception(str(error) if show_error else None) from error
Exception: None
Despite the hardware upgrade, it seems we're still pushing the limits. Any suggestions or insights would be greatly appreciated. Thanks!
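The error message itself points at max_split_size_mb, so one thing I'm going to try is setting PYTORCH_CUDA_ALLOC_CONF before torch touches the GPU. A minimal sketch for the very top of app.py (this may reduce fragmentation, but it won't help if the model genuinely needs more than 16 GB of VRAM):

# Sketch only: apply the allocator hint from the OOM message.
# Must be set before the first CUDA allocation, i.e. before importing torch.
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # imported after the env var so the caching allocator picks it up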
Awesome news! After some trial and error, I figured out that the hiccup was all about memory. The lightbulb moment happened when I split my file into two, around one minute each. That did the trick, and now everything's running smoothly.
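For anyone who wants to do the same, here's a minimal sketch of the splitting step using torchaudio — the chunk length and file names are placeholders, so adjust them for your own files:

# Sketch only: split a long audio file into fixed-length chunks.
# CHUNK_SECONDS and the file names are placeholders, not values from the Space.
import torchaudio

CHUNK_SECONDS = 60

waveform, sample_rate = torchaudio.load("input.wav")
samples_per_chunk = CHUNK_SECONDS * sample_rate

for i, start in enumerate(range(0, waveform.shape[1], samples_per_chunk)):
    chunk = waveform[:, start:start + samples_per_chunk]
    torchaudio.save(f"chunk_{i}.wav", chunk, sample_rate)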
If anyone else is in the same memory boat, I'm curious if tweaking some settings could help handle larger audio files. Let me know if you discover any hacks!
Happy coding, everyone!