ASK

#14
by Fahowezan - opened

How do I add another model?
I tried adding a link to another model, but it wouldn't run.

Place your model in this format:
(screenshot of the expected model folder layout)
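
For readers who can't see the screenshot, a rough sketch of the per-speaker layout this Space appears to expect (the folder and file names are illustrative assumptions, not taken from the screenshot):

    models/
    └── alice/
        ├── alice.pth      # so-vits-svc generator checkpoint
        ├── config.json    # the matching training config
        └── cover.png      # optional cover image shown in the UI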

How do I remove the 20-second limit in the Files? Also, I tried using Colab, but after running all the cells in order and opening the gradio.live link, generating the voice gave an error.

The limit only applies on Hugging Face. Is there a detailed error log in the Colab cell?
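
For context, a minimal sketch of how demos like this commonly enforce such a cap only when running on Spaces (the variable and function names are assumptions, not verified against this app's source):

    import os

    # Many Gradio demos enable input limits only on HF Spaces,
    # which is why a local or Colab copy has no 20-second cap.
    limitation = os.getenv("SYSTEM") == "spaces"

    def check_audio(duration_seconds: float) -> None:
        if limitation and duration_seconds > 20:
            raise ValueError("Please upload audio shorter than 20 seconds.")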

I don't know how I fixed it, but I just brought back the other models, added Alice, and it works.

Can I ask what the function of this is?
(screenshot of an inference option)

Also, how can I fix Alice's overly deep tone when she is singing? And could you recommend a good instrumental remover? Thanks!

Here are the detailed instructions for that option:
https://github.com/svc-develop-team/so-vits-svc/blob/4.0/inference_main.py#L34

You can adjust the pitch by modifying vc_transform.
For an instrumental remover, you could try UVR5 or RipX.
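
For intuition, a minimal sketch of what a semitone shift such as vc_transform does to the fundamental frequency (the helper name shift_f0 is mine, not the project's):

    import numpy as np

    def shift_f0(f0_hz: np.ndarray, semitones: float) -> np.ndarray:
        # A shift of n semitones multiplies frequency by 2**(n/12),
        # which is the effect a positive vc_transform has on output pitch.
        return f0_hz * 2.0 ** (semitones / 12.0)

    # Example: +12 semitones (one octave) takes 220 Hz to 440 Hz,
    # one way to brighten a voice that renders too deep.
    print(shift_f0(np.array([220.0]), 12))  # [440.]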

How can I avoid this error? My GPU RAM keeps spiking, and I don't know why; it didn't happen before, and the file I uploaded is under 5 MB.

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.31 GiB (GPU 0; 14.75 GiB total capacity; 11.40 GiB already allocated; 978.81 MiB free; 12.52 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
DEBUG:httpcore:http11.receive_response_headers.complete return_value=(b'HTTP/1.1', 500, b'Internal Server Error', [(b'date', b'Sat, 22 Apr 2023 04:11:15 GMT'), (b'server', b'uvicorn'), (b'content-length', b'14'), (b'content-type', b'application/json')])
DEBUG:httpx:HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error"
DEBUG:httpcore:http11.receive_response_body.started request=<Request [b'POST']>
DEBUG:httpcore:http11.receive_response_body.complete
DEBUG:httpcore:http11.response_closed.started
DEBUG:httpcore:http11.response_closed.complete
DEBUG:httpcore:http11.send_request_headers.started request=<Request [b'POST']>
DEBUG:httpcore:http11.send_request_headers.complete
DEBUG:httpcore:http11.send_request_body.started request=<Request [b'POST']>
DEBUG:httpcore:http11.send_request_body.complete
DEBUG:httpcore:http11.receive_response_headers.started request=<Request [b'POST']>
DEBUG:httpcore:http11.receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Sat, 22 Apr 2023 04:11:23 GMT'), (b'server', b'uvicorn'), (b'content-length', b'16'), (b'content-type', b'application/json')])
DEBUG:httpx:HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
DEBUG:httpcore:http11.receive_response_body.started request=<Request [b'POST']>
DEBUG:httpcore:http11.receive_response_body.complete
DEBUG:httpcore:http11.response_closed.started
DEBUG:httpcore:http11.response_closed.complete
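
For reference, a hedged sketch of the mitigation the error message itself suggests, combined with freeing PyTorch's cache between runs (run_inference is a placeholder for whatever inference call the notebook makes):

    import os

    # Must be set before the first CUDA allocation; smaller split sizes
    # reduce fragmentation, which is what the error message hints at.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

    import torch

    def infer_with_cleanup(run_inference, *args, **kwargs):
        # Releasing cached blocks between runs helps when memory spikes;
        # very long inputs still need to be split into shorter clips,
        # since peak memory grows with audio length.
        try:
            return run_inference(*args, **kwargs)
        finally:
            if torch.cuda.is_available():
                torch.cuda.empty_cache()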

https://huggingface.co/spaces/zomehwh/sovits-models/discussions/13#643cf414fdb3d500061cffe9

I wanted to try using different models, but their folders are structured differently. Any idea how I could adapt this for the Colab notebook?
(screenshot of the differing folder structure)
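
If the goal is just to match the layout the notebook expects, one option is to reshape the downloaded folder before loading it. A hedged sketch (every path and filename here is an assumption, not the notebook's actual values):

    import shutil
    from pathlib import Path

    src = Path("downloaded_model")   # hypothetical: wherever the model unzipped to
    dst = Path("models/alice")       # hypothetical: the per-speaker layout shown above
    dst.mkdir(parents=True, exist_ok=True)

    # so-vits-svc checkpoints are usually a generator G_*.pth plus a config.json;
    # copy them into the per-speaker folder the notebook scans.
    shutil.copy(next(src.glob("**/G_*.pth")), dst / "alice.pth")
    shutil.copy(next(src.glob("**/config.json")), dst / "config.json")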
