Space: Running on L4
runtime error
Memory limit exceeded (30G)
Yep, I get the same, and it didn't seem like a big change either. It doesn't always happen but seems totally random. The first time was after about 6 modifications, the second after 2, the third after about 4. All roughly the same changes.
emm🤔, we'll check our code. Meanwhile, if the space encounters any runtime errors again, please let us know so we can restart it.
Again
And uploading is not possible.
@SteveM62 Try refreshing the page if such an error occurs. We'll restart the space if it collapses. Thanks.
Hello! It stopped working in the middle of a job. I would like to test the tool: I had just launched it, made a mask, and it stopped working.
I suppose it's a limitation of the 4xL4 machines, given the popularity of the tool.
I have exactly the same error... "runtime error: Memory limit exceeded (30G)". Rebooting does not help.
Again
again
Again
The same here now.
Restarted.
Again, I think HuggingFace is struggling a bit.
Legend, thanks!
I get this error and I don't know how to correct it, because I don't understand it. Can someone help me?
error
CUDA out of memory. Tried to allocate 146.00 MiB. GPU 0 has a total capacty of 21.96 GiB of which 61.06 MiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 19.49 GiB is allocated by PyTorch, and 2.10 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
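In case it helps, here is a minimal sketch of the allocator setting that error message suggests, assuming a standard PyTorch setup. You can't change this from the Space's UI, but if you duplicate the Space onto your own hardware, something like the following could be worth trying (the value 128 is just an example, not a recommendation):

```python
import os

# Must be set before CUDA is first initialized, otherwise the
# allocator ignores it. Limits the size of split blocks to reduce
# fragmentation of reserved-but-unallocated memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# Release cached blocks that are reserved but unallocated
# (the ~2.10 GiB mentioned in the error above) so they can be reused.
torch.cuda.empty_cache()
```

Whether this actually fixes anything depends on the Space's own code; if the model simply needs more than the GPU has, no allocator setting will help.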