'This Space only works in duplicated instances'!

#27
by wirrkopp - opened

I keep getting the same error: 'This Space only works in duplicated instances'!

I can't find the mistake I'm making.

I activated the paid GPU after duplicating the initial Space.

And I added the two variables.

After running the installation, I get the same error.

Any hint would be very helpful, as I'm completely new to this and prone to silly beginner mistakes.

I'm getting the same error with the same steps.

I kept getting the same error. A temporary workaround I found was to delete lines 150 and 151 in app.py. You still see the same error message, but the app keeps training and it actually works. I completed four trainings today, but now all of a sudden I'm getting a CUDA error instead.
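
For anyone wondering what those two lines do: Spaces like this typically start the training handler with a guard that refuses to run on the original (shared) instance. Here is a minimal sketch of that pattern; the variable name and error text are assumptions based on the message above, not code copied from this Space's app.py:

```python
import os

# Hypothetical shared-UI guard of the kind the workaround removes.
# The env-var name and error text are assumptions, not the actual app.py.
IS_SHARED_UI = os.environ.get("IS_SHARED_UI") is not None

def train(prompt: str) -> str:
    # Deleting or commenting out the next two lines is what bypasses the check:
    if IS_SHARED_UI:
        raise ValueError("This Space only works in duplicated instances")
    return f"training started for: {prompt}"
```

Removing the guard only skips the check; it changes nothing about training itself, which is why the CUDA error that follows is a separate issue.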

Thanks duja1, commenting out lines 150 and 151 in app.py worked, but now I'm getting that CUDA error as well.

Hey, you actually don't need to add the IS_SHARED_UI variable. It's there to check whether this is the original Space. I understand the UX may be confusing, since the duplication flow asks you to set it, but that was introduced later. I'll adapt accordingly.
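
For reference, a Space can detect whether it is the original without asking users to set anything: Hugging Face injects a SPACE_ID environment variable into every running Space, so the check can compare it against the original repo id. A minimal sketch of that pattern; the repo id below is a placeholder, not this Space's actual id:

```python
import os

# SPACE_ID is injected by Hugging Face into every running Space.
# "original-author/original-space" is a placeholder repo id.
IS_SHARED_UI = os.environ.get("SPACE_ID") == "original-author/original-space"

if IS_SHARED_UI:
    print("Running on the original shared Space; training is disabled.")
else:
    print("Running on a duplicate; training is allowed.")
```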

Exactly, I just did that and now it works.

I made it to the CUDA error as well...
RuntimeError: CUDA out of memory. Tried to allocate 1.58 GiB (GPU 0; 14.76 GiB total capacity; 11.81 GiB already allocated; 581.75 MiB free; 13.06 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

RuntimeError: CUDA out of memory. Tried to allocate 160.00 MiB (GPU 0; 14.76 GiB total capacity; 13.47 GiB already allocated; 35.75 MiB free; 13.59 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Same, running the medium T4.
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.76 GiB total capacity; 13.46 GiB already allocated; 17.75 MiB free; 13.60 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
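
For what it's worth, the max_split_size_mb hint in those tracebacks is set through PyTorch's PYTORCH_CUDA_ALLOC_CONF environment variable, and it has to be in place before CUDA is initialized. A minimal sketch (128 is just an illustrative starting value; this only mitigates fragmentation and won't help if the job genuinely needs more than the ~14 GiB the T4 has):

```python
import os

# Must be set before the first CUDA allocation; safest is before importing torch.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

x = torch.randn(1024, 1024, device="cuda")  # allocator now honors the setting
print(f"allocated: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB")
```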

Hey @stablepomme and @wirrkopp, I have pushed a performance update that should end the out-of-memory issues for the default settings!
