CUDA out of memory error

#36
by francosta - opened

I'm trying to fine-tune the model with pictures of myself (using the T4), and at some point during training I get the following error:

File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/diffusers/models/attention.py", line 614, in _attention
attention_probs = attention_probs.to(value.dtype)
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 14.76 GiB total capacity; 13.59 GiB already allocated; 13.75 MiB free; 13.61 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Any hints on how to sort this?
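The error message itself points at one mitigation: when reserved memory is much larger than allocated memory, the allocator may be fragmenting, and `max_split_size_mb` can help. A minimal sketch of applying that hint (the value 128 is an assumption to tune, and the variable must be set before PyTorch initializes CUDA):

```python
import os

# Allocator hint suggested by the OOM message; set it before importing torch
# so it takes effect when CUDA is initialized. 128 MiB is an assumed starting
# point, not a recommended value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

On a 16 GB T4, fragmentation tweaks alone may not be enough; lowering batch size or enabling gradient checkpointing in the training settings are the more reliable levers.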

I am getting the same error.

Hi @francosta , I can see you duplicated the Space 3 days ago. There was a performance issue with some settings at the time that has since been fixed. If you duplicate it again, it should not run out of memory.

Hey @qmerfp , I can see you did manage to train a model successfully - is this still an issue for you?

When I duplicated it again the problem was solved, thanks. I'm trying to use the produced model now.

@multimodalart how can I enter negative prompts on Hugging Face?

@qmerfp currently the Inference API does not support it, but with the growing importance of negative prompts we may consider adding it!

For now, you can duplicate this Space and swap the model to yours: https://huggingface.co/spaces/akhaliq/sd2-dreambooth-ClaymationXmas
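Alternatively, you can run the trained model locally with diffusers, whose pipeline call does accept a `negative_prompt` argument. A hedged sketch (the model id and prompt strings below are placeholders, not values from this thread):

```python
def generate(model_id: str, prompt: str, negative_prompt: str):
    """Generate one image from a DreamBooth-trained model, with a negative prompt.

    Imports are kept inside the function so the sketch can be defined without
    diffusers installed; calling it requires diffusers, torch, and a GPU.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    # Load the fine-tuned weights; fp16 halves memory use on the GPU.
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
    pipe = pipe.to("cuda")

    # `negative_prompt` steers generation away from the listed concepts.
    return pipe(prompt, negative_prompt=negative_prompt).images[0]

# Example call (placeholder repo id, downloads weights and needs a GPU):
# generate("your-username/your-dreambooth-model",
#          "a photo of sks person",
#          "blurry, low quality")
```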

multimodalart changed discussion status to closed
