Txt2img Highres takes all GPU memory

#1
by Maoi - opened

Hello, I really like your scripts, but it seems that using them takes almost all of the GPU's VRAM just to upscale a 512x512 image by 2x.

Maoi changed discussion title from Txt2img takes all GPU memory to Txt2img Highres takes all GPU memory

Hi, this repo is a WIP. I'm trying to replicate the latent highres fix option found in the AUTOMATIC1111 SD web UI. For now, it uses a lot of VRAM because generating a higher-res image requires two passes through the model, and it seems I need to do some cleanup after the first pass to free up more VRAM. In my testing (on Google Colab and Kaggle) I was able to get it down to about 6 GB of VRAM.

pipe.enable_attention_slicing()  # compute attention in slices to cut peak VRAM
pipe.enable_xformers_memory_efficient_attention()  # use xformers' memory-efficient attention kernels
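
Conceptually, the two-pass flow looks something like the sketch below. This is just a minimal illustration of the idea using diffusers' StableDiffusionPipeline and StableDiffusionImg2ImgPipeline (assuming a recent diffusers version where img2img takes an image keyword), not the repo's actual code; the checkpoint, prompt, and strength value are placeholders, and it re-denoises the upscaled decoded image rather than working directly on latents like the real latent highres fix:

import gc
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # placeholder checkpoint
txt2img = StableDiffusionPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")
txt2img.enable_attention_slicing()

# Reuse the already-loaded weights for the second pass instead of loading them twice.
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)

prompt = "a photo of an astronaut riding a horse"

# Pass 1: base 512x512 generation.
low_res = txt2img(prompt, height=512, width=512).images[0]

# Free first-pass intermediates before the much larger second pass.
gc.collect()
torch.cuda.empty_cache()

# Pass 2: re-denoise at 2x resolution (1024x1024) via img2img.
high_res = img2img(
    prompt, image=low_res.resize((1024, 1024)), strength=0.6
).images[0]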

Yes, the problem isn't during generation; it's after executing txt2img, when the VRAM is still holding data from the previous run.
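
One way to tell whether that leftover memory is live tensors or just PyTorch's allocator cache (a generic PyTorch check, nothing specific to this repo):

import torch

# "allocated" is what live tensors actually use; "reserved" is what
# PyTorch's caching allocator is holding on to for reuse.
print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.0f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.0f} MiB")

# If reserved is much larger than allocated, the memory is only cached
# and can be handed back to the driver:
torch.cuda.empty_cache()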

I used some tricks to optimize VRAM usage, but it always fills up eventually. For example, I used this:

%env PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:1024
highrespipe.enable_attention_slicing()

(Note: !export only sets the variable in a throwaway subshell; in a notebook, %env sets it for the Python process itself, and it has to be set before CUDA is first initialized.)

As I said, generating a 512x512px image by itself doesn't take much, around 4 GB of VRAM even when upscaling, but after 5 or 6 generations the VRAM fills up.
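
Until the cleanup lands in the repo, explicitly releasing references between runs may help. A minimal sketch, assuming a highrespipe object like the one in the snippet above (the prompts and the call signature are hypothetical):

import gc
import torch

prompts = ["a castle on a hill", "a forest at dawn", "a city at night"]
for i, p in enumerate(prompts):
    image = highrespipe(p).images[0]  # hypothetical call, as in the snippet above
    image.save(f"gen_{i}.png")
    # Drop the output reference and flush cached blocks between generations,
    # so each run starts from a clean allocator state.
    del image
    gc.collect()
    torch.cuda.empty_cache()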
