Unable to run inference on openjourney-v4 in img2img

#12
by abhishek7-imentus - opened

I tried running inference on openjourney-v4 in img2img mode with:

  • prompt: 'pixelate image, high res'
  • inference steps set to 1
  • scheduler set to Euler A

After processing for ~2-3 minutes, it didn't display any output, and there were no errors or messages.
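
For reference, here is a minimal sketch of what this configuration roughly corresponds to in diffusers, assuming the Space wraps StableDiffusionImg2ImgPipeline around the prompthero/openjourney-v4 checkpoint (the input image path, size, and strength value are placeholders, not taken from the Space):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline, EulerAncestralDiscreteScheduler
from PIL import Image

# Load the img2img pipeline in half precision and switch to the Euler A scheduler.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "prompthero/openjourney-v4", torch_dtype=torch.float16
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

init_image = Image.open("input.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="pixelate image, high res",
    image=init_image,
    num_inference_steps=1,  # as in the report; img2img only runs ~int(steps * strength)
    strength=0.75,          # denoising steps, so 1 step may leave effectively nothing to run
).images[0]
result.save("output.png")
```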

Hi @kadirnar, can you help me debug this problem?

@abhishek7-imentus, @abhi22,
Thank you for the feedback. I have an A10 small (15 GB VRAM). I will ask for an A10 large.

5d45454db5zg46 2023-04-02T08:23:07.735Z torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.19 GiB (GPU 0; 22.20 GiB total capacity; 16.78 GiB already allocated; 1.19 GiB free; 19.52 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
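
Until a larger GPU is available, a few standard diffusers memory-reduction options might avoid this OOM. A sketch, assuming the same img2img pipeline as above (enable_model_cpu_offload is optional and requires accelerate):

```python
import os
import torch
from diffusers import StableDiffusionImg2ImgPipeline

# Suggested by the error message itself: cap the CUDA allocator's split size
# to reduce fragmentation. Must be set before CUDA is initialized.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "prompthero/openjourney-v4",
    torch_dtype=torch.float16,    # half precision roughly halves weight memory
)
pipe.enable_attention_slicing()   # compute attention in slices to lower peak memory
# pipe.enable_model_cpu_offload() # optional: offload idle submodules to CPU (skip .to("cuda") if used)
pipe = pipe.to("cuda")
```
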
kadirnar changed discussion status to closed
