First colab notebook

#1
by kobolddoido - opened

I did some fixes and it is now able to generate images on Colab Pro.
Please feel free to improve the notebook.
Installation takes about 10 minutes.
Generating 6 images with 50 steps takes about 40 seconds.

New version v 0.021:
https://colab.research.google.com/drive/1PZwv4EZtO4zZJpUiAH0h4vRWOiAnW8R5?usp=sharing
old version:
https://colab.research.google.com/drive/1drHxtvwFK2a4YGIA6QZZy3-t0RhGQEGP?usp=sharing

Getting tcmalloc errors during the cloning step - the operation eventually gets killed (on Pro+).

Are you sure you increased the RAM amount? I had the same problem before setting it manually.

How do you set the RAM in colab manually? Didn't know you could do that!

Runtime -> Change runtime type
Select the option to increase the runtime RAM.

Is the HUGG_TOKEN the CompVis access token? Authentication isn't working for me.

It's your personal token - I ran into the same issue.
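
For reference, a minimal sketch of using a personal access token in a notebook cell to fetch the checkpoint; the repo_id and filename here are assumptions, not necessarily what this notebook actually downloads:

import os
from huggingface_hub import hf_hub_download

# HUGG_TOKEN should be your personal Hugging Face access token
# (Settings -> Access Tokens), not something from the CompVis org.
HUGG_TOKEN = os.environ.get("HUGG_TOKEN")

# Hypothetical checkpoint download; adjust repo_id/filename to match the notebook.
ckpt_path = hf_hub_download(
    repo_id="CompVis/stable-diffusion-v-1-4-original",
    filename="sd-v1-4.ckpt",
    use_auth_token=HUGG_TOKEN,
)
print(ckpt_path)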

Has anyone been able to get mamba to install python=3.8.5? Not sure if 3.8 is necessary, but that line keeps failing for me.

I get this at the end of the command:
"Encountered problems while solving.
Problem: conflicting requests"

Image generation is totally working, though.

I'm getting CUDA out of memory errors relatively often (V100 on Colab Pro+), though. High-RAM is enabled. Curious if other people are getting out of memory errors, too?

Are you using init images? If yes, you need to make sure the image is 512 x 512 (or smaller). The latest colab version has a command that resizes the images to the correct dimensions automatically.

Yes, I got OOM with init images, mostly. I'll make sure they are sized to 512x512. Btw, are rectangular init images possible? I don't see any argument for dims with img2img...

I also got OOM with inference at least once (it may have been when I went above 512 in one of the dimensions).

Thank you so much for putting this colab together so quickly!!!

So, after playing some more—the OOM stuff seems to happen after doing 4 or so runs (either inference or init images). I just got one on the 5th run using an init image of 512x512.
Usually it's sitting at about 13 GiB and then needs about 3 more...

Hi! Yes, it is still a rough colab. But thanks!
I added a function earlier that resizes the init image automatically in the colab, but it should probably take the user's -W and -H input into consideration.
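
Roughly what such a resize step could look like, as a sketch only; the param names below are hypothetical, not the notebook's actual variables:

from PIL import Image

# Hypothetical notebook params; the real cell may name these differently.
WIDTH = 512   #@param {type:"integer"}
HEIGHT = 512  #@param {type:"integer"}
INIT_IMAGE = "/content/init.png" #@param {type:"string"}

# Resize the init image to the requested -W/-H before handing it to img2img.
# Note: SD v1 generally expects dimensions that are multiples of 64.
img = Image.open(INIT_IMAGE).convert("RGB")
img = img.resize((WIDTH, HEIGHT), Image.LANCZOS)
img.save(INIT_IMAGE)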

I'm using colab, and I only notice memory problems when I try to increase the dimensions or the number of samples.

Do these numbers square with what you've noticed: up to 512 on either side for inference is pretty safe, and up to 4 samples with a 512x512 init image is also safe?

Yes, that was safe! But I still haven't tried other scenarios.

Installing the dependencies using the repo's conda environment, after running the condacolab cell:

# clone the repo and update the base conda env with its dependencies
!git clone https://github.com/CompVis/stable-diffusion.git
!mamba env update -n base -f stable-diffusion/environment.yaml
# two extra pins on top of the environment file (see the pull request below)
!pip install torchmetrics==0.6.0
!pip install kornia==0.6

The two extra pip installs are based on this pull request.

Thanks madams, I will update it. Is that enough to install the whole environment?

Slightly off-topic question: has anyone gotten it running on Yandex DataSphere / Paperspace / other online ML spaces? Maybe Jupyter Notebooks without Google Colab hookups?
I ask because Yandex DataSphere has a problem installing mamba. I think it requires a Docker image with mamba/conda, and maybe there are some Docker images available for this case?

I did it using a cloud GPU and it worked out; you have to change a couple of installation steps. I used pip.

Can you please share what you changed in the installation processes? Maybe an .ipynb without conda/mamba?
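
Not the poster's exact steps, but a rough sketch of what a pip-only install could look like; the package pins are guesses based on the repo's environment.yaml and may need adjusting:

!git clone https://github.com/CompVis/stable-diffusion.git
%cd stable-diffusion
# core dependencies normally pulled in by the conda environment
!pip install omegaconf==2.1.1 einops==0.3.0 pytorch-lightning==1.4.2 torchmetrics==0.6.0 transformers==4.19.2 kornia==0.6 albumentations==0.4.3 opencv-python
# the editable dependencies listed in the environment file, plus the repo itself
!pip install -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
!pip install -e git+https://github.com/openai/CLIP.git@main#egg=clip
!pip install -e .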

FYI: The colab above does not support the A100 40GB due to the PyTorch & CUDA versions.
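
A quick diagnostic (not a fix) to see which GPU the runtime assigned and which CUDA build torch was compiled against:

import torch

# which GPU did the runtime assign, and which CUDA toolkit is torch built for?
print(torch.cuda.get_device_name(0))
print(torch.__version__, torch.version.cuda)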

The new notebook is missing the % on all of the cd commands in the file, such as:

import numpy as np
%cd stable-diffusion

Also perhaps missing:
from google.colab import drive
drive.mount('/content/drive')

I have updated Lucas' notebook. The generated grid of images is now displayed in the notebook. I also made it a little easier to choose between using Google Drive or not.

https://colab.research.google.com/drive/1j6sggQmWSiYrRmfnMR8_h3X6n1YCck77?usp=sharing

Thanks for sharing! I noticed that even if RANDOM_SEED is set to false, it appears that a random seed is always generated anyway (which is strange, because RANDOM_SEED has clearly been set to 'True').

With the current setup, even when RANDOM_SEED is set to false, I notice that when the image gen starts and the seed is printed, it doesn't match the seed at the top of the cell. However, if I comment out the if statement, then the image gen seed does match the seed at the top of the cell...

Thanks gTurk, I added it as the latest version.

You should change the parameter RANDOM_SEED so that it is treated as a boolean field:

RANDOM_SEED = True #@param {type:"boolean"}
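
A sketch of how the seed cell could behave once the param is a real boolean; the surrounding variable names are assumptions, not the notebook's exact code:

import random

RANDOM_SEED = True #@param {type:"boolean"}
SEED = 42 #@param {type:"integer"}

# With a boolean param this check behaves as expected; with a string param,
# any non-empty value (even "False") is truthy and a new seed gets drawn.
if RANDOM_SEED:
    SEED = random.randint(0, 2**32 - 1)
print("Using seed:", SEED)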

Thank you for this, although I just wanted to point out that the max is 500 steps, not 150. That being said, I am still not sure if there are any benefits to using 500 vs 150 steps, but it might be worth raising the slider limit.

Thanks db88 and diegogd.
I have updated the notebook

Awesome! Appreciate you all

Here are a few more settings I've personally implemented:

STEPS = 150 #@param {type:"slider", min:5, max:500, step:5}
PRECISION = "full" #@param ["full","autocast"]
SCALE = 7.5 #@param {type:"slider", min:1, max:20, step:0.5}

!python scripts/txt2img.py --seed $SEED --prompt "$PROMPT" --ddim_steps $STEPS --W $WIDTH --H $HEIGHT --plms --n_samples $NUM_SAMPLES --n_iter $NUM_ITERS --outdir out_images --ckpt $model_ckpt --precision $PRECISION --scale $SCALE

The defaults are autocast precision & 7.5 scale.
All available parameters can be seen in scripts/txt2img.py itself if you look at its argument parser.

kobolddoido changed discussion status to closed
kobolddoido changed discussion status to open

As of a few days ago, I haven't been able to get this notebook to run on Colab. I get problems when executing the mamba command that resolves packages from the environment.yaml file:

Pinned packages:

  • python 3.7.*
  • python_abi 3.7.* cp37
  • cudatoolkit 11.1.*

Encountered problems while solving:

  • conflicting requests
  • conflicting requests

Has anyone else encountered this problem? Does anyone have suggestions for resolving the issue? Thanks!
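
If the cause is condacolab's base environment being pinned to Python 3.7 while environment.yaml asks for 3.8.5, one untested workaround sketch is to drop the python pin before solving:

# guess at a workaround: remove the exact python pin so mamba keeps the base
# interpreter instead of trying to install 3.8.5 into a 3.7-pinned env
!sed -i '/python=3.8.5/d' stable-diffusion/environment.yaml
!mamba env update -n base -f stable-diffusion/environment.yaml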
