First steps

#2
by lincesagrado - opened

Hi. First, thank you for your work.
Interesting performance. It's not consistent, but the good results were way above my expectations.
Regarding the second example: is there a way to mask the image after uploading it, or do we need to do that beforehand? And if it's beforehand, how? By masking in black, or by uploading a PNG with transparency?
Thanks again.

There is no way to mask the image after uploading.

It expects a PNG with transparency.

Note: the results may vary depending on whether or not the alpha channel is truly binary (only 0/1 values).
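Since a soft (non-binary) alpha channel can change the results, it may help to threshold the alpha before uploading. Here is a hypothetical helper using Pillow; the function name and the threshold of 128 are my own choices, not part of the space:

```python
from PIL import Image

def binarize_alpha(in_path: str, out_path: str, threshold: int = 128) -> None:
    """Force a PNG's alpha channel to be strictly binary:
    every pixel becomes either fully transparent (0) or fully opaque (255)."""
    img = Image.open(in_path).convert("RGBA")
    r, g, b, a = img.split()
    # Map each alpha value to 0 or 255 depending on the threshold.
    a = a.point(lambda v: 255 if v >= threshold else 0)
    Image.merge("RGBA", (r, g, b, a)).save(out_path)
```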

What about the low resolution of the new pictures? We can scale the canvas, but that only gives more space; the resolution is still low. How can we change that?

The GAN is only capable of producing 512x512 images.

Right!
Ok. What I'm going to do is downscale the original picture, outpaint it, and then upscale the result. When I get something useful I'll let you know.
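The workaround above could be sketched like this with Pillow. Note that `run_outpainting` is a placeholder for however you actually invoke the model, and the plain Lanczos upscale at the end would benefit from being replaced by a dedicated super-resolution model:

```python
from PIL import Image

def outpaint_with_rescale(img: Image.Image, run_outpainting, canvas: int = 512) -> Image.Image:
    """Shrink the source to fit the model's canvas, outpaint, then scale back up."""
    scale = canvas / max(img.size)  # shrink factor so the longer side fits
    small = img.resize(
        (round(img.width * scale), round(img.height * scale)), Image.LANCZOS
    )
    result = run_outpainting(small)  # the model works at low resolution
    # Upscale by the inverse factor to recover the original pixel density.
    return result.resize(
        (round(result.width / scale), round(result.height / scale)), Image.LANCZOS
    )
```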

It's a really cool set of models! Is it possible to make them run square by square on a larger image to outpaint the background, like g-diffuser does (from parlance-zz/g-diffuser-bot)? If I understand correctly, that's how it works there: to outpaint, say, a 1024px square image by 128px in each direction, it would need 8 squares of 512x512 to cover all the space (all borders that have pixels to extend), and the image would become 1536x1536. Can I somehow script these models to do the same?
thanks for sharing these goodies with the community! =)

I have added the option to run tiled outpainting.
This means you can create images >512px <=1024px without any scaling artifacts.

I have limited it to 2x2 tiles because it would become prohibitively slow beyond that (running on CPU, that is).
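The idea behind 2x2 tiling can be sketched as follows: each tile is a full-resolution 512x512 crop, and the tile origins are spaced so that two tiles per axis span the canvas, overlapping whenever the canvas is smaller than 1024px. This only computes the tile layout; the blending actually used by the space is not shown and the function is my own illustration:

```python
def tile_origins(size: int, tile: int = 512, tiles_per_axis: int = 2) -> list[int]:
    """Top-left offsets of `tiles_per_axis` tiles of width `tile` covering `size` px."""
    if size <= tile:
        return [0]  # a single tile already covers the canvas
    # Spread the tiles evenly; they overlap when size < tiles_per_axis * tile.
    step = (size - tile) / (tiles_per_axis - 1)
    return [round(i * step) for i in range(tiles_per_axis)]
```

For example, a 768px canvas gets two overlapping tiles per axis at offsets 0 and 256 (covering 0-512 and 256-768), while a full 1024px canvas gets non-overlapping tiles at 0 and 512, which is why sizes above 1024px would need more than 2x2 tiles.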

[example image attached]

whoa! the example looks sick! I'm gonna try it now with my sample images :P . thank you!

Is there a way to make it generate a video showing how it produced the result?

Do you mean:
a) the tiled outpainting process
b) the diffusion process (in the outpainting_example2.py)
c) something else
?

The images generated by this space are created by a GAN in a single pass.
The GAN does not have any intermediate generation steps to make a video from
(unlike diffusion models, which refine the image across multiple steps, as with the examples on the front page).

The outpainting process
