Alternate models for outpainting

by Softology

I wanted to ask you a quick question about your outpainting code here
https://huggingface.co/blog/OzzyGT/outpainting-differential-diffusion

I used your code in a script that generates movies by repeatedly shrinking each frame and then outpainting the edges.
https://www.instagram.com/p/C6PlB_pMmeA/
https://www.instagram.com/p/C6CeSCpsQJ9/

I wanted to try some other models, but after trying to replace the model in

pipeline = StableDiffusionXLDifferentialImg2ImgPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

with
"stabilityai/stable-diffusion-xl-base-1.0"
"runwayml/stable-diffusion-inpainting"
"stabilityai/stable-diffusion-2-inpainting"
"lykon-models/dreamshaper-8-inpainting"
"Sanster/anything-4.0-inpainting"
"Uminosachi/realisticVisionV51_v51VAE-inpainting"
they all error out with

  File "D:\<patyh to venv>\lib\site-packages\diffusers\loaders\textual_inversion.py", line 161, in _maybe_convert_prompt
    tokens = tokenizer.tokenize(prompt)
AttributeError: 'NoneType' object has no attribute 'tokenize'

Do you have any other models that will work? Or do you know what to tweak to get those models working?

I was just searching huggingface with inpaint as the tag, but that does not seem to be enough to narrow down compatible models.
https://huggingface.co/models?sort=downloads&search=inpaint

Any tips would be greatly appreciated.

Thanks.

First of all, very cool project.

Actually, the code should work with any SDXL model, so it's weird that it didn't work with "stabilityai/stable-diffusion-xl-base-1.0". I did a quick test by copying the code and replacing the model, and it worked.

From your error, it seems it didn't find the tokenizer. Are you using a specific textual inversion you can share so I can test? It looks like the error comes from that and not from the base model.

These checkpoints won't work, since they're SD 1.5 / SD 2.x inpainting models rather than SDXL:

"runwayml/stable-diffusion-inpainting"
"stabilityai/stable-diffusion-2-inpainting"
"lykon-models/dreamshaper-8-inpainting"
"Sanster/anything-4.0-inpainting"
"Uminosachi/realisticVisionV51_v51VAE-inpainting

OK, now I am confused. I tried "stabilityai/stable-diffusion-xl-base-1.0" again now and it does work.

It may have been because I was originally using this syntax:

pipeline = StableDiffusionXLDifferentialImg2ImgPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

but I have now changed it to:

pipeline = StableDiffusionXLPipeline.from_pretrained(
    args2.model, torch_dtype=torch.float16, variant="fp16", custom_pipeline="pipeline_stable_diffusion_xl_differential_img2img"
).to("cuda")

Are there any other models you know of that will work? It helps to give users options for which model to use to get different results.

Glad you solved it. For that tutorial you must use that custom pipeline; it's what enables the "differential diffusion" technique.
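For reference, a single outpainting step with the custom pipeline looks roughly like this (a minimal sketch; the original_image and map parameter names follow the blog post's example, and the frame/mask images are assumed to be prepared as in the tutorial):

import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0",
    torch_dtype=torch.float16,
    variant="fp16",
    custom_pipeline="pipeline_stable_diffusion_xl_differential_img2img",
).to("cuda")

# image: the shrunken frame to outpaint; mask: greyscale change map
# (white = regenerate, black = keep), both prepared as in the blog post
image = load_image("frame.png")
mask = load_image("mask.png")

result = pipeline(
    prompt="high quality",
    negative_prompt="",
    width=1024,
    height=1024,
    guidance_scale=6.0,
    num_inference_steps=25,
    original_image=image,
    image=image,
    strength=1.0,
    map=mask,
).images[0]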

With that working, as I told you before, you can use any SDXL model. Unfortunately there's a problem where from_single_file doesn't load custom pipelines, so the models have to be in the diffusers format for now; alternatively, you can download the pipeline file and use it directly with single-file checkpoints.

Edit: I see that you were working with the pipeline directly. Maybe I forgot to add something for textual inversion; I'm going to check, but it should work when used directly too.

If you want to search for them, this can help: https://huggingface.co/models?other=diffusers%3AStableDiffusionXLPipeline

Just don't use any of the low-step distilled ones, like the turbo or hyper models; since they use so few steps, they're worse at this kind of task.

For videos and animations, the Lightning 8-step models can work really well. I recommend you take any SDXL model, add the Lightning 8-step LoRA, and test it. For example, this animation was done with Juggernaut plus the Lightning LoRA:

https://twitter.com/OzzyGT/status/1776132146312380879
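Loading the Lightning LoRA looks roughly like this (a sketch; the 8-step file name and the trailing timestep spacing follow the ByteDance/SDXL-Lightning model card):

import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v9", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# load and fuse the 8-step Lightning LoRA into the UNet
pipeline.load_lora_weights(
    "ByteDance/SDXL-Lightning", weight_name="sdxl_lightning_8step_lora.safetensors"
)
pipeline.fuse_lora()

# the Lightning model card recommends trailing timestep spacing
pipeline.scheduler = EulerDiscreteScheduler.from_config(
    pipeline.scheduler.config, timestep_spacing="trailing"
)

# distilled models run with very few steps and little or no CFG
image = pipeline("a cinematic landscape", num_inference_steps=8, guidance_scale=0).images[0]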

Thanks for the model search list. I was able to get some fp16 and non-fp16 models to work.
"Recursive Outpainting" is now available in Visions of Chaos.

Your other SDXL Prompt Interpolation script is great too; I am playing with that now. Really nice, with very smooth results that maintain colors and detail and have minimal temporal flickering.

Glad to help and that it worked. Very cool app, I'm going to add it to my list of recommendations.

Can you help point me to which models I can use to replace
"RunDiffusion/Juggernaut-XL-v9"
and also alternates for
"ByteDance/SDXL-Lightning", weight_name="sdxl_lightning_4step_lora.safetensors", adapter_name="lighting"
in SDXL Prompt Interpolation?

If I am right (?), the supported models need to be fine-tuned from stabilityai/stable-diffusion-xl-base-1.0 and have the file model_index.json in the repository; otherwise I get 404 Not Found and other HF errors. They also have to be fp16. From testing and trial runs here, these are the compatible ones I found (a quick way to pre-check those criteria is sketched after this list):
RunDiffusion/Juggernaut-XL-v9
UnfilteredAI/NSFW-gen-v2
max-fofanov/RealVisXL_V4.0
SG161222/RealVisXL_V4.0
stabilityai/stable-diffusion-xl-base-1.0
playgroundai/playground-v2-1024px-aesthetic
dataautogpt3/OpenDalleV1.1
misri/kohakuXLEpsilon_rev1
stablediffusionapi/copax-timelessxl-sdxl10
recoilme/ColorfulXL-Lightning
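Those criteria can be pre-checked with huggingface_hub before downloading anything (a sketch; assumes a recent huggingface_hub that provides HfApi.file_exists, and the repo names are just examples):

from huggingface_hub import HfApi

api = HfApi()

def check_repo(repo_id):
    # diffusers-format pipelines keep a model_index.json at the repo root
    if not api.file_exists(repo_id, "model_index.json"):
        return "not diffusers format (likely a single-file checkpoint)"
    # fp16 variants use the .fp16.safetensors suffix
    files = api.list_repo_files(repo_id)
    if any(name.endswith(".fp16.safetensors") for name in files):
        return "diffusers format with fp16 variant"
    return "diffusers format, full precision only"

for repo in ["SG161222/RealVisXL_V4.0", "dataautogpt3/OpenDalleV1.1"]:
    print(repo, "->", check_repo(repo))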

Oh, you can use any of the SDXL fine-tuned models, but you'll need to change the loading code. For the ones that have a model_index.json (i.e. the ones in the diffusers format) but don't have an fp16 variant:

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "dataautogpt3/ProteusV0.2",
    torch_dtype=torch.float16,
    custom_pipeline="pipeline_stable_diffusion_xl_differential_img2img",
).to("cuda")

Just keep in mind that these are usually around 10 GB because they're saved in full precision.
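If the repeated 10 GB downloads are a problem, one option (not from the thread, just a sketch) is to load the model once in fp16 and save a local fp16 copy to reload from:

import torch
from diffusers import StableDiffusionXLPipeline

# downloads the full-precision weights once and casts them to fp16 in memory
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "dataautogpt3/ProteusV0.2", torch_dtype=torch.float16
)

# save a local fp16 copy (about half the size) and reload from it next time;
# re-add custom_pipeline=... on reload if you need the differential pipeline
pipeline.save_pretrained("./ProteusV0.2-fp16", variant="fp16")
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "./ProteusV0.2-fp16", torch_dtype=torch.float16, variant="fp16"
).to("cuda")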

For the ones that don't have a model_index.json and are just a single-file checkpoint (the original format), you can use this:

pipeline = StableDiffusionXLDifferentialImg2ImgPipeline.from_single_file(
    "https://huggingface.co/dataautogpt3/TempestV0.1/blob/main/TempestV0.1-Artistic.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

For these you just need to check that they're around 6 or 10 GB, and you have to use StableDiffusionXLDifferentialImg2ImgPipeline directly because, for now, we can't use custom pipelines with from_single_file.

Also, playgroundai/playground-v2-1024px-aesthetic is not a model trained with SDXL as a base. You can still use it, but you'll need to switch to the params they show in their repo.
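For playground-v2 that looks roughly like this (a sketch; the low guidance_scale and the add_watermarker flag follow their model card, so double-check the values there):

import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "playgroundai/playground-v2-1024px-aesthetic",
    torch_dtype=torch.float16,
    variant="fp16",
    add_watermarker=False,
).to("cuda")

# playground-v2 wants its own settings; its card suggests a much lower
# guidance_scale (around 3.0) than typical SDXL values
image = pipeline("a photo of an astronaut", guidance_scale=3.0).images[0]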

For sdxl_lightning_4step_lora.safetensors, you can use any of the LoRAs from that repo (https://huggingface.co/ByteDance/SDXL-Lightning/tree/main), but you'll need to change the number of steps accordingly. You can also use a newer one that was released a couple of days ago: https://huggingface.co/ByteDance/Hyper-SD/tree/main

Use only the SDXL ones that have "lora" in the name. For the model, you can swap in any model using the logic from before.
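The Hyper-SD LoRAs follow the same loading pattern (a sketch; the file name follows the repo's naming, so check the exact name and the recommended scheduler/guidance settings on their card):

import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v9", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# match num_inference_steps to the LoRA you pick (here, the 8-step one)
pipeline.load_lora_weights(
    "ByteDance/Hyper-SD", weight_name="Hyper-SDXL-8steps-lora.safetensors"
)
pipeline.fuse_lora()

image = pipeline("a cinematic landscape", num_inference_steps=8).images[0]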

You can skip loading the LoRAs and use a regular model, but I used them so the images get generated a lot faster.

Thanks for the tips. I can get other LoRAs working now (I just need to increase the iterations).

@OzzyGT for some reason if I try this:

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "dataautogpt3/ProteusV0.2",
    torch_dtype=torch.float16,
    custom_pipeline="pipeline_stable_diffusion_xl_differential_img2img",
).to("cuda")

I get a 404 error for that custom_pipeline:

huggingface_hub.utils._errors.HfHubHTTPError: 404 Client Error: Not Found for url: https://raw.githubusercontent.com/huggingface/diffusers/v0.27.2/examples/community/pipeline_stable_diffusion_xl_differential_img2img.py

That's because that pipeline was added after the latest release. To use it like that, you'll need to install diffusers from main like this:

pip install git+https://github.com/huggingface/diffusers.git

or you can download the pipeline and use it directly:

https://raw.githubusercontent.com/huggingface/diffusers/main/examples/community/pipeline_stable_diffusion_xl_differential_img2img.py
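If you go the download route, you can import the pipeline class straight from the file and combine it with single-file checkpoints (a sketch; assumes the downloaded .py sits next to your script):

import torch
from pipeline_stable_diffusion_xl_differential_img2img import (
    StableDiffusionXLDifferentialImg2ImgPipeline,
)

pipeline = StableDiffusionXLDifferentialImg2ImgPipeline.from_single_file(
    "https://huggingface.co/dataautogpt3/TempestV0.1/blob/main/TempestV0.1-Artistic.safetensors",
    torch_dtype=torch.float16,
).to("cuda")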

Thanks, I got it working now.
