Tile Model for SDXL?

#1 opened by 97Buckeye

Do you intend to release a Tile model for XL? I really miss using ControlNet for upscaling in SDXL.

Yes, agree with @97Buckeye: in SD 1.5, tile did wonders. For me this is the most anticipated ControlNet model.

Every day I wake up I check if there is a tile model for SDXL.
Is this one more difficult than the rest? We already have 9 canny and 11 depth models, but still no tile model.

> Every day I wake up I check if there is a tile model for SDXL.
> Is this one more difficult than the rest? We already have 9 canny and 11 depth models, but still no tile model.

Same here. And I don't even see anyone talking about a Tile model. I hate having to switch down to a 1.5 model just so I can upscale my XL images.

@Illyasviel Might you have any information regarding a Tile model for XL?

> Same here. And I don't even see anyone talking about a Tile model. I hate having to switch down to a 1.5 model just so I can upscale my XL images.

@97Buckeye What's your flow for this? As I understand it, the latent spaces of 1.5 and XL are not compatible. Is it like: generate XL -> encode into 1.5 latent -> upscale with 1.5 model + tile ControlNet? Are the results close to the original XL image with denoise 0.3 - 0.4?

> @97Buckeye What's your flow for this? As I understand it, the latent spaces of 1.5 and XL are not compatible. Is it like: generate XL -> encode into 1.5 latent -> upscale with 1.5 model + tile ControlNet? Are the results close to the original XL image with denoise 0.3 - 0.4?

I do pretty much that, but I do get some small "artifacts" on my images - the 1.5 model doesn't understand things XL created. I made a DeviantArt post with my results:
https://www.deviantart.com/yashamon/journal/Testing-out-AI-upscaling-methods-983456810
Sometimes I even get whole scenes in a single tile that somewhat match the original. For example, check out the roof in this one:
https://www.deviantart.com/yashamon/art/AI-4K-8K-Druid-s-house-983850912
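For anyone wondering about the mechanics of this flow: since the latents really aren't compatible, the handoff happens in pixel space. A minimal diffusers sketch, using the model IDs and the 0.3-0.4 strength range discussed above (treat it as a sketch, not a reference implementation):

```python
# Sketch of the SDXL -> SD1.5 tile-upscale handoff discussed above.
# SDXL and SD1.5 latents are incompatible, so the bridge is the decoded
# image itself: take the SDXL output, pre-upscale it, then img2img it
# with an SD1.5 checkpoint plus the 1.5 tile ControlNet.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("sdxl_output.png")                      # image generated by SDXL
image = image.resize((image.width * 2, image.height * 2))  # naive 2x pre-upscale

result = pipe(
    prompt="high quality, detailed",
    image=image,             # img2img init image
    control_image=image,     # tile ControlNet conditions on the same image
    strength=0.4,            # the 0.3-0.4 denoise range mentioned above
    num_inference_steps=20,
).images[0]
result.save("upscaled_2x.png")
```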

I would like to throw in my 2 cents: I'm also searching daily for news about tile support for SDXL. Getting tired of switching between checkpoints. Thanks so much for the work so far on this project, it's simply amazing.

Joining the daily waiting team.

this is the feature I use the most with controlnet so I'll be joining the waiting team as well

Same page club

Me too. Would be great to have it for sdxl!

I don't think it's happening, folks. I think he hates us.

really hoping Tile for SDXL comes out before anything else that's coming

Just saving this thread in case the model finally releases 👀

Gentlemen, is there still a free seat in the Tile waiting room? Yes? Thank you very much~

Tile ultras!

I saw someone mention the blur model could work for tile upscaling; I haven't been successful yet, though: https://huggingface.co/lllyasviel/sd_control_collection/blob/main/kohya_controllllite_xl_blur.safetensors

> I saw someone mention the blur model could work for tile upscaling; I haven't been successful yet, though: https://huggingface.co/lllyasviel/sd_control_collection/blob/main/kohya_controllllite_xl_blur.safetensors

Has anyone tested it?

I tested it, it doesn't work as Tile. The results are the same as only using Ultimate SD Upscaler: hit and miss, and messed up tiles.

Our wait continues...

Yeah, I don't even understand how that would work any differently than just using a slightly higher denoise strength in img2img. 🤷🏽‍♂️

Tbh it doesn't surprise me anymore that people still mostly use 1.5, when so many little things here and there are missing or have insane hardware requirements on SDXL.

Actually HiRes Fix is working in the latest A1111 update; also try using Tiled VAE. I upscale my txt2img up to 4 times, then you can upscale it further via the Extras tab.
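For reference, diffusers exposes the same tiled-VAE idea as a one-liner. A minimal sketch of the basic mechanism (A1111's Tiled VAE extension does more, but the core trick is tiling the VAE pass so large decodes don't run out of VRAM):

```python
# Minimal sketch of tiled VAE decoding in diffusers: the latent is decoded
# in overlapping tiles instead of one huge tensor, trading a little speed
# for much lower peak VRAM at large resolutions.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.enable_vae_tiling()  # tile the VAE encode/decode passes

image = pipe(prompt="a castle on a hill", width=2048, height=2048).images[0]
image.save("big.png")
```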

> Actually HiRes Fix is working in the latest A1111 update; also try using Tiled VAE. I upscale my txt2img up to 4 times, then you can upscale it further via the Extras tab.

I don't think I'm using the Tiled VAE tool correctly. Do you need to use it in conjunction with any other tool? Do you use it during your initial text-to-image run or afterwards in an image-to-image run? Would you mind explaining your process and settings for me? 🙏🏼

I'm also looking forward to SDXL Controlnet Tile.

Any update on this? I really need the tile model! Do we know if anybody is even working on this?

I'm using StableSwarmUI and I'm able to upscale the generated SDXL images in the "Refiner" function, with a denoise between 0.2 and 0.5. I'm loving using this UI because in addition to being super fast, it's very accurate when upscaling.

> I'm using StableSwarmUI and I'm able to upscale the generated SDXL images in the "Refiner" function, with a denoise between 0.2 and 0.5. I'm loving using this UI because in addition to being super fast, it's very accurate when upscaling.

Upscaling is not a problem with low denoise values such as 0.2 - 0.4. The benefit of the tile model is that it adds more "relevant" details to the upscaled image at higher denoise values by taking the context into account. I think you don't need it if you're able to upscale the whole image in one go, but usually there aren't enough resources and an image is split into small chunks which are upscaled separately, e.g. with Ultimate Upscale.
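To make the chunking explicit, here is a bare-bones sketch of what Ultimate-Upscale-style tiling does. Real implementations also feather and blend the overlapping seams; `enhance_tile` is a hypothetical stand-in for the per-tile img2img pass:

```python
# Bare-bones sketch of the chunked upscaling described above: split the
# big image into tiles that fit in VRAM, re-diffuse each tile separately,
# then paste the results back.
from PIL import Image

def enhance_tile(tile_img: Image.Image) -> Image.Image:
    # Placeholder: in practice this is an img2img call, ideally conditioned
    # by a tile ControlNet so each chunk stays aware of its context.
    return tile_img

def upscale_in_chunks(img: Image.Image, tile: int = 512, overlap: int = 64) -> Image.Image:
    out = img.copy()
    step = tile - overlap
    for y in range(0, img.height, step):
        for x in range(0, img.width, step):
            box = (x, y, min(x + tile, img.width), min(y + tile, img.height))
            out.paste(enhance_tile(img.crop(box)), box)
    return out
```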

> Upscaling is not a problem with low denoise values such as 0.2 - 0.4. The benefit of the tile model is that it adds more "relevant" details to the upscaled image at higher denoise values by taking the context into account. I think you don't need it if you're able to upscale the whole image in one go, but usually there aren't enough resources and an image is split into small chunks which are upscaled separately, e.g. with Ultimate Upscale.

Of course, I understand perfectly. What I'm saying is that while we don't have Tile for SDXL, a very convenient option I've found is StableSwarmUI. StabilityAI is doing an excellent job with this interface and improving it quickly. I'm getting very good, enlarged images (2048x2048px), even without Tile. When they launch Tile, I will use it in this interface. I no longer use AUTOMATIC1111 because it no longer makes sense for me, as I am getting superior and much faster results.

> Upscaling is not a problem with low denoise values such as 0.2 - 0.4. The benefit of the tile model is that it adds more "relevant" details to the upscaled image at higher denoise values by taking the context into account. I think you don't need it if you're able to upscale the whole image in one go, but usually there aren't enough resources and an image is split into small chunks which are upscaled separately, e.g. with Ultimate Upscale.

> Of course, I understand perfectly. What I'm saying is that while we don't have Tile for SDXL, a very convenient option I've found is StableSwarmUI. StabilityAI is doing an excellent job with this interface and improving it quickly. I'm getting very good, enlarged images (2048x2048px), even without Tile. When they launch Tile, I will use it in this interface. I no longer use AUTOMATIC1111 because it no longer makes sense for me, as I am getting superior and much faster results.

Isn't it still the same "simple" upscale we have everywhere else, just with another UI? Personally I stopped using A1111 when I learned about Comfy: more flexibility, but still clumsy at many things (in terms of UX). You piqued my interest with StableSwarmUI; I checked the latest version, still alpha, but in their motivation doc (https://github.com/Stability-AI/StableSwarmUI/blob/master/docs/Motivations.md) two things are interesting: 1) a non-Python server (they use C#) while also using Comfy in the backend (not sure if that's good or bad), 2) a custom frontend (no dependency on other tools).

How would you compare it to Comfy, if you've used the latter? And just to avoid too much off-topic: have you tried upscaling the same image in StableSwarmUI and any other tool with the same parameters, and do you get different (better in StableSwarmUI) results?

> Isn't it still the same "simple" upscale we have everywhere else, just with another UI? Personally I stopped using A1111 when I learned about Comfy: more flexibility, but still clumsy at many things (in terms of UX). You piqued my interest with StableSwarmUI; I checked the latest version, still alpha, but in their motivation doc (https://github.com/Stability-AI/StableSwarmUI/blob/master/docs/Motivations.md) two things are interesting: 1) a non-Python server (they use C#) while also using Comfy in the backend (not sure if that's good or bad), 2) a custom frontend (no dependency on other tools).
>
> How would you compare it to Comfy, if you've used the latter? And just to avoid too much off-topic: have you tried upscaling the same image in StableSwarmUI and any other tool with the same parameters, and do you get different (better in StableSwarmUI) results?

Exactly. As you mentioned, it uses Comfy on the backend. This is good because it's like using all the power of Comfy in a simple and friendly interface. If you already use Comfy, you can simply link it with the StableSwarm interface and you're ready to go. You can use your Comfy workflows directly in the interface if you wish.

I'm getting very good results upscaling 2x with denoising from 0.2 up to 0.5 on some images. It's also light on the GPU: on an RTX 2060 Super it takes 35 sec to generate at 1024x1024px and 160 sec for images up to 2048x2048px. This is interesting because it upscales in one step, without having to take it to img2img.

To do this, use the "Refiner" tab. "Refine Control Percentage" is equivalent to the Denoising Strength. For "Refiner Method" I am using PostApply. For "Refiner Upscale Method" I chose the model 4x-UltraSharp.pth.

When they launch the Tile model, it can be used normally in the ControlNet tab. I have already tested ControlNets for SD1.5 there and they work normally.

As someone who has been using Tile for many, many months now with inpainting, I have to clarify that this model is not only used for upscaling; it's also very, very important for detail enhancement. The lack of a Tile model is the only reason why I'm not using SDXL. I'm using a Tile + detail LoRA combo with SD 1.5 at 0.95+ denoise to drastically increase the details in my images without changing the original image through inpainting, so this is why I'm still waiting for SDXL tile.

To answer your questions: no, lower denoise on SDXL is not a solution, this is the whole point. Also, no, Blur is totally different.
Inpainting with Tile on 1.5 yields amazing results, outperforming any possible workflow in quality, so that's why I just can't wait to see what SDXL tile can do... if we ever get to see it...
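For readers wanting the gist in code form, here is a rough diffusers sketch of such a Tile + detail-LoRA inpaint pass. This is not DarkStoorM's actual workflow; the LoRA path and settings are assumptions:

```python
# Rough sketch of an SD1.5 Tile + detail-LoRA inpainting combo: the tile
# ControlNet anchors the repaint to the original image, which is what lets
# the denoise go as high as ~0.95 without destroying the composition.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("path/to/detail_lora.safetensors")  # hypothetical detail LoRA

image = load_image("source.png")
mask = load_image("region_to_detail.png")  # white = area to re-detail

result = pipe(
    prompt="highly detailed",
    image=image,
    mask_image=mask,
    control_image=image,    # tile CN keeps the repaint close to the original
    strength=0.95,          # the 0.95+ denoise the post describes
    num_inference_steps=30,
).images[0]
result.save("detailed.png")
```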

@DarkStoorM I find the concept you mentioned, "increasing details with tiles", quite interesting. I was wondering if you would like to share how that works? Can you post a workflow somewhere? I would also be happy to chat by any other means (Discord, Facebook, etc.)

> As someone who has been using Tile for many, many months now with inpainting, I have to clarify that this model is not only used for upscaling; it's also very, very important for detail enhancement. The lack of a Tile model is the only reason why I'm not using SDXL. I'm using a Tile + detail LoRA combo with SD 1.5 at 0.95+ denoise to drastically increase the details in my images without changing the original image through inpainting, so this is why I'm still waiting for SDXL tile.

> To answer your questions: no, lower denoise on SDXL is not a solution, this is the whole point. Also, no, Blur is totally different.
> Inpainting with Tile on 1.5 yields amazing results, outperforming any possible workflow in quality, so that's why I just can't wait to see what SDXL tile can do... if we ever get to see it...

100% correct. Thank you. We need this Tile option for SDXL.

@JPGranizo

> @DarkStoorM [...] I was wondering if you would like to share how that works?

This is purely manual work, but here's the workflow (a bit outdated; I still use roughly the same process as when Tile was first released):

Happy reading :)

I do slightly different work, which focuses purely on introducing as much human authorship as possible and reducing the AI-ness of already upscaled images, so it's a very unique approach that probably no one else uses 😅

Example artwork from my workflow below (pushing the detailing limits):

[image: test.png]

> As someone who has been using Tile for many, many months now with inpainting, I have to clarify that this model is not only used for upscaling; it's also very, very important for detail enhancement.

@DarkStoorM I like your definition of "detail enhancement" for Tile, and most likely this is what people imply when they talk about upscaling, because essentially it's the same. When you upscale with low denoise you kind of "stretch" existing details rather than add new ones. If you upscale by 2x, you get roughly 4 times lower detail density, since the pixel area grows quadratically. Tile not only mitigates this but can add even more details, something you mentioned for inpainting. So it's really more accurate to talk about Tile in the context of detail enhancement in general rather than just upscaling.

Using denoising as high as 0.95 does add a lot of detail, but to the point of ruining the image. Look at the details of the armor: what type of armor has these details? Of course, Tile helps a lot to maintain the original image, but the denoise strength has to be used wisely. High denoising values can simply create more details than there should be.

This image was generated with the JuggernautXL (SDXL) model and was upscaled 2x without Tile at 0.5 denoising strength. You can see that there is still enough detail in this image for it to be considered a good, detailed, high quality image. Using a good UI and knowing how to configure it, it is now possible to get good 2x upscaled images, even better than SD1.5 with Tile.

[image: 8279553745-SDXL.png]

> This image was generated with the JuggernautXL (SDXL) model and was upscaled 2x without Tile at 0.5 denoising strength. You can see that there is still enough detail in this image for it to be considered a good, detailed, high quality image. Using a good UI and knowing how to configure it, it is now possible to get good 2x upscaled images, even better than SD1.5 with Tile.

And it would be even better with a tile model. I am upscaling my images to 8K at 0.15-0.2 denoise (with higher values I get a lot of "artifacts"), and if I want good results I have to do it in steps (1536x864 -> 1920x1080 -> 3840x2160 -> 7680x4320) and often have to fix minor issues manually. Also, some areas (when you zoom in) are blurry and some are too sharp. With a tile model I assume I could do it in one go at 0.8 denoise, which would remove a lot of manual work.

Example:

[image: 00190+.jpg]
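The stepwise approach above boils down to a resize-then-low-denoise loop. A minimal sketch, with `img2img` as a placeholder for whichever backend is actually used (A1111, ComfyUI, or a diffusers pipeline):

```python
# Sketch of the stepwise low-denoise upscale described above: resize one
# stage at a time, run img2img at ~0.15-0.2 strength, repeat.
from PIL import Image

STAGES = [(1920, 1080), (3840, 2160), (7680, 4320)]  # the steps from the post

def img2img(img: Image.Image, strength: float) -> Image.Image:
    return img  # placeholder for the actual low-denoise sampling call

def stepwise_upscale(img: Image.Image, strength: float = 0.2) -> Image.Image:
    for w, h in STAGES:
        img = img.resize((w, h), Image.LANCZOS)
        img = img2img(img, strength)  # low denoise keeps each step faithful
    return img
```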

Do you guys use the tile model together with MultiDiffusion upscale in order to increase details? Or do you only use ControlNet Tile with img2img upscale?

Just here to add another hope for an SDXL Tile ControlNet.
I have been using IP-Adapter in lieu of tile, but it's not as good.

I came here looking for a tile ControlNet, but it appears that someone has already done it on GitHub: https://github.com/Mikubill/sd-webui-controlnet/issues/2049
Hopefully we can see it soon.

That's great! Something is finally happening.

Isn't it just a UI plugin? You still need models, and for SDXL it links to this: https://github.com/Mikubill/sd-webui-controlnet/discussions/2039

...but it's for anime

> ...but it's for anime

Are you serious? 🤦🏽‍♂️

"Since the dataset used during training is an anime 2D/2.5D model, currently, its repainting effect on real photography styles is not good; we will have to wait until completing its final version."

"Since the dataset used during training is an anime 2D/2.5D model, currently, its repainting effect on real photography styles is not good; we will have to wait until completing its final version."

Ugh. Weeb culture strikes, again.

"Since the dataset used during training is an anime 2D/2.5D model, currently, its repainting effect on real photography styles is not good; we will have to wait until completing its final version."

Good for waifus though

Any development yet on a photorealistic Controlnet Tile for SDXL??

Something interesting to try out: DemoFusion.
They claim it can be run on Windows with 8 GB of VRAM.

I developed Hybrid Video for Deforum Stable Diffusion. The Tile model greatly enhances video capability: using ControlNet with tile and the video input, as well as using hybrid video with the same video. Hybrid video prepares the init images, but ControlNet works during generation. With tile, you can run strength 0 and still get good video. I haven't found a suitable replacement for SDXL.

He released them, the wait is over: https://huggingface.co/bdsqlsz/qinglong_controlnet-lllite

These don't really work the way ControlNet Tile worked for SD 1.5. Still waiting for SDXL ControlNet Tile.

Perhaps, a NeurIPS release, as a winter holiday surprise? One can hope.

Perhaps we can have ControlNet Tile for SDXL via the new X-Adapter?
https://showlab.github.io/X-Adapter/

> Perhaps we can have ControlNet Tile for SDXL via the new X-Adapter?
> https://showlab.github.io/X-Adapter/

This would be an amazing tool!

Glad to see so much interest for SDXL Tile ControlNet! Adding my voice to the crowd.

What would be awesome as a start is to know why we haven't gotten an SDXL Tile model yet. If it has to take some time, that's fine, but... why is it taking so much time? The lack of understanding and communication is pretty alarming, tbh.

> Perhaps we can have ControlNet Tile for SDXL via the new X-Adapter?
> https://showlab.github.io/X-Adapter/

WOW!! Interesting, thanks for that

Waiting for this too!

SDXL Tile ControlNet, come out quickly, come out quickly. We need it!

I've been waiting for this since the summer too... Still hoping to be able to use it with the SDXL models.
But don't you think the tool already exists, but was developed (or bought) by and for Magnific AI? The results seem so close to an SDXL Ultimate SD Upscale with a tile/blur filter.

> I've been waiting for this since the summer too... Still hoping to be able to use it with the SDXL models.
> But don't you think the tool already exists, but was developed (or bought) by and for Magnific AI? The results seem so close to an SDXL Ultimate SD Upscale with a tile/blur filter.

The person who made the anime model is at least working on their own version, but it's slow progress. More info in the GitHub topic:
https://github.com/Mikubill/sd-webui-controlnet/issues/2049

I get the feeling that Magnific AI already uses a tile version of SDXL. Maybe one of the creators got paid not to publish it? It can't be that hard, when a random dude can create one but the creator of the 1.5 ControlNet Tile doesn't...

@Dervlex @97Buckeye and everybody else: FYI, I solved high-fidelity SDXL upscaling in a very different way, and my results are competitive with Magnific AI (if you are not looking for the HDR effect that hugely deviates from the source image). I added the function to my AP Workflow 8.0 for ComfyUI, released earlier this week: https://perilli.com/ai/comfyui/

(scroll the page and you'll also find a couple of videos to show you the quality of the upscaling)

Of course, this doesn't solve the problem for people who don't use ComfyUI, but it's better than nothing. I never had good results with Ultimate SD Upscale. This new method IMO is significantly better and faster (even on my sad M2 Max).

@perilli This looks complicated, but very interesting. Your upscaling looks great! Do you have any examples of very low resolution images with details being added like Magnific AI does? The added details are really what has everyone excited about Magnific AI.

@97Buckeye one of the functions of my workflow, called "Image Enhancer", adds details to the upscaled image. It can't do (yet) exactly what Magnific AI does in terms of achieving that HDR look, but it can add lots of details as you can see in the Old Man video example.

By tweaking the parameters of that function you can add some creativity. I don't have an example of my own handy right now, but somebody else (a user called "monero") used the same approach and you can see the results in this video.

I don't want to derail this thread, so for any additional questions on the AP Workflow 8.0, I suggest you comment on my Reddit post: https://www.reddit.com/r/StableDiffusion/comments/1al5l16/release_ap_workflow_80_for_comfyui_now_with_a/

Still no ControlNet Tile model for SDXL? Anyone know why?

Very much looking forward to the development of an SDXL tile model. Willing to open up a bounty for this work to help fund progress in this regard. @Illyasviel, what is the best way to contribute funding directly towards SDXL tile model development?

@3x3q I think prayer for a miracle is your best bet at this point.

That guy released it. Plus, you can try the X-Adapter thing, so you can use the tile model from 1.5, but there's no separate extension so far.

what

Realistic (non-anime) SDXL Tile dropped: https://huggingface.co/bdsqlsz/qinglong_controlnet-lllite
Haven't tested it yet, but looks promising.

Edit: Tweaked settings for quite a while and I cannot, for the life of me, get it to produce good results using ControlNet LLLite. As noted in the repo, this model isn't compatible with original ControlNet. I may not be considering something important in my workflow.

Yeah, I don't think I'm gonna bother testing it any further. Gave it another try; I thought XL inpainting would be better with it, but I'm really not impressed, not as powerful as 1.5. Not really worth switching for twice the VRAM usage and lower speed, worse results and the inpainting resolution limit. Maybe it has some use cases in Comfy if someone uses SDXL, so probably a plus for that.

https://huggingface.co/TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic_V1 - try this one and read the instructions for use. Same as 1.5 but for SDXL, realistic version, no guarantee on 2D.

Thanks a lot TTPlanet! I can confirm that this actually works!

I use ComfyUI with JuggernautXL_v9 and 4x-UltraSharp.pth to upscale a close-up landscape photo from 2K to 8K, and after 6 hours of experimentation I got the best result using Tiled KSampler with tiling strategy "random strict", 20 steps, cfg 7, sampler euler, scheduler sgm_uniform, denoise 0.6, and ControlNet strength set to 0.5.

I found that using IP-Adapter (plus) was producing slightly better details, but at the cost of the image losing global contrast and no longer looking like the original when seen from further away.

Also, I got considerably better results with a regular checkpoint at 20 steps than with a lightning checkpoint at 6 steps.
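For those not on ComfyUI, an approximate diffusers rendition of those settings might look like the sketch below. It ignores the Tiled KSampler's tiling strategy entirely, and it assumes the TTPlanet checkpoint loads as a diffusers ControlNet (it may need conversion from the single safetensors file):

```python
# Approximate diffusers translation of the settings above (20 steps, cfg 7,
# euler, denoise 0.6, ControlNet strength 0.5). The ComfyUI workflow also
# uses a Tiled KSampler, which has no direct equivalent here.
import torch
from diffusers import (ControlNetModel, EulerDiscreteScheduler,
                       StableDiffusionXLControlNetImg2ImgPipeline)
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(  # assumed to load directly
    "TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic_V1",
    torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # the post uses JuggernautXL_v9
    controlnet=controlnet, torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.enable_vae_tiling()  # helps at these resolutions

image = load_image("landscape_2k.png")
image = image.resize((image.width * 2, image.height * 2))

result = pipe(
    prompt="landscape photo, high detail",
    image=image,
    control_image=image,
    strength=0.6,                       # denoise 0.6
    guidance_scale=7.0,                 # cfg 7
    num_inference_steps=20,             # 20 steps
    controlnet_conditioning_scale=0.5,  # ControlNet strength 0.5
).images[0]
result.save("landscape_4k.png")
```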

> Thanks a lot TTPlanet! I can confirm that this actually works!
>
> I use ComfyUI with JuggernautXL_v9 and 4x-UltraSharp.pth to upscale a close-up landscape photo from 2K to 8K, and after 6 hours of experimentation I got the best result using Tiled KSampler with tiling strategy "random strict", 20 steps, cfg 7, sampler euler, scheduler sgm_uniform, denoise 0.6, and ControlNet strength set to 0.5.
>
> I found that using IP-Adapter (plus) was producing slightly better details, but at the cost of the image losing global contrast and no longer looking like the original when seen from further away.
>
> Also, I got considerably better results with a regular checkpoint at 20 steps than with a lightning checkpoint at 6 steps.

I have uploaded my workflow to Civitai and also put the link here. If you have a nice image, you can use it directly; if you have a low quality image, you need to fix it with i2i at a low denoise first, then send it to my workflow and you will see the same effect as I showed in the example. I prefer to use IPA to pre-process the image before Ultimate SD Upscale is applied. Try my process, you will like it...

Thank you @TTPlanet! I've tested this (the fp16 version), and it seems to work great (even with 2D). It works better than Tile 1.5 for larger resolution images, as produced by SDXL. And since it can use an SDXL base model to work from, including the same model that generated the original image, it also helps produce much finer details when upscaling to higher resolutions. It seems to stay much truer to the original image when upscaling with Ultimate SD Upscale, just adding necessary details without as many extra hallucinations. It also doesn't seem to get splotchy like 1.5 did after upscaling multiple times. Great work! I look forward to adding this to my workflow. FYI, not sure why, but ControlNet warns "Unable to determine version for ControlNet model" as it runs.

Hesajon, could you share a workflow? For me it looks horrible 🥲

After some further testing, it might not be as great as it first looked. Still testing...

@TTPlanet I've tried your workflow, and all possible permutations of the differences between our workflows. After extensive testing, my conclusion is that Ultimate SD Upscale is detrimental: with it, I either can't get rid of visible seams, or the image is too constrained by the low denoise and so lacks detail. Your combination of sampler, scheduler and CNet strength values proved interesting, however. When I combined those with Tiled KSampler, which allows for a much higher denoise of 0.6 without visible seams, the result was more detailed and better looking than what I had before, at a very slight cost to faithfulness / consistency / coherence. I think both sets of settings are useful, and there are probably more such combinations to be discovered.
Here is my workflow featuring both sets of settings: https://comfyworkflows.com/workflows/91690876-a404-4a89-b8e5-1f84aaf64c58

edit: I just tested SUPIR and CCSR against this, and this ControlNet Tile XL model wins by a huge margin, and is 7 times faster. It's fascinating how everyone's talking about those two in the world of upscaling, while the true gem is hidden right here. To be fair, the CNet Tile approach does not stick nearly as close to the original as they do, but most of us aren't trying to do forensics with these, so allowing the model some creativity is not really an issue, or at least for me it isn't. The model's going to hallucinate anyway, as it can never actually know what details were there in reality when zoomed in, so we might as well let it hallucinate optimally.
