ControlNet / Img2Img

#5
opened by radames (HF staff)

hi @IDKiro , congratulations on your work; the distillation concept is fascinating! I have a question: I'm experimenting with the ControlNet and Img2Img pipelines. Do you have any recommendations, or is this even possible with SDXS?

Owner

Although the paper discusses how to train ControlNet, our training code has not been released, so I suggest an indirect method: first, finetune SDXS on your dataset using the original DM training loss; then, train ControlNet on the finetuned model; finally, reattach the ControlNet to SDXS. Although this method is suboptimal, it can still achieve a degree of control. For Img2Img based on SDEdit, you can try it directly (I haven't tried it myself).
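For reference, the "original DM training loss" is the standard epsilon-prediction objective used to train Stable Diffusion: add noise to a clean latent at a random timestep and regress the model's predicted noise against the true noise with an MSE. A minimal NumPy sketch (the function names and random arrays are illustrative, not from the SDXS codebase):

```python
import numpy as np

def add_noise(x0, eps, a_bar):
    # Forward diffusion step: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps,
    # where a_bar is the cumulative product of the noise-schedule alphas at step t.
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * eps

def dm_loss(eps, eps_pred):
    # The "original DM training loss": mean-squared error between the true
    # noise and the model's predicted noise (epsilon-prediction objective).
    return float(np.mean((eps - eps_pred) ** 2))

# Illustrative usage with random arrays standing in for latents and a UNet:
rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))       # stand-in for a clean latent
eps = rng.standard_normal((4, 4))      # Gaussian noise
x_t = add_noise(x0, eps, a_bar=0.5)    # noised latent at some timestep
loss = dm_loss(eps, np.zeros_like(eps))  # loss for a model that predicts all zeros
```

In a real finetune, `eps_pred` would come from the SDXS UNet evaluated on `x_t`, the timestep, and the text embedding; this sketch only shows the objective itself.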

The images I generated using img2img (for hairstyles) are very blurry, and the quality of the images generated by text2img is also not very good. Would it be better to use your method with the playgroundv2.5 model?

I uploaded a sketch2image ControlNet; it should be practical:
https://huggingface.co/IDKiro/sdxs-512-dreamshaper-sketch
See our repo for the demo code:
https://github.com/IDKiro/sdxs

I am interested in the ControlNet of sdxs-512-0.9.
You mentioned adopting an indirect method to train ControlNet: first, finetune SDXS on your dataset using the original DM training loss; then, train ControlNet on the finetuned model; finally, reattach the ControlNet to SDXS.

How can I finetune SDXS using the original DM training loss? What is the original DM training loss?
I just tried to follow the finetuning method from the diffusers repository, using sdxs-512-0.9, but I could not train ControlNet on the finetuned model; it fails with:
"File "/usr/local/lib/python3.10/dist-packages/diffusers/models/controlnet.py", line 442, in __init__
raise ValueError(f"unknown mid_block_type : {mid_block_type}")"

Because diffusers' ControlNet code does not account for this case, it is not possible to initialize a ControlNet for SDXS via the from_unet method.

Just modify the code in "diffusers/models/controlnet.py":

        else:
            raise ValueError(f"unknown mid_block_type : {mid_block_type}")

into:

        elif mid_block_type is None:
            self.mid_block = None
        else:
            raise ValueError(f"unknown mid_block_type : {mid_block_type}")

If it still doesn't work, please give me feedback.
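The effect of that patch can be sketched in isolation with a minimal stand-in for the mid_block dispatch in ControlNet's __init__ (the returned strings are placeholders, not the real diffusers modules):

```python
def build_mid_block(mid_block_type):
    # Minimal stand-in for the mid_block dispatch in diffusers' ControlNet
    # __init__; the returned strings are placeholders, not real modules.
    if mid_block_type == "UNetMidBlock2DCrossAttn":
        return "cross-attn mid block"
    elif mid_block_type is None:
        # Patched branch: SDXS's UNet has no mid block, so None is now valid
        # instead of raising the "unknown mid_block_type" error.
        return None
    else:
        raise ValueError(f"unknown mid_block_type : {mid_block_type}")
```

With the patch, a UNet whose config has `mid_block_type=None` (as in SDXS) passes through cleanly, while genuinely unknown types still raise the original error.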
