ControlNet / Img2Img
Although the paper discusses how to train ControlNet, our training code has not been released, so I suggest an indirect route: first, finetune SDXS on your dataset with the original diffusion-model training loss; then train a ControlNet on the finetuned model; finally, reattach that ControlNet to SDXS. This is suboptimal, but it can still achieve a useful degree of control. For Img2Img based on SDEdit, you can try it directly (I haven't tried it myself).
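For context, SDEdit-style img2img works by partially noising the input latent up to an intermediate timestep (chosen via a strength in [0, 1]) and then denoising from there. Below is a minimal, illustrative sketch of just the forward-noising step, assuming a standard DDPM linear beta schedule; the function names and schedule constants are my own for illustration, not SDXS's actual code:

```python
import math
import random

def alpha_bar(t, num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative product of (1 - beta_i) up to timestep t (linear beta schedule)."""
    prod = 1.0
    for i in range(t + 1):
        beta = beta_start + (beta_end - beta_start) * i / (num_steps - 1)
        prod *= 1.0 - beta
    return prod

def sdedit_noise(x0, strength, num_steps=1000, rng=random):
    """Noise a flattened latent to the timestep implied by `strength`.

    strength ~ 0 keeps the input almost unchanged; strength ~ 1 starts
    from (nearly) pure noise. Returns the noised latent and the timestep
    from which denoising should resume.
    """
    t = min(int(strength * num_steps), num_steps - 1)
    a = alpha_bar(t, num_steps)
    noised = [math.sqrt(a) * x + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)
              for x in x0]
    return noised, t
```

The practical consequence: a lower strength preserves more of the source image but allows less editing, while a higher strength gives the one-step model more freedom at the cost of fidelity to the input.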
The images I generate with img2img for hairstyles are very blurry, and the quality of text2img outputs is also not great. Would it be better to apply your method to the playgroundv2.5 model?
I uploaded a sketch2image ControlNet, which should be usable in practice:
https://huggingface.co/IDKiro/sdxs-512-dreamshaper-sketch
See our repo for the demo code:
https://github.com/IDKiro/sdxs