---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---

# ControlNet test

This model is an **experimental** ControlNet model. It aims to paint the finish coat from the primer coat. Setting the control mode to "ControlNet is more important" and running i2i on the primed image seems to give relatively good results.

## Model Description

| base model | annotator | images | epochs | batch size | precision |
| --------------------------------------------------------------- | ---------------------------------------------------------- | ------ | ------ | ---------- | --------- |
| [Flex Waifu Rainbow](https://huggingface.co/Ai-tensa/FlexWaifu) | [DanbooRegion](https://github.com/lllyasviel/DanbooRegion) | 6k | 10 | 8 | fp16 |

The model was trained from [Flex Waifu Rainbow](https://huggingface.co/Ai-tensa/FlexWaifu) on ~6k images paired with segmentation maps produced by [DanbooRegion](https://github.com/lllyasviel/DanbooRegion). The training images come from various authors and models published on the Internet under AI-illustration tags.

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in the license.
3. You may redistribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)

## Acknowledgements

These models build on excellent prior work: SD 1.4, developed by [CompVis researchers](https://ommer-lab.com/); ControlNet and DanbooRegion, by [Lvmin Zhang](https://huggingface.co/lllyasviel) et al.; and WD 1.3, developed by [Anthony Mercurio](https://github.com/harubaru), [Salt](https://github.com/sALTaccount/), and [Cafe](https://twitter.com/cafeai_labs).

## Example (Flex Waifu Rainbow with CN)

**Input**

![](./images/00698-598802874_seg.png)

**Output**

![](images/00026-387124715.png)

```
parameters (i2i without prompt)
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 387124715, Size: 1152x1152, Model hash: 3f709dac23, Model: FlexWaifu_FlexWaifuRainbow, Denoising strength: 0.6, Version: v1.2.1, ControlNet 0: "preprocessor: none, model: control_fwr_color_test_fp16 [c6536a9b], weight: 1, starting/ending: (0, 1), resize mode: Crop and Resize, pixel perfect: False, control mode: ControlNet is more important, preprocessor params: (64, 64, 64)"
```
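
## Usage sketch (diffusers)

For reference, here is a minimal, untested diffusers sketch of the i2i workflow described above. It makes several assumptions, also flagged in the comments: the checkpoint and image filenames are hypothetical, Flex Waifu Rainbow is assumed to load as a standard SD 1.x diffusers pipeline, and diffusers has no direct equivalent of the WebUI's "ControlNet is more important" control mode, so a raised `controlnet_conditioning_scale` is used as a rough stand-in.

```python
# Minimal sketch, not a tested recipe.
import torch
from diffusers import (
    ControlNetModel,
    EulerAncestralDiscreteScheduler,
    StableDiffusionControlNetImg2ImgPipeline,
)
from diffusers.utils import load_image

# Assumption: the CN weights are saved locally as a single .safetensors file.
controlnet = ControlNetModel.from_single_file(
    "control_fwr_color_test_fp16.safetensors",
    torch_dtype=torch.float16,
)

# Assumption: Flex Waifu Rainbow is available as a diffusers-format SD 1.x repo;
# for a single-file checkpoint, use the pipeline's from_single_file instead.
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "Ai-tensa/FlexWaifu",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
# "Euler a" in the example above corresponds to the Euler ancestral scheduler.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

primed = load_image("primer.png")   # primer-coat image (i2i source); hypothetical filename
seg = load_image("primer_seg.png")  # its DanbooRegion segmentation; hypothetical filename

image = pipe(
    prompt="",                      # the example runs i2i without a prompt
    image=primed,
    control_image=seg,
    strength=0.6,                   # denoising strength from the example
    num_inference_steps=20,
    guidance_scale=7.0,
    # No diffusers equivalent of "ControlNet is more important";
    # a conditioning scale above 1.0 is a rough stand-in.
    controlnet_conditioning_scale=1.2,
).images[0]
image.save("finish.png")
```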