These are the pretrained weights of ControlNet, together with several third-party detector weights.
See also: https://github.com/lllyasviel/ControlNet
- The ControlNet+SD1.5 model to control SD using canny edge detection.
- The ControlNet+SD1.5 model to control SD using Midas depth estimation.
- The ControlNet+SD1.5 model to control SD using HED edge detection (soft edge).
- The ControlNet+SD1.5 model to control SD using M-LSD line detection (lines from a traditional Hough transform will also work).
- The ControlNet+SD1.5 model to control SD using normal maps. It works best with the normal maps generated by the Gradio app in the ControlNet repository. Other normal maps may also work as long as the directions are correct (left appears red, right blue, up green, down purple).
- The ControlNet+SD1.5 model to control SD using OpenPose pose detection. Directly manipulating the pose skeleton should also work.
- The ControlNet+SD1.5 model to control SD using human scribbles. The model is trained on boundary edges with very strong data augmentation to simulate boundary lines similar to those drawn by humans.
- The ControlNet+SD1.5 model to control SD using semantic segmentation. The protocol is ADE20k.
- Third-party model: OpenPose's pose detection model.
- Third-party model: OpenPose's hand detection model.
- Third-party model: Midas depth estimation model.
- Third-party model: M-LSD detection model.
- Third-party model: a smaller variant of the M-LSD detection model (we do not use this one).
- Third-party model: HED boundary detection.
- Third-party model: Uniformer semantic segmentation.
- The data for our training tutorial.
Special thanks to Mikubill's great A1111 WebUI plugin project!
Thanks to haofanwang for making ControlNet-for-Diffusers!
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.