---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
  - art
  - controlnet
  - stable-diffusion
  - image-to-image
---

# Controlnet - M-LSD Straight Line Version

ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on M-LSD straight line detection.

It can be used in combination with Stable Diffusion.


## Model Details

### Introduction

Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by Lvmin Zhang and Maneesh Agrawala.

The abstract reads as follows:

> *We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.*

## Released Checkpoints

The authors released 8 different checkpoints, each trained with Stable Diffusion v1-5 on a different type of conditioning:

| Model Name | Control Image Overview | Control Image Description |
|---|---|---|
| [lllyasviel/sd-controlnet-canny](https://huggingface.co/lllyasviel/sd-controlnet-canny) | Trained with canny edge detection | A monochrome image with white edges on a black background. |
| [lllyasviel/sd-controlnet-depth](https://huggingface.co/lllyasviel/sd-controlnet-depth) | Trained with Midas depth estimation | A grayscale image with black representing deep areas and white representing shallow areas. |
| [lllyasviel/sd-controlnet-hed](https://huggingface.co/lllyasviel/sd-controlnet-hed) | Trained with HED edge detection (soft edge) | A monochrome image with white soft edges on a black background. |
| [lllyasviel/sd-controlnet-mlsd](https://huggingface.co/lllyasviel/sd-controlnet-mlsd) | Trained with M-LSD line detection | A monochrome image composed only of white straight lines on a black background. |
| [lllyasviel/sd-controlnet-normal](https://huggingface.co/lllyasviel/sd-controlnet-normal) | Trained with normal map | A normal-mapped image. |
| [lllyasviel/sd-controlnet-openpose](https://huggingface.co/lllyasviel/sd-controlnet-openpose) | Trained with OpenPose bone image | An OpenPose bone image. |
| [lllyasviel/sd-controlnet-scribble](https://huggingface.co/lllyasviel/sd-controlnet-scribble) | Trained with human scribbles | A hand-drawn monochrome image with white outlines on a black background. |
| [lllyasviel/sd-controlnet-seg](https://huggingface.co/lllyasviel/sd-controlnet-seg) | Trained with semantic segmentation | An image following ADE20K's segmentation protocol. |
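
Every checkpoint in the table is loaded the same way; only the repository name and the type of conditioning image you prepare change. A minimal sketch, using the canny checkpoint as an example:

```py
import torch
from diffusers import ControlNetModel

# Any checkpoint from the table above can be swapped in here;
# only the conditioning image passed to the pipeline has to match it.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
```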

## Example

It is recommended to use the checkpoint with Stable Diffusion v1-5, as the checkpoint has been trained on it. Experimentally, the checkpoint can also be used with other diffusion models, such as DreamBooth fine-tunes of Stable Diffusion; a sketch follows.
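
A minimal sketch of pairing this ControlNet with a different Stable Diffusion v1-5 derived base model. The model ID below is a placeholder, not a real checkpoint:

```py
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-mlsd", torch_dtype=torch.float16
)

# "your-username/your-dreambooth-model" is a placeholder; substitute any
# Stable Diffusion v1-5 derived checkpoint, e.g. a DreamBooth fine-tune.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "your-username/your-dreambooth-model",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
```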

**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:

1. Install [controlnet_aux](https://github.com/patrickvonplaten/controlnet_aux):

```sh
$ pip install controlnet_aux
```

2. Install `diffusers` and related packages:

```sh
$ pip install diffusers transformers accelerate
```

3. Run the code:
```py
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
from controlnet_aux import MLSDdetector

# Load the M-LSD straight line detector.
mlsd = MLSDdetector.from_pretrained('lllyasviel/ControlNet')

# Download the example input image.
image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-mlsd/resolve/main/images/room.png")

# Detect straight lines to create the conditioning image.
image = mlsd(image)

# Load the ControlNet and plug it into a Stable Diffusion v1-5 pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-mlsd", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Remove if you do not have xformers installed
# see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
# for installation instructions
pipe.enable_xformers_memory_efficient_attention()

pipe.enable_model_cpu_offload()

# Generate an image conditioned on the detected lines.
image = pipe("room", image, num_inference_steps=20).images[0]

image.save('images/room_mlsd_out.png')
```
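
The pipeline call also accepts a `generator` for reproducible results and a `controlnet_conditioning_scale` to control how strongly the detected lines constrain the output. A sketch with illustrative values, reusing `pipe`, `mlsd`, and `load_image` from the example above:

```py
# Seed the generation and scale the conditioning strength
# (1.0 is the default; lower values follow the lines more loosely).
control_image = mlsd(load_image(
    "https://huggingface.co/lllyasviel/sd-controlnet-mlsd/resolve/main/images/room.png"
))
generator = torch.Generator(device="cpu").manual_seed(0)

image = pipe(
    "room",
    control_image,
    num_inference_steps=20,
    generator=generator,
    controlnet_conditioning_scale=0.8,
).images[0]
```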

*Example images: `room.png` (input), `room_mlsd.png` (detected straight lines), `room_mlsd_out.png` (generated output).*

## Training

The Hough line model was trained on 600k edge-image/caption pairs. The dataset was generated from Places2 using BLIP to generate text captions and a deep Hough transform to generate edge images. The model was trained for 160 GPU-hours with an Nvidia A100 80G, using the Canny model as a base model.
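
The data pipeline described above can be approximated with public components. This is a conceptual sketch, not the authors' training code: the BLIP checkpoint `Salesforce/blip-image-captioning-base` stands in for the captioner, and the M-LSD detector from `controlnet_aux` stands in for the deep Hough transform used to produce the line images:

```py
# Conceptual sketch of building one (edge image, caption) training pair.
# NOT the authors' pipeline: the BLIP checkpoint and the detector are stand-ins.
from transformers import BlipProcessor, BlipForConditionalGeneration
from controlnet_aux import MLSDdetector
from diffusers.utils import load_image

source = load_image("https://huggingface.co/lllyasviel/sd-controlnet-mlsd/resolve/main/images/room.png")

# Caption the source image with BLIP (text condition).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
inputs = processor(source, return_tensors="pt")
caption = processor.decode(captioner.generate(**inputs)[0], skip_special_tokens=True)

# Extract straight lines as the conditioning image (image condition).
mlsd = MLSDdetector.from_pretrained("lllyasviel/ControlNet")
edge_image = mlsd(source)

print(caption)
edge_image.save("room_lines.png")
```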

## Blog post

For more information, please also have a look at the [official ControlNet Blog Post](https://huggingface.co/blog/controlnet).