patrickvonplaten and takuma104 committed
Commit
4a03be9
1 Parent(s): 9990553

Update readme (#1)


- Update readme for control_v11f1e_sd15_tile (2682efb37ebd42207bc113f16d2db62c8dba1a6f)


Co-authored-by: Takuma Mori <takuma104@users.noreply.huggingface.co>

Files changed (5)
  1. .gitattributes +2 -0
  2. README.md +131 -2
  3. images/original.png +3 -0
  4. images/output.png +3 -0
  5. sd.png +3 -0
.gitattributes CHANGED
@@ -32,3 +32,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ images/ filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -5,9 +5,138 @@ tags:
- art
- controlnet
- stable-diffusion
- duplicated_from: ControlNet-1-1-preview/control_v11u_sd15_tile
+ - controlnet-v1-1
+ - image-to-image
+ duplicated_from: ControlNet-1-1-preview/control_v11f1e_sd15_tile
---

# Controlnet - v1.1 - *Tile Version*

- TODO. Model is still unfinished and will be updated once finished. See [ControlNet v1.1](https://github.com/lllyasviel/ControlNet-v1-1-nightly#controlnet-11-tile-unfinished) for more information.
+ **Controlnet v1.1** was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
+
+ This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth) into `diffusers` format.
+ It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
+
+ For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet).
+
+ ControlNet is a neural network structure to control diffusion models by adding extra conditions.
+
+ ![img](./sd.png)
+
+ This checkpoint corresponds to the ControlNet conditioned on a **tiled image**. Conceptually, it is similar to a super-resolution model, but its usage is not limited to that: it can also generate details at the same size as the input (condition) image (see the sketch after the example below).
+
+ ## Model Details
+ - **Developed by:** Lvmin Zhang, Maneesh Agrawala
+ - **Model type:** Diffusion-based text-to-image generation model
+ - **Language(s):** English
+ - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying out in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
+ - **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
+ - **Cite as:**
+
+     @misc{zhang2023adding,
+       title={Adding Conditional Control to Text-to-Image Diffusion Models},
+       author={Lvmin Zhang and Maneesh Agrawala},
+       year={2023},
+       eprint={2302.05543},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV}
+     }
+
+ ## Introduction
+
+ Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
+ Lvmin Zhang, Maneesh Agrawala.
+
+ The abstract reads as follows:
+
+ *We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
+ The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
+ Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on personal devices.
+ Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
+ We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
+ This may enrich the methods to control large diffusion models and further facilitate related applications.*
+
+ ## Example
+
+ It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5), as the checkpoint has been trained on it.
+ Experimentally, the checkpoint can also be used with other diffusion models, such as a Dreambooth-finetuned Stable Diffusion model (see the sketch after the example below).
+
+ 1. Let's install `diffusers` and related packages:
+ ```
+ $ pip install diffusers transformers accelerate
+ ```
+ 2. Run code:
+ ```python
+ import torch
+ from PIL import Image
+ from diffusers import ControlNetModel, DiffusionPipeline
+ from diffusers.utils import load_image
+
+ def resize_for_condition_image(input_image: Image, resolution: int):
+     # Scale the shorter side to `resolution`, then round both dimensions
+     # to multiples of 64 as expected by Stable Diffusion.
+     input_image = input_image.convert("RGB")
+     W, H = input_image.size
+     k = float(resolution) / min(H, W)
+     H *= k
+     W *= k
+     H = int(round(H / 64.0)) * 64
+     W = int(round(W / 64.0)) * 64
+     img = input_image.resize((W, H), resample=Image.LANCZOS)
+     return img
+
+ controlnet = ControlNetModel.from_pretrained('lllyasviel/control_v11f1e_sd15_tile',
+                                              torch_dtype=torch.float16)
+ pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",
+                                          custom_pipeline="stable_diffusion_controlnet_img2img",
+                                          controlnet=controlnet,
+                                          torch_dtype=torch.float16).to('cuda')
+ pipe.enable_xformers_memory_efficient_attention()  # optional; requires the xformers package
+
+ source_image = load_image('https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png')
+
+ # Upscale the condition image so its shorter side is 1024 pixels.
+ condition_image = resize_for_condition_image(source_image, 1024)
+ image = pipe(prompt="best quality",
+              negative_prompt="blur, lowres, bad anatomy, bad hands, cropped, worst quality",
+              image=condition_image,
+              controlnet_conditioning_image=condition_image,
+              width=condition_image.size[0],
+              height=condition_image.size[1],
+              strength=1.0,
+              generator=torch.manual_seed(0),
+              num_inference_steps=32,
+              ).images[0]
+
+ image.save('output.png')
+ ```
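+ For example, a 600×800 (width×height) input with `resolution=1024` is resized to 1024×1344: the shorter side is scaled up to 1024 and each dimension is then rounded to the nearest multiple of 64.
+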
+ ![original](./images/original.png)
+ ![tile_output](./images/output.png)
+
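+ As mentioned above, the tile model can also refine an image at its original size rather than upscaling it. A minimal sketch, reusing `pipe`, `resize_for_condition_image`, and `source_image` from the example above; the lower `strength` here is an illustrative choice, not a tuned recommendation:
+
+ ```python
+ # Same-size detail refinement: condition at (roughly) the input's own resolution.
+ condition_image = resize_for_condition_image(source_image, min(source_image.size))
+ image = pipe(prompt="best quality",
+              negative_prompt="blur, lowres, bad anatomy, bad hands, cropped, worst quality",
+              image=condition_image,
+              controlnet_conditioning_image=condition_image,
+              width=condition_image.size[0],
+              height=condition_image.size[1],
+              strength=0.6,  # illustrative value; lower strength preserves more of the input
+              generator=torch.manual_seed(0),
+              num_inference_steps=32,
+              ).images[0]
+ ```
+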
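+ As noted at the start of this section, other Stable Diffusion 1.5 variants can experimentally be swapped in as the base model. A hedged sketch; `"your-user/your-dreambooth-model"` is a hypothetical placeholder, not a real repository:
+
+ ```python
+ # Hypothetical: use a Dreambooth-finetuned SD 1.5 checkpoint as the base model.
+ # The repository id below is a placeholder for illustration only.
+ pipe = DiffusionPipeline.from_pretrained("your-user/your-dreambooth-model",
+                                          custom_pipeline="stable_diffusion_controlnet_img2img",
+                                          controlnet=controlnet,
+                                          torch_dtype=torch.float16).to('cuda')
+ ```
+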
+ ## Other released checkpoints v1-1
+ The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
+ on a different type of conditioning:
+
+ | Model Name | Control Image Overview | Control Image Example | Generated Image Example |
+ |---|---|---|---|
+ |[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> *Trained with pixel to pixel instruction* | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> *Trained with image inpainting* | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
+ |[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> *Trained with multi-level line segment detection* | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> *Trained with depth estimation* | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> *Trained with surface normal estimation* | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> *Trained with image segmentation* | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> *Trained with line art generation* | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> *Trained with anime line art generation* | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15_openpose)<br/> *Trained with human pose estimation* | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> *Trained with scribble-based image generation* | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> *Trained with soft edge image generation* | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> *Trained with image shuffling* | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
+ |[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> *Trained with image tiling* | The base image for drawing details.|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
+
+ ## More information
+
+ For more information, please also have a look at the [Diffusers ControlNet blog post](https://huggingface.co/blog/controlnet) and the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly).
images/original.png ADDED

Git LFS Details

  • SHA256: 9a86aca0a54fe80135c7db42af5bee3e2edfd690566c6a2726cb34c3ca77da80
  • Pointer size: 130 Bytes
  • Size of remote file: 11.3 kB
images/output.png ADDED

Git LFS Details

  • SHA256: 6eb7c9269f6dd0cbe8628180c1c8519e9901cf85bd5241a05f5af9536733ee5e
  • Pointer size: 132 Bytes
  • Size of remote file: 1.23 MB
sd.png ADDED

Git LFS Details

  • SHA256: 9b817de9ff2e0ff87803930ffaa3a1f9edd285122d7dc2b5eb32b0b10c1f58bf
  • Pointer size: 130 Bytes
  • Size of remote file: 59.5 kB