hollowstrawberry committed fddf033 (parent c6dbdcd)
Update README.md

README.md CHANGED
@@ -228,7 +228,7 @@ Coming soon: How to use ultimate upscaler.
ControlNet is an extremely powerful recent technology for Stable Diffusion. It lets you analyze information about any previously existing image and use it to guide the generation of your AI images. We'll see what this means in a moment.

If you're on Colab, you should enable the `all_control_models` option. On Windows, you should first install the ControlNet [extension ▲](#extensions), then go [here](https://civitai.com/models/9251/controlnet-pre-trained-models) to download some models, which you'll need to place in `stable-diffusion-webui/extensions/sd-webui-controlnet/models`. I recommend at least Canny, Depth, Openpose and Scribble, which I will show here.
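If you prefer to script that download step, here is a minimal sketch using the `huggingface_hub` library instead of the Civitai page linked above. The `lllyasviel/ControlNet` repository and its `models/control_sd15_*.pth` file layout are assumptions based on the upstream ControlNet release; the destination folder is the extension path mentioned above.

```python
# Minimal sketch: fetch ControlNet models into the webui extension's model
# folder with huggingface_hub, as an alternative to the Civitai download.
# Repo id and file names are assumptions based on the upstream release.
import os
import shutil
from huggingface_hub import hf_hub_download

dest = "stable-diffusion-webui/extensions/sd-webui-controlnet/models"
os.makedirs(dest, exist_ok=True)

for name in ["canny", "depth", "openpose", "scribble"]:
    cached = hf_hub_download("lllyasviel/ControlNet", f"models/control_sd15_{name}.pth")
    shutil.copy(cached, os.path.join(dest, f"control_sd15_{name}.pth"))
```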
I will demonstrate how ControlNet may be used. For this I chose a popular image online as our "sample image". It's not necessary for you to follow along, but you can download the images and put them in the **PNG Info** tab to view their generation data.
@@ -269,16 +269,18 @@ First, you must scroll down in the txt2img page and click on ControlNet to open
![Open Pose output image](images/openpose2.png)
</details>
* **Scribble**

Lets you make a crude sketch and convert it into a finished piece with the help of your prompt. Example soon.
You will notice that there are 2 results for each method. The first is an intermediate step called the *preprocessed image*, which is then used to produce the final image. You can supply the preprocessed image yourself, in which case you should set the preprocessor to *None*. This is extremely powerful with external tools such as Blender.
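For instance, you can build a Canny preprocessed image yourself. Here is a minimal sketch with OpenCV, assuming a hypothetical local `sample.png`; you would then upload the saved edge map as the control image with the preprocessor set to *None*. The 100/200 thresholds are just common starting values.

```python
# Minimal sketch: produce a Canny "preprocessed image" yourself with OpenCV.
# Feed the saved edge map to ControlNet with the preprocessor set to None.
import cv2

image = cv2.imread("sample.png")        # hypothetical input image
edges = cv2.Canny(image, 100, 200)      # white edges on a black background
cv2.imwrite("sample_canny.png", edges)  # upload this as the control image
```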
In the Settings tab there is a ControlNet section where you can enable *multiple controlnets at once*. One particularly good use is when one of them is Openpose, to get a specific character pose in a specific environment, or with specific hand gestures or details. Observe:
![Open Pose + Canny](images/openpose_canny.png)
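Outside the webui, the same multiple-controlnets idea can be sketched with the diffusers library (my assumption as a stand-in; this guide itself only covers the webui). Here an Openpose skeleton and a Canny map guide a single generation; the prompt and file names are placeholders.

```python
# Minimal sketch: two ControlNets (Openpose + Canny) guiding one generation
# with diffusers. One control image and one conditioning weight per net.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    "a girl dancing in a forest",                              # placeholder prompt
    image=[load_image("pose.png"), load_image("canny.png")],   # one control image per net
    controlnet_conditioning_scale=[1.0, 0.8],                  # per-net weights
).images[0]
result.save("output.png")
```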
You can also use ControlNet in img2img, in which both the input image and the sample image will influence the result. I do not have much experience with this method.
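As a sketch of what img2img plus ControlNet means in code (again with diffusers, as an assumed stand-in for the webui): the init image gets repainted while the control image constrains the structure, so both affect the result. File names, prompt, and strength are placeholders.

```python
# Minimal sketch: ControlNet combined with img2img in diffusers.
# `image` is the img2img input; `control_image` is the ControlNet sample.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    "a watercolor painting",                # placeholder prompt
    image=load_image("input.png"),          # img2img input image
    control_image=load_image("canny.png"),  # ControlNet sample image
    strength=0.7,                           # how strongly img2img repaints
).images[0]
result.save("output.png")
```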
There are also alternative **diff** versions of each ControlNet model, which produce slightly different results. You can [try them](https://civitai.com/models/9868/controlnet-pre-trained-difference-models) if you want, but I personally haven't.