takuma104 committed commit `1bfa61c` (1 parent: `2f641a2`)

Update multi_controlnet/README.md

multi_controlnet/README.md CHANGED
## Generated Examples

- Model: [andite/anything-v4.0](https://hf.co/andite/anything-v4.0)
- Prompt: `best quality, extremely detailed, cowboy shot`
- Negative prompt: `cowboy, monochrome, lowres, bad anatomy, worst quality, low quality`
- Seed: 8 (cherry-picked)

|Control Image 1|Control Image 2|Generated|
|---|---|---|
|<img width="200" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/pac_pose_512x512.png">|<img width="200" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/pac_canny_512x512.png">|<img width="200" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/mc_pose_and_canny_result_8.png">|
|<img width="200" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/pac_pose_512x512.png">|(none)|<img width="200" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/mc_pose_only_result_8.png">|
|<img width="200" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/pac_canny_512x512.png">|(none)|<img width="200" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/mc_canny_only_result_8.png">|

- Pose & Canny control images generated with [*Character bones that look like Openpose for blender _ Ver_4.7 Depth+Canny*](https://toyxyz.gumroad.com/l/ciojz)
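
The control images above were exported from the Blender rig linked above; when preparing your own inputs, a Canny control image is typically produced with OpenCV's `cv2.Canny`. As a dependency-light illustration of what such a control image contains, here is a crude gradient-threshold edge map (a stand-in sketch, not true Canny):

```python
import numpy as np

def simple_edge_map(img: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Crude edge detector: gradient magnitude + threshold.

    Only an illustration of the binary white-on-black edge images
    ControlNet's canny model expects; real pipelines should use
    cv2.Canny(img, low_threshold, high_threshold).
    """
    img = img.astype(np.float32) / 255.0
    gy, gx = np.gradient(img)          # per-axis intensity gradients
    mag = np.hypot(gx, gy)             # gradient magnitude
    # Binary edge image, 0 or 255, like a Canny control image
    return (mag > threshold).astype(np.uint8) * 255

# Synthetic example: a white square on black has edges only at its border
canvas = np.zeros((64, 64), dtype=np.uint8)
canvas[16:48, 16:48] = 255
edges = simple_edge_map(canvas)
```

The resulting single-channel array would still need to be converted to a 3-channel PIL image before being passed as a control image.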

## Code

Using the development version of [`StableDiffusionMultiControlNetPipeline`](https://github.com/takuma104/diffusers/tree/multi_controlnet):

```py
from stable_diffusion_multi_controlnet import StableDiffusionMultiControlNetPipeline
from stable_diffusion_multi_controlnet import ControlNetProcessor
from diffusers import ControlNetModel, EulerDiscreteScheduler
import torch
from diffusers.utils import load_image

pipe = StableDiffusionMultiControlNetPipeline.from_pretrained(
    "andite/anything-v4.0", safety_checker=None, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_pretrained("andite/anything-v4.0", subfolder="scheduler")
pipe.enable_xformers_memory_efficient_attention()

controlnet_canny = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
).to("cuda")
controlnet_pose = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
).to("cuda")

canny_image = load_image('https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/pac_canny_512x512.png').convert('RGB')
pose_image = load_image('https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/pac_pose_512x512.png').convert('RGB')

prompt = "best quality, extremely detailed, cowboy shot"
negative_prompt = "cowboy, monochrome, lowres, bad anatomy, worst quality, low quality"
seed = 8

# Pose control only
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    processors=[
        ControlNetProcessor(controlnet_pose, pose_image),
    ],
    generator=torch.Generator(device="cpu").manual_seed(seed),
    num_inference_steps=30,
    width=512,
    height=512,
).images[0]
image.save(f"./mc_pose_only_result_{seed}.png")

# Canny control only
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    processors=[
        ControlNetProcessor(controlnet_canny, canny_image),
    ],
    generator=torch.Generator(device="cpu").manual_seed(seed),
    num_inference_steps=30,
    width=512,
    height=512,
).images[0]
image.save(f"./mc_canny_only_result_{seed}.png")

# Pose + Canny combined
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    processors=[
        ControlNetProcessor(controlnet_pose, pose_image),
        ControlNetProcessor(controlnet_canny, canny_image),
    ],
    generator=torch.Generator(device="cpu").manual_seed(seed),
    num_inference_steps=30,
    width=512,
    height=512,
).images[0]
image.save(f"./mc_pose_and_canny_result_{seed}.png")
```
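
All three runs above use a generator seeded identically, so the three outputs differ only in their control inputs, not in the initial latents. A minimal sketch of why this makes the results comparable (plain `torch`, no diffusers needed):

```python
import torch

# Two generators seeded the same way produce identical noise tensors.
# This is why the three pipeline calls above are directly comparable:
# the initial latents are the same; only the ControlNet inputs differ.
g1 = torch.Generator(device="cpu").manual_seed(8)
g2 = torch.Generator(device="cpu").manual_seed(8)
noise_a = torch.randn(2, 4, generator=g1)
noise_b = torch.randn(2, 4, generator=g2)
print(torch.equal(noise_a, noise_b))  # prints True
```

Seeding on the CPU (rather than CUDA) also keeps the initial noise reproducible across machines with different GPUs.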