## Generate Example

- Model: [andite/anything-v4.0](https://hf.co/andite/anything-v4.0)
- Prompt: `best quality, extremely detailed, cowboy shot`
- Negative prompt: `cowboy, monochrome, lowres, bad anatomy, worst quality, low quality`
- Seed: 19 (cherry-picked)

|Control Image 1|Control Image 2|Generated|
|---|---|---|
|(image)|(image)|(image)|
|(image)|(none)|(image)|
|(image)|(none)|(image)|

- Pose & canny control images generated with [*Character bones that look like Openpose for blender _ Ver_4.7 Depth+Canny*](https://toyxyz.gumroad.com/l/ciojz)

## Code

Using the development version of [`StableDiffusionMultiControlNetPipeline`](https://github.com/takuma104/diffusers/tree/multi_controlnet):

```py
from stable_diffusion_multi_controlnet import StableDiffusionMultiControlNetPipeline
from stable_diffusion_multi_controlnet import ControlNetProcessor
from diffusers import ControlNetModel, EulerDiscreteScheduler
from diffusers.utils import load_image
import torch

pipe = StableDiffusionMultiControlNetPipeline.from_pretrained(
    "andite/anything-v4.0", safety_checker=None, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.enable_xformers_memory_efficient_attention()

controlnet_canny = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
).to("cuda")
controlnet_pose = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
).to("cuda")

canny_image = load_image(
    "https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/pac_canny_512x512.png"
).convert("RGB")
pose_image = load_image(
    "https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/pac_pose_512x512.png"
).convert("RGB")

prompt = "best quality, extremely detailed, cowboy shot"
negative_prompt = "cowboy, monochrome, lowres, bad anatomy, worst quality, low quality"
seed = 19

# Pose control only
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    processors=[
        ControlNetProcessor(controlnet_pose, pose_image),
        # ControlNetProcessor(controlnet_canny, canny_image),
    ],
    generator=torch.Generator(device="cpu").manual_seed(seed),
    num_inference_steps=30,
    width=512,
    height=512,
).images[0]
image.save(f"./mc_pose_only_result_{seed}.png")

# Canny control only
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    processors=[
        # ControlNetProcessor(controlnet_pose, pose_image),
        ControlNetProcessor(controlnet_canny, canny_image),
    ],
    generator=torch.Generator(device="cpu").manual_seed(seed),
    num_inference_steps=30,
    width=512,
    height=512,
).images[0]
image.save(f"./mc_canny_only_result_{seed}.png")

# Pose + canny combined
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    processors=[
        ControlNetProcessor(controlnet_pose, pose_image),
        ControlNetProcessor(controlnet_canny, canny_image),
    ],
    generator=torch.Generator(device="cpu").manual_seed(seed),
    num_inference_steps=30,
    width=512,
    height=512,
).images[0]
image.save(f"./mc_pose_and_canny_result_{seed}.png")
```
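The example above downloads ready-made control images. If you want to derive a canny control image from your own reference picture instead, a minimal sketch using OpenCV might look like the following; the `make_canny_control` helper and its threshold defaults are hypothetical, not part of the original example:

```py
# Hypothetical helper (not part of the original example): derive a canny
# control image from an arbitrary reference picture with OpenCV.
import cv2
import numpy as np
from PIL import Image
from diffusers.utils import load_image

def make_canny_control(image_url, low_threshold=100, high_threshold=200):
    # load_image returns a PIL image; convert to a NumPy array for OpenCV
    image = np.array(load_image(image_url).convert("RGB"))
    # Single-channel edge map from the Canny detector
    edges = cv2.Canny(image, low_threshold, high_threshold)
    # Stack to 3 channels: ControlNet expects an RGB conditioning image
    edges = np.stack([edges] * 3, axis=-1)
    return Image.fromarray(edges)
```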
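For reference, multi-ControlNet support was later merged into upstream `diffusers`, where `StableDiffusionControlNetPipeline` accepts a list of ControlNets and a matching list of control images. A minimal sketch of the pose + canny case, assuming a recent `diffusers` release rather than the development branch used above:

```py
# Sketch of the same pose + canny combination with the upstream diffusers
# multi-ControlNet API (assumes a recent diffusers release).
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "andite/anything-v4.0", controlnet=controlnets,
    safety_checker=None, torch_dtype=torch.float16,
).to("cuda")

pose_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/pac_pose_512x512.png")
canny_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/multi_controlnet/pac_canny_512x512.png")

image = pipe(
    prompt="best quality, extremely detailed, cowboy shot",
    negative_prompt="cowboy, monochrome, lowres, bad anatomy, worst quality, low quality",
    image=[pose_image, canny_image],  # one control image per ControlNet
    generator=torch.Generator(device="cpu").manual_seed(19),
    num_inference_steps=30,
).images[0]
```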