import torch
from diffusers import HunyuanDiT2DControlNetModel, HunyuanDiTControlNetPipeline
from diffusers.utils import load_image


# Load the HunyuanDiT v1.2 pose ControlNet weights in half precision
controlnet = HunyuanDiT2DControlNetModel.from_pretrained("Tencent-Hunyuan/HunyuanDiT-v1.2-ControlNet-Diffusers-Pose", torch_dtype=torch.float16)

# Build the ControlNet pipeline on top of the distilled HunyuanDiT v1.2 base model and move it to the GPU
pipe = HunyuanDiTControlNetPipeline.from_pretrained("Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers-Distilled", controlnet=controlnet, torch_dtype=torch.float16)
pipe.to("cuda")

# Download the pose conditioning image provided in this model repository
cond_image = load_image('https://huggingface.co/Tencent-Hunyuan/HunyuanDiT-v1.2-ControlNet-Diffusers-Pose/resolve/main/pose.jpg?download=true')

# You may also use an English prompt, as HunyuanDiT supports both English and Chinese
prompt = "在白天的森林中,一位穿着绿色上衣的亚洲女性站在大象旁边。照片采用了中景、平视和居中构图的方式,呈现出写实的效果。这张照片蕴含了人物摄影文化,并展现了宁静的氛围"
# prompt = "In the daytime forest, an Asian woman wearing a green shirt stands beside an elephant. The photo uses a medium shot, eye-level, and centered composition to create a realistic effect. This picture embodies the character photography culture and conveys a serene atmosphere."

# Fix the random seed so the result is reproducible
torch.manual_seed(42)
image = pipe(
    prompt,
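    # Negative prompt (in Chinese): wrong eyes, bad face, disfigured, bad art, deformed, extra limbs, blurred colors, blurry, duplicate, morbid, mutilated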
    negative_prompt='错误的眼睛,糟糕的人脸,毁容,糟糕的艺术,变形,多余的肢体,模糊的颜色,模糊,重复,病态,残缺,',
    height=1024,
    width=1024,
    guidance_scale=6.0,
    control_image=cond_image,
    num_inference_steps=50,
).images[0]

image.save('./image.png')
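
The snippet above conditions on the pose image shipped in this repository. If you want to condition on a pose extracted from your own photo, one common route is the OpenposeDetector from the controlnet_aux package. The sketch below is a minimal, illustrative example under that assumption; the detector checkpoint id ("lllyasviel/Annotators") and the input file name are placeholders, not part of this model card.

```python
from controlnet_aux import OpenposeDetector
from diffusers.utils import load_image

# Load an OpenPose detector (checkpoint id is illustrative; adjust to your setup)
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

# Any photo of a person can serve as the source; replace the path with your own image
source = load_image("your_photo.jpg")

# Extract the pose skeleton and use it as the control_image in the pipeline above
pose_image = openpose(source)
pose_image.save("./pose.png")
```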