# liuch37/controlnet-sd-2-1-base-v1
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning (semantic segmentation maps).
## Intended uses & limitations
### How to use
```python
import torch
from PIL import Image
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)

checkpoint = "liuch37/controlnet-sd-2-1-base-v1"
prompt = "YOUR_FAVORITE_PROMPT"
control_image = Image.open("YOUR_SEMANTIC_IMAGE")

# Load the ControlNet weights and attach them to the base SD 2.1 pipeline.
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float32)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float32
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Fix the seed for reproducible results.
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save("YOUR_OUTPUT_IMAGE")
```
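The pipeline expects the conditioning image at the generation resolution. Since stabilityai/stable-diffusion-2-1-base generates 512x512 images by default, a small helper (hypothetical, not part of this repository) can normalize the semantic map first; nearest-neighbor resampling keeps class boundaries crisp:

```python
from PIL import Image

def prepare_control_image(img, size=(512, 512)):
    """Convert a semantic map to RGB and resize it to the generation resolution.

    NEAREST resampling avoids blending label colors at class boundaries.
    """
    return img.convert("RGB").resize(size, Image.NEAREST)

# Example: a dummy 100x50 grayscale map becomes a 512x512 RGB control image.
control = prepare_control_image(Image.new("L", (100, 50)))
print(control.size, control.mode)  # (512, 512) RGB
```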
### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
The ControlNet was trained with semantic segmentation maps as the conditioning input, using the Cityscapes training set (https://huggingface.co/datasets/liuch37/controlnet-cityscapes). The current version was trained for only 2 epochs.
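Cityscapes ships single-channel label-ID maps rather than RGB images, so the condition typically needs to be colorized before being passed to the pipeline. A minimal sketch, assuming the model was conditioned on color-coded maps; the palette below covers only three classes for illustration and is not the full Cityscapes palette:

```python
from PIL import Image

# Hypothetical 3-class palette (index -> RGB); the real Cityscapes
# color scheme defines colors for 19+ evaluation classes.
PALETTE = {0: (128, 64, 128), 1: (70, 70, 70), 2: (107, 142, 35)}

def colorize_labels(label_img):
    """Map a single-channel label-ID image to an RGB control image."""
    rgb = Image.new("RGB", label_img.size)
    px_in, px_out = label_img.load(), rgb.load()
    for y in range(label_img.size[1]):
        for x in range(label_img.size[0]):
            # Unknown label IDs fall back to black.
            px_out[x, y] = PALETTE.get(px_in[x, y], (0, 0, 0))
    return rgb

# Example: a tiny dummy label map filled with class 1.
label = Image.new("L", (4, 2), 1)
control = colorize_labels(label)
print(control.mode, control.getpixel((0, 0)))  # RGB (70, 70, 70)
```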