---
license: mit
---
A custom pipeline to add inpainting support to T2I-Adapters with the SDXL model.
To use T2I-Adapters with SDXL, install diffusers from the `t2iadapterxl` branch:
```bash
pip install git+https://github.com/huggingface/diffusers.git@t2iadapterxl
```
The following example shows how to use this pipeline with a sketch T2I-Adapter:
```py
>>> import torch
>>> from diffusers import DiffusionPipeline, T2IAdapter
>>> from PIL import Image
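>>> # Load the sketch T2I-Adapter for SDXL in half precision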
>>> adapter = T2IAdapter.from_pretrained(
... "TencentARC/t2i-adapter-sketch-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
... ).to("cuda")
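>>> # Load SDXL base with the custom inpainting pipeline, attaching the adapter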
>>> pipe = DiffusionPipeline.from_pretrained(
... "stabilityai/stable-diffusion-xl-base-1.0",
... torch_dtype=torch.float16,
... variant="fp16",
... use_safetensors=True,
... custom_pipeline="jakebabbidge/sdxl-adapter-inpaint",
... adapter=adapter
... ).to("cuda")
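>>> # Load the source image, the inpainting mask, and the sketch conditioning image (paths are placeholders)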
>>> image = Image.open(image_path).convert("RGB")
>>> mask = Image.open(mask_path).convert("RGB")
>>> adapter_sketch = Image.open(adapter_sketch_path).convert("RGB")
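>>> # Inpaint the masked region, guided by the sketch adapter image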
>>> result_image = pipe(
... image=image,
... mask_image=mask,
... adapter_image=adapter_sketch,
... prompt="a photo of a dog in real world, high quality",
... negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality",
... num_inference_steps=50
... ).images[0]
```
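The pipeline returns standard PIL images, so the result can be saved or displayed as usual (the filename below is only an example):
```py
>>> result_image.save("inpainted_dog.png")
```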