<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Text-guided image-to-image generation

[[open-in-colab]]

The [`StableDiffusionImg2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images.

Before you begin, make sure you have all the necessary libraries installed:

```bash
!pip install diffusers transformers ftfy accelerate
```
Get started by creating a [`StableDiffusionImg2ImgPipeline`] with a pretrained Stable Diffusion model like [`nitrosocke/Ghibli-Diffusion`](https://huggingface.co/nitrosocke/Ghibli-Diffusion).

```python
import torch
import requests
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("nitrosocke/Ghibli-Diffusion", torch_dtype=torch.float16).to(
    device
)
```
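If you don't have a CUDA GPU available, one common pattern (not part of the original snippet) is to pick the device at runtime and fall back to full precision on CPU, where `float16` support is limited. A minimal sketch:

```python
# Hypothetical variant of the setup above: choose the device dynamically and
# use float16 only on GPU, since half-precision inference on CPU is poorly supported
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("nitrosocke/Ghibli-Diffusion", torch_dtype=dtype).to(device)
```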
Download and preprocess an initial image so you can pass it to the pipeline:

```python
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image.thumbnail((768, 768))
init_image
```
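If your starting image is already on disk, you can load it directly with PIL instead of downloading it; the filename below is just a placeholder:

```python
# Load a local file instead ("my-sketch.jpg" is a placeholder path)
init_image = Image.open("my-sketch.jpg").convert("RGB")
init_image.thumbnail((768, 768))  # resizes in place, preserving the aspect ratio
```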
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/image_2_image_using_diffusers_cell_8_output_0.jpeg"/>
</div>

<Tip>

💡 `strength` is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values that approach 1.0 allow for lots of variations but will also produce images that are not semantically consistent with the input.

</Tip>
Define the prompt (for this checkpoint finetuned on Ghibli-style art, you need to prefix the prompt with the `ghibli style` tokens) and run the pipeline:

```python
prompt = "ghibli style, a fantasy landscape with castles"
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
```
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ghibli-castles.png"/>
</div>
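The pipeline returns a standard `PIL.Image.Image`, so you can persist the result with PIL's usual `save` method (the filename here is just an example):

```python
# Save the generated image to disk; the filename is arbitrary
image.save("ghibli-castle.png")
```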
You can also try experimenting with a different scheduler to see how that affects the output:

```python
from diffusers import LMSDiscreteScheduler

lms = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = lms

generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
```
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lms-ghibli.png"/>
</div>
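The same `from_config` pattern works for other schedulers too. As a sketch (this variant isn't part of the original guide), here is the swap with [`EulerDiscreteScheduler`], built from the current scheduler's configuration:

```python
from diffusers import EulerDiscreteScheduler

# Build a new scheduler from the current scheduler's config and swap it in
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
```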
Check out the Space below, and try generating images with different values for `strength`. You'll notice that using lower values for `strength` produces images that are more similar to the original image. Feel free to also switch the scheduler to the [`LMSDiscreteScheduler`] and see how that affects the output.
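If you'd rather script that comparison, here is a minimal sketch (the specific `strength` values are arbitrary) that renders the same prompt and seed at several strengths; `make_image_grid` ships with recent versions of Diffusers:

```python
from diffusers.utils import make_image_grid

# Generate one image per strength value, reusing the same seed so only `strength` changes
images = []
for strength in [0.3, 0.5, 0.75, 0.9]:
    generator = torch.Generator(device=device).manual_seed(1024)
    images.append(
        pipe(prompt=prompt, image=init_image, strength=strength, guidance_scale=7.5, generator=generator).images[0]
    )

make_image_grid(images, rows=1, cols=4)
```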
<iframe
	src="https://stevhliu-ghibli-img2img.hf.space"
	frameborder="0"
	width="850"
	height="500"
></iframe> | |