<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# DiffEdit
[[open-in-colab]]
이미지 νŽΈμ§‘μ„ ν•˜λ €λ©΄ 일반적으둜 νŽΈμ§‘ν•  μ˜μ—­μ˜ 마슀크λ₯Ό μ œκ³΅ν•΄μ•Ό ν•©λ‹ˆλ‹€. DiffEditλŠ” ν…μŠ€νŠΈ 쿼리λ₯Ό 기반으둜 마슀크λ₯Ό μžλ™μœΌλ‘œ μƒμ„±ν•˜λ―€λ‘œ 이미지 νŽΈμ§‘ μ†Œν”„νŠΈμ›¨μ–΄ 없이도 마슀크λ₯Ό λ§Œλ“€κΈ°κ°€ μ „λ°˜μ μœΌλ‘œ 더 μ‰¬μ›Œμ§‘λ‹ˆλ‹€. DiffEdit μ•Œκ³ λ¦¬μ¦˜μ€ μ„Έ λ‹¨κ³„λ‘œ μž‘λ™ν•©λ‹ˆλ‹€:
1. Diffusion λͺ¨λΈμ΄ 일뢀 쿼리 ν…μŠ€νŠΈμ™€ μ°Έμ‘° ν…μŠ€νŠΈλ₯Ό μ‘°κ±΄λΆ€λ‘œ μ΄λ―Έμ§€μ˜ λ…Έμ΄μ¦ˆλ₯Ό μ œκ±°ν•˜μ—¬ μ΄λ―Έμ§€μ˜ μ—¬λŸ¬ μ˜μ—­μ— λŒ€ν•΄ μ„œλ‘œ λ‹€λ₯Έ λ…Έμ΄μ¦ˆ μΆ”μ •μΉ˜λ₯Ό μƒμ„±ν•˜κ³ , κ·Έ 차이λ₯Ό μ‚¬μš©ν•˜μ—¬ 쿼리 ν…μŠ€νŠΈμ™€ μΌμΉ˜ν•˜λ„λ‘ μ΄λ―Έμ§€μ˜ μ–΄λŠ μ˜μ—­μ„ λ³€κ²½ν•΄μ•Ό ν•˜λŠ”μ§€ μ‹λ³„ν•˜κΈ° μœ„ν•œ 마슀크λ₯Ό μΆ”λ‘ ν•©λ‹ˆλ‹€.
2. μž…λ ₯ 이미지가 DDIM을 μ‚¬μš©ν•˜μ—¬ 잠재 κ³΅κ°„μœΌλ‘œ μΈμ½”λ”©λ©λ‹ˆλ‹€.
3. 마슀크 μ™ΈλΆ€μ˜ 픽셀이 μž…λ ₯ 이미지와 λ™μΌν•˜κ²Œ μœ μ§€λ˜λ„λ‘ 마슀크λ₯Ό κ°€μ΄λ“œλ‘œ μ‚¬μš©ν•˜μ—¬ ν…μŠ€νŠΈ 쿼리에 쑰건이 μ§€μ •λœ diffusion λͺ¨λΈλ‘œ latentsλ₯Ό λ””μ½”λ”©ν•©λ‹ˆλ‹€.
이 κ°€μ΄λ“œμ—μ„œλŠ” 마슀크λ₯Ό μˆ˜λ™μœΌλ‘œ λ§Œλ“€μ§€ μ•Šκ³  DiffEditλ₯Ό μ‚¬μš©ν•˜μ—¬ 이미지λ₯Ό νŽΈμ§‘ν•˜λŠ” 방법을 μ„€λͺ…ν•©λ‹ˆλ‹€.
μ‹œμž‘ν•˜κΈ° 전에 λ‹€μŒ λΌμ΄λΈŒλŸ¬λ¦¬κ°€ μ„€μΉ˜λ˜μ–΄ μžˆλŠ”μ§€ ν™•μΈν•˜μ„Έμš”:
```py
# Colabμ—μ„œ ν•„μš”ν•œ 라이브러리λ₯Ό μ„€μΉ˜ν•˜κΈ° μœ„ν•΄ 주석을 μ œμ™Έν•˜μ„Έμš”
#!pip install -q diffusers transformers accelerate
```
[`StableDiffusionDiffEditPipeline`]μ—λŠ” 이미지 λ§ˆμŠ€ν¬μ™€ λΆ€λΆ„μ μœΌλ‘œ λ°˜μ „λœ latents 집합이 ν•„μš”ν•©λ‹ˆλ‹€. 이미지 λ§ˆμŠ€ν¬λŠ” [`~StableDiffusionDiffEditPipeline.generate_mask`] ν•¨μˆ˜μ—μ„œ μƒμ„±λ˜λ©°, 두 개의 νŒŒλΌλ―Έν„°μΈ `source_prompt`와 `target_prompt`κ°€ ν¬ν•¨λ©λ‹ˆλ‹€. 이 λ§€κ°œλ³€μˆ˜λŠ” μ΄λ―Έμ§€μ—μ„œ 무엇을 νŽΈμ§‘ν• μ§€ κ²°μ •ν•©λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄, *과일* ν•œ 그릇을 *λ°°* ν•œ κ·Έλ¦‡μœΌλ‘œ λ³€κ²½ν•˜λ €λ©΄ λ‹€μŒκ³Ό 같이 ν•˜μ„Έμš”:
```py
source_prompt = "a bowl of fruits"
target_prompt = "a bowl of pears"
```
λΆ€λΆ„μ μœΌλ‘œ λ°˜μ „λœ latentsλŠ” [`~StableDiffusionDiffEditPipeline.invert`] ν•¨μˆ˜μ—μ„œ μƒμ„±λ˜λ©°, 일반적으둜 이미지λ₯Ό μ„€λͺ…ν•˜λŠ” `prompt` λ˜λŠ” *μΊ‘μ…˜*을 ν¬ν•¨ν•˜λŠ” 것이 inverse latent sampling ν”„λ‘œμ„ΈμŠ€λ₯Ό κ°€μ΄λ“œν•˜λŠ” 데 도움이 λ©λ‹ˆλ‹€. μΊ‘μ…˜μ€ μ’…μ’… `source_prompt`κ°€ 될 수 μžˆμ§€λ§Œ, λ‹€λ₯Έ ν…μŠ€νŠΈ μ„€λͺ…μœΌλ‘œ 자유둭게 μ‹€ν—˜ν•΄ λ³΄μ„Έμš”!
νŒŒμ΄ν”„λΌμΈ, μŠ€μΌ€μ€„λŸ¬, μ—­ μŠ€μΌ€μ€„λŸ¬λ₯Ό 뢈러였고 λ©”λͺ¨λ¦¬ μ‚¬μš©λŸ‰μ„ 쀄이기 μœ„ν•΄ λͺ‡ κ°€μ§€ μ΅œμ ν™”λ₯Ό ν™œμ„±ν™”ν•΄ λ³΄κ² μŠ΅λ‹ˆλ‹€:
```py
import torch
from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline
pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
    safety_checker=None,
    use_safetensors=True,
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_model_cpu_offload()
pipeline.enable_vae_slicing()
```
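If the xformers package is installed in your environment, you can optionally enable memory-efficient attention as well; this call assumes xformers is available:

```py
# Optional: requires the xformers package
pipeline.enable_xformers_memory_efficient_attention()
```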
Load the image to edit:
```py
from diffusers.utils import load_image, make_image_grid
img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
raw_image = load_image(img_url).resize((768, 768))
raw_image
```
Use the [`~StableDiffusionDiffEditPipeline.generate_mask`] function to generate the image mask. You'll need to pass it the `source_prompt` and `target_prompt` to specify what to edit in the image:
```py
from PIL import Image
source_prompt = "a bowl of fruits"
target_prompt = "a basket of pears"
mask_image = pipeline.generate_mask(
    image=raw_image,
    source_prompt=source_prompt,
    target_prompt=target_prompt,
)
Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768))
```
λ‹€μŒμœΌλ‘œ, λ°˜μ „λœ latentsλ₯Ό μƒμ„±ν•˜κ³  이미지λ₯Ό λ¬˜μ‚¬ν•˜λŠ” μΊ‘μ…˜μ— μ „λ‹¬ν•©λ‹ˆλ‹€:
```py
inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents
```
Finally, pass the image mask and inverted latents to the pipeline. The `target_prompt` now becomes the `prompt`, and the `source_prompt` is used as the `negative_prompt`:
```py
output_image = pipeline(
    prompt=target_prompt,
    mask_image=mask_image,
    image_latents=inv_latents,
    negative_prompt=source_prompt,
).images[0]
mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768))
make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3)
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://github.com/Xiang-cd/DiffEdit-stable-diffusion/blob/main/assets/target.png?raw=true"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">edited image</figcaption>
</div>
</div>
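Sampling is stochastic, so the edited image can vary between runs. For reproducible edits, one option is to seed a `torch.Generator` and pass it to the final pipeline call; this sketch assumes the standard Diffusers `generator` argument:

```py
import torch

# A minimal reproducibility sketch, assuming the pipeline accepts the
# standard Diffusers `generator` argument
generator = torch.Generator("cuda").manual_seed(0)
output_image = pipeline(
    prompt=target_prompt,
    mask_image=mask_image,
    image_latents=inv_latents,
    negative_prompt=source_prompt,
    generator=generator,
).images[0]
```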
## Generate source and target embeddings

The source and target embeddings can be automatically generated with the [Flan-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5) model instead of creating them manually.

Load the Flan-T5 model and tokenizer from the πŸ€— Transformers library:
```py
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16)
```
λͺ¨λΈμ— ν”„λ‘¬ν”„νŠΈν•  source와 target ν”„λ‘¬ν”„νŠΈλ₯Ό μƒμ„±ν•˜κΈ° μœ„ν•΄ 초기 ν…μŠ€νŠΈλ“€μ„ μ œκ³΅ν•©λ‹ˆλ‹€.
```py
source_concept = "bowl"
target_concept = "basket"
source_text = f"Provide a caption for images containing a {source_concept}. "
"The captions should be in English and should be no longer than 150 characters."
target_text = f"Provide a caption for images containing a {target_concept}. "
"The captions should be in English and should be no longer than 150 characters."
```
λ‹€μŒμœΌλ‘œ, ν”„λ‘¬ν”„νŠΈλ“€μ„ μƒμ„±ν•˜κΈ° μœ„ν•΄ μœ ν‹Έλ¦¬ν‹° ν•¨μˆ˜λ₯Ό μƒμ„±ν•©λ‹ˆλ‹€.
```py
@torch.no_grad()
def generate_prompts(input_prompt):
    input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda")
    outputs = model.generate(
        input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

source_prompts = generate_prompts(source_text)
target_prompts = generate_prompts(target_text)
print(source_prompts)
print(target_prompts)
```
<Tip>
λ‹€μ–‘ν•œ ν’ˆμ§ˆμ˜ ν…μŠ€νŠΈλ₯Ό μƒμ„±ν•˜λŠ” μ „λž΅μ— λŒ€ν•΄ μžμ„Ένžˆ μ•Œμ•„λ³΄λ €λ©΄ [생성 μ „λž΅](https://huggingface.co/docs/transformers/main/en/generation_strategies) κ°€μ΄λ“œλ₯Ό μ°Έμ‘°ν•˜μ„Έμš”.
</Tip>
ν…μŠ€νŠΈ 인코딩을 μœ„ν•΄ [`StableDiffusionDiffEditPipeline`]μ—μ„œ μ‚¬μš©ν•˜λŠ” ν…μŠ€νŠΈ 인코더 λͺ¨λΈμ„ λΆˆλŸ¬μ˜΅λ‹ˆλ‹€. ν…μŠ€νŠΈ 인코더λ₯Ό μ‚¬μš©ν•˜μ—¬ ν…μŠ€νŠΈ μž„λ² λ”©μ„ κ³„μ‚°ν•©λ‹ˆλ‹€:
```py
import torch
from diffusers import StableDiffusionDiffEditPipeline
pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
pipeline.enable_vae_slicing()
@torch.no_grad()
def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"):
    embeddings = []
    for sent in sentences:
        text_inputs = tokenizer(
            sent,
            padding="max_length",
            max_length=tokenizer.model_max_length,
            truncation=True,
            return_tensors="pt",
        )
        text_input_ids = text_inputs.input_ids
        prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0]
        embeddings.append(prompt_embeds)
    return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0)

source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder)
target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder)
```
λ§ˆμ§€λ§‰μœΌλ‘œ, μž„λ² λ”©μ„ [`~StableDiffusionDiffEditPipeline.generate_mask`] 및 [`~StableDiffusionDiffEditPipeline.invert`] ν•¨μˆ˜μ™€ νŒŒμ΄ν”„λΌμΈμ— μ „λ‹¬ν•˜μ—¬ 이미지λ₯Ό μƒμ„±ν•©λ‹ˆλ‹€:
```diff
from diffusers import DDIMInverseScheduler, DDIMScheduler
from diffusers.utils import load_image, make_image_grid
from PIL import Image
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
raw_image = load_image(img_url).resize((768, 768))
mask_image = pipeline.generate_mask(
image=raw_image,
- source_prompt=source_prompt,
- target_prompt=target_prompt,
+ source_prompt_embeds=source_embeds,
+ target_prompt_embeds=target_embeds,
)
inv_latents = pipeline.invert(
- prompt=source_prompt,
+ prompt_embeds=source_embeds,
image=raw_image,
).latents
output_image = pipeline(
mask_image=mask_image,
image_latents=inv_latents,
- prompt=target_prompt,
- negative_prompt=source_prompt,
+ prompt_embeds=target_embeds,
+ negative_prompt_embeds=source_embeds,
).images[0]
mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L")
make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3)
```
## λ°˜μ „μ„ μœ„ν•œ μΊ‘μ…˜ μƒμ„±ν•˜κΈ°
`source_prompt`λ₯Ό μΊ‘μ…˜μœΌλ‘œ μ‚¬μš©ν•˜μ—¬ λΆ€λΆ„μ μœΌλ‘œ λ°˜μ „λœ latentsλ₯Ό 생성할 수 μžˆμ§€λ§Œ, [BLIP](https://huggingface.co/docs/transformers/model_doc/blip) λͺ¨λΈμ„ μ‚¬μš©ν•˜μ—¬ μΊ‘μ…˜μ„ μžλ™μœΌλ‘œ 생성할 μˆ˜λ„ μžˆμŠ΅λ‹ˆλ‹€.
πŸ€— Transformers λΌμ΄λΈŒλŸ¬λ¦¬μ—μ„œ BLIP λͺ¨λΈκ³Ό ν”„λ‘œμ„Έμ„œλ₯Ό λΆˆλŸ¬μ˜΅λ‹ˆλ‹€:
```py
import torch
from transformers import BlipForConditionalGeneration, BlipProcessor
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True)
```
μž…λ ₯ μ΄λ―Έμ§€μ—μ„œ μΊ‘μ…˜μ„ μƒμ„±ν•˜λŠ” μœ ν‹Έλ¦¬ν‹° ν•¨μˆ˜λ₯Ό λ§Œλ“­λ‹ˆλ‹€:
```py
@torch.no_grad()
def generate_caption(images, caption_generator, caption_processor):
    text = "a photograph of"
    inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype)
    caption_generator.to("cuda")
    outputs = caption_generator.generate(**inputs, max_new_tokens=128)
    # offload caption generator
    caption_generator.to("cpu")
    caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0]
    return caption
```
Load an input image and generate a caption for it using the `generate_caption` function:
```py
from diffusers.utils import load_image
img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
raw_image = load_image(img_url).resize((768, 768))
caption = generate_caption(raw_image, model, processor)
```
<div class="flex justify-center">
<figure>
<img class="rounded-xl" src="https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"/>
<figcaption class="text-center">generated caption: "a photograph of a bowl of fruit on a table"</figcaption>
</figure>
</div>
이제 μΊ‘μ…˜μ„ [`~StableDiffusionDiffEditPipeline.invert`] ν•¨μˆ˜μ— 놓아 λΆ€λΆ„μ μœΌλ‘œ λ°˜μ „λœ latentsλ₯Ό 생성할 수 μžˆμŠ΅λ‹ˆλ‹€!