---
license: mit
tags:
- image-to-image
datasets:
- yulu2/InstructCV-Demo-Data
---

# INSTRUCTCV: YOUR TEXT-TO-IMAGE MODEL IS SECRETLY A VISION GENERALIST

GitHub: https://github.com

[![pCVB5B8.png](https://s1.ax1x.com/2023/06/11/pCVB5B8.png)](https://imgse.com/i/pCVB5B8)


## Example

To use `InstructCV`, install `diffusers` from `main` for now; the pipeline will be available in the next release.

```bash
pip install diffusers accelerate safetensors transformers
```

```python
import math

import requests
import torch
from PIL import Image, ImageOps
from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler

model_id = "yulu2/InstructCV"
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16, safety_checker=None, variant="ema"
)
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

url = "https://raw.githubusercontent.com/timothybrooks/instruct-pix2pix/main/imgs/example.jpg"

def download_image(url):
    image = Image.open(requests.get(url, stream=True).raw)
    image = ImageOps.exif_transpose(image)
    image = image.convert("RGB")
    return image

image = download_image(url)

# Rescale so both sides are multiples of 64 and the image stays close to 512 px,
# as expected by the underlying diffusion model.
width, height = image.size
factor = 512 / max(width, height)
factor = math.ceil(min(width, height) * factor / 64) * 64 / min(width, height)
width = int((width * factor) // 64) * 64
height = int((height * factor) // 64) * 64
image = ImageOps.fit(image, (width, height), method=Image.Resampling.LANCZOS)

# The vision task is specified entirely through the text instruction.
prompt = "Detect the person."
images = pipe(prompt, image=image, num_inference_steps=100).images
images[0]  # the result image (displayed inline in a notebook)
```
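
The result can be saved like any PIL image, and because InstructCV is a vision generalist, other tasks are requested by changing only the instruction. The snippet below is a minimal sketch reusing `pipe` and `image` from above; the file names and the exact prompt wording are illustrative, not fixed by the model card.

```python
# Save the detection output (file name is arbitrary).
images[0].save("detection.png")

# Other vision tasks use the same pipeline call; only the instruction changes.
# The prompt wording here is illustrative.
depth = pipe("Estimate the depth of the image.", image=image, num_inference_steps=100).images[0]
depth.save("depth.png")
```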