---
license: openrail
tags:
- stable-diffusion
- stable-diffusion-diffusers
- controlnet
- endpoints-template
- Text-to-Image
thumbnail: >-
  https://huggingface.co/philschmid/ControlNet-endpoint/resolve/main/thumbnail.png
inference: true
---


# Inference Endpoint for [ControlNet](https://huggingface.co/lllyasviel/ControlNet) using [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)


> ControlNet is a neural network structure to control diffusion models by adding extra conditions.
> Official repository: https://github.com/lllyasviel/ControlNet

---

Blog post: [Controlled text to image generation with Inference Endpoints]()

This repository implements a custom `handler` task for `controlled text-to-image` generation on 🤗 Inference Endpoints. The code for the customized pipeline is in the [handler.py](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/handler.py).

There is also a [notebook](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/create_handler.ipynb) included that shows how to create the `handler.py`.
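
Custom handlers for Inference Endpoints are implemented as an `EndpointHandler` class exposing an `__init__` (model loading) and a `__call__` (request handling) method. The snippet below is only a rough, hypothetical sketch of that structure, wired up for the depth model and assuming the payload already contains a usable control image; the actual implementation, including the preprocessing for every supported `controlnet_type`, lives in the linked `handler.py`.

```python
# sketch of a handler.py (hypothetical, depth-only); see the real handler.py
# in this repository for the full implementation
import base64
from io import BytesIO
from typing import Any, Dict

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image


class EndpointHandler:
    def __init__(self, path: str = ""):
        # load the pipeline once when the endpoint starts
        controlnet = ControlNetModel.from_pretrained(
            "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
        )
        self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
        ).to("cuda")

    def __call__(self, data: Dict[str, Any]) -> Image.Image:
        prompt = data.pop("inputs", None)
        negative_prompt = data.pop("negative_prompt", None)
        # decode the base64-encoded conditioning image from the request payload
        image = Image.open(BytesIO(base64.b64decode(data.pop("image"))))
        output = self.pipe(
            prompt, image=image, negative_prompt=negative_prompt, num_inference_steps=30
        )
        # returning a PIL image lets the endpoint serve it as PNG when the
        # client requests "Accept: image/png"
        return output.images[0]
```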

![sample](thumbnail.png)


### Expected request payload

```json
{
    "inputs": "A prompt used for image generation",
    "negative_prompt": "low res, bad anatomy, worst quality, low quality",
    "controlnet_type": "depth",
    "image": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC"
}
```

Supported `controlnet_type` values are: `canny_edge`, `pose`, `depth`, `scribble`, `segmentation`, `normal`, `hed`, `hough`.
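
Internally, each `controlnet_type` has to be resolved to a ControlNet checkpoint. The mapping below is only a plausible sketch based on the public `lllyasviel/sd-controlnet-*` models (the names are assumed, not taken from this repository); the authoritative mapping is defined in `handler.py`.

```python
# plausible controlnet_type -> checkpoint mapping (assumed, see handler.py)
CONTROLNET_MAPPING = {
    "canny_edge": "lllyasviel/sd-controlnet-canny",
    "pose": "lllyasviel/sd-controlnet-openpose",
    "depth": "lllyasviel/sd-controlnet-depth",
    "scribble": "lllyasviel/sd-controlnet-scribble",
    "segmentation": "lllyasviel/sd-controlnet-seg",
    "normal": "lllyasviel/sd-controlnet-normal",
    "hed": "lllyasviel/sd-controlnet-hed",
    "hough": "lllyasviel/sd-controlnet-mlsd",  # MLSD straight-line detection
}
```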

Below is an example of how to send a request using Python and `requests`.


## Use Python to send requests

1. Get an input image:
```bash
wget https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png
```

2. Use the following code to send a request to the endpoint

```python
import base64
from io import BytesIO

import requests as r
from PIL import Image

ENDPOINT_URL = "" # your endpoint url
HF_TOKEN = "" # your huggingface token `hf_xxx`

# helper image utils
def encode_image(image_path):
    with open(image_path, "rb") as i:
        b64 = base64.b64encode(i.read())
    return b64.decode("utf-8")


def predict(prompt, image, negative_prompt=None, controlnet_type="normal"):
    image = encode_image(image)

    # prepare sample payload
    request = {"inputs": prompt, "image": image, "negative_prompt": negative_prompt, "controlnet_type": controlnet_type}
    # headers
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
        "Accept": "image/png" # important to get an image back
    }

    response = r.post(ENDPOINT_URL, headers=headers, json=request)
    if response.status_code != 200:
        print(response.text)
        raise Exception("Prediction failed")
    img = Image.open(BytesIO(response.content))
    return img


prediction = predict(
    prompt="cloudy sky background lush landscape house and green trees, RAW photo (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3",
    negative_prompt="lowres, bad anatomy, worst quality, low quality, city, traffic",
    controlnet_type="hed",
    image="input_image_vermeer.png",  # the image downloaded in step 1
)

prediction.save("result.png")
```

Expected output:

![sample](result.png)

---

[Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang and Maneesh Agrawala.

Using the pretrained models, we can provide a control image (for example, a depth map) to guide Stable Diffusion text-to-image generation so that it follows the structure of the control image and fills in the details.
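
The same mechanism can be tried locally with 🤗 Diffusers, independent of this endpoint. Below is a rough sketch (model IDs and parameters are illustrative, not taken from this repository) that derives a canny-edge control image with OpenCV and feeds it to `StableDiffusionControlNetPipeline`:

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image
from PIL import Image

# derive a canny-edge control image from an ordinary photo
source = load_image(
    "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png"
)
edges = cv2.Canny(np.array(source), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# plug the canny ControlNet into the Stable Diffusion pipeline
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a painting of a woman, oil on canvas",
    image=control_image,
    num_inference_steps=30,
).images[0]
image.save("controlled.png")
```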

The abstract of the paper is the following:

We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.