Inference API on Text-To-Image

#82
by LandonBayer - opened

Hi everyone, is it possible to use the Hugging Face hosted Inference API instead of hosting the repository yourself? I noticed that on the model card page you can type in a prompt and it will generate the image. I'm trying to figure out whether there is a way to access that functionality from code, so I don't have to worry about the details of running the machine.

Thanks!

Hi @LandonBayer, you can use our Inference API for fast prototyping.

Here's a JavaScript example. Paste it into your browser's JavaScript console, replacing "a flying Cat" with your prompt:

fetch(
  "https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5",
  {
    method: "POST",
    headers: {
      "content-type": "application/json",
    },
    body: JSON.stringify({ inputs: "a flying Cat" }),
  }
)
  .then((res) => res.blob())
  .then((blob) => {
    const tab = window.open("", "_blank");
    tab.location.href = window.URL.createObjectURL(blob);
  });
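One caveat with the snippet above: when the model is still loading or you hit a rate limit, the endpoint returns an error body rather than image bytes, so the blob won't be a valid image. A minimal guard (a sketch; `queryImage` is a hypothetical helper name, and the exact error body format is an assumption):

```javascript
// Sketch: same request as above, but fail loudly on non-OK responses
// instead of treating an error body as an image.
async function queryImage(prompt) {
  const res = await fetch(
    "https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5",
    {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ inputs: prompt }),
    }
  );
  if (!res.ok) {
    // error responses carry text/JSON, not image bytes
    throw new Error(`API error ${res.status}: ${await res.text()}`);
  }
  return res.blob();
}

// usage:
// queryImage("a flying Cat").then((blob) => {
//   window.open(window.URL.createObjectURL(blob), "_blank");
// });
```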

Thank you! I’ll work on translating it to Python for my app and let you know.

Sorry @LandonBayer, I didn't know you needed it in Python. Here it is:

import json
import requests
from PIL import Image
import io
import re
from time import time

API_TOKEN = ""  # token in case you want to use private API
headers = {
    # "Authorization": f"Bearer {API_TOKEN}",
    "X-Wait-For-Model": "true",
    "X-Use-Cache": "false"
}
API_URL = "https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5"


def query(payload):
    data = json.dumps(payload)
    response = requests.post(API_URL, headers=headers, data=data)
    return Image.open(io.BytesIO(response.content))


def slugify(text):
    # drop punctuation/symbols, then turn runs of whitespace into hyphens
    text = re.sub(r"[^\w\s]", "", text)
    text = re.sub(r"\s+", "-", text)
    return text


prompt = "A photo of a flying cat"
image = query({"inputs": prompt})
image.save(f"{slugify(prompt)}-{time():.0f}.png")

Will I get a similar endpoint if I deploy to an Inference Endpoint? I have no knowledge of Python, and I want to run this model on a hosted server to serve an API for text-to-image.

I’m not sure what exactly you’re asking @siddhesh-shirawale, but I will say this solution worked for me! Thank you @radames!!!

LandonBayer changed discussion status to closed

Can we get three images at once? If so, how? I'm using JavaScript.
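As far as I know, the hosted endpoint returns a single image per request rather than accepting a batch size, so one straightforward approach is to fire three requests in parallel with Promise.all; with "X-Use-Cache": "false" each request should be generated fresh. A sketch (`queryOnce` and `queryMany` are hypothetical helper names, not part of the API):

```javascript
// Sketch: request N images by issuing N parallel calls to the
// single-image endpoint.
const API_URL =
  "https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5";

async function queryOnce(prompt) {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      // disable caching so repeated identical prompts give distinct images
      "X-Use-Cache": "false",
    },
    body: JSON.stringify({ inputs: prompt }),
  });
  if (!res.ok) throw new Error(`API error ${res.status}`);
  return res.blob();
}

function queryMany(prompt, n) {
  // fire n requests concurrently and wait for all of them
  return Promise.all(Array.from({ length: n }, () => queryOnce(prompt)));
}

// usage:
// queryMany("a flying Cat", 3).then((blobs) => { /* three image blobs */ });
```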
