
Inference API Documentation

Overview

The Inference API lets you send requests to a remote endpoint to generate images from text prompts. This API supports various operations and requires specific input parameters.

Base URL

The base URL for the Inference API is: <hf_endpoint_url>

Authentication

The Inference API requires authentication using a bearer token. Include the token in the Authorization header of your requests.
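As a minimal sketch, assuming a placeholder token (substitute your actual access token), the required headers can be built like this:

```python
# Placeholder value; replace with your actual access token.
token = "<hf_token>"

# Every request must carry the bearer token in the Authorization header.
headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}
```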

Request Format

Send a POST request to the endpoint URL with the following JSON payload:

{
  "inputs": "<positive_prompt>",
  "negative_prompt": "<negative_prompt>",
  "height": <height>,
  "width": <width>,
  "guidance_scale": <guidance_scale>,
  "inference_steps": <inference_steps>
}

Request Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| inputs | string | Yes | The positive prompt for the inference. |
| negative_prompt | string | No | The negative prompt for the inference. |
| height | integer | Yes | The height of the generated image. |
| width | integer | Yes | The width of the generated image. |
| guidance_scale | float | Yes | The guidance scale for the inference. |
| inference_steps | integer | No | The number of inference steps (default: 25). |

Response Format

The API response will be a JSON object with the following structure:

{
  "image": "<base64_encoded_image>"
}

Response Fields

The response object contains the following fields:

| Field | Type | Description |
| --- | --- | --- |
| image | string | The base64-encoded image generated by the API. |
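Since the image arrives base64-encoded, it must be decoded before it can be saved or displayed. A minimal sketch, where `response_json` is an illustrative stand-in for the parsed API response:

```python
import base64

# Hypothetical parsed response; in practice this comes from response.json().
response_json = {"image": base64.b64encode(b"\x89PNG\r\n\x1a\n").decode("ascii")}

# Decode the base64 string back into raw image bytes and write them to disk.
image_bytes = base64.b64decode(response_json["image"])
with open("output.png", "wb") as f:
    f.write(image_bytes)
```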

Example Request

Here's an example request using Python:

import requests

url = '<hf_endpoint_url>'
token = '<hf_token>'

requestData = {
  'inputs': 'Positive prompt goes here',
  'negative_prompt': 'Negative prompt goes here',
  'height': 512,
  'width': 512,
  'guidance_scale': 7.5,
  'inference_steps': 50,
}

headers = {
  'Authorization': 'Bearer ' + token,
  'Content-Type': 'application/json'
}

response = requests.post(url, json=requestData, headers=headers)
response.raise_for_status()  # fail fast on HTTP errors
print(response.json())

Here's an example request using JavaScript:

const endpointURL = '<hf_endpoint_url>';
const hfToken = '<hf_token>';

const requestData = {
  inputs: 'Positive prompt goes here',
  negative_prompt: 'Negative prompt goes here',
  height: 512,
  width: 512,
  guidance_scale: 7.5,
  inference_steps: 50,
};

const headers = {
  'Authorization': `Bearer ${hfToken}`,
  'Content-Type': 'application/json'
};

fetch(endpointURL, {
  method: 'POST',
  body: JSON.stringify(requestData),
  headers: headers
})
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(error => console.error(error));
