Image to Image
Image-to-image is the task of transforming a source image to match the characteristics of a target image or a target image domain.
Example applications:
- Transferring the style of an image to another image
- Colorizing a black and white image
- Increasing the resolution of an image
For more details about the image-to-image task, check out its dedicated page, where you will find examples and related materials.
Recommended models
- timbrooks/instruct-pix2pix: A model that takes an image and a text instruction and edits the image according to that instruction.
This is only a subset of the supported models. Find the model that suits you best here.
Using the API
No snippet available for this task.
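Although no autogenerated snippet is shown here, a request can still be made directly against the HTTP endpoint. Below is a minimal sketch in Python, assuming the recommended timbrooks/instruct-pix2pix model, a local file named input.jpg, and a user access token stored in the HF_TOKEN environment variable (these are illustrative choices, not part of the specification):

```python
import os
import requests

# Serverless Inference API endpoint for the chosen model (illustrative choice).
API_URL = "https://api-inference.huggingface.co/models/timbrooks/instruct-pix2pix"
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

# With no extra parameters, the input image can be sent as a raw bytes payload.
with open("input.jpg", "rb") as f:
    image_bytes = f.read()

response = requests.post(API_URL, headers=headers, data=image_bytes)
response.raise_for_status()

# The output image comes back as raw bytes in the response body.
with open("output.jpg", "wb") as f:
    f.write(response.content)
```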
API specification
Request
| Payload | Type | Description |
|---|---|---|
| inputs* | string | The input image data as a base64-encoded string. If no parameters are provided, you can also provide the image data as a raw bytes payload. |
| parameters | object | Additional inference parameters for Image To Image. |
| parameters.guidance_scale | number | For diffusion models. A higher guidance scale value encourages the model to generate images closely linked to the text prompt, at the expense of lower image quality. |
| parameters.negative_prompt | string[] | One or several prompts to guide what NOT to include in the generated image. |
| parameters.num_inference_steps | integer | For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference. |
| parameters.target_size | object | The size in pixels of the output image. |
| parameters.target_size.width* | integer | Target width in pixels. |
| parameters.target_size.height* | integer | Target height in pixels. |
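When any parameters are supplied, the image must be sent as a base64-encoded string inside a JSON body, following the field names in the table above. A sketch of how such a payload could be assembled (the parameter values are illustrative, not recommendations):

```python
import base64

# Encode the input image so it can travel inside a JSON payload.
with open("input.jpg", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

# Field names follow the request specification above; the values are examples only.
payload = {
    "inputs": encoded_image,
    "parameters": {
        "guidance_scale": 7.5,
        "negative_prompt": ["blurry", "low quality"],
        "num_inference_steps": 50,
        "target_size": {"width": 512, "height": 512},
    },
}

# The payload would then be sent as JSON, e.g. requests.post(API_URL, headers=headers, json=payload).
```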
Some options can be configured by passing headers to the Inference API. Here are the available headers:
| Headers | Type | Description |
|---|---|---|
| authorization | string | Authentication header in the form 'Bearer hf_****', where hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page. |
| x-use-cache | boolean, default: true | The Inference API has a cache layer to speed up requests it has already seen. Most models can use those cached results as-is because they are deterministic (the outputs would be the same anyway). If you use a nondeterministic model, set this header to false to bypass the cache and force a truly new query. Read more about caching here. |
| x-wait-for-model | boolean, default: false | If the model is not ready, wait for it instead of receiving a 503 error. This limits the number of requests needed to complete your inference. It is advised to only set this flag to true after receiving a 503 error, as it keeps any waiting in your application to known places. Read more about model availability here. |
For more information about Inference API headers, check out the parameters guide.
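As a small sketch of how these headers could be combined in a single request (the token and values are placeholders):

```python
headers = {
    "Authorization": "Bearer hf_****",  # personal user access token
    "x-use-cache": "false",             # bypass the cache for nondeterministic models
    "x-wait-for-model": "true",         # only advisable after a 503, as noted above
}

# Header values are sent as strings, e.g. requests.post(API_URL, headers=headers, json=payload).
```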
Response
| Body | Type | Description |
|---|---|---|
| image | unknown | The output image returned as raw bytes in the payload. |
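Because the response body is raw image bytes rather than JSON, it can be decoded directly. A short sketch, assuming the `response` object from the earlier example and Pillow as the (arbitrarily chosen) image library:

```python
import io

from PIL import Image

# response.content holds the raw image bytes returned by the API.
output_image = Image.open(io.BytesIO(response.content))
output_image.save("edited.png")
```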