How to use OpenVINO for inference

🤗 Optimum provides a Stable Diffusion pipeline compatible with OpenVINO. You can now easily perform inference with OpenVINO Runtime on a variety of Intel processors (see the full list of supported devices).

Installation

Install 🤗 Optimum Intel with the following command:

pip install "optimum[openvino]"

Stable Diffusion Inference

To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace StableDiffusionPipeline with OVStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the OpenVINO format on the fly, set export=True.

from optimum.intel.openvino import OVStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
# export=True converts the PyTorch checkpoint to the OpenVINO format on the fly
pipe = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
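
The conversion runs every time the pipeline is loaded with export=True. To skip it on subsequent loads, you can save the exported model and reload it from disk. A minimal sketch, assuming save_pretrained and from_pretrained work on local directories as in other 🤗 pipelines; the directory name is arbitrary:

# Save the OpenVINO files so future loads skip the PyTorch conversion
pipe.save_pretrained("./stable-diffusion-v1-5-openvino")

# Reload directly from the exported files (no export=True needed)
pipe = OVStableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-5-openvino")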

You can find more examples (such as static reshaping and model compilation, sketched below) in the 🤗 Optimum documentation.
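
For instance, reshaping the pipeline to static input shapes and compiling it ahead of the first call can reduce inference latency. A minimal sketch, assuming the reshape() and compile() methods of OVStableDiffusionPipeline accept these arguments; the shape values are illustrative:

batch_size, num_images, height, width = 1, 1, 512, 512

# Fix the input shapes so OpenVINO can optimize the graph for them
pipe.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images)

# Compile the model before the first inference call
pipe.compile()

image = pipe(prompt, height=height, width=width, num_images_per_prompt=num_images).images[0]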