This is the SDXL Turbo model converted to OpenVINO for fast inference on CPU. The model is intended for research purposes only.

Original model: sdxl-turbo

You can use this model with FastSD CPU.

Sample

To run the model yourself, you can use the 🧨 Diffusers library through Optimum Intel's OpenVINO integration:

  1. Install the dependencies:
pip install optimum-intel openvino diffusers onnx
  2. Run the model:
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionXLPipeline

# Load the OpenVINO pipeline; an empty CACHE_DIR disables the model cache
pipeline = OVStableDiffusionXLPipeline.from_pretrained(
    "rupeshs/sdxl-turbo-openvino-int8",
    ov_config={"CACHE_DIR": ""},
)
prompt = "Teddy bears working on new AI research on the moon in the 1980s"

# SDXL Turbo is a distilled model: a single inference step is enough,
# and guidance_scale=1.0 effectively disables classifier-free guidance
images = pipeline(
    prompt=prompt,
    width=512,
    height=512,
    num_inference_steps=1,
    guidance_scale=1.0,
).images
images[0].save("out_image.png")

License

The SDXL Turbo Model is licensed under the Stability AI Non-Commercial Research Community License, Copyright (c) Stability AI Ltd. All Rights Reserved.
