This model is a quantized version of stabilityai/sdxl-turbo, converted to the OpenVINO format. It was obtained via the nncf-quantization space using optimum-intel. First, make sure you have optimum-intel installed:

pip install optimum[openvino]

To load the model, you can do the following:

from optimum.intel import OVStableDiffusionXLPipeline

model_id = "AIFunOver/sdxl-turbo-openvino-8bit"
# Load the OpenVINO-exported pipeline directly from the Hub
model = OVStableDiffusionXLPipeline.from_pretrained(model_id)
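
Once loaded, the pipeline can be called like a regular diffusers pipeline. Below is a minimal sketch of generating an image; the prompt is illustrative, and the single-step, guidance-free settings are assumptions based on the usual recommendations for the base sdxl-turbo model rather than something stated on this card:

# Minimal generation sketch (illustrative prompt and settings).
# sdxl-turbo is typically run with one inference step and no
# classifier-free guidance (guidance_scale=0.0); adjust as needed.
prompt = "a photo of an astronaut riding a horse on mars"
image = model(prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
image.save("astronaut.png")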