ONNX Runtime inference using the OpenVINO execution provider with the Stable Diffusion ONNX model

#170
by sk2893 - opened

Hi, I was trying to run inference on the Stable Diffusion v1.4 ONNX model with the OpenVINO execution provider. I installed the required OpenVINO packages in a Python environment and modified onnx_utils.py from the diffusers library to add support for the OpenVINO execution provider.

While running inference, I hit an issue when the model is loaded via OpenVINO's Core. The failing call is `image = pipe(prompt).images[0]`, and the error is:

```
  File "onnx_openvpy39\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 274, in __call__
    noise_pred = self.unet(sample=latent_model_input, timestep=timestep, encoder_hidden_states=text_embeddings)
  File "onnx_openvpy39\lib\site-packages\diffusers\onnx_utils.py", line 62, in __call__
    return self.model.run(None, inputs)
  File "onnx_openvpy39\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 192, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION :
Non-zero status code returned while running OpenVINO-EP-subgraph_4 node.
Name:'OpenVINOExecutionProvider_OpenVINO-EP-subgraph_4_0'
Status Message: C:\Users\sfatima\source\repos\onnxruntime_newmodel\onnxruntime\onnxruntime\core\providers\openvino\ov_interface.cc:36
class std::shared_ptr __cdecl onnxruntime::openvino_ep::OVCore::ReadModel(const class std::basic_string<char,struct std::char_traits,class std::allocator > &) const
[OpenVINO-EP] Exception while Reading network:
invalid external data: ExternalDataInfo(data_full_path: weights.pb, offset: 1738007040, data_length: 13107200, sha1_digest: 0)
```
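For context, the `ExternalDataInfo` in the message means ONNX Runtime tried to read 13,107,200 bytes at offset 1,738,007,040 inside weights.pb, so the file must be at least offset + length bytes long. A quick sanity check on the downloaded file can rule out a truncated or missing weights.pb (the local path below is an assumption; weights.pb has to sit next to the unet's model.onnx):

```python
import os

# Values taken from the ExternalDataInfo in the error message above.
OFFSET = 1738007040      # byte offset of the tensor inside weights.pb
DATA_LENGTH = 13107200   # number of bytes the runtime tries to read

# Minimum size weights.pb must have for this read to succeed.
min_size = OFFSET + DATA_LENGTH
print(min_size)  # 1751114240 bytes (~1.63 GiB)

# Assumed local layout: weights.pb next to the unet's model.onnx.
weights_path = os.path.join("stable-diffusion-v1-4", "unet", "weights.pb")
if os.path.exists(weights_path):
    actual = os.path.getsize(weights_path)
    print("weights.pb size:", actual, "ok:", actual >= min_size)
else:
    print("weights.pb not found at", weights_path)
```

If the file is smaller than that minimum, the download is incomplete; if it is missing, the runtime resolved the relative `data_full_path` against the wrong directory.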

System information:
Windows 11
pip install onnxruntime-openvino==1.11.0
pip install openvino==2022.1
Python version: 3.9
onnx model from stable-diffusion : https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/onnx

To reproduce:

```python
from diffusers import OnnxStableDiffusionPipeline
import onnxruntime as rt
import openvino.utils as utils

device = "CPU_FP32"  # OpenVINO EP device_type

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="OpenVINOExecutionProvider",
    provider_options=[{"device_type": device}],
)
prompt = "a photo of an astronaut riding a horse on mars"
# Run the pipeline on the prompt
image = pipe(prompt).images[0]
```
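For completeness, `device` has to be set before the pipeline is constructed. A hypothetical helper for building `provider_options` might look like the sketch below; the `device_type` strings listed are the ones documented for the OpenVINO execution provider (the exact set is an assumption for onnxruntime-openvino 1.11):

```python
# Hypothetical helper: builds the provider_options list for the
# OpenVINO execution provider. The valid device_type values here
# (CPU_FP32, GPU_FP32, GPU_FP16, MYRIAD_FP16) are an assumption
# based on the OpenVINO EP documentation for this release.
def openvino_provider_options(device_type="CPU_FP32"):
    valid = {"CPU_FP32", "GPU_FP32", "GPU_FP16", "MYRIAD_FP16"}
    if device_type not in valid:
        raise ValueError(f"unsupported device_type: {device_type}")
    # onnxruntime expects one options dict per provider, in a list.
    return [{"device_type": device_type}]

print(openvino_provider_options())  # [{'device_type': 'CPU_FP32'}]
```

It would then be passed as `provider_options=openvino_provider_options("CPU_FP32")` in the `from_pretrained` call above.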

@pcuenq, @bes-dev, can you share your thoughts? Also, @bes-dev, could you share the steps you followed to convert stable-diffusion-v1.4.ckpt to the ONNX models for vae_encoder, vae_decoder, and unet?
