Is there an onnx version of the model?

#24 by rdp-studio

Is there an onnx version of the model?

You can try this: https://huggingface.co/ShadowPower/waifu-diffusion-v1-3-onnx

Yes, this is good. But I want a model that is compatible with the StableDiffusionOnnxPipeline that comes with diffusers.

I just converted to a version that might work for diffusers, here it is: https://huggingface.co/ShadowPower/waifu-diffusion-diffusers-onnx-v1-3

Thank you. I'll try it now.

Traceback (most recent call last):
  File "C:\Data\AI\picgen\server.py", line 65, in run
    generate(data["prompt"], data["taskid"])
  File "C:\Data\AI\picgen\server.py", line 149, in generate
    file_like = infer(prompt)[0]
  File "C:\Data\AI\picgen\server.py", line 124, in infer
    images = pipe([prompt] * nums, height=height, width=width, num_inference_steps=steps, generator=generator, guidance_scale=guidance_scale )["sample"]
  File "C:\Users\17192\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_onnx.py", line 167, in __call__
    noise_pred = self.unet(
  File "C:\Users\17192\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\onnx_utils.py", line 46, in __call__
    return self.model.run(None, inputs)
  File "C:\Users\17192\AppData\Local\Programs\Python\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(int32)) , expected: (tensor(int64))

What's wrong?

(I am using the DirectML provider of ONNX Runtime.)

It looks like a data type mismatch error; I'll try it myself later.

It seems that the default scheduler uses PyTorch's tensor type, which StableDiffusionOnnxPipeline does not support.
So you need to create a scheduler manually, with tensor_format='np', instead of using the default one.
This is an example:

from diffusers import StableDiffusionOnnxPipeline, PNDMScheduler

model_path = r'ShadowPower/waifu-diffusion-diffusers-onnx-v1-3'

# Build the scheduler by hand with tensor_format='np' so it works on
# NumPy arrays instead of PyTorch tensors.
scheduler = PNDMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule='scaled_linear',
    skip_prk_steps=True,
    tensor_format='np'
)

# Pass the NumPy-based scheduler in so the pipeline does not fall back
# to the default PyTorch one.
pipe = StableDiffusionOnnxPipeline.from_pretrained(
    model_path,
    provider="CPUExecutionProvider",
    scheduler=scheduler
)

if __name__ == '__main__':
    prompt = "1girl, hakurei reimu"
    image = pipe(prompt).images[0]
    image.save('output.png')  # write the result to disk
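
If you want to try GPU inference through DirectML instead, the provider string can presumably be swapped in the same call; a minimal variant of the code above (I have not verified DirectML myself, and see the errors reported below):

pipe = StableDiffusionOnnxPipeline.from_pretrained(
    model_path,
    provider="DmlExecutionProvider",  # DirectML GPU backend instead of the CPU one
    scheduler=scheduler
)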

2022-10-11 21:58:45.5907994 [E:onnxruntime:, sequential_executor.cc:368 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running InstanceNormalization node. Name:'InstanceNormalization_44' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1857)\onnxruntime_pybind11_stat
  0%| | 0/51 [00:00<?, ?it/s]
Worker thread-0: error:
Traceback (most recent call last):
  File "C:\Data\AI\picgen\server.py", line 78, in run
    generate(data["prompt"], data["taskid"])
  File "C:\Data\AI\picgen\server.py", line 162, in generate
    file_like = infer(prompt)[0]
  File "C:\Data\AI\picgen\server.py", line 137, in infer
    images = pipe([prompt] * nums, height=height, width=width, num_inference_steps=steps, generator=generator, guidance_scale=guidance_scale )["sample"]
  File "C:\Users\17192\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_onnx.py", line 167, in __call__
    noise_pred = self.unet(
  File "C:\Users\17192\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\onnx_utils.py", line 46, in __call__
    return self.model.run(None, inputs)
  File "C:\Users\17192\AppData\Local\Programs\Python\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException

An error is still reported when using DmlExecutionProvider; CPUExecutionProvider runs normally.
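
To double-check which execution providers your installed onnxruntime build actually exposes, you can query it directly:

import onnxruntime

# onnxruntime-directml should list DmlExecutionProvider here;
# if only CPUExecutionProvider appears, the DirectML build is not installed.
print(onnxruntime.get_available_providers())
# e.g. ['DmlExecutionProvider', 'CPUExecutionProvider']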

A supplement: when running on the CPU, it seems that a black image is returned (no NSFW content was detected, so it isn't the safety checker).

This is a bug in onnxruntime-directml; you can try using the nightly build instead:
https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml
I'm not sure it works properly; I just found it on Google.
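
A possible way to install it, assuming the feed follows the standard Azure Artifacts pip index layout (check the feed page for the exact index URL and version):

pip uninstall onnxruntime-directml
pip install ort-nightly-directml --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/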

It works, thanks.

I'm trying to see if it outputs normally.

Well, maybe we'll just have to wait for the official Microsoft update.

The returned image is still black.

There is no problem with CPUExecutionProvider, so the issue may be related to the DirectML implementation or the graphics card driver.
I don't have any good ideas.

No, in my environment, "CPUExecutionProvider" returns a pure black image too.

I will try to upgrade "onnxruntime-directml".

"CPUExecutionProvider" works.

"DmlExecutionProvider" doesn't work.

I generated my ONNX by replacing the from_pretrained model ID with "hakurei/waifu-diffusion" in the script I had, and it generated with no issue.
Yes, you need to use a nightly ORT; I used dev20220908001, I think, or 20220917005. One of them had an issue and wouldn't work; the other worked fine. I have not attempted to update since.
If whatever guide you followed generates ONNX from Stable Diffusion for you, then replacing the model ID as above should work the same.
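
For reference, this presumably refers to the ONNX export script that ships with diffusers (scripts/convert_stable_diffusion_checkpoint_to_onnx.py); with the model ID swapped in, the invocation would look something like (output path is just a placeholder):

python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="hakurei/waifu-diffusion" --output_path="./waifu-diffusion-onnx"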

Thanks, it works now!

Can you share all the steps necessary and the notebook?

What's the advantage of ONNX?

Is there a TPU/Flax version of this model? That version is very fast with the original SD model.
