Add onnx version

#4
by skytnt - opened

Diffusers now introduces a new (and experimental) Stable Diffusion pipeline compatible with ONNX Runtime. This lets you run Stable Diffusion on any hardware that supports ONNX (including a significant speedup on CPUs).

You can use this script to convert the model: https://github.com/huggingface/diffusers/blob/main/scripts/convert_stable_diffusion_checkpoint_to_onnx.py

Refer to the release notes: https://github.com/huggingface/diffusers/releases/tag/v0.3.0

I'll see if I can convert it and push it to a separate branch

I have tried converting it, but there seems to be a problem with the conversion: it is not running. ONNX expects an int64 tensor but is being given an int32 tensor.

It can be fixed by setting dtype=np.int64 on line 133 of pipeline_stable_diffusion_onnx.py:

noise_pred = self.unet(
    sample=latent_model_input,
    timestep=np.array([t], dtype=np.int64),
    encoder_hidden_states=text_embeddings,
)

I am too busy to test it again and open an issue or PR on GitHub; feel free to do that.
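
The mismatch is easy to see in plain NumPy: on some platforms (notably Windows builds), np.array([t]) defaults to int32, while the exported ONNX UNet declares its timestep input as int64. A minimal sketch of the fix (the timestep value here is hypothetical):

```python
import numpy as np

t = 981  # hypothetical scheduler timestep

# Without an explicit dtype, NumPy picks the platform-default integer
# type, which is int32 on Windows and gets rejected by ONNX Runtime.
timestep = np.array([t])

# Passing dtype=np.int64 explicitly matches the int64 input the
# exported UNet declares, regardless of platform.
timestep_fixed = np.array([t], dtype=np.int64)

print(timestep_fixed.dtype)  # int64
```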

I actually fixed it JUST NOW another way, but I really appreciate the rapid response!
Between the model_index.json of waifu-diffusion and stable-diffusion (grabbed on Sept 14th, 2022 from CompVis/stable-diffusion-v1-4), there is only one difference: line #14.
In waifu-diffusion, as stated in the Git repo, it is "DDIMScheduler".
In stable-diffusion-v1-4 as pulled, it is "PNDMScheduler".
To get it to work, I just had to put "PNDMScheduler" instead of "DDIMScheduler".

It should be noted this is after conversion to ONNX, running via DirectML (AMD hardware acceleration).
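
For reference, the scheduler entry in model_index.json looks roughly like the following (an abbreviated, hypothetical excerpt; the surrounding keys are omitted). Swapping the class name in that entry is the whole edit:

```json
{
  "_class_name": "StableDiffusionPipeline",
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ]
}
```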

I see astype(np.int64) on line 168 of scheduling_pndm.py but not in the other schedulers; that's why changing to PNDMScheduler fixes it.
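
Concretely, that cast converts the scheduler's timestep array to int64 before it reaches the ONNX session; schedulers without it hand over the platform-default integer dtype. A sketch (the timestep schedule below is made up):

```python
import numpy as np

# Hypothetical timestep schedule, as an integer array in reverse order
timesteps = np.arange(0, 1000, 20)[::-1]

# The cast present in scheduling_pndm.py: force int64 so the array
# matches the int64 timestep input of the exported ONNX UNet.
timesteps = timesteps.astype(np.int64)

print(timesteps.dtype)  # int64
```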

Interesting!

Bonjour Hakurei and others. Is there any update on this? It's pretty much the last piece I need to get the stack to work.
