How can I use models in the diffusers library?

#26
by yiximail - opened

I really like these models and truly appreciate the author for sharing them.
I want to deploy them as a server-only API, without a WebUI or any other Gradio interface.
However, due to my limited knowledge of and experience with Python, I am running into some problems.

1. The demo code behind the "Use in Diffusers" button in the upper right corner raises an error.

from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("WarriorMama777/OrangeMixs")
OSError: Error no file named scheduler_config.json found in directory /root/.cache/huggingface/diffusers/models--WarriorMama777--OrangeMixs/snapshots/641d5be1a5f89a040e58f769cac02b328a277467
2. Then I tried downloading the model first and loading it from the local path directly.
   According to this documentation, from_pretrained should be able to accept a model path.
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("/content/AbyssOrangeMix2_nsfw.safetensors")
OSError: It looks like the config file at '/content/AbyssOrangeMix2_nsfw.safetensors' is not a valid JSON file.

However, it seems that from_pretrained cannot load a single *.safetensors file directly.
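
For what it's worth, more recent diffusers releases ship a single-file loader, StableDiffusionPipeline.from_single_file, which can read this kind of checkpoint directly. A minimal sketch, assuming a recent diffusers version, a GPU runtime, and the local path from the attempt above:

import torch
from diffusers import StableDiffusionPipeline

# Load the single *.safetensors checkpoint directly (needs a recent diffusers release).
pipe = StableDiffusionPipeline.from_single_file(
    "/content/AbyssOrangeMix2_nsfw.safetensors",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("test.png")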

I am not familiar with Python. Despite researching for a day and reading through the documentation, I still cannot find a way to use the diffusers library to load *.safetensors models and generate images.

I mainly refer to these two demos:
conditional_image_generation
stable-diffusion-v1-5
Here is my test colab notebook: colab
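
Given the server-only goal, one possible route (a sketch only, assuming the single-file loader above works for this checkpoint) is to convert the file once into the diffusers folder layout with save_pretrained, and then have the server load that folder with from_pretrained, exactly as in those demos. The output path below is made up:

import torch
from diffusers import StableDiffusionPipeline

# One-time conversion: read the single-file checkpoint and write it out
# in the multi-folder diffusers layout (the output path is just an example).
pipe = StableDiffusionPipeline.from_single_file("/content/AbyssOrangeMix2_nsfw.safetensors")
pipe.save_pretrained("/content/AbyssOrangeMix2_diffusers")

# The server process can then load that folder the same way the demos load a repo id.
pipe = StableDiffusionPipeline.from_pretrained(
    "/content/AbyssOrangeMix2_diffusers",
    torch_dtype=torch.float16,
).to("cuda")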

I need help here too; I'm facing exactly the same problem.

Do me a favor and open an issue in diffusers: https://github.com/huggingface/diffusers/issues

I'll try to help :-) If @WarriorMama777 is OK with it, we could set up a bunch of different model repositories here, each one with a nice page. @WarriorMama777, more than happy to help if you'd like :-)

Any tips for the VAE?
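
For reference, a separately downloaded VAE can be swapped in by loading it on its own and passing it to the pipeline through the vae argument. A minimal sketch, using stabilityai/sd-vae-ft-mse purely as an example stand-in for whichever VAE you actually want, and the converted folder from the sketch further up:

import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Load a standalone VAE (this repo id is only an example placeholder).
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

# Pass it in so it replaces the checkpoint's built-in VAE.
pipe = StableDiffusionPipeline.from_pretrained(
    "/content/AbyssOrangeMix2_diffusers",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")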