Request: DOI

#1
by DarkSeik - opened

Can this model have a model_index.json, please? I want to try it, but I need that file in order to download it with my UI, thanks :D

Do you need just this file, or a conversion to Diffusers? I was trying to convert to Diffusers, but the standard converters were giving me errors.

Well, never mind. I managed the conversion to Diffusers already. You can try it now.

Nice! But now when my UI starts the download, it still can't finish because of this error:

Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory

Thanks for trying anyway :)

If at some point I'm able to download the model, I surely will.

Hmm, well, I usually recommend this: https://github.com/AUTOMATIC1111/stable-diffusion-webui since you don't need to mess with Diffusers; just downloading the .ckpt or .safetensors file is enough. Or there's a live version here: https://huggingface.co/spaces/JosefJilek/loliDiffusionSpace but the whole thing runs on a 2-core CPU without a GPU, so be prepared to wait a lot.

Yeah, I already have this, but I'm forced to work on AMD for some time now, sad for me xD

Hello friend! I was out for a few days; today I retried downloading your model and got a different error:

OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory C:\Users\seikk/.cache\huggingface\diffusers\models--JosefJilek--loliDiffusion\snapshots\d1c6e51b53f9c6344e1ce376976b3e8df44105fc\text_encoder.

I hope I'm not being annoying; I really want your model, it looks so cute :D

Honestly, I don't know; it should work. Are you sure you downloaded the Diffusers files correctly? Anyway, you should be able to use the non-Diffusers version with this: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs It works with AMD GPUs on Windows, or running it on CPU is also an option.

Well, luckily I can't mess up the installation process with my ignorance, because my WebUI has a model manager; it's basically a one-click process xD I already have lots of models from this site, and I really don't know why yours isn't working... Well, anyway, it seems I have to wait for now, or maybe try it on Linux. Honestly, running Automatic1111 on CPU is not really an option because of the massive time needed, but I will keep trying things, like converting the entire directory myself to ONNX format for this UI or something...

Thanks for your time and for your work :)

Well, like I said, you can try this: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs It is for the Windows + AMD configuration; all you will have to do is copy and run one command, manually download the .safetensors file of my model, and place it in the models/Stable-Diffusion folder. It should work without any issues.

Yes, I just tested it; it works on my AMD Windows notebook. It uses DirectML, so it can use the GPU.

LOL, OMG man, I'm just so dumb; I really thought this version of Auto1111 was the same one I already have, but no. Really, thanks for that link.

I was testing it just now too, and it works really well, slightly better than my previous UI for AMD, even though for some reason I'm forced to use --lowvram despite having an RX 580 with 8 GB. I will work on that later, and maybe I can triple my current speed by tweaking things a bit.

Thanks for the help and for your amazing work!

No problem, glad I was able to help.

Jo, I tried to reach out to you on GitHub, but it's not possible to contact you. I hope you understand what a goldmine you are sitting on with your Stable Diffusion text-to-image model. You haven't even got a "buy me a coffee" link on here or anywhere; add one and you're going to get a lot of contributors. What would it take to be able to run this on my own private server at home? I'm happy to make a small contribution. Keep up the great work. You need to monetize this as soon as possible, my friend; it's absolutely brilliant.

I'm not that interested in monetizing it, but "buy me a coffee" is actually a good idea.

I'm very glad you're not going to monetize it, as interest in it would dwindle to zero, and your work is of such quality that this space, and NN/AI in general, would be much worse off without you. What I'm saying is that people, the majority of human beings anyway, are very good at heart and appreciative of the considerable effort you put in. A simple PayPal donate or "buy me a coffee" link will mean two things for you: you will have quite a bit more money, or quite a lot more coffee to drink.

A quick question, my friend: do you recommend using a VAE with your models? And if so, which one do you recommend?

With version v0.5.3 and above you don't need to use a VAE.

Hi, JosefJilek.
Could you also provide a 4 GB version for Colab? For free users in need.

4GB version for Colab? What do you mean?
