Offline usage?

#5
by Tankonor - opened

I don't fully understand Spaces beyond their ability to host front-end Gradio applications, and I couldn't find the code that shows how the models are actually called and how prompts are executed against them.
Is that part hosted in a backend (like a Lambda function?) that's not visible?
Running this Space locally from its Dockerfile does seem to download all of the models listed in all_models.py, but I can't for the life of me figure out how or where the models are running.

If they are running in a backend that can't be seen, is there a way to truly run this offline in a Docker container on our own CPU/GPU hardware, downloading the models and using the Gradio front end for the UI?

Owner

Yeah, everything runs in Python, so if it runs here you should be able to run it locally offline, provided you've downloaded the 3+ TB of models. You'd need to install Gradio from here:

https://github.com/gradio-app/gradio (the Space uses version 3.46.0; I don't know whether more recent versions break it.)

And Hugging Face's diffusers:

https://github.com/huggingface/diffusers

HOWEVER, if you have the GPU to run all this, I don't recommend it: this Space is just a box with a prompt. If you want real control, like negative prompts, seeds, ControlNet, and a long list of other things we don't have available, I recommend running a UI like this one locally:

https://huggingface.co/spaces/Yntec/epiCPhotoGASM-Webui-CPU

There's a lot more you can do with these models than asking for a picture; the tool is called the Automatic1111 UI:

https://github.com/AUTOMATIC1111/stable-diffusion-webui

You could install it, open 6 different instances, and run 6 different models with all these features. I maintain these Spaces because I don't have the hardware to do it myself, so I have to use an online Webui-CPU that takes 45 minutes for a picture this Space does in seconds; but if you can do them in seconds, you can do a lot better than this.
