Any way to run this locally?

#64
by Misto - opened

Is there any way to run this locally on a private host, to avoid clogging up the server with requests? And possibly to help host the project?

https://github.com/borisdayma/dalle-mini

In the readme, there's a link to a repo (the one by sahar) that runs it locally or on Colab for you.

I could easily copy the code from here: https://github.com/borisdayma/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb and make it run on a Linux computer, but I don't have a GPU or enough memory to run it properly. It took about ten minutes to generate a single image on my laptop (using DALL·E Mini; DALL·E Mega would not run). I suppose it would be easy to make it work on a cloud computing service, but I haven't been able to try it yet.

> https://github.com/borisdayma/dalle-mini
>
> In the readme, there's a link to a repo (the one by sahar) that runs it locally or on Colab for you.

I've used that one for a bit and it's fine, but I'm not sure it uses the same model; its generations seem a little less advanced and more abstract, like something from Hypnogram.

> I've used that one for a bit and it's fine, but I'm not sure it uses the same model; its generations seem a little less advanced and more abstract, like something from Hypnogram.

Sahar's repo uses DALL·E Mini by default, but it can be switched to DALL·E Mega by uncommenting a line of code. See here: https://github.com/saharmor/dalle-playground#using-dall-e-mega

I think the DALL·E Mini demo here on Hugging Face in fact uses the DALL·E Mega model (which would explain the difference in quality and abstractness you mention). That is a bit confusing.

You can run the inference notebook locally. You will need an Nvidia GPU with at least 12 GB of memory, with CUDA and cuDNN installed. I'd recommend running it in a Docker container. On a 3060, generating 9 images takes ~75 seconds. https://colab.research.google.com/github/borisdayma/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb
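For reference, the core of that notebook condenses to roughly the following script (a sketch based on the notebook's own code; the model references and the default generation parameters are the notebook's at the time of writing and may change):

```python
# Condensed from tools/inference/inference_pipeline.ipynb.
import random
from functools import partial

import jax
import jax.numpy as jnp
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard_prng_key
from PIL import Image

from dalle_mini import DalleBart, DalleBartProcessor
from vqgan_jax.modeling_flax_vqgan import VQModel

# Model references used by the notebook (a wandb artifact and an HF Hub repo)
DALLE_MODEL = "dalle-mini/dalle-mini/mega-1-fp16:latest"
VQGAN_REPO = "dalle-mini/vqgan_imagenet_f16_16384"

model, params = DalleBart.from_pretrained(DALLE_MODEL, dtype=jnp.float16, _do_init=False)
vqgan, vqgan_params = VQModel.from_pretrained(VQGAN_REPO, _do_init=False)
processor = DalleBartProcessor.from_pretrained(DALLE_MODEL)

# Replicate the parameters across local devices for pmap
params = replicate(params)
vqgan_params = replicate(vqgan_params)

@partial(jax.pmap, axis_name="batch", static_broadcasted_argnums=(3, 4, 5, 6))
def p_generate(tokenized, key, params, top_k, top_p, temperature, condition_scale):
    return model.generate(**tokenized, prng_key=key, params=params, top_k=top_k,
                          top_p=top_p, temperature=temperature,
                          condition_scale=condition_scale)

@partial(jax.pmap, axis_name="batch")
def p_decode(indices, params):
    return vqgan.decode_code(indices, params=params)

prompts = ["sunset over a lake in the mountains"]
tokenized = replicate(processor(prompts))
key = jax.random.PRNGKey(random.randint(0, 2**32 - 1))

# Generate image tokens, then decode them to pixels with the VQGAN
encoded = p_generate(tokenized, shard_prng_key(key), params, None, None, None, 10.0)
encoded = encoded.sequences[..., 1:]  # drop the BOS token
decoded = p_decode(encoded, vqgan_params).clip(0.0, 1.0).reshape((-1, 256, 256, 3))

for i, img in enumerate(decoded):
    Image.fromarray(np.asarray(img * 255, dtype=np.uint8)).save(f"out_{i}.png")
```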

Nope, it does not work locally, not without an internet connection. But it could...

Yep, I saw these links:

https://github.com/borisdayma/dalle-mini
https://colab.research.google.com/github/borisdayma/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb

Try this yourself: (1) set up the Docker image, (2) disconnect from the internet, (3) launch the Docker image. You will see that it will not work offline. Seriously, if you think it is so easy, try it. It does not work.

Here is how it works (if somebody were to follow your instructions):

  • first you build a Docker image,
  • then (using "docker run") you launch a container from that image,
  • then you launch a Jupyter notebook server inside that container,
  • then, in the Jupyter notebook, you run some code that downloads some huge files -- the so-called "model",
  • then you can generate an image.

But if you shut down the Jupyter notebook (which is a web server), those huge files are gone!

If you create another Docker image from the running container (using "docker commit"), that image is no good either.

If you then launch a container from it and try to use it, the Jupyter notebook will try to connect to the internet and download those model files again. If you do not have an internet connection, this will not work.

Obviously, there should be a way to download those giant model files once and leave them in a directory. Perhaps this pre-downloading step should not live in the Jupyter notebook, but in a separate script that the user could customize to download various models.
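For example, something along these lines (a sketch only: the artifact and repo names are taken from the inference notebook, the target directories are arbitrary choices, wandb may ask for a free API key even for public artifacts, and snapshot_download's local_dir argument needs a reasonably recent huggingface_hub):

```python
# download_models.py -- sketch of a one-time pre-download script.
import wandb
from huggingface_hub import snapshot_download

# DALL·E Mega weights are published as a wandb artifact
# (the notebook lists "dalle-mini/dalle-mini/mini-1:v0" for DALL·E Mini)
api = wandb.Api()
artifact = api.artifact("dalle-mini/dalle-mini/mega-1-fp16:latest")
artifact.download(root="./models/dalle-mega")

# The VQGAN decoder lives on the Hugging Face Hub
snapshot_download(
    repo_id="dalle-mini/vqgan_imagenet_f16_16384",
    local_dir="./models/vqgan",
)
```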

In general, this Jupyter notebook is an extra tool that makes things confusing for everybody. Maybe you could provide some instructions on how to do things without it.

Please provide instructions that would allow one to:

  1. set up the Docker image,
  2. disconnect from the internet,
  3. launch the Docker image and generate images without an internet connection.
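Concretely, once the weights are on disk, step 3 should reduce to something like this (an untested sketch: the two environment variables are real, but whether these particular loaders resolve local paths cleanly while fully offline is an assumption, and the paths refer to the hypothetical download script above):

```python
# generate_offline.py -- sketch: point the loaders at local directories
# instead of remote references, and forbid network access.
import os
os.environ["HF_HUB_OFFLINE"] = "1"    # Hugging Face Hub: no network calls
os.environ["WANDB_MODE"] = "offline"  # wandb: no network calls

import jax.numpy as jnp
from dalle_mini import DalleBart, DalleBartProcessor
from vqgan_jax.modeling_flax_vqgan import VQModel

# Paths created by the hypothetical download_models.py above
model, params = DalleBart.from_pretrained(
    "./models/dalle-mega", dtype=jnp.float16, _do_init=False
)
processor = DalleBartProcessor.from_pretrained("./models/dalle-mega")
vqgan, vqgan_params = VQModel.from_pretrained("./models/vqgan", _do_init=False)
# ...then tokenize, generate, and decode exactly as in the condensed script above.
```

With Docker, the ./models directory can then be bind-mounted into the container (docker run -v) or baked into the image, so the files survive container restarts.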
