[SUGGESTION] Offline DALL·E mini copy for local generation

#466
by weareblahs

Made this discussion due to "high traffic" warnings while visiting the Hugging Face app version of DALL·E mini. Is there an offline copy of it (with the model, if possible) so I can generate images locally instead of on the web servers? Or can I just download the code from "Files and Versions" and run it locally?

Also, I'm not sure how to use the Google Colab version of this model (https://github.com/borisdayma/dalle-mini).

Really need this.

It already exists. You can run the inference notebook locally: https://colab.research.google.com/github/borisdayma/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb. You'll need an NVIDIA GPU with at least 12 GB of memory, with CUDA and cuDNN installed; I'd recommend running it in a Docker container. On a 3060, generating 9 images takes ~75 seconds.
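If you'd rather run a plain script than the notebook, here's a minimal sketch of what its inference pipeline does, assuming the `dalle-mini` and `vqgan-jax` packages are installed along with a CUDA-enabled JAX build. The model IDs are the notebook's defaults; the exact keyword arguments to `generate` may differ between library versions, so treat this as a starting point rather than a drop-in script:

```python
# Minimal local DALL·E mini (Mega) inference, adapted from inference_pipeline.ipynb.
# Assumes: pip install dalle-mini vqgan-jax pillow, plus a CUDA-enabled jax/jaxlib.
import jax
import jax.numpy as jnp
import numpy as np
from PIL import Image
from dalle_mini import DalleBart, DalleBartProcessor
from vqgan_jax.modeling_flax_vqgan import VQModel

# Default checkpoints from the notebook (the Mega fp16 weights need ~12 GB of VRAM).
DALLE_MODEL = "dalle-mini/dalle-mini/mega-1-fp16:latest"
VQGAN_REPO = "dalle-mini/vqgan_imagenet_f16_16384"

# Text-to-image-token model plus its prompt processor.
model, params = DalleBart.from_pretrained(DALLE_MODEL, dtype=jnp.float16, _do_init=False)
processor = DalleBartProcessor.from_pretrained(DALLE_MODEL)

# VQGAN decoder that maps image tokens back to pixels.
vqgan, vqgan_params = VQModel.from_pretrained(VQGAN_REPO, _do_init=False)

prompt = "a watercolor painting of a lighthouse at sunset"  # example prompt
tokenized = processor([prompt])

# One call produces one image; the notebook loops this (with fresh PRNG keys) for 9.
key = jax.random.PRNGKey(0)
encoded = model.generate(
    **tokenized,
    prng_key=key,
    params=params,
    condition_scale=10.0,  # classifier-free guidance strength used in the notebook
)

# Drop the BOS token, decode to a 256x256x3 image in [0, 1], and save it.
decoded = vqgan.decode_code(encoded.sequences[..., 1:], params=vqgan_params)
pixels = np.asarray(decoded[0].clip(0.0, 1.0) * 255, dtype=np.uint8)
Image.fromarray(pixels).save("dalle_mini_output.png")
```

The notebook itself wraps `generate` and `decode_code` in `jax.pmap` to batch across devices; on a single GPU the plain calls above should be enough.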
