How do I set it up to run on my computer?

#25
by Odair - opened

How do I set it up to run on my computer? Can anyone help me? I ran "pip install dalle-mini" from python, and it downloaded some files, but I could not figure out how to use it.

I'd also like to know.

I'm trying to figure this out right now. They have a GitHub repo for this project at https://github.com/borisdayma/dalle-mini, and it doesn't seem to run as a command-line tool, for example. It looks like you have to write some Python code to make it generate images from prompts.

Examples of the code you would write can be found in the Python Notebook (in the GitHub repo) tools/inference/inference_pipeline.ipynb.

Okay. I figured it out (no, it didn't take 5 hrs, I was doing other things). So you do indeed open this Interactive Python "Notebook", which is a combination of text and runnable code. You just go in order: read the paragraph, then run the code. You can do this in the cloud using Google Colab, but that's not preferable if you have a fast computer, because you'll probably end up running out of virtual RAM, and it might not be fast. It's just convenient.

So my recommendation is to clone the repo and then open the file I mentioned in my last comment with Visual Studio Code or something else that can open Jupyter Python Notebooks. Then go in order and run the code. It will begin to make sense, because at that point the only thing you have to change to generate new images is the line ' prompt = "sunset over a lake in the mountains" '. All of the images will be generated in the notebook for you to view and save.
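To give a feel for what "going in order" amounts to, here is a condensed sketch of the main steps the notebook walks through. The model names and overall flow follow the notebook, but treat the exact function signatures as approximate and defer to the notebook itself if anything differs; this also needs jax, flax, dalle-mini, and vqgan-jax installed, plus roughly 5 GB of model downloads the first time.

```python
# Condensed sketch of tools/inference/inference_pipeline.ipynb -- approximate,
# check the notebook cells for the authoritative versions of these calls.
import jax
import jax.numpy as jnp
from dalle_mini import DalleBart, DalleBartProcessor
from vqgan_jax.modeling_flax_vqgan import VQModel

DALLE_MODEL = "dalle-mini/dalle-mini/mega-1-fp16:latest"  # wandb artifact reference
VQGAN_REPO = "dalle-mini/vqgan_imagenet_f16_16384"

# Loading the model triggers the large wandb download on first run.
model, params = DalleBart.from_pretrained(DALLE_MODEL, dtype=jnp.float16, _do_init=False)
vqgan, vqgan_params = VQModel.from_pretrained(VQGAN_REPO, _do_init=False)
processor = DalleBartProcessor.from_pretrained(DALLE_MODEL)

# The only line you need to change between runs:
prompt = "sunset over a lake in the mountains"

tokenized = processor([prompt])
key = jax.random.PRNGKey(0)
encoded = model.generate(**tokenized, prng_key=key, params=params)
# ...then decode `encoded` with the VQGAN and display the images,
# as the later notebook cells do.
```

The later cells also batch several prompts and generate multiple candidates per prompt; the single-prompt version above is just the skeleton.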

Hope this helps, ask if you have any questions!

How large is the model on this thing?
I'm looking at this from the perspective of "I can name arbitrary pop culture characters from Tohro to Max Headroom and it knows how to create images good enough to potentially hold their own in a game of telephone", so I'm wondering how large a dataset would need to be to compress that much information into a single corpus, zero-shot.

I had a problem running:

"model, params = DalleBart.from_pretrained(DALLE_MODEL, revision=DALLE_COMMIT_ID, dtype=jnp.float16, _do_init=False)"

wandb will try to download some files:
"wandb: Downloading large artifact mega-1-fp16:latest, 4938.53MB. 7 files..."

But I don't have enough space on the current hard drive. How do I indicate another path for the files?
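One thing to try: wandb respects the WANDB_DIR and WANDB_CACHE_DIR environment variables, so you can point its run data and artifact downloads at another drive before the notebook touches wandb. The variable names are documented by wandb, but double-check them against your wandb version; the paths below are just placeholders.

```python
# Redirect wandb's working and cache directories to a drive with more space.
# WANDB_DIR holds run metadata; WANDB_CACHE_DIR is where artifact downloads
# (like the ~5 GB mega-1-fp16 model) land. Set these before importing
# wandb or dalle-mini so the download goes to the new location.
import os

os.environ["WANDB_DIR"] = "D:/wandb"               # placeholder path
os.environ["WANDB_CACHE_DIR"] = "D:/wandb/cache"   # placeholder path

print(os.environ["WANDB_CACHE_DIR"])
```

Set these in the very first notebook cell (or in your shell before launching Jupyter) so they take effect before any download starts.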

I tried my best, but with a mediocre knowledge of all things git/coding I didn't get very far. I think having it run locally, if at all possible, would be the best solution to the upwards of 90-100 second generation times from a prompt.

I did my best checking out Visual Studio Code and getting to the inference_pipeline.ipynb file, but failed. From what I could read, it seems like jaxlib is not supported on Windows to begin with, unless you follow some very vague (for me at least) directions from a community Windows build project for it that supposedly would work.

Considering the popularity of Dalle mini, I believe it'd be appreciated if someone could make a fairly beginner-friendly tutorial on how to set it up locally. If it's even possible (maybe the model it samples from cannot be downloaded, I just don't know honestly ^^; )

Beshap, you need to build jaxlib yourself, as they sadly don't offer precompiled copies for Windows, only Linux and Mac. And I had issues getting it to compile, and not enough technical knowledge to work out how to fix the problem.

There IS a repository that has prebuilt jaxlib (search "jax-windows-builder") for various python versions, but I couldn't get the CUDA+CUDNN ones to work, kept spitting out errors relating to jaxlib as I ran the cells. The CPU versions worked, but it took close to 10 minutes to generate a single image on my Ryzen 7 3700X.
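For anyone else trying that route, the install looks roughly like the following. The wheel index URL is given in the jax-windows-builder README; I'm leaving it as a placeholder rather than risk quoting it wrong, and the exact versions you can get depend on your Python and CUDA combination.

```shell
# Install a prebuilt Windows jaxlib wheel from the jax-windows-builder
# project, then a matching jax release. Replace <wheel-index-url> with
# the index URL from that repo's README for your Python/CUDA combo.
pip install jaxlib -f <wheel-index-url>
pip install jax
```

If the CUDA wheels error out like they did for me, the CPU-only wheels at least run, just very slowly.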

So yeah, if someone manages to work it out, let us know.

Odair changed discussion status to closed

You can run the inference notebook locally. You will need an Nvidia GPU with at least 12GB of memory, with CUDA and cuDNN installed. I'd recommend running in a docker container under Linux. On a 3060, generating 9 images takes ~75 seconds. https://colab.research.google.com/github/borisdayma/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb
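If it helps, a docker invocation along these lines is what I mean. The image name and mount paths are illustrative, not a specific recommendation; any CUDA-enabled image with JAX and Jupyter installed works, and you need the NVIDIA Container Toolkit for the `--gpus` flag.

```shell
# Run Jupyter inside a CUDA-enabled container with the cloned repo mounted.
# <cuda-jax-image> is a placeholder for whatever CUDA+JAX image you use.
docker run --gpus all -p 8888:8888 \
  -v "$PWD/dalle-mini:/workspace" \
  <cuda-jax-image> \
  jupyter notebook --ip 0.0.0.0 --allow-root /workspace/tools/inference
```

Then open the printed URL in your browser and run tools/inference/inference_pipeline.ipynb from there.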

No, it does not quite work without a connection to the internet.
In this Jupyter notebook, there are some lines that download the model from the internet. There are no instructions on how to download it into a local directory. If you restart the notebook, you have to connect to the internet and download this data again.

There is a similar question, and I wrote about this in here as well: https://huggingface.co/spaces/dalle-mini/dalle-mini/discussions/64

Please provide instructions that would allow you to:

  • set up the docker image with all libraries, the model, and all other necessary files downloaded,
  • disconnect from the internet,
  • launch the docker image and generate images without being connected to the internet.
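Until official instructions exist, the offline flow would presumably look something like the sketch below. `wandb artifact get` is a real wandb CLI command, but I haven't verified the exact artifact path or that `from_pretrained` accepts the local directory unchanged, so treat every name here as something to check.

```shell
# 1. While online: fetch the model artifact into a local directory.
#    Artifact path copied from the notebook's DALLE_MODEL; verify it.
wandb artifact get dalle-mini/dalle-mini/mega-1-fp16:latest \
  --root ./models/mega-1-fp16

# 2. Launch the container with ./models mounted, then in the notebook
#    point DALLE_MODEL at the local copy instead of the wandb reference:
#      DALLE_MODEL = "./models/mega-1-fp16"
# 3. Disconnect; loading should now read from disk instead of downloading.
```

If the local-path load doesn't work directly, the wandb cache directory itself could be baked into the docker image as a fallback.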
