---
title: Dalle Mini
emoji: 🎨
colorFrom: red
colorTo: blue
sdk: gradio
app_file: app/app_gradio.py
pinned: false
---

# DALL-E Mini

_Generate images from a text prompt_

TODO: add some cool examples

## Create my own images with the demo → Coming soon

## How does it work?

Refer to [our report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA?accessToken=2ua7j8ebc810fuxyv49wbipmq3fb2e78yq3rvs5dy4wew07wwm2csdo8zcuyr14e).

## Development

This section is for adventurous people who want to look into the code.

### Dependencies Installation

The root folder and its associated `requirements.txt` are only for the app.

You will find the necessary requirements in each sub-section.

You should create a new Python virtual environment and install the project dependencies inside it. You need the `-f` (`--find-links`) option so that `pip` can find the appropriate `libtpu` package required for TPU hardware.

Adapt the installation to your own hardware and follow the installation instructions of each library.

```
$ pip install -r requirements.txt -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
```

If you use `conda`, you can create the virtual env and install everything using: `conda env update -f environments.yaml`
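
After installation, a quick way to confirm that JAX picked up the TPU (or whatever accelerator you installed for) is to list the visible devices. This is just a sanity check, not part of the official setup:

```python
# Quick sanity check that JAX was installed with the right backend.
import jax

print(jax.devices())  # on a TPU VM this should list TpuDevice entries
```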

### Training of VQGAN

The VQGAN was trained using [taming-transformers](https://github.com/CompVis/taming-transformers).

We recommend using the latest version available.

### Conversion of VQGAN to JAX

Use [patil-suraj/vqgan-jax](https://github.com/patil-suraj/vqgan-jax).
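
As a rough illustration of what the converted model looks like on the JAX side, the sketch below loads a converted checkpoint with the `VQModel` class from that repository and decodes a dummy batch of image codes. The hub id and shape comments are assumptions; adapt them to your own checkpoint.

```python
# Minimal sketch, assuming the `vqgan_jax` package from patil-suraj/vqgan-jax
# is installed and a converted checkpoint is available under the example
# hub id below (adjust to your own checkpoint).
import jax.numpy as jnp
from vqgan_jax.modeling_flax_vqgan import VQModel

vqgan = VQModel.from_pretrained("flax-community/vqgan_f16_16384")  # example id

# Decode a dummy batch of discrete image codes back to pixels
# (256 codes per image, assuming a 16x16 latent grid in the f16 model).
codes = jnp.zeros((1, 256), dtype=jnp.int32)
images = vqgan.decode_code(codes)  # expected shape: (1, 256, 256, 3)
print(images.shape)
```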

### Training of Seq2Seq

Refer to the `seq2seq` folder (some parameters may have been hardcoded for convenience when training on our TPU VM).

You can also adjust the [sweep configuration file](https://docs.wandb.ai/guides/sweeps) if you need to perform a hyperparameter search.
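
If you prefer to drive a sweep from Python rather than from the YAML file, the following is a minimal sketch using the standard `wandb.sweep` / `wandb.agent` API. The metric and parameter names are placeholders, not the values from our configuration file.

```python
# Illustrative only: defining and launching a W&B sweep from Python.
# The metric and parameter names below are placeholders, not the values
# used in this repository's sweep configuration file.
import wandb

sweep_config = {
    "method": "random",  # or "grid" / "bayes"
    "metric": {"name": "eval/loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"values": [1e-4, 3e-4, 1e-3]},
        "warmup_steps": {"values": [500, 1000]},
    },
}

def train():
    # Placeholder: the real training entry point lives in the `seq2seq` folder.
    with wandb.init() as run:
        print(dict(run.config))

sweep_id = wandb.sweep(sweep_config, project="dalle-mini")
wandb.agent(sweep_id, function=train, count=4)
```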

### Inference

Refer to the demo notebooks.
TODO: add links
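
Until the notebook links are added, the sketch below outlines the general inference flow: the seq2seq model turns a text prompt into a sequence of discrete VQGAN codes, and the VQGAN decoder turns those codes back into pixels. `generate_image_codes` is a hypothetical placeholder for the seq2seq generation step, not an API from this repository.

```python
# High-level sketch of the inference flow, not the notebook code itself.
# `generate_image_codes` is a hypothetical stand-in for the seq2seq model's
# generation step (text prompt -> sequence of discrete VQGAN codes).
import jax.numpy as jnp
from vqgan_jax.modeling_flax_vqgan import VQModel  # assumes vqgan-jax is installed


def generate_image_codes(prompt: str) -> jnp.ndarray:
    """Hypothetical placeholder: run the trained seq2seq model on `prompt`
    and return a (1, 256) array of image token ids."""
    raise NotImplementedError


vqgan = VQModel.from_pretrained("flax-community/vqgan_f16_16384")  # example id
codes = generate_image_codes("an armchair in the shape of an avocado")
images = vqgan.decode_code(codes)  # decode the discrete codes back to pixels
```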

## Authors

- [Boris Dayma](https://github.com/borisdayma)
- [Suraj Patil](https://github.com/patil-suraj)
- [Pedro Cuenca](https://github.com/pcuenca)
- [Khalid Saifullah](https://github.com/khalidsaifullaah)
- [Tanishq Abraham](https://github.com/tmabraham)
- [Phúc Lê Khắc](https://github.com/lkhphuc)
- [Luke Melas](https://github.com/lukemelas)
- [Ritobrata Ghosh](https://github.com/ghosh-r)

## Acknowledgements

- 🤗 Hugging Face for organizing [the FLAX/JAX community week](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects)
- Google Cloud team for providing access to TPUs