---
tags:
- text-to-image
- torch
datasets:
- laion/laion_100m_vqgan_f8
---

This model is collaboratively trained as part of the NeurIPS 2021 demonstration ["Training Transformers Together"](https://training-transformers-together.github.io/).

# Model Description

We train a model similar to [OpenAI DALL-E](https://openai.com/blog/dall-e/), a Transformer model that generates images from text descriptions. Training happens collaboratively: volunteers from all over the Internet contribute to the training using whatever hardware they have available. We use [LAION-400M](https://laion.ai/laion-400-open-dataset/), the world's largest openly available image-text-pair dataset, with 400 million samples. Our model is based on the [dalle-pytorch](https://github.com/lucidrains/DALLE-pytorch) implementation by [Phil Wang](https://github.com/lucidrains), with a few tweaks that make it communication-efficient.
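
To make the setup concrete, here is a minimal sketch of how a DALL-E-style model is assembled and trained with dalle-pytorch. Every hyperparameter below is an illustrative placeholder, not the configuration used in the collaborative run:

```python
# Illustrative sketch with dalle-pytorch; the hyperparameters are
# placeholders, NOT the configuration of the collaborative run.
import torch
from dalle_pytorch import DALLE, VQGanVAE

# A pretrained VQGAN compresses each 256x256 image into a short grid of
# discrete tokens, so the Transformer only models text + image tokens.
vae = VQGanVAE()  # downloads a default pretrained VQGAN checkpoint

dalle = DALLE(
    dim=512,                # hidden size (placeholder)
    vae=vae,
    num_text_tokens=10000,  # text vocabulary size (placeholder)
    text_seq_len=256,       # caption length in tokens
    depth=12,               # number of Transformer layers (placeholder)
    heads=16,               # attention heads (placeholder)
)

# One training step: predict the image tokens from the caption tokens.
text = torch.randint(0, 10000, (4, 256))  # a batch of tokenized captions
images = torch.randn(4, 3, 256, 256)      # the matching images
loss = dalle(text, images, return_loss=True)
loss.backward()
```

The `f8` in the dataset name refers to the VQGAN's spatial downsampling factor; the dataset stores images pre-encoded into discrete VQGAN codes, presumably so that participants do not have to run the image encoder themselves during training.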

# Training

You can check our [dashboard](https://huggingface.co/spaces/training-transformers-together/Dashboard) to see what is happening during the collaborative training (loss over time, number of active sessions, contribution of each participant, the leaderboard, and more).

# How to Use

This section will be updated soon.
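
In the meantime, here is a hypothetical sketch of what loading a published checkpoint could look like, reusing the `dalle` object from the sketch above. The repo id and file name are assumptions and may change once this section is filled in:

```python
# Hypothetical loading sketch: the repo id and file name below are
# assumptions, since the final checkpoint layout has not been published.
import torch
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="training-transformers-together/dalle-demo",  # hypothetical repo id
    filename="pytorch_model.bin",                         # hypothetical file name
)
dalle.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))

# Sample images for one caption; random ids stand in for a real tokenizer.
text = torch.randint(0, 10000, (1, 256))
images = dalle.generate_images(text)  # tensor of shape (1, 3, 256, 256)
```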

# Limitations

This model is still being trained, so its generative capabilities will keep evolving as the collaborative run progresses!