---
tags:
- text-to-image
- torch
inference: false
datasets:
- laion/laion_100m_vqgan_f8
---
This model is being collaboratively trained as part of the NeurIPS 2021 demonstration "Training Transformers Together".

## Model Description

We train a model similar to OpenAI's DALL-E: a Transformer that generates images from text descriptions. Training happens collaboratively, with volunteers from all over the Internet contributing using whatever hardware is available to them. We use LAION-400M, the world's largest openly available image-text-pair dataset, with 400 million samples. Our model is based on the dalle-pytorch implementation by Phil Wang, with a few tweaks that make it communication-efficient.
## Training

You can check our dashboard to see what is happening during the collaborative training: loss over time, the number of active sessions over time, each participant's contribution, the leaderboard, and more.
## How to Use

This section will be updated soon.
## Limitations

This model is still being trained, so its generative capabilities will continue to evolve as training progresses!