---
|
license: apache-2.0 |
|
language: |
|
- en |
|
pipeline_tag: text-to-image |
|
tags: |
|
- Image Generation

- Text-to-Image
|
--- |
|
|
|
# EfficientCLIP-GAN: High-Speed Image Generation with Compact CLIP-GAN Architecture |
|
|
|
<p align="center"> |
|
<img src="Logo.png" width="500px"/> |
|
</p> |
|
|
|
# A high-quality, fast, and efficient text-to-image synthesis model |
|
|
|
|
|
<p align="center"> |
|
<b>Generated Images</b>
|
</p> |
|
<p align="center"> |
|
<img src="Samples.png"/> |
|
</p> |
|
|
|
|
|
## Requirements |
|
- Python 3.9

- PyTorch 1.9

- At least 1x Tesla V100 32 GB GPU (for training)

- CPU only (for inference)
|
|
|
|
|
**EfficientCLIP-GAN is a compact, fast, and efficient generative model that can produce multiple images in one second, even on a CPU, in contrast to diffusion models.**
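
A rough sketch of what CPU inference could look like is given below. The checkpoint file name, the CLIP ViT-B/32 text encoder used for conditioning, the latent dimension, and the `generator(noise, text_embedding)` call signature are all assumptions made for illustration; they are not confirmed by this card, so refer to the GitHub repository linked in the note below for the actual API.

```python
# Hypothetical CPU-inference sketch: checkpoint name, latent size, and the
# generator's call signature are assumptions, not the repository's confirmed API.
import torch
import clip  # OpenAI CLIP (pip install git+https://github.com/openai/CLIP.git)
from torchvision.utils import save_image

device = torch.device("cpu")  # inference needs only a CPU

# Load the generator checkpoint (file name assumed; a full nn.Module is assumed saved).
generator = torch.load("EfficientCLIP-GAN.pth", map_location=device)
generator.eval()

# Encode the text prompt with CLIP's frozen text encoder.
clip_model, _ = clip.load("ViT-B/32", device=device)
tokens = clip.tokenize(["a small bird with a bright red head"]).to(device)

with torch.no_grad():
    text_emb = clip_model.encode_text(tokens).float()
    noise = torch.randn(1, 100)              # latent dimension assumed to be 100
    fake_image = generator(noise, text_emb)  # call signature assumed

save_image(fake_image, "sample.png", normalize=True)
```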
|
|
|
|
|
|
|
## Note
|
***For more details on inference and training, visit the [GitHub page](https://github.com/VinayHajare/EfficientCLIP-GAN).***