---
license: apache-2.0
---

# CogView2

## Model description

**CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers**

- [Paper](https://arxiv.org/abs/2204.14217)
- [GitHub Repo](https://github.com/THUDM/CogView2)

### Abstract

The development of transformer-based text-to-image models is impeded by their slow generation and the complexity of high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel auto-regressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, the cross-modal general language model (CogLM), and finetune it for fast super-resolution. The new text-to-image system, CogView2, shows very competitive generation compared with the concurrent state-of-the-art DALL-E-2, and it naturally supports interactive text-guided editing of images.

## BibTeX entry and citation info

```bibtex
@article{ding2022cogview2,
  title={CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers},
  author={Ding, Ming and Zheng, Wendi and Hong, Wenyi and Tang, Jie},
  journal={arXiv preprint arXiv:2204.14217},
  year={2022}
}
```
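
### Conceptual sketch

The sketch below is only a toy illustration of the two-stage idea described in the abstract: a coarse token map is generated auto-regressively, then upsampled and refined in a parallel pass. All class and function names here (`ToyTokenTransformer`, `generate_low_res`, `local_parallel_upsample`) are hypothetical and do not come from the CogView2 codebase; see the GitHub repo for the actual implementation.

```python
# Toy sketch of hierarchical generation: sequential coarse stage, then a
# parallel refinement stage. Hypothetical names; NOT the CogView2 API.
import torch
import torch.nn as nn


class ToyTokenTransformer(nn.Module):
    """Stand-in transformer predicting logits over an image-token vocabulary."""

    def __init__(self, vocab_size=1024, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))


def generate_low_res(model, text_tokens, num_image_tokens=8 * 8):
    """Stage 1: sequential (auto-regressive) generation of a coarse token map."""
    seq = text_tokens.clone()
    for _ in range(num_image_tokens):
        logits = model(seq)[:, -1]                        # predict the next token
        next_tok = torch.multinomial(logits.softmax(-1), 1)
        seq = torch.cat([seq, next_tok], dim=1)
    return seq[:, text_tokens.size(1):]                   # keep image tokens only


def local_parallel_upsample(model, low_res_tokens, scale=2):
    """Stage 2: upsample the token map and re-predict all positions in one
    parallel pass (a crude analogue of local parallel auto-regressive decoding)."""
    b, n = low_res_tokens.shape
    side = int(n ** 0.5)
    grid = low_res_tokens.view(b, side, side)
    up = grid.repeat_interleave(scale, 1).repeat_interleave(scale, 2).view(b, -1)
    logits = model(up)                                     # refine all tokens at once
    return logits.argmax(-1)


if __name__ == "__main__":
    model = ToyTokenTransformer()
    text = torch.randint(0, 1024, (1, 16))                 # dummy "text" tokens
    coarse = generate_low_res(model, text)
    fine = local_parallel_upsample(model, coarse)
    print(coarse.shape, fine.shape)                        # (1, 64) (1, 256)
```

The point of the two stages is the speed/quality trade-off the paper highlights: only the small coarse map is decoded token by token, while the expensive high-resolution tokens are produced in parallel.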