---
license: apache-2.0
task_categories:
- text-to-image
tags:
- images
size_categories:
- 1M<n<10M
---
Dataset from [AnyText: Multilingual Visual Text Generation And Editing](https://arxiv.org/abs/2311.03054).
Dataset description from the AnyText team:
Currently, public datasets for visual text generation tasks are relatively scarce, especially for non-Latin scripts. To address this, we introduce a large-scale multilingual dataset called AnyWord-3M. The images in this dataset are sourced from Noah-Wukong, LAION-400M, and OCR recognition datasets such as ArT, COCO-Text, RCTW, LSVT, MLT, MTWI, and ReCTS. They cover a variety of text-bearing scenes, including street views, book covers, advertisements, posters, and movie frames.

Apart from the OCR datasets, whose annotations are used directly, all other images are processed with the detection and recognition models of PP-OCR, and text descriptions are then generated with BLIP-2. After applying rigorous filtering rules and meticulous post-processing, we obtained a total of 3,034,486 images, containing over 9 million lines of text and more than 20 million characters or Latin words.

Additionally, we randomly selected 1,000 images from the Wukong and LAION subsets to create the evaluation set AnyText-benchmark, used specifically to assess the accuracy and quality of Chinese and English text generation. The remaining images serve as the training set AnyWord-3M, which includes approximately 1.6 million Chinese images, 1.39 million English images, and 10,000 images in other languages, including Japanese, Korean, Arabic, Bengali, and Hindi. For detailed statistics and randomly selected example images, please refer to our paper, AnyText. (Note: this release is version V1.1.)
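For illustration, a minimal sketch of the annotation pipeline described above (PP-OCR for text detection and recognition, BLIP-2 for captioning) might look like the following. The model checkpoints, the `annotate` helper, and the output field names are assumptions chosen for demonstration, not the AnyText team's released code or the actual schema of this dataset.

```python
# Hypothetical sketch of the pipeline described above. Checkpoints and the
# output record layout are illustrative assumptions, not the released code.
import torch
from PIL import Image
from paddleocr import PaddleOCR  # pip install paddleocr
from transformers import Blip2ForConditionalGeneration, Blip2Processor

# PP-OCR: text detection + recognition, with angle classification enabled.
ocr = PaddleOCR(use_angle_cls=True, lang="ch")

# BLIP-2: image captioning.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
captioner = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

def annotate(image_path: str) -> dict:
    """Run OCR and captioning on one image, returning a record in the
    style of the per-image annotations described in this card."""
    image = Image.open(image_path).convert("RGB")
    # PaddleOCR returns, per image, a list of (polygon, (text, confidence)).
    lines = [
        {"polygon": box, "text": text, "score": float(score)}
        for box, (text, score) in (ocr.ocr(image_path, cls=True)[0] or [])
    ]
    inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
    ids = captioner.generate(**inputs, max_new_tokens=30)
    caption = processor.decode(ids[0], skip_special_tokens=True)
    return {"img_path": image_path, "caption": caption, "texts": lines}
```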