---
license: apache-2.0
task_categories:
- text-to-image
tags:
- images
size_categories:
- 1M<n<10M
configs:
- config_name: laion
data_files: laion/*.parquet
- config_name: OCR_Art
data_files: OCR_Art/*.parquet
- config_name: OCR_COCO_Text
data_files: OCR_COCO_Text/*.parquet
- config_name: OCR_LSVT
data_files: OCR_LSVT/*.parquet
- config_name: OCR_MTWI2018
data_files: OCR_MTWI2018/*.parquet
- config_name: OCR_ReCTS
data_files: OCR_ReCTS/*.parquet
- config_name: OCR_icdar2017rctw
data_files: OCR_icdar2017rctw/*.parquet
- config_name: OCR_mlt2019
data_files: OCR_mlt2019/*.parquet
- config_name: wukong_1of5
data_files: wukong_1of5/*.parquet
- config_name: wukong_2of5
data_files: wukong_2of5/*.parquet
- config_name: wukong_3of5
data_files: wukong_3of5/*.parquet
- config_name: wukong_4of5
data_files: wukong_4of5/*.parquet
- config_name: wukong_5of5
data_files: wukong_5of5/*.parquet
---

Dataset from [AnyText: Multilingual Visual Text Generation And Editing](https://arxiv.org/abs/2311.03054).
Dataset description from the AnyText team:
Public datasets for text generation tasks are relatively scarce, especially for non-Latin-script languages. To address this, we introduce a large-scale multilingual dataset called AnyWord-3M. Its images are sourced from Noah-Wukong, LAION-400M, and OCR recognition datasets such as ArT, COCO-Text, RCTW, LSVT, MLT, MTWI, and ReCTS. They cover a variety of text-containing scenes, including street views, book covers, advertisements, posters, and movie frames. Apart from the OCR datasets, whose existing annotations are used directly, all images are processed with the detection and recognition models of PP-OCR, and text descriptions are then generated with BLIP-2.

After rigorous filtering and meticulous post-processing, we obtained a total of 3,034,486 images, containing over 9 million lines of text and more than 20 million characters (Chinese characters or Latin-script words). Additionally, we randomly selected 1,000 images each from the Wukong and LAION subsets to create the evaluation set AnyText-benchmark, used to assess the accuracy and quality of Chinese and English text generation. The remaining images form the training set AnyWord-3M, which includes approximately 1.6 million Chinese images, 1.39 million English images, and 10,000 images in other languages, including Japanese, Korean, Arabic, Bengali, and Hindi. For detailed statistics and randomly selected example images, please refer to our paper, AnyText. (Note: the dataset released here is version V1.1.)
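For reference, the configs declared in the YAML header above can be enumerated programmatically. The following is a minimal sketch; the `CONFIGS` list and `data_files_glob` helper are our own names for illustration, not part of the dataset itself:

```python
# Config names as declared in this card's YAML header
# (the wukong subset is sharded into five configs).
CONFIGS = [
    "laion",
    "OCR_Art",
    "OCR_COCO_Text",
    "OCR_LSVT",
    "OCR_MTWI2018",
    "OCR_ReCTS",
    "OCR_icdar2017rctw",
    "OCR_mlt2019",
] + [f"wukong_{i}of5" for i in range(1, 6)]


def data_files_glob(config: str) -> str:
    """Return the parquet glob for a config, mirroring the `data_files` entries above."""
    if config not in CONFIGS:
        raise ValueError(f"unknown config: {config}")
    return f"{config}/*.parquet"
```

With the `datasets` library installed, a single config could then be loaded with something like `load_dataset("<repo_id>", name="laion", streaming=True)`, where `<repo_id>` is this dataset's Hub path; `streaming=True` avoids downloading every parquet shard up front.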