---
tags:
- text-recognition
- dataset
- text-detection
- scene-text
- scene-text-recognition
- scene-text-detection
- text-detection-recognition
- icdar
- total-text
- curve-text
task_categories:
- text-retrieval
- text-classification
language:
- en
- zh
size_categories:
- 10K<n<100K
---
# TextOCR Dataset
## Version 0.1
### Training Set
- **Word Annotations:** 714,770 (272MB)
- **Images:** 21,778 (6.6GB)
### Validation Set
- **Word Annotations:** 107,802 (39MB)
- **Images:** 3,124
### Test Set
- **Metadata:** 1MB
- **Images:** 3,232 (926MB)
## General Information
- **License:** Data is available under CC BY 4.0 license.
- **Important Note:** Numbers reported in papers should be computed on the v0.1 test set.
## Images
- Training and validation set images are sourced from the OpenImages train set, while test set images come from the OpenImages test set.
- Validation set images are included in the zip of training set images.
- **Note:** Some images in OpenImages are rotated; please check the Rotation field in the Image IDs files for train and test.
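Since a rotated image silently misaligns its annotations, it can help to normalize orientation at load time. Below is a minimal sketch; the CSV file name, the `ImageID`/`Rotation` column names, and the rotation sign convention are assumptions to verify against the OpenImages documentation for your download.

```python
# Sketch: undo OpenImages rotation before using an image.
# Assumes the OpenImages "Image IDs" CSV with a Rotation column
# (0/90/180/270 or empty); verify file name and sign convention.
import pandas as pd
from PIL import Image

meta = pd.read_csv("train-images-boxable-with-rotation.csv")  # assumed file name
rotation = dict(zip(meta["ImageID"], meta["Rotation"]))

def load_upright(image_id, root="train"):
    img = Image.open(f"{root}/{image_id}.jpg")
    rot = rotation.get(image_id)
    if pd.notna(rot) and rot:
        # PIL rotates counter-clockwise; flip the sign if your copy of
        # the metadata defines Rotation in the opposite direction.
        img = img.rotate(int(rot), expand=True)
    return img
```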
## Dataset Format
The JSON format mostly follows COCO-Text v2, except that the "mask" field in "anns" is named "points" for the polygon annotation.
### Details
- **Points:** A list of 2D coordinates like `[x1, y1, x2, y2, ...]`. Note that (x1, y1) is always the top-left corner of the text (in its own orientation), and the order of the points is clockwise.
- **BBox:** Contains a horizontal box converted from "points" for convenience, and "area" is computed based on the width and height of the "bbox".
- **Annotation:** In cases where the text is illegible or not in English, the polygon is annotated normally but the word is annotated as a single "." symbol. Annotations are case-sensitive and can include punctuation.
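As a concrete illustration of how "points", "bbox", and "area" relate, here is a small sketch. The annotation file name is a placeholder for whichever split you downloaded, and whether "bbox" stores corner coordinates (as in the JSON example below) or COCO-style `[x, y, w, h]` is worth verifying against the actual file.

```python
# Sketch: relate "points" to "bbox"/"area" and filter out "." placeholders.
import json

with open("TextOCR_0.1_train.json") as f:  # placeholder file name
    data = json.load(f)

ann = next(iter(data["anns"].values()))
xs, ys = ann["points"][0::2], ann["points"][1::2]

# Horizontal box around the polygon; "area" is its width * height.
x0, y0, x1, y1 = min(xs), min(ys), max(xs), max(ys)
print("bbox in file:", ann["bbox"], "| recomputed corners:", [x0, y0, x1, y1])
print("area in file:", ann["area"], "| recomputed:", (x1 - x0) * (y1 - y0))

# Words marked "." are illegible or non-English; they are usually
# filtered out for recognition training.
legible = {k: a for k, a in data["anns"].items() if a["utf8_string"] != "."}
```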
## Annotation Details
- Annotators were instructed to draw exactly 4 points (quadrilaterals) whenever possible, and only draw more than 4 points when necessary (for cases like curved text).
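A quick way to see this in practice (reusing `data` from the sketch above) is to histogram the vertex counts; 4-point quadrilaterals should dominate.

```python
# Sketch: count polygon vertices per annotation (reuses `data` from above).
from collections import Counter

vertex_counts = Counter(len(a["points"]) // 2 for a in data["anns"].values())
print(vertex_counts.most_common())  # expect 4 to be by far the most common
```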
## Relationship with TextVQA/TextCaps
- The image IDs in TextOCR match the IDs in TextVQA.
- The train/val/test splits are the same as TextVQA/TextCaps. However, for privacy reasons, we removed 274 images from TextVQA while creating TextOCR.
## TextOCR JSON Files Example
```json
{
  "imgs": {
    "OpenImages_ImageID_1": {
      "id": "OpenImages_ImageID_1",
      "width": "INT, width of the image",
      "height": "INT, height of the image",
      "set": "STR, split: train|val|test",
      "filename": "train|test/OpenImages_ImageID_1.jpg"
    },
    "OpenImages_ImageID_2": {
      "..."
    }
  },
  "anns": {
    "OpenImages_ImageID_1_1": {
      "id": "STR, OpenImages_ImageID_1_1, the nth annotation for an image",
      "image_id": "OpenImages_ImageID_1",
      "bbox": [
        "FLOAT x1",
        "FLOAT y1",
        "FLOAT x2",
        "FLOAT y2"
      ],
      "points": [
        "FLOAT x1",
        "FLOAT y1",
        "FLOAT x2",
        "FLOAT y2",
        "...",
        "FLOAT xN",
        "FLOAT yN"
      ],
      "utf8_string": "STR, text for this annotation",
      "area": "FLOAT, area of this box"
    },
    "OpenImages_ImageID_1_2": {
      "..."
    }
  },
  "img2Anns": {
    "OpenImages_ImageID_1": [
      "OpenImages_ImageID_1_1",
      "OpenImages_ImageID_1_2",
      "..."
    ],
    "OpenImages_ImageID_N": [
      "..."
    ]
  }
}
```
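For orientation, here is a minimal sketch of walking this structure (reusing `data` from the earlier sketch); "img2Anns" maps each image ID to the IDs of its annotations.

```python
# Sketch: iterate images and their word annotations via "img2Anns".
for image_id, ann_ids in data["img2Anns"].items():
    img_info = data["imgs"][image_id]
    words = [data["anns"][ann_id]["utf8_string"] for ann_id in ann_ids]
    print(img_info["filename"], img_info["width"], img_info["height"], words[:5])
    break  # first image only, for illustration
```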