---
tags:
- text-recognition
- dataset
- text-detection
- scene-text
- scene-text-recognition
- scene-text-detection
- text-detection-recognition
- icdar
- total-text
- curve-text
task_categories:
- text-retrieval
- text-classification
language:
- en
- zh
size_categories:
- 10K<n<100K
---

# TextOCR Dataset

## Version 0.1

### Training Set
- **Word Annotations:** 714,770 (272MB)
- **Images:** 21,778 (6.6GB)

### Validation Set
- **Word Annotations:** 107,802 (39MB)
- **Images:** 3,124 

### Test Set
- **Metadata:** 1MB
- **Images:** 3,232 (926MB)

## General Information
- **License:** Data is available under CC BY 4.0 license.
- **Important Note:** Numbers reported in papers should be computed on the v0.1 test set.

## Images
- Training and validation set images are sourced from the OpenImages train set, while test set images come from the OpenImages test set.
- Validation set images are included in the zip for the training set images.
- **Note:** Some images in OpenImages are rotated; please check the Rotation field in the Image IDs files for train and test.

## Dataset Format
The JSON format mostly follows COCO-Text v2, except the "mask" field in "anns" is named as "points" for the polygon annotation.

### Details
- **Points:** A list of 2D coordinates like `[x1, y1, x2, y2, ...]`. Note that (x1, y1) is always the top-left corner of the text (in its own orientation), and the order of the points is clockwise.
- **BBox:** Contains a horizontal box converted from "points" for convenience, and "area" is computed based on the width and height of the "bbox".
- **Annotation:** In cases when the text is illegible or not in English, the polygon is annotated normally but the word will be annotated as a single "." symbol. Annotations are case-sensitive and can include punctuation.
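
The "points"-to-"bbox" conversion described above can be sketched as follows. This helper is illustrative (not part of the dataset tooling); it produces a horizontal box in corner form, matching the `x1, y1, x2, y2` layout shown in the JSON example below, with "area" derived from the box's width and height:

```python
def points_to_bbox(points):
    """Convert a flat polygon [x1, y1, x2, y2, ..., xN, yN] into a
    horizontal bounding box [x_min, y_min, x_max, y_max] and its area.

    The polygon is clockwise and starts at the text's top-left corner
    (in the text's own orientation), per the annotation notes above.
    """
    xs = points[0::2]  # every even index is an x coordinate
    ys = points[1::2]  # every odd index is a y coordinate
    bbox = [min(xs), min(ys), max(xs), max(ys)]
    area = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
    return bbox, area
```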

## Annotation Details
- Annotators were instructed to draw exactly 4 points (quadrilaterals) whenever possible, and only draw more than 4 points when necessary (for cases like curved text).

## Relationship with TextVQA/TextCaps
- The image IDs in TextOCR match the IDs in TextVQA.
- The train/val/test splits are the same as TextVQA/TextCaps. However, for privacy reasons, 274 images present in TextVQA were removed while creating TextOCR.

## TextOCR JSON Files Example
```json
{
  "imgs": {
    "OpenImages_ImageID_1": {
      "id": "OpenImages_ImageID_1",
      "width": "INT, Width of the image",
      "height": "INT, Height of the image",
      "set": "Split train|val|test",
      "filename": "train|test/OpenImages_ImageID_1.jpg"
    },
    "OpenImages_ImageID_2": {
      "..."
    }
  },
  "anns": {
    "OpenImages_ImageID_1_1": {
      "id": "STR, OpenImages_ImageID_1_1, Specifies the nth annotation for an image",
      "image_id": "OpenImages_ImageID_1",
      "bbox": [
        "FLOAT x1",
        "FLOAT y1",
        "FLOAT x2",
        "FLOAT y2"
      ],
      "points": [
        "FLOAT x1",
        "FLOAT y1",
        "FLOAT x2",
        "FLOAT y2",
        "...",
        "FLOAT xN",
        "FLOAT yN"
      ],
      "utf8_string": "text for this annotation",
      "area": "FLOAT, area of this box"
    },
    "OpenImages_ImageID_1_2": {
      "..."
    }
  },
  "img2Anns": {
    "OpenImages_ImageID_1": [
      "OpenImages_ImageID_1_1",
      "OpenImages_ImageID_1_2",
      "..."
    ],
    "OpenImages_ImageID_N": [
      "..."
    ]
  }
}
```
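
A minimal loading sketch for this layout. The `load_textocr` helper is an assumption for illustration; the field names (`imgs`, `anns`, `img2Anns`, `utf8_string`) follow the example above. Illegible or non-English words are annotated as a single ".", so they can optionally be filtered out here:

```python
import json

def load_textocr(path, skip_illegible=False):
    """Yield (image record, list of annotation records) pairs from a
    TextOCR-style JSON file, joining images to annotations through the
    img2Anns index."""
    with open(path) as f:
        data = json.load(f)
    for image_id, ann_ids in data["img2Anns"].items():
        anns = [data["anns"][ann_id] for ann_id in ann_ids]
        if skip_illegible:
            # Illegible / non-English words carry "." as their transcription.
            anns = [a for a in anns if a["utf8_string"] != "."]
        yield data["imgs"][image_id], anns
```

Iterating over the pairs this yields is one way to sanity-check the per-split word counts against the statistics listed at the top of this card.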