---
tags:
- text-recognition
- dataset
- text-detection
- scene-text
- scene-text-recognition
- scene-text-detection
- text-detection-recognition
- icdar
- total-text
- curve-text
task_categories:
- text-retrieval
- text-classification
language:
- en
- zh
size_categories:
- 10K<n<100K
---

# TextOCR Dataset

## Version 0.1

### Training Set
- **Word Annotations:** 714,770 (272MB)
- **Images:** 21,778 (6.6GB)

### Validation Set
- **Word Annotations:** 107,802 (39MB)
- **Images:** 3,124

### Test Set
- **Metadata:** 1MB
- **Images:** 3,232 (926MB)

## General Information
- **License:** Data is available under the CC BY 4.0 license.
- **Important Note:** Numbers in papers should be reported on the v0.1 test set.

## Images
- Training and validation set images are sourced from the OpenImages train set; test set images come from the OpenImages test set.
- Validation set images are included in the zip of training set images.
- **Note:** Some images in OpenImages are rotated; check the Rotation field in the Image IDs files for train and test.

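The rotation check above can be sketched as a small helper. This is an illustrative sketch, not official tooling: the column names (`ImageID`, `Rotation`) are assumed to follow the public OpenImages metadata CSVs and should be verified against your local copy of the Image IDs files.

```python
import csv
import io

def rotated_image_ids(csv_text):
    """Return IDs of images whose Rotation field is set and non-zero.

    Assumes OpenImages-style columns "ImageID" and "Rotation"; verify
    against the actual Image IDs files before relying on this.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return {
        row["ImageID"]
        for row in reader
        if row.get("Rotation") not in (None, "", "0", "0.0")
    }
```

Images flagged here should be rotated back before the `points` polygons are overlaid on them.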
## Dataset Format
The JSON format mostly follows COCO-Text v2, except that the "mask" field in "anns" is named "points" and holds the polygon annotation.

### Details
- **Points:** A list of 2D coordinates like `[x1, y1, x2, y2, ...]`. (x1, y1) is always the top-left corner of the text (in its own orientation), and the points are ordered clockwise.
- **BBox:** A horizontal box converted from "points" for convenience; "area" is computed from the width and height of the "bbox".
- **Annotation:** When the text is illegible or not in English, the polygon is annotated normally but the word is annotated as a single "." symbol. Annotations are case-sensitive and can include punctuation.

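The points-to-bbox conversion described above can be sketched as follows. This is a minimal illustration, not the official conversion code; the bbox is taken in the corner form `[x1, y1, x2, y2]` shown in the JSON example, so verify the convention against the actual files.

```python
def points_to_bbox(points):
    """Derive a horizontal bbox and its area from a flat polygon list.

    points: flat list [x1, y1, x2, y2, ...] of polygon vertices.
    Returns ([x1, y1, x2, y2], area), where area = width * height
    of the axis-aligned box enclosing the polygon.
    """
    xs, ys = points[0::2], points[1::2]
    x1, y1, x2, y2 = min(xs), min(ys), max(xs), max(ys)
    return [x1, y1, x2, y2], (x2 - x1) * (y2 - y1)
```

For example, the 4-point rectangle `[0, 0, 4, 0, 4, 2, 0, 2]` yields bbox `[0, 0, 4, 2]` and area `8`.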
## Annotation Details
- Annotators were instructed to draw exactly 4 points (quadrilaterals) whenever possible, and to draw more than 4 points only when necessary (e.g., for curved text).

## Relationship with TextVQA/TextCaps
- The image IDs in TextOCR match the IDs in TextVQA.
- The train/val/test splits are the same as TextVQA/TextCaps. However, for privacy reasons, 274 images present in TextVQA were removed while creating TextOCR.

## TextOCR JSON Files Example
```json
{
  "imgs": {
    "OpenImages_ImageID_1": {
      "id": "OpenImages_ImageID_1",
      "width": "INT, Width of the image",
      "height": "INT, Height of the image",
      "set": "Split train|val|test",
      "filename": "train|test/OpenImages_ImageID_1.jpg"
    },
    "OpenImages_ImageID_2": {
      "..."
    }
  },
  "anns": {
    "OpenImages_ImageID_1_1": {
      "id": "STR, OpenImages_ImageID_1_1, Specifies the nth annotation for an image",
      "image_id": "OpenImages_ImageID_1",
      "bbox": [
        "FLOAT x1",
        "FLOAT y1",
        "FLOAT x2",
        "FLOAT y2"
      ],
      "points": [
        "FLOAT x1",
        "FLOAT y1",
        "FLOAT x2",
        "FLOAT y2",
        "...",
        "FLOAT xN",
        "FLOAT yN"
      ],
      "utf8_string": "text for this annotation",
      "area": "FLOAT, area of this box"
    },
    "OpenImages_ImageID_1_2": {
      "..."
    }
  },
  "img2Anns": {
    "OpenImages_ImageID_1": [
      "OpenImages_ImageID_1_1",
      "OpenImages_ImageID_1_2"
    ],
    "OpenImages_ImageID_N": [
      "..."
    ]
  }
}
```
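Given this schema, iterating annotations per image via `img2Anns` might look like the sketch below. This is an illustrative helper, not part of the dataset's tooling; it takes the already-parsed JSON dict and skips words annotated as a single "." (the illegible / non-English marker described above). The file name in the usage comment is hypothetical.

```python
import json

def iter_legible_words(data):
    """Yield (image_id, word, points) for every legible annotation.

    data: the parsed TextOCR JSON dict, with "img2Anns" mapping each
    image ID to its annotation IDs and "anns" holding the annotations.
    """
    for image_id, ann_ids in data["img2Anns"].items():
        for ann_id in ann_ids:
            ann = data["anns"][ann_id]
            if ann["utf8_string"] == ".":  # illegible / non-English marker
                continue
            yield image_id, ann["utf8_string"], ann["points"]

# Typical use (file name is hypothetical; adjust to your download):
# with open("TextOCR_0.1_train.json") as f:
#     data = json.load(f)
# words = list(iter_legible_words(data))
```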