---
annotations_creators:
- crowdsourced
license: other
pretty_name: DocLayNet base
size_categories:
- 0K<n<1K
tags:
- layout-segmentation
- COCO
- document-understanding
- PDF
- IBM
task_categories:
- object-detection
- image-segmentation
task_ids:
- instance-segmentation
---

# Dataset Card for DocLayNet base

## About this card (01/27/2023)

### Property and license

All information on this page except this paragraph has been copied from the [Dataset Card for DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet).

DocLayNet is a dataset created by Deep Search (IBM Research) and published under the [CDLA-Permissive-1.0 license](https://huggingface.co/datasets/ds4sd/DocLayNet#licensing-information).

I do not claim any rights to the data taken from this dataset and published on this page.

### DocLayNet dataset

The [DocLayNet dataset](https://github.com/DS4SD/DocLayNet) (IBM) provides page-by-page layout segmentation ground truth using bounding boxes for 11 distinct class labels on 80863 unique pages from 6 document categories.

To date, the dataset can be downloaded through direct links or via the Hugging Face datasets library:
- direct links: [doclaynet_core.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip) (28 GiB), [doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip) (7.5 GiB)
- Hugging Face datasets library: [dataset DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet)

### Processing into a format facilitating its use by HF notebooks

These 2 options require downloading all the data (approximately 30 GiB), which takes time (about 45 min in Google Colab) and a large amount of disk space. This can limit experimentation for people with low resources.

Moreover, even when downloading via the HF datasets library, it is necessary to download the EXTRA zip separately ([doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip), 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code, because the bounding boxes of the texts do not necessarily correspond to those annotated: computing the percentage of area shared between an annotated bounding box and a text bounding box makes it possible to match them.

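The area-overlap matching described above can be sketched as follows. This is a minimal illustration, not the code used to build this dataset: `overlap_ratio` and `match_text_to_annotation` are hypothetical helper names, and boxes are assumed to be in COCO `[x, y, width, height]` format.

```python
def overlap_ratio(box_a, box_b):
    """Fraction of box_b's area covered by box_a (COCO [x, y, w, h] boxes)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Width and height of the intersection rectangle (0 if the boxes are disjoint)
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    area_b = bw * bh
    return (ix * iy) / area_b if area_b > 0 else 0.0


def match_text_to_annotation(annotated_box, text_boxes, threshold=0.5):
    """Indices of OCR text cells mostly contained in the annotated box."""
    return [i for i, tb in enumerate(text_boxes)
            if overlap_ratio(annotated_box, tb) >= threshold]
```

For example, a text cell `[10, 10, 20, 20]` lies entirely inside an annotated box `[0, 0, 100, 100]` and is matched to it, while a cell far outside is not.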
Finally, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format.

For all these reasons, I decided to process the DocLayNet dataset:
- into 3 datasets of different sizes:
  - [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small): about 1,000 document images (691 train, 64 val, 49 test)
  - [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base): about 10,000 document images, with associated texts
  - DocLayNet large: the full dataset (to be done)
- and in a format facilitating their use by HF notebooks.

*Note: the layout HF notebooks will greatly help participants of the IBM [ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents](https://ds4sd.github.io/icdar23-doclaynet/)!*

### Download & overview

```python
# !pip install -q datasets

from datasets import load_dataset

dataset_base = load_dataset("pierreguillou/DocLayNet-base")

# overview of dataset_base
print(dataset_base)
```

## Annotated bounding boxes

DocLayNet base makes it easy to display a document image with the annotated bounding boxes of paragraphs or lines.

Check the notebook [processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb]() in order to get the code.

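As a minimal sketch of the coordinate handling behind such a display: COCO boxes come as `[x, y, width, height]`, while drawing APIs such as PIL's `ImageDraw.rectangle` expect corner coordinates, so a small conversion helper (hypothetical name) is needed. The `bboxes_block` field name in the commented usage is an assumption, not a confirmed dataset column.

```python
def coco_to_corners(bbox):
    """Convert a COCO [x, y, width, height] box to [x0, y0, x1, y1] corners,
    the format expected by drawing APIs such as PIL's ImageDraw.rectangle."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]


# Hypothetical usage with PIL (field name "bboxes_block" is an assumption):
# from PIL import ImageDraw
# draw = ImageDraw.Draw(page_image)
# for bbox in example["bboxes_block"]:
#     draw.rectangle(coco_to_corners(bbox), outline="red", width=2)
```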

### HF notebooks

- [notebooks LayoutLM](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLM) (Niels Rogge)
- [notebooks LayoutLMv2](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2) (Niels Rogge)
- [notebooks LayoutLMv3](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv3) (Niels Rogge)
- [notebooks LiLT](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLt) (Niels Rogge)
- [Document AI: Fine-tuning LiLT for document understanding using Hugging Face Transformers](https://github.com/philschmid/document-ai-transformers/blob/main/training/lilt_funsd.ipynb) ([post](https://www.philschmid.de/fine-tuning-lilt#3-fine-tune-and-evaluate-lilt) by Phil Schmid)

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Annotations](#annotations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

DocLayNet provides page-by-page layout segmentation ground truth using bounding boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:

1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold standard in layout segmentation through human recognition and interpretation of each page layout.
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals.
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, making it possible to estimate annotation uncertainty and an upper bound on achievable prediction accuracy with ML models.
5. *Pre-defined train-, test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class labels and avoid leakage of unique layout styles across the sets.

### Supported Tasks and Leaderboards

We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/.

## Dataset Structure

### Data Fields

DocLayNet provides four types of data assets:

1. PNG images of all pages, resized to square `1025 x 1025px`
2. Bounding-box annotations in COCO format for each PNG image
3. Extra: Single-page PDF files matching each PNG image
4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content

COCO image records are defined as in this example:

```js
...
{
  "id": 1,
  "width": 1025,
  "height": 1025,
  "file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",

  // Custom fields:
  "doc_category": "financial_reports", // high-level document category
  "collection": "ann_reports_00_04_fancy", // sub-collection name
  "doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
  "page_no": 9, // page number in original document
  "precedence": 0, // annotation order, non-zero in case of redundant double- or triple-annotation
},
...
```

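As an illustration, such records can be grouped by their custom `doc_category` field with plain Python. The records below are made-up examples for the sketch, not actual dataset entries.

```python
from collections import defaultdict

# Illustrative COCO image records with the custom doc_category field
records = [
    {"id": 1, "file_name": "a.png", "doc_category": "financial_reports"},
    {"id": 2, "file_name": "b.png", "doc_category": "patents"},
    {"id": 3, "file_name": "c.png", "doc_category": "financial_reports"},
]

# Group page images by document category
by_category = defaultdict(list)
for rec in records:
    by_category[rec["doc_category"]].append(rec["file_name"])

# by_category["financial_reports"] → ["a.png", "c.png"]
```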
The `doc_category` field uses one of the following constants:

```
financial_reports,
scientific_articles,
laws_and_regulations,
government_tenders,
manuals,
patents
```


### Data Splits

The dataset provides three splits:
- `train`
- `val`
- `test`

## Dataset Creation

### Annotations

#### Annotation process

The labeling guidelines used for training the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).


#### Who are the annotators?

Annotations are crowdsourced.


## Additional Information

### Dataset Curators

The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).

Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)

### Licensing Information

License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)


### Citation Information


```bib
@article{doclaynet2022,
  title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
  doi = {10.1145/3534678.3539043},
  url = {https://doi.org/10.1145/3534678.3539043},
  author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
  year = {2022},
  isbn = {9781450393850},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages = {3743–3751},
  numpages = {9},
  location = {Washington DC, USA},
  series = {KDD '22}
}
```

### Contributions

Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm) and [@cau-git](https://github.com/cau-git) for adding this dataset.