Dataset Card for ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents

Dataset Summary

This is the official competition dataset for the ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents. You are invited to advance research on accurately segmenting the layout of a broad range of document styles and domains. To achieve this, we challenge you to develop a model that can correctly identify and segment the layout components in document pages as bounding boxes on the competition dataset we provide.

For more information see https://ds4sd.github.io/icdar23-doclaynet/.

Training Resources

In our recently published DocLayNet dataset, which contains 80k+ human-annotated document pages with diverse layouts, we define 11 classes for layout components (paragraphs, headings, tables, figures, lists, mathematical formulas, and several more). We encourage you to use this dataset for training and internal evaluation of your solution. Further, you may consider any other publicly available document layout dataset for training (e.g. PubLayNet, DocBank).

Supported Tasks and Leaderboards

This is the official dataset of the ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents. The supported task is document layout segmentation, framed as object detection of layout bounding boxes; submissions are ranked on the leaderboard of our EvalAI challenge. For more information see https://ds4sd.github.io/icdar23-doclaynet/.

Evaluation Metric

Your submissions to our EvalAI challenge will be evaluated using the Mean Average Precision (mAP) @ Intersection-over-Union (IoU) [0.50:0.95] metric, as used in the COCO object detection competition. In detail, we calculate the average precision for a sequence of IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05. This metric is computed for every document category in the competition dataset. The final score is the mean of the average precisions across all categories.
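
For reference, this metric can be reproduced locally with the pycocotools package; a minimal sketch, assuming placeholder file names for the ground truth and your predictions:

    # Local sanity check of mAP @ IoU[0.50:0.95] with pycocotools.
    # "coco.json" and "predictions.json" are placeholder file names.
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    coco_gt = COCO("coco.json")                    # ground-truth annotations
    coco_dt = coco_gt.loadRes("predictions.json")  # detections in COCO results format

    evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
    evaluator.evaluate()
    evaluator.accumulate()
    evaluator.summarize()  # first summary line is AP @ IoU=0.50:0.95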

Submission

We ask you to upload a JSON file in COCO results format here, with complete layout bounding-boxes for each page sample. The given image_ids must correspond to the ones we publish with the competition dataset's coco.json. For each submission you make, the computed mAP will be reported for each category as well as combined. The leaderboard is ranked by the overall mAP.
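
As an illustration, a results file of this form can be assembled with a few lines of Python; all values below are placeholders:

    # Minimal sketch of writing a submission in COCO results format.
    # The image_id values must match the ids published in the
    # competition dataset's coco.json; the entry below is a placeholder.
    import json

    results = [
        {
            "image_id": 1,                        # id from the competition coco.json
            "category_id": 1,                     # predicted layout class
            "bbox": [100.0, 200.0, 300.0, 50.0],  # [x, y, width, height] in pixels
            "score": 0.98,                        # detection confidence
        },
    ]

    with open("predictions.json", "w") as f:
        json.dump(results, f)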

Dataset Structure

Data Fields

DocLayNet provides four types of data assets:

  1. PNG images of all pages, resized to square 1025 x 1025 px
  2. Bounding-box annotations in COCO format for each PNG image (annotations will be released at the end of the competition)
  3. Extra: Single-page PDF files matching each PNG image
  4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
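
The image assets can be explored programmatically with the Hugging Face datasets library. A minimal sketch, assuming the competition dataset is published under the ds4sd/icdar2023-doclaynet identifier (check this page's repository id if it differs):

    # Minimal loading sketch with the Hugging Face `datasets` library.
    # The repository id below is an assumption; use the id of this
    # dataset page if it differs.
    from datasets import load_dataset

    dataset = load_dataset("ds4sd/icdar2023-doclaynet", split="dev")

    sample = dataset[0]
    print(sample["image_id"], sample["width"], sample["height"])  # e.g. 1 1025 1025
    sample["image"].save("page.png")  # PIL image of the page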

The COCO image records are defined as in this example:

    ...
    {
      "id": 1,
      "width": 1025,
      "height": 1025,
      "file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",

      // Custom fields:
      "doc_category": "financial_reports", // high-level document category
      "collection": "ann_reports_00_04_fancy", // sub-collection name
      "doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
      "page_no": 9, // page number in original document
      "precedence": 0 // annotation order, non-zero in case of redundant double- or triple-annotation
    },
    ...

The doc_category field uses one of the following constants:

  • reports
  • manuals
  • patents
  • others

Data Splits

The dataset provides two splits:

  • dev, which is extracted from the DocLayNet dataset
  • test, which contains new data for the competition

Dataset Creation

Annotations

Annotation process

The labeling guidelines used to train the annotation experts are available in DocLayNet_Labeling_Guide_Public.pdf.

Who are the annotators?

Annotations are crowdsourced.

Additional Information

Dataset Curators

The dataset is curated by the Deep Search team at IBM Research. You can contact us at deepsearch-core@zurich.ibm.com.

Licensing Information

License: CDLA-Permissive-1.0

Citation Information

A publication will be submitted at the end of the competition. Meanwhile, we suggest citing our original dataset paper.

@inproceedings{doclaynet2022,
  title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
  doi = {10.1145/3534678.3539043},
  url = {https://doi.org/10.1145/3534678.3539043},
  author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
  year = {2022},
  isbn = {9781450393850},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages = {3743–3751},
  numpages = {9},
  location = {Washington DC, USA},
  series = {KDD '22}
}

Contributions

Thanks to @dolfim-ibm, @cau-git for adding this dataset.
