YALTAi Tabular Dataset

Dataset Summary

This dataset contains a subset of the data used in the paper You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine. The paper proposes treating page layout recognition on historical documents as an object detection task, in contrast to the usual pixel-segmentation approach. The dataset covers pages with tabular information annotated with the following objects: "Header", "Col", "Marginal", and "Text".

Supported Tasks and Leaderboards

  • object-detection: This dataset can be used to train a model for object detection on historical document images.

Dataset Structure

This dataset has two configurations. Both configurations cover the same data and annotations, but they provide the annotations in different forms to make it easier to integrate the data with existing processing pipelines (see the loading sketch below).

  • The first configuration, YOLO, uses the data's original format.
  • The second configuration, COCO, converts the YOLO annotations into a format closer to the COCO annotation format. This makes the data easier to use with the feature extractors (image processors) of Transformers object-detection models, which expect COCO-style annotations.
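
Both configurations can be loaded with the datasets library. A minimal loading sketch (the configuration names "YOLO" and "COCO" are assumed to match this card; the repository id is taken from this card):

from datasets import load_dataset

# Load each configuration by name (config names assumed from the card).
yolo_ds = load_dataset("biglam/yalta_ai_tabular_dataset", "YOLO")
coco_ds = load_dataset("biglam/yalta_ai_tabular_dataset", "COCO")

# Inspect the features to see the exact fields and class-label names.
print(yolo_ds["train"].features)
print(coco_ds["train"].features)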

Data Instances

An example instance from the COCO config:

{'height': 2944,
 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FA413CDA210>,
 'image_id': 0,
 'objects': [{'area': 435956,
   'bbox': [0.0, 244.0, 1493.0, 292.0],
   'category_id': 0,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 88234,
   'bbox': [305.0, 127.0, 562.0, 157.0],
   'category_id': 2,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 5244,
   'bbox': [1416.0, 196.0, 92.0, 57.0],
   'category_id': 2,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 5720,
   'bbox': [1681.0, 182.0, 88.0, 65.0],
   'category_id': 2,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 374085,
   'bbox': [0.0, 540.0, 163.0, 2295.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 577599,
   'bbox': [104.0, 537.0, 253.0, 2283.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 598670,
   'bbox': [304.0, 533.0, 262.0, 2285.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 56,
   'bbox': [284.0, 539.0, 8.0, 7.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 1868412,
   'bbox': [498.0, 513.0, 812.0, 2301.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 307800,
   'bbox': [1250.0, 512.0, 135.0, 2280.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 494109,
   'bbox': [1330.0, 503.0, 217.0, 2277.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 52,
   'bbox': [1734.0, 1013.0, 4.0, 13.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 90666,
   'bbox': [0.0, 1151.0, 54.0, 1679.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []}],
 'width': 2064}
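
As a quick sanity check, the boxes from the COCO config can be drawn directly onto the image with PIL. A minimal sketch (the config name "COCO" is assumed from this card):

from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("biglam/yalta_ai_tabular_dataset", "COCO", split="train")
example = ds[0]
image = example["image"].convert("RGB")  # source images are grayscale ("L")
draw = ImageDraw.Draw(image)
for obj in example["objects"]:
    x, y, w, h = obj["bbox"]  # COCO boxes are [x_min, y_min, width, height]
    draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
image.save("example_with_boxes.png")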

An example instance from the YOLO config:

{'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FAA140F2450>,
 'objects': {'bbox': [[747, 390, 1493, 292],
   [586, 206, 562, 157],
   [1463, 225, 92, 57],
   [1725, 215, 88, 65],
   [80, 1688, 163, 2295],
   [231, 1678, 253, 2283],
   [435, 1675, 262, 2285],
   [288, 543, 8, 7],
   [905, 1663, 812, 2301],
   [1318, 1653, 135, 2280],
   [1439, 1642, 217, 2277],
   [1737, 1019, 4, 13],
   [26, 1991, 54, 1679]],
  'label': [0, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]}}
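
Note that the two configs encode the same boxes differently: comparing the first box above ([747, 390, 1493, 292]) with the first box of the COCO instance ([0.0, 244.0, 1493.0, 292.0]) shows that the YOLO config stores [x_center, y_center, width, height] in pixels, while the COCO config stores [x_min, y_min, width, height]. A minimal helper to convert between them (correct up to the integer rounding visible in the examples):

def yolo_to_coco(bbox):
    """Convert [x_center, y_center, w, h] to COCO [x_min, y_min, w, h]."""
    x_c, y_c, w, h = bbox
    return [x_c - w / 2, y_c - h / 2, w, h]

print(yolo_to_coco([747, 390, 1493, 292]))
# -> [0.5, 244.0, 1493, 292], matching the COCO box up to rounding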

Data Fields

The fields for the YOLO config:

  • image: the image
  • objects: the annotations, which consist of:
    • bbox: a list of bounding boxes for the image; each box is given as [x_center, y_center, width, height] in pixels (see the conversion sketch above)
    • label: a list of class labels, one per bounding box

The fields for the COCO config:

  • height: height of the image
  • width: width of the image
  • image: the image
  • image_id: id for the image
  • objects: the annotations in COCO format, consisting of a list of dictionaries with the following keys:
    • area: area of the bounding box
    • bbox: the bounding box, given as [x_min, y_min, width, height] in pixels
    • category_id: the class label for the object
    • id: id of the annotation
    • image_id: id of the image the annotation belongs to
    • iscrowd: the COCO iscrowd flag
    • segmentation: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
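
Because the objects follow the COCO annotation schema, an example can be passed more or less directly to an object-detection image processor from Transformers. A hedged sketch (the DETR checkpoint and the exact annotation wrapping are assumptions, not part of this card):

from datasets import load_dataset
from transformers import DetrImageProcessor

ds = load_dataset("biglam/yalta_ai_tabular_dataset", "COCO", split="train")
example = ds[0]

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")

# The processor expects one {"image_id": ..., "annotations": [...]} dict per image.
annotations = {"image_id": example["image_id"], "annotations": example["objects"]}
encoding = processor(
    images=example["image"].convert("RGB"),
    annotations=annotations,
    return_tensors="pt",
)
print(encoding["pixel_values"].shape)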

Data Splits

The dataset contains train, validation and test splits with the following number of examples per split:

           train   validation   test
examples     196           22    135
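
These counts can be checked directly after loading (the split names train/validation/test are assumed from this card):

from datasets import load_dataset

ds = load_dataset("biglam/yalta_ai_tabular_dataset", "COCO")
print({split: ds[split].num_rows for split in ds})
# expected: {'train': 196, 'validation': 22, 'test': 135}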

Dataset Creation

From the paper (p. 8):

"[This] dataset was produced using a single source, the Lectaurep Repertoires dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The test set is composed of original data, from various documents, from the 17th century up to the early 20th, with a single soldier war report. The test set is voluntarily very different and out of domain, with column borders that are not drawn nor printed in certain cases, and layouts in some kind of masonry layout."

Curation Rationale

This dataset was created to produce a simplified version of the Lectaurep Repertoires dataset, which was found to contain (p. 8):

"around 16 different ways to describe columns, from Col1 to Col7, the case-different col1-col7 and finally ColPair and ColOdd, which we all reduced to Col"

Source Data

Initial Data Collection and Normalization

The LECTAUREP (LECTure Automatique de REPertoires) project, which began in 2018, is a joint initiative of the Minutier central des notaires de Paris of the National Archives, the ALMAnaCH (Automatic Language Modeling and Analysis & Computational Humanities) team at Inria, and the EPHE (École Pratique des Hautes Études), in partnership with the Ministry of Culture.

The lectaurep-bronod corpus brings together 100 pages from the repertoire of Maître Louis Bronod (1719-1765), notary in Paris from December 13, 1719 to July 23, 1765. The pages concerned were written during the years 1742 to 1745.

Who are the source language producers?

[More information needed]

Annotations

           Train   Dev   Test   Total   Average area   Median area
Col          724   105    829    1658           9.32          6.33
Header       103    15     42     160           6.78          7.10
Marginal      60     8      0      68           0.70          0.71
Text          13     5      0      18           0.01          0.00
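
The per-split counts above can be reproduced by counting the integer labels in the YOLO config (a sketch; the mapping from label ids to class names can be read from the dataset's features):

from collections import Counter
from datasets import load_dataset

ds = load_dataset("biglam/yalta_ai_tabular_dataset", "YOLO")
for split in ds:
    counts = Counter(label for ex in ds[split] for label in ex["objects"]["label"])
    print(split, dict(counts))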

Annotation process

[More information needed]

Who are the annotators?

[More information needed]

Personal and Sensitive Information

This data does not contain information relating to living individuals.

Considerations for Using the Data

Social Impact of Dataset

A growing number of datasets are related to page layout for historical documents. This dataset offers a different approach to annotating these datasets (focusing on object detection rather than pixel-level annotations). Improving document layout recognition can have a positive impact on downstream tasks, in particular Optical Character Recognition.

Discussion of Biases

Historical documents contain a wide variety of page layouts. This means that the ability of models trained on this dataset to transfer to documents with very different layouts is not guaranteed.

Other Known Limitations

[More information needed]

Additional Information

Dataset Curators

[More information needed]

Licensing Information

Creative Commons Attribution 4.0 International

Citation Information

@dataset{clerice_thibault_2022_6827706,
  author       = {Clérice, Thibault},
  title        = {YALTAi: Tabular Dataset},
  month        = jul,
  year         = 2022,
  publisher    = {Zenodo},
  version      = {1.0.0},
  doi          = {10.5281/zenodo.6827706},
  url          = {https://doi.org/10.5281/zenodo.6827706}
}

Contributions

Thanks to @davanstrien for adding this dataset.
