Size Categories:
n<1K
Language Creators:
expert-generated
Annotations Creators:
expert-generated
Tags:
manuscripts
LAM
License:
cc-by-4.0

The fields for the COCO config:

- `category_id`: label for the image
- `image_id`: id for the image
- `iscrowd`: COCO is crowd flag
- `segmentation`: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
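
A minimal sketch of what reading these fields might look like. The repository ID (`user/dataset-name`) and the `objects` feature layout are assumptions for illustration, not identifiers confirmed by this card:

```python
from datasets import load_dataset

# Hypothetical Hub path; substitute this dataset's actual repository ID
# and inspect `ds.features` before relying on the layout assumed below.
ds = load_dataset("user/dataset-name", "COCO", split="train")

example = ds[0]
# Assumes the COCO config stores per-image annotations in an `objects` list.
for obj in example["objects"]:
    print(obj["image_id"], obj["category_id"], obj["iscrowd"], obj["segmentation"])
```
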
### Data Splits

The dataset contains train, validation and test splits with the following number of examples per split:

|          | train | validation | test |
|----------|------:|-----------:|-----:|
| Examples |   196 |         22 |  135 |
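
As a quick sanity check on the table above, the reported sizes could be compared against the loaded splits (again assuming the hypothetical repository ID from the sketch above):

```python
from datasets import load_dataset

ds = load_dataset("user/dataset-name", "COCO")  # hypothetical ID
for split, expected in {"train": 196, "validation": 22, "test": 135}.items():
    print(split, len(ds[split]), "expected:", expected)
```
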
## Dataset Creation

> [this] dataset was produced using a single source, the Lectaurep Repertoires dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The testset is composed of original data, from various documents, from the 17th century up to the early 20th with a single soldier war report. The test set is voluntarily very different and out of domain, with column borders that are not drawn nor printed in certain cases, layout in some kind of masonry layout. (p. 8)
### Curation Rationale

This dataset was created to produce a simplified version of the [Lectaurep Repertoires dataset](https://github.com/HTR-United/lectaurep-repertoires), which was found to contain:

> around 16 different ways to describe columns, from Col1 to Col7, the case-different col1-col7 and finally ColPair and ColOdd, which we all reduced to Col (p. 8)
### Source Data

#### Initial Data Collection and Normalization

The LECTAUREP (LECTure Automatique de REPertoires) project, which began in 2018, is a joint initiative of the Minutier central des notaires de Paris of the National Archives, the [ALMAnaCH (Automatic Language Modeling and Analysis & Computational Humanities)](https://www.inria.fr/en/almanach) team at Inria and the EPHE (École Pratique des Hautes Études), in partnership with the Ministry of Culture.

> The lectaurep-bronod corpus brings together 100 pages from the repertoire of Maître Louis Bronod (1719-1765), notary in Paris from December 13, 1719 to July 23, 1765. The pages concerned were written during the years 1742 to 1745.
#### Who are the source language producers?

[More information needed]
### Annotations

The table below gives the number of annotations of each category per split, together with the average and median area covered by each category:

| Category | Train | Dev | Test | Total | Average area | Median area |
|----------|------:|----:|-----:|------:|-------------:|------------:|
| Col      |   724 | 105 |  829 |  1658 |         9.32 |        6.33 |
| Header   |   103 |  15 |   42 |   160 |         6.78 |        7.10 |
| Marginal |    60 |   8 |    0 |    68 |         0.70 |        0.71 |
| Text     |    13 |   5 |    0 |    18 |         0.01 |        0.00 |
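
The per-category counts above could be recomputed along these lines; the `objects`/`category_id` layout and the repository ID are the same unverified assumptions as in the earlier sketches:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/dataset-name", "COCO", split="train")  # hypothetical ID

counts = Counter()
for example in ds:
    # Assumes each example stores its annotations under an `objects` list.
    for obj in example["objects"]:
        counts[obj["category_id"]] += 1

print(counts)  # expected to mirror the Train column of the table above
```
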
#### Annotation process

[More information needed]

#### Who are the annotators?

[More information needed]
### Personal and Sensitive Information

This data does not contain information relating to living individuals.
## Considerations for Using the Data

### Social Impact of Dataset

There are a growing number of datasets related to page layout for historical documents. This dataset offers a different approach to annotating such datasets (focusing on object detection rather than pixel-level annotations).
### Discussion of Biases

Historical documents contain a broad variety of page layouts, so it is not certain that models trained on this dataset will transfer to documents with very different layouts.
### Other Known Limitations

[More information needed]

## Additional Information

### Dataset Curators

[More information needed]
### Licensing Information

[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)

### Citation Information
```
@dataset{clerice_thibault_2022_6827706,
  author = {Clérice, Thibault},
}
```

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6827706.svg)](https://doi.org/10.5281/zenodo.6827706)
### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.