
Dataset Card for Unsilencing Colonial Archives via Automated Entity Recognition

Dataset Summary

Note: this data card was adapted from documentation and a data card written by the creators of the dataset.

Colonial archives are at the center of increased interest from a variety of perspectives, as they contain traces of historically marginalized people. Unfortunately, like most archives, they remain difficult to access due to significant persisting barriers. We focus here on one of them: the biases found in historical finding aids, such as indexes of person names, which remain in use to this day. In colonial archives, indexes can perpetuate silences by omitting mentions of historically marginalized persons. To overcome this limitation and pluralize the scope of existing finding aids, we propose using automated entity recognition. To this end, we contribute a fit-for-purpose annotation typology and apply it to the colonial archive of the Dutch East India Company (VOC). We release a corpus of nearly 70,000 annotations as a shared task, for which we provide strong baselines using state-of-the-art neural network models.

This dataset is based on the digitized collection of the Dutch East India Company (VOC) Testaments under the custody of the Dutch National Archives. These testaments of VOC-servants are mainly from the 18th century, for the most part drawn up in the Asian VOC-settlements and to a lesser extent on the VOC ships and in the Republic. The testaments have a fixed order in the text structure and the language is 18th century Dutch.

The dataset has 68,429 annotations spanning 79,797 tokens across 2,193 unique pages. 47% of the total annotations correspond to entities and 53% to attributes of those entities. Of the 32,203 entity annotations, 11,715 (36.4%) represent persons with associated attributes of gender, legal status and notarial role; 4,510 (14.0%) are places; 1,080 (3.4%) are organizations with the attribute beneficiary; and 14,898 (46.2%) are proper names (of places, organizations and persons).

Supported Tasks and Leaderboards

  • named-entity-recognition: This dataset can be used to train a model for Named Entity Recognition. In particular, the dataset was designed to detect mentions of people in archival documents.


Languages

The dataset contains 18th-century Dutch. The text was produced via handwritten text recognition, so it contains some recognition errors.

Dataset Structure

Data Instances

Each datapoint refers to a central entity, which can be a person, place, organization or proper name, or to attributes of such an entity, such as the gender, legal status and notarial role of a person.
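As an illustration, a single datapoint might look like the sketch below. The field names follow the Data Fields section of this card, but the IOB-style string tags, the attribute values shown and the document id are all hypothetical: the released dataset may store these layers as class indices rather than strings.

```python
# Hypothetical datapoint; field names are from this card, tag values are
# illustrative only (the real dataset may encode tags as class indices).
example = {
    "tokens": ["Adam", "Domingo", ","],
    "NE-MAIN": ["B-Person", "I-Person", "O"],
    "NE-PER-NAME": ["B-ProperName", "I-ProperName", "O"],
    "NE-PER-GENDER": ["B-Unspecified", "I-Unspecified", "O"],
    "NE-PER-LEGAL-STATUS": ["B-Unspecified", "I-Unspecified", "O"],
    "NE-PER-ROLE": ["B-Testator", "I-Testator", "O"],
    "NE-ORG-BENEFICIARY": ["O", "O", "O"],
    "MISC": ["O", "O", "O"],
    "document_id": "doc-0001",  # hypothetical id
}

# Every annotation layer is aligned token-for-token with `tokens`.
assert all(len(v) == len(example["tokens"])
           for k, v in example.items() if k != "document_id")
```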

Data Fields

  • tokens: tokens being annotated
  • NE-MAIN: main entity type, i.e. Person, Place, Organization or ProperName
  • NE-PER-NAME: person name entity
  • NE-PER-GENDER: When the mention of a person is followed or preceded by trigger words which reveal their gender, the text is annotated as a Person with the appropriate value of the attribute Gender. When a person is mentioned without a gender trigger word, their gender is marked as Unspecified.
  • NE-PER-LEGAL-STATUS: legal status where known, i.e. Free(d), Enslaved, Unspecified
  • NE-PER-ROLE: The attribute Role refers to roles specific to testaments in notarial archives, and takes exactly one of the following values: Testator, Notary, Witness, Beneficiary, Acting Notary, Testator Beneficiary or Other.
  • NE-ORG-BENEFICIARY: Organizations have the attribute Beneficiary, which takes the value Yes or No depending on whether the testator designates the organization as a beneficiary.
  • MISC: other annotations not fitting into the above labels.
  • document_id: id for the document being annotated
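A common way to consume token-level fields such as NE-MAIN is to collapse the tags back into labelled spans. The sketch below assumes IOB-style string tags (B-/I-/O); if the dataset stores class indices instead, they would need to be mapped to strings first.

```python
def iob_to_spans(tokens, tags):
    """Collapse IOB tags (e.g. B-Person / I-Person / O) into (label, text)
    spans. Assumes string tags; mapping from class indices is not handled."""
    spans, current_label, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_label is not None:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_label == tag[2:]:
            current_tokens.append(token)
        else:
            if current_label is not None:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = None, []
    if current_label is not None:
        spans.append((current_label, " ".join(current_tokens)))
    return spans

tokens = ["Adam", "Domingo", "te", "Batavia"]
tags = ["B-Person", "I-Person", "O", "B-Place"]
print(iob_to_spans(tokens, tags))  # [('Person', 'Adam Domingo'), ('Place', 'Batavia')]
```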

Data Splits

[More Information Needed]

Dataset Creation

Curation Rationale

This dataset was created to train entity-recognition models that support more inclusive, content-based indexes of the collection of VOC testaments.

Source Data

Initial Data Collection and Normalization

This dataset is based on the digitized collection of the Dutch East India Company (VOC) Testaments under the custody of the Dutch National Archives.

Who are the source language producers?

[More Information Needed]


Annotations

The 32,203 entity annotations break down as follows:

Entity        #       %
Person        11,715  36.4
Place          4,510  14.0
Organization   1,080   3.4
ProperName    14,898  46.2

Annotation process

Annotations were created as a shared annotation task using the Brat annotation software. Annotators highlighted the relevant span of text and chose its entity type and, where applicable, exactly one attribute value from a drop-down menu. To tag the same span as two entities, the span must be selected twice and labelled accordingly; for example, 'Adam Domingo' is labelled both as a Person and as a ProperName.
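In the released token-level fields, the double annotation of 'Adam Domingo' described above surfaces as two parallel tag columns over the same tokens. The IOB string format shown here is an assumption for readability:

```python
# Illustrative only: one span, two annotation layers (tag strings assumed).
tokens = ["Adam", "Domingo"]
ne_main = ["B-Person", "I-Person"]              # entity layer (NE-MAIN)
ne_per_name = ["B-ProperName", "I-ProperName"]  # name layer (NE-PER-NAME)

# The same two tokens carry both labels, one per layer.
assert len(tokens) == len(ne_main) == len(ne_per_name)
```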

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

[More Information Needed]


Thanks to @davanstrien for adding this dataset.
