Tasks: Token Classification · Sub-tasks: named-entity-recognition · Languages: Dutch · Size Categories: 1K<n<10K

Commit 977723b (parent: 6317d4e): update sections

README.md (changed)
### Dataset Summary

**Note**: this data card was adapted from documentation and a [data card](https://github.com/budh333/UnSilence_VOC/blob/main/Datacard.pdf) written by the creators of the dataset.

> Colonial archives are at the center of increased interest from a variety of perspectives, as they contain traces of historically marginalized people. Unfortunately, like most archives, they remain difficult to access due to significant persisting barriers. We focus here on one of them: the biases to be found in historical finding aids, such as indices of person names, which remain in use to this day. In colonial archives, indexes can perpetrate silences by omitting to include mentions of historically marginalized persons. In order to overcome such limitation and pluralize the scope of existing finding aids, we propose using automated entity recognition. To this end, we contribute a fit-for-purpose annotation typology and apply it on the colonial archive of the Dutch East India Company (VOC). We release a corpus of nearly 70,000 annotations as a shared task, for which we provide strong baselines using state-of-the-art neural network models.

This dataset is based on the digitized collection of the Dutch East India Company (VOC) Testaments under the custody of the Dutch National Archives. These testaments of VOC servants are mainly from the 18th century, for the most part drawn up in the Asian VOC settlements and to a lesser extent on VOC ships and in the Republic. The testaments have a fixed order in their text structure, and the language is 18th-century Dutch.

The dataset has 68,429 annotations spanning 79,797 tokens across 2,193 unique pages. 47% of the total annotations correspond to entities and 53% to attributes of those entities. Of the 32,203 entity annotations, 11,715 (36.3%) represent persons with associated attributes of gender, legal status, and notarial role; 4,510 (14%) are instances of places; 1,080 (3.5%) are organizations with the attribute beneficiary; and 14,898 (46.2%) are proper names (of places, organizations, and persons).
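The counts above can be cross-checked with a few lines of arithmetic; this is a minimal sanity check of the quoted figures (the dictionary keys are shorthand for the entity types described above):

```python
# Cross-check the annotation counts quoted in the summary.
entity_counts = {
    "person": 11_715,
    "place": 4_510,
    "organization": 1_080,
    "proper_name": 14_898,
}

total_entities = sum(entity_counts.values())
assert total_entities == 32_203  # matches the stated entity total

total_annotations = 68_429
attribute_annotations = total_annotations - total_entities

# Entities and attributes split roughly 47% / 53%, as stated.
assert round(100 * total_entities / total_annotations) == 47
assert round(100 * attribute_annotations / total_annotations) == 53
```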
### Supported Tasks and Leaderboards
- named-entity-recognition: The dataset can be used to train a model for Named Entity Recognition.
### Languages

The dataset contains 18th-century Dutch. The text in the dataset was produced via handwritten text recognition, so it contains some errors.
## Dataset Structure
### Data Instances
### Data Fields

- tokens
- NE-MAIN
- NE-PER-NAME
- NE-PER-GENDER
- NE-PER-LEGAL-STATUS
- NE-PER-ROLE
- NE-ORG-BENEFICIARY
- MISC
- document_id
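Assuming the usual token-classification layout, where each tagging layer is a sequence aligned one-to-one with `tokens`, a single record might look roughly like the following. All token and label values below are invented for illustration; they are not taken from the dataset:

```python
# Hypothetical record shaped like the fields listed above.
# Token values, label strings, and the document_id format are invented.
record = {
    "tokens": ["Adam", "Domingo", "van", "Batavia"],
    "NE-MAIN": ["B-Person", "I-Person", "O", "B-Place"],
    "NE-PER-NAME": ["B-ProperName", "I-ProperName", "O", "O"],
    "NE-PER-GENDER": ["B-Man", "I-Man", "O", "O"],
    "NE-PER-LEGAL-STATUS": ["O", "O", "O", "O"],
    "NE-PER-ROLE": ["B-Testator", "I-Testator", "O", "O"],
    "NE-ORG-BENEFICIARY": ["O", "O", "O", "O"],
    "MISC": ["O", "O", "O", "O"],
    "document_id": "scan_0001",
}

# Every tagging layer must align one-to-one with the tokens.
n = len(record["tokens"])
for field, values in record.items():
    if field not in ("tokens", "document_id"):
        assert len(values) == n, field
```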
### Data Splits
### Curation Rationale

This dataset was created to train entity recognition models for building more inclusive, content-based indexes of the collection of VOC testaments.
### Source Data
#### Initial Data Collection and Normalization

This dataset is based on the digitized collection of the Dutch East India Company (VOC) Testaments under the custody of the Dutch National Archives.
#### Who are the source language producers?
#### Annotation process

Annotations were created as a shared annotation task using the brat annotation software. Annotators highlighted the relevant span of text and chose its entity type and, where applicable, exactly one attribute value from a drop-down menu. To tag the same span as two entities, the span must be selected twice and labelled accordingly; for example, 'Adam Domingo' has been labelled twice, as a Person and as a ProperName.
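One way to realize this double labelling downstream is to project each annotated span onto a separate BIO tag sequence per annotation layer, so that a span like 'Adam Domingo' yields both a Person tag in the entity layer and a ProperName tag in the name layer. A minimal sketch, using an invented `(start, end, layer, label)` span format rather than the dataset's actual export format:

```python
# Sketch: project multiply-labelled token spans onto separate BIO
# layers, one tag sequence per annotation layer. Illustrative only.
def spans_to_bio(tokens, spans, layer):
    """spans: list of (start_token, end_token_exclusive, layer, label)."""
    tags = ["O"] * len(tokens)
    for start, end, span_layer, label in spans:
        if span_layer != layer:
            continue
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

tokens = ["Adam", "Domingo", "legateert", "aan", "Maria"]
# The same span (0, 2) is annotated twice: as an entity and a proper name.
spans = [
    (0, 2, "NE-MAIN", "Person"),
    (0, 2, "NE-PER-NAME", "ProperName"),
    (4, 5, "NE-MAIN", "Person"),
]

assert spans_to_bio(tokens, spans, "NE-MAIN") == [
    "B-Person", "I-Person", "O", "O", "B-Person"
]
assert spans_to_bio(tokens, spans, "NE-PER-NAME") == [
    "B-ProperName", "I-ProperName", "O", "O", "O"
]
```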
#### Who are the annotators?
### Personal and Sensitive Information
### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.