OALZ/1788/Q1/NER

A named entity recognition (NER) system was trained on text extracted from the Oberdeutsche Allgemeine Litteraturzeitung (OALZ) of the first quarter (January, February, March) of 1788. The scans from which the text was extracted can be found at the Bayerische Staatsbibliothek. The extraction strategy of the KEDiff project can be found at cborgelt/KEDiff.

Annotations

Each text passage was annotated in doccano by two or three annotators, and their annotations were cleaned and merged into one dataset. For details on how this was done, see LelViLamp/kediff-doccano-postprocessing. In total, the text consists of about 1.7 million characters. The resulting annotation datasets were published on the Hugging Face Hub. There are two versions:

  • union-dataset contains the texts split into chunks, as they were presented in the annotation application doccano. This dataset is the result of preprocessing step 5a.
  • merged-union-dataset does not retain this split. The texts were merged into one long text, and annotation indices were adapted accordingly in preprocessing step 5b; see the loading sketch after this list.
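For illustration, either version could be loaded with the datasets library. The repository ID below is a placeholder, not the actual ID; substitute the dataset ID as published on the Hugging Face Hub:

from datasets import load_dataset

# Placeholder repository ID; substitute the actual dataset ID
# published on the Hugging Face Hub.
dataset = load_dataset("LelViLamp/oalz-1788-q1-ner-union", split="train")
print(dataset[0])  # one annotation row: annotation_id, line_id, start, end, ...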

The following categories were included in the annotation process:

Tag    Label           Count   Total Length   Median Annotation Length   Mean Annotation Length      SD
EVENT  Event             294          6,090                         18                    20.71   13.24
LOC    Location        2,449         24,417                          9                     9.97    6.21
MISC   Miscellaneous   2,585         50,654                         14                    19.60   19.63
ORG    Organisation    2,479         34,693                         11                    13.99    9.33
PER    Person          7,055         64,710                          7                     9.17    9.35
TIME   Dates & Time    1,076         13,154                          8                    12.22   10.98

Data format

Note that there are three versions of the dataset:

  • a Hugging Face/Arrow dataset,
  • a CSV, and
  • a JSONL file.

The first two should be used together with the provided text.csv to recover the context of an annotation. The JSONL file, by contrast, contains the full text itself.

The JSONL file contains lines of this format:

{
  "id": "example-42",
  "text": "Dieses Projekt wurde an der Universität Salzburg durchgeführt",
  "label": [[28, 49, "ORG"], [40, 49, "LOC"]]
}
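Since end indices are exclusive (see the column descriptions below), the annotated span can be recovered by ordinary string slicing. A minimal sketch of reading such a file, assuming it is named annotations.jsonl:

import json

with open("annotations.jsonl", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        for start, end, label in entry["label"]:
            # Slice the full text with the half-open interval [start, end).
            print(entry["id"], label, entry["text"][start:end])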

And here are some example entries as used in the CSV and Hugging Face dataset:

annotation_id   line_id      start   end   label   label_text             merged
$n$             example-42      28    48   ORG     Universität Salzburg   false
$n+1$           example-42      40    48   LOC     Salzburg               false

The columns mean:

  • annotation_id was assigned internally by enumerating all annotations in the original dataset, which is not published. This value is not present in the JSONL file.
  • line_id identifies the fragment of the subdivided text, as shown in doccano. It is called id in the JSONL dataset.
  • start is the index of the first annotated character (inclusive; counting starts at 0).
  • end is the index one past the last annotated character (exclusive; its maximum value is len(respectiveText)).
  • label indicates the category assigned to the span $[start, end)$.
  • label_text contains the text annotated by $[start, end)$. It is not present in the JSONL dataset, as it can be inferred from the text entry there.
  • merged indicates whether this annotation is the result of overlapping annotations of the same label. In that case, annotation_id contains the IDs of the individual annotations it was constructed from, separated by underscores. This value is not present in the JSONL dataset, and the column is redundant, as it can be inferred from annotation_id.
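To work with the CSV version, the annotations can be joined with the provided text.csv to recover their context. A minimal sketch with pandas, assuming the annotation CSV is named annotations.csv and that text.csv has the columns line_id and text (the file and column names here are assumptions):

import pandas as pd

annotations = pd.read_csv("annotations.csv")
texts = pd.read_csv("text.csv")  # assumed columns: line_id, text

# Attach the full text of each fragment to its annotations.
df = annotations.merge(texts, on="line_id")

# Reconstructing the span from [start, end) should reproduce label_text.
df["span"] = df.apply(lambda row: row["text"][row["start"]:row["end"]], axis=1)
print(df[["annotation_id", "label", "span", "label_text"]].head())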

NER models

Based on the annotations above, six separate NER classifiers were trained, one for each label type. This was done in order to allow overlapping annotations. For example, in the passage "Dieses Projekt wurde an der Universität Salzburg durchgeführt", you would want to categorise "Universität Salzburg" as an organisation while also extracting "Salzburg" as a location.

To achieve this overlap, each text passage must be run through all six classifiers individually, and their results then need to be combined. For details on how the training was done and for examples of inference, see LelViLamp/kediff-ner-training.
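As an illustration of that combination step, the sketch below runs one passage through a pipeline per label type and pools the results. The model repository IDs are placeholders, not the actual IDs (those are linked in the table below); aggregation_strategy="simple" merges word pieces back into whole entity spans:

from transformers import pipeline

LABELS = ["EVENT", "LOC", "MISC", "ORG", "PER", "TIME"]
text = "Dieses Projekt wurde an der Universität Salzburg durchgeführt"

entities = []
for label in LABELS:
    # Placeholder model IDs; substitute the actual model
    # repositories linked in the performance table below.
    ner = pipeline(
        "token-classification",
        model=f"LelViLamp/oalz-1788-q1-ner-{label.lower()}",
        aggregation_strategy="simple",
    )
    entities.extend(ner(text))

# Spans from different classifiers may overlap, e.g. an ORG span
# ("Universität Salzburg") containing a LOC span ("Salzburg").
for e in sorted(entities, key=lambda e: e["start"]):
    print(e["entity_group"], e["start"], e["end"], e["word"])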

The dbmdz/bert-base-historic-multilingual-cased tokeniser was used to create the historical embeddings. It is therefore necessary to use that same tokeniser when running these NER models.
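A minimal sketch of pairing that tokeniser explicitly with one of the models (the model ID is again a placeholder):

from transformers import AutoTokenizer, pipeline

tokeniser = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")
ner = pipeline(
    "token-classification",
    model="LelViLamp/oalz-1788-q1-ner-per",  # placeholder model ID
    tokenizer=tokeniser,
    aggregation_strategy="simple",
)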

The models' performance measures are shown in the following table. Click the model name to find the model on the Hugging Face Hub.

Model   Selected Epoch   Checkpoint   Validation Loss   Precision    Recall        F1   Accuracy
EVENT                1         1393           .021957     .665233   .343066   .351528    .995700
LOC                  1         1393           .033602     .829535   .803648   .814146    .990999
MISC                 2         2786           .123994     .739221   .503677   .571298    .968697
ORG                  1         1393           .062769     .744259   .709738   .726212    .980288
PER                  2         2786           .059186     .914037   .849048   .879070    .983253
TIME                 1         1393           .016120     .866866   .724958   .783099    .994631

Acknowledgements

The dataset and models were created in the project Kooperative Erschließung diffusen Wissens (KEDiff), funded by the State of Salzburg, Austria, and carried out at Paris Lodron Universität Salzburg. 🇦🇹
