---
language:
- de
- la
- fr
- en
task_categories:
- token-classification
pretty_name: Annotations and models for named entity recognition on Oberdeutsche Allgemeine
  Litteraturzeitung of the first quarter of 1788
tags:
- historical
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: annotation_id
    dtype: string
  - name: line_id
    dtype: uint16
  - name: start
    dtype: uint16
  - name: end
    dtype: uint16
  - name: label
    dtype:
      class_label:
        names:
          '0': EVENT
          '1': LOC
          '2': MISC
          '3': ORG
          '4': PER
          '5': TIME
  - name: label_text
    dtype: string
  - name: merged
    dtype: bool
  splits:
  - name: train
    num_bytes: 702091
    num_examples: 15938
  download_size: 474444
  dataset_size: 702091
---
# OALZ/1788/Q1/NER

- [Postprocessing](https://github.com/LelViLamp/kediff-doccano-postprocessing)
- [Training](https://github.com/LelViLamp/kediff-ner-training)
- Published datasets ([union](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations-union-dataset), [**_merged union_**](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations-merged-union-dataset)) and models ([`EVENT`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-event), [`LOC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-loc), [`MISC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-misc), [`ORG`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-org), [`PER`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-per), [`TIME`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-time))

A named entity recognition (NER) system was trained on text extracted from the _Oberdeutsche Allgemeine Litteraturzeitung_ (OALZ) of the first quarter (January, February, March) of 1788. The scans from which the text was extracted can be found at the [Bayerische Staatsbibliothek](https://www.digitale-sammlungen.de/de/view/bsb10628753?page=,1). The text was extracted using the strategy of the _KEDiff_ project, which can be found at [`cborgelt/KEDiff`](https://github.com/cborgelt/KEDiff).

## Annotations

Each text passage was annotated in [doccano](https://github.com/doccano/doccano) by two or three annotators, and their annotations were cleaned and merged into one dataset. For details on how this was done, see [`LelViLamp/kediff-doccano-postprocessing`](https://github.com/LelViLamp/kediff-doccano-postprocessing). In total, the text consists of about 1.7 million characters. The resulting annotation datasets were published on the Hugging Face Hub. There are two versions of the dataset:
- [`union-dataset`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations-union-dataset) contains the texts split into chunks. This is how they were presented in the annotation application doccano and results from preprocessing step 5a.
- [`merged-union-dataset`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations-merged-union-dataset) does not retain this split. The text was merged into one long text and annotation indices were adapted in preprocessing step 5b.

Note that both of these contain the same data in three equivalent formats:
- a Huggingface/Arrow dataset, <sup>*</sup>
- a CSV, <sup>*</sup> and
- a JSONL file.

<sup>*</sup> The former two should be used together with the provided `text.csv` to recover the context of each annotation. The JSONL file already contains the full text.
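
As an illustration, the join with `text.csv` can be sketched with pandas. The annotation columns follow this card's description, but the `text.csv` column names (`line_id`, `text`) and the sample rows are assumptions made up for the example:

```python
# Hypothetical sketch: joining the annotation table with text.csv to
# recover the annotated context. Sample rows are made up; the text.csv
# column names ("line_id", "text") are assumptions.
import pandas as pd

# stand-in for text.csv
text = pd.DataFrame({
    "line_id": ["example-42"],
    "text": ["Dieses Projekt wurde an der Universität Salzburg durchgeführt"],
})

# stand-in for the annotation CSV
annotations = pd.DataFrame({
    "annotation_id": ["1", "2"],
    "line_id": ["example-42", "example-42"],
    "start": [28, 40],
    "end": [48, 48],
    "label": ["ORG", "LOC"],
})

# join on line_id, then slice each annotated passage out of its line
joined = annotations.merge(text, on="line_id")
joined["label_text"] = [row.text[row.start:row.end] for row in joined.itertuples()]
print(joined[["label", "label_text"]])
```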

The following categories were included in the annotation process:

| Tag     | Label         | Count | Total Length | Median Annotation Length | Mean Annotation Length |    SD |
|:--------|:--------------|------:|-------------:|-------------------------:|-----------------------:|------:|
| `EVENT` | Event         |   294 |        6,090 |                       18 |                  20.71 | 13.24 |
| `LOC`   | Location      | 2,449 |       24,417 |                        9 |                   9.97 |  6.21 |
| `MISC`  | Miscellaneous | 2,585 |       50,654 |                       14 |                  19.60 | 19.63 |
| `ORG`   | Organisation  | 2,479 |       34,693 |                       11 |                  13.99 |  9.33 |
| `PER`   | Person        | 7,055 |       64,710 |                        7 |                   9.17 |  9.35 |
| `TIME`  | Dates & Time  | 1,076 |       13,154 |                        8 |                  12.22 | 10.98 |

## NER models

Based on the annotations above, six separate NER classifiers were trained, one for each label type. This was done in order to allow overlapping annotations. For example, in the passage "Dieses Projekt wurde an der Universität Salzburg durchgeführt", you would want to categorise "Universität Salzburg" as an organisation while also extracting "Salzburg" as a location. This would result in an annotation like this:

```json
{
  "id": "example-42",
  "text": "Dieses Projekt wurde an der Universität Salzburg durchgeführt",
  "label": [[28, 48, "ORG"], [40, 48, "LOC"]]
}
```
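
These offsets use Python's half-open slicing convention (`start` inclusive, `end` exclusive), so they can be derived and checked directly from the passage (illustration only; "example-42" is a made-up record):

```python
# Derive the [start, end) offsets of the two spans in the example passage.
text = "Dieses Projekt wurde an der Universität Salzburg durchgeführt"

org = "Universität Salzburg"
org_start = text.find(org)
org_end = org_start + len(org)

loc = "Salzburg"
loc_start = text.find(loc)
loc_end = loc_start + len(loc)

# half-open slicing recovers exactly the annotated text
assert text[org_start:org_end] == org
assert text[loc_start:loc_end] == loc
```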

Example entries in the CSV and Hugging Face datasets:

| annotation_id | line_id    | start | end | label | label_text           | merged |
|:--------------|:-----------|------:|----:|:------|:---------------------|:------:|
| $n$           | example-42 |    28 |  48 | ORG   | Universität Salzburg |  ???   |
| $n+1$         | example-42 |    40 |  48 | LOC   | Salzburg             |  ???   |

The columns mean:
- `annotation_id` was assigned internally by enumerating all annotations. It is not present in the JSONL format.
- `line_id` identifies the fragment of the subdivided text, as shown in doccano. It is called `id` in the JSONL dataset.
- `start` is the index of the first annotated character (inclusive; counting starts at 0).
- `end` is the index one past the last annotated character (exclusive; its maximum value is `len(respectiveText)`).
- `label` indicates how the passage delimited by $[start, end)$ was annotated.
- `label_text` contains the text delimited by $[start, end)$. It is not present in the JSONL dataset, as it can be inferred from the text there.
- `merged` indicates whether this annotation is the result of overlapping annotations of the same label. In that case, `annotation_id` contains the IDs of the individual annotations from which it was constructed. It is not present in the JSONL dataset.
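
The merging of overlapping same-label spans can be sketched as a standard interval merge. This is a hypothetical helper for illustration, not the project's actual postprocessing code:

```python
def merge_same_label(spans):
    """Merge overlapping [start, end) spans of one label.

    Returns (start, end, merged) triples, where `merged` is True when the
    span was built from two or more overlapping input spans.
    """
    result = []
    for start, end in sorted(spans):
        if result and start < result[-1][1]:  # overlaps the previous span
            prev_start, prev_end, _ = result[-1]
            result[-1] = (prev_start, max(prev_end, end), True)
        else:
            result.append((start, end, False))
    return result
```

With half-open intervals, spans that merely touch (such as `[0, 5)` and `[5, 8)`) are kept separate here; only truly overlapping spans are merged.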


To achieve this overlap, each text passage must be run through all the classifiers individually and each classifier's results need to be combined. For details on how the training was done, see [`LelViLamp/kediff-ner-training`](https://github.com/LelViLamp/kediff-ner-training).

The [`dbmdz/bert-base-historic-multilingual-cased`](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) tokeniser was used to create the historical embeddings, so it must also be used when applying these NER models.
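
Putting the two paragraphs above together, inference could be sketched as follows. The model repository names and the tokeniser are taken from this card, but the `transformers` pipeline settings and the pooling helper are assumptions, not the project's published inference code:

```python
# Sketch: run one passage through all six single-label classifiers and
# pool their (possibly overlapping) spans. Pipeline settings are assumptions.
LABELS = ("event", "loc", "misc", "org", "per", "time")

def load_classifiers(user="LelViLamp"):
    """One token-classification pipeline per label type."""
    from transformers import pipeline  # lazy import; requires `transformers`

    return {
        label: pipeline(
            "token-classification",
            model=f"{user}/oalz-1788-q1-ner-{label}",
            tokenizer="dbmdz/bert-base-historic-multilingual-cased",
            aggregation_strategy="simple",
        )
        for label in LABELS
    }

def pool_spans(per_label_entities):
    """Combine each classifier's output into one sorted list of
    (start, end, LABEL), keeping overlaps across different labels."""
    spans = []
    for label, entities in per_label_entities.items():
        for entity in entities:
            spans.append((entity["start"], entity["end"], label.upper()))
    return sorted(spans)

# usage sketch (downloads the six models):
#   classifiers = load_classifiers()
#   pooled = pool_spans({label: clf(text) for label, clf in classifiers.items()})
```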

The models' performance measures are as follows:

| Model                                                              | Selected Epoch | Checkpoint | Validation Loss | Precision |  Recall | F<sub>1</sub> | Accuracy |
|:-------------------------------------------------------------------|:--------------:|-----------:|----------------:|----------:|--------:|--------------:|---------:|
| [`EVENT`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-event) |       1        |     `1393` |         .021957 |   .665233 | .343066 |       .351528 |  .995700 |
| [`LOC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-loc)     |       1        |     `1393` |         .033602 |   .829535 | .803648 |       .814146 |  .990999 |
| [`MISC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-misc)   |       2        |     `2786` |         .123994 |   .739221 | .503677 |       .571298 |  .968697 |
| [`ORG`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-org)     |       1        |     `1393` |         .062769 |   .744259 | .709738 |       .726212 |  .980288 |
| [`PER`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-per)     |       2        |     `2786` |         .059186 |   .914037 | .849048 |       .879070 |  .983253 |
| [`TIME`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-time)   |       1        |     `1393` |         .016120 |   .866866 | .724958 |       .783099 |  .994631 |

## Acknowledgements
The data set and models were created in the project _Kooperative Erschließung diffusen Wissens_ ([KEDiff](https://uni-salzburg.elsevierpure.com/de/projects/kooperative-erschließung-diffusen-wissens-ein-literaturwissenscha)), funded by the [State of Salzburg](https://salzburg.gv.at), Austria 🇦🇹, and carried out at [Paris Lodron Universität Salzburg](https://plus.ac.at).