LelViLamp committed
Commit b50fc83
1 Parent(s): 21f644d

Upload README.md

Files changed (1)
  1. README.md +30 -71
README.md CHANGED
@@ -1,63 +1,36 @@
 ---
 language:
 - de
 - la
 - fr
 - en
- task_categories:
- - token-classification
- pretty_name: Annotations and models for named entity recognition on Oberdeutsche Allgemeine
-   Litteraturzeitung of the first quarter of 1788
 tags:
 - historical
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- dataset_info:
-   features:
-   - name: annotation_id
-     dtype: string
-   - name: line_id
-     dtype: uint16
-   - name: start
-     dtype: uint16
-   - name: end
-     dtype: uint16
-   - name: label
-     dtype:
-       class_label:
-         names:
-           '0': EVENT
-           '1': LOC
-           '2': MISC
-           '3': ORG
-           '4': PER
-           '5': TIME
-   - name: label_text
-     dtype: string
-   - name: merged
-     dtype: bool
-   splits:
-   - name: train
-     num_bytes: 702091
-     num_examples: 15938
-   download_size: 474444
-   dataset_size: 702091
 ---
 # OALZ/1788/Q1/NER
 
- A named entity recognition system (NER) was trained on text extracted from _Oberdeutsche Allgemeine Litteraturzeitung_ (OALZ) of the first quarter (January, February, March) of 1788. The scans from which the text was extracted can be found at [Bayerische Staatsbibliothek](https://www.digitale-sammlungen.de/de/view/bsb10628753?page=,1). The extraction strategy of the _KEDiff_ project can be found at [`cborgelt/KEDiff`](https://github.com/cborgelt/KEDiff).
 
 ## Annotations
 
- Each text passage was annotated in [doccano](https://github.com/doccano/doccano) by two or three annotators and their annotations were cleaned and merged into one dataset. For details on how this was done, see [`LelViLamp/kediff-doccano-postprocessing`](https://github.com/LelViLamp/kediff-doccano-postprocessing). In total, the text consists of about 1.7m characters. The resulting annotation datasets were published on the Hugging Face Hub. There are two versions:
- - [`union-dataset`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations-union-dataset) contains the texts split into chunks. This is how they were presented in the annotation application doccano. This dataset is the result of preprocessing step 5a.
- - [`merged-union-dataset`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations-merged-union-dataset) does not retain this split. The text was merged into one long text and annotation indices were adapted in preprocessing step 5b.
 
 The following categories were included in the annotation process:
@@ -70,16 +43,9 @@ The following categories were included in the annotation process:
 | `PER` | Person | 7,055 | 64,710 | 7 | 9.17 | 9.35 |
 | `TIME` | Dates & Time | 1,076 | 13,154 | 8 | 12.22 | 10.98 |
 
- ### Data format
- 
- Note that there are three versions of the dataset:
- - a Huggingface/Arrow dataset,
- - a CSV, and
- - a JSONL file.
- 
- The former two should be used together with the provided `text.csv` to catch the context of the annotation. The latter JSONL file contains the full text.
 
- The **JSONL file** contains lines of this format:
 
 ```json
 {
@@ -89,33 +55,28 @@ The **JSONL file** contains lines of this format:
 }
 ```
 
- And here are some example entries as used in the CSV and Huggingface dataset:
 
- | `annotation_id` | `line_id`  | `start` | `end` | `label` | `label_text`         | `merged` |
- |:----------------|:-----------|--------:|------:|:--------|:---------------------|:--------:|
- | $n$             | example-42 |      28 |    49 | ORG     | Universität Salzburg |   ???    |
- | $n+1$           | example-42 |      40 |    49 | LOC     | Salzburg             |   ???    |
 
 The columns mean:
- - `annotation_id` was assigned internally by enumerating all annotations in the original dataset, which is not published. This value is not present in the JSONL file.
 - `line_id` is the fragment of the subdivided text, as shown in doccano. Called `id` in the JSONL dataset.
 - `start` index of the first character that is annotated. Included, starts with 0.
 - `end` index of the last character that is annotated. Excluded, maximum value is `len(respectiveText)`.
 - `label` indicates what the passage indicated by $[start, end)$ was annotated as.
- - `label_text` contains the text that is annotated by $[start, end)$. This is not present in the JSONL dataset as it can be inferred from the `text` entry there.
- - `merged` indicates whether this annotation is the result of overlapping annotations of the same label. In that case, `annotation_id` contains the IDs of the individual annotations it was constructed of, separated by underscores. This value is not present in the JSONL dataset, and this column is redundant, as it can be inferred from `annotation_id`.
 
- ## NER models
- 
- Based on the annotations above, six separate NER classifiers were trained, one for each label type. This was done in order to allow overlapping annotations. For example, in the passage "Dieses Projekt wurde an der Universität Salzburg durchgeführt", you would want to categorise "Universität Salzburg" as an organisation while also extracting "Salzburg" as a location.
 
- To achieve this overlap, each text passage must be run through all the classifiers individually and each classifier's results need to be combined. For details on how the training was done and examples of inference time, see [`LelViLamp/kediff-ner-training`](https://github.com/LelViLamp/kediff-ner-training).
 
 The [`dbmdz/bert-base-historic-multilingual-cased`](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) tokeniser was used to create historical embeddings. Therefore, this tokeniser must also be used when applying these NER models.
 
- The models' performance measures are shown in the following table. Click the model name to find the model on the Huggingface Hub.
 
 | Model | Selected Epoch | Checkpoint | Validation Loss | Precision | Recall | F<sub>1</sub> | Accuracy |
 |:-------------------------------------------------------------------|:--------------:|-----------:|----------------:|----------:|--------:|--------------:|---------:|
@@ -126,7 +87,5 @@ The models' performance measures are shown in the following table. Click the mod
 | [`PER`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-per) | 2 | `2786` | .059186 | .914037 | .849048 | .879070 | .983253 |
 | [`TIME`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-time) | 1 | `1393` | .016120 | .866866 | .724958 | .783099 | .994631 |
 
 ## Acknowledgements
- The data set and models were created in the project _Kooperative Erschließung diffusen Wissens_ ([KEDiff](https://uni-salzburg.elsevierpure.com/de/projects/kooperative-erschließung-diffusen-wissens-ein-literaturwissenscha)), funded by the [State of Salzburg](https://salzburg.gv.at), Austria, and carried out at [Paris Lodron Universität Salzburg](https://plus.ac.at). 🇦🇹
 
 ---
+ task_categories:
+ - token-classification
 language:
 - de
 - la
 - fr
 - en
 tags:
 - historical
+ pretty_name: >-
+   Annotations and models for named entity recognition on Oberdeutsche Allgemeine Litteraturzeitung of the first quarter of 1788
 ---
 # OALZ/1788/Q1/NER
 
+ - [Postprocessing](https://github.com/LelViLamp/kediff-doccano-postprocessing)
+ - [Training](https://github.com/LelViLamp/kediff-ner-training)
+ - Published datasets ([union](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations-union-dataset), [**_merged union_**](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations-merged-union-dataset)) and models ([`EVENT`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-event), [`LOC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-loc), [`MISC`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-misc), [`ORG`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-org), [`PER`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-per), [`TIME`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-time))
 
+ A named entity recognition system (NER) was trained on text extracted from _Oberdeutsche Allgemeine Litteraturzeitung_ (OALZ) of the first quarter (January, February, March) of 1788. The scans from which the text was extracted can be found at [Bayerische Staatsbibliothek](https://www.digitale-sammlungen.de/de/view/bsb10628753?page=,1); the text was extracted using the strategy of the _KEDiff_ project, which can be found at [`cborgelt/KEDiff`](https://github.com/cborgelt/KEDiff).
 
 ## Annotations
 
+ Each text passage was annotated in [doccano](https://github.com/doccano/doccano) by two or three annotators, and their annotations were cleaned and merged into one dataset. For details on how this was done, see [`LelViLamp/kediff-doccano-postprocessing`](https://github.com/LelViLamp/kediff-doccano-postprocessing). In total, the text consists of about 1.7m characters. The resulting annotation datasets were published on the Hugging Face Hub. There are two versions of the dataset:
+ - [`union-dataset`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations-union-dataset) contains the texts split into chunks, as they were presented in the annotation application doccano; it results from preprocessing step 5a.
+ - [`merged-union-dataset`](https://huggingface.co/datasets/LelViLamp/oalz-1788-q1-ner-annotations-merged-union-dataset) does not retain this split. The text was merged into one long text and annotation indices were adapted in preprocessing step 5b.
 
+ Note that both of these dataset repositories contain three equivalent formats each:
+ - a Huggingface/Arrow dataset,<sup>*</sup>
+ - a CSV,<sup>*</sup> and
+ - a JSONL file.
+ 
+ <sup>*</sup> The former two should be used together with the provided `text.csv` to recover the context of an annotation. The latter JSONL file contains the full text.
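For example, the merged annotations can be loaded directly from the Hub with the `datasets` library. This is a minimal sketch, assuming only the standard Hub layout declared in the YAML header above:

```python
# Minimal sketch: load the published annotations from the Hugging Face Hub.
from datasets import load_dataset

annotations = load_dataset(
    "LelViLamp/oalz-1788-q1-ner-annotations-merged-union-dataset",
    split="train",
)
# Each record is one annotation with the columns described below,
# e.g. line_id, start, end, label, label_text.
print(annotations[0])
```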
 
 The following categories were included in the annotation process:
 
 | `PER` | Person | 7,055 | 64,710 | 7 | 9.17 | 9.35 |
 | `TIME` | Dates & Time | 1,076 | 13,154 | 8 | 12.22 | 10.98 |
 
+ ## NER models
 
+ Based on the annotations above, six separate NER classifiers were trained, one for each label type. This was done in order to allow overlapping annotations. For example, in the passage "Dieses Projekt wurde an der Universität Salzburg durchgeführt", you would want to categorise "Universität Salzburg" as an organisation while also extracting "Salzburg" as a location. This would result in an annotation like this:
 
 ```json
 {
 }
 ```
 
+ Example entries as used in the CSV and Huggingface dataset:
 
+ | `annotation_id` | `line_id`  | `start` | `end` | `label` | `label_text`         | `merged` |
+ |:----------------|:-----------|--------:|------:|:--------|:---------------------|:--------:|
+ | $n$             | example-42 |      28 |    49 | ORG     | Universität Salzburg |   ???    |
+ | $n+1$           | example-42 |      40 |    49 | LOC     | Salzburg             |   ???    |
 
 The columns mean:
+ - `annotation_id` was assigned internally by enumerating all annotations. It is not present in the JSONL format.
 - `line_id` is the fragment of the subdivided text, as shown in doccano. Called `id` in the JSONL dataset.
 - `start` index of the first character that is annotated. Included, starts with 0.
 - `end` index of the last character that is annotated. Excluded, maximum value is `len(respectiveText)`.
 - `label` indicates what the passage indicated by $[start, end)$ was annotated as.
+ - `label_text` contains the text that is annotated by $[start, end)$. It is not present in the JSONL dataset, as it can be inferred from the `text` entry there.
+ - `merged` indicates whether this annotation is the result of overlapping annotations of the same label. In that case, `annotation_id` contains the IDs of the individual annotations it was constructed of. It is not present in the JSONL dataset.
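As a sketch of how these offsets behave: `start` and `end` follow the same half-open $[start, end)$ convention as Python slicing. The indices below are computed for illustration rather than taken from the table above:

```python
# Sketch: recover the annotated surface form from [start, end) offsets.
line = "Dieses Projekt wurde an der Universität Salzburg durchgeführt"

start = line.index("Universität")           # inclusive start index
end = start + len("Universität Salzburg")   # exclusive end index
print(line[start:end])                      # -> "Universität Salzburg" (label ORG)
```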
 
 
 
+ To achieve this overlap, each text passage must be run through all the classifiers individually and each classifier's results need to be combined. For details on how the training was done, see [`LelViLamp/kediff-ner-training`](https://github.com/LelViLamp/kediff-ner-training).
 
 The [`dbmdz/bert-base-historic-multilingual-cased`](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) tokeniser was used to create historical embeddings. Therefore, this tokeniser must also be used when applying these NER models.
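A minimal inference sketch, assuming the six checkpoints expose the standard `transformers` token-classification head: each passage is run through every model and the per-label results are concatenated, so overlapping entities survive.

```python
# Sketch: combine the six single-label NER models to allow overlapping entities.
from transformers import pipeline

LABELS = ["event", "loc", "misc", "org", "per", "time"]
TOKENIZER = "dbmdz/bert-base-historic-multilingual-cased"

text = "Dieses Projekt wurde an der Universität Salzburg durchgeführt"

entities = []
for label in LABELS:
    ner = pipeline(
        "token-classification",
        model=f"LelViLamp/oalz-1788-q1-ner-{label}",
        tokenizer=TOKENIZER,
        aggregation_strategy="simple",
    )
    entities.extend(ner(text))  # keep every model's spans, overlaps included

print(entities)
```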
78
 
79
+ The models' performance measures are as follows:
80
 
81
  | Model | Selected Epoch | Checkpoint | Validation Loss | Precision | Recall | F<sub>1</sub> | Accuracy |
82
  |:-------------------------------------------------------------------|:--------------:|-----------:|----------------:|----------:|--------:|--------------:|---------:|
 
87
  | [`PER`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-per) | 2 | `2786` | .059186 | .914037 | .849048 | .879070 | .983253 |
88
  | [`TIME`](https://huggingface.co/LelViLamp/oalz-1788-q1-ner-time) | 1 | `1393` | .016120 | .866866 | .724958 | .783099 | .994631 |
89
 
 
 
90
  ## Acknowledgements
91
+ The data set and models were created in the project _Kooperative Erschließung diffusen Wissens_ ([KEDiff](https://uni-salzburg.elsevierpure.com/de/projects/kooperative-erschließung-diffusen-wissens-ein-literaturwissenscha)), funded by the [State of Salzburg](https://salzburg.gv.at), Austria 🇦🇹, and carried out at [Paris Lodron Universität Salzburg](https://plus.ac.at).