cneud Jrglmn committed on
Commit
7357b69
1 Parent(s): 8069f07

Complete update of the model card (#1)


- Complete update of the model card (6d848c0c74c0e1636226d3848c648cb9bd2e67c6)


Co-authored-by: Jörg Lehmann <Jrglmn@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +258 -21
README.md CHANGED
@@ -9,31 +9,268 @@ datasets:
  - germeval_14
  license: apache-2.0
  ---
- # About `sbb_ner`
-
- This is a BERT model for named entity recognition (NER) in historical German.
- It predicts the classes `PER`, `LOC` and `ORG`. The model is based on the 🤗
- [`BERT base multilingual cased`](https://huggingface.co/bert-base-multilingual-cased) model.
-
- We applied unsupervised pre-training on 2,333,647 pages of
- unlabeled historical German text from the Berlin State Library
- digital collections, and supervised pre-training on two datasets
- with contemporary German text, [conll2003](https://huggingface.co/models?dataset=dataset:conll2003)
- and [germeval_14](https://huggingface.co/models?dataset=dataset:germeval_14).
-
- For further details, have a look at [sbb_ner](https://github.com/qurator-spk/sbb_ner) on GitHub.
-
- # Results
-
- In a 5-fold cross validation with different historical German NER corpora
- (see our *KONVENS2019* [paper](https://corpora.linguistik.uni-erlangen.de/data/konvens/proceedings/papers/KONVENS2019_paper_4.pdf)),
- the model obtained an F1-Score of **84.3**±1.1%.
-
- In the *CLEF-HIPE-2020* Shared Task ([paper](http://ceur-ws.org/Vol-2696/paper_255.pdf)),
- the model ranked 2nd of 13 systems for the German coarse NER task.
-
- # Weights
- We provide model weights for PyTorch.
- | Model | Downloads
- | ------------------------| ------------------------
- | `bert-sbb-de-finetuned` | [`config.json`](https://huggingface.co/SBB/sbb_ner/blob/main/config.json) • [`pytorch_model_ep7.bin`](https://huggingface.co/SBB/sbb_ner/blob/main/pytorch_model_ep7.bin) • [`vocab.txt`](https://huggingface.co/SBB/sbb_ner/blob/main/vocab.txt)
+ # Model Card for sbb_ner
+
+ <!-- Provide a quick summary of what the model is/does. [Optional] -->
+ A BERT model trained on three German corpora containing contemporary and historical texts for named entity recognition tasks. It predicts the classes PER, LOC and ORG.
+ Questions and comments about the model can be directed to Clemens Neudecker at clemens.neudecker@sbb.spk-berlin.de.
+
+
+ # Table of Contents
+
+ - [Model Card for sbb_ner](#model-card-for-sbb_ner)
+ - [Table of Contents](#table-of-contents)
+ - [Model Details](#model-details)
+ - [Model Description](#model-description)
+ - [Uses](#uses)
+ - [Direct Use](#direct-use)
+ - [Downstream Use](#downstream-use)
+ - [Out-of-Scope Use](#out-of-scope-use)
+ - [Bias, Risks, and Limitations](#bias-risks-and-limitations)
+ - [Recommendations](#recommendations)
+ - [Training Details](#training-details)
+ - [Training Data](#training-data)
+ - [Training Procedure](#training-procedure)
+ - [Preprocessing](#preprocessing)
+ - [Speeds, Sizes, Times](#speeds-sizes-times)
+ - [Evaluation](#evaluation)
+ - [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
+ - [Testing Data](#testing-data)
+ - [Factors](#factors)
+ - [Metrics](#metrics)
+ - [Results](#results)
+ - [Model Examination](#model-examination)
+ - [Environmental Impact](#environmental-impact)
+ - [Technical Specifications [optional]](#technical-specifications-optional)
+ - [Model Architecture and Objective](#model-architecture-and-objective)
+ - [Compute Infrastructure](#compute-infrastructure)
+ - [Hardware](#hardware)
+ - [Software](#software)
+ - [Citation](#citation)
+ - [Glossary [optional]](#glossary-optional)
+ - [More Information [optional]](#more-information-optional)
+ - [Model Card Authors [optional]](#model-card-authors-optional)
+ - [Model Card Contact](#model-card-contact)
+ - [How to Get Started with the Model](#how-to-get-started-with-the-model)
+
+
+ # Model Details
+
+ ## Model Description
+
+ <!-- Provide a longer summary of what this model is/does. -->
+ A BERT model trained on three German corpora containing contemporary and historical texts for named entity recognition tasks.
+ It predicts the classes PER, LOC and ORG.
+
+ - **Developed by:** [Kai Labusch](https://huggingface.co/labusch), [Clemens Neudecker](https://huggingface.co/cneud), David Zellhöfer
+ - **Shared by [Optional]:** [Staatsbibliothek zu Berlin / Berlin State Library](https://huggingface.co/SBB)
+ - **Model type:** Language model
+ - **Language(s) (NLP):** de
+ - **License:** apache-2.0
+ - **Parent Model:** The BERT base multilingual cased model as provided by [Google](https://huggingface.co/bert-base-multilingual-cased)
+ - **Resources for more information:**
+   - [GitHub Repo](https://github.com/qurator-spk/sbb_ner)
+   - [Associated Paper](https://konvens.org/proceedings/2019/papers/KONVENS2019_paper_4.pdf)
+
+
+ # Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ## Direct Use
+
+ The model can be used directly to perform NER on historical German texts obtained by OCR from digitized documents.
+ Supported entity types are PER, LOC and ORG.
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+ <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
+
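+ As an illustration, the model can be queried through the 🤗 `transformers` token-classification pipeline. The snippet below is a minimal sketch, not the authors' documented inference code (that lives in the GitHub repository); the `aggregation_strategy` setting and the example sentence are assumptions.
+
+ ```python
+ from transformers import pipeline
+
+ # A minimal sketch, assuming the weights in this repository load with the
+ # standard auto classes; the authors' own inference code is on GitHub.
+ ner = pipeline("token-classification",
+                model="SBB/sbb_ner",
+                aggregation_strategy="simple")  # merge word pieces into entities
+
+ # A made-up sentence in the style of historical German text.
+ print(ner("Herr Schmidt reiste von Berlin nach Köln."))
+ ```
+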
+ ## Downstream Use
+
+ The model has been pre-trained on 2,300,000 pages of OCR text from the digitized collections of the Berlin State Library.
+ It is therefore adapted to OCR-error-prone historical German texts and might be used for particular applications that involve such text material.
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+ <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
+
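+ For such applications, the weights can serve as the starting point for further task-specific fine-tuning. A sketch of that pattern with the standard 🤗 `transformers` `Trainer` follows; `my_ocr_ner_dataset` is a hypothetical, already tokenized and label-aligned dataset standing in for whatever domain data a downstream project brings.
+
+ ```python
+ from transformers import (AutoModelForTokenClassification, AutoTokenizer,
+                           Trainer, TrainingArguments)
+
+ tokenizer = AutoTokenizer.from_pretrained("SBB/sbb_ner")
+ model = AutoModelForTokenClassification.from_pretrained("SBB/sbb_ner")
+
+ # Sketch only: my_ocr_ner_dataset is a hypothetical token-classification
+ # dataset of OCR'd historical German, tokenized and label-aligned upfront.
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(output_dir="sbb_ner_finetuned", num_train_epochs=3),
+     train_dataset=my_ocr_ner_dataset,
+ )
+ trainer.train()
+ ```
+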
+ ## Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+ <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
+
+
+ # Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ The identification of named entities in historical and contemporary texts contributes to knowledge creation, with the aim of enhancing scientific research and the discoverability of information in digitized historical texts. The model was developed to improve this knowledge creation process, an endeavour that is not for profit. The results of the applied model are freely accessible to the users of the digital collections of the Berlin State Library. Against this backdrop, no ethical challenges could be identified. As a limitation, it has to be noted that performance on historical text could still be improved considerably by adding more historical ground-truth data.
+
+ ## Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ The general observation that historical texts often remain silent about subjects from the colonies and address them anonymously cannot be remedied by named entity recognition. Disambiguation of named entities proves to be challenging beyond the task of automatically identifying entities. The existence of broad variations in the spelling of person and place names, owing to non-normalized orthography and linguistic change, as well as context-dependent changes in the naming of places, adds to this challenge. Historical texts, especially newspapers, contain narrative descriptions and visual representations of minorities and disadvantaged groups without naming them; de-anonymizing such persons and groups is a research task in itself, which has only begun to be tackled in the 2020s.
+
+
+ # Training Details
+
+ ## Training Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ 1) CoNLL 2003 German Named Entity Recognition Ground Truth (Tjong Kim Sang and De Meulder, 2003)
+ 2) GermEval Konvens 2014 Shared Task Data (Benikova et al., 2014)
+ 3) DC-SBB Digital Collections of the Berlin State Library (Labusch and Zellhöfer, 2019)
+ 4) Europeana Newspapers Historic German Datasets (Neudecker, 2016)
+
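+ The two contemporary corpora are distributed through the Hugging Face Hub; a sketch of loading them with the `datasets` library follows. The dataset ids are assumptions based on this card's dataset tags, and the DC-SBB and Europeana corpora are not available this way.
+
+ ```python
+ from datasets import load_dataset
+
+ # Sketch: fetch the two contemporary NER corpora from the Hugging Face Hub.
+ conll = load_dataset("conll2003")
+ germeval = load_dataset("germeval_14")
+
+ # Both corpora expose tokens plus per-token NER tags.
+ example = germeval["train"][0]
+ print(example["tokens"][:10], example["ner_tags"][:10])
+ ```
+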
+ ## Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ The BERT model is trained directly on the NER task, following the method proposed by the BERT authors (Devlin et al., 2018). We applied unsupervised pre-training on 2,333,647 pages of unlabeled historical German text from the Berlin State Library digital collections, and supervised pre-training on two datasets with contemporary German text, conll2003 and germeval_14. The two stages are combined: unsupervised pre-training on the DC-SBB data comes first, supervised pre-training on contemporary NER ground truth second. Performance on different combinations of training and test sets was explored, and a 5-fold cross-validation and a comparison with state-of-the-art approaches were conducted.
+
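+ Schematically, this corresponds to continuing BERT's masked-language-model objective on the unlabeled OCR text and then training a token-classification head on the labeled NER data. The sketch below compresses that recipe; the corpus variables are hypothetical, and the authors' actual training scripts are in the GitHub repository.
+
+ ```python
+ from transformers import AutoModelForMaskedLM, AutoModelForTokenClassification
+
+ base = "bert-base-multilingual-cased"
+
+ # Stage 1 (sketch): unsupervised pre-training, i.e. continuing the masked-LM
+ # objective on unlabeled historical OCR text (hypothetical corpus).
+ mlm_model = AutoModelForMaskedLM.from_pretrained(base)
+ # ... train mlm_model on the DC-SBB OCR text, then persist the weights ...
+ mlm_model.save_pretrained("bert-sbb-de")
+
+ # Stage 2 (sketch): supervised pre-training for NER on contemporary ground
+ # truth, starting from the adapted weights; the classification head is new.
+ ner_model = AutoModelForTokenClassification.from_pretrained(
+     "bert-sbb-de", num_labels=7)  # B-/I- tags for PER, LOC, ORG plus O
+ # ... train ner_model on conll2003 and germeval_14 ...
+ ```
+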
+ ### Preprocessing
+
+ The model was pre-trained on 2,300,000 pages of German text from the digitized collections of the Berlin State Library.
+ The texts were obtained by OCR from the page scans of the documents.
+
+ ### Speeds, Sizes, Times
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ Since the model is an incarnation of the original BERT model published by Google, all speed, size and time considerations of that original model hold.
+
+
+ # Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+ The model has been evaluated by 5-fold cross-validation on several German historical OCR ground-truth datasets.
+ See the publication for details.
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ <!-- This should link to a Data Card if possible. -->
+
+ Two different test sets contained in the CoNLL 2003 German Named Entity Recognition Ground Truth,
+ i.e. TEST-A and TEST-B, have been used for testing (DE-CoNLL-TEST).
+ Additionally, historical OCR-based ground-truth datasets have been used for testing; see the publication for details.
+
+ ### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ The evaluation focuses on NER in historical German documents; see the publication for details.
+
+ ### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ The performance metrics used in the evaluation are precision, recall and F1-score.
+ See the paper for the actual results in terms of these metrics.
+
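+ For illustration, entity-level precision, recall and F1-score can be computed with the `seqeval` package, a common choice for CoNLL-style evaluation; using it here is an assumption, not necessarily the authors' tooling.
+
+ ```python
+ from seqeval.metrics import classification_report
+
+ # Toy example: gold and predicted BIO tag sequences for two sentences.
+ y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["B-ORG", "O"]]
+ y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "O"]]
+
+ # Entity-level precision, recall and F1 per class, plus averages.
+ print(classification_report(y_true, y_pred))
+ ```
+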
+ ## Results
+
+ See the publication.
+
+
+ # Model Examination
+
+ See the publication.
+
+
+ # Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** V100
+ - **Hours used:** Roughly 1-2 weeks for pre-training. Roughly 1 hour for the final NER training.
+ - **Cloud Provider:** No cloud.
+ - **Compute Region:** Germany.
+ - **Carbon Emitted:** More information needed
+
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ See the original BERT publication.
+
+ ## Compute Infrastructure
+
+ Training and pre-training have been performed on a single V100.
+
+ ### Hardware
+
+ See above.
+
+ ### Software
+
+ See the published code on [GitHub](https://github.com/qurator-spk/sbb_ner).
+
+
+ # Citation
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ @article{labusch_bert_2019,
+     title = {{BERT} for {Named} {Entity} {Recognition} in {Contemporary} and {Historical} {German}},
+     volume = {Conference on Natural Language Processing},
+     url = {https://konvens.org/proceedings/2019/papers/KONVENS2019_paper_4.pdf},
+     abstract = {We apply a pre-trained transformer based representational language model, i.e. BERT (Devlin et al., 2018), to named entity recognition (NER) in contemporary and historical German text and observe state of the art performance for both text categories. We further improve the recognition performance for historical German by unsupervised pre-training on a large corpus of historical German texts of the Berlin State Library and show that best performance for historical German is obtained by unsupervised pre-training on historical German plus supervised pre-training with contemporary NER ground-truth.},
+     language = {en},
+     author = {Labusch, Kai and Neudecker, Clemens and Zellhöfer, David},
+     year = {2019},
+     pages = {9},
+ }
+
+ **APA:**
+
+ (Labusch et al., 2019)
+
+
+ # Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More information needed]
+
+
+ # More Information [optional]
+
+ [More information needed]
+
+
+ # Model Card Authors [optional]
+
+ <!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
+
+ [Kai Labusch](mailto:kai.labusch@sbb.spk-berlin.de) and [Jörg Lehmann](https://huggingface.co/Jrglmn)
+
+
+ # Model Card Contact
+
+ Questions and comments about the model can be directed to Clemens Neudecker at clemens.neudecker@sbb.spk-berlin.de;
+ questions and comments about the model card can be directed to Jörg Lehmann at joerg.lehmann@sbb.spk-berlin.de.
+
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ How to get started with this model is explained in the README file of the GitHub repository [over here](https://github.com/qurator-spk/sbb_ner).
+ </details>
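+
+ As a self-contained starting point, a sketch of a raw forward pass with the standard 🤗 auto classes follows; it assumes the repository's weights load with these classes and is not the authors' documented entry point. The example sentence is made up.
+
+ ```python
+ import torch
+ from transformers import AutoModelForTokenClassification, AutoTokenizer
+
+ # A sketch, assuming this repository loads with the standard auto classes;
+ # the authors' documented entry point is the GitHub repository above.
+ tokenizer = AutoTokenizer.from_pretrained("SBB/sbb_ner")
+ model = AutoModelForTokenClassification.from_pretrained("SBB/sbb_ner")
+
+ text = "Wilhelm von Humboldt gründete die Universität zu Berlin."
+ inputs = tokenizer(text, return_tensors="pt")
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ # Map the highest-scoring label id of each word piece to its tag name.
+ tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
+ labels = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
+ for token, label in zip(tokens, labels):
+     print(f"{token}\t{label}")
+ ```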