Jrglmn committed
Commit 59d8d00
1 Parent(s): 7357b69

Some minor changes to the Model Card published this morning


Added institution + developing projects to quick summary; corrected table of contents; added datasets and publications to the "More information" section

Files changed (1): README.md (+40 −36)

README.md CHANGED
@@ -19,20 +19,20 @@ license: apache-2.0
 
 <!-- Provide a quick summary of what the model is/does. [Optional] -->
 A BERT model trained on three German corpora containing contemporary and historical texts for named entity recognition tasks. It predicts the classes PER, LOC and ORG.
-Questions and comments about the model can be directed to Clemens Neudecker at clemens.neudecker@sbb.spk-berlin.de.
-
+The model was developed by the Berlin State Library (SBB) in the [QURATOR](https://staatsbibliothek-berlin.de/die-staatsbibliothek/projekte/project-id-1060-2018)
+and [Mensch.Maschine.Kultur](https://mmk.sbb.berlin/?lang=en) projects.
 
 
 
 # Table of Contents
 
-- [Model Card for sbb_ner](#model-card-for--model_id-)
+- [Model Card for sbb_ner](#model-card-for-sbb_ner)
 - [Table of Contents](#table-of-contents)
 - [Model Details](#model-details)
 - [Model Description](#model-description)
 - [Uses](#uses)
 - [Direct Use](#direct-use)
-- [Downstream Use [Optional]](#downstream-use-optional)
+- [Downstream Use [Optional]](#downstream-use)
 - [Out-of-Scope Use](#out-of-scope-use)
 - [Bias, Risks, and Limitations](#bias-risks-and-limitations)
 - [Recommendations](#recommendations)
@@ -70,13 +70,13 @@ Questions and comments about the model can be directed to Clemens Neudecker at clemens.neudecker@sbb.spk-berlin.de.
 A BERT model trained on three German corpora containing contemporary and historical texts for named entity recognition tasks.
 It predicts the classes PER, LOC and ORG.
 
-- **Developed by:** [Kai Labusch](https://huggingface.co/labusch), [Clemens Neudecker](https://huggingface.co/cneud), David Zellhöfer
+- **Developed by:** [Kai Labusch](kai.labusch@sbb.spk-berlin.de), [Clemens Neudecker](clemens.neudecker@sbb.spk-berlin.de), David Zellhöfer
 - **Shared by [Optional]:** [Staatsbibliothek zu Berlin / Berlin State Library](https://huggingface.co/SBB)
 - **Model type:** Language model
 - **Language(s) (NLP):** de
 - **License:** apache-2.0
-- **Parent Model:** The BERT base multilingual cased model as provided by [Google] (https://huggingface.co/bert-base-multilingual-cased)
-- **Resources for more information:** More information needed
+- **Parent Model:** The BERT base multilingual cased model as provided by [Google](https://huggingface.co/bert-base-multilingual-cased)
+- **Resources for more information:**
 - [GitHub Repo](https://github.com/qurator-spk/sbb_ner)
 - [Associated Paper](https://konvens.org/proceedings/2019/papers/KONVENS2019_paper_4.pdf)
 
@@ -87,7 +87,7 @@ It predicts the classes PER, LOC and ORG.
 
 ## Direct Use
 
-The model can directly be used to perform NER on historical german texts obtained by OCR from digitized documents.
+The model can directly be used to perform NER on historical German texts obtained by OCR from digitized documents.
 Supported entity types are PER, LOC and ORG.
 
 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
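As an illustration of the direct use described in the hunk above, here is a minimal sketch. It assumes the checkpoint is hosted on the Hub as `SBB/sbb_ner` and that it is compatible with the standard `transformers` token-classification pipeline; the GitHub README linked from the card remains the authoritative usage guide.

```python
# Hedged sketch: assumes the Hub id "SBB/sbb_ner" and compatibility with the
# standard token-classification pipeline; see the GitHub repo for the
# authoritative usage instructions.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SBB/sbb_ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

# Output: a list of dicts with entity_group (PER/LOC/ORG), score,
# and character offsets into the input string.
print(ner("Wilhelm von Humboldt gründete die Universität zu Berlin."))
```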
@@ -95,17 +95,22 @@ Supported entity types are PER, LOC and ORG.
 
 ## Downstream Use
 
-The model has been pre-trained on 2.300.000 pages of OCR-text of the digitized collections of Berlin State Library.
-Therefore it is adapted to OCR-error prone historical german texts and might be used for particular applications that involve such text material.
-
 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
 <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
+
+The model has been pre-trained on 2,300,000 pages of OCR text from the digitized collections of the Berlin State Library.
+It is therefore adapted to OCR-error-prone historical German texts and might be used for particular applications that involve such text material.
+
 
 ## Out-of-Scope Use
 
 <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
 <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
 
+More info needed.
+
 
 # Bias, Risks, and Limitations
 
@@ -113,6 +118,7 @@ Therefore it is adapted to OCR-error prone historical german texts and might be used for particular applications that involve such text material.
 
 The identification of named entities in historical and contemporary texts is a task contributing to knowledge creation aiming at enhancing scientific research and better discoverability of information in digitized historical texts. The aim of the development of this model was to improve this knowledge creation process, an endeavour that is not for profit. The results of the applied model are freely accessible for the users of the digital collections of the Berlin State Library. Against this backdrop, ethical challenges cannot be identified. As a limitation, it has to be noted that there is a lot of performance to gain for historical text by adding more historical ground-truth data.
 
+
 ## Recommendations
 
 <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
@@ -131,15 +137,16 @@ The general observation that historical texts often remain silent and avoid naming
 3) DC-SBB Digital Collections of the Berlin State Library (Labusch and Zellhöfer, 2019)
 4) Europeana Newspapers Historic German Datasets (Neudecker, 2016)
 
+
 ## Training Procedure
 
 <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
-The BERT model is trained directly with respect to the NER by implementation of the same method that has been proposed by the BERT authors (Devlin et al., 2018). We applied unsupervised pre-training on 2,333,647 pages of unlabeled historical German text from the Berlin State Library digital collections, and supervised pre-training on two datasets with contemporary German text, conll2003 and germeval_14. Unsupervised pre-training on the DC-SBB data as well as supervised pre-training on contemporary NER ground truth were applied. Unsupervised and supervised pretraining are combined where unsupervised is done first and supervised second. Performance on different combinations of training and test sets was explored, and a 5-fold cross validation and comparison with state of the art approaches was conducted.
+The BERT model is trained directly with respect to the NER by implementation of the same method that has been proposed by the BERT authors (Devlin et al., 2018). We applied unsupervised pre-training on 2,333,647 pages of unlabeled historical German text from the Berlin State Library digital collections, and supervised pre-training on two datasets with contemporary German text, conll2003 and germeval_14. Unsupervised pre-training on the DC-SBB data as well as supervised pre-training on contemporary NER ground truth were applied. Unsupervised and supervised pre-training are combined, with the unsupervised stage done first and the supervised stage second. Performance on different combinations of training and test sets was explored, and a 5-fold cross-validation and comparison with state-of-the-art approaches was conducted.
 
 ### Preprocessing
 
-The model was pretrained on 2.300.000 pages of german texts from the digitized collections of the Berlin State Library.
+The model was pre-trained on 2,300,000 pages of German texts from the digitized collections of the Berlin State Library.
 The texts have been obtained by OCR from the page scans of the documents.
 
 ### Speeds, Sizes, Times
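As an aside to the training procedure described in the hunk above (unsupervised pre-training first, supervised training second), the staging can be outlined as follows. This is a hypothetical sketch, not the project's actual code: the checkpoint path and label set are assumptions (a BIO tag set over PER/LOC/ORG), and the training loops themselves are elided; the real implementation lives in the GitHub repository.

```python
# Hypothetical outline of the two-stage procedure; the real implementation
# is in https://github.com/qurator-spk/sbb_ner.
from transformers import AutoTokenizer, BertForMaskedLM, BertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# Stage 1: unsupervised masked-language-model pre-training on the unlabeled
# OCR text of the DC-SBB digital collections (data loading/training elided).
mlm = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")
# ... MLM training on the DC-SBB pages would run here ...
mlm.save_pretrained("checkpoints/bert-sbb-mlm")  # hypothetical path

# Stage 2: supervised NER training, warm-started from the stage-1 weights,
# on contemporary ground truth (CoNLL 2003 German, GermEval 2014).
labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]  # assumed BIO scheme
ner_model = BertForTokenClassification.from_pretrained(
    "checkpoints/bert-sbb-mlm", num_labels=len(labels)
)
# ... token-classification training would run here ...
```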
@@ -147,12 +154,12 @@ The texts have been obtained by OCR from the page scans of the documents.
 <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
 
 Since it is an incarnation of the original BERT model published by Google, all the speed, size and time considerations of that original model hold.
-
 
 # Evaluation
 
 <!-- This section describes the evaluation protocols and provides the results. -->
-The model has been evaluated by 5-fold cross-validation on several german historical OCR ground truth datasets.
+
+The model has been evaluated by 5-fold cross-validation on several German historical OCR ground truth datasets.
 See publication for details.
 
 ## Testing Data, Factors & Metrics
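To make the evaluation protocol from the hunk above concrete, here is a minimal, generic sketch of document-level 5-fold cross-validation. The document list is a placeholder; the datasets, splits and metrics actually used are described in the publication.

```python
# Generic 5-fold cross-validation skeleton; the corpus below is a placeholder.
from sklearn.model_selection import KFold

documents = [f"doc_{i:03d}" for i in range(100)]  # stand-in for a GT corpus

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(documents)):
    train_docs = [documents[i] for i in train_idx]
    test_docs = [documents[i] for i in test_idx]
    # ... train the NER model on train_docs, compute F1 on test_docs ...
    print(f"fold {fold}: {len(train_docs)} train / {len(test_docs)} test")
```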
@@ -161,15 +168,15 @@
 
 <!-- This should link to a Data Card if possible. -->
 
-Two different test sets contained in the CoNLL 2003 German Named Entity Recognition Ground Truth,
-i.e. TEST-A and TEST-B, have been used for testing (DE-CoNLL-TEST).
-Additionaly historical OCR-based ground truth datasets have been used for testing - see publication for details.
+Two different test sets contained in the CoNLL 2003 German Named Entity Recognition Ground Truth, i.e. TEST-A and TEST-B, have been used for testing (DE-CoNLL-TEST).
+Additionally, historical OCR-based ground truth datasets have been used for testing; see the publication for details and the "More Information" section below.
+
 
 ### Factors
 
 <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
 
-The evaluation focuses on NER in historical germans documents, see publication for details.
+The evaluation focuses on NER in historical German documents; see the publication for details.
 
 ### Metrics
@@ -182,12 +189,10 @@ See paper for actual results in terms of these metrics.
 
 See publication.
 
-
 # Model Examination
 
 See publication.
 
-
 # Environmental Impact
 
 <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
@@ -195,12 +200,11 @@ See publication.
 Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
 - **Hardware Type:** V100
-- **Hours used:** Roughly 1-2 week(s) for pretraining. Roughly 1 hour for final NER-training.
+- **Hours used:** Roughly 1-2 weeks for pre-training; roughly 1 hour for the final NER training.
 - **Cloud Provider:** No cloud.
 - **Compute Region:** Germany.
 - **Carbon Emitted:** More information needed
 
-
 # Technical Specifications [optional]
 
 ## Model Architecture and Objective
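The hardware details in the hunk above are enough for a back-of-envelope estimate in the spirit of the ML Impact calculator. Every number in this sketch is an assumption (board power, duration, PUE, grid intensity), not a reported figure.

```python
# Back-of-envelope CO2 estimate; all inputs are assumptions, not measurements.
gpu_power_kw = 0.30    # assumed V100 board power (~300 W)
hours = 14 * 24        # assumed upper bound: ~2 weeks of pre-training
pue = 1.5              # assumed data-centre power usage effectiveness
kg_co2_per_kwh = 0.4   # assumed German grid carbon intensity

energy_kwh = gpu_power_kw * hours * pue
print(f"~{energy_kwh:.0f} kWh, ~{energy_kwh * kg_co2_per_kwh:.0f} kg CO2eq")
```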
@@ -217,8 +221,7 @@ See above.
 
 ### Software
 
-See published code on github.
-
+See the published code on [GitHub](https://github.com/qurator-spk/sbb_ner).
 
 # Citation
 
@@ -241,36 +244,37 @@ See published code on github.
 
 (Labusch et al., 2019)
 
-
 # Glossary [optional]
 
 <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
 
-[More information needed]
-
+More information needed.
 
 # More Information [optional]
 
-[More information needed]
+In addition to what has been documented above, it should be noted that there are two NER ground truth datasets available:
+
+1) [Data provided for the 2020 HIPE campaign on named entity processing](https://impresso.github.io/CLEF-HIPE-2020/)
+2) [Data provided for the 2022 HIPE shared task on named entity processing](https://hipe-eval.github.io/HIPE-2022/)
+
+Furthermore, two papers have been published on NER/NED using BERT:
+
+1) [Entity Linking in Multilingual Newspapers and Classical Commentaries with BERT](http://ceur-ws.org/Vol-3180/paper-85.pdf)
+2) [Named Entity Disambiguation and Linking Historic Newspaper OCR with BERT](http://ceur-ws.org/Vol-2696/paper_163.pdf)
 
 
 # Model Card Authors [optional]
 
 <!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
 
-[Kai Labusch](kai.labusch@sbb.spk-berlin.de) and [Jörg Lehmann](https://huggingface.co/Jrglmn)
+[Kai Labusch](kai.labusch@sbb.spk-berlin.de) and [Jörg Lehmann](joerg.lehmann@sbb.spk-berlin.de)
 
 
 # Model Card Contact
 
-Questions and comments about the model can be directed to Clemens Neudecker at clemens.neudecker@sbb.spk-berlin.de,
-questions and comments about the model card can be directed to Jörg Lehmann at joerg.lehmann@sbb.spk-berlin.de
-
+Questions and comments about the model can be directed to Clemens Neudecker at clemens.neudecker@sbb.spk-berlin.de; questions and comments about the model card can be directed to Jörg Lehmann at joerg.lehmann@sbb.spk-berlin.de.
 
 # How to Get Started with the Model
 
-Use the code below to get started with the model.
 
-<details>
 How to get started with this model is explained in the README file of the GitHub repository [over here](https://github.com/qurator-spk/sbb_ner).
-</details>
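Since the card defers usage instructions to the GitHub README, the only Hub-side step worth sketching here is fetching the weights. This is a hedged sketch that assumes the checkpoint is hosted as `SBB/sbb_ner`.

```python
# Hedged sketch: download the model files before following the GitHub README;
# assumes the Hub repo id "SBB/sbb_ner".
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="SBB/sbb_ner")
print(local_path)  # local directory containing the checkpoint files
```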
 