---
language:
  - de
---

# HisGermaNER: NER Datasets for Historical German

In this repository we release another NER dataset from historical German newspapers.

## Newspaper corpus

In the first release of our dataset, we select 11 newspaper issues from 1720 to 1840 from the Austrian National Library (ONB), resulting in 100 pages:

| Year | ONB ID | Newspaper | URL | Pages |
|------|--------|-----------|-----|-------|
| 1720 | ONB_wrz_17200511 | Wiener Zeitung | Viewer | 10 |
| 1730 | ONB_wrz_17300603 | Wiener Zeitung | Viewer | 14 |
| 1740 | ONB_wrz_17401109 | Wiener Zeitung | Viewer | 12 |
| 1770 | ONB_rpr_17700517 | Reichspostreuter | Viewer | 4 |
| 1780 | ONB_wrz_17800701 | Wiener Zeitung | Viewer | 24 |
| 1790 | ONB_pre_17901030 | Preßburger Zeitung | Viewer | 12 |
| 1800 | ONB_ibs_18000322 | Intelligenzblatt von Salzburg | Viewer | 8 |
| 1810 | ONB_mgs_18100508 | Morgenblatt für gebildete Stände | Viewer | 4 |
| 1820 | ONB_wan_18200824 | Der Wanderer | Viewer | 4 |
| 1830 | ONB_ild_18300713 | Das Inland | Viewer | 4 |
| 1840 | ONB_hum_18400625 | Der Humorist | Viewer | 4 |

## Data Workflow

In the first step, we obtain original scans from ONB for our selected newspapers. In the second step, we perform OCR using Transkribus.

We use the Transkribus print M1 model for OCR. Note: we also experimented with an existing NewsEye model, but the print M1 model is newer and led to better performance in our preliminary experiments.

Only layout hints/fixes were made in Transkribus; no OCR corrections or normalizations were performed.

We export all newspaper pages to plain text and normalize hyphenation and the `=` character. After normalization, we tokenize the plain-text pages using the PreTokenizer of the hmBERT model.
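The following is a minimal sketch of this step, assuming that hmBERT refers to the `dbmdz/bert-base-historic-multilingual-cased` checkpoint; the `=`-merging rule and the example snippet are only illustrative and not our exact normalization script:

```python
import re
from transformers import AutoTokenizer

# Illustrative page snippet with historical "=" hyphenation at a line break.
raw_page = "den Pöbel noch mehr in Har=\nnisch. Sie legten sogleich"

# Merge words that were split across lines with a trailing "=" (illustrative rule only).
normalized = re.sub(r"=\n\s*", "", raw_page)

# Assumed hmBERT checkpoint; pre_tokenize_str requires a fast tokenizer.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")

# Apply only the pre-tokenization step, not the full subword tokenization.
pre_tokenized = tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(normalized)
tokens = [token for token, _offsets in pre_tokenized]
print(tokens)  # ['den', 'Pöbel', 'noch', 'mehr', 'in', 'Harnisch', '.', 'Sie', 'legten', 'sogleich']
```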

After pre-tokenization, we import the corpus into Argilla to start the annotation of named entities. Note: we perform annotation at page/document level, so no sentence segmentation is needed or performed. During annotation we also manually mark sentence boundaries using a special EOS tag.
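A minimal sketch of such an import, using the Argilla v1 Python client; the API URL, API key and dataset name are placeholders, and the exact calls may differ between Argilla versions:

```python
import argilla as rg

# Placeholder connection details for a (self-hosted) Argilla instance.
rg.init(api_url="http://localhost:6900", api_key="argilla.apikey")

# One record per newspaper page: the pre-tokenized tokens plus some page metadata.
tokens = ["den", "Pöbel", "noch", "mehr", "in", "Harnisch", ".", "Sie", "legten", "sogleich"]
record = rg.TokenClassificationRecord(
    text=" ".join(tokens),
    tokens=tokens,
    metadata={"onb_id": "ONB_wrz_17800701", "page_nr": 12},
)

# Log the page into a (hypothetical) annotation dataset.
rg.log(records=[record], name="hisgermaner-pages")
```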

After the annotation process, the dataset is exported into a CoNLL-like format. The EOS tag is removed, and the information about a potential end of sentence is stored in a special column.
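A small sketch of this conversion step, using a hypothetical list of annotated tokens; the column layout corresponds to the example in the Dataset Format section below:

```python
# Hypothetical annotated tokens: (token, NE tag), with a special "EOS" token marking sentence ends.
annotated = [
    ("in", "O"), ("Harnisch", "O"), (".", "O"), ("EOS", None),
    ("Sie", "O"), ("legten", "O"), ("sogleich", "O"),
]

rows = []
for token, tag in annotated:
    if token == "EOS":
        # Drop the EOS marker and flag the previous token in the MISC column instead.
        prev_token, prev_tag, _ = rows[-1]
        rows[-1] = (prev_token, prev_tag, "EndOfSentence")
        continue
    rows.append((token, tag, "_"))

for token, tag, misc in rows:
    print(f"{token}\t{tag}\t{misc}")
```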

## Annotation Guidelines

We use the same NE types (PER, LOC and ORG) and annotation guidelines as used in the awesome Europeana NER Corpora.

Furthermore, we introduced some dataset-specific annotation rules:

- PER: We include e.g. Kaiser, Lord, Cardinal or Graf in the NE span, but not Herr, Fräulein, General or other ranks/grades.
- LOC: We exclude Königreich from the NE span.

## Dataset Format

Our dataset format is inspired by the HIPE-2022 Shared Task. Here's an example of an annotated document:

```
TOKEN	NE-COARSE-LIT	MISC

-DOCSTART-	O	_

# onb:id = ONB_wrz_17800701
# onb:image_link = https://anno.onb.ac.at/cgi-content/anno?aid=wrz&datum=17800701&seite=12
# onb:page_nr = 12
# onb:publication_year_str = 17800701
den	O	_
Pöbel	O	_
noch	O	_
mehr	O	_
in	O	_
Harnisch	O	_
.	O	EndOfSentence
Sie	O	_
legten	O	_
sogleich	O	_
```
Note: we include a `-DOCSTART-` marker to allow e.g. document-level features for NER, as proposed in the FLERT paper.
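To make the format concrete, here is a sketch of a reader (with a hypothetical file name) that reconstructs documents and sentences from the `-DOCSTART-` markers and the `EndOfSentence` values in the MISC column:

```python
def read_hisgermaner(path):
    """Parse the CoNLL-like TSV into a list of documents, each a list of (token, tag) sentences."""
    documents, sentence = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("#") or line.startswith("TOKEN\t"):
                continue  # skip blank lines, onb:* metadata lines and the header row
            token, tag, misc = line.split("\t")
            if token == "-DOCSTART-":
                if sentence:  # flush an unfinished sentence of the previous document
                    documents[-1].append(sentence)
                    sentence = []
                documents.append([])  # start a new document
                continue
            sentence.append((token, tag))
            if misc == "EndOfSentence":
                documents[-1].append(sentence)
                sentence = []
    if sentence:  # flush a trailing sentence without an explicit EndOfSentence mark
        documents[-1].append(sentence)
    return documents

# Hypothetical file name:
# documents = read_hisgermaner("HisGermaNER_train.tsv")
```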

## Dataset Splits

For training powerful NER models on the dataset, we manually split the dataset into training, development and test splits.
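For example, a split file could be downloaded and inspected as follows; the file name is a placeholder and the repository id is assumed to be `stefan-it/HisGermaNER`:

```python
from huggingface_hub import hf_hub_download

# Repository id and file name are assumptions; adjust them to the actual split files.
path = hf_hub_download(
    repo_id="stefan-it/HisGermaNER",
    filename="HisGermaNER_train.tsv",  # hypothetical file name
    repo_type="dataset",
)

# Count documents in the training split via the -DOCSTART- markers.
with open(path, encoding="utf-8") as f:
    num_documents = sum(1 for line in f if line.startswith("-DOCSTART-"))
print(num_documents, "training documents")
```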