---
license: cc-by-4.0
task_categories:
- token-classification
language:
- de
---

# Filtered GermEval 2014 NER Dataset

This repository hosts a filtered version of the great [GermEval 2014](https://sites.google.com/site/germeval2014ner/) NER Dataset.

An analysis of the annotated examples shows that the dataset is heavily biased toward Wikipedia articles.

# Dataset Stats

The following tables give an overview of the top 10 domains (TLDs) from which the annotated examples were retrieved, for the training, development and test splits:

## Training Split

| TLD                  | Number of examples (Percentage)   |
|:---------------------|:--------------------------------- |
| wikipedia.org        | 12,007 (50.03%)                   |
| welt.de              |    662 (2.76%)                    |
| spiegel.de           |    512 (2.13%)                    |
| tagesspiegel.de      |    424 (1.77%)                    |
| handelsblatt.com     |    369 (1.54%)                    |
| fr-aktuell.de        |    344 (1.43%)                    |
| sueddeutsche.de      |    308 (1.28%)                    |
| abendblatt.de        |    283 (1.18%)                    |
| berlinonline.de      |    255 (1.06%)                    |
| szon.de              |    249 (1.04%)                    |

## Development Split

| TLD                  | Number of examples (Percentage)   |
|:---------------------|:--------------------------------- |
| wikipedia.org        | 1,119 (50.86%)                    |
| welt.de              |    46 (2.09%)                     |
| spiegel.de           |    43 (1.95%)                     |
| fr-aktuell.de        |    38 (1.73%)                     |
| tagesspiegel.de      |    37 (1.68%)                     |
| handelsblatt.com     |    35 (1.59%)                     |
| sueddeutsche.de      |    28 (1.27%)                     |
| szon.de              |    25 (1.14%)                     |
| feedsportal.com      |    24 (1.09%)                     |
| berlinonline.de      |    22 (1.0%)                      |

## Test Split

| TLD                  | Number of examples (Percentage)   |
|:---------------------|:--------------------------------- |
| wikipedia.org        | 2,547 (49.94%)                    |
| welt.de              |   139 (2.73%)                     |
| spiegel.de           |    88 (1.73%)                     |
| tagesspiegel.de      |    86 (1.69%)                     |
| handelsblatt.com     |    84 (1.65%)                     |
| sueddeutsche.de      |    78 (1.53%)                     |
| abendblatt.de        |    72 (1.41%)                     |
| fr-aktuell.de        |    62 (1.22%)                     |
| berlinonline.de      |    59 (1.16%)                     |
| szon.de              |    57 (1.12%)                     |

## Summary

For each dataset split, the share of annotated examples drawn from Wikipedia is around 50%!
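
These numbers can be reproduced from the raw GermEval 2014 `.tsv` files. Below is a minimal sketch of how such a domain count could be computed, assuming each sentence block is preceded by a comment line that contains the source URL (the file name is a placeholder; the notebook linked below is the authoritative recipe):

```python
from collections import Counter
from urllib.parse import urlparse

def tld_stats(path):
    """Count source domains in a GermEval 2014 .tsv file.

    Assumption: each sentence block starts with a comment line of the form
    `#\t<source url>\t[<date>]`.
    """
    counter = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("#") and "http" in line:
                url = next(part for part in line.split() if part.startswith("http"))
                host = urlparse(url).netloc
                # collapse sub-domains, e.g. de.wikipedia.org -> wikipedia.org
                domain = ".".join(host.split(".")[-2:])
                counter[domain] += 1
    return counter

if __name__ == "__main__":
    stats = tld_stats("NER-de-train.tsv")  # file name is an assumption
    total = sum(stats.values())
    for domain, count in stats.most_common(10):
        print(f"{domain:20s} {count:6d} ({100 * count / total:.2f}%)")
```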

# Filtered Version & Motivation

We therefore created a version of the GermEval 2014 dataset with all Wikipedia-sourced examples filtered out. Here's one scenario for the main motivation:

Imagine you are pretraining a language model and you want to measure its performance on GermEval 2014 for named entity recognition. Of course, you also want to
compare its performance to other existing language models.

What would be the easiest way to get high performance on the GermEval 2014 dataset? You can literally pretrain a language model on Wikipedia only (just as [I did](https://huggingface.co/gwlms))!
It will even outperform models that were pretrained on 100+ GB of text! See the great [ScandEval leaderboard](https://scandeval.com/german-nlu/) and have a look at the `gwlms` models.
However, such a Wikipedia-only pretrained model will perform worse on other downstream tasks such as Question Answering.

This Wikipedia-filtered version could therefore enable fairer comparisons between language models.

## Stats for Filtered Version

The following tables show the stats for the filtered version of the GermEval 2014 dataset:

### Training Split

| TLD                  | Number of examples (Percentage)   |
|:---------------------|:--------------------------------- |
| welt.de              | 662 (5.52%)                       |
| spiegel.de           | 512 (4.27%)                       |
| tagesspiegel.de      | 424 (3.54%)                       |
| handelsblatt.com     | 369 (3.08%)                       |
| fr-aktuell.de        | 344 (2.87%)                       |
| sueddeutsche.de      | 308 (2.57%)                       |
| abendblatt.de        | 283 (2.36%)                       |
| berlinonline.de      | 255 (2.13%)                       |
| szon.de              | 249 (2.08%)                       |
| n-tv.de              | 195 (1.63%)                       |

### Development Split

| TLD                  | Number of examples (Percentage)   |
|:---------------------|:--------------------------------- |
| welt.de              | 46 (4.26%)                        |
| spiegel.de           | 43 (3.98%)                        |
| fr-aktuell.de        | 38 (3.52%)                        |
| tagesspiegel.de      | 37 (3.42%)                        |
| handelsblatt.com     | 35 (3.24%)                        |
| sueddeutsche.de      | 28 (2.59%)                        |
| szon.de              | 25 (2.31%)                        |
| feedsportal.com      | 24 (2.22%)                        |
| berlinonline.de      | 22 (2.04%)                        |
| rp-online.de         | 21 (1.94%)                        |

### Test Split

| TLD                  | Number of examples (Percentage)   |
|:---------------------|:--------------------------------- |
| welt.de              | 139 (5.44%)                       |
| spiegel.de           | 88 (3.45%)                        |
| tagesspiegel.de      | 86 (3.37%)                        |
| handelsblatt.com     | 84 (3.29%)                        |
| sueddeutsche.de      | 78 (3.06%)                        |
| abendblatt.de        | 72 (2.82%)                        |
| fr-aktuell.de        | 62 (2.43%)                        |
| berlinonline.de      | 59 (2.31%)                        |
| szon.de              | 57 (2.23%)                        |
| feedsportal.com      | 52 (2.04%)                        |

# Dataset Creation

We provide a notebook that shows how to recreate this filtered version of GermEval 2014. It can be found [here](https://huggingface.co/datasets/stefan-it/germeval14_no_wikipedia/blob/main/CreateDataset.ipynb).
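
The notebook is the authoritative recipe. As a rough illustration, the filtering boils down to dropping every sentence whose source comment points to `wikipedia.org`. A minimal sketch, assuming the raw GermEval 2014 `.tsv` layout where each sentence block starts with a `#` source comment line and ends with a blank line (file names are placeholders):

```python
def filter_wikipedia(in_path, out_path):
    """Write a copy of a GermEval 2014 .tsv file without Wikipedia-sourced sentences.

    Assumption: each sentence block begins with a comment line containing the
    source URL and is terminated by a blank line.
    """
    with open(in_path, encoding="utf-8") as src, open(out_path, "w", encoding="utf-8") as dst:
        block, keep = [], True
        for line in src:
            if line.startswith("#") and "http" in line:
                keep = "wikipedia.org" not in line
            block.append(line)
            if not line.strip():  # blank line closes the sentence block
                if keep:
                    dst.writelines(block)
                block, keep = [], True
        if block and keep:  # trailing block without a final blank line
            dst.writelines(block)

filter_wikipedia("NER-de-train.tsv", "NER-de-train-no-wikipedia.tsv")  # names are assumptions
```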

Additionally, we provide a dataset loader for the awesome Flair library!
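
The dedicated Flair loader in this repository is the recommended entry point. As a fallback, the filtered `.tsv` files can also be read with Flair's generic `ColumnCorpus`; the sketch below assumes the GermEval column layout (token index, token, outer NER tag, nested NER tag) and uses placeholder file names:

```python
from flair.datasets import ColumnCorpus

# Use the token column and the outer NER tag column
# (assumption: the filtered files keep the original GermEval 2014 layout).
columns = {1: "text", 2: "ner"}

corpus = ColumnCorpus(
    "./germeval14_no_wikipedia",  # folder with the filtered .tsv files (path is an assumption)
    columns,
    train_file="NER-de-train-no-wikipedia.tsv",
    dev_file="NER-de-dev-no-wikipedia.tsv",
    test_file="NER-de-test-no-wikipedia.tsv",
)

print(corpus)  # prints the number of train/dev/test sentences
```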

# Licence

We keep the original license of the GermEval 2014 dataset (CC-BY-4.0).