Tasks: Token Classification
Sub-tasks: named-entity-recognition
Languages: German
Size: 1M<n<10M
Commit 9d7a396 (parent: 25ee654), committed by elenanereiss: Update README.md

README.md CHANGED
@@ -69,6 +69,10 @@ train-eval-index:
 
 A dataset of legal documents from German federal court decisions for Named Entity Recognition. The dataset is human-annotated with 19 fine-grained entity classes. It consists of approx. 67,000 sentences and contains around 54,000 annotated entities. NER tags use the `BIO` tagging scheme.
 
+The dataset includes two versions of annotations: one with a set of 19 fine-grained semantic classes (`ner_tags`) and one with a set of 7 coarse-grained classes (`ner_coarse_tags`). There are 53,632 annotated entities in total, the majority of which (74.34 %) are legal entities; the rest (25.66 %) are persons, locations and organizations.
+
+![](https://raw.githubusercontent.com/elenanereiss/Legal-Entity-Recognition/master/docs/Distribution.png)
+
 For more details see [https://arxiv.org/pdf/2003.13016v1.pdf](https://arxiv.org/pdf/2003.13016v1.pdf).
 
 ### Supported Tasks and Leaderboards
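The `ner_tags`/`ner_coarse_tags` pairing added above can be inspected directly. A minimal loading sketch; the hub id `elenanereiss/german-ler` is an assumption inferred from the committer's namespace, not something stated in this diff:

```python
# Minimal sketch: load the dataset and decode both tag sets.
# The hub id "elenanereiss/german-ler" is an assumption; adjust if needed.
from datasets import load_dataset

ds = load_dataset("elenanereiss/german-ler", split="train")
example = ds[0]

# Both annotation versions are integer-encoded ClassLabel sequences;
# the feature metadata maps the ids back to BIO label strings.
fine = ds.features["ner_tags"].feature
coarse = ds.features["ner_coarse_tags"].feature
print(example["tokens"])
print(fine.int2str(example["ner_tags"]))
print(coarse.int2str(example["ner_coarse_tags"]))
```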
@@ -82,16 +86,17 @@ German
 
 ## Dataset Structure
 ### Data Instances
-```
+```python
 {
  'id': '1',
  'tokens': ['Eine', 'solchermaßen', 'verzögerte', 'oder', 'bewusst', 'eingesetzte', 'Verkettung', 'sachgrundloser', 'Befristungen', 'schließt', '§', '14', 'Abs.', '2', 'Satz', '2', 'TzBfG', 'aus', '.'],
-'ner_tags': [38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 3, 22, 22, 22, 22, 22, 22, 38, 38]
+'ner_tags': [38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 3, 22, 22, 22, 22, 22, 22, 38, 38],
+'ner_coarse_tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 9, 9, 9, 9, 9, 9, 14, 14]
 }
 ```
 ### Data Fields
 
-```
+```python
 {
  'id': Value(dtype='string', id=None),
  'tokens': Sequence(feature=Value(dtype='string', id=None),
@@ -138,7 +143,26 @@ German
                   'O'],
                id=None),
        length=-1,
-       id=None)
+       id=None),
+ 'ner_coarse_tags': Sequence(feature=ClassLabel(num_classes=15,
+                                                names=['B-LIT',
+                                                       'B-LOC',
+                                                       'B-NRM',
+                                                       'B-ORG',
+                                                       'B-PER',
+                                                       'B-REG',
+                                                       'B-RS',
+                                                       'I-LIT',
+                                                       'I-LOC',
+                                                       'I-NRM',
+                                                       'I-ORG',
+                                                       'I-PER',
+                                                       'I-REG',
+                                                       'I-RS',
+                                                       'O'],
+                                                id=None),
+                                       length=-1,
+                                       id=None)
 }
 ```
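The `ner_coarse_tags` label list added in this hunk is complete, so the instance shown under Data Instances can be decoded without loading anything. A small sketch (not part of the card) that maps the ids through that list and groups `BIO` tags into entity spans:

```python
# Label list copied verbatim from the ClassLabel definition above.
COARSE = ['B-LIT', 'B-LOC', 'B-NRM', 'B-ORG', 'B-PER', 'B-REG', 'B-RS',
          'I-LIT', 'I-LOC', 'I-NRM', 'I-ORG', 'I-PER', 'I-REG', 'I-RS', 'O']

# Tokens and coarse tag ids of the example instance from the card.
tokens = ['Eine', 'solchermaßen', 'verzögerte', 'oder', 'bewusst',
          'eingesetzte', 'Verkettung', 'sachgrundloser', 'Befristungen',
          'schließt', '§', '14', 'Abs.', '2', 'Satz', '2', 'TzBfG', 'aus', '.']
ids = [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 9, 9, 9, 9, 9, 9, 14, 14]

def bio_spans(tags):
    """Yield (label, start, end) spans from a BIO tag sequence."""
    start = label = None
    for i, tag in enumerate(tags + ['O']):   # trailing 'O' flushes the last span
        if label is not None and not tag.startswith('I-'):
            yield label, start, i
            start = label = None
        if tag.startswith('B-'):
            start, label = i, tag[2:]

for label, s, e in bio_spans([COARSE[i] for i in ids]):
    print(label, tokens[s:e])   # -> NRM ['§', '14', 'Abs.', '2', 'Satz', '2', 'TzBfG']
```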
@@ -149,45 +173,43 @@ German
 | Input Sentences | 53384 | 6666 | 6673 |
 
 
 
 ## Dataset Creation
 
 ### Curation Rationale
 
--->
+Documents in the legal domain contain multiple references to named entities, especially domain-specific ones, e.g. jurisdictions and legal institutions. Legal documents are unique and differ greatly from newspaper texts. On the one hand, general-domain named entities occur in them relatively rarely. On the other hand, concrete applications need to identify crucial domain-specific entities reliably, such as designations of legal norms and references to other legal documents (laws, ordinances, regulations, decisions, etc.). Most NER solutions operate in the general or news domain, which makes them inapplicable to the analysis of legal documents. Accordingly, there is a great need for an NER-annotated dataset of legal documents, together with a typology of semantic concepts and uniform annotation guidelines.
 
 ### Source Data
 
 Court decisions from 2017 and 2018 were selected for the dataset, published online by the [Federal Ministry of Justice and Consumer Protection](http://www.rechtsprechung-im-internet.de). The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).
 
+#### Initial Data Collection and Normalization
+
+From the table of [contents](http://www.rechtsprechung-im-internet.de/rii-toc.xml), 107 documents from each court were selected (see Table 1). The text was extracted from the XML elements `Mitwirkung`, `Titelzeile`, `Leitsatz`, `Tenor`, `Tatbestand`, `Entscheidungsgründe`, `Gründen`, `abweichende Meinung` and `sonstiger Titel`. The metadata at the beginning of each document (name of court, date of decision, file number, European Case Law Identifier, document type, laws) and the metadata belonging to previous legal proceedings were deleted, as were paragraph numbers.
+
+The extracted data was split into sentences, tokenised using [SoMaJo](https://github.com/tsproisl/SoMaJo) and manually annotated in [WebAnno](https://webanno.github.io/webanno/) (see the sketch after this hunk).
 
 #### Who are the source language producers?
 
--->
+The Federal Ministry of Justice and the Federal Office of Justice provide selected decisions; the decisions themselves were written by humans.
 
 ### Annotations
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+#### Annotation process
+
+For more details see the [annotation guidelines](https://github.com/elenanereiss/Legal-Entity-Recognition/blob/master/docs/Annotationsrichtlinien.pdf) (in German).
+
+<!-- #### Who are the annotators?
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
 
 ### Personal and Sensitive Information
 
+A fundamental characteristic of the published decisions is that all personal information has been anonymised for privacy reasons. This affects the classes person, location and organization.
 
-## Considerations for Using the Data
+<!-- ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
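A schematic reconstruction of the collection step described in the hunk above. The element names are quoted from the card, but the real tag names in the rechtsprechung-im-internet.de XML schema may differ, `decision.xml` is a hypothetical file name, and the SoMaJo calls follow that library's documented v2 API:

```python
# Sketch of the described pipeline: pull text from the listed XML elements,
# then sentence-split and tokenise with SoMaJo. The XML traversal is
# schematic; the actual document schema is not reproduced in the card.
import xml.etree.ElementTree as ET
from somajo import SoMaJo

# Element names as quoted in the card; actual tag names may differ.
WANTED = {"Mitwirkung", "Titelzeile", "Leitsatz", "Tenor", "Tatbestand",
          "Entscheidungsgründe", "Gründen", "abweichende Meinung",
          "sonstiger Titel"}

def extract_paragraphs(path):
    """Collect the text of the relevant elements of one decision document."""
    root = ET.parse(path).getroot()
    return [" ".join(el.itertext()) for el in root.iter() if el.tag in WANTED]

tokenizer = SoMaJo("de_CMC", split_sentences=True)
for sentence in tokenizer.tokenize_text(extract_paragraphs("decision.xml")):
    print([token.text for token in sentence])  # one token list per sentence
```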
@@ -205,8 +227,7 @@ For more details see [https://github.com/elenanereiss/Legal-Entity-Recognition/b
 
 ### Dataset Curators
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
--->
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
 
 ### Licensing Information
 