---
annotations_creators:
- machine-generated
language_creators:
- found
languages:
- de
licenses:
- unknown
multilinguality:
- monolingual
size_categories:
- n>1M
source_datasets:
- original
task_categories:
- text-retrieval
- text-scoring
task_ids:
- semantic-similarity-scoring
- text-retrieval-other-example-based-retrieval
---

# Dataset Card for German Legal Sentences

## Table of Contents
- [Dataset Card for German Legal Sentences](#dataset-card-for-german-legal-sentences)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://lavis-nlp.github.io/german_legal_sentences/
- **Repository:** https://github.com/lavis-nlp/german_legal_sentences
- **Paper:** coming soon
- **Leaderboard:**
- **Point of Contact:** [Marco Wrzalik](mailto:marco.wrzalik@hs-rm.de)

### Dataset Summary

German Legal Sentences (GLS) is an automatically generated training dataset for semantic sentence matching and citation recommendation in the domain of German legal documents. It follows the concept of weak supervision, where imperfect labels are generated using multiple heuristics. For this purpose we use a combination of legal citation matching and BM25 similarity. The contained sentences and their citations are parsed from real judicial decisions provided by [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342).

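
For orientation, the snippet below sketches how the sentence pairs could be loaded with the Hugging Face `datasets` library. The dataset ID is assumed from the repository name and the `pairs` configuration name is hypothetical; please check the repository above for the exact loading call.

```python
from datasets import load_dataset

# The dataset ID follows the repository name; the configuration name "pairs"
# is an assumption and may differ in the published dataset.
gls = load_dataset("lavis-nlp/german_legal_sentences", "pairs", split="train")

# Field names follow the data instance shown in the Data Instances section below.
for example in gls.select(range(3)):
    print(example["query.text"][:80], "->", example["related.text"][:80])
```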
### Supported Tasks and Leaderboards

The main associated task is *Semantic Similarity Ranking*. We propose to use *Mean Reciprocal Rank* (MRR) cut off at the tenth position (MRR@10) as well as MAP and Recall on rankings of size 200. We provide the following baselines:

| Method                            | MRR@10   | MAP@200    | Recall@200  |
|-----------------------------------|---------:|-----------:|------------:|
| BM25 - default `(k1=1.2; b=0.75)` |     25.7 |       17.6 |        42.9 |
| BM25 - tuned `(k1=0.47; b=0.97)`  |     26.2 |       18.1 |        43.3 |
| [CoRT](https://arxiv.org/abs/2010.10252)             |     31.2 |       21.4 |        56.2 |
| [CoRT + BM25](https://arxiv.org/abs/2010.10252)      |     32.1 |       22.1 |        67.1 |

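
For clarity, here is a small self-contained sketch (toy ids, not official evaluation code) of how MRR@10 and Recall@200 can be computed for a single query; the reported scores are obtained by averaging such per-query values over all queries, and MAP is computed analogously from the full ranking.

```python
# Illustrative per-query evaluation measures; corpus-level scores average
# these values over all queries.
from typing import Sequence, Set


def mrr_at_k(ranking: Sequence[int], relevant: Set[int], k: int = 10) -> float:
    """Reciprocal rank of the first relevant item within the top k, else 0."""
    for rank, doc_id in enumerate(ranking[:k], start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0


def recall_at_k(ranking: Sequence[int], relevant: Set[int], k: int = 200) -> float:
    """Fraction of all relevant items that appear within the top k."""
    hits = sum(1 for doc_id in ranking[:k] if doc_id in relevant)
    return hits / len(relevant) if relevant else 0.0


# Toy example: sentence ids 42 and 7 are relevant; 42 is ranked at position 3.
ranking = [13, 99, 42, 5, 7]
relevant = {42, 7}
print(mrr_at_k(ranking, relevant))     # 0.333...
print(recall_at_k(ranking, relevant))  # 1.0
```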
In addition, we want to support a *Citation Recommendation* task in the future.

If you wish to contribute evaluation measures or have any suggestions or criticism, please write an [e-mail](mailto:marco.wrzalik@hs-rm.de).

### Languages

This dataset contains German-language texts from the specific domain of German court decisions.

## Dataset Structure

### Data Instances

```
{'query.doc_id': 28860,
 'query.ref_ids': [6215, 248, 248],
 'query.sent_id': 304863,
 'query.text': 'Zudem ist zu berücksichtigen , dass die Vollverzinsung nach '
               '[REF] i. V. m. [REF] gleichermaßen zugunsten wie zulasten des '
               'Steuerpflichtigen wirkt , sodass bei einer Überzahlung durch '
               'den Steuerpflichtigen der Staat dem Steuerpflichtigen neben '
               'der Erstattung ebenfalls den entstandenen potentiellen Zins- '
               'und Liquiditätsnachteil in der pauschalierten Höhe des [REF] '
               'zu ersetzen hat , unabhängig davon , in welcher Höhe dem '
               'Berechtigten tatsächlich Zinsen entgangen sind .',
 'related.doc_id': 56348,
 'related.ref_ids': [248, 6215, 62375],
 'related.sent_id': 558646,
 'related.text': 'Ferner ist zu berücksichtigen , dass der Zinssatz des [REF] '
                 'im Rahmen des [REF] sowohl für Steuernachforderung wie auch '
                 'für Steuererstattungen und damit gleichermaßen zugunsten wie '
                 'zulasten des Steuerpflichtigen wirkt , Vgl. BVerfG , '
                 'Nichtannahmebeschluss vom [DATE] [REF] , juris , mit der '
                 'Folge , dass auch Erstattungsansprüche unabhängig davon , ob '
                 'und in welcher Höhe dem Berechtigten tatsächlich Zinsen '
                 'entgangen sind , mit monatlich 0,0 % verzinst werden .'}
```

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

The documents we take from [Open Legal Data](http://openlegaldata.io/) (https://arxiv.org/abs/2005.13342) are first preprocessed by removing line breaks, enumeration characters and headings. Afterwards we parse legal citations using hand-crafted regular expressions. Each citation is split into its components and normalized, so that different variants of the same citation are matched together. For instance, "§211 Absatz 1 des Strafgesetzbuches" is normalized to "§ 211 Abs. 1 StGB". Every time we discover an unknown citation, we assign a unique id to it. We use these ids to replace parsed citations in the document text with a simple reference tag containing this id (e.g. `[REF321]`). At the same time we parse dates and replace them with the date tag `[DATE]`. Both steps remove dots that may be confused with sentence boundaries, which makes the next stage easier.

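
The actual expressions live in the linked repository; the simplified sketch below only illustrates the idea of matching a citation with a hand-crafted regular expression, normalizing its components, assigning a unique id, and substituting a reference tag. The pattern and the abbreviation table are illustrative, not the ones used to build the dataset.

```python
import re

# Toy abbreviation table; the real pipeline covers many more law names.
LAW_ABBREVIATIONS = {"Strafgesetzbuches": "StGB", "Strafgesetzbuch": "StGB"}

# Toy pattern for citations of the form "§211 Absatz 1 des Strafgesetzbuches".
CITATION_PATTERN = re.compile(
    r"§\s*(?P<section>\d+)\s*(?:Absatz|Abs\.)\s*(?P<paragraph>\d+)\s*"
    r"(?:des\s+)?(?P<law>\w+)"
)

ref_ids = {}  # normalized citation string -> unique reference id


def replace_citations(text: str) -> str:
    """Normalize every matched citation and replace it with an id-carrying tag."""
    def _substitute(match):
        law = LAW_ABBREVIATIONS.get(match["law"], match["law"])
        normalized = f"§ {match['section']} Abs. {match['paragraph']} {law}"
        ref_id = ref_ids.setdefault(normalized, len(ref_ids))
        return f"[REF{ref_id}]"
    return CITATION_PATTERN.sub(_substitute, text)


print(replace_citations("verurteilt nach §211 Absatz 1 des Strafgesetzbuches zu ..."))
# verurteilt nach [REF0] zu ...
```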
We use [SoMaJo](https://github.com/tsproisl/SoMaJo) to perform sentence tokenization on the pre-processed documents. Each sentence that does not contain at least one legal citation is discarded. For the remaining sentences we assign sentence ids, remove all reference ids from them as well as any content in braces (braces often contain large enumerations of citations and their sources). At the same time we keep track of the document from which each sentence originates and which references occur in it.

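
Below is a rough sketch of this step following SoMaJo's documented interface (the exact API may differ between SoMaJo versions); the citation filter is simplified compared to the real pipeline, which also tracks sentence and reference ids.

```python
from somajo import SoMaJo

# Sentence segmentation with SoMaJo; "de_CMC" is the German model name used in
# SoMaJo's documentation.
tokenizer = SoMaJo("de_CMC", split_sentences=True)

paragraphs = [
    "Die Revision wird nach [REF321] zurückgewiesen . "
    "Dieser Satz enthält dagegen keinen Verweis ."
]
for sentence in tokenizer.tokenize_text(paragraphs):
    text = " ".join(token.text for token in sentence)
    # Discard sentences without at least one reference tag.
    if "REF" not in text:
        continue
    print(text)
```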
#### Who are the source language producers?

The source language originates in the context of German court proceedings.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

The annotations are machine-generated.

### Personal and Sensitive Information

The source documents are already public and anonymized.

## Considerations for Using the Data

### Social Impact of Dataset

With this dataset, we strive towards better accessibility of court decisions for the general public by accelerating research on semantic search technologies. We hope that emerging search technologies will enable laypersons to find relevant information without knowing the specific terms used by lawyers.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

Coming soon!

### Contributions

Thanks to [@mwrzalik](https://github.com/mwrzalik) for adding this dataset.