joelniklaus committed on
Commit
45a0093
1 Parent(s): 5241c7a

added dataset files

Browse files
Files changed (7)
  1. .gitattributes +4 -0
  2. README.md +260 -0
  3. convert_to_hf_dataset.py +138 -0
  4. meta.jsonl +3 -0
  5. test.jsonl +3 -0
  6. train.jsonl +3 -0
  7. validation.jsonl +3 -0
.gitattributes CHANGED
@@ -35,3 +35,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.mp3 filter=lfs diff=lfs merge=lfs -text
 *.ogg filter=lfs diff=lfs merge=lfs -text
 *.wav filter=lfs diff=lfs merge=lfs -text
+test.jsonl filter=lfs diff=lfs merge=lfs -text
+train.jsonl filter=lfs diff=lfs merge=lfs -text
+validation.jsonl filter=lfs diff=lfs merge=lfs -text
+meta.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,260 @@
---
annotations_creators:
- expert-generated
- found
language_creators:
- found
languages:
- de
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Annotated German Legal Decision Corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---

# Dataset Card for Annotated German Legal Decision Corpus

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** https://zenodo.org/record/3936490
- **Paper:** Urchs, S., Mitrović, J., & Granitzer, M. (2021). Design and Implementation of German Legal Decision
  Corpora. Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART,
  515–521. https://doi.org/10.5220/0010187305150521
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)

### Dataset Summary

This dataset consists of 200 randomly chosen judgments. In these judgments a legal expert annotated the components
*conclusion*, *definition* and *subsumption* of the German legal writing style *Urteilsstil*.

*"Overall 25,075 sentences are annotated. 5% (1,202) of these sentences are marked as conclusion, 21% (5,328) as
definition, 53% (13,322) are marked as subsumption and the remaining 21% (6,481) as other. The length of judgments in
sentences ranges from 38 to 862 sentences. The median of judgments have 97 sentences, the length of most judgments is on
the shorter side."* (Urchs et al., 2021)

*"Judgments from 22 of the 131 courts are selected for the corpus. Most judgments originate from the VG Augsburg (59 /
30%) followed by the VG Ansbach (39 / 20%) and LSG Munich (33 / 17%)."* (Urchs et al., 2021)

*"29% (58) of all selected judgments are issued in the year 2016, followed by 22% (44) from the year 2017 and 21% (41)
issued in the year 2015. [...] The percentages of selected judgments and decisions issued in 2018 and 2019 are roughly
the same. No judgments from 2020 are selected."* (Urchs et al., 2021)

### Supported Tasks and Leaderboards

The dataset can be used for multi-class text classification tasks, more specifically for argument mining.

### Languages

The language in the dataset is German, as it is used in Bavarian courts in Germany.

## Dataset Structure

### Data Instances

Each sentence is saved as a JSON object on its own line in one of the three files `train.jsonl`, `validation.jsonl`
or `test.jsonl`. The file `meta.jsonl` contains the meta information for each court decision. The `file_number` field is
present in all files for identification. Each sentence of a court decision was categorized according to its function.

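For illustration, the per-sentence split files can be linked back to the decision metadata via `file_number`. The toy in-memory frames below stand in for the real files, which would be loaded with `pd.read_json(path, lines=True)`; all values here are invented:

```python
import pandas as pd

# Toy stand-ins for train.jsonl and meta.jsonl; the file_number, sentence and
# court values are invented for illustration.
train = pd.DataFrame([
    {"file_number": "X 1", "input_sentence": "Die Klage wird abgewiesen.", "label": "conclusion"},
])
meta = pd.DataFrame([
    {"file_number": "X 1", "court": "VG Augsburg", "date": "2016-01-01"},
])

# file_number is the join key between the split files and meta.jsonl
joined = train.merge(meta, on="file_number", how="left")
print(joined.loc[0, "court"])  # VG Augsburg
```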
### Data Fields

The file `meta.jsonl` contains the following fields for each row:

- `meta_title`: Title provided by the website; it is used for saving the decision
- `court`: Issuing court
- `decision_style`: Style of the decision; the corpus contains either *Urteil* (= 'judgment') or *Endurteil*
  (= 'end judgment')
- `date`: Date when the decision was issued by the court
- `file_number`: Identification number used for this decision by the court
- `title`: Title provided by the court
- `norm_chains`: Norms related to the decision
- `decision_guidelines`: Short summary of the decision
- `keywords`: Keywords associated with the decision
- `lower_court`: Court that decided on the case previously
- `additional_information`: Additional information
- `decision_reference`: References to the location of the decision in beck-online
- `tenor`: Designation of the legal consequence ordered by the court (list of paragraphs)
- `legal_facts`: Facts that form the basis for the decision (list of paragraphs)

The files `train.jsonl`, `validation.jsonl` and `test.jsonl` contain the following fields:

- `file_number`: Identification number for linkage with the file `meta.jsonl`
- `input_sentence`: The sentence to be classified
- `label`: The major component of the German *Urteilsstil* (Urchs et al., 2021) assigned to the sentence; one of the
  following four labels:
  - `conclusion`: Overall result
  - `definition`: Abstract legal facts and consequences
  - `subsumption`: Determination sentence / concrete facts
  - `other`: Anything else
- `context_before`: Context in the same paragraph before the `input_sentence`
- `context_after`: Context in the same paragraph after the `input_sentence`

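A shortened, hypothetical line in the format of `train.jsonl` could look as follows. Note that the context fields hold `[sentence, label]` pairs, as produced by the conversion script; the file number and sentences below are invented:

```python
import json

# Hypothetical, shortened record; real lines hold full German legal sentences
# and the courts' actual file numbers.
line = ('{"file_number": "X 1", '
        '"input_sentence": "Die Klage ist unbegründet.", '
        '"label": "conclusion", '
        '"context_before": [], '
        '"context_after": [["Der Bescheid ist rechtmäßig.", "subsumption"]]}')

record = json.loads(line)
print(record["label"])                # conclusion
print(record["context_after"][0][1])  # context entries keep their own labels
```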
### Data Splits

No split is provided in the original release.

The splits were created by Joel Niklaus. We randomly split the dataset into 80% train (160 decisions, 19,271
sentences), 10% validation (20 decisions, 2,726 sentences) and 10% test (20 decisions, 3,078 sentences). We made sure
that a decision only occurs in one split and is not dispersed over multiple splits.

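The split logic in `convert_to_hf_dataset.py` (an `np.split` on the shuffled frame) is equivalent to slicing the shuffled decisions at the 80% and 90% marks. Sketched here on a toy frame of 10 "decisions" instead of the real 200 (file numbers invented):

```python
import pandas as pd

# One row per decision; splitting happens before sentences are expanded,
# so a whole decision lands in exactly one split.
df = pd.DataFrame({"file_number": [f"case_{i}" for i in range(10)]})

shuffled = df.sample(frac=1, random_state=42)   # reproducible shuffle
cut80, cut90 = int(.8 * len(df)), int(.9 * len(df))
train = shuffled.iloc[:cut80]
validation = shuffled.iloc[cut80:cut90]
test = shuffled.iloc[cut90:]
print(len(train), len(validation), len(test))  # 8 1 1
```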
Label Distribution

| label       |     train | validation |     test |
|:------------|----------:|-----------:|---------:|
| conclusion  |       975 |        115 |      112 |
| definition  |      4105 |        614 |      609 |
| subsumption |     10034 |       1486 |     1802 |
| other       |      4157 |        511 |      555 |
| total       | **19271** |   **2726** | **3078** |

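As a sanity check, the per-label totals of the table can be recomputed; the label sums match the 25,075 annotated sentences reported by Urchs et al. (2021):

```python
# Per-label counts copied from the split table above.
counts = {
    "conclusion":  {"train": 975,   "validation": 115,  "test": 112},
    "definition":  {"train": 4105,  "validation": 614,  "test": 609},
    "subsumption": {"train": 10034, "validation": 1486, "test": 1802},
    "other":       {"train": 4157,  "validation": 511,  "test": 555},
}
totals = {label: sum(split.values()) for label, split in counts.items()}
corpus_size = sum(totals.values())
print(corpus_size)  # 25075 annotated sentences in total
for label, n in totals.items():
    print(f"{label}: {n} ({n / corpus_size:.0%})")
```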
## Dataset Creation

### Curation Rationale

The goal was to create a publicly available German legal text corpus consisting of judgments that have been annotated
by a legal expert. The annotated components are *conclusion*, *definition* and *subsumption* of the German legal
writing style *Urteilsstil*.

### Source Data

#### Initial Data Collection and Normalization

*"The decision corpus is a collection of the decisions published on the website www.gesetze-bayern.de. At the time of
the crawling the website offered 32,748 decisions of 131 Bavarian courts, dating back to 2015. The decisions are
provided from the Bavarian state after the courts agreed to a publication. All decisions are processed by the publisher
C.H.BECK, commissioned by the Bavarian state. This processing includes anonymisation, key-wording, and adding of
editorial guidelines to the decisions."* (Urchs et al., 2021)

#### Who are the source language producers?

German courts from Bavaria.

### Annotations

#### Annotation process

*"As stated above, the judgment corpus consist[s] of 200 randomly chosen judgments that are annotated by a legal
expert, who holds a first legal state exam. Due to financial, staff and time reasons the presented iteration of the
corpus was only annotated by a single expert. In a future version several other experts will annotate the corpus and
the inter-annotator agreement will be calculated."* (Urchs et al., 2021)

#### Who are the annotators?

A legal expert who holds a first legal state exam.

### Personal and Sensitive Information

*"All decisions are processed by the publisher C.H.BECK, commissioned by the Bavarian state. This processing includes
**anonymisation**, key-wording, and adding of editorial guidelines to the decisions."* (Urchs et al., 2021)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

The SoMaJo sentence splitter has been used. Upon manual inspection of the dataset, we could see that the sentence
splitter had poor accuracy in some cases (see `analyze_dataset()` in `convert_to_hf_dataset.py`). When creating
the splits, we thought about merging short sentences with their neighbors or removing them altogether. However, since
we could not find a straightforward way to do this, we decided to leave the dataset content untouched.

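The heuristic used by `analyze_dataset()` to surface such cases simply flags "sentences" under 25 characters, which catches splitter fragments like section markers (the example sentences below are invented):

```python
# analyze_dataset() treats anything under 25 characters as a suspiciously
# short "sentence"; fragments like "II." typically come from the splitter.
sentences = ["II.", "Die Kosten des Verfahrens hat der Kläger zu tragen."]
short = [s for s in sentences if len(s) < 25]
print(short)  # ['II.']
```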
Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and
Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset
consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the
dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition,
differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised
to have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for converting the
original dataset into the present jsonl format. For further information on the original dataset structure, we refer to
the bibliographical references and the original GitHub repositories and/or web pages provided in this dataset card.

## Additional Information

### Dataset Curators

The names of the original dataset curators and creators can be found in the references given below, in the section
*Citation Information*. Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch);
[GitHub](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch);
[GitHub](https://github.com/kapllan)).

### Licensing Information

[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)

### Citation Information

```
@dataset{urchs_stefanie_2020_3936490,
  author    = {Urchs, Stefanie and
               Mitrović, Jelena},
  title     = {{German legal jugements annotated with judement
                style components}},
  month     = jul,
  year      = 2020,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.3936490},
  url       = {https://doi.org/10.5281/zenodo.3936490}
}
```

```
@conference{icaart21,
  author       = {Urchs., Stefanie and Mitrovi{\'{c}}., Jelena and Granitzer., Michael},
  booktitle    = {Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART,},
  doi          = {10.5220/0010187305150521},
  isbn         = {978-989-758-484-8},
  issn         = {2184-433X},
  organization = {INSTICC},
  pages        = {515--521},
  publisher    = {SciTePress},
  title        = {{Design and Implementation of German Legal Decision Corpora}},
  year         = {2021}
}
```

### Contributions

Thanks to [@kapllan](https://github.com/kapllan) and [@joelniklaus](https://github.com/joelniklaus) for adding this
dataset.
convert_to_hf_dataset.py ADDED
@@ -0,0 +1,138 @@
from glob import glob
from pathlib import Path

import json
import numpy as np
import pandas as pd

"""
Dataset url: https://zenodo.org/record/3936490/files/annotated_corpus.zip?download=1
Paper url: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044329/

There are no splits available ==> make a random split ourselves
"""

pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_columns', None)


def analyze_dataset(df, num_characters_for_short_sentence=25):
    short_sentence = False
    counter = 0
    same_label_counter = 0
    other_number = 0
    num_one_paragraph_len = 0
    for i in range(200):
        for paragraph in df.iloc[i].decision_reasons:
            for sentence in paragraph:
                if short_sentence:
                    print("previous sentence was short: ", short_sentence)
                    print("current sentence label: ", sentence[1])
                    print("current paragraph: ", paragraph)
                    if sentence[1] == short_sentence[1]:
                        same_label_counter += 1

                if len(sentence[0]) < num_characters_for_short_sentence:
                    counter += 1
                    short_sentence = sentence
                    print()
                    print("short sentence: ", sentence)
                    print("short paragraph: ", paragraph)
                    if sentence[1] == 'other':
                        other_number += 1
                    if len(paragraph) == 1:
                        num_one_paragraph_len += 1
                else:
                    short_sentence = False

    print("num short sentences: ", counter)
    print("num short sentences containing the same label as the next one: ", same_label_counter)
    print("num short sentences containing 'other' as label: ", other_number)
    print("num short sentences where the paragraph contains only this one short sentence: ", num_one_paragraph_len)
    # ==> the label is only the same in very few cases
    # ==> the label is 'other' in the majority of cases; when it is not, it seems to be mislabeled
    # ==> think about removing them entirely
    # ==> we opted for not interfering in the content of the dataset


# create a summary jsonl file
dataset_filename = "dataset.jsonl"
if not Path(dataset_filename).exists():
    with open(dataset_filename, "a") as dataset_file:
        for filename in glob("annotated_corpus/*.json"):
            # we need to do this charade, because some jsons are formatted differently than others
            json_text = Path(filename).read_text()
            json_obj = json.loads(json_text)
            # make it less nested so that it is easier to read as a df
            new_dict = {}
            new_dict.update(json_obj["meta"])
            new_dict.update(json_obj["decision_text"])
            dataset_file.write(json.dumps(new_dict) + "\n")
else:
    print(f"{dataset_filename} already exists. Please delete it to re-aggregate it.")

df = pd.read_json(dataset_filename, lines=True)

# Do the splits before expanding the df so that entire decisions stay within one split
# and samples from one decision are not spread across splits.
# Perform a random split: 80% train (160 decisions), 10% validation (20 decisions), 10% test (20 decisions)
train, validation, test = np.split(df.sample(frac=1, random_state=42), [int(.8 * len(df)), int(.9 * len(df))])


def expand_df(df):
    """
    Expand the df so that each sentence has its own row and is its own sample
    :param df:
    :return:
    """
    rows = []
    for index, row in df.iterrows():
        for paragraph in row.decision_reasons:
            for sent_idx, sentence in enumerate(paragraph):
                new_row = {'file_number': row['file_number'], 'input_sentence': sentence[0], 'label': sentence[1]}
                # Discussion with a lawyer yielded that the paragraph as context is enough
                # take the sentences before
                new_row['context_before'] = paragraph[:sent_idx]
                # take the remaining sentences afterwards
                new_row['context_after'] = paragraph[sent_idx + 1:]
                rows.append(new_row)

    return pd.DataFrame.from_records(rows)


train = expand_df(train)
validation = expand_df(validation)
test = expand_df(test)

# Num samples for each split: train (19271), validation (2726), test (3078)
print(len(train.index), len(validation.index), len(test.index))

# save to jsonl files for huggingface
train.to_json("train.jsonl", lines=True, orient="records")
validation.to_json("validation.jsonl", lines=True, orient="records")
test.to_json("test.jsonl", lines=True, orient="records")

# save the main df with meta information to file
# the link to the splits is given via file_number
df = df.drop(['decision_reasons'], axis=1)
df.to_json("meta.jsonl", lines=True, orient="records")


def print_split_table_single_label(train, validation, test, label_name):
    train_counts = train[label_name].value_counts().to_frame().rename(columns={label_name: "train"})
    validation_counts = validation[label_name].value_counts().to_frame().rename(columns={label_name: "validation"})
    test_counts = test[label_name].value_counts().to_frame().rename(columns={label_name: "test"})

    table = train_counts.join(validation_counts)
    table = table.join(test_counts)
    table[label_name] = table.index
    total_row = {label_name: "total",
                 "train": len(train.index),
                 "validation": len(validation.index),
                 "test": len(test.index)}
    # DataFrame.append was removed in pandas 2.0; use concat instead
    table = pd.concat([table, pd.DataFrame([total_row])], ignore_index=True)
    table = table[[label_name, "train", "validation", "test"]]  # reorder columns
    print(table.to_markdown(index=False))


print_split_table_single_label(train, validation, test, "label")
meta.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:83f23fed08311a81117bb12f60be1676313f7448ef297ef5fd34cf4641345e82
size 2828194
test.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:19595a310f39fd1daa89f834930923ed19e3b01d04806d189c382eca70ecbb08
size 4263762
train.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed22caee3516ab681121b5896e74de3aceb771d3148587675cb04d4a9d4162d0
size 26669050
validation.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cf7c5165d3c7d7a42d5d10be6a35391ffd72781dd4008bc6fa15a9daa983d850
size 3563814