Joelito committed
Commit 37773c2
1 Parent(s): 1f83336

Upload 4 files

Files changed (4):
  1. README.md +122 -0
  2. data/train.jsonl +0 -0
  3. data/train.jsonl.xz +3 -0
  4. prepare_data.py +30 -0
README.md ADDED
@@ -0,0 +1,122 @@
+ # Dataset Card for GermanRentalAgreements
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/sebischair/Legal-Sentence-Classification-Datasets-and-Models)
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ [More Information Needed]
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
data/train.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
data/train.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b811f5b41c2044f231fed67f49a44caae0a615453d9fd1ebb0740c9518ac467c
+ size 31384
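
The compressed file itself lives in Git LFS; the three lines above are only the pointer, where `oid` is the SHA-256 digest of the file contents and `size` is its byte count. A minimal sketch to verify a pulled copy against the pointer (assuming the file has been fetched, e.g. with `git lfs pull`, to `data/train.jsonl.xz`):

```python
import hashlib
import os

path = "data/train.jsonl.xz"  # assumed location after git lfs pull

# The LFS oid is the SHA-256 digest of the stored file's raw bytes.
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest)                  # should match the oid in the pointer above
print(os.path.getsize(path))   # should match size: 31384
```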
prepare_data.py ADDED
@@ -0,0 +1,30 @@
+ import os
+ from typing import Union
+
+ import datasets
+ import pandas as pd
+
+
+ def save_and_compress(dataset: Union[datasets.Dataset, pd.DataFrame], name: str, idx=None):
+     if idx:
+         path = f"{name}_{idx}.jsonl"
+     else:
+         path = f"{name}.jsonl"
+
+     print("Saving to", path)
+     dataset.to_json(path, force_ascii=False, orient='records', lines=True)
+
+     print("Compressing...")
+     os.system(f'xz -zkf -T0 {path}')  # -T0 enables multithreaded compression
+
+
+ dfs = []
+ for num_classes in [3, 6, 9]:
+     df = pd.read_csv(f"datasets/mietrecht_sentences_{num_classes}_classes.csv", sep=";")
+     # keep only the label ('Funktion') and text columns
+     df = df[['Funktion', 'Text']]
+     df.rename(columns={'Funktion': f'label_{num_classes}_classes', 'Text': f'text_{num_classes}_classes'}, inplace=True)
+     dfs.append(df)
+
+ train = pd.concat(dfs, axis=1)
+ save_and_compress(train, "data/train")
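
As a round-trip check of the script's output, the compressed JSON Lines file can be read straight back with pandas, which infers xz compression from the file suffix (a sketch; the expected column names follow from the renaming loop above):

```python
import pandas as pd

# Compression is inferred from the .xz suffix; each line is one JSON record.
train = pd.read_json("data/train.jsonl.xz", lines=True)

# One label/text column pair per class granularity (3, 6, and 9 classes).
print(train.columns.tolist())
# ['label_3_classes', 'text_3_classes', 'label_6_classes',
#  'text_6_classes', 'label_9_classes', 'text_9_classes']
```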