joelniklaus committed
Commit 5fbc120
1 Parent(s): 8664ace

added loading script and dataset card

Files changed (2)
  1. README.md +261 -0
  2. legal-mc4.py +133 -0
README.md ADDED
@@ -0,0 +1,261 @@
+ ---
+ annotations_creators:
+ - other
+ language_creators:
+ - found
+ language:
+ - bg
+ - cs
+ - da
+ - de
+ - el
+ - en
+ - es
+ - et
+ - fi
+ - fr
+ - ga
+ - hu
+ - it
+ - lt
+ - lv
+ - mt
+ - nl
+ - pl
+ - pt
+ - ro
+ - sk
+ - sl
+ - sv
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - multilingual
+ paperswithcode_id: null
+ pretty_name: "Legal-MC4: A Corpus Covering the Legal Part of MC4 for European Languages"
+ size_categories:
+ - 10M<n<100M
+ source_datasets:
+ - original
+ task_categories:
+ - fill-mask
+ ---
+
+ # Dataset Card for Legal-MC4: A Corpus Covering the Legal Part of MC4 for European Languages
+
+ ## Table of Contents
+
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:** [GitHub](https://github.com/JoelNiklaus/LegalDatasets/tree/main/pretrain/mc4_legal)
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** [Joel Niklaus](mailto:joel@niklaus.ai)
+
+ ### Dataset Summary
+
+ This dataset contains large text resources (~106 GB in total) filtered from mc4 for legal data; it can be used for pretraining language models.
+
+ This dataset uses a different filtering method than [mc4_legal](https://huggingface.co/datasets/joelito/mc4_legal) and, for the English split, uses the smaller, already filtered [c4](https://huggingface.co/datasets/c4) dataset to speed up the filtering.
+
+ Use the dataset like this:
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("joelito/legal-mc4", "de", split="train", streaming=True)
+ ```
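+
+ Streaming returns an `IterableDataset`, so documents can be inspected without downloading everything up front. As a minimal illustration, continuing from the snippet above (field names follow the loading script below):
+ ```python
+ from itertools import islice
+
+ # Print the URL and the first 100 characters of the first three documents.
+ for example in islice(dataset, 3):
+     print(example["url"], example["text"][:100])
+ ```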
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset supports the task of masked language modeling.
+
+ ### Languages
+
+ The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
+
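+ In addition to the per-language configs, the loading script below also defines an `all` config that combines every language; for illustration:
+ ```python
+ from datasets import load_dataset
+
+ # Stream the combined multilingual training data.
+ multilingual = load_dataset("joelito/legal-mc4", "all", split="train", streaming=True)
+ ```
+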
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The files are in JSON Lines format, compressed with xz (`jsonl.xz`). Train and validation splits are available.
+
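+ For illustration, a shard can also be read directly once downloaded locally (the path below is an example); each line holds one JSON document:
+ ```python
+ import json
+ import lzma
+
+ # Decompress the shard on the fly and parse the first document.
+ with lzma.open("data/de.train.0.jsonl.xz", "rt", encoding="utf-8") as f:
+     for line in f:
+         doc = json.loads(line)
+         print(doc["url"])
+         break
+ ```
+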
+ ### Data Fields
+
+ Each document carries the following fields (see the feature schema in the loading script below):
+
+ - `index` (int32): document index
+ - `url` (string): source URL of the document
+ - `timestamp` (timestamp[s]): crawl timestamp
+ - `matches` (sequence of strings): the matched terms indicating legal citations
+ - `text` (string): the document text
+
+ ### Data Splits
+
+ #### Data Size
+
+ ```bash
+ $ xz --list data/*.xz
+ Strms  Blocks   Compressed  Uncompressed  Ratio  Check   Filename
+     1       1   2,080.7 KiB      33.4 MiB  0.061  CRC64   data/bg.train.0.jsonl.xz
+     1       1      22.8 KiB     315.9 KiB  0.072  CRC64   data/bg.validation.0.jsonl.xz
+     1       1     608.0 MiB   3,881.0 MiB  0.157  CRC64   data/cs.train.0.jsonl.xz
+     1       1     608.0 MiB   3,902.6 MiB  0.156  CRC64   data/cs.train.1.jsonl.xz
+     1       1     256.1 MiB   1,644.5 MiB  0.156  CRC64   data/cs.train.2.jsonl.xz
+     1       1   1,450.6 KiB   8,690.7 KiB  0.167  CRC64   data/cs.validation.0.jsonl.xz
+     1       1   7,578.6 KiB      38.3 MiB  0.193  CRC64   data/da.train.0.jsonl.xz
+     1       1      19.7 KiB      82.3 KiB  0.240  CRC64   data/da.validation.0.jsonl.xz
+     1       1     608.0 MiB   3,026.9 MiB  0.201  CRC64   data/de.train.0.jsonl.xz
+     1       1     608.0 MiB   3,038.7 MiB  0.200  CRC64   data/de.train.1.jsonl.xz
+     1       1     608.0 MiB   3,036.1 MiB  0.200  CRC64   data/de.train.2.jsonl.xz
+     1       1     608.0 MiB   3,040.3 MiB  0.200  CRC64   data/de.train.3.jsonl.xz
+     1       1     608.0 MiB   3,038.6 MiB  0.200  CRC64   data/de.train.4.jsonl.xz
+     1       1     608.0 MiB   3,044.2 MiB  0.200  CRC64   data/de.train.5.jsonl.xz
+     1       1     608.0 MiB   3,043.8 MiB  0.200  CRC64   data/de.train.6.jsonl.xz
+     1       1     608.0 MiB   3,038.2 MiB  0.200  CRC64   data/de.train.7.jsonl.xz
+     1       1      55.1 MiB     274.7 MiB  0.201  CRC64   data/de.train.8.jsonl.xz
+     1       1   5,033.5 KiB      24.5 MiB  0.201  CRC64   data/de.validation.0.jsonl.xz
+     1       1   1,280.9 KiB      17.0 MiB  0.073  CRC64   data/el.train.0.jsonl.xz
+     1       1       5,552 B      15.7 KiB  0.346  CRC64   data/el.validation.0.jsonl.xz
+     1       1     608.0 MiB   2,602.1 MiB  0.234  CRC64   data/en.train.0.jsonl.xz
+     1       1      90.0 MiB     386.5 MiB  0.233  CRC64   data/en.train.1.jsonl.xz
+     1       1     826.6 KiB   3,298.8 KiB  0.251  CRC64   data/en.validation.0.jsonl.xz
+     1       1     608.0 MiB   3,106.5 MiB  0.196  CRC64   data/es.train.0.jsonl.xz
+     1       1     608.0 MiB   3,118.1 MiB  0.195  CRC64   data/es.train.1.jsonl.xz
+     1       1     608.0 MiB   3,113.6 MiB  0.195  CRC64   data/es.train.2.jsonl.xz
+     1       1     608.0 MiB   3,122.5 MiB  0.195  CRC64   data/es.train.3.jsonl.xz
+     1       1     608.0 MiB   3,121.5 MiB  0.195  CRC64   data/es.train.4.jsonl.xz
+     1       1     608.0 MiB   3,122.9 MiB  0.195  CRC64   data/es.train.5.jsonl.xz
+     1       1     608.0 MiB   3,128.4 MiB  0.194  CRC64   data/es.train.6.jsonl.xz
+     1       1     608.0 MiB   3,129.5 MiB  0.194  CRC64   data/es.train.7.jsonl.xz
+     1       1     608.0 MiB   3,132.2 MiB  0.194  CRC64   data/es.train.8.jsonl.xz
+     1       1     528.5 MiB   2,722.5 MiB  0.194  CRC64   data/es.train.9.jsonl.xz
+     1       1   6,159.9 KiB      30.7 MiB  0.196  CRC64   data/es.validation.0.jsonl.xz
+     1       1      93.5 MiB     506.2 MiB  0.185  CRC64   data/et.train.0.jsonl.xz
+     1       1     136.2 KiB     571.3 KiB  0.238  CRC64   data/et.validation.0.jsonl.xz
+     1       1      60.6 MiB     312.6 MiB  0.194  CRC64   data/fi.train.0.jsonl.xz
+     1       1      63.2 KiB     262.4 KiB  0.241  CRC64   data/fi.validation.0.jsonl.xz
+     1       1     608.0 MiB   3,400.7 MiB  0.179  CRC64   data/fr.train.0.jsonl.xz
+     1       1     608.0 MiB   3,405.5 MiB  0.179  CRC64   data/fr.train.1.jsonl.xz
+     1       1     135.9 MiB     763.7 MiB  0.178  CRC64   data/fr.train.2.jsonl.xz
+     1       1   1,414.3 KiB   7,626.1 KiB  0.185  CRC64   data/fr.validation.0.jsonl.xz
+     1       1      31.2 KiB     146.4 KiB  0.213  CRC64   data/ga.train.0.jsonl.xz
+     1       0          32 B           0 B    ---  CRC64   data/ga.validation.0.jsonl.xz
+     1       1     211.5 MiB   1,407.3 MiB  0.150  CRC64   data/hu.train.0.jsonl.xz
+     1       1     212.9 KiB   1,287.6 KiB  0.165  CRC64   data/hu.validation.0.jsonl.xz
+     1       1     608.0 MiB   2,963.4 MiB  0.205  CRC64   data/it.train.0.jsonl.xz
+     1       1     608.0 MiB   2,970.0 MiB  0.205  CRC64   data/it.train.1.jsonl.xz
+     1       1     608.0 MiB   2,973.7 MiB  0.204  CRC64   data/it.train.2.jsonl.xz
+     1       1     315.2 MiB   1,541.6 MiB  0.204  CRC64   data/it.train.3.jsonl.xz
+     1       1   2,419.3 KiB      11.2 MiB  0.211  CRC64   data/it.validation.0.jsonl.xz
+     1       1   9,966.7 KiB      38.2 MiB  0.255  CRC64   data/lt.train.0.jsonl.xz
+     1       1      17.2 KiB      84.7 KiB  0.203  CRC64   data/lt.validation.0.jsonl.xz
+     1       1      66.4 KiB     326.7 KiB  0.203  CRC64   data/lv.train.0.jsonl.xz
+     1       0          32 B           0 B    ---  CRC64   data/lv.validation.0.jsonl.xz
+     1       1   2,851.6 KiB      16.7 MiB  0.167  CRC64   data/mt.train.0.jsonl.xz
+     1       1       2,092 B       5,079 B  0.412  CRC64   data/mt.validation.0.jsonl.xz
+     1       1      14.6 MiB      71.6 MiB  0.203  CRC64   data/nl.train.0.jsonl.xz
+     1       1      23.5 KiB      79.2 KiB  0.296  CRC64   data/nl.validation.0.jsonl.xz
+     1       1     608.0 MiB   3,635.5 MiB  0.167  CRC64   data/pl.train.0.jsonl.xz
+     1       1     608.0 MiB   3,646.0 MiB  0.167  CRC64   data/pl.train.1.jsonl.xz
+     1       1     401.9 MiB   2,409.0 MiB  0.167  CRC64   data/pl.train.2.jsonl.xz
+     1       1   1,870.5 KiB      10.5 MiB  0.173  CRC64   data/pl.validation.0.jsonl.xz
+     1       1     608.0 MiB   3,173.1 MiB  0.192  CRC64   data/pt.train.0.jsonl.xz
+     1       1     329.1 MiB   1,721.6 MiB  0.191  CRC64   data/pt.train.1.jsonl.xz
+     1       1     989.0 KiB   4,841.2 KiB  0.204  CRC64   data/pt.validation.0.jsonl.xz
+     1       1     365.2 MiB   2,237.9 MiB  0.163  CRC64   data/ro.train.0.jsonl.xz
+     1       1     419.2 KiB   2,320.4 KiB  0.181  CRC64   data/ro.validation.0.jsonl.xz
+     1       1     266.1 MiB   1,668.1 MiB  0.160  CRC64   data/sk.train.0.jsonl.xz
+     1       1     304.1 KiB   1,618.2 KiB  0.188  CRC64   data/sk.validation.0.jsonl.xz
+     1       1      81.6 MiB     416.1 MiB  0.196  CRC64   data/sl.train.0.jsonl.xz
+     1       1     101.0 KiB     416.6 KiB  0.242  CRC64   data/sl.validation.0.jsonl.xz
+     1       1     252.0 MiB   1,423.2 MiB  0.177  CRC64   data/sv.train.0.jsonl.xz
+     1       1     210.8 KiB   1,091.2 KiB  0.193  CRC64   data/sv.validation.0.jsonl.xz
+ -------------------------------------------------------------------------------
+    74      72      20.0 GiB     106.2 GiB  0.189  CRC64   74 files
+ ```
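+
+ Shards follow the naming pattern `{language}.{split}.{shard}.jsonl.xz` under `data/`. For illustration, a shard's download URL can be resolved directly (this mirrors the `get_url` helper in the loading script below):
+ ```python
+ from huggingface_hub import hf_hub_url
+
+ # Resolve the hub URL of a single shard of this dataset repository.
+ url = hf_hub_url(repo_id="joelito/legal-mc4", filename="data/de.train.0.jsonl.xz", repo_type="dataset")
+ ```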
+
+ ## Dataset Creation
+
+ The dataset was created by filtering mc4 for legal data: documents were kept if they contained terms indicating legal citations.
+ Note that the resulting dataset can be quite noisy, and its quality has not been systematically assessed.
+
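+ The exact term lists are not reproduced here. As a minimal sketch of this kind of filter (the citation patterns below are hypothetical placeholders, not the ones actually used):
+ ```python
+ import re
+
+ # Hypothetical examples of citation-style terms; the real filter used
+ # terms indicating legal citations (not shown here).
+ CITATION_PATTERN = re.compile(r"(Art\.\s*\d+|§\s*\d+|\bcase\s+no\.)", re.IGNORECASE)
+
+ def looks_legal(text: str) -> bool:
+     """Keep a document if any citation-style term matches."""
+     return CITATION_PATTERN.search(text) is not None
+ ```
+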
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ CC BY 4.0 (as declared in the `license` field of the YAML metadata above)
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
legal-mc4.py ADDED
@@ -0,0 +1,133 @@
+ """Legal MC4"""
+ import ast
+ import json
+
+ import datasets
+ from huggingface_hub.file_download import hf_hub_url
+
+ try:
+     import lzma as xz
+ except ImportError:
+     import pylzma as xz
+
+ datasets.logging.set_verbosity_info()
+ logger = datasets.logging.get_logger(__name__)
+
+ _DESCRIPTION = """
+ Legal-MC4: A Corpus Covering the Legal Part of MC4 for European Languages
+ """
+
+ _CITATION = """
+ """
+
+ _REPO_ID = "joelito/legal-mc4"
+ _URL = f"https://huggingface.co/datasets/{_REPO_ID}"
+
+ # Maps each language to its highest train shard index
+ # (e.g. "de": 8 means shards de.train.0 ... de.train.8 exist).
+ _LANGUAGES = {
+     "bg": 0,
+     "cs": 2,
+     "da": 0,
+     "de": 8,
+     "el": 0,
+     "en": 1,
+     "es": 9,
+     "et": 1,
+     "fi": 0,
+     "fr": 2,
+     "ga": 0,
+     # "hr",  # hr is not present in mc4
+     "hu": 0,
+     "it": 3,
+     "lt": 0,
+     "lv": 0,
+     "mt": 0,
+     "nl": 0,
+     "pl": 2,
+     "pt": 1,
+     "ro": 0,
+     "sk": 0,
+     "sl": 0,
+     "sv": 0,
+ }
+ _LANGS = list(_LANGUAGES.keys())
+
+
+ class LegalMC4Config(datasets.BuilderConfig):
+     """BuilderConfig for Legal-MC4."""
+
+     def __init__(self, name: str, **kwargs):
+         """BuilderConfig for Legal-MC4.
+         Args:
+             name: One of bg,cs,da,de,el,en,es,et,fi,fr,ga,hu,it,lt,lv,mt,nl,pl,pt,ro,sk,sl,sv or all
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(LegalMC4Config, self).__init__(**kwargs)
+         self.name = name
+
+
+ class MC4Legal(datasets.GeneratorBasedBuilder):
+     """Legal-MC4: A Corpus Covering the Legal Part of MC4 for European Languages"""
+
+     BUILDER_CONFIGS = [LegalMC4Config(language) for language in _LANGS + ["all"]]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "index": datasets.Value("int32"),
+                     "url": datasets.Value("string"),
+                     "timestamp": datasets.Value("timestamp[s]"),
+                     "matches": datasets.Sequence(datasets.Value("string")),
+                     "text": datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_URL,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         def get_url(file_name):
+             return hf_hub_url(repo_id=_REPO_ID, filename=f"data/{file_name}.jsonl.xz", repo_type="dataset")
+
+         languages = _LANGS if self.config.name == "all" else [self.config.name]
+         split_generators = []
+         for split in [datasets.Split.TRAIN, datasets.Split.VALIDATION]:
+             # Collect the shard URLs for this split only; the list is reset per
+             # split so the validation split does not also pull the train shards.
+             data_urls = []
+             for language in languages:
+                 # Train is sharded (see _LANGUAGES); validation always has a single shard.
+                 shards = range(_LANGUAGES[language] + 1) if split == datasets.Split.TRAIN else [0]
+                 for shard in shards:
+                     data_urls.append(get_url(f"{language}.{str(split)}.{shard}"))
+
+             downloaded_files = dl_manager.download(data_urls)
+             split_generators.append(
+                 datasets.SplitGenerator(name=split, gen_kwargs={"filepaths": downloaded_files})
+             )
+         return split_generators
+
+     def _generate_examples(self, filepaths):
+         """This function returns the examples in the raw (text) form by iterating on all the files."""
+         id_ = 0
+         for filepath in filepaths:
+             logger.info("Generating examples from = %s", filepath)
+             try:
+                 with xz.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
+                     for line in f:
+                         if line:
+                             example = json.loads(line)
+                             if isinstance(example, dict):
+                                 timestamp = example.get("timestamp", "")
+                                 # Remove the Z at the end (time zone) so the value
+                                 # parses as timestamp[s].
+                                 if isinstance(timestamp, str) and timestamp.endswith("Z"):
+                                     timestamp = timestamp[:-1]
+                                 yield id_, {
+                                     # Default to None so a missing index does not
+                                     # break the int32 feature.
+                                     "index": example.get("index"),
+                                     "url": example.get("url", ""),
+                                     "timestamp": timestamp,
+                                     # matches is stored as the string representation
+                                     # of a list; default to "[]" so literal_eval does
+                                     # not fail when the field is missing.
+                                     "matches": ast.literal_eval(example.get("matches", "[]")),
+                                     "text": example.get("text", ""),
+                                 }
+                                 id_ += 1
+             except Exception:
+                 logger.exception("Error while processing file %s", filepath)