joelniklaus committed
Commit 4deba17
1 Parent(s): da7bf55

first version of mc4_legal dataset
.gitattributes CHANGED
@@ -49,3 +49,29 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ data/pl.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/sv.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/bg.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/da.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/es_1.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/et.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/lv.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/mt.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/nl.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/sk.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/el.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/fr.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/hu.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/it.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/pt.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/ro.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/sl.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/cs.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/en_0.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/es_0.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/ga.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/lt.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/de_0.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/de_1.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/en_1.jsonl.xz filter=lfs diff=lfs merge=lfs -text
+ data/fi.jsonl.xz filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,168 @@
+ ---
+ annotations_creators:
+ - other
+ language_creators:
+ - found
+ language:
+ - bg
+ - cs
+ - da
+ - de
+ - el
+ - en
+ - es
+ - et
+ - fi
+ - fr
+ - ga
+ - hu
+ - it
+ - lt
+ - lv
+ - mt
+ - nl
+ - pl
+ - pt
+ - ro
+ - sk
+ - sl
+ - sv
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - multilingual
+ paperswithcode_id: null
+ pretty_name: "MC4_Legal: A Corpus Covering the Legal Part of MC4 for European Languages"
+ size_categories:
+ - 10M<n<100M
+ source_datasets:
+ - original
+ task_categories:
+ - fill-mask
+ ---
+
+ # Dataset Card for MC4_Legal: A Corpus Covering the Legal Part of MC4 for European Languages
+
+ ## Table of Contents
+
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
+
+ ### Dataset Summary
+
+ This dataset contains large text resources from MC4, filtered for legal data, that can be used for pretraining language models.
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset supports the task of masked language modeling.
+
+ ### Languages
+
+ The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The file format is jsonl.xz and there is one split available ("train").
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
data/bg.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c317c864d2a63ca0285e914dc23d9e9a8d1c9a6345235b0f775ab0962453fb59
+ size 1816752
data/cs.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:53dc6fbacf9b1e086541aa323ed398dbbcc64732967b1e9f06c514b640ea8e60
+ size 1528265264
data/da.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2535e81d4c96231df49c5ba88f87f50b65132afcabcec4c0405759c01b09981c
+ size 7930372
data/de_0.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:925edf31892907337e4cddbefade81f679ada1987e04f3590173a9066f924877
+ size 3029091892
data/de_1.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e7e971968108efb881a9febad1c06db47a9833a3ea43b9e24d33cc9e63e8b9f
+ size 3021450784
data/el.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:525953d4bcdc1c2d033f78dbc515ee4c53d8d21687db6e158f7186daffc933fc
+ size 10437548
data/en_0.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29443131217c2bbb45ca9d6b31544b9bd663c3d1c2b668726d043d99848e8215
+ size 2237463392
data/en_1.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f74661d7f71b3f7dd5391d5da182cef973ab45a10f118a02e4536925dcf50b7c
+ size 2260012328
data/es_0.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc81d020f46b61b6e44a4ac2649c21c0cdb9b76f335075dcf83f5b6fa5d1cb2e
+ size 2182792196
data/es_1.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d0b12610073fd7bba32a77fbae7f61198a19cf97487e91bd4efdbbcc0124287
+ size 2197815388
data/et.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:925465a6575b6a4dbe7ea6fbe1287b255c0e052b7af7efca77d8cf79847e4f17
+ size 119394764
data/fi.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f85e2a0a2cdd58522aaa45c764be6d88bb87b1b9a81c9518567f19e24e48ec9c
+ size 2105983944
data/fr.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e341761081ee86139a68209624c5bd026fe171cc92c85b20ca035212d83321e
+ size 2289491856
data/ga.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a242590c6c948726c9ddf116cf24aeb3a8ea45d7715b29d583911601ee48bc0
+ size 773172
data/hu.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66c4a1bb904c8693ea417b076253cace56d1f79b9d23bde8f9b2a18af6a420c6
+ size 298607192
data/it.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b85d3de366285f14fbbc4dfda127f4e0b532525f14c06499fb3194a8a0c08c0
+ size 2681108736
data/lt.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d2f3ae8ef8bf8d9f4d25a795ddc9c0e997baabe7b5f20678a25d8e0436d711c8
+ size 43971092
data/lv.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e598362f5884f1de689a4c310301fc2c94569c799eb53d346b5edb19d9e2d6ac
+ size 65844
data/mt.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e88dbb3319f6657de53f31a24dd028e1ad01e78c5542abde794243b93aa67991
+ size 46965820
data/nl.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c827790e16443a324a17ebc781f76c1072e3f5075ae4563ba48aa000550ec44f
+ size 17483556
data/pl.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:48e9961925a843f1c5c31186724648fe276d78a6eb8cbd8b4deece2b9cd71f83
+ size 1996190756
data/pt.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aed894b05d2a993ea0dd3ad0949803bd2a8e89e302694963dccbeeb55a880fcc
+ size 1029881752
data/ro.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8706b832daa8298a06ee606277517dfbad0ca1b4c02ce0c9a94455f902e49d8f
+ size 403410064
data/sk.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7b546c092ceb39ac244c3b41ee8331b982c8aada88446424f5b685b96801d1a
+ size 282243428
data/sl.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f35e0d822a246481bac06b134d29985d83c6a6318f1b688a72300ed6bd20ff4f
+ size 87503000
data/sv.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c46aec9ba607b2e2b51f04b9ff09a11abb9bb241eef1576fd034616af1998df3
+ size 418781800
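
Each of the `data/*.jsonl.xz` entries above is a Git LFS pointer: a three-line text stub recording the spec version, the SHA-256 of the real payload, and its size in bytes. A small sketch of parsing such a pointer (the `parse_lfs_pointer` function name is illustrative):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer stub into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # normalize the two structured fields
    fields["size"] = int(fields["size"])
    fields["oid"] = fields["oid"].split(":", 1)[1]  # drop the "sha256:" prefix
    return fields
```

Applied to the `data/sv.jsonl.xz` stub above, this would return the oid and a size of 418781800 bytes (~400 MB of compressed Swedish data).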
mc4_legal.py ADDED
@@ -0,0 +1,119 @@
+ """MC4_Legal"""
+
+ import json
+
+ import datasets
+
+ try:
+     import lzma as xz
+ except ImportError:
+     import pylzma as xz
+
+ datasets.logging.set_verbosity_info()
+ logger = datasets.logging.get_logger(__name__)
+
+ _DESCRIPTION = """
+ """
+
+ _CITATION = """
+ """
+
+ _URL = "https://huggingface.co/datasets/joelito/mc4_legal"
+ _DATA_URL = f"{_URL}/resolve/main/data"
+
+ _LANGUAGES = [
+     "bg",
+     "cs",
+     "da",
+     "de",
+     "el",
+     "en",
+     "es",
+     "et",
+     "fi",
+     "fr",
+     "ga",
+     # "hr",  # hr is not present in mc4
+     "hu",
+     "it",
+     "lt",
+     "lv",
+     "mt",
+     "nl",
+     "pl",
+     "pt",
+     "ro",
+     "sk",
+     "sl",
+     "sv",
+ ]
+
+
+ class MC4LegalConfig(datasets.BuilderConfig):
+     """BuilderConfig for MC4_Legal."""
+
+     def __init__(self, name: str, **kwargs):
+         """BuilderConfig for MC4_Legal.
+
+         Args:
+             name: One of bg,cs,da,de,el,en,es,et,fi,fr,ga,hu,it,lt,lv,mt,nl,pl,pt,ro,sk,sl,sv or all
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(MC4LegalConfig, self).__init__(**kwargs)
+         self.name = name
+
+
+ class MC4Legal(datasets.GeneratorBasedBuilder):
+     """MC4_Legal: A Corpus Covering the Legal Part of MC4 for European Languages"""
+
+     BUILDER_CONFIGS = [MC4LegalConfig(language) for language in _LANGUAGES + ["all"]]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "index": datasets.Value("int32"),
+                     "url": datasets.Value("string"),
+                     "timestamp": datasets.Value("timestamp[s]"),
+                     "matches": datasets.Sequence(datasets.Value("string")),
+                     "text": datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_URL,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_urls = []
+         # the config only defines `name`, so use that attribute here
+         languages = _LANGUAGES if self.config.name == "all" else [self.config.name]
+         for language in languages:
+             if language in ["de", "en", "es"]:  # here we need to chunk because the files are too large
+                 data_urls.extend(f"{_DATA_URL}/{language}_{idx}.jsonl.xz" for idx in [0, 1])
+             else:
+                 data_urls.append(f"{_DATA_URL}/{language}.jsonl.xz")
+
+         downloaded_files = dl_manager.download(data_urls)
+         return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": downloaded_files})]
+
+     def _generate_examples(self, filepaths):
+         """This function returns the examples in the raw (text) form by iterating on all the files."""
+         id_ = 0
+         for filepath in filepaths:
+             logger.info("Generating examples from = %s", filepath)
+             try:
+                 with xz.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
+                     for line in f:
+                         if line:
+                             example = json.loads(line)
+                             if example is not None and isinstance(example, dict):
+                                 yield id_, {
+                                     "index": example.get("index", ""),
+                                     "url": example.get("url", ""),
+                                     "timestamp": example.get("timestamp", ""),
+                                     "matches": example.get("matches", []),
+                                     "text": example.get("text", ""),
+                                 }
+                                 id_ += 1
+             except Exception:  # skip unreadable shards instead of aborting the whole build
+                 logger.exception("Error reading file: %s", filepath)
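
`_generate_examples` simply streams JSON lines out of each `xz`-compressed shard. The round trip can be sketched end-to-end with the standard library (the sample record and temp file are illustrative, not taken from the dataset):

```python
import json
import lzma
import tempfile

# Write a tiny one-record shard in the same jsonl.xz format the loader expects ...
record = {
    "index": 0,
    "url": "https://example.com/legal-notice",
    "timestamp": "2022-01-01T00:00:00Z",
    "matches": ["gesetz"],
    "text": "Ein Beispieltext.",
}
with tempfile.NamedTemporaryFile(suffix=".jsonl.xz", delete=False) as tmp:
    path = tmp.name
with lzma.open(path, "wt", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")

# ... and read it back the way _generate_examples does.
examples = []
with lzma.open(path, "rt", encoding="utf-8") as f:
    for line in f:
        if line.strip():
            examples.append(json.loads(line))
```

This mirrors the loader's `xz.open(..., "rt", encoding="utf-8")` call; each decompressed line is one document with the `index`, `url`, `timestamp`, `matches`, and `text` fields declared in `_info`.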