parquet-converter committed
Commit c45effe
1 parent: d93fd32

Update parquet files
DOCS.md DELETED
@@ -1,147 +0,0 @@
# Documentation

This contains some *really* quick docs and notes on filtering with SuperWIKI NEXT.

## wikipedia_soup.py

...is the main class that handles the bulk of the filtering.

Each filter has code documentation explaining what each function generally does, so I'd suggest reading those instead.
### Usage for wikipedia_soup.py

Probably the most important bit.

wikipedia_soup takes in `*.ndjson` files directly from Wikipedia HTML dumps via the `process-root` command.

*Note: there are 3 publicly exposed commands via typer: `process-root`, `process-folder`, and `process-file`.*

`process-root` is probably what you want to use. It expects the following folder structure:

```
dumps <- Input folder for [process-root]
|-afwiki-NS0-20240420-ENTERPRISE-HTML <- Input folder for [process-folder]
  |-afwiki_namespace_0_0.ndjson <- Input file for [process-file]
  |-afwiki_namespace_0_1.ndjson
  |-afwiki_namespace_0_2.ndjson
  ...
|-arwiki-NS0-20240420-ENTERPRISE-HTML
  |-arwiki_namespace_0_0.ndjson
  |-arwiki_namespace_0_1.ndjson
  |-arwiki_namespace_0_2.ndjson
  ...
... And so on...
```

Downloading and filtering the files is relatively easy:

1. Get a list of HTTP URLs (whichever way you prefer).
2. Download said list (wget, curl, aria2c, etc.).
3. Extract the tar files into their own folders as shown above.
4. Run the `process-root` command.
5. Patience.
6. ???
7. Finished!
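Under the hood, the three commands mirror the three levels of that layout. A rough sketch of the directory walk `process-root` would have to do (the `iter_dump_files` helper is my own illustration, not the actual CLI code):

```python
from pathlib import Path

def iter_dump_files(root: str):
    """Yield (dump_name, ndjson_path) pairs, mirroring the
    process-root -> process-folder -> process-file hierarchy."""
    for dump_dir in sorted(Path(root).iterdir()):
        if not dump_dir.is_dir():
            continue  # skip stray files sitting next to the dump folders
        for ndjson in sorted(dump_dir.glob("*.ndjson")):
            yield dump_dir.name, ndjson
```

Each yielded file would then be handed to the same logic `process-file` runs on a single `.ndjson`.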
## wikipedia_template.py

This file contains templates used in Wikipedia articles.

If you need to update a template, follow these steps:

1. Open your web browser.
2. Paste in the following URL, replacing `<ID>` with the relevant Wikidata entry ID.

```
https://www.wikidata.org/w/api.php?action=wbgetentities&ids=<ID>&format=json&props=labels
```

As for the related templates:

- Stubs: `Q4663261`
- Citation needed: `Q7106262`
- Redirect: `Q6042392`

**Note:** For Sections, there are currently no templates available. These must be added manually.
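The `wbgetentities` response nests the labels per language; collecting them into a plain dict is enough to build such a template list (the helper name is mine, not part of the repo):

```python
def template_names(api_response: dict, entity_id: str) -> dict:
    """Map language code -> template label from a wbgetentities JSON response."""
    labels = api_response["entities"][entity_id]["labels"]
    return {lang: entry["value"] for lang, entry in labels.items()}
```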
## mediawiki_soup.py

Before the introduction of Hugging Face's Datatrove, and for the sake of simpler code development, this module implemented the `MediaWikiSoup` class.

This class converts HTML content into markdown and performs additional post-processing steps on the resulting markdown.

`MediaWikiSoup` leverages a "filter chain" architecture. You can extend its functionality by adding filter functions with either `add_markdown_filter` (for markdown processing) or `add_soup_filter` (for BeautifulSoup processing).
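The filter-chain idea reduces to two ordered lists of callables: soup filters run before the HTML-to-markdown conversion, markdown filters after. A minimal, BeautifulSoup-free sketch of the concept (not the actual class):

```python
class FilterChain:
    """Sketch of MediaWikiSoup's filter-chain idea: soup filters run on
    the (parsed) HTML first, markdown filters run on the converted text."""

    def __init__(self):
        self.soup_filters = []
        self.markdown_filters = []

    def add_soup_filter(self, fn):
        self.soup_filters.append(fn)

    def add_markdown_filter(self, fn):
        self.markdown_filters.append(fn)

    def process(self, html, to_markdown):
        for fn in self.soup_filters:
            html = fn(html)
        md = to_markdown(html)
        for fn in self.markdown_filters:
            md = fn(md)
        return md
```

In the real class the soup filters receive a BeautifulSoup object rather than a string; the ordering is the point here.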
## html2markdown.py

Contains a customized markdownify instance. Since this is mainly carried over from 1.5, details on it are a bit hazy.

For `<a>` elements, I only use the contained text. That is to say, I don't include the href.

```html
<a href="//example.com">This is an example</a>
```

Will be md'd into:

```md
This is an example
```

For image elements:

```html
<img src="//example.com" alt="Alt Text"/>
```

Will be md'd into:

```md
Alt Text
```

For `<li>` elements, I'm unsure what the reason was behind the customization. Now, God/LLM/Model/??? only knows.
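The two conversions above (link text only, image replaced by its alt text) can be reproduced with the stdlib `HTMLParser`; this is an illustration of the described behaviour, not the markdownify code itself:

```python
from html.parser import HTMLParser

class LinkStripper(HTMLParser):
    """Keep only the text of <a> elements; replace <img> with its alt text."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # substitute the image with its alt text (empty if absent)
            self.parts.append(dict(attrs).get("alt", ""))

    def handle_data(self, data):
        self.parts.append(data)

def to_text(html: str) -> str:
    parser = LinkStripper()
    parser.feed(html)
    return "".join(parser.parts)
```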
## folders2jsonl.py

A simple script that merges chunked ndjson files into one single file for ease of processing.
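That chunk-merging step can be sketched as follows (the function name is mine; the real script may differ):

```python
from pathlib import Path

def merge_ndjson(folder: str, out_path: str) -> int:
    """Concatenate all *.ndjson chunks in `folder` into one file.
    Returns the number of records written."""
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for chunk in sorted(Path(folder).glob("*.ndjson")):
            with open(chunk, encoding="utf-8") as f:
                for line in f:
                    line = line.rstrip("\n")
                    if line:  # skip blank lines between records
                        out.write(line + "\n")
                        count += 1
    return count
```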
# Tools

Extra tools unrelated to the main filtering, but used in some shape or form.

## tools/wikipedia_eligablewiki.py

As the title says, it unbiasedly selects groups of Wikipedias with high enough content. Refer to `Selection of Wikipedia` for how it was computed.

The stats `.json` file can be fetched from the source page here: https://commons.wikimedia.org/w/index.php?title=Data:Wikipedia_statistics/data.tab&action=edit

Copy the JSON in the source into a `.json` file and you should be good to go.

If you have to filter a list of URLs, it should look like this:

Using mirror.accum.se as the mirror:

```txt
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/amiwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/amwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/angwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/anwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
```

Or with the official dumps:

```txt
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/amiwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/amwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/angwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/anwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
```
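Both lists differ only in their base URL, so they can be generated from wiki database names (a sketch, using the date and bases shown above):

```python
def dump_urls(wikis, date="20240420",
              base="https://dumps.wikimedia.org/other/enterprise_html/runs"):
    """Build enterprise-HTML dump URLs for a list of wiki database names."""
    return [f"{base}/{date}/{w}-NS0-{date}-ENTERPRISE-HTML.json.tar.gz"
            for w in wikis]
```

Swap `base` for the mirror prefix to get the mirror.accum.se variant.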
## tools/wikipedia_pageview.py

Not used in NEXT, but included. The idea is to accumulate all pageviews and filter each article based on its pageviews. While it's a neat idea, I just didn't use it.

## tools/wikipedia_mediaalias.py

Pretty sure it's unfinished; I didn't use it in the end. Similar to pageview, though someone could improvise and use it.
README.md DELETED
@@ -1,299 +0,0 @@
---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: SuperWikiNEXT-32B
paperswithcode_id: null
license:
- cc-by-sa-3.0
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
language:
- af
- ar
- ast
- az
- be
- bg
- bn
- ca
- ce
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- he
- hi
- hr
- hu
- hy
- id
- it
- ja
- ka
- kk
- ko
- la
- lt
- lv
- mk
- ms
- my
- nl
- nn
- 'no'
- pl
- pt
- ro
- ru
- sh
- sk
- sl
- sr
- sv
- ta
- tg
- th
- tr
- uk
- ur
- uz
- vi
- zh
size_categories:
- 10B<n<100B
---
# Dataset Card for SuperWikiNEXT-32B

![](Waifu.png "Based off Wikipe-tan (maid, cyan hair, short hair) and Wikipedia's globe logo.")

*Waifu to catch your attention.*

## Dataset Details

### Dataset Description

*SuperWikipedia-NEXT* is an enhanced version of the SuperWIKI dataset. SuperWIKI was born out of the desire for a better-filtered Wikipedia that retains markdown formatting.
*SuperWikipedia-NEXT* contains **~32.44B** tokens (llama-2-7b-chat tokenizer) / **~27.92B** tokens (RWKV tokenizer) from approximately **60** "high quality" / "selected" languages.

- **Curated by:** KaraKaraWitch
- **Funded by [optional]:** Recursal.ai (I work there lol)
- **Shared by [optional]:** KaraKaraWitch
- **Language(s) (NLP):** Many. Refer to the data below for a list of languages.
- **License:** cc-by-sa-4.0
### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Source Data:** [https://dumps.wikimedia.org/other/enterprise_html/](https://dumps.wikimedia.org/other/enterprise_html/)

### Dataset Summary

A Wikipedia dataset containing cleaned articles in the selected languages.
The dataset is built manually from Wikipedia HTML dumps, with one split per language.
Each example contains the content of one full Wikipedia article.

### Supported Tasks and Leaderboards

The dataset is generally used for language modelling.

### Languages

We have selected the following Wikipedias:

```
af.wikipedia.org
ar.wikipedia.org
ast.wikipedia.org
az.wikipedia.org
be.wikipedia.org
bg.wikipedia.org
bn.wikipedia.org
ca.wikipedia.org
ce.wikipedia.org
cs.wikipedia.org
cy.wikipedia.org
da.wikipedia.org
de.wikipedia.org
el.wikipedia.org
en.wikipedia.org
eo.wikipedia.org
es.wikipedia.org
et.wikipedia.org
eu.wikipedia.org
fa.wikipedia.org
fi.wikipedia.org
fr.wikipedia.org
gl.wikipedia.org
he.wikipedia.org
hi.wikipedia.org
hr.wikipedia.org
hu.wikipedia.org
hy.wikipedia.org
id.wikipedia.org
it.wikipedia.org
ja.wikipedia.org
ka.wikipedia.org
kk.wikipedia.org
ko.wikipedia.org
la.wikipedia.org
lt.wikipedia.org
lv.wikipedia.org
min.wikipedia.org
mk.wikipedia.org
ms.wikipedia.org
my.wikipedia.org
nl.wikipedia.org
nn.wikipedia.org
no.wikipedia.org
pl.wikipedia.org
pt.wikipedia.org
ro.wikipedia.org
ru.wikipedia.org
sh.wikipedia.org
simple.wikipedia.org
sk.wikipedia.org
sl.wikipedia.org
sr.wikipedia.org
sv.wikipedia.org
ta.wikipedia.org
tg.wikipedia.org
th.wikipedia.org
tr.wikipedia.org
uk.wikipedia.org
ur.wikipedia.org
uz.wikipedia.org
vi.wikipedia.org
zh-min-nan.wikipedia.org
zh.wikipedia.org
zh-yue.wikipedia.org
```

*`.wikipedia.org`* extensions have been added for your convenience.
### Selection of Wikipedia

We deem a particular Wikipedia language as high quality if it:

1. Has a total article count of `>100,000`.
2. Has a `Depth > 5.1`.

*Depth is calculated using the following equation:*

`depth = (article_edits / total_pages) * ((total_pages - articles) / articles) ** 2`

This formula is taken directly from the [list of Wikipedias](https://meta.wikimedia.org/wiki/Wikipedia_article_depth).
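In code, the two criteria and the depth formula look like this (a direct transcription of the rules above, not the actual `wikipedia_eligablewiki.py` source):

```python
def depth(article_edits: int, total_pages: int, articles: int) -> float:
    """Article depth, per https://meta.wikimedia.org/wiki/Wikipedia_article_depth."""
    return (article_edits / total_pages) * ((total_pages - articles) / articles) ** 2

def is_high_quality(articles: int, article_edits: int, total_pages: int) -> bool:
    """True if the wiki passes both selection criteria."""
    return articles > 100_000 and depth(article_edits, total_pages, articles) > 5.1
```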
### Filtering

Extensive HTML and markdown filtering has been done to derive the final dataset.

For HTML:

1. Parse the article content with BeautifulSoup.
2. Extract the titles from the soup.
3. Drop (as in, skip processing) *stub articles*. To ensure multilanguage coverage, we use a list of stub template names found across multiple languages via Wikidata. (The template names are included in `wikipedia_template.py`.)
4. Drop articles created by the *Lsjbot* bot.
5. Collapse styles with a `data-mw` component into their next sibling.
6. Remove raw `href` links (where the text of the href equals the href link).
7. Remove "citation needed" templates.
8. Remove citation templates.
9. Remove redirect templates.
10. Drop articles whose content consists of 50% or more tables and lists.
11. Remove message boxes (the orange alert boxes on top of articles).
12. Remove infoboxes (the boxes on the right).
13. Selectively remove tables which consist of mostly empty space (number of `<td>` elements > text length and text length < 50).
14. Clean up LaTeX code.
15. Empty out `class` attributes and `data-mw` attributes.
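Step 10 corresponds to the `tablelist_ratio` triple stored in each article's metadata (table/list characters, total characters, ratio); a sketch of the check:

```python
def mostly_tablelist(tablelist_chars: int, total_chars: int,
                     threshold: float = 0.5) -> bool:
    """Drop rule sketch: True if tables/lists make up >= 50% of the text.
    Mirrors the (tablelist_chars, total_chars, ratio) triple in the metadata."""
    ratio = tablelist_chars / total_chars if total_chars else 0.0
    return ratio >= threshold
```

Plugging in the sample article's triple (4082 of 8644 characters, ratio ~0.472) keeps that article.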
For Markdown:

1. Clean up punctuation.
2. Collect the text length (text normalized to NFKC, keeping CJK characters as-is while decomposing Arabic characters, and counting double-width characters as 2 instead of 1).
3. Filter based on the collected text length (if the article is less than 1000 characters long, it is dropped).

The final markdown text and additional data are included in the jsonl files. Additionally, the scripts used are located in the main directory of this repository.
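Step 2's width-aware count can be sketched with the stdlib `unicodedata` module (an approximation of the described normalization, not the exact implementation):

```python
import unicodedata

def text_length(text: str) -> int:
    """NFKC-normalize, then count wide/fullwidth (e.g. CJK) characters
    as 2 and everything else as 1."""
    normalized = unicodedata.normalize("NFKC", text)
    return sum(2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
               for ch in normalized)
```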
### Data keys

Users can run `less` to see the contents. A sample and a list of dictionary keys are provided below:

```json
{
  "text": "\n**Tharman Shanmugaratnam** PBM (born 25 February 1957) is a Singaporean politician and economist. He is the President of Singapore since 2023. \n\nHe was Senior Minister of Singapore between 2019 and 2023. He was also the Coordinating Minister for Social Policies between 2015 and 2023, and Chairman of the Monetary Authority of Singapore between 2011 and 2023.\n\nOn 8 June 2023, Tharman announced his plans to run for president in the 2023 presidential election. He was elected on 2 September 2023 in a landslide victory, winning 70.40% of the vote.\n\nEarly life and education\n------------------------\n\nTharman was born in the Colony of Singapore in 1957. He studied at the Anglo-Chinese School. When he was studying there, he was not interested in his studies and was not disciplined. However, he liked to read and tried out poetry. During his time at Anglo-Chinese School, he created four poets with his schoolmates. Also, he was interested in sports and spent most of his time playing sports. He even joined his school's hockey team.\n\nThen, he attended the London School of Economics (LSE), graduating with a Bachelor of Science degree in economics.\n\nAfter getting his bachelor's, Tharman went on to study at Wolfson College at the University of Cambridge. There, he completed a Master of Philosophy degree in economics. \n\nTharman then became a student at the Harvard Kennedy School at Harvard University, where he finished a Master in Public Administration (MPA) degree. He was a student activist there. He explored left-wing politics, as he did not agree with the ruling People's Action Party back in Singapore.\n\nTharman was a recipient of the Lucius N. Littauer Fellows Award. The award is given to students with MPA's who showed academic excellence and leadership.In 2011, the LSE gave him an Honorary Fellowship.<...TRUNCATED IN SAMPLE>",
  "meta": {
    "title": "Tharman Shanmugaratnam",
    "mostly_tablelist": false,
    "tablelist_ratio": [
      4082,
      8644,
      0.47223507635354
    ],
    "infobox": [
      "<...TRUNCATED IN SAMPLE>"
    ],
    "td_tables": [],
    "text_length": 5553
  }
}
```

```
text: str (markdown text)
meta: dict (additional metadata)
 - title: str (article title)
 - mostly_tablelist: bool (internal flag for HTML step 10)
 - tablelist_ratio: list (internal data used to compute mostly_tablelist)
 - infobox: list (infoboxes extracted from the raw HTML, with data-mw attributes)
 - td_tables: list (tables extracted in HTML step 13)
 - text_length: int (obtained from markdown step 2)
```
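Reading the files back is plain jsonl; a minimal sketch of splitting one record into the two top-level keys:

```python
import json

def parse_article(line: str):
    """Split one jsonl record into (markdown_text, meta)."""
    record = json.loads(line)
    return record["text"], record["meta"]
```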
### Dataset Curators

KaraKaraWitch. (I typically hang out in the PygmalionAI Discord, sometimes EleutherAI. If something is wrong, ping `@karakarawitch` on Discord.)

I'd be happy if you could spread the word and recommend this dataset over wikitext for your use cases `:)`
### Licensing Information

Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).

Some text has been imported only under CC BY-SA and CC BY-SA-compatible licenses and cannot be reused under the GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
### Citation Information

```
@ONLINE{superwiki-next,
  title  = {SuperWikiNEXT-32B},
  author = {KaraKaraWitch and recursal.ai},
  year   = {2023},
  howpublished = {\url{https://huggingface.co/datasets/WitchesSocialStream/SuperWikipedia-NEXT}},
}
```
data/arwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a94bc066616fc70fb3abf1b37b46639541cb0cabb3f40e49afab0bc99bd0cd3
size 2841433814

data/bewiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d0d97c50a8f737138af7c8fb1ccd768f271578ffe60def835b61680746010033
size 1155363165

data/bgwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:27c499f71d9199f73e64083d26c140ed8f503f3de6ef7c59e850fef92344a39d
size 1488758887

data/bnwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b140e38f615119ca465c96725676d47db99f2432ecf249ae916a0ed2434fc26
size 1035853272

data/cawiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d77bf5601de9b419f120702eccf6ac262fb5e72a2b3e554c4769d2fb03922e0b
size 5972112714

data/cewiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e198aa8e1d0c4834ee55f4c12dbd6c31e2b2c62e4f536a0693a258fea7f51d6
size 27235546

data/cswiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc2b270ba93047f57438eb529414246692649b724b73e5f265d2419ff180d2b8
size 2231152140

data/cywiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c9faae016cb9ea2a5bc00de18116d563edd69fa4b54a38504e1e1ad8c5f68e84
size 1644649803

data/dawiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:79d09e1f892a5d881f1f0c27a98cb5c4da6a34d0d45796bf411c79c519c86f59
size 797865431

data/dewiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a937669127fa792f740b56750212e5ecd7780ee38ed6395c7fe8b08ee6726776
size 17005975673

data/elwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fda812f0b7fbcc9090b8e80ba8893685b67aa921a4e98f1eb163e565d6745a17
size 1868097644

data/enwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc09011835d7a1a336486f31335663660c56152db6faa8446fd918167f2f397e
size 21233345828

data/eowiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:545c15ff78ac3cc7c15ef2e1e751cf8984bcbc180446cd9a701c6a43b607be7a
size 989516433

data/eswiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:49297507ea484fcea8c6209032578f099f7eecf903b16f2fe38985e7174fcf37
size 11071716372

data/etwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:952ecca79ed15b21fbf09309705513be6ed71dbfafb524020ffb56f23ba6331c
size 465971420

data/euwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b32494389ebe5d02bbdaf29b1ddcdf6c1367c4af1feddf75a967b2e99ee11eb
size 1567702776

data/fawiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d230a9bb4d8412c4a783fd77e6d6ebb73010693fc5a3da26b19cf984000fadfe
size 1287208913

data/fiwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:08a82d259db3aa05ca578edbf951c8a672e4b1d268fb53cdcd5684071297291d
size 1849519263

data/frwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b0defc0df25bfb693332558849e8fc8a65e5e1ef9b383fb2550ad73f86e7a952
size 6664758849

data/glwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:34fb45d620e8676fc2d0236c6c4e011bdf3f42c0acb6a67a194b137ea264e218
size 775164199

data/hewiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b9dd2f060158b423c8da1afd108e4ae2bfa4a183772e4aede41829bdd354afa8
size 4196091546

data/hiwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:71d19657576d8527b16a234c43ab64c4ef60e356a6368be0dfc272fbf1a16813
size 636147906

data/hrwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97709f24b885bb9b2a83fe5e9a04832760d465e04361666343983eb81926a700
size 638061099

data/huwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:22c95b42e72ee5596ee7992ef32d53ccf47d45547e53f73b2c6e0759f2da72c2
size 1805896037

data/hywiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5fdab41b9b2bc288e2ae553d70874071849ce3c368cef03a1a5ee33d5f422cc0
size 1704390562

data/idwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e3b7d234e7354b7c6e5b5f9caadcdb6ce396587a5906a0b568df8cd945c6307
size 1294720167

data/itwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3255ebdfd901c43b9eef3ee73dc096e6c7c5a15091c95c01457b17a85f4395de
size 6305403653

data/jawiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4e286589492e9493993a0e79280f05134d6a6045781772ce61deb77b29acaffd
size 4393248918

data/kawiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:501f1941daea65137acafa4a9387077444ad4fe606e582db0d457e3d9831ce4b
size 779076387

data/kkwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86c20b52b6e7692db119ffbb4c6ddeea11927546c7fed98667999df364a2df05
size 354782915

data/kowiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bfa22e2d62a53453844c05bc07aac65fa84f704aa34da57ad77caad576536975
size 1040587894

data/lawiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d931f581d33bd59f5675d52373b2b5d3c00d7eef5d334ed22f06f6aaa14fe725
size 69064972

data/ltwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:73fda3ccc5eeeecd2537453632e13604f1ad836166b8fc80b205cd5648f5e4e3
size 280550034

data/lvwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06e51589b3e503eb7305f1afa9f439b2e6dd96dc4747408192a61c5dccdffab6
size 293886139

data/minwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:148d3fadd778b45ef87bc85dfa3d441be025e82879bc268214a3cce6fc997042
size 23172288

data/mkwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:90453387533a56ee02db2c2ccd3bdbe03f5379f5bf7531adb6df14df5141ee17
size 1100524424

data/mswiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7babb8639179c5806e9e04cf5b14920be168cb5a79d3b96299061d72c9667649
size 462629927

data/mywiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f03791c650d0b11d29ff8d7a65957b6ef57488ab5310b531fb70a684ac52c5e
size 261313382

data/nlwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d43a30581a6164137b0dcd7fdce7a5e0cdd1d64f6d3c752ca6b8788926857ea3
size 5468502588

data/nnwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c08f2d3271ec4288090c32f6280a77e23d1312d858f4639c0132d20d6b1422f0
size 271180825

data/nowiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:99d3ab12af24fd34b25c118d60129250c0e7c5805a14ec94a3305624814e24ba
size 1388899758

data/plwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:09ca8c267af02063e3d7f32b0486a94fd2b576a9353c9b6649bade9885e46aa6
size 4516610219

data/ptwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:19d9b70329e87f7fd28e772714cda43bf4d85fe246d5ca565fa1f77933efa33a
size 2987828140

data/rowiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84467570161484449efa0a9bc52b8e58e0fabe2e3cd16de6c06e958734a5ce9d
size 607655628

data/ruwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0d3dc82c0dae7e9953dd79c6b9c9e27ec7fe127c007d76a0cd19653133bf262
size 14745909726

data/shwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3e9a64cc7fc1c7b85bee291d0946e95d69d4c2ccceac79e57e3a3629bdcb9b60
size 542774552

data/simplewiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e1da8411762fb0b0b751a2a00bc7e7a45afdb85ae7b8016effd18328fb0f378
size 169505289

data/skwiki.jsonl DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c9b8c69693e693cd8b657cc28ff148930b1804b2445f0699b39532a0d556eb0e
size 511844244