Commit 431e18c (parent: 93f8e02), committed by system (HF staff): Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed: README.md (added, +207 lines)
---
---

# Dataset Card for "wikipedia"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 30739.25 MB
- **Size of the generated dataset:** 35376.35 MB
- **Total amount of disk used:** 66115.60 MB
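As a quick sanity check on the figures above, the total disk figure is simply the sum of the downloaded and generated sizes; a trivial sketch:

```python
# Sizes quoted above for the full dataset, in MB.
download_mb = 30739.25   # downloaded dump files
generated_mb = 35376.35  # generated dataset on disk
total_mb = download_mb + generated_mb
print(round(total_mb, 2))  # 66115.6
```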
### [Dataset Summary](#dataset-summary)

Wikipedia dataset containing cleaned articles in all languages. The datasets are built from the Wikipedia dumps (https://dumps.wikimedia.org/) with one configuration per language. Each example contains the content of one full Wikipedia article, cleaned to strip markup and unwanted sections (references, etc.).
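Each configuration listed in this card is named `<dump date>.<language code>`. A minimal loading sketch with the `datasets` library (the default configuration name below is an assumption based on the configurations listed further down):

```python
def wiki_config(dump_date: str, lang: str) -> str:
    """Build a configuration name such as "20200501.en"."""
    return f"{dump_date}.{lang}"

def load_wiki(config: str = "20200501.en"):
    """Load the train split of one language configuration.

    The import is done lazily so this sketch can be read and
    tested without `datasets` installed; calling it triggers
    the (potentially very large) download.
    """
    from datasets import load_dataset  # pip install datasets
    return load_dataset("wikipedia", config, split="train")
```

For example, `load_wiki(wiki_config("20200501", "frr"))` would fetch the comparatively small North Frisian configuration; each returned example is a dict with `title` and `text` keys, matching the fields documented below.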
### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### 20200501.de

- **Size of downloaded dataset files:** 5531.82 MB
- **Size of the generated dataset:** 7716.79 MB
- **Total amount of disk used:** 13248.61 MB

An example of 'train' looks as follows.
```

```

#### 20200501.en

- **Size of downloaded dataset files:** 17396.28 MB
- **Size of the generated dataset:** 17481.07 MB
- **Total amount of disk used:** 34877.35 MB

An example of 'train' looks as follows.
```

```

#### 20200501.fr

- **Size of downloaded dataset files:** 4653.55 MB
- **Size of the generated dataset:** 6182.24 MB
- **Total amount of disk used:** 10835.79 MB

An example of 'train' looks as follows.
```

```

#### 20200501.frr

- **Size of downloaded dataset files:** 9.05 MB
- **Size of the generated dataset:** 5.88 MB
- **Total amount of disk used:** 14.93 MB

An example of 'train' looks as follows.
```

```

#### 20200501.it

- **Size of downloaded dataset files:** 2970.57 MB
- **Size of the generated dataset:** 3809.89 MB
- **Total amount of disk used:** 6780.46 MB

An example of 'train' looks as follows.
```

```

### [Data Fields](#data-fields)

The data fields are the same across all configurations.

#### 20200501.de
- `title`: a `string` feature.
- `text`: a `string` feature.

#### 20200501.en
- `title`: a `string` feature.
- `text`: a `string` feature.

#### 20200501.fr
- `title`: a `string` feature.
- `text`: a `string` feature.

#### 20200501.frr
- `title`: a `string` feature.
- `text`: a `string` feature.

#### 20200501.it
- `title`: a `string` feature.
- `text`: a `string` feature.

### [Data Splits Sample Size](#data-splits-sample-size)

| name         |   train |
|--------------|--------:|
| 20200501.de  | 3140341 |
| 20200501.en  | 6078422 |
| 20200501.fr  | 2210508 |
| 20200501.frr |   11803 |
| 20200501.it  | 1931197 |
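For a rough sense of scale, the listed train splits can be summed; a small sketch over the counts copied from the table above:

```python
# Train example counts copied from the splits table above.
train_counts = {
    "20200501.de": 3_140_341,
    "20200501.en": 6_078_422,
    "20200501.fr": 2_210_508,
    "20200501.frr": 11_803,
    "20200501.it": 1_931_197,
}
total = sum(train_counts.values())
print(total)  # 13372271 articles across the five listed configurations
```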
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"
}
```

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.