Datasets:

Multilinguality: multilingual
Size Categories: unknown
Language Creators: found
Annotations Creators: machine-generated
Source Datasets: original
ArXiv:
License:
system (HF staff) committed on
Commit f1be553
1 Parent(s): e9b0549

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1): README.md (+257, -0)

README.md ADDED

---
---

# Dataset Card for "polyglot_ner"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://sites.google.com/site/rmyeid/projects/polylgot-ner](https://sites.google.com/site/rmyeid/projects/polylgot-ner)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 43285.14 MB
- **Size of the generated dataset:** 11958.61 MB
- **Total amount of disk used:** 55243.75 MB

### [Dataset Summary](#dataset-summary)

Polyglot-NER
A training dataset automatically generated from Wikipedia and Freebase for the task
of named entity recognition. The dataset contains the basic Wikipedia-based
training data for 40 languages (with coreference resolution). The details of the
generation procedure are outlined in Section 3 of the paper
(https://arxiv.org/abs/1410.3791). Each config contains the data corresponding to
a single language. For example, "es" includes only Spanish examples.

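A single language config can be loaded by name with the `datasets` library. A minimal sketch (the config name "es" below is only an illustration; any of the language codes, or "combined", should work, and the download is large):

```python
from datasets import load_dataset

# Load only the Spanish portion of Polyglot-NER.
# Use "combined" to get all 40 languages in one config.
dataset = load_dataset("polyglot_ner", "es")

# The dataset ships a single "train" split.
example = dataset["train"][0]
print(example["id"], example["lang"])
print(example["words"][:10])
print(example["ner"][:10])
```
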
### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### ar

- **Size of downloaded dataset files:** 1055.74 MB
- **Size of the generated dataset:** 175.05 MB
- **Total amount of disk used:** 1230.78 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "id": "2",
    "lang": "ar",
    "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "LOC", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "PER", "PER", "PER", "PER", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
    "words": "[\"وفي\", \"مرحلة\", \"موالية\", \"أنشأت\", \"قبيلة\", \"مكناسة\", \"الزناتية\", \"مكناسة\", \"تازة\", \",\", \"وأقام\", \"بها\", \"المرابطون\", \"قلعة\", \"..."
}
```

#### bg

- **Size of downloaded dataset files:** 1055.74 MB
- **Size of the generated dataset:** 181.68 MB
- **Total amount of disk used:** 1237.42 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "id": "1",
    "lang": "bg",
    "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
    "words": "[\"Дефиниция\", \"Наименованията\", \"\\\"\", \"книжовен\", \"\\\"/\\\"\", \"литературен\", \"\\\"\", \"език\", \"на\", \"български\", \"за\", \"тази\", \"кодифи..."
}
```

#### ca

- **Size of downloaded dataset files:** 1055.74 MB
- **Size of the generated dataset:** 137.09 MB
- **Total amount of disk used:** 1192.82 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "id": "2",
    "lang": "ca",
    "ner": "[\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O...",
    "words": "[\"Com\", \"a\", \"compositor\", \"deixà\", \"un\", \"immens\", \"llegat\", \"que\", \"inclou\", \"8\", \"simfonies\", \"(\", \"1822\", \"),\", \"diverses\", ..."
}
```

#### combined

- **Size of downloaded dataset files:** 1055.74 MB
- **Size of the generated dataset:** 5995.61 MB
- **Total amount of disk used:** 7051.35 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "id": "18",
    "lang": "es",
    "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
    "words": "[\"Los\", \"cambios\", \"en\", \"la\", \"energía\", \"libre\", \"de\", \"Gibbs\", \"\\\\\", \"Delta\", \"G\", \"nos\", \"dan\", \"una\", \"cuantificación\", \"de..."
}
```

#### cs

- **Size of downloaded dataset files:** 1055.74 MB
- **Size of the generated dataset:** 149.53 MB
- **Total amount of disk used:** 1205.26 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "id": "3",
    "lang": "cs",
    "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
    "words": "[\"Historie\", \"Symfonická\", \"forma\", \"se\", \"rozvinula\", \"se\", \"především\", \"v\", \"období\", \"klasicismu\", \"a\", \"romantismu\", \",\", \"..."
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### ar
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.

#### bg
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.

#### ca
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.

#### combined
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.

#### cs
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.

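The `words` and `ner` lists are token-aligned (one tag per word), so entity labels can be read off by zipping the two fields. A minimal sketch, reusing the `dataset` object from the earlier loading example:

```python
# Assumes `dataset` was loaded as in the earlier sketch, e.g.:
# dataset = load_dataset("polyglot_ner", "es")
example = dataset["train"][0]

# Pair each token with its tag and keep only the named-entity tokens.
entities = [
    (word, tag)
    for word, tag in zip(example["words"], example["ner"])
    if tag != "O"
]
print(entities)
```
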
### [Data Splits Sample Size](#data-splits-sample-size)

| name     |    train |
|----------|---------:|
| ar       |   339109 |
| bg       |   559694 |
| ca       |   372665 |
| combined | 21070925 |
| cs       |   564462 |

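As a quick sanity check, the row counts in the table can be compared against a loaded config; a sketch assuming the `ar` config:

```python
from datasets import load_dataset

dataset = load_dataset("polyglot_ner", "ar")

# Only a "train" split is provided; the table above lists 339109 rows for "ar".
print(len(dataset["train"]))
```
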
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@article{polyglotner,
  author = {Al-Rfou, Rami and Kulkarni, Vivek and Perozzi, Bryan and Skiena, Steven},
  title = {{Polyglot-NER}: Massive Multilingual Named Entity Recognition},
  journal = {{Proceedings of the 2015 {SIAM} International Conference on Data Mining, Vancouver, British Columbia, Canada, April 30 - May 2, 2015}},
  month = {April},
  year = {2015},
  publisher = {SIAM},
}
```


### Contributions

Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.