---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- ar
- bg
- ca
- cs
- da
- de
- el
- en
- es
- et
- fa
- fi
- fr
- he
- hi
- hr
- hu
- id
- it
- ja
- ko
- lt
- lv
- ms
- nl
- 'no'
- pl
- pt
- ro
- ru
- sk
- sl
- sr
- sv
- th
- tl
- tr
- uk
- vi
- zh
license:
- unknown
multilinguality:
- multilingual
pretty_name: Polyglot-NER
size_categories:
- unknown
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: polyglot-ner
dataset_info:
- config_name: ca
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 143746026
    num_examples: 372665
  download_size: 1107018606
  dataset_size: 143746026
- config_name: de
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 156744752
    num_examples: 547578
  download_size: 1107018606
  dataset_size: 156744752
- config_name: es
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 145387551
    num_examples: 386699
  download_size: 1107018606
  dataset_size: 145387551
- config_name: fi
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 95175890
    num_examples: 387465
  download_size: 1107018606
  dataset_size: 95175890
- config_name: hi
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 177698330
    num_examples: 401648
  download_size: 1107018606
  dataset_size: 177698330
- config_name: id
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 152560050
    num_examples: 463862
  download_size: 1107018606
  dataset_size: 152560050
- config_name: ko
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 174523416
    num_examples: 560105
  download_size: 1107018606
  dataset_size: 174523416
- config_name: ms
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 155268778
    num_examples: 528181
  download_size: 1107018606
  dataset_size: 155268778
- config_name: pl
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 159684112
    num_examples: 623267
  download_size: 1107018606
  dataset_size: 159684112
- config_name: ru
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 200717423
    num_examples: 551770
  download_size: 1107018606
  dataset_size: 200717423
- config_name: sr
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 183437513
    num_examples: 559423
  download_size: 1107018606
  dataset_size: 183437513
- config_name: tl
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 47104871
    num_examples: 160750
  download_size: 1107018606
  dataset_size: 47104871
- config_name: vi
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 141062258
    num_examples: 351643
  download_size: 1107018606
  dataset_size: 141062258
- config_name: ar
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 183551222
    num_examples: 339109
  download_size: 1107018606
  dataset_size: 183551222
- config_name: cs
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 156792129
    num_examples: 564462
  download_size: 1107018606
  dataset_size: 156792129
- config_name: el
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 195456401
    num_examples: 446052
  download_size: 1107018606
  dataset_size: 195456401
- config_name: et
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 21961619
    num_examples: 87023
  download_size: 1107018606
  dataset_size: 21961619
- config_name: fr
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 147560734
    num_examples: 418411
  download_size: 1107018606
  dataset_size: 147560734
- config_name: hr
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 154151689
    num_examples: 629667
  download_size: 1107018606
  dataset_size: 154151689
- config_name: it
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 147520094
    num_examples: 378325
  download_size: 1107018606
  dataset_size: 147520094
- config_name: lt
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 165319919
    num_examples: 848018
  download_size: 1107018606
  dataset_size: 165319919
- config_name: nl
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 150737871
    num_examples: 520664
  download_size: 1107018606
  dataset_size: 150737871
- config_name: pt
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 145627857
    num_examples: 396773
  download_size: 1107018606
  dataset_size: 145627857
- config_name: sk
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 134174889
    num_examples: 500135
  download_size: 1107018606
  dataset_size: 134174889
- config_name: sv
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 157058369
    num_examples: 634881
  download_size: 1107018606
  dataset_size: 157058369
- config_name: tr
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 164456506
    num_examples: 607324
  download_size: 1107018606
  dataset_size: 164456506
- config_name: zh
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 165056969
    num_examples: 1570853
  download_size: 1107018606
  dataset_size: 165056969
- config_name: bg
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 190509195
    num_examples: 559694
  download_size: 1107018606
  dataset_size: 190509195
- config_name: da
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 150551293
    num_examples: 546440
  download_size: 1107018606
  dataset_size: 150551293
- config_name: en
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 145491677
    num_examples: 423982
  download_size: 1107018606
  dataset_size: 145491677
- config_name: fa
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 180093656
    num_examples: 492903
  download_size: 1107018606
  dataset_size: 180093656
- config_name: he
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 177231613
    num_examples: 459933
  download_size: 1107018606
  dataset_size: 177231613
- config_name: hu
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 160702240
    num_examples: 590218
  download_size: 1107018606
  dataset_size: 160702240
- config_name: ja
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 193679570
    num_examples: 1691018
  download_size: 1107018606
  dataset_size: 193679570
- config_name: lv
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 76256241
    num_examples: 331568
  download_size: 1107018606
  dataset_size: 76256241
- config_name: 'no'
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 152431612
    num_examples: 552176
  download_size: 1107018606
  dataset_size: 152431612
- config_name: ro
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 96369897
    num_examples: 285985
  download_size: 1107018606
  dataset_size: 96369897
- config_name: sl
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 148140079
    num_examples: 521251
  download_size: 1107018606
  dataset_size: 148140079
- config_name: th
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 360409343
    num_examples: 217631
  download_size: 1107018606
  dataset_size: 360409343
- config_name: uk
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 198251631
    num_examples: 561373
  download_size: 1107018606
  dataset_size: 198251631
- config_name: combined
  features:
  - name: id
    dtype: string
  - name: lang
    dtype: string
  - name: words
    sequence: string
  - name: ner
    sequence: string
  splits:
  - name: train
    num_bytes: 6286855097
    num_examples: 21070925
  download_size: 1107018606
  dataset_size: 6286855097
---
# Dataset Card for Polyglot-NER

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description

- **Homepage:** [https://sites.google.com/site/rmyeid/projects/polylgot-ner](https://sites.google.com/site/rmyeid/projects/polylgot-ner)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 45.39 GB
- **Size of the generated dataset:** 12.54 GB
- **Total amount of disk used:** 57.93 GB
### Dataset Summary

Polyglot-NER is a training dataset automatically generated from Wikipedia and Freebase for the task of named entity recognition. It contains the basic Wikipedia-based training data (with coreference resolution) for 40 languages. The procedure used to generate the data is outlined in Section 3 of the paper (https://arxiv.org/abs/1410.3791). Each config contains the data corresponding to a different language; for example, "es" includes only Spanish examples.
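As a sketch of how a config might be selected programmatically, the list below reproduces the config names from the metadata in this card (the `load_dataset` call is shown only in a comment because each config downloads the full ~1.1 GB archive):

```python
# Per-language configs listed in this card, plus the "combined" config.
CONFIGS = [
    "ar", "bg", "ca", "cs", "da", "de", "el", "en", "es", "et",
    "fa", "fi", "fr", "he", "hi", "hr", "hu", "id", "it", "ja",
    "ko", "lt", "lv", "ms", "nl", "no", "pl", "pt", "ro", "ru",
    "sk", "sl", "sr", "sv", "th", "tl", "tr", "uk", "vi", "zh",
    "combined",
]

def is_valid_config(name: str) -> bool:
    """Check a requested config name against the ones this card lists."""
    return name in CONFIGS

# Illustrative only -- triggers a large download:
# from datasets import load_dataset
# ds = load_dataset("polyglot_ner", "es", split="train")
```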
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### ar

- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 183.55 MB
- **Total amount of disk used:** 1.29 GB

An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
    "id": "2",
    "lang": "ar",
    "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "LOC", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "PER", "PER", "PER", "PER", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
    "words": "[\"وفي\", \"مرحلة\", \"موالية\", \"أنشأت\", \"قبيلة\", \"مكناسة\", \"الزناتية\", \"مكناسة\", \"تازة\", \",\", \"وأقام\", \"بها\", \"المرابطون\", \"قلعة\", \"..."
}
```

#### bg

- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 190.51 MB
- **Total amount of disk used:** 1.30 GB

An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
    "id": "1",
    "lang": "bg",
    "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
    "words": "[\"Дефиниция\", \"Наименованията\", \"\\\"\", \"книжовен\", \"\\\"/\\\"\", \"литературен\", \"\\\"\", \"език\", \"на\", \"български\", \"за\", \"тази\", \"кодифи..."
}
```

#### ca

- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 143.75 MB
- **Total amount of disk used:** 1.25 GB

An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
    "id": "2",
    "lang": "ca",
    "ner": "[\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O...",
    "words": "[\"Com\", \"a\", \"compositor\", \"deixà\", \"un\", \"immens\", \"llegat\", \"que\", \"inclou\", \"8\", \"simfonies\", \"(\", \"1822\", \"),\", \"diverses\", ..."
}
```

#### combined

- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 6.29 GB
- **Total amount of disk used:** 7.39 GB

An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
    "id": "18",
    "lang": "es",
    "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
    "words": "[\"Los\", \"cambios\", \"en\", \"la\", \"energía\", \"libre\", \"de\", \"Gibbs\", \"\\\\\", \"Delta\", \"G\", \"nos\", \"dan\", \"una\", \"cuantificación\", \"de..."
}
```

#### cs

- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 156.79 MB
- **Total amount of disk used:** 1.26 GB

An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
    "id": "3",
    "lang": "cs",
    "ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
    "words": "[\"Historie\", \"Symfonická\", \"forma\", \"se\", \"rozvinula\", \"se\", \"především\", \"v\", \"období\", \"klasicismu\", \"a\", \"romantismu\", \",\", \"..."
}
```
### Data Fields

The data fields are the same among all splits.

#### ar
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.

#### bg
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.

#### ca
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.

#### combined
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.

#### cs
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
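Since `words` and `ner` are parallel lists (one tag per token, with `O` marking non-entity tokens, as in the instances above), consecutive tokens sharing a tag can be grouped back into entity spans. A minimal sketch follows; the record shown is a hypothetical illustration in the card's schema, not an actual dataset example:

```python
from itertools import groupby

def entity_spans(words, ner):
    """Group consecutive tokens sharing a non-O tag into (tag, text) spans."""
    spans = []
    for tag, group in groupby(zip(words, ner), key=lambda pair: pair[1]):
        if tag != "O":
            spans.append((tag, " ".join(w for w, _ in group)))
    return spans

# Hypothetical record in the card's schema (id, lang, words, ner).
example = {
    "id": "0",
    "lang": "en",
    "words": ["Maria", "Curie", "worked", "in", "Paris", "."],
    "ner": ["PER", "PER", "O", "O", "LOC", "O"],
}
print(entity_spans(example["words"], example["ner"]))
# [('PER', 'Maria Curie'), ('LOC', 'Paris')]
```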
### Data Splits

| name     |    train |
|----------|---------:|
| ar       |   339109 |
| bg       |   559694 |
| ca       |   372665 |
| combined | 21070925 |
| cs       |   564462 |
## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@article{polyglotner,
  author    = {Al-Rfou, Rami and Kulkarni, Vivek and Perozzi, Bryan and Skiena, Steven},
  title     = {{Polyglot-NER}: Massive Multilingual Named Entity Recognition},
  journal   = {{Proceedings of the 2015 {SIAM} International Conference on Data Mining, Vancouver, British Columbia, Canada, April 30 - May 2, 2015}},
  month     = {April},
  year      = {2015},
  publisher = {SIAM},
}
```

### Contributions

Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.