Dataset: polyglot_ner 🏷

How to load this dataset directly with the 🤗/datasets library:

from datasets import load_dataset
dataset = load_dataset("polyglot_ner")


Polyglot-NER: a training dataset automatically generated from Wikipedia and Freebase for the task of named entity recognition. The dataset contains the basic Wikipedia-based training data (with coreference resolution) for 40 languages. The details of the generation procedure are outlined in Section 3 of the paper cited below. Each config contains the data for a different language; for example, "es" includes only Spanish examples.
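Since each example pairs a list of tokens with a parallel list of NER tags, token-tag pairs can be recovered with a simple zip. Below is a minimal sketch using a hypothetical example dict; the field names ("id", "lang", "words", "ner") and the sample values are assumptions for illustration, not taken from the dataset itself.

```python
# A sketch of the per-example schema (hypothetical values), assuming
# parallel "words" and "ner" lists as produced by load_dataset.
example = {
    "id": "0",
    "lang": "es",
    "words": ["Juan", "vive", "en", "Madrid", "."],
    "ner": ["PER", "O", "O", "LOC", "O"],
}

# Align each token with its named-entity tag.
pairs = list(zip(example["words"], example["ner"]))
print(pairs)
```

With real data, the same pattern applies to any example from `dataset["train"]` after loading a language config such as `load_dataset("polyglot_ner", "es")`.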


@article{polyglotner,
         author    = {Al-Rfou, Rami and Kulkarni, Vivek and Perozzi, Bryan and Skiena, Steven},
         title     = {{Polyglot-NER}: Massive Multilingual Named Entity Recognition},
         journal   = {{Proceedings of the 2015 {SIAM} International Conference on Data Mining, Vancouver, British Columbia, Canada, April 30 - May 2, 2015}},
         month     = {April},
         year      = {2015},
         publisher = {SIAM},
}

Models trained or fine-tuned on polyglot_ner

None yet. Start fine-tuning now =)