
wikiann

The wikiann dataset contains NER tags with the labels O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6). The subsets covering languages of Indonesia are used.
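As a quick reference, the tag list above can be expressed as a plain Python mapping between integer ids and labels (a sketch of the scheme listed in this card, not code from the loader):

```python
# Integer ids for the NER tags listed above (IOB2 scheme).
NER_TAGS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
id2label = dict(enumerate(NER_TAGS))                   # {0: "O", 1: "B-PER", ...}
label2id = {tag: i for i, tag in enumerate(NER_TAGS)}  # {"O": 0, "B-PER": 1, ...}
```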

WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles annotated with LOC (location), PER (person), and ORG (organisation) tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of Rahimi et al. (2019), and uses the following subsets from the original WikiANN corpus:

| Language    | WikiAnn | ISO 639-3 |
|-------------|---------|-----------|
| Indonesian  | id      | ind       |
| Javanese    | jv      | jav       |
| Minangkabau | min     | min       |
| Sundanese   | su      | sun       |
| Acehnese    | ace     | ace       |
| Malay       | ms      | mly       |
| Banyumasan  | map-bms | map-bms   |
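To illustrate the IOB2 format mentioned above: the first token of an entity gets a B- tag and continuation tokens get an I- tag. The sentence below is a made-up Indonesian example for illustration only, not an actual record from the dataset:

```python
# Hypothetical IOB2-annotated sentence; tokens and tags are illustrative only.
tokens = ["Joko",  "Widodo", "lahir", "di", "Surakarta", "."]
tags   = ["B-PER", "I-PER",  "O",     "O",  "B-LOC",     "O"]
```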

Dataset Usage

Run `pip install nusacrowd` before loading the dataset through Hugging Face's `load_dataset`, as sketched below.
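A minimal loading sketch follows; the repository id and config name are illustrative assumptions, so check this dataset's Hub page for the exact identifiers:

```python
# pip install nusacrowd datasets
from datasets import load_dataset

# Repo id and config name below are assumptions for illustration; adjust them
# to the identifiers shown on the dataset's Hub page. trust_remote_code is
# needed because this repo ships a loading script.
dataset = load_dataset("wikiann", "id", trust_remote_code=True)
print(dataset["train"][0])  # e.g. {'tokens': [...], 'ner_tags': [...]}
```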

Citation

@inproceedings{pan-etal-2017-cross,
    title = "Cross-lingual Name Tagging and Linking for 282 Languages",
    author = "Pan, Xiaoman  and
      Zhang, Boliang  and
      May, Jonathan  and
      Nothman, Joel  and
      Knight, Kevin  and
      Ji, Heng",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P17-1178",
    doi = "10.18653/v1/P17-1178",
    pages = "1946--1958",
    abstract = "The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework
    for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able
    to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to
    an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of
    new KB mining methods: generating {``}silver-standard{''} annotations by
    transferring annotations from English to other languages through cross-lingual links and KB properties,
    refining annotations through self-training and topic selection,
    deriving language-specific morphology features from anchor links, and mining word translation pairs from
    cross-lingual links. Both name tagging and linking results for 282 languages are promising
    on Wikipedia data and non-Wikipedia data.",
}
@inproceedings{rahimi-etal-2019-massively,
    title = "Massively Multilingual Transfer for {NER}",
    author = "Rahimi, Afshin  and
      Li, Yuan  and
      Cohn, Trevor",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P19-1015",
    pages = "151--164",
}

License

Apache-2.0 license

Homepage

https://github.com/afshinrahimi/mmner

NusaCatalogue

For easy indexing and metadata: https://indonlp.github.io/nusa-catalogue
