
# Dataset Card for MasakhaPOS

## Dataset Summary

MasakhaPOS is the largest publicly available, high-quality dataset for part-of-speech (POS) tagging in 20 typologically diverse African languages. The covered languages are listed in the Languages section below.

Train, validation, and test splits are available for all 20 languages.

For more details, see the paper: https://aclanthology.org/2023.acl-long.609/

## Supported Tasks and Leaderboards

- `part-of-speech`: the performance on this task is measured with accuracy (higher is better); a minimal accuracy sketch is given below.
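
As a rough illustration of the metric, the following sketch computes token-level accuracy from gold and predicted tag sequences; both sequences are made-up toy data, not taken from the dataset.

```python
# Minimal sketch of token-level POS-tagging accuracy.
# The gold and predicted tag sequences below are toy data for illustration.
gold_tags = [["NOUN", "VERB", "PUNCT"], ["PRON", "AUX", "VERB", "PUNCT"]]
predicted_tags = [["NOUN", "VERB", "PUNCT"], ["PRON", "VERB", "VERB", "PUNCT"]]

correct = sum(
    p == g
    for gold, pred in zip(gold_tags, predicted_tags)
    for g, p in zip(gold, pred)
)
total = sum(len(sent) for sent in gold_tags)
print(f"accuracy: {correct / total:.2%}")  # accuracy: 85.71%
```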

## Languages

There are 20 languages available:

- Bambara (bam)
- Ghomala (bbj)
- Ewe (ewe)
- Fon (fon)
- Hausa (hau)
- Igbo (ibo)
- Kinyarwanda (kin)
- Luganda (lug)
- Dholuo (luo)
- Mossi (mos)
- Chichewa (nya)
- Nigerian Pidgin (pcm)
- chiShona (sna)
- Kiswahili (swa)
- Setswana (tsn)
- Twi (twi)
- Wolof (wol)
- isiXhosa (xho)
- Yorùbá (yor)
- isiZulu (zul)

## Dataset Structure

### Data Instances

An example from the Yorùbá subset looks like this:

```python
from datasets import load_dataset

# Please specify the language code, e.g. 'yor' for Yorùbá
data = load_dataset('masakhane/masakhapos', 'yor')

# In the source files, sentences are separated by an empty line,
# with tab-separated tokens and tags. A loaded data point looks like:
{'id': '0',
 'upos': [0, 10, 10, 16, 0, 14, 0, 16, 0],
 'tokens': ['Ọ̀gbẹ́ni', 'Nuhu', 'Adam', 'kúrò', 'nípò', 'bí', 'ẹní', 'yọ', 'jìgá']
}
```

### Data Fields

- `id`: the id of the sample
- `tokens`: the tokens of the example text
- `upos`: the POS tag of each token

The POS tags correspond to this list:

"NOUN", "PUNCT", "ADP", "NUM", "SYM", "SCONJ", "ADJ", "PART", "DET", "CCONJ", "PROPN", "PRON", "X", "ADV", "INTJ", "VERB", "AUX",```
              
The definitions of the tags can be found on the [UD website](https://universaldependencies.org/u/pos/).
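
In the loaded dataset, the `upos` column stores these tags as integer class ids. Assuming the column is exposed as a `Sequence` of `ClassLabel` features (the usual setup for token-classification datasets in 🤗 Datasets), the ids can be mapped back to tag names like this:

```python
from datasets import load_dataset

data = load_dataset('masakhane/masakhapos', 'yor')

# Assumption: `upos` is a Sequence of ClassLabel features, so the inner
# feature carries the tag names and an int2str() helper.
upos_feature = data['train'].features['upos'].feature
print(upos_feature.names)

# Pair each token of the first training example with its tag name.
example = data['train'][0]
print(list(zip(example['tokens'], upos_feature.int2str(example['upos']))))
```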

### Data Splits

For all languages, there are three splits.

The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits.

The splits have the following sizes:

| Language        | train | validation | test  |
|-----------------|------:|-----------:|------:|
| Bambara         |  775  |        154 |  619  |
| Ghomala         |  750  |        149 |  599  |
| Ewe             |  728  |        145 |  582  |
| Fon             |  810  |        161 |  646  |
| Hausa           |  753  |        150 |  601  |
| Igbo            |  803  |        160 |  642  |
| Kinyarwanda     |  757  |        151 |  604  |
| Luganda         |  733  |        146 |  586  |
| Luo             |  758  |        151 |  606  |
| Mossi           |  757  |        151 |  604  |
| Chichewa        |  728  |        145 |  582  |
| Nigerian-Pidgin |  752  |        150 |  600  |
| chiShona        |  747  |        149 |  596  |
| Kiswahili       |  693  |        138 |  553  |
| Setswana        |  754  |        150 |  602  |
| Akan/Twi        |  785  |        157 |  628  |
| Wolof           |  782  |        156 |  625  |
| isiXhosa        |  752  |        150 |  601  |
| Yoruba          |  893  |        178 |  713  |
| isiZulu         |  753  |        150 |  601  |
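
To double-check these counts locally, a small sketch like the following (using `yor` as the example language code) prints the number of sentences per split:

```python
from datasets import load_dataset

# Print the number of sentences in each split for one language.
data = load_dataset('masakhane/masakhapos', 'yor')
for split in ('train', 'validation', 'test'):
    print(f"{split}: {len(data[split])} sentences")
```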

## Dataset Creation

### Curation Rationale

The dataset was created to provide new resources for 20 African languages that are under-served in natural language processing.

[More Information Needed]

### Source Data

The data comes from the news domain; details can be found in the paper: https://aclanthology.org/2023.acl-long.609/

#### Initial Data Collection and Normalization

The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.

#### Who are the source language producers?

The source text was produced by journalists and writers employed by the news outlets from which the articles were collected.

### Annotations

#### Annotation process

Details on the annotation process can be found in the paper: https://aclanthology.org/2023.acl-long.609/

#### Who are the annotators?

Annotators were recruited from [Masakhane](https://www.masakhane.io/).

### Personal and Sensitive Information

The data is sourced from newspapers and only contains mentions of public figures or individuals appearing in the news.

## Considerations for Using the Data

### Social Impact of Dataset
[More Information Needed]


### Discussion of Biases
[More Information Needed]


### Other Known Limitations

Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.

## Additional Information

### Dataset Curators


### Licensing Information

The licensing status of the data is CC 4.0 Non-Commercial.

### Citation Information

If you use this dataset, please cite the paper:

```
@inproceedings{dione-etal-2023-masakhapos,
    title = "{M}asakha{POS}: Part-of-Speech Tagging for Typologically Diverse {A}frican languages",
    author = "Dione, Cheikh M. Bamba and Adelani, David Ifeoluwa and Nabende, Peter and Alabi, Jesujoba and Sindane, Thapelo and Buzaaba, Happy and Muhammad, Shamsuddeen Hassan and Emezue, Chris Chinenye and Ogayo, Perez and Aremu, Anuoluwapo and Gitau, Catherine and Mbaye, Derguene and Mukiibi, Jonathan and Sibanda, Blessing and Dossou, Bonaventure F. P. and Bukula, Andiswa and Mabuya, Rooweither and Tapo, Allahsera Auguste and Munkoh-Buabeng, Edwin and Memdjokam Koagne, Victoire and Ouoba Kabore, Fatoumata and Taylor, Amelia and Kalipe, Godson and Macucwa, Tebogo and Marivate, Vukosi and Gwadabe, Tajuddeen and Elvis, Mboning Tchiaze and Onyenwe, Ikechukwu and Atindogbe, Gratien and Adelani, Tolulope and Akinade, Idris and Samuel, Olanrewaju and Nahimana, Marien and Musabeyezu, Th{\'e}og{\`e}ne and Niyomutabazi, Emile and Chimhenga, Ester and Gotosa, Kudzai and Mizha, Patrick and Agbolo, Apelete and Traore, Seydou and Uchechukwu, Chinedu and Yusuf, Aliyu and Abdullahi, Muhammad and Klakow, Dietrich",
    editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.609",
    doi = "10.18653/v1/2023.acl-long.609",
    pages = "10883--10900",
    abstract = "In this paper, we present AfricaPOS, the largest part-of-speech (POS) dataset for 20 typologically diverse African languages. We discuss the challenges in annotating POS for these languages using the universal dependencies (UD) guidelines. We conducted extensive POS baseline experiments using both conditional random field and several multilingual pre-trained language models. We applied various cross-lingual transfer models trained with data available in the UD. Evaluating on the AfricaPOS dataset, we show that choosing the best transfer language(s) in both single-source and multi-source setups greatly improves the POS tagging performance of the target languages, in particular when combined with parameter-fine-tuning methods. Crucially, transferring knowledge from a language that matches the language family and morphosyntactic properties seems to be more effective for POS tagging in unseen languages.",
}
```

### Contributions

Thanks to @dadelani for adding this dataset.
