---
language:
  - om
  - am
  - rw
  - rn
  - ha
  - ig
  - pcm
  - so
  - sw
  - ti
  - yo
  - multilingual
license: apache-2.0
---

## Dataset Summary

This is the corpus on which [AfriBERTa](https://huggingface.co/castorini/afriberta_large) was trained. The dataset contains 11 languages: Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya, and Yorùbá. The dataset is mostly from the BBC news website, but some languages also have data from Common Crawl.

## Supported Tasks and Leaderboards

The AfriBERTa corpus was primarily intended for pre-training language models.
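
As a rough sketch of that workflow, the snippet below tokenizes one language configuration and builds a masked-language-modeling batch collator with the Hugging Face `datasets` and `transformers` libraries. The tokenizer checkpoint, `max_length`, and masking probability are illustrative assumptions, not values prescribed by this card.

```python
# A minimal pre-training data-preparation sketch. The tokenizer checkpoint,
# max_length, and mlm_probability are assumptions for illustration only.
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

dataset = load_dataset("castorini/afriberta", "somali", split="train")
tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_large")

def tokenize(batch):
    # Truncate long documents; 512 tokens is an assumed cap.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Dynamically masks 15% of tokens per batch, the standard MLM objective.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```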

## Load Dataset

An example to load the train split of the Somali corpus:

```python
from datasets import load_dataset

dataset = load_dataset("castorini/afriberta", "somali", split="train")
```

An example to load the test split of the Pidgin corpus:

```python
from datasets import load_dataset

dataset = load_dataset("castorini/afriberta", "pidgin", split="test")
```

## Data Fields

The data fields are:

- `id`: id of the example
- `text`: content as a string
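
As an illustration, the first record of a configuration can be inspected as follows (a minimal sketch reusing the Somali configuration shown above):

```python
from datasets import load_dataset

dataset = load_dataset("castorini/afriberta", "somali", split="train")
example = dataset[0]
print(example["id"])    # id of the example
print(example["text"])  # content as a string
```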

## Data Splits

Each language has a train and test split, with varying sizes.
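
The snippet below, a minimal sketch assuming only the `datasets` library, lists the available language configurations and prints the size of each split:

```python
from datasets import get_dataset_config_names, load_dataset

# Discover the available language configurations on the Hub.
print(get_dataset_config_names("castorini/afriberta"))

# Loading without a split returns a DatasetDict containing both splits.
somali = load_dataset("castorini/afriberta", "somali")
print({split: len(ds) for split, ds in somali.items()})
```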

## Considerations for Using the Data

### Discussion of Biases

Since the majority of the data is obtained from the BBC news website, models trained on this dataset are likely to be biased towards the news domain.

Also, since some of the data comes from Common Crawl, care should be taken (especially with text generation models), as personal and sensitive information might be present.

## Citation Information

```bibtex
@inproceedings{ogueji-etal-2021-small,
    title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages",
    author = "Ogueji, Kelechi  and
      Zhu, Yuxin  and
      Lin, Jimmy",
    booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.mrl-1.11",
    pages = "116--126",
}
```

## Contributions

Thanks to keleog for adding this dataset.