---
license: apache-2.0
datasets:
- castorini/wura
language:
  - afr
  - amh
  - arz
  - eng
  - fra
  - hau
  - ibo
  - kin
  - mlg
  - nya
  - orm
  - por
  - sna
  - som
  - sot
  - swa
  - tir
  - xho
  - yor
  - zul
---

# AfriTeVa V2 Base

AfriTeVa V2 Base is a 428M-parameter multilingual T5 [Version 1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) model pretrained on [Wura](https://huggingface.co/datasets/castorini/wura) with a vocabulary size of 150,000. It has been shown to improve over existing baselines on [Text Classification](https://huggingface.co/datasets/masakhane/masakhanews), [Machine Translation](https://huggingface.co/datasets/masakhane/mafand), [Summarization](https://huggingface.co/datasets/csebuetnlp/xlsum), and [Cross-lingual Question Answering](https://huggingface.co/datasets/masakhane/afriqa).
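
The snippet below is a minimal loading sketch with 🤗 Transformers. It assumes the checkpoint is published under the ID `castorini/afriteva_v2_base` (see the checkpoints link in the notes below for the exact name), and the prompt is only illustrative: this is a pretrained checkpoint intended for finetuning, not an instruction-tuned model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed model ID; check the checkpoints listed in the notes below for the exact name.
model_id = "castorini/afriteva_v2_base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Standard T5-style text-to-text call; since the checkpoint is pretrained only,
# treat this as a smoke test of the pipeline rather than a task demonstration.
inputs = tokenizer("Ẹ káàárọ̀", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```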

Paper: [Better Quality Pretraining Data & T5 Models for African Languages](https://openreview.net/forum?id=ybc9V6Cbq2)

Authors: *Akintunde Oladipo, Mofetoluwa Adeyemi, Orevaoghene Ahia, Abraham Toluwalase Owodunni, Odunayo Ogundepo, David Ifeoluwa Adelani, Jimmy Lin*

**NOTES**:
* Dropout was turned off during pretraining and should be re-enabled for finetuning (see the sketch after these notes).
* Other checkpoints are available [here](https://huggingface.co/models?search=afriteva_v2_base).
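
As a concrete example of the dropout note above, the following sketch re-enables dropout through the config before finetuning. The model ID and the 0.1 rate are assumptions (0.1 is the usual T5 default), not values taken from the paper.

```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM

# Dropout was disabled during pretraining; override dropout_rate before finetuning.
# The model ID and the 0.1 rate are assumptions, not values from the paper.
config = AutoConfig.from_pretrained("castorini/afriteva_v2_base", dropout_rate=0.1)
model = AutoModelForSeq2SeqLM.from_pretrained("castorini/afriteva_v2_base", config=config)
```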

## Abstract

In this study, we highlight the importance of enhancing the quality of pretraining data in multilingual language models. Existing web crawls have demonstrated quality issues, particularly in the context of low-resource languages. Consequently, we introduce a new multilingual pretraining corpus for African languages, designed by carefully auditing existing pretraining corpora to understand and rectify prevalent quality issues. To compile this dataset, we undertake a rigorous examination of current data sources for thirteen languages within one of the most extensive multilingual web crawls, mC4, and extract cleaner data through meticulous auditing and improved web crawling strategies. Subsequently, we pretrain a new T5-based model on this dataset and evaluate its performance on multiple downstream tasks. Our model demonstrates better downstream effectiveness over existing pretrained models across four NLP tasks, underscoring the critical role data quality plays in pretraining language models in low-resource scenarios. Specifically, on cross-lingual QA evaluation, our new model is more than twice as effective as multilingual T5. All code, data and models are publicly available at [castorini/AfriTeVa-keji](https://github.com/castorini/AfriTeVa-keji).

## Citation Information

```bibtex
@inproceedings{oladipo-etal-2023-better,
    title = "Better Quality Pre-training Data and T5 Models for {A}frican Languages",
    author = "Oladipo, Akintunde  and
      Adeyemi, Mofetoluwa  and
      Ahia, Orevaoghene  and
      Owodunni, Abraham  and
      Ogundepo, Odunayo  and
      Adelani, David  and
      Lin, Jimmy",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.11",
    pages = "158--168",
    abstract = "In this study, we highlight the importance of enhancing the quality of pretraining data in multilingual language models. Existing web crawls have demonstrated quality issues, particularly in the context of low-resource languages. Consequently, we introduce a new multilingual pretraining corpus for 16 African languages, designed by carefully auditing existing pretraining corpora to understand and rectify prevalent quality issues. To compile this dataset, we undertake a rigorous examination of current data sources for thirteen languages within one of the most extensive multilingual web crawls, mC4, and extract cleaner data through meticulous auditing and improved web crawling strategies. Subsequently, we pretrain a new T5-based model on this dataset and evaluate its performance on multiple downstream tasks. Our model demonstrates better downstream effectiveness over existing pretrained models across four NLP tasks, underscoring the critical role data quality plays in pretraining language models in low-resource scenarios. Specifically, on cross-lingual QA evaluation, our new model is more than twice as effective as multilingual T5. All code, data and models are publicly available at https://github.com/castorini/AfriTeVa-keji.",
}

```