---
language: 
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual
tags:
- T5

---
# afriteva_base

## Model description

AfriTeVa base is a sequence-to-sequence model pretrained on 10 African languages.

## Languages

Afaan Oromoo (orm), Amharic (amh), Gahuza (gah), Hausa (hau), Igbo (igb), Nigerian Pidgin (pcm), Somali (som), Swahili (swa), Tigrinya (tig), Yoruba (yor)

### More information on the model and dataset

### The model

- 229M-parameter encoder-decoder architecture (T5-like)
- 12 layers, 12 attention heads, and a 512-token sequence length

### The dataset

- Multilingual: the 10 African languages listed above
- 143 million tokens (1 GB of text data)
- Tokenizer vocabulary size: 70,000 tokens
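As a rough sanity check, the 229M figure is in the same ballpark as a T5-base-like configuration with this 70,000-token vocabulary. The sketch below estimates the weight-matrix parameter count; the hidden size (768) and feed-forward size (3072) are assumptions based on a typical T5-base setup, not values stated in this card, and layer norms and relative position biases are ignored.

```python
# Back-of-envelope parameter estimate for a T5-base-like encoder-decoder.
# ASSUMPTIONS (not stated in this card): d_model=768, d_ff=3072, tied
# input/output embeddings; layer norms and relative position biases are
# omitted, so this is only an order-of-magnitude check.

vocab_size = 70_000   # tokenizer vocabulary size, from the card
n_layers = 12         # encoder layers (decoder assumed to match)
d_model = 768         # assumed hidden size
d_ff = 3072           # assumed feed-forward inner size

embeddings = vocab_size * d_model      # tied input/output embedding matrix
attn = 4 * d_model * d_model           # Q, K, V, and output projections
ffn = 2 * d_model * d_ff               # up- and down-projections
encoder = n_layers * (attn + ffn)      # self-attention + FFN per layer
decoder = n_layers * (2 * attn + ffn)  # adds a cross-attention block

total = embeddings + encoder + decoder
print(f"~{total / 1e6:.0f}M parameters")
```

This yields roughly 252M, close to the reported 229M; the gap likely reflects the assumed hidden and feed-forward sizes differing from the actual configuration.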

## Training Procedure

For details on the training procedure, please refer to the AfriTeVa [paper](#) or the [repository](https://github.com/castorini/afriteva).

## BibTeX entry and citation info

coming soon ...