kelechi committed
Commit ad7c2a0 · 1 Parent(s): efaccae

updated dataset card

Files changed (1)
  1. README.md +46 -12
README.md CHANGED
@@ -15,17 +15,41 @@ language:
 
 license: "Apache License 2.0"
 ---
-# Dataset Summary
+
+# Dataset Card for AfriBERTa's Corpus
+
+## Table of Contents
+- [Dataset Description](#dataset-description)
+  - [Dataset Summary](#dataset-summary)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+  - [Languages](#languages)
+  - [Loading Dataset](#loading-dataset)
+- [Dataset Structure](#dataset-structure)
+  - [Data Instances](#data-instances)
+  - [Data Fields](#data-fields)
+  - [Data Splits](#data-splits)
+- [Considerations for Using the Data](#considerations-for-using-the-data)
+  - [Discussion of Biases](#discussion-of-biases)
+- [Additional Information](#additional-information)
+  - [Citation Information](#citation-information)
+  - [Contributions](#contributions)
+
+## Dataset Description
+- **Homepage:** https://github.com/keleog/afriberta
+- **Models:** https://huggingface.co/castorini/afriberta_large
+- **Paper:** https://aclanthology.org/2021.mrl-1.11/
+- **Point of Contact:** kelechi.ogueji@uwaterloo.ca
+
+### Dataset Summary
 This is the corpus on which [AfriBERTa](https://huggingface.co/castorini/afriberta_large) was trained.
-The dataset contains 11 languages - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
 The dataset is mostly from the BBC news website, but some languages also have data from Common Crawl.
 
 
-# Supported Tasks and Leaderboards
+### Supported Tasks and Leaderboards
 The AfriBERTa corpus was mostly intended to pre-train language models.
 
-
-# Load Dataset
+### Languages
+The dataset covers 11 languages: Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
+
+### Loading Dataset
 An example to load the train split of the Somali corpus:
 ```
 from datasets import load_dataset
 dataset = load_dataset("castorini/afriberta", "somali", split="train")
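The snippet above assumes the Hugging Face `datasets` library but omits the import. A minimal, self-contained sketch of loading and inspecting one language, using the `somali` config from the card (printed values depend on the actual split contents):

```
from datasets import load_dataset

# Load the train split of the Somali corpus
somali = load_dataset("castorini/afriberta", "somali", split="train")

# Each example carries an "id" and a "text" field
print(somali)                                   # features and row count
print(somali[0]["id"], somali[0]["text"][:80])  # first example, truncated
```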
@@ -36,25 +60,35 @@ An example to load the test split of the Pidgin corpus:
 dataset = load_dataset("castorini/afriberta", "pidgin", split="test")
 ```
 
-# Data Fields
+## Dataset Structure
+
+### Data Instances
+Each data point is a line of text.
+An example from the `igbo` dataset:
+```
+{"id": "0", "text": "Ngwá ọrụ na-echebe ma na-ebuli gị na kọmputa."}
+```
+
+### Data Fields
 
 The data fields are:
 
 - id: ID of the example
 - text: content as a string
 
-# Data Splits
+### Data Splits
 Each language has a train and test split, with varying sizes.
 
-# Considerations for Using the Data
+## Considerations for Using the Data
 
-## Discussion of Biases
+### Discussion of Biases
 Since the majority of the data is obtained from the BBC's news website, models trained on this dataset are likely
 to be biased towards the news domain.
 
 Also, because some of the data comes from Common Crawl, care should be taken (especially with text generation models), as personal and sensitive information may be present.
 
-# Citation Information
+## Additional Information
+### Citation Information
 ```
 @inproceedings{ogueji-etal-2021-small,
 title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages",
@@ -71,5 +105,5 @@ Also, because some of the data comes from Common Crawl, care should be taken
 }
 ```
 
-# Contributions
-Thanks to [keleog](https://github.com/keleog)
+### Contributions
+Thanks to [Kelechi Ogueji](https://github.com/keleog) for adding this dataset.
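Two details of the card lend themselves to quick checks. Since each language has train and test splits of varying sizes, the sizes can be listed per config; this sketch assumes the `datasets` API used above, and lowercase config names beyond the card's `somali` and `pidgin` are an assumption following the same pattern:

```
from datasets import load_dataset

# Print train/test sizes for a few language configs
for lang in ["somali", "pidgin", "swahili"]:  # "swahili" is an assumed config name
    splits = load_dataset("castorini/afriberta", lang)
    print(lang, {name: ds.num_rows for name, ds in splits.items()})
```

And since the corpus was intended for pre-training language models, a typical first step is training a tokenizer on the raw text. A hedged sketch with the Hugging Face `tokenizers` library; the byte-level BPE choice and vocabulary size are illustrative, not AfriBERTa's actual pre-training setup:

```
from datasets import load_dataset
from tokenizers import ByteLevelBPETokenizer

corpus = load_dataset("castorini/afriberta", "swahili", split="train")

# Train a byte-level BPE tokenizer from an iterator over the corpus text
tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(
    (example["text"] for example in corpus),
    vocab_size=30_000,  # illustrative; not AfriBERTa's actual vocabulary size
    min_frequency=2,
)
tokenizer.save_model(".")  # writes vocab.json and merges.txt
```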
 