---
license: cc-by-nc-sa-4.0
tags:
- sentiment analysis
- Twitter
- tweets
- stopwords
multilinguality:
- monolingual
- multilingual
language:
- hau
- ibo
- yor
pretty_name: NaijaStopwords
---

# Naija-Lexicons

Naija-Lexicons is part of the [Naija-Senti](https://huggingface.co/datasets/HausaNLP/NaijaSenti-Twitter) project, which covers the four most widely spoken languages in Nigeria: Hausa, Igbo, Nigerian-Pidgin, and Yorùbá. This dataset provides stopword lists collected for Hausa, Igbo, and Yorùbá.

--------------------------------------------------------------------------------

## Dataset Description


- **Homepage:** https://github.com/hausanlp/NaijaSenti/tree/main/data/stopwords
- **Repository:** [GitHub](https://github.com/hausanlp/NaijaSenti/tree/main/data/stopwords)
- **Paper:**  [NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis](https://aclanthology.org/2022.lrec-1.63/)
- **Leaderboard:** N/A
- **Point of Contact:** [Shamsuddeen Hassan Muhammad](mailto:shamsuddeen2004@gmail.com)


### Languages

Three widely spoken indigenous Nigerian languages:

* Hausa (hau) 
* Igbo (ibo)
* Yoruba (yor) 


## Dataset Structure

### Data Instances

Each instance is a lexicon entry in one of the three languages, consisting of a word and its label:


```json
{
  "word": "string",
  "label": "string"
}
```


### How to use it


```python
from datasets import load_dataset

# Load a specific language (e.g., Hausa). This downloads both the
# manually created and the translated lexicons.
ds = load_dataset("HausaNLP/Naija-Lexicons", "hau")

# You may also specify the split you want to download.
ds = load_dataset("HausaNLP/Naija-Lexicons", "hau", split="manual")
```
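A typical use of the lexicon is to filter stopwords out of tweets before sentiment analysis. The sketch below illustrates this with a small hard-coded Hausa sample; in practice the set would be built from the loaded dataset (e.g., `{row["word"] for row in ds}`), and the sample words and helper function here are illustrative assumptions, not part of the dataset's API.

```python
# Illustrative sample of Hausa stopwords; in practice, build this set
# from the loaded dataset rather than hard-coding it.
hausa_stopwords = {"da", "a", "ya", "ta", "na"}

def remove_stopwords(text: str, stopwords: set) -> list:
    """Lowercase the text, tokenize on whitespace, and drop stopwords."""
    return [tok for tok in text.lower().split() if tok not in stopwords]

tokens = remove_stopwords("Ya tafi kasuwa da safe", hausa_stopwords)
print(tokens)  # ['tafi', 'kasuwa', 'safe']
```

Filtering is done after lowercasing so that capitalized stopwords at the start of a tweet are also removed.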

## Additional Information

### Dataset Curators

* Shamsuddeen Hassan Muhammad
* Idris Abdulmumin
* Ibrahim Said Ahmad
* Bello Shehu Bello


### Licensing Information

The Naija-Lexicons dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).


### Citation Information

```
@inproceedings{muhammad-etal-2022-naijasenti,
    title = "{N}aija{S}enti: A {N}igerian {T}witter Sentiment Corpus for Multilingual Sentiment Analysis",
    author = "Muhammad, Shamsuddeen Hassan  and
      Adelani, David Ifeoluwa  and
      Ruder, Sebastian  and
      Ahmad, Ibrahim Sa{'}id  and
      Abdulmumin, Idris  and
      Bello, Bello Shehu  and
      Choudhury, Monojit  and
      Emezue, Chris Chinenye  and
      Abdullahi, Saheed Salahudeen  and
      Aremu, Anuoluwapo  and
      Jorge, Al{\'\i}pio  and
      Brazdil, Pavel",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.63",
    pages = "590--602",
}
```

### Contributions

> This work was carried out with support from Lacuna Fund, an initiative co-founded by The Rockefeller Foundation, Google.org, and Canada’s International Development Research Centre. The views expressed herein do not necessarily represent those of Lacuna Fund, its Steering Committee, its funders, or Meridian Institute.