Update README.md
---
license: apache-2.0
language:
- am
- arq
- ary
- ha
- ig
- rw
- pcm
- yo
- tw
- pt
- sw
- ts
datasets:
- shmuhammad/AfriSenti-twitter-sentiment
metrics:
- accuracy
pipeline_tag: text-classification
---

# afrisenti-twitter-sentiment-afroxlmr-large

## Model description
**afrisenti-twitter-sentiment-afroxlmr-large** is the first multilingual Twitter **sentiment classification** model for twelve (12) African languages (Amharic, Algerian Arabic, Darija, Hausa, Igbo, Kinyarwanda, Nigerian Pidgin, Mozambican Portuguese, Swahili, Tsonga, Twi, and Yorùbá), based on a fine-tuned Davlan/afro-xlmr-large model.
It achieves **state-of-the-art performance** on the Twitter sentiment classification task when trained on the [AfriSenti corpus](https://github.com/afrisenti-semeval/afrisent-semeval-2023).
The model has been trained to classify tweets into three sentiment classes: negative, neutral, and positive.
Specifically, this model is a *Davlan/afro-xlmr-large* model that was fine-tuned on an aggregation of the 12 African-language datasets obtained from the [AfriSenti](https://github.com/afrisenti-semeval/afrisent-semeval-2023) dataset.

## Intended uses & limitations
#### How to use
You can use this model with the Transformers library for sentiment classification.
```python
import numpy as np
from scipy.special import softmax
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "Davlan/afrisenti-twitter-sentiment-afroxlmr-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

# PyTorch
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

text = "I like you"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)

# Convert the logits for this single input into probabilities
scores = output[0][0].detach().numpy()
scores = softmax(scores)

id2label = {0: "positive", 1: "neutral", 2: "negative"}

# Print the classes ranked from most to least likely
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
    l = id2label[ranking[i]]
    s = scores[ranking[i]]
    print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
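If you only need the top label, the higher-level `pipeline` API gives an equivalent result in a few lines. This is a minimal sketch; it assumes the model's configuration maps the class ids to the same labels listed above.

```python
from transformers import pipeline

# The text-classification pipeline handles tokenization, the forward pass,
# and softmax internally, and returns the highest-scoring label.
classifier = pipeline("text-classification", model="Davlan/afrisenti-twitter-sentiment-afroxlmr-large")
print(classifier("I like you"))  # e.g. [{'label': 'positive', 'score': ...}]
```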
#### Limitations and bias
This model is limited by its training data and domain, i.e., Twitter. It may not generalize well to all use cases or to text from other domains.

## Training procedure
This model was trained on a single NVIDIA A10 GPU with the recommended hyperparameters from the [original AfriSenti paper](https://arxiv.org/abs/2302.08956).
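For readers who want to reproduce the fine-tuning, here is a minimal, hedged sketch using the Hugging Face `Trainer`. The base model and dataset ids come from this card; everything else is an assumption: the `hau` configuration name, the `train`/`validation` split names, the `tweet`/`label` column names, and the hyperparameter values (which are placeholders, not the settings from the paper).

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

BASE = "Davlan/afro-xlmr-large"

# Assumption: the dataset exposes one configuration per language (here Hausa)
# with "train" and "validation" splits.
dataset = load_dataset("shmuhammad/AfriSenti-twitter-sentiment", "hau")

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=3)

def tokenize(batch):
    # Assumption: the text column is named "tweet"; the gold column is "label".
    return tokenizer(batch["tweet"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="afrisenti-afroxlmr-large",
    learning_rate=2e-5,              # placeholder, not the paper's value
    per_device_train_batch_size=16,  # placeholder
    num_train_epochs=5,              # placeholder
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```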
## Eval results on test set (F-score), averaged over 5 runs
language|F1-score
-|-
hau|81.2
ibo|80.8
pcm|74.5
yor|80.4
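As a hedged illustration of how a per-language score of this kind can be computed from model outputs, the snippet below uses scikit-learn. Weighted F1 is an assumption here; the exact averaging used in the paper may differ, and the gold labels and predictions are hypothetical.

```python
from sklearn.metrics import f1_score

# Hypothetical gold labels and model predictions for one language's test set,
# using the same label ids as above (0: positive, 1: neutral, 2: negative).
gold = [0, 1, 2, 2, 1, 0]
pred = [0, 1, 2, 1, 1, 0]

# Weighted F1 over the three classes (averaging scheme is an assumption).
print(f1_score(gold, pred, average="weighted"))
```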

### BibTeX entry and citation info
```bibtex
@article{Muhammad2023AfriSentiAT,
  title   = {AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages},
  author  = {Shamsuddeen Hassan Muhammad and Idris Abdulmumin and Abinew Ali Ayele and Nedjma Djouhra Ousidhoum and David Ifeoluwa Adelani and Seid Muhie Yimam and Ibrahim Said Ahmad and Meriem Beloucif and Saif M. Mohammad and Sebastian Ruder and Oumaima Hourrane and Pavel Brazdil and Felermino D{\'a}rio M{\'a}rio Ant{\'o}nio Ali and Davis C. Davis and Salomey Osei and Bello Shehu Bello and Falalu Ibrahim and Tajuddeen Rabiu Gwadabe and Samuel Rutunda and Tadesse Destaw Belay and Wendimu Baye Messelle and Hailu Beshada Balcha and Sisay Adugna Chala and Hagos Tesfahun Gebremichael and Bernard Opoku and Steven Arthur},
  journal = {ArXiv},
  year    = {2023},
  volume  = {abs/2302.08956},
  url     = {https://api.semanticscholar.org/CorpusID:257019629}
}
```