---
language: 
- pcm
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
---

# Model description
**roberta-base-pcm** is a Named Entity Recognition model obtained by fine-tuning the RoBERTa base model on Nigerian Pidgin text. It has been trained to recognize four types of entities:
- Dates & times (DATE)
- Locations (LOC)
- Organizations (ORG)
- Persons (PER)

# Intended Use
- Intended for research on Named Entity Recognition for African languages.
- Not intended for production use.

# Training Data
This model was fine-tuned on the Nigerian Pidgin **(pcm)** corpus of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. The training data was additionally thresholded so that each sentence contains at most 10 entity groups.
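The exact preprocessing script is not part of this card; the sketch below shows one way such a threshold could be applied with the `datasets` library. The names `MAX_ENTITY_GROUPS` and `count_entity_groups` are illustrative, not taken from the original code.

```python
from datasets import load_dataset

# Load the Nigerian Pidgin configuration of MasakhaNER
dataset = load_dataset("masakhaner", "pcm")
tag_names = dataset["train"].features["ner_tags"].feature.names  # "O", "B-PER", "I-PER", ...

MAX_ENTITY_GROUPS = 10  # hypothetical name for the threshold described above

def count_entity_groups(example):
    # In IOB2 tagging, every "B-" tag starts one entity group
    return sum(tag_names[t].startswith("B-") for t in example["ner_tags"])

filtered_train = dataset["train"].filter(
    lambda ex: count_entity_groups(ex) <= MAX_ENTITY_GROUPS
)
print(len(dataset["train"]), "sentences ->", len(filtered_train), "after thresholding")
```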

# Training procedure
This model was trained on a single NVIDIA P5000 GPU from [Paperspace](https://www.paperspace.com) with the hyperparameters below; a rough fine-tuning sketch follows the list.
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
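
The original training script is not included in this card. The following is a minimal fine-tuning sketch with the hyperparameters above, assuming the standard Hugging Face token-classification recipe (labels aligned to the first sub-token of each word) and omitting the entity-group thresholding described under Training Data:

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    TrainingArguments,
    Trainer,
)

dataset = load_dataset("masakhaner", "pcm")
label_names = dataset["train"].features["ner_tags"].feature.names

# RoBERTa needs add_prefix_space=True when tokenizing pre-split words
tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(
    "roberta-base", num_labels=len(label_names)
)

def tokenize_and_align(batch):
    # Tokenize pre-split words and copy each word's label to its first sub-token
    tokenized = tokenizer(
        batch["tokens"],
        is_split_into_words=True,
        truncation=True,
        max_length=164,
    )
    labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        prev, row = None, []
        for wid in word_ids:
            if wid is None or wid == prev:
                row.append(-100)  # ignored by the loss
            else:
                row.append(tags[wid])
            prev = wid
        labels.append(row)
    tokenized["labels"] = labels
    return tokenized

tokenized = dataset.map(
    tokenize_and_align, batched=True, remove_columns=dataset["train"].column_names
)

training_args = TrainingArguments(
    output_dir="roberta-base-pcm",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    num_train_epochs=30,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```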

# Evaluation Data
We evaluated this model on the test split of the Nigerian Pidgin **(pcm)** corpus of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding applied.

# Metrics
- Precision
- Recall
- F1-score
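
These are entity-level scores. They are commonly computed with `seqeval`; the minimal sketch below assumes that convention and is not the exact evaluation script used here.

```python
from seqeval.metrics import precision_score, recall_score, f1_score

# Toy gold and predicted IOB2 tag sequences (one sentence each)
gold = [["B-PER", "I-PER", "O", "B-LOC"]]
pred = [["B-PER", "I-PER", "O", "O"]]

print("precision:", precision_score(gold, pred))
print("recall:   ", recall_score(gold, pred))
print("f1:       ", f1_score(gold, pred))
```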

# Limitations
- The size of the pre-trained language model restricts its use to research.
- The bias and fairness of this model have not been analysed, which may make it unsafe to deploy in a production system.
- The training data is a thresholded version of the original dataset, with fewer entity groups per sentence, which can negatively impact performance.

# Caveats and Recommendations
- The dataset corpus is centered on **news** topics. Future training could use a more diverse corpus.

# Results
Model Name | Precision | Recall | F1-score
-|-|-|-
**roberta-base-pcm** | 88.55 | 82.45 | 85.39

# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-pcm")

# Build a token-classification (NER) pipeline and run it on a Nigerian Pidgin sentence
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."

ner_results = nlp(example)
print(ner_results)
```
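
By default the pipeline returns one prediction per sub-token; passing `aggregation_strategy="simple"` to `pipeline(...)` groups sub-tokens into whole entity spans.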