---
datasets:
- masakhane/masakhaner2
metrics:
- accuracy
- f1
---
Paper: `FonMTL: Toward Building a Multi-Task Learning Model for Fon Language`, accepted at WiNLP, co-located with EMNLP 2023.

- Official GitHub repository: https://github.com/bonaventuredossou/multitask_fon

- Multi-task learning model: for the shared layers (encoders), we used the following language model heads (a minimal loading sketch follows this list):

  - [AfroLM-Large](https://huggingface.co/bonadossou/afrolm_active_learning)
    - [AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages](https://aclanthology.org/2022.sustainlp-1.11/) (Dossou et al., EMNLP 2022)

  - [XLMR-Large](https://huggingface.co/xlm-roberta-large)
    - [Unsupervised Cross-lingual Representation Learning at Scale](https://aclanthology.org/2020.acl-main.747) (Conneau et al., ACL 2020)

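A minimal loading sketch, assuming both checkpoints can be loaded with the `transformers` AutoClasses (the actual model construction lives in `code/run_train.py`):

```python
# Load the two shared encoders from the Hugging Face Hub.
from transformers import AutoModel, AutoTokenizer

afrolm_encoder = AutoModel.from_pretrained("bonadossou/afrolm_active_learning")
afrolm_tokenizer = AutoTokenizer.from_pretrained("bonadossou/afrolm_active_learning")

xlmr_encoder = AutoModel.from_pretrained("xlm-roberta-large")
xlmr_tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
```
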
- Evaluation:

  - The primary goal is to explore whether multi-task learning improves performance on downstream tasks for Fon. We try two settings: (a) training only on Fon and evaluating on Fon, and (b) training on all languages and evaluating on Fon. We evaluate the multi-task learning model on the NER and POS tasks and compare it with baselines (models finetuned and evaluated on a single task). The result tables below report two loss-combination strategies, `MTL Sum` and `MTL Weighted`; a sketch of what they can look like follows this list.

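A hypothetical sketch of the two loss-combination strategies, inferred only from the labels in the result tables (the actual combination is implemented in `code/run_train.py`; the weights below are illustrative placeholders):

```python
import torch

def mtl_sum(ner_loss: torch.Tensor, pos_loss: torch.Tensor) -> torch.Tensor:
    """`MTL Sum`: back-propagate the plain sum of the two task losses."""
    return ner_loss + pos_loss

def mtl_weighted(ner_loss: torch.Tensor, pos_loss: torch.Tensor,
                 ner_weight: float = 0.5, pos_weight: float = 0.5) -> torch.Tensor:
    """`MTL Weighted`: back-propagate a weighted combination instead.

    The weights here are illustrative, not the values used in the paper.
    """
    return ner_weight * ner_loss + pos_weight * pos_loss
```
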
# How to get started

- Run the training: `sbatch run.sh`

This command will:

- Set up the environment
- Install the required libraries: `pip install -r requirements.txt -q`
- Move to the code folder: `cd code`
- Run training and evaluation: `python run_train.py`

# NER Results

| Model | Task | Pretraining/Finetuning Dataset | Pretraining/Finetuning Language(s) | Evaluation Dataset | Metric | Value |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| `AfroLM-Large` | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 80.48 |
| `AfriBERTa-Large` | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 79.90 |
| `XLMR-Base` | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 81.90 |
| `XLMR-Large` | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 81.60 |
| `AfroXLMR-Base` | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 82.30 |
| `AfroXLMR-Large` | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 82.70 |
| `MTL Sum (ours)` | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | All | FON NER | F1-Score | 79.87 |
| `MTL Weighted (ours)` | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | All | FON NER | F1-Score | 81.92 |
| `MTL Weighted (ours)` | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | Fon Data | FON NER | F1-Score | 64.43 |

# POS Results

| Model | Task | Pretraining/Finetuning Dataset | Pretraining/Finetuning Language(s) | Evaluation Dataset | Metric | Value |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| `AfroLM-Large` | Single Task | MasakhaPOS | All | FON POS | Accuracy | 82.40 |
| `AfriBERTa-Large` | Single Task | MasakhaPOS | All | FON POS | Accuracy | 88.40 |
| `XLMR-Base` | Single Task | MasakhaPOS | All | FON POS | Accuracy | 90.10 |
| `XLMR-Large` | Single Task | MasakhaPOS | All | FON POS | Accuracy | 90.20 |
| `AfroXLMR-Base` | Single Task | MasakhaPOS | All | FON POS | Accuracy | 90.10 |
| `AfroXLMR-Large` | Single Task | MasakhaPOS | All | FON POS | Accuracy | 90.40 |
| `MTL Sum (ours)` | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | All | FON POS | Accuracy | 82.45 |
| `MTL Weighted (ours)` | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | All | FON POS | Accuracy | 89.20 |
| `MTL Weighted (ours)` | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | Fon Data | FON POS | Accuracy | 80.85 |

# Model End-Points

- [`multitask_model_fon_False_multiplicative.bin`](https://huggingface.co/bonadossou/multitask_model_fon_False_multiplicative) is the MTL Fon model trained on all languages of the MasakhaNER 2.0 and MasakhaPOS datasets, merging the encoder representations in a multiplicative way.

- [`multitask_model_fon_True_multiplicative.bin`](https://huggingface.co/bonadossou/multitask-learning-fon-true-multiplicative) is the MTL Fon model trained only on the Fon data from the MasakhaNER 2.0 and MasakhaPOS datasets, also merging the encoder representations in a multiplicative way. A sketch of such a fusion is shown below.

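An illustrative sketch of a multiplicative fusion, assuming it means an element-wise product of hidden states brought to a common size (the hidden sizes and the projection layer below are hypothetical; the actual fusion is defined in `code/run_train.py`):

```python
import torch
import torch.nn as nn

# Illustrative hidden sizes; the real encoder dimensions are set in the repo.
small_size, shared_size = 768, 1024
project = nn.Linear(small_size, shared_size)  # hypothetical alignment layer

encoder_a_states = torch.randn(1, 16, small_size)   # (batch, seq_len, hidden)
encoder_b_states = torch.randn(1, 16, shared_size)

# Element-wise product: the "multiplicative" merge of the two encoders.
fused = project(encoder_a_states) * encoder_b_states
```
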
# How to run inference when you have the model

To run inference with the model(s), you can use the [testing block](https://github.com/bonaventuredossou/multitask_fon/blob/main/code/run_train.py#L209) defined in our MultitaskFON class.
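
To fetch and load a released checkpoint directly, a minimal sketch, assuming the `.bin` file is a PyTorch state dict and that the MultitaskFON class from the official repository rebuilds the matching architecture:

```python
import torch
from huggingface_hub import hf_hub_download

# Fetch the checkpoint released on the Hub.
checkpoint = hf_hub_download(
    repo_id="bonadossou/multitask_model_fon_False_multiplicative",
    filename="multitask_model_fon_False_multiplicative.bin",
)
state_dict = torch.load(checkpoint, map_location="cpu")

# With https://github.com/bonaventuredossou/multitask_fon cloned and its
# code importable, loading would look like (constructor args as defined
# in code/run_train.py):
# model = MultitaskFON(...)
# model.load_state_dict(state_dict)
# model.eval()
```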