5roop committed on
Commit 69dd5d1
1 Parent(s): 5632cc0

Created model card.

Files changed (1)
  1. README.md +53 -0
README.md ADDED
@@ -0,0 +1,53 @@

# roberta-base-frenk-hate

Text classification model based on `roberta-base` and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433) comprising LGBT and migrant hate speech. Only the English subset of the data was used for fine-tuning, and the dataset has been relabeled for binary classification (offensive or acceptable).
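
A minimal inference sketch with the `transformers` pipeline is shown below; the Hub repository id `5roop/roberta-base-frenk-hate` and the example input are assumptions, not taken from this card:

```python
from transformers import pipeline

# Assumed Hub repository id; adjust if the model is published under a different name.
classifier = pipeline("text-classification", model="5roop/roberta-base-frenk-hate")

# The card describes binary labels (offensive vs. acceptable); the exact label
# strings returned depend on the model's config and are not specified here.
print(classifier("This is an example sentence."))
```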

## Fine-tuning hyperparameters

Fine-tuning was performed with `simpletransformers`. Beforehand, a brief hyperparameter optimisation was performed, and the presumed optimal hyperparameters are:

```python
model_args = {
    "num_train_epochs": 6,
    "learning_rate": 3e-6,
    "train_batch_size": 69,
}
```
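
The training call itself is not shown in the card; the following is a hedged sketch of how the arguments above might be passed to `simpletransformers`, with a placeholder `train_df` and an assumed `roberta-base` starting checkpoint:

```python
import pandas as pd
from simpletransformers.classification import ClassificationModel

# Placeholder training data; the real FRENK split is not reproduced here.
train_df = pd.DataFrame(
    {"text": ["example offensive text", "example acceptable text"], "labels": [1, 0]}
)

model_args = {
    "num_train_epochs": 6,
    "learning_rate": 3e-6,
    "train_batch_size": 69,
}

# Assumed base checkpoint and task setup; the card's own pipeline may differ in detail.
model = ClassificationModel(
    "roberta", "roberta-base", num_labels=2, args=model_args, use_cuda=False
)
model.train_model(train_df)
```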

## Performance

The same pipeline was run with three other models on the same dataset. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and analyzed afterwards.

| model | average accuracy | average macro F1 |
|---|---|---|
| roberta-base-frenk-hate | 0.7915 | 0.7785 |
| xlm-roberta-large | 0.7904 | 0.77876 |
| xlm-roberta-base | 0.7577 | 0.7402 |
| distilbert-base-uncased-finetuned-sst-2-english | 0.7201 | 0.69862 |

From the recorded accuracies and macro F1 scores, p-values were also calculated:

Comparison with `xlm-roberta-base`:

| test | accuracy p-value | macro F1 p-value |
| --- | --- | --- |
| Wilcoxon | 0.00781 | 0.00781 |
| Mann-Whitney U test | 0.00108 | 0.00108 |
| Student's t-test | 1.35e-08 | 1.05e-07 |

Comparison with `distilbert-base-uncased-finetuned-sst-2-english`:

| test | accuracy p-value | macro F1 p-value |
| --- | --- | --- |
| Wilcoxon | 0.00781 | 0.00781 |
| Mann-Whitney U test | 0.00108 | 0.00108 |
| Student's t-test | 1.33e-12 | 3.03e-12 |

Comparison with `xlm-roberta-large` yielded inconclusive results: this model achieved a higher average accuracy than `xlm-roberta-large`, but a slightly lower macro F1 score, and neither difference was statistically significant.
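
The card does not state which implementation was used for these tests; a minimal sketch with `scipy.stats` is given below, using placeholder score arrays rather than the recorded results:

```python
from scipy.stats import mannwhitneyu, ttest_ind, wilcoxon

# Placeholder accuracy scores from repeated fine-tuning runs (not the recorded results).
scores_this_model = [0.79, 0.80, 0.79, 0.78, 0.80, 0.79]
scores_baseline = [0.76, 0.75, 0.76, 0.75, 0.76, 0.75]

# Paired, non-parametric comparison of the two score sequences.
print(wilcoxon(scores_this_model, scores_baseline))
# Unpaired, non-parametric comparison.
print(mannwhitneyu(scores_this_model, scores_baseline))
# Parametric comparison of the two sample means.
print(ttest_ind(scores_this_model, scores_baseline))
```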