Commit 3c4cfbc by HannahRoseKirk (parent: 6575f26): Update README.md
README.md CHANGED

@@ -1,5 +1,16 @@
 ---
 license: cc-by-4.0
+language:
+- en
+tags:
+- text-classification
+- pytorch
+- hate-speech-detection
+datasets:
+- HatemojiBuild
+- HatemojiCheck
+metrics:
+- Accuracy, F1 Score
 ---
 
 # Hatemoji Model

@@ -61,7 +72,7 @@ We wished to train a model which could effectively encode information about emoj
 For the round-specific test sets, we used a weighted F1-score across them to choose the final model in each round. For more details, see our [paper](https://arxiv.org/abs/2108.05921)
 
 ## Evaluation results
-We compare our model
+We compare our model to:
 * **P-IA**: the identity attack attribute from Perspective API
 * **P-TX**: the toxicity attribute from Perspective API
 * **B-D**: A BERT model trained on the [Davidson et al. (2017)](https://github.com/t-davidson/hate-speech-and-offensive-language) dataset
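The weighted F1 selection criterion mentioned in the README can be reproduced with scikit-learn's `f1_score`. This is a minimal sketch using toy binary labels, not the paper's round-specific test sets; the label values and predictions below are illustrative assumptions.

```python
from sklearn.metrics import f1_score

# Toy gold labels and predictions for a binary hateful/non-hateful task
# (illustrative only; not the actual Hatemoji test-set outputs).
y_true = [0, 0, 1, 1]
y_pred = [0, 0, 1, 0]

# average="weighted" computes per-class F1 and averages it weighted by
# class support, matching the model-selection criterion described above.
score = f1_score(y_true, y_pred, average="weighted")
print(score)
```

In model selection, this score would be computed on each round-specific test set and the checkpoint with the best weighted F1 retained for that round.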