This is a multilingual misogyny and sexism (M/S) detection model covering English, French, Hindi, Italian, and Bengali.

This model was released with the following paper (https://rdcu.be/dmIpq):
```
@InProceedings{10.1007/978-3-031-43129-6_9,
author="Chang, Rong-Ching
and May, Jonathan
and Lerman, Kristina",
editor="Thomson, Robert
and Al-khateeb, Samer
and Burger, Annetta
and Park, Patrick
and A. Pyke, Aryn",
title="Feedback Loops and Complex Dynamics of Harmful Speech in Online Discussions",
booktitle="Social, Cultural, and Behavioral Modeling",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="85--94",
abstract="Harmful and toxic speech contribute to an unwelcoming online environment that suppresses participation and conversation. Efforts have focused on detecting and mitigating harmful speech; however, the mechanisms by which toxicity degrades online discussions are not well understood. This paper makes two contributions. First, to comprehensively model harmful comments, we introduce a multilingual misogyny and sexist speech detection model (https://huggingface.co/annahaz/xlm-roberta-base-misogyny-sexism-indomain-mix-bal). Second, we model the complex dynamics of online discussions as feedback loops in which harmful comments lead to negative emotions which prompt even more harmful comments. To quantify the feedback loops, we use a combination of mutual Granger causality and regression to analyze discussions on two political forums on Reddit: the moderated political forum r/Politics and the moderated neutral political forum r/NeutralPolitics. Our results suggest that harmful comments and negative emotions create self-reinforcing feedback loops in forums. Contrarily, moderation with neutral discussion appears to tip interactions into self-extinguishing feedback loops that reduce harmful speech and negative emotions. Our study sheds more light on the complex dynamics of harmful speech and the role of moderation and neutral discussion in mitigating these dynamics.",
isbn="978-3-031-43129-6"
}
```

We combined several multilingual ground-truth datasets for misogyny and sexism (M/S) versus non-misogyny and non-sexism (non-M/S) [3, 5, 8, 9, 11, 13, 20]. In each language, the data contained equal numbers of M/S and non-M/S texts: 8,582 M/S texts in English, 872 in French, 561 in Hindi, 2,190 in Italian, and 612 in Bengali. The test data was a balanced set of 100 texts sampled randomly from each of the M/S and non-M/S groups in each language, for a total of 500 M/S and 500 non-M/S examples.
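
As a rough illustration of how such a balanced test split can be drawn, here is a hedged pandas sketch; the file name and the `text`/`label`/`lang` columns are hypothetical placeholders, not part of the released data:

```python
import pandas as pd

# Hypothetical combined corpus with columns: text, label ("M/S" / "non-M/S"),
# and lang ("en", "fr", "hi", "it", "bn").
df = pd.read_csv("combined_misogyny_sexism.csv")

# Draw 100 texts per (language, class) cell: 5 languages x 2 classes x 100
# = 1,000 test examples (500 M/S, 500 non-M/S), as described above.
test = (
    df.groupby(["lang", "label"], group_keys=False)
      .sample(n=100, random_state=42)
)

# The remaining rows form the training pool.
train = df.drop(test.index)
```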

References of the datasets are:

3. Bhattacharya, S., et al.: Developing a multilingual annotated corpus of misogyny and aggression, pp. 158–168. ELRA, Marseille, France, May 2020. https://aclanthology.org/2020.trac-1.25

5. Chiril, P., Moriceau, V., Benamara, F., Mari, A., Origgi, G., Coulomb-Gully, M.: An annotated corpus for sexism detection in French tweets. In: Proceedings of LREC, pp. 1397–1403 (2020)

8. Fersini, E., et al.: SemEval-2022 task 5: multimedia automatic misogyny identification. In: Proceedings of SemEval, pp. 533–549 (2022)

9. Fersini, E., Nozza, D., Rosso, P.: Overview of the Evalita 2018 task on automatic misogyny identification (AMI). EVALITA Eval. NLP Speech Tools Italian 12, 59 (2018)

11. Guest, E., Vidgen, B., Mittos, A., Sastry, N., Tyson, G., Margetts, H.: An expert annotated dataset for the detection of online misogyny. In: Proceedings of EACL, pp. 1336–1350 (2021)

13. Jha, A., Mamidi, R.: When does a compliment become sexist? Analysis and classification of ambivalent sexism using Twitter data. In: Proceedings of NLP+CSS, pp. 7–16 (2017)

20. Waseem, Z., Hovy, D.: Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In: Proceedings of NAACL SRW, pp. 88–93 (2016)


Please see the paper for more details.

---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-misogyny-sexism-indomain-mix-bal
  results: []
---


# xlm-roberta-base-misogyny-sexism-indomain-mix-bal

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the combined multilingual misogyny/sexism (M/S) dataset described above.
It achieves the following results on the evaluation set:
- Loss: 0.8259
- Accuracy: 0.826
- F1: 0.8333
- Precision: 0.7996
- Recall: 0.87
- MAE: 0.174
- Tn (true negatives): 391
- Fp (false positives): 109
- Fn (false negatives): 65
- Tp (true positives): 435
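
These headline metrics are consistent with the confusion-matrix counts; a quick sanity check in Python:

```python
# Recompute the headline metrics from the reported confusion matrix.
tn, fp, fn, tp = 391, 109, 65, 435

accuracy = (tp + tn) / (tp + tn + fp + fn)          # 0.826
precision = tp / (tp + fp)                          # ~0.7996
recall = tp / (tp + fn)                             # 0.87
f1 = 2 * precision * recall / (precision + recall)  # ~0.8333

print(f"acc={accuracy:.4f} p={precision:.4f} r={recall:.4f} f1={f1:.4f}")
```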

## Model description

xlm-roberta-base fine-tuned for binary classification of misogynistic/sexist (M/S) versus non-M/S text. Because both the base model and the fine-tuning data are multilingual, the classifier covers English, French, Hindi, Italian, and Bengali.

## Intended uses & limitations

The model is intended for detecting misogynistic and sexist speech in online text, in the languages covered by the training data (English, French, Hindi, Italian, and Bengali); it was developed to model harmful comments in the paper cited above. Performance on other languages, domains, or platforms has not been evaluated.
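
A minimal inference sketch with the `transformers` pipeline is shown below. The checkpoint name comes from the paper's abstract; the assumption that `LABEL_1` denotes M/S speech should be verified against the model's `id2label` config, and the example sentences are purely illustrative.

```python
from transformers import pipeline

# Load the released checkpoint (named in the paper's abstract).
classifier = pipeline(
    "text-classification",
    model="annahaz/xlm-roberta-base-misogyny-sexism-indomain-mix-bal",
)

examples = [
    "Women belong in the kitchen.",  # illustrative: expected M/S
    "The weather is lovely today.",  # illustrative: expected non-M/S
]

# Assumption: LABEL_1 = M/S, LABEL_0 = non-M/S; check id2label to confirm.
for text, pred in zip(examples, classifier(examples)):
    print(f"{pred['label']} ({pred['score']:.3f}): {text}")
```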

## Training and evaluation data

The training data is the combined, class-balanced multilingual M/S corpus described above [3, 5, 8, 9, 11, 13, 20]. The evaluation set is the balanced test split of 100 randomly sampled texts per class per language (500 M/S and 500 non-M/S examples in total).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
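
For reproducibility, a minimal `TrainingArguments` sketch matching these settings follows; the output directory is a placeholder, and the rest of the training script (data loading, `Trainer` wiring) is not part of the released card:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-misogyny-sexism-indomain-mix-bal",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the library default.
)
```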

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall | Mae   | Tn  | Fp  | Fn | Tp  |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----:|:---:|:---:|:--:|:---:|
| 0.2643        | 1.0   | 1603 | 0.6511          | 0.82     | 0.8269 | 0.7963    | 0.86   | 0.18  | 390 | 110 | 70 | 430 |
| 0.2004        | 2.0   | 3206 | 0.8259          | 0.826    | 0.8333 | 0.7996    | 0.87   | 0.174 | 391 | 109 | 65 | 435 |


### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1