---
language: en
tags:
- HateBERT
---

# HateBERT

[Tommaso Caselli](https://www.semanticscholar.org/author/Tommaso-Caselli/1864635) •
[Valerio Basile](https://www.semanticscholar.org/author/Valerio-Basile/3101511) •
[Jelena Mitrovic](https://www.semanticscholar.org/author/Jelena-Mitrovic/145157863) •
[Michael Granitzer](https://www.semanticscholar.org/author/M.-Granitzer/2389675)

## Model description

HateBERT is an English pre-trained BERT model obtained by further training the English BERT base uncased model on more than 1 million posts from banned Reddit communities. The model was developed as a collaboration between the University of Groningen, the University of Turin, and the University of Passau.

For details, see the paper presented at [WOAH 2021](https://aclanthology.org/2021.woah-1.3/). The code and the fine-tuned models are available on [OSF](https://osf.io/tbd58/?view_only=cb79b3228d4248ddb875eb1803525ad8).

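As a minimal usage sketch (assuming the Hugging Face `transformers` library and the `GroNLP/hateBERT` hub id, which this card does not state itself), the model can be loaded like any BERT checkpoint for masked-token prediction:

```python
# Minimal sketch: load HateBERT with Hugging Face Transformers.
# The hub id "GroNLP/hateBERT" is an assumption; replace it with this repository's id.
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

model_id = "GroNLP/hateBERT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Masked-token prediction with the retrained weights.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for candidate in fill_mask("This community is [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```

Fine-tuning for offensive, abusive language, or hate speech detection follows the standard BERT sequence-classification recipe; the fine-tuned checkpoints themselves are available from the OSF repository linked above.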
### BibTeX entry and citation info

```bibtex
@inproceedings{caselli-etal-2021-hatebert,
    title = "{H}ate{BERT}: Retraining {BERT} for Abusive Language Detection in {E}nglish",
    author = "Caselli, Tommaso  and
      Basile, Valerio  and
      Mitrovi{\'c}, Jelena  and
      Granitzer, Michael",
    booktitle = "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.woah-1.3",
    doi = "10.18653/v1/2021.woah-1.3",
    pages = "17--25",
    abstract = "We introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful that we have curated and made available to the public. We present the results of a detailed comparison between a general pre-trained language model and the retrained version on three English datasets for offensive, abusive language and hate speech detection tasks. In all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the fine-tuned models across the datasets, suggesting that portability is affected by compatibility of the annotated phenomena.",
}
```