Model Card for HomophobiaBERT
HomophobiaBERT is a machine learning model designed to detect homophobic content in English tweets. Built on BERT, it has been fine-tuned on a large dataset of tweets to capture the nuances and contexts associated with homophobia in online discourse.
Model Details
Model Description
HomophobiaBERT was created by Josh McGiff and Nikola S. Nikolov from the University of Limerick, Ireland, to address the gap in hate speech detection, particularly for homophobic content on social media platforms such as X (formerly Twitter). The model is part of broader efforts to combat online hate and promote inclusivity.
- Developed by: Josh McGiff and Nikola S. Nikolov
- Funded by: Science Foundation Ireland Centre for Research Training in Artificial Intelligence
- Model type: BERT-based sentiment analysis
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: BERT (Bidirectional Encoder Representations from Transformers)
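For readers who want a concrete picture of this setup, the sketch below shows how a bert-base-uncased checkpoint can be fine-tuned for binary homophobia classification with the Hugging Face transformers and datasets libraries. The file names, column names and hyperparameters are illustrative assumptions, not the authors' exact training configuration.

```python
# Illustrative fine-tuning sketch: binary homophobia classification on bert-base-uncased.
# File names, column names and hyperparameters are assumptions, not the authors' exact setup.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Assumed data layout: CSV files with a "text" column (the tweet) and a "label" column
# (0 = not homophobic, 1 = homophobic).
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="homophobia-bert",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```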
References
@article{mcgiff2024bridging,
  title={Bridging the Gap in Online Hate Speech Detection: A Comparative Analysis of BERT and Traditional Models for Homophobic Content Identification on X/Twitter},
  author={McGiff, Josh and Nikolov, Nikola S.},
  journal={Applied and Computational Engineering},
  volume={64},
  pages={64--69},
  year={2024}
}
Uses
Direct Use
HomophobiaBERT is intended for direct application in social media monitoring tools to detect and flag homophobic content automatically. It helps researchers, platform moderators, and advocacy groups understand and mitigate the spread of hate speech.
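The snippet below is a minimal sketch of how such a classifier can be queried with the Hugging Face transformers pipeline API. The repository id JoshMcGiff/homophobiaBERT and the label names shown are assumptions and may differ from the published checkpoint.

```python
# Minimal inference sketch using the transformers text-classification pipeline.
# NOTE: the model id and label names are assumptions, not confirmed by this card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="JoshMcGiff/homophobiaBERT",  # assumed repository id
)

tweets = [
    "I love the new season of this show!",
    "An example tweet that moderators want to screen.",
]

for tweet, result in zip(tweets, classifier(tweets)):
    # Each result is a dict such as {"label": "...", "score": 0.97}.
    print(f"{result['label']:>12}  {result['score']:.3f}  {tweet}")
```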
Downstream Use
Beyond direct detection, HomophobiaBERT can be integrated into larger systems for content moderation, academic research, and developing more nuanced models of sentiment analysis that recognise and categorise different forms of hate speech.
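As one illustration of such integration, the sketch below wraps the classifier in a simple moderation helper that auto-flags only high-confidence predictions and routes borderline cases to manual review. The thresholds, the positive-class label name HOMOPHOBIC, and the helper function are illustrative assumptions, not part of the released model.

```python
# Illustrative moderation wrapper; thresholds and label name are assumptions.
from transformers import pipeline

classifier = pipeline("text-classification", model="JoshMcGiff/homophobiaBERT")  # assumed id

AUTO_FLAG_THRESHOLD = 0.90   # assumed: auto-flag only very confident predictions
REVIEW_THRESHOLD = 0.60      # assumed: send mid-confidence predictions to human review

def moderate(tweet: str) -> str:
    """Return 'auto_flag', 'manual_review' or 'allow' for a single tweet."""
    result = classifier(tweet)[0]
    is_homophobic = result["label"] == "HOMOPHOBIC"  # assumed positive-class label
    if is_homophobic and result["score"] >= AUTO_FLAG_THRESHOLD:
        return "auto_flag"
    if is_homophobic and result["score"] >= REVIEW_THRESHOLD:
        return "manual_review"
    return "allow"

print(moderate("example tweet to screen"))
```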
Out-of-Scope Use
HomophobiaBERT is not designed for applications beyond text and sentiment analysis. Misuse includes deploying the model without understanding its limitations or using it to label individuals or communities maliciously.
Bias, Risks, and Limitations
HomophobiaBERT may carry biases from its training data. Incorrect classifications can occur, particularly in nuanced contexts or with evolving language. Expanding the training dataset could help the model generalise to such contexts within this homophobia classification task.
Recommendations
As noted above, incorrect classifications may occur because of the limited quantity of training data. Users should therefore continuously validate the model's performance in real-world scenarios and consider manual review for sensitive applications.
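One way to carry out such validation is to periodically score a hand-labelled sample of production traffic and track precision and recall per class. The sketch below assumes a small CSV of tweets with gold labels that use the same label names as the model's output; the file name, column names and model id are assumptions.

```python
# Sketch of periodic validation on a hand-labelled sample.
# Assumptions: "labelled_sample.csv" has columns "text" and "label", and the gold
# labels use the same label names as the model's predictions.
import pandas as pd
from sklearn.metrics import classification_report
from transformers import pipeline

classifier = pipeline("text-classification", model="JoshMcGiff/homophobiaBERT")  # assumed id

sample = pd.read_csv("labelled_sample.csv")
predictions = [r["label"] for r in classifier(sample["text"].tolist())]

# Compare predictions against the human gold labels; review per-class precision and recall.
print(classification_report(sample["label"], predictions))
```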