
# MetaHateBERT

## Model Description

This is a fine-tuned BERT model designed to detect hate speech in text. It is based on the `bert-base-uncased` architecture and has been fine-tuned on the MetaHate dataset (see Citation below) for binary text classification, where the labels are `no hate` and `hate`.

## Intended Uses & Limitations

### Intended Uses

- **Hate Speech Detection**: This model is intended for detecting hate speech in social media comments, forums, and other text data sources.
- **Content Moderation**: Can be used by platforms to automatically flag potentially harmful content (see the sketch after this list).
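
For the moderation use case, here is a minimal sketch of what an automated flagging step could look like; `flag_for_review` and its `threshold` parameter are illustrative helpers, not part of this model's API:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="irlab-udc/MetaHateBERT")

def flag_for_review(comments, threshold=0.5):
    """Return the comments whose top prediction is 'hate'.

    `threshold` is a hypothetical knob: raising it flags fewer,
    higher-confidence comments.
    """
    results = classifier(comments)  # one {'label', 'score'} dict per comment
    return [
        comment
        for comment, result in zip(comments, results)
        if result["label"] == "hate" and result["score"] >= threshold
    ]

flagged = flag_for_review(["first comment", "second comment"])
```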

### Limitations

- **Biases**: The model may carry biases present in the training data.
- **False Positives/Negatives**: The model is not perfect and may misclassify some instances in either direction; one mitigation is sketched after this list.
- **Domain Specificity**: Performance may degrade on text domains that differ from the training data.
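
To mitigate misclassifications in practice, one option is to inspect the per-label scores instead of acting on the top label alone, and route low-confidence predictions to a human. A sketch, assuming a recent `transformers` release (`top_k=None`; older versions used `return_all_scores=True`) and an illustrative confidence cutoff:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="irlab-udc/MetaHateBERT")

# top_k=None returns a score for every label, not just the winner.
scores = classifier("Your input text here", top_k=None)
print(scores)  # e.g. [{'label': 'hate', 'score': ...}, {'label': 'no hate', 'score': ...}]

best = max(scores, key=lambda s: s["score"])
if best["score"] < 0.7:  # illustrative cutoff, tune on your own data
    print("Low confidence, route to human review")
else:
    print(f"Predicted label: {best['label']}")
```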

## Citation

If you use this model, please cite the following reference:

```bibtex
@article{Piot_Martín-Rodilla_Parapar_2024,
  title={MetaHate: A Dataset for Unifying Efforts on Hate Speech Detection},
  volume={18},
  url={https://ojs.aaai.org/index.php/ICWSM/article/view/31445},
  DOI={10.1609/icwsm.v18i1.31445},
  number={1},
  journal={Proceedings of the International AAAI Conference on Web and Social Media},
  author={Piot, Paloma and Martín-Rodilla, Patricia and Parapar, Javier},
  year={2024},
  month={May},
  pages={2025-2039}
}
```

## Acknowledgements

The authors gratefully acknowledge funding from the Horizon Europe research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 101073351. The authors also thank the financial support provided by the Consellería de Cultura, Educación, Formación Profesional e Universidades (accreditation 2019-2022 ED431G/01, ED431B 2022/33) and the European Regional Development Fund, which acknowledges the CITIC Research Center in ICT of the University of A Coruña as a Research Center of the Galician University System, as well as the project PID2022-137061OB-C21 (Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación, Proyectos de Generación de Conocimiento; supported by the European Regional Development Fund). The authors also acknowledge the funding of project PLEC2021-007662 (MCIN/AEI/10.13039/501100011033, Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación, Plan de Recuperación, Transformación y Resiliencia, Unión Europea-Next Generation EU).

## Usage

### Inference

To use this model, you can load it via the `transformers` library:

```python
from transformers import pipeline

# Load the model
classifier = pipeline("text-classification", model="irlab-udc/MetaHateBERT")

# Test the model
result = classifier("Your input text here")
print(result)  # e.g. [{'label': 'no hate', 'score': 0.98}]; the label is "no hate" or "hate"
```
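
If you need the raw probabilities rather than the pipeline's formatted output, the checkpoint can also be loaded at a lower level. A minimal sketch, assuming the checkpoint's `id2label` mapping carries the `no hate`/`hate` labels shown above:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("irlab-udc/MetaHateBERT")
model = AutoModelForSequenceClassification.from_pretrained("irlab-udc/MetaHateBERT")

inputs = tokenizer("Your input text here", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to class probabilities and look up the label name.
probs = torch.softmax(logits, dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], float(probs[pred_id]))
```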
