---
license: cc-by-4.0
datasets:
- jagoldz/gahd
- Paul/hatecheck-german
language:
- de
metrics:
- f1
library_name: transformers
pipeline_tag: text-classification
tags:
- hate-speech-detection
- hate-speech
---

# Model Card

## Model Description

We fine-tuned this [gelectra-large model](https://huggingface.co/deepset/gelectra-large) over four rounds of dynamic adversarial data collection to create the GAHD dataset. In each round, annotators created examples by trying to trick the model into a misclassification. During data collection, we explored different ways of supporting annotators in finding model-tricking examples. This is the final model (R4) in our paper.

The model classifies text as "hate speech" (1) or "not-hate speech" (0).

Please see our [paper](https://arxiv.org/abs/2403.19559) for further details on the training procedure (Appendix C) and evaluation (Section 4).

- Paper: https://arxiv.org/abs/2403.19559
- GAHD dataset on Hugging Face: https://huggingface.co/datasets/jagoldz/gahd
- GAHD dataset on GitHub: https://github.com/jagol/gahd

## Citation

When using this model or the GAHD dataset, please cite our preprint on arXiv:

```
@misc{goldzycher2024improving,
      title={Improving Adversarial Data Collection by Supporting Annotators: Lessons from GAHD, a German Hate Speech Dataset},
      author={Janis Goldzycher and Paul Röttger and Gerold Schneider},
      year={2024},
      eprint={2403.19559},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
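
## How to Use

A minimal usage sketch with the `transformers` text-classification pipeline, matching the `library_name` and `pipeline_tag` above. The repository id below is a placeholder assumption (this card does not state the model id), and the exact label names returned (e.g. `LABEL_0`/`LABEL_1` vs. named labels) depend on the model's config; only the mapping 1 = hate speech, 0 = not-hate speech is given in this card.

```python
from transformers import pipeline

# Placeholder: replace with the actual repository id of this model.
MODEL_ID = "path/to/this-model"

# Load the fine-tuned gelectra-large classifier as a standard
# text-classification pipeline.
classifier = pipeline("text-classification", model=MODEL_ID)

# German example inputs; the model predicts 1 ("hate speech") or 0 ("not-hate speech").
texts = [
    "Ich wünsche dir einen schönen Tag!",
]

for result in classifier(texts):
    # result["label"] may be "LABEL_1" / "LABEL_0" or named labels,
    # depending on the id2label mapping stored in the model config.
    print(result["label"], round(result["score"], 3))
```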