---
license: bsl-1.0
task_categories:
- text-classification
language:
- en
tags:
- hate speech detection
- implicit hate
- subtle hate
- benchmark
size_categories:
- 10K<n<100K
pretty_name: ISHate
---
# Implicit and Subtle Hate

Implicit and Subtle Hate (ISHate) is a hate speech (HS) benchmark for detecting implicit and subtle HS in social media messages. The dataset was presented in the paper *"An In-depth Analysis of Implicit and Subtle Hate Speech Messages"*, accepted at EACL 2023.

[[Read the Paper]](https://aclanthology.org/2023.eacl-main.147/)
## What is Implicit and Subtle Hate?

**Implicit Hate Speech** does not immediately denote abuse or hate. Implicitness goes beyond word-level meaning and typically relies on figurative language such as irony or sarcasm, which hides the actual meaning, makes the message harder to grasp, and complicates the collection of hateful messages.

**Subtle Hate Speech** concerns hateful messages that are so delicate or elusive as to be difficult to analyze or describe, and that deliver their meaning indirectly. In contrast to implicit messages, where we go beyond literal meanings, literal meanings remain of prime importance in subtle messages. Although implicitness and subtlety differ on this point, understanding both still relies on language users' perception. While human perception is hard to characterize schematically, in our study elements such as negations with positive clauses, conditionals, connectors, unrelated constructions, word order, and circumlocution can greatly affect the subtlety of a message.
## Data Fields

- `message_id`: ID from the source dataset the text message was extracted from.
- `text`: Text message without any preprocessing.
- `cleaned_text`: Text message with preprocessing (long runs of non-space characters collapsed to a single occurrence; digits, special symbols, and URLs removed).
- `source`: Source the message was extracted from.
- `hateful_layer`: Layer with Non-HS and HS labels.
- `implicit_layer`: Layer with Explicit HS and Implicit HS labels.
- `subtlety_layer`: Layer with Non-Subtle and Subtle labels.
- `implicit_props_layer`: Layer with implicit-property labels for Implicit HS instances.
- `aug_method`: The augmentation method used for that message (`orig` for original data).
- `target`: Target group being attacked (HS instances), or the target group the message is directed to without a specific attack or hateful intent (Non-HS instances).
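The fields above map directly onto the `datasets` API. Below is a minimal sketch of how the ISHate Parquet files might be loaded and filtered by annotation layer, assuming they are hosted on the Hugging Face Hub; the repository ID, split name, and label strings are placeholders to be checked against the released data.

```python
# Minimal usage sketch (assumptions flagged below), not an official loader.
from datasets import load_dataset

# ASSUMPTION: replace "<namespace>/ISHate" with the dataset's actual Hub repository ID.
ds = load_dataset("<namespace>/ISHate")

# ASSUMPTION: a "train" split exists; adjust to the splits actually provided.
train = ds["train"]

# Inspect the annotation layers described above for one example.
example = train[0]
for field in ("text", "hateful_layer", "implicit_layer", "subtlety_layer", "target"):
    print(field, "->", example.get(field))

# Keep only implicit hate speech messages.
# ASSUMPTION: the label string is "Implicit HS"; check the actual label values first.
implicit_hs = train.filter(lambda row: row["implicit_layer"] == "Implicit HS")
print(len(implicit_hs), "implicit HS messages")
```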
## How do I cite this work?

### Citation

> Nicolás Benjamín Ocampo, Ekaterina Sviridova, Elena Cabrio, and Serena Villata. 2023. An In-depth Analysis of Implicit and Subtle Hate Speech Messages. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1997–2013, Dubrovnik, Croatia. Association for Computational Linguistics.
### Bibtex

```tex
@inproceedings{ocampo-etal-2023-depth,
    title = "An In-depth Analysis of Implicit and Subtle Hate Speech Messages",
    author = "Ocampo, Nicol{\'a}s Benjam{\'\i}n and
      Sviridova, Ekaterina and
      Cabrio, Elena and
      Villata, Serena",
    editor = "Vlachos, Andreas and
      Augenstein, Isabelle",
    booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.eacl-main.147",
    doi = "10.18653/v1/2023.eacl-main.147",
    pages = "1997--2013",
    abstract = "The research carried out so far in detecting abusive content in social media has primarily focused on overt forms of hate speech. While explicit hate speech (HS) is more easily identifiable by recognizing hateful words, messages containing linguistically subtle and implicit forms of HS (as circumlocution, metaphors and sarcasm) constitute a real challenge for automatic systems. While the sneaky and tricky nature of subtle messages might be perceived as less hurtful with respect to the same content expressed clearly, such abuse is at least as harmful as overt abuse. In this paper, we first provide an in-depth and systematic analysis of 7 standard benchmarks for HS detection, relying on a fine-grained and linguistically-grounded definition of implicit and subtle messages. Then, we experiment with state-of-the-art neural network architectures on two supervised tasks, namely implicit HS and subtle HS message classification. We show that while such models perform satisfactory on explicit messages, they fail to detect implicit and subtle content, highlighting the fact that HS detection is not a solved problem and deserves further investigation.",
}
```