
Suomi-24-toxicity-annotated

This dataset contains comments from the Suomi24 discussion forum, sampled using the predictions of a toxicity classifier. For each label, comments were sampled in score intervals, with the sampling process emphasizing difficult borderline cases; 500 comments were sampled per label.
The annotation uses the labels from the Perspective API, also used e.g. in TurkuNLP/wikipedia-toxicity-data-fi.
Instead of multi-label annotation, each comment was annotated for a single label only, although a few comments appear under two labels.
The annotation process consisted of an initial annotation of 100-200 comments, followed by a discussion and the final annotations. Raw data can be found here.

Only examples with unanimous agreement, or with disagreements resolved through discussion, were included in the final dataset.
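The interval-based sampling described above can be sketched as follows. This is an illustrative assumption, not the exact procedure used: the 0.1-wide score buckets, the function name, and the per-bucket sample size are all hypothetical.

```python
import random

def sample_by_interval(comments, scores, n_per_bucket, seed=0):
    """Bucket comments by classifier score into 0.1-wide intervals
    and sample evenly from each bucket. Uniform per-bucket sampling
    over-represents borderline scores relative to the raw distribution."""
    rng = random.Random(seed)
    buckets = {}
    for comment, score in zip(comments, scores):
        # Scores in [0, 1] map to buckets 0..9 (1.0 folds into bucket 9).
        buckets.setdefault(min(int(score * 10), 9), []).append(comment)
    sampled = []
    for key in sorted(buckets):
        pool = buckets[key]
        sampled.extend(rng.sample(pool, min(n_per_bucket, len(pool))))
    return sampled
```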

Citing

To cite this dataset, use the following BibTeX entry.

@inproceedings{eskelinen-etal-2023-toxicity,
    title = "Toxicity Detection in {F}innish Using Machine Translation",
    author = "Eskelinen, Anni  and
      Silvala, Laura  and
      Ginter, Filip  and
      Pyysalo, Sampo  and
      Laippala, Veronika",
    booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
    month = may,
    year = "2023",
    address = "T{\'o}rshavn, Faroe Islands",
    publisher = "University of Tartu Library",
    url = "https://aclanthology.org/2023.nodalida-1.68",
    pages = "685--697",
    abstract = "Due to the popularity of social media platforms and the sheer amount of user-generated content online, the automatic detection of toxic language has become crucial in the creation of a friendly and safe digital space. Previous work has been mostly focusing on English leaving many lower-resource languages behind. In this paper, we present novel resources for toxicity detection in Finnish by introducing two new datasets, a machine translated toxicity dataset for Finnish based on the widely used English Jigsaw dataset and a smaller test set of Suomi24 discussion forum comments originally written in Finnish and manually annotated following the definitions of the labels that were used to annotate the Jigsaw dataset. We show that machine translating the training data to Finnish provides better toxicity detection results than using the original English training data and zero-shot cross-lingual transfer with XLM-R, even with our newly annotated dataset from Suomi24.",
}

Label definitions, taken from the Perspective API

THREAT: Describes an intention to inflict pain, injury, or violence against an individual or group; language that is threatening or that encourages violence or harm, including self-harm.

PROFANITY: Swear words, curse words, or other obscene or profane language.

INSULT: Insulting, inflammatory, or negative comment towards a person or a group of people. Such comments are not necessarily identity specific.

IDENTITY ATTACK: Negative or hateful comments targeting someone because of their identity.

TOXICITY: A rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion.

SEVERE TOXICITY: A very hateful, aggressive, disrespectful comment or otherwise very likely to make a user leave a discussion or give up on sharing their perspective. This attribute is much less sensitive to more mild forms of toxicity, such as comments that include positive uses of curse words.

Guidelines used for annotation:

Obscene

swearwords, including mild expletives and misspelled, masked, or other variations
sexually explicit words/terminology that are not topically or contextually appropriate

Threat

suicidal or self-harm comments, incitement to violence or self-harm, hypothetical situations and wishing harm to somebody
comments that are very unlikely to happen if not marked clearly as sarcasm
only threats towards people are annotated as threat

threats made by somebody else other than the writer NOT included
counterfactual statements NOT included

Insult

terms that are insulting towards groups of people (also in identity attack)
insults against political groups, e.g. "vitun demari/suvakki/persu" -> "fucking liberal/conservative etc."

negative insulting comments towards oneself, things other than people and hypothetical situations NOT included

Identity attack

comments that have no negative language but are still clearly negative

negative statements towards political groups or groups that nobody self-identifies with are NOT included (unless an insult)

Toxicity

unreasonably expressed negative comments, regardless of whether a target is present or known
mild or humorous swearwords are NOT included
positive or neutral sexually explicit comments are NOT included

Severe toxicity

comments that include only sexually explicit content
only one severely toxic element is needed for this label, and a comment is severely toxic even if it also contains substantive content
target does not need to be present nor does the target matter

Inter-annotator agreement:

| Label | Initial (unanimous) | After discussion (unanimous) | Initial (at least 2/3) | After discussion (at least 2/3) |
|---|---|---|---|---|
| identity attack | 54,5 % | 66,6 % | 92 % | 93,6 % |
| insult | 47,5 % | 49,6 % | 94,5 % | 95,6 % |
| severe toxicity | 63 % | 66 % | 92 % | 96,6 % |
| threat | 82 % | 80,3 % | 98 % | 97,3 % |
| toxicity | 58 % | 54 % | 93 % | 89,6 % |
| obscene | 69 % | 62 % | 97 % | 96 % |
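The agreement figures above can be computed from the three annotators' per-comment labels. A minimal sketch, assuming the annotations are available as triples of labels (the data format and function name are hypothetical):

```python
def agreement_rates(annotations):
    """annotations: list of (a1, a2, a3) label triples, one per comment.
    Returns (unanimous %, at-least-2/3 %) over all comments."""
    unanimous = majority = 0
    for a1, a2, a3 in annotations:
        if a1 == a2 == a3:
            unanimous += 1
        # At least two of the three annotators agree.
        if a1 == a2 or a1 == a3 or a2 == a3:
            majority += 1
    n = len(annotations)
    return 100 * unanimous / n, 100 * majority / n
```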

Evaluation results

Evaluation results obtained using TurkuNLP/bert-large-finnish-cased-toxicity.

| Label | Precision | Recall | F1 |
|---|---|---|---|
| identity attack | 73,2 | 32 | 44,6 |
| insult | 59,4 | 46,8 | 52,4 |
| severe toxicity | 12 | 28,6 | 16,9 |
| threat | 32,4 | 28,6 | 30,4 |
| toxicity | 60,4 | 79,2 | 68,5 |
| obscene | 64,5 | 82,4 | 72,3 |
| OVERALL | 57,4 | 58,9 | 51,1 |
| OVERALL weighted by original sample counts | 55,5 | 65,5 | 60,1 |
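For reference, each per-label F1 above is the harmonic mean of precision and recall, and the weighted OVERALL row is presumably a sample-count-weighted mean of the per-label scores. A minimal sketch (the function names are illustrative):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (both in percent)."""
    return 2 * precision * recall / (precision + recall)

def weighted_avg(values, weights):
    """Weighted mean, e.g. per-label scores weighted by sample counts."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```

For example, the insult row's precision 59,4 and recall 46,8 give F1 ≈ 52,4, matching the table.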

Licensing Information

Contents of this repository are distributed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). Copyright of the dataset contents belongs to the original copyright holders.
