hatespeech 3-class column

#3
by manueltonneau - opened

Hi all, and thanks for the cool contribution! I'm specifically interested in the 3-class hatespeech column. The possible values are 0, 1, or 2. Could you please clarify what each refers to? I couldn't find that information either here or in the paper.

Looking at the data, I'm a bit confused because "white people are trash" and "that kenyan girl is gorgeous" are in the same class (0), even though one is clearly hateful and the other is not. Thanks in advance for your help!

Social Sciences Data Lab at UC Berkeley org
edited Sep 27, 2023

Hello,

Thanks for checking out our work and for the comments.

For the benchmark hate speech item, 0 = no, 1 = unclear, 2 = yes. However, I encourage you to read the working paper, because focusing on that label misses the point of our work and has a number of disadvantages that are described there.
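
For concreteness, here is a minimal sketch of decoding that column, assuming the dataset is the one hosted on the Hub under ucberkeley-dlab/measuring-hate-speech and loads with a default train split:

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub (dataset id assumed from this repo).
ds = load_dataset("ucberkeley-dlab/measuring-hate-speech", split="train")
df = ds.to_pandas()

# Benchmark hate speech item: 0 = no, 1 = unclear, 2 = yes.
HATESPEECH_LABELS = {0: "no", 1: "unclear", 2: "yes"}
print(df["hatespeech"].map(HATESPEECH_LABELS).value_counts())
```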

Re: the examples you gave (comment_id in (1, 50068)), I would encourage you to think about their hatefulness more carefully, and to look at how that hatefulness is characterized in our dataset. The continuous hate speech score (hate_speech_score) measures each comment's hatefulness on a continuous spectrum, which is the purpose of our work. For comment_id = 1 it is estimated at 0.46, which falls in the mild hate speech severity region, whereas for comment_id = 50068 it is -4.28, which falls in the region for supportive/positive identity speech. The continuous score incorporates 10 survey items (the sentiment through hatespeech columns) that characterize the hate speech spectrum, of which the ordinal hate speech label is only one. It combines the labels from all annotators who evaluated each comment (4 annotators for comment_id = 1, so 40 total labels; 2 annotators for the other comment, so 20 labels) and adjusts for each annotator's perspective (annotator_severity).
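
As a sketch of how you might pull those two comments out and compare the label against the continuous score (continuing from the snippet above; the column name annotator_id is an assumption based on the dataset's naming, so adjust if your copy differs):

```python
# One row per (comment, annotator) pair.
examples = df[df["comment_id"].isin([1, 50068])]

summary = examples.groupby("comment_id").agg(
    hate_speech_score=("hate_speech_score", "first"),  # constant per comment
    n_annotators=("annotator_id", "nunique"),
    mean_hatespeech_label=("hatespeech", "mean"),
)
print(summary)
# Per the reply above: comment_id 1 -> ~0.46 (mild hate speech region),
# comment_id 50068 -> ~-4.28 (supportive/positive identity speech region).
```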

You can further examine the estimated noisiness of an annotator's ratings (annotator_infitms and annotator_outfitms) to check whether they are providing responses that are inconsistent with the other raters across all comments, a key indicator of annotation quality. Annotator infit or outfit statistics > 2.0 are considered detrimental to the dataset, and > 1.3 potentially low quality.
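
A minimal filter along those lines, assuming each annotator's infit/outfit value is repeated across all of their rows:

```python
# Deduplicate to one row per annotator, then apply the thresholds above.
fit = df.drop_duplicates("annotator_id")[
    ["annotator_id", "annotator_infitms", "annotator_outfitms"]
]

detrimental = fit[(fit["annotator_infitms"] > 2.0) | (fit["annotator_outfitms"] > 2.0)]
low_quality = fit[(fit["annotator_infitms"] > 1.3) | (fit["annotator_outfitms"] > 1.3)]

print(f"{len(detrimental)} annotators look detrimental (fit > 2.0);")
print(f"{len(low_quality)} look potentially low quality (fit > 1.3).")
```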

You can also examine the self-identified demographics (e.g. race, gender) of the annotators who rated a comment, to explore whether their interpretation, as reflected in their labels, may be informed by their own identities. See Pratik's paper "Assessing Annotator Identity Sensitivity via Item Response Theory: A Case Study in a Hate Speech Corpus" for more elaboration on that.
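
A hypothetical sketch of that kind of slice (the column name annotator_gender is an assumption for illustration; in the released data the demographics may be encoded differently, e.g. as one-hot indicator columns):

```python
# Mean ordinal hatespeech label per self-identified gender group for one comment.
one_comment = df[df["comment_id"] == 1]
print(one_comment.groupby("annotator_gender")["hatespeech"].agg(["mean", "count"]))
```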

Happy to clarify more if it would be helpful.

Cheers,
Chris

manueltonneau changed discussion status to closed

Thanks a lot for your detailed answer, and sorry for my late reply!
