Annotations?

#1
by sileod - opened

Hi, this dataset looks great. The description mentions moderators; does this mean that the annotations are original? Thank you

Hi! Not really. Assuming you are referring to this comment:

There were disagreements among moderators on some labels, due to ambiguity and lack of context.

For some datasets I found online, 3-4 moderators classify the level of toxicity of each comment/message into 3-4 categories (e.g. Toxic, Not Sure, and Neutral). Sometimes, 2 moderators vote for Toxic and 1 votes for Neutral or Not Sure.
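The thread doesn't say exactly how such disagreements were collapsed into a single binary label. A minimal sketch of one plausible approach, majority-vote binarization (the `binarize_labels` helper and the tie-breaking rule are assumptions, not the dataset author's actual method):

```python
from collections import Counter

def binarize_labels(votes, toxic_label="Toxic"):
    """Collapse per-moderator labels into a single binary toxicity label.

    `votes` is a list like ["Toxic", "Not Sure", "Neutral"], one entry
    per moderator. A comment is marked toxic (1) only when a strict
    majority voted Toxic; ambiguous splits fall back to non-toxic (0).
    In practice you might drop ambiguous rows instead of keeping them.
    """
    counts = Counter(votes)
    return 1 if counts[toxic_label] > len(votes) / 2 else 0

# 2 of 3 moderators voted Toxic -> toxic
print(binarize_labels(["Toxic", "Toxic", "Neutral"]))        # 1
# Only 1 of 3 voted Toxic -> non-toxic under this rule
print(binarize_labels(["Toxic", "Not Sure", "Neutral"]))     # 0
```

A stricter variant would require unanimous agreement, trading dataset size for label quality.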

Thank you for the clarification
So you basically took several existing datasets and binarized the labels?
I am building a dataset collection as well (also with multilingual toxicity detection), but I try to use original annotation sources.

However, I did find and label roughly 50 messages that were obviously hate speech. So, some annotations are indeed original.

So you basically took several existing datasets and binarized the labels?

That's right

Wait, actually. At one point, I contacted Scott from SurgeAI and got 9 datasets from them.

Yeah, I think I combined too many datasets (at least 20+), definitely more than "several".
