---
task_categories:
- text-classification
language:
- en
---

# Tweet Annotations in Different Experimental Conditions

## Description

The dataset contains tweet data annotations of **hate speech** and **offensive language** in five experimental conditions. The tweet data was sampled from the corpus created by [Davidson et al. (2017)](https://github.com/t-davidson/hate-speech-and-offensive-language). We selected 3,000 tweets for annotation.

We developed five experimental conditions that varied the annotation task structure, as shown in the following figure. All tweets were annotated in each condition.

- **Condition A** presented the tweet and three options on a single screen: hate speech, offensive language, or neither. Annotators could select hate speech, offensive language, or both, or indicate that neither applied.
- Conditions B and C split the annotation of a single tweet across two screens.
  + For **Condition B**, the first screen prompted the annotator to indicate whether the tweet contained hate speech. On the following screen, they were shown the tweet again and asked whether it contained offensive language.
  + **Condition C** was similar to Condition B, but flipped the order of the hate speech and offensive language questions for each tweet.
- In Conditions D and E, the two tasks were treated independently: annotators first annotated all tweets for one task, then annotated all tweets again for the second task.
  + Annotators assigned to **Condition D** were first asked to annotate hate speech for all of their assigned tweets, and then asked to annotate offensive language for the same set of tweets.
  + **Condition E** worked the same way, but started with the offensive language annotation task, followed by the hate speech annotation task.

We recruited 917 (?) annotators via the crowdsourcing platform [Prolific](https://www.prolific.com/) during November and December 2022. Only platform members in the US were eligible. Each annotator annotated up to 50 tweets. The dataset also contains demographic information about the annotators. Annotators received a fixed hourly wage in excess of the US federal minimum wage after completing the task.

![](./fig/exp_conditions.png "Illustration of Experimental Conditions")

## Dataset Structure

(tbu.: a table with column names, their meanings, and their variables)

## Citation

If you find the dataset useful, please cite:

```
@InProceedings{becketal_22,
  author="Beck, Jacob and Eckman, Stephanie and Chew, Rob and Kreuter, Frauke",
  editor="Chen, Jessie Y. C. and Fragomeni, Gino and Degen, Helmut and Ntoa, Stavroula",
  title="Improving Labeling Through Social Science Insights: Results and Research Agenda",
  booktitle="HCI International 2022 -- Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence",
  year="2022",
  publisher="Springer Nature Switzerland",
  address="Cham",
  pages="245--261",
  isbn="978-3-031-21707-4"
}
```
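
## Usage

A minimal sketch of loading the dataset with the Hugging Face `datasets` library. The hub ID and the column names (`condition`, `hate_speech`, `offensive_language`) below are placeholders, assumed until the Dataset Structure table above is filled in; replace them with the actual identifiers.

```python
from datasets import load_dataset

# Hypothetical hub ID -- replace with this repository's actual name.
ds = load_dataset("user/tweet-annotations-experimental-conditions", split="train")

# "condition" is an assumed column name holding the experimental condition
# (A-E); keep only annotations collected under Condition A.
condition_a = ds.filter(lambda row: row["condition"] == "A")
print(f"Condition A annotations: {len(condition_a)}")
```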