boleima committed on
Commit b338c39
1 Parent(s): ec3f809

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -14,16 +14,16 @@ size_categories:
 
  ## Description
 
- The dataset contains tweet annotations for **hate speech** and **offensive language** collected under five experimental conditions. The tweets were sampled from the corpus created by [Davidson et al. (2017)](https://ojs.aaai.org/index.php/ICWSM/article/view/14955), from which we selected 3,000 tweets for annotation. The five conditions varied the structure of the annotation task, as shown in the figure below; all tweets were annotated in each condition.
+ The dataset contains tweet annotations for **hate speech** (HS) and **offensive language** (OL) collected under five experimental conditions. The tweets were sampled from the corpus created by [Davidson et al. (2017)](https://ojs.aaai.org/index.php/ICWSM/article/view/14955), from which we selected 3,000 tweets for annotation. The five conditions varied the structure of the annotation task, as shown in the figure below; all tweets were annotated in each condition.
 
- - **Condition A** presented the tweet and three options on a single screen: hate speech, offensive language, or neither. Annotators could select hate speech, offensive language, or both, or indicate that neither applied.
+ - **<font color="#871F78">Condition A</font>** presented the tweet and three options on a single screen: hate speech, offensive language, or neither. Annotators could select hate speech, offensive language, or both, or indicate that neither applied.
 
  - Conditions B and C split the annotation of a single tweet across two screens.
- + For **Condition B**, the first screen prompted the annotator to indicate whether the tweet contained hate speech. On the following screen, they were shown the tweet again and asked whether it contained offensive language.
- + **Condition C** was similar to Condition B, but flipped the order of the hate speech and offensive language questions for each tweet.
+ + For **<font color="blue">Condition B</font>**, the first screen prompted the annotator to indicate whether the tweet contained hate speech. On the following screen, they were shown the tweet again and asked whether it contained offensive language.
+ + **<font color="red">Condition C</font>** was similar to Condition B, but flipped the order of the hate speech and offensive language questions for each tweet.
 
  - In Conditions D and E, the two tasks were treated independently: annotators first annotated all of their tweets for one task and then annotated the same tweets again for the second task.
- + Annotators assigned to **Condition D** were first asked to annotate hate speech for all of their assigned tweets and then to annotate offensive language for the same set of tweets.
+ + Annotators assigned to **<font color="green">Condition D</font>** were first asked to annotate hate speech for all of their assigned tweets and then to annotate offensive language for the same set of tweets.
  + **Condition E** worked the same way, but started with the offensive language annotation task, followed by the HS annotation task.
 
  We recruited 917 (?) annotators via the crowdsourcing platform [Prolific](https://www.prolific.com/) during November and December 2022. Only platform members in the US were eligible. Each annotator annotated up to 50 tweets. The dataset also contains demographic information of the annotators.
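Since the updated description groups annotations by experimental condition, a minimal loading sketch may help readers inspect that structure. The repository ID (`username/dataset-name`) and the column names `condition`, `hate_speech`, and `offensive_language` are hypothetical placeholders, not specified in this diff; substitute the values from the actual dataset card.

```python
from collections import Counter

from datasets import load_dataset  # Hugging Face `datasets` library

# Hypothetical repository ID and column names -- replace them with the real
# values from the dataset card; they are not specified in this commit diff.
ds = load_dataset("username/dataset-name", split="train")

# Count how many annotation rows fall under each experimental condition (A-E).
print(Counter(row["condition"] for row in ds))

# Peek at the two binary labels of a single annotation record.
first = ds[0]
print(first["hate_speech"], first["offensive_language"])
```

Grouping rows by the condition field in this way makes it straightforward to compare HS and OL label distributions across the five task structures described above.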