ibaucells committed
Commit 34afb56
1 Parent(s): 8dc71c8

Update README.md

Files changed (1): README.md (+20 -24)
README.md CHANGED
@@ -91,6 +91,9 @@ Three JSON files, one for each split.
       "puta"
     ],
     "annotation": {
+      "is_abusive": "abusive",
+      "abusiveness_agreement": "full",
+      "context_needed": "no",
       "abusive_spans": [
         [
           "no té ni puta idea",
@@ -106,10 +109,7 @@ Three JSON files, one for each split.
       "target_type": [
         "INDIVIDUAL"
       ],
-      "is_abusive": "abusive",
-      "context": "no",
-      "is_implicit": "yes",
-      "abusiveness_agreement": "full"
+      "is_implicit": "yes"
     }
   }

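For a quick check of the reordered annotation block, here is a minimal sketch in Python (the filename `train.json` and the assumption that each split is a JSON array of instances are ours, not stated in the diff):

```python
import json

# Load one split; the filename is a placeholder for the actual split file.
with open("train.json", encoding="utf-8") as f:
    data = json.load(f)

ann = data[0]["annotation"]          # assumes a JSON array of instances
print(ann["is_abusive"])             # "abusive" or "not_abusive"
print(ann["abusiveness_agreement"])  # "full" or "partial"
print(ann["context_needed"])         # "yes", "no" or "maybe"
print(ann["is_implicit"])            # "yes" or "no"
```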
@@ -117,18 +117,21 @@ Three JSON files, one for each split.
 
 ### Data Fields
 
-- id: a string feature.
-- context: a string feature.
-- sentence: a string feature.
-- topic: a string feature.
-- keywords: a list of strings.
-- context_needed: a string feature.
-- is_abusive: a bool feature.
-- abusiveness_agreement: a string feature.
-- target_type: a list of strings.
-- abusive_spans: a dictionary with field 'text' (list of strings) and 'index' (list of strings).
-- target_spans: a dictionary with field 'text' (list of strings) and 'index' (list of strings).
-- is_implicit: a string.
+- `id` (a string feature): unique identifier of the instance.
+- `context` (a string feature): the complete user message containing the sentence (it may coincide with the sentence exactly or only partially).
+- `sentence` (a string feature): the text whose abusiveness is evaluated.
+- `topic` (a string feature): the category of the Racó Català forum the sentence comes from.
+- `keywords` (a list of strings): keywords used to select the candidate messages for annotation.
+- `context_needed` (a string feature): "yes" if all the annotators consulted the context to decide on the sentence's abusiveness, "no" if none did, and "maybe" if they disagreed about needing it.
+- `is_abusive` (a string feature): "abusive" or "not_abusive".
+- `abusiveness_agreement` (a string feature): "full" if the two annotators agreed on the abusiveness of the sentence, and "partial" if it had to be decided by a third annotator.
+- `abusive_spans` (a dictionary with fields 'text' (list of strings) and 'index' (list of strings)): the sequences of words that contribute to the text's abusiveness.
+- `is_implicit` (a string feature): whether the abusiveness is explicit (contains a profanity, slur or threat) or implicit (does not contain a profanity or slur, but is likely to rely on irony, sarcasm or similar resources).
+- `target_spans` (a dictionary with fields 'text' (list of strings) and 'index' (list of strings)): the sequence(s) of words, if present in the message, that refer to the target of the text's abusiveness.
+- `target_type` (a list of strings): three possible categories, non-exclusive, since some targets have a dual identity and more than one target may be detected in a single message:
+  - individual: a famous person, a named person, or an unnamed person interacting in the conversation.
+  - group: people considered a unit based on shared ethnicity, gender or sexual orientation, political affiliation, religious belief, or something else.
+  - other: e.g. an organization, a situation, an event, or an issue.
 
 ### Data Splits
 
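Once loaded, the fields above can be inspected directly. A minimal sketch with the Hugging Face `datasets` library (the dataset ID is a placeholder, and loading this way assumes the repository exposes compatible data files or a loading script):

```python
from datasets import load_dataset

# Placeholder ID: substitute this repository's actual dataset ID.
ds = load_dataset("org/catalan-abusive-language", split="train")

example = ds[0]
print(example["sentence"], "->", example["is_abusive"])

# Span fields hold parallel 'text' and 'index' lists.
spans = example["abusive_spans"]
for text, index in zip(spans["text"], spans["index"]):
    print(f"abusive span: {text!r} at {index}")
```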
@@ -160,14 +163,7 @@ The annotation process was divided into the following two tasks, carried out in
 
 Task 1. The sentences (around 30,000) were annotated by two annotators as either abusive or not abusive. In case of ambiguity in the sentence, the annotators had the possibility to consult the context, i.e. the whole message of the user (if the sentence to be annotated was a segment contained in the message). In cases where annotators 1 and 2 disagreed about the abusiveness of a message, it was annotated by a third annotator. As a result, the sentences that are ultimately considered abusive are those that were initially annotated as abusive by both annotators or, in the case of an initial disagreement between them, those that were resolved as abusive by the third annotator.
 
-Task 2. The sentences annotated as abusive (6047) in Task 1 were further annotated by the two main annotators for the following features:
-- abusive spans: the sequence of words that attribute to the text's abusiveness
-- implicit/explicit abusiveness: whether the abusiveness is explicit (contains a profanity, slur or threat) or implicit (does not contain a profanity or slur, but is likely to contain irony, sarcasm or similar resources)
-- target spans: if found in the message, the sequence(s) of words that refer to the target of the text's abusiveness
-- target type: three possible categories. The categories are non-exclusive, as some targets may have a dual identity and more than one target may be detected in a single message.
-  - individual: a famous person, a named person or an unnamed person interacting in the conversation.
-  - group: considered to be a unit based on the same ethnicity, gender or sexual orientation, political affiliation, religious belief or something else.
-  - other; e.g. an organization, a situation, an event, or an issue
+Task 2. The sentences annotated as abusive (6,047) in Task 1 were further annotated by the two main annotators for the following features, explained in the Data Fields section: abusive spans, implicit/explicit abusiveness, target spans, and target type.
 
 The annotation guidelines are published and available on Zenodo.
 
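The Task 1 adjudication rule can be restated compactly. A minimal sketch (the function and its name are ours, purely illustrative; it also derives the `abusiveness_agreement` value described in Data Fields):

```python
# Illustrative restatement of the Task 1 adjudication rule (names are ours).
def resolve_abusiveness(ann1: bool, ann2: bool, ann3: bool | None = None):
    """Return (is_abusive, agreement) for one annotated sentence."""
    if ann1 == ann2:
        # The two main annotators agree: their shared label stands.
        return ann1, "full"
    # Disagreement: a third annotator decides, so agreement is only partial.
    if ann3 is None:
        raise ValueError("a third annotation is required on disagreement")
    return ann3, "partial"
```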