Modalities: Tabular, Text
Formats: parquet
Languages: Catalan
Libraries: Datasets, pandas

Blanca committed (commit 6c1d2bc, 1 parent: 369b5d3): Update README.md

Files changed (1): README.md (+14 −9)
README.md CHANGED

````diff
@@ -65,7 +65,8 @@ Each instance in the dataset is a pair of original-answer messages, annotated wi
 ### Data Instances
 
 ```
-{"id_original": "1413960970066710533",
+{
+"id_original": "1413960970066710533",
 "id_answer": "1413968453690658816",
 "original_text": "",
 "answer_text": "",
@@ -74,8 +75,8 @@ Each instance in the dataset is a pair of original-answer messages, annotated wi
 "original_stance": "FAVOUR",
 "answer_stance": "AGAINST",
 "original_emotion": ["distrust", "joy", "disgust"],
-"answer_emotion": ["distrust"]}
-
+"answer_emotion": ["distrust"]
+}
 ```
 
 ### Data Splits
@@ -102,15 +103,19 @@ The source language producers are users of Twitter.
 
 ### Annotations
 
-Emotions are annotated in a multi-label fashion. The labels can be: Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise, Distrust.
-Static stance is annotated per message. The labels can be: FAVOUR, AGAINST, NEUTRAL, NA.
-Dynamic stance is annotated per pair. The labels can be: Agree, Disagree, Elaborate, Query, Neutral, Unrelated, NA.
+- Emotions are annotated in a multi-label fashion. The labels can be: Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise, Distrust.
+
+- Static stance is annotated per message. The labels can be: FAVOUR, AGAINST, NEUTRAL, NA.
+
+- Dynamic stance is annotated per pair. The labels can be: Agree, Disagree, Elaborate, Query, Neutral, Unrelated, NA.
 
 #### Annotation process
 
-For emotions there were 3 annotators. The gold labels are an aggregation of all the labels annotated by the 3. The IAA calculated with Fleiss' Kappa per label was, on average, 45.38.
-For static stance there were 2 annotators; in cases of disagreement a third annotator chose the gold label. The overall Fleiss' Kappa between the 2 annotators is 82.71.
-For dynamic stance there were 4 annotators. If at least 3 of the annotators disagreed, a fifth annotator chose the gold label. The overall Fleiss' Kappa between the 4 annotators was 56.51, and the average Fleiss' Kappa of the annotators with the gold labels is 85.17.
+- For emotions there were 3 annotators. The gold labels are an aggregation of all the labels annotated by the 3. The IAA calculated with Fleiss' Kappa per label was, on average, 45.38.
+
+- For static stance there were 2 annotators; in cases of disagreement a third annotator chose the gold label. The overall Fleiss' Kappa between the 2 annotators is 82.71.
+
+- For dynamic stance there were 4 annotators. If at least 3 of the annotators disagreed, a fifth annotator chose the gold label. The overall Fleiss' Kappa between the 4 annotators was 56.51, and the average Fleiss' Kappa of the annotators with the gold labels is 85.17.
 
 #### Who are the annotators?
````
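The corrected example instance shows the record schema: paired message IDs, text fields, per-message static stance, and multi-label emotions. A minimal sketch of reading and sanity-checking one such record with the standard `json` module (the record literal below mirrors the diff's example; the label sets come from the README's Annotations section, and the validation itself is illustrative, not part of the dataset's tooling):

```python
import json

# Label inventories documented in the README.
STANCE_LABELS = {"FAVOUR", "AGAINST", "NEUTRAL", "NA"}
EMOTION_LABELS = {"anger", "anticipation", "disgust", "fear",
                  "joy", "sadness", "surprise", "distrust"}

# Record shaped like the instance in the diff (its text fields are blank).
raw = """{
  "id_original": "1413960970066710533",
  "id_answer": "1413968453690658816",
  "original_text": "",
  "answer_text": "",
  "original_stance": "FAVOUR",
  "answer_stance": "AGAINST",
  "original_emotion": ["distrust", "joy", "disgust"],
  "answer_emotion": ["distrust"]
}"""

record = json.loads(raw)

# Static stance is a single label per message.
assert record["original_stance"] in STANCE_LABELS
assert record["answer_stance"] in STANCE_LABELS

# Emotions are multi-label: each message carries a list of emotion tags.
assert set(record["original_emotion"]) <= EMOTION_LABELS
assert set(record["answer_emotion"]) <= EMOTION_LABELS

print(sorted(record["original_emotion"]))  # → ['disgust', 'distrust', 'joy']
```

Since the dataset ships as parquet, the same records can equally be read into a pandas `DataFrame` and validated column-wise.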
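The annotation-process bullets report inter-annotator agreement as Fleiss' Kappa (apparently scaled by 100, e.g. 82.71). For reference, the standard Fleiss' Kappa computation can be sketched as below; the rating table is a toy example, not the dataset's actual annotation data:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a table where counts[i][j] is the number of
    raters who assigned item i to category j; every row must sum to
    the same number of raters n."""
    N = len(counts)          # number of items
    n = sum(counts[0])       # raters per item
    k = len(counts[0])       # number of categories
    # Observed agreement: per item, the fraction of agreeing rater pairs.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N
    # Chance agreement from the marginal category proportions.
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Toy table: 4 annotators labelling 5 messages into 3 stance categories.
table = [
    [4, 0, 0],
    [3, 1, 0],
    [0, 4, 0],
    [1, 1, 2],
    [0, 0, 4],
]
print(round(fleiss_kappa(table), 3))  # → 0.596
```

Multiplying such a score by 100 gives values on the scale the README reports (here 59.6, between the dataset's dynamic-stance 56.51 and static-stance 82.71).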