pepa committed on
Commit 4bf2e9d
1 Parent(s): e1621d2

Update README.md

Files changed (1)
  1. README.md +159 -154
README.md CHANGED
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- en-US
licenses:
- mit
multilinguality:
- monolingual
pretty_name: sufficient_facts
size_categories:
- 1K<n<10K
source_datasets:
- extended|fever
- extended|hover
- extended|fever_gold_evidence
task_categories:
- text-classification
task_ids:
- fact-checking
---

# Dataset Card for sufficient_facts

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/copenlu/sufficient_facts
- **Repository:** https://github.com/copenlu/sufficient_facts
- **Paper:** Will be uploaded soon...
- **Leaderboard:**
- **Point of Contact:** https://apepa.github.io/

### Dataset Summary

This is SufficientFacts, the dataset introduced in the paper "Fact Checking with Insufficient Evidence", accepted to the Transactions of the Association for Computational Linguistics (TACL) in 2022.

Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, **SufficientFacts**, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21% accuracy), whereas it is easiest for omitted date modifiers (63% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.

### Languages

English

## Dataset Structure

The dataset consists of three files, one for each of the source datasets: FEVER, HoVer, and VitaminC.
Each file contains JSON lines in the following format:

```json
{
    "claim": "Unison (Celine Dion album) was originally released by Atlantic Records.",
    "evidence": [
        [
            "Unison (Celine Dion album)",
            "The album was originally released on 2 April 1990 ."
        ]
    ],
    "label_before": "REFUTES",
    "label_after": "NOT ENOUGH",
    "agreement": "agree_ei",
    "type": "PP",
    "removed": ["by Columbia Records"],
    "text_orig": "[[Unison (Celine Dion album)]] The album was originally released on 2 April 1990 <span style=\"color:red;\">by Columbia Records</span> ."
}
```
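
Each file can be read with the Python standard library. The snippet below is a minimal sketch; the file name is illustrative, not the actual name used in the repository:

```python
import json

# Illustrative file name; substitute the actual file from the repository.
with open("fever_sufficient_facts.jsonl", encoding="utf-8") as f:
    instances = [json.loads(line) for line in f]

example = instances[0]
print(example["claim"])
print(example["evidence"])                  # list of [title, sentence] pairs
print(example["label_before"], "->", example["label_after"])
print(example["type"], example["removed"])  # what was omitted and its type
```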

### Data Instances

* FEVER: 600 constituent-level, 400 sentence-level;
* HoVer: 600 constituent-level, 400 sentence-level;
* VitaminC: 600 constituent-level.

### Data Fields

* `claim` - the claim that is being verified
* `evidence` - the augmented evidence for the claim, i.e. the evidence with some information removed
* `label_before` - the original label for the claim-evidence pair, before information was removed from the evidence
* `label_after` - the label for the augmented claim-evidence pair, after information was removed from the evidence, as annotated by crowd workers
* `type` - the type of information removed from the evidence. The types are fine-grained; their mapping to the general types (7 constituent types and 1 sentence type) can be found in the [types.json](types.json) file (see the sketch after this list)
* `removed` - the text of the information removed from the evidence
* `text_orig` - the original text of the evidence, as presented to the crowd workers; the removed information is enclosed in `<span style=\"color:red;\"></span>` tags
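
As a rough sketch of how the fine-grained `type` values could be collapsed into the general types, the snippet below assumes that `types.json` maps fine-grained type names to general type names (check the file in the repository for its exact structure) and reuses the illustrative file name from above:

```python
import json
from collections import Counter

# Assumption: types.json maps fine-grained type names (e.g. "PP") to general types.
with open("types.json", encoding="utf-8") as f:
    type_map = json.load(f)

# Illustrative file name, as in the reading example above.
with open("fever_sufficient_facts.jsonl", encoding="utf-8") as f:
    instances = [json.loads(line) for line in f]

# Count instances per general omission type.
counts = Counter(type_map.get(inst["type"], inst["type"]) for inst in instances)
print(counts.most_common())
```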

### Data Splits

| name | test_fever | test_hover | test_vitaminc |
|------|-----------:|-----------:|--------------:|
| test |       1000 |       1000 |           600 |

The instances are augmented from the test splits of the corresponding source datasets.
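
When the dataset is accessed through the `datasets` library rather than from the raw files, each source dataset would be a separate configuration. The repository id and configuration name below are assumptions based on this card, so check the dataset page for the exact identifiers:

```python
from datasets import load_dataset

# Assumed Hub id and configuration name; verify against the dataset page.
fever_test = load_dataset("copenlu/sufficient_facts", "fever", split="test")
print(len(fever_test))               # 1000 according to the table above
print(fever_test[0]["label_after"])
```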

### Annotations

#### Annotation process

The workers were provided with the following task description:

For each evidence text, some facts have been removed (marked in <span style="color:red;">red</span>).
You should annotate whether, <b>given the remaining facts in the evidence text, the evidence is still enough for verifying the claim.</b> <br></br>
<ul>
<li>You should select <i><b>'ENOUGH -- IRRELEVANT'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is irrelevant</b> for identifying the evidence as SUPPORTS or REFUTES. See examples 1 and 2.</li>
<li>You should select <i><b>'ENOUGH -- REPEATED'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is relevant but is also present (repeated) in the remaining (not red) text.</b> See example 3.</li>
<li>You should select <i><b>'NOT ENOUGH'</b></i> -- when <b>1) the removed information is <i>relevant</i></b> for verifying the claim <b> AND 2) it is <i>not present (repeated)</i> in the remaining text.</b> See examples 4, 5, and 6.</li>
<!--<li>You should select <i><b>'CHANGED INFO'</b></i> in the rare cases when the remaining evidence has <b>changed the support for the claim</b></li>-->
</ul>

<b>Note: You should not incorporate your own knowledge or beliefs! You should rely only on the evidence provided for the claim.</b>

The annotators were then given example instance annotations.
Finally, annotators were asked to complete a qualification test in order to be allowed to annotate instances for the task.
The resulting inter-annotator agreement for SufficientFacts is 0.81 Fleiss' kappa, computed over three annotators.
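
The released files include only the aggregated `agreement` field, not the individual annotator labels, so the figure above cannot be recomputed from this dataset alone. For reference, agreement of this kind is typically computed as in the sketch below, shown here with made-up ratings and using `statsmodels`:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Made-up labels from three annotators for five instances
# (0 = ENOUGH -- IRRELEVANT, 1 = ENOUGH -- REPEATED, 2 = NOT ENOUGH);
# the real per-annotator labels are not part of the released files.
ratings = np.array([
    [2, 2, 2],
    [0, 0, 1],
    [2, 2, 2],
    [1, 1, 1],
    [0, 2, 2],
])

table, _ = aggregate_raters(ratings)  # instances x categories count matrix
print(fleiss_kappa(table))
```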

#### Who are the annotators?

The annotations were performed by crowd workers on Amazon Mechanical Turk.

## Additional Information

### Licensing Information

MIT

### Citation Information

```bibtex
@article{atanasova2022fact,
  title={Fact Checking with Insufficient Evidence},
  author={Atanasova, Pepa and Simonsen, Jakob Grue and Lioma, Christina and Augenstein, Isabelle},
  journal={Transactions of the Association for Computational Linguistics (TACL)},
  year={2022}
}
```

### Contributions

Thanks to [@apepa](https://github.com/apepa) for adding this dataset.