Modalities: Text
Formats: json
Sub-tasks: fact-checking
Languages: English
Libraries: Datasets, pandas
License:
julien-c (HF staff) committed
Commit 367911e
1 parent: e349d8f

Fix `license` metadata

We recently updated the datasets metadata for consistency with other repo types (models & spaces).

Thanks! 🙏
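The rename this commit applies (the plural `languages`/`licenses` keys become the singular `language`/`license` in the card's YAML front matter) can be sketched as a small migration helper. Only the key names come from the diff below; the helper itself is illustrative, not the actual tool used by the Hub:

```python
# Hypothetical sketch of the front-matter key rename performed by this commit:
# plural YAML keys are replaced with the singular forms used by model and
# space repos. Not the Hub's real migration code.

RENAMES = {"languages": "language", "licenses": "license"}

def migrate_card(card_text: str) -> str:
    """Rename top-level keys inside the leading `---` YAML block of a card."""
    lines = card_text.split("\n")
    fences = [i for i, line in enumerate(lines) if line.strip() == "---"]
    if len(fences) < 2 or fences[0] != 0:
        return card_text  # no front matter: nothing to migrate
    for i in range(fences[0] + 1, fences[1]):
        if ":" not in lines[i] or lines[i].startswith((" ", "-")):
            continue  # skip YAML list items and indented (nested) keys
        key, rest = lines[i].split(":", 1)
        if key in RENAMES:
            lines[i] = RENAMES[key] + ":" + rest
    return "\n".join(lines)
```

For example, `migrate_card("---\nlicenses:\n- gpl-3.0\n---\n")` returns a card whose front matter uses `license:` while leaving the list values and the body untouched.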

Files changed (1)
  1. README.md +162 -162
README.md CHANGED
@@ -1,163 +1,163 @@
  ---
  annotations_creators:
  - machine-generated
  - expert-generated
  language_creators:
  - machine-generated
  - crowdsourced
- languages:
+ language:
  - en
- licenses:
+ license:
  - cc-by-sa-3.0
  - gpl-3.0
  multilinguality:
  - monolingual
  paperswithcode_id: fever
  pretty_name: ''
  size_categories:
  - 100K<n<1M
  source_datasets:
  - extended|fever
  task_categories:
  - text-classification
  task_ids:
  - fact-checking
  ---
  # Dataset Card for fever_gold_evidence

  ## Table of Contents
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

  ## Dataset Description

  - **Homepage:** https://github.com/copenlu/fever-adversarial-attacks
  - **Repository:** https://github.com/copenlu/fever-adversarial-attacks
  - **Paper:** https://aclanthology.org/2020.emnlp-main.256/
  - **Leaderboard:** [Needs More Information]
  - **Point of Contact:** [Needs More Information]

  ### Dataset Summary

  Dataset for training classification-only fact checking with claims from the FEVER dataset.
  This dataset is used in the paper "Generating Label Cohesive and Well-Formed Adversarial Claims", EMNLP 2020.

  The evidence is the gold evidence from the FEVER dataset for *REFUTE* and *SUPPORT* claims.
  For *NEI* claims, we extract evidence sentences with the system in "Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 109–113."
  More details can be found at https://github.com/copenlu/fever-adversarial-attacks

  ### Supported Tasks and Leaderboards

  [Needs More Information]

  ### Languages

  [Needs More Information]

  ## Dataset Structure

  ### Data Instances

  [Needs More Information]

  ### Data Fields

  [Needs More Information]

  ### Data Splits

  [Needs More Information]

  ## Dataset Creation

  ### Curation Rationale

  [Needs More Information]

  ### Source Data

  #### Initial Data Collection and Normalization

  [Needs More Information]

  #### Who are the source language producers?

  [Needs More Information]

  ### Annotations

  #### Annotation process

  [Needs More Information]

  #### Who are the annotators?

  [Needs More Information]

  ### Personal and Sensitive Information

  [Needs More Information]

  ## Considerations for Using the Data

  ### Social Impact of Dataset

  [Needs More Information]

  ### Discussion of Biases

  [Needs More Information]

  ### Other Known Limitations

  [Needs More Information]

  ## Additional Information

  ### Dataset Curators

  [Needs More Information]

  ### Licensing Information

  [Needs More Information]

  ### Citation Information
  ```
  @inproceedings{atanasova-etal-2020-generating,
      title = "Generating Label Cohesive and Well-Formed Adversarial Claims",
      author = "Atanasova, Pepa  and
        Wright, Dustin  and
        Augenstein, Isabelle",
      booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
      month = nov,
      year = "2020",
      address = "Online",
      publisher = "Association for Computational Linguistics",
      url = "https://aclanthology.org/2020.emnlp-main.256",
      doi = "10.18653/v1/2020.emnlp-main.256",
      pages = "3168--3177",
      abstract = "Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack are universal adversarial triggers, which are individual n-grams that, when appended to instances of a class under attack, can trick a model into predicting a target class. However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in. In addition, such attacks produce semantically nonsensical inputs, as they simply concatenate triggers to existing samples. Here, we investigate how to generate adversarial attacks against fact checking systems that preserve the ground truth meaning and are semantically valid. We extend the HotFlip attack algorithm used for universal trigger generation by jointly minimizing the target class loss of a fact checking model and the entailment class loss of an auxiliary natural language inference model. We then train a conditional language model to generate semantically valid statements, which include the found universal triggers. We find that the generated attacks maintain the directionality and semantic validity of the claim better than previous work.",
  }
  ```
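The card's summary describes claim/evidence/label records for classification-only fact checking. A minimal pandas sketch of handling such records follows; the column names and label strings here are assumptions for illustration, not the dataset's documented JSON schema:

```python
# Hypothetical FEVER-style claim/evidence/label records. Field names and
# label values are assumptions; check the dataset's actual JSON files.
import pandas as pd

records = [
    {"claim": "Example supported claim.", "evidence": "Gold evidence sentence.", "label": "SUPPORTS"},
    {"claim": "Example refuted claim.", "evidence": "Gold evidence sentence.", "label": "REFUTES"},
    {"claim": "Example unverifiable claim.", "evidence": "Retrieved evidence sentence.", "label": "NOT ENOUGH INFO"},
]

df = pd.DataFrame(records)
# Classification-only fact checking treats each (claim, evidence) pair as the
# model input and the label as the target class.
label_counts = df["label"].value_counts().to_dict()
```

A quick label distribution check like this is a common first step before training a claim/evidence classifier on such data.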