system HF staff committed on
Commit
607626b
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +178 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.1.0/dummy_data.zip +3 -0
  5. health_fact.py +175 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,178 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - fact-checking
+ - multi-class-classification
+ ---
+
+ # Dataset Card for PUBHEALTH
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [PUBHEALTH homepage](https://github.com/neemakot/Health-Fact-Checking)
+ - **Repository:** [PUBHEALTH repository](https://github.com/neemakot/Health-Fact-Checking/blob/master/data/DATASHEET.md)
+ - **Paper:** [Explainable Automated Fact-Checking for Public Health Claims](https://arxiv.org/abs/2010.09926)
+ - **Point of Contact:** [Neema Kotonya](mailto:nk2418@ic.ac.uk)
+
+ ### Dataset Summary
+
+ PUBHEALTH is a comprehensive dataset for explainable automated fact-checking of public health claims. Each instance in the PUBHEALTH dataset has an associated veracity label (true, false, unproven, mixture). Furthermore, each instance has an explanation text field; the explanation is a justification for why the claim has been assigned a particular veracity label.
+
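+ A minimal loading sketch with the `datasets` library (assuming `datasets>=1.2.0` is installed; `health_fact` is the name of the loading script in this repository):
+
+ ```python
+ from datasets import load_dataset
+
+ # Downloads the data and prepares the train/validation/test splits.
+ dataset = load_dataset("health_fact")
+
+ example = dataset["train"][0]
+ print(example["claim"])        # the claim text
+ print(example["label"])        # integer class id; -1 marks a missing label
+ print(example["explanation"])  # justification for the veracity label
+ ```
+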
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The text in the dataset is in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The following is an example instance of the PUBHEALTH dataset:
+
+ | Field | Example |
+ | ----------------- | -------------------------------------------------------------|
+ | __claim__ | Expired boxes of cake and pancake mix are dangerously toxic. |
+ | __explanation__ | What's True: Pancake and cake mixes that contain mold can cause life-threatening allergic reactions. What's False: Pancake and cake mixes that have passed their expiration dates are not inherently dangerous to ordinarily healthy people, and the yeast in packaged baking products does not "over time develops spores." |
+ | __label__ | mixture |
+ | __author(s)__ | David Mikkelson |
+ | __date published__ | April 19, 2006 |
+ | __tags__ | food, allergies, baking, cake |
+ | __main_text__ | In April 2006, the experience of a 14-year-old who had eaten pancakes made from a mix that had gone moldy was described in the popular newspaper column Dear Abby. The account has since been circulated widely on the Internet as scores of concerned homemakers ponder the safety of the pancake and other baking mixes lurking in their larders [...] |
+ | __evidence sources__ | [1] Bennett, Allan and Kim Collins. “An Unusual Case of Anaphylaxis: Mold in Pancake Mix.” American Journal of Forensic Medicine & Pathology. September 2001 (pp. 292-295). [2] Phillips, Jeanne. “Dear Abby.” 14 April 2006 [syndicated column]. |
+
+ ### Data Fields
+
+ Each instance contains the fields shown in the example above: `claim_id`, `claim`, `date_published`, `explanation`, `fact_checkers`, `main_text`, `sources`, and `subjects` (all strings), plus `label`, a class label over the names `false`, `mixture`, `true`, `unproven` (instances with a missing label are encoded as -1).
+
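+ The `label` field is stored as an integer class id. A short sketch of mapping ids back to names and dropping instances whose label is missing (encoded as -1 by the loading script below):
+
+ ```python
+ from datasets import load_dataset
+
+ train = load_dataset("health_fact", split="train")
+
+ names = train.features["label"].names  # ['false', 'mixture', 'true', 'unproven']
+ labeled = train.filter(lambda ex: ex["label"] != -1)  # keep only labeled rows
+ print(names[labeled[0]["label"]])
+ ```
+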
+ ### Data Splits
+
+ | Split     | # Instances |
+ |-----------|-------------|
+ | train.tsv | 9832 |
+ | dev.tsv   | 1221 |
+ | test.tsv  | 1235 |
+ | total     | 12288 |
+
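+ The split sizes can be checked programmatically, for example:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("health_fact")
+ for split_name, split in dataset.items():
+     print(split_name, split.num_rows)
+ ```
+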
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The dataset was created to explore fact-checking of difficult-to-verify claims, i.e., those which require expertise from outside the journalistic domain, in this case biomedical and public health expertise.
+
+ It was also created in response to the lack of fact-checking datasets which provide gold-standard natural language explanations for verdicts/labels.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The dataset was retrieved from the following fact-checking, news review, and news websites:
+
+ | URL | Type |
+ |-----------------------------------|--------------------|
+ | http://snopes.com/ | fact-checking |
+ | http://politifact.com/ | fact-checking |
+ | http://truthorfiction.com/ | fact-checking |
+ | https://www.factcheck.org/ | fact-checking |
+ | https://fullfact.org/ | fact-checking |
+ | https://apnews.com/ | news |
+ | https://uk.reuters.com/ | news |
+ | https://www.healthnewsreview.org/ | health news review |
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ None to our knowledge, but if it is brought to our attention that we are mistaken, we will make the appropriate corrections to the dataset.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset was created by Neema Kotonya and Francesca Toni for their research paper "Explainable Automated Fact-Checking for Public Health Claims", presented at EMNLP 2020.
+
+ ### Licensing Information
+
+ MIT License
+
+ ### Citation Information
+ ```
+ @inproceedings{kotonya-toni-2020-explainable,
+     title = "Explainable Automated Fact-Checking for Public Health Claims",
+     author = "Kotonya, Neema and
+       Toni, Francesca",
+     booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
+     month = nov,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.emnlp-main.623",
+     pages = "7740--7754",
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "PUBHEALTH is a comprehensive dataset for explainable automated fact-checking of\npublic health claims. Each instance in the PUBHEALTH dataset has an associated\nveracity label (true, false, unproven, mixture). Furthermore each instance in the\ndataset has an explanation text field. The explanation is a justification for which\nthe claim has been assigned a particular veracity label.\n\nThe dataset was created to explore fact-checking of difficult to verify claims i.e.,\nthose which require expertise from outside of the journalistics domain, in this case\nbiomedical and public health expertise.\n\nIt was also created in response to the lack of fact-checking datasets which provide\ngold standard natural language explanations for verdicts/labels.\n\nNOTE: There are missing labels in the dataset and we have replaced them with -1.\n", "citation": "@inproceedings{kotonya-toni-2020-explainable,\n title = \"Explainable Automated Fact-Checking for Public Health Claims\",\n author = \"Kotonya, Neema and Toni, Francesca\",\n booktitle = \"Proceedings of the 2020 Conference on Empirical Methods\n in Natural Language Processing (EMNLP)\",\n month = nov,\n year = \"2020\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/2020.emnlp-main.623\",\n pages = \"7740--7754\",\n}\n", "homepage": "https://github.com/neemakot/Health-Fact-Checking/blob/master/data/DATASHEET.md", "license": "", "features": {"claim_id": {"dtype": "string", "id": null, "_type": "Value"}, "claim": {"dtype": "string", "id": null, "_type": "Value"}, "date_published": {"dtype": "string", "id": null, "_type": "Value"}, "explanation": {"dtype": "string", "id": null, "_type": "Value"}, "fact_checkers": {"dtype": "string", "id": null, "_type": "Value"}, "main_text": {"dtype": "string", "id": null, "_type": "Value"}, "sources": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 4, "names": ["false", "mixture", "true", "unproven"], "names_file": null, "id": null, "_type": "ClassLabel"}, "subjects": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "health_fact", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 53985377, "num_examples": 9832, "dataset_name": "health_fact"}, "test": {"name": "test", "num_bytes": 6825221, "num_examples": 1235, "dataset_name": "health_fact"}, "validation": {"name": "validation", "num_bytes": 6653044, "num_examples": 1225, "dataset_name": "health_fact"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1eTtRs5cUlBP5dXsx-FTAlmXuB6JQi2qj": {"num_bytes": 24892660, "checksum": "3f0a5541f4a60c09a138a896621402893ce4b3a37060363d9257010c2c27fc3a"}}, "download_size": 24892660, "post_processing_size": null, "dataset_size": 67463642, "size_in_bytes": 92356302}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e832d34422b681b9c486cc20db4f0483986232bcc235ca196f5158795920440
+ size 27884
health_fact.py ADDED
@@ -0,0 +1,175 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Dataset for explainable fake news detection of public health claims."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{kotonya-toni-2020-explainable,
+     title = "Explainable Automated Fact-Checking for Public Health Claims",
+     author = "Kotonya, Neema and Toni, Francesca",
+     booktitle = "Proceedings of the 2020 Conference on Empirical Methods
+     in Natural Language Processing (EMNLP)",
+     month = nov,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.emnlp-main.623",
+     pages = "7740--7754",
+ }
+ """
+
+ _DESCRIPTION = """\
+ PUBHEALTH is a comprehensive dataset for explainable automated fact-checking of
+ public health claims. Each instance in the PUBHEALTH dataset has an associated
+ veracity label (true, false, unproven, mixture). Furthermore each instance in the
+ dataset has an explanation text field. The explanation is a justification for which
+ the claim has been assigned a particular veracity label.
+
+ The dataset was created to explore fact-checking of difficult to verify claims i.e.,
+ those which require expertise from outside of the journalistics domain, in this case
+ biomedical and public health expertise.
+
+ It was also created in response to the lack of fact-checking datasets which provide
+ gold standard natural language explanations for verdicts/labels.
+
+ NOTE: There are missing labels in the dataset and we have replaced them with -1.
+ """
+
+ _DATA_URL = "https://drive.google.com/uc?export=download&id=1eTtRs5cUlBP5dXsx-FTAlmXuB6JQi2qj"
+ _TEST_FILE_NAME = "PUBHEALTH/test.tsv"
+ _TRAIN_FILE_NAME = "PUBHEALTH/train.tsv"
+ _VAL_FILE_NAME = "PUBHEALTH/dev.tsv"
+
+
+ class HealthFact(datasets.GeneratorBasedBuilder):
+     """Dataset for explainable fake news detection of public health claims."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=datasets.Features(
+                 {
+                     "claim_id": datasets.Value("string"),
+                     "claim": datasets.Value("string"),
+                     "date_published": datasets.Value("string"),
+                     "explanation": datasets.Value("string"),
+                     "fact_checkers": datasets.Value("string"),
+                     "main_text": datasets.Value("string"),
+                     "sources": datasets.Value("string"),
+                     "label": datasets.features.ClassLabel(names=["false", "mixture", "true", "unproven"]),
+                     "subjects": datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://github.com/neemakot/Health-Fact-Checking/blob/master/data/DATASHEET.md",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_dir = dl_manager.download_and_extract(_DATA_URL)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, _TRAIN_FILE_NAME),
+                     "split": datasets.Split.TRAIN,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, _TEST_FILE_NAME),
+                     "split": datasets.Split.TEST,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, _VAL_FILE_NAME),
+                     "split": datasets.Split.VALIDATION,
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         with open(filepath, encoding="utf-8") as f:
+             label_list = ["false", "mixture", "true", "unproven"]
+             data = csv.reader(f, delimiter="\t")
+             next(data, None)  # skip the headers
+             for row_id, row in enumerate(data):
+                 row = [x if x != "nan" else "" for x in row]  # nan values changed to empty string
+                 if split != "test":
+                     # train.tsv and dev.tsv rows have 9 columns; pad short rows
+                     # with empty strings so the unpacking below succeeds
+                     if len(row) <= 9:
+                         elements = ["" for x in range(9 - len(row))]
+                         row = row + elements
+                     (
+                         claim_id,
+                         claim,
+                         date_published,
+                         explanation,
+                         fact_checkers,
+                         main_text,
+                         sources,
+                         label,
+                         subjects,
+                     ) = row
+                     if label not in label_list:  # remove stray labels in dev.tsv, train.tsv
+                         label = -1
+                 else:
+                     # test.tsv rows carry an extra leading index column (10 columns)
+                     if len(row) <= 10:
+                         elements = ["" for x in range(10 - len(row))]
+                         row = row + elements
+                     (
+                         _,
+                         claim_id,
+                         claim,
+                         date_published,
+                         explanation,
+                         fact_checkers,
+                         main_text,
+                         sources,
+                         label,
+                         subjects,
+                     ) = row
+                     if label not in label_list:  # remove stray labels in test.tsv
+                         label = -1
+                 if label == "":  # empty labels are also encoded as -1
+                     label = -1
+                 yield row_id, {
+                     "claim_id": claim_id,
+                     "claim": claim,
+                     "date_published": date_published,
+                     "explanation": explanation,
+                     "fact_checkers": fact_checkers,
+                     "main_text": main_text,
+                     "sources": sources,
+                     "label": label,
+                     "subjects": subjects,
+                 }