David Wadden committed on
Commit
de8e9db
1 Parent(s): e5f5baf

Copy from CovidFact.

Files changed (2)
  1. README.md +84 -0
  2. healthver_entailment.py +161 -0
README.md ADDED
@@ -0,0 +1,84 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ language:
+ - en
+ license:
+ - cc-by-nc-2.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - fact-checking
+ pretty_name: CovidFact
+ dataset_info:
+   features:
+   - name: claim_id
+     dtype: int32
+   - name: claim
+     dtype: string
+   - name: abstract_id
+     dtype: int32
+   - name: title
+     dtype: string
+   - name: abstract
+     sequence: string
+   - name: verdict
+     dtype: string
+   - name: evidence
+     sequence: int32
+   splits:
+   - name: train
+     num_bytes: 1547185
+     num_examples: 940
+   - name: test
+     num_bytes: 523542
+     num_examples: 317
+   download_size: 3610222
+   dataset_size: 2070727
+ ---
+
+ # Dataset Card for "covidfact_entailment"
+
+ ## Table of Contents
+
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+
+ ## Dataset Description
+
+ - **Repository:** <https://github.com/asaakyan/covidfact>
+ - **Point of Contact:** [David Wadden](mailto:davidw@allenai.org)
+
+ ### Dataset Summary
+
+ COVID-FACT is a dataset of claims about COVID-19. For this version of the dataset, we follow the preprocessing from the MultiVerS modeling paper (<https://github.com/dwadden/multivers>), verifying claims against abstracts of scientific research articles. Entailment labels and rationales are included.
+
+ ## Dataset Structure
+
+ ### Data Fields
+
+ - `claim_id`: An `int32` claim identifier.
+ - `claim`: The text of the claim, a `string`.
+ - `abstract_id`: An `int32` abstract identifier.
+ - `title`: The abstract's title, a `string`.
+ - `abstract`: A list of `string`s, one for each sentence in the abstract.
+ - `verdict`: The fact-checking verdict, a `string`.
+ - `evidence`: A list of `int32` indices of the abstract sentences which provide evidence for the verdict.
+
+ ### Data Splits
+
+ |        |train|test|
+ |--------|----:|---:|
+ |examples|  940| 317|
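Given the schema above, a single example is just a nested record; the sketch below uses invented values (they do not come from the dataset) to show how the `evidence` indices select rationale sentences from `abstract`:

```python
# Hypothetical record illustrating the documented schema; all values invented.
example = {
    "claim_id": 12,
    "claim": "An illustrative claim about COVID-19.",
    "abstract_id": 345,
    "title": "An illustrative abstract title",
    "abstract": [
        "Sentence 0 of the abstract.",
        "Sentence 1 of the abstract.",
        "Sentence 2 of the abstract.",
    ],
    "verdict": "SUPPORT",  # "NEI" when no evidence exists for the claim/abstract pair.
    "evidence": [1, 2],    # Indices into the `abstract` sentence list.
}

# The evidence indices pick out the rationale sentences.
rationale = [example["abstract"][i] for i in example["evidence"]]
print(rationale)  # ['Sentence 1 of the abstract.', 'Sentence 2 of the abstract.']
```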
healthver_entailment.py ADDED
@@ -0,0 +1,161 @@
+ """Scientific fact-checking dataset. Verifies claims based on citation sentences
+ using evidence from the cited abstracts. Formatted as a paragraph-level entailment task."""
+
+
+ import json
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{Saakyan2021COVIDFactFE,
+   title={COVID-Fact: Fact Extraction and Verification of Real-World Claims on COVID-19 Pandemic},
+   author={Arkadiy Saakyan and Tuhin Chakrabarty and Smaranda Muresan},
+   journal={ArXiv},
+   year={2021},
+   volume={abs/2106.03794},
+   url={https://api.semanticscholar.org/CorpusID:235364036}
+ }
+ """
+
+
+ _DESCRIPTION = """\
+ COVID-FACT is a dataset of claims about COVID-19. For this version of the dataset, we follow the preprocessing from the MultiVerS modeling paper https://github.com/dwadden/multivers, verifying claims against abstracts of scientific research articles. Entailment labels and rationales are included.
+ """
+
+ _URL = "https://scifact.s3.us-west-2.amazonaws.com/longchecker/latest/data.tar.gz"
+
+
+ def flatten(xss):
+     """Flatten a list of lists into a single list."""
+     return [x for xs in xss for x in xs]
+
+
+ class CovidFactEntailmentConfig(datasets.BuilderConfig):
+     """BuilderConfig for CovidFact."""
+
+     def __init__(self, **kwargs):
+         """
+         Args:
+             **kwargs: Keyword arguments forwarded to super.
+         """
+         super(CovidFactEntailmentConfig, self).__init__(
+             version=datasets.Version("1.0.0", ""), **kwargs
+         )
+
+
+ class CovidFactEntailment(datasets.GeneratorBasedBuilder):
+     """Entailment version of the COVID-FACT dataset."""
+
+     VERSION = datasets.Version("0.1.0")
+
+     def _info(self):
+         features = {
+             "claim_id": datasets.Value("int32"),
+             "claim": datasets.Value("string"),
+             "abstract_id": datasets.Value("int32"),
+             "title": datasets.Value("string"),
+             "abstract": datasets.features.Sequence(datasets.Value("string")),
+             "verdict": datasets.Value("string"),
+             "evidence": datasets.features.Sequence(datasets.Value("int32")),
+         }
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             features=datasets.Features(features),
+             # No canonical (input, target) pair, so leave supervised_keys unset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation.
+             homepage="https://scifact.apps.allenai.org/",
+             citation=_CITATION,
+         )
+
+     @staticmethod
+     def _read_tar_file(f):
+         """Parse a jsonl file object from the tar archive into a list of dicts."""
+         res = []
+         for row in f:
+             this_row = json.loads(row.decode("utf-8"))
+             res.append(this_row)
+
+         return res
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         archive = dl_manager.download(_URL)
+         for path, f in dl_manager.iter_archive(archive):
+             # The claims are too similar to paper titles; don't include titles.
+             if path == "data/covidfact/corpus_without_titles.jsonl":
+                 corpus = self._read_tar_file(f)
+                 corpus = {x["doc_id"]: x for x in corpus}
+             elif path == "data/covidfact/claims_train.jsonl":
+                 claims_train = self._read_tar_file(f)
+             elif path == "data/covidfact/claims_test.jsonl":
+                 claims_test = self._read_tar_file(f)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples.
+                 gen_kwargs={
+                     "claims": claims_train,
+                     "corpus": corpus,
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "claims": claims_test,
+                     "corpus": corpus,
+                     "split": "test",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, claims, corpus, split):
+         """Yields examples."""
+         # Loop over claims and put evidence together with each claim.
+         id_ = -1  # Will increment to 0 on first iteration.
+         for claim in claims:
+             evidence = {int(k): v for k, v in claim["evidence"].items()}
+             for cited_doc_id in claim["doc_ids"]:
+                 cited_doc = corpus[cited_doc_id]
+                 abstract_sents = [sent.strip() for sent in cited_doc["abstract"]]
+
+                 if cited_doc_id in evidence:
+                     this_evidence = evidence[cited_doc_id]
+                     # All entries for a document share one label; take the first.
+                     verdict = this_evidence[0]["label"]
+                     evidence_sents = flatten(
+                         [entry["sentences"] for entry in this_evidence]
+                     )
+                 else:
+                     verdict = "NEI"
+                     evidence_sents = []
+
+                 instance = {
+                     "claim_id": claim["id"],
+                     "claim": claim["claim"],
+                     "abstract_id": cited_doc_id,
+                     "title": cited_doc["title"],
+                     "abstract": abstract_sents,
+                     "verdict": verdict,
+                     "evidence": evidence_sents,
+                 }
+
+                 id_ += 1
+                 yield id_, instance
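The verdict/evidence-merging branch of `_generate_examples` can be exercised on its own. The sketch below re-implements just that branch with toy evidence data (the document id and sentence indices are invented); `resolve` is a hypothetical helper name, not part of the script:

```python
def flatten(xss):
    # Same helper as in the loading script.
    return [x for xs in xss for x in xs]

# Toy evidence structure, keyed by cited document id (invented data).
evidence = {
    7: [
        {"label": "SUPPORT", "sentences": [0, 2]},
        {"label": "SUPPORT", "sentences": [4]},
    ]
}

def resolve(cited_doc_id, evidence):
    """Mirror of the verdict/evidence branch in _generate_examples."""
    if cited_doc_id in evidence:
        this_evidence = evidence[cited_doc_id]
        # All entries for a document share one label; take the first.
        verdict = this_evidence[0]["label"]
        evidence_sents = flatten([entry["sentences"] for entry in this_evidence])
    else:
        # No annotated evidence for this claim/abstract pair.
        verdict = "NEI"
        evidence_sents = []
    return verdict, evidence_sents

print(resolve(7, evidence))  # ('SUPPORT', [0, 2, 4])
print(resolve(9, evidence))  # ('NEI', [])
```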