Sub-tasks: fact-checking
Languages: English
Multilinguality: monolingual
Size Categories: 1K<n<10K
Language Creators: found
Annotations Creators: expert-generated
Source Datasets: original
License:
David Wadden committed
Commit 974bdde
1 Parent(s): de8e9db

HealthVer is ready.

Files changed (2)
  1. README.md +11 -15
  2. healthver_entailment.py +29 -20
README.md CHANGED
@@ -17,7 +17,7 @@ task_categories:
 - text-classification
 task_ids:
 - fact-checking
-pretty_name: CovidFact
+pretty_name: HealthVer
 dataset_info:
   features:
   - name: claim_id
@@ -36,17 +36,20 @@ dataset_info:
     sequence: int32
   splits:
   - name: train
-    num_bytes: 1547185
+    num_bytes: 9490482
+    num_examples: 5292
+  - name: validation
+    num_bytes: 1707997
     num_examples: 940
   - name: test
-    num_bytes: 523542
-    num_examples: 317
+    num_bytes: 1620257
+    num_examples: 903
   download_size: 3610222
-  dataset_size: 2070727
+  dataset_size: 12818736
 ---


-# Dataset Card for "covidfact_entailment"
+# Dataset Card for "healthver_entailment"

 ## Table of Contents

@@ -54,16 +57,15 @@ dataset_info:
   - [Dataset Summary](#dataset-summary)
 - [Dataset Structure](#dataset-structure)
   - [Data Fields](#data-fields)
-  - [Data Splits](#data-splits)

 ## Dataset Description

-- **Repository:** <https://github.com/asaakyan/covidfact>
+- **Repository:** <https://github.com/sarrouti/HealthVer>
 - **Point of Contact:** [David Wadden](mailto:davidw@allenai.org)

 ### Dataset Summary

-COVID-FACT is a dataset of claims about COVID-19. For this version of the dataset, we follow the preprocessing from the MultiVerS modeling paper https://github.com/dwadden/multivers, verifying claims against abstracts of scientific research articles. Entailment labels and rationales are included.
+HealthVer is a dataset of public health claims, verified against scientific research articles. For this version of the dataset, we follow the preprocessing from the MultiVerS modeling paper https://github.com/dwadden/multivers, verifying claims against full article abstracts rather than individual sentences. Entailment labels and rationales are included.

 ## Dataset Structure

@@ -76,9 +78,3 @@ COVID-FACT is a dataset of claims about COVID-19. For this version of the datase
 - `abstract`: A list of `strings`, one for each sentence in the abstract.
 - `verdict`: The fact-checking verdict, a `string`.
 - `evidence`: A list of sentences from the abstract which provide evidence for the verdict.
-
-### Data Splits
-
-| |train|validation|
-|------|----:|---------:|
-|claims| 919 | 340|
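For orientation, here is a minimal sketch (not part of this commit) of loading the splits described by the updated card with the `datasets` library. The Hub repository id below is an assumption, not something stated in the diff.

```python
# Minimal sketch, assuming the dataset is published on the Hub as
# "dwadden/healthver_entailment" (assumed id; substitute the real one if it differs).
from datasets import load_dataset

ds = load_dataset("dwadden/healthver_entailment")

# Splits declared in the updated dataset_info: train (5292), validation (940), test (903).
print({split: ds[split].num_rows for split in ds})

# Fields named in the card include claim_id, abstract, verdict, and evidence.
example = ds["train"][0]
print(example["verdict"], example["evidence"])
```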
 
 
 
 
 
 
healthver_entailment.py CHANGED
@@ -7,19 +7,18 @@ import json


 _CITATION = """\
-@article{Saakyan2021COVIDFactFE,
-  title={COVID-Fact: Fact Extraction and Verification of Real-World Claims on COVID-19 Pandemic},
-  author={Arkadiy Saakyan and Tuhin Chakrabarty and Smaranda Muresan},
-  journal={ArXiv},
-  year={2021},
-  volume={abs/2106.03794},
-  url={https://api.semanticscholar.org/CorpusID:235364036}
+@inproceedings{Sarrouti2021EvidencebasedFO,
+  title={Evidence-based Fact-Checking of Health-related Claims},
+  author={Mourad Sarrouti and Asma Ben Abacha and Yassine Mrabet and Dina Demner-Fushman},
+  booktitle={Conference on Empirical Methods in Natural Language Processing},
+  year={2021},
+  url={https://api.semanticscholar.org/CorpusID:244119074}
 }
 """


 _DESCRIPTION = """\
-COVID-FACT is a dataset of claims about COVID-19. For this version of the dataset, we follow the preprocessing from the MultiVerS modeling paper https://github.com/dwadden/multivers, verifying claims against abstracts of scientific research articles. Entailment labels and rationales are included.
+HealthVer is a dataset of public health claims, verified against scientific research articles. For this version of the dataset, we follow the preprocessing from the MultiVerS modeling paper https://github.com/dwadden/multivers, verifying claims against full article abstracts rather than individual sentences. Entailment labels and rationales are included.
 """

 _URL = "https://scifact.s3.us-west-2.amazonaws.com/longchecker/latest/data.tar.gz"
@@ -29,8 +28,8 @@ def flatten(xss):
     return [x for xs in xss for x in xs]


-class CovidFactEntailmentConfig(datasets.BuilderConfig):
-    """builderconfig for covidfact"""
+class HealthVerEntailmentConfig(datasets.BuilderConfig):
+    """builderconfig for healthver"""

     def __init__(self, **kwargs):
         """
@@ -38,19 +37,19 @@ class CovidFactEntailmentConfig(datasets.BuilderConfig):
         Args:
           **kwargs: keyword arguments forwarded to super.
         """
-        super(CovidFactEntailmentConfig, self).__init__(
+        super(HealthVerEntailmentConfig, self).__init__(
             version=datasets.Version("1.0.0", ""), **kwargs
         )


-class CovidFactEntailment(datasets.GeneratorBasedBuilder):
-    """TODO(covidfact): Short description of my dataset."""
+class HealthVerEntailment(datasets.GeneratorBasedBuilder):
+    """TODO(healthver): Short description of my dataset."""

-    # TODO(covidfact): Set up version.
+    # TODO(healthver): Set up version.
     VERSION = datasets.Version("0.1.0")

     def _info(self):
-        # TODO(covidfact): Specifies the datasets.DatasetInfo object
+        # TODO(healthver): Specifies the datasets.DatasetInfo object

         features = {
             "claim_id": datasets.Value("int32"),
@@ -75,7 +74,6 @@ class CovidFactEntailment(datasets.GeneratorBasedBuilder):
             # builder.as_dataset.
             supervised_keys=None,
             # Homepage of the dataset for documentation
-            homepage="https://scifact.apps.allenai.org/",
             citation=_CITATION,
         )

@@ -90,18 +88,20 @@ class CovidFactEntailment(datasets.GeneratorBasedBuilder):

     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
-        # TODO(scifact): Downloads the data and defines the splits
+        # TODO(healthver): Downloads the data and defines the splits
         # dl_manager is a datasets.download.DownloadManager that can be used to
         # download and extract URLs
         archive = dl_manager.download(_URL)
         for path, f in dl_manager.iter_archive(archive):
             # The claims are too similar to paper titles; don't include.
-            if path == "data/covidfact/corpus_without_titles.jsonl":
+            if path == "data/healthver/corpus.jsonl":
                 corpus = self._read_tar_file(f)
                 corpus = {x["doc_id"]: x for x in corpus}
-            elif path == "data/covidfact/claims_train.jsonl":
+            elif path == "data/healthver/claims_train.jsonl":
                 claims_train = self._read_tar_file(f)
-            elif path == "data/covidfact/claims_test.jsonl":
+            elif path == "data/healthver/claims_dev.jsonl":
+                claims_validation = self._read_tar_file(f)
+            elif path == "data/healthver/claims_test.jsonl":
                 claims_test = self._read_tar_file(f)

         return [
@@ -114,6 +114,15 @@ class CovidFactEntailment(datasets.GeneratorBasedBuilder):
                     "split": "train",
                 },
             ),
+            datasets.SplitGenerator(
+                name=datasets.Split.VALIDATION,
+                # These kwargs will be passed to _generate_examples
+                gen_kwargs={
+                    "claims": claims_validation,
+                    "corpus": corpus,
+                    "split": "validation",
+                },
+            ),
             datasets.SplitGenerator(
                 name=datasets.Split.TEST,
                 # These kwargs will be passed to _generate_examples
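To make the split wiring concrete, here is a standalone sketch (not part of the commit) that downloads the same archive referenced by `_URL` and reads the healthver files the way `_split_generators` routes them. The assumption that `_read_tar_file` parses one JSON object per line is mine; the helper is not shown in this diff.

```python
# Standalone sketch mirroring _split_generators: fetch the tarball and read the
# healthver JSONL files for each split. Record fields other than "doc_id" are
# not shown in the diff, so they are not assumed here.
import io
import json
import tarfile
import urllib.request

URL = "https://scifact.s3.us-west-2.amazonaws.com/longchecker/latest/data.tar.gz"

SPLIT_FILES = {
    "train": "data/healthver/claims_train.jsonl",
    "validation": "data/healthver/claims_dev.jsonl",  # split added in this commit
    "test": "data/healthver/claims_test.jsonl",
}


def read_jsonl(fileobj):
    """Parse one JSON object per line (assumed behavior of _read_tar_file)."""
    return [json.loads(line) for line in io.TextIOWrapper(fileobj, encoding="utf-8")]


archive_path, _ = urllib.request.urlretrieve(URL)
with tarfile.open(archive_path, "r:gz") as tar:
    corpus_rows = read_jsonl(tar.extractfile("data/healthver/corpus.jsonl"))
    corpus = {x["doc_id"]: x for x in corpus_rows}  # same keying as the builder
    claims = {split: read_jsonl(tar.extractfile(path)) for split, path in SPLIT_FILES.items()}

print(len(corpus), {split: len(rows) for split, rows in claims.items()})
```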