Tasks: Text Classification
Sub-tasks: fact-checking
Languages: English
Multilinguality: monolingual
Language Creators: found
Annotations Creators: expert-generated
Source Datasets: original
Commit • 07ec232
Parent(s):
Update files from the datasets library (from 1.2.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
- .gitattributes +27 -0
- README.md +162 -0
- datacommons_factcheck.py +122 -0
- dummy/fctchk_politifact_wapo/1.0.0/dummy_data.zip +3 -0
- dummy/weekly_standard/1.0.0/dummy_data.zip +3 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
```
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
```
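These gitattributes globs route matching files to Git LFS. As a rough illustration (not exact gitattributes semantics — `**` and path handling follow different rules), Python's `fnmatch` can show which bare file names the patterns would catch:

```python
from fnmatch import fnmatch

# A subset of the LFS patterns declared in the .gitattributes above
LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.h5", "*.zip", "*tfevents*"]

def routed_to_lfs(filename: str) -> bool:
    """Rough check: does any declared glob match this bare file name?"""
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)
```

For example, `dummy_data.zip` matches `*.zip`, while `README.md` matches none of the patterns and stays a regular Git blob.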
README.md
ADDED
@@ -0,0 +1,162 @@
---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- en
licenses:
- cc-by-nc-4-0
multilinguality:
- monolingual
size_categories:
  fctchk_politifact_wapo:
  - 1K<n<10K
  weekly_standard:
  - n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
---
# Dataset Card for DataCommons Fact Checked claims

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Data Commons fact checking FAQ](https://datacommons.org/factcheck/faq)

### Dataset Summary

A dataset of fact-checked claims by news media, maintained by [datacommons.org](https://datacommons.org/), containing each claim, its author, and the judgment, as well as the URL of the full explanation by the original fact-checker.

The fact checking is done by [FactCheck.org](https://www.factcheck.org/), [PolitiFact](https://www.politifact.com/), and [The Washington Post](https://www.washingtonpost.com/).

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The data is in English (`en`).

## Dataset Structure

### Data Instances

An example of a fact-checking instance looks as follows:
```
{'claim_author_name': 'Facebook posts',
 'claim_date': '2019-01-01',
 'claim_text': 'Quotes Michelle Obama as saying, "White folks are what’s wrong with America."',
 'review_date': '2019-01-03',
 'review_rating': 'Pants on Fire',
 'review_url': 'https://www.politifact.com/facebook-fact-checks/statements/2019/jan/03/facebook-posts/did-michelle-obama-once-say-white-folks-are-whats-/',
 'reviewer_name': 'PolitiFact'}
```

### Data Fields

A data instance has the following fields:
- `review_date`: the day the fact checking report was posted. Missing values are replaced with empty strings
- `review_url`: URL for the full fact checking report
- `reviewer_name`: the name of the fact checking service
- `claim_text`: the full text of the claim being reviewed
- `claim_author_name`: the author of the claim being reviewed. Missing values are replaced with empty strings
- `claim_date`: the date of the claim. Missing values are replaced with empty strings
- `review_rating`: the judgment of the fact checker (under `alternateName`; names vary by fact checker)

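As a quick illustration of working with these fields, records can be tallied by reviewer and rating; the instances below use the documented field names but are illustrative, not real dataset rows:

```python
from collections import Counter

# Illustrative records with the documented fields (not real dataset rows)
instances = [
    {"reviewer_name": "PolitiFact", "review_rating": "Pants on Fire"},
    {"reviewer_name": "PolitiFact", "review_rating": "False"},
    {"reviewer_name": "FactCheck.org", "review_rating": "False"},
]

# Tally (reviewer, rating) pairs across the records
rating_counts = Counter(
    (ex["reviewer_name"], ex["review_rating"]) for ex in instances
)
```

Since each reviewer uses its own rating vocabulary, grouping by the `(reviewer_name, review_rating)` pair avoids mixing scales.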
### Data Splits

No splits are provided. There are a total of 5,632 fact-checked claims.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

The fact checking is done by [FactCheck.org](https://www.factcheck.org/), [PolitiFact](https://www.politifact.com/), [The Washington Post](https://www.washingtonpost.com/), and [The Weekly Standard](https://www.weeklystandard.com/).

- [FactCheck.org](https://www.factcheck.org/) describes itself as "a nonpartisan, nonprofit 'consumer advocate' for voters that aims to reduce the level of deception and confusion in U.S. politics." It was founded by journalists Kathleen Hall Jamieson and Brooks Jackson and is currently directed by Eugene Kiely.
- [PolitiFact](https://www.politifact.com/) describes its ethics as "seeking to present the true facts, unaffected by agenda or biases, [with] journalists setting their own opinions aside." It was started in August 2007 by Times Washington Bureau Chief Bill Adair. The organization was acquired in February 2018 by the Poynter Institute, a non-profit journalism education and news media research center that also owns the Tampa Bay Times.
- [The Washington Post](https://www.washingtonpost.com/) is a newspaper considered to be near the center of the American political spectrum. In 2013, Amazon.com founder Jeff Bezos bought the newspaper and affiliated publications.

The original data source also contains 132 items reviewed by [The Weekly Standard](https://www.weeklystandard.com/), a neo-conservative American publication. It is the most politically loaded source of the group: it was originally a vocal critic of the practice of fact-checking and has historically taken stances [close to the American right](https://en.wikipedia.org/wiki/The_Weekly_Standard#Support_of_the_invasion_of_Iraq). It also had to admit responsibility for baseless accusations against a well-known author in a public [libel case](https://en.wikipedia.org/wiki/The_Weekly_Standard#Libel_case). The fact-checked items from this source can be found in the `weekly_standard` configuration, but they should be used only with full understanding of this context.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

See the section above describing the [fact checking organizations](#who-are-the-annotators).

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

This fact checking dataset is maintained by [datacommons.org](https://datacommons.org/), a Google initiative.

### Licensing Information

All fact checked items are released under a `CC-BY-NC-4.0` license.

### Citation Information

[More Information Needed]
datacommons_factcheck.py
ADDED
@@ -0,0 +1,122 @@
```python
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""DataCommons Fact Checked claims"""

from __future__ import absolute_import, division, print_function

import json

import datasets


# TODO: Add BibTeX citation
# Find for instance the citation on arxiv or on the dataset repo/website
_CITATION = """\
@InProceedings{huggingface:dataset,
title = {Data Commons 2019 Fact Checks},
authors={datacommons.org},
year={2019}
}
"""

# TODO: Add description of the dataset here
# You can copy an official description
_DESCRIPTION = """\
A dataset of fact checked claims by news media maintained by datacommons.org
"""

_HOMEPAGE = "https://datacommons.org/factcheck/faq"

_LICENSE = "CC-BY-NC-4.0"

_URL = "https://datacommons.org/data/factcheck/fact_checks_20190605.txt.gz"


class DatacommonsFactcheck(datasets.GeneratorBasedBuilder):
    """DataCommons Fact Checked claims"""

    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="fctchk_politifact_wapo", version=VERSION, description="The 06/05/2019 version of the dataset"
        ),
        datasets.BuilderConfig(
            name="weekly_standard",
            version=VERSION,
            description="Includes Weekly Standard fact checked claims. See the README for concerns about these data items.",
        ),
    ]

    DEFAULT_CONFIG_NAME = (
        "fctchk_politifact_wapo"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
    )

    def _info(self):
        features = datasets.Features(
            {
                "reviewer_name": datasets.Value("string"),
                "claim_text": datasets.Value("string"),
                "review_date": datasets.Value("string"),
                "review_url": datasets.Value("string"),
                "review_rating": datasets.Value("string"),
                "claim_author_name": datasets.Value("string"),
                "claim_date": datasets.Value("string"),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        file_path = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": file_path,
                },
            ),
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            id_ = -1
            for row in f:
                # Slice off the fixed-width markup wrapping each JSON record
                data = json.loads(row.strip()[35:-9])
                res = {
                    "reviewer_name": data["author"]["name"],
                    "claim_text": data["claimReviewed"],
                    "review_date": data.get("datePublished", ""),
                    "review_url": data["url"],
                    "review_rating": data["reviewRating"]["alternateName"],
                    "claim_author_name": data["itemReviewed"]["author"].get("name", ""),
                    "claim_date": data["itemReviewed"].get("datePublished", ""),
                }
                if self.config.name == "weekly_standard":
                    if data["author"]["name"] == "The Weekly Standard":
                        id_ += 1
                        yield id_, res
                else:
                    if data["author"]["name"] != "The Weekly Standard":
                        id_ += 1
                        yield id_, res
```
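The `row.strip()[35:-9]` slice in `_generate_examples` suggests each input line embeds one JSON record inside fixed-width markup: `<script type="application/ld+json">` is exactly 35 characters and `</script>` is 9. A sketch of that assumption, using a made-up record:

```python
import json

# Hypothetical input line: one record wrapped in a JSON-LD <script> tag,
# as the fixed [35:-9] slice in the loading script implies
line = '<script type="application/ld+json">{"claimReviewed": "example claim"}</script>'

prefix = '<script type="application/ld+json">'
suffix = "</script>"
assert len(prefix) == 35 and len(suffix) == 9  # matches the slice bounds

data = json.loads(line.strip()[35:-9])
```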
dummy/fctchk_politifact_wapo/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:a56d7009695b22318ddbfc97953ab0ebceadfd86fe1f4c0253218c93a73efb00
size 1039
```
dummy/weekly_standard/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:98f3b61a4b93bd5cba3eddaca121bd6dca77b2f03727fcf60c01aa99c8fd8e38
size 978
```
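The dummy archives above are stored as Git LFS pointer files: three `key value` lines giving the spec version, the content hash, and the byte size. A minimal parser for that pointer format:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The first dummy_data.zip pointer shown above
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:a56d7009695b22318ddbfc97953ab0ebceadfd86fe1f4c0253218c93a73efb00\n"
    "size 1039"
)

info = parse_lfs_pointer(pointer)
```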