system HF staff committed on
Commit
5944b80
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
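Each line above follows the `.gitattributes` format: a path pattern followed by whitespace-separated attribute settings, which here route matching files through Git LFS. As a small illustrative sketch (the helper name `parse_gitattributes_line` is invented, not part of any library), one such line can be split into its pattern and attributes:

```python
def parse_gitattributes_line(line: str):
    """Split one .gitattributes line into (pattern, list of attribute settings)."""
    pattern, *attrs = line.split()
    return pattern, attrs
```

For example, `parse_gitattributes_line("*.7z filter=lfs diff=lfs merge=lfs -text")` yields the pattern `*.7z` and the four attribute settings that mark 7z archives as LFS-tracked binary content.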
README.md ADDED
@@ -0,0 +1,162 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n<1K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - multi-label-classification
+ ---
+
+ # Dataset Card for Fake News English
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+ - **Homepage:** https://dl.acm.org/doi/10.1145/3201064.3201100
+ - **Repository:** https://github.com/jgolbeck/fakenews/
+ - **Paper:** https://doi.org/10.1145/3201064.3201100
+ - **Leaderboard:**
+ - **Point of Contact:** Jennifer Golbeck (http://www.jengolbeck.com)
+
+ ### Dataset Summary
+ This dataset contains URLs of news articles classified as either fake or satire. The articles classified as fake also have the URL of a rebutting article.
+
+ ### Supported Tasks and Leaderboards
+ [More Information Needed]
+
+ ### Languages
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+ ```
+ {
+   "article_number": 102,
+   "url_of_article": "https://newslo.com/roger-stone-blames-obama-possibility-trump-alzheimers-attacks-president-caused-severe-stress/",
+   "fake_or_satire": 1,  # Fake
+   "url_of_rebutting_article": "https://www.snopes.com/fact-check/donald-trumps-intelligence-quotient/"
+ }
+ ```
+
+ ### Data Fields
+ - article_number: an integer used as an index for each row
+ - url_of_article: a string containing the URL of the article to be assessed and classified as either Fake or Satire
+ - fake_or_satire: a ClassLabel for the article, which takes one of two values: Fake (1) or Satire (0)
+ - url_of_rebutting_article: a string containing the URL of the article used to refute the article in question (the one in url_of_article)
+
+ ### Data Splits
+ This dataset is not split; only a single train split is available.
+
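The `fake_or_satire` field is a `ClassLabel` whose names, per the loader script, are `["Satire", "Fake"]`, so index 0 means Satire and index 1 means Fake. A minimal pure-Python sketch of that mapping (the helper names here are invented for illustration; the `datasets` library exposes the same conversions via `ClassLabel.int2str` and `ClassLabel.str2int`):

```python
# Label convention from the builder's ClassLabel(names=["Satire", "Fake"]):
# index 0 -> "Satire", index 1 -> "Fake"
LABEL_NAMES = ["Satire", "Fake"]

def label_to_name(idx: int) -> str:
    """Map an integer class index to its string name."""
    return LABEL_NAMES[idx]

def name_to_label(name: str) -> int:
    """Map a string class name back to its integer index."""
    return LABEL_NAMES.index(name)
```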
+ ## Dataset Creation
+
+ ### Curation Rationale
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+ - Golbeck, Jennifer
+ - Everett, Jennine
+ - Falak, Waleed
+ - Gieringer, Carl
+ - Graney, Jack
+ - Hoffman, Kelly
+ - Huth, Lindsay
+ - Ma, Zhenya
+ - Jha, Mayanka
+ - Khan, Misbah
+ - Kori, Varsha
+ - Mauriello, Matthew
+ - Lewis, Elo
+ - Mirano, George
+ - IV, William
+ - Mussenden, Sean
+ - Nelson, Tammie
+ - Mcwillie, Sean
+ - Pant, Akshat
+ - Cheakalos, Paul
+
+ ### Licensing Information
+ [More Information Needed]
+
+ ### Citation Information
+ @inproceedings{inproceedings,
+   author = {Golbeck, Jennifer and Everett, Jennine and Falak, Waleed and Gieringer, Carl and Graney, Jack and Hoffman, Kelly and Huth, Lindsay and Ma, Zhenya and Jha, Mayanka and Khan, Misbah and Kori, Varsha and Mauriello, Matthew and Lewis, Elo and Mirano, George and IV, William and Mussenden, Sean and Nelson, Tammie and Mcwillie, Sean and Pant, Akshat and Cheakalos, Paul},
+   year = {2018},
+   month = {05},
+   pages = {17-21},
+   title = {Fake News vs Satire: A Dataset and Analysis},
+   doi = {10.1145/3201064.3201100}
+ }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "\nFake news has become a major societal issue and a technical challenge for social media companies to identify. This content is difficult to identify because the term \"fake news\" covers intentionally false, deceptive stories as well as factual errors, satire, and sometimes, stories that a person just does not like. Addressing the problem requires clear definitions and examples. In this work, we present a dataset of fake news and satire stories that are hand coded, verified, and, in the case of fake news, include rebutting stories. We also include a thematic content analysis of the articles, identifying major themes that include hyperbolic support or condemnation of a figure, conspiracy theories, racist themes, and discrediting of reliable sources. In addition to releasing this dataset for research use, we analyze it and show results based on language that are promising for classification purposes. Overall, our contribution of a dataset and initial analysis are designed to support future work by fake news researchers.\n", "citation": "\n@inproceedings{inproceedings,\nauthor = {Golbeck, Jennifer and Everett, Jennine and Falak, Waleed and Gieringer, Carl and Graney, Jack and Hoffman, Kelly and Huth, Lindsay and Ma, Zhenya and Jha, Mayanka and Khan, Misbah and Kori, Varsha and Mauriello, Matthew and Lewis, Elo and Mirano, George and IV, William and Mussenden, Sean and Nelson, Tammie and Mcwillie, Sean and Pant, Akshat and Cheakalos, Paul},\nyear = {2018},\nmonth = {05},\npages = {17-21},\ntitle = {Fake News vs Satire: A Dataset and Analysis},\ndoi = {10.1145/3201064.3201100}\n}\n", "homepage": "https://dl.acm.org/doi/10.1145/3201064.3201100", "license": "", "features": {"article_number": {"dtype": "int32", "id": null, "_type": "Value"}, "url_of_article": {"dtype": "string", "id": null, "_type": "Value"}, "fake_or_satire": {"num_classes": 2, "names": ["Satire", "Fake"], "names_file": null, "id": null, "_type": "ClassLabel"}, "url_of_rebutting_article": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "fake_news_english", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 78078, "num_examples": 492, "dataset_name": "fake_news_english"}}, "download_checksums": {"https://github.com/jgolbeck/fakenews/raw/master/FakeNewsData.zip": {"num_bytes": 3002233, "checksum": "f423317ec13c5279bc5879240dca034ee9eae773eaa9c42e1fca8f8f88e37627"}}, "download_size": 3002233, "post_processing_size": null, "dataset_size": 78078, "size_in_bytes": 3080311}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bee5f01ec97eca7eb0cc8e50f387f6158c78aa5533414a1996c55b32c47528e4
+ size 6682
fake_news_english.py ADDED
@@ -0,0 +1,92 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Fake News vs Satire: A Dataset and Analysis."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import os
+
+ import openpyxl  # noqa: F401 -- pandas requires this optional dependency for reading xlsx files
+ import pandas as pd
+
+ import datasets
+
+
+ _CITATION = """
+ @inproceedings{inproceedings,
+ author = {Golbeck, Jennifer and Everett, Jennine and Falak, Waleed and Gieringer, Carl and Graney, Jack and Hoffman, Kelly and Huth, Lindsay and Ma, Zhenya and Jha, Mayanka and Khan, Misbah and Kori, Varsha and Mauriello, Matthew and Lewis, Elo and Mirano, George and IV, William and Mussenden, Sean and Nelson, Tammie and Mcwillie, Sean and Pant, Akshat and Cheakalos, Paul},
+ year = {2018},
+ month = {05},
+ pages = {17-21},
+ title = {Fake News vs Satire: A Dataset and Analysis},
+ doi = {10.1145/3201064.3201100}
+ }
+ """
+
+ _DESCRIPTION = """
+ Fake news has become a major societal issue and a technical challenge for social media companies to identify. This content is difficult to identify because the term "fake news" covers intentionally false, deceptive stories as well as factual errors, satire, and sometimes, stories that a person just does not like. Addressing the problem requires clear definitions and examples. In this work, we present a dataset of fake news and satire stories that are hand coded, verified, and, in the case of fake news, include rebutting stories. We also include a thematic content analysis of the articles, identifying major themes that include hyperbolic support or condemnation of a figure, conspiracy theories, racist themes, and discrediting of reliable sources. In addition to releasing this dataset for research use, we analyze it and show results based on language that are promising for classification purposes. Overall, our contribution of a dataset and initial analysis are designed to support future work by fake news researchers.
+ """
+
+ _HOMEPAGE = "https://dl.acm.org/doi/10.1145/3201064.3201100"
+
+ # _LICENSE = ""
+
+ _URLs = "https://github.com/jgolbeck/fakenews/raw/master/FakeNewsData.zip"
+
+
+
49
+ class FakeNewsEnglish(datasets.GeneratorBasedBuilder):
50
+ """Fake News vs Satire: A Dataset and Analysis"""
51
+
52
+ VERSION = datasets.Version("1.1.0")
53
+
54
+ def _info(self):
55
+ features = datasets.Features(
56
+ {
57
+ "article_number": datasets.Value("int32"),
58
+ "url_of_article": datasets.Value("string"),
59
+ "fake_or_satire": datasets.ClassLabel(names=["Satire", "Fake"]),
60
+ "url_of_rebutting_article": datasets.Value("string"),
61
+ }
62
+ )
63
+ return datasets.DatasetInfo(
64
+ description=_DESCRIPTION,
65
+ features=features,
66
+ supervised_keys=None,
67
+ homepage=_HOMEPAGE,
68
+ citation=_CITATION,
69
+ )
70
+
71
+ def _split_generators(self, dl_manager):
72
+ """Returns SplitGenerators."""
73
+ data_dir = dl_manager.download_and_extract(_URLs)
74
+ return [
75
+ datasets.SplitGenerator(
76
+ name=datasets.Split.TRAIN,
77
+ # These kwargs will be passed to _generate_examples
78
+ gen_kwargs={"filepath": os.path.join(data_dir, "FakeNewsData", "Fake News Stories.xlsx")},
79
+ )
80
+ ]
81
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+         with open(filepath, "rb") as f:
+             df = pd.read_excel(f, engine="openpyxl")
+             for id_, row in df.iterrows():
+                 yield id_, {
+                     "article_number": row["Article Number"],
+                     "url_of_article": str(row["URL of article"]),
+                     "fake_or_satire": str(row["Fake or Satire?"]),
+                     "url_of_rebutting_article": str(row["URL of rebutting article"]),
+                 }
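The row-to-example logic of `_generate_examples` can be exercised without the xlsx file by feeding it an in-memory frame with the same column headers. A minimal sketch (the two sample rows are invented; only the column names come from the script above):

```python
import pandas as pd

# Hypothetical rows mirroring the columns _generate_examples expects.
df = pd.DataFrame(
    {
        "Article Number": [1, 2],
        "URL of article": ["https://example.com/a", "https://example.com/b"],
        "Fake or Satire?": ["Fake", "Satire"],
        "URL of rebutting article": ["https://example.com/rebuttal", ""],
    }
)

def generate_examples(frame):
    """Same shape as the builder's _generate_examples, minus the file I/O."""
    for id_, row in frame.iterrows():
        yield id_, {
            "article_number": row["Article Number"],
            "url_of_article": str(row["URL of article"]),
            "fake_or_satire": str(row["Fake or Satire?"]),
            "url_of_rebutting_article": str(row["URL of rebutting article"]),
        }

examples = dict(generate_examples(df))
```

Each yielded key is the frame's row index, and the string label ("Fake" or "Satire") is later encoded by the `ClassLabel` feature declared in `_info`.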