system HF staff committed on
Commit 3c6636f
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,181 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - bg
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - fact-checking
+ ---
+
+ # Dataset Card for Clickbait/Fake News in Bulgarian
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Data Science Society / Case Fake News](https://gitlab.com/datasciencesociety/case_fake_news)
+ - **Repository:** [Data Science Society / Case Fake News / Data](https://gitlab.com/datasciencesociety/case_fake_news/-/tree/master/data)
+ - **Paper:** [This paper uses the dataset.](https://www.acl-bg.org/proceedings/2017/RANLP%202017/pdf/RANLP045.pdf)
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ This is a corpus of Bulgarian news, collected over a fixed period of time, whose factuality has been questioned. The articles come from 377 different sources and cover various domains, including politics, interesting facts, and tips & tricks.
+
+ The dataset was prepared for the Hack the Fake News hackathon. It was provided by the [Bulgarian Association of PR Agencies](http://www.bapra.bg/) and is available on [GitLab](https://gitlab.com/datasciencesociety/).
+
+ The corpus was collected automatically and then annotated by students of journalism.
+
+ The training dataset contains 2,815 examples, of which 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits. There are 761 testing examples.
+
+ There is a 98% correlation between fake news and click-baits.
+
+ One important aspect of the training dataset is that it contains many repetitions. This should not be surprising, as it attempts to represent the natural distribution of factual vs. fake news online over a period of time. Since publishers of fake news often run a group of websites that feature the same deceiving content, some repetition is to be expected. In particular, the training dataset contains 434 unique articles with duplicates. These articles have three reposts each on average, with the most reposted article appearing 45 times. Taking the labels of the reposted articles into account, an article that is reposted is more likely to be fake news: 1,018 fake news articles have a duplicate in the training dataset, whereas only 322 articles with genuine content do.
+
+ (The dataset description is from the following [paper](https://www.acl-bg.org/proceedings/2017/RANLP%202017/pdf/RANLP045.pdf).)
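+
+ As a rough illustration, the repetition statistics above can be recomputed from the training split. This is a minimal sketch, assuming the dataset is loaded through the `datasets` library and converted to a pandas DataFrame (field names as documented in the Data Fields section below):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the training split and move it into pandas for grouping.
+ train = load_dataset("clickbait_news_bg", split="train").to_pandas()
+
+ # Articles whose body text occurs more than once in the split.
+ counts = train.groupby("content").size()
+ dups = counts[counts > 1]
+ print(len(dups))   # unique articles that have duplicates (the card says 434)
+ print(dups.max())  # repost count of the most reposted article (the card says 45)
+
+ # Label distribution over all duplicated rows; `fake_news_score` is a
+ # ClassLabel, so values are integer indices into ["legitimate", "fake"].
+ dup_rows = train[train.duplicated("content", keep=False)]
+ print(dup_rows["fake_news_score"].value_counts())
+ ```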
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ Bulgarian
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ Each entry in the dataset consists of the following elements:
+
+ * `fake_news_score` - a label indicating whether the article is fake news
+
+ * `click_bait_score` - a label indicating whether the article is a click-bait
+
+ * `content_title` - the article heading
+
+ * `content_url` - the URL of the original article
+
+ * `content_published_time` - the date of publication
+
+ * `content` - the article content
+
+
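+ For example, a minimal sketch of inspecting these fields through the `datasets` library (both score fields are `ClassLabel` features, so they are stored as integer indices):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("clickbait_news_bg")
+ example = dataset["train"][0]
+
+ # ClassLabel features map the integer values to human-readable names.
+ features = dataset["train"].features
+ print(features["fake_news_score"].names)   # ['legitimate', 'fake']
+ print(features["click_bait_score"].names)  # ['normal', 'clickbait']
+ print(example["content_title"], example["content_published_time"])
+ ```
+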
+ ### Data Splits
+
+ The **training dataset** contains 2,815 examples, of which 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits.
+
+ The **validation dataset** contains 761 examples.
+
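+ The split sizes can be confirmed once the dataset is loaded; a small sketch:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("clickbait_news_bg")
+ # Expected per this card: {'train': 2815, 'validation': 761}
+ print({name: ds.num_rows for name, ds in dataset.items()})
+ ```
+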
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
clickbait_news_bg.py ADDED
@@ -0,0 +1,120 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """Dataset with clickbait and fake news in Bulgarian."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import openpyxl  # noqa: F401; the optional pandas dependency required for reading xlsx files
+ import pandas as pd
+
+ import datasets
+
+
+ _CITATION = """\
+ @InProceedings{clickbait_news_bg,
+ title = {Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017.},
+ authors={Data Science Society},
+ year={2017},
+ url={https://gitlab.com/datasciencesociety/case_fake_news/}
+ }
+ """
+
+ _DESCRIPTION = """\
+ Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017.
+ """
+
+ _HOMEPAGE = "https://gitlab.com/datasciencesociety/case_fake_news/"
+
+ # TODO: Add the licence for the dataset here if you can find it
+ _LICENSE = ""
+
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files.
+ # This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method).
+ _URLs = {
+     "default_train": "https://gitlab.com/datasciencesociety/case_fake_news/-/raw/master/data/FN_Training_Set.xlsx",
+     "default_validation": "https://gitlab.com/datasciencesociety/case_fake_news/-/raw/master/data/FN_Validation_Set.xlsx",
+ }
+
+
+ class ClickbaitNewsBG(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.1.0")
+     DEFAULT_CONFIG_NAME = "default"
+
+     def _info(self):
+         if self.config.name == "default":
+             features = datasets.Features(
+                 {
+                     "fake_news_score": datasets.features.ClassLabel(names=["legitimate", "fake"]),
+                     "click_bait_score": datasets.features.ClassLabel(names=["normal", "clickbait"]),
+                     "content_title": datasets.Value("string"),
+                     "content_url": datasets.Value("string"),
+                     "content_published_time": datasets.Value("string"),
+                     "content": datasets.Value("string"),
+                 }
+             )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = dl_manager.download(_URLs)
+
+         return [
+             datasets.SplitGenerator(
+                 name=spl_enum,
+                 gen_kwargs={
+                     "filepath": data_dir[f"{self.config.name}_{spl}"],
+                     "split": spl,
+                 },
+             )
+             for spl, spl_enum in [
+                 ("train", datasets.Split.TRAIN),
+                 ("validation", datasets.Split.VALIDATION),
+             ]
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Yields examples."""
+         keys = [
+             "fake_news_score",
+             "click_bait_score",
+             "content_title",
+             "content_url",
+             "content_published_time",
+             "content",
+         ]
+         with open(filepath, "rb") as f:
+             data = pd.read_excel(f, engine="openpyxl")
+             for id_, row in enumerate(data.itertuples()):
+                 row_dict = dict()
+                 # Spreadsheet columns appear in the same order as `keys`; row[0] is the index.
+                 for key, value in zip(keys, row[1:]):
+                     # In the source spreadsheets, a value of 1 marks the benign class.
+                     if key == "fake_news_score":
+                         row_dict[key] = "legitimate" if value == 1 else "fake"
+                     elif key == "click_bait_score":
+                         row_dict[key] = "normal" if value == 1 else "clickbait"
+                     else:
+                         row_dict[key] = str(value)
+                 yield id_, row_dict
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017.\n", "citation": "@InProceedings{clickbait_news_bg,\ntitle = {Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017.},\nauthors={Data Science Society},\nyear={2017},\nurl={https://gitlab.com/datasciencesociety/case_fake_news/}\n}\n", "homepage": "https://gitlab.com/datasciencesociety/case_fake_news/", "license": "", "features": {"fake_news_score": {"num_classes": 2, "names": ["legitimate", "fake"], "names_file": null, "id": null, "_type": "ClassLabel"}, "click_bait_score": {"num_classes": 2, "names": ["normal", "clickbait"], "names_file": null, "id": null, "_type": "ClassLabel"}, "content_title": {"dtype": "string", "id": null, "_type": "Value"}, "content_url": {"dtype": "string", "id": null, "_type": "Value"}, "content_published_time": {"dtype": "string", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "clickbait_news_bg", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 24480402, "num_examples": 2815, "dataset_name": "clickbait_news_bg"}, "validation": {"name": "validation", "num_bytes": 6752242, "num_examples": 761, "dataset_name": "clickbait_news_bg"}}, "download_checksums": {"https://gitlab.com/datasciencesociety/case_fake_news/-/raw/master/data/FN_Training_Set.xlsx": {"num_bytes": 6543801, "checksum": "ffb10237c03f06f73b65a63e3eb507ab8683d5bb71b35233b3f7703ff3a60c7e"}, "https://gitlab.com/datasciencesociety/case_fake_news/-/raw/master/data/FN_Validation_Set.xlsx": {"num_bytes": 2025774, "checksum": "56207dfa58f9b3eb20444a919e739321768cd37ad7e53612658a80f483eb003a"}}, "download_size": 8569575, "post_processing_size": null, "dataset_size": 31232644, "size_in_bytes": 39802219}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e8413d5cfee4f9bbfd2288e06c5ca72868f06d98467fe105ec361588bc1e987
+ size 33067