Datasets: clickbait_news_bg
Tasks: Text Classification
Modalities: Text
Formats: parquet
Sub-tasks: fact-checking
Languages: Bulgarian
Size: 1K - 10K
License: unknown
parquet-converter committed on
Commit • 20c7143
1 Parent(s): fdd92fa
Update parquet files
Browse files
- .gitattributes +0 -27
- README.md +0 -219
- clickbait_news_bg.py +0 -119
- dataset_infos.json +0 -1
- default/clickbait_news_bg-train.parquet +3 -0
- default/clickbait_news_bg-validation.parquet +3 -0
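This commit replaces the Excel-backed loading script and dataset card with pre-converted Parquet splits under `default/`. A minimal sketch of reading those splits with the `datasets` Parquet builder, assuming local copies of the two files added below:

from datasets import load_dataset

# Hypothetical local paths mirroring the files added in this commit.
data_files = {
    "train": "default/clickbait_news_bg-train.parquet",
    "validation": "default/clickbait_news_bg-validation.parquet",
}
ds = load_dataset("parquet", data_files=data_files)
print(ds)                        # expect 2,815 train and 761 validation examples
print(ds["train"].column_names)  # fake_news_score, click_bait_score, content_title, ...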
.gitattributes DELETED
@@ -1,27 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bin.* filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zstandard filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
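The deleted .gitattributes routed large binary formats through Git LFS. A rough illustration of how such glob patterns decide which paths end up stored as LFS pointers; this uses plain fnmatch globbing as an approximation of Git's attribute matching, and the sample paths are made up:

from fnmatch import fnmatch

# Subset of the removed patterns, for illustration only.
lfs_patterns = ["*.7z", "*.parquet", "*.h5", "*.zip", "*tfevents*"]

def tracked_by_lfs(path: str) -> bool:
    return any(fnmatch(path, pattern) for pattern in lfs_patterns)

print(tracked_by_lfs("default/clickbait_news_bg-train.parquet"))  # True
print(tracked_by_lfs("README.md"))                                # False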
README.md DELETED
@@ -1,219 +0,0 @@
----
-annotations_creators:
-- expert-generated
-language_creators:
-- expert-generated
-language:
-- bg
-license:
-- unknown
-multilinguality:
-- monolingual
-size_categories:
-- 1K<n<10K
-source_datasets:
-- original
-task_categories:
-- text-classification
-task_ids:
-- fact-checking
-paperswithcode_id: null
-pretty_name: Clickbait/Fake News in Bulgarian
-dataset_info:
-  features:
-  - name: fake_news_score
-    dtype:
-      class_label:
-        names:
-          0: legitimate
-          1: fake
-  - name: click_bait_score
-    dtype:
-      class_label:
-        names:
-          0: normal
-          1: clickbait
-  - name: content_title
-    dtype: string
-  - name: content_url
-    dtype: string
-  - name: content_published_time
-    dtype: string
-  - name: content
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 24480402
-    num_examples: 2815
-  - name: validation
-    num_bytes: 6752242
-    num_examples: 761
-  download_size: 8569575
-  dataset_size: 31232644
----
-
-# Dataset Card for Clickbait/Fake News in Bulgarian
-
-## Table of Contents
-- [Dataset Description](#dataset-description)
-  - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-  - [Languages](#languages)
-- [Dataset Structure](#dataset-structure)
-  - [Data Instances](#data-instances)
-  - [Data Fields](#data-fields)
-  - [Data Splits](#data-splits)
-- [Dataset Creation](#dataset-creation)
-  - [Curation Rationale](#curation-rationale)
-  - [Source Data](#source-data)
-  - [Annotations](#annotations)
-  - [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-  - [Social Impact of Dataset](#social-impact-of-dataset)
-  - [Discussion of Biases](#discussion-of-biases)
-  - [Other Known Limitations](#other-known-limitations)
-- [Additional Information](#additional-information)
-  - [Dataset Curators](#dataset-curators)
-  - [Licensing Information](#licensing-information)
-  - [Citation Information](#citation-information)
-  - [Contributions](#contributions)
-
-## Dataset Description
-
-- **Homepage:** [Data Science Society / Case Fake News](https://gitlab.com/datasciencesociety/case_fake_news)
-- **Repository:** [Data Science Society / Case Fake News / Data](https://gitlab.com/datasciencesociety/case_fake_news/-/tree/master/data)
-- **Paper:** [This paper uses the dataset.](https://www.acl-bg.org/proceedings/2017/RANLP%202017/pdf/RANLP045.pdf)
-- **Leaderboard:**
-- **Point of Contact:**
-
-### Dataset Summary
-
-This is a corpus of Bulgarian news over a fixed period of time, whose factuality has been questioned.
-The news comes from 377 different sources from various domains, including politics, interesting facts, and tips & tricks.
-
-The dataset was prepared for the Hack the Fake News hackathon. It was provided by the
-[Bulgarian Association of PR Agencies](http://www.bapra.bg/) and is
-available in [Gitlab](https://gitlab.com/datasciencesociety/).
-
-The corpus was automatically collected and then annotated by students of journalism.
-
-The training dataset contains 2,815 examples, of which 1,940 (i.e., 69%) are fake news
-and 1,968 (i.e., 70%) are click-baits; there are 761 validation examples.
-
-There is a 98% correlation between fake news and clickbaits.
-
-One important aspect of the training dataset is that it contains many repetitions.
-This should not be surprising, as it attempts to represent the natural distribution of factual
-vs. fake news online over a period of time. As publishers of fake news often have a group of
-websites that feature the same deceiving content, we should expect some repetition.
-In particular, the training dataset contains 434 unique articles with duplicates.
-These articles have three reposts each on average, with the most reposted article appearing 45 times.
-If we take into account the labels of the reposted articles, we can see that if an article
-is reposted, it is more likely to be fake news.
-The number of fake news articles that have a duplicate in the training dataset is 1,018, whereas
-the number of articles with genuine content that have a duplicate in the training set is 322.
-
-(The dataset description is from the following [paper](https://www.acl-bg.org/proceedings/2017/RANLP%202017/pdf/RANLP045.pdf).)
-
-### Supported Tasks and Leaderboards
-
-[More Information Needed]
-
-### Languages
-
-Bulgarian
-
-## Dataset Structure
-
-### Data Instances
-
-[More Information Needed]
-
-### Data Fields
-
-Each entry in the dataset consists of the following elements:
-
-* `fake_news_score` - a label indicating whether the article is fake or not
-
-* `click_bait_score` - another label indicating whether it is a click-bait
-
-* `content_title` - article heading
-
-* `content_url` - URL of the original article
-
-* `content_published_time` - date of publication
-
-* `content` - article content
-
-### Data Splits
-
-The **training dataset** contains 2,815 examples, of which 1,940 (i.e., 69%) are fake news
-and 1,968 (i.e., 70%) are click-baits.
-
-The **validation dataset** contains 761 examples.
-
-## Dataset Creation
-
-### Curation Rationale
-
-[More Information Needed]
-
-### Source Data
-
-#### Initial Data Collection and Normalization
-
-[More Information Needed]
-
-#### Who are the source language producers?
-
-[More Information Needed]
-
-### Annotations
-
-#### Annotation process
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-[More Information Needed]
-
-### Personal and Sensitive Information
-
-[More Information Needed]
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[More Information Needed]
-
-### Discussion of Biases
-
-[More Information Needed]
-
-### Other Known Limitations
-
-[More Information Needed]
-
-## Additional Information
-
-### Dataset Curators
-
-[More Information Needed]
-
-### Licensing Information
-
-[More Information Needed]
-
-### Citation Information
-
-[More Information Needed]
-
-### Contributions
-
-Thanks to [@tsvm](https://github.com/tsvm), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
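The Data Fields section of the deleted card maps two integer-coded labels to human-readable names. A small sketch of that schema using `datasets` feature types, assuming the field names and class orders given in the card:

from datasets import ClassLabel, Features, Value

features = Features(
    {
        "fake_news_score": ClassLabel(names=["legitimate", "fake"]),
        "click_bait_score": ClassLabel(names=["normal", "clickbait"]),
        "content_title": Value("string"),
        "content_url": Value("string"),
        "content_published_time": Value("string"),
        "content": Value("string"),
    }
)

# Integer labels stored in the data map back to the names from the card.
print(features["fake_news_score"].int2str(0))   # "legitimate"
print(features["click_bait_score"].int2str(1))  # "clickbait"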
clickbait_news_bg.py DELETED
@@ -1,119 +0,0 @@
-# coding=utf-8
-# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-""" Dataset with clickbait and fake news in Bulgarian. """
-
-
-import openpyxl  # noqa: requires this pandas optional dependency for reading xlsx files
-import pandas as pd
-
-import datasets
-
-
-_CITATION = """\
-@InProceedings{clickbait_news_bg,
-title = {Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017.},
-authors={Data Science Society},
-year={2017},
-url={https://gitlab.com/datasciencesociety/case_fake_news/}
-}
-"""
-
-# TODO: Add description of the dataset here
-# You can copy an official description
-_DESCRIPTION = """\
-Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017.
-"""
-
-# TODO: Add a link to an official homepage for the dataset here
-_HOMEPAGE = "https://gitlab.com/datasciencesociety/case_fake_news/"
-
-# TODO: Add the licence for the dataset here if you can find it
-_LICENSE = ""
-
-# TODO: Add link to the official dataset URLs here
-# The HuggingFace dataset library doesn't host the datasets but only points to the original files
-# This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
-_URLs = {
-    "default_train": "https://gitlab.com/datasciencesociety/case_fake_news/-/raw/master/data/FN_Training_Set.xlsx",
-    "default_validation": "https://gitlab.com/datasciencesociety/case_fake_news/-/raw/master/data/FN_Validation_Set.xlsx",
-}
-
-
-class ClickbaitNewsBG(datasets.GeneratorBasedBuilder):
-    VERSION = datasets.Version("1.1.0")
-    DEFAULT_CONFIG_NAME = "default"
-
-    def _info(self):
-        if self.config.name == "default":
-            features = datasets.Features(
-                {
-                    "fake_news_score": datasets.features.ClassLabel(names=["legitimate", "fake"]),
-                    "click_bait_score": datasets.features.ClassLabel(names=["normal", "clickbait"]),
-                    "content_title": datasets.Value("string"),
-                    "content_url": datasets.Value("string"),
-                    "content_published_time": datasets.Value("string"),
-                    "content": datasets.Value("string"),
-                }
-            )
-        return datasets.DatasetInfo(
-            description=_DESCRIPTION,
-            features=features,
-            supervised_keys=None,
-            homepage=_HOMEPAGE,
-            license=_LICENSE,
-            citation=_CITATION,
-        )
-
-    def _split_generators(self, dl_manager):
-        """Returns SplitGenerators."""
-        data_dir = dl_manager.download(_URLs)
-
-        return [
-            datasets.SplitGenerator(
-                name=spl_enum,
-                gen_kwargs={
-                    "filepath": data_dir[f"{self.config.name}_{spl}"],
-                    "split": spl,
-                },
-            )
-            for spl, spl_enum in [
-                ("train", datasets.Split.TRAIN),
-                ("validation", datasets.Split.VALIDATION),
-            ]
-        ]
-
-    def _generate_examples(self, filepath, split):
-        """Yields examples."""
-        keys = [
-            "fake_news_score",
-            "click_bait_score",
-            "content_title",
-            "content_url",
-            "content_published_time",
-            "content",
-        ]
-        with open(filepath, "rb") as f:
-            data = pd.read_excel(f, engine="openpyxl")
-            for id_, row in enumerate(data.itertuples()):
-                row_dict = dict()
-                for key, value in zip(keys, row[1:]):
-                    if key == "fake_news_score":
-                        row_dict[key] = "legitimate" if value == 1 else "fake"
-                    elif key == "click_bait_score":
-                        row_dict[key] = "normal" if value == 1 else "clickbait"
-                    else:
-                        row_dict[key] = str(value)
-                yield id_, row_dict
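The deleted loader decodes the two score columns of each Excel row with a simple rule: a value of 1 means the benign class, anything else the problematic one. A standalone restatement of that mapping, assuming the same integer coding as the original spreadsheets:

def decode_labels(fake_news_score, click_bait_score):
    # Mirrors the branches in _generate_examples above.
    return {
        "fake_news_score": "legitimate" if fake_news_score == 1 else "fake",
        "click_bait_score": "normal" if click_bait_score == 1 else "clickbait",
    }

print(decode_labels(1, 2))  # {'fake_news_score': 'legitimate', 'click_bait_score': 'clickbait'}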
dataset_infos.json DELETED
@@ -1 +0,0 @@
-{"default": {"description": "Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017.\n", "citation": "@InProceedings{clickbait_news_bg,\ntitle = {Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017.},\nauthors={Data Science Society},\nyear={2017},\nurl={https://gitlab.com/datasciencesociety/case_fake_news/}\n}\n", "homepage": "https://gitlab.com/datasciencesociety/case_fake_news/", "license": "", "features": {"fake_news_score": {"num_classes": 2, "names": ["legitimate", "fake"], "names_file": null, "id": null, "_type": "ClassLabel"}, "click_bait_score": {"num_classes": 2, "names": ["normal", "clickbait"], "names_file": null, "id": null, "_type": "ClassLabel"}, "content_title": {"dtype": "string", "id": null, "_type": "Value"}, "content_url": {"dtype": "string", "id": null, "_type": "Value"}, "content_published_time": {"dtype": "string", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "clickbait_news_bg", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 24480402, "num_examples": 2815, "dataset_name": "clickbait_news_bg"}, "validation": {"name": "validation", "num_bytes": 6752242, "num_examples": 761, "dataset_name": "clickbait_news_bg"}}, "download_checksums": {"https://gitlab.com/datasciencesociety/case_fake_news/-/raw/master/data/FN_Training_Set.xlsx": {"num_bytes": 6543801, "checksum": "ffb10237c03f06f73b65a63e3eb507ab8683d5bb71b35233b3f7703ff3a60c7e"}, "https://gitlab.com/datasciencesociety/case_fake_news/-/raw/master/data/FN_Validation_Set.xlsx": {"num_bytes": 2025774, "checksum": "56207dfa58f9b3eb20444a919e739321768cd37ad7e53612658a80f483eb003a"}}, "download_size": 8569575, "post_processing_size": null, "dataset_size": 31232644, "size_in_bytes": 39802219}}
default/clickbait_news_bg-train.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:549a0ee7c5e8f84bc674e15461f243d1756db59be3cb69f7e5f81c0c6f10683f
+size 9021962
default/clickbait_news_bg-validation.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c908bcff31135131fe61c80b93657deeda52dc89c7d0d608f996f2706735f6d
+size 2809101
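Each added Parquet file is stored as a three-line Git LFS pointer (spec version, sha256 oid, size in bytes) rather than the data itself. A small sketch that parses such a pointer; the sample text is the validation-split pointer added above:

def parse_lfs_pointer(text):
    # Each pointer line is "<key> <value>"; split once on the first space.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size_bytes": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:0c908bcff31135131fe61c80b93657deeda52dc89c7d0d608f996f2706735f6d
size 2809101
"""
print(parse_lfs_pointer(pointer))  # size_bytes == 2809101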