system HF staff committed on
Commit
d6a0594
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
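
These rules route archives and binary artifacts through Git LFS, so the repository stores small pointer files instead of the payloads. As a rough illustration only (gitattributes matching has more rules than Python's `fnmatch`, e.g. the `saved_model/**/*` globstar, so treat this as an approximation), a sketch of checking which paths the patterns above would capture:

```python
# Approximate check of which files the .gitattributes rules send to Git LFS.
# Illustrative only: real gitattributes matching is richer than fnmatch.
from fnmatch import fnmatch

LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.bin.*", "*.gz", "*.tgz", "*.zip"]  # abbreviated

def is_lfs_tracked(path):
    name = path.rsplit("/", 1)[-1]  # these patterns match on the basename
    return any(fnmatch(name, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("dummy/id_newspapers_2018/1.0.0/dummy_data.zip"))  # True
print(is_lfs_tracked("id_newspapers_2018.py"))                          # False
```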
README.md ADDED
@@ -0,0 +1,154 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - found
+ languages:
+ - id
+ licenses:
+ - cc-by-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n>1M
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - language-modeling
+ ---
+
+ # Dataset Card for Indonesian Newspapers 2018
+
+ ## Table of Contents
+
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Indonesian Newspapers](https://github.com/feryandi/Dataset-Artikel)
+ - **Repository:** [Indonesian Newspapers](https://github.com/feryandi/Dataset-Artikel)
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** [feryandi.n@gmail.com](mailto:feryandi.n@gmail.com),
+   [cahya.wirawan@gmail.com](mailto:cahya.wirawan@gmail.com)
+
+ ### Dataset Summary
+
+ The dataset contains around 500K articles (136M words) from seven Indonesian newspapers: Detik, Kompas, Tempo,
+ CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1 January 2018 and 20 August 2018
+ (with a few exceptions dated earlier). The 500K uncompressed JSON files (newspapers-json.tgz) take up around 2.2 GB,
+ and the cleaned text, uncompressed into a single file (newspapers.txt.gz), is about 1 GB. The original source on
+ Google Drive also contains the dataset in HTML format, which includes raw data (pictures, CSS, JavaScript, ...)
+ from the online news websites. A copy of the original dataset is available at
+ https://cloud.uncool.ai/index.php/s/mfYEAgKQoY3ebbM
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ Indonesian
+
+ ## Dataset Structure
+
+ ```
+ {
+   'id': 'string',
+   'url': 'string',
+   'date': 'string',
+   'title': 'string',
+   'content': 'string'
+ }
+ ```
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ - `id`: id of the sample
+ - `url`: the URL of the original article
+ - `date`: the publishing date of the article
+ - `title`: the title of the article
+ - `content`: the content of the article
+
+ ### Data Splits
+
+ The dataset contains a single train split.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
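
To complement the card, a minimal usage sketch (assuming `datasets` >= 1.2.0 is installed; the first call downloads the ~446 MB archive and builds the ~1.1 GB train split):

```python
# Minimal usage sketch: load the train split and inspect one article.
from datasets import load_dataset

dataset = load_dataset("id_newspapers_2018", split="train")
print(dataset.num_rows)  # ~499,164 articles

article = dataset[0]
print(article["title"])
print(article["url"], article["date"])
print(article["content"][:200])
```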
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"id_newspapers_2018": {"description": "The dataset contains around 500K articles (136M of words) from 7 Indonesian newspapers: Detik, Kompas, Tempo,\nCNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018\n(with few exceptions dated earlier). The size of uncompressed 500K json files (newspapers-json.tgz) is around 2.2GB,\nand the cleaned uncompressed in a big text file (newspapers.txt.gz) is about 1GB. The original source in Google Drive\ncontains also a dataset in html format which include raw data (pictures, css, javascript, ...)\nfrom the online news website\n", "citation": "@inproceedings{id_newspapers_2018,\n  author = {},\n  title = {Indonesian Newspapers 2018},\n  year = {2019},\n  url = {https://github.com/feryandi/Dataset-Artikel},\n}\n", "homepage": "https://github.com/feryandi/Dataset-Artikel", "license": "Creative Commons Attribution-ShareAlike 4.0 International Public License", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "date": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "id_newspapers2018", "config_name": "id_newspapers_2018", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1116031922, "num_examples": 499164, "dataset_name": "id_newspapers2018"}}, "download_checksums": {"http://cloud.uncool.ai/index.php/s/kF83dQHfGeS2LX2/download": {"num_bytes": 446018349, "checksum": "9fbafe1f5797316aab786af488bc8d5442b5ee17490d41d0705f8cc1cb93ee1c"}}, "download_size": 446018349, "post_processing_size": null, "dataset_size": 1116031922, "size_in_bytes": 1562050271}}
dummy/id_newspapers_2018/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c809dc501acb917b189422ff21296a204b0223772d9b13c00f18192563ce13d
+ size 1718
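
The dummy archive is stored as a Git LFS pointer: a small text file recording the spec version, the SHA-256 object id, and the byte size of the real payload. A sketch of parsing such a pointer:

```python
# Parse a Git LFS pointer file (key/value pairs, one per line) into a dict.
def parse_lfs_pointer(text):
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:8c809dc501acb917b189422ff21296a204b0223772d9b13c00f18192563ce13d\n"
    "size 1718\n"
)
info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:8c80...
print(int(info["size"]))  # 1718 bytes
```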
id_newspapers_2018.py ADDED
@@ -0,0 +1,119 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Indonesian Newspapers 2018"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import glob
+ import json
+ import logging
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{id_newspapers_2018,
+   author = {},
+   title = {Indonesian Newspapers 2018},
+   year = {2019},
+   url = {https://github.com/feryandi/Dataset-Artikel},
+ }
+ """
+
+ _DESCRIPTION = """\
+ The dataset contains around 500K articles (136M of words) from 7 Indonesian newspapers: Detik, Kompas, Tempo,
+ CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018
+ (with few exceptions dated earlier). The size of uncompressed 500K json files (newspapers-json.tgz) is around 2.2GB,
+ and the cleaned uncompressed in a big text file (newspapers.txt.gz) is about 1GB. The original source in Google Drive
+ contains also a dataset in html format which include raw data (pictures, css, javascript, ...)
+ from the online news website
+ """
+
+ _HOMEPAGE = "https://github.com/feryandi/Dataset-Artikel"
+
+ _LICENSE = "Creative Commons Attribution-ShareAlike 4.0 International Public License"
+
+ _URLs = ["http://cloud.uncool.ai/index.php/s/kF83dQHfGeS2LX2/download"]
+
+
+ class IdNewspapers2018Config(datasets.BuilderConfig):
+     """BuilderConfig for IdNewspapers2018"""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for IdNewspapers2018.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(IdNewspapers2018Config, self).__init__(**kwargs)
+
+
+ class IdNewspapers2018(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         IdNewspapers2018Config(
+             name="id_newspapers_2018",
+             version=VERSION,
+             description="IdNewspapers2018 dataset",
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "id": datasets.Value("string"),
+                 "url": datasets.Value("string"),
+                 "date": datasets.Value("string"),
+                 "title": datasets.Value("string"),
+                 "content": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         my_urls = _URLs[0]
+         # Download and extract the archive; articles live under <data_dir>/newspapers.
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "article_dir": os.path.join(data_dir, "newspapers"),
+                     "split": "train",
+                 },
+             )
+         ]
+
+     def _generate_examples(self, article_dir, split):
+         logging.info("⏳ Generating %s examples from %s", split, article_dir)
+         # Walk every JSON article file recursively, assigning sequential string ids.
+         # `_id` is used instead of `id` to avoid shadowing the builtin.
+         _id = 0
+         for path in sorted(glob.glob(os.path.join(article_dir, "**/*.json"), recursive=True)):
+             with open(path, encoding="utf-8") as f:
+                 data = json.load(f)
+             yield _id, {
+                 "id": str(_id),
+                 "url": data["url"],
+                 "date": data["date"],
+                 "title": data["title"],
+                 "content": data["content"],
+             }
+             _id += 1
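
To exercise this script from a local checkout rather than the Hub, `load_dataset` also accepts a path to the file. A sketch (assuming the script is saved as `id_newspapers_2018.py` in the working directory; the first run triggers the full ~446 MB download):

```python
# Load the dataset directly from the local script and print a few examples.
from datasets import load_dataset

dataset = load_dataset("./id_newspapers_2018.py", split="train")
for i, article in enumerate(dataset):
    print(article["id"], article["date"], article["title"])
    if i == 2:  # stop after three articles
        break
```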