system (HF staff) committed
Commit 9f6ef60
0 parent(s)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,183 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - fa
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ - text-classification
+ task_ids:
+ - summarization
+ - text-simplification
+ - topic-classification
+ ---
+
+ # Dataset Card for Persian News Summary (pn_summary)
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/hooshvare/pn-summary/
+ - **Repository:** https://github.com/hooshvare/pn-summary/
+ - **Paper:** https://arxiv.org/abs/2012.11204
+ - **Leaderboard:** [More Information Needed]
+ - **Point of Contact:** [Mehrdad Farahani](mailto:m3hrdadfphi@gmail.com)
+
+ ### Dataset Summary
+
+ A well-structured summarization dataset for the Persian language, consisting of 93,207 records. It is prepared for abstractive/extractive summarization tasks (like `cnn_dailymail` for English) and can also be used for other tasks such as text generation, title generation, and news category classification.
+ Note that newlines in the text were replaced with the `[n]` symbol; convert them back to real newlines (e.g. `t.replace("[n]", "\n")`) before using the data.
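The `[n]` convention above can be undone with a one-liner; a minimal sketch (the function name here is ours, not part of the dataset):

```python
def restore_newlines(text: str) -> str:
    # The dataset encodes newlines as the literal token "[n]";
    # map them back to real newlines before downstream use.
    return text.replace("[n]", "\n")


sample = "First paragraph. [n] Second paragraph."
print(restore_newlines(sample))
```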
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset is prepared for abstractive/extractive summarization tasks (like `cnn_dailymail` for English). It can also be used for other tasks such as text generation, title generation, and news category classification.
+
+ ### Languages
+
+ The dataset is mostly in Persian, with some English mixed into the articles.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A record consists of 8 features:
+
+ ```python
+ record = ['id', 'title', 'article', 'summary', 'category', 'categories', 'network', 'link']
+ ```
+ In the following, you can see an example of `pn_summary`.
+
+ ```json
+ {
+ "article": "به گزارش شانا، علی کاردر امروز (۲۷ دی ماه) در مراسم تودیع محسن قمصری، مدیر سابق امور بین الملل شرکت ملی نفت ایران و معارفه سعید خوشرو، مدیر جدید امور بین الملل این شرکت، گفت: مدیریت امور بین\u200eالملل به عنوان یکی از تاثیرگذارترین مدیریت\u200cهای شرکت ملی نفت ایران در دوران تحریم\u200cهای ظالمانه غرب علیه کشورمان بسیار هوشمندانه عمل کرد و ما توانستیم به خوبی از عهده تحریم\u200cها برآییم. [n] وی افزود: مجموعه امور بین الملل در همه دوران\u200cها با سختی\u200cها و مشکلات بسیاری مواجه بوده است، به ویژه در دوره اخیر به دلیل مسائل پیرامون تحریم وظیفه سنگینی بر عهده داشت که با تدبیر مدیریت خوب این مجموعه سربلند از آن بیرون آمد. [n] کاردر با قدردانی از زحمات محسن قمصری، به سلامت مدیریت امور بین الملل این شرکت اشاره کرد و افزود: محوریت کار مدیریت اموربین الملل سلامت مالی بوده است. [n] وی بر ضرورت نهادینه سازی جوانگرایی در مدیریت شرکت ملی نفت ایران تاکید کرد و گفت: مدیریت امور بین الملل در پرورش نیروهای زبده و کارآزموده آنچنان قوی عملکرده است که برای انتخاب مدیر جدید مشکلی وجود نداشت. [n] کاردر، حرفه\u200eای\u200eگری و کار استاندارد را از ویژگی\u200cهای مدیران این مدیریت برشمرد و گفت: نگاه جامع، خلاقیت و نوآوری و بکارگیری نیروهای جوان باید همچنان مد نظر مدیریت جدید امور بین الملل شرکت ملی نفت ایران باشد.",
+ "categories": "نفت",
+ "category": 5,
+ "id": "738e296491f8b24c5aa63e9829fd249fb4428a66",
+ "link": "https://www.shana.ir/news/275284/%D9%85%D8%AF%DB%8C%D8%B1%DB%8C%D8%AA-%D9%81%D8%B1%D9%88%D8%B4-%D9%86%D9%81%D8%AA-%D8%AF%D8%B1-%D8%AF%D9%88%D8%B1%D8%A7%D9%86-%D8%AA%D8%AD%D8%B1%DB%8C%D9%85-%D9%87%D9%88%D8%B4%D9%85%D9%86%D8%AF%D8%A7%D9%86%D9%87-%D8%B9%D9%85%D9%84-%DA%A9%D8%B1%D8%AF",
+ "network": 2,
+ "summary": "مدیرعامل شرکت ملی نفت، عملکرد مدیریت امور بین\u200eالملل این شرکت را در دوران تحریم بسیار هوشمندانه خواند و گفت: امور بین الملل در دوران پس از تحریم\u200eها نیز می\u200cتواند نقش بزرگی در تسریع روند توسعه داشته باشد.",
+ "title": "مدیریت فروش نفت در دوران تحریم هوشمندانه عمل کرد"
+ }
+ ```
+
+
+ ### Data Fields
+
+ - `id (string)`: The ID of the news item.
+ - `title (string)`: The title of the news item.
+ - `article (string)`: The full text of the article.
+ - `summary (string)`: The summary of the article.
+ - `category (int)`: The English category of the news, stored as an index into the category names: `Economy`, `Roads-Urban`, `Banking-Insurance`, `Agriculture`, `International`, `Oil-Energy`, `Industry`, `Transportation`, `Science-Technology`, `Local`, `Sports`, `Politics`, `Art-Culture`, `Society`, `Health`, `Research`, `Education-University`, `Tourism`.
+ - `categories (string)`: The category and sub-category of the news in Persian.
+ - `network (int)`: The news agency, stored as an index into the agency names: `Tahlilbazaar`, `Imna`, `Shana`, `Mehr`, `Irna`, `Khabaronline`.
+ - `link (string)`: The link to the news item.
+
+ The category in English includes 18 different article categories, from economy to tourism:
+
+ ```bash
+ Economy, Roads-Urban, Banking-Insurance, Agriculture, International, Oil-Energy, Industry, Transportation, Science-Technology, Local, Sports, Politics, Art-Culture, Society, Health, Research, Education-University, Tourism
+ ```
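Because `category` and `network` are integer class indices, decoding them only needs the name lists in the order given above. A small sketch with plain Python lists (the `datasets` library exposes the same mapping through its `ClassLabel` feature):

```python
# Label order copied from the dataset card; the integer fields index into these.
CATEGORY_NAMES = [
    "Economy", "Roads-Urban", "Banking-Insurance", "Agriculture",
    "International", "Oil-Energy", "Industry", "Transportation",
    "Science-Technology", "Local", "Sports", "Politics", "Art-Culture",
    "Society", "Health", "Research", "Education-University", "Tourism",
]
NETWORK_NAMES = ["Tahlilbazaar", "Imna", "Shana", "Mehr", "Irna", "Khabaronline"]

# The example record above has category=5 and network=2.
print(CATEGORY_NAMES[5])  # Oil-Energy
print(NETWORK_NAMES[2])   # Shana
```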
+
+ ### Data Splits
+
+ The data is split into training (82,022 records), validation (5,592 records), and test (5,593 records) sets; each split has the same 8 features.
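As a sanity check, the split sizes quoted above sum to the 93,207 records mentioned in the summary:

```python
# Split sizes as stated in the card; they should sum to the full dataset.
splits = {"train": 82_022, "validation": 5_592, "test": 5_593}
print(sum(splits.values()))  # 93207
```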
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ The dataset comprises numerous articles of various categories crawled from six news agency websites (Tahlilbazaar, Imna, Shana, Mehr, Irna, and Khabaronline).
+
+ ### Annotations
+
+ #### Annotation process
+
+ Each record (article) includes the long original text as well as a human-generated summary. The total number of cleaned articles is 93,207 (out of 200,000 crawled articles).
+
+ #### Who are the annotators?
+
+ The dataset was organized by [Mehrdad Farahani](https://github.com/m3hrdadfi), [Mohammad Gharachorloo](https://github.com/baarsaam) and [Mohammad Manthouri](https://github.com/mmanthouri) for the paper [Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization](https://arxiv.org/abs/2012.11204).
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ This dataset was curated by [Mehrdad Farahani](https://github.com/m3hrdadfi), [Mohammad Gharachorloo](https://github.com/baarsaam) and [Mohammad Manthouri](https://github.com/mmanthouri).
+
+ ### Licensing Information
+
+ This dataset is licensed under the MIT License.
+
+ ### Citation Information
+
+ ```bibtex
+ @article{pnSummary,
+     title={Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization},
+     author={Mehrdad Farahani, Mohammad Gharachorloo, Mohammad Manthouri},
+     year={2020},
+     eprint={2012.11204},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"1.0.0": {"description": "A well-structured summarization dataset for the Persian language consists of 93,207 records. It is prepared for Abstractive/Extractive tasks (like cnn_dailymail for English). It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification.\nIt is imperative to consider that the newlines were replaced with the `[n]` symbol. Please interpret them into normal newlines (for ex. `t.replace(\"[n]\", \"\n\")`) and then use them for your purposes.\n", "citation": "@article{pnSummary, title={Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization},\nauthor={Mehrdad Farahani, Mohammad Gharachorloo, Mohammad Manthouri},\nyear={2020},\neprint={2012.11204},\narchivePrefix={arXiv},\nprimaryClass={cs.CL}\n}\n", "homepage": "https://github.com/hooshvare/pn-summary", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "article": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"dtype": "string", "id": null, "_type": "Value"}, "category": {"num_classes": 18, "names": ["Economy", "Roads-Urban", "Banking-Insurance", "Agriculture", "International", "Oil-Energy", "Industry", "Transportation", "Science-Technology", "Local", "Sports", "Politics", "Art-Culture", "Society", "Health", "Research", "Education-University", "Tourism"], "names_file": null, "id": null, "_type": "ClassLabel"}, "categories": {"dtype": "string", "id": null, "_type": "Value"}, "network": {"num_classes": 6, "names": ["Tahlilbazaar", "Imna", "Shana", "Mehr", "Irna", "Khabaronline"], "names_file": null, "id": null, "_type": "ClassLabel"}, "link": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "pn_summary", "config_name": "1.0.0", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": 
{"name": "train", "num_bytes": 309436709, "num_examples": 82022, "dataset_name": "pn_summary"}, "validation": {"name": "validation", "num_bytes": 21311841, "num_examples": 5592, "dataset_name": "pn_summary"}, "test": {"name": "test", "num_bytes": 20936844, "num_examples": 5593, "dataset_name": "pn_summary"}}, "download_checksums": {"https://drive.google.com/u/0/uc?id=16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO&export=download": {"num_bytes": 89591141, "checksum": "49aa6a5fdb11244714f9bbe69517f2079ab934c9c565e272a977fbd8d2d404f7"}}, "download_size": 89591141, "post_processing_size": null, "dataset_size": 351685394, "size_in_bytes": 441276535}}
dummy/1.0.0/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95861b011340af8a2421819b7c1134ea6068beab8a4bfb6635bcc1e1c9313b57
+ size 13172
pn_summary.py ADDED
@@ -0,0 +1,149 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """pn_summary"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{pnSummary, title={Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization},
+ author={Mehrdad Farahani, Mohammad Gharachorloo, Mohammad Manthouri},
+ year={2020},
+ eprint={2012.11204},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ """
+
+ _DESCRIPTION = """\
+ A well-structured summarization dataset for the Persian language consists of 93,207 records. It is prepared for Abstractive/Extractive tasks (like cnn_dailymail for English). It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification.
+ It is imperative to consider that the newlines were replaced with the `[n]` symbol. Please interpret them into normal newlines (for ex. `t.replace("[n]", "\n")`) and then use them for your purposes.
+ """
+
+ _HOMEPAGE = "https://github.com/hooshvare/pn-summary"
+ _LICENSE = "MIT License"
+
+ _URLs = {
+     "1.0.0": {
+         "data": "https://drive.google.com/u/0/uc?id=16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO&export=download",
+         "features": [
+             {"name": "id", "type": datasets.Value("string")},
+             {"name": "title", "type": datasets.Value("string")},
+             {"name": "article", "type": datasets.Value("string")},
+             {"name": "summary", "type": datasets.Value("string")},
+             {
+                 "name": "category",
+                 "type": datasets.ClassLabel(
+                     names=[
+                         "Economy",
+                         "Roads-Urban",
+                         "Banking-Insurance",
+                         "Agriculture",
+                         "International",
+                         "Oil-Energy",
+                         "Industry",
+                         "Transportation",
+                         "Science-Technology",
+                         "Local",
+                         "Sports",
+                         "Politics",
+                         "Art-Culture",
+                         "Society",
+                         "Health",
+                         "Research",
+                         "Education-University",
+                         "Tourism",
+                     ]
+                 ),
+             },
+             {"name": "categories", "type": datasets.Value("string")},
+             {
+                 "name": "network",
+                 "type": datasets.ClassLabel(names=["Tahlilbazaar", "Imna", "Shana", "Mehr", "Irna", "Khabaronline"]),
+             },
+             {"name": "link", "type": datasets.Value("string")},
+         ],
+     }
+ }
+
+
+ class PnSummaryConfig(datasets.BuilderConfig):
+     """BuilderConfig for pn_summary."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for pn_summary."""
+
+         super(PnSummaryConfig, self).__init__(**kwargs)
+
+
+ class PnSummary(datasets.GeneratorBasedBuilder):
+     """A well-structured summarization dataset for the Persian language: pn_summary"""
+
+     BUILDER_CONFIGS = [
+         PnSummaryConfig(
+             name="1.0.0", version=datasets.Version("1.0.0"), description="The first version of pn_summary"
+         ),
+     ]
+
+     def _info(self):
+         feature_names_types = _URLs[self.config.name]["features"]
+         features = datasets.Features({f["name"]: f["type"] for f in feature_names_types})
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION, features=features, homepage=_HOMEPAGE, citation=_CITATION
+         )
+
+     def _split_generators(self, dl_manager):
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls["data"])
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "pn_summary", "train.csv"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "pn_summary", "dev.csv"),
+                     "split": "validation",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, "pn_summary", "test.csv"),
+                     "split": "test",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         feature_names_types = _URLs[self.config.name]["features"]
+         features = [f["name"] for f in feature_names_types]
+         with open(filepath, encoding="utf-8") as csv_file:
+             reader = csv.DictReader(csv_file, quotechar='"', delimiter="\t", quoting=csv.QUOTE_MINIMAL)
+
+             for _id, row in enumerate(reader):
+                 if len(row) == len(features):
+                     yield _id, {f: row[f] for f in features}
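To illustrate how the `_generate_examples` method above behaves, here is a self-contained sketch that feeds an in-memory tab-separated sample through the same `csv.DictReader` settings; the rows are invented for illustration. Note that with `DictReader`, a row with extra columns collects the surplus under a `None` key, so the `len(row) == len(features)` check rejects it:

```python
import csv
import io

FEATURES = ["id", "title", "article", "summary", "category", "categories", "network", "link"]

# Hypothetical TSV in the loader's shape: a header row, one valid record,
# and one record with a surplus ninth column.
tsv = (
    "\t".join(FEATURES) + "\n"
    "abc\tSome title\tBody [n] text\tShort summary\t5\tنفت\t2\thttps://example.com\n"
    "def\tTitle\tBody\tSummary\t1\tx\t1\thttps://example.com\textra-column\n"
)


def generate_examples(fileobj, features):
    # Same parsing parameters as the loading script.
    reader = csv.DictReader(fileobj, quotechar='"', delimiter="\t", quoting=csv.QUOTE_MINIMAL)
    for _id, row in enumerate(reader):
        # Rows with extra columns gain a None key, so len(row) != len(features).
        if len(row) == len(features):
            yield _id, {f: row[f] for f in features}


examples = list(generate_examples(io.StringIO(tsv), FEATURES))
print(len(examples))  # 1
```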