Datasets: Commit ae29b8b

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed:
- .gitattributes +27 -0
- README.MD +183 -0
- dataset_infos.json +1 -0
- dummy/0.0.0/dummy_data.zip +3 -0
- saudinewsnet.py +146 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
```
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
```
README.MD
ADDED
@@ -0,0 +1,183 @@
---
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- ar
licenses:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---

# Dataset Card for Saudi Newspapers Corpus (SaudiNewsNet)

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Discussion of Social Impact and Biases](#discussion-of-social-impact-and-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [SaudiNewsNet](https://github.com/parallelfold/SaudiNewsNet)
- **Repository:** [Website](https://github.com/parallelfold/SaudiNewsNet)
- **Paper:** [More Information Needed]
- **Point of Contact:** [Mazen Abdulaziz](mailto:mazen.abdulaziz@gmail.com)

### Dataset Summary

The dataset contains 31,030 Arabic newspaper articles along with their metadata, extracted from various online Saudi newspapers and written in Modern Standard Arabic (MSA).

The articles total **8,758,976 words** and were extracted from the following Saudi newspapers (sorted by number of articles):

- [Al-Riyadh](http://www.alriyadh.com/) (4,852 articles)
- [Al-Jazirah](http://al-jazirah.com/) (3,690 articles)
- [Al-Yaum](http://alyaum.com/) (3,065 articles)
- [Al-Eqtisadiya](http://aleqt.com/) (2,964 articles)
- [Al-Sharq Al-Awsat](http://aawsat.com/) (2,947 articles)
- [Okaz](http://www.okaz.com.sa/) (2,846 articles)
- [Al-Watan](http://alwatan.com.sa/) (2,279 articles)
- [Al-Madina](http://www.al-madina.com/) (2,252 articles)
- [Al-Weeam](http://alweeam.com.sa/) (2,090 articles)
- [Ain Alyoum](http://3alyoum.com/) (2,080 articles)
- [Sabq](http://sabq.org/) (1,411 articles)
- [Saudi Press Agency](http://www.spa.gov.sa) (369 articles)
- [Arreyadi](http://www.arreyadi.com.sa/) (133 articles)
- [Arreyadiyah](http://www.arreyadiyah.com/) (52 articles)

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The articles are written in Modern Standard Arabic.

## Dataset Structure

### Data Instances

Example:

```
{'author': 'لندن: «الشرق الأوسط أونلاين»', 'content': 'أصدر الرئيس عبدربه منصور هادي رئيس الجمهورية اليمنية، اليوم (الاثنين)، قراراً جمهورياً بتعيين نايف البكري محافظا لعدن، وذلك بعد الانتصارات التي حققتها قوات المقاومة الشعبية في طرد المتمردين الحوثيين من المدينة.\n\n نايف البكري شغل منصب وكيل محافظة عدن لشؤون المديريات من قبل، وله علاقات جيدة مع مختلف القوى السياسية في المدينة، ويعتبر الرجل الأول.\n\n من جهة أخرى، أعلن بدر محمد باسملة وزير النقل اليمني عن إعداد خطة طوارئ للمدينة، مؤكدا أن أولوية الحكومة تشغيل الموانئ وتحسين الخدمات. موضحا ان فريقاً فنياً متخصصاً وصل من الإمارات لإعادة تشغيل المطار خلال أربع وعشرين ساعة واستئناف الرحلات من وإلى عدن خلال الساعات المقبلة. داعيا المنظمات الحقوقية والاعلاميين للقدومِ الى عدن لرصد المآسي التي ارتكبها المتمردون، مؤكداً ان الحكومة ستؤمن لهم رحلات من أي مطار في العالم.\n\n من جانبه، قال المتحدث الرسمي باسم الحكومة اليمنية راجح بادي، إن حجم الدمار في عدن لم يكن متوقعاً، لا سيما الذي طال البنى التحتية؛ فمطار عدن على سبيل المثال، مدمر بشكل شبه كامل، ومحطات توليد الكهرباء والمنشآت النفطية تعرضت هي الأخرى أكثر من مرة للقصف من قبل الحوثيين، حيث كانوا حينها يريدون الضغط على المقاومة الشعبية من خلال منعها من الحصول على المحروقات. وهو ما لم ينجحوا فيه. مبينا انه سيحتاج إعادة تأهيلها لمبالغ كبيرة وأشهر من العمل.', 'date_extracted': '2015-07-21 02:51:33', 'source': 'aawsat', 'title': 'الرئيس هادي يعين نايف البكري محافظا لعدن', 'url': 'http://aawsat.com/home/article/410801/الرئيس-هادي-يعين-نايف-البكري-محافظا-لعدن'}
```
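
For orientation, here is a minimal sketch of loading the corpus with the `datasets` library and inspecting one record. It assumes the dataset is available on the Hugging Face Hub under the name `saudinewsnet`:

```python
from datasets import load_dataset

# The corpus ships a single "train" split (see Data Splits below).
dataset = load_dataset("saudinewsnet", split="train")

print(dataset.num_rows)      # expected: 31030
print(dataset[0]["title"])   # title of the first article
print(dataset[0]["source"])  # newspaper identifier, e.g. "aawsat"
```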

### Data Fields

- **`source`**: A string identifier of the newspaper from which the article was extracted (see the table under [Source Data](#source-data)).
- **`url`**: The full URL from which the article was extracted.
- **`date_extracted`**: The timestamp of the date on which the article was extracted, in the format `YYYY-MM-DD hh:mm:ss` (see the parsing sketch after this list). Note that this is not necessarily the date on which the article was authored or made available online; however, for articles extracted after August 1, 2015, it most probably matches the date of authoring.
- **`title`**: The title of the article. Missing values were replaced with an empty string.
- **`author`**: The author of the article. Missing values were replaced with an empty string.
- **`content`**: The content of the article.
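
Because `date_extracted` is stored as a plain string, downstream code has to parse it explicitly. A minimal sketch using only the standard library; the `strptime` pattern is an assumption derived from the `YYYY-MM-DD hh:mm:ss` layout described above:

```python
from datetime import datetime

# The documented "YYYY-MM-DD hh:mm:ss" layout as a strptime pattern.
TIMESTAMP_FORMAT = "%Y-%m-%d %H:%M:%S"

extracted = datetime.strptime("2015-07-21 02:51:33", TIMESTAMP_FORMAT)
print(extracted.date())  # 2015-07-21
```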

### Data Splits

The dataset contains only a train split.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

The `source` field uses the following string identifiers (a selection sketch follows the table):

| String Identifier | Newspaper |
| ----------------- | --------- |
| aawsat | [Al-Sharq Al-Awsat](http://aawsat.com/) |
| aleqtisadiya | [Al-Eqtisadiya](http://aleqt.com/) |
| aljazirah | [Al-Jazirah](http://al-jazirah.com/) |
| almadina | [Al-Madina](http://www.al-madina.com/) |
| alriyadh | [Al-Riyadh](http://www.alriyadh.com/) |
| alwatan | [Al-Watan](http://alwatan.com.sa/) |
| alweeam | [Al-Weeam](http://alweeam.com.sa/) |
| alyaum | [Al-Yaum](http://alyaum.com/) |
| arreyadi | [Arreyadi](http://www.arreyadi.com.sa/) |
| arreyadiyah | [Arreyadiyah](http://www.arreyadiyah.com/) |
| okaz | [Okaz](http://www.okaz.com.sa/) |
| sabq | [Sabq](http://sabq.org/) |
| was | [Saudi Press Agency](http://www.spa.gov.sa/) |
| 3alyoum | [Ain Alyoum](http://3alyoum.com/) |
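
These identifiers can be used to restrict the corpus to a single newspaper. A sketch using `Dataset.filter`, again assuming the dataset loads under the `saudinewsnet` name:

```python
from datasets import load_dataset

dataset = load_dataset("saudinewsnet", split="train")

# Keep only articles crawled from Al-Sharq Al-Awsat ("aawsat").
aawsat = dataset.filter(lambda article: article["source"] == "aawsat")
print(len(aawsat))  # should match the per-newspaper count in the summary
```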

#### Initial Data Collection and Normalization

The Modern Standard Arabic texts were crawled from the Internet.

#### Who are the source language producers?

Newspaper websites.

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Discussion of Social Impact and Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

### Citation Information

```
@misc{hagrima2015,
  author = "M. Alhagri",
  title = "Saudi Newspapers Arabic Corpus (SaudiNewsNet)",
  year = 2015,
  url = "http://github.com/ParallelMazen/SaudiNewsNet"
}
```
dataset_infos.json
ADDED
@@ -0,0 +1 @@
```json
{"default": {"description": "The dataset contains a set of 31,030 Arabic newspaper articles alongwith metadata, extracted from various online Saudi newspapers and written in MSA.", "citation": "@misc{hagrima2015,\nauthor = \"M. Alhagri\",\ntitle = \"Saudi Newspapers Arabic Corpus (SaudiNewsNet)\",\nyear = 2015,\nurl = \"http://github.com/ParallelMazen/SaudiNewsNet\"\n}\n", "homepage": "https://github.com/parallelfold/SaudiNewsNet", "license": "", "features": {"source": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "date_extracted": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "author": {"dtype": "string", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "saudinewsnet", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 103654105, "num_examples": 31030, "dataset_name": "saudinewsnet"}}, "download_checksums": {"https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-21.zip": {"num_bytes": 3424655, "checksum": "7a413b5e1548ad38ca756cdf33c2d5aca1e2c7772a8b8b15679d771f5f6ba29c"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-22.zip": {"num_bytes": 1977389, "checksum": "09156f62f372b14e37c8bd0fd44eab3a9e1184d9ed96c2891174f3ee9534052f"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-23.zip": {"num_bytes": 1125337, "checksum": "41a805d15e45fc8919d6ec6f3a577798a778568ce668224091ac434820ae8858"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-24.zip": {"num_bytes": 1819510, "checksum": "844ceef1647ca7a3ef843e5c33959e36c538a49697739da15e2227877ad1ba99"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-25.zip": {"num_bytes": 1441295, "checksum": "62fb91f52d3b93dc8f89845e8c23e2650b1f8fe10e0c60378da8a2ad24b81911"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-26.zip": {"num_bytes": 1381348, "checksum": "53d360aa514cc6ca436899f7aeb5bb5f6ee2fc58a3ef40bad243b62c2c88cde2"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-27.zip": {"num_bytes": 1288236, "checksum": "b4fe93e9015451d4648146940f6e8e3bf643330ecbbc666b3ad430b995ddad0a"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-31.zip": {"num_bytes": 2062545, "checksum": "d4933c2c06090dc70796f44af5d8e59f4a45f0c0a13d5a1841a76b935615c26a"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-01.zip": {"num_bytes": 1187426, "checksum": "c4330bd9d685aa8fe1c70171c2c3e265dd5a21f5109c10ca8e3d0a92a81b2354"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-02.zip": {"num_bytes": 1229022, "checksum": "b8cfb39bf0ea66ed282a8f88b5deefd0cf234ff8a57f763621f333b5f7be5c9b"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-03.zip": {"num_bytes": 1253376, "checksum": "c17a6e0eedc6c30d30fdfb9c3a86c42670ee893c23fe2096f5cb5f2ffc18fcd6"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-04.zip": {"num_bytes": 1233914, "checksum": "a0b7a5e4da847133bd0e943012299e867102c9572529574636976dafb6a03c69"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-06.zip": {"num_bytes": 2793745, "checksum": 
"b731d37ae21ae14bc43660ed6679bdabc7d036796989f5750a55d990b1881dc8"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-07.zip": {"num_bytes": 1385371, "checksum": "6ae3fbb6387038c0fa174c35a2d37dbb8b29f656a56866ca22ca50254809514f"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-08.zip": {"num_bytes": 1297024, "checksum": "880c650edd04155a888d8d6b81d873b2742ac91e08429acd9fcdd41d733d29af"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-09.zip": {"num_bytes": 1194371, "checksum": "fa9302785d72810263d6d14e2fe7cb8f7ba19e72426740b8b86a33b8d06a420f"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-10.zip": {"num_bytes": 1480985, "checksum": "616411ca4b63fa255089f3cafd196cded1cf69c4e5dd7adefc9d446070e0370c"}, "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-11.zip": {"num_bytes": 1438617, "checksum": "7d3362d4fd07ec2194af70e79745a20ccbcfa57033267615591c2f9dcf40d2ec"}}, "download_size": 29014166, "post_processing_size": null, "dataset_size": 103654105, "size_in_bytes": 132668271}}
```
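
The `download_checksums` entries above are SHA-256 digests of the source archives. A minimal sketch of verifying one archive locally; the URL and digest are copied from the JSON above, and the helper is illustrative rather than part of the dataset code:

```python
import hashlib
import urllib.request

URL = "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-21.zip"
EXPECTED_SHA256 = "7a413b5e1548ad38ca756cdf33c2d5aca1e2c7772a8b8b15679d771f5f6ba29c"

def sha256_of_url(url):
    """Download a file and return its SHA-256 hex digest."""
    with urllib.request.urlopen(url) as response:
        return hashlib.sha256(response.read()).hexdigest()

assert sha256_of_url(URL) == EXPECTED_SHA256, "checksum mismatch"
print("checksum OK")
```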
dummy/0.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:26aa8aad8b5f28b14b2b7a928198df7f91e43d45175ff3b32a33558d224048a4
size 37494
```
saudinewsnet.py
ADDED
@@ -0,0 +1,146 @@
```python
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""The dataset contains a set of 31,030 Arabic newspaper articles along with metadata, extracted from various online Saudi newspapers."""

from __future__ import absolute_import, division, print_function

import json
import os

import datasets


_CITATION = """\
@misc{hagrima2015,
author = "M. Alhagri",
title = "Saudi Newspapers Arabic Corpus (SaudiNewsNet)",
year = 2015,
url = "http://github.com/ParallelMazen/SaudiNewsNet"
}
"""


_DESCRIPTION = """The dataset contains a set of 31,030 Arabic newspaper articles along with metadata, \
extracted from various online Saudi newspapers and written in MSA."""


_HOMEPAGE = "https://github.com/parallelfold/SaudiNewsNet"


_LICENSE = "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License."


# One zip archive per extraction date. Each archive contains a single JSON
# file named after the same date (see _dirs below, kept in the same order).
_URLs = [
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-21.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-22.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-23.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-24.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-25.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-26.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-27.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-31.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-01.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-02.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-03.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-04.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-06.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-07.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-08.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-09.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-10.zip",
    "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-08-11.zip",
]

# JSON file inside each archive, in the same order as _URLs.
_dirs = [
    "2015-07-21.json",
    "2015-07-22.json",
    "2015-07-23.json",
    "2015-07-24.json",
    "2015-07-25.json",
    "2015-07-26.json",
    "2015-07-27.json",
    "2015-07-31.json",
    "2015-08-01.json",
    "2015-08-02.json",
    "2015-08-03.json",
    "2015-08-04.json",
    "2015-08-06.json",
    "2015-08-07.json",
    "2015-08-08.json",
    "2015-08-09.json",
    "2015-08-10.json",
    "2015-08-11.json",
]


class Saudinewsnet(datasets.GeneratorBasedBuilder):
    """A set of 31,030 Arabic newspaper articles along with metadata."""

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    # A string identifier of the newspaper from which the article was extracted.
                    "source": datasets.Value("string"),
                    # The full URL from which the article was extracted.
                    "url": datasets.Value("string"),
                    # The timestamp of the date on which the article was extracted.
                    "date_extracted": datasets.Value("string"),
                    # The title of the article. Can be empty.
                    "title": datasets.Value("string"),
                    # The author of the article. Can be empty.
                    "author": datasets.Value("string"),
                    # The content of the article.
                    "content": datasets.Value("string"),
                }
            ),
            homepage=_HOMEPAGE,
            citation=_CITATION,
            supervised_keys=None,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        datadir = dl_manager.download_and_extract(_URLs)
        paths = [os.path.join(dd, d) for dd, d in zip(datadir, _dirs)]
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": paths, "split": "train"},
            )
        ]

    def _generate_examples(self, filepath, split):
        """Yields (key, example) pairs from every downloaded JSON file."""
        # Keys must be unique across the whole split, so keep one running
        # counter rather than re-enumerating within each file.
        key = 0
        for path in filepath:
            with open(path, encoding="utf-8") as f:
                articles = json.load(f)
            for article in articles:
                yield key, {
                    "title": article.get("title", ""),
                    "source": article["source"],
                    "date_extracted": article["date_extracted"],
                    "url": article["url"],
                    "author": article.get("author", "").strip(),
                    # strip("\n") removes surrounding newlines; the original
                    # strip("/n") stripped the characters "/" and "n" instead.
                    "content": article["content"].strip("\n"),
                }
                key += 1
```