system HF staff committed on
Commit
9d76b7e
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,141 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - ar
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10k<n<100k
+ source_datasets:
+ - original
+ task_categories:
+ - text_classification
+ task_ids:
+ - multi-class-classification
+ ---
+
+ # Dataset Card for LABR
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Discussion of Social Impact and Biases](#discussion-of-social-impact-and-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [LABR](https://github.com/mohamedadaly/LABR)
+ - **Repository:** [LABR](https://github.com/mohamedadaly/LABR)
+ - **Paper:** [LABR: Large-scale Arabic Book Reviews Dataset](https://www.aclweb.org/anthology/P13-2088.pdf)
+ - **Point of Contact:** [Mohamed Aly](mailto:mohamed@mohamedaly.info)
+
+ ### Dataset Summary
+
+ This dataset contains over 63,000 book reviews in Arabic. It is the largest sentiment analysis dataset for Arabic to date. The book reviews were harvested from the website Goodreads during March 2013. Each book review comes with the Goodreads review id, the user id, the book id, the rating (1 to 5), and the text of the review.
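+
+ As a quick orientation, the dataset can be loaded with the `datasets` library. This is a minimal sketch; it assumes the canonical `labr` dataset id, which matches the `builder_name` recorded in `dataset_infos.json`:
+
+ ```python
+ from datasets import load_dataset
+
+ # Downloads the split index files and reviews.tsv defined in labr.py,
+ # then builds the train and test splits.
+ ds = load_dataset("labr")
+ print(ds)              # DatasetDict with 'train' and 'test' splits
+ print(ds["train"][0])  # {'text': ..., 'label': ...}
+ ```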
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset supports multi-class sentiment classification of book reviews. It was introduced in this [paper](https://www.aclweb.org/anthology/P13-2088.pdf).
+
+ ### Languages
+
+ The reviews are written in Arabic.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical data point comprises the text of a book review and a rating from 1 to 5, where a higher rating indicates a more positive review.
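+
+ An illustrative instance is shown below (the review text is invented for illustration; the schema follows the features declared in `labr.py`):
+
+ ```python
+ {
+     "text": "رواية ممتعة وأسلوب سلس",  # the Arabic review text
+     "label": 4,  # ClassLabel index into the names ["1", ..., "5"], so 4 means rating "5"
+ }
+ ```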
+
+ ### Data Fields
+
+ - `text` (string): the text of the book review.
+ - `label` (class label): the rating, one of `"1"`, `"2"`, `"3"`, `"4"`, or `"5"` (higher is more positive).
+
+ ### Data Splits
+
+ The data is split into training and test sets as follows:
+
+ |           | Train  | Test |
+ |---------- | ------ | ---- |
+ |data split | 11,760 | 2,935|
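+
+ These counts can be checked against a loaded copy of the dataset (a sketch under the same `labr` id assumption as above):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("labr")
+ print({split: ds[split].num_rows for split in ds})
+ # expected, per dataset_infos.json: {'train': 11760, 'test': 2935}
+ ```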
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ [More Information Needed]
+
+ #### Initial Data Collection and Normalization
+
+ The authors downloaded over 220,000 reviews from the book readers' social network www.goodreads.com during March 2013.
+
+ #### Who are the source language producers?
+
+ Goodreads users who wrote the book reviews.
+
+ ### Annotations
+
+ The dataset does not contain any additional annotations.
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Discussion of Social Impact and Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
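+ The citation is available elsewhere in this commit (`_CITATION` in `labr.py` and `dataset_infos.json`):
+
+ ```
+ @inproceedings{aly2013labr,
+   title={Labr: A large scale arabic book reviews dataset},
+   author={Aly, Mohamed and Atiya, Amir},
+   booktitle={Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
+   pages={494--498},
+   year={2013}
+ }
+ ```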
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"plain_text": {"description": "This dataset contains over 63,000 book reviews in Arabic. It is the largest sentiment analysis dataset for Arabic to date. The book reviews were harvested from the website Goodreads during March 2013. Each book review comes with the Goodreads review id, the user id, the book id, the rating (1 to 5) and the text of the review.\n", "citation": "@inproceedings{aly2013labr,\n title={Labr: A large scale arabic book reviews dataset},\n author={Aly, Mohamed and Atiya, Amir},\n booktitle={Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},\n pages={494--498},\n year={2013}\n}\n", "homepage": "https://github.com/mohamedadaly/LABR", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 5, "names": ["1", "2", "3", "4", "5"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "labr", "config_name": "plain_text", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7051103, "num_examples": 11760, "dataset_name": "labr"}, "test": {"name": "test", "num_bytes": 1703399, "num_examples": 2935, "dataset_name": "labr"}}, "download_checksums": {"https://raw.githubusercontent.com/mohamedadaly/LABR/master/data/5class-balanced-train.txt": {"num_bytes": 80515, "checksum": "389473fb7e673f60b42d0a8f1c6478f0930a61ae0a069f7b83c6ea670fe8171d"}, "https://raw.githubusercontent.com/mohamedadaly/LABR/master/data/5class-balanced-test.txt": {"num_bytes": 20067, "checksum": "0ca1ef7d5c286514defe6d45d1d72696bfdd169364b0d69d1f02beb9fbfe89cf"}, "https://raw.githubusercontent.com/mohamedadaly/LABR/master/data/reviews.tsv": {"num_bytes": 39853130, "checksum": "a72f1b8ece766c918dfdb3c31c2a2da4f408dd2f69a24aef5187c85f313edb04"}}, "download_size": 39953712, "post_processing_size": null, "dataset_size": 8754502, "size_in_bytes": 48708214}}
dummy/plain_text/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9cedc18b8de873d2089129da352d217ed6f7a20042e0d63f6bf58a8550118445
+ size 1875
labr.py ADDED
@@ -0,0 +1,115 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """Arabic Book Reviews."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ _DESCRIPTION = """\
+ This dataset contains over 63,000 book reviews in Arabic. \
+ It is the largest sentiment analysis dataset for Arabic to date. \
+ The book reviews were harvested from the website Goodreads during March 2013. \
+ Each book review comes with the Goodreads review id, the user id, the book id, the rating (1 to 5) and the text of the review.
+ """
+
+ _CITATION = """\
+ @inproceedings{aly2013labr,
+   title={Labr: A large scale arabic book reviews dataset},
+   author={Aly, Mohamed and Atiya, Amir},
+   booktitle={Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
+   pages={494--498},
+   year={2013}
+ }
+ """
+
+ # All data files are fetched directly from the LABR GitHub repository.
+ _URL = "https://raw.githubusercontent.com/mohamedadaly/LABR/master/data/"
+ _URLS = {
+     "train": _URL + "5class-balanced-train.txt",
+     "test": _URL + "5class-balanced-test.txt",
+     "reviews": _URL + "reviews.tsv",
+ }
+
+
+ class LabrConfig(datasets.BuilderConfig):
+     """BuilderConfig for Labr."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for Labr.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(LabrConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
+
+
+ class Labr(datasets.GeneratorBasedBuilder):
+     """Labr dataset."""
+
+     BUILDER_CONFIGS = [
+         LabrConfig(
+             name="plain_text",
+             description="Plain text",
+         )
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "label": datasets.features.ClassLabel(
+                         names=[
+                             "1",
+                             "2",
+                             "3",
+                             "4",
+                             "5",
+                         ]
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://github.com/mohamedadaly/LABR",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_dir = dl_manager.download_and_extract(_URLS)
+         # reviews.tsv holds the full review table; the train/test files only
+         # hold row indices into it, so keep the reviews path for later lookup.
+         self.reviews_path = data_dir["reviews"]
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"directory": data_dir["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"directory": data_dir["test"]}),
+         ]
+
+     def _generate_examples(self, directory):
+         """Generate examples. `directory` is the path to a split index file."""
+         # Each split file lists one integer per line: the row index of a
+         # review in reviews.tsv. Load the full review table first, then
+         # resolve each index to its (rating, text) pair.
+         reviews = []
+         with open(self.reviews_path, encoding="utf-8") as tsvfile:
+             tsvreader = csv.reader(tsvfile, delimiter="\t")
+             for line in tsvreader:
+                 reviews.append(line)
+
+         with open(directory, encoding="utf-8") as f:
+             for id_, record in enumerate(f.read().splitlines()):
+                 # Row layout: rating first, three id columns, review text last.
+                 rating, _, _, _, review_text = reviews[int(record)]
+                 yield str(id_), {"text": review_text, "label": rating}
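
To make the split mechanics concrete: each line of `5class-balanced-train.txt` / `5class-balanced-test.txt` holds one integer, the row index of a review in `reviews.tsv`. Below is a standalone sketch of the same lookup the script performs (local file names mirror the downloaded files; the paths are assumptions for illustration):

```python
import csv

# Load the full review table; the row layout matches the unpacking in
# _generate_examples: rating first, three id columns, review text last.
with open("reviews.tsv", encoding="utf-8") as tsvfile:
    reviews = list(csv.reader(tsvfile, delimiter="\t"))

with open("5class-balanced-train.txt", encoding="utf-8") as f:
    for index_line in f.read().splitlines():
        rating, _, _, _, review_text = reviews[int(index_line)]
        print(rating, review_text[:40])
        break  # show only the first training example
```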