Languages:
Arabic
Multilinguality:
monolingual
Size Categories:
10K<n<100K
Language Creators:
found
Annotations Creators:
found
Source Datasets:
original
system HF staff committed
Commit 5780264 (1 parent: 0be15e7)

Update files from the datasets library (from 1.17.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.17.0

Files changed (2)
  1. README.md +50 -30
  2. labr.py +1 -1
README.md CHANGED
@@ -18,39 +18,46 @@ task_categories:
 task_ids:
 - multi-class-classification
 paperswithcode_id: labr
+pretty_name: LABR
 ---
 
-# Dataset Card for MetRec
+# Dataset Card for LABR
 
 ## Table of Contents
-- [Dataset Description](#dataset-description)
-  - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-  - [Languages](#languages)
-- [Dataset Structure](#dataset-structure)
-  - [Data Instances](#data-instances)
-  - [Data Fields](#data-fields)
-  - [Data Splits](#data-splits)
-- [Dataset Creation](#dataset-creation)
-  - [Curation Rationale](#curation-rationale)
-  - [Source Data](#source-data)
-  - [Annotations](#annotations)
-  - [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-  - [Social Impact of Dataset](#social-impact-of-dataset)
-  - [Discussion of Biases](#discussion-of-biases)
-  - [Other Known Limitations](#other-known-limitations)
-- [Additional Information](#additional-information)
-  - [Dataset Curators](#dataset-curators)
-  - [Licensing Information](#licensing-information)
-  - [Citation Information](#citation-information)
-  - [Contributions](#contributions)
+- [Dataset Card for LABR](#dataset-card-for-labr)
+  - [Table of Contents](#table-of-contents)
+  - [Dataset Description](#dataset-description)
+    - [Dataset Summary](#dataset-summary)
+    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+    - [Languages](#languages)
+  - [Dataset Structure](#dataset-structure)
+    - [Data Instances](#data-instances)
+    - [Data Fields](#data-fields)
+    - [Data Splits](#data-splits)
+      - [|split|num examples|](#splitnum-examples)
+  - [Dataset Creation](#dataset-creation)
+    - [Curation Rationale](#curation-rationale)
+    - [Source Data](#source-data)
+      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+      - [Who are the source language producers?](#who-are-the-source-language-producers)
+    - [Annotations](#annotations)
+      - [Annotation process](#annotation-process)
+      - [Who are the annotators?](#who-are-the-annotators)
+    - [Personal and Sensitive Information](#personal-and-sensitive-information)
+  - [Considerations for Using the Data](#considerations-for-using-the-data)
+    - [Social Impact of Dataset](#social-impact-of-dataset)
+    - [Discussion of Biases](#discussion-of-biases)
+    - [Other Known Limitations](#other-known-limitations)
+  - [Additional Information](#additional-information)
+    - [Dataset Curators](#dataset-curators)
+    - [Licensing Information](#licensing-information)
+    - [Citation Information](#citation-information)
+    - [Contributions](#contributions)
 
 ## Dataset Description
 
-- **Homepage:** [LABR](https://github.com/mohamedadaly/LABR)
 - **Repository:** [LABR](https://github.com/mohamedadaly/LABR)
-- **Paper:** [LABR: Large-scale Arabic Book Reviews Dataset](https://www.aclweb.org/anthology/P13-2088.pdf)
+- **Paper:** [LABR: Large-scale Arabic Book Reviews Dataset](https://aclanthology.org/P13-2088/)
 - **Point of Contact:** [Mohammed Aly](mailto:mohamed@mohamedaly.info)
 
 ### Dataset Summary
@@ -73,7 +80,8 @@ A typical data point comprises a rating from 1 to 5 where the higher the rating
 
 ### Data Fields
 
-[More Information Needed]
+- `text` (str): Review text.
+- `label` (int): Review rating.
 
 ### Data Splits
 
@@ -121,13 +129,17 @@ The dataset does not contain any additional annotations.
 
 ## Considerations for Using the Data
 
-### Discussion of Social Impact and Biases
+### Social Impact of Dataset
 
-[More Information Needed]
+[Needs More Information]
+
+### Discussion of Biases
+
+[Needs More Information]
 
 ### Other Known Limitations
 
-[More Information Needed]
+[Needs More Information]
 
 ## Additional Information
 
@@ -141,7 +153,15 @@ The dataset does not contain any additional annotations.
 
 ### Citation Information
 
-[More Information Needed]
+```
+@inproceedings{aly2013labr,
+  title={Labr: A large scale arabic book reviews dataset},
+  author={Aly, Mohamed and Atiya, Amir},
+  booktitle={Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
+  pages={494--498},
+  year={2013}
+}
+```
 
 ### Contributions
 
labr.py CHANGED
@@ -94,7 +94,7 @@ class Labr(datasets.GeneratorBasedBuilder):
         )
 
     def _split_generators(self, dl_manager):
-        data_dir = dl_manager.download_and_extract(_URLS)
+        data_dir = dl_manager.download(_URLS)
         self.reviews_path = data_dir["reviews"]
         return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"directory": data_dir["train"]}),
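The labr.py change switches from `download_and_extract` to `download`, consistent with the review files being plain tab-separated text rather than archives. A hypothetical sketch of parsing one such line into the card's `text`/`label` shape; the assumed column order (rating, review id, user id, book id, body) is an illustration, not read from the builder script in this commit:

```python
# Hypothetical parser for one LABR-style review line. The assumed layout is
# rating, review id, user id, book id, body, separated by tabs; this column
# order is an assumption for illustration only.
def parse_review(line: str) -> dict:
    rating, review_id, user_id, book_id, body = line.rstrip("\n").split("\t", 4)
    return {"label": int(rating), "text": body}

row = parse_review("5\t1284\t42\t100\tكتاب رائع\n")
# row maps the first column to "label" and the review body to "text"
```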