system HF staff committed on
Commit de67fbe
1 Parent(s): 075fdd4

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1): README.md (+179, -0)
README.md ADDED

---
---

# Dataset Card for "yelp_polarity"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://course.fast.ai/datasets](https://course.fast.ai/datasets)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 158.67 MB
- **Size of the generated dataset:** 421.28 MB
- **Total amount of disk used:** 579.95 MB

### [Dataset Summary](#dataset-summary)

Large Yelp Review Dataset.
This is a dataset for binary sentiment classification. We provide a set of 560,000 highly polar Yelp reviews for training, and 38,000 for testing.

ORIGIN

The Yelp reviews dataset consists of reviews from Yelp. It is extracted
from the Yelp Dataset Challenge 2015 data. For more information, please
refer to http://www.yelp.com/dataset_challenge

The Yelp reviews polarity dataset is constructed by
Xiang Zhang (xiang.zhang@nyu.edu) from the above dataset.
It was first used as a text classification benchmark in the following paper:
Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks
for Text Classification. Advances in Neural Information Processing Systems 28
(NIPS 2015).

DESCRIPTION

The Yelp reviews polarity dataset is constructed by considering stars 1 and 2
negative, and 3 and 4 positive. For each polarity, 280,000 training samples and
19,000 testing samples are taken randomly. In total there are 560,000 training
samples and 38,000 testing samples. Negative polarity is class 1,
and positive class 2.

The files train.csv and test.csv contain the training and testing samples as
comma-separated values. There are 2 columns in them, corresponding to class
index (1 and 2) and review text. The review texts are escaped using double
quotes ("), and any internal double quote is escaped by 2 double quotes ("").
New lines are escaped by a backslash followed with an "n" character,
that is "\n".

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

Detailed information is shown for the dataset's single configuration, `plain_text`.

### [Data Instances](#data-instances)

#### plain_text

- **Size of downloaded dataset files:** 158.67 MB
- **Size of the generated dataset:** 421.28 MB
- **Total amount of disk used:** 579.95 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "label": 0,
    "text": "\"Unfortunately, the frustration of being Dr. Goldberg's patient is a repeat of the experience I've had with so many other doctor..."
}
```
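
To inspect such instances directly, here is a minimal loading sketch using the standard `load_dataset` API of the `datasets` library; the split names and sizes follow the card:

```python
from datasets import load_dataset

# Loads the "plain_text" configuration; returns a DatasetDict
# with "train" (560,000 rows) and "test" (38,000 rows) splits.
dataset = load_dataset("yelp_polarity")

example = dataset["train"][0]
print(example["label"])      # 0 (negative) or 1 (positive)
print(example["text"][:80])  # first characters of the review
```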

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### plain_text
- `text`: a `string` feature.
- `label`: a classification label; the class names are `1` (negative) and `2` (positive), stored as indices `0` and `1` respectively.
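
A short sketch of converting between the stored indices and the class names via the `ClassLabel` feature, assuming `dataset` was loaded as in the sketch above:

```python
# The "label" column is a ClassLabel feature; int2str/str2int convert
# between the stored indices (0, 1) and the class names ("1", "2").
label_feature = dataset["train"].features["label"]
print(label_feature.names)         # ['1', '2']
print(label_feature.int2str(0))    # '1' -> negative polarity
print(label_feature.str2int("2"))  # 1   -> positive polarity
```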

### [Data Splits Sample Size](#data-splits-sample-size)

| name       |  train |  test |
|------------|-------:|------:|
| plain_text | 560000 | 38000 |
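
Since only train and test splits are defined, a validation set can be carved out of the training split; a sketch using the standard `train_test_split` method of `datasets`, where the 5% size and the seed are arbitrary choices, not part of this card:

```python
# Split 5% of the training data off as a held-out validation set.
splits = dataset["train"].train_test_split(test_size=0.05, seed=42)
train_ds, valid_ds = splits["train"], splits["test"]
print(len(train_ds), len(valid_ds))  # 532000 28000
```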

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@article{zhangCharacterlevelConvolutionalNetworks2015,
  archivePrefix = {arXiv},
  eprinttype = {arxiv},
  eprint = {1509.01626},
  primaryClass = {cs},
  title = {Character-Level {{Convolutional Networks}} for {{Text Classification}}},
  abstract = {This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several large-scale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results. Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF variants, and deep learning models such as word-based ConvNets and recurrent neural networks.},
  journal = {arXiv:1509.01626 [cs]},
  author = {Zhang, Xiang and Zhao, Junbo and LeCun, Yann},
  month = sep,
  year = {2015},
}
```

### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@julien-c](https://github.com/julien-c) for adding this dataset.