pt-sk committed
Commit 6dfd843
Parent: 23fa2f0

Upload 2 files

Files changed (2):
  1. README.md +259 -1
  2. gitattributes +27 -0
README.md CHANGED
@@ -1,3 +1,261 @@
  ---
- license: mit
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ language:
+ - en
+ license:
+ - other
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - sentiment-classification
+ paperswithcode_id: imdb-movie-reviews
+ pretty_name: IMDB
+ dataset_info:
+   config_name: plain_text
+   features:
+   - name: text
+     dtype: string
+   - name: label
+     dtype:
+       class_label:
+         names:
+           '0': neg
+           '1': pos
+   splits:
+   - name: train
+     num_bytes: 33432823
+     num_examples: 25000
+   - name: test
+     num_bytes: 32650685
+     num_examples: 25000
+   - name: unsupervised
+     num_bytes: 67106794
+     num_examples: 50000
+   download_size: 83446840
+   dataset_size: 133190302
+ configs:
+ - config_name: plain_text
+   data_files:
+   - split: train
+     path: plain_text/train-*
+   - split: test
+     path: plain_text/test-*
+   - split: unsupervised
+     path: plain_text/unsupervised-*
+   default: true
+ train-eval-index:
+ - config: plain_text
+   task: text-classification
+   task_id: binary_classification
+   splits:
+     train_split: train
+     eval_split: test
+   col_mapping:
+     text: text
+     label: target
+   metrics:
+   - type: accuracy
+     name: Accuracy
+   - type: f1
+     name: F1 macro
+     args:
+       average: macro
+   - type: f1
+     name: F1 micro
+     args:
+       average: micro
+   - type: f1
+     name: F1 weighted
+     args:
+       average: weighted
+   - type: precision
+     name: Precision macro
+     args:
+       average: macro
+   - type: precision
+     name: Precision micro
+     args:
+       average: micro
+   - type: precision
+     name: Precision weighted
+     args:
+       average: weighted
+   - type: recall
+     name: Recall macro
+     args:
+       average: macro
+   - type: recall
+     name: Recall micro
+     args:
+       average: micro
+   - type: recall
+     name: Recall weighted
+     args:
+       average: weighted
  ---
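The `train-eval-index` block above configures automated evaluation: predictions on the `test` split are scored with accuracy plus F1, precision, and recall under macro, micro, and weighted averaging. A minimal sketch of equivalent scoring with scikit-learn (a library this card does not itself reference; the `y_true`/`y_pred` arrays are hypothetical):

```python
# Illustrative only: recomputes the metric set declared in train-eval-index with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 1, 1, 0, 1]  # hypothetical gold labels (0 = neg, 1 = pos)
y_pred = [0, 1, 0, 0, 1]  # hypothetical model predictions

scores = {"accuracy": accuracy_score(y_true, y_pred)}
for average in ("macro", "micro", "weighted"):  # the `average` args listed above
    scores[f"f1_{average}"] = f1_score(y_true, y_pred, average=average)
    scores[f"precision_{average}"] = precision_score(y_true, y_pred, average=average)
    scores[f"recall_{average}"] = recall_score(y_true, y_pred, average=average)
print(scores)
```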
+
+ # Dataset Card for "imdb"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
+ - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Size of downloaded dataset files:** 84.13 MB
+ - **Size of the generated dataset:** 133.23 MB
+ - **Total amount of disk used:** 217.35 MB
+
+ ### Dataset Summary
+
+ Large Movie Review Dataset.
+ This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.
+
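As a quick check of the figures in the summary, the splits can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming this repository resolves under the loading name `imdb`:

```python
# Minimal sketch: load the three splits described above and report their sizes.
from datasets import load_dataset

imdb = load_dataset("imdb")  # assumes the dataset is reachable under the name "imdb"
for split, ds in imdb.items():
    print(split, ds.num_rows)  # expected: train 25000, test 25000, unsupervised 50000
```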
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Languages
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### plain_text
+
+ - **Size of downloaded dataset files:** 84.13 MB
+ - **Size of the generated dataset:** 133.23 MB
+ - **Total amount of disk used:** 217.35 MB
+
+ An example of 'train' looks as follows.
+ ```
+ {
+     "label": 0,
+     "text": "Goodbye world2\n"
+ }
+ ```
+
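To pull up records like the one above, rows can be indexed directly on a split; a minimal sketch under the same `datasets`/`imdb` assumptions:

```python
# Minimal sketch: fetch the first training example, a dict with "text" and "label" keys.
from datasets import load_dataset

train = load_dataset("imdb", split="train")
example = train[0]
print(sorted(example.keys()))          # ['label', 'text']
print(example["label"], example["text"][:80])
```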
+ ### Data Fields
+
+ The data fields are the same among all splits.
+
+ #### plain_text
+ - `text`: a `string` feature.
+ - `label`: a classification label, with possible values including `neg` (0), `pos` (1).
+
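Because `label` is declared as a `class_label` feature in the front matter, the integer values map back to the `neg`/`pos` names; a minimal sketch under the same assumptions:

```python
# Minimal sketch: map integer labels back to their class names via the ClassLabel feature.
from datasets import load_dataset

train = load_dataset("imdb", split="train")
label_feature = train.features["label"]                      # ClassLabel(names=['neg', 'pos'])
print(label_feature.names)                                   # ['neg', 'pos']
print(label_feature.int2str(0), label_feature.int2str(1))    # neg pos
```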
+ ### Data Splits
+
+ | name |train|unsupervised|test |
+ |----------|----:|-----------:|----:|
+ |plain_text|25000| 50000|25000|
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the source language producers?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the annotators?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Licensing Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Citation Information
+
+ ```
+ @InProceedings{maas-EtAl:2011:ACL-HLT2011,
+   author    = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
+   title     = {Learning Word Vectors for Sentiment Analysis},
+   booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
+   month     = {June},
+   year      = {2011},
+   address   = {Portland, Oregon, USA},
+   publisher = {Association for Computational Linguistics},
+   pages     = {142--150},
+   url       = {http://www.aclweb.org/anthology/P11-1015}
+ }
+
+ ```
+
+
+ ### Contributions
+
+ Thanks to [@ghazi-f](https://github.com/ghazi-f), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text