parquet-converter committed
Commit 2dc8135
Parent: 94a9845

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
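
These rules routed matching files through Git LFS rather than storing them in-tree. As a rough illustration only, here is a small Python sketch of how such glob patterns classify paths; `fnmatch` only approximates gitattributes matching semantics, and the `patterns` list merely mirrors a few entries from the deleted file:

```python
# Sketch: classify repo paths against LFS-style glob patterns.
# fnmatch approximates (but does not exactly reproduce) how
# gitattributes patterns are matched.
from fnmatch import fnmatch

patterns = ["*.parquet", "*.bin", "*.tar.*", "*tfevents*"]

def is_lfs_tracked(path: str) -> bool:
    # gitattributes patterns without a slash match against the
    # basename; matching on the basename approximates that.
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, p) for p in patterns)

print(is_lfs_tracked("plain_text/imdb-train.parquet"))  # True
print(is_lfs_tracked("README.md"))                      # False
```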
README.md DELETED
@@ -1,251 +0,0 @@
- ---
- pretty_name: IMDB
- annotations_creators:
- - expert-generated
- language_creators:
- - expert-generated
- language:
- - en
- license:
- - other
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - text-classification
- task_ids:
- - sentiment-classification
- paperswithcode_id: imdb-movie-reviews
- train-eval-index:
- - config: plain_text
-   task: text-classification
-   task_id: binary_classification
-   splits:
-     train_split: train
-     eval_split: test
-   col_mapping:
-     text: text
-     label: target
-   metrics:
-   - type: accuracy
-     name: Accuracy
-   - type: f1
-     name: F1 macro
-     args:
-       average: macro
-   - type: f1
-     name: F1 micro
-     args:
-       average: micro
-   - type: f1
-     name: F1 weighted
-     args:
-       average: weighted
-   - type: precision
-     name: Precision macro
-     args:
-       average: macro
-   - type: precision
-     name: Precision micro
-     args:
-       average: micro
-   - type: precision
-     name: Precision weighted
-     args:
-       average: weighted
-   - type: recall
-     name: Recall macro
-     args:
-       average: macro
-   - type: recall
-     name: Recall micro
-     args:
-       average: micro
-   - type: recall
-     name: Recall weighted
-     args:
-       average: weighted
- dataset_info:
-   features:
-   - name: text
-     dtype: string
-   - name: label
-     dtype:
-       class_label:
-         names:
-           0: neg
-           1: pos
-   config_name: plain_text
-   splits:
-   - name: train
-     num_bytes: 33432835
-     num_examples: 25000
-   - name: test
-     num_bytes: 32650697
-     num_examples: 25000
-   - name: unsupervised
-     num_bytes: 67106814
-     num_examples: 50000
-   download_size: 84125825
-   dataset_size: 133190346
- ---
-
- # Dataset Card for "imdb"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 80.23 MB
- - **Size of the generated dataset:** 127.06 MB
- - **Total amount of disk used:** 207.28 MB
-
- ### Dataset Summary
-
- Large Movie Review Dataset.
- This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### plain_text
-
- - **Size of downloaded dataset files:** 80.23 MB
- - **Size of the generated dataset:** 127.06 MB
- - **Total amount of disk used:** 207.28 MB
-
- An example of 'train' looks as follows.
- ```
- {
-     "label": 0,
-     "text": "Goodbye world2\n"
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### plain_text
- - `text`: a `string` feature.
- - `label`: a classification label, with possible values including `neg` (0), `pos` (1).
-
- ### Data Splits
-
- | name     |train|unsupervised|test |
- |----------|----:|-----------:|----:|
- |plain_text|25000|       50000|25000|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Citation Information
-
- ```
- @InProceedings{maas-EtAl:2011:ACL-HLT2011,
-   author    = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
-   title     = {Learning Word Vectors for Sentiment Analysis},
-   booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
-   month     = {June},
-   year      = {2011},
-   address   = {Portland, Oregon, USA},
-   publisher = {Association for Computational Linguistics},
-   pages     = {142--150},
-   url       = {http://www.aclweb.org/anthology/P11-1015}
- }
- ```
-
- ### Contributions
-
- Thanks to [@ghazi-f](https://github.com/ghazi-f), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
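
The card is removed by this commit, but the data itself stays loadable from the Hub. As a minimal sketch with the `datasets` library, checking what is loaded against the deleted card above (split sizes and label names are the ones the card reports):

```python
# Sketch: load the IMDB dataset and compare it to the deleted card.
from datasets import load_dataset

ds = load_dataset("imdb")  # "plain_text" is the default (and only) config

# Expect 25,000 train, 25,000 test, 50,000 unsupervised examples.
for split, d in ds.items():
    print(split, d.num_rows)

# `label` is a ClassLabel with names ["neg", "pos"].
example = ds["train"][0]
label_name = ds["train"].features["label"].int2str(example["label"])
print(example["text"][:80], "->", label_name)
```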
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"plain_text": {"description": "Large Movie Review Dataset.\nThis is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.", "citation": "@InProceedings{maas-EtAl:2011:ACL-HLT2011,\n  author    = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},\n  title     = {Learning Word Vectors for Sentiment Analysis},\n  booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},\n  month     = {June},\n  year      = {2011},\n  address   = {Portland, Oregon, USA},\n  publisher = {Association for Computational Linguistics},\n  pages     = {142--150},\n  url       = {http://www.aclweb.org/anthology/P11-1015}\n}\n", "homepage": "http://ai.stanford.edu/~amaas/data/sentiment/", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["neg", "pos"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": [{"task": "text-classification", "text_column": "text", "label_column": "label", "labels": ["neg", "pos"]}], "builder_name": "imdb", "config_name": "plain_text", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 33432835, "num_examples": 25000, "dataset_name": "imdb"}, "test": {"name": "test", "num_bytes": 32650697, "num_examples": 25000, "dataset_name": "imdb"}, "unsupervised": {"name": "unsupervised", "num_bytes": 67106814, "num_examples": 50000, "dataset_name": "imdb"}}, "download_checksums": {"http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz": {"num_bytes": 84125825, "checksum": "c40f74a18d3b61f90feba1e17730e0d38e8b97c05fde7008942e91923d1658fe"}}, "download_size": 84125825, "post_processing_size": null, "dataset_size": 133190346, "size_in_bytes": 217316171}}
imdb.py DELETED
@@ -1,111 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """IMDB movie reviews dataset."""
-
- import datasets
- from datasets.tasks import TextClassification
-
-
- _DESCRIPTION = """\
- Large Movie Review Dataset.
- This is a dataset for binary sentiment classification containing substantially \
- more data than previous benchmark datasets. We provide a set of 25,000 highly \
- polar movie reviews for training, and 25,000 for testing. There is additional \
- unlabeled data for use as well.\
- """
-
- _CITATION = """\
- @InProceedings{maas-EtAl:2011:ACL-HLT2011,
-   author    = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
-   title     = {Learning Word Vectors for Sentiment Analysis},
-   booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
-   month     = {June},
-   year      = {2011},
-   address   = {Portland, Oregon, USA},
-   publisher = {Association for Computational Linguistics},
-   pages     = {142--150},
-   url       = {http://www.aclweb.org/anthology/P11-1015}
- }
- """
-
- _DOWNLOAD_URL = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
-
-
- class IMDBReviewsConfig(datasets.BuilderConfig):
-     """BuilderConfig for IMDBReviews."""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig for IMDBReviews.
-
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(IMDBReviewsConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
-
-
- class Imdb(datasets.GeneratorBasedBuilder):
-     """IMDB movie reviews dataset."""
-
-     BUILDER_CONFIGS = [
-         IMDBReviewsConfig(
-             name="plain_text",
-             description="Plain text",
-         )
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {"text": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["neg", "pos"])}
-             ),
-             supervised_keys=None,
-             homepage="http://ai.stanford.edu/~amaas/data/sentiment/",
-             citation=_CITATION,
-             task_templates=[TextClassification(text_column="text", label_column="label")],
-         )
-
-     def _split_generators(self, dl_manager):
-         archive = dl_manager.download(_DOWNLOAD_URL)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN, gen_kwargs={"files": dl_manager.iter_archive(archive), "split": "train"}
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST, gen_kwargs={"files": dl_manager.iter_archive(archive), "split": "test"}
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split("unsupervised"),
-                 gen_kwargs={"files": dl_manager.iter_archive(archive), "split": "train", "labeled": False},
-             ),
-         ]
-
-     def _generate_examples(self, files, split, labeled=True):
-         """Generate aclImdb examples."""
-         # For labeled examples, extract the label from the path.
-         if labeled:
-             label_mapping = {"pos": 1, "neg": 0}
-             for path, f in files:
-                 if path.startswith(f"aclImdb/{split}"):
-                     label = label_mapping.get(path.split("/")[2])
-                     if label is not None:
-                         yield path, {"text": f.read().decode("utf-8"), "label": label}
-         else:
-             for path, f in files:
-                 if path.startswith(f"aclImdb/{split}"):
-                     if path.split("/")[2] == "unsup":
-                         yield path, {"text": f.read().decode("utf-8"), "label": -1}
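
The labeling in `_generate_examples` depends only on paths inside the tar archive (`aclImdb/<split>/<pos|neg|unsup>/...`). As a self-contained sketch of the same idea over a locally downloaded `aclImdb_v1.tar.gz` (the local filename is an assumption), without the `datasets` builder machinery:

```python
# Sketch: reproduce the path-based labeling of _generate_examples
# over a local copy of aclImdb_v1.tar.gz.
import tarfile

label_mapping = {"pos": 1, "neg": 0}

def iter_examples(archive_path: str, split: str):
    with tarfile.open(archive_path, "r:gz") as tar:
        for member in tar:
            parts = member.name.split("/")
            # Expected layout: aclImdb/<split>/<label_dir>/<file>.txt
            if len(parts) == 4 and parts[1] == split and member.isfile():
                label = label_mapping.get(parts[2])
                if label is not None:
                    text = tar.extractfile(member).read().decode("utf-8")
                    yield {"text": text, "label": label}

# Usage: print the first few labeled training examples.
for i, ex in enumerate(iter_examples("aclImdb_v1.tar.gz", "train")):
    print(ex["label"], ex["text"][:60])
    if i == 2:
        break
```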
plain_text/imdb-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fbfa02d0da39e6317174203247cb042fceeaf77dcc39e836918708430dfbea13
+ size 20470362
plain_text/imdb-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:353169bf7fb33514d80cd9fa464aa737d9cbfffdaf0bdecc98fe4f0724fcce3f
+ size 20979967
plain_text/imdb-unsupervised.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe2c7bb3825719ff0c908ae8142874244db950db76ec49a75fb064f85042d6d7
+ size 41996508
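
The three files added above are Git LFS pointers; the actual Parquet shards are fetched on checkout (for example via `git lfs pull`). A minimal sketch of reading the shards locally with pandas, assuming a parquet engine such as pyarrow is installed:

```python
# Sketch: read the converted Parquet shards after `git lfs pull`.
import pandas as pd

splits = {
    "train": "plain_text/imdb-train.parquet",
    "test": "plain_text/imdb-test.parquet",
    "unsupervised": "plain_text/imdb-unsupervised.parquet",
}

for name, path in splits.items():
    df = pd.read_parquet(path)  # requires pyarrow or fastparquet
    print(name, df.shape, list(df.columns))
```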