parquet-converter committed
Commit e25769e
1 parent: 9772109

Update parquet files
.gitattributes DELETED
@@ -1,54 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
- production.csv filter=lfs diff=lfs merge=lfs -text
- training.csv filter=lfs diff=lfs merge=lfs -text
- validation.csv filter=lfs diff=lfs merge=lfs -text
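Each deleted line above follows the `.gitattributes` format `pattern attr1 attr2 …`. As an illustrative sketch (not part of this commit), such lines can be parsed to list which patterns were routed through Git LFS; the `sample` text below is a made-up excerpt:

```python
def parse_gitattributes(text):
    """Parse .gitattributes lines into (pattern, attributes) pairs,
    skipping comments and blank lines."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        pattern, *attrs = line.split()
        entries.append((pattern, attrs))
    return entries

# Hypothetical excerpt in the same format as the deleted file above.
sample = """\
*.parquet filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.wav filter=lfs diff=lfs merge=lfs -text
"""
lfs_patterns = [p for p, attrs in parse_gitattributes(sample) if "filter=lfs" in attrs]
```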
README.md DELETED
@@ -1,140 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - expert-generated
- language:
- - en
- license:
- - mit
- multilinguality:
- - monolingual
- pretty_name: sentiment-classification-reviews-with-drift
- size_categories:
- - 10K<n<100K
- task_categories:
- - text-classification
- task_ids:
- - sentiment-classification
- ---
-
- # Dataset Card for `reviews_with_drift`
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [language](#language)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- ### Dataset Summary
-
- This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are purely obtained from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
-
- ### Supported Tasks and Leaderboards
-
- `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
-
- ### language
-
- Text is mainly written in English.
-
- ## Dataset Structure
-
- ### Data Instances
-
- [More Information Needed]
-
- ### Data Fields
-
- [More Information Needed]
-
- ### Data Splits
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- [More Information Needed]
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- [More Information Needed]
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- [More Information Needed]
-
- ### Contributions
-
- Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
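The deleted card's summary describes drift induced by mixing a second review source into the production split only. A minimal self-contained sketch of that construction, using made-up toy reviews rather than the real Movie/Hotel datasets:

```python
import random

# Toy stand-ins for the two review sources described in the card
# (hypothetical data, not the real datasets).
movie_reviews = [("great film", "positive"), ("dull plot", "negative"),
                 ("loved it", "positive"), ("weak script", "negative")]
hotel_reviews = [("clean rooms, friendly staff", "positive"),
                 ("noisy and overpriced", "negative")]

# Training/validation are drawn purely from the movie reviews...
training = movie_reviews[:2]
validation = movie_reviews[2:3]

# ...while production mixes in hotel reviews, simulating the described drift.
production = movie_reviews[3:] + hotel_reviews
random.seed(0)
random.shuffle(production)
```

Because only `production` contains out-of-distribution text, a model trained on `training` sees a shifted input distribution at inference time, which is the scenario the card's monitoring tutorial targets.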
beer_reviews_label_drift_neutral.py DELETED
@@ -1,181 +0,0 @@
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """IMDb movie reviews dataset mixed with Trip Advisor Hotel Reviews to simulate drift across time."""
-
-
- import csv
- import json
- import os
-
- import datasets
- from datasets.tasks import TextClassification
-
-
- # TODO: Add BibTeX citation to our BLOG
- # Find for instance the citation on arxiv or on the dataset repo/website
- _CITATION = ""
- # _CITATION = """\
- # @InProceedings{huggingface:dataset,
- # title = {A great new dataset},
- # author={huggingface, Inc.
- # },
- # year={2020}
- # }
- # """
-
- # TODO: Add description of the dataset here
- # You can copy an official description
- _DESCRIPTION = """\
- This dataset was crafted to be used in our tutorial [Link to the tutorial when
- ready]. It consists of product reviews from an e-commerce store. The reviews
- are labeled on a scale from 1 to 5 (stars). The training & validation sets are
- fully composed of reviews written in English. However, the production set has
- some reviews written in Spanish. At Arize, we work to surface this issue and
- help you solve it.
- """
-
- # TODO: Add a link to an official homepage for the dataset here
- _HOMEPAGE = ""
-
- # TODO: Add the licence for the dataset here if you can find it
- _LICENSE = ""
-
- # TODO: Add link to the official dataset URLs here
- # The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
- # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
- _URL = "https://huggingface.co/datasets/arize-ai/beer_reviews_label_drift_neutral/resolve/main/"
- _URLS = {
-     "training": _URL + "training.csv",
-     "validation": _URL + "validation.csv",
-     "production": _URL + "production.csv",
- }
-
-
- # TODO: Name of the dataset usually matches the script name with CamelCase instead of snake_case
- class BeerReviewsLabelDriftNeutral(datasets.GeneratorBasedBuilder):
-     """TODO: Short description of my dataset."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     # This is an example of a dataset with multiple configurations.
-     # If you don't want/need to define several sub-sets in your dataset,
-     # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
-
-     # If you need to make complex sub-parts in the datasets with configurable options
-     # You can create your own builder configuration class to store attributes, inheriting from datasets.BuilderConfig
-     # BUILDER_CONFIG_CLASS = MyBuilderConfig
-
-     # You will be able to load one or the other configurations in the following list with
-     # data = datasets.load_dataset('my_dataset', 'first_domain')
-     # data = datasets.load_dataset('my_dataset', 'second_domain')
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="default", version=VERSION, description="Default"),
-     ]
-
-     DEFAULT_CONFIG_NAME = "default"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
-
-     def _info(self):
-         class_names = ["negative", "neutral", "positive"]
-         # This method specifies the datasets.DatasetInfo object which contains information and typings for the dataset
-         features = datasets.Features(
-             # These are the features of your dataset like images, labels ...
-             {
-                 "prediction_ts": datasets.Value("float"),
-                 "beer_ABV": datasets.Value("float"),
-                 "beer_name": datasets.Value("string"),
-                 "beer_style": datasets.Value("string"),
-                 "review_appearance": datasets.Value("float"),
-                 "review_palette": datasets.Value("float"),
-                 "review_taste": datasets.Value("float"),
-                 "review_aroma": datasets.Value("float"),
-                 "text": datasets.Value("string"),
-                 "label": datasets.ClassLabel(names=class_names),
-             }
-         )
-
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=features,
-             # If there's a common (input, target) tuple from the features, uncomment supervised_keys line below and
-             # specify them. They'll be used if as_supervised=True in builder.as_dataset.
-             supervised_keys=("text", "label"),
-             # License for the dataset if available
-             license=_LICENSE,
-             # Citation for the dataset
-             citation=_CITATION,
-             task_templates=[TextClassification(text_column="text", label_column="label")],
-         )
-
-     def _split_generators(self, dl_manager):
-         # This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
-         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
-
-         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS
-         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
-         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
-         extracted_paths = dl_manager.download_and_extract(_URLS)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split("training"),
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": extracted_paths['training'],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split("validation"),
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": extracted_paths['validation'],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split("production"),
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": extracted_paths['production'],
-                 },
-             ),
-         ]
-
-     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
-     def _generate_examples(self, filepath):
-         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
-         # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
-         with open(filepath) as csv_file:
-             csv_reader = csv.reader(csv_file)
-             for id_, row in enumerate(csv_reader):
-                 prediction_ts, name, style, ABV, appearance, palette, taste, aroma, text, label = row
-                 if id_ == 0:
-                     continue
-                 yield id_, {
-                     "prediction_ts": prediction_ts,
-                     "beer_name": name,
-                     "beer_style": style,
-                     "beer_ABV": ABV,
-                     "review_appearance": appearance,
-                     "review_palette": palette,
-                     "review_taste": taste,
-                     "review_aroma": aroma,
-                     "text": text,
-                     "label": label,
-                 }
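The deleted script's `_generate_examples` unpacks each CSV row positionally and skips the header by row index. A minimal self-contained sketch of that pattern, using a made-up two-line CSV (not the real data) in the same column order the script assumes:

```python
import csv
import io

# Hypothetical CSV matching the column order the deleted script unpacks.
raw = io.StringIO(
    "prediction_ts,beer_name,beer_style,beer_ABV,review_appearance,"
    "review_palette,review_taste,review_aroma,text,label\n"
    "1650000000.0,Hoppy One,IPA,6.5,4.0,3.5,4.5,4.0,crisp and bitter,positive\n"
)

examples = []
for id_, row in enumerate(csv.reader(raw)):
    if id_ == 0:  # first row is the header, as in the original script
        continue
    prediction_ts, name, style, abv, appearance, palette, taste, aroma, text, label = row
    examples.append({"text": text, "label": label, "beer_name": name})
```

Note that positional unpacking silently breaks if a column is added or reordered; reading into a dict via `csv.DictReader` would be the more defensive choice.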
production.csv → default/beer_reviews_label_drift_neutral-production.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ba0a962ea574775c7233373db656baa20921e4eebde34d285aa3b08e3c7cfcd1
- size 20539674
+ oid sha256:efca3a16eae4c1b7f4a10da91153e76e0b96e8a91df7ecae4c962dd52ef9df60
+ size 11951825
training.csv → default/beer_reviews_label_drift_neutral-training.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d972f3fe36590ce444e05f49bc7ef71d1506448d3966d984ae002bd2f8ff81d5
- size 6650136
+ oid sha256:7d8f04cef87c09a23f3c187af86429181548fd862fdb10334ec77a6c5e960b0f
+ size 3901575
validation.csv → default/beer_reviews_label_drift_neutral-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:989c27939cea789957f1fb643cefa2fb726488640a8df3f60d39677ba25a042d
- size 941861
+ oid sha256:8804f5ae90a94aed21704215abb7abf3dc0e143efb4a09a6db5aeba3c959af68
+ size 562825
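The renamed entries above are Git LFS pointer files: the repository stores only a three-line `version`/`oid`/`size` record, not the data itself. A small sketch of parsing that format, using the new validation-split pointer from this commit:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents for the new validation parquet file, as shown in the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:8804f5ae90a94aed21704215abb7abf3dc0e143efb4a09a6db5aeba3c959af68
size 562825
"""
info = parse_lfs_pointer(pointer)
# info["size"] is the byte count of the real file stored on the LFS server.
```

This is why the diff for each renamed file is tiny even though the underlying data changed from multi-megabyte CSVs to parquet: only the pointer's hash and size lines differ.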