kashif (HF staff) committed
Commit 3834aff
1 Parent(s): 0dfc3f1

added electricity load diagram dataset (#3722)

* added electricity load diagram

* typo

* one more typo

* fixed dataset name

* rename folder

* added citation

* Update card

* Update script

* Add new task to tasks list

* Missing comma

* Set lang to unknown

Co-authored-by: mariosasko <mariosasko777@gmail.com>

Commit from https://github.com/huggingface/datasets/commit/edc97be7de00f7282a8998933177164caa4ad96a

README.md ADDED
@@ -0,0 +1,212 @@
---
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- unknown
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: Electricity Load Diagrams
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- time-series-forecasting
task_ids:
- univariate-time-series-forecasting
---

# Dataset Card for Electricity Load Diagrams

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Electricity Load Diagrams 2011-2014](https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014)
- **Paper:** [Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks](https://dl.acm.org/doi/10.1145/3209978.3210006)
- **Point of Contact:** [Artur Trindade](mailto:artur.trindade@elergone.pt)

### Dataset Summary

This dataset contains hourly electricity consumption (in kW) time series of 370 Portuguese clients from 2011 to 2014.

### Dataset Usage

The dataset has the following configuration parameters:

- `freq`: the frequency at which the time series are resampled (default: `"1H"`)
- `prediction_length`: the forecast horizon for this task, used to make the validation and test splits (default: `24`)
- `rolling_evaluations`: the number of rolling windows in the test split used for evaluation (default: `7`)

For example, you can specify a configuration different from those used in the papers as follows:

```python
load_dataset("electricity_load_diagrams", "uci", rolling_evaluations=10)
```

> Notes:
> - The dataset has no missing values.
> - Values are in kW for each 15-minute interval, rescaled to hourly frequency. To convert to kWh, divide the values by 4.
> - All time labels refer to Portuguese local time; every day has 96 measurements (24 × 4).
> - On the March time-change day each year (which has only 23 hours), the values between 1:00 am and 2:00 am are zero for all clients.
> - On the October time-change day each year (which has 25 hours), the values between 1:00 am and 2:00 am aggregate the consumption of two hours.

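The resampling and kWh conversion described in the notes can be sketched with pandas. This is a minimal illustration on synthetic readings, not the dataset's own loading code:

```python
import pandas as pd

# Synthetic 15-minute kW readings for one client over two hours.
idx = pd.date_range("2012-01-01 00:00", periods=8, freq="15min")
kw = pd.Series([14.0, 18.0, 21.0, 20.0, 22.0, 20.0, 20.0, 20.0], index=idx)

# Resample to hourly frequency by summing the four 15-minute values per hour,
# matching how the loader aggregates the raw file.
hourly_kw = kw.resample("1H").sum()

# Energy in kWh: each 15-minute kW reading contributes kW * 0.25 h,
# hence dividing the hourly sum by 4.
hourly_kwh = hourly_kw / 4
```

Summing four 15-minute kW values and dividing by 4 is equivalent to averaging the power over the hour, which is why the kW-to-kWh conversion is a simple division.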
### Supported Tasks and Leaderboards

- `univariate-time-series-forecasting`: The time series forecasting task involves learning the future `target` values of the time series in the dataset over the `prediction_length` time steps. The forecasts can then be validated against the ground truth in the `validation` split and tested via the `test` split.

### Languages

## Dataset Structure

The dataset has no missing values. The raw values are in kW per 15-minute interval and are resampled to hourly frequency. Each time series represents one client. Some clients were created after 2011; in these cases their consumption is considered zero before that point. All time labels refer to Portuguese local time, and every day contains 96 measurements (24 × 4). On the March time-change day each year (which has only 23 hours), the values between 1:00 am and 2:00 am are zero for all clients. On the October time-change day each year (which has 25 hours), the values between 1:00 am and 2:00 am aggregate the consumption of two hours.

### Data Instances

A sample from the training set is provided below:

```
{
  'start': datetime.datetime(2012, 1, 1, 0, 0),
  'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, 20.0, 20.0, 13.0, 11.0],  # <= this target array is a concatenated sample
  'feat_static_cat': [0],
  'item_id': '0'
}
```

There are two configurations, `uci` and `lstnet`, specified as follows.

The time series are resampled to hourly frequency. We test on 7 rolling windows with a prediction length of 24.

For the `uci` configuration, the validation split therefore ends 24 × 7 time steps before the end of each time series, and the training split ends 24 time steps before the end of the validation split.

For the `lstnet` configuration, the training window covers the first 60% of each time series, the validation window ends at 80% of each time series, and the remaining 20% is used as the test set of 7 rolling windows of 24 time steps each. Finally, as in the LSTNet paper, we only consider time series that are active in the years 2012–2014, which leaves 320 time series.

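The split-point arithmetic described above can be written down directly. The helpers below are a hypothetical sketch for illustration (indices into an hourly series of length `n`), not functions from the loading script:

```python
# Default configuration parameters from the dataset card.
PREDICTION_LENGTH = 24
ROLLING_EVALUATIONS = 7

def uci_split_points(n: int) -> tuple:
    """Return (train_end, val_end) indices for the `uci` configuration.

    Validation ends 24 * 7 steps before the series end; training ends a
    further 24 steps earlier.
    """
    val_end = n - PREDICTION_LENGTH * ROLLING_EVALUATIONS
    train_end = val_end - PREDICTION_LENGTH
    return train_end, val_end

def lstnet_split_points(n: int) -> tuple:
    """Return (train_end, val_end) for `lstnet`: 60% and 80% of the series."""
    return int(n * 0.6), int(n * 0.8)
```

For a series of 1000 hourly steps, `uci` training ends at step 808 and validation at 832, leaving 168 steps (7 windows of 24) for the rolling test set.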
### Data Fields

For this univariate, regularly sampled time series we have:

- `start`: a `datetime` of the first entry of each time series in the dataset
- `target`: an `array[float32]` of the actual target values
- `feat_static_cat`: an `array[uint64]` containing a categorical identifier of each time series in the dataset
- `item_id`: a string identifier of each time series in the dataset, for reference

Given the `freq` and the `start` datetime, we can assign a datetime to each entry in the target array.

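For example, the datetime of each target entry can be reconstructed with pandas. The `entry` dict below is an illustrative stand-in for a dataset row, with values taken from the sample instance:

```python
import pandas as pd

# Fields as they appear in a dataset entry (illustrative values).
entry = {
    "start": pd.Timestamp("2012-01-01 00:00"),
    "target": [14.0, 18.0, 21.0, 20.0, 22.0],
}
freq = "1H"  # the default `freq` configuration parameter

# One timestamp per target value, starting at `start` and stepping by `freq`.
index = pd.date_range(entry["start"], periods=len(entry["target"]), freq=freq)
series = pd.Series(entry["target"], index=index)
```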
### Data Splits

| name   | train | validation | test |
|--------|------:|-----------:|-----:|
| uci    |   370 |        370 | 2590 |
| lstnet |   320 |        320 | 2240 |

## Dataset Creation

The Electricity Load Diagrams 2011–2014 dataset was developed by Artur Trindade and shared on the UCI Machine Learning Repository. It covers the electricity load of 370 substations in Portugal from the start of 2011 to the end of 2014, with a sampling period of 15 minutes, which we resample to hourly time series.

### Curation Rationale

Research and development of load forecasting methods, in particular short-term electricity forecasting.

### Source Data

This dataset covers the electricity load of 370 substations in Portugal from the start of 2011 to the end of 2014, with a sampling period of 15 minutes.

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@inproceedings{10.1145/3209978.3210006,
  author = {Lai, Guokun and Chang, Wei-Cheng and Yang, Yiming and Liu, Hanxiao},
  title = {Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks},
  year = {2018},
  isbn = {9781450356572},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3209978.3210006},
  doi = {10.1145/3209978.3210006},
  booktitle = {The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval},
  pages = {95--104},
  numpages = {10},
  location = {Ann Arbor, MI, USA},
  series = {SIGIR '18}
}
```

### Contributions

Thanks to [@kashif](https://github.com/kashif) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"uci": {"description": "This new dataset contains hourly kW electricity consumption time series of 370 Portuguese clients from 2011 to 2014.\n", "citation": "@inproceedings{10.1145/3209978.3210006,\n author = {Lai, Guokun and Chang, Wei-Cheng and Yang, Yiming and Liu, Hanxiao},\n title = {Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks},\n year = {2018},\n isbn = {9781450356572},\n publisher = {Association for Computing Machinery},\n address = {New York, NY, USA},\n url = {https://doi.org/10.1145/3209978.3210006},\n doi = {10.1145/3209978.3210006},\n booktitle = {The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval},\n pages = {95--104},\n numpages = {10},\n location = {Ann Arbor, MI, USA},\n series = {SIGIR '18}\n}\n", "homepage": "https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014", "license": "", "features": {"start": {"dtype": "timestamp[s]", "id": null, "_type": "Value"}, "target": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "feat_static_cat": {"feature": {"dtype": "uint64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "item_id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "electricty_load_diagram", "config_name": "uci", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 42968147, "num_examples": 370, "dataset_name": "electricty_load_diagram"}, "test": {"name": "test", "num_bytes": 302059069, "num_examples": 2590, "dataset_name": "electricty_load_diagram"}, "validation": {"name": "validation", "num_bytes": 43004777, "num_examples": 370, "dataset_name": "electricty_load_diagram"}}, "download_checksums": 
{"https://archive.ics.uci.edu/ml/machine-learning-databases/00321/LD2011_2014.txt.zip": {"num_bytes": 261335609, "checksum": "f6c4d0e0df12ecdb9ea008dd6eef3518adb52c559d04a9bac2e1b81dcfc8d4e1"}}, "download_size": 261335609, "post_processing_size": null, "dataset_size": 388031993, "size_in_bytes": 649367602}, "lstnet": {"description": "This new dataset contains hourly kW electricity consumption time series of 370 Portuguese clients from 2011 to 2014.\n", "citation": "@inproceedings{10.1145/3209978.3210006,\n author = {Lai, Guokun and Chang, Wei-Cheng and Yang, Yiming and Liu, Hanxiao},\n title = {Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks},\n year = {2018},\n isbn = {9781450356572},\n publisher = {Association for Computing Machinery},\n address = {New York, NY, USA},\n url = {https://doi.org/10.1145/3209978.3210006},\n doi = {10.1145/3209978.3210006},\n booktitle = {The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval},\n pages = {95--104},\n numpages = {10},\n location = {Ann Arbor, MI, USA},\n series = {SIGIR '18}\n}\n", "homepage": "https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014", "license": "", "features": {"start": {"dtype": "timestamp[s]", "id": null, "_type": "Value"}, "target": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "feat_static_cat": {"feature": {"dtype": "uint64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "item_id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "electricty_load_diagram", "config_name": "lstnet", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 20843200, "num_examples": 320, "dataset_name": "electricty_load_diagram"}, "test": {"name": "test", 
"num_bytes": 195401080, "num_examples": 2240, "dataset_name": "electricty_load_diagram"}, "validation": {"name": "validation", "num_bytes": 27787720, "num_examples": 320, "dataset_name": "electricty_load_diagram"}}, "download_checksums": {"https://archive.ics.uci.edu/ml/machine-learning-databases/00321/LD2011_2014.txt.zip": {"num_bytes": 261335609, "checksum": "f6c4d0e0df12ecdb9ea008dd6eef3518adb52c559d04a9bac2e1b81dcfc8d4e1"}}, "download_size": 261335609, "post_processing_size": null, "dataset_size": 244032000, "size_in_bytes": 505367609}}
dummy/lstnet/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc2aa39cff1c88e011fc12c7c0345b0565fd6c272363cd0caf6787773f218bf7
size 3726
dummy/uci/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ca2aa1c60876be2536b3364fbf947adce6e2528e4f1441f155896b2719d72ac2
size 6341
electricity_load_diagrams.py ADDED
@@ -0,0 +1,199 @@
```python
# coding=utf-8
# Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Electricity Load Diagrams 2011-2014 time series dataset."""
from pathlib import Path

import pandas as pd

import datasets

from .utils import to_dict


_CITATION = """\
@inproceedings{10.1145/3209978.3210006,
  author = {Lai, Guokun and Chang, Wei-Cheng and Yang, Yiming and Liu, Hanxiao},
  title = {Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks},
  year = {2018},
  isbn = {9781450356572},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3209978.3210006},
  doi = {10.1145/3209978.3210006},
  booktitle = {The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval},
  pages = {95--104},
  numpages = {10},
  location = {Ann Arbor, MI, USA},
  series = {SIGIR '18}
}
"""

_DESCRIPTION = """\
This dataset contains hourly kW electricity consumption time series of 370 Portuguese clients from 2011 to 2014.
"""

_HOMEPAGE = "https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014"

_LICENSE = ""

_URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/00321/LD2011_2014.txt.zip"


class ElectricityLoadDiagramsConfig(datasets.BuilderConfig):
    """A builder config with some added metadata."""

    freq: str = "1H"
    prediction_length: int = 24
    rolling_evaluations: int = 7


class ElectricityLoadDiagrams(datasets.GeneratorBasedBuilder):
    """Hourly electricity consumption of 370 points/clients."""

    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        ElectricityLoadDiagramsConfig(
            name="uci",
            version=VERSION,
            description="Original UCI time series.",
        ),
        ElectricityLoadDiagramsConfig(
            name="lstnet",
            version=VERSION,
            description="Electricity time series preprocessed as in the LSTNet paper.",
        ),
    ]

    DEFAULT_CONFIG_NAME = "lstnet"

    def _info(self):
        features = datasets.Features(
            {
                "start": datasets.Value("timestamp[s]"),
                "target": datasets.Sequence(datasets.Value("float32")),
                "feat_static_cat": datasets.Sequence(datasets.Value("uint64")),
                # "feat_static_real": datasets.Sequence(datasets.Value("float32")),
                # "feat_dynamic_real": datasets.Sequence(datasets.Sequence(datasets.Value("uint64"))),
                # "feat_dynamic_cat": datasets.Sequence(datasets.Sequence(datasets.Value("uint64"))),
                "item_id": datasets.Value("string"),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_URL)

        train_ts = []
        val_ts = []
        test_ts = []

        df = pd.read_csv(
            Path(data_dir) / "LD2011_2014.txt",
            sep=";",
            index_col=0,
            parse_dates=True,
            decimal=",",
        )
        df.sort_index(inplace=True)
        df = df.resample(self.config.freq).sum()
        unit = pd.tseries.frequencies.to_offset(self.config.freq).name

        if self.config.name == "uci":
            val_end_date = df.index.max() - pd.Timedelta(
                self.config.prediction_length * self.config.rolling_evaluations, unit
            )
            train_end_date = val_end_date - pd.Timedelta(self.config.prediction_length, unit)
        else:
            # restrict the time series to the years 2012 to 2014
            df = df[(df.index.year >= 2012) & (df.index.year <= 2014)]

            # drop time series which are zero at the start
            df = df.T[df.iloc[0] > 0].T

            # train/val/test split from the LSTNet paper:
            # validation ends at 8/10-th of the time series
            val_end_date = df.index[int(len(df) * (8 / 10)) - 1]
            # training ends at 6/10-th of the time series
            train_end_date = df.index[int(len(df) * (6 / 10)) - 1]

        for cat, (ts_id, ts) in enumerate(df.items()):
            start_date = ts.ne(0).idxmax()

            sliced_ts = ts[start_date:train_end_date]
            train_ts.append(
                to_dict(
                    target_values=sliced_ts.values,
                    start=start_date,
                    cat=[cat],
                    item_id=ts_id,
                )
            )

            sliced_ts = ts[start_date:val_end_date]
            val_ts.append(
                to_dict(
                    target_values=sliced_ts.values,
                    start=start_date,
                    cat=[cat],
                    item_id=ts_id,
                )
            )

        for i in range(self.config.rolling_evaluations):
            for cat, (ts_id, ts) in enumerate(df.items()):
                start_date = ts.ne(0).idxmax()

                test_end_date = val_end_date + pd.Timedelta(self.config.prediction_length * (i + 1), unit)
                sliced_ts = ts[start_date:test_end_date]
                test_ts.append(
                    to_dict(
                        target_values=sliced_ts.values,
                        start=start_date,
                        cat=[cat],
                        item_id=ts_id,
                    )
                )

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "split": train_ts,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "split": test_ts,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "split": val_ts,
                },
            ),
        ]

    def _generate_examples(self, split):
        for key, row in enumerate(split):
            yield key, row
```
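The test split built in `_split_generators` grows by one forecast horizon per rolling evaluation. The helper below is a standalone sketch of that indexing (integer step positions rather than timestamps, for illustration only):

```python
# Each rolling evaluation extends the series end by one more forecast horizon.
PREDICTION_LENGTH = 24
ROLLING_EVALUATIONS = 7

def rolling_window_ends(val_end: int) -> list:
    """End position of each rolling test window, mirroring the loop in
    _split_generators: window i ends prediction_length * (i + 1) steps
    after the end of the validation split."""
    return [val_end + PREDICTION_LENGTH * (i + 1) for i in range(ROLLING_EVALUATIONS)]
```

With a validation split ending at step 832, the seven test windows end at steps 856 through 1000, each 24 steps apart.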
utils.py ADDED
@@ -0,0 +1,34 @@
```python
from typing import Any, Dict, List, Optional

import numpy as np
import pandas as pd


def to_dict(
    target_values: np.ndarray,
    start: pd.Timestamp,
    cat: Optional[List[int]] = None,
    item_id: Optional[Any] = None,
    real: Optional[np.ndarray] = None,
) -> Dict:
    def serialize(x):
        if np.isnan(x):
            return "NaN"
        else:
            return float("{0:.6f}".format(float(x)))

    res = {
        "start": start,
        "target": [serialize(x) for x in target_values],
    }

    if cat is not None:
        res["feat_static_cat"] = cat

    if item_id is not None:
        res["item_id"] = item_id

    if real is not None:
        res["feat_dynamic_real"] = real.astype(np.float32).tolist()
    return res
```
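A quick usage sketch of `to_dict` with synthetic values (the function is restated here so the snippet runs on its own): NaNs are serialized as the string `"NaN"` and floats are rounded to 6 decimal places.

```python
from typing import Any, Dict, List, Optional

import numpy as np
import pandas as pd


def to_dict(
    target_values: np.ndarray,
    start: pd.Timestamp,
    cat: Optional[List[int]] = None,
    item_id: Optional[Any] = None,
    real: Optional[np.ndarray] = None,
) -> Dict:
    def serialize(x):
        # NaNs become the string "NaN"; floats are rounded to 6 decimals.
        if np.isnan(x):
            return "NaN"
        return float("{0:.6f}".format(float(x)))

    res = {"start": start, "target": [serialize(x) for x in target_values]}
    if cat is not None:
        res["feat_static_cat"] = cat
    if item_id is not None:
        res["item_id"] = item_id
    if real is not None:
        res["feat_dynamic_real"] = real.astype(np.float32).tolist()
    return res


# Synthetic example values, for illustration only.
entry = to_dict(
    target_values=np.array([14.0, np.nan, 21.123456789]),
    start=pd.Timestamp("2012-01-01"),
    cat=[0],
    item_id="0",
)
```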