parquet-converter committed
Commit 44c74e4
1 parent: 273b0eb

Update parquet files
README.md DELETED
@@ -1,314 +0,0 @@
- ---
- annotations_creators:
- - no-annotation
- language_creators:
- - found
- language: []
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- pretty_name: Electricity Transformer Temperature
- size_categories:
- - 1K<n<10K
- source_datasets:
- - original
- task_categories:
- - time-series-forecasting
- task_ids:
- - univariate-time-series-forecasting
- - multivariate-time-series-forecasting
- dataset_info:
- - config_name: h1
-   features:
-   - name: start
-     dtype: timestamp[s]
-   - name: target
-     sequence: float32
-   - name: feat_static_cat
-     sequence: uint64
-   - name: feat_dynamic_real
-     sequence:
-       sequence: float32
-   - name: item_id
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 241978
-     num_examples: 1
-   - name: test
-     num_bytes: 77508960
-     num_examples: 240
-   - name: validation
-     num_bytes: 33916080
-     num_examples: 120
-   download_size: 2589657
-   dataset_size: 111667018
- - config_name: h2
-   features:
-   - name: start
-     dtype: timestamp[s]
-   - name: target
-     sequence: float32
-   - name: feat_static_cat
-     sequence: uint64
-   - name: feat_dynamic_real
-     sequence:
-       sequence: float32
-   - name: item_id
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 241978
-     num_examples: 1
-   - name: test
-     num_bytes: 77508960
-     num_examples: 240
-   - name: validation
-     num_bytes: 33916080
-     num_examples: 120
-   download_size: 2417960
-   dataset_size: 111667018
- - config_name: m1
-   features:
-   - name: start
-     dtype: timestamp[s]
-   - name: target
-     sequence: float32
-   - name: feat_static_cat
-     sequence: uint64
-   - name: feat_dynamic_real
-     sequence:
-       sequence: float32
-   - name: item_id
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 967738
-     num_examples: 1
-   - name: test
-     num_bytes: 1239008640
-     num_examples: 960
-   - name: validation
-     num_bytes: 542089920
-     num_examples: 480
-   download_size: 10360719
-   dataset_size: 1782066298
- - config_name: m2
-   features:
-   - name: start
-     dtype: timestamp[s]
-   - name: target
-     sequence: float32
-   - name: feat_static_cat
-     sequence: uint64
-   - name: feat_dynamic_real
-     sequence:
-       sequence: float32
-   - name: item_id
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 967738
-     num_examples: 1
-   - name: test
-     num_bytes: 1239008640
-     num_examples: 960
-   - name: validation
-     num_bytes: 542089920
-     num_examples: 480
-   download_size: 9677236
-   dataset_size: 1782066298
- ---
- 
- # Dataset Card for [Electricity Transformer Temperature](https://github.com/zhouhaoyi/ETDataset)
- 
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
- 
- ## Dataset Description
- 
- - **Homepage:** [Electricity Transformer Dataset](https://github.com/zhouhaoyi/ETDataset)
- - **Repository:** https://github.com/zhouhaoyi/ETDataset
- - **Paper:** [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436)
- - **Point of Contact:** [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn)
- 
- ### Dataset Summary
- 
- The electric power distribution problem is the distribution of electricity to different areas according to their sequential usage. Predicting the future demand of a specific area is difficult, as it varies with weekdays, holidays, seasons, weather, temperature, etc., and no existing method can make long-term predictions on very long real-world series with high precision. False predictions can damage the electrical transformer, so without an efficient way to predict future electric usage, managers must make decisions based on an empirical number that is much higher than real-world demand, causing unnecessary waste of electricity and equipment depreciation. The oil temperature, on the other hand, reflects the condition of the transformer, so one of the most efficient strategies is to predict whether the transformer's oil temperature will stay in a safe range and thereby avoid unnecessary waste. To enable this, the authors and Beijing Guowang Fuda Science & Technology Development Company have provided two years' worth of data.
- 
- Specifically, the dataset combines short-term periodical patterns, long-term periodical patterns, long-term trends, and many irregular patterns. The data are obtained from 2 electricity transformers at 2 stations and come at a `1H` (hourly) or `15T` (15-minute) frequency, containing 2 years * 365 days * 24 hours * (4 for `15T`) = 17,520 (70,080 for `15T`) data points.
- 
- The target time series is the **O**il **T**emperature and the dataset comes with the following 6 covariates in the univariate setup:
- * **H**igh **U**se**F**ul **L**oad
- * **H**igh **U**se**L**ess **L**oad
- * **M**iddle **U**se**F**ul **L**oad
- * **M**iddle **U**se**L**ess **L**oad
- * **L**ow **U**se**F**ul **L**oad
- * **L**ow **U**se**L**ess **L**oad
- 
- ### Dataset Usage
- 
- To load a particular variant of the dataset, just specify its name, e.g.:
- 
- ```python
- load_dataset("ett", "m1", multivariate=False)  # univariate 15-min frequency series from the first transformer
- ```
- 
- or to specify a prediction length:
- 
- ```python
- load_dataset("ett", "h2", prediction_length=48)  # hourly series from the second transformer with a prediction length of 48 hours
- ```
- 
- ### Supported Tasks and Leaderboards
- 
- The time series data is split into train/val/test sets of 12/4/4 months, respectively. Given the prediction length (default: 1 day, i.e. 24 hours or 24*4 `15T` steps), rolling windows of this size are created for the val/test sets.
- 
- #### `time-series-forecasting`
- 
- ##### `univariate-time-series-forecasting`
- 
- The univariate time series forecasting task involves learning the future one-dimensional `target` values of a time series in a dataset for some `prediction_length` time steps. The performance of forecast models can then be validated via the ground truth in the `validation` split and tested via the `test` split. The covariates are stored in the `feat_dynamic_real` key of each time series.
- 
- ##### `multivariate-time-series-forecasting`
- 
- The multivariate time series forecasting task involves learning the future vector of `target` values of a time series in a dataset for some `prediction_length` time steps. As in the univariate setting, the performance of a multivariate model can be validated via the ground truth in the `validation` split and tested via the `test` split.
- 
203
- ### Languages
204
-
205
- ## Dataset Structure
206
-
207
- ### Data Instances
208
-
209
- A sample from the training set is provided below:
210
-
211
- ```python
212
- {
213
- 'start': datetime.datetime(2012, 1, 1, 0, 0),
214
- 'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, ...],
215
- 'feat_static_cat': [0],
216
- 'feat_dynamic_real': [[0.3, 0.4], [0.1, 0.6], ...],
217
- 'item_id': 'OT'
218
- }
219
- ```
220
-
221
- ### Data Fields
222
-
223
- For the univariate regular time series each series has the following keys:
224
-
225
- * `start`: a datetime of the first entry of each time series in the dataset
226
- * `target`: an array[float32] of the actual target values
227
- * `feat_static_cat`: an array[uint64] which contains a categorical identifier of each time series in the dataset
228
- * `feat_dynamic_real`: optional array of covariate features
229
- * `item_id`: a string identifier of each time series in a dataset for reference
230
-
231
- For the multivariate time series the `target` is a vector of the multivariate dimension for each time point.
232
-
233
- ### Data Splits
234
-
235
- The time series data is split into train/val/test set of 12/4/4 months respectively.
236
-
237
- ## Dataset Creation
238
-
239
- ### Curation Rationale
240
-
241
- Develop time series methods that can perform a long-term prediction based on super long-term real-world data with high precision.
242
-
243
- ### Source Data
244
-
245
- #### Initial Data Collection and Normalization
246
-
247
- [More Information Needed]
248
-
249
- #### Who are the source language producers?
250
-
251
- [More Information Needed]
252
-
253
- ### Annotations
254
-
255
- #### Annotation process
256
-
257
- [More Information Needed]
258
-
259
- #### Who are the annotators?
260
-
261
- [More Information Needed]
262
-
263
- ### Personal and Sensitive Information
264
-
265
- [More Information Needed]
266
-
267
- ## Considerations for Using the Data
268
-
269
- ### Social Impact of Dataset
270
-
271
- [More Information Needed]
272
-
273
- ### Discussion of Biases
274
-
275
- [More Information Needed]
276
-
277
- ### Other Known Limitations
278
-
279
- [More Information Needed]
280
-
281
- ## Additional Information
282
-
283
- ### Dataset Curators
284
-
285
- * [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn)
286
-
287
- ### Licensing Information
288
-
289
- [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
290
-
291
- ### Citation Information
292
-
293
- ```tex
294
- @inproceedings{haoyietal-informer-2021,
295
- author = {Haoyi Zhou and
296
- Shanghang Zhang and
297
- Jieqi Peng and
298
- Shuai Zhang and
299
- Jianxin Li and
300
- Hui Xiong and
301
- Wancai Zhang},
302
- title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},
303
- booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},
304
- volume = {35},
305
- number = {12},
306
- pages = {11106--11115},
307
- publisher = {{AAAI} Press},
308
- year = {2021},
309
- }
310
- ```
311
-
312
- ### Contributions
313
-
314
- Thanks to [@kashif](https://github.com/kashif) for adding this dataset.
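
The 12/4/4-month split and rolling-window evaluation described in the deleted card can be sketched with the same 30-day-month arithmetic the repository's `ett.py` loader used. `split_windows` is a hypothetical helper written for illustration, not part of the dataset's API:

```python
# Sketch of the rolling-window split from the deleted card: the first 12 months
# are train, validation windows cover the next 4 months, and test windows span
# 8 months past the train end, each window stepped by `prediction_length`.

def split_windows(factor: int = 1, prediction_length: int = 24):
    """Return (train_end, val_window_starts, test_window_starts) as row indices."""
    train_end = 12 * 30 * 24 * factor            # 12 months of train data
    val_end = train_end + 4 * 30 * 24 * factor   # validation: next 4 months
    test_end = train_end + 8 * 30 * 24 * factor  # test: 8 months past train end
    val_starts = list(range(train_end, val_end, prediction_length))
    test_starts = list(range(train_end, test_end, prediction_length))
    return train_end, val_starts, test_starts

# h1 (hourly, factor=1, 24-hour windows): 120 validation and 240 test windows,
# matching the num_examples recorded in the split metadata above.
train_end, val_starts, test_starts = split_windows(factor=1, prediction_length=24)
```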
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"h1": {"description": "The data of Electricity Transformers from two separated counties\nin China collected for two years at hourly and 15-min frequencies.\nEach data point consists of the target value \"oil temperature\" and\n6 power load features. The train/val/test is 12/4/4 months.\n", "citation": "@inproceedings{haoyietal-informer-2021,\n author = {Haoyi Zhou and\n Shanghang Zhang and\n Jieqi Peng and\n Shuai Zhang and\n Jianxin Li and\n Hui Xiong and\n Wancai Zhang},\n title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},\n booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},\n volume = {35},\n number = {12},\n pages = {11106--11115},\n publisher = {{AAAI} Press},\n year = {2021},\n}\n", "homepage": "https://github.com/zhouhaoyi/ETDataset", "license": "The Creative Commons Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/", "features": {"start": {"dtype": "timestamp[s]", "id": null, "_type": "Value"}, "target": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "feat_static_cat": {"feature": {"dtype": "uint64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "feat_dynamic_real": {"feature": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "item_id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ett", "config_name": "h1", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 241978, "num_examples": 1, "dataset_name": "ett"}, "test": {"name": "test", "num_bytes": 77508960, "num_examples": 240, "dataset_name": "ett"}, "validation": {"name": 
"validation", "num_bytes": 33916080, "num_examples": 120, "dataset_name": "ett"}}, "download_checksums": {"https://raw.githubusercontent.com/zhouhaoyi/ETDataset/main/ETT-small/ETTh1.csv": {"num_bytes": 2589657, "checksum": "f18de3ad269cef59bb07b5438d79bb3042d3be49bdeecf01c1cd6d29695ee066"}}, "download_size": 2589657, "post_processing_size": null, "dataset_size": 111667018, "size_in_bytes": 114256675}, "h2": {"description": "The data of Electricity Transformers from two separated counties\nin China collected for two years at hourly and 15-min frequencies.\nEach data point consists of the target value \"oil temperature\" and\n6 power load features. The train/val/test is 12/4/4 months.\n", "citation": "@inproceedings{haoyietal-informer-2021,\n author = {Haoyi Zhou and\n Shanghang Zhang and\n Jieqi Peng and\n Shuai Zhang and\n Jianxin Li and\n Hui Xiong and\n Wancai Zhang},\n title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},\n booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},\n volume = {35},\n number = {12},\n pages = {11106--11115},\n publisher = {{AAAI} Press},\n year = {2021},\n}\n", "homepage": "https://github.com/zhouhaoyi/ETDataset", "license": "The Creative Commons Attribution 4.0 International License. 
https://creativecommons.org/licenses/by/4.0/", "features": {"start": {"dtype": "timestamp[s]", "id": null, "_type": "Value"}, "target": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "feat_static_cat": {"feature": {"dtype": "uint64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "feat_dynamic_real": {"feature": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "item_id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ett", "config_name": "h2", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 241978, "num_examples": 1, "dataset_name": "ett"}, "test": {"name": "test", "num_bytes": 77508960, "num_examples": 240, "dataset_name": "ett"}, "validation": {"name": "validation", "num_bytes": 33916080, "num_examples": 120, "dataset_name": "ett"}}, "download_checksums": {"https://raw.githubusercontent.com/zhouhaoyi/ETDataset/main/ETT-small/ETTh2.csv": {"num_bytes": 2417960, "checksum": "a3dc2c597b9218c7ce1cd55eb77b283fd459a1d09d753063f944967dd6b9218b"}}, "download_size": 2417960, "post_processing_size": null, "dataset_size": 111667018, "size_in_bytes": 114084978}, "m1": {"description": "The data of Electricity Transformers from two separated counties\nin China collected for two years at hourly and 15-min frequencies.\nEach data point consists of the target value \"oil temperature\" and\n6 power load features. 
The train/val/test is 12/4/4 months.\n", "citation": "@inproceedings{haoyietal-informer-2021,\n author = {Haoyi Zhou and\n Shanghang Zhang and\n Jieqi Peng and\n Shuai Zhang and\n Jianxin Li and\n Hui Xiong and\n Wancai Zhang},\n title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},\n booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},\n volume = {35},\n number = {12},\n pages = {11106--11115},\n publisher = {{AAAI} Press},\n year = {2021},\n}\n", "homepage": "https://github.com/zhouhaoyi/ETDataset", "license": "The Creative Commons Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/", "features": {"start": {"dtype": "timestamp[s]", "id": null, "_type": "Value"}, "target": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "feat_static_cat": {"feature": {"dtype": "uint64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "feat_dynamic_real": {"feature": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "item_id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ett", "config_name": "m1", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 967738, "num_examples": 1, "dataset_name": "ett"}, "test": {"name": "test", "num_bytes": 1239008640, "num_examples": 960, "dataset_name": "ett"}, "validation": {"name": "validation", "num_bytes": 542089920, "num_examples": 480, "dataset_name": "ett"}}, "download_checksums": {"https://raw.githubusercontent.com/zhouhaoyi/ETDataset/main/ETT-small/ETTm1.csv": {"num_bytes": 10360719, "checksum": 
"6ce1759b1a18e3328421d5d75fadcb316c449fcd7cec32820c8dafda71986c9e"}}, "download_size": 10360719, "post_processing_size": null, "dataset_size": 1782066298, "size_in_bytes": 1792427017}, "m2": {"description": "The data of Electricity Transformers from two separated counties\nin China collected for two years at hourly and 15-min frequencies.\nEach data point consists of the target value \"oil temperature\" and\n6 power load features. The train/val/test is 12/4/4 months.\n", "citation": "@inproceedings{haoyietal-informer-2021,\n author = {Haoyi Zhou and\n Shanghang Zhang and\n Jieqi Peng and\n Shuai Zhang and\n Jianxin Li and\n Hui Xiong and\n Wancai Zhang},\n title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},\n booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},\n volume = {35},\n number = {12},\n pages = {11106--11115},\n publisher = {{AAAI} Press},\n year = {2021},\n}\n", "homepage": "https://github.com/zhouhaoyi/ETDataset", "license": "The Creative Commons Attribution 4.0 International License. 
https://creativecommons.org/licenses/by/4.0/", "features": {"start": {"dtype": "timestamp[s]", "id": null, "_type": "Value"}, "target": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "feat_static_cat": {"feature": {"dtype": "uint64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "feat_dynamic_real": {"feature": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "item_id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "ett", "config_name": "m2", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 967738, "num_examples": 1, "dataset_name": "ett"}, "test": {"name": "test", "num_bytes": 1239008640, "num_examples": 960, "dataset_name": "ett"}, "validation": {"name": "validation", "num_bytes": 542089920, "num_examples": 480, "dataset_name": "ett"}}, "download_checksums": {"https://raw.githubusercontent.com/zhouhaoyi/ETDataset/main/ETT-small/ETTm2.csv": {"num_bytes": 9677236, "checksum": "db973ca252c6410a30d0469b13d696cf919648d0f3fd588c60f03fdbdbadd1fd"}}, "download_size": 9677236, "post_processing_size": null, "dataset_size": 1782066298, "size_in_bytes": 1791743534}}
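
As a quick sanity check on the deleted metadata, each config's `dataset_size` is simply the sum of its split `num_bytes`; with the `h1` figures listed above:

```python
# h1 split sizes as recorded in the deleted dataset_infos.json
h1_num_bytes = {"train": 241978, "test": 77508960, "validation": 33916080}

# dataset_size is the plain sum of the per-split byte counts
dataset_size = sum(h1_num_bytes.values())
print(dataset_size)  # → 111667018
```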
 
 
ett.py DELETED
@@ -1,242 +0,0 @@
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """Electricity Transformer Temperature (ETT) dataset."""
- from dataclasses import dataclass
-
- import pandas as pd
-
- import datasets
-
-
- _CITATION = """\
- @inproceedings{haoyietal-informer-2021,
-   author    = {Haoyi Zhou and
-                Shanghang Zhang and
-                Jieqi Peng and
-                Shuai Zhang and
-                Jianxin Li and
-                Hui Xiong and
-                Wancai Zhang},
-   title     = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},
-   booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference},
-   volume    = {35},
-   number    = {12},
-   pages     = {11106--11115},
-   publisher = {{AAAI} Press},
-   year      = {2021},
- }
- """
-
- _DESCRIPTION = """\
- The data of Electricity Transformers from two separated counties
- in China collected for two years at hourly and 15-min frequencies.
- Each data point consists of the target value "oil temperature" and
- 6 power load features. The train/val/test is 12/4/4 months.
- """
-
- _HOMEPAGE = "https://github.com/zhouhaoyi/ETDataset"
-
- _LICENSE = "The Creative Commons Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/"
-
- # The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
- # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
- _URLS = {
-     "h1": "https://raw.githubusercontent.com/zhouhaoyi/ETDataset/main/ETT-small/ETTh1.csv",
-     "h2": "https://raw.githubusercontent.com/zhouhaoyi/ETDataset/main/ETT-small/ETTh2.csv",
-     "m1": "https://raw.githubusercontent.com/zhouhaoyi/ETDataset/main/ETT-small/ETTm1.csv",
-     "m2": "https://raw.githubusercontent.com/zhouhaoyi/ETDataset/main/ETT-small/ETTm2.csv",
- }
-
-
- @dataclass
- class ETTBuilderConfig(datasets.BuilderConfig):
-     """ETT builder config."""
-
-     prediction_length: int = 24
-     multivariate: bool = False
-
-
- class ETT(datasets.GeneratorBasedBuilder):
-     """Electricity Transformer Temperature (ETT) dataset"""
-
-     VERSION = datasets.Version("1.0.0")
-
-     # You will be able to load one or the other configuration in the following list with
-     # data = datasets.load_dataset('ett', 'h1')
-     # data = datasets.load_dataset('ett', 'm2')
-     BUILDER_CONFIGS = [
-         ETTBuilderConfig(
-             name="h1",
-             version=VERSION,
-             description="Time series from first county at hourly frequency.",
-         ),
-         ETTBuilderConfig(
-             name="h2",
-             version=VERSION,
-             description="Time series from second county at hourly frequency.",
-         ),
-         ETTBuilderConfig(
-             name="m1",
-             version=VERSION,
-             description="Time series from first county at 15-min frequency.",
-         ),
-         ETTBuilderConfig(
-             name="m2",
-             version=VERSION,
-             description="Time series from second county at 15-min frequency.",
-         ),
-     ]
-
-     DEFAULT_CONFIG_NAME = "h1"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
-
-     def _info(self):
-         if self.config.multivariate:
-             features = datasets.Features(
-                 {
-                     "start": datasets.Value("timestamp[s]"),
-                     "target": datasets.Sequence(datasets.Sequence(datasets.Value("float32"))),
-                     "feat_static_cat": datasets.Sequence(datasets.Value("uint64")),
-                     "item_id": datasets.Value("string"),
-                 }
-             )
-         else:
-             features = datasets.Features(
-                 {
-                     "start": datasets.Value("timestamp[s]"),
-                     "target": datasets.Sequence(datasets.Value("float32")),
-                     "feat_static_cat": datasets.Sequence(datasets.Value("uint64")),
-                     "feat_dynamic_real": datasets.Sequence(datasets.Sequence(datasets.Value("float32"))),
-                     "item_id": datasets.Value("string"),
-                 }
-             )
-
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=features,  # Here we define them above because they are different between the two configurations
-             # If there's a common (input, target) tuple from the features, uncomment supervised_keys line below and
-             # specify them. They'll be used if as_supervised=True in builder.as_dataset.
-             # supervised_keys=("sentence", "label"),
-             # Homepage of the dataset for documentation
-             homepage=_HOMEPAGE,
-             # License for the dataset if available
-             license=_LICENSE,
-             # Citation for the dataset
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         urls = _URLS[self.config.name]
-         filepath = dl_manager.download_and_extract(urls)
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": filepath,
-                     "split": "train",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": filepath,
-                     "split": "test",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": filepath,
-                     "split": "dev",
-                 },
-             ),
-         ]
-
-     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
-     def _generate_examples(self, filepath, split):
-         data = pd.read_csv(filepath, parse_dates=True, index_col=0)
-         start_date = data.index.min()
-
-         if self.config.name in ["m1", "m2"]:
-             factor = 4  # 15-min frequency
-         else:
-             factor = 1  # hourly frequency
-         train_end_date_index = 12 * 30 * 24 * factor  # 1 year
-
-         if split == "dev":
-             end_date_index = train_end_date_index + 4 * 30 * 24 * factor  # 1 year + 4 months
-         else:
-             end_date_index = train_end_date_index + 8 * 30 * 24 * factor  # 1 year + 8 months
-
-         if self.config.multivariate:
-             if split in ["test", "dev"]:
-                 # rolling windows of prediction_length for dev and test
-                 for i, index in enumerate(
-                     range(
-                         train_end_date_index,
-                         end_date_index,
-                         self.config.prediction_length,
-                     )
-                 ):
-                     yield i, {
-                         "start": start_date,
-                         "target": data[: index + self.config.prediction_length].values.astype("float32").T,
-                         "feat_static_cat": [0],
-                         "item_id": "0",
-                     }
-             else:
-                 yield 0, {
-                     "start": start_date,
-                     "target": data[:train_end_date_index].values.astype("float32").T,
-                     "feat_static_cat": [0],
-                     "item_id": "0",
-                 }
-         else:
-             if split in ["test", "dev"]:
-                 # rolling windows of prediction_length for dev and test
-                 for i, index in enumerate(
-                     range(
-                         train_end_date_index,
-                         end_date_index,
-                         self.config.prediction_length,
-                     )
-                 ):
-                     target = data["OT"][: index + self.config.prediction_length].values.astype("float32")
-                     feat_dynamic_real = data[["HUFL", "HULL", "MUFL", "MULL", "LUFL", "LULL"]][
-                         : index + self.config.prediction_length
-                     ].values.T.astype("float32")
-                     yield i, {
-                         "start": start_date,
-                         "target": target,
-                         "feat_dynamic_real": feat_dynamic_real,
-                         "feat_static_cat": [0],
-                         "item_id": "OT",
-                     }
-             else:
-                 target = data["OT"][:train_end_date_index].values.astype("float32")
-                 feat_dynamic_real = data[["HUFL", "HULL", "MUFL", "MULL", "LUFL", "LULL"]][
-                     :train_end_date_index
-                 ].values.T.astype("float32")
-                 yield 0, {
-                     "start": start_date,
-                     "target": target,
-                     "feat_dynamic_real": feat_dynamic_real,
-                     "feat_static_cat": [0],
-                     "item_id": "OT",
-                 }
 
h1/ett-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dc84bc5029e3805ea4f04041eaa27f0b2909655471147a36a6e439f7f21a1451
+ size 24566760
h1/ett-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:876fa8a91dd05965e6338ec84bda18c0e6244ee46c2d2f9b664316231f60f0de
+ size 92925
h1/ett-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:831c0882564fca17a0f966538f240c5f39822110e66ea6ae6c4e8cf225b0ea93
+ size 10531285
h2/ett-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:457b52ddc236162faef1b3cab33270e5f1a63ca48922b3884bc4ed85514483dc
+ size 25816073
h2/ett-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:581df7980d745f45daae0b7d4bb9fbd405e719d3bfd6ffdcae843f0c67a2b15a
+ size 101813
h2/ett-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce3cdff554fa36561ad9d362e1c017367119cabbd25f7a2da5ad18ea6e85e681
+ size 10990372
m1/ett-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47b4c3d74d590126abeff2ac1a2bd221a12f1254c2fe46242a9992de7de56942
+ size 408301336
m1/ett-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:754175eb25d2327dd5adf32bb0b7c1c0b1007f9a59a9179ca7d535f37e8bca55
+ size 334194
m1/ett-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:80962ad9a830ba8a6a3d9bd150db82c4c4b780266602ea8da5f5b7ecd1314b0b
+ size 178349929
m2/ett-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e304e411c431c9ed4560beef331bb92d51b9c91b6fd2b24a6ce08cc57a49959a
+ size 394266924
m2/ett-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b940c47f52e310ec5ea1ff0751a88fa7ede664898aff76ae366883fe92b220a
+ size 336174
m2/ett-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58eaeb7c23979f763e88bc371e71af0903829916d4e7a530d27fad928f4fb9e7
+ size 171667570
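
The three-line stubs added above are standard git-lfs pointer files (`key value` pairs, one per line). A minimal sketch of reading one, shown on the `h1/ett-test.parquet` pointer; `parse_lfs_pointer` is a hypothetical helper written for illustration:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into its version, sha256 oid, and size."""
    # Each line is "<key> <value>"; split on the first space only.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].split(":", 1)[1],  # strip the "sha256:" prefix
        "size": int(fields["size"]),
    }

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:dc84bc5029e3805ea4f04041eaa27f0b2909655471147a36a6e439f7f21a1451
size 24566760
"""
info = parse_lfs_pointer(pointer)
```

The `size` field is the byte count of the real parquet object stored in LFS, not of the pointer file itself.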