
Dataset Card for "traffic_hourly"


Download the Dataset:

from datasets import load_dataset

dataset = load_dataset("LeoTungAnh/traffic_hourly")

Dataset Card for Traffic Hourly

This dataset comprises 862 hourly time series recording road occupancy rates across freeways in the San Francisco Bay Area from 2015 to 2016.

Preprocessing information:

  • Grouped by hour (frequency: "1H").
  • Applied standardization ("Std") as the preprocessing technique.
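The standardization step can be sketched as a zero-mean, unit-variance transform applied per series. This is a generic illustration of the technique, not the dataset authors' exact preprocessing code:

```python
# Generic standardization ("Std") sketch: scale one series to zero mean
# and unit (population) variance. Assumption: this mirrors, but is not,
# the exact script used to preprocess the dataset.
def standardize(series):
    n = len(series)
    mean = sum(series) / n
    std = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    return [(x - mean) / std for x in series]

z = standardize([10.0, 20.0, 30.0])
# z now has mean 0 and unit standard deviation
```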

Dataset information:

  • Number of time series: 862
  • Number of training samples: 17448
  • Number of validation samples: 17496 (number_of_training_samples + 48)
  • Number of testing samples: 17544 (number_of_validation_samples + 48)
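The split sizes above follow an expanding-window scheme with a 48-step (two-day) horizon: validation extends training by 48 hours, and testing extends validation by another 48. A minimal sketch of that arithmetic, using the counts from the card:

```python
PREDICTION_LENGTH = 48  # forecast horizon implied by the split sizes

test_length = 17544
validation_length = test_length - PREDICTION_LENGTH      # 17496
training_length = validation_length - PREDICTION_LENGTH  # 17448

series = list(range(test_length))  # stand-in for one 'target' series
train_split = series[:training_length]
val_split = series[:validation_length]  # training steps + 48
test_split = series[:test_length]       # validation steps + 48
```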

Dataset format:

  Dataset({
      features: ['start', 'target', 'feat_static_cat', 'feat_dynamic_real', 'item_id'],
      num_rows: 862
  })

Data format for a sample:

  • 'start': datetime.datetime

  • 'target': list of time series values

  • 'feat_static_cat': integer index of the time series

  • 'feat_dynamic_real': None

  • 'item_id': name of the time series

Data example:

{'start': datetime.datetime(2015, 1, 1, 0, 0, 1),
 'feat_static_cat': [0],
 'feat_dynamic_real': None,
 'item_id': 'T1',
 'target': [-0.7127609544951682, -0.6743409178438863, -0.3749847989359815, ... 0.12447567753068307,...]
}
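Each record can be consumed as a plain Python dict. The sketch below reuses the (truncated) example values shown above:

```python
import datetime

# Record shaped like the example above; 'target' is truncated here.
sample = {
    "start": datetime.datetime(2015, 1, 1, 0, 0, 1),
    "feat_static_cat": [0],
    "feat_dynamic_real": None,
    "item_id": "T1",
    "target": [-0.7127609544951682, -0.6743409178438863],
}

series_index = sample["feat_static_cat"][0]  # integer index of the series
n_steps = len(sample["target"])              # number of observed hours
label = f"{sample['item_id']} starts {sample['start'].isoformat()}"
```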

Usage:

  • The dataset can be used with the Transformer, Autoformer, and Informer models available in Hugging Face Transformers.
  • Other algorithms can extract the data directly through the 'target' feature.
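For non-Transformer workflows, the per-series 'target' lists can be stacked into a single array. A sketch with hypothetical records standing in for the loaded dataset:

```python
import numpy as np

# Hypothetical records mimicking the dataset schema; in practice, iterate
# over load_dataset("LeoTungAnh/traffic_hourly") instead.
records = [
    {"item_id": "T1", "target": [-0.71, -0.67, -0.72]},
    {"item_id": "T2", "target": [-0.64, -0.64, -0.74]},
]

matrix = np.array([rec["target"] for rec in records])
# matrix.shape == (number_of_series, number_of_time_steps)
```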