---
dataset_info:
  features:
    - name: start
      dtype: timestamp[s]
    - name: feat_static_cat
      sequence: uint64
    - name: feat_dynamic_real
      sequence:
        sequence: float32
    - name: item_id
      dtype: string
    - name: target
      sequence: float64
  splits:
    - name: train
      num_bytes: 120352440
      num_examples: 862
    - name: validation
      num_bytes: 120683448
      num_examples: 862
    - name: test
      num_bytes: 121014456
      num_examples: 862
  download_size: 124542918
  dataset_size: 362050344
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

# Dataset Card for "traffic_hourly"


**Download the Dataset:**

```python
from datasets import load_dataset

dataset = load_dataset("LeoTungAnh/traffic_hourly")
```

**Dataset description:**

This dataset comprises 862 hourly time series describing road occupancy rates on freeways in the San Francisco Bay Area from 2015 to 2016.

**Preprocessing information:**

- Grouped by hour (frequency: "1H").
- Applied standardization ("Std") as the preprocessing technique; a sketch follows this list.
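
A minimal sketch of what the standardization step typically looks like (assuming a per-series z-score; the exact statistics used to build the dataset are not documented here):

```python
import numpy as np

def standardize(series):
    # Z-score a single series: subtract its mean and divide by its standard
    # deviation. Assumption: statistics are computed per series; this is a
    # sketch of the "Std" step, not the exact preprocessing pipeline.
    values = np.asarray(series, dtype=np.float64)
    return (values - values.mean()) / values.std()
```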

**Dataset information:**

- Number of time series: 862
- Length of each training series: 17448 time steps
- Length of each validation series: 17496 time steps (training length + 48)
- Length of each test series: 17544 time steps (validation length + 48); the snippet after this list verifies these figures
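
The figures above can be checked directly from the loaded dataset:

```python
from datasets import load_dataset

dataset = load_dataset("LeoTungAnh/traffic_hourly")

# Each split holds 862 series; the length of each series differs per split
# (train: 17448, validation: 17496, test: 17544 hourly steps).
for split in ["train", "validation", "test"]:
    print(split, dataset[split].num_rows, len(dataset[split][0]["target"]))
```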

**Dataset format:**

```
Dataset({
    features: ['start', 'target', 'feat_static_cat', 'feat_dynamic_real', 'item_id'],
    num_rows: 862
})
```

**Data format for a sample:**

- 'start': a datetime.datetime giving the first timestamp of the series
- 'target': list of float values of the time series
- 'feat_static_cat': index of the time series
- 'feat_dynamic_real': None
- 'item_id': name of the time series

**Data example:**

```python
{'start': datetime.datetime(2015, 1, 1, 0, 0, 1),
 'feat_static_cat': [0],
 'feat_dynamic_real': None,
 'item_id': 'T1',
 'target': [-0.7127609544951682, -0.6743409178438863, -0.3749847989359815, ... 0.12447567753068307, ...]
}
```
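
For reference, a sample can be turned into a time-indexed pandas Series by combining its 'start' timestamp, the hourly frequency, and the 'target' values (a sketch based on the example above; field names follow the dataset schema):

```python
import pandas as pd
from datasets import load_dataset

dataset = load_dataset("LeoTungAnh/traffic_hourly")
sample = dataset["train"][0]

# Build an hourly DatetimeIndex starting at the sample's 'start' timestamp,
# then attach the standardized target values to it.
index = pd.date_range(start=sample["start"], periods=len(sample["target"]), freq="1H")
series = pd.Series(sample["target"], index=index, name=sample["item_id"])
print(series.head())
```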

**Usage:**

- The dataset can be used with the Transformer, Autoformer, and Informer time series models available in Hugging Face.
- Other algorithms can consume the data directly through the 'target' feature, as sketched below.
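
A minimal sketch of pulling the raw series into a NumPy array for use outside Hugging Face models:

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("LeoTungAnh/traffic_hourly")

# Stack all 862 standardized series into a single array of shape
# (num_series, series_length), e.g. (862, 17448) for the training split.
train_targets = np.array(dataset["train"]["target"], dtype=np.float32)
print(train_targets.shape)
```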