---
license: apache-2.0
metrics:
- mse
---
# PatchTSMixer model pre-trained on ETTh1 dataset

[`PatchTSMixer`](https://huggingface.co/docs/transformers/model_doc/patchtsmixer) is a lightweight, fast multivariate time series forecasting model with state-of-the-art performance on benchmark datasets. This repository provides a `PatchTSMixer` model pre-trained on all seven channels of the `ETTh1` dataset. The pre-trained model achieves a Mean Squared Error (MSE) of 0.37 on the test split of `ETTh1`.

For a walkthrough of training and evaluating a `PatchTSMixer` model, see [this notebook](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tsmixer_getting_started.ipynb).

## Model Details

The PatchTSMixer model was proposed in [TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting](https://arxiv.org/pdf/2306.09364.pdf) by Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam.

PatchTSMixer is a lightweight time-series modeling approach based on the MLP-Mixer architecture. The [Hugging Face implementation](https://huggingface.co/docs/transformers/model_doc/patchtsmixer) supports lightweight mixing across patches, channels, and hidden features for effective multivariate time-series modeling, along with attention mechanisms ranging from simple gated attention to more complex, customizable self-attention blocks. The model can be pre-trained and subsequently used for downstream tasks such as forecasting, classification, and regression.
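
These mixing options map onto `PatchTSMixerConfig` in `transformers`. As a rough illustration, the sketch below builds a small forecasting model; the hyperparameter values are assumptions for demonstration, not the settings of this checkpoint.

```python
from transformers import PatchTSMixerConfig, PatchTSMixerForPrediction

# Illustrative configuration: these values are assumptions for demonstration,
# not the hyperparameters used to pre-train this checkpoint.
config = PatchTSMixerConfig(
    context_length=512,      # length of the input history window
    prediction_length=96,    # forecast horizon
    num_input_channels=7,    # ETTh1 has seven channels
    patch_length=16,         # each channel is split into patches of this length
    patch_stride=16,         # non-overlapping patches
    d_model=48,              # hidden feature size mixed by the MLP blocks
    num_layers=3,            # number of mixer layers
    mode="common_channel",   # "mix_channel" additionally mixes across channels
    gated_attn=True,         # simple gated attention in the backbone
)
model = PatchTSMixerForPrediction(config)
```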

### Model Description

TSMixer is a lightweight neural architecture composed exclusively of multi-layer perceptron (MLP) modules, designed for multivariate forecasting and representation learning on patched time series. The model draws inspiration from the success of MLP-Mixer models in computer vision. The paper demonstrates the challenges involved in adapting the Vision MLP-Mixer to time series and introduces empirically validated components to enhance accuracy. These include a novel design paradigm of attaching online reconciliation heads to the MLP-Mixer backbone to explicitly model time-series properties such as hierarchy and channel correlations. The paper also proposes a hybrid channel modeling approach to effectively handle noisy channel interactions and to generalize across diverse datasets, a common challenge in existing patch channel-mixing methods. Additionally, a simple gated attention mechanism is introduced in the backbone to prioritize important features. Together these lightweight components significantly enhance the learning capability of simple MLP structures, outperforming complex Transformer models with minimal compute. Moreover, TSMixer's modular design is compatible with both supervised and masked self-supervised learning methods, making it a promising building block for time-series foundation models. TSMixer outperforms state-of-the-art MLP and Transformer models in forecasting by a considerable margin of 8-60%, and outperforms the latest strong benchmarks of Patch-Transformer models by 1-2% with a significant reduction in memory and runtime (2-3x).

### Model Sources

- **Repository:** [PatchTSMixer Hugging Face](https://huggingface.co/docs/transformers/model_doc/patchtsmixer)
- **Paper:** [PatchTSMixer KDD 2023 paper](https://dl.acm.org/doi/abs/10.1145/3580305.3599533)
- **Demo:** [Get started with PatchTSMixer](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tsmixer_getting_started.ipynb)

## Uses

This pre-trained model can be fine-tuned or evaluated on any electricity-transformer dataset that has the same channels as `ETTh1`, namely `HUFL`, `HULL`, `MUFL`, `MULL`, `LUFL`, `LULL`, and `OT`. The data must be normalized before it is passed to the model; a minimal sketch is given below. For details on data pre-processing, please refer to the paper or the demo.
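
The snippet below is a minimal sketch of the kind of normalization the model expects: standard scaling with statistics fit on the training portion only. The raw-file URL and the training-split length follow common ETTh1 conventions and are assumptions here; the demo notebook is authoritative.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Load ETTh1: a `date` column followed by the seven channels listed above.
df = pd.read_csv(
    "https://raw.githubusercontent.com/zhouhaoyi/ETDataset/main/ETT-small/ETTh1.csv",
    parse_dates=["date"],
)
channels = ["HUFL", "HULL", "MUFL", "MULL", "LUFL", "LULL", "OT"]

# Fit scaling statistics on the training portion only, then apply everywhere.
# 12 * 30 * 24 hourly rows is the training length commonly used for ETTh1
# (an assumption; see Training Data below).
num_train = 12 * 30 * 24
scaler = StandardScaler().fit(df.loc[: num_train - 1, channels])
df[channels] = scaler.transform(df[channels])
```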

## How to Get Started with the Model

A full walkthrough of pre-training and evaluation is available in the [demo notebook](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tsmixer_getting_started.ipynb).
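
As a quick, minimal sketch of loading the checkpoint with `transformers` (the repository id below is a placeholder; substitute the id shown at the top of this page):

```python
import torch
from transformers import PatchTSMixerForPrediction

# Hypothetical repository id; replace with this model page's actual id.
model = PatchTSMixerForPrediction.from_pretrained("<this-repo-id>")

# One batch of normalized history: (batch, context_length, num_channels).
past_values = torch.randn(
    1, model.config.context_length, model.config.num_input_channels
)

with torch.no_grad():
    outputs = model(past_values=past_values)

# Forecast tensor of shape (batch, prediction_length, num_channels).
print(outputs.prediction_outputs.shape)
```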

## Training Details

### Training Data

[`ETTh1` train split](https://github.com/zhouhaoyi/ETDataset/blob/main/ETT-small/ETTh1.csv).
Train/validation/test splits are shown in the [demo](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tsmixer_getting_started.ipynb); a sketch of the conventional split follows below.
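
For reference, this is the split protocol commonly used for ETTh1 in the literature: 12 months train, 4 months validation, 4 months test, with 30-day months of hourly data. It is an assumption here; if the demo notebook differs, the notebook is authoritative.

```python
import pandas as pd

df = pd.read_csv(
    "https://raw.githubusercontent.com/zhouhaoyi/ETDataset/main/ETT-small/ETTh1.csv",
    parse_dates=["date"],
)

# Conventional ETT split lengths in hourly rows.
num_train, num_valid, num_test = 12 * 30 * 24, 4 * 30 * 24, 4 * 30 * 24

train_df = df.iloc[:num_train]
valid_df = df.iloc[num_train : num_train + num_valid]
test_df = df.iloc[num_train + num_valid : num_train + num_valid + num_test]
```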

#### Training Hyperparameters

Please refer to the [PatchTSMixer paper](https://arxiv.org/pdf/2306.09364.pdf).

#### Speeds, Sizes, Times

[More Information Needed]
66
+ ## Evaluation
67
+
68
+ <!-- This section describes the evaluation protocols and provides the results. -->
69
+
70
+ ### Testing Data, Factors & Metrics
71
+
72
+ #### Testing Data
73
+
74
+ [`ETTh1`/test split](https://github.com/zhouhaoyi/ETDataset/blob/main/ETT-small/ETTh1.csv).
75
+ Train/validation/test splits are shown in the [demo](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tsmixer_getting_started.ipynb).
76
+

#### Metrics

Mean Squared Error (MSE) between the forecast and the ground truth, averaged over all forecast steps and channels.
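
Concretely, a minimal sketch of MSE over a forecast tensor:

```python
import torch

def mse(forecast: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean squared error over batch, forecast horizon, and channels."""
    return ((forecast - target) ** 2).mean()
```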

### Results

The pre-trained model achieves an MSE of 0.37 on the `ETTh1` test split, as noted in the summary above.

#### Hardware

1 NVIDIA A100 GPU

#### Software

PyTorch

## Citation

**BibTeX:**
```
@article{ekambaram2023tsmixer,
  title={TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting},
  author={Ekambaram, Vijay and Jati, Arindam and Nguyen, Nam and Sinthong, Phanwadee and Kalagnanam, Jayant},
  journal={arXiv preprint arXiv:2306.09364},
  year={2023}
}
```

**APA:**
```
Ekambaram, V., Jati, A., Nguyen, N., Sinthong, P., & Kalagnanam, J. (2023). TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting. arXiv preprint arXiv:2306.09364.
```