vijaye12 committed on
Commit
302e839
1 Parent(s): d96e0b2

Update readme (TTM)


TTM Model card updates based on the new arXiv paper release.

Files changed (1)
  1. README.md +20 -13
README.md CHANGED
@@ -13,7 +13,8 @@ TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Serie
 
 TTM outperforms several popular benchmarks demanding billions of parameters in zero-shot and few-shot forecasting. TTMs are lightweight
 forecasters, pre-trained on publicly available time series data with various augmentations. TTM provides state-of-the-art zero-shot forecasts and can easily be
- fine-tuned for multi-variate forecasts with just 5% of the training data to be competitive. Refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf) for more details.
+ fine-tuned for multi-variate forecasts with just 5% of the training data to be competitive. Refer to our [paper](https://arxiv.org/pdf/2401.03955v5.pdf) for more details.
+
 
 **The current open-source version supports point forecasting use-cases ranging from minutely to hourly resolutions
 (Ex. 10 min, 15 min, 1 hour, etc.)**
@@ -21,6 +22,9 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c
 **Note that zeroshot, fine-tuning and inference tasks using TTM can easily be executed in 1 GPU machine or in laptops too!!**
 
 
+ **Recent updates:** We have developed more sophisticated variants of TTMs (TTM-B, TTM-E and TTM-A), featuring extended benchmarks that compare them with some of the latest models
+ such as TimesFM, Moirai, Chronos, Lag-llama, and Moment. For full details, please refer to the latest version of our [paper](https://arxiv.org/pdf/2401.03955.pdf).
+ Stay tuned for the release of the model weights for these newer variants.
 
 ## How to Get Started with the Model
 
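As context for the getting-started pointer in the hunk above, a zero-shot forecast on a single GPU or CPU might look like the minimal sketch below. `TinyTimeMixerForPrediction` comes from the linked IBM/tsfm repository; the model id, context length, `past_values` keyword, and `prediction_outputs` field are assumptions for illustration and may differ from the released API and notebooks.

```python
# Minimal zero-shot forecasting sketch. Names flagged as assumptions below may
# differ from the released TTM API; follow the getting-started notebook in IBM/tsfm.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

# Hypothetical model id: substitute the Hugging Face id of this model card.
model = TinyTimeMixerForPrediction.from_pretrained("ibm/TTM")
model.eval()

# Placeholder shape: (batch, context_length, num_channels). The history should
# already be standard scaled per channel (see Recommended Use further down).
past_values = torch.randn(1, 512, 3)

with torch.no_grad():
    outputs = model(past_values=past_values)  # assumed keyword argument

# Assumed output field holding the (batch, forecast_length, num_channels) forecast.
print(outputs.prediction_outputs.shape)
```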
@@ -33,7 +37,7 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c
 
 ## Benchmark Highlights:
 
- - TTM (with less than 1 Million parameters) outperforms the following popular Pre-trained SOTAs demanding several hundred Million to Billions of parameters [paper](https://arxiv.org/pdf/2401.03955.pdf):
+ - TTM (with less than 1 Million parameters) outperforms the following popular Pre-trained SOTAs demanding several hundred Million to Billions of parameters [paper](https://arxiv.org/pdf/2401.03955v5.pdf):
   - *GPT4TS (NeurIPS 23) by 7-12% in few-shot forecasting*
   - *LLMTime (NeurIPS 23) by 24% in zero-shot forecasting*.
   - *SimMTM (NeurIPS 23) by 17% in few-shot forecasting*.
@@ -80,7 +84,7 @@ getting started [notebook](https://github.com/IBM/tsfm/blob/main/notebooks/hfdem
 
 ## Model Details
 
- For more details on TTM architecture and benchmarks, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf).
+ For more details on TTM architecture and benchmarks, refer to our [paper](https://arxiv.org/pdf/2401.03955v5.pdf).
 
 TTM-1 currently supports 2 modes:
 
@@ -99,15 +103,16 @@ In addition, TTM also supports exogenous infusion and categorical data which is
 Stay tuned for these extended features.
 
 ## Recommended Use
- 1. Users have to externally standard scale their data indepedently for every channel before feeding it to the model (Refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data processing utility for data scaling.)
- 2. Enabling any upsampling or prepending zeros to virtually increase the context length for shorter length datasets is not recommended and will
+ 1. Users have to externally standard scale their data independently for every channel before feeding it to the model (Refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data processing utility for data scaling.)
+ 2. Enabling any upsampling or prepending zeros to virtually increase the context length for shorter-length datasets is not recommended and will
 impact the model performance.
 
 
 ### Model Sources
 
 - **Repository:** https://github.com/IBM/tsfm/tree/main/tsfm_public/models/tinytimemixer
- - **Paper:** https://arxiv.org/pdf/2401.03955.pdf
+ - **Paper:** https://arxiv.org/pdf/2401.03955v5.pdf
+ - **Paper (Newer variants, extended benchmarks):** https://arxiv.org/pdf/2401.03955.pdf
 
 
 ## Uses
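To make the per-channel scaling requirement in the Recommended Use hunk above concrete, the sketch below uses scikit-learn's `StandardScaler` as a stand-in for the TSP utility linked there; the file path and channel names are placeholders.

```python
# Sketch of per-channel standard scaling before feeding data to TTM.
# StandardScaler stands in for the TSP utility; the CSV path and channel
# names are placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("my_multivariate_series.csv")   # hypothetical input
channels = ["channel_a", "channel_b", "channel_c"]

scaler = StandardScaler()                        # per-column mean/std, i.e. per channel
train_rows = int(len(df) * 0.8)                  # fit statistics on the training split only
scaler.fit(df.iloc[:train_rows][channels])

df[channels] = scaler.transform(df[channels])    # zero mean, unit variance per channel
```

Keep the fitted scaler so forecasts can be mapped back to the original units with `scaler.inverse_transform`.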
@@ -173,22 +178,24 @@ work
 **BibTeX:**
 
 ```
- @article{ekambaram2024ttms,
- title={TTMs: Fast Multi-level Tiny Time Mixers for Improved Zero-shot and Few-shot Forecasting of Multivariate Time Series},
- author={Ekambaram, Vijay and Jati, Arindam and Nguyen, Nam H and Dayama, Pankaj and Reddy, Chandra and Gifford, Wesley M and Kalagnanam, Jayant},
- journal={arXiv preprint arXiv:2401.03955},
- year={2024}
+ @misc{ekambaram2024tiny,
+ title={Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series},
+ author={Vijay Ekambaram and Arindam Jati and Pankaj Dayama and Sumanta Mukherjee and Nam H. Nguyen and Wesley M. Gifford and Chandra Reddy and Jayant Kalagnanam},
+ year={2024},
+ eprint={2401.03955},
+ archivePrefix={arXiv},
+ primaryClass={cs.LG}
 }
 ```
 
 **APA:**
 
- Ekambaram, V., Jati, A., Nguyen, N. H., Dayama, P., Reddy, C., Gifford, W. M., & Kalagnanam, J. (2024). TTMs: Fast Multi-level Tiny Time Mixers for Improved Zero-shot and Few-shot Forecasting of Multivariate Time Series. arXiv preprint arXiv:2401.03955.
+ Ekambaram, V., Jati, A., Dayama, P., Mukherjee, S., Nguyen, N. H., Gifford, W. M., Kalagnanam, J. (2024). Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series. arXiv [Cs.LG]. Retrieved from http://arxiv.org/abs/2401.03955
 
 
 ## Model Card Authors
 
- Vijay Ekambaram, Arindam Jati, Pankaj Dayama, Nam H. Nguyen, Wesley Gifford and Jayant Kalagnanam
+ Vijay Ekambaram, Arindam Jati, Pankaj Dayama, Nam H. Nguyen, Wesley Gifford, and Jayant Kalagnanam
 
 
 ## IBM Public Repository Disclosure: