---
license: apache-2.0
pipeline_tag: time-series-forecasting
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
library_name: granite-tsfm
---
# Granite-TimeSeries-TTM-R1 Model Card
<p align="center" width="100%">
<img src="ttm_image.webp" width="600">
</p>
TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research.
**With fewer than 1 million parameters, TTM (accepted at NeurIPS 2024) introduces the notion of the first-ever “tiny” pre-trained models for Time-Series Forecasting.**
TTM outperforms several popular benchmarks that demand billions of parameters in zero-shot and few-shot forecasting. TTMs are lightweight
forecasters, pre-trained on publicly available time series data with various augmentations. TTM provides state-of-the-art zero-shot forecasts and can easily be
fine-tuned on as little as 5% of the target training data to deliver competitive multivariate forecasts. Refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf) for more details.
**The current open-source version supports point forecasting use-cases at minutely to hourly resolutions
(e.g., 10 min, 15 min, 1 hour).**
**Note that zero-shot, fine-tuning, and inference tasks with TTM can easily be executed on a single-GPU machine or even on a laptop.**
**New updates:** TTM-R1 comprises TTM variants pre-trained on 250M public training samples. Another set of TTM models, TTM-R2, was released recently; it is trained on a much larger pretraining
dataset (~700M samples) and can be accessed [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2). In general, TTM-R2 models perform better than
TTM-R1 models because they are trained on a larger pretraining dataset. However, the choice of R1 vs. R2 depends on your target data distribution, so we recommend
trying both variants and picking the one that works best for your data.
## Model Description
TTM falls under the category of “focused pre-trained models”, wherein each pre-trained TTM is tailored for a particular forecasting
setting (governed by the context length and forecast length). Instead of building one massive model supporting all forecasting settings,
we opt for the approach of constructing smaller pre-trained models, each focusing on a specific forecasting setting, thereby
yielding more accurate results. Furthermore, this approach ensures that our models remain extremely small and exceptionally fast,
facilitating easy deployment without demanding a ton of resources.
Hence, in this model card, we plan to release several pre-trained
TTMs that can cater to many common forecasting settings in practice. Additionally, we have released our source code along with
our pretraining scripts that users can utilize to pretrain models on their own. Pretraining TTMs is very easy and fast, taking
only 3-6 hours using 6 A100 GPUs, as opposed to several days or weeks in traditional approaches.
Each pre-trained model is released in a separate branch of this model card. Kindly access the required model using our
getting started [notebook](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb), specifying the appropriate branch name.
## Model Releases (along with the branch name where the models are stored):
- **512-96:** Given the last 512 time-points (i.e., context length), this model can forecast the next 96 time-points (i.e., forecast length).
It is targeted towards a forecasting setting of context length 512 and forecast length 96 and is
recommended for hourly and minutely resolutions (e.g., 10 min, 15 min, 1 hour). This model corresponds to the TTM-Q variant used in the paper. (branch name: main) [[Benchmark Scripts]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm-r1_benchmarking_512_96.ipynb)
- **1024-96:** Given the last 1024 time-points (i.e., context length), this model can forecast the next 96 time-points (i.e., forecast length).
It is targeted towards a long-context forecasting setting of context length 1024 and forecast length 96 and is
recommended for hourly and minutely resolutions (e.g., 10 min, 15 min, 1 hour). (branch name: 1024-96-v1) [[Benchmark Scripts]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm-r1_benchmarking_1024_96.ipynb)
You can also use the [[get_model]](https://github.com/ibm-granite/granite-tsfm/blob/main/tsfm_public/toolkit/get_model.py) utility to automatically select the required model based on your context length and forecast length requirements; a sketch is shown below.
For more variants (up to forecast length 720), refer to our newer model card [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2).
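A minimal sketch of model selection with `get_model`, assuming the utility accepts a model path plus `context_length` and `prediction_length` keyword arguments (check the linked source file for the exact signature):
```
# Sketch: select a TTM variant by context/forecast length.
# Assumes get_model(model_path, context_length=..., prediction_length=...);
# see tsfm_public/toolkit/get_model.py for the exact signature.
from tsfm_public.toolkit.get_model import get_model

model = get_model(
    "ibm-granite/granite-timeseries-ttm-r1",  # this model card
    context_length=512,      # history supplied to the model
    prediction_length=96,    # required forecast horizon
)
```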
## Model Capabilities with example scripts
The example scripts below can be used with any of the above TTM models. Update the Hugging Face model path and branch name in the `from_pretrained` call to pick the model of your choice.
- Getting Started [[colab]](https://colab.research.google.com/github/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb)
- Zeroshot Multivariate Forecasting [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb)
- Finetuned Multivariate Forecasting:
- Channel-Independent Finetuning [[Example 1]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb) [[Example 2]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_m4_hourly.ipynb)
- Channel-Mix Finetuning [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/tutorial/ttm_channel_mix_finetuning.ipynb)
- **New Releases (extended features released on October 2024)**
- Finetuning and Forecasting with Exogenous/Control Variables [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/tutorial/ttm_with_exog_tutorial.ipynb)
- Finetuning and Forecasting with static categorical features [Example: To be added soon]
  - Rolling Forecasts - Extend forecast lengths beyond 96 via rolling capability [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/hfdemo/ttm_rolling_prediction_getting_started.ipynb) (a conceptual sketch follows this list)
- Helper scripts for optimal Learning Rate suggestions for Finetuning [[Example]](https://github.com/ibm-granite/granite-tsfm/blob/main/notebooks/tutorial/ttm_with_exog_tutorial.ipynb)
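A conceptual sketch of rolling forecasting, assuming the model's forward pass accepts `past_values` and returns an output with a `prediction_outputs` tensor; the rolling-prediction utilities in the linked notebook are the recommended path:
```
# Conceptual sketch: extend the horizon beyond 96 by rolling the model forward.
# Assumes model(past_values=...) returns an object with `prediction_outputs`
# of shape (batch, prediction_length, channels); prefer the toolkit's rolling
# prediction utilities (see the linked notebook) for real use.
import torch

def rolling_forecast(model, history: torch.Tensor, total_horizon: int) -> torch.Tensor:
    """history: (batch, context_length, channels) of standardized values."""
    context_length = history.shape[1]
    forecasts = []
    window = history
    model.eval()
    with torch.no_grad():
        while sum(f.shape[1] for f in forecasts) < total_horizon:
            step = model(past_values=window).prediction_outputs  # (batch, 96, channels)
            forecasts.append(step)
            # Slide the context window forward using the model's own predictions.
            window = torch.cat([window, step], dim=1)[:, -context_length:, :]
    return torch.cat(forecasts, dim=1)[:, :total_horizon, :]
```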
## Benchmarks
TTM outperforms popular benchmarks such as TimesFM, Moirai, Chronos, Lag-Llama, Moment, GPT4TS, TimeLLM, and LLMTime in zero/few-shot forecasting while significantly reducing computational requirements.
Moreover, TTMs are lightweight and can be executed even on CPU-only machines, enhancing usability and fostering wider
adoption in resource-constrained environments. For more details, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf). The TTM-Q variant referred to in the paper maps to the `512-96` model
uploaded in the main branch. For the other variants (TTM-B, TTM-E, and TTM-A), please refer [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2).
<p align="center" width="100%">
<img src="benchmarks.webp" width="600">
</p>
## Recommended Use
1. Users have to externally standard-scale their data independently for every channel before feeding it to the model (refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data processing utility for data scaling; a minimal sketch follows this list).
2. The current open-source version supports only minutely and hourly resolutions (e.g., 10 min, 15 min, 1 hour). Lower resolutions (say, weekly or monthly) are currently not supported in this version, as the model needs a minimum context length of 512 or 1024.
3. Upsampling or prepending zeros to virtually increase the context length for shorter datasets is not recommended, as it will degrade model performance.
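A minimal sketch of per-channel standard scaling with scikit-learn, shown purely for illustration; the TSP utility linked above is the recommended way to do this, and the data and column names here are hypothetical placeholders:
```
# Illustrative sketch: fit a standard scaler on the training split only,
# then apply it to the other splits, independently for each channel (column).
# The DataFrame and column names ("load", "temperature") are hypothetical.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "load": rng.normal(100.0, 10.0, size=1024),
    "temperature": rng.normal(20.0, 5.0, size=1024),
})
train_df, test_df = df.iloc[:768].copy(), df.iloc[768:].copy()

target_columns = ["load", "temperature"]
scaler = StandardScaler()  # scales each column (channel) independently
train_df[target_columns] = scaler.fit_transform(train_df[target_columns])
test_df[target_columns] = scaler.transform(test_df[target_columns])
```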
## Model Details
For more details on TTM architecture and benchmarks, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf).
TTM-R1 currently supports two modes:
- **Zeroshot forecasting**: Directly apply the pre-trained model on your target data to get an initial forecast (with no training).
- **Finetuned forecasting**: Finetune the pre-trained model with a subset of your target data to further improve the forecast.
**Since TTM models are extremely small and fast, it is practical to fine-tune the model on your available target data in a few minutes
to get more accurate forecasts.**
The current release supports multivariate forecasting via both channel independence and channel-mixing approaches.
Decoder Channel-Mixing can be enabled during fine-tuning for capturing strong channel-correlation patterns across
time-series variates, a critical capability lacking in existing counterparts.
In addition, TTM also supports exogenous infusion and categorical data infusion.
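A hedged sketch of enabling decoder channel-mixing at fine-tuning time; the `decoder_mode` keyword and its `"mix_channel"` value follow the channel-mix tutorial notebook linked above, so verify the exact argument names against the current `tsfm_public` release:
```
# Sketch (assumptions noted in the lead-in): load the pre-trained TTM with
# decoder channel-mixing enabled for fine-tuning; the default decoder is
# channel-independent.
from tsfm_public import TinyTimeMixerForPrediction

model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm-granite/granite-timeseries-ttm-r1",
    revision="main",
    decoder_mode="mix_channel",  # assumed config override; see the tutorial notebook
)
```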
### Model Sources
- **Repository:** https://github.com/ibm-granite/granite-tsfm/tree/main/tsfm_public/models/tinytimemixer
- **Paper:** https://arxiv.org/pdf/2401.03955.pdf
### Blogs and articles on TTM:
- Refer to our [wiki](https://github.com/ibm-granite/granite-tsfm/wiki)
## Uses
```
# Minimal zero-shot and few-shot/fine-tuning workflow.
# `zeroshot_forecast_args`, `finetune_forecast_args`, the datasets
# (dset_train, dset_val, dset_test), callbacks, optimizer, and scheduler are
# assumed to be defined as in the getting-started notebook linked above.
from transformers import Trainer

from tsfm_public import TinyTimeMixerForPrediction

# Load the model from the HF Model Hub, selecting the branch via `revision`
model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm-granite/granite-timeseries-ttm-r1", revision="main"
)

# Zero-shot evaluation: apply the pre-trained model directly to the test set
zeroshot_trainer = Trainer(
    model=model,
    args=zeroshot_forecast_args,
)
zeroshot_output = zeroshot_trainer.evaluate(dset_test)

# Few-shot / fine-tuning: freeze the backbone and train the remaining parameters
for param in model.backbone.parameters():
    param.requires_grad = False

finetune_forecast_trainer = Trainer(
    model=model,
    args=finetune_forecast_args,
    train_dataset=dset_train,
    eval_dataset=dset_val,
    callbacks=[early_stopping_callback, tracking_callback],
    optimizers=(optimizer, scheduler),
)
finetune_forecast_trainer.train()
fewshot_output = finetune_forecast_trainer.evaluate(dset_test)
```
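The `zeroshot_forecast_args` and `finetune_forecast_args` above are ordinary `transformers.TrainingArguments`; a minimal, hedged example where the output directories, batch sizes, epochs, and learning rate are illustrative placeholders to be tuned for your data:
```
# Illustrative placeholders for the TrainingArguments used above.
from transformers import TrainingArguments

zeroshot_forecast_args = TrainingArguments(
    output_dir="ttm_zeroshot",   # hypothetical output directory
    per_device_eval_batch_size=64,
)

finetune_forecast_args = TrainingArguments(
    output_dir="ttm_finetune",   # hypothetical output directory
    num_train_epochs=50,
    learning_rate=1e-3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
)
```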
## Training Data
The TTM-R1 models were trained on a collection of datasets from the Monash Time Series Forecasting repository. The datasets used include:
- Australian Electricity Demand: https://zenodo.org/records/4659727
- Australian Weather: https://zenodo.org/records/4654822
- Bitcoin dataset: https://zenodo.org/records/5122101
- KDD Cup 2018 dataset: https://zenodo.org/records/4656756
- London Smart Meters: https://zenodo.org/records/4656091
- Saugeen River Flow: https://zenodo.org/records/4656058
- Solar Power: https://zenodo.org/records/4656027
- Sunspots: https://zenodo.org/records/4654722
- Solar: https://zenodo.org/records/4656144
- US Births: https://zenodo.org/records/4656049
- Wind Farms Production data: https://zenodo.org/records/4654858
- Wind Power: https://zenodo.org/records/4656032
## Citation
Kindly cite the following paper if you intend to use our model or its associated architectures/approaches in your
work.
**BibTeX:**
```
@inproceedings{ekambaram2024tinytimemixersttms,
title={Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series},
author={Vijay Ekambaram and Arindam Jati and Pankaj Dayama and Sumanta Mukherjee and Nam H. Nguyen and Wesley M. Gifford and Chandra Reddy and Jayant Kalagnanam},
booktitle={Advances in Neural Information Processing Systems (NeurIPS 2024)},
year={2024},
}
```
## Model Card Authors
Vijay Ekambaram, Arindam Jati, Pankaj Dayama, Wesley M. Gifford, Sumanta Mukherjee, Chandra Reddy and Jayant Kalagnanam
## IBM Public Repository Disclosure:
All content in this repository including code has been provided by IBM under the associated
open source software license and IBM is under no obligation to provide enhancements,
updates, or support. IBM developers produced this code as an
open source project (not as an IBM product), and IBM makes no assertions as to
the level of quality nor security, and will not be maintaining this code going forward. |