---
license: apache-2.0
library_name: timesfm
pipeline_tag: time-series-forecasting
---

# TimesFM

TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.

**Resources and Technical Documentation**:

* Paper: [A decoder-only foundation model for time-series forecasting](https://arxiv.org/abs/2310.10688), to appear in ICML 2024.
* [Google Research blog](https://research.google/blog/a-decoder-only-foundation-model-for-time-series-forecasting/)
* [GitHub repo](https://github.com/google-research/timesfm)

**Authors**: Google Research

This is not an officially supported Google product.

## Checkpoint timesfm-1.0-200m

`timesfm-1.0-200m` is the first open model checkpoint:

- It performs univariate time series forecasting for context lengths up to 512 time points and any horizon length, with an optional frequency indicator.
- It focuses on point forecasts and does not support probabilistic forecasts. We experimentally offer quantile heads, but they have not been calibrated after pretraining.
- It requires the context to be contiguous (i.e. no "holes"), and the context and the horizon to be of the same frequency.

## Benchmarks

Please refer to our result tables on the [extended benchmarks](https://github.com/google-research/timesfm/blob/master/experiments/extended_benchmarks/tfm_results.png) and the [long horizon benchmarks](https://github.com/google-research/timesfm/blob/master/experiments/long_horizon_benchmarks/tfm_long_horizon.png).

See the README files in the respective benchmark directories within `experiments/` for instructions on running TimesFM on each benchmark.

## Installation

This HuggingFace repo hosts TimesFM checkpoints. Please visit our [GitHub repo](https://github.com/google-research/timesfm) and follow the instructions there to install the `timesfm` library for model inference.

In particular, the dependency `lingvo` does not support ARM architectures, so the inference code does not currently work on machines with Apple silicon. We are aware of this issue and are working on a solution. Stay tuned.

## Usage

### Initialize the model and load a checkpoint

The base class can be loaded as follows:

```python
import timesfm

tfm = timesfm.TimesFm(
    context_len=<context>,
    horizon_len=<horizon>,
    input_patch_len=32,
    output_patch_len=128,
    num_layers=20,
    model_dims=1280,
    backend=<backend>,
)
tfm.load_from_checkpoint(repo_id="google/timesfm-1.0-200m")
```

Note that the following four parameters are fixed to load the 200m model:

```python
input_patch_len=32,
output_patch_len=128,
num_layers=20,
model_dims=1280,
```

1. The `context_len` here can be set to the max context length **of the model**. You can provide a shorter series to the `tfm.forecast()` function and the model will handle it. Currently, the model handles a max context length of 512, which can be increased in later releases. The input time series can have **any context length**; padding / truncation will be handled by the inference code if needed.

2. The horizon length can be set to anything. We recommend setting it to the largest horizon length you would need in the forecasting tasks for your application. We generally recommend horizon length <= context length, but it is not a requirement in the function call. A concrete instantiation is sketched below.

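For illustration, here is one way to fill in the placeholders above, assuming a CPU backend and choosing 512 / 128 as example context and horizon lengths (these values, and the `"cpu"` backend string, are illustrative assumptions rather than requirements):

```python
import timesfm

# Illustrative instantiation: the four architecture parameters are fixed for the
# 200m checkpoint, while context_len, horizon_len, and backend are example choices.
tfm = timesfm.TimesFm(
    context_len=512,   # max context length supported by this checkpoint
    horizon_len=128,   # largest horizon we plan to request
    input_patch_len=32,
    output_patch_len=128,
    num_layers=20,
    model_dims=1280,
    backend="cpu",     # assumed backend string; adjust to your hardware setup
)
tfm.load_from_checkpoint(repo_id="google/timesfm-1.0-200m")
```
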
### Perform inference

We provide APIs to forecast from either array inputs or a `pandas` dataframe. Both forecast methods expect (1) the input time series contexts and (2) their frequencies. Please look at the documentation of the functions `tfm.forecast()` and `tfm.forecast_on_df()` for detailed instructions.

In particular, regarding the frequency, TimesFM expects a categorical indicator valued in {0, 1, 2}:

- **0** (default): high frequency, long horizon time series. We recommend using this for time series up to daily granularity.
- **1**: medium frequency time series. We recommend using this for weekly and monthly data.
- **2**: low frequency, short horizon time series. We recommend using this for anything beyond monthly, e.g. quarterly or yearly.

This categorical value should be directly provided with the array inputs. For dataframe inputs, we convert the conventional letter coding of frequencies to our expected categories as follows (see the sketch after this list):

- **0**: T, MIN, H, D, B, U
- **1**: W, M
- **2**: Q, Y

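As a rough sketch of this letter-to-category conversion (a hypothetical helper written for illustration, not the library's internal implementation):

```python
# Hypothetical mapping mirroring the table above; tfm.forecast_on_df() performs
# an equivalent conversion internally, so you normally do not need this yourself.
_FREQ_TO_CATEGORY = {
    # 0: high frequency, up to daily granularity
    "T": 0, "MIN": 0, "H": 0, "D": 0, "B": 0, "U": 0,
    # 1: medium frequency (weekly, monthly)
    "W": 1, "M": 1,
    # 2: low frequency (quarterly, yearly)
    "Q": 2, "Y": 2,
}

def freq_to_category(freq: str) -> int:
    """Map a pandas-style frequency string to TimesFM's {0, 1, 2} indicator."""
    return _FREQ_TO_CATEGORY[freq.upper()]
```
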
Note that you do **NOT** have to strictly follow our recommendation here. Although this is our setup during model training and we expect it to offer the best forecast results, you can also view the frequency input as a free parameter and modify it per your specific use case.

Examples:

Array inputs, with the frequencies set to high, medium, and low respectively:

```python
import numpy as np

forecast_input = [
    np.sin(np.linspace(0, 20, 100)),
    np.sin(np.linspace(0, 20, 200)),
    np.sin(np.linspace(0, 20, 400)),
]
frequency_input = [0, 1, 2]

point_forecast, experimental_quantile_forecast = tfm.forecast(
    forecast_input,
    freq=frequency_input,
)
```

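Assuming `tfm.forecast()` returns one forecast per input series, a quick sanity check of the outputs might look like this (the exact shape of the experimental quantile output is an assumption and may differ across versions):

```python
# Expect one horizon-length point forecast per input series, e.g. (3, horizon_len).
print(point_forecast.shape)
# The experimental quantile output is assumed to carry an extra trailing dimension
# for the quantile heads; inspect it rather than relying on a fixed shape.
print(experimental_quantile_forecast.shape)
```
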
`pandas` dataframe, with the frequency set to "M" (monthly):

```python
import pandas as pd

# e.g. input_df is
#       unique_id          ds          y
# 0            T1  1975-12-31   697458.0
# 1            T1  1976-01-31  1187650.0
# 2            T1  1976-02-29  1069690.0
# 3            T1  1976-03-31  1078430.0
# 4            T1  1976-04-30  1059910.0
# ...         ...         ...        ...
# 8175        T99  1986-01-31      602.0
# 8176        T99  1986-02-28      684.0
# 8177        T99  1986-03-31      818.0
# 8178        T99  1986-04-30      836.0
# 8179        T99  1986-05-31      878.0

forecast_df = tfm.forecast_on_df(
    inputs=input_df,
    freq="M",  # monthly
    value_name="y",
    num_jobs=-1,
)
```
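To inspect the result, something like the following should work; the exact output column names depend on the `timesfm` version, so treat them as an assumption and check `forecast_df.columns` yourself:

```python
# The forecast dataframe is expected to contain one row per series id and future
# timestamp, with the point forecast (and any quantile columns) alongside them.
print(forecast_df.head())
print(forecast_df.columns.tolist())
```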