Duplicate from amazon/chronos-bolt-mini
Co-authored-by: Lorenzo Stella <lostella@users.noreply.huggingface.co>
- .gitattributes +35 -0
- README.md +123 -0
- config.json +51 -0
- model.safetensors +3 -0
.gitattributes
ADDED
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,123 @@
---
license: apache-2.0
pipeline_tag: time-series-forecasting
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
---

# Chronos-Bolt⚡ (Mini)

Chronos-Bolt is a family of pretrained time series forecasting models that can be used for zero-shot forecasting. It is based on the [T5 encoder-decoder architecture](https://arxiv.org/abs/1910.10683) and has been trained on nearly 100 billion time series observations. It chunks the historical time series context into patches of multiple observations, which are then fed into the encoder. The decoder then uses these representations to directly generate quantile forecasts across multiple future steps, a method known as direct multi-step forecasting. Chronos-Bolt models are up to 250 times faster and 20 times more memory-efficient than the [original Chronos](https://arxiv.org/abs/2403.07815) models of the same size.
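
To make the patching step concrete, here is a minimal sketch (illustrative only, not the model's actual preprocessing code) of how a 1D context can be split into non-overlapping patches, using the patch size and stride of 16 from this repository's `config.json`:

```python
import torch

patch_size, patch_stride = 16, 16  # from "chronos_config" in config.json

context = torch.randn(512)  # a toy univariate history of 512 observations

# unfold splits the 1D context into [num_patches, patch_size] windows;
# with stride equal to size the patches do not overlap. Each row then plays
# the role of one encoder "token" after the model's input projection.
patches = context.unfold(0, patch_size, patch_stride)
print(patches.shape)  # torch.Size([32, 16])
```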

The following plot compares the inference time of Chronos-Bolt against the original Chronos models for forecasting 1024 time series with a context length of 512 observations and a prediction horizon of 64 steps.

<center>
<img src="https://autogluon.s3.amazonaws.com/images/chronos_bolt_speed.svg" width="50%"/>
</center>

Chronos-Bolt models are not only significantly faster but also more accurate than the original Chronos models. The following plot reports the probabilistic and point forecasting performance of Chronos-Bolt in terms of the [Weighted Quantile Loss (WQL)](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.WQL) and the [Mean Absolute Scaled Error (MASE)](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-metrics.html#autogluon.timeseries.metrics.MASE), respectively, aggregated over 27 datasets (see the [Chronos paper](https://arxiv.org/abs/2403.07815) for details on this benchmark). Remarkably, despite having no prior exposure to these datasets during training, the zero-shot Chronos-Bolt models outperform commonly used statistical and deep learning models that were trained on these datasets (marked with *). They also outperform other foundation models (marked with +), which were pretrained on some of the datasets in this benchmark and are therefore not entirely zero-shot. Notably, Chronos-Bolt (Base) also surpasses the original Chronos (Large) model in forecasting accuracy while being over 600 times faster.

<center>
<img src="https://autogluon.s3.amazonaws.com/images/chronos_bolt_accuracy.svg" width="80%"/>
</center>

Chronos-Bolt models are available in the following sizes.

<div align="center">

| Model | Parameters | Based on |
| ----------------------------------------------------------------------------- | ---------- | ----------------------------------------------------------------------- |
| [**chronos-bolt-tiny**](https://huggingface.co/amazon/chronos-bolt-tiny) | 9M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
| [**chronos-bolt-mini**](https://huggingface.co/amazon/chronos-bolt-mini) | 21M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
| [**chronos-bolt-small**](https://huggingface.co/amazon/chronos-bolt-small) | 48M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
| [**chronos-bolt-base**](https://huggingface.co/amazon/chronos-bolt-base) | 205M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |

</div>

## Usage with AutoGluon

The recommended way of using Chronos for production use cases is through [AutoGluon](https://auto.gluon.ai/stable/index.html), which features effortless fine-tuning, augmenting Chronos models with exogenous information through covariate regressors, ensembling with other statistical and machine learning models, and seamless deployment on AWS with SageMaker.
Check out the AutoGluon Chronos [tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html).

A minimal example showing how to perform zero-shot inference using Chronos-Bolt with AutoGluon:

```
pip install autogluon
```

```python
from autogluon.timeseries import TimeSeriesPredictor, TimeSeriesDataFrame

# Load the M4 hourly dataset in AutoGluon's long data format
df = TimeSeriesDataFrame("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_hourly/train.csv")

# Zero-shot forecasting: "fit" here only wraps the pretrained
# Chronos-Bolt model, no training takes place
predictor = TimeSeriesPredictor(prediction_length=48).fit(
    df,
    hyperparameters={
        "Chronos": {"model_path": "amazon/chronos-bolt-mini"},
    },
)

predictions = predictor.predict(df)
```
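
In current AutoGluon versions, `predictions` is a `TimeSeriesDataFrame` with a `"mean"` column for point forecasts and one column per quantile level; a minimal sketch of reading an 80% prediction interval from it (column names assume AutoGluon's default quantile levels):

```python
# Assumes AutoGluon's default quantile columns "0.1", ..., "0.9"
point_forecast = predictions["mean"]
lower, upper = predictions["0.1"], predictions["0.9"]  # 80% interval
```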

## Usage with inference library

Alternatively, you can install the package from the GitHub [companion repo](https://github.com/amazon-science/chronos-forecasting).
This package is intended for research purposes and provides a minimal interface to Chronos models.
Install the library by running:

```
pip install chronos-forecasting
```

A minimal example showing how to perform inference using Chronos-Bolt models:

```python
import pandas as pd  # requires: pip install pandas
import torch
from chronos import BaseChronosPipeline

pipeline = BaseChronosPipeline.from_pretrained(
    "amazon/chronos-bolt-mini",
    device_map="cuda",  # use "cpu" for CPU inference or "mps" for Apple Silicon
    torch_dtype=torch.bfloat16,
)

df = pd.read_csv(
    "https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv"
)

# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension.
# Chronos-Bolt models generate quantile forecasts, so forecast has shape
# [num_series, num_quantiles, prediction_length].
forecast = pipeline.predict(
    context=torch.tensor(df["#Passengers"]), prediction_length=12
)
```
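
Since this checkpoint's `config.json` lists the quantile levels `[0.1, 0.2, ..., 0.9]`, the median and an 80% prediction interval can be read directly off the forecast tensor; a minimal sketch, assuming that quantile ordering:

```python
# forecast has shape [num_series, num_quantiles, prediction_length];
# quantile levels from config.json: [0.1, 0.2, ..., 0.9]
median = forecast[0, 4]                        # 0.5 quantile (index 4)
lower, upper = forecast[0, 0], forecast[0, 8]  # 0.1 and 0.9 quantiles
```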

## Citation

If you find Chronos or Chronos-Bolt models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):

```
@article{ansari2024chronos,
  title={Chronos: Learning the Language of Time Series},
  author={Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2024},
  url={https://openreview.net/forum?id=gerNCVqqtR}
}
```

## License

This project is licensed under the Apache-2.0 License.
config.json
ADDED
@@ -0,0 +1,51 @@
{
  "_name_or_path": "autogluon/chronos-bolt-mini",
  "architectures": [
    "ChronosBoltModelForForecasting"
  ],
  "chronos_config": {
    "context_length": 2048,
    "input_patch_size": 16,
    "input_patch_stride": 16,
    "prediction_length": 64,
    "quantiles": [
      0.1,
      0.2,
      0.3,
      0.4,
      0.5,
      0.6,
      0.7,
      0.8,
      0.9
    ],
    "use_reg_token": true
  },
  "chronos_pipeline_class": "ChronosBoltPipeline",
  "classifier_dropout": 0.0,
  "d_ff": 1536,
  "d_kv": 64,
  "d_model": 384,
  "decoder_start_token_id": 0,
  "dense_act_fn": "relu",
  "dropout_rate": 0.1,
  "eos_token_id": 1,
  "feed_forward_proj": "relu",
  "initializer_factor": 0.05,
  "is_encoder_decoder": true,
  "is_gated_act": false,
  "layer_norm_epsilon": 1e-06,
  "model_type": "t5",
  "n_positions": 512,
  "num_decoder_layers": 4,
  "num_heads": 8,
  "num_layers": 4,
  "pad_token_id": 0,
  "reg_token_id": 1,
  "relative_attention_max_distance": 128,
  "relative_attention_num_buckets": 32,
  "torch_dtype": "float32",
  "transformers_version": "4.39.3",
  "use_cache": true,
  "vocab_size": 2
}
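
The `chronos_config` block above fixes how a context window is tokenized for the encoder; a quick sanity-check of the implied arithmetic (my reading of those fields, not code shipped with the model):

```python
# Values from "chronos_config" in config.json above
context_length, patch_size, patch_stride = 2048, 16, 16

# With stride equal to patch size, patches do not overlap
num_patches = (context_length - patch_size) // patch_stride + 1
print(num_patches)  # 128 patch "tokens" for a full-length context
```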
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a1a4297f132b808c5c7da24e3cce549d519c01ff5c2661fcad404baba018f24
size 84956096