README.md CHANGED
@@ -1,110 +1,83 @@
1
  ---
2
  license: apache-2.0
3
- pipeline_tag: time-series-forecasting
4
  tags:
5
- - time series
6
- - forecasting
7
- - pretrained models
8
- - foundation models
9
- - time series foundation models
10
- - time-series
11
  ---
12
 
13
- # Chronos-T5 (Mini)
14
 
15
- 🚀 **Update Feb 14, 2025**: Chronos-Bolt & original Chronos models are now available on Amazon SageMaker JumpStart! Check out the [tutorial notebook](https://github.com/amazon-science/chronos-forecasting/blob/main/notebooks/deploy-chronos-bolt-to-amazon-sagemaker.ipynb) to learn how to deploy Chronos endpoints for production use in a few lines of code.
 
 
16
 
17
- 🚀 **Update Nov 27, 2024**: We have released Chronos-Bolt⚡️ models that are more accurate (5% lower error), up to 250 times faster and 20 times more memory-efficient than the original Chronos models of the same size. Check out the new models [here](https://huggingface.co/amazon/chronos-bolt-mini).
18
-
19
- Chronos is a family of **pretrained time series forecasting models** based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.
20
-
21
- For details on Chronos models, training data and procedures, and experimental results, please refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).
22
-
23
- <p align="center">
24
- <img src="figures/main-figure.png" width="100%">
25
- <br />
26
- <span>
27
- Fig. 1: High-level depiction of Chronos. (<b>Left</b>) The input time series is scaled and quantized to obtain a sequence of tokens. (<b>Center</b>) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (<b>Right</b>) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.
28
- </span>
29
- </p>
30
-
31
- ---
32
 
33
  ## Architecture
34
 
35
- The models in this repository are based on the [T5 architecture](https://arxiv.org/abs/1910.10683). The only difference is in the vocabulary size: Chronos-T5 models use 4096 different tokens, compared to 32128 of the original T5 models, resulting in fewer parameters.
 
 
36
 
37
- | Model | Parameters | Based on |
38
- | ---------------------------------------------------------------------- | ---------- | ---------------------------------------------------------------------- |
39
- | [**chronos-t5-tiny**](https://huggingface.co/amazon/chronos-t5-tiny) | 8M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
40
- | [**chronos-t5-mini**](https://huggingface.co/amazon/chronos-t5-mini) | 20M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
41
- | [**chronos-t5-small**](https://huggingface.co/amazon/chronos-t5-small) | 46M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
42
- | [**chronos-t5-base**](https://huggingface.co/amazon/chronos-t5-base) | 200M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |
43
- | [**chronos-t5-large**](https://huggingface.co/amazon/chronos-t5-large) | 710M | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large) |
44
 
45
  ## Usage
46
 
47
- To perform inference with Chronos models, install the package in the GitHub [companion repo](https://github.com/amazon-science/chronos-forecasting) by running:
48
 
49
- ```
50
  pip install git+https://github.com/amazon-science/chronos-forecasting.git
51
  ```
52
 
53
- A minimal example showing how to perform inference using Chronos models:
54
 
55
  ```python
56
- import matplotlib.pyplot as plt
57
  import numpy as np
58
  import pandas as pd
 
59
  import torch
60
  from chronos import ChronosPipeline
61
 
62
- pipeline = ChronosPipeline.from_pretrained(
63
- "amazon/chronos-t5-mini",
64
- device_map="cuda",
65
- torch_dtype=torch.bfloat16,
66
- )
67
 
68
- df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")
 
 
 
 
 
69
 
70
- # context must be either a 1D tensor, a list of 1D tensors,
71
- # or a left-padded 2D tensor with batch as the first dimension
72
- context = torch.tensor(df["#Passengers"])
73
- prediction_length = 12
74
- forecast = pipeline.predict(context, prediction_length) # shape [num_series, num_samples, prediction_length]
75
 
76
- # visualize the forecast
77
- forecast_index = range(len(df), len(df) + prediction_length)
78
- low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
 
 
79
 
80
- plt.figure(figsize=(8, 4))
81
- plt.plot(df["#Passengers"], color="royalblue", label="historical data")
82
- plt.plot(forecast_index, median, color="tomato", label="median forecast")
83
- plt.fill_between(forecast_index, low, high, color="tomato", alpha=0.3, label="80% prediction interval")
84
  plt.legend()
85
  plt.grid()
86
  plt.show()
87
  ```
88
 
89
- ## Citation
90
 
91
- If you find Chronos models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):
92
 
93
  ```
94
- @article{ansari2024chronos,
95
- title={Chronos: Learning the Language of Time Series},
96
- author={Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan, and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Syndar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
97
- journal={Transactions on Machine Learning Research},
98
- issn={2835-8856},
99
- year={2024},
100
- url={https://openreview.net/forum?id=gerNCVqqtR}
101
- }
102
  ```
103
-
104
- ## Security
105
-
106
- See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
107
-
108
- ## License
109
-
110
- This project is licensed under the Apache-2.0 License.
 
1
  ---
2
  license: apache-2.0
 
3
  tags:
4
+ - time series
5
+ - forecasting
6
+ - pretrained models
7
+ - foundation models
8
+ - time series foundation models
9
+ - time-series
10
  ---
11
 
12
+ # Chronos-T5 Mini
13
 
14
+ Chronos models are pre-trained **time series forecasting models** based on language model architectures.
15
+ A time series is transformed into a sequence of tokens via scaling and quantization, and forecasts are obtained by sampling multiple sequences of future observations given historical context.
16
+ Chronos models are trained on a large corpus of publicly available time series data, as well as synthetic data.
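+
+ As a rough illustration of the scaling-and-quantization step described above, here is a simplified sketch (not the library's internal `MeanScaleUniformBins` tokenizer); the bin limits and token count follow `chronos_config.json` in this repository:
+
+ ```python
+ import numpy as np
+
+ def tokenize(series, n_tokens=4096, low=-15.0, high=15.0):
+     # mean scaling: normalize by the mean absolute value of the context
+     scale = np.abs(series).mean()
+     scaled = series / scale
+     # uniform bin edges between low and high (the real tokenizer also reserves
+     # a few ids for special tokens, which this sketch ignores)
+     edges = np.linspace(low, high, n_tokens - 1)
+     tokens = np.digitize(scaled, edges)  # integer token ids in [0, n_tokens - 1]
+     return tokens, scale
+
+ tokens, scale = tokenize(np.array([112.0, 118.0, 132.0, 129.0]))
+ ```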
17
 
18
+ For details on Chronos models, training data and procedures, and experimental results, refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).
19
 
20
  ## Architecture
21
 
22
+ The model in this repository is based on the [T5 architecture](https://arxiv.org/abs/1910.10683).
23
+ The only difference is in the vocabulary size:
24
+ Chronos-T5 uses 4096 different tokens, compared to 32128 of the original T5 models, resulting in fewer parameters.
25
 
26
+ Model | Parameters | Based on
27
+ ----------------|-------------------|----------------------
28
+ [chronos-t5-mini](https://huggingface.co/amazon/chronos-t5-mini) | 20M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini)
29
+ [chronos-t5-small](https://huggingface.co/amazon/chronos-t5-small) | 46M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small)
30
+ [chronos-t5-base](https://huggingface.co/amazon/chronos-t5-base) | 200M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base)
31
+ [chronos-t5-large](https://huggingface.co/amazon/chronos-t5-large) | 710M | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large)
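+
+ As a quick check of the reduced vocabulary (a small sketch, assuming the `transformers` library is installed), the configuration shipped with this repository can be inspected directly:
+
+ ```python
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("amazon/chronos-t5-mini")
+ print(config.vocab_size)  # 4096, vs. 32128 for the original T5 checkpoints
+ ```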
 
32
 
33
  ## Usage
34
 
35
+ To perform inference with Chronos models, install the package from the [companion GitHub repo](https://github.com/amazon-science/chronos-forecasting):
36
 
37
+ ```bash
38
  pip install git+https://github.com/amazon-science/chronos-forecasting.git
39
  ```
40
 
41
+ A minimal example:
42
 
43
  ```python
 
44
  import numpy as np
45
  import pandas as pd
46
+ import matplotlib.pyplot as plt
47
  import torch
48
  from chronos import ChronosPipeline
49
 
50
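+ # optionally pass device_map="cuda" and torch_dtype=torch.bfloat16 to from_pretrained for faster GPU inference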
+ pipeline = ChronosPipeline.from_pretrained("amazon/chronos-t5-mini")
 
 
 
 
51
 
52
+ df = pd.read_csv(
53
+ "https://raw.githubusercontent.com/AileenNielsen/"
54
+ "TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv",
55
+ index_col=0,
56
+ parse_dates=True,
57
+ )
58
 
59
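+ # context must be a 1D tensor, a list of 1D tensors,
+ # or a left-padded 2D tensor with batch as the first dimension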
+ context = torch.Tensor(df["#Passengers"].values)
60
+ forecast = pipeline.predict(context, prediction_length=12)
 
 
 
61
 
62
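+ # forecast has shape [num_series, num_samples, prediction_length];
+ # summarize the sampled trajectories with quantiles for the median and an 80% interval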
+ forecast_steps = range(len(df), len(df) + 12)
63
+ forecast_np = forecast.numpy()[0].T
64
+ low = np.quantile(forecast_np, 0.1, axis=1)
65
+ median = np.quantile(forecast_np, 0.5, axis=1)
66
+ high = np.quantile(forecast_np, 0.9, axis=1)
67
 
68
+ plt.plot(range(len(df)), df["#Passengers"], color="royalblue", label="historical data")
69
+ plt.plot(forecast_steps, forecast_np, color="grey", alpha=0.1)
70
+ plt.fill_between(forecast_steps, low, high, color="tomato", alpha=0.4, label="80% interval")
71
+ plt.plot(forecast_steps, median, color="tomato", label="median")
72
  plt.legend()
73
  plt.grid()
74
  plt.show()
75
  ```
76
 
77
+ ## Citation
78
 
79
+ If you find Chronos models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):
80
 
81
  ```
82
+ @article{ansari2024chronos,
+   title={Chronos: Learning the Language of Time Series},
+   author={Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
+   journal={Transactions on Machine Learning Research},
+   issn={2835-8856},
+   year={2024},
+   url={https://openreview.net/forum?id=gerNCVqqtR}
+ }
83
  ```
 
 
 
 
 
 
 
 
chronos_config.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "low_limit": -15.0,
3
+ "high_limit": 15.0,
4
+ "n_tokens": 4096,
5
+ "n_special_tokens": 2,
6
+ "pad_token_id": 0,
7
+ "eos_token_id": 1,
8
+ "use_eos_token": true,
9
+ "model_type": "seq2seq",
10
+ "context_length": 512,
11
+ "prediction_length": 64,
12
+ "num_samples": 20,
13
+ "temperature": 1.0,
14
+ "top_k": 50,
15
+ "top_p": 1.0
16
+ }
config.json CHANGED
@@ -1,5 +1,7 @@
1
  {
2
- "architectures": ["T5ForConditionalGeneration"],
 
 
3
  "d_ff": 1536,
4
  "d_kv": 64,
5
  "d_model": 384,
@@ -11,7 +13,7 @@
11
  "initializer_factor": 0.05,
12
  "is_encoder_decoder": true,
13
  "is_gated_act": false,
14
- "layer_norm_epsilon": 1e-6,
15
  "model_type": "t5",
16
  "n_positions": 512,
17
  "num_decoder_layers": 4,
@@ -20,27 +22,8 @@
20
  "pad_token_id": 0,
21
  "relative_attention_max_distance": 128,
22
  "relative_attention_num_buckets": 32,
23
- "torch_dtype": "float32",
24
  "transformers_version": "4.31.0",
25
  "use_cache": true,
26
- "vocab_size": 4096,
27
- "chronos_config": {
28
- "tokenizer_class": "MeanScaleUniformBins",
29
- "tokenizer_kwargs": {
30
- "low_limit": -15.0,
31
- "high_limit": 15.0
32
- },
33
- "n_tokens": 4096,
34
- "n_special_tokens": 2,
35
- "pad_token_id": 0,
36
- "eos_token_id": 1,
37
- "use_eos_token": true,
38
- "model_type": "seq2seq",
39
- "context_length": 512,
40
- "prediction_length": 64,
41
- "num_samples": 20,
42
- "temperature": 1.0,
43
- "top_k": 50,
44
- "top_p": 1.0
45
- }
46
  }
 
1
  {
2
+ "architectures": [
3
+ "T5ForConditionalGeneration"
4
+ ],
5
  "d_ff": 1536,
6
  "d_kv": 64,
7
  "d_model": 384,
 
13
  "initializer_factor": 0.05,
14
  "is_encoder_decoder": true,
15
  "is_gated_act": false,
16
+ "layer_norm_epsilon": 1e-06,
17
  "model_type": "t5",
18
  "n_positions": 512,
19
  "num_decoder_layers": 4,
 
22
  "pad_token_id": 0,
23
  "relative_attention_max_distance": 128,
24
  "relative_attention_num_buckets": 32,
25
+ "torch_dtype": "bfloat16",
26
  "transformers_version": "4.31.0",
27
  "use_cache": true,
28
+ "vocab_size": 4096
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
29
  }
figures/main-figure.png DELETED
Binary file (232 kB)
 
model.safetensors → pytorch_model.bin RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:5cf49253460171e8dfc3a75cfd5e3796091352ec48df16baeb8a1bfeacef769c
3
- size 81835272
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a009ea2d6d916da48e620eefda4d11c1b072550f92be248b4c5abdde34cb2060
3
+ size 40943036