vilhess committed
Commit 0d3a8c4 · verified · 1 Parent(s): 85bca4d

Update README.md

Files changed (1)
  1. README.md +123 -5
README.md CHANGED
@@ -1,10 +1,128 @@
  ---
  tags:
- - model_hub_mixin
- - pytorch_model_hub_mixin
  ---

  This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- - Code: [More Information Needed]
- - Paper: [More Information Needed]
- - Docs: [More Information Needed]

---
tags:
- timeseries
- forecasting
- transformer
- patches
- foundation
- zero-shot
pipeline_tag: time-series-forecasting
---

This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [GitHub](https://github.com/vilhess/PatchFM)
- Paper: Incoming

# A tutorial on how to build a Foundation Model for Univariate Time Series Forecasting

[Huggingface Model Card](https://huggingface.co/vilhess/PatchFM)

A concise, reproducible recipe for training a transformer-based, patch-to-patch forecasting model for univariate time series. The approach mirrors Large Language Model (LLM) practices (next-token → next-patch) while remaining lightweight and practical compared to a classic LLM.

## Highlights
- Next-patch prediction objective (autoregressive, causal)
- Patch-based representation of time series (tokens ↔ patches)
- Causally masked self-attention with RoPE (relative positions)
- RevIN (Reversible Instance Normalization) with causal statistics
- SwiGLU feed-forward networks
- Multi-quantile outputs (median + uncertainty bands)
- Efficient rollout with KV caching

## Installation
```bash
git clone https://github.com/vilhess/PatchFM
cd PatchFM
pip install -r requirements.txt
```

## Quick Start

```python
import torch
from model import Forecaster
from configs import PatchFMConfig

# --- Instantiate model ---
config = PatchFMConfig(load_from_hub=True)
model = Forecaster(config)

# --- Inference ---
forecast_horizon = 64
seq = torch.randn(1, 1024)  # (batch, time)
pred_median, pred_quantiles = model(seq, forecast_horizon=forecast_horizon, quantiles=[0.1, 0.5, 0.9])  # (batch, time, quantiles)
```

We provide an extended quick start example in [notebooks/tutorial.ipynb](./notebooks/tutorial.ipynb).
If you don't have suitable hardware, you can also run the extended quick start example in Google Colab:

<a target="_blank" href="https://colab.research.google.com/drive/17sdf-7luCkv5TaeLj3Z6kIaTDkwkz3VR?usp=share_link">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open Quick Start In Colab"/>
</a>

## Method (TL;DR)
- Patching: Split a context signal of length $w$ into $P_{num} = w / P_{len}$ patches of length $P_{len}$ (see the sketch after this list).
- RevIN: Normalize patches using causal running mean/variance over past patches, and denormalize outputs to the original scale.
- Architecture: Input residual MLP → stacked Transformer blocks (MHA + SwiGLU FFN, pre-norm, residual) → $|\mathcal{Q}|$ output heads mapping back to patch space.
- Positional encoding: Rotary Position Embeddings (RoPE) applied to queries/keys.
- Training: Multi-quantile (pinball) loss across positions, elements, and quantiles $\mathcal{Q}$.
- Inference: Predict next patch; roll out autoregressively with KV caching for long horizons.

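As a rough illustration of the patching step, here is a minimal sketch (the helper name `to_patches` and the batch-first layout are illustrative assumptions, not the repository's own patching code):

```python
import torch

# Patch length; "Model Details" below lists P_len = 32.
P_LEN = 32

def to_patches(series: torch.Tensor, patch_len: int = P_LEN) -> torch.Tensor:
    """Split a (batch, time) series into (batch, num_patches, patch_len) non-overlapping patches."""
    batch, time = series.shape
    assert time % patch_len == 0, "context length must be a multiple of the patch length"
    return series.reshape(batch, time // patch_len, patch_len)

seq = torch.randn(4, 1024)   # (batch, time): 1024-step context
patches = to_patches(seq)    # (4, 32, 32): 32 patches of length 32
```
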
## Problem Formulation
Given context patches $x_{p_1}, \ldots, x_{p_n}$, predict the next patch $x_{p_{i+1}}$ for each position $i$ using only past patches (causality). The model outputs quantiles $\{\hat{x}_{p_{i+1}}^{(q)}: q \in \mathcal{Q}\}$ with the median ($q=0.5$) as the point forecast.

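To make the next-patch objective concrete, a toy sketch of how context/target pairs line up under the causal shift (the tensor layout is assumed for illustration):

```python
import torch

patches = torch.randn(4, 32, 32)   # (batch, num_patches, patch_len)

# For each position i, the target is patch i+1 (strictly causal shift):
context = patches[:, :-1, :]       # patches 1 .. n-1, visible to the model
targets = patches[:, 1:, :]        # patches 2 .. n, the next patch at each position
```
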
## Loss: Multi-Quantile (Pinball)
For residual $u = x - \hat{x}^{(q)}$:
$$\rho_q(u) = \begin{cases} q\,u, & u \ge 0,\\ (q-1)\,u, & u < 0. \end{cases}$$
Aggregate over positions, patch elements, and quantiles.

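A minimal PyTorch sketch of this loss, assuming predictions are stacked along a trailing quantile dimension (the function name and shapes are illustrative, not necessarily the API of the repository's `loss.py`):

```python
import torch

def pinball_loss(target: torch.Tensor, pred: torch.Tensor, quantiles: list[float]) -> torch.Tensor:
    """target: (batch, positions, patch_len); pred: (batch, positions, patch_len, |Q|)."""
    q = torch.tensor(quantiles, device=pred.device).view(1, 1, 1, -1)
    u = target.unsqueeze(-1) - pred             # residual u = x - x_hat^(q)
    loss = torch.maximum(q * u, (q - 1.0) * u)  # branchless form of rho_q(u)
    return loss.mean()                          # aggregate over positions, elements, quantiles
```
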
## Architecture
- Input MLP: $\mathbb{R}^{P_{len}} \to \mathbb{R}^{dim}$ residual 2-layer MLP (ReLU)
- Multi-Head Attention: causal mask, RoPE; queries/keys/values per head
- FFN: SwiGLU (SiLU-gated), pre-norm + residual (see the sketch after this list)
- Output heads: $|\mathcal{Q}|$ linear maps $\mathbb{R}^{dim} \to \mathbb{R}^{P_{len}}$ (one per quantile)

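As a reference point, a compact sketch of a SiLU-gated (SwiGLU) feed-forward block of the kind described above (layer names and the hidden width are assumptions, not the repository's exact module):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """SiLU-gated feed-forward block: down( silu(gate(x)) * up(x) )."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.gate(x)) * self.up(x))

ffn = SwiGLU(dim=256, hidden=1024)   # small sizes for the demo; the released model uses dim 2048
y = ffn(torch.randn(2, 32, 256))     # (batch, num_patches, dim) -> same shape
```
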
### Model Details
- Patch size: 32
- Max context: 32 patches (1024 steps)
- Forecast horizon: 32 steps per forward pass
- Quantiles $\mathcal{Q}$: {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}
- Layers: 6
- Attention heads: 64 (head dim 32)
- Model dim: 2048
- Parameters: ~300M (see the summary sketch after this list)

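For reference, the settings above collected as a plain config object (field names are illustrative and do not necessarily match `configs.PatchFMConfig`):

```python
from dataclasses import dataclass

@dataclass
class PatchFMHyperParams:
    """Hypothetical summary of the hyperparameters listed under "Model Details"."""
    patch_len: int = 32
    max_context_patches: int = 32   # 32 patches * 32 steps = 1024-step context
    steps_per_forward: int = 32     # one patch predicted per forward pass
    quantiles: tuple = (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)
    n_layers: int = 6
    n_heads: int = 64               # head dim 32 -> 64 * 32 = 2048
    d_model: int = 2048
```
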
## Inference
- Single step: predict next patch ($P_{len}$ values)
- Long-horizon: append prediction to context and repeat (optionally drop oldest patch to keep window fixed); a rollout sketch follows this list
- KV caching: reuse cached keys/values for past patches; compute new Q/K/V only for the appended patch

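A minimal rollout sketch under stated assumptions: `model` is treated as a callable mapping a `(batch, time)` context to a `(batch, patch_len)` median forecast of the next patch, and KV caching is omitted for clarity (the repository's `forecaster.py` handles that):

```python
import torch

@torch.no_grad()
def rollout(model, context: torch.Tensor, horizon: int, patch_len: int = 32) -> torch.Tensor:
    """Autoregressive long-horizon forecast: predict a patch, append it, repeat."""
    preds = []
    steps = 0
    while steps < horizon:
        next_patch = model(context)   # (batch, patch_len)
        preds.append(next_patch)
        steps += patch_len
        # Append the prediction and drop the oldest patch to keep the window length fixed.
        context = torch.cat([context, next_patch], dim=-1)[:, patch_len:]
    return torch.cat(preds, dim=-1)[:, :horizon]
```
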
## Datasets
- UTSD (Unified Time Series Dataset) [UTSD]: seven domains (Energy, IoT, Nature, Web, Health, Transport, Environment). We start with UTSD-1G (~55M series after preprocessing).
- Artificial: ~1M synthetic series (sinusoidal, linear, polynomial, logarithmic) plus mixtures via TSMixup [Chronos]; Gaussian Process samples via KernelSynth (mixtures of RBF/periodic/linear kernels with swept hyperparameters). A toy generator sketch follows this list.

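To give a flavour of the artificial data, a toy generator in the spirit described above (frequencies, noise levels, and the Dirichlet mixing weights are illustrative choices, not the settings used in the repository's `artificial.py`):

```python
from typing import Optional
import numpy as np

def synthetic_series(length: int = 1024, rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """Random sinusoid + mild linear trend + noise, as a stand-in for one artificial signal."""
    if rng is None:
        rng = np.random.default_rng()
    t = np.arange(length, dtype=np.float64)
    sine = np.sin(2 * np.pi * t / rng.integers(16, 256))
    trend = rng.normal(0.0, 1e-3) * t
    noise = rng.normal(0.0, 0.1, size=length)
    return sine + trend + noise

def tsmixup(series: list, rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """TSMixup-style convex combination of several series with Dirichlet weights."""
    if rng is None:
        rng = np.random.default_rng()
    weights = rng.dirichlet(np.ones(len(series)))
    return np.sum([w * s for w, s in zip(weights, series)], axis=0)

mixed = tsmixup([synthetic_series() for _ in range(3)])   # one mixed training series
```
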
## Repository Layout

- `model/training/` - main PatchFM model class
  - `modules.py` - core modules (Residual Layers, MHA, SwiGLU, RoPE, Transformer Encoder, ...)
  - `revin.py` - causal RevIN
  - `loss.py` - multi-quantile (pinball) loss
  - `trainer.py` - PyTorch Lightning trainer class

- `model/inference/` - main PatchFM model class for inference with KV caching
  - `modules.py` - core modules with caching support
  - `forecaster.py` - forecasting model with KV caching and rollout logic

- `dataset/` - data loading and preprocessing
  - `artificial.py` - synthetic dataset: artificial signals + TSMixup + KernelSynth
  - `utsd.py` - Unified Time Series Dataset (UTSD) loading and preprocessing
  - `get_data.py` - utility to fetch and preprocess datasets
  - `generate_data.py` - utility to generate and save the KernelSynth dataset (slow to generate)

- `configs/` - model and training configurations
- `notebooks/inference` - how to load a trained model and generate forecasts
- `training.py` - training script using PyTorch Lightning

## Acknowledgements
We thank the authors of the following repositories for inspiration and code snippets:
- [TiRex](https://github.com/NX-AI/tirex)