Thomas Ortner committed
Commit f63b671 · Parent: 00efb80

Updated model card

Files changed (1)
  1. README.md +3 -5
README.md CHANGED

```diff
@@ -1,11 +1,8 @@
 ---
 license: apache-2.0
 ---
----
-license: apache-2.0
----
 # FlowState
-[Paper](https://www.arxiv.org/abs/2508.05287) | [HuggingFace Model Card](https://huggingface.co/ibm-granite/granite-timeseries-flowstate-r1) | [GitHub Model Code](https://github.com/ibm-granite/granite-tsfm/tree/main/tsfm_public/models/flowstate)
+[Paper](https://www.arxiv.org/abs/2508.05287) | [HuggingFace Model Card](https://huggingface.co/ibm-research/flowstate) | [GitHub Model Code](https://github.com/ibm-granite/granite-tsfm/tree/main/tsfm_public/models/flowstate)
 
 ![Illustration](figs/FlowState.png)
 FlowState is the first time-scale adjustable Time Series Foundation Model (TSFM), open-sourced by IBM Research.
@@ -35,7 +32,8 @@ FlowState can be used to make predictions as follows:
 from tsfm_public import FlowStateForPrediction
 import torch
 device= 'cuda'
-predictor = FlowStateForPrediction.from_pretrained("ibm-granite/granite-timeseries-flowstate-r1").to(device)
+# Download FlowState Research checkpoint (non-commercial use):
+predictor = FlowStateForPrediction.from_pretrained("ibm-research/flowstate").to(device)
 time_series = torch.randn((2048, 32, 1), device=device) # context, batch, n_ch
 forecast = predictor(time_series, scale_factor=0.25, prediction_length=960, batch_first=False)
 print(forecast.prediction_outputs.shape) # torch.Size([32, 9, 48, 1]) (batch, quantiles, forecast_length, n_ch)
```