AI & ML interests

Natural Language Processing in Finance, Accounting, Business, Management, Economics, and Marketing

Time Series Foundation Models for Finance

🚀 TSFMs Release

We are pleased to introduce FinText-TSFM, a comprehensive suite of 613 time series foundation models (TSFMs) pre-trained for quantitative finance. This release accompanies the paper: Re(Visiting) Time Series Foundation Models in Finance by Eghbal Rahimikia, Hao Ni, and Weiguan Wang (2025).
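
As a quick start, here is a minimal sketch of loading one of the released Chronos checkpoints and producing a zero-shot forecast. It assumes the checkpoints expose the standard chronos-forecasting ChronosPipeline interface, and it uses random placeholder data in place of real excess returns:

```python
# pip install chronos-forecasting torch
import torch
from chronos import ChronosPipeline

# Load one of the released checkpoints. The ChronosPipeline interface is an
# assumption of this sketch, not confirmed by the release notes above.
pipeline = ChronosPipeline.from_pretrained(
    "FinText/Chronos_Tiny_2002_US",
    device_map="cpu",
    torch_dtype=torch.float32,
)

# Placeholder context: one year (252 trading days) of daily excess returns.
context = torch.randn(252)

# Zero-shot, one-step-ahead probabilistic forecast.
samples = pipeline.predict(context, prediction_length=1)  # [1, num_samples, 1]
point_forecast = samples[0, :, 0].median().item()
print(point_forecast)
```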

💡 Key Highlights

  • Finance-Native Pre-training:
    Models are pre-trained from scratch on large-scale financial time series, including daily excess returns across 89 markets and over 2 billion observations, to ensure full temporal and domain alignment (see the excess-return sketch after this list).

  • Bias-Free Design:
    Pre-training strictly follows a chronological expanding-window setup, avoiding look-ahead bias and information leakage (see the expanding-window sketch after this list).
    Each variant includes 24 separately pre-trained models, one for each year from 2000 to 2023, with pre-training data starting in 1990.

  • Model Families:
    This release includes variants of Chronos and TimesFM architectures adapted for financial time series:

    • Chronos-Tiny (8M) / Mini (20M) / Small (46M)
    • TimesFM-8M / 20M
  • Model Collections:

    • U.S.: Covers U.S. market-wide excess returns from 2000 to 2023, with one pre-trained model per year.
    • Global: Covers excess returns across 94 global markets from 2000 to 2023, with one pre-trained model per year.
    • Augmented: Extends the global data with augmented factors from 2000 to 2023, with one pre-trained model per year.
    • The remaining 253 pre-trained models are available for download via the FinText.ai Portal; these cover additional hyperparameter configurations for extended experimentation and performance comparison.
  • Performance Insights:
    Our findings show that off-the-shelf TSFMs underperform in zero-shot forecasting, while finance-pretrained models achieve large gains in both predictive accuracy and portfolio performance.

  • Evaluation Scope:
    Models are benchmarked across the U.S. and seven international markets using rolling windows of 5, 21, 252, and 512 days, yielding over 18 million out-of-sample forecasts spanning 22 years (2001–2023) of daily excess returns, evaluated at both the statistical and economic performance levels (see the rolling-window sketch below).
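
To make the pre-training target concrete, the sketch below shows one standard way to compute daily excess returns, the series described under Finance-Native Pre-training. The function and its inputs are illustrative assumptions, not the release's actual data pipeline:

```python
import pandas as pd

# Illustrative only: daily excess return = simple return minus the daily
# risk-free rate. Inputs and alignment are assumptions of this sketch.
def daily_excess_returns(prices: pd.Series, rf_daily: pd.Series) -> pd.Series:
    simple_returns = prices.pct_change().dropna()
    return simple_returns - rf_daily.reindex(simple_returns.index)

prices = pd.Series(
    [100.0, 101.0, 99.5],
    index=pd.to_datetime(["2002-01-02", "2002-01-03", "2002-01-04"]),
)
rf = pd.Series(0.0001, index=prices.index)  # constant daily risk-free rate
print(daily_excess_returns(prices, rf))
```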
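
The expanding-window setup under Bias-Free Design can be sketched as follows; whether each window ends in the year before the model's vintage or within it is an assumption here:

```python
# Chronological expanding-window pre-training: the model for vintage year Y
# only ever sees data from 1990 up to (but not including) Y, so no future
# information can leak into pre-training.
def expanding_windows(data_start=1990, first_vintage=2000, last_vintage=2023):
    for vintage in range(first_vintage, last_vintage + 1):
        yield {"pretrain_years": (data_start, vintage - 1), "vintage": vintage}

for split in expanding_windows():
    print(split)
# {'pretrain_years': (1990, 1999), 'vintage': 2000}
# ...
# {'pretrain_years': (1990, 2022), 'vintage': 2023}
```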
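
Finally, a minimal sketch of the rolling-window evaluation described under Evaluation Scope, reusing the quick-start pipeline from above; the loop mirrors the setup described here, not the paper's exact evaluation code:

```python
import torch

# For each day t, condition on the previous `window` daily excess returns and
# record a one-step-ahead point forecast (median of the sample paths).
# `pipeline` is a loaded ChronosPipeline, as in the quick start above.
def rolling_forecasts(pipeline, series: torch.Tensor, window: int = 252) -> torch.Tensor:
    preds = []
    for t in range(window, len(series)):
        context = series[t - window:t]
        samples = pipeline.predict(context, prediction_length=1)  # [1, S, 1]
        preds.append(samples[0, :, 0].median().item())
    return torch.tensor(preds)
```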

🧠 Technical Overview

  • Architecture: Transformer-based TSFMs (Chronos & TimesFM)
  • Compute: 50,000 GPU hours on NVIDIA GH200 Grace Hopper clusters

📚 Citation

Please cite the accompanying paper if you use these models:

Re(Visiting) Time Series Foundation Models in Finance.
Rahimikia, Eghbal; Ni, Hao; Wang, Weiguan (2025).
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5770562

🔋 Acknowledgments

This project was made possible through computational and institutional support from:

  • UK Research and Innovation (UKRI)
  • Isambard-AI National AI Research Resource (AIRR)
  • Alliance Manchester Business School (AMBS), University of Manchester
  • N8 Centre of Excellence in Computationally Intensive Research (N8 CIR)
  • The University of Manchester (Research IT & Computational Shared Facility)
  • University College London (UCL)
  • The Alan Turing Institute
  • Shanghai University

Developed by:


Alliance Manchester Business School, University of Manchester
Department of Mathematics, University College London (UCL)

Powered by:


Isambard-AI, Bristol Centre for Supercomputing (BriCS)
The Bede Supercomputer
