---
license: apache-2.0
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
---
# Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
![lag-llama-architecture](images/lagllama.webp)
Lag-Llama is the <b>first open-source foundation model for time series forecasting</b>!
[[Tweet Thread](https://twitter.com/arjunashok37/status/1755261111233114165)] [[Model Weights](https://huggingface.co/time-series-foundation-models/Lag-Llama)] [[Colab Demo on Zero-Shot Forecasting](https://colab.research.google.com/drive/13HHKYL_HflHBKxDWycXgIUAHSeHRR5eo?usp=sharing)] [[GitHub](https://github.com/time-series-foundation-models/lag-llama)] [[Paper](https://arxiv.org/abs/2310.08278)]
____
This Hugging Face repository hosts the <a href="https://huggingface.co/time-series-foundation-models/Lag-Llama/blob/main/lag-llama.ckpt" target="_blank">pretrained checkpoint</a> of Lag-Llama.
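
If you prefer to fetch the checkpoint programmatically, here is a minimal sketch using the `huggingface_hub` client; the repo id and filename match the link above.

```python
from huggingface_hub import hf_hub_download

# Download the pretrained Lag-Llama checkpoint from this repository.
# The file is cached locally, so repeated calls are cheap.
ckpt_path = hf_hub_download(
    repo_id="time-series-foundation-models/Lag-Llama",
    filename="lag-llama.ckpt",
)
print(ckpt_path)  # local path to the cached checkpoint
```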
____
* **Coming Next**: Fine-tuning scripts with examples on real-world datasets and best practices for using Lag-Llama! 🚀
<b>Updates</b>:
* **17-Feb-2024**: We have released an updated [Colab Demo](https://colab.research.google.com/drive/1XxrLW9VGPlZDw3efTvUi0hQimgJOwQG6?usp=sharing) for zero-shot forecasting that shows how to load time series in different formats.
* **7-Feb-2024**: We released Lag-Llama, with open-source model checkpoints and a Colab Demo for zero-shot forecasting.
____
<b>Current Features:</b>
💫 <b>Zero-shot forecasting</b> on a dataset of <b>any frequency</b> for <b>any prediction length</b>, using the <a href="https://colab.research.google.com/drive/13HHKYL_HflHBKxDWycXgIUAHSeHRR5eo?usp=sharing" target="_blank">Colab Demo</a>.<br/>
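
The sketch below mirrors the zero-shot workflow from the Colab demo. It assumes the [lag-llama repository](https://github.com/time-series-foundation-models/lag-llama) (which provides `LagLlamaEstimator` on top of GluonTS) is installed and that `lag-llama.ckpt` has been downloaded to the working directory. The constructor arguments follow the demo and may change between releases; the toy hourly series is purely illustrative, and the Colab demo remains the authoritative reference.

```python
import pandas as pd
import torch
from gluonts.dataset.pandas import PandasDataset
from gluonts.evaluation import make_evaluation_predictions

# LagLlamaEstimator ships with the lag-llama GitHub repository, not with GluonTS.
from lag_llama.gluon.estimator import LagLlamaEstimator

prediction_length = 24  # forecast horizon; zero-shot, so any length works
context_length = 32     # history window the model conditions on (illustrative choice)

# The checkpoint stores the hyperparameters the model was pretrained with.
# (On newer PyTorch versions you may need weights_only=False here.)
ckpt = torch.load("lag-llama.ckpt", map_location="cpu")
estimator_args = ckpt["hyper_parameters"]["model_kwargs"]

estimator = LagLlamaEstimator(
    ckpt_path="lag-llama.ckpt",
    prediction_length=prediction_length,
    context_length=context_length,
    # Architecture settings are read back from the checkpoint itself.
    input_size=estimator_args["input_size"],
    n_layer=estimator_args["n_layer"],
    n_embd_per_head=estimator_args["n_embd_per_head"],
    n_head=estimator_args["n_head"],
    scaling=estimator_args["scaling"],
    time_feat=estimator_args["time_feat"],
    batch_size=1,
    num_parallel_samples=100,
)

transformation = estimator.create_transformation()
lightning_module = estimator.create_lightning_module()
predictor = estimator.create_predictor(transformation, lightning_module)

# Wrap any univariate series as a GluonTS dataset; a toy hourly series here.
df = pd.DataFrame(
    {"target": [float(i) for i in range(200)]},
    index=pd.date_range("2024-01-01", periods=200, freq="h"),
)
dataset = PandasDataset(df, target="target")

# The last `prediction_length` points are held out and forecast zero-shot.
forecast_it, ts_it = make_evaluation_predictions(
    dataset=dataset, predictor=predictor, num_samples=100
)
forecasts = list(forecast_it)
print(forecasts[0].mean)  # point forecast; full probabilistic samples are also available
```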
____
<b>Coming Soon:</b>
⭐ An <b>online Gradio demo</b> where you can upload time series, get zero-shot predictions, and perform fine-tuning.<br/>
⭐ Features for <b>fine-tuning</b> the foundation model.<br/>
⭐ Features for <b>pretraining</b> Lag-Llama on your own large-scale data.<br/>
⭐ Scripts to <b>reproduce</b> all results in the paper.<br/>
____
Stay Tuned! 🦙