---
license: apache-2.0
viewer: false
---

# ChaosBench

We propose ChaosBench, a large-scale, multi-channel, physics-based benchmark for subseasonal-to-seasonal (S2S) climate prediction. Framed as a high-dimensional video regression task, it provides 45 years of 60-channel observations for validating physics-based and data-driven models, and for training the latter. Physics-based forecasts from 4 national weather agencies, with a 44-day lead time, serve as baselines for data-driven forecasts. Our benchmark is one of the first to incorporate physics-based metrics to encourage physically-consistent and explainable models. We establish two tasks: full and sparse dynamics prediction.

🔗: [https://github.com/leap-stc/](https://github.com/leap-stc/)
📚: [https://arxiv.org/](https://arxiv.org/)

## Table of Contents
- [Getting Started](#getting-started)
- [Dataset Overview](#dataset-overview)
- [Evaluation Metrics](#evaluation-metrics)
- [Leaderboard](#leaderboard)

## Getting Started

1. Set up the project...
    - Clone the `ChaosBench` GitHub repository
    - Create a local directory to store your data, e.g., `data/`
    - Navigate to `chaosbench/config.py` and change the field `DATA_DIR = `

2. Download the dataset...
    - First, retrieve the processing script:
    ```
    cd data/
    wget https://huggingface.co/datasets/juannat7/ChaosBench/resolve/main/process.sh
    chmod +x process.sh
    ```
    - And finally: (__NOTE__: you can also run each line _one at a time_ to retrieve each dataset)
    ```
    ./process.sh era5          # For input ERA5 data
    ./process.sh ukmo          # For simulation from UKMO
    ./process.sh ncep          # For simulation from NCEP
    ./process.sh cma           # For simulation from CMA
    ./process.sh ecmwf         # For simulation from ECMWF
    ./process.sh climatology   # For climatology
    ```

## Dataset Overview

- __Input:__ ERA5 Reanalysis (1979-2023)
- __Target:__ The following table lists the 48 variables (channels) available from the physics-based models. Note that the __Input__ ERA5 observations contain __ALL__ fields, including the unchecked boxes (a loading sketch follows at the end of this section):

Parameters/Levels (hPa) | 1000 | 925 | 850 | 700 | 500 | 300 | 200 | 100 | 50 | 10 |
:---------------------- | :----| :---| :---| :---| :---| :---| :---| :---| :--| :--|
Geopotential height, z ($gpm$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Specific humidity, q ($kg\,kg^{-1}$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |   |   |   |
Temperature, t ($K$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
U component of wind, u ($m\,s^{-1}$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
V component of wind, v ($m\,s^{-1}$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Vertical velocity, w ($Pa\,s^{-1}$) |   |   |   |   | ✓ |   |   |   |   |   |

- __Baselines:__
    - Physics-based models:
        - [x] UKMO: UK Meteorological Office
        - [x] NCEP: National Centers for Environmental Prediction
        - [x] CMA: China Meteorological Administration
        - [x] ECMWF: European Centre for Medium-Range Weather Forecasts
    - Data-driven models:
        - [x] Lagged-Autoencoder
        - [x] Fourier Neural Operator (FNO)
        - [x] ResNet
        - [x] UNet
        - [x] ViT/ClimaX
        - [x] PanguWeather
        - [x] Fourcastnetv2
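The sketch below shows one way a single snapshot of the processed ERA5 input might be loaded and stacked into the 48 evaluated channels. The file path, filename, variable short names, and `level` coordinate are assumptions for illustration only; adjust them to whatever `process.sh` actually produces and to your `chaosbench/config.py` settings.

```python
# Minimal sketch: load one ERA5 snapshot and stack the 48 target channels
# into a (C, H, W) array. Filenames and variable/level naming are ASSUMED,
# not taken from the ChaosBench code -- adapt to your local data layout.
import numpy as np
import xarray as xr

DATA_DIR = "data"  # should match DATA_DIR in chaosbench/config.py
PRESSURE_LEVELS = [1000, 925, 850, 700, 500, 300, 200, 100, 50, 10]

# Hypothetical path to a single daily ERA5 file
ds = xr.open_dataset(f"{DATA_DIR}/era5/era5_20230101.nc")

# The 48 channels evaluated against the physics-based baselines
target_channels = (
    [("z", lev) for lev in PRESSURE_LEVELS]                         # geopotential height
    + [("q", lev) for lev in [1000, 925, 850, 700, 500, 300, 200]]  # specific humidity
    + [("t", lev) for lev in PRESSURE_LEVELS]                       # temperature
    + [("u", lev) for lev in PRESSURE_LEVELS]                       # zonal wind
    + [("v", lev) for lev in PRESSURE_LEVELS]                       # meridional wind
    + [("w", 500)]                                                  # vertical velocity
)

x = np.stack([ds[var].sel(level=lev).values for var, lev in target_channels])
print(x.shape)  # expected: (48, n_lat, n_lon)
```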
## Evaluation Metrics

We divide our metrics into two classes: (1) vision-based metrics, which cover evaluation criteria used in conventional computer vision and forecasting tasks, and (2) physics-based metrics, which aim to encourage more physically-faithful and explainable data-driven forecasts.

- __Vision-based:__
    - [x] RMSE
    - [x] Bias
    - [x] Anomaly Correlation Coefficient (ACC)
    - [x] Multiscale Structural Similarity Index (MS-SSIM)
- __Physics-based:__ (illustrated in the sketch at the end of this card)
    - [x] Spectral Divergence (SpecDiv)
    - [x] Spectral Residual (SpecRes)

## Leaderboard

You can access the full scores and model checkpoints in `logs/` within the following subdirectories:
- Scores: `eval/.csv`
- Model checkpoints: `lightning_logs/`
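As a reference for the physics-based scores reported above, the following is a minimal sketch of the SpecDiv idea, assuming it compares the normalized, radially averaged power spectra of a forecast and the verifying observation via a KL divergence. This is an illustrative implementation, not the benchmark's reference code; see the paper and the `chaosbench/` package for the exact definition.

```python
# Illustrative SpecDiv sketch (ASSUMED form: KL divergence between
# normalized radial power spectra of forecast and verification fields).
import numpy as np

def power_spectrum(field: np.ndarray) -> np.ndarray:
    """Radially averaged power spectrum of a 2D field."""
    fft = np.fft.fftshift(np.fft.fft2(field))
    power = np.abs(fft) ** 2

    # Bin power by integer radial wavenumber from the spectrum's center.
    h, w = field.shape
    ky, kx = np.indices((h, w))
    r = np.hypot(kx - w // 2, ky - h // 2).astype(int)
    binned = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return binned / np.maximum(counts, 1)

def spec_div(forecast: np.ndarray, truth: np.ndarray, eps: float = 1e-12) -> float:
    """KL divergence between the normalized power spectra of truth and forecast."""
    p = power_spectrum(truth)
    q = power_spectrum(forecast)
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy usage with random fields standing in for one (lat, lon) channel
rng = np.random.default_rng(0)
print(spec_div(rng.normal(size=(121, 240)), rng.normal(size=(121, 240))))
```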