ChaosBench

We propose ChaosBench, a large-scale, multi-channel, physics-based benchmark for subseasonal-to-seasonal (S2S) climate prediction. It is framed as a high-dimensional video regression task and provides 45 years of 60-channel observations for training data-driven models and for validating both data-driven and physics-based models. Physics-based forecasts from four national weather agencies, with lead times of up to 44 days, serve as baselines for data-driven forecasts. Our benchmark is among the first to incorporate physics-based metrics to encourage physically consistent and explainable models. We establish two tasks: full and sparse dynamics prediction.

πŸ”—: https://leap-stc.github.io/ChaosBench/

πŸ“š: https://arxiv.org/abs/2402.00712

Getting Started

Step 1: Clone the ChaosBench Github repository
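For example, assuming the repository is hosted under the leap-stc GitHub organization (the URL below is inferred from the project page, so verify it before cloning):

git clone https://github.com/leap-stc/ChaosBench.git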

Step 2: Install package dependencies

cd ChaosBench
pip install -r requirements.txt

Step 3: Initialize the data space by running

cd data/
wget https://huggingface.co/datasets/LEAP/ChaosBench/resolve/main/process.sh
chmod +x process.sh

Step 4: Download the data

# NOTE: you can also run each line one at a time to retrieve individual datasets

./process.sh era5            # Required: For input ERA5 data
./process.sh climatology     # Required: For climatology
./process.sh ukmo            # Optional: For simulation from UKMO
./process.sh ncep            # Optional: For simulation from NCEP
./process.sh cma             # Optional: For simulation from CMA
./process.sh ecmwf           # Optional: For simulation from ECMWF

Dataset Overview

  • Input: ERA5 Reanalysis (1979-2023)

  • Target: The following table lists the 48 variables (channels) available from the physics-based model outputs. Note that the input ERA5 observations contain all 60 fields, including the unchecked boxes (a minimal loading sketch follows this list):

    | Parameters / Levels (hPa) | 1000 | 925 | 850 | 700 | 500 | 300 | 200 | 100 | 50 | 10 |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | Geopotential height, z ($gpm$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
    | Specific humidity, q ($kg\,kg^{-1}$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | | |
    | Temperature, t ($K$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
    | U component of wind, u ($m\,s^{-1}$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
    | V component of wind, v ($m\,s^{-1}$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
    | Vertical velocity, w ($Pa\,s^{-1}$) | | | | | ✓ | | | | | |
  • Baselines:

    • Physics-based models:
      • UKMO: UK Meteorological Office
      • NCEP: National Centers for Environmental Prediction
      • CMA: China Meteorological Administration
      • ECMWF: European Centre for Medium-Range Weather Forecasts
    • Data-driven models:
      • Lagged-Autoencoder
      • Fourier Neural Operator (FNO)
      • ResNet
      • UNet
      • ViT/ClimaX
      • Pangu-Weather
      • FourCastNetv2
      • GraphCast
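
As referenced in the Target entry above, here is a minimal Python sketch for inspecting one of the downloaded ERA5 files with xarray. The file name, directory layout, and coordinate names below are assumptions; adjust them to whatever process.sh actually produces.

import xarray as xr

# Hypothetical path -- adjust to the layout created by process.sh.
ds = xr.open_dataset("data/era5/era5_2021.nc")
print(ds)                      # lists variables (z, q, t, u, v, w) and pressure levels
z500 = ds["z"].sel(level=500)  # e.g., geopotential height at 500 hPa; the "level" name is an assumption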

Evaluation Metrics

We divide our metrics into two classes: (1) vision-based metrics, which cover evaluations used in conventional computer vision and forecasting tasks, and (2) physics-based metrics, which aim at physically faithful and explainable data-driven forecasts. An illustrative sketch follows the list below.

  • Vision-based:
    • RMSE
    • Bias
    • Anomaly Correlation Coefficient (ACC)
    • Multiscale Structural Similarity Index (MS-SSIM)
  • Physics-based:
    • Spectral Divergence (SpecDiv)
    • Spectral Residual (SpecRes)
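
As an illustration (referenced above), the following NumPy sketch computes RMSE and a SpecDiv-style score. The SpecDiv form used here, a KL divergence between the normalized power spectra of observation and forecast, is an assumption for illustration; consult the paper for the exact definitions.

import numpy as np

def rmse(pred, truth):
    # Root-mean-square error over all grid points.
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def spec_div(pred, truth, eps=1e-12):
    # SpecDiv-style metric (assumed form): KL divergence between the
    # normalized 2D power spectra of observation and forecast.
    def normalized_power(x):
        power = np.abs(np.fft.rfft2(x)) ** 2
        return power / (power.sum() + eps)
    p = normalized_power(truth)  # reference spectral distribution
    q = normalized_power(pred)   # forecast spectral distribution
    return float(np.sum(p * np.log((p + eps) / (q + eps))))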

Leaderboard

You can access the full scores and model checkpoints under logs/<MODEL_NAME>, in the following subdirectories (a loading example follows the list):

  • Scores: eval/<METRIC>.csv
  • Model checkpoints: lightning_logs/
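
For example, a score file can be loaded with pandas; the model name, metric name, and CSV layout below are assumptions:

import pandas as pd

# Hypothetical example: RMSE scores for a UNet baseline.
scores = pd.read_csv("logs/unet/eval/rmse.csv")
print(scores.head())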