---
license: apache-2.0
viewer: false
---

ChaosBench

We propose ChaosBench, a large-scale, multi-channel, physics-based benchmark for subseasonal-to-seasonal (S2S) climate prediction. It is framed as a high-dimensional video regression task built on 45 years of 60-channel observations, which are used to validate physics-based and data-driven models and to train the latter. Physics-based forecasts from 4 national weather agencies, each with a 44-day lead time, serve as baselines for data-driven forecasts. Our benchmark is among the first to incorporate physics-based metrics that encourage physically consistent and explainable models. We establish two tasks: full and sparse dynamics prediction.
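
To make the task shape concrete, here is a minimal numpy sketch. Only the 60 channels and the 44-day lead time come from the description above; the spatial grid size and the persistence forecaster are placeholders, not the dataset's actual resolution or any of the benchmarked models:

    import numpy as np

    H, W = 121, 240   # placeholder lat/lon grid; not necessarily the dataset's resolution
    n_channels, lead_days = 60, 44

    x_t = np.random.randn(n_channels, H, W)           # state at initialization day t
    y = np.random.randn(lead_days, n_channels, H, W)  # targets for days t+1 ... t+44

    def persistence_forecast(state, n_steps):
        # Trivial baseline: repeat the initial state for every lead day.
        return np.stack([state] * n_steps, axis=0)

    forecast = persistence_forecast(x_t, lead_days)
    assert forecast.shape == y.shape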

πŸ”—: https://github.com/leap-stc/

πŸ“š: https://arxiv.org/

Table of Contents

  • Getting Started
  • Dataset Overview
  • Evaluation Metrics
  • Leaderboard

Getting Started

  1. Set up the project...
  • Clone the ChaosBench GitHub repository
  • Create a local directory to store your data, e.g., data/
  • Navigate to chaosbench/config.py and set the field DATA_DIR = <YOUR_WORKING_DATA_DIR>
  2. Download the dataset...
  • First, fetch the download script and make it executable:
    cd data/
    wget https://huggingface.co/datasets/juannat7/ChaosBench/resolve/main/process.sh
    chmod +x process.sh
    
  • Then run the script (NOTE: you can also run each line one at a time to retrieve a specific dataset; a sanity-check sketch follows these commands):
    ./process.sh era5            # For input ERA5 data
    ./process.sh ukmo            # For simulation from UKMO
    ./process.sh ncep            # For simulation from NCEP
    ./process.sh cma             # For simulation from CMA
    ./process.sh ecmwf           # For simulation from ECMWF
    ./process.sh climatology     # For climatology
    
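To sanity-check a download, a minimal sketch using xarray. The directory layout and file format below are assumptions about what process.sh produces, not guaranteed; adjust the path to the layout the script actually creates on your machine:

    import xarray as xr

    # Hypothetical path: assumes process.sh materializes ERA5 files
    # under data/era5/ as netCDF.
    ds = xr.open_dataset("data/era5/era5_example.nc")
    print(ds)  # inspect available variables, coordinates, and time range
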

Dataset Overview

  • Input: ERA5 Reanalysis (1979-2023)

  • Target: The following table lists the 48 variables (channels) that are available for physics-based models. Note that the input ERA5 observations contain ALL fields, including the unchecked boxes (a sketch enumerating these 48 channels follows the baseline list below):

    | Parameters / Levels (hPa)           | 1000 | 925 | 850 | 700 | 500 | 300 | 200 | 100 | 50 | 10 |
    |-------------------------------------|------|-----|-----|-----|-----|-----|-----|-----|----|----|
    | Geopotential height, z ($gpm$)      | ✓    | ✓   | ✓   | ✓   | ✓   | ✓   | ✓   | ✓   | ✓  | ✓  |
    | Specific humidity, q ($kg kg^{-1}$) | ✓    | ✓   | ✓   | ✓   | ✓   | ✓   |     |     |    |    |
    | Temperature, t ($K$)                | ✓    | ✓   | ✓   | ✓   | ✓   | ✓   | ✓   | ✓   | ✓  | ✓  |
    | U component of wind, u ($ms^{-1}$)  | ✓    | ✓   | ✓   | ✓   | ✓   | ✓   | ✓   | ✓   | ✓  | ✓  |
    | V component of wind, v ($ms^{-1}$)  | ✓    | ✓   | ✓   | ✓   | ✓   | ✓   | ✓   | ✓   | ✓  | ✓  |
    | Vertical velocity, w ($Pas^{-1}$)   |      |     |     |     | ✓   |     |     |     |    |    |
  • Baselines:

    • Physics-based models:
      • UKMO: UK Meteorological Office
      • NCEP: National Centers for Environmental Prediction
      • CMA: China Meteorological Administration
      • ECMWF: European Centre for Medium-Range Weather Forecasts
    • Data-driven models:
      • Lagged-Autoencoder
      • Fourier Neural Operator (FNO)
      • ResNet
      • UNet
      • ViT/ClimaX
      • PanguWeather
      • FourCastNetV2
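
The 48 physics-model channels in the table above can be enumerated programmatically; a sketch follows. The "param-level" naming convention (e.g., "z-500") is illustrative, not necessarily the dataset's own:

    LEVELS = [1000, 925, 850, 700, 500, 300, 200, 100, 50, 10]  # hPa

    AVAILABLE = {
        "z": LEVELS,        # geopotential height: all 10 levels
        "q": LEVELS[:7],    # specific humidity: 1000 hPa down to 200 hPa
        "t": LEVELS,        # temperature: all 10 levels
        "u": LEVELS,        # u wind component: all 10 levels
        "v": LEVELS,        # v wind component: all 10 levels
        "w": [500],         # vertical velocity: 500 hPa only
    }

    # Hypothetical "param-level" naming; the dataset's convention may differ.
    channels = [f"{param}-{level}"
                for param, levels in AVAILABLE.items()
                for level in levels]
    assert len(channels) == 48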

Evaluation Metrics

We divide our metrics into 2 classes: (1) vision-based metrics, which cover evaluations used in conventional computer vision and forecasting tasks, and (2) physics-based metrics, which aim to encourage physically faithful and explainable data-driven forecasts. Minimal sketches of representative metrics follow the list below.

  • Vision-based:
    • RMSE
    • Bias
    • Anomaly Correlation Coefficient (ACC)
    • Multiscale Structural Similarity Index (MS-SSIM)
  • Physics-based:
    • Spectral Divergence (SpecDiv)
    • Spectral Residual (SpecRes)
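
For concreteness, minimal numpy sketches of one metric from each class. RMSE follows the standard textbook definition; reading SpecDiv as a KL divergence between radially-binned power spectra is an assumption about the formulation, not a copy of the benchmark's implementation:

    import numpy as np

    def rmse(pred: np.ndarray, target: np.ndarray) -> float:
        """Root-mean-square error over all grid points (vision-based class)."""
        return float(np.sqrt(np.mean((pred - target) ** 2)))

    def spec_div(pred: np.ndarray, target: np.ndarray, eps: float = 1e-12) -> float:
        """Sketch of spectral divergence (physics-based class): KL divergence
        between the normalized, radially binned power spectra of truth and
        forecast. The exact ChaosBench formulation may differ."""
        def radial_power_spectrum(field):
            power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
            h, w = field.shape
            yy, xx = np.indices((h, w))
            radius = np.hypot(yy - h // 2, xx - w // 2).astype(int)
            spectrum = np.bincount(radius.ravel(), weights=power.ravel())
            return spectrum / spectrum.sum()

        p = radial_power_spectrum(target) + eps
        q = radial_power_spectrum(pred) + eps
        return float(np.sum(p * np.log(p / q)))

    # Usage on a single 2D channel with synthetic data:
    truth = np.random.randn(121, 240)
    forecast = truth + 0.1 * np.random.randn(121, 240)
    print(rmse(forecast, truth), spec_div(forecast, truth))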

Leaderboard

You can access the full scores and model checkpoints in logs/<MODEL_NAME> within the following subdirectories:

  • Scores: eval/<METRIC>.csv
  • Model checkpoints: lightning_logs/
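
A minimal pandas sketch for loading a score file. The model name, metric filename, and CSV schema below are assumptions; list the logs/ directory to see what actually exists:

    import pandas as pd

    # Hypothetical model and metric names; substitute ones present in logs/.
    scores = pd.read_csv("logs/unet/eval/rmse.csv")
    print(scores.head())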