---
license: apache-2.0
---
# ChaosBench
We propose ChaosBench, a large-scale, multi-channel, physics-based benchmark for subseasonal-to-seasonal (S2S) climate prediction.
It is framed as a high-dimensional video regression task built on 45 years of 60-channel observations
for validating physics-based and data-driven models, and for training the latter.
Physics-based forecasts are generated by 4 national weather agencies with a 44-day lead time and serve as baselines for data-driven forecasts.
Our benchmark is one of the first to incorporate physics-based metrics to ensure physically consistent and explainable models.
We establish two tasks: full and sparse dynamics prediction.

🔗: [https://github.com/leap-stc/](https://github.com/leap-stc/)

📚: [https://arxiv.org/](https://arxiv.org/)

## Table of Contents
- [Getting Started](#getting-started)
- [Dataset Overview](#dataset-overview)
- [Evaluation Metrics](#evaluation-metrics)
- [Leaderboard](#leaderboard)

## Getting Started
1. Set up the project...
    - Clone the `ChaosBench` GitHub repository
    - Create a local directory to store your data, e.g., `data/`
    - Navigate to `chaosbench/config.py` and change the field `DATA_DIR = <YOUR_WORKING_DATA_DIR>` (a one-line sketch of this edit appears at the end of this section)

2. Download the dataset...
    - First we perform initialization:
      ```
      cd data/
      # fetch the raw script (the resolve URL serves the file itself)
      wget https://huggingface.co/datasets/juannat7/ChaosBench/resolve/main/process.sh
      chmod +x process.sh
      ```
    - And finally: (__NOTE__: you can also run each line _one at a time_ to retrieve each dataset)
      ```
      ./process.sh era5   # For input ERA5 data
      ./process.sh ukmo   # For simulation from UKMO
      ./process.sh ncep   # For simulation from NCEP
      ./process.sh cma    # For simulation from CMA
      ./process.sh ecmwf  # For simulation from ECMWF
      ```

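As referenced in step 1, a minimal sketch of the `DATA_DIR` edit in `chaosbench/config.py` (the path below is a placeholder, not a real location):

```python
# chaosbench/config.py -- the one-line edit from step 1.
# "/path/to/your/data" is a placeholder; point it at the directory you created above.
DATA_DIR = "/path/to/your/data"
```
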
## Dataset Overview

- __Input:__ ERA5 Reanalysis (1979-2023)

- __Target:__ The following table indicates the 48 variables (channels) that are available for physics-based models (a short sketch at the end of this section enumerates them). Note that the __Input__ ERA5 observations contain __ALL__ fields, including the unchecked boxes:

Parameters/Levels (hPa) | 1000 | 925 | 850 | 700 | 500 | 300 | 200 | 100 | 50 | 10
:---------------------- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :---
Geopotential height, z ($gpm$) | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; |
Specific humidity, q ($kg \, kg^{-1}$) | &check; | &check; | &check; | &check; | &check; | &check; | &nbsp; | &nbsp; | &nbsp; |
Temperature, t ($K$) | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; |
U component of wind, u ($m \, s^{-1}$) | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; |
V component of wind, v ($m \, s^{-1}$) | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; | &check; |
Vertical velocity, w ($Pa \, s^{-1}$) | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &check; | &nbsp; | &nbsp; | &nbsp; | &nbsp; | &nbsp; |

- __Baselines:__
    - Physics-based models:
        - [x] UKMO: UK Meteorological Office
        - [x] NCEP: National Centers for Environmental Prediction
        - [x] CMA: China Meteorological Administration
        - [x] ECMWF: European Centre for Medium-Range Weather Forecasts
    - Data-driven models:
        - [x] Lagged-Autoencoder
        - [x] Fourier Neural Operator (FNO)
        - [x] ResNet
        - [x] UNet
        - [x] ViT/ClimaX
        - [x] PanguWeather
        - [x] Fourcastnetv2

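To make the channel layout concrete, here is a small illustrative Python sketch (the variable/level naming is hypothetical, not necessarily the repository's convention) that enumerates the 48 physics-model channels checked in the table above:

```python
# Illustrative only: enumerate the 48 (parameter, pressure level) channels
# checked in the table above. Channel names are hypothetical.
LEVELS = [1000, 925, 850, 700, 500, 300, 200, 100, 50, 10]  # hPa

CHANNELS = (
    [f"z-{lvl}" for lvl in LEVELS]        # geopotential height: all 10 levels
    + [f"q-{lvl}" for lvl in LEVELS[:7]]  # specific humidity: 1000-200 hPa only
    + [f"t-{lvl}" for lvl in LEVELS]      # temperature: all 10 levels
    + [f"u-{lvl}" for lvl in LEVELS]      # u wind: all 10 levels
    + [f"v-{lvl}" for lvl in LEVELS]      # v wind: all 10 levels
    + ["w-500"]                           # vertical velocity: 500 hPa only
)

assert len(CHANNELS) == 48  # matches the count quoted under __Target__
```
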
## Evaluation Metrics
We divide our metrics into two classes: (1) vision-based metrics, which cover evaluations used in conventional computer vision and forecasting tasks, and (2) physics-based metrics, which aim to encourage more physically faithful and explainable data-driven forecasts. An illustrative sketch follows the lists below.

- __Vision-based:__
    - [x] RMSE
    - [x] Bias
    - [x] Anomaly Correlation Coefficient (ACC)
    - [x] Multiscale Structural Similarity Index (MS-SSIM)
- __Physics-based:__
    - [x] Spectral Divergence (SpecDiv)
    - [x] Spectral Residual (SpecRes)

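As a rough illustration only (not the benchmark's official implementation; the exact metric definitions live in the ChaosBench repository and paper), the sketch below computes RMSE and a KL-style divergence between normalized power spectra, which is the general idea behind a spectrum-based score such as SpecDiv:

```python
import numpy as np

def rmse(pred, target):
    """Root-mean-square error over all grid points."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def spectral_divergence(pred, target, eps=1e-12):
    """KL-style divergence between the normalized 2D power spectra of
    target and prediction. Illustrative sketch; the official SpecDiv
    definition may differ in detail."""
    def norm_power(x):
        power = np.abs(np.fft.fft2(x)) ** 2
        return power / (power.sum() + eps)
    p, q = norm_power(target), norm_power(pred)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy usage with random fields standing in for one channel at one lead time
rng = np.random.default_rng(0)
obs = rng.standard_normal((64, 128))
fcst = obs + 0.1 * rng.standard_normal((64, 128))
print(rmse(fcst, obs), spectral_divergence(fcst, obs))
```
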
## Leaderboard
You can access the full scores and model checkpoints in `logs/<MODEL_NAME>` within the following subdirectories:
- Scores: `eval/<METRIC>.csv`
- Model checkpoints: `lightning_logs/`
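
For example, a minimal sketch of reading one of the score files with pandas (the model and metric names below are hypothetical stand-ins, not guaranteed filenames):

```python
import pandas as pd

# Hypothetical paths: substitute the actual <MODEL_NAME> and <METRIC>
# present in your local logs/ directory.
scores = pd.read_csv("logs/unet/eval/rmse.csv")
print(scores.head())
```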