Fix model card and add missing links

- fix mesh parameter
- add arXiv links
- clarify time horizon for use case
- update training procedure details

README.md CHANGED
@@ -13,7 +13,7 @@ pipeline_tag: graph-ml
 
 **FastNet** is a data-driven medium-range numerical weather prediction model developed jointly by the UK Met Office and the Alan Turing Institute. This release of FastNet v1.1 marks the first publicly shared experimental release of the FastNet project.
 
-FastNet produces highly skilful forecasts that overcome commonly known limitations of AI models, resulting in more physically realistic forecasts, as demonstrated in the corresponding publication.
+FastNet produces highly skilful forecasts that overcome commonly known limitations of AI models, resulting in more physically realistic forecasts, as demonstrated in the corresponding publication [FastNet: Improving the physical consistency of machine-learning weather prediction models through loss function design](https://arxiv.org/abs/2509.17601).
 
 ⚠️ **Note:** A future v2 of this model is in development, based on the [Anemoi framework](https://github.com/ecmwf/anemoi-core). No further updates to the v1 codebase are planned.
 
@@ -21,9 +21,10 @@ FastNet produces highly skilful forecasts
 
 
 
+
 ### Model Description
 
-FastNet has an encode-process-decode structure with a series of graph neural networks and auto-regressive rollout. The encoder is a directional bipartite graph linking the current atmospheric state defined on input grid cells to a lower-resolution latent space defined on mesh nodes. The processor then advances the mesh state in time in six-hour increments. The processor operates on a multi-scale icosahedral mesh, starting from the 12-node icosahedron and subdividing …
+FastNet has an encode-process-decode structure with a series of graph neural networks and auto-regressive rollout. The encoder is a directional bipartite graph linking the current atmospheric state, defined on input grid cells, to a lower-resolution latent space defined on mesh nodes. The processor then advances the mesh state in time in six-hour increments. It operates on a multi-scale icosahedral mesh, starting from the 12-node icosahedron and subdividing five times, enabling the model to capture both localised and long-range interactions. Finally, the decoder maps the latent mesh representation back to the output domain, and the prediction is fed back as input for subsequent steps during rollout. FastNet uses a residual formulation, in which the decoder output represents the increment to be added to the input state via skip connections, rather than predicting the full field from scratch. Notably, FastNet was trained with loss-function adaptations designed to improve physical realism compared to similar models.
 
 - **Developed by:** Met Office & The Alan Turing Institute
 - **Model type:** Encoder-processor-decoder model
@@ -32,12 +33,12 @@ FastNet has an encode-process-decode structure
 ### Model Sources
 
 - **Inference-only repository:** https://github.com/MetOffice/fastnet-inference
-- **Paper:**
+- **Paper:** [main publication](https://arxiv.org/abs/2509.17601) and [technical paper](https://arxiv.org/abs/2509.17658)
 
 ## Uses
 
 ### Direct Use
-This model is intended for research and exploratory inference.
+This model is intended for research and exploratory inference using historical or real-time atmospheric reanalysis inputs to generate global weather-pattern predictions. Model performance has typically been evaluated for lead times of up to 10 days, although fine-tuning was focused on shorter horizons.
 It is released for inference only: weights are provided for forward prediction, but training and fine-tuning are not supported in this release.
 
 Typical direct uses include: benchmarking against baselines, sensitivity experiments (e.g., perturbing input fields), and case-study analysis of notable events.
@@ -45,8 +46,6 @@ Typical direct uses include: benchmarking against baselines
 ### Out of scope
 This release is not intended for operational forecasting, safety-critical decision-making, or issuing public warnings.
 
-## Known Limitations
-
 ## Training Details
 
 ### Training Data
@@ -84,8 +83,8 @@ Below we summarize the three-stage training procedure
 | Stage | # rollout | Loss / Objective |
 | ------------- | ----------- | ---------------- |
 | Pre-training | n = 1 (6h) | weighted MSE |
-| Fine-tuning 1 | n = 7 (42h) | MSE |
-| Fine-tuning 2 | n = …
+| Fine-tuning 1 | n = 7 (42h) | weighted MSE |
+| Fine-tuning 2 | up to n = 12 (72h) in a [1, 2, 4, 8, 12] pattern | spectral AMSE |
 Weighting was applied per variable and was proportional to pressure level.
 
 #### Preprocessing
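As a point of reference for the mesh described in the Model Description ("starting from the 12-node icosahedron and subdividing five times"), the size of the resulting multi-scale mesh can be computed from the standard 4-way triangle-subdivision rule. This is a sketch of generic mesh geometry, not code from the FastNet repository:

```python
def icosphere_counts(refinements: int) -> tuple[int, int, int]:
    """Vertex, edge, and face counts of an icosahedral mesh after
    repeated 4-way triangle subdivision (each edge gains one midpoint
    vertex, each face splits into four)."""
    v, e, f = 12, 30, 20  # base icosahedron
    for _ in range(refinements):
        v, e, f = v + e, 2 * e + 3 * f, 4 * f
    return v, e, f

# "12-node icosahedron ... subdividing five times" (model card)
vertices, edges, faces = icosphere_counts(5)
assert vertices - edges + faces == 2  # Euler's formula sanity check
print(vertices)  # → 10242 mesh nodes at the finest scale
```

Assuming this standard subdivision, five refinements give a finest mesh of 10,242 nodes, with the coarser levels (12, 42, 162, 642, 2,562 nodes) providing the long-range connections of the multi-scale processor.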
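The encode-process-decode rollout with residual updates described in the Model Description can be sketched as follows. The `encode`/`process`/`decode` stand-ins and the scalar state are purely illustrative; only the six-hour step and the add-the-increment (residual) update come from the model card:

```python
def rollout(x0, encode, process, decode, n_steps):
    """Autoregressive rollout: each step advances the state by 6 h.
    Residual formulation: the decoder predicts an increment that is
    added to the input state (skip connection), not the full field."""
    states = [x0]
    x = x0
    for _ in range(n_steps):
        z = encode(x)    # input grid -> lower-resolution latent mesh
        z = process(z)   # advance the latent mesh state by 6 h
        dx = decode(z)   # latent mesh -> grid-space increment
        x = x + dx       # residual update, fed back in as next input
        states.append(x)
    return states

# Toy stand-ins: identity encoder/processor, constant +1.0 increment.
traj = rollout(0.0, encode=lambda x: x, process=lambda z: z,
               decode=lambda z: 1.0, n_steps=7)
# n = 7 steps at 6 h per step corresponds to a 42 h forecast
```

The same loop, unrolled further, produces the longer horizons used in fine-tuning (e.g. n = 12 steps for 72 h).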
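The per-variable weighting noted under the training table ("proportional to pressure level") can be illustrated with a minimal sketch. Normalising the weights to sum to one and the particular levels chosen are assumptions of this sketch, not details from the card:

```python
def weighted_mse(pred, target, pressure_levels):
    """Per-level MSE with weights proportional to pressure level, so
    near-surface levels (e.g. 1000 hPa) contribute more than upper
    levels (e.g. 50 hPa). Normalising to sum to 1 is an assumption."""
    total = sum(pressure_levels)
    weights = [p / total for p in pressure_levels]
    return sum(w * (p - t) ** 2
               for w, p, t in zip(weights, pred, target))

levels = [1000, 500, 50]  # hPa; hypothetical subset of levels
# A unit error at 1000 hPa is penalised 20x more than at 50 hPa.
low = weighted_mse([1, 0, 0], [0, 0, 0], levels)
high = weighted_mse([0, 0, 1], [0, 0, 0], levels)
```

With proportional weights, errors near the surface dominate the objective, which biases training toward the levels most relevant to surface weather.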