Fundamental Physics Neural Operators
Paper: Learning Data-Efficient and Generalizable Neural Operators via Fundamental Physics Knowledge
Published at: ICLR 2026
Authors: Siying Ma, Mehrdad M. Zadeh, Mauricio Soroco, Wuyang Chen, Jiguo Cao, Vijay Ganesh
Affiliations: Simon Fraser University, Georgia Institute of Technology
Project Page: https://sites.google.com/view/sciml-fundemental-pde
Overview
We propose a multiphysics training framework that jointly learns from original PDEs and their simplified basic forms (decomposed fundamental physics terms). This approach improves data efficiency, long-term physical consistency, and out-of-distribution generalization across 1D/2D/3D PDE problems. The method is architecture-agnostic and yields consistent reductions in nRMSE.
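For intuition, here is a minimal sketch of the joint objective, assuming the two data streams are simply combined with a weighted sum; the aux_weight factor and the use of MSE as the training loss are illustrative assumptions, not the exact recipe from the paper.

import torch
import torch.nn.functional as F

def joint_loss(model, pde_batch, basic_batch, aux_weight=1.0):
    # Loss on the original PDE samples plus a weighted loss on the
    # decomposed basic-form samples; aux_weight is a hypothetical knob.
    pde_in, pde_target = pde_batch
    basic_in, basic_target = basic_batch
    loss_pde = F.mse_loss(model(pde_in), pde_target)
    loss_basic = F.mse_loss(model(basic_in), basic_target)
    return loss_pde + aux_weight * loss_basic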
Checkpoints
All checkpoints are Transformer-based neural operators (VideoMAE architecture, see Appendix B of the paper).
Naming Convention
{Model}{PDE}_ds{N}_{P}_{B}.pt
- Model prefix:
  - Transformer = Baseline (trained only on the original PDE)
  - TransformerAux = Ours (jointly trained on the original PDE + the decomposed basic form)
- PDE suffix:
  - 3D = 3D Incompressible Navier-Stokes
  - _RD = 2D Diffusion-Reaction
  - (no suffix) = 2D Incompressible Navier-Stokes
- Data composition suffix:
  - dsN_P_B: N = equivalent baseline samples, P = original PDE samples, B = basic-form samples (see the parsing sketch below)
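As a convenience, a small hypothetical helper that splits a checkpoint filename into these components; the function name and regular expression are illustrative and not part of the released code.

import re

def parse_checkpoint_name(filename):
    # Split e.g. "TransformerAux3D_ds64_32_96.pt" into its naming-convention parts.
    m = re.match(r"(Transformer(?:Aux)?)(3D|_RD)?_ds(\d+)_(\d+)_(\d+)\.pt$", filename)
    if m is None:
        raise ValueError(f"unrecognized checkpoint name: {filename}")
    model, pde, n, p, b = m.groups()
    pde_map = {
        "3D": "3D Incompressible Navier-Stokes",
        "_RD": "2D Diffusion-Reaction",
        None: "2D Incompressible Navier-Stokes",
    }
    return {
        "model": model,
        "pde": pde_map[pde],
        "baseline_samples": int(n),
        "pde_samples": int(p),
        "basic_samples": int(b),
    }

print(parse_checkpoint_name("TransformerAux3D_ds64_32_96.pt"))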
Available Checkpoints
| File | PDE | Type | Size | Reproduces |
|---|---|---|---|---|
| navier_stokes_3d/Transformer3D_ds64_32_96.pt | 3D Navier-Stokes | Baseline | 3.32 GB | Table 6, Figure 10 |
| navier_stokes_3d/TransformerAux3D_ds64_32_96.pt | 3D Navier-Stokes | Ours | 3.32 GB | Table 6, Figure 10 |
| diffusion_reaction_2d/Transformer_RD_ds128_64_192.pt | 2D Diffusion-Reaction | Baseline | 2.54 GB | Table 5, Figure 9 |
| diffusion_reaction_2d/TransformerAux_RD_ds128_64_192.pt | 2D Diffusion-Reaction | Ours | 2.54 GB | Table 5, Figure 9 |
| navier_stokes_2d/TransformerAux_ds16_8_48.pt | 2D Navier-Stokes | Ours | 1.25 GB | Figure 9, Figure 11 |
Usage
import torch
from huggingface_hub import hf_hub_download

# Download a checkpoint from the Hub
ckpt_path = hf_hub_download(
    repo_id="delta-lab-ai/fundamental-physics-neural-operators",
    filename="navier_stokes_3d/TransformerAux3D_ds64_32_96.pt",
)

# Load the checkpoint on CPU
checkpoint = torch.load(ckpt_path, map_location="cpu")
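The exact layout of the saved dictionary (e.g. a raw state_dict vs. nested keys such as "model" or "optimizer") is not documented here, so a quick inspection is a safe first step.

# Print the top-level keys to see how the checkpoint is organized before
# loading it into a model; key names vary by training script.
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))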
Citation
@inproceedings{ma2026learning,
title={Learning Data-Efficient and Generalizable Neural Operators via Fundamental Physics Knowledge},
author={Ma, Siying and Zadeh, Mehrdad M. and Soroco, Mauricio and Chen, Wuyang and Cao, Jiguo and Ganesh, Vijay},
booktitle={International Conference on Learning Representations (ICLR)},
year={2026}
}
License
MIT