
TerraMesh
A planetary‑scale, multimodal analysis‑ready dataset for Earth‑Observation foundation models.
TerraMesh merges data from Sentinel‑1 SAR, Sentinel‑2 optical, Copernicus DEM, NDVI and land‑cover sources into more than 9 million co‑registered patches ready for large‑scale representation learning.
Dataset to be released soon.
Samples from the TerraMesh dataset with seven spatiotemporally aligned modalities. Sentinel-2 L2A uses IRRG pseudo-coloring and Sentinel-1 RTC is visualized in dB scale as VH-VV-VV/VH. The Copernicus DEM is scaled to the image value range with an additional 10 m buffer to highlight flat scenes.
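For illustration, a minimal NumPy sketch of such an S1 RTC composite (the exact normalization used for the figure is not specified; the percentile stretch below is an assumption):
import numpy as np

def stretch(x, low=2, high=98):
    # Percentile stretch to [0, 1] for display
    lo, hi = np.percentile(x, [low, high])
    return np.clip((x - lo) / (hi - lo + 1e-9), 0.0, 1.0)

def s1_rtc_to_rgb(vv, vh, eps=1e-6):
    # Convert linear backscatter to dB; the VV/VH ratio becomes a difference in dB space
    vv_db = 10.0 * np.log10(np.clip(vv, eps, None))
    vh_db = 10.0 * np.log10(np.clip(vh, eps, None))
    # Stack as VH-VV-VV/VH, returning an (H, W, 3) array suitable for plt.imshow
    return np.stack([stretch(vh_db), stretch(vv_db), stretch(vv_db - vh_db)], axis=-1)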
Dataset organisation
The archive ships two top‑level splits, train/ and val/, each holding one folder per modality. More details follow with the dataset release.
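A hypothetical layout (the modality folder names follow the keys used in the loading code below; exact shard naming may change until the release):
train/
├── S2L2A/
├── S1GRD/
├── S1RTC/
├── DEM/
└── ...
val/
├── S2L2A/
└── ...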
Description
TerraMesh fuses complementary optical, radar, topographic and thematic layers into pixel‑aligned 10 m cubes, allowing models to learn joint representations of land cover, vegetation dynamics and surface structure at planetary scale. The dataset is globally distributed and covers multiple years.
Performance evaluation
TerraMesh was used to pre-train TerraMind-B. On the six evaluated segmentation tasks from the PANGAEA benchmark, TerraMind‑B reaches an average mIoU of 66.6%, the best overall score with an average rank of 2.33. This is roughly a 3 percentage point improvement over the next‑best open model (CROMA), underscoring the benefits of pre‑training on TerraMesh. Compared to an ablation model pre-trained only on the SSL4EO-S12 locations, TerraMind-B performs about 1 percentage point better overall, with better global generalization on more remote tasks such as CTM-SS. More details can be found in our paper.
Usage
We provide the data loading code in terramesh.py, which can be downloaded directly from the dataset repository or with:
wget https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/terramesh.py
You can use the build_terramesh_dataset function to initialize a dataset, which uses the WebDataset package to load samples from the shard files. You can stream the data from Hugging Face or download the full dataset and pass a local path (a download sketch follows the first example below).
from terramesh import build_terramesh_dataset
from torch.utils.data import DataLoader
# If you only pass one modality, the modality is loaded with the "image" key
dataset = build_terramesh_dataset(
    path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",  # Streaming or local path
    modalities=["S2L2A"],
    split='val',
    batch_size=8,
)
# Batch keys: ['__key__', '__url__', 'image']
# If you pass multiple modalities, the modalities are returned using the modality names as keys
dataset = build_terramesh_dataset(
    path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",  # Streaming or local path
    modalities=["S2L2A", "S1GRD", "S1RTC", "DEM"],
    split='val',
    batch_size=8,
)
# Set batch_size=None in the DataLoader because batching is already handled by WebDataset.
# Alternatively, pass batch_size=None to build_terramesh_dataset and let the DataLoader batch.
dataloader = DataLoader(dataset, batch_size=None, num_workers=4)
# Iterate over the dataloader
for batch in dataloader:
print("Batch keys:", list(batch.keys()))
# Batch keys: ['__key__', '__url__', 'S2L2A', 'S1RTC', 'DEM']
# The code removes the time dim from the source data
print("Data shape:", batch["S2L2A"].shape)
# Data shape: torch.Size([8, 12, 264, 264]
break
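If you prefer a local copy over streaming, one option is snapshot_download from huggingface_hub; passing the resulting directory as path is an assumption based on the note above that local paths are supported:
from huggingface_hub import snapshot_download
from terramesh import build_terramesh_dataset

# Download the full dataset repository once (large!) and reuse the local copy
local_path = snapshot_download(repo_id="ibm-esa-geospatial/TerraMesh", repo_type="dataset")

dataset = build_terramesh_dataset(
    path=local_path,  # Local path instead of the streaming URL
    modalities=["S2L2A"],
    split='val',
    batch_size=8,
)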
We provide additional code for wrapping albumentations transform functions. We recommend albumentations because parameters are shared between all image modalities (e.g., the same random crop is applied to each). However, it requires some wrapping to bring the data into the expected shape.
import albumentations as A
from albumentations.pytorch import ToTensorV2
from terramesh import build_terramesh_dataset, Transpose, MultimodalTransforms
# Define all image modalities
modalities = ["S2L2A", "S1GRD", "S1RTC", "DEM"]
# Define a multimodal transform that converts the data into the shape expected by albumentations
val_transform = MultimodalTransforms(
    transforms=A.Compose(
        [  # albumentations shares transform parameters between all image modalities
            Transpose([1, 2, 0]),  # Convert data to channels-last (the layout albumentations expects)
            A.CenterCrop(224, 224),  # Use center crop in the val split
            # A.RandomCrop(224, 224),  # Use random crop in the train split
            # A.D4(),  # Optionally, use random flipping and rotation for the train split
            ToTensorV2(),  # Convert to tensor and back to channels-first
        ],
        is_check_shapes=False,  # Shape checks are unnecessary because TerraMesh data is aligned
        additional_targets={m: "image" for m in modalities},
    ),
    non_image_modalities=["__key__", "__url__"],  # Additional non-image keys
)
dataset = build_terramesh_dataset(
    path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",  # Streaming or local path
    modalities=modalities,
    split='val',
    transform=val_transform,
    batch_size=8,
)
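The transformed dataset plugs into a DataLoader exactly as before; assuming the same S2L2A band count as in the example above, the 224 x 224 center crop should yield:
from torch.utils.data import DataLoader

dataloader = DataLoader(dataset, batch_size=None, num_workers=4)  # WebDataset already batches

batch = next(iter(dataloader))
print("Data shape:", batch["S2L2A"].shape)
# Expected: torch.Size([8, 12, 224, 224]) after the center crop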
If you have any issues with data loading, please create a discussion in the community tab and tag @blumenstiel.
Citation
If you use TerraMesh, please cite:
@article{blumenstiel2025terramesh,
  title={TerraMesh: A planetary mosaic of multimodal Earth observation data},
  author={Blumenstiel, Benedikt and Fraccaro, Paolo and Marsocci, Valerio and Jakubik, Johannes and Maurogiovanni, Stefano and Czerkawski, Mikolaj and Sedona, Rocco and Cavallaro, Gabriele and Brunschwiler, Thomas and Bernabe-Moreno, Juan and others},
  journal={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year={2025}
}
License
TerraMesh is released under the Creative Commons Attribution‑ShareAlike 4.0 (CC‑BY‑SA‑4.0) license.
Acknowledgements
TerraMesh is part of the FAST‑EO project funded by the European Space Agency Φ‑Lab (contract #4000143501/23/I‑DT) and builds upon the SSL4EO‑S12 and MajorTOM‑Core datasets.
