---
license: mit
task_categories:
- image-to-image
language:
- en
tags:
- earth
- remote sensing
- super-resolution
- sentinel-2
pretty_name: opensr_test
size_categories:
- 10K
---

A comprehensive benchmark for real-world Sentinel-2 imagery super-resolution.

**GitHub**: [https://github.com/ESAOpenSR/opensr-test](https://github.com/ESAOpenSR/opensr-test)

**Documentation**: [https://esaopensr.github.io/opensr-test](https://esaopensr.github.io/opensr-test)

**PyPI**: [https://pypi.org/project/opensr-test/](https://pypi.org/project/opensr-test/)

**Paper**: [https://www.techrxiv.org/users/760184/articles/735467-a-comprehensive-benchmark-for-optical-remote-sensing-image-super-resolution](https://www.techrxiv.org/users/760184/articles/735467-a-comprehensive-benchmark-for-optical-remote-sensing-image-super-resolution)

---

## **Overview**

Super-Resolution (SR) aims to improve the ground sampling distance of satellite imagery. However, two problems are common in the literature. First, most models are **tested on synthetic data**, raising doubts about their real-world applicability and performance. Second, traditional evaluation metrics such as PSNR, LPIPS, and SSIM were not designed to assess SR performance. These metrics fall short, especially under changes in luminance or spatial misalignments, scenarios frequently encountered in real-world imagery.

To address these challenges, `opensr-test` provides a fair approach to SR benchmarking. We provide five datasets carefully crafted to minimize spatial and spectral misalignment. In addition, `opensr-test` precisely assesses SR algorithm performance across three independent groups of metrics that measure consistency, synthesis, and correctness.

## **Datasets**

The `opensr-test` package provides five datasets for benchmarking SR models. These datasets are carefully crafted to minimize spatial and spectral misalignment. Each dataset consists of a dictionary with the following keys:

- **`L2A`**: Sentinel-2 L2A bands (12 bands).
- **`L1C`**: Sentinel-2 L1C bands (13 bands).
- **`HR`**: High-resolution image (RGBNIR) without harmonization.
- **`HRharm`**: Harmonized high-resolution image (RGBNIR). The HRharm image is **harmonized with respect to the Sentinel-2 L2A bands**.
- **`metadata`**: A pandas DataFrame with the images' metadata:
  - **`lr_file`**: The low-resolution image file path.
  - **`hr_file`**: The high-resolution image file path.
  - **`roi`**: The unique spatial identifier.
  - **`lr_gee_id`**: The low-resolution image Google Earth Engine ID.
  - **`reflectance`**: How SR affects the mean reflectance values. It uses the L1 norm; the lower the value, the better the reflectance consistency.
  - **`spectral`**: How the harmonization affects the spectral signature compared to the LR image. It uses the spectral angle distance, in degrees; the lower the value, the better the spectral consistency.
  - **`spatial`**: The spatial misalignment in terms of LR pixels (10 m); the lower the value, the better the spatial consistency.

| Band | Description         | Resolution (m) | L2A index | L1C index |
|------|---------------------|----------------|-----------|-----------|
| B01  | Coastal aerosol     | 60             | 0         | 0         |
| B02  | Blue                | 10             | 1         | 1         |
| B03  | Green               | 10             | 2         | 2         |
| B04  | Red                 | 10             | 3         | 3         |
| B05  | Vegetation red edge | 20             | 4         | 4         |
| B06  | Vegetation red edge | 20             | 5         | 5         |
| B07  | Vegetation red edge | 20             | 6         | 6         |
| B08  | NIR                 | 10             | 7         | 7         |
| B8A  | Narrow NIR          | 20             | 8         | 8         |
| B09  | Water vapor         | 60             | 9         | 9         |
| B10  | Cirrus              | 60             | -         | 10        |
| B11  | SWIR-I              | 20             | 10        | 11        |
| B12  | SWIR-II             | 20             | 11        | 12        |

### **NAIP (x4 scale factor)**

The National Agriculture Imagery Program (NAIP) dataset is a high-resolution aerial imagery dataset covering the continental United States. It consists of 2.5m NAIP imagery captured in the visible and near-infrared spectrum (RGBNIR) and all Sentinel-2 L1C and L2A bands. The dataset focuses on crop fields, forests, and bare soil areas.

```python
import opensr_test

naip = opensr_test.load("naip")
```

### **SPOT (x4 scale factor)**

The SPOT imagery was obtained from the Worldstat dataset.
The dataset consists of 2.5m SPOT imagery captured in the visible and near-infrared spectrum (RGBNIR) and all Sentinel-2 L1C and L2A bands. It focuses on **urban areas, crop fields, and bare soil areas**.

```python
import opensr_test

spot = opensr_test.load("spot")
```

### **Venµs (x2 scale factor)**

The Venµs images were obtained from the [**Sen2Venµs dataset**](https://zenodo.org/records/6514159). The dataset consists of 5m Venµs imagery captured in the visible and near-infrared spectrum (RGBNIR) and all Sentinel-2 L1C and L2A bands. The dataset focuses on **crop fields, forests, urban areas, and bare soil areas**.

```python
import opensr_test

venus = opensr_test.load("venus")
```

### **SPAIN CROPS (x4 scale factor)**

The SPAIN CROPS dataset consists of 2.5m aerial imagery captured in the visible and near-infrared spectrum (RGBNIR) by the Spanish National Geographic Institute (IGN). The dataset includes all Sentinel-2 L1C and L2A bands and focuses on **crop fields and forests**.

```python
import opensr_test

spain_crops = opensr_test.load("spain_crops")
```

### **SPAIN URBAN (x4 scale factor)**

The SPAIN URBAN dataset consists of 2.5m aerial imagery captured in the visible and near-infrared spectrum (RGBNIR) by the Spanish National Geographic Institute (IGN). The dataset includes all Sentinel-2 L1C and L2A bands and focuses on **urban areas and roads**.

```python
import opensr_test

spain_urban = opensr_test.load("spain_urban")
```

## **Deeper understanding**

Explore the [API](https://esaopensr.github.io/opensr-test/docs/API/config_pydantic.html) section for more details about personalizing your benchmark experiments.
## **Citation**

If you use `opensr-test` in your research, please cite our paper:

```
@article{aybar2024comprehensive,
  title={A Comprehensive Benchmark for Optical Remote Sensing Image Super-Resolution},
  author={Aybar, Cesar and Montero, David and Donike, Simon and Kalaitzis, Freddie and G{\'o}mez-Chova, Luis},
  journal={Authorea Preprints},
  year={2024},
  publisher={Authorea}
}
```

## **Acknowledgements**

This work was done with the support of the European Space Agency (ESA) under the project “Explainable AI: application to trustworthy super-resolution (OpenSR)”. Cesar Aybar acknowledges support from the National Council of Science, Technology, and Technological Innovation (CONCYTEC, Peru) through the “PROYECTOS DE INVESTIGACIÓN BÁSICA – 2023-01” program, contract number PE501083135-2023-PROCIENCIA. Luis Gómez-Chova acknowledges support from the Spanish Ministry of Science and Innovation (project PID2019-109026RB-I00 funded by MCIN/AEI/10.13039/501100011033).