---
license: cc-by-4.0
language:
- en
tags:
- image-to-image
- shadow-removal
- synthetic-dataset
- 3dfront
- objaverse
- aaai-2025
task_categories:
- image-to-image
size_categories:
- 10K<n<100K
---
# Dataset Card for OmniSR: Shadow Removal under Direct and Indirect Lighting

## Dataset Description
- Repository: [Your Hugging Face Dataset Link]
- Paper: [Link to Paper (e.g., arXiv or AAAI proceedings)]
- Curated by: Jiamin Xu, Zelong Li, Yuxin Zheng, Chenyu Huang, Renshu Gu, Gang Xu (Hangzhou Dianzi University); Weiwei Xu (Zhejiang University)
- Language(s): N/A (Image dataset)
- License: cc-by-4.0
### Dataset Summary
This is the official dataset for the paper "OmniSR: Shadow Removal under Direct and Indirect Lighting" (AAAI 2025). It is a large-scale synthetic image dataset specifically designed for studying shadow removal under both direct and indirect illumination conditions.
The dataset contains over 30,000 pairs of images. Each pair consists of:
- An image with shadows caused by direct lighting and indirect lighting.
- A corresponding shadow-free ground truth image.
It was rendered using the 3D-Front and Objaverse 3D model libraries, covering a wide variety of indoor scenes, object types, and lighting conditions.
## Uses

### Direct Use
The primary use of this dataset is for training and evaluating image shadow removal models, particularly those aiming to handle complex lighting scenarios involving both direct and indirect shadows.
### Out-of-Scope Use
As a purely synthetic dataset, models trained solely on it may not generalize perfectly to real-world photographs without additional fine-tuning or domain adaptation techniques.
## Dataset Structure

### Data Instances
Data is organized into folders. A typical data instance comprises three image files:
- `direct_shadow.png`
- `indirect_shadow.png`
- `shadow_free.png`
### Data Fields
- `direct_shadow`: Path to the image file containing shadows from direct light.
- `indirect_shadow`: Path to the image file containing shadows from indirect light (e.g., light bouncing off surfaces).
- `shadow_free`: Path to the corresponding ground-truth image with no shadows.
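Assuming a folder-per-instance layout with the three file names listed in this card (the exact directory structure is an assumption; adjust to the actual release), the instance triplets can be gathered with a short stdlib-only helper:

```python
from pathlib import Path

# File names as listed in the card; the folder-per-instance layout is an assumption.
FIELD_FILES = ("direct_shadow.png", "indirect_shadow.png", "shadow_free.png")

def collect_instances(root):
    """Return a list of dicts mapping field name -> file path, one per complete instance folder."""
    instances = []
    for folder in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        paths = {name[: -len(".png")]: folder / name for name in FIELD_FILES}
        # Skip folders that are missing any of the three files.
        if all(p.is_file() for p in paths.values()):
            instances.append(paths)
    return instances
```

The returned paths can then be opened with any image library (e.g., Pillow) for training or evaluation.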
### Data Splits
The full dataset contains over 30,000 image pairs. For specific details on training, validation, and test splits, please refer to the original paper or the split files included with the dataset release.
## Dataset Creation

### Source Data
The dataset was generated through a custom rendering pipeline using two major 3D asset sources:
- 3D-Front: Used for generating indoor scene layouts.
- Objaverse: Used to populate scenes with a diverse set of 3D objects.
### Annotations

All data are generated automatically: shadow states (present/absent) are controlled by the rendering engine's lighting setup, so no manual annotation is involved.
### Personal and Sensitive Information
This dataset consists entirely of synthetic, virtual scenes. It does not contain any portraits, personal information, or real-world sensitive data.
## Bias, Risks, and Limitations
- Synthetic Domain Gap: The primary limitation is its synthetic nature. Models may need adaptation to perform well on real-world images.
- Scene and Object Bias: The dataset's content is derived from 3D-Front and Objaverse, which may not represent the full diversity of real-world objects, scenes, and lighting conditions. This potential distribution bias should be considered when using the dataset.
### Recommendations
Users are encouraged to:
- Be aware of the synthetic-to-real domain gap.
- Use the dataset in conjunction with real-world shadow removal data if the target application is real-world photos.
- Evaluate models on diverse test sets to understand their generalization capabilities.
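For the last point, a simple per-image fidelity metric such as PSNR against the `shadow_free` ground truth is a common starting point (the paper may report additional metrics; this is a generic sketch, not the authors' evaluation protocol):

```python
import math

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length flat pixel sequences."""
    if len(pred) != len(target):
        raise ValueError("inputs must have the same length")
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

In practice the flat sequences would come from decoded image arrays (e.g., `numpy.asarray(Image.open(path)).ravel()`).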
## Citation
If you use this dataset in your research, please cite the following paper:
**BibTeX:**

```bibtex
@inproceedings{xu2024omnisr,
  title={OmniSR: Shadow Removal under Direct and Indirect Lighting},
  author={Xu, Jiamin and Li, Zelong and Zheng, Yuxin and Huang, Chenyu and Gu, Renshu and Xu, Weiwei and Xu, Gang},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2025}
}
```