---
license: other
task_categories:
- image-classification
language:
- en
tags:
- occlusion
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype:
      class_label:
        names:
          '0': banana
          '1': baseball
          '2': cowboy hat
          '3': cup
          '4': dumbbell
          '5': hammer
          '6': laptop
          '7': microwave
          '8': mouse
          '9': orange
          '10': pillow
          '11': plate
          '12': screwdriver
          '13': skillet
          '14': spatula
          '15': vase
  splits:
  - name: ROD
    num_bytes: 3306212413
    num_examples: 1231
  download_size: 3285137456
  dataset_size: 3306212413
---
# Realistic Occlusion Dataset (ROD)
The Realistic Occlusion Dataset (ROD) is the product of a meticulous collection protocol covering 40+ distinct objects from 16 classes: banana, baseball, cowboy hat, cup, dumbbell, hammer, laptop, microwave, mouse, orange, pillow, plate, screwdriver, skillet, spatula, and vase. Images are taken in a bright room with soft, natural light. All objects are captured on a brown wooden table against a solid-colored wall. An iPhone 13 Pro ultra-wide camera mounted on a tripod captures images at an elevation of approximately 90 degrees and a distance of 1 meter from the object.

Occluders are wooden blocks or square pieces of cardboard, painted red or blue. The occluder is placed between the camera and the main object, and its x-axis position is varied so that it sweeps from the left edge of the frame to the right. Each object is measured so that the occluder's step size can be divided into equal increments; in total, 1 clean image and 12 occluded images are captured per object.
ROD was created to test model robustness to occlusion in *Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing* (Lee et al., 2023).
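## Usage

As a minimal quick-start sketch, the dataset can be loaded with the 🤗 `datasets` library. The repository ID below is a placeholder, not the dataset's confirmed Hub path; substitute the actual one.

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with this dataset's actual Hub path.
ds = load_dataset("user/realistic-occlusion-dataset", split="ROD")

example = ds[0]
image = example["image"]   # decoded PIL.Image.Image
label = example["label"]   # integer class index in [0, 15]

# Recover the human-readable class name from the ClassLabel feature,
# e.g. 0 -> "banana".
class_name = ds.features["label"].int2str(label)
print(class_name, image.size)
```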
## Citation
```bibtex
@misc{lee2023hardwiring,
      title={Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing},
      author={Ariel N. Lee and Sarah Adel Bargal and Janavi Kasera and Stan Sclaroff and Kate Saenko and Nataniel Ruiz},
      year={2023},
      eprint={2306.17848},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```