---
license: mit
language:
- en
pretty_name: Hidden-Objects
size_categories:
- 10K<n<100K
task_categories:
- object-detection
tags:
- computer-vision
- diffusion-priors
- spatial-reasoning
configs:
- config_name: default
  data_files:
  - split: train
    path: ho_irany_train_rel_full.jsonl
  - split: test
    path: ho_irany_test_rel_full.jsonl
---
# Hidden-Objects

Image–object pairs with localized bounding boxes for learning realistic object placement in background scenes.

- Project page: https://hidden-objects.github.io/
- Backgrounds: Places365
## Schema

| Field | Type | Description |
|---|---|---|
| `entry_id` | int64 | Unique row identifier |
| `bg_path` | string | Relative path to background image in Places365 |
| `fg_class` | string | Foreground object category (e.g. `"bottle"`) |
| `bbox` | list | Normalized bounding box `[x, y, w, h]` in range 0–1 |
| `label` | int64 | 1 = positive, 0 = negative |
| `image_reward_score` | float64 | ImageReward quality score |
| `confidence` | float64 | GroundingDINO detection confidence |
| `source` | string | Origin tag of the annotation |
Sample:

```json
{
  "entry_id": 1,
  "bg_path": "data_large_standard/k/kitchen/00002986.jpg",
  "fg_class": "bottle",
  "bbox": [0.542969, 0.591797, 0.0625, 0.152344],
  "label": 1,
  "image_reward_score": -1.542461,
  "confidence": 0.388181,
  "source": "ho"
}
```
## Bounding Boxes

Bounding boxes are relative to a 512×512 center crop of the background image:

```python
# Normalized → pixel coordinates
x, y, w, h = [v * 512 for v in bbox]
```
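For losses or IoU computations that expect corner coordinates, the scaled box can be expanded one step further. A minimal sketch, assuming `[x, y]` denotes the top-left corner of the box (the card does not state this explicitly):

```python
def bbox_to_pixel_corners(bbox, size=512):
    """Convert a normalized [x, y, w, h] box (fractions of the size×size
    center crop) into pixel-space corners [x1, y1, x2, y2].

    Assumes (x, y) is the top-left corner of the box.
    """
    x, y, w, h = [v * size for v in bbox]
    return [x, y, x + w, y + h]

# The sample row above:
corners = bbox_to_pixel_corners([0.542969, 0.591797, 0.0625, 0.152344])
```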
## Usage

### Quick start

```python
from datasets import load_dataset

dataset = load_dataset("marco-schouten/hidden-objects")
print(dataset["train"][0])
```
### PyTorch Dataset

Requires Places365 backgrounds downloaded locally:

```shell
huggingface-cli login
```

```python
import torchvision.datasets as datasets

background_images = datasets.Places365(
    root="./data/places365", split="train-standard", small=False, download=True
)
```

```python
import os

import torch
import torchvision.transforms as T
from datasets import load_dataset
from PIL import Image
from torch.utils.data import Dataset


class HiddenObjectsDataset(Dataset):
    def __init__(self, places_root, split="train"):
        self.data = load_dataset("marco-schouten/hidden-objects", split=split)
        self.places_root = places_root
        # Match the 512×512 center crop the bounding boxes were annotated against.
        self.transform = T.Compose([T.Resize(512), T.CenterCrop(512), T.ToTensor()])

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        item = self.data[idx]
        image = self.transform(
            Image.open(os.path.join(self.places_root, item["bg_path"])).convert("RGB")
        )
        return {
            "entry_id": item["entry_id"],
            "image": image,
            # Scale normalized [x, y, w, h] to pixel coordinates of the crop.
            "bbox": torch.tensor(item["bbox"]) * 512,
            "label": item["label"],
            "class": item["fg_class"],
            "image_reward_score": item["image_reward_score"],
            "confidence": item["confidence"],
        }


# Usage
hidden_object_dataset = HiddenObjectsDataset(places_root="./data/places365")
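Because every item has a fixed-size `image` and a length-4 `bbox`, the default PyTorch collate can batch the dicts directly. A minimal sketch using a toy stand-in with the same per-item structure (so it runs without Places365 on disk); `HiddenObjectsDataset` would be passed to `DataLoader` in the same way:

```python
import torch
from torch.utils.data import DataLoader, Dataset


class ToyHiddenObjects(Dataset):
    """Stand-in returning the same dict shape as HiddenObjectsDataset."""

    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return {
            "image": torch.zeros(3, 512, 512),
            "bbox": torch.tensor([0.5, 0.5, 0.1, 0.2]) * 512,
            "label": 1,
        }


# Default collate stacks same-shaped tensors and promotes ints to a tensor.
loader = DataLoader(ToyHiddenObjects(), batch_size=2, shuffle=True)
batch = next(iter(loader))
```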