---
license: cc-by-4.0
task_categories:
  - image-segmentation
  - keypoint-detection
  - object-detection
language:
  - en
tags:
  - document-detection
  - corner-detection
  - document-scanner
  - quadrilateral-detection
  - perspective-correction
  - computer-vision
size_categories:
  - 10K<n<100K
---

# DocCornerDataset

A comprehensive dataset for document corner detection and quadrilateral localization. This dataset is designed for training models that detect the four corners of documents in natural images, enabling applications like document scanning, perspective correction, and automatic document cropping.

## Dataset Description

DocCornerDataset contains 27,860 images with precise corner annotations:

- 23,496 training samples
- 4,364 validation samples
- Includes both positive samples (with documents) and negative samples (without documents)

### Key Features

- **High-quality annotations**: 4-corner coordinates (TL, TR, BR, BL) in normalized [0-1] format
- **Diverse sources**: Aggregated from multiple public datasets covering various document types
- **Negative samples**: Non-document images to reduce false positives
- **Pre-split data**: Ready-to-use train/validation splits
- **Parquet format**: Efficient storage with embedded images

## Dataset Structure

The dataset is stored in Parquet format with the following columns:

| Column | Type | Description |
|--------|------|-------------|
| `image_bytes` | bytes | Raw JPEG image data |
| `filename` | string | Original filename |
| `has_document` | bool | `True` if the image contains a document |
| `x0`, `y0` | float32 | Top-left corner (normalized 0-1) |
| `x1`, `y1` | float32 | Top-right corner (normalized 0-1) |
| `x2`, `y2` | float32 | Bottom-right corner (normalized 0-1) |
| `x3`, `y3` | float32 | Bottom-left corner (normalized 0-1) |
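
Because the corner columns are normalized, they must be scaled by the image width and height before any pixel-space use. A minimal visualization sketch with Pillow, assuming `row` is a sample loaded as in the examples under "Loading the Dataset" below (`draw_corners` is an illustrative helper, not part of the dataset):

```python
import io

from PIL import Image, ImageDraw

def draw_corners(row):
    """Decode a sample's JPEG bytes and outline its annotated quadrilateral."""
    image = Image.open(io.BytesIO(row["image_bytes"])).convert("RGB")
    w, h = image.size
    # Corners are stored normalized to [0, 1]; scale by image size for pixels
    pts = [(row["x0"] * w, row["y0"] * h),  # top-left
           (row["x1"] * w, row["y1"] * h),  # top-right
           (row["x2"] * w, row["y2"] * h),  # bottom-right
           (row["x3"] * w, row["y3"] * h)]  # bottom-left
    ImageDraw.Draw(image).line(pts + [pts[0]], fill=(255, 0, 0), width=3)
    return image
```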

## Source Datasets

This dataset aggregates and re-annotates images from multiple public sources:

| Source Dataset | Samples | Description |
|----------------|---------|-------------|
| MIDV-500 | ~9,500 | Mobile Identity Document Video dataset |
| AutoCapture | ~8,000 | Auto-captured document images |
| MIDV-2019 | ~1,400 | Extended mobile ID document dataset |
| SmartDoc-QA | ~1,400 | Document images for QA tasks |
| Sample Dataset | ~1,000 | Mixed document samples |
| Four Corners Detection | ~950 | Corner-detection-focused dataset |
| Document Segmentation | ~950 | Curated segmentation samples |
| ReceiptExtractor | ~620 | Receipt and ticket images |
| Receipt Instance Segmentation | ~200 | Receipt instance annotations |
| CORD v2 | ~80 | Consolidated Receipt Dataset |
| Negative Samples | ~4,300 | Non-document background images |

## Loading the Dataset

### Using PyArrow/Pandas

```python
import io

import pandas as pd
from PIL import Image

# Load one training chunk straight from the Hub
# (pandas reads hf:// paths via huggingface_hub's fsspec integration)
train_df = pd.read_parquet("hf://datasets/mapo80/DocCornerDataset/data/train_chunk000.parquet")

# View a sample
sample = train_df.iloc[0]
image = Image.open(io.BytesIO(sample['image_bytes']))
corners = [sample['x0'], sample['y0'], sample['x1'], sample['y1'],
           sample['x2'], sample['y2'], sample['x3'], sample['y3']]
print(f"Filename: {sample['filename']}")
print(f"Has document: {sample['has_document']}")
print(f"Corners: {corners}")
image.show()
```

### Using Hugging Face Datasets

```python
import io

from datasets import load_dataset
from PIL import Image

# Load the train/validation splits from the Hub
dataset = load_dataset("mapo80/DocCornerDataset", data_files={
    "train": "data/train_chunk*.parquet",
    "validation": "data/val_chunk*.parquet"
})

# View a sample
sample = dataset["train"][0]
image = Image.open(io.BytesIO(sample['image_bytes']))
print(f"Filename: {sample['filename']}")
print(f"Corners: x0={sample['x0']:.3f}, y0={sample['y0']:.3f}, ...")
```

### Using PyTorch DataLoader

```python
import io

import pyarrow.parquet as pq
import torch
import torchvision.transforms as T
from PIL import Image
from torch.utils.data import DataLoader, Dataset

class DocCornerDataset(Dataset):
    """Serves (image, corners, has_document) tuples from Parquet chunks."""

    def __init__(self, parquet_files, transform=None):
        # Read all chunks into one in-memory DataFrame
        self.data = pq.ParquetDataset(parquet_files).read().to_pandas()
        self.transform = transform or T.Compose([
            T.Resize((224, 224)),
            T.ToTensor(),
            T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        ])

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        row = self.data.iloc[idx]
        image = Image.open(io.BytesIO(row['image_bytes'])).convert('RGB')
        image = self.transform(image)

        # Eight regression targets: (x, y) for TL, TR, BR, BL
        corners = torch.tensor([
            row['x0'], row['y0'], row['x1'], row['y1'],
            row['x2'], row['y2'], row['x3'], row['y3']
        ], dtype=torch.float32)

        # Binary document-presence label
        has_doc = torch.tensor(row['has_document'], dtype=torch.float32)

        return image, corners, has_doc

# Usage
train_files = ["data/train_chunk000.parquet", "data/train_chunk001.parquet", ...]
dataset = DocCornerDataset(train_files)
loader = DataLoader(dataset, batch_size=32, shuffle=True)
```
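
As a quick sanity check (once the `...` in `train_files` is replaced with the real chunk paths), pull one batch and inspect the tensor shapes:

```python
# Fetch a single batch and verify shapes
images, corners, has_doc = next(iter(loader))
print(images.shape)   # torch.Size([32, 3, 224, 224])
print(corners.shape)  # torch.Size([32, 8])
print(has_doc.shape)  # torch.Size([32])
```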

## Use Cases

- **Document Corner Detection**: Train models to localize document corners
- **Document Scanning Apps**: Build automatic document capture features
- **Perspective Correction**: Detect quadrilaterals for perspective transformation (see the sketch below)
- **Document Segmentation**: Segment documents from background
- **OCR Preprocessing**: Improve OCR accuracy with proper document alignment
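
To make the perspective-correction use case concrete: the four annotated corners feed directly into OpenCV's `cv2.getPerspectiveTransform`. A minimal sketch, where `rectify` and the 640x900 output size are illustrative assumptions rather than anything prescribed by the dataset:

```python
import cv2
import numpy as np

def rectify(image_bgr, corners_norm, out_w=640, out_h=900):
    """Warp the annotated quadrilateral to an axis-aligned rectangle.

    corners_norm: (x0, y0, ..., x3, y3) in TL, TR, BR, BL order,
    normalized to [0, 1] as stored in the dataset.
    """
    h, w = image_bgr.shape[:2]
    src = np.array(corners_norm, dtype=np.float32).reshape(4, 2)
    src *= np.array([w, h], dtype=np.float32)  # denormalize to pixels
    dst = np.array([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]],
                   dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image_bgr, M, (out_w, out_h))
```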

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{doccornerdataset2024,
  title={DocCornerDataset: A Comprehensive Dataset for Document Corner Detection},
  author={mapo80},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/mapo80/DocCornerDataset}
}
```

### Source Dataset Citations

Please also consider citing the original source datasets:

- **MIDV-500 / MIDV-2019**: Bulatov et al., "MIDV-500: A Dataset for Identity Documents Analysis and Recognition on Mobile Devices in Video Stream"
- **SmartDoc**: Burie et al., "ICDAR 2015 Competition on Smartphone Document Capture and OCR"
- **CORD**: Park et al., "CORD: A Consolidated Receipt Dataset for Post-OCR Parsing"

## License

This dataset is released under the CC-BY-4.0 license. Please respect the licenses of the original source datasets when using this data.

## Acknowledgments

This dataset was created by aggregating and re-annotating images from multiple public document datasets. We thank the creators of the original datasets for making their data publicly available.