
Dataset Card for VISION Datasets

Dataset Summary

The VISION Datasets are a collection of 14 industrial inspection datasets, designed to explore the unique challenges of vision-based industrial inspection. These datasets are carefully curated from Roboflow and cover a wide range of manufacturing processes, materials, and industries. To further enable precise defect segmentation, we annotate each dataset with polygon labels based on the provided bounding box labels.

Supported Tasks and Leaderboards

We currently host two prized challenges on the VISION Datasets:

  • The VISION Track 1 Challenge aims to evaluate solutions that can effectively learn with limited labeled data in combination with unlabeled data across diverse images from different industries and contexts.

  • The VISION Track 2 Challenge aims to challenge algorithmic solutions to generate synthetic data that will help improve model performance given only limited labeled data.

Please check out our workshop website and competition pages for further details.

Dataset Information

Datasets Overview

The VISION Datasets consist of the following 14 individual datasets:

  • Cable
  • Capacitor
  • Casting
  • Console
  • Cylinder
  • Electronics
  • Groove
  • Hemisphere
  • Lens
  • PCB_1
  • PCB_2
  • Ring
  • Screw
  • Wood

Data Splits

Each dataset contains three folders: train, val, and inference. The train and val folders contain the training and validation data, respectively. The inference folder contains both the testing data and the unused data for generating submissions to our evaluation platform. The _annotations.coco.json files contain the COCO format annotations for each dataset. We will release more information on the testing data as the competitions conclude.

Each dataset has the following structure:

├── dataset_name/
│   ├── train/
│   │   ├── _annotations.coco.json # COCO format annotation
│   │   ├── 000001.png # Images
│   │   ├── 000002.png
│   │   ├── ...
│   ├── val/
│   │   ├── _annotations.coco.json # COCO format annotation
│   │   ├── xxxxxx.png # Images
│   │   ├── ...
│   ├── inference/
│   │   ├── _annotations.coco.json # COCO format annotation with unlabeled image list only
│   │   ├── xxxxxx.png # Images
│   │   ├── ...
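The per-split `_annotations.coco.json` files follow the standard COCO layout (`images`, `annotations`, `categories`). A minimal sketch of parsing one split is shown below; the helper name `load_coco_split` and the demo values are illustrative, not part of the dataset's tooling.

```python
import json
import tempfile
from pathlib import Path

def load_coco_split(split_dir):
    """Parse a split's _annotations.coco.json and group annotations by image id."""
    with open(Path(split_dir) / "_annotations.coco.json") as f:
        coco = json.load(f)
    anns_by_image = {img["id"]: [] for img in coco["images"]}
    for ann in coco.get("annotations", []):
        anns_by_image[ann["image_id"]].append(ann)
    return coco["images"], anns_by_image

# Demo with a tiny synthetic annotation file (illustrative values only).
demo = {
    "images": [{"id": 1, "file_name": "000001.png", "width": 512, "height": 512}],
    "annotations": [{
        "id": 1, "image_id": 1, "category_id": 1,
        "bbox": [100, 100, 40, 30],  # COCO bbox: [x, y, width, height]
        "segmentation": [[100, 100, 140, 100, 140, 130, 100, 130]],  # polygon label
    }],
    "categories": [{"id": 1, "name": "defect"}],
}
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "_annotations.coco.json").write_text(json.dumps(demo))
    images, anns = load_coco_split(d)
```

Note that for the `inference` split, the annotation file lists images only, so `anns_by_image` would contain empty lists.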

Dataset Creation

Curation Rationale

Our primary goal is to encourage further alignment between academic research and production practices in vision-based industrial inspection. Because we aim to remain faithful to naturally occurring label challenges, and because it is difficult to distinguish unintentional labeling oversights from domain-specific judgments without the manufacturers' specification sheets, we refrain from modifying the original defect decisions. To enable precise defect detection even with existing label limitations, we provide refined segmentation masks for each defect indicated by the original bounding boxes.

Building Dataset Splits

To ensure the benchmark faithfully reflects the performance of algorithms, we need to minimize leakage across train, validation, and testing data. Because the original datasets are crowd-sourced, their splits are not guaranteed to be free of leakage. As a result, we designed a process to resplit the datasets with specific considerations for industrial defect detection.

Defect detection datasets have distinct characteristics, including but not limited to:

  • Stark contrast between large image size and small defect size
  • Highly aligned non-defective images that may look like duplicates but are necessary to represent the natural distribution and variation needed to properly assess the false detection rate

Naive deduplication with image-level embeddings or hashes would easily drown out small defects and treat distinct non-defective images as duplicates. Therefore, we deduplicate only images with identical byte contents and set the images without defect annotations aside.

For images with defect annotations, we aim to reduce leakage at the defect level. We train a self-supervised similarity model on the defect regions and define the similarity between two images as the maximum pairwise similarity between the defects on each image. Finally, we perform connected component analysis on the image similarity graph and randomly assign connected components to dataset splits in a stratified manner.

To discourage manual exploitation during the data competition, the discarded images are provided alongside the test split data as the inference data from which participants generate their submissions. However, testing performance is evaluated exclusively on the test split data. Further details will be provided in a forthcoming paper.
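The two mechanical steps of this pipeline, exact byte-level deduplication and grouping similar images into connected components, can be sketched as follows. This is only an illustration under simplifying assumptions: the similarity model is replaced by a precomputed edge list, and all function names are hypothetical.

```python
import hashlib
from collections import defaultdict

def byte_dedupe(items):
    """Keep one name per identical byte content (exact-duplicate removal)."""
    seen = {}
    for name, data in items:
        digest = hashlib.sha256(data).hexdigest()
        seen.setdefault(digest, name)  # first occurrence wins
    return list(seen.values())

def connected_components(n, edges):
    """Union-find over n image indices; edges link images with similar defects."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    groups = defaultdict(list)
    for i in range(n):
        groups[find(i)].append(i)
    return list(groups.values())

# Exact duplicates (a.png, b.png share bytes) collapse to one entry.
images = [("a.png", b"\x00\x01"), ("b.png", b"\x00\x01"), ("c.png", b"\x02")]
kept = byte_dedupe(images)

# Edges 0-1 and 1-2 chain into one component; image 3 stands alone.
comps = connected_components(4, [(0, 1), (1, 2)])
```

Assigning whole components (rather than individual images) to splits is what keeps near-duplicate defects from leaking across train, validation, and test.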

Additional Information

License

The provided polygon annotations are licensed under the CC BY-NC 4.0 License. All original dataset assets remain under their original dataset licenses.

Disclaimer

While we believe the terms of the original datasets permit our use and publication herein, we do not make any representations as to the license terms of the original dataset. Please follow the license terms of such datasets if you would like to use them.

Citation

If you use this dataset in your project or research, please cite our paper:

@article{vision-datasets,
  title         = {VISION Datasets: A Benchmark for Vision-based InduStrial InspectiON},
  author        = {Haoping Bai and Shancong Mou and Tatiana Likhomanenko and Ramazan Gokberk Cinbis and Oncel Tuzel and Ping Huang and Jiulong Shan and Jianjun Shi and Meng Cao},
  journal       = {arXiv preprint arXiv:2306.07890},
  year          = {2023},
}