---
language:
- en
tags:
- counting
- object-counting
---
|
|
|
![OmniCount-191](https://raw.githubusercontent.com/mondalanindya/OmniCount/main/assets/figs/omnicount191.png)
|
# Dataset Card for OmniCount-191
|
|
|
OmniCount-191 is a first-of-its-kind dataset for multi-label object counting, annotated with points, bounding boxes, and VQA labels.
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
OmniCount-191 covers a broad spectrum of visual categories, with multiple instances and multiple classes per image. Existing counting datasets focus on single object categories such as humans and vehicles, and thus fall short for multi-label object counting; multi-class datasets like MS COCO are of limited use for counting because objects appear sparsely. To address this gap, we created a new dataset of 30,230 images spanning 191 diverse categories, including kitchen utensils, office supplies, vehicles, and animals. With per-image instance counts ranging from 1 to 160 (an average of 10), it bridges the existing void and establishes a benchmark for assessing counting models in varied scenarios.
|
|
|
|
|
|
|
- **Curated by:** [Anindya Mondal](https://mondalanindya.github.io), [Sauradip Nag](http://sauradip.github.io/), [Xiatian Zhu](https://surrey-uplab.github.io), [Anjan Dutta](https://www.surrey.ac.uk/people/anjan-dutta) |
|
- **License:** OpenRAIL
|
|
|
### Dataset Sources |
|
|
|
- **Paper:** [OmniCount: Multi-label Object Counting with Semantic-Geometric Priors](https://arxiv.org/pdf/2403.05435.pdf)
|
- **Demo:** [https://mondalanindya.github.io/OmniCount/](https://mondalanindya.github.io/OmniCount/)
|
|
|
## Uses |
|
|
|
|
|
### Direct Use |
|
|
|
Multi-label object counting: estimating per-category instance counts in an image.
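
A minimal usage sketch follows. This is not an official loader: the repository id `mondalanindya/OmniCount-191` and the field names `points` and `labels` are assumptions that should be checked against the actual schema.

```python
# Hypothetical loading sketch; the repo id and field names are assumptions.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("mondalanindya/OmniCount-191", split="train")  # hypothetical repo id

sample = ds[0]
points = sample["points"]  # assumed: one (x, y) point per object instance
labels = sample["labels"]  # assumed: category label per point

# Multi-label counting target: instances per category in this image.
counts = Counter(labels)
for category, count in counts.most_common():
    print(f"{category}: {count}")
```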
|
|
|
### Out-of-Scope Use |
|
|
|
Visual Question Answering (VQA), Object Detection (OD) |
|
|
|
### Data Collection and Processing
|
|
|
The data collection process for OmniCount-191 involved a team of 13 members who manually curated web images released under Creative Commons (CC) licenses. The images were sourced using relevant keywords such as “Aerial Images”, “Supermarket Shelf”, “Household Fruits”, and “Many Birds and Animals”. Initially, 40,000 images were considered, from which 30,230 images were selected based on the following criteria:
|
1. **Object instances**: Each image must contain at least five object instances, aiming to challenge object enumeration in complex scenarios; |
|
2. **Image quality**: High-resolution images were selected to ensure clear object identification and counting; |
|
3. **Severe occlusion**: Images with significant occlusion were excluded to maintain accuracy in object counting;
|
4. **Object dimensions**: Images with objects too small or too distant for accurate counting or annotation were removed, ensuring all objects are adequately sized for analysis. |
|
The selected images were annotated using the [Labelbox](https://labelbox.com) annotation platform. |
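
For illustration, the four selection criteria can be expressed as a programmatic filter. The sketch below is illustrative only: the thresholds and the helper signature are assumptions, not the curation pipeline the team actually used.

```python
# Illustrative filter mirroring the four selection criteria above.
# All thresholds are assumed values, not the ones used during curation.
MIN_INSTANCES = 5            # criterion 1: at least five object instances
MIN_RESOLUTION = (512, 512)  # criterion 2: assumed "high resolution" cutoff
MAX_OCCLUSION = 0.5          # criterion 3: assumed per-object occlusion limit
MIN_BOX_AREA = 16 * 16       # criterion 4: assumed minimum object size (px)

def keep_image(width, height, boxes, occlusion_scores):
    """Return True if an image passes all four selection criteria.

    boxes: list of (x1, y1, x2, y2); occlusion_scores: one score in [0, 1]
    per object (both hypothetical annotation formats).
    """
    if len(boxes) < MIN_INSTANCES:
        return False
    if width < MIN_RESOLUTION[0] or height < MIN_RESOLUTION[1]:
        return False
    if any(s > MAX_OCCLUSION for s in occlusion_scores):
        return False
    if any((x2 - x1) * (y2 - y1) < MIN_BOX_AREA for x1, y1, x2, y2 in boxes):
        return False
    return True
```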
|
|
|
### Statistics |
|
The OmniCount-191 benchmark presents images with small, densely packed objects from multiple classes, reflecting real-world object counting scenarios. This dataset encompasses 30,230 images, with dimensions averaging 700 × 580 pixels. Each image contains an average of 10 objects, totaling 302,300 objects, with individual images ranging from 1 to 160 objects. To ensure diversity, the dataset is split into training and testing sets with no overlap in object categories: 118 categories for training and 73 for testing, corresponding to a roughly 60%-40% split. This results in 26,978 images for training and 3,252 for testing.
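
These statistics can be re-derived from the annotations along the following lines; as in the loading sketch above, the repository id and the `points` field are assumptions.

```python
# Sketch for re-deriving per-split statistics; field names are assumptions.
from datasets import load_dataset

ds = load_dataset("mondalanindya/OmniCount-191")  # hypothetical repo id

for split_name, split in ds.items():
    per_image = [len(sample["points"]) for sample in split]  # objects per image
    total = sum(per_image)
    print(f"{split_name}: {len(per_image)} images, {total} objects, "
          f"avg {total / len(per_image):.1f}, min {min(per_image)}, "
          f"max {max(per_image)}")
```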
|
|
|
### Splits |
|
We have prepared dedicated splits within the OmniCount-191 dataset to facilitate the assessment of object counting models under zero-shot and few-shot learning conditions. Please refer to the [technical report](https://arxiv.org/pdf/2403.05435.pdf) (Sec. 9.1, 9.2) for more details.
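
As a rough illustration of the few-shot setting, the sketch below draws a k-shot support set per test category. The split name, field names, and sampling scheme are assumptions; the official protocol is the one defined in the technical report.

```python
# Hypothetical k-shot support-set sampling for few-shot evaluation.
import random
from collections import defaultdict

from datasets import load_dataset

K = 3  # exemplars per category (assumed)
test = load_dataset("mondalanindya/OmniCount-191", split="test")  # hypothetical

# Index test images by the categories they contain ("labels" is assumed).
by_category = defaultdict(list)
for idx, sample in enumerate(test):
    for label in set(sample["labels"]):
        by_category[label].append(idx)

# Sample up to K exemplar images per category with a fixed seed.
rng = random.Random(0)
support = {cat: rng.sample(idxs, min(K, len(idxs)))
           for cat, idxs in by_category.items()}
```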
|
|
|
|
|
## Citation |
|
|
|
|
|
|
**BibTeX:** |
|
|
|
```
@article{mondal2024omnicount,
  title={OmniCount: Multi-label Object Counting with Semantic-Geometric Priors},
  author={Mondal, Anindya and Nag, Sauradip and Zhu, Xiatian and Dutta, Anjan},
  journal={arXiv preprint arXiv:2403.05435},
  year={2024}
}
```
|
|
|
|
|
## Dataset Card Authors |
|
|
|
[Anindya Mondal](https://mondalanindya.github.io), [Sauradip Nag](http://sauradip.github.io/), [Xiatian Zhu](https://surrey-uplab.github.io), [Anjan Dutta](https://www.surrey.ac.uk/people/anjan-dutta) |
|
|
|
## Dataset Card Contact |
|
|
|
{a[dot]mondal, s[dot]nag, xiatian[dot]zhu, anjan[dot]dutta}[at]surrey[dot]ac[dot]uk |
|
|
|
## License |
|
Object counting has legitimate commercial applications in urban planning, event logistics, and consumer behavior analysis. The same technology, however, also enables human surveillance, which bad actors may misuse, whether intentionally or not. Any downstream deployment of our research that monitors individuals without proper legal safeguards and ethical constraints therefore deserves scrutiny. To mitigate foreseeable misuse and uphold privacy and civil liberties, we release all source code under the Open RAIL-S License, which expressly prohibits exploitative applications through robust contractual obligations and liabilities.