anindyamondal committed d6ed572 (1 parent: 68a7457): Update README.md

README.md CHANGED
@@ -32,57 +32,21 @@ Omnicount-191 is a dataset that caters to a broad spectrum of visual categories
 
 Object Counting
 
-[More Information Needed]
-
 ### Out-of-Scope Use
 
 Visual Question Answering (VQA), Object Detection (OD)
 
-## Dataset Structure
-
-<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
-[More Information Needed]
-
-## Dataset Creation
-
-### Curation Rationale
-
-<!-- Motivation for the creation of this dataset. -->
-
-[More Information Needed]
-
-### Source Data
-
-<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
 #### Data Collection and Processing
 
-<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
-[More Information Needed]
-
-### Annotations [optional]
-
-<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
-<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-<!-- This section describes the people or systems who created the annotations. -->
-
-[More Information Needed]
+The data collection process for OmniCount-191 involved a team of 13 members who manually curated images from the web, released under Creative Commons (CC) licenses. The images were sourced using relevant keywords such as “Aerial Images”, “Supermarket Shelf”, “Household Fruits”, and “Many Birds and Animals”. Initially, 40,000 images were considered, from which 30,230 images were selected based on the following criteria:
+
+1. **Object instances**: Each image must contain at least five object instances, to challenge object enumeration in complex scenarios;
+2. **Image quality**: High-resolution images were selected to ensure clear object identification and counting;
+3. **Severe occlusion**: Images with significant occlusion were excluded to maintain accuracy in object counting;
+4. **Object dimensions**: Images with objects too small or too distant for accurate counting or annotation were removed, ensuring all objects are adequately sized for analysis.
+
+The selected images were annotated using the [Labelbox](https://labelbox.com) annotation platform.
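The four selection criteria above can be expressed as a simple metadata filter. This is an illustrative sketch only: the `ImageMeta` fields and every numeric threshold are assumptions, since the card does not publish exact cutoffs.

```python
from dataclasses import dataclass

@dataclass
class ImageMeta:
    width: int
    height: int
    num_instances: int
    occluded_fraction: float  # fraction of objects that are heavily occluded
    min_object_area: int      # pixel area of the smallest object box

# Hypothetical thresholds -- the card does not state exact cutoffs.
MIN_INSTANCES = 5        # criterion 1: at least five object instances
MIN_PIXELS = 480 * 360   # criterion 2: assumed floor for "high resolution"
MAX_OCCLUSION = 0.5      # criterion 3: assumed cutoff for severe occlusion
MIN_OBJECT_AREA = 100    # criterion 4: assumed minimum object size

def keep_image(meta: ImageMeta) -> bool:
    """Return True if the image satisfies all four selection criteria."""
    return (
        meta.num_instances >= MIN_INSTANCES
        and meta.width * meta.height >= MIN_PIXELS
        and meta.occluded_fraction <= MAX_OCCLUSION
        and meta.min_object_area >= MIN_OBJECT_AREA
    )

candidates = [
    ImageMeta(700, 580, 12, 0.1, 400),  # passes every criterion
    ImageMeta(700, 580, 3, 0.1, 400),   # rejected: fewer than five instances
]
print(sum(keep_image(m) for m in candidates))  # 1
```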
 
+### Statistics
+
+The OmniCount-191 benchmark presents images with small, densely packed objects from multiple classes, reflecting real-world object counting scenarios. The dataset comprises 30,230 images with an average resolution of 700 × 580 pixels. Each image contains an average of 10 objects, for a total of 302,300 objects, with individual images ranging from 1 to 160 objects. To ensure diversity, the dataset is split into training and testing sets with no overlap in object categories: 118 categories for training and 73 for testing, corresponding to a 60%-40% split. This yields 26,978 images for training and 3,252 for testing.
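The headline figures above are internally consistent, which is easy to verify with a quick sanity check that uses only numbers taken directly from this card:

```python
# Figures taken directly from the dataset card.
total_images = 30_230
train_images, test_images = 26_978, 3_252
train_categories, test_categories = 118, 73
total_objects = 302_300

# Image split and category count add up as stated.
assert train_images + test_images == total_images
assert train_categories + test_categories == 191   # matches "OmniCount-191"

print(total_objects / total_images)         # 10.0 objects per image on average
print(round(100 * train_categories / 191))  # 62 -> roughly the stated 60%-40% category split
```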
 
 #### Personal and Sensitive Information
 