anindyamondal committed
Commit 30fbf4e
Parent(s): 9af33e0
Update README.md

README.md CHANGED
@@ -50,25 +50,11 @@ The selected images were annotated using the [Labelbox](https://labelbox.com) an
 ### Statistics
 The OmniCount-191 benchmark presents images with small, densely packed objects from multiple classes, reflecting real-world object counting scenarios. This dataset encompasses 30,230 images, with dimensions averaging 700 × 580 pixels. Each image contains an average of 10 objects, totaling 302,300 objects, with individual images ranging from 1 to 160 objects. To ensure diversity, the dataset is split into training and testing sets, with no overlap in object categories – 118 categories for training and 73 for testing, corresponding to a 60%-40% split. This results in 26,978 images for training and 3,252 for testing.
 
-
-
-<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
-
-
-## Bias, Risks, and Limitations
-
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
-[More Information Needed]
-
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
-## Citation [optional]
+### Splits
+We have prepared dedicated splits within the OmniCount-191 dataset to facilitate the assessment of object counting models under zero-shot and few-shot learning conditions. Please refer to the [technical report](https://arxiv.org/pdf/2403.05435.pdf) (Sec. 9.1, 9.2) for more details.
+
+
+## Citation
 
 <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
 
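
For readers who want to check the split sizes quoted in the card, below is a minimal sketch of how the splits could be loaded and counted with the Hugging Face `datasets` library. The repo id `anindyamondal/OmniCount-191` and the `train`/`test` split names are assumptions for illustration only; consult the dataset card and the technical report (Sec. 9.1, 9.2) for the actual zero-shot and few-shot split definitions.

```python
# Minimal sketch: inspect the OmniCount-191 splits with the `datasets` library.
# The repo id and split names are assumptions, not confirmed by this card.
from datasets import load_dataset

REPO_ID = "anindyamondal/OmniCount-191"  # hypothetical repo id

# Typically returns a DatasetDict, e.g. {"train": ..., "test": ...}
splits = load_dataset(REPO_ID)

for name, subset in splits.items():
    print(f"{name}: {len(subset)} images")

# The card above quotes 26,978 training images and 3,252 testing images,
# with no category overlap between the splits (118 vs. 73 categories).
```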