---
license: cc-by-4.0
task_categories:
- image-segmentation
tags:
- open-vocabulary-segmentation
- zero-shot-segmentation
---

## Dataset Card for Segmentation in the Wild
### Dataset Description
Segmentation in the Wild (SegInW) is a computer vision challenge that aims to evaluate the transferability of pre-trained vision models. It proposes a new benchmark that assesses both the segmentation accuracy and transfer efficiency of models on a diverse set of downstream segmentation tasks. The challenge consists of 25 free, public segmentation datasets, crowd-sourced on roboflow.com, providing a wide range of visual data for model training and testing.

### Composition
The SegInW challenge brings together 25 diverse segmentation datasets, offering a comprehensive evaluation of model performance across various scenarios. These datasets cover a broad range of visual content.

### Data Instances
- Images: Visual data in the form of images; the visual content varies from dataset to dataset.
- Annotations: Manual annotations specifying regions of interest or providing referring phrases for language-based segmentation.
- Segmentation Masks: Pixel-level annotations that define the boundaries of objects or regions in the visual data.
- Metadata: Additional information about the data, such as collection sources, dates, and any relevant pre-processing steps.
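
The exact annotation schema is not fixed across the 25 datasets; the sketch below shows what a single instance-segmentation record could look like, assuming COCO-style JSON annotations (field names and values are illustrative assumptions, not the documented SegInW schema).

```python
# Illustrative sketch of one instance-segmentation record, assuming a
# COCO-style layout; field names and values are hypothetical examples.
example_instance = {
    "image": {
        "id": 17,
        "file_name": "example_0017.jpg",  # hypothetical file name
        "height": 480,
        "width": 640,
    },
    "annotations": [
        {
            "id": 3,
            "image_id": 17,
            "category_id": 1,  # index into the dataset-specific category list
            # polygon mask as a flat list of x, y coordinates (RLE is also common)
            "segmentation": [[130.5, 90.0, 310.2, 92.5, 305.0, 230.0, 128.0, 228.5]],
            "bbox": [128.0, 90.0, 182.2, 140.0],  # [x, y, width, height]
            "area": 18500.0,
            "iscrowd": 0,
        }
    ],
    # category names differ from dataset to dataset
    "categories": [{"id": 1, "name": "example-category"}],
}

print(len(example_instance["annotations"]), "annotation(s) for image", example_instance["image"]["id"])
```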
**Data Splits**
Each folder has train, train 10-shot, and validation splits.
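
As a rough sketch of how one might enumerate these splits locally, the snippet below assumes one folder per dataset with `train`, `train_10shot`, and `valid` subfolders; the root path and the exact split-folder names are assumptions, so check the repository structure for the names actually used.

```python
from pathlib import Path

# Hypothetical local copy of the SegInW data (path and split names are assumptions).
ROOT = Path("seginw")

for dataset_dir in sorted(p for p in ROOT.iterdir() if p.is_dir()):
    for split in ("train", "train_10shot", "valid"):
        split_dir = dataset_dir / split
        if not split_dir.exists():
            continue  # skip splits this dataset does not provide under these names
        # count image files in the split
        n_images = sum(
            1 for f in split_dir.rglob("*") if f.suffix.lower() in {".jpg", ".jpeg", ".png"}
        )
        print(f"{dataset_dir.name:30s} {split:12s} {n_images:5d} images")
```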
**Dataset Creation**
The SegInW challenge is a community effort, with the 25 datasets crowd-sourced and contributed by different researchers and organizations. The diversity of sources ensures a wide range of visual data and evaluation scenarios. The datasets were labeled on roboflow.com as part of the [X-Decoder](https://x-decoder-vl.github.io/) project.