## Segmented ImageNet-1K Subset
A subset of ImageNet-1K that has instance segmentation annotations (classes, boxes, and masks), originally intended for use by the [ViT Prisma Library](https://github.com/soniajoseph/ViT-Prisma).
The annotations were autogenerated by [Grounded Segment Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything).
The dataset contains 12,000 images in total: 10,000 from the ImageNet-1K train split and 1,000 each from the test and validation splits.
### Organization
Images are organized in the same structure as ImageNet-1K:
```
images/
    train_images/
    val_images/
    test_images/
```
For train and val images, the ImageNet class is encoded in the filename as a WordNet synset ID (e.g. `ILSVRC2012_val_00000025_n01616318.JPEG` belongs to synset `n01616318`), as in the sketch below. See [imagenet-1k classes](https://huggingface.co/datasets/imagenet-1k/blob/main/classes.py) for the synset-to-label mapping.
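For example, a minimal sketch for extracting the synset ID (the helper name is ours, not part of the dataset):
```python
import re

def synset_from_filename(filename):
    """Hypothetical helper: pull the WordNet synset ID out of a filename."""
    match = re.search(r'n\d{8}', filename)
    return match.group(0) if match else None

synset_from_filename('ILSVRC2012_val_00000025_n01616318.JPEG')  # -> 'n01616318'
```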
Masks are stored in a similar manner:
```
masks/
    train_masks/
    val_masks/
    test_masks/
```
Finally, `train.json`, `val.json`, and `test.json` store the box, label, score, and mask-path information for each image:
```json
{
  "image": "images/val_images/ILSVRC2012_val_00000025_n01616318.JPEG",
  "scores": [0.5, 0.44, 0.43, 0.28],
  "boxes": [[149, 117, 400, 347], [2, 2, 498, 497], [148, 115, 401, 349], [2, 2, 498, 497]],
  "labels": ["bird", "dirt field", "vulture", "land"],
  "masks": [
    "masks/val_masks/ILSVRC2012_val_00000025_n01616318_00.png",
    "masks/val_masks/ILSVRC2012_val_00000025_n01616318_01.png",
    "masks/val_masks/ILSVRC2012_val_00000025_n01616318_02.png",
    "masks/val_masks/ILSVRC2012_val_00000025_n01616318_03.png"
  ]
}
```
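To inspect a single annotation, a minimal sketch (assuming each JSON file is a list of such entries and that paths are resolved relative to the dataset root):
```python
import json
import numpy as np
from PIL import Image

with open('val.json') as f:
    annotations = json.load(f)  # assumed: a list of entries as shown above

entry = annotations[0]
image = Image.open(entry['image']).convert('RGB')
mask = np.array(Image.open(entry['masks'][0])) > 0  # binary mask for the first detection
print(entry['labels'][0], entry['scores'][0], entry['boxes'][0])
print(image.size, mask.shape)
```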
You can use the following dataloader to derive patch-level labels (e.g. for a Vision Transformer); the patch size is a hyperparameter.
```python
import numpy as np
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class PatchDataset(Dataset):
    def __init__(self, dataset, patch_size=16, width=224, height=224):
        """
        dataset: a list of dictionaries, one per image, as stored in
                 train.json / val.json / test.json
        """
        self.dataset = dataset
        self.transform = transforms.Compose([
            transforms.Resize((height, width)),  # torchvision expects (h, w)
            transforms.ToTensor(),               # convert the image to a tensor
        ])
        self.patch_size = patch_size
        self.width = width
        self.height = height

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        item = self.dataset[idx]
        # The JSON stores paths relative to the dataset root, so load from disk
        image = self.transform(Image.open(item['image']).convert('RGB'))
        masks = [Image.open(path) for path in item['masks']]
        labels = item['labels']  # labels are aligned with masks

        # One (initially empty) label list per patch of the resized image
        num_patches = self.width // self.patch_size
        label_array = [[[] for _ in range(num_patches)] for _ in range(num_patches)]
        for mask, label in zip(masks, labels):
            # Resize the mask to the image size, then reduce it to patch resolution
            mask = mask.resize((self.width, self.height))  # PIL expects (w, h)
            mask_array = np.array(mask) > 0
            reduced_mask = self.reduce_mask(mask_array)
            # A patch inherits every label whose mask touches it
            for i in range(num_patches):
                for j in range(num_patches):
                    if reduced_mask[i, j]:
                        label_array[i][j].append(label)
        # label_array is a ragged nested list; convert it as needed for tensor ops
        return image, label_array

    def reduce_mask(self, mask):
        """
        Reduce the mask to patch resolution: a patch is True if it contains
        at least one True pixel.
        """
        new_h = mask.shape[0] // self.patch_size
        new_w = mask.shape[1] // self.patch_size
        reduced_mask = np.zeros((new_h, new_w), dtype=bool)
        for i in range(new_h):
            for j in range(new_w):
                patch = mask[i*self.patch_size:(i+1)*self.patch_size,
                             j*self.patch_size:(j+1)*self.patch_size]
                reduced_mask[i, j] = np.any(patch)
        return reduced_mask
```
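A minimal usage sketch (assuming `val.json` has been downloaded alongside the `images/` and `masks/` directories):
```python
import json

with open('val.json') as f:
    val_annotations = json.load(f)  # assumed: a list of entries as shown above

dataset = PatchDataset(val_annotations, patch_size=16)
image, label_array = dataset[0]
print(image.shape)        # torch.Size([3, 224, 224])
print(label_array[0][0])  # labels whose masks cover the top-left 16x16 patch
```
Note that `label_array` is a ragged nested list, so batching with a `torch.utils.data.DataLoader` requires a custom `collate_fn`.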
## Citation
Please consider citing this dataset if you use it in your research:
```bibtex
@misc{segmented_imagenet1k_subset_2024,
  author  = {ViT-Prisma Contributors},
  title   = {Segmented ImageNet-1k Subset},
  url     = {https://huggingface.co/datasets/Prisma-Multimodal/segmented-imagenet1k-subset},
  version = {1.0.0},
  date    = {2024-04-02},
}
```
Grounded Segment Anything and ImageNet can be cited as follows:
```bibtex
@software{grounded_segment_anything,
  author  = {Grounded-SAM Contributors},
  title   = {Grounded-Segment-Anything},
  url     = {https://github.com/IDEA-Research/Grounded-Segment-Anything},
  version = {1.2.0},
  date    = {2023-04-06},
  license = {Apache-2.0},
  message = {If you use this software, please cite it as below.}
}
```
```bibtex
@article{imagenet15russakovsky,
  author  = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
  title   = {{ImageNet Large Scale Visual Recognition Challenge}},
  year    = {2015},
  journal = {International Journal of Computer Vision (IJCV)},
  doi     = {10.1007/s11263-015-0816-y},
  volume  = {115},
  number  = {3},
  pages   = {211--252}
}
``` |