
Segmented ImageNet-1K Subset

A subset of ImageNet-1K with instance segmentation annotations (classes, boxes, and masks), originally created for use with the ViT Prisma Library.

The annotations were generated automatically with Grounded Segment Anything.

The dataset contains 12,000 images in total: 10,000 from the ImageNet-1K train split and 1,000 each from the test and val splits.

Organization

Images are organized in the same structure as ImageNet-1K:

images/
  train_images/
  val_images/
  test_images/

The train and val ImageNet classes can be identified from the filenames, which end with the WordNet synset ID. See the imagenet-1k classes.
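
For example, a minimal sketch of recovering the synset ID from a filename, assuming the ID is always the final underscore-separated token, as in the sample entry shown below (the helper name is illustrative):

import os
import re

def synset_from_filename(path):
    """Extract the WordNet synset ID (e.g. 'n01616318') from a filename
    such as 'ILSVRC2012_val_00000025_n01616318.JPEG'."""
    stem = os.path.splitext(os.path.basename(path))[0]
    match = re.search(r"(n\d{8})$", stem)  # synset IDs are 'n' + 8 digits
    return match.group(1) if match else None

print(synset_from_filename("images/val_images/ILSVRC2012_val_00000025_n01616318.JPEG"))
# -> n01616318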

Masks are stored in a similar manner:

masks/
  train_masks/
  val_masks/
  test_masks/

Finally, train.json, val.json, and test.json store the box, label, score, and mask-path information for each image:

{
  "image": "images/val_images/ILSVRC2012_val_00000025_n01616318.JPEG",
  "scores": [0.5, 0.44, 0.43, 0.28],
  "boxes": [[149, 117, 400, 347], [2, 2, 498, 497], [148, 115, 401, 349], [2, 2, 498, 497]],
  "labels": ["bird", "dirt field", "vulture", "land"],
  "masks": [
    "masks/val_masks/ILSVRC2012_val_00000025_n01616318_00.png",
    "masks/val_masks/ILSVRC2012_val_00000025_n01616318_01.png",
    "masks/val_masks/ILSVRC2012_val_00000025_n01616318_02.png",
    "masks/val_masks/ILSVRC2012_val_00000025_n01616318_03.png"
  ]
}
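
A minimal sketch of loading one annotation entry and its masks with the standard json and PIL libraries. This assumes each file is a JSON array of such entries (if the files are JSON Lines, read them line by line instead) and that paths are resolved relative to the dataset root:

import json
from PIL import Image

# Load the validation annotations (assumed to be a JSON array).
with open("val.json") as f:
    annotations = json.load(f)

entry = annotations[0]
image = Image.open(entry["image"]).convert("RGB")
# Each mask is a single-channel PNG aligned index-wise with the
# corresponding box, label, and score.
masks = [Image.open(p) for p in entry["masks"]]
for box, label, score in zip(entry["boxes"], entry["labels"], entry["scores"]):
    print(label, score, box)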

You can use the following dataloader to produce patch-level labels for each image; the patch size is a hyperparameter. Each item in dataset is expected to be a dictionary with 'image' (a PIL Image), 'masks' (a list of PIL Images), and 'labels' (a list of strings), matching the JSON entries above.

import numpy as np
from torch.utils.data import Dataset
from torchvision import transforms


class PatchDataset(Dataset):
    def __init__(self, dataset, patch_size=16, width=224, height=224):
        """
        dataset: A list of dictionaries, each of which holds an image together
        with its masks and labels.
        """
        self.dataset = dataset
        self.transform = transforms.Compose([
            transforms.Resize((height, width)),  # Resize takes (height, width)
            # transforms.Grayscale(num_output_channels=3),  # optional: 3-channel grayscale
            transforms.ToTensor(),  # Convert the PIL image to a CxHxW tensor
        ])
        self.patch_size = patch_size

        self.width = width
        self.height = height
        
    def __len__(self):
        return len(self.dataset)
    
    def __getitem__(self, idx):
        item = self.dataset[idx]
        image = self.transform(item['image'])  # item['image'] is a PIL Image
        masks = item['masks']    # list of per-instance PIL mask images
        labels = item['labels']  # labels are aligned index-wise with masks

        # Grid size of the reduced mask (assumes width == height)
        num_patches = self.width // self.patch_size
        label_array = [[[] for _ in range(num_patches)] for _ in range(num_patches)]

        for mask, label in zip(masks, labels):
            # Resize the mask to match the image, then binarize it
            mask = mask.resize((self.width, self.height))
            mask_array = np.array(mask) > 0
            reduced_mask = self.reduce_mask(mask_array)
            
            # Populate the label array based on the reduced mask
            for i in range(num_patches):
                for j in range(num_patches):
                    if reduced_mask[i, j]:
                        label_array[i][j].append(label)
        
        # label_array is a nested Python list: one list of label strings per
        # patch. Convert it to a tensor-friendly encoding if your training
        # loop requires it.

        return image, label_array
    

    def reduce_mask(self, mask):
        """
        Reduce the mask size by dividing it into patches and checking if there's at least
        one True value within each patch.
        """
        # Calculate new height and width
        new_h = mask.shape[0] // self.patch_size
        new_w = mask.shape[1] // self.patch_size
        
        reduced_mask = np.zeros((new_h, new_w), dtype=bool)
        
        for i in range(new_h):
            for j in range(new_w):
                patch = mask[i*self.patch_size:(i+1)*self.patch_size, j*self.patch_size:(j+1)*self.patch_size]
                reduced_mask[i, j] = np.any(patch)  # Set to True if any value in the patch is True
        
        return reduced_mask
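
A minimal usage sketch, again assuming val.json is a JSON array and that images and masks are materialized as PIL Images up front. Note that label_array is a nested Python list, so the default DataLoader collate function will not batch it; index the dataset directly or supply a custom collate_fn:

import json
from PIL import Image

with open("val.json") as f:
    annotations = json.load(f)

# Materialize PIL images eagerly; for large splits, load lazily instead.
dataset = [
    {
        "image": Image.open(e["image"]).convert("RGB"),
        "masks": [Image.open(p) for p in e["masks"]],
        "labels": e["labels"],
    }
    for e in annotations[:100]
]

patch_dataset = PatchDataset(dataset, patch_size=16)
image, label_array = patch_dataset[0]
print(image.shape)        # torch.Size([3, 224, 224])
print(label_array[0][0])  # labels covering the top-left 16x16 patch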

Citation

Please consider citing this dataset if you use it in your research:

@misc{segmented_imagenet1k_subset_2024,
  author = {ViT-Prisma Contributors},
  title = {Segmented ImageNet-1k Subset},
  url = {https://huggingface.co/datasets/Prisma-Multimodal/segmented-imagenet1k-subset},
  version = {1.0.0},
  date = {2024-04-02},
}

Grounded Segment Anything and ImageNet can be cited as follows:

@software{grounded_segment_anything,
  author = {Grounded-SAM Contributors},
  title = {Grounded-Segment-Anything},
  url = {https://github.com/IDEA-Research/Grounded-Segment-Anything},
  version = {1.2.0},
  date = {2023-04-06},
  license = {Apache-2.0},
  message = {If you use this software, please cite it as below.}
}
@article{imagenet15russakovsky,
  author  = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
  title   = {{ImageNet Large Scale Visual Recognition Challenge}},
  year    = {2015},
  journal = {International Journal of Computer Vision (IJCV)},
  doi     = {10.1007/s11263-015-0816-y},
  volume  = {115},
  number  = {3},
  pages   = {211--252}
}