Try this demo for PEEKABOO, introduced in our BMVC 2024 paper "PEEKABOO: Hiding Parts of an Image for Unsupervised Object Localization".
Peekaboo explicitly models contextual relationships among pixels through image masking for unsupervised object localization. In a self-supervised procedure (i.e. a pretext task), without any additional training (i.e. no downstream task), context-based representation learning happens at two levels: at the pixel level, by making predictions on masked images, and at the shape level, by matching the predictions for the masked input to those for the unmasked one.
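To make the two objectives concrete, here is a toy NumPy sketch of the idea, not the actual Peekaboo implementation: patches of the image are hidden by a random binary mask (pixel level), and the saliency prediction on the masked image is matched to the prediction on the unmasked image (shape level). The `predict_saliency` function is a hypothetical stand-in for the real segmentation network.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patch_mask(h, w, patch=16, drop_ratio=0.5):
    """Binary mask over non-overlapping patches: 0 = hidden, 1 = visible."""
    gh, gw = h // patch, w // patch
    keep = rng.random((gh, gw)) >= drop_ratio
    return np.kron(keep, np.ones((patch, patch))).astype(np.float32)

def predict_saliency(img):
    # Placeholder for the segmentation network: channel-mean intensity.
    return img.mean(axis=-1)

img = rng.random((64, 64, 3)).astype(np.float32)
mask = random_patch_mask(64, 64)

# Pixel level: the model only sees the visible pixels of the masked image.
masked_img = img * mask[..., None]

# Shape level: the prediction on the masked input should match the
# prediction on the unmasked input (a consistency objective; L2 here).
pred_masked = predict_saliency(masked_img)
pred_full = predict_saliency(img)
shape_loss = float(np.mean((pred_masked - pred_full) ** 2))
```

The real method trains a network under these objectives; the sketch only shows how masking and the masked-to-unmasked consistency term fit together.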
You can use this demo to segment the most salient object(s) in your image, including novel ones. Simply upload an image of your choice and hit Submit; you will get one or more segmentation maps of the most salient objects present in the image.
Project Page