Image segmentation is one of the fundamental problems in computer vision. Traditional methods have focused on segmenting a single image. These methods typically utilize segmentation clues, e.g., color changes and sharp edges, to divide a given image into pieces. However, because these clues are often noisy, single-image techniques typically yield poor results.

Recently, there has been growing interest in unsupervised image co-segmentation, where the segmentations are forced to be consistent across a collection of similar images. The key idea is to establish correspondences across images and to compute consistent segmentations that agree with the segmentation clues provided by all the images together. This formulation, which applies a robust filtering operator across all the segmentation clues, turns out to perform much better than single-image segmentation techniques. However, state-of-the-art image co-segmentation techniques are all restricted to a homogeneous setting, where all the input images contain the same set of objects (mostly just the foreground and the background).

In this paper, we consider a much more general image co-segmentation problem, where each input image may contain an arbitrary subset of all possible objects. We call such an image collection a heterogeneous image collection. Obviously, co-segmenting a heterogeneous image collection poses fundamental challenges, both in how to establish reliable relations across the images and in how to identify objects that appear only in a subset of the input images.

We propose to address these two issues using functional maps, which were recently introduced to the vision community. Unlike traditional image matching techniques, which establish correspondences between image pixels/superpixels, functional maps establish maps between functions defined on the images. As image segmentation can be considered as computing binary segmentation functions on pixels/superpixels, functional maps are particularly suitable for image co-segmentation, since they nicely integrate the problems of segmentation and image matching.
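To make this idea concrete, we give an illustrative sketch (the notation here is ours and not tied to any particular prior formulation): let $\mathcal{F}(I_1)$ and $\mathcal{F}(I_2)$ denote spaces of real-valued functions on the superpixels of images $I_1$ and $I_2$, each equipped with a basis $\Phi_1$ and $\Phi_2$ (e.g., eigenvectors of a superpixel graph Laplacian). A functional map is then a linear map $X_{12}: \mathcal{F}(I_1) \rightarrow \mathcal{F}(I_2)$, encoded by a small matrix $C_{12}$ acting on basis coefficients:
\begin{equation}
f = \Phi_1 a \quad \Longrightarrow \quad X_{12}(f) = \Phi_2 C_{12}\, a.
\end{equation}
In particular, the binary indicator function of a segment in $I_1$ is transported to $I_2$ by a single matrix-vector product, without ever computing pointwise pixel correspondences.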

The proposed image co-segmentation framework consists of two stages. The first stage establishes consistent functional maps across the input images. We build upon the existing functional-map framework and introduce a formulation that explicitly models partial similarity across images. Given the optimized consistent functional maps, the second stage optimizes multiple groups of consistent segmentations across the image collection. The objectives include the alignment between the segmentations and sharp edges, their agreement with the functional maps, and their mutual exclusiveness, i.e., the segmentations of different objects should not overlap. We show how to combine these objectives into a single objective function that can be effectively optimized via alternating optimization.
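As a high-level sketch of the second stage (the symbols and exact energy terms here are illustrative; the precise formulation is given later): write $f_i^k$ for the segmentation function of object class $k$ on image $I_i$, and $X_{ij}$ for the optimized functional map from $I_i$ to $I_j$. The objectives above combine into an energy of the form
\begin{equation}
\min_{\{f_i^k\}} \;\; \sum_{i,k} E_{\mathrm{edge}}(f_i^k) \;+\; \lambda \sum_{(i,j),\,k} \big\| X_{ij}(f_i^k) - f_j^k \big\|^2 \quad \text{s.t.} \quad \sum_k f_i^k \le 1 \text{ on each superpixel},
\end{equation}
where the first term favors segment boundaries aligned with sharp image edges, the second term enforces agreement with the functional maps, and the constraint encodes mutual exclusiveness. Alternating optimization then updates one group of segmentation functions while holding the others fixed.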


The proposed approach exhibits significantly improved performance on the standard co-segmentation data sets MSRC~\cite{Shotton2006MSRC} and Flickr~\cite{Kim2012CVPR} compared with recent state-of-the-art methods. Moreover, we create a more challenging multi-class data set, with a larger number of images and larger variance in object appearance, using images from the PASCAL VOC data set~\cite{PASCAL2010}. Our method outperforms other techniques on this data set as well.

% These experiments further suggest that more unsupervised data is highly beneficial to our technique and thus the proposed approach may show even further improvements with the larger image data sets now becoming easily available.
