



\subsection{Experiments on MSRC Dataset}

We first evaluate our proposed method on the public MSRC cosegmentation dataset~\cite{Shotton2006MSRC}, which contains 591 pixelwise-labeled images of 23 object classes.
Each group contains two classes, i.e., the background and a common
object with similar appearance (e.g., cow, dog). Since this is a standard binary segmentation setting, many existing single-class cosegmentation algorithms are applicable here.
Table~\ref{tbl:msrc} gives a quantitative comparison with \cite{Joulin2012,Kim2011ICCV,Mukherjee2011}; we select the same classes as reported in \cite{Joulin2012}.
Segmentation performance is measured by the intersection-over-union (IoU) score, the standard metric of the PASCAL challenges.
Our method outperforms the state-of-the-art cosegmentation methods on 11 of the 14 classes.
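For concreteness, the intersection-over-union score used throughout this section can be sketched as follows for a single binary mask; the function name `iou` and the toy masks are illustrative, not part of the evaluation code used in the paper:

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty masks agree perfectly.
    return inter / union if union > 0 else 1.0

# Toy 2x2 example: one pixel in the intersection, three in the union.
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [1, 0]])
print(iou(pred, gt))  # 1/3
```

For the multi-class datasets below, the per-class IoU is computed in the same way on each class's mask and then averaged.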

\begin{table}
\centering
{\small
\begin{tabular}{ccc|cccc}
  \hline
  class     & $N$   & $M$       & \cite{Joulin2012} & \cite{Kim2011ICCV}    & \cite{Mukherjee2011}  &FMaps-multi        \\ \hline
  Bike      & 30    & 2         & 43.3              & 29.9                  & 42.8                  & \textbf{51.2}  \\
  Bird      & 30    & 2         & 47.7              & 29.9                  & -                     & \textbf{55.7} \\
  Car       & 30    & 2         & 59.7              & 37.1                  & 52.5                  & \textbf{72.9}  \\
  Cat       & 24    & 2         & 31.9              & 24.4                  & 5.6                   & \textbf{65.9} \\
  Chair     & 30    & 2         & 39.6              & 28.7                  & 39.4                  & \textbf{46.5} \\
  Cow       & 30    & 2         & 52.7              & 33.5                  & 26.1                  & \textbf{68.4} \\
  Dog       & 30    & 2         & 41.8              & 33.0                  & -                     & \textbf{55.8} \\
  Face      & 30    & 2         & 70.0              & 33.2                  & 40.8                  & 60.9 \\
  Flower    & 30    & 2         & 51.9              & 40.2                  & -                     & \textbf{67.2} \\
  House     & 30    & 2         & 51.0              & 32.2                  & 66.4                  & 56.6 \\
  Plane     & 30    & 2         & 21.6              & 25.1                  & 33.4                  & \textbf{52.2} \\
  Sheep     & 30    & 2         & 66.3              & 60.8                  & 45.7                  & \textbf{72.2} \\
  Sign      & 30    & 2         & 58.9              & 43.2                  & -                     & \textbf{59.1} \\
  Tree      & 30    & 2         & 67.0              & 61.2                  & 55.9                  & 62.0 \\   \hline
\end{tabular}
}
\caption{Binary segmentation performance (IoU, \%) on the MSRC dataset. $N$: number of images per group; $M$: number of classes.}
\label{tbl:msrc}
\end{table}

\subsection{Experiments on Flickr Dataset}
\label{ssec:flickr}

We then evaluate our proposed method on the public multi-class image dataset Flickr~\cite{Kim2012CVPR}. This dataset consists of 14 groups, each
containing 10--20 images with ground-truth pixel-level annotations. We compare our method with other state-of-the-art methods, including
\cite{Kim2012CVPR,Kim2011ICCV,Joulin2010,Russel2006}.


\begin{table}
\centering
{\small
\begin{tabular}{ccc|ccccc}
  \hline
  class             & $N$   & $M$   &\cite{Kim2012CVPR}& \cite{Kim2011ICCV}    &\cite{Joulin2010}  & \cite{Russel2006} & FMaps-multi        \\ \hline
  Apple             & 20    & 6     & 40.9             & 32.6                  & 24.8              & 25.6              & \textbf{59.6}       \\
  Baseball          & 18    & 5     & 31.0             & 31.3                  & 19.2              & 16.1              & \textbf{76.0}       \\
  Butterfly         & 18    & 8     & 29.8             & 32.4                  & 29.5              & 10.7              & \textbf{75.0}     \\
  Cheetah           & 20    & 5     & 32.1             & 40.1                  & 50.9              & 41.9              & \textbf{84.6}     \\
  Cow               & 20    & 5     & 35.6             & 43.8                  & 25.0              & 27.2              &      \\
  Dog               & 20    & 4     & 34.5             & 35.0                  & 32.0              & 30.6              &      \\
  Dolphin           & 18    & 3     & 34.0             & 47.4                  & 37.2              & 30.1              &      \\
  Fishing           & 18    & 5     & 20.3             & 27.2                  & 19.8              & 18.3              &      \\
  Gorilla           & 18    & 4     & 41.0             & 38.8                  & 41.1              & 28.1              &      \\
  Liberty           & 18    & 4     & 31.5             & 41.2                  & 44.6              & 32.1              &      \\
  Parrot            & 18    & 5     & 29.9             & 36.5                  & 35.0              & 26.6              &      \\
  Stonehenge        & 20    & 5     & 35.3             & 49.3                  & 47.0              & 32.6              &      \\
  Swan              & 20    & 3     & 17.1             & 18.4                  & 14.3              & 16.3              &      \\
  Thinker           & 17    & 4     & 25.6             & 34.4                  & 27.6              & 15.7              &      \\
  Average           & -     & -     & 31.3             & 36.3                  & 32.0              & 25.1              &      \\ \hline
  \end{tabular}
  }
\caption{Performance comparison (IoU, \%) on the Flickr dataset. $N$: number of images per group; $M$: number of classes.}
\label{tbl:unsupervised}
\end{table}

\subsection{Experiments on PASCAL-multi Dataset}

In addition to the standard benchmark datasets, we create a more challenging multi-class dataset (``PASCAL-multi'') based on the PASCAL VOC 2012 dataset.
Given a pre-selected set of class labels, a group of images is retrieved from the PASCAL dataset such that each image
contains \emph{only} a subset of the pre-selected labels. This ensures that the pre-selected classes are the only recurring object classes
in the images. This dataset is much more challenging than the Flickr dataset of \S\ref{ssec:flickr} due to its larger size and
the larger appearance variability of the shared objects.
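The retrieval criterion above can be sketched as a simple subset filter; the function name `build_group` and the toy annotation dictionary are hypothetical, assuming per-image label sets are available from the VOC annotations:

```python
def build_group(image_labels, selected):
    """Keep images whose object labels form a non-empty subset of `selected`.

    image_labels: dict mapping image id -> set of object-class labels.
    selected: the pre-selected set of class labels for this group.
    """
    selected = set(selected)
    return [img for img, labels in image_labels.items()
            if labels and set(labels) <= selected]

# Hypothetical annotations for a "bus + car" group.
image_labels = {
    "img1": {"bus", "car"},
    "img2": {"bus"},
    "img3": {"bus", "person"},   # rejected: "person" is not pre-selected
}
print(build_group(image_labels, {"bus", "car"}))  # ['img1', 'img2']
```

An image containing any non-selected class is rejected, which is what guarantees that no other class recurs across the group.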

Our framework is compared with \cite{Shi2000NCut} and \cite{Timothee2005MNcut} (both given the number of objects in each image as a prior),
and the results are shown in Table~\ref{tbl:pascal}. Our method significantly improves the segmentation performance even without knowing the
object configuration of the images.


\begin{table}
\centering
{\small
\begin{tabular}{ccc|ccc}
  \hline
  class                 & $N$       & $M$       & Ncut~\cite{Shi2000NCut}   & \cite{Timothee2005MNcut}  & FMaps-multi        \\ \hline
  Bike + person         & 248       & 2         & 27.3                      & 30.5                      & 40.1  \\
  Boat + person         & 260       & 2         & 29.3                      & 32.6                      & 44.6  \\
  Bottle + dining table & 90        & 2         & 37.8                      & 39.5                      & 47.6  \\
  Bus + car             & 195       & 2         & 36.3                      & 39.4                      & 49.2  \\
  Bus + person          & 243       & 2         & 38.9                      & 41.3                      & 55.5  \\
  Car + person          & 301       & 2         & 27.4                      & 28.6                      & 39.7  \\
  Chair + dining table  & 134       & 2         & 32.3                      & 30.8                      & 40.3  \\
  Chair + potted plant  & 115       & 2         & 19.7                      & 19.7                      & 22.3 \\
  Cow + person          & 263       & 2         & 30.5                      & 33.5                      & 45.0 \\
  Dog + sofa            & 217       & 2         & 44.6                      & 42.2                      & \\
  Horse + person        & 276       & 2         & 27.3                      & 30.8                      & \\
  Potted plant + sofa   & 119       & 2         & 37.4                      & 37.5                      & \\    \hline
\end{tabular}
}
\caption{Performance comparison (IoU, \%) on the PASCAL-multi dataset. $N$: number of images per group; $M$: number of classes.}
\label{tbl:pascal}
\end{table}