
\subsection{Initialization}
\label{Sec:Seg:Init}

We use a greedy procedure to initialize the segmentation functions and the association between the classes and the input images. Specifically, we first compute the images associated with class $\set{C}_1$ and the corresponding segmentation functions. Starting from the second class, when computing the segmentation functions, we constrain them to be orthogonal to those of the previous classes. The details are described below.

\para{Initializing $\set{C}_1$.} Motivated by the fact that segmentation functions of the same class should commute with the functional maps, we optimize the segmentation function $\bs{s}_i$ on each image such that
\begin{align*}
\minimize & \quad \frac{1}{|\set{E}|}\sum\limits_{(i,j) \in \set{E}}\|\bs{X}_{ij}\bs{s}_i - \bs{s}_{j}\|^2 + \frac{\mu}{N}\sum\limits_{i=1}^{N}\bs{s}_i ^{T}\bs{L}_i \bs{s}_i \\
\subjectto & \quad \sum\limits_{i=1}^{N} \|\bs{s}_i \|^2 = 1.
\end{align*}

The intuition is as follows: when all the functional maps $\bs{X}_{ij}$ are accurate, the first term simply recovers one set of consistent segmentations, while the second term, analogous to the normalized cut objective, biases the solution towards salient segmentations.

After computing the segmentation functions, we pick $\set{C}_{1}$ to include those images whose segmentation functions have large magnitude, i.e.,
$$
\set{C}_{1} = \{I_i \mid \|\bs{s}_i \| \geq \max\limits_{j}\|\bs{s}_j\|/4\}.
$$
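For concreteness, this initialization reduces to a smallest-eigenvector computation over the stacked segmentation vectors. The following is an illustrative sketch, not the paper's implementation: the interface (dense maps \texttt{X}, per-image Laplacians \texttt{L}, a common basis dimension \texttt{d}) is an assumption.

```python
import numpy as np

def init_segmentations(X, L, edges, mu, N, d):
    """Sketch of the initialization: minimize the stacked quadratic form
    (consistency + normalized-cut terms) subject to the unit-norm
    constraint on the concatenation of all s_i.
    X: dict (i, j) -> d x d functional map X_ij
    L: list of d x d per-image Laplacians L_i
    Returns a list of per-image segmentation functions s_i."""
    Q = np.zeros((N * d, N * d))
    for (i, j) in edges:
        Xij = X[(i, j)]
        # ||X_ij s_i - s_j||^2 contributes four blocks to the big matrix
        Q[i*d:(i+1)*d, i*d:(i+1)*d] += Xij.T @ Xij / len(edges)
        Q[j*d:(j+1)*d, j*d:(j+1)*d] += np.eye(d) / len(edges)
        Q[i*d:(i+1)*d, j*d:(j+1)*d] -= Xij.T / len(edges)
        Q[j*d:(j+1)*d, i*d:(i+1)*d] -= Xij / len(edges)
    for i in range(N):
        Q[i*d:(i+1)*d, i*d:(i+1)*d] += mu / N * L[i]
    # Minimum of s^T Q s over ||s|| = 1: eigenvector of smallest eigenvalue.
    _, V = np.linalg.eigh(Q)
    s = V[:, 0]
    return [s[i*d:(i+1)*d] for i in range(N)]
```

With identity maps and zero Laplacians, the recovered $\bs{s}_i$ coincide across images, as expected for perfectly consistent maps.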

\para{Initializing $\set{C}_{k}, 2\leq k \leq M$.} When initializing the segmentation functions of the remaining classes, we add a term that promotes mutual exclusivity with the segmentation functions of the previous classes:
\begin{align*}
\minimize & \quad \frac{1}{|\set{E}|}\sum\limits_{(i,j) \in \set{E}}\|\bs{X}_{ij}\bs{s}_i - \bs{s}_{j}\|^2 + \frac{\mu}{N}\sum\limits_{i=1}^{N}\bs{s}_i ^{T}\bs{L}_i \bs{s}_i \\
                      & \quad + \frac{\lambda}{N} \sum\limits_{i=1}^{N}\sum\limits_{k' < k,\, I_i \in \set{C}_{k'}}(\bs{s}_{i}^{T}\bs{s}_{ik'})^2 \\
\subjectto & \quad \sum\limits_{i=1}^{N} \|\bs{s}_i \|^2 = 1.
\end{align*}

We use the same thresholding strategy to determine the images in the new class and the associated segmentation functions.

\subsection{Continuous Optimization}
\label{Sec:Seg:Opt}

Given the fixed associations between the classes and the input images, we optimize the corresponding segmentations $\bs{s}_{ik}$ via constrained optimization. The objective function consists of three terms. The first term measures the consistency between $\bs{s}_{jk}$ and $\bs{X}_{ij} \bs{s}_{ik}, (i,j) \in \set{E}$, i.e., the induced segmentations from its neighbors:
\begin{equation}
f_{ij}^{\consistency} = \|\bs{X}_{ij}\bs{s}_{ik} - \bs{s}_{jk}\|^2.
\end{equation}
The second term evaluates the mutual exclusiveness of the segmentation functions of different classes on the same image:
\begin{equation}
f_{i}^{\exclusive} = \sum\limits_{k < k':\, I_i \in \set{C}_{k}\cap \set{C}_{k'}} (\bs{s}_{ik}^{T}\bs{s}_{ik'})^2.
\end{equation}
The final term evaluates the saliency of each segmentation. In this case, we employ the normalized cut potential
\begin{equation}
f_{i}^{\textup{cut}} = \bs{s}_{ik}^{T}\bs{L}_i \bs{s}_{ik}.
\end{equation}
Merely optimizing $f_{ij}^{\consistency}$, $f_{i}^{\exclusive}$, and $f_{i}^{\textup{cut}}$ would force all segmentation functions to be the zero vector. We therefore include the normalization constraints $\sum\limits_{i \in \set{C}_{k}} \|\bs{s}_{ik}\|^2 =|\set{C}_{k}|, \quad 1\leq k \leq M$ to address this issue. Combining the three objective terms, we arrive at the following optimization problem:
\begin{align}
\underset{\bs{s}_{ik}, i \in \set{C}_{k}}{\minimize} & \quad  \sum\limits_{k=1}^{M}\sum\limits_{(i,j)\in \set{E}\cap (\set{C}_{k}\times \set{C}_{k})} \|\bs{X}_{ij}\bs{s}_{ik} - \bs{s}_{jk}\|^2 \nonumber \\
                               & + \gamma \sum\limits_{k=1}^{M}\sum\limits_{l \neq k} \sum\limits_{i \in \set{C}_{k}\cap \set{C}_{l}}(\bs{s}_{il}^{T}\bs{s}_{ik})^2 + \mu \sum\limits_{k=1}^{M}\sum\limits_{i \in \set{C}_{k}}\bs{s}_{ik}^{T}\bs{L}_i \bs{s}_{ik} \nonumber \\
 \subjectto                              &\quad  \sum\limits_{i \in \set{C}_{k}} \|\bs{s}_{ik}\|^2 =|\set{C}_{k}|, \quad 1\leq k \leq M.
\label{Eq:Prob1}
\end{align}

It is hard to optimize (\ref{Eq:Prob1}) directly because the term $(\bs{s}_{ik}^{T}\bs{s}_{il})^2$ is quartic in the segmentation function coefficients. However, the objective function becomes quadratic if we only optimize the segmentation functions associated with one class at a time, which suggests an alternating optimization procedure. Specifically, at each step we optimize the segmentation functions associated with class $\set{C}_{k}$, keeping those of the other classes fixed. This leads to the following constrained optimization problem:
\begin{align}
\underset{\bs{s}_{ik}, i \in \set{C}_{k}}{\minimize} & \quad  \sum\limits_{(i,j)\in \set{E}\cap (\set{C}_{k}\times \set{C}_{k})} \|\bs{X}_{ij}\bs{s}_{ik} - \bs{s}_{jk}\|^2 \nonumber \\
                               & + \gamma\sum\limits_{i \in \set{C}_{k}}\sum\limits_{l\neq k,\, I_i \in \set{C}_{l}}(\bs{s}_{il}^{T}\bs{s}_{ik})^2 + \mu \sum\limits_{i \in \set{C}_{k}}\bs{s}_{ik}^{T}\bs{L}_i \bs{s}_{ik} \nonumber \\
 \subjectto                              &\quad  \sum\limits_{i \in \set{C}_{k}} \|\bs{s}_{ik}\|^2 =|\set{C}_{k}|.
\label{Eq:Prob2}
\end{align}
The objective function of (\ref{Eq:Prob2}) is quadratic in the variables $\bs{s}_{ik}$, and the problem can thus be solved by computing the eigenvector associated with the smallest eigenvalue of the corresponding quadratic-form matrix, rescaled to satisfy the norm constraint.

We alternate the optimization over the classes $\set{C}_{1},\ldots,\set{C}_{M}$ until convergence.
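The per-class subproblem (\ref{Eq:Prob2}) admits the following sketch, assuming dense matrices and a common basis dimension \texttt{d}; the function and argument names are illustrative, not the paper's interface.

```python
import numpy as np

def solve_class(edges_k, X, L, s_other, gamma, mu, members, d):
    """Sketch of one alternating step: build the quadratic-form matrix
    over the stacked s_ik of one class and take the smallest-eigenvalue
    eigenvector, rescaled so that sum_i ||s_ik||^2 = |C_k|.
    members: image indices in C_k
    s_other: dict i -> list of fixed segmentation vectors of other classes"""
    pos = {i: p for p, i in enumerate(members)}   # image id -> block position
    n = len(members)
    Q = np.zeros((n * d, n * d))
    for (i, j) in edges_k:                        # consistency term
        a, b = pos[i], pos[j]
        Xij = X[(i, j)]
        Q[a*d:(a+1)*d, a*d:(a+1)*d] += Xij.T @ Xij
        Q[b*d:(b+1)*d, b*d:(b+1)*d] += np.eye(d)
        Q[a*d:(a+1)*d, b*d:(b+1)*d] -= Xij.T
        Q[b*d:(b+1)*d, a*d:(a+1)*d] -= Xij
    for i in members:
        a = pos[i]
        blk = mu * L[i]                           # normalized-cut term
        for s in s_other.get(i, []):              # exclusivity with fixed classes
            blk = blk + gamma * np.outer(s, s)
        Q[a*d:(a+1)*d, a*d:(a+1)*d] += blk
    _, V = np.linalg.eigh(Q)
    s = V[:, 0] * np.sqrt(n)                      # enforce sum ||s_ik||^2 = |C_k|
    return {i: s[pos[i]*d:(pos[i]+1)*d] for i in members}
```

The alternation then loops over $k = 1,\ldots,M$, calling this solve with the other classes' functions held fixed, until the objective stops decreasing.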

\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/seg_prop.eps}
\end{center}
\caption{The segmentation propagation process on xxx dataset. We show four randomly sampled images for each class at each iteration.}
\label{Fig:SegProp}
\end{figure*}


\subsection{Combinatorial Optimization}
\label{Sec:Seg:Prop}

Given the current segmentation functions $\bs{s}_{ik}, I_i \in \set{C}_{k}, 1\leq k \leq M$, we proceed to expand each $\set{C}_{k}$ by propagating segmentation functions to neighboring images using the functional maps, and by detecting salient segmentations. Specifically, for each class $\set{C}_{k}$ and each image $I_i$ such that $\exists I_{j} \in \set{C}_{k}$ with $(i,j) \in \set{E}$, we compute the induced segmentation $\bs{s}_{ik}$ by solving the following constrained optimization problem:
\begin{align}
\underset{\bs{s}_{ik}}{\maximize} & \quad \frac{1}{|\set{N}(i) \cap \set{C}_{k}|}\sum\limits_{j \in \set{N}(i) \cap \set{C}_{k}} (\bs{s}_{ik}^{T}\bs{X}_{ji}\bs{s}_{jk})^2 \nonumber \\
                         & \quad - \gamma \sum\limits_{l \neq k, i \in \set{C}_{l}} (\bs{s}_{ik}^{T}\bs{s}_{il})^2 - \mu \bs{s}_{ik}^{T} \bs{L}_{i}\bs{s}_{ik} \label{Eq:Prop:Func} \\
\subjectto & \quad \|\bs{s}_{ik}\|^2 = 1.
\end{align}
The three terms in the objective function (\ref{Eq:Prop:Func}) have the following behavior. The first term $\frac{1}{|\set{N}(i) \cap \set{C}_{k}|}\sum\limits_{j \in \set{N}(i) \cap \set{C}_{k}} (\bs{s}_{ik}^{T}\bs{X}_{ji}\bs{s}_{jk})^2$ encourages $\bs{s}_{ik}$ to agree with the induced segmentation functions $\bs{X}_{ji}\bs{s}_{jk}$ from its neighboring images. The second term $\gamma \sum\limits_{l \neq k, I_i \in \set{C}_{l}} (\bs{s}_{ik}^{T}\bs{s}_{il})^2$ ensures that $\bs{s}_{ik}$ is orthogonal to the existing segmentation functions of other classes on this image. The third term measures the saliency of $\bs{s}_{ik}$ with respect to the normalized cut potential. Since the objective function is quadratic in $\bs{s}_{ik}$, the optimal $\bs{s}_{ik}$ is given by the eigenvector associated with the maximum eigenvalue of the matrix
\begin{align*}
\bs{S}_{ik} & = \frac{1}{|\set{N}(i) \cap \set{C}_{k}|}\sum\limits_{j \in \set{N}(i) \cap \set{C}_{k}} (\bs{X}_{ji}\bs{s}_{jk})(\bs{X}_{ji}\bs{s}_{jk})^{T} \nonumber \\
                         & -  \gamma\sum\limits_{l \neq k, I_i \in \set{C}_{l}} \bs{s}_{il} \bs{s}_{il}^{T} - \mu \bs{L}_i.
\end{align*}
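The eigenvector computation above can be sketched as follows; the argument names and list-based interface are assumptions for illustration.

```python
import numpy as np

def propagate_segmentation(X_ji_list, s_jk_list, s_il_list, L_i, gamma, mu):
    """Sketch: assemble the matrix S_ik from neighbor-induced segmentations,
    fixed other-class segmentations, and the Laplacian, then return its
    maximum-eigenvalue eigenvector as the propagated s_ik."""
    d = L_i.shape[0]
    S = np.zeros((d, d))
    n = len(s_jk_list)
    for X_ji, s_jk in zip(X_ji_list, s_jk_list):
        v = X_ji @ s_jk                 # induced segmentation from neighbor j
        S += np.outer(v, v) / n
    for s_il in s_il_list:              # discourage overlap with other classes
        S -= gamma * np.outer(s_il, s_il)
    S -= mu * L_i                       # favor low normalized-cut (salient) s_ik
    _, V = np.linalg.eigh(S)
    return V[:, -1]                     # eigenvector of the largest eigenvalue
```

With a single neighbor, an identity map, and negligible penalty terms, the propagated function simply reproduces the neighbor's segmentation up to sign, which matches the intent of the first term.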
After computing the segmentation function $\bs{s}_{ik}$, we compute the saliency score $\bs{s}_{ik}^{T}\bs{L}_i \bs{s}_{ik}$. We then include $I_i$ into $\set{C}_{k}$ if
$$
\bs{s}_{ik}^{T}\bs{L}_i \bs{s}_{ik} < \epsilon\max_{j \in \set{C}_{k}}\frac{\bs{s}_{jk}^{T}\bs{L}_j \bs{s}_{jk}}{\bs{s}_{jk}^{T}\bs{s}_{jk}},
$$
where we choose $\epsilon  =1/4$.
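The inclusion test is a one-line comparison against the worst normalized cut score among the current class members; this small sketch uses assumed names for the candidate and member data.

```python
import numpy as np

def include_image(s_ik, L_i, members, eps=0.25):
    """Sketch of the inclusion rule: admit image i into C_k if its cut score
    s_ik^T L_i s_ik is below eps times the maximum normalized cut score
    among existing members. members: list of (s_jk, L_j) pairs."""
    ref = max(float(s @ L @ s) / float(s @ s) for s, L in members)
    return float(s_ik @ L_i @ s_ik) < eps * ref
```

A lower cut score indicates a more salient segment, so the rule only admits images whose propagated segmentation is clearly salient relative to the class.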