


\subsection{Initialization}


\subsubsection{Similarity graph}
\label{Sec:Similarity:Graph}

We compute a sparse similarity graph $\set{G}$ for the input image collection $\set{I}$, and only compute functional maps between pairs of images specified by $\set{G}$. In this paper, we simply connect each image to its $k = 30$ nearest images under their GIST~\cite{Oliva2001gist} descriptors $\vec{g}_i, 1\leq i \leq N$. We assign a weight to the edge between each image pair $(i,j) \in \set{G}$ via
\begin{equation}
w_{ij} = \exp(-\|\vec{g}_i-\vec{g}_j\|^2/2\sigma^2),
\end{equation}
where $\sigma = \textup{median}(\|\vec{g}_i-\vec{g}_j\|)$ is the median of the pairwise descriptor distances.
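The graph construction above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code: the GIST descriptors are assumed to be precomputed and stacked into an $(N, d)$ array, and the function name \texttt{build\_similarity\_graph} is a placeholder.

```python
import numpy as np

def build_similarity_graph(G_desc, k=30):
    """Build a sparse k-NN similarity graph from image descriptors.

    G_desc: (N, d) array, one (e.g. GIST) descriptor per image, assumed
    precomputed. Returns a dict mapping a directed edge (i, j) to its
    weight w_ij = exp(-||g_i - g_j||^2 / (2 sigma^2)).
    """
    N = G_desc.shape[0]
    # Pairwise Euclidean distances between descriptors.
    diff = G_desc[:, None, :] - G_desc[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    # Bandwidth sigma: median descriptor distance over all pairs i != j.
    sigma = np.median(dist[~np.eye(N, dtype=bool)])
    edges = {}
    for i in range(N):
        # k nearest neighbours of image i (index 0 is i itself, skip it).
        nbrs = np.argsort(dist[i])[1:k + 1]
        for j in nbrs:
            edges[(i, int(j))] = np.exp(-dist[i, j] ** 2 / (2 * sigma ** 2))
    return edges
```

Each node contributes $k$ directed edges; a symmetric graph can be obtained afterwards by keeping a pair whenever either direction is present.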

\subsubsection{Aligning image features}
\label{Sec:Probe:Function}

When computing the functional map $X_{ij}$ from image $I_i$ to image $I_j$, an obvious constraint is to enforce that $X_{ij}$ agrees with the image features computed from both images. In the functional setting, this is straightforward to formulate, i.e., we simply constrain that $X_{ij}\vec{d}_i \approx\vec{d}_j$, where $\vec{d}_i$ is a descriptor function on image $I_i$, and $\vec{d}_j$ is the corresponding descriptor function on image $I_j$. Accumulating this constraint over all $n_D$ pairs of descriptor functions yields the feature alignment term

\begin{equation}
f^{\textup{feature}}_{ij} = \sum_{k=1}^{n_D}\|X_{ij} \vec{d}^k_i - \vec{d}^k_{j}\|_2,
\label{Eq:Feature:Align:Term}
\end{equation}
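Taken alone, the feature term is a linear least-squares problem in $X_{ij}$. A minimal sketch, under the assumption that the corresponding descriptor functions are stored as columns of two $(M, n_D)$ matrices in each image's reduced basis (the function name and data layout are illustrative):

```python
import numpy as np

def fit_functional_map(D_i, D_j):
    """Least-squares functional map from the feature term alone.

    D_i, D_j: (M, n_D) arrays whose k-th columns are the corresponding
    descriptor functions d^k_i and d^k_j expressed in the M-dimensional
    reduced basis of each image. Solves min_X ||X D_i - D_j||_F.
    """
    # X D_i ~= D_j  is equivalent to  D_i^T X^T ~= D_j^T, a standard
    # (possibly overdetermined) least-squares problem solved column-wise.
    Xt, *_ = np.linalg.lstsq(D_i.T, D_j.T, rcond=None)
    return Xt.T
```

When $n_D < M$ this system is underdetermined, which is exactly the rank-deficiency issue addressed by the regularizer introduced next.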

\subsection{Regularization}

It is easy to see that each row of $X_{ij}$ can be optimized independently of the others. Thus, merely minimizing $f^{\textup{feature}}_{ij}$ would generate a rank-deficient matrix of the form $X_{ij} = (\vec{x}_{ij}^{T}, \cdots,\vec{x}_{ij}^{T})$. Following~\cite{Ovsjanikov2012}, we introduce the following regularization term to improve the conditioning of $X_{ij}$:
\begin{equation}
f^{\textup{reg}}_{ij} = \sum\limits_{1\leq s, s'\leq M} \left(|\lambda_i^{s} - \lambda_{j}^{s'}|X_{ij}(s,s')\right)^2,
\label{Eq:Regularization:Term}
\end{equation}
where $\lambda_i^{s}$ ($\lambda_j^{s'}$) denotes the $s\textup{-th}$ ($s'\textup{-th}$) eigenvalue of the graph Laplacian matrix $L_i$ ($L_j$).
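With the per-entry weights $|\lambda_i^{s} - \lambda_{j}^{s'}|$ of Eq.~\ref{Eq:Regularization:Term}, the regularized problem decouples into one small ridge-like solve per row of $X_{ij}$. The sketch below is illustrative, not the authors' implementation: it assumes the row index of $X_{ij}$ follows the first index $s$ in Eq.~\ref{Eq:Regularization:Term}, squares the feature residual, and uses a placeholder name \texttt{fit\_regularized\_map} and a free weight \texttt{mu}.

```python
import numpy as np

def fit_regularized_map(D_i, D_j, lam_i, lam_j, mu=1.0):
    """Solve min_X ||X D_i - D_j||_F^2 + mu * f_reg row by row.

    D_i, D_j: (M, n_D) descriptor functions in each image's reduced basis;
    lam_i, lam_j: (M,) Laplacian eigenvalues of L_i and L_j.
    """
    M = D_i.shape[0]
    # Per-entry penalty weights |lam_i^s - lam_j^{s'}|.
    W = np.abs(lam_i[:, None] - lam_j[None, :])
    A = D_i @ D_i.T  # Gram matrix of source descriptors, shared by all rows
    X = np.zeros((M, M))
    for s in range(M):
        # Row s solves the ridge-like problem
        #   min_x ||x^T D_i - D_j[s]||^2 + mu * sum_{s'} (W[s, s'] x[s'])^2,
        # whose normal equations are (A + mu diag(W[s]^2)) x = D_i D_j[s]^T.
        H = A + mu * np.diag(W[s] ** 2)
        X[s] = np.linalg.solve(H, D_i @ D_j[s])
    return X
```

Setting \texttt{mu=0} recovers the unregularized least-squares map; for \texttt{mu > 0} each row is pulled toward zero wherever the two spectra disagree, which breaks the degenerate repeated-row solution described above.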


Combining
Eqs.~\ref{Eq:Feature:Align:Term}--\ref{Eq:Regularization:Term}, we
arrive at the following optimization problem for computing functional maps:
\begin{align}
\min & \quad \sum\limits_{(i,j)\in \set{G}} w_{ij} \left( f^{\textup{feature}}_{ij} + \mu f^{\textup{reg}}_{ij} + \lambda f^{\textup{cons}}_{ij} \right) \nonumber \\
\textup{s.t.} &  \quad Y^{T}Y = I_{m},
\label{Eq:Objective:Term}
\end{align}
where $\mu$ and $\lambda$ control the tradeoff among the terms. For all the experiments, we set $\lambda = 10$.




\subsection{Optimization}




\subsection{Updating Process}



Convergence is guaranteed if the combinatorial structure is fixed.
Although we cannot guarantee global convergence of our approach, it behaves well in practice: the combinatorial structure settles down after a few iterations, after which the algorithm converges.