\subsection{Formulation}
\label{Sec:Formu}

We consider three objective terms for optimizing the functional maps. The first two terms measure the quality of each individual functional map, while the third term measures the consistency among the network of functional maps.

\para{Pair-wise objectives.} Similar to ICCV13, we force the functional map $\bs{X}_{ij}$ to align probe functions on images $I_i$ and $I_j$. In addition, we regularize $\bs{X}_{ij}$ so that it aligns functions with similar frequencies. These two objectives result in the following pair-wise objective function:
\begin{equation}
f_{\textup{pair}} = \|\bs{X}_{ij} \bs{D}_i - \bs{D}_j\|_1 +  \lambda_{\textup{regu}} \textup{vec}(\bs{X}_{ij})^{T} \Sigma_{ij} \textup{vec}(\bs{X}_{ij}),
\end{equation}
where we set $\lambda_{\textup{regu}} = 0.1$.
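As a concrete reference for how $f_{\textup{pair}}$ is evaluated, here is a minimal numpy sketch; the function name, the column-major convention for $\textup{vec}(\cdot)$, and a dense $\Sigma_{ij}$ are our assumptions:

```python
import numpy as np

def f_pair(X_ij, D_i, D_j, Sigma_ij, lam_regu=0.1):
    """Pairwise objective: L1 alignment of the probe functions plus the
    quadratic frequency regularizer on vec(X_ij)."""
    x = X_ij.flatten(order="F")              # column-major vec(X_ij)
    align = np.abs(X_ij @ D_i - D_j).sum()   # ||X_ij D_i - D_j||_1
    regu = x @ Sigma_ij @ x                  # vec(X)^T Sigma vec(X)
    return align + lam_regu * regu
```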

\para{Consistency term.}  Following many low-rank matrix completion techniques, a straightforward approach for ensuring the low-rank property of $\bs{X}$ is to minimize its trace-norm. This would lead to a convex program. However, we found that this strategy is inefficient for large datasets because it has to compute and store the functional maps between all pairs of images. To address this issue, we directly introduce the low-rank factors $\bs{Y} = (\bs{Y}_1^{T}, \cdots, \bs{Y}_{N}^{T})^{T}$ and $\bs{Z} = (\bs{Z}_1^{T}, \cdots, \bs{Z}_{N}^{T})^{T}$ as latent variables to be optimized.  Expanding (\ref{Eq:LowRank}) and enforcing the constraint in the least-squares sense, we obtain the following regularization term
\begin{equation}
f_{\textup{lowrank}} = \sum\limits_{(i,j)\in \set{E}} \|\bs{X}_{ij} - \bs{Y}_i \bs{Z}_{j}^{T}\|_{\set{F}}^2.
\end{equation}

In this paper, we set the column dimension of $\bs{Y}$ and $\bs{Z}$ to $K = 100$. A suitable rank can be determined by monitoring the residual of $f_{\textup{lowrank}}$: we would like to choose the minimum value of $K$ for which $f_{\textup{lowrank}}$ remains small.
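Since the residual of the best rank-$K$ approximation equals the energy of the discarded singular values, this selection can be read off the singular-value tail of a stacked map collection matrix. A sketch under that assumption (the function name and tolerance are ours):

```python
import numpy as np

def choose_rank(Xbar, tol=1e-2):
    """Smallest K whose best rank-K approximation of Xbar leaves a
    relative residual (in Frobenius norm) below tol."""
    s = np.linalg.svd(Xbar, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)  # retained energy per rank
    return int(np.searchsorted(energy, 1.0 - tol**2) + 1)
```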

\para{Formulation.} Combining $f_{\textup{pair}}$ and $f_{\textup{lowrank}}$, we arrive at the following optimization problem for computing consistent functional maps
\begin{align}
\{\bs{X}_{ij}^{\star}\} & =\arg\min_{\{\bs{X}_{ij}\}, \bs{Y}, \bs{Z}} \sum\limits_{(i,j)\in \set{E}}(\|\bs{X}_{ij}\bs{D}_i - \bs{D}_j\|_1 \nonumber \\
                                              & + \lambda_{\textup{regu}} \textup{vec}(\bs{X}_{ij})^{T} \Sigma_{ij} \textup{vec}(\bs{X}_{ij})) \nonumber \\
                                              & + \lambda_{\textup{lowrank}} \sum\limits_{(i,j)\in \set{E}} \|\bs{X}_{ij} - \bs{Y}_i \bs{Z}_{j}^{T}\|_{\set{F}}^2,
\end{align}
where $\lambda_{\textup{lowrank}} = 1000$.

\subsection{Optimization}
\label{Sec:Opt}

\para{Initialize functional maps.} We begin by optimizing the functional maps between pairs of images, dropping the low-rank regularization term. In this case, the functional map for each pair of images can be computed in isolation:
\begin{equation}
\bs{X}_{ij}^{\star} = \arg\min_{\bs{X}_{ij}}\|\bs{X}_{ij} \bs{D}_{i} - \bs{D}_{j}\|_1 +\lambda_{\textup{regu}} \textup{vec}(\bs{X}_{ij})^{T}\Sigma_{ij}\textup{vec}(\bs{X}_{ij}).
\label{Eq:X:OPT1}
\end{equation}
Solving (\ref{Eq:X:OPT1}) amounts to solving a convex program (a quadratic program after introducing auxiliary variables for the $L_1$ term), whose global optimum can be obtained. We used CVX for this task.
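As an illustrative stand-in for the exact CVX solve, the same objective can be approximately minimized with iteratively reweighted least squares (IRLS), which smooths the $L_1$ term; this sketch (names and the smoothing constant $\varepsilon$ are ours) is not the QP solver used in our experiments:

```python
import numpy as np

def solve_pairwise_irls(D_i, D_j, Sigma, lam=0.1, iters=50, eps=1e-6):
    """IRLS approximation of min ||X D_i - D_j||_1 + lam vec(X)^T Sigma vec(X).
    Uses vec(X D_i) = (D_i^T kron I) vec(X), with column-major vec."""
    n = D_j.shape[0]
    A = np.kron(D_i.T, np.eye(n))
    b = D_j.flatten(order="F")
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # plain L2 start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)  # reweight the L1 term
        H = A.T @ (w[:, None] * A) + lam * Sigma
        x = np.linalg.solve(H, A.T @ (w * b))
    return x.reshape(n, -1, order="F")
```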

\para{Initialize low-rank factorization.} When the functional maps are fixed, the optimal low-rank factorization is given by minimizing
\begin{equation}
f_{\textup{lowrank}} = \sum\limits_{(i,j)\in \set{E}} \|\bs{X}_{ij} - \bs{Y}_i\bs{Z}_{j}^{T}\|_{\set{F}}^2.
\end{equation}
In the case where $\set{E}$ connects all pairs of images, $\bs{Y}_i$ and $\bs{Z}_{j}$ can be computed analytically by performing SVD of the map collection matrix $\bs{X}$. In the general case, there is no analytical solution for $\bs{Y}$ and $\bs{Z}$, and one has to perform local optimization, where a good initialization turns out to be very important.

We found that a good strategy is to first perform an SVD of the incomplete map collection matrix $\overline{\bs{X}}$, where
$$
\overline{\bs{X}}_{ij} = \left\{
                                  \begin{array}{cc}
                                    \bs{X}_{ij} & (i,j)\in \set{E}, \\
                                    0 & \textup{otherwise}. \\
                                  \end{array}
\right.
$$

Let $\Omega$ denote the diagonal matrix of the top $K = 100$ singular values of $\overline{\bs{X}}$, and let $\bs{U}$ and $\bs{V}$ collect the associated left and right singular vectors, respectively. We initialize
$$
\overline{\bs{Y}} = \bs{U}\Omega^{\frac{1}{2}}, \quad \overline{\bs{Z}} = \bs{V}\Omega^{\frac{1}{2}} .
$$
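In numpy, this initialization might look as follows; the dictionary-of-blocks storage for $\{\bs{X}_{ij}\}$ (blocks of size $n\times n$) and the function name are our assumptions:

```python
import numpy as np

def init_factors(X_dict, n, N, K):
    """Truncated SVD of the zero-filled map collection matrix:
    Y-bar = U Omega^{1/2}, Z-bar = V Omega^{1/2} for the top K
    singular values."""
    Xbar = np.zeros((N * n, N * n))
    for (i, j), Xij in X_dict.items():
        Xbar[i*n:(i+1)*n, j*n:(j+1)*n] = Xij   # block (i, j) of X-bar
    U, s, Vt = np.linalg.svd(Xbar)
    r = np.sqrt(s[:K])
    return U[:, :K] * r, Vt[:K].T * r          # Y-bar, Z-bar
```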

Since the zero blocks of $\overline{\bs{X}}$ bias the singular vectors, we introduce a $K\times K$ correction matrix $\bs{W}$, and let
$$
\bs{Y} = \overline{\bs{Y}}, \quad \bs{Z} = \overline{\bs{Z}}\bs{W}^{T}.
$$

We then optimize $\bs{W}$ to minimize $f_{\textup{lowrank}}$:
\begin{equation}
\bs{W}^{\star} = \arg\min_{\bs{W}}\sum\limits_{(i,j)\in \set{E}}\|\bs{X}_{ij} - \overline{\bs{Y}}_i \bs{W}\overline{\bs{Z}}_{j}^{T}\|_{\set{F}}^2.
\end{equation}
As $f_{\textup{lowrank}}$ is quadratic in $\bs{W}$, $\bs{W}^{\star}$ can be obtained by solving a least-squares problem.
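Since $\textup{vec}(\bs{A}\bs{B}\bs{C}) = (\bs{C}^{T} \otimes \bs{A})\,\textup{vec}(\bs{B})$, this least-squares problem reduces to normal equations accumulated over the observed edges; a numpy sketch (the storage layout and names are ours, with `C` denoting the correction matrix):

```python
import numpy as np

def fit_correction(X_dict, Ybar, Zbar, n, K):
    """Least-squares fit of the K x K correction matrix C minimizing
    sum over observed (i,j) of ||X_ij - Ybar_i C Zbar_j^T||_F^2."""
    H = np.zeros((K * K, K * K))
    g = np.zeros(K * K)
    for (i, j), Xij in X_dict.items():
        # vec(Ybar_i C Zbar_j^T) = (Zbar_j kron Ybar_i) vec(C)
        A = np.kron(Zbar[j*n:(j+1)*n], Ybar[i*n:(i+1)*n])
        H += A.T @ A
        g += A.T @ Xij.flatten(order="F")
    return np.linalg.solve(H, g).reshape(K, K, order="F")
```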

\para{Optimizing the low-rank approximation.} Given the initialization of $\bs{Y}$ and $\bs{Z}$, we refine them by alternating minimization. Specifically, we first fix $\bs{Z}$ and optimize $\bs{Y}$. In this case, $f_{\textup{lowrank}}$ becomes a quadratic function, and it is easy to see that the optimal value of each $\bs{Y}_i$ is given by
\begin{align}
\bs{Y}_i^{\star} &= \arg\min_{\bs{Y}_i} \sum\limits_{j \in \set{N}(i)} \|\bs{X}_{ij} - \bs{Y}_i\bs{Z}_j^{T}\|_{\set{F}}^2 \nonumber \\
                 &= \Big(\sum\limits_{j \in \set{N}(i)}\bs{X}_{ij}\bs{Z}_{j}\Big)\Big(\sum\limits_{j \in \set{N}(i)}\bs{Z}_{j}^{T}\bs{Z}_{j}\Big)^{\dagger}, \quad 1\leq i \leq N, \nonumber
\end{align}
where $\bs{A}^{\dagger}$ denotes the pseudo-inverse of a square matrix $\bs{A}$. Similarly, when $\bs{Y}_i, 1 \leq i \leq N$ are fixed, the optimal values of $\bs{Z}_j, 1\leq j \leq N$ are given by
\begin{align}
\bs{Z}_j^{\star} &= \arg\min_{\bs{Z}_j} \sum\limits_{i \in \set{N}(j)} \|\bs{X}_{ij} - \bs{Y}_i\bs{Z}_j^{T}\|_{\set{F}}^2 \nonumber \\
                 &= \Big(\sum\limits_{i \in \set{N}(j)}\bs{X}_{ij}^{T}\bs{Y}_{i}\Big)\Big(\sum\limits_{i \in \set{N}(j)}\bs{Y}_{i}^{T}\bs{Y}_{i}\Big)^{\dagger}, \quad 1\leq j \leq N. \nonumber
\end{align}
This alternating minimization procedure decreases $f_{\textup{lowrank}}$ monotonically and converges to a local optimum. We detect convergence when
$$
\max (\max\limits_{i}\frac{\|\bs{Y}_i-\bs{Y}_i^{\textup{prev}}\|}{\|\bs{Y}_i\|}, \max\limits_{j}\frac{\|\bs{Z}_j-\bs{Z}_j^{\textup{prev}}\|}{\|\bs{Z}_j\|}) \leq 10^{-3}.
$$
Typically, 4--5 alternating steps are sufficient.
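The two closed-form updates above can be sketched as a short alternating loop in numpy; the block layout, the neighbor extraction from a dictionary of observed maps, and the names are our assumptions:

```python
import numpy as np

def als_refine(X_dict, Y, Z, n, iters=10):
    """Alternate the closed-form updates for the n x K blocks Y_i, Z_j
    minimizing sum over observed (i,j) of ||X_ij - Y_i Z_j^T||_F^2."""
    N = Y.shape[0] // n
    blk = lambda M, k: M[k*n:(k+1)*n]
    for _ in range(iters):
        for i in range(N):                      # update Y_i with Z fixed
            nbrs = [j for (a, j) in X_dict if a == i]
            if not nbrs:
                continue
            G = sum(blk(Z, j).T @ blk(Z, j) for j in nbrs)
            B = sum(X_dict[(i, j)] @ blk(Z, j) for j in nbrs)
            Y[i*n:(i+1)*n] = B @ np.linalg.pinv(G)
        for j in range(N):                      # update Z_j with Y fixed
            nbrs = [i for (i, b) in X_dict if b == j]
            if not nbrs:
                continue
            G = sum(blk(Y, i).T @ blk(Y, i) for i in nbrs)
            B = sum(X_dict[(i, j)].T @ blk(Y, i) for i in nbrs)
            Z[j*n:(j+1)*n] = B @ np.linalg.pinv(G)
    return Y, Z
```

Each block update solves its own least-squares problem, so the objective never increases across a sweep.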

\para{Optimizing functional maps.} When the low-rank factors $\bs{Y}_{i}, 1\leq i \leq N$ and $\bs{Z}_{j}, 1\leq j \leq N$ are fixed, it is easy to see that the optimal pair-wise functional maps can be computed independently by solving the following optimization problem
\begin{align}
\bs{X}_{ij}^{\star} &= \arg\min_{\bs{X}_{ij}}\|\bs{X}_{ij} \bs{D}_{i} - \bs{D}_{j}\|_1 + \lambda_{\textup{lowrank}} \|\bs{X}_{ij} - \bs{Y}_i \bs{Z}_j^{T}\|_{\set{F}}^2 \nonumber \\
                                        & +\lambda_{\textup{regu}} \textup{vec}(\bs{X}_{ij})^{T}\Sigma_{ij}\textup{vec}(\bs{X}_{ij}).
\label{Eq:X:OPT2}
\end{align}
Similar to (\ref{Eq:X:OPT1}), solving (\ref{Eq:X:OPT2}) amounts to a convex program, whose global optimum can be obtained.

\para{Convergence detection.} The alternating optimization described above decreases $f$ monotonically and thus converges to a local optimum. We detect convergence by checking
$$
\max\limits_{(i,j)\in \set{E}}\frac{\|\bs{X}_{ij} - \bs{X}_{ij}^{\textup{prev}}\|}{\|\bs{X}_{ij}\|} \leq 10^{-3}.
$$
