\documentclass[10pt,twocolumn,letterpaper]{article}
\usepackage{amsfonts}

\usepackage{cvpr}
\usepackage{times}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{rotating}
\usepackage{subfig}

\usepackage{color,soul} % For \hl

% Include other packages here, before hyperref.

% If you comment hyperref and then uncomment it, you should delete
% egpaper.aux before re-running latex.  (Or just hit 'q' on the first latex
% run, let it finish, and you should be clear).
\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}


% \cvprfinalcopy % *** Uncomment this line for the final submission

\def\cvprPaperID{61} % *** Enter the CVPR Paper ID here
\def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}

% Pages are numbered in submission mode, and unnumbered in camera-ready
\ifcvprfinal\pagestyle{empty}\fi
\begin{document}

%%%%%%%%% TITLE
\title{Exploring Image Relationships Using Functional Maps}

\iffalse
\author{Fan Wang\\
Stanford University\\
{\tt\small fanw@stanford.edu}
\and
Raif Rustamov\\
Stanford University\\
{\tt\small raifrustamov@gmail.com}
\and
Adrian Butscher\\
Stanford University\\
{\tt\small adrian.butscher@gmail.com}
\and
Justin Solomon\\
Stanford University\\
{\tt\small justin.solomon@stanford.edu}
% For a paper whose authors are all at the same institution,
% omit the following lines up until the closing ``}''.
% Additional authors and addresses can be added with ``\and'',
% just like the second author.
% To save space, use either the email address or home page, not both
\and
Leonidas Guibas\\
Stanford University\\
{\tt\small guibas@cs.stanford.edu}
%{\small\url{http://www.author.org/~second}}
}
\fi

\maketitle
% \thispagestyle{empty}

%%%%%%%%% ABSTRACT
\begin{abstract}
Establishing correspondences between two images
is a challenging task,
especially when point-to-point correspondences are hard to obtain
because the images depict drastically different scenes.
In this paper,
we propose to represent images in a functional space,
and to find a functional map between two images by
minimizing the mapping error in the functional space
with a commutativity regularization term.
This mapping is flexible yet powerful, and by using a limited number of basis functions,
the computational cost can be reduced significantly.
We explore the efficacy of the proposed method in two applications.
First, the functional map is used to transfer segmentations
from training images to test images.
Even though very simple features are used,
thanks to the power of the functional map,
our results match those of state-of-the-art methods.
Second, we use the functional map to
analyze and visualize the distortion between faces
caused by pose and expression variation,
which reveals interesting observations.
\end{abstract}

%%%%%%%%% BODY TEXT

\section{Introduction}

Finding corresponding points between images is a long-standing research problem in computer vision
and it plays an important role in multiple applications.

Correspondences between images were originally built to align images with close spatial relationships,
e.g., in Structure-from-Motion~\cite{Szeliski2011} and stereo matching~\cite{Scharstein2002stereo}.
In these cases, features such as corner points are tracked from one image to the next to build the correspondence,
and the output is usually the geometric transformation between viewpoints.
Correspondences are also built for dynamic scenes in video sequences for motion estimation and tracking,
where an estimate of a 2D flow field, the optical flow, can be obtained~\cite{Horn1981opticalflow}.
These techniques all require the images to contain the same scene, i.e., the same foreground object and background,
with only limited spatial displacement due to viewpoint changes or object movement.

The problem becomes especially difficult when we need to find image correspondence for object recognition and image retrieval scenarios.
The goal is to match different instances of the same object category across images, and the objects may have different appearance and shapes.
Local feature representations have been successfully used here,
and point-to-point matching methods with local descriptors (e.g., SIFT~\cite{Lowe1999}, shape context~\cite{Belongie2002sc})
are particularly valuable for these tasks.
Although local features are robust to partial occlusion, illumination changes, and rotation to some degree,
they still require the objects to be visually similar and the background clutter to be limited.
%The SIFT flow method~\cite{Liu2008} provides dense correspondences between every pixel of two images, by extracting SIFT descriptor at each pixel and find a discrete
%and discontinuity-preserving optical flow to match the SIFT descriptors between two images. It has shown promising results for scene matching, however,
%it will work well only when images have similar spatial layout.

In image parsing, or segmenting objects in an image,
matching between training and testing images is also crucial in most nonparametric and data-driven approaches.
In these approaches, for each new test image,
the most similar training images are retrieved from the data set
and the desired information can be transferred from the training images to the query.

A coarse-to-fine SIFT flow algorithm has been used to align the structures between an input image and the retrieved best-matching annotated images~\cite{Liu2009};
however, this requires the retrieval-set images to be very similar to the input image in terms of spatial layout.
Since the overall scene matching is generally imperfect,
an input image can instead be explained by partial matches of similar scenes,
i.e., each region of the input image is matched to semantically similar regions taken from different images in the data set.
A stack of images has been used to determine the likely semantic boundaries between objects,
with pixels belonging to the same object grouped across the stack~\cite{Russell2009}.
Under a similar assumption, labels have been transferred at the superpixel level based on matching with local features,
which allows for more variation between the layouts of the test image and the retrieved images~\cite{Tighe2010ECCV}.
Images have also been decomposed into overlapping windows likely to cover foreground objects,
and segmentation masks are transferred from training windows which are similar to windows in the test images~\cite{Kuettel2012Figure}.
The assumption here is that visually similar windows often have similar segmentation masks.

In this paper, we are also interested in finding correspondences between images with different scenes.
The images may contain different objects of the same category, with different background, at different spatial locations, in different view points.
The variability of the visual world is so vast,
with an exponential number of object combinations within each scene,
that it is hard to always find an exact match in a limited retrieved training set.
Instead, we argue that although an input image might not be matched to other images in the original image space,
it can be matched in another space, namely the functional space considered in this paper.
Rather than building point-to-point or patch-to-patch correspondences in image space,
we consider mapping between functions defined on the images.

This paper utilizes a novel functional representation,
which can infer and manipulate the mapping between images in a fundamentally different way.
A direct advantage of this representation is that
every pointwise correspondence between images can be turned into a mapping between function spaces,
while the converse is not true in general. This means the representation is more flexible
than traditional point-to-point correspondences.
Another advantage of this representation is that,
many natural constraints on the map become linear constraints on the functional map.

Based on this new representation, we design a framework for building maps between images,
and apply it to the task of segmentation transfer,
which achieves comparable performance as other state-of-the-art methods on benchmark data set.
We also demonstrate the usefulness of this representation on another task of face distortion analysis by directly analyzing
and visualizing the functional map between two faces, without establishing point-to-point correspondences on facial landmarks.

The rest of this paper is organized as follows: the novel representation, the functional map, is introduced in Sec.\ref{sec:fmap}.
We then apply the functional representation to build maps between images and discuss the related issues in Sec.\ref{sec:imgcorrespondence}.
The first application of functional maps, segmentation transfer, is described in Sec.\ref{sec:framework},
followed by experiments in Sec.\ref{sec:experiments}.
Sec.\ref{sec:face} is dedicated to a second application, face distortion analysis, carried out by analyzing and visualizing functional maps.
The paper is concluded in Sec.\ref{sec:conclusion}.


\section{Functional Map}
\label{sec:fmap}

Functional maps~\cite{Ovsjanikov2012} provide a novel representation of maps between pairs of graphs.
Suppose there is a bijective mapping $T:G_a\to G_b$ between two graphs $G_a$ and $G_b$;
this mapping $T$ also transports functions on $G_a$ to functions on $G_b$.
That is, given a function $f:G_a\to \mathbb{R}$ on graph $G_a$, which takes a scalar value on each graph node,
we obtain a corresponding function $g:G_b\to \mathbb{R}$ such that $g=f\circ T^{-1}$.
This induced transformation is denoted as $T_F:\mathcal{F}(G_a,\mathbb{R}) \to \mathcal{F}(G_b,\mathbb{R})$,
where $\mathcal{F}(\cdot,\mathbb{R})$ denotes a generic space of real-valued functions.
We call $T_F$ the functional representation of the mapping $T$.

Suppose $\left\{ {\varphi _i^a } \right\}$ is a basis for the function space of $G_a$,
so that a function $f:G_a\to \mathbb{R}$ can be represented as a linear combination $f = \sum\nolimits_i {a_i \varphi _i^a }$; then
\begin{equation}
\label{eq:basisa}
g = T_F \left( f \right) = T_F \left( {\sum\nolimits_i {a_i \varphi _i^a } } \right) = \sum\nolimits_i {a_i T_F \left( {\varphi _i^a } \right)}.
\end{equation}

Similarly, suppose $\left\{\varphi_j^b \right\}$ is a basis for the function space of $G_b$;
then $g = \sum\nolimits_j {b_j \varphi _j^b }$.
Since $\varphi _i^a$ is a function on $G_a$, $T_F \left( {\varphi _i^a } \right)$ is a function on $G_b$,
and thus can be represented in the basis $\left\{\varphi_j^b \right\}$, i.e.,
\begin{equation}
\label{eq:basisab}
T_F \left( {\varphi _i^a } \right) = \sum\nolimits_j {c_{ji} \varphi _j^b }.
\end{equation}
Combining Eq.~\ref{eq:basisa} and Eq.~\ref{eq:basisab}, we get
\begin{equation}
\label{eq:basisb}
T_F \left( f \right) = \sum\limits_i {a_i \sum\limits_j {c_{ji} \varphi _j^b } }  = \sum\limits_j {\sum\limits_i {c_{ji} a_i } \varphi _j^b }.
\end{equation}
That is, if $f$ is represented as a vector of coefficients $\mathbf{a} = \left[ {a_0 ,a_1 , \cdots ,a_i , \cdots } \right]$
and $g=T_F(f)$ is represented as $\mathbf{b} = \left[ {b_0 ,b_1 , \cdots ,b_j , \cdots } \right]$,
Eq.~\ref{eq:basisb} tells us that $b_j  = \sum\limits_i {c_{ji} a_i }$, i.e., $\mathbf{b} = C\mathbf{a}$.
The matrix $C$ is independent of the function being mapped; it is determined only by the bases and the map $T$.

The functional mapping $T_F:\mathcal{F}(G_a,\mathbb{R}) \to \mathcal{F}(G_b,\mathbb{R})$ is the linear operator defined by
\begin{equation}
\label{eq:operator}
T_F \left( \sum\nolimits_i {a_i \varphi _i^a } \right) = \sum\limits_j {\sum\limits_i {c_{ji} a_i } \varphi _j^b },
\end{equation}
and given the bases $\left\{ {\varphi _i^a } \right\}$ and $\left\{\varphi_j^b \right\}$ of $\mathcal{F}(G_a,\mathbb{R})$ and $\mathcal{F}(G_b,\mathbb{R})$,
the functional map is represented by the matrix $C = \left( c_{ji}\right)$.

This functional map relates real-valued functions rather than nodes on the graphs.
It can be shown that the original mapping $T$ can be recovered from $T_F$,
which means that knowledge of $T_F$ is equivalent to knowledge of $T$~\cite{Ovsjanikov2012}.
Moreover, $T_F$ is a linear map between function spaces,
unlike the original map $T$, which may be complicated and hard to manipulate.
Furthermore, this linear functional mapping is more general than classical point-to-point mappings:
while each point-to-point mapping can be translated into a functional map, the converse is not always true.
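As a toy numerical illustration of this equivalence (not part of the paper's pipeline; all data and names below are synthetic), one can build $C$ from a known bijection between two small node sets and verify that coefficient vectors transform linearly as $\mathbf{b} = C\mathbf{a}$:

```python
import numpy as np

# Toy sketch (synthetic data): a known bijection T between two n-node
# graphs induces a functional map matrix C, and coefficient vectors
# transform linearly as b = C a.
rng = np.random.default_rng(0)
n = 6
perm = rng.permutation(n)          # T: node i of G_a -> node perm[i] of G_b

# Any orthonormal bases suffice for the illustration; in the paper these
# would be Laplacian eigenvectors. Columns are basis functions.
phi_a = np.linalg.qr(rng.standard_normal((n, n)))[0]
phi_b = np.linalg.qr(rng.standard_normal((n, n)))[0]

# Pullback matrix P realizing g = f o T^{-1}:  g[perm[i]] = f[i].
P = np.zeros((n, n))
P[perm, np.arange(n)] = 1.0

C = phi_b.T @ P @ phi_a            # functional map in the two bases

f = rng.standard_normal(n)         # arbitrary function on G_a
a = phi_a.T @ f                    # its coefficients
b = phi_b.T @ (P @ f)              # coefficients of the mapped function
assert np.allclose(b, C @ a)
```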

Related to this representation, spectral embeddings~\cite{Rustamov2007}
and their applications in shape matching~\cite{Jain2007}\cite{Mateus2008} also use the graph spectrum.
However, the functional representation does not assume one-to-one correspondences between the basis functions,
i.e., the eigenfunctions of the Laplace operator.
This difference is crucial, because matching basis functions introduces combinatorial complexity.

\section{Building Image Correspondences by Functional Map}
\label{sec:imgcorrespondence}

To build image correspondences and find the functional maps between images, a graph $G(V,E)$ needs to be defined on each image.

There are multiple choices for the construction of the graph.
Our suggestion is that the definition of a graph node should be stable with respect to the application scenario.
If the images are well structured and admit an explicit point-to-point map,
using pixels as graph nodes can reflect distortion more precisely;
if the images differ substantially in object appearance, viewpoint, or spatial layout,
pixel-level correspondences are likely to be noisy.
In that case it is reasonable to generate small regions in the images, i.e., superpixels,
so that the pixels in the same region share certain visual characteristics.
These superpixels can be obtained by over-segmenting the image,
and the neighborhood relationships between them reflect the composition of objects and the spatial interactions between objects.

The edge weights should reflect the image content,
so they can be determined by the spatial distances and visual similarities between nodes;
nodes that are spatially far apart should not be connected, keeping the graph sparse.

\subsection{Constraints of Functional Representation}

As discussed in Sec.\ref{sec:fmap},
if we have a function $f$ on graph $G_a$ and the corresponding function $g$ on graph $G_b$, both represented in the functional space
by coefficients $\mathbf{a}$ and $\mathbf{b}$, the functional map satisfies $\mathbf{b} = C\mathbf{a}$.
Note that $\mathbf{a}$ and $\mathbf{b}$ can be obtained from the bases alone,
without any knowledge of the underlying correspondence $T$.
This means that the functional map $C$ can be recovered given enough constraints of the form $C\mathbf{a}_i = \mathbf{b}_i$.

Given a group of probe functions $\{f_1, f_2, \cdots\}$ and $\{g_1, g_2, \cdots\}$ on $G_a$ and $G_b$ respectively,
with the two graphs equipped with bases $\left\{\varphi_i^a \right\}$ and $\left\{\varphi_j^b \right\}$,
the functions can be represented as coefficient matrices
$A = \left[ {\mathbf{a}_1 ,\mathbf{a}_2 , \cdots } \right]$ and $B = \left[ {\mathbf{b}_1 ,\mathbf{b}_2 , \cdots } \right]$.
Ideally, the functional map $C$ can then be recovered by solving the optimization problem
\begin{equation}
C^*  = \mathop {\arg \min }\limits_C \left\| {CA - B} \right\|_F.
\end{equation}
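When enough noise-free probe functions are available, this is a standard least-squares problem; the following NumPy sketch (synthetic data, illustrative names) recovers a known map exactly:

```python
import numpy as np

# Sketch (synthetic data): recover C from coefficient matrices A, B of
# m probe functions expressed in k-dimensional bases, by solving
# min_C ||CA - B||_F, i.e. the least-squares system A^T C^T = B^T.
rng = np.random.default_rng(1)
k, m = 20, 60                      # basis size, number of probe functions
C_true = rng.standard_normal((k, k))
A = rng.standard_normal((k, m))
B = C_true @ A                     # ideal, noise-free constraints

C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T
assert np.allclose(C, C_true)
```

With fewer probes than basis functions the system is underdetermined, which is one motivation for the regularization discussed next.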

One important advantage of the functional map is that any function on a graph can be easily represented,
so functions on different graphs can be paired by the functional map between the graphs.
Many kinds of natural relationships between images become linear constraints in the functional representation,
which means the probe functions can be obtained in many different ways.
If $f$ and $g$ are functions corresponding to visual descriptors,
these descriptors should be preserved by the mapping, so they can be used as constraints.
Moreover, if a descriptor is multidimensional, i.e., $f \in \mathbb{R}^k$ on each node (superpixel),
it should be regarded as $k$ scalar functions, one per dimension, yielding $k$ constraints.
When pixels are used as nodes, color intensities or point descriptors such as SIFT at each pixel can be used for descriptor-preservation constraints;
if superpixels are used as nodes, the descriptors can be a color histogram, bag-of-visual-words, shape descriptors of each superpixel, etc.
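As a small illustration of how a multidimensional descriptor becomes probe functions (the array below is synthetic; a real pipeline would fill it with actual histograms):

```python
import numpy as np

# Sketch: a k-dimensional descriptor per superpixel yields k scalar
# probe functions, one per descriptor dimension. `desc` is synthetic.
rng = np.random.default_rng(2)
n_superpixels, k_dims = 100, 64    # e.g. a 64-bin color histogram
desc = rng.random((n_superpixels, k_dims))
probes = [desc[:, d] for d in range(k_dims)]   # k scalar functions
assert len(probes) == k_dims and probes[0].shape == (n_superpixels,)
```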

If we are given exact point-to-point landmark correspondences between two images,
each can be regarded as a functional constraint by considering the point as
a delta function supported only on the particular superpixel,
or as a function of the distance from each superpixel to the landmark point.
The landmark correspondences can be obtained manually or by local descriptor matching with geometric verification.

Similarly, if a correspondence between parts of the images is provided as prior knowledge,
for example, a large segment consisting of multiple superpixels in one image is labeled as the same object as a segment in another image,
this large segment can be regarded as an indicator function on the graph supported only on the corresponding superpixels;
the region-to-region correspondence is thus translated into a function correspondence,
which can be used as a linear constraint as well.

\subsection{Regularization}

The functional representation by itself places no restriction on $C$: an arbitrary linear operator
need not arise from any meaningful pointwise correspondence. Functional maps induced by structure-preserving
(near-isometric) pointwise correspondences commute with the Laplace operator~\cite{Ovsjanikov2012},
so penalizing the commutator biases $C$ toward maps that respect the graph structure,
i.e., maps under which delta functions on nearby nodes are sent to similar functions.

For a function $f$ on graph $G_a$, if we first map it to graph $G_b$ as $g = T_F(f)$
and then apply the Laplace operator to $g$, we obtain $\mathcal{L}(T_F(f))$;
if we first apply the Laplace operator to $f$
and then map the result to $G_b$, we obtain $T_F(\mathcal{L}(f))$.
Commutativity of the functional map with the Laplace operator means that these two results should be equivalent.

In terms of the functional map $C$ and the basis matrices $\Phi_a$ and $\Phi_b$, whose columns are $\{\varphi_i^a\}$ and $\{\varphi_j^b\}$
respectively, the commutativity can be written as
\begin{equation}
\label{eq:commutativity}
C \Phi_a^T L_a \Phi_a = \Phi_b^T L_b \Phi_b C,
\end{equation}
in which $L_a$ and $L_b$ are Laplacian matrices on graph $G_a$ and $G_b$.
With this regularization, the optimization of solving functional map becomes:
\begin{align}
\label{eq:fmap-general}
\text{min.} \quad & \left\| CA - B \right\|_F  + \lambda \left\|C\Phi_a^T L_a \Phi_a  - \Phi_b^T L_b \Phi_b C\right\|_F
\end{align}

Additionally, if both graphs have the same Laplacian matrix $L$ and are equipped
with the same basis $\Phi$, the fact that $C$ commutes with $\Phi^T L \Phi$ means that the two matrices share a common set of eigenvectors.


\subsection{Choice of Basis}
\label{sec:basis}

The basis used for the functional space should be both compact and stable.
Local bases are good at representing meaningful parts of a graph and can capture local distortions better,
but they are not compact, meaning that we usually cannot reconstruct an arbitrary signal with a small set of basis functions.
Global bases, such as Laplacian eigenfunctions or principal components obtained by PCA,
are good at reconstructing functions with the least error for a limited number of basis functions;
however, they lack high-frequency components.

This functional map representation works well when combined with the eigenfunctions of the Laplace operator,
by benefiting from their multi-scale, geometry-aware nature.
Additionally, the commutativity regularization becomes simpler in this case,
because $\Phi_a$ consists of eigenvectors of Laplacian matrix $L_a$,
therefore, $\Phi_a^T L_a \Phi_a = S_a$ in which $S_a$ is a diagonal matrix with eigenvalues of $L_a$.
Similarly, $\Phi_b^T L_b \Phi_b = S_b$ and $S_b$ is a diagonal matrix containing eigenvalues of $L_b$.
The optimization in Eq.\ref{eq:fmap-general} becomes a simpler one:
\begin{align}
\label{eq:fmap-simple}
\text{min.} \quad & \left\| CA - B \right\|_F  + \lambda \left\|CS_a - S_bC\right\|_F
\end{align}
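The paper does not specify a solver for this problem; one option, sketched below on synthetic data, exploits the diagonal $S_a$ and $S_b$: the squared version of the objective decouples over the rows of $C$, and row $i$ satisfies the ridge-like normal equations $c_i (A A^T + \lambda D_i) = B_i A^T$ with $D_i = \mathrm{diag}((s_a(j) - s_b(i))^2)$.

```python
import numpy as np

def solve_fmap(A, B, s_a, s_b, lam):
    """Minimize ||CA - B||_F^2 + lam * ||C S_a - S_b C||_F^2 row by row.

    With S_a = diag(s_a) and S_b = diag(s_b), row i of C satisfies
    c_i (A A^T + lam * D_i) = B_i A^T,  D_i = diag((s_a - s_b[i])^2).
    """
    k = A.shape[0]
    AAt = A @ A.T
    BAt = B @ A.T
    C = np.zeros((k, k))
    for i in range(k):
        Di = np.diag((s_a - s_b[i]) ** 2)
        # The system matrix is symmetric, so no transpose is needed.
        C[i] = np.linalg.solve(AAt + lam * Di, BAt[i])
    return C

# Sanity check on synthetic data: with lam = 0 and enough probe
# functions, a known map is recovered exactly.
rng = np.random.default_rng(3)
k, m = 8, 40
A = rng.standard_normal((k, m))
C_true = rng.standard_normal((k, k))
B = C_true @ A
s = np.arange(k, dtype=float)
assert np.allclose(solve_fmap(A, B, s, s, 0.0), C_true)
```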

The commutativity regularization $\left\|CS_a - S_bC\right\|_F$ can be rewritten element-wise as
\[
\sum\limits_i {\sum\limits_j {c_{ij}^2\left( {s_a(j) - s_b(i)} \right)^2} },
\]
which means that $c_{ij}$ is penalized whenever $s_a(j)$ is far from $s_b(i)$.
If two graphs have similar spectrum, i.e. $s_a(j)$ is close to $s_b(i)$ when $i$ and $j$ are close,
this regularization will force $C$ to have larger values on diagonal
and near-diagonal entries and almost zero on off-diagonal entries, making $C$ a sparse matrix.

\section{Segmentation Transfer by Functional Map}
\label{sec:framework}

The task for segmentation transfer is to obtain the pixel-level background/foreground segmentation for a test image,
based on the training images whose pixel-level segmentation results are already given.
The general idea of applying functional map here is to first learn the functional map between training and testing images
based on some probe functions about image visual characteristics,
then treat the ground truth segmentation on the training image as an indicator function and map it to the testing image.

\begin{figure*}[t]
\centering
 \includegraphics[width=0.9\linewidth]{figures/framework-large}
   \caption{(a) Test image. (b) The graph constructed based on test image. (c) Training images. (d) Functional map. (e) The foreground/background indicator function on test image transferred from
   each of the training images. (f) The summation of all transferred indicator functions. (g) Final segmentation result.}\label{fig:framework}
\end{figure*}

\subsection{Graph Construction}
\label{sec:segtran-graph}

As discussed above, in this case it is preferable to over-segment each image into superpixels.
Normalized Cuts is utilized here, and each image is segmented into 100 non-overlapping regions, as shown in Fig.\ref{fig:framework}(b).
Segments that are too small are merged into nearby segments.

Graphs are built for the training and test images, denoted $G_{tr}$ and $G_{tt}$ respectively.
Each superpixel is considered a graph node;
an edge between two nodes exists only when the two corresponding superpixels are adjacent,
and the edge weight is determined by the length of the shared boundary of the two superpixels,
normalized by the average perimeter of the two superpixels. Therefore, the edge weight is no larger than 1.
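A minimal sketch of this graph construction from a superpixel label map (toy dense version; `boundary_graph` is an illustrative name, not the paper's code):

```python
import numpy as np

def boundary_graph(labels):
    """Edge weights from an (H, W) integer superpixel label map.

    weight(u, v) = shared boundary length / mean perimeter of u and v,
    where boundary length is counted in 4-adjacent pixel pairs.
    """
    n = int(labels.max()) + 1
    shared = np.zeros((n, n))
    perim = np.zeros(n)
    # Horizontally and vertically adjacent pixel pairs with different
    # labels each contribute one unit of boundary.
    for a, b in [(labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        for u, v in zip(a[diff], b[diff]):
            shared[u, v] += 1
            shared[v, u] += 1
            perim[u] += 1
            perim[v] += 1
    W = np.zeros((n, n))
    nz = shared > 0
    mean_perim = 0.5 * (perim[:, None] + perim[None, :])
    W[nz] = shared[nz] / mean_perim[nz]   # weights are at most 1
    return W

# Tiny example: two superpixels sharing their entire boundary.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
assert boundary_graph(labels)[0, 1] == 1.0
```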

\subsection{Finding the Functional Map}
\label{sec:segtran-fmap}

Color features and bag-of-visual-words features are used as probe functions for preservation constraints.
The color features include average RGB values of the segment, which is 3-dimensional, and a 64-dimensional
color histogram of each superpixel. SIFT descriptors are extracted at each pixel
and the bag-of-visual-words histograms are obtained for each superpixel based on a dictionary of 300 words.
In total there are 367 probe functions, which serve as function-preservation constraints.

The Laplacian eigenvectors of each graph are used as the basis of its function space,
because of their compact nature.
Using Laplacian eigenfunctions also makes the commutativity regularization easy to handle,
since the constraint becomes element-wise, as discussed in Sec.\ref{sec:basis}.
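Computing this basis is standard; below is a dense NumPy sketch (a real implementation on large graphs would use a sparse eigensolver such as `scipy.sparse.linalg.eigsh`):

```python
import numpy as np

def laplacian_basis(W, k):
    """Return the k smallest eigenvalues/eigenvectors of L = D - W.

    W is a symmetric edge-weight matrix; the eigenvectors serve as the
    basis Phi of the graph's function space. Dense toy version.
    """
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    return vals[:k], vecs[:, :k]

# Example: a 3-node path graph has Laplacian eigenvalues 0, 1, 3.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
vals, vecs = laplacian_basis(W, 2)
assert np.isclose(vals[0], 0.0) and np.isclose(vals[1], 1.0)
```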

The Laplacian eigenfunctions on $G_{tr}$ are denoted $\Phi _{tr}$, and the basis on $G_{tt}$ is denoted $\Phi _{tt}$.
Both $\Phi _{tr}$ and $\Phi _{tt}$ are orthonormal, so the probe functions $f_{tr}$ and $f_{tt}$ are represented
in the function space as $A = \Phi_{tr}^T f_{tr}$ and $B = \Phi_{tt}^T f_{tt}$.
The optimization problem in Eq.\ref{eq:fmap-simple} is then solved to obtain the functional map $C$.

Intuitively, the more basis functions we keep, the more accurate the representation in the function space,
as will be shown in Fig.\ref{fig:num-eigen}. However, the size of the functional map $C$ grows quadratically with
the number of basis functions, so there is a tradeoff between computational cost and performance.

\subsection{Transferring the Segmentation Results}
\label{sec:segtran-transfer}

For each training image, the ground-truth segmentation is represented as an indicator
function $f_{gt}$ for foreground/background, i.e., the function takes the value $+1$ if the corresponding superpixel
contains only foreground pixels, and $-1$ if the superpixel belongs to the background.
If a superpixel contains both foreground and background pixels, the node value is defined as
the percentage of foreground pixels minus that of background pixels, which keeps the value between $-1$ and $+1$.
To compensate for unbalanced foreground and background proportions in the whole image,
this indicator function is normalized such that $\left\| {f_{gt}^ +  } \right\|_1  = \left\| {f_{gt}^ -  } \right\|_1 = 1$,
in which $f_{gt}^ +$ and $f_{gt}^ -$ are the positive and negative parts of $f_{gt}$ respectively:
\begin{equation}
f_{gt}^ +  \left( i \right) = \begin{cases}
 f_{gt} \left( i \right), & f_{gt} \left( i \right) \ge 0 \\
 0, & f_{gt} \left( i \right) < 0
\end{cases}
\nonumber
\end{equation}
\begin{equation}
f_{gt}^ -  \left( i \right) = \begin{cases}
 0, & f_{gt} \left( i \right) \ge 0 \\
 f_{gt} \left( i \right), & f_{gt} \left( i \right) < 0
\end{cases}
\end{equation}
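This normalization can be sketched as follows (synthetic input; `normalize_indicator` is an illustrative name):

```python
import numpy as np

def normalize_indicator(f):
    """Scale the positive and negative parts of f to unit L1 norm each."""
    f = f.astype(float).copy()
    pos, neg = f > 0, f < 0
    if pos.any():
        f[pos] /= np.abs(f[pos]).sum()
    if neg.any():
        f[neg] /= np.abs(f[neg]).sum()
    return f

f = normalize_indicator(np.array([1.0, 0.5, -1.0, -0.5, -0.5]))
assert np.isclose(f[f > 0].sum(), 1.0)
assert np.isclose(f[f < 0].sum(), -1.0)
```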

Using the functional map $C$ solved for in Sec.~\ref{sec:segtran-fmap}, we can map the
function $f_{gt}$ to the test image graph $G_{tt}$ by $g = \Phi _{tt} C \Phi _{tr}^T f_{gt}$.
$g$ can be regarded as the transferred indicator function on graph $G_{tt}$, and its value
at each node can be viewed as a score of how likely the corresponding superpixel is to belong to the foreground.
By thresholding this function, a foreground/background segmentation is obtained.
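In code, the transfer-and-threshold step looks as follows (all shapes and data are synthetic; for illustration the threshold is applied here to a single transferred function, whereas the paper thresholds the merged result):

```python
import numpy as np

# Sketch (synthetic data): transfer an indicator function through the
# functional map and threshold it. phi_tr / phi_tt are truncated
# orthonormal bases; C is a stand-in functional map.
rng = np.random.default_rng(4)
n_tr, n_tt, k = 80, 90, 30
phi_tr = np.linalg.qr(rng.standard_normal((n_tr, k)))[0]
phi_tt = np.linalg.qr(rng.standard_normal((n_tt, k)))[0]
C = rng.standard_normal((k, k))
f_gt = rng.standard_normal(n_tr)       # indicator on the training graph

g = phi_tt @ (C @ (phi_tr.T @ f_gt))   # transferred indicator on G_tt
t = g.min() + 0.55 * (g.max() - g.min())
foreground = g > t                     # superpixel-level foreground mask
```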

\subsection{Result Merging}
\label{sec:segtran-merging}

Intuitively, if the training and test images look visually similar,
the map should be closer to a point-to-point map and
thus more accurate. However, in most cases the images are quite different, so it is unrealistic to expect
the map between one training image and one test image to be perfect. For example, in Fig.\ref{fig:example-img}
the left image is a test image in which the cows are the foreground we would like to segment; the 1st and 3rd
rows of Fig.\ref{fig:example-map} show training images whose ground-truth indicator functions are mapped to
the test image, and the mapped results are visualized in the 2nd and 4th rows of Fig.\ref{fig:example-map}.
It is easy to observe that some of the mapped indicator functions peak at the correct foreground position,
while others go completely wrong and look like random functions.
If we directly apply a threshold to each mapped result, some would yield better segmentations than others.
It is hard to predict which training image will provide a more accurate map and thus transfer a better segmentation;
therefore, we propose to sum up all the mapped indicator functions. This summation is weighted
according to the global visual similarity between the test image and each training image. We compute a GIST descriptor
for each image, and the weight of the $i$-th function is defined as $\exp(-d_i^2/\sigma^2)$, where $d_i$ is the Euclidean distance
between the two GIST descriptors. The summation is visualized in the right image of Fig.\ref{fig:example-img}.
This weighted summation greatly improves the result, as shown in Fig.\ref{fig:framework}(f),
and applying a threshold to this function gives a much better foreground segmentation, as in Fig.\ref{fig:framework}(g).
The threshold is set to $\min \left( f \right) + 0.55\left( {\max \left( f \right) - \min \left( f \right)} \right)$, with $f$
denoting the function after summation; the constant 0.55 is a value learned from experiments.
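The weighted merging can be sketched as follows (synthetic stand-ins for the transferred functions and GIST distances; `merge_maps` is an illustrative name):

```python
import numpy as np

def merge_maps(maps, dists, sigma=1.0):
    """Weighted sum of transferred indicator functions.

    maps:  (m, n) array, one transferred indicator function per row.
    dists: (m,) GIST distances between the test and training images.
    """
    w = np.exp(-(dists ** 2) / sigma ** 2)
    return (w[:, None] * maps).sum(axis=0)

# A very dissimilar training image (huge distance) contributes nothing.
maps = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
dists = np.array([0.0, 1e3])
assert np.allclose(merge_maps(maps, dists), [1.0, 0.0])
```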

The intuition behind this summation process is that there is no perfect map from a single training image
to the test image, and the map from each training image can only produce partially correct results due to
the large variability of foregrounds and backgrounds. The summation diminishes the effect of object
variability: the mappings from similar objects add up and are emphasized, while
the effects of the various different foregrounds and backgrounds cancel out.

\begin{figure}
\centering
\subfloat[]
{\label{fig:example-img}\includegraphics[width=.4\textwidth,trim=1cm 2cm 1cm 0, clip = true]{figures/example-img}} \hspace{4mm}
\subfloat[]
{\label{fig:example-map}\includegraphics[width=.45\textwidth]{figures/example-map}}
   \caption{\textbf{(a)}: The test image and the summation of all its indicator functions mapped from all training images.
   \textbf{(b)}: The 1st and 3rd rows are training images, and 2nd and 4th rows are the mapped indicator function from the corresponding
   training image to test image in (a). Red color means higher value. This figure is best viewed in color.}
\end{figure}



\section{Experiments on Segmentation Transfer}
\label{sec:experiments}

In this section, we evaluate the whole proposed segmentation transfer framework on the
PASCAL VOC 2011 challenge data set~\cite{PASCAL2010}.
It is one of the most challenging datasets for segmentation and it contains real-world consumer images from Flickr.
The dataset is annotated with pixelwise segmentations of 20 different object classes. The segmentation accuracy is
evaluated by a metric of intersection/union, i.e.,
the number of correctly labelled pixels of that class, divided by the number of pixels labelled with that
class in either the ground truth labelling or the inferred labelling.

\subsection{Parameter Selection}

There are two important parameters to choose when solving for the functional map: the number of basis functions $k$ on each graph, and
the weight of the regularization term $\lambda$. In this section, to investigate the proper selection of these parameters,
we map from the training set to the validation set of the PASCAL VOC 2011 segmentation challenge and examine the performance only
on the class ``aeroplane''.

Intuitively, the more basis functions we choose, the better the representation ability in the function space. But the functional map
$C$ has size $k \times k$, which means the computational cost of the optimization problem in Eq.\ref{eq:fmap-general} increases
at least quadratically with $k$. Fig.\ref{fig:num-eigen} shows the accuracy of the ``aeroplane'' class when fixing $\lambda = 10$, which confirms this
intuition.

\begin{figure}
\centering
\subfloat[]
{\label{fig:num-eigen}
\includegraphics[width=0.45\linewidth, trim = 1cm 5mm 1cm 1cm, clip]{figures/num-eigen}} \hspace{4mm}
\subfloat[]
{\label{fig:weight-commute}\includegraphics[width=0.45\linewidth, trim = 1cm 3mm 1cm 1cm, clip]{figures/weight-commute}}
   \caption{\textbf{(a)}: The accuracy of ``aeroplane'' varying with different number of basis.
   \textbf{(b)}: The accuracy of ``aeroplane'' varying with different weights of the commutativity regularization term.}
\end{figure}

\iffalse
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\linewidth, trim = 1cm 5mm 1cm 1cm, clip]{figures/num-eigen}
\end{center}
\vspace{-5mm}
   \caption{The accuracy of ``aeroplane'' varying with different number of basis.}
\label{fig:num-eigen}
\end{figure}

\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\linewidth, trim = 1cm 3mm 1cm 1cm, clip]{figures/weight-commute}
\end{center}
\vspace{-5mm}
   \caption{The accuracy of ``aeroplane'' varying with different weights of commutativity regularization term.}
\label{fig:weight-commute}
\end{figure}
\fi

As discussed in Sec.\ref{sec:basis}, the commutativity regularization term helps us obtain a
sparse, diagonally dominant functional map matrix
$C$, so its weight greatly influences the obtained functional map. Fig.\ref{fig:weight-commute} shows
the accuracy of ``aeroplane'' when fixing the number of basis functions at 30.
Each probe function has been normalized to minimize scaling differences between probe functions and between the two terms
of the optimization problem. Emphasizing the regularization term too much diminishes the effect of the function-preservation constraints,
making the functional map unaware of the actual functions to be mapped, so the results become poor. The range
$10 \le \lambda \le 15$ shows better performance; therefore, we use $\lambda = 12$ in the following experiments.


\subsection{Experiments on PASCAL VOC 2011}

\subsubsection{Implementation Details}
\label{sssec:imp-details}

For each test image, we select the top-100 most similar images from the training images of each object class according to
the global GIST descriptor; object classes with fewer than 100 training images are used in full. The indicator functions
transferred from the training images of the same class are summed, yielding 20 aggregate indicator functions in total. Based on our analysis,
if the test image does not contain an object from a particular class, the summed indicator functions of that class largely
cancel out and exhibit no obvious peak. We normalize each summed function by the number of training images it comes
from; at each superpixel, the class whose indicator function has the largest value among the 20 classes wins and
provides the label for that superpixel. If the largest value does not exceed a predefined threshold, the superpixel is assigned to the
background.
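The normalize-then-winner-take-all rule above can be sketched as follows; the array names, shapes, and threshold value are illustrative placeholders, not the exact implementation:

```python
import numpy as np

def label_superpixels(votes, counts, threshold=0.1):
    """Winner-take-all labeling of superpixels.
    votes: (n_superpixels, n_classes) summed transferred indicator functions,
           one column per object class.
    counts: (n_classes,) number of training images each column was summed over.
    Returns one label per superpixel; -1 marks background."""
    scores = votes / counts             # normalize by number of training images
    winner = scores.argmax(axis=1)      # best class at each superpixel
    best = scores.max(axis=1)
    # Below-threshold superpixels are assigned to the background.
    return np.where(best > threshold, winner, -1)

votes = np.array([[0.5, 3.0],
                  [0.2, 0.1]])
counts = np.array([10.0, 20.0])
labels = label_superpixels(votes, counts, threshold=0.1)  # -> [1, -1]
```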

\subsubsection{Segmentation Refinement}
\label{sssec:refine}

After thresholding the transferred indicator function on the graph, the segmentation result
is obtained only at the superpixel level, meaning that its resolution is restricted
by the over-segmentation. However, the over-segmentation might not align perfectly with the object boundaries,
i.e., foreground and background pixels might be mixed within one superpixel.
We therefore refine the segmentation using GrabCut~\cite{Rother2004}.
The GrabCut algorithm builds two Gaussian mixture models (GMMs), one for the foreground and one
for the background. The foreground and background pixels obtained from Sec.\ref{sec:segtran-merging}
are used to estimate the initial GMMs.
The foreground GMM has 5 components and the background GMM has 10 components.
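A minimal sketch of how the superpixel labels can seed GrabCut is shown below; since superpixels may mix foreground and background pixels, labels are marked as only ``probable'' in the initialization mask. The variable names are illustrative, and note that OpenCV's built-in GrabCut fixes its own GMM component counts, so the 5/10 split described here would require a custom implementation:

```python
import numpy as np

# OpenCV's GrabCut mask values (cv2.GC_BGD, cv2.GC_FGD, cv2.GC_PR_BGD, cv2.GC_PR_FGD)
GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3

def grabcut_init_mask(fg_superpixels, sp_map):
    """Build a pixel-level GrabCut initialization mask from superpixel labels.
    sp_map: (H, W) superpixel id of each pixel.
    fg_superpixels: ids labeled foreground by the transferred indicators."""
    mask = np.full(sp_map.shape, GC_PR_BGD, dtype=np.uint8)
    for sp in fg_superpixels:
        mask[sp_map == sp] = GC_PR_FGD   # probable foreground only
    return mask

sp_map = np.array([[0, 0, 1],
                   [2, 2, 1]])
mask = grabcut_init_mask({1}, sp_map)
```

The resulting mask can then be passed to `cv2.grabCut(img, mask, None, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_MASK)` for iterative refinement at the pixel level.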

\subsubsection{Results}
\label{sssec:results}

Table~\ref{tbl:comparison} shows the segmentation accuracy for all 20 classes. The ``best'' row lists the state-of-the-art result for
each class according to the PASCAL VOC website; the best results for different categories may come from different methods. The table shows that our
method, despite its simple framework, achieves performance comparable to the state-of-the-art methods.

\setlength{\tabcolsep}{5pt}
\begin{table*}\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
  \hline
  % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
  \begin{sideways}method\end{sideways} &\begin{sideways}aeroplane\end{sideways}  & \begin{sideways}bicycle\end{sideways} & \begin{sideways}bird\end{sideways} & \begin{sideways}boat\end{sideways} & \begin{sideways}bottle\end{sideways}
  & \begin{sideways}bus\end{sideways} & \begin{sideways}car\end{sideways} & \begin{sideways}cat\end{sideways}   & \begin{sideways}chair\end{sideways} & \begin{sideways}cow\end{sideways}
  & \begin{sideways}dining table\end{sideways}  &\begin{sideways}dog\end{sideways} &\begin{sideways}horse\end{sideways} &\begin{sideways}motorbike\end{sideways}&\begin{sideways}person\end{sideways}
  & \begin{sideways}potted plant\end{sideways} & \begin{sideways}sheep\end{sideways} & \begin{sideways}sofa\end{sideways}  & \begin{sideways}train\end{sideways} & \begin{sideways}tv/monitor\end{sideways} \\ \hline
  best  & \textbf{54.3}   & 23.9  & 46.0  & 35.3     & 49.4     & 66.2     & 56.2  & 46.1  & 15.0  & 47.4 & 30.1  & 33.9  & 49.1  & 54.4  & 46.4  & 28.8 & 51.3 & 26.4 & 44.9 & 45.8 \\ \hline
  Ours & 48.8 & 16.3 & 42.9 & \textbf{41.7} & 26.5 & 59.9 & 38.5 & \textbf{52.6} & \textbf{21.7} & \textbf{51.7} & 47.0 & \textbf{53.7} & 48.7 & 47.0 & 35.8 & 24.4 & \textbf{51.6} & \textbf{41.5} & \textbf{57.3} & 40.3  \\ \hline
\end{tabular}
\caption{Performance comparison on PASCAL VOC 2011.}
\label{tbl:comparison}
\end{table*}


\section{Face Distortion Analysis by Functional Map}
\label{sec:face}

Correspondences are relatively easy to build between face images, because all faces share the same structure, and an intuitive
ground-truth map always exists between two faces. However, variations of faces in pose, lighting
conditions, and expression make face image matching challenging, and the ground-truth map is usually hard to obtain, even
at landmark points only. In this section, we show how to find the distortion
between two face images using the functional map, without finding landmark correspondences.

Each face image is resized to $64 \times 64$, and each pixel is used as
a graph node. The nodes are connected based on 4-connectivity of pixels with uniform edge weights.
Local features including pixel intensity, SIFT, and LBP (local binary patterns) are extracted at each pixel as probe functions.
Eigenfaces corresponding to the largest 50 eigenvalues are used as the basis, so the Laplacian matrix $L$
and the basis $\Phi$ are the same for all graphs.
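The shared graph structure can be sketched as below: every image uses the same 4-connected pixel grid with uniform edge weights, so the unnormalized Laplacian $L = D - W$ is built once (a small grid is used here for brevity; the basis $\Phi$ would come from PCA of training faces, which is omitted):

```python
import numpy as np

def grid_laplacian(h, w):
    """Unnormalized Laplacian L = D - W of an h x w pixel grid with
    4-connectivity and uniform (unit) edge weights."""
    n = h * w
    L = np.zeros((n, n))
    idx = lambda r, c: r * w + c
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):      # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    i, j = idx(r, c), idx(rr, cc)
                    L[i, j] = L[j, i] = -1.0     # off-diagonal: -w_ij
                    L[i, i] += 1.0               # diagonal: node degree
                    L[j, j] += 1.0
    return L

L = grid_laplacian(8, 8)  # the paper uses 64 x 64; small here for speed
```

Since every face image shares this grid, both $L$ and the eigenface basis $\Phi$ are computed once and reused for all pairs.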

After finding the functional map between two face images by Eq.\ref{eq:fmap-general},
two groups of functions $w^a$ and $w^b$ are obtained as:
\begin{align}
& [U,S,V] = \mathrm{svd}(C) \\
& w^a_h \leftarrow \Phi V_{1\cdots r}, \quad w^a_l \leftarrow \Phi V_{k-r+1\cdots k} \\
& w^b_h \leftarrow \Phi U_{1\cdots r}, \quad w^b_l \leftarrow \Phi U_{k-r+1\cdots k} \\
& s_h \leftarrow \mathrm{diag}(S_{1\cdots r,1\cdots r}), \nonumber \\
& s_l \leftarrow \mathrm{diag}(S_{k-r+1\cdots k,k-r+1\cdots k})
\end{align}

The computed functions $w^a_h$ visualize the parts of $G_a$ that stretch the most when $s_h > 1$, while
$w^a_l$ visualize the parts of $G_a$ that shrink the most when $s_l < 1$. The distortion is larger
when $|s_h - 1|$ or $|s_l - 1|$ is larger. The columns of
$w^a$ and $w^b$ correspond to each other under the map. Fig.\ref{fig:svd-eigenface50-yaleb} visualizes the map between
two face images (shown on the left); the first 10 columns of $w^a_h$ and $w^b_h$ are shown in the 1st and 3rd
rows, with values colored in gray-scale so that brighter pixels indicate higher values. These functions
are also thresholded, and masks marking the positions of the top 5\% of values are shown in the 2nd and 4th rows.
When there are pose changes between the two faces, as in Fig.\ref{fig:svd-eigenface50-yaleb}, the dark space
to the right of the face in the first image is stretched the most, because the face is turning and the second image has
a larger dark space on the right; this distortion is clearly captured by the first function in $w^a_h$.
%The most stretched part in the second face is between the nose and the right eye, which is shown by the first function in $w^b$.
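The SVD-based split above can be sketched as follows; the function name and the random test data are illustrative:

```python
import numpy as np

def split_distortion_functions(C, Phi, r=10):
    """Split the functional map C via SVD into the r most-stretched and
    r most-shrunk corresponding function pairs on the two graphs.
    C: (k, k) functional map; Phi: (n, k) shared basis (eigenfaces as columns)."""
    U, s, Vt = np.linalg.svd(C)        # s is returned in descending order
    V = Vt.T
    wa_h, wa_l = Phi @ V[:, :r], Phi @ V[:, -r:]   # functions on G_a
    wb_h, wb_l = Phi @ U[:, :r], Phi @ U[:, -r:]   # corresponding ones on G_b
    s_h, s_l = s[:r], s[-r:]                       # stretch / shrink factors
    return (wa_h, wb_h, s_h), (wa_l, wb_l, s_l)

k, n, r = 50, 64 * 64, 10
rng = np.random.default_rng(1)
C = rng.standard_normal((k, k))
Phi = rng.standard_normal((n, k))
high, low = split_distortion_functions(C, Phi, r)
```

Column $j$ of `wa_h` and column $j$ of `wb_h` form a corresponding pair under the map, with singular value `s_h[j]` measuring the stretch.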

When there are expression changes between the two faces, as in Fig.\ref{fig:svd-eigenface50-jaffe},
the last 10 columns of $w^a_l$ and $w^b_l$ and their thresholded results are shown in the same way as
 in Fig.\ref{fig:svd-eigenface50-yaleb}. The lips and the area between the eye and the eyebrow shrink the most,
 as shown by the last functions in $w^a_l$.

\begin{figure*}
\centering
 \includegraphics[width=0.95\linewidth, trim = 1cm 3cm 1cm 1cm, clip]{figures/map_svd_eigenface50_new_yaleb}
   \caption{Visualization of the functional map between two face images with different poses. Face images are taken
   from YaleB data set~\cite{Lee2005yaleb}.}\label{fig:svd-eigenface50-yaleb}
\end{figure*}

\begin{figure*}
\centering
 \includegraphics[width=0.95\linewidth, trim = 1cm 3cm 1cm 1cm, clip]{figures/map_svd_diff_eigenface_JAFFE_95_shrink}
   \caption{Visualization of the functional map between two face images with different expressions.
   Face images are taken from JAFFE dataset~\cite{jaffe1998}.}\label{fig:svd-eigenface50-jaffe}
\end{figure*}



\section{Conclusion}
\label{sec:conclusion}

In this paper we introduced the functional map, a new representation of relationships between images, and
discussed the details of applying it to build correspondences between images.
We first proposed to perform segmentation transfer via functional maps; the proposed simple framework
achieves performance comparable to other state-of-the-art methods, indicating that the map finds
a reasonable relationship between images. In a second application, a functional map is built between two face
images, and the distortion between the faces can be easily visualized by analyzing the map.
In summary, the functional map works well for exploring the relationship between two images, whether
the map in image space exists explicitly or implicitly.


{\small
\bibliographystyle{ieee}
\bibliography{bibs/segmentation,bibs/FunctionalMap,bibs/image_descriptors,bibs/imcorrespondence}
}


\end{document}
