% Use first line for draft mode, fast compile
% Use second line for final version
%\documentclass[10pt,twocolumn,letterpaper,draft]{article}
\documentclass[10pt,twocolumn,letterpaper]{article}
\usepackage{amsfonts}

\usepackage{cvpr}
\usepackage{times}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{rotating}
\usepackage[font=small]{caption}
\usepackage{subfig}
\usepackage{float}
\usepackage{color,soul} % For \hl

\hyphenation{formulated iso-morphism relation-ships functional effectively two}
\hyphenpenalty=700
%\tolerance=500

%\addtolength{\topmargin}{-5.5mm}
%\addtolength{\textheight}{11mm}

%\addtolength{\textwidth}{16mm}
%\addtolength{\oddsidemargin}{-8mm}
%\addtolength{\evensidemargin}{-8mm}

%\addtolength{\floatsep}{-3mm}
%\addtolength{\textfloatsep}{-7mm}
%\addtolength{\abovedisplayskip}{-1mm}
%\addtolength{\belowdisplayskip}{-1mm}
%\renewcommand{\baselinestretch}{0.9}

%\renewcommand\textfraction{0.1}
%\renewcommand\floatpagefraction{0.9}

% Include other packages here, before hyperref.

% If you comment hyperref and then uncomment it, you should delete
% egpaper.aux before re-running latex.  (Or just hit 'q' on the first latex
% run, let it finish, and you should be clear).
\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}

\usepackage{enumitem}
% use nolistsep to remove all space, or noitemsep to remove space only between items
\setlist{noitemsep}

%\usepackage[footnotes,ignoremode]{trackchanges}
% use the following line for final version
%\usepackage[finalnew,ignoremode]{trackchanges}
%\tcignore{\cite}{1}{1}
%\tcignore{\ref}{1}{1}
%\addeditor{jc}

\makeatletter
\newcommand{\thinparagraph}{%
  \@startsection{paragraph}{4}%
  {\z@}{1.5ex \@plus 1ex \@minus .2ex}{-1em}%
  {\normalfont\normalsize\bfseries}%
}
\makeatother

% \cvprfinalcopy % *** Uncomment this line for the final submission

\def\cvprPaperID{61} % *** Enter the CVPR Paper ID here
\def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}

\newcommand{\R}{\mathbb{R}}
\newcommand{\eps}{\varepsilon}
\setlength{\textfloatsep}{10pt plus 1.0pt minus 2.0pt}

% Pages are numbered in submission mode, and unnumbered in camera-ready
\ifcvprfinal\pagestyle{empty}\fi
\begin{document}

%%%%%%%%% TITLE
\title{Exploring Image Relationships Using Functional Maps}

\iffalse
\author{Fan Wang\\
Stanford University\\
{\tt\small fanw@stanford.edu}
\and
Raif Rustamov\\
Stanford University\\
{\tt\small raifrustamov@gmail.com}
\and
Adrian Butscher\\
Stanford University\\
{\tt\small adrian.butscher@gmail.com}
\and
Justin Solomon\\
Stanford University\\
{\tt\small justin.solomon@stanford.edu}
% For a paper whose authors are all at the same institution,
% omit the following lines up until the closing ``}''.
% Additional authors and addresses can be added with ``\and'',
% just like the second author.
% To save space, use either the email address or home page, not both
\and
Leonidas Guibas\\
Stanford University\\
{\tt\small guibas@cs.stanford.edu}
%{\small\url{http://www.author.org/~second}}
}
\fi

\maketitle
% \thispagestyle{empty}

%%%%%%%%% ABSTRACT
\begin{abstract}
Establishing correspondences between images
is challenging,
especially when point-to-point matching
is hard to obtain
due to large variability in object appearances.
%, e.g., viewpoint changes and non-rigid deformations.
In this paper, instead of computing point-based correspondences between images,
we find functional maps between them which can act as information transporters
between the images.
Functional maps are based on correspondences between
local properties or attributes defined over images, and
can be found efficiently by solving a linear system.
After introducing this functional framework, we explore its efficacy using two applications.
First, functional maps are used to transfer segmentation
from a set of training images to a test image, with results that match or improve upon other state-of-the-art methods.
Second, functional maps are exploited to analyze and visualize the distortion between faces
and perform expression clustering.
\end{abstract}


%%%%%%%%% BODY TEXT

\section{Introduction}

Finding corresponding content between images is a long-standing
research topic in computer vision and plays an important role in
multiple applications. At the core of the problem is the issue of
translating the semantic notions of similarity or sameness into a quantitative
representation that is flexible, rich in content and structure, and
computationally efficient.

Among most existing representations, establishing explicit point-based
correspondences based on local image features has been a key step in
motion estimation and video tracking~\cite{Horn1981opticalflow},
image stitching~\cite{Szeliski2006Stitching},
structure-from-motion~\cite{Szeliski2011}, stereo
images~\cite{Scharstein2002stereo}, etc. However,
this representation is not flexible enough due to its reliance on the
strong assumption that the two images contain the same
object undergoing approximately rigid transformations. In object
categorization and image parsing, when the images contain different
objects of the same category or have variable appearance, cluttered
background, or different viewpoints, explicit correspondences are
rarely used because the local features are not sufficiently versatile and robust. In
these contexts, a popular approach is to use local feature
appearance models such as Bag of
Features~\cite{Boureau2010,Dalal2005,Grauman2005,Lazebnik2006} or
region-based correspondences~\cite{Kuettel2012Figure,Russell2009,Tighe2010ECCV}. However,
similarity in the image space is still required to
build correspondences. Graph matching techniques have also been
incorporated to exploit information about spatial arrangement of
features. However, graph matching is in general an NP-hard problem,
and only approximate solutions can be
obtained~\cite{Duchenne2011GMKernel}.

In this work we introduce a novel representation of the relationships
between images, which we call \emph{functional maps},
and design a computational
framework for efficiently computing functional maps between images.
Our approach is based on building correspondences between local properties
of images, rather than point-to-point or patch-to-patch
correspondences in the image space. More precisely, each image
is represented as a graph where the nodes can be pixels, superpixels, or small regions (Fig.~\ref{fig:framework}(b)).
The image properties are considered
as real-valued functions over the nodes of this graph. Any classical image descriptor
can be used as a function here: for example, every dimension of an
$N$-dimensional dense SIFT descriptor can be treated as a separate
function, giving rise to a total of $N$ real-valued functions on the
graph. The correspondences between two images are then reflected by
the dual correspondences between these functions over the graphs.
As it turns out, the functional map between two images is a linear transformation between
the spaces of all real-valued functions defined on the graphs of the two images.
We use the terms \emph{correspondence} and \emph{map} interchangeably in the following discussion.

Our representation has the following advantages:

First, the proposed framework is robust
and flexible in establishing correspondences between images in many challenging cases.
Ideal pixel-level correspondences may exist between very similar images, yielding node-to-node
correspondences in the associated graphs. However, when the images are less similar,
image properties can still correspond, and functional maps capture these more abstract notions of ``sameness''.
The functional framework includes node-to-node correspondences as a special case,
but can also represent more abstract similarity relations.

Second, as we show, many natural constraints on image correspondences become linear constraints
in the functional representation.
Then a functional map can be obtained through solving a least-squares problem with linear
constraints -- an approach that can be much more efficient than graph matching approaches.

Third, since our functional framework represents maps as matrices in a linear setting,
many of the tools of
linear algebra can be applied. Maps can be manipulated by addition or composition, and analyzed by SVD.

Based on the proposed new representation ({\S}\ref{sec:fmap}),
we design a framework for establishing functional
maps between images ({\S}\ref{sec:imgcorrespondence}), and explore its capabilities in
two applications.
First, functional maps are applied in segmentation transfer ({\S}\ref{sec:framework}),
with performance that matches or exceeds other state-of-the-art methods on PASCAL VOC 2012 ---
even though very simple image features are used.
Second, we demonstrate the effectiveness of the
functional maps in
face distortion analysis and facial expression clustering ({\S}\ref{sec:face}), exploiting the linear nature
of our map representation.



%The rest of this paper is organized as follows:
%after introducing the related works in {\S}\ref{sec:related},
%the novel representation, the functional map, is introduced in {\S}\ref{sec:fmap}.
%We describe the general framework for adapting the functional map representation to create maps between images in {\S}\ref{sec:imgcorrespondence}.
%The two applications, segmentation transfer and
%facial expression analysis, are described in
%{\S}\ref{sec:framework} and
%{\S}\ref{sec:face}, respectively.

\subsection{Related Work}
\label{sec:related}

The method proposed in this paper is related to graph matching methods for feature correspondences in object categorization
and image matching.
In these methods, an image is usually represented as a graph whose nodes are regions in the image.
The edges of the graph
reflect the underlying spatial structure of the image, such as region proximity,
and are used to guarantee the geometric consistency of nearby
regions during matching.
An objective function describing
appearance similarity and geometric compatibility
is maximized in order to establish
visual correspondences~\cite{Berg2005,Leordeanu2005}.
%Given pairs of graphs and matches between them,
%compatibility functions can also be learned
%to optimize the graph matching results~\cite{Caetano2009LearningGM}.
Since many variants of the graph matching problem are NP-hard,
an important topic in this area is
to design efficient algorithms
for solving the quadratic
assignment problem
approximately~\cite{Duchenne2011GMKernel,Torresani2008,Zhou2011}.
In the proposed method, we also use a graph to model an image.
However, our framework solves
the graph matching problem in a fundamentally different way,
leading to a \emph{linear system} with an easily-obtained \emph{optimal solution}.
Appearance similarity and geometric compatibility are also included in our framework, but in a different form.

The proposed method is related to non-parametric and data-driven approaches to image parsing, i.e., segmenting the objects in an image.
In these approaches, for each new test image, the most similar training images are retrieved from the data set,
and the desired information is transferred from the training images to the test image based on the similarity between them.
For example, labels are transferred between two images at the pixel-level based on SIFT flow~\cite{Liu2009}, or between superpixels
based on local features to perform scene parsing~\cite{Tighe2010ECCV}. Segmentation boundaries are identified by matching each region to semantically similar regions~\cite{Russell2009},
and segmentation masks are transferred from the training windows to the similar windows in test images~\cite{Kuettel2012Figure}.
However, these works all assume the images to be globally or partially similar in the original pixel space.
Our functional representation can obtain correspondences in a more abstract sense even when images are not obviously similar.
Another difference is that, while these methods transfer information between images by matching at various levels, our proposed functional maps effectively invert this process: corresponding properties or attributes of images, treated as functions,
are used to derive a functional map.

The proposed representation is also related to spectral embedding and its applications~\cite{Jain2007,Mateus2008,Rustamov2007}.
However, these methods still assume one-to-one correspondences
between the bases, making the problem combinatorial.
We make no such assumption, and thus obtain an elegant linear formulation with an optimal solution.
This is a crucial distinction between our method and others ---
by removing the reliance on one-to-one correspondence, we avoid the inherently combinatorial nature of the graph matching problem.


\section{Functional Maps}
\label{sec:fmap}

Functional maps~\cite{Ovsjanikov2012} were initially introduced to
describe maps between meshed surfaces in geometry processing. In
this paper, we adapt the functional maps framework for maps between
graphs and also between images.

We motivate functional maps as follows. Let $G_a$ and $G_b$ be
two graphs, and suppose $T \colon G_a\to G_b$ is a bijective map between
their nodes. Then $T$ can be used to transform \emph{functions} on $G_a$
into functions on $G_b$. That is, if $f \colon G_a\to \R$ is a function on
$G_a$ taking a scalar value on each node, we can obtain a
corresponding function $g \colon G_b\to \R$ by composition $g
= f\circ T^{-1}$. We denote this induced transformation as
$T_F \colon \mathcal{F}(G_a,\R) \to \mathcal{F}(G_b,\R)$, where
$\mathcal{F}(\cdot,\R)$ denotes the vector space of real-valued
functions on the nodes of a graph. We call $T_F$ the
\emph{functional representation} of the map $T$, and we observe that
$T_F$ is a linear transformation of vector spaces.

The functional map $T_F$ puts real-valued functions on the two graphs, rather than graph nodes,
into correspondence with each other.
Moreover, $T_F$ is a linear map between vector spaces,
unlike the original map $T$ which may be complicated and hard to
manipulate.
It can be shown that the original map $T$ can be efficiently recovered
from $T_F$, meaning $T_F$ carries the same information as $T$~\cite{Ovsjanikov2012}.

However, the functional maps framework is much more general than
the classical point-to-point mappings.  While each point-to-point
mapping induces a functional map, the converse is not always true ---
there are functional maps that
are \emph{not} induced by any point-to-point map.  Hence more
abstract notions of ``sameness" can be encoded in functional maps.
For example, various kinds of ambiguities in matching can be directly encoded in a functional map.
This feature is crucial to the effectiveness of the functional maps
framework for matching properties of graphs.

A further key advantage of functional maps is that they can be
represented compactly as matrices after choosing the bases. Suppose $\left\{
{\varphi _i^a } \right\}$ and $\left\{ {\varphi _j^b } \right\}$ are
bases for $\mathcal F (G_a, \R)$ and $\mathcal F (G_b, \R)$, and let $T_F \colon \mathcal F (G_a, \R) \rightarrow \mathcal
F (G_b, \R)$ be a functional map. Since $T_F \left( {\varphi _i^a
} \right)$ is a function on $G_b$, it can be expanded on the basis
of $G_b$.  We can thus write
\begin{equation}
    T_F \left( {\varphi _i^a } \right) = \sum\limits_j {c_{ij} \varphi _j^b },
\end{equation}
where the coefficients $c_{ij}$ reflect the relationship between the two sets of
bases and the map $T_F$. Now if
$f \colon G_a\to \R$ is represented as $f =
\sum\nolimits_i {a_i \varphi _i^a }$, then by linearity
we have
\begin{align}
    \label{eq:basisb}
    %
    T_F \left( f \right) &= \sum\limits_i {a_i T_F \left( {\varphi _i^a } \right)}
    = \sum\limits_j {\sum\limits_i {a_i c_{ij} \varphi _j^b } } \, .
\end{align}

The action of $T_F$ on a function $f \in \mathcal F(G_a, \R)$ becomes
matrix multiplication in this representation.
That is, if we represent $f$ and $T_F(f)$ as coefficient vectors
$\mathbf{a} = \left[ {a_0 ,a_1 , \cdots  } \right]^T$ and
$\mathbf{b} = \left[ {b_0
,b_1 , \cdots } \right]^T$, Eq.~\ref{eq:basisb}
says $b_j  = \sum_i {a_i c_{ij} }$.
This is the matrix product $\mathbf{b} = C\mathbf{a}$, where $C$ is the matrix with entries $(C)_{ji} = c_{ij}$.
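As a sanity check of this representation, the following minimal NumPy sketch (hypothetical sizes and random orthonormal bases; not part of our pipeline) builds the matrix of the functional map induced by a node bijection and verifies that transporting a function through this matrix agrees with transporting it through the map directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Hypothetical bijection T between the n nodes of G_a and G_b,
# encoded as a permutation: node i of G_a maps to node perm[i] of G_b.
perm = rng.permutation(n)
P = np.zeros((n, n))
P[perm, np.arange(n)] = 1.0  # g = P f realizes g = f o T^{-1}

# Random orthonormal bases for functions on G_a and G_b (columns are basis functions).
Phi_a, _ = np.linalg.qr(rng.standard_normal((n, n)))
Phi_b, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Matrix of the functional map in these bases.
C = Phi_b.T @ P @ Phi_a

# Transporting a function via C agrees with transporting it via T directly.
f = rng.standard_normal(n)
a = Phi_a.T @ f            # coefficients of f in the basis of G_a
g_via_C = Phi_b @ (C @ a)  # reconstruct T_F(f) from its coefficients
g_direct = P @ f           # g(y) = f(T^{-1}(y))
assert np.allclose(g_via_C, g_direct)
```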

\section{Image Correspondence by Functional Maps}
\label{sec:imgcorrespondence}

Four steps must be performed in order to
use functional maps in image-related applications:
\begin{enumerate}
    \item Construct a graph on each image.
    \item Obtain \emph{probe functions} (see below) as constraints.
    \item Find a regularizer for the functional map equations.
    \item Choose a  basis for each graph.
\end{enumerate}
Step 1 is usually application-specific,
so we defer its discussion until we describe
the detailed application scenarios later
in \S\ref{sec:segtran-graph} and \S\ref{sec:face}.
In this section, we discuss the techniques for
steps 2, 3, 4 in \S\ref{subsec:constraints},
\ref{subsec:regularization}, and \ref{sec:basis}, respectively.


\begin{figure}
\centering
 \includegraphics[width=1\linewidth]{figures/framework_new3}
   \caption{The test and training images (a and c)
are segmented into superpixels (b and d).
Functional maps between (b) and each graph in (d)
are computed and used to transfer the foreground indicator function
from each training image
to the test image, producing (f); these are averaged to obtain (g)
and thresholded to generate the final segmentation (h).}
\label{fig:framework}
\end{figure}


\subsection{Constraints on Functional Maps}
\label{subsec:constraints}

Suppose we wish to \emph{constrain} a functional map $T_F$  between
two graphs in such a way that a certain pair of functions $f \in
\mathcal F(G_a, \R)$ and $g \in \mathcal F(G_b, \R)$
correspond as $T_F(f) = g$. After choosing the basis, this
constraint becomes the \emph{linear} constraint $\mathbf{b} =
C\mathbf{a}$ on the matrix $C$ representing $T_F$ and the
coefficients $\mathbf{a}$ and $\mathbf{b}$ representing the functions $f$ and
$g$, respectively. We refer to the functions $f$ and $g$  participating in
these constraints as \emph{probe functions}.

Let $f_1, f_2, \ldots \in \mathcal F(G_a, \R)$ and $g_1, g_2, \ldots
\in \mathcal F(G_b, \R)$ be probe functions. We can stack the
coefficients of their basis representations as the matrices $A =
\left[ {\mathbf{a_1} ,\mathbf{a_2} , \cdots } \right]$ and $B =
\left[ {\mathbf{b_1} ,\mathbf{b_2} , \cdots } \right]$. The system
of constraints imposed by matching probe functions is the matrix
equation $B = CA$.  Thus we can recover $C$ by solving the optimization problem
\begin{equation}
    C^*  = \mathop {\arg \min }\limits_C \left\| {CA - B} \right\|_F ,
\end{equation}
where $\| \cdot \|_F$ is the Frobenius norm.
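A minimal sketch of this recovery, assuming noiseless probe constraints and hypothetical dimensions: stacking the constraints as $B = CA$ and solving the least-squares problem with \texttt{lstsq} on the transposed system.

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 10, 40   # k basis functions per graph, m probe functions (hypothetical sizes)

# Stacked basis coefficients of the probe functions on G_a and G_b.
A = rng.standard_normal((k, m))
C_true = rng.standard_normal((k, k))
B = C_true @ A   # noiseless constraints B = C A

# argmin_C ||C A - B||_F  <=>  argmin_X ||A^T X - B^T||_F with X = C^T,
# solved column by column by ordinary least squares.
C_est = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T
assert np.allclose(C_est, C_true)
```

With more probe functions than basis functions ($m > k$), the system is overdetermined and (for generic probes) the minimizer is unique.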

The formulation above is very flexible, because any function on a
graph can be represented as a linear combination of the basis functions,
then paired with another function on a different graph using the
functional map, yielding a linear constraint. Many natural
relationships between images can be incorporated in this way
and we have a lot of freedom in choosing the
probe functions. For example, $f$ and $g$ can be visual descriptors of the graph nodes, such as
color, SIFT, shape descriptor, bag-of-visual-words, etc. ---
whenever we believe a property should be preserved by the functional map. Multidimensional
descriptors are regarded as multiple scalar functions, one for each
dimension, yielding multiple constraints.

Furthermore, we may also have some point-to-point
landmark correspondences between the two images (e.g., obtained
via local descriptor matching followed by geometric verification, or
through supervision).
Our framework easily incorporates this information by
regarding each landmark as a delta function supported at that point,
or as a Gaussian concentrated in its local neighborhood.
These functions yield linear constraints as above.

More general functions can also be used as constraints. For example,
if  multiple nodes in two images are labeled as the same object,
this region-to-region correspondence can be regarded as an indicator
function supported on the corresponding nodes and naturally
transferred into the functional representation as another linear constraint.

\subsection{Regularization}
\label{subsec:regularization}

When matching graphs, both the node attributes and the structural
pair-wise relationships between the nodes should be preserved. Using
pairings of functions as constraints, we achieve node attribute preservation.
In this section, we describe another term to be added as a regularizer
to our optimization problem, that will enforce the preservation of
pair-wise relationships of graph nodes.

%By using pairings of functions instead of point-to-point
%correspondences, finding a functional map can be formulated as a
%system of linear equations. However, the solution may be
%over-fitting the data if no proper regularization is used. In this
%section, we describe a regularization technique that can enforce spatial smoothness
%and help avoid over-fitting, especially when the number of probe function
%constraints used is small -- the most desirable setting.

To motivate the form of this term, let us consider two isomorphic
graphs $G_{a}$ and $G_{b}$ with a bijective map $T\colon G_{a}\rightarrow G_{b}$.
Since $T$ is an isomorphism, it preserves the pairwise relationships
captured by the graph adjacency matrices. Using the expression
of the Laplacian in terms of the adjacency matrix, it is easy to see
that $T$ must commute with the Laplacian operators: the equality $\mathcal{L}_{b}(f\circ T^{-1})=(\mathcal{L}_{a}f)\circ T^{-1}$
will hold for all functions $f\in\mathcal{F}(G_{a},\R)$.
%Consider two isomorphic graphs $G_a$ and $G_b$ with isomorphism $T \colon G_a \rightarrow G_b$. To preserve the pairwise relationships in
%the adjacency matrix, $T$ should commute with the Laplacian operators, namely $\mathcal L_b (f
%\circ T) = (\mathcal L_a f) \circ T$ for all function $f \in \mathcal F(G_a,\R)$.
We can express this property in terms of the functional
representation of $T$ as follows. Let $\{\varphi_i^a\}$ and
$\{\varphi_j^b\}$ be orthonormal bases for $\mathcal F(G_a, \R)$ and
$\mathcal F(G_b, \R)$, respectively, stacked into matrices $\Phi_a$
and $\Phi_b$. Then $f = \Phi_a \mathbf{a}$ where $\mathbf{a} =
\Phi_a^T f$. Applying $T_F$ followed by $\mathcal L_b$ yields
\begin{equation*}
    \mathcal{L}_b \left(T_F(f)\right) = \Phi_b ( \Phi_b^T \mathcal L_b \Phi_b ) (\Phi_b^T T_F \Phi_a) ( \Phi_a^T f ) = \Phi_b ( L_b C \mathbf{a} )
\end{equation*}
where $L_b$ is the matrix representation of $\mathcal L_b$ in the
basis $\Phi_b$. The Laplacian $\mathcal L_a$ followed by $T_F$ yields
\begin{equation*}
    T_F\left(\mathcal{L}_a(f)\right) = \Phi_b ( \Phi_b^T T_F \Phi_a ) (\Phi_a^T \mathcal L_a \Phi_a) ( \Phi_a^T f ) = \Phi_b ( C L_a \mathbf{a} )
\end{equation*}
where $L_a$ is the matrix representation of $\mathcal L_a$ in the
basis $\Phi_a$.  Comparing the two results yields $ \Phi_b ( L_b C
\mathbf{a} ) = \Phi_b ( C L_a \mathbf{a} )$.  Since this holds for all vectors $\mathbf{a}$, we
see that the matrix $C$ satisfies the commutation relation $C L_a
= L_b C $.

The commutation relationship derived above holds exactly for any graph
isomorphism. In cases where we expect $G_a$ and $G_b$ to be
approximately isomorphic, we can use this relationship as a
\emph{regularizer} for the optimization problem
\begin{align}
    \label{eq:fmap-general}
    %
    \mbox{min.} \left\| CA - B \right\|_F  + \lambda \left\|C L_a  -  L_b C \right\|_F,
\end{align}
and then construct the functional map as $T_F = \Phi_b C \Phi_a^T$.
The first term in Eq.~\ref{eq:fmap-general} represents appearance similarity
between images, while the second one ensures geometric compatibility of the functional map.
Moreover, the regularization also helps avoid over-fitting
when the number of probe function constraints is small.
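Since both terms of the objective above are linear in $C$, the problem (with squared norms, as is standard for least squares) reduces to an ordinary least-squares system in $\mathrm{vec}(C)$. A minimal sketch with hypothetical sizes and random stand-ins for the probe coefficients and the reduced Laplacians:

```python
import numpy as np

rng = np.random.default_rng(2)
k, m, lam = 8, 30, 0.5   # basis size, number of probes, weight (all hypothetical)

A = rng.standard_normal((k, m))
B = rng.standard_normal((k, m))
# Symmetric stand-ins for the Laplacians expressed in the reduced bases.
La = rng.standard_normal((k, k)); La = La + La.T
Lb = rng.standard_normal((k, k)); Lb = Lb + Lb.T

# With column-major vectorization:
#   vec(C A)         = (A^T kron I) vec(C)
#   vec(C La - Lb C) = (La^T kron I - I kron Lb) vec(C)
I = np.eye(k)
M = np.vstack([
    np.kron(A.T, I),
    np.sqrt(lam) * (np.kron(La.T, I) - np.kron(I, Lb)),
])
rhs = np.concatenate([B.flatten(order="F"), np.zeros(k * k)])
c = np.linalg.lstsq(M, rhs, rcond=None)[0]
C = c.reshape((k, k), order="F")
```

The Kronecker formulation is only practical for small reduced bases; with Laplacian eigenbases the problem simplifies further, as discussed in the next subsection.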

In practice,
near-isomorphism of the graphs holds even for images
with different content,
provided the adjacency of the nodes is determined only by their spatial layout
rather than their appearance.
%In the latter case, we will not be able to solve
%for the full matrix representation of a functional map between the
%image graph on account of its large size ($N \times N$ where $N$ is
%the number of pixels).
When the graph is large,
we can solve for the functional map in well-chosen \emph{subspaces} of the function spaces
to reduce computational complexity.
Here the commutation relationship acts as a regularizer as well, if we expect $G_a$ and $G_b$ to
be approximately isomorphic in a ``low-frequency'' sense, as determined by the chosen reduced bases.


\subsection{Choice of Basis}
\label{sec:basis}

There are many options when it comes to selecting a basis to
represent functions on graphs.
In general, the choice depends on the application.
However, the basis should satisfy two general
criteria: First, the basis
should be \emph{compact}, meaning that functions
should be well reconstructed by only a few basis functions.
Second, if only a few basis functions are kept,
the graphs should still be approximately isomorphic,
or isomorphic in the ``low-frequency"
sense.

Local bases, such as the ones obtained by non-negative matrix
factorization (NMF)~\cite{Lee1999} or independent components
analysis (ICA)~\cite{Draper2003ica}, are good at representing
meaningful parts of a graph and capturing local distortions.
However, they are not compact, meaning that usually we cannot
reconstruct an image with a small set of basis functions.

On the other hand, global bases, such as Laplacian eigenfunctions and principal components from PCA,
are good at approximating functions
%with minimal least-squares error
using a limited number of basis functions.
Moreover, the Laplacian eigenfunctions are multi-scale and
geometry-aware -- desirable properties of a good basis. Also, the
commutativity regularization discussed in
\S\ref{subsec:regularization} becomes simpler, because $\Phi_a$
diagonalizes the Laplacian operator $\mathcal L_a$ through $\Phi_a^T
\mathcal L_a \Phi_a = \Sigma_a$ where $\Sigma_a$ is a diagonal
matrix of the eigenvalues of $\mathcal L_a$. Similarly, $\Phi_b^T
\mathcal L_b \Phi_b = \Sigma_b$. Then Eq.~\ref{eq:fmap-general}
becomes:
\begin{equation}
\label{eq:fmap-simple}
\mbox{min.} \left\| CA - B \right\|_F  + \lambda \left\|C\Sigma_a - \Sigma_bC\right\|_F,
\end{equation}
where the square of the commutativity regularizer can be rewritten
as an element-wise sum:
\begin{equation}
\left\|C\Sigma_a - \Sigma_bC\right\|_F^2 = \sum\limits_{i,j} {c_{ij}^2\left[ {\Sigma_b(i,i) - \Sigma_a(j,j)} \right]^2} ,
\end{equation}
which means that if the two eigenvalues
$\Sigma_a(j,j)$ and $\Sigma_b(i,i)$ are far apart,
the corresponding entry $c_{ij}$ is heavily penalized.
If the two graphs are nearly isomorphic, they have similar spectra,
and this regularization forces $C$ to take significant values on the diagonal
and near-diagonal entries,
and to be almost zero everywhere else, making $C$ a sparse matrix.
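In this diagonal case the regularized problem decouples over the rows of $C$: each row solves an independent ridge regression whose per-entry weights are the squared eigenvalue gaps. A minimal sketch (random stand-ins for the probe coefficients and the spectra):

```python
import numpy as np

rng = np.random.default_rng(3)
k, m, lam = 8, 30, 0.5                 # hypothetical basis size, probes, weight

A = rng.standard_normal((k, m))        # probe coefficients on G_a
B = rng.standard_normal((k, m))        # probe coefficients on G_b
ev_a = np.sort(rng.uniform(0, 2, k))   # Laplacian eigenvalues (stand-ins)
ev_b = np.sort(rng.uniform(0, 2, k))

# Row i of C minimizes ||C[i,:] A - B[i,:]||^2 + lam * sum_j C[i,j]^2 (ev_b[i]-ev_a[j])^2,
# a ridge problem with a diagonal penalty, solved in closed form per row.
C = np.zeros((k, k))
AAT = A @ A.T
for i in range(k):
    D = np.diag((ev_b[i] - ev_a) ** 2)
    C[i, :] = np.linalg.solve(AAT + lam * D, A @ B[i, :])
```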



%By selecting a proper set of basis functions,
%each function on the graph can be well-approximated using
%only a small number $k$ of basis functions,
%and the functional map can be expressed as a (possibly sparse) matrix $C$ of size $k \times k$.


\section{Functional Maps for Segmentation Transfer}
\label{sec:framework}

The goal of segmentation transfer
is to obtain the pixel-level background/foreground segmentation of a test image
given segmentations of the training images.
Functional maps are well-suited here:
we can first learn the functional map between the training and test images
via probe functions derived from the images' visual characteristics,
then treat the given segmentations of the training images as indicator functions
and transfer them to the test image.
New training images can be incorporated into this framework very efficiently
without model re-training.

\subsection{Graph Construction}
\label{sec:segtran-graph}

The images in this application have considerably different object appearance,
viewpoints, and spatial layouts.
However, small image regions, i.e., superpixels,
may share certain visual characteristics.
The neighborhood relationships between superpixels
can reflect the object composite
and the spatial interactions between objects.
In our experiments,
we use Normalized Cuts~\cite{Shi2000NCut}
%\footnote{\url{http://www.cis.upenn.edu/~jshi/software}}
to segment each image into $100$ non-overlapping regions as shown in Fig.~\ref{fig:framework}(b, d).
%Segments that are too small are merged with their neighbors.

For the training and test images,
each superpixel is a graph node,
and an edge between two nodes exists only for adjacent superpixels.
The strength of the edge
is determined by the length of the shared boundary of the two superpixels
normalized by the average perimeter.
This creates sparse planar graphs,
because superpixels that are spatially far away from each other are not connected.
Note that the graph connectivity of the nodes only reflects the spatial
connectivity of the superpixels, which are similar across images as shown in
Fig.~\ref{fig:framework}(b, d). Therefore,
our assumption of near-isomorphism holds.
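The construction above can be sketched on a toy label image (a stand-in for an actual N-Cuts segmentation; the image-border contribution to the perimeter is ignored in this sketch):

```python
import numpy as np

# Toy "superpixel" label image standing in for an N-Cuts segmentation
# (real superpixels are irregular; the construction is identical).
labels = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 3, 3],
    [2, 2, 3, 3],
])
n = labels.max() + 1

# Edge weight = length of the shared boundary between adjacent superpixels,
# accumulated over horizontally and vertically adjacent pixel pairs.
W = np.zeros((n, n))
for left, right in [(labels[:, :-1], labels[:, 1:]),
                    (labels[:-1, :], labels[1:, :])]:
    for u, v in zip(left.ravel(), right.ravel()):
        if u != v:
            W[u, v] += 1
            W[v, u] += 1

# Normalize by the average (interior) perimeter.
W /= W.sum(axis=1).mean()

# Graph Laplacian used by the commutativity regularizer.
L = np.diag(W.sum(axis=1)) - W
```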
\subsection{Finding the Functional Map}
\label{sec:segtran-fmap}

Any classical image descriptor can be regarded as a set of functions over the image.
We take color features and bag-of-visual-words features as probe functions in deriving our
function preservation constraints.
For each superpixel,
the color features include the $3$ average RGB values,
and a $64$-dimensional color histogram.
SIFT descriptors are extracted at each pixel,
and the bag-of-visual-words histograms are obtained for each superpixel
based on a dictionary of $300$ visual words.
In total there are $367$ probe functions used as function preservation constraints.

Laplacian eigenfunctions are used as the basis for functions on the graphs.
Note that the lower-order eigenfunctions are typically quite smooth and effectively
promote continuity in the functional map, which is often desirable.
%Using Laplacian eigenfunctions will also make the regularization of commutability easy to solve
%since the constraint becomes element-wise as discussed in Sec.\ref{sec:basis}.
If we denote the graphs for the training and test images as $G_{i}$ and $G_{0}$, respectively,
and their Laplacian eigenfunctions as $\Phi_{i}$ and $\Phi_{0}$, respectively,
the probe functions $f_{i}$ and $f_{0}$ can be transformed to their basis representation
as $A = \Phi_{i}^T f_{i}$ and $B = \Phi_{0}^T f_{0}$,
because both $\Phi _{i}$ and $\Phi _{0}$ are orthonormal.
The optimization problem Eq.~\ref{eq:fmap-simple} is then solved to obtain the functional map $C$.
%Intuitively, the more basis functions we keep,
%the more accurate a representation we get,
%as shown in Fig.~\ref{fig:num-eigen}.
%Since Laplacian eigenvectors form a compact basis,
%usually only a small subset of them corresponding to the smallest
%eigenvalues suffices to approximate a function.

\subsection{Transferring the Segmentation}
\label{sec:segtran-transfer}

For each training image $i$, the ground truth segmentation is represented as an indicator
function $f_{i,\mathit{gt}}$.
In this indicator function,
the value for each superpixel equals the percentage of foreground pixels minus the percentage
of background pixels.
Thus $f_{i,\mathit{gt}} \in [-1, 1]$ for every superpixel: it equals $1$ for a superpixel
completely in the foreground and $-1$ for one completely in the background.
Furthermore, to compensate for the effect of unbalanced foreground and background size,
we normalize $f_{i,\mathit{gt}}$
so that $\left\| {f_{i,\mathit{gt}}^ +  } \right\|_1  = \left\| {f_{i,\mathit{gt}}^ -  } \right\|_1 = 1$,
where $f_{i,\mathit{gt}}^ \pm$ are the positive and negative parts of the
indicator function, respectively.
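The normalization can be sketched as follows (hypothetical indicator values):

```python
import numpy as np

# Hypothetical signed indicator over five superpixels: the fraction of
# foreground pixels minus the fraction of background pixels in each.
f_gt = np.array([0.9, 0.4, -0.2, -1.0, -0.6])

# Split into positive and negative parts and give each unit L1 mass,
# compensating for unbalanced foreground/background areas.
pos = np.clip(f_gt, 0, None)
neg = np.clip(f_gt, None, 0)
f_norm = pos / pos.sum() + neg / np.abs(neg).sum()
```

After this step the positive and negative parts each carry total mass one, so large backgrounds no longer dominate the transferred function.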

Using the functional map $C$ found in \S\ref{sec:segtran-fmap},
we can map
the function $f_{i,\mathit{gt}}$ to the test image
$G_{0}$ using $g = \Phi _{0}C\Phi _{i}^T f_{i,\mathit{gt}}$. This new function can
be regarded as the transferred indicator function on the test graph. A higher value of $g$
on a graph node means this superpixel is more likely to belong to the foreground.
A straightforward way of obtaining a foreground/background segmentation
is to perform thresholding on this indicator function.

\subsection{Merging the Results}
\label{sec:segtran-merging}

Intuitively, if the training and test images are visually similar,
an ideal correspondence exists between them as a point-to-point map.
However, in most cases, the training and test images are quite different and the image descriptors contain noise,
so it is unrealistic to expect
the map between one training image and one test image to perfectly reflect the true correspondences.
For example, Fig.~\ref{fig:example-combined}(a)
shows a test image where the cows are the foreground that we would like to segment.
Fig.~\ref{fig:example-combined}(b) shows some training images whose ground truth indicator functions in Fig.~\ref{fig:example-combined}(c) are mapped to
the test image as shown in Fig.~\ref{fig:example-combined}(d).
We see that some of the indicator functions have peaks on the correct foreground positions,
while others go totally awry.
If we directly apply a threshold on each mapped indicator function, some would thus get much better results than others.

We therefore employ a voting mechanism to combine the mapped indicator
functions from $N$ training graphs $G_1,\cdots, G_N$.  We define
\begin{equation}
    g = \frac{1}{\sum_{i=1}^{N} w_i}\sum_{i=1}^{N} w_i \Phi _{0}C_{i\rightarrow 0}\Phi_{i}^T f_{i,\mathit{gt}},
    \label{eqn:merge}
\end{equation}
where the weight $w_i$ for each training image $i$ is determined based on
its distance $d_i$ to the test image using the GIST descriptor~\cite{Oliva2001gist},
as
$w_i = e^{-d_i^2/2\sigma^2}$,
where
$\sigma$ is chosen to be the mean value of all these distances.
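A minimal sketch of this weighted voting, assuming the mapped indicator functions $g_i = \Phi_0 C_{i\rightarrow 0}\Phi_i^T f_{i,\mathit{gt}}$ have already been computed (names are ours):

```python
import numpy as np

def merge_indicators(mapped, gist_dists):
    """Weighted voting over mapped indicator functions.

    mapped     : list of (n_0,) arrays, one mapped indicator per
                 training image
    gist_dists : GIST distances d_i from each training image to the
                 test image
    """
    d = np.asarray(gist_dists, dtype=float)
    sigma = d.mean()                      # sigma = mean of all distances
    w = np.exp(-d**2 / (2.0 * sigma**2))  # w_i = exp(-d_i^2 / 2 sigma^2)
    # np.average normalizes by the weight sum, as in the merging equation.
    return np.average(np.stack(mapped), axis=0, weights=w)
```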

%This voting mechanism enhances the meaningful correspondences
%between objects in the test image and
%the similar objects appearing frequently in the training images,
%and also suppresses noisy correspondences.

Fig.~\ref{fig:framework}(f) and Fig.~\ref{fig:example-combined}(e) show two
examples of
the averaged indicator functions.
We find in our experiments that the weighted voting mechanism
greatly improves the result:
the meaningful correspondences between objects in the test image and
similar objects appearing frequently in the training images are
enhanced, while the noisy and incoherent correspondences are suppressed.
We can now apply a threshold to the function $g$ in order to
produce good foreground segmentation results such as the one
shown in Fig.~\ref{fig:framework}(g). The threshold is
set to $\min g + \alpha\left( \max g - \min g \right)$,
where $\alpha$ is learned for each class on the validation set.
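The adaptive threshold can be written compactly as (with $g$ the merged indicator and $\alpha$ the learned per-class parameter):

```python
import numpy as np

def adaptive_threshold(g, alpha):
    """Binarize the merged indicator g with the adaptive threshold
    t = min g + alpha * (max g - min g)."""
    t = g.min() + alpha * (g.max() - g.min())
    return g > t   # boolean foreground mask over superpixels
```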

%Even though there is no perfect map from a single training image
%to the test image due to
%the large variability of foreground and background,
%the voting mechanism diminishes the effect of object
%variability so that the maps from similar objects will be added up
%and emphasized, while
%the incoherent effect from various different foregrounds and background
%will be canceled out.

\begin{figure}
\includegraphics[width=1\linewidth]{figures/example-combined-v2}
\vspace{-4ex}
\caption{
Given a set of training images (b),
their indicator functions (c),
and a test image (a),
we obtain a set of indicator functions (d) mapped from each image of (b),
which are combined to the final indicator function (e).
Red color means higher value. This figure is best viewed in color.
}
\label{fig:example-combined}
\end{figure}

\subsection{Segmentation Transfer Experiments}
\label{sec:experiments}

We evaluate our segmentation transfer framework
on the
PASCAL VOC challenge 2012 data set~\cite{PASCAL2010}.
The data set contains real-world consumer images from Flickr,
and is one of the most challenging datasets for segmentation.
The dataset is annotated with pixel-wise segmentations of $20$
different object classes.
For each class, the segmentation accuracy is calculated as
the number of correctly labeled pixels
divided by the number of pixels labeled with that
class by either the segmentation algorithm or the ground truth.
Please refer to~\cite{PASCAL2010} for details.

\subsubsection{Parameter Selection}

%The functional map method requires choosing two key parameters.
%One is the number $k$ of basis functions chosen for each graph; the other
%is the weight $\lambda$ of the regularization term
%in Eq.~\ref{eq:fmap-general}.
%To investigate the best parameters selection,
%we take the class ``aeroplane'' of the PASCAL VOC challenge 2011 as an example,
%and show how the performance of its validation set
%varies with these parameters based on mapping from its training set.
In this section, we take the ``aeroplane'' class
of the PASCAL VOC 2012
data set as an example to
show how performance on the validation set
varies with two parameters:


\thinparagraph{Number of basis functions $k$: }
Intuitively, the more basis functions we choose,
the better representation capability we have.
However, because the Laplacian eigenvectors are compact,
only a small subset of the basis functions suffices to represent most
functions.
Fig.~\ref{fig:num-eigen} shows the accuracy of the ``aeroplane'' class
with varying numbers of basis functions when $\lambda$ is fixed at $10$.
We see that once the number of eigenfunctions
is large enough (more than $20$),
using additional eigenfunctions yields diminishing returns.
Moreover,
since the functional map has size $k \times k$,
the scale of the optimization problem in
Eq.~\ref{eq:fmap-general} grows
quadratically with $k$.
This motivates us to use a small set of eigenfunctions.
In our experiments, we choose $k=30$.

%Compactness is an important property for a basis,
%especially when the graph has a large number of nodes
%and thus there is a huge set of candidate basis functions.

\begin{figure}
\centering
\subfloat[]
{\label{fig:num-eigen}
\includegraphics[height=0.48\linewidth, trim = 0mm 0mm 5mm 5mm, clip]{figures/num-eigen}}
\subfloat[]
{\label{fig:weight-commute}
\includegraphics[height=0.48\linewidth, trim = 0mm 0mm 5mm 5mm, clip]{figures/weight-commute}}
\vspace{-2ex}
\caption{
The accuracy for the ``aeroplane'' class as a function of
(a) the number of basis functions and
(b) the weight on the commutativity regularizer.}
\end{figure}


%As discussed in Sec.~\ref{sec:basis},
%the commutativity regularizer helps to produce a
%sparse and diagonal-dominant matrix
%for the functional map $C$.

\thinparagraph{Regularizer weight $\lambda$: }
Fig.~\ref{fig:weight-commute} shows
the accuracy of ``aeroplane'' as a function of $\lambda$
when the number of basis functions is fixed at $30$.
We normalize all probe functions to remove scale differences,
and the experiment shows that $\lambda \in [10, 15]$ gives good performance in
general.
Since we have $k^2=900$ unknowns in the functional map but only $367$ linear constraints,
the problem is under-determined. If the regularizer is barely used, i.e., if $\lambda$ is very small, the accuracy is suboptimal,
which indicates that the regularizer is essential for obtaining a meaningful map.
If the regularizer is
over-emphasized with a large $\lambda$, the resulting functional map overlooks the actual
functions being mapped, yielding poor results.
In our experiments, we use $\lambda = 10$.

\subsubsection{Experiments on PASCAL VOC 2012}

\thinparagraph{Implementation Details.} For each test image, we select up
to $100$ most similar training images from each object class using
the GIST descriptor. With these training images, we use
Eq.~\ref{eqn:merge} to obtain an indicator function for each class.

We observe that if the test image does not contain objects from a
particular class, the resulting indicator function for this class
contains no significant peaks.
We use a ``winner-take-all'' strategy:
for each superpixel, the class whose indicator function value
is the largest among all classes wins.
If this maximum does not exceed the threshold,
the superpixel is assigned to the background.
Each superpixel is now labeled as background or one of the 20 classes.
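A sketch of this winner-take-all labeling, assuming one merged indicator function per class (names are ours):

```python
import numpy as np

def label_superpixels(class_indicators, threshold):
    """Winner-take-all multi-class labeling.

    class_indicators : (n_classes, n_superpixels) merged indicator
                       functions, one row per class
    threshold        : scalar; below it a superpixel is background
    Returns labels in {0 = background, 1..n_classes}.
    """
    F = np.asarray(class_indicators)
    winner = F.argmax(axis=0)                # best class per superpixel
    best = F[winner, np.arange(F.shape[1])]  # its indicator value
    labels = winner + 1                      # classes are 1-based
    labels[best <= threshold] = 0            # weak peak -> background
    return labels
```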

\thinparagraph{Segmentation Refinement.}
The computed segmentation is only at the superpixel level so far,
i.e., the resolution of the result is restricted
by the granularity of the over-segmentation.
We thus use GrabCut~\cite{Rother2004} to refine the segmentation.
The GrabCut algorithm iteratively refines Gaussian mixture models (GMMs) in the RGB color
space for the foreground and the background
and runs graph-cut based on the respective probabilities.

This refinement step is performed independently for each significant label that appears
in the test image.
Finally, if a pixel is assigned to multiple foregrounds after GrabCut,
we choose the class that gives the maximum posterior
probability for this pixel according to the estimated GMMs.
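The overlap-resolution step can be sketched as below. To keep the sketch self-contained we substitute a single diagonal Gaussian per class for the GMMs that GrabCut actually estimates, so this illustrates only the maximum-posterior assignment; the names are ours:

```python
import numpy as np

def resolve_overlaps(pixels, class_means, class_vars):
    """Assign each contested pixel to the class whose color model gives
    the highest log-likelihood (single diagonal Gaussian stand-in).

    pixels      : (n, 3) RGB colors of pixels claimed by several classes
    class_means : (c, 3) per-class mean colors
    class_vars  : (c, 3) per-class color variances
    """
    # log N(x; mu, diag(var)) up to a constant shared by all classes
    diff = pixels[:, None, :] - class_means[None, :, :]   # (n, c, 3)
    ll = -0.5 * ((diff**2 / class_vars[None])
                 + np.log(class_vars[None])).sum(-1)      # (n, c)
    return ll.argmax(axis=1)
```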

\thinparagraph{Results.}
Table~\ref{tbl:comparison} shows the segmentation accuracy
for all $20$ classes.
The ``Best'' results are the state-of-the-art for
each class according to the PASCAL VOC website\footnote{\tiny\url{http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2012/results/index.html}}.
These may come from different methods for different categories.
We can see that our method
achieves performance comparable to, and sometimes better than, the current state-of-the-art.

The functional method generally performs well on natural object classes, e.g., ``cat'', ``cow'', ``dog'', ``horse'', and ``sheep''.
Although objects in these classes undergo non-rigid deformations across images, most properties of the test image
can find a match in at least some training images.
On the other hand, for classes such as ``bottle'', ``car'', and ``person'', the properties in some test images
rarely appear in the training images, so the map is not meaningful even in its functional form. This could be addressed
by using more advanced descriptors, or by weighting the constraints from different properties.
The low accuracy on ``bicycle'', ``bottle'', and ``chair'' also arises because the superpixels generated by Normalized Cuts
do not capture thin or small objects well, so these objects are largely mixed with nearby pixels. This could possibly be
addressed by increasing the number of superpixels or by integrating multiple segmentation methods with different resolutions.
Our method can be further improved by using more sophisticated post-processing,
and by assigning multi-class labels jointly using co-occurrence statistics.

The main contribution of our work is to introduce the functional map
representation, to demonstrate its efficacy for creating image correspondences in a more abstract,
flexible, and meaningful way,
and to apply it in situations where
a point-to-point mapping is not directly applicable.

\section{Functional Maps for Distortion Analysis}
\label{sec:face}

Face images share a highly consistent feature structure; however,
variations due to pose, lighting, and expression changes still pose challenges
for face matching. Many face-related applications
require accurate detection of landmark points on each face. Here we will
show how to analyze distortion between faces using
functional maps without landmark correspondences.

We resize the images to $64 \times 64$, and use the
pixels as graph nodes. Each node is connected to its 4 neighbors
with uniform edge weights. The assumption of near-isomorphism
of graphs holds trivially here, since the graphs for different faces are identical.
Local features including pixel intensity,
SIFT, and Local Binary Pattern (LBP) are extracted at each pixel as
probe functions. Eigenfaces~\cite{Turk1991Eigenface} corresponding to the largest 50
eigenvalues are used as the basis. Because this basis is generated
using PCA, it is compact and most faces can be represented using
only a few Eigenfaces.
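Such a PCA basis can be sketched with a thin SVD of the centered data matrix (a stand-in for the original Eigenfaces procedure; names are ours):

```python
import numpy as np

def eigenface_basis(faces, k=50):
    """PCA basis ("Eigenfaces") from vectorized face images.

    faces : (n_faces, n_pixels) matrix, one flattened face per row
    Returns an (n_pixels, k) orthonormal basis of the top-k
    principal directions.
    """
    X = faces - faces.mean(axis=0)   # center the data
    # Right singular vectors of X are the principal directions.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T
```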

\thinparagraph{Distortion Analysis Using SVD.}
The functional framework allows the maps between images
to be written in a linear form, so many linear algebra tools
become available, and we can switch to a basis that best captures
the effect of this map.
Specifically, if we take the SVD form $C = U \Sigma V^T$ of the map matrix,
the magnitudes of the singular values reflect the amount of change or distortion
introduced by the map.
When the magnitude of a singular value is greater/smaller than $1$,
a function on the graph is stretched/shrunk in the direction
of the corresponding singular vector.
%If the magnitude of an eigenvalue is smaller than $1$,
%the function is shrunken in the corresponding direction.}
These stretching or shrinking operations
happen in the space spanned by the basis $\Phi$,
so $\Phi V$ and $\Phi U$ give us the transformation directions
as functions on the original graphs.
We can pick out the columns of $\Phi V$ and $\Phi U$
whose corresponding singular value
magnitudes differ the most from $1$ to visualize
the most significant distortions.
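Selecting the most significant distortion directions from the SVD of $C$ can be sketched as (names are ours):

```python
import numpy as np

def distortion_directions(C, phi_i, phi_0, top=1):
    """Most significant distortion directions of a functional map.

    Returns the singular values farthest from 1 together with the
    corresponding columns of Phi_i V (source graph) and Phi_0 U
    (target graph), where C = U S V^T.
    """
    U, s, Vt = np.linalg.svd(C)
    order = np.argsort(-np.abs(s - 1.0))[:top]  # farthest from 1 first
    src_dirs = (phi_i @ Vt.T)[:, order]  # directions on the source graph
    tgt_dirs = (phi_0 @ U)[:, order]     # directions on the target graph
    return s[order], src_dirs, tgt_dirs
```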

We select a random face from the dataset as an anchor face
(Fig.~\ref{fig:expressionclustering} top),
and obtain functional maps from it to all other faces.
Within each group of Fig.~\ref{fig:expressionclustering}, the left image is the face in gray-scale;
the middle one visualizes the eigenvector corresponding to the most significant distortion direction;
the right image shows a mask highlighting the locations
of the largest $5\%$ of the values in the middle image, emphasizing the most distorted regions.
The functional maps here successfully identify
the most distorted facial parts, including the lips, the eyebrows and sometimes the face outline.

%Fig.~\ref{fig:svd-eigenface50-yaleb} visualizes the functional map between
%two face images shown on the left.
%The two directions with the most significant distortions are shown
%in gray scale.
%Below each gray scale image we show the distortion
%direction thresholded by the $5\%$ percentile
%to emphasize the most distorted regions.
%Because the face is turning from the first image to the second image in
%Fig.~\ref{fig:svd-eigenface50-yaleb}, the functional map
%clearly visualizes the changing edge on the right.
%When there are expression changes between the two faces,
%as shown in Fig.~\ref{fig:svd-eigenface50-jaffe},
%the functional map can also successfully identify the regions with the most
%change: the lip, and the area between each eye and the eyebrow.






%\begin{figure}
%\centering
%\subfloat[]
%{\label{fig:svd-eigenface50-yaleb}
%\includegraphics[width=0.4\linewidth, trim = 3cm 3cm 19cm 1cm, clip]{figures/map_svd_eigenface50_new_yaleb}} \hspace{4mm}
%\subfloat[]
%{\label{fig:svd-eigenface50-jaffe} \includegraphics[width=0.4\linewidth, trim = 2cm 3cm 16.5cm 1cm, clip]{figures/map_svd_diff_eigenface_JAFFE_95_shrink}}
%   \caption{\textbf{(a)}: Visualization of the functional map between two face images with different poses. Face images are taken
%   from the YaleB data set~\cite{Lee2005yaleb}.
%   \textbf{(b)}: Visualization of the functional map between two face images with different expressions.
%   Face images are taken from the JAFFE dataset~\cite{jaffe1998}.}
%\end{figure}

\thinparagraph{Expression Clustering Using Functional Maps.}
The functional map contains information regarding face deformations.
As a result, we can use it for expression clustering.
The functional map matrices from the anchor face to every other face are regarded as vectors in $\mathbb{R}^{k^2}$,
and we simply perform $k$-means clustering on them based on Euclidean distance.
Fig.~\ref{fig:expressionclustering} shows the results with 7 clusters;
only one sample, shown in the yellow box, is mis-clustered.
Please see the figure caption for details.
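The clustering step can be sketched with a plain Lloyd $k$-means on the flattened maps (a stand-in for any off-the-shelf $k$-means routine; the deterministic initialization is only to keep the sketch reproducible):

```python
import numpy as np

def cluster_maps(maps, n_clusters, iters=50):
    """Cluster functional maps (each a k x k matrix) by Lloyd k-means
    on their flattened entries in R^{k^2}."""
    X = np.stack([np.asarray(m, float).ravel() for m in maps])
    centers = X[:n_clusters].copy()   # deterministic init for the sketch
    for _ in range(iters):
        # Squared Euclidean distance from every map to every center.
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(n_clusters):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```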

%Intuitively, because each functional map captures
%how two faces are different from each other,
%to which direction and how much the distortion is.
%If two faces are similar, the maps
%between another face and each of them should also be similar.
%This fact means the clustering results don't depend on the
%randomly-selected face.

\begin{figure*}
\centering
 \includegraphics[width=1\linewidth]{figures/expressionclusterv3}
   \caption{\small{The expression clustering and map visualization on face images.
One face is randomly selected as anchor face (top),
and the functional maps from it to all the other faces are obtained.
%Within each blue box, from left to right, we show the face image,
%the eigenvector showing the direction of the most significant distortion,
%and the thresholded eigenvector for better visualization.
There are in total 7 expression clusters: 1. neutral, 2. surprised,
3. sad, 4. happy, 5. angry, 6. disgusted, and 7. afraid.
The clusters are arranged according to their distances to the anchor face,
defined as the minimum value of the largest singular value of the maps in the cluster.
The anchor face itself belongs to the first cluster ``neutral'' in this example.
Face images are taken from the JAFFE dataset~\cite{jaffe1998}.
Red color means higher value in the color images. This figure is best viewed in color.}}\label{fig:expressionclustering}
\end{figure*}



\setlength{\tabcolsep}{5pt}
\begin{table*}\scriptsize
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
  \hline
  % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
  \begin{sideways}\end{sideways} &\begin{sideways}\parbox{0.25in}{back ground}\end{sideways} &\begin{sideways}\parbox{0.25in}{aero plane}\end{sideways}  & \begin{sideways}bicycle\end{sideways} & \begin{sideways}bird\end{sideways} & \begin{sideways}boat\end{sideways} & \begin{sideways}bottle\end{sideways}
  & \begin{sideways}bus\end{sideways} & \begin{sideways}car\end{sideways} & \begin{sideways}cat\end{sideways}   & \begin{sideways}chair\end{sideways} & \begin{sideways}cow\end{sideways}
  & \begin{sideways}\parbox{0.25in}{dining table}\end{sideways}  &\begin{sideways}dog\end{sideways} &\begin{sideways}horse\end{sideways} &\begin{sideways}\parbox{0.25in}{motor bike}\end{sideways}&\begin{sideways}person\end{sideways}
  & \begin{sideways}\parbox{0.25in}{potted plant}\end{sideways} & \begin{sideways}sheep\end{sideways} & \begin{sideways}sofa\end{sideways}  & \begin{sideways}train\end{sideways} & \begin{sideways}monitor\end{sideways} \\ \hline
%  Best  & 54.3   & 23.9  & 46.0  & 35.3     & 49.4     & 66.2     & 56.2  & 46.1  & 15.0  & 47.4 & 30.1  & 33.9  & 49.1  & 54.4  & 46.4  & 28.8 & 51.3 & 26.4 & 44.9 & 45.8 \\ \hline
%  Ours & 48.8 & 16.3 & 42.9 & \textbf{41.7} & 26.5 & 59.9 & 38.5 & \textbf{52.6} & \textbf{21.7} & \textbf{51.7} & \textbf{47.0} & \textbf{53.7} & 48.7 & 47.0 & 35.8 & 24.4 & \textbf{51.6} & \textbf{41.5} & \textbf{57.3} & 40.3  \\ \hline
   Best & 85.1 & 65.4 & 31.0 & 51.3  & 44.5 & 58.9 & 60.8 & 61.5  & 56.4  & 22.6 & 53.6 & 32.6  & 47.4  & 57.6  & 57.9  & 51.9  & 35.7 & 55.3  & 40.8 & 54.2 & 47.8 \\ \hline
  Ours  & 82.8 & 62.8 & 20.6 & 48.4  & 43.2 & 33.3 & \textbf{61.3} & 48.1  & \textbf{60.3}  & 14.6 & \textbf{60.9} & 28.1  & \textbf{55.4}  & \textbf{60.5}  & 51.7  & 35.4  & 29.7 & \textbf{62.7}  & 33.9 & \textbf{63.1} & 43.2 \\ \hline
\end{tabular}
\end{center}
\vspace{-4ex}
\caption{Performance comparison on PASCAL VOC 2012. Boldface shows where ours is better than the state-of-the-art.}
\label{tbl:comparison}
\end{table*}

\section{Conclusion}
\label{sec:conclusion}

In this paper, we have introduced a new representation of relationships
between images called functional maps. It is conceptually simple and
computationally efficient, yet flexible enough to be applied to a variety of
applications. We discuss how to estimate functional maps between images based on
functional or property correspondences, and then demonstrate their capabilities in
two computer vision tasks. First, the functional map is used to
transfer segmentations between images. The method finds reasonable
relationships between images and achieves performance comparable to
or better than other state-of-the-art methods. Second, the functional map
between two faces is used to effectively analyze and visualize the
distortion between them.

{\scriptsize
\bibliographystyle{ieee}
\bibliography{bibs/segmentation,bibs/FunctionalMap,bibs/image_descriptors,bibs/imcorrespondence}
}

\end{document}
