\documentclass[10pt]{article}
\usepackage{acl-hlt2011}
\usepackage{times}
\usepackage{latexsym}
\usepackage{amsmath, amsthm, amssymb}
\usepackage{multirow}
\usepackage{array}
\usepackage{url}
\usepackage{graphicx}
\DeclareMathOperator*{\argmax}{arg\,max}
\setlength\titlebox{6.5cm}

\title{Sparse Description of Deformation Fields \\ as a Means of \\ Handwritten Digits Classification}

\author{Nishith Tirpankar, Wathsala Widanagamaachchi, Anshul Joshi \\
  School of Computing \\
  University of Utah \\
  Salt Lake City, UT 84112 \\  
  {\tt tirpankar@sci.utah.edu, widanaga@cs.utah.edu, anshul@cs.utah.edu}}

\date{December 18, 2011}

\begin{document}
\maketitle

\section{Introduction}
%Describe the problem we are working on & state the challenges involved
Handwritten digits classification is an area that has gained much popularity over the years in the Machine
Learning (ML) and Pattern Recognition community. Various approaches have been taken in an effort to achieve
near-human performance on this problem. Typically this problem involves dealing with high-dimensional data:
for example, if the size of an image is 16x16, the image varies over a space of dimensionality 256.
Although the variability in images is high-dimensional, their inherent variability can be
expressed with a much smaller number of dimensions. Since classification on high-dimensional data is
computationally expensive, we prefer to work in a lower-dimensional space.

For this project, we reformulated the handwritten digits classification problem as an image registration problem
which uses landmarks (control points). By using landmarks we reduce the dimensionality of the problem. Intuitively,
a larger dimensionality will result in a smaller error in the classification stage. Our objective is to keep the number
of landmarks, and hence the dimensionality, reasonably small while still achieving a reasonable classification rate.


\section{Datasets and Prior Work}
%A brief survey of existing work done by *others* on this problem (highlight the pros and cons of these existing works)
For the task at hand, we used the zip digits handwritten database~\cite{lecun:99}, made available by the neural
network group at AT\&T Research Labs. It consists of a training repository and a test repository of normalized
handwritten digits automatically scanned from envelopes by the U.S. Postal Service. The original scanned digits
are binary and of different sizes and orientations; the images have been deslanted and size-normalized, resulting
in 16x16 grayscale images. There are 7291 images in the training dataset and 2007 images in the test dataset.
Their distributions are shown in Table 1.

\begin{table*} [!ht]
\caption{Distributions of images in the zip digits handwritten database}
\begin{center}
\begin{tabular}{l | c | c | c | c | c | c | c | c | c | c | c}
   & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & Total \\ \hline   
  Train & 1194 & 1005 & 731 & 658 & 652 & 556 & 664 & 645 & 542 & 644 & 7291 \\
  Test & 359 & 264 & 198 & 166 & 200 & 160 & 170 & 147 & 166 & 177 & 2007 \\
\end{tabular}
\end{center}
\end{table*}
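For reference, the dataset can be loaded with a few lines of NumPy. This sketch assumes the common plain-text layout in which each row holds the class label followed by the 256 grey values of a 16x16 image; the exact layout should be checked against the downloaded files, and the function name is ours:

```python
import numpy as np

def load_zip_digits(path):
    """Load the zip digits data, assuming each row holds the class
    label followed by the 256 grey values of a 16x16 image."""
    raw = np.loadtxt(path, ndmin=2)
    labels = raw[:, 0].astype(int)
    images = raw[:, 1:].reshape(-1, 16, 16)
    return images, labels
```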

\section{Proposed method}
%Describe our approach to this problem
Our approach to the digit classification task comprises five major modules: pairwise registration, computing the
best set of landmarks, atlas estimation, classification, and error estimation.

In the first step we implemented
landmark matching as described in~\cite{Stanley:11} for pairwise registration of two images. This involves
estimating a continuous deformation field parameterised by momenta vectors at each landmark; the returned
deformation field is thus represented by a set of momenta vectors, one per landmark position. In the following
discussion we refer to the set of momenta vectors as $\alpha$. Next, we computed the variability of the training
dataset to find the best set of landmarks for each class (0--9). This process involves registering the mean image
of a particular class with each image of that class in the training dataset. In the third step we estimate an
atlas for each class using the best set of control points found in the previous step. Here an atlas is a collection
comprising a template image and deformation momenta $\alpha$. The template image is the representative image of a
class, so we have one per class. The momenta vectors define the deformations which register the template with each
image in the training dataset, so we have a set of momenta vectors for each class. Thus, for a particular class the
atlas can be defined as:
\[
  \mathit{atlas}_j = \left\{
  \begin{array}{l}
    \mathit{template}_j \\
    \mbox{\boldmath$\alpha$}_i \mid \mbox{\boldmath$\alpha$}_i \text{ deforms } \mathit{template}_j \text{ to training image } i\\
  \end{array} \right.
\]

where $i = 1,\ldots,N_s$ and $N_s$ is the number of training images in class $j$.
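As a data structure, the atlas for one class is then just a pair of a template image and a list of momenta arrays; a minimal sketch (the class and field names are ours):

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Atlas:
    template: np.ndarray  # 16x16 template image for the class
    momenta: list         # one (N, 2) momenta array per training image of the class
```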

Next is the classification. Here we use the momenta vectors which form the atlas, not the training images. Given a test
image, we first register it with the template image of each class. Next, we compare the obtained deformation vector with
those from the atlas: we compute the Mahalanobis distance of the test deformation from the mean deformation
of each class. Finally, we classify the image into the class with the smallest Mahalanobis distance from the
test image. The error of the classification is also computed in this step.

The following sections give a detailed description of each of the implementation steps:
\subsection{Pairwise Registration}
Let $\phi$ be an intensity preserving deformation field that maps each point in the source image domain to the target image domain,
and let $I_{src}$ and $I_{tar}$ be the continuous functions in the source and target domains. The objective of registering the source image
with the target is then minimizing the $L^2$ norm between the images:
\begin{align}
A(y) &= ||I_{src}\circ\phi^{-1}-I_{tar}||^2\\
&= \sum_{k=1}^M (I_{src}(\phi^{-1}(y_k))-I_{tar}(y_k))^2
\end{align}
Let $\mathbf{c}=\{c_1,...,c_N\}$ be a finite set of control points. The deformation field is parameterised by momenta vectors
$\mbox{\boldmath$\alpha$}=\{\alpha_1,...,\alpha_N\}$ at the control points. The velocity field, being continuous, can be evaluated at any point $x$ in the source image
domain using a Gaussian interpolating kernel:
\begin{align}
&v(x) = \sum_{i=1}^N K(x,c_i)\alpha_i\\
&\text{where }K(x,y) = \exp(-|x-y|^2/\sigma^2)
\end{align}
The transform $\phi$ in this small deformation setting can be seen as $\phi(x)=x+v(x)$. It should be noted that the inverse of the field
is approximated as $\phi^{-1}(y_k)=y_k-v(y_k)$. The regularity term, which makes sure that the field is regularised, can be defined as
the kinetic energy of the deformation field:
\begin{align}
||v||^2 = \sum_{i=1}^N \sum_{j=1}^N \alpha_i^T K(c_i,c_j) \alpha_j
\end{align}
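Equations (3)--(5) translate directly into a few lines of NumPy. The following sketch (function names are ours) evaluates the kernel-interpolated velocity field and the regularity energy:

```python
import numpy as np

def gauss_kernel(x, y, sigma):
    """K(x, y) = exp(-|x - y|^2 / sigma^2), for rows of x against rows of y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def velocity(points, controls, alpha, sigma):
    """v(x) = sum_i K(x, c_i) alpha_i, evaluated at the given points."""
    return gauss_kernel(points, controls, sigma) @ alpha

def regularity(controls, alpha, sigma):
    """||v||^2 = sum_ij alpha_i^T K(c_i, c_j) alpha_j."""
    K = gauss_kernel(controls, controls, sigma)
    return np.einsum("id,ij,jd->", alpha, K, alpha)
```

A single control point at the origin with momentum $(1,0)$ gives $v=(1,0)$ at the origin and a velocity that decays with distance, as the kernel dictates.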
Now we can write the objective function that we minimize in order to match the source image to the target image:
\begin{align}
E(\mathbf{c},\mbox{\boldmath$\alpha$}) &= ||I_{src}\circ\phi^{-1}-I_{tar}||^2 + \gamma||v||^2\\
&= A(y) + \gamma||v||^2
\end{align}
We perform gradient descent on this objective to get the optimal value of momenta vectors as well as the control points. The gradient
with respect to the momenta vectors can be written as:
\begin{align*}
&\frac{1}{2}\nabla_{\alpha_i}E=\\
&-\sum_{k=1}^MK(c_i,y_k)(I_{src}(y_k-v(y_k))-I_{tar}(y_k))\nabla_{y_k-v(y_k)}I_{src}\\
&+\gamma\sum_{j=1}^NK(c_i,c_j)\alpha_j
\end{align*}
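This momenta gradient can be evaluated numerically as sketched below, assuming bilinear interpolation (SciPy's `map_coordinates`) for $I_{src}\circ\phi^{-1}$ and a finite-difference image gradient; the helper name and the interpolation choices are ours:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def momenta_gradient(I_src, I_tar, controls, alpha, sigma, gamma):
    """Half-gradient of E with respect to each momenta vector alpha_i
    in the small-deformation setting, following the expression above."""
    H, W = I_src.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)  # the y_k

    # v(y_k) and the approximate inverse map phi^{-1}(y_k) = y_k - v(y_k)
    K_cy = np.exp(-((controls[:, None, :] - grid[None, :, :]) ** 2).sum(-1) / sigma ** 2)
    warped = grid - K_cy.T @ alpha

    # I_src and its finite-difference gradient, sampled at the warped points
    I_w = map_coordinates(I_src, warped.T, order=1, mode="nearest")
    gy, gx = np.gradient(I_src)
    G_w = np.stack([map_coordinates(gy, warped.T, order=1, mode="nearest"),
                    map_coordinates(gx, warped.T, order=1, mode="nearest")], axis=1)

    residual = I_w - I_tar.ravel()
    K_cc = np.exp(-((controls[:, None, :] - controls[None, :, :]) ** 2).sum(-1) / sigma ** 2)
    # data term + regularity term, one 2-vector per control point
    return -K_cy @ (residual[:, None] * G_w) + gamma * K_cc @ alpha
```

A plain descent step is then `alpha -= step * momenta_gradient(...)`, iterated until the objective stops decreasing.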
Although we will not be using the gradient update for finding the optimal control point positions, since we need a common basis for
comparison as discussed later, we mention the gradient of the objective with respect to the control point positions:
\begin{align*}
&\frac{1}{2}\nabla_{c_i}E=\\
&\sum_{k=1}^M\big(\frac{2}{\sigma^2}K(c_i,y_k)(I_{src}(y_k-v(y_k))-I_{tar}(y_k))\\
&(\nabla_{y_k-v(y_k)}I_{src})^T\alpha_i(c_i-y_k)\big)\\
&-\gamma\sum_{j=1}^N K(c_i,c_j)\alpha_i\alpha_j(c_i-c_j)
\end{align*}
Figure 1 shows the result of deforming the image of a digit 6 to match another image of a handwritten 6.
\begin{figure}[h!]
 \begin{center}
    $
    \begin{array}{cc}
      \includegraphics[width=1.5in]{./source6.png}&      
      \includegraphics[width=1.5in]{./target6.png}
    \end{array}
    $\\$
    \begin{array}{c}
      \includegraphics[width=3in]{./deformed6overlay256Mom.png}
    \end{array}
    $
 \end{center}  
 \caption{\textbf{Top Left}: Source image of digit 6. \textbf{Top Right}: Target image of digit 6.
		  \textbf{Bottom}: Deformed image with 256 momenta vectors overlaid on the control points.}
\end{figure}

\subsection{Computing the optimal set of landmarks}
We will be classifying the images based upon the deformation that is required to match the template image of each class with the test
image. We can compare deformation fields using the momenta vectors $\mbox{\boldmath$\alpha$}$ that parameterize each deformation
field. In order to compare the momenta vectors, they need to be defined at the same set of control points. Thus, we cannot move the
control points in any step of the entire process.

We can always place the control points on a regularly spaced grid. But since we cannot move the control points, this distribution
raises two issues. First, we need a reasonably dense distribution of control points in order to capture the variations in the data;
if we increase the number of control points to increase this density, the dimensionality of the feature vectors goes up. Second, we
need to capture any deformation possible from any template source image to any image in the database. Capturing a deformation implies
that wherever a region in the image domain requires a deformation to match some source image in the dataset or atlas to some other
image in the dataset, we need a control point in that region.

Since we are interested only in deformations within images of a single class and not outside, we need to find out all the possible
deformations that can occur between the template (which we can approximate with the mean image) and all the images of a class in the
dataset. To find this, we place control points at all the grid locations in the image and register the mean image of a class with each
image of the class in the dataset. The variance of the momenta vectors will tell us which control points tend to have the most varying
momenta. Such points are valid candidates for being control points. We find such high variance points for each class in the entire dataset
and take a union of all such sets to get the final set of control points. The process to do the above is as follows.

Let us denote $\Theta_l$ as the set of all the momenta vectors that deform the mean image of the class $l$ to the images of class $l$ in
the dataset assuming that we have a control point at each grid element or pixel in the image. Let $I_{li}$ denote the image $i$ of class
$l$ in the dataset, $\mu_l$ denote the mean image of class $l$, $\phi_{\mbox{\boldmath$\alpha$}}$ be the deformation field parameterised
by the momenta vectors $\mbox{\boldmath$\alpha$}$. Thus, we can define:
\begin{align}
\Theta_l = \{\mbox{\boldmath$\alpha$}_i|  \mu_l(\phi_{\mbox{\boldmath$\alpha$}_i}(x))\approx I_{li}(x) \}
\end{align}
The deformed mean image $\mu_l\circ\phi_{\mbox{\boldmath$\alpha$}_i}$ is not exactly equal to the target image $I_{li}$ since the
registration process does not exactly match the two.

The $L_2$ norm of the variance defined for each grid point (pixel) over the set $\Theta_l$ can be obtained as:
\begin{align}
||\Sigma_l^2||(x) = ||\mathbb{E}[(\mbox{\boldmath$\alpha$}_i - \mathbb{E}[\mbox{\boldmath$\alpha$}_i])^2]||(x)
\end{align}
where $\mathbb{E}$ acts over all the momenta vectors $i$ for class $l$.
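Given the momenta from all registrations of a class stacked into one array, this per-pixel variance norm is a short NumPy computation; the array layout below is our assumption:

```python
import numpy as np

def variance_norm(momenta):
    """momenta: array of shape (N, H, W, 2), one 2-vector per pixel per
    registration of the class mean to each training image of the class.
    Returns the per-pixel L2 norm of the componentwise variance."""
    var = momenta.var(axis=0)            # E[(a - E[a])^2] per pixel, per component
    return np.linalg.norm(var, axis=-1)  # collapse the two vector components
```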
This value is defined for each pixel position $x$ over the image domain for each class $l$. The images in figure 2 show the $L_2$ norm of
variance for different values of the kernel width used for the interpolation kernel $K(x,y)$ defined in equation (4).
\begin{figure}[h!]
 \begin{center}
    $
    \begin{array}{cc}
      \includegraphics[width=1.25in]{./var_sig0pt5.png}&
      \includegraphics[width=1.25in]{./var_sig1.png}
    \end{array}
    $\\
    $
    \begin{array}{cc}
      \includegraphics[width=1.25in]{./var_sig2.png}&
      \includegraphics[width=1.25in]{./var_sig3.png}
    \end{array}
    $\\
    $
    \begin{array}{c}
      \includegraphics[width=1.25in]{./var_sig4.png}
    \end{array}
    $
 \end{center}  
 \caption{\textbf{Top to bottom, Left to right:} $L_2$ norm of variance of $\phi_{\mbox{\boldmath$\alpha$}_i}$ for the interpolating
	  kernel width $\sigma = 0.5, 1, 2, 3$ and $4$. The variance tends to be more distributed over the entire image domain as we
	  increase the value of $\sigma$.}
\end{figure}
As can be seen from figure 2, the smaller kernel tends to give us a better judgement of which pixels have high variance. Intuitively,
the variance should lie on the boundary of the main contour of any handwritten digit, which is what we see for smaller
values of $\sigma$.

Now, to find the optimum positions of the control points, we perform a form of discrete peak detection that tells us which pixels have
the largest variation of deformation vectors for each class. Figure 3 shows the result of the peak detection operation.
\begin{figure}[h!]
 \begin{center}
    $
    \begin{array}{cc}
      \includegraphics[width=1.25in]{./digits_peaks_sig0pt5.png}&
      \includegraphics[width=1.25in]{./digits_peaks_sig1.png}
    \end{array}
    $\\
    $
    \begin{array}{cc}
      \includegraphics[width=1.25in]{./digits_peaks_sig2.png}&
      \includegraphics[width=1.25in]{./digits_peaks_sig3.png}
    \end{array}
    $\\
    $
    \begin{array}{c}
      \includegraphics[width=1.25in]{./digits_peaks_sig4.png}
    \end{array}
    $
 \end{center}  
 \caption{\textbf{Top to bottom, Left to right:} Variance peaks for the interpolating kernel width $\sigma = 0.5, 1, 2, 3$ and $4$. The
	  peaks seem to hug the contours from the outside.}
\end{figure}

As can be seen in figure 3, the number of points that are estimated as potential candidates for being control points decreases as the
kernel width increases; for $\sigma=0.5$ we have the largest number of control points.
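The exact peak detector is not pinned down above; one plausible sketch marks local maxima of the variance image that also exceed a fraction of the global maximum. The neighbourhood size and threshold fraction below are assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def variance_peaks(var_img, size=3, frac=0.1):
    """Return (row, col) candidates: pixels that are local maxima of the
    variance image and exceed frac * max (both knobs are assumptions)."""
    local_max = var_img == maximum_filter(var_img, size=size)
    strong = var_img > frac * var_img.max()
    return np.argwhere(local_max & strong)
```

The union over classes is then just the union of the returned coordinate sets.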

We repeated the peak-finding procedure on the sum of the $L_2$ norm of variance images over all classes to get the final set of
control points, which is in some form a union of the peaks found in the earlier step. The results of performing this step are shown in
figure 4.
\begin{figure}[h!]
 \begin{center}
    $
    \begin{array}{cc}
      \includegraphics[width=1.25in]{./final_peaks_sig0pt5.png}&
      \includegraphics[width=1.25in]{./final_peaks_sig1.png}
    \end{array}
    $\\
    $
    \begin{array}{cc}
      \includegraphics[width=1.25in]{./final_peaks_sig2.png}&
      \includegraphics[width=1.25in]{./final_peaks_sig3.png}
    \end{array}
    $\\
    $
    \begin{array}{c}
      \includegraphics[width=1.25in]{./final_peaks_sig4.png}
    \end{array}
    $
 \end{center}  
 \caption{\textbf{Top to bottom, Left to right:} Peaks found as a union of the peaks from the previous steps.}
\end{figure}

As can be seen, the peaks found with lower values of $\sigma$ tend to be better distributed, though we are not certain that this is the
correct choice.

\subsection{Atlas Estimation}
In order to classify any test image of a handwritten digit, we first need to register a representative image of each class with the test
image. The representative image is the mean on the shape space of the class; this mean on the shape space is called a template. The
atlas is a set comprising the template and a collection of deformation momenta that register the template with each of the images of
the class in the dataset. In this step we jointly optimize the template image and the deformation momenta. The objective function that we
are attempting to minimize in this step is:
\begin{align}
E(I_0, \mathbf{c},\mbox{\boldmath$\alpha$}_1,..., \mbox{\boldmath$\alpha$}_{N_s}) = \sum_{s=1}^{N_s}\{A_s(y)+\gamma||v_s||^2\}
\end{align}
As in section 3.1, the gradient of the objective with respect to the momenta of image $s$ involves only the corresponding term
$E_s = A_s(y)+\gamma||v_s||^2$:
\begin{align}
\nabla_{\mbox{\boldmath$\alpha$}_s}E = \nabla_{\mbox{\boldmath$\alpha$}_s}E_s
\end{align}
where $s = 1,\ldots,N_s$ and $N_s$ is the total number of images of class $l$.
Also, the gradient of the objective with respect to the template image $I_0$, which we use in order to get a better estimate of the
template, turns out to be the splatted version of the sum of the residual images (a result whose derivation we have not fully worked
through):
\begin{align}
\nabla_{I_0}E = \sum_{s=1}^{N_s}\bigg(\text{splat }(I_0\circ\phi_{\mbox{\boldmath$\alpha$}_s}-I_s)\text{ into template domain}\bigg)
\end{align}
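The splat operation is the adjoint of bilinear interpolation: each residual value is distributed to the four template pixels surrounding its warped position. A minimal sketch (the helper name and signature are ours):

```python
import numpy as np

def splat(values, positions, shape):
    """Bilinearly splat one value per point into an image of the given
    shape, at the fractional (row, col) positions."""
    out = np.zeros(shape)
    r, c = positions[:, 0], positions[:, 1]
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    fr, fc = r - r0, c - c0
    # distribute each value over the four surrounding pixels
    for dr, dc, w in [(0, 0, (1 - fr) * (1 - fc)), (0, 1, (1 - fr) * fc),
                      (1, 0, fr * (1 - fc)),       (1, 1, fr * fc)]:
        rr, cc = r0 + dr, c0 + dc
        ok = (rr >= 0) & (rr < shape[0]) & (cc >= 0) & (cc < shape[1])
        np.add.at(out, (rr[ok], cc[ok]), (w * values)[ok])
    return out
```

With an identity deformation (integer positions) the splat simply scatters the residual back onto its own pixels.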
We perform a gradient descent on the momenta vectors and the template image simultaneously in order to get the optimal template image and
optimal deformation momenta vectors.
Another method to update the atlas is iterative averaging; although a detailed explanation is not included here, we have run the atlas
formation procedure using both splatting and averaging. Figure 5 shows the results of running the atlas
formation using both techniques for two values of the kernel width $\sigma$.
\begin{figure}[h!]
 \begin{center}
    $
    \begin{array}{cc}
      \includegraphics[width=1.25in]{./atlas_avg_sig1.png}&
      \includegraphics[width=1.25in]{./atlas_avg_sig3.png}
    \end{array}
    $\\
    $
    \begin{array}{cc}
      \includegraphics[width=1.25in]{./atlas_splat_sig1.png}&
      \includegraphics[width=1.25in]{./atlas_splat_sig3.png}
    \end{array}
    $
 \end{center}  
 \caption{\textbf{Top:} Template images in atlas formed using averaging for $\sigma=1, 3$. \textbf{Bottom:} Template images in atlas
	  formed using splatting for $\sigma=1, 3$.}
\end{figure}
As can be seen from figure 5, the averaging technique tends to shrink the template compared to the actual images in the dataset.
This behaviour is possibly due to the regularization term reducing the magnitude of the momenta vectors, leading to an update in the
template that is smaller than it should be.

\subsection{Classification and Error Estimation}
We will use Mahalanobis distance as a multiclass classification criterion. For evaluating the class of a test image, we register the
template images of all the classes with the test image and measure how far the test deformation is from the mean deformations in the
atlas. Although we could have experimented with better error estimates, we have used the simple 0-1 error function that attributes an 
error of 1 if a mistake is made on the test example and 0 if none is made.

The Mahalanobis distance of the test deformation from the mean deformation of a class in the atlas is computed as:
\begin{align}
M_l(\mbox{\boldmath$\alpha$}_{test}) = \sqrt{(\mbox{\boldmath$\alpha$}_{test}-\mu_l)^TS^{-1}(\mbox{\boldmath$\alpha$}_{test}-\mu_l)}
\end{align}
where $S$ is the covariance matrix of $\Theta_l$ from equation (8).

The Mahalanobis distance tells us, in some sense, how many standard deviations the test deformation is from the mean deformation of the
class. The closer the test deformation is to the mean deformation of a class, the more likely it is to belong to that class. As can be
seen, the Mahalanobis distance is a normalised metric. Thus, the classification criterion we have used is the smallest Mahalanobis
distance, which can be written as follows:
\begin{align}
\hat{y} = \operatorname*{arg\,min}_{l\in\{1,2,...,10\}} M_l(\mbox{\boldmath$\alpha$}_{test})
\end{align}
This can also be written as:
\begin{align}
\hat{y} = \argmax_l\frac{\frac{1}{M_l(\mbox{\boldmath$\alpha$}_{test})}}{\sum_{i=1}^{10}\frac{1}{M_i(\mbox{\boldmath$\alpha$}_{test})}}
\end{align}
This expression converts the inverse Mahalanobis distance to a probability. Thus the quantity we are maximising over $l$ is nothing but
the probability that $\mbox{\boldmath$\alpha$}_{test}$ belongs to class $l$, or analogously the probability that the test image
$I_{test}$ belongs to class $l$.
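The Mahalanobis criterion and the probability-style score combine into one small classifier sketch. The ridge term regularizing the covariance inverse is our addition (with few training samples per class the covariance of the momenta may be singular), and all names are ours:

```python
import numpy as np

def classify(alpha_test, atlas_means, atlas_covs, eps=1e-6):
    """Pick the class whose mean deformation is nearest in Mahalanobis
    distance. eps is an assumed ridge to keep the covariance invertible.
    Returns (predicted class index, inverse-distance 'probabilities')."""
    dists = []
    for mu, S in zip(atlas_means, atlas_covs):
        d = alpha_test - mu
        S_inv = np.linalg.inv(S + eps * np.eye(len(S)))
        dists.append(np.sqrt(d @ S_inv @ d))
    dists = np.array(dists)
    probs = (1 / dists) / (1 / dists).sum()  # normalised inverse distances
    return int(np.argmin(dists)), probs
```

With the 0-1 error function, the per-class error rate is then just the fraction of test images of that class for which the returned index is wrong.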

\section{Experiments} 
Figure 6 shows the results of running the classifier on the test dataset and, for one configuration, the training dataset. The error
rates are plotted for each class for different values of the kernel width $\sigma$, for atlases obtained using both averaging and
splatting.
\begin{figure}[h!]
 \begin{center}
    $
    \begin{array}{cc}
      \includegraphics[width=1.25in]{./testerr_avg_sig1.png}&
      \includegraphics[width=1.25in]{./testerr_avg_sig3.png}
    \end{array}
    $\\
    $
    \begin{array}{cc}
      \includegraphics[width=1.25in]{./testerr_splat_sig1.png}&
      \includegraphics[width=1.25in]{./testerr_splat_sig3.png}
    \end{array}
    $\\
    $
    \begin{array}{c}
      \includegraphics[width=1.25in]{./trainerr_splat_sig1.png}
    \end{array}
    $
 \end{center}  
 \caption{\textbf{Top:} Test error using the averaged atlas for each class for $\sigma=1,3$. \textbf{Middle:} Test error using the
	  splatted atlas for each class for $\sigma=1,3$. \textbf{Bottom:} Training error using the splatted atlas for each class for
	  $\sigma=1$. Note that class 10 is the handwritten digit 0.}
\end{figure}
As can be seen from the above plots, the test error is quite high.

% examples for a several pairwise regstrations, best CP results for a sevaral kernalwidth 
% (peaks for each class and final found peaks), atlas estimation (show template images for 
% each class), some classification results, other numbers, plot, etc.

\section{Conclusions}
We need to experiment further with the value of $\sigma$; a value of $3$ seems to capture variations reasonably well. Also, the training
error for the splatted atlas needs to be examined. The classification metric used here is not a good one; we need to experiment with
multiclass classification methods that are more suited to this task.
%Things we have learned during this project (new learning algorithms, better understanding of the algorithms, new tools/softwares, etc.)


\section{Division of Labour} 
\textbf{Nishith T.}: Implementation of Atlas formation, Pairwise matching and formulating the Report.\\
\textbf{Wathsala W.}: Implementation of methods for computing the optimal landmark positions and formulating the Report.\\
\textbf{Anshul J.}: Implementation of Atlas formation, Classifier and error metrics and designing plots.

\begin{thebibliography}{}

\bibitem[\protect\citename{LeCun et al.}1999]{lecun:99}
LeCun et al.
\newblock 1999.
\newblock Zip digits handwritten database.
\newblock \url{http://www-stat.stanford.edu/~tibs/ElemStatLearn}

\bibitem[\protect\citename{Durrleman et al.}2011]{Stanley:11}
Durrleman, Stanley, Prastawa, Marcel, Gerig, Guido, and Joshi, Sarang.
\newblock 2011.
\newblock {\em Optimal Data-driven Sparse Parametrization of Diffeomorphisms for Population Analysis}.
\newblock In Proceedings of Information Processing in Medical Imaging (IPMI) 2011, Lecture
Notes in Computer Science (LNCS) 6801, pages 123--134.

\end{thebibliography}

\end{document}
