\section{Proof of correctness}

In this section we show that our implementation of the Heat Kernel Signature works well for part of the provided 3D model database.

\subsection{General performance}
\label{sect:generalperformance}
One of our tests examined under which circumstances the implementation works well. We compared each null model to the rest of the database and retrieved the first 30 matches. For each retrieved model, we then checked which transformation it had undergone. Categories 2 to 7 gave promising results: each yielded at least 27 correct matches, and the returned models had undergone roughly the same transformations. Unfortunately, these good results were not obtained for categories 8 to 15, where the maximum number of correct matches is four (see figure \ref{fig:invariance1}).
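The retrieval step described above can be sketched as follows. This is a minimal illustration, not our actual code: the function name \texttt{top\_k\_matches} and the flat-list layout of the distances are assumptions for the example.

```python
def top_k_matches(distances, labels, query_idx, k=30):
    """Return the class labels of the k nearest models to the query.

    distances : distance from the query model to every model in the
                database (hypothetical flat-list layout).
    labels    : class/transformation label of each model.
    """
    # Rank all models except the query itself by ascending distance.
    order = sorted(
        (i for i in range(len(distances)) if i != query_idx),
        key=lambda i: distances[i],
    )
    return [labels[i] for i in order[:k]]

# Toy example: 6 models; the query (index 0) is closest to models 1 and 2.
dist = [0.0, 0.1, 0.2, 0.9, 0.8, 0.7]
labels = ["A", "A", "A", "B", "B", "B"]
top = top_k_matches(dist, labels, query_idx=0, k=3)      # ["A", "A", "B"]
correct = sum(1 for lab in top if lab == "A")            # 2 correct matches
```

Counting how many of the returned labels equal the query's own label gives the per-category match counts reported above.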

\begin{figure}[h!]
  \centering
    \includegraphics[width=0.7\textwidth]{figures/Invariance1.png}
  \caption{Distribution of the first 30 results from the query in which the null model is compared to the rest of the models.} 
  \label{fig:invariance1} 
\end{figure}

We were not convinced that the null models of these categories were correct and therefore decided to use a different model that should be nearly identical to the null model for categories 8 to 15. Based on the results shown in figure \ref{fig:invariance1} and a visual inspection of the models, we chose the models with micro-holes (the models with the 'microholes.1.off' suffix). The results of this query were much more convincing, as they were roughly the same as the results for categories 2 to 7 (see figure \ref{fig:invariance2}). This supports our assumption that the null models for these categories are incorrect.

\begin{figure}[h!]
  \centering
    \includegraphics[width=0.7\textwidth]{figures/Invariance2.png}
  \caption{Distribution of the first 30 results from the query in which the null model and a micro-holes model is compared to the rest of the models for categories 2 to 7 and categories 8 to 15 respectively.} 
  \label{fig:invariance2} 
\end{figure}

From the table in figure \ref{fig:invariance2} we can conclude that the implementation works well under the isometry, topology, noise, shot-noise, micro-holes and partial transformations. For the rasterize and scale transformations the implementation works partly.\\
We also enlarged the scope of the query to the cluster size of 57 models, but this only yielded a few additional correct matches. We therefore conclude that the implementation works poorly, if at all, for models with holes and for the sampling, view, and affine transformations.


\subsection{Two Number Measures}
The next step was to measure the performance for the entire database. We therefore used the distance matrix and computed a precision-recall graph (see figure \ref{fig:prgraph}). This graph clearly shows that the precision in the first part (recall $\leq 0.5$) is significantly better than in the last part. This matches our findings from section \ref{sect:generalperformance}, which state that our implementation only works well for models under certain transformations.

\begin{figure}[h!]
  \centering
    \includegraphics[width=0.5\textwidth]{figures/PrecRecall.png}
  \caption{The precision recall graph for the entire distance matrix.}
  \label{fig:prgraph} 
\end{figure}
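The precision-recall curve can be computed from the ranked retrieval results by recording, after each retrieved object, the fraction of retrieved objects that are correct (precision) and the fraction of the relevant objects found so far (recall). A minimal sketch, with an assumed boolean-list input rather than our actual data structures:

```python
def precision_recall_curve(relevant_ranks, num_relevant):
    """(precision, recall) after each retrieved item.

    relevant_ranks : booleans over the ranked result list, True where
                     the item at that rank is in the query's class.
    num_relevant   : total number of relevant items in the database.
    """
    points = []
    hits = 0
    for rank, is_relevant in enumerate(relevant_ranks, start=1):
        if is_relevant:
            hits += 1
        points.append((hits / rank, hits / num_relevant))
    return points

# Toy ranking: relevant items at ranks 1, 2 and 4 (3 relevant in total).
curve = precision_recall_curve([True, True, False, True], num_relevant=3)
# After rank 4 all relevant items are found: precision 3/4, recall 1.0.
```

Averaging such curves over all queries yields a graph like figure \ref{fig:prgraph}.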

Another graph that we computed is the so-called Receiver Operating Characteristic (ROC) curve, which is based on the sensitivity (recall) and specificity of the performance (see figure \ref{fig:ROCCurve}). The difference between this graph and the precision-recall graph is that the ROC curve also takes the true negatives into account.\\
The orange line corresponds to the ROC curve of a random method. Our ROC curve clearly lies above this line, so our implementation performs better than random.

\begin{figure}[h!]
  \centering
    \includegraphics[width=0.5\textwidth]{figures/ROCCurve.png}
  \caption{The blue line is the ROC curve representing the performance of our implementation on the entire database. The orange diagonal line represents the ROC curve we get when we use a random method.}
  \label{fig:ROCCurve} 
\end{figure}
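The ROC curve differs from the precision-recall curve only in what is accumulated at each rank: the true-positive rate (sensitivity) against the false-positive rate (one minus specificity). A minimal sketch under the same assumed boolean-list input as before:

```python
def roc_points(relevant_ranks, num_relevant, num_irrelevant):
    """(false-positive rate, true-positive rate) after each rank."""
    points = [(0.0, 0.0)]
    tp = fp = 0
    for is_relevant in relevant_ranks:
        if is_relevant:
            tp += 1   # a relevant item retrieved: true positive
        else:
            fp += 1   # an irrelevant item retrieved: false positive
        points.append((fp / num_irrelevant, tp / num_relevant))
    return points

# Toy example: 2 relevant and 2 irrelevant items, ranked [rel, irrel, rel, irrel].
pts = roc_points([True, False, True, False], num_relevant=2, num_irrelevant=2)
# pts climbs from (0, 0) to (1, 1); a random method would follow the diagonal.
```

A curve that stays above the diagonal, as in figure \ref{fig:ROCCurve}, indicates better-than-random retrieval.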

\subsection{Single Number Measures}
We also computed several single number measures. We briefly describe how each is calculated and then give the results. Here $c$ denotes the cluster size (57), the number of objects in the same class; $s$ denotes the scope, the number of objects that we retrieve; and $d$ denotes the size of the database (684 in our case).

\begin{itemize}
\item \textbf{1st Tier} is the percentage of correctly retrieved objects within the first $c-1$ matches.
\item \textbf{2nd Tier} is the percentage of correctly retrieved objects within the first $2(c-1)$ matches, obtained by dividing the number of correct matches by the scope $s = 2(c-1)$.
\item \textbf{Bull's Eye Percentage (BEP)} is similar to the 2nd Tier, except that the number of correct matches is divided by the cluster size instead of the scope.
\item \textbf{Average Precision} is, as the name states, the average precision over the entire scope. Every time a correct match occurs, the precision at that point is calculated; for example, if the 20th correct match occurs after 25 retrieved objects, the precision at that match is $20/25 = 80\%$. The sum of these precisions divided by the cluster size gives the average precision.
\end{itemize}
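The four measures above can be sketched for a single query as follows. This follows the definitions as stated in the list (in particular, dividing the BEP and the average precision by the cluster size); conventions in the literature sometimes differ slightly, so the divisors here are an assumption based on the text.

```python
def single_number_measures(relevant_ranks, c):
    """Compute the four measures for one query.

    relevant_ranks : booleans over the full ranked result list, True
                     where the result is in the query's class.
    c              : cluster size; each query has c-1 other relevant
                     models in the database.
    """
    # 1st Tier: fraction of correct results within the first c-1 matches.
    first_tier = sum(relevant_ranks[: c - 1]) / (c - 1)
    # 2nd Tier: correct results within the first 2(c-1), divided by that scope.
    second_tier = sum(relevant_ranks[: 2 * (c - 1)]) / (2 * (c - 1))
    # BEP: same window, but divided by the cluster size.
    bulls_eye = sum(relevant_ranks[: 2 * (c - 1)]) / c
    # Average precision: precision at every correct match, summed and
    # divided by the cluster size (as described above).
    hits, prec_sum = 0, 0.0
    for rank, is_relevant in enumerate(relevant_ranks, start=1):
        if is_relevant:
            hits += 1
            prec_sum += hits / rank
    avg_precision = prec_sum / c
    return first_tier, second_tier, bulls_eye, avg_precision
```

Averaging these values over every query in the database yields the numbers reported below.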

Below are the results for the single number measures based on the results of our implementation. We also list two random baselines for comparison.

\begin{itemize}
\item \textbf{1st Tier = } 34.22\%
\item \textbf{2nd Tier = } 20.8\%
\item \textbf{Bull's Eye Percentage = } 41.16\%
\item \textbf{Average Precision = } 36.14\%
\item \textbf{Random pick ($s=c$)} = $\frac{57}{684} \cdot 100\% = 8.33\%$
\item \textbf{Random pick ($s=2c$)} = $\frac{114}{684} \cdot 100\% = 16.67\%$
\end{itemize}

From these results we can conclude that our implementation performs considerably better than a random method.

\clearpage 