
\chapter{Evaluation}\label{chap:eval}
        
        This chapter describes the experiments performed to evaluate the search engine. The performance of the system is measured in terms of the quality of the search results,
        in addition to the speed of the system (query time). The search engine is evaluated according to several factors and under different setups. 
       
        The experiments described in this chapter are organized in correspondence with the different stages of the system processing pipeline described in Sections \ref{s:offline} and \ref{s:online}.
        
        \section{Evaluation Protocol}
                This section describes the datasets used in the evaluation of the system. In addition, the performance measure used is illustrated, and
                the setup of the experiments is described.
                \subsection{Datasets}
                        The main dataset used in the experiments is the INRIA Holidays dataset \cite{Jegou2008}. This dataset includes a set of personal holiday
                        photos capturing different scenes like touristic sites, underwater scenes and indoor scenes. The same scene is captured under different 
                        viewpoints, lighting conditions and rotations. Those variations help to test the robustness of the matching process and the retrieval 
                        ability of the system. 
                        \\
                        
                        The dataset contains 1491 images grouped in series. There are 500 series in total, each of which contains images of the same scene 
                        but under different conditions. 
                        \\

                        \begin{figure}[ht]
                        \centering
                                \includegraphics[width=10cm]{pics/Holiday.png}
                                \caption{Some sample images from the Holidays dataset.}
                                Each row shows images belonging to the same series. Each series captures several variations of perspective, rotation or lighting conditions.
                                \label{fig:holiday}
                        \end{figure}
                        
                        Moreover, in some experiments a set of 100000 random images chosen from Flickr is added, creating an extended Holidays+Flickr dataset,
                        in order to evaluate the scalability of the system and the effect of the increased dataset size on the retrieval quality and speed.

                \subsection{Performance Measure}
                        The performance measure used for evaluating the quality of the results is the \ac{MAP}. This metric is particularly suited to the
                        ranked sequence of images returned by the system, since it takes into account the order in which the results are presented. 
                        \ac{MAP} is calculated for a set of queries $Q$, by taking the mean of the \ac{AP} of each query:  
                        
                        \[{MAP} = \frac{\sum_{q=1}^{|Q|} {AP(q)}}{|Q|}\]
                        For each query $q$, the average precision $AP(q)$ is calculated by computing the precision at each rank of the retrieved sequence of images. 
                        
                        Assuming that $T_{i}$ is the set of images retrieved up to rank $i$, and $R_{q}$ is the set of relevant images corresponding
                        to the query $q$, the precision of a query $q$ at a certain rank $i$ is:
                        
                        \[{P(q)_{@i}} = \frac{|T_{i} \cap R_{q}|}{i}\]
                        
                        
                        Assuming that a sequence of $n$ images is retrieved, the \ac{AP} of the query $q$ is hence: 
                        \[{AP(q)} = \frac{\sum_{i=1}^n P(q)_{@i} \times Rel(i)}{|R_{q}|} \]
                        where $Rel(i)$ is a function defined as follows:
                        
                        \[
                        Rel(i) = \left\{ 
                        \begin{array}{l l}
                        1 & \quad \text{if image $I_{i} \in R_{q}$}\\
                        0 & \quad \text{otherwise}\\
                        \end{array} \right.
                        \]
                        Additionally, a cutoff can be set on the returned sequence, in which case the images retrieved after the cutoff rank are ignored and their precision is set to zero. The
                        resulting measure, however, is no longer the \emph{true} average precision.
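
                        The measure above can be sketched in a few lines of Python. This is purely illustrative code, not part of the system; the function and variable names are assumptions:

                        ```python
                        def average_precision(ranked_ids, relevant_ids, cutoff=None):
                            """AP of a single query: precision at each rank where a relevant
                            image appears, averaged over the relevant set R_q."""
                            if cutoff is not None:
                                # images past the cutoff are ignored (their precision counts as zero)
                                ranked_ids = ranked_ids[:cutoff]
                            hits, precision_sum = 0, 0.0
                            for i, image_id in enumerate(ranked_ids, start=1):
                                if image_id in relevant_ids:
                                    hits += 1
                                    precision_sum += hits / i   # P(q)@i = |T_i intersect R_q| / i
                            return precision_sum / len(relevant_ids) if relevant_ids else 0.0

                        def mean_average_precision(results, ground_truth, cutoff=None):
                            """MAP over a set of queries; `results` maps a query id to its ranked list."""
                            aps = [average_precision(ranked, ground_truth[q], cutoff)
                                   for q, ranked in results.items()]
                            return sum(aps) / len(aps)
                        ```

                        Note how the cutoff only truncates the ranked list while the denominator $|R_{q}|$ is unchanged, which is why the cutoff measure underestimates the true \ac{AP}.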
                        
                \subsection{Experiments Setup}
                        The experiments are performed by using each image of the Holidays dataset as a query against the Holidays dataset. 
                        
                        The ideal response of each query is to retrieve the rest of the series to which the query belongs at the top ranks of the retrieved sequence of images. The \ac{MAP} is
                        hence computed over all queries. 
                        \\

                        In some experiments, the additional data from Flickr are included in the database; however, the queries are only the images from the Holidays dataset.
                        \\

                        The \ac{MAP} is calculated at a cutoff $n = 25$, i.e. only the top 25 retrieved images per query are kept for calculating the \ac{AP} of the query;
                        the rest of the retrieved list is ignored. This cutoff is chosen based on the low probability that a user of the system will be interested in examining results 
                        at ranks beyond the cutoff value.

        \section{Experiments}
                This section includes the detailed description of the experiments and evaluation results organized according to the different stages of the system.
                
                \subsection{Lucene Scoring Function}
                \label{ss:optsim}
                        Experiments are done in order to optimize the Lucene scoring function.
                        \\Modifications are made to the way the term frequency and length norm parameters
                        are calculated, as previously described in Section \ref{ss:searching}.
                        \\

                        After applying the modified scoring, the system performance is tested on the Holidays dataset: the \ac{MAP} increased from 15\% to 23.08\%. 
                        This increase indicates that the Lucene scoring function has a strong effect on the quality of the results, and that optimizing this scoring can
                        greatly enhance them.
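             
                        For orientation, the sketch below shows the general shape of the classic Lucene TF-IDF scoring function in Python, with the two components the system modifies (term frequency and length norm) exposed as replaceable parameters. The defaults mimic classic Lucene ($tf = \sqrt{freq}$, $norm = 1/\sqrt{\text{doc length}}$); the system's actual variants are those of Section \ref{ss:searching} and are not reproduced here:

                        ```python
                        import math

                        def lucene_score(query_terms, doc_terms, doc_freqs, n_docs,
                                         tf=lambda f: math.sqrt(f),                # classic Lucene tf
                                         length_norm=lambda n: 1 / math.sqrt(n)):  # classic length norm
                            """Shape of the classic Lucene TF-IDF score for one document.
                            The tf and length_norm components are parameters so that modified
                            variants can be plugged in; defaults follow classic Lucene."""
                            norm = length_norm(len(doc_terms))
                            score = 0.0
                            for term in query_terms:
                                freq = doc_terms.count(term)
                                if freq == 0:
                                    continue
                                idf = 1 + math.log(n_docs / (doc_freqs.get(term, 0) + 1))
                                score += tf(freq) * idf ** 2 * norm
                            return score
                        ```

                        Passing a different \texttt{tf} or \texttt{length\_norm} lambda is enough to experiment with alternative weighting schemes.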
             
                        
                \subsection{Codebook Training}
                        The performance of the system is evaluated when using different codebook sizes. 
                        The codebook is built after the extraction of \ac{SIFT} patches from all the images in the input dataset, as described in Section \ref{ss:codebook}.
                        The size of the codebook is the number of leaves of the tree of clusters obtained after applying the \ac{HIKM} clustering on the \ac{SIFT} patches.
                        \\

                        In all the following experiments, the optimized scoring function described in Section \ref{ss:searching} is used.
                        The first experiment is done using a $10^4$ clusters codebook, trained on $75 \times 10^4$ \ac{SIFT} patches extracted from the Holidays dataset.
                        The \ac{MAP} of the results is 16.3\%. Using the same number of patches, another $10^5$ clusters codebook is trained; the \ac{MAP} of the results
                        increases to 23.08\%.
                        \\

                        This increase in the \ac{MAP} indicates that the codebook size strongly affects the quality of the search results: the smaller the codebook
                        (i.e. the smaller the number of clusters), the more patches each cluster contains, and hence the higher the probability that two patches with low
                        mutual similarity are assigned to the same cluster. Such patches are considered \emph{false} (bad) matches.
                        \\

                        \begin{table}[ht]
                        \centering
                                \begin{tabular}{|l|l|l|}
                                        \hline 
                                        \textbf{Codebook} & \textbf{N. of patches} & \ac{MAP} \tabularnewline
                                        \hline
                                        \hline 
                                        $10^4$ & $75 \times 10^4$ & 16.3\% \tabularnewline
                                        \hline 
                                        $10^5$ & $75 \times 10^4$ & 23.08\% \tabularnewline
                                        \hline
                                \end{tabular}
                        \caption{Comparison of the performance of two codebooks trained on the Holidays dataset}
                        \label{t:holidayOnly}
                        \end{table}
        
                        In order to evaluate the scalability of the system, an extended dataset is formed by adding 100k images from Flickr to the Holidays dataset. Three 
                        codebooks of different sizes are trained on the patches extracted from the extended dataset.
                        
                        The first codebook trained on the extended dataset is a $10^5$ clusters codebook, trained on $2 \times 10^6$ patches extracted from the extended dataset.
                        The \ac{MAP} of the results is 16.72\%. 
                        
                        Comparing this result to the previous experiment performed on the Holidays data only, the \ac{MAP} decreased
                        as expected after injecting the $10^5$ noise images from Flickr.
                        
                        The second codebook is a $10^6$ clusters codebook, trained on $10 \times 10^6$ patches extracted from the extended dataset. The \ac{MAP} of the results
                        increased from 16.72\% to 20.29\%. 
                        
                        The third codebook is a $10^7$ clusters codebook, trained on $60 \times 10^6$ patches extracted from the extended dataset. The \ac{MAP} increased again
                        from 20.29\% to 22.39\%.
                        \\
                        
                        As shown by Table \ref{t:holidayFlickr}, the \ac{MAP} increases with the increase of the codebook size.
                        The \ac{MAP} obtained for the three different codebooks trained on the extended dataset is less than the one obtained by using the Holidays dataset only, which is expected
                        since the extended dataset includes a large chunk of noisy data (images from Flickr) which affects the quality of the retrieval results. 
                        \\

                        However, for the largest codebook of $10^7$ clusters the decrease from the \ac{MAP} obtained with the codebook trained purely on the Holidays dataset is less than 1\%, which indicates
                        that the effect of the presence of noise data on the retrieval quality is limited.
                        
                        \begin{table}[ht]
                        \centering
                                \begin{tabular}{|l|l|l|}
                                        \hline 
                                        \textbf{Codebook} & \textbf{N. of patches} & \ac{MAP} \tabularnewline
                                        \hline
                                        \hline 
                                        $10^5$ & $2 \times 10^6$ & 16.72\% \tabularnewline
                                        \hline 
                                        $10^6$ & $10 \times 10^6$ & 20.29\% \tabularnewline
                                        \hline
                                        $10^7$ & $60 \times 10^6$ & 22.39\% \tabularnewline
                                        \hline
                                \end{tabular}
                        \caption{Comparison of the performance of three codebooks trained on the (Holidays+Flickr) dataset}
                        \label{t:holidayFlickr}
                        \end{table}

                \subsection{Match Refinement}
                \label{ss:evalmatref}
                        The effect of applying match refinement on the retrieved results is evaluated in several experiments. As described in Section \ref{ss:hough},
                        the retrieved lists of images are reordered after the match refinement, and hence the \ac{MAP} changes accordingly. 
                        \\

                        The first experiment is done by applying the match refinement method described in Section \ref{ss:hough}. The experiment is performed on
                        the Holidays dataset with the $10^5$ codebook. The \ac{MAP} of the results increased from 23.08\% (without refinement) to 24.89\% (after refinement).
                        \\
                        
                        Figure \ref{fig:matchref} shows a comparison between the top ranked result for a sample query, before and after applying the match refinement. The
                        result obtained after refinement shows that a relevant image which belongs to the same series of the query is correctly retrieved. Moreover, the result shows
                        that all the remaining matches undergo a consistent change in position.
                        \\

                        \begin{figure}[!htbp]
                        \centering
                                \includegraphics[width=5cm]{pics/beforeref.png}
                                \includegraphics[width=5cm]{pics/afterref.png}
                                \caption{Effect of match refinement on retrieval results}
                                The image on the left shows the top ranked result for one of the queries before refinement, the one on the right shows the top ranked result after refinement.
                                \label{fig:matchref}
                        \end{figure}

                        After applying the fanning effect removal modification described in Section \ref{ss:hough}, another experiment is done to evaluate the quality of the results using the modified refinement method. Using 
                        the pure Holidays dataset and the $10^5$ codebook, the \ac{MAP} increased from 24.89\% (using the original refinement) to 26.06\% (using the 
                        modified refinement).
                        \\
                        
                        Figure \ref{fig:nofanning} shows the result of a sample query where the fanning effect occurs even after the original match refinement method is applied, and 
                        shows the enhanced result after applying the modified match refinement where a relevant image is correctly retrieved after removing the fanning effect.
                        \\

                        \begin{figure}[!htbp]
                        \centering
                                \includegraphics[width=5cm]{pics/fanref.png}
                                \includegraphics[width=5cm]{pics/nofanref.png}
                                \caption{Effect of fanning effect removal on retrieval results}
                                The figure shows the effect of applying the fanning effect removal modification on the match refinement method.
                                The image on the left shows the top ranked result for one of the queries where the normal refinement fails, 
                                the one on the right shows the top ranked result after the modified refinement method is applied.
                                \label{fig:nofanning}
                        \end{figure}

                        The modified refinement method is then evaluated on the extended dataset (Holiday+Flickr). The experiment is repeated on the three
                        different codebook sizes. Table \ref{t:matchref} shows the results of this experiment.
                        As shown in the table, the \ac{MAP} increases consistently for all codebook sizes. 
                        \\

                        However, it is observed that the amount of enhancement decreases as the codebook size increases. This indicates that a larger 
                        codebook already helps to filter out bad matches, which is why the effect of match refinement on the quality of the results
                        diminishes for larger codebook sizes.
                        
                        \begin{table}[!htbp]
                        
                        \centering
                                \begin{tabular}{|l|l|l|l|}
                                        \hline 
                                        \multicolumn{1}{|c|}{\multirow{2}{*}{Codebook}} & \multicolumn{2}{c|}{\ac{MAP}} & \multicolumn{1}{c|}{\multirow{2}{*}{Enhancement}}  \tabularnewline
                                         \cline{2-3}
                                        & Before Refinement & After Refinement & \tabularnewline
                                        \hline
                                        \hline 
                                        $10^5$ & 16.72\% & 19.54\% & +2.82\% \tabularnewline
                                        \hline 
                                        $10^6$ & 20.29\% & 21.94\% & +1.65\% \tabularnewline
                                        \hline
                                        $10^7$ & 22.39\% & 22.95\% & +0.56\% \tabularnewline
                                        \hline
                                \end{tabular}
                        \caption{Comparison of the performance after refinement of three codebooks trained on the (Holidays+Flickr) dataset}
                        \label{t:matchref}
                        \end{table}
                        
                        The \ac{WGC} refinement method \cite{Jegou2008} is also evaluated on the Holidays dataset. \ac{WGC} performed poorly, causing
                        the \ac{MAP} to decrease from 23.08\% (without refinement) to 13\%. This poor performance may be due to the fact that \ac{WGC} uses only
                        rotation and scale differences, and does not take the translation differences between matches into consideration.
                        \\

                        As for the modified Hough transform method, which adds the difference in rotation, the performance evaluation on the Holidays dataset resulted 
                        in a \ac{MAP} of 24.6\%. This score is slightly lower than the score obtained with the original, unmodified Hough transform method (\ac{MAP} = 24.89\%). This
                        may be because the number of matches assigned to the same histogram bin decreases after incorporating the rotation difference, since each bin of the original method
                        is subdivided into smaller sub-bins according to the rotation difference.
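
                        As a minimal sketch of the Hough-style voting idea (not the system's actual implementation; the tuple layout and bin sizes are illustrative assumptions), matches can be binned by their quantized translation difference, optionally subdivided by rotation difference as in the modified method, and only the largest bin kept:

                        ```python
                        from collections import defaultdict

                        def refine_matches(matches, trans_bin=25.0, rot_bin=None):
                            """Hough-style voting: each match votes with its quantized translation
                            difference; only matches in the most populated bin are kept as
                            geometrically consistent. `matches` holds (dx, dy, drot) tuples."""
                            bins = defaultdict(list)
                            for dx, dy, drot in matches:
                                key = (int(dx // trans_bin), int(dy // trans_bin))
                                if rot_bin is not None:
                                    # modified method: subdivide each bin by rotation difference
                                    key += (int(drot // rot_bin),)
                                bins[key].append((dx, dy, drot))
                            return max(bins.values(), key=len)
                        ```

                        Subdividing the bins also illustrates the observed drop: matches that shared a translation bin may split across several rotation sub-bins, leaving fewer votes in the winning bin.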

                        \subsubsection{Noisy Codebook}
                        In order to evaluate the effect on the retrieval quality of using a codebook trained on a noisy dataset (i.e. a dataset which includes a group of irrelevant images),
                        a codebook of $10^5$ clusters is trained on the patches extracted from the extended dataset (Holidays+Flickr), which has a ratio of 98.5\%
                        noise images from Flickr to 1.5\% relevant Holidays dataset images. 
                        \\

                        Using this codebook, a Lucene index is built using the pure Holidays dataset. This experiment resulted in a \ac{MAP} of 21.50\%.
                        Compared to the 23.08\% score obtained with a $10^5$ clusters codebook trained solely on the Holidays dataset patches,
                        the \ac{MAP} decreased by 1.58\%, which is a small drop considering the 98.5\% noise ratio of the noisy codebook.
                        \\
                        
                        This result indicates that adding more images to the database of the search engine has a limited effect on the quality of the results, since
                        the codebook training is robust to the presence of irrelevant data.
                           
                \subsection{Timings}
                        The approach described in Chapter \ref{c:approach} includes several techniques used to maintain a fast response and a short query time,
                        even with large dataset sizes; among these techniques are clustering and the inverted file index.
                        During the experiments described in the previous section, the timing of several stages are recorded in order to evaluate the searching speed
                        under different setups and conditions.
                        \\

                        Firstly, the \ac{DPS} querying time is measured as a baseline for comparison. An experiment is done using the Holidays dataset (1491 images) for this purpose. The actual 
                        time for performing one query against the 1491 images is on average 4 minutes. This time varies from one query to the other according to 
                        the number of patches in each query. In the case of the Lucene based search engine, the Lucene querying time for the same number of images
                        drops to 0.67 seconds on average. 
                        \\
                        
                        Additionally, in the case of the extended dataset (Holidays+Flickr), which includes 101491 images, Direct Patch Searching becomes infeasible, 
                        since the estimated query time increases linearly from 4 minutes to 4.8 hours per query. This time drops to 0.88 seconds in the
                        case of Lucene search. This demonstrates the scalability of the adopted approach, since the query time is no longer linearly dependent on the number
                        of images in the index. 
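
                        This scaling behaviour follows from the inverted file structure: a query only touches the posting lists of its own visual words, never the full image collection. A minimal sketch (illustrative names; the real index is a Lucene index, not a Python dictionary):

                        ```python
                        from collections import defaultdict

                        def build_inverted_index(image_words):
                            """Map each visual word to a posting list: image id -> term frequency.
                            `image_words` maps an image id to the list of its visual words."""
                            postings = defaultdict(lambda: defaultdict(int))
                            for image_id, words in image_words.items():
                                for word in words:
                                    postings[word][image_id] += 1
                            return postings

                        def search(postings, query_words):
                            """Score only images sharing at least one visual word with the query;
                            the cost depends on the posting-list lengths, not the collection size."""
                            scores = defaultdict(int)
                            for word in query_words:
                                for image_id, tf in postings.get(word, {}).items():
                                    scores[image_id] += tf
                            return sorted(scores, key=scores.get, reverse=True)
                        ```

                        Images that share no visual word with the query are never visited, which is the essential difference from Direct Patch Searching.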
                        \\

                        \begin{table}[!htpb]
                       
                        \centering
                                \begin{tabular}{|l|l|}
                                        \hline 
                                         Codebook & Quantization Time \tabularnewline
                                        \hline
                                        \hline 
                                        $10^5$ & 0.1 sec \tabularnewline
                                        \hline 
                                        $10^6$ & 0.68 sec \tabularnewline
                                        \hline
                                        $10^7$ & 6.2 sec \tabularnewline
                                        \hline
                                \end{tabular}
                        \caption{Comparison of the quantization time for different codebook sizes (Holidays+Flickr dataset)}
                        \label{t:timingCodebook}
                        \end{table}

                        Table \ref{t:timingCodebook} shows the average time needed for the vector quantization step (described in Section \ref{ss:vectquant}), 
                        where each \ac{SIFT} patch is pushed down the codebook in order to assign it to its suitable visual word. The table shows a comparison
                        of the quantization time for different codebook sizes.
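
                        The sub-linear growth of the quantization time follows from the tree structure of the codebook: a descriptor is compared against only the $k$ centroids of each node on its path, so the cost is $O(k \log_k N)$ for $N$ leaves rather than $O(N)$. A minimal sketch of this descent, with an assumed tree layout:

                        ```python
                        import math

                        class Node:
                            """One node of the vocabulary tree: k centroids; inner nodes carry one
                            child per centroid, leaves carry the visual-word ids of their clusters."""
                            def __init__(self, centroids, children=None, word_ids=None):
                                self.centroids = centroids
                                self.children = children    # None for leaf nodes
                                self.word_ids = word_ids    # set only for leaf nodes

                        def quantize(node, descriptor):
                            """Push a descriptor down the tree: at each level it is compared
                            against only the centroids of the current node, never the whole
                            codebook, so deeper trees stay cheap to traverse."""
                            while True:
                                best = min(range(len(node.centroids)),
                                           key=lambda j: math.dist(descriptor, node.centroids[j]))
                                if node.children is None:
                                    return node.word_ids[best]   # leaf cluster = visual word
                                node = node.children[best]
                        ```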
                        
                        
                        
                \subsection{Spectral Hashing Evaluation}
                        
                In order to evaluate the spectral hashing approach, a modified version of the search engine is used to perform some experiments. The implementation of the search engine
                is modified in order to replace the \ac{HIKM} clustering method by the spectral hashing method described in Section \ref{s:hashing}. Instead of training a tree of clusters
                (codebook), a set of one or more hash tables is trained based on the spectral hashing method. For each extracted \ac{SIFT} patch, the descriptor
                vector is hashed to the corresponding hash bin. Hence, the visual word used to label a patch describes the hash bin to which the patch is assigned, in contrast to the cluster 
                index in the case of \ac{HIKM} clustering.
                \\
                
                Moreover, since the hashing approach allows creating several hash tables to increase error tolerance, matching two \ac{SIFT} patches is not as straightforward as in the case of clustering, where two patches are 
                considered a match if they are assigned to the same visual word. In the case of spectral hashing, two patches may be assigned to the same hash bin in some
                of the tables but not in all of them. Hence, the search engine is modified to include the table information in the hash code formed for each patch, as well
                as to handle matches of different \emph{strengths}.
                \\
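
                One simple way to realize this, sketched below under the assumption that each trained table provides a function mapping a descriptor to an integer bin, is to prefix each bin code with its table index when forming the indexing terms; the match strength of two patches is then the number of tables in which they share a bin. The term format is illustrative, not the one used in the system:

                ```python
                def hash_terms(descriptor, hash_functions):
                    """One indexing term per hash table, "t<table>_<bin>", so that bins
                    from different tables can never collide as terms.
                    `hash_functions` stands in for the trained spectral-hashing tables."""
                    return [f"t{t}_{h(descriptor)}" for t, h in enumerate(hash_functions)]

                def match_strength(terms_a, terms_b):
                    """Strength of a match: the number of tables in which the two
                    patches share a hash bin (between 0 and the number of tables)."""
                    return len(set(terms_a) & set(terms_b))
                ```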
                
                The focus of the experiments done on the spectral hashing version is to evaluate the quality of search results in comparison to the results achieved using \ac{HIKM} clustering,
                which are described in previous sections of this chapter.
                \\

                It is observed from several experiments that the resulting \ac{MAP} score is unstable: for the same setup (number of bits and number of tables), 
                the resulting score varies if the hash table training is repeated. This may be due to the variation in the \ac{PCA} extraction depending on the input feature vectors of the
                patches used for training the hash tables. More stable results are obtained after increasing the number of training patches used. 
                \\

                The maximum \ac{MAP} achieved is 14.36\%, using 26 bits and 2 tables, trained on $10^6$ patches extracted from the Holidays dataset. This result is
                lower than the lowest score obtained using clustering. In general, the spectral hashing method hence performed poorly compared to the clustering method.