\chapter{Approach}
\label{c:approach}
        In this chapter, the approach adopted to accomplish the aims of this thesis is described. The stages of the image retrieval process are presented throughout the
        sections of this chapter.
        \\

        After extracting patches, feature vectors describing patches are clustered using the \ac{HIKM} clustering algorithm described in Section \ref{ss:hkmeans}.
        The resulting cluster tree is used to map feature vectors to their corresponding clusters. As a result of this mapping, the index of the cluster
        to which a feature vector is assigned is used to describe the patch. This mapping hence allows the use of text search techniques for indexing and querying.
        \\
        
        The Lucene text search library is used to build an inverted index and to perform the actual searching (querying) step.
        
        The image retrieval engine works in two main phases:
        \begin{enumerate}
         \item Off-line phase: this phase includes feature extraction, clustering and building the search index.
         \item On-line phase: this phase includes the actual search, i.e. performing a query and returning the search results.
        \end{enumerate}

        In the following sections, each phase is described in detail, including the sub-processes of each.
        
        \section{Off-line Phase}
        \label{s:offline}
                Figure \ref{fig:offline} gives an overview of the processing stages of the off-line phase of the search engine.
                The input to the off-line phase is a large dataset of images, which are used to build a search index, on which queries will be performed.
                \begin{figure}[ht]
                        \centering
                        \includegraphics[width=14cm]{pics/offline.png}
                        \caption{Processing pipeline of the offline phase}
                        \label{fig:offline}
                \end{figure}               

                As shown by Figure \ref{fig:offline}, the main outputs of the off-line phase are: 
                \begin{itemize}
                 \item the codebook, i.e. the tree of clusters generated by the clustering process.
                 \item the Lucene index, i.e. the inverted index constructed from the visual words extracted from each image of the input dataset.
                \end{itemize}

                \subsection{Feature Extraction}
                \label{ss:featextract}
                        Based on the concept of local image features, described in Section \ref{s:ImageFeatures}, features are extracted from images of the input dataset, using the \ac{SIFT}
                        detector/descriptor. 
                        
                        Each image is fed to a \ac{SIFT} extraction module. The extracted patches are stored in a database, associated with the identifier of the image
                        to which they belong.
           
                              
                \subsection{Clustering / Codebook Training}
                \label{ss:codebook}
                        A subset of the input dataset of images is used in this stage. \ac{SIFT} patches are extracted from this subset, and the \ac{HIKM} clustering algorithm
                        is applied to the descriptors of the extracted \ac{SIFT} patches as described in Section \ref{ss:hkmeans}. The aim of this stage is to group \emph{similar} descriptors into the same cluster,
                        where similarity is based on the Euclidean distance between feature vectors (descriptors).
                        \\
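                        As an illustration of this stage, the following is a minimal pure-Python sketch of hierarchical k-means on toy two-dimensional vectors. It is not the implementation used in this work (which clusters 128-dimensional \ac{SIFT} descriptors); the function names, the random initialization and the fixed iteration count are simplifying assumptions:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(vectors):
    """Component-wise mean of a non-empty list of vectors."""
    return [sum(xs) / len(vectors) for xs in zip(*vectors)]

def kmeans(points, k, iters=10):
    """Plain k-means with random initialization; returns k cluster centers."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist2(centers[i], p))].append(p)
        # Recompute each center; keep the old one if its group emptied out.
        centers = [mean(g) if g else c for g, c in zip(groups, centers)]
    return centers

def hikm(points, k, depth):
    """Hierarchical k-means: each node is (centers, children), where
    children[i] is the subtree trained on the points nearest centers[i]."""
    centers = kmeans(points, k)
    if depth == 1:
        return (centers, None)
    groups = [[] for _ in range(k)]
    for p in points:
        groups[min(range(k), key=lambda i: dist2(centers[i], p))].append(p)
    children = [hikm(g, k, depth - 1) if len(g) >= k else (None, None)
                for g in groups]
    return (centers, children)
```

                        The leaves of the resulting tree are the clusters whose indices later serve as visual words.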

                        The cluster tree produced after this process is referred to as the \emph{codebook}. The patches used for training this \emph{codebook} are chosen randomly from all images to ensure 
                        that the trained codebook is not overfitted to a subset of the dataset.
                        \\

                        Now that each of the training patches is assigned to a specific cluster $c_{i}$ in the codebook, the original 128-dimensional descriptor is replaced by the index $i$ of the cluster $c_{i}$.
                        The index of the cluster is referred to as a \emph{visual word}. Figure \ref{fig:viswords} shows a graphical representation of the feature space, after 
                        being subdivided into clusters, where each cluster is labeled by an index.
                        \\
                        
                        \begin{figure}[ht]
                        \centering
                                \includegraphics[width=10cm]{pics/clusteringViswords.png}
                                \caption{A graphical representation of the clustered feature space, with examples of \emph{visual words}}
                                \label{fig:viswords}
                        \end{figure}
                        
                        Based on the concept of visual words, an analogy can be drawn between text search and image search \cite{Sivic2003}. Table \ref{t:analogy} compares
                        text search and image search from a high-level and a low-level view.\\
                        
                        \begin{table}[ht]
                        
                        \centering
                        \begin{tabular}{|l||c|c|}
                                \hline 
                                        &\textbf{Text Search} & \textbf{Image Search}\\
                                \hline
                                \hline 
                                 \textbf{High level view} & documents & images\\
                                \hline
                                 \textbf{Low level view} & words & visual words\\
                                \hline
                        \end{tabular}
                        \caption{Analogy between text and image search}
                        \label{t:analogy}
                        \end{table}

                        Using this analogy, techniques already available for text search can be migrated and adapted to image search. For example,
                        an inverted index - as described in Section \ref{ss:inverted} - can hence be used for indexing images according to the visual words extracted from them.
               
                \subsection{Vector Quantization}
                \label{ss:vectquant}
                        After extracting features from the input dataset, the codebook trained during the clustering stage (see Section \ref{ss:codebook}) is used to transform the descriptors of the extracted \ac{SIFT} patches to visual words.
                        \\

                        For each patch extracted from the input images, the descriptor vector is pushed down the trained tree of clusters in order to assign this vector to the cluster
                        of descriptors which share the highest similarity.
                        \\

                        This assignment is done by finding a path from the root to a leaf cluster. This path is constructed according to the distance measure between the descriptor and the center of 
                        the clusters at each level of the tree. The index of the leaf cluster at the end of this path is the \emph{visual word} used to describe the patch.
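                        This descent can be sketched as follows; the tuple-based tree layout (a node as a pair of child centers and sub-nodes) is an illustrative assumption rather than the actual data structure:

```python
def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def quantize(node, descriptor):
    """Push a descriptor down the cluster tree: at each level follow the
    nearest center; the path to the leaf identifies the visual word."""
    path = []
    while node is not None:
        centers, children = node
        i = min(range(len(centers)), key=lambda j: dist2(centers[j], descriptor))
        path.append(i)
        node = children[i] if children else None
    return tuple(path)

# A toy codebook with branching factor 2 and depth 2:
tree = ([[0.0, 0.0], [10.0, 10.0]],
        [([[0.0, 0.0], [0.0, 2.0]], None),
         ([[10.0, 10.0], [12.0, 12.0]], None)])
word = quantize(tree, [11.5, 11.9])   # -> (1, 1)
```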

                \subsection{Indexing}
                \label{ss:indexing}
                        After the translation from descriptor vectors to visual words, the Lucene search library is used to create an inverted index, in which each unique
                        visual word is associated with the list of images in which it occurs.
                        This inverted index is hence very similar to the one used for text search (see Table \ref{t:invindex}).
                        \\

                        Moreover, information about the term frequency (tf) can also be added for each visual word in a similar way as it is done in the case of text search,
                        where each image ID is associated with the number of occurrences of the visual word.
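                        Building such a postings structure can be sketched as follows. This is a simplified stand-in for the index Lucene constructs internally; the data layout mirrors the example in the table below:

```python
from collections import Counter, defaultdict

def build_inverted_index(images):
    """images: image ID -> list of visual words extracted from that image.
    Returns: visual word -> list of (image ID, term frequency) postings."""
    index = defaultdict(list)
    for img_id, words in images.items():
        # Counter collapses repeated visual words into (word, tf) pairs.
        for word, tf in sorted(Counter(words).items()):
            index[word].append((img_id, tf))
    return index

index = build_inverted_index({
    "img1": [101, 101, 101, 104],
    "img2": [101, 101, 101, 101],
})
# index[101] -> [("img1", 3), ("img2", 4)]
```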

                        \begin{table}[ht]
                        \centering
                                \begin{tabular}{|l|l|}
                                        \hline 
                                        \textbf{Visual Word} & \textbf{[imgID, tf]} \tabularnewline
                                        \hline
                                        \hline 
                                        101 & [img1, 3], [img2, 4], [img4, 2] \tabularnewline
                                        \hline 
                                        104 & [img1, 1], [img3, 5] \tabularnewline
                                        \hline
                                \end{tabular}
                        \caption{Example of an inverted index of images and visual words}
                        Each visual word is stored associated with the IDs of the images in which it occurs, as well as the term frequency (tf)
                        of the word in the corresponding document; e.g. the visual word ``101'' occurs 3 times in img1, i.e. three different \ac{SIFT} patches
                        are assigned - during clustering - to the same cluster, which has the index ``101''.
                        \end{table}
                        
        \section{On-line Phase}
        \label{s:online}
                Figure \ref{fig:online} gives an overview of the processing stages of the on-line phase of the search engine.
                \begin{figure}[ht]
                        \centering
                        \includegraphics[width=14cm]{pics/onlinequery.png}
                        \caption{Processing pipeline of querying in the on-line phase}
                        \label{fig:online}
                \end{figure}
                The input to the on-line phase is a query image.
                
                The output of this phase is the ranked list of images retrieved from the Lucene index, which are relevant to the query image. The retrieved
                result list is ordered by the similarity score calculated by the Lucene scoring function (See Section \ref{ss:lucene}).
                \\
                
                After retrieving the ranked list of images, match refinement is applied as a post processing step, where the matches between the query image and each of 
                the top ranked images are filtered as described in Section \ref{ss:hough}. The result list is then reordered according to the refined matches.
                \\

                The same feature extraction process described in the off-line phase (see Section \ref{ss:featextract}) is performed on the query image,
                in order to extract \ac{SIFT} patches which serve as the basis for determining the similarity between the query and the database images.
                \\
                
                Lucene indexing can also be applied during the on-line phase, allowing additional images to be added to the index while the system is running.
                Figure \ref{fig:onlineadd} summarizes the stages of adding images to the index during the on-line phase of the search engine.
                \begin{figure}[ht]
                        \centering
                        \includegraphics[width=12cm]{pics/onlineadd.png}
                        \caption{Processing pipeline of adding additional images during the on-line phase}
                        \label{fig:onlineadd}
                \end{figure}

                \subsection{Vector Quantization}
                \label{ss:vectquant2}
                        In this stage, the same process described in Section \ref{ss:vectquant} is repeated for the query image. The descriptors of the
                        patches extracted from the query image are translated to visual words, using the codebook trained during the off-line phase (see Section \ref{ss:codebook}).

                \subsection{Searching}
                \label{ss:searching}
                        After transforming descriptor vectors to visual words, the Lucene search library is used to query the Lucene index with the set of visual words of the query image.
                        The inverted index is accessed for each visual word extracted from the query; the images containing this visual word are retrieved and a score is calculated for each
                        relevant image.
                        \\

                        The similarity scoring used in text search, described in Section \ref{ss:sim}, is also used for ranking the retrieved images. Similar
                        terms and normalization factors are calculated for images through the similarity scoring functions of Lucene, but applied to visual words instead of
                        textual words.
                        \\
                        
                        The final output of this step is a list of images sorted by the Lucene similarity score assigned to each image.
                
                \subsubsection{Lucene Scoring Optimization}

                        As described in Section \ref{ss:sim}, Lucene provides a customizable similarity scoring function which combines several parameters.
                        Several experiments are performed using different setups of the Lucene scoring function, in order to find the setup that
                        optimizes the quality of the retrieval results.
                        \\

                        The first experiment focuses on optimizing the \ac{tf} parameter in the scoring function. Assuming a visual word $v$,
                        the default setting of $tf(v)$ is equal to $freq(v)$, i.e. the overall score of a document $d$ is multiplied by the number of occurrences of $v$ in $d$.
                        \\

                        However, by inspecting the visualized results of some queries under this setting, it was noticed that in the case of images which include a highly repetitive texture, 
                        a \emph{fanning} effect occurs as shown by the example in Figure \ref{fig:fan}. This effect occurs when a single patch in the query image matches several patches in the target image.
                        Images with such a repetitive texture are hence given a high score, due to large \ac{tf} values for the repeated patches. 
                        \\

                        To tackle this problem, the scoring function is modified by setting the \ac{tf} parameter to $\sqrt{freq(v)}$, hence decreasing the effect.
                        \\
                         \begin{figure}[!htbp]
                        \centering
                                \includegraphics[width=5cm]{pics/fan.png}
                                \caption{An example matching result, where the \emph{fanning} effect occurs}
                                \label{fig:fan}
                        \end{figure}
                        The second experiment focuses on optimizing the length norm parameter. For a document $d$, the length norm $norm(d)$ is initially 
                        equal to $length(d)^{-1}$, i.e. the overall score is divided by the length of the document in order to give a preference to shorter documents
                        over longer ones.
                        \\
                        
                        However, by inspecting the visualized results of some queries under this setting, it was noticed that poorly textured images (see Figure \ref{fig:poor}),
                        which contain a small number of patches, are rewarded with a high similarity score, due to the large length norm value assigned to them compared
                        to other images with more complex textures and hence more patches.
                        \\

                        To tackle this problem, the scoring function is modified by setting the length norm parameter to $[\sqrt{length(d)}]^{-1}$, hence decreasing the effect.
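                        Both adjustments can be combined in a small, self-contained scoring sketch. This is a deliberate simplification of Lucene's full scoring formula, and the function name is hypothetical:

```python
from collections import Counter
from math import sqrt

def score(query_words, image_words):
    """Simplified similarity score with both modifications applied:
    tf(v) = sqrt(freq(v)) dampens highly repetitive textures, and the
    length norm 1/sqrt(length(d)) softens the preference for images
    with very few patches."""
    freq = Counter(image_words)
    tf_sum = sum(sqrt(freq[w]) for w in set(query_words) if w in freq)
    return tf_sum / sqrt(len(image_words))
```

                        Compared to the default $tf(v) = freq(v)$ and $norm(d) = length(d)^{-1}$, both square roots flatten the extreme contributions without discarding the underlying signal.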
                        
                        \begin{figure}[!htbp]
                        \centering
                                \includegraphics[width=5cm]{pics/poor.png}
                                \includegraphics[width=5cm]{pics/poor2.png}
                                \caption{Comparison between the actual and the expected matching of two images}
                        The actual result (on the left) is a lot worse than the expected one (on the right). This is due to the fact that the poorly textured image on the left receives a higher score than the image on the right.
                                \label{fig:poor}
                        \end{figure}

                        Experiments are performed to evaluate the effect of the optimized scoring function on the results. These experiments and their results are
                        described in detail in Chapter \ref{chap:eval}.

                \subsection{Match Refinement}
                \label{ss:hough}
                        A match filtering process is applied on the images retrieved by querying the Lucene index. 

                        Each match consists of a pair of \ac{SIFT} patches $(p_{1},p_{2})$ where one belongs to one of the target images and the other belongs to the query image. 
                        The two patches are assigned to the same visual word, yet
                        described by two different keypoints $(k_{1}, k_{2})$. As described in Section \ref{ss:SIFT}, each keypoint $k_{i}$ is a quadruple $(x_{i},y_{i},s_{i},\theta_{i})$, 
                        where:
                        \begin{itemize}
                         \item $x_{i},y_{i}$ denote the position of the keypoint $k_{i}$.
                         \item $s_{i}$ denotes the estimated scale of the patch $p_{i}$.
                         \item $\theta_{i}$ denotes the gradient orientation of the patch $p_{i}$.
                        \end{itemize}

                        The applied match refinement approach aims at detecting translation and scaling consistency between matches, as described in Section \ref{s:matchref}.
               

                        The approach is based on the \emph{Hough Transform}. The steps of the match refinement are listed below:
                        \begin{enumerate}
                         \item For all matching pairs of keypoints, the position differences $\Delta x$ and $\Delta y$ and the scale difference $\Delta s$ between the two keypoints are computed.
                         \item The ranges of position and scale differences are divided into bins.
                         \item A histogram is built by assigning each matching pair to the corresponding bin.
                         \item The peak of the histogram (i.e. the dominant bin) represents the transformation 
                                (translation and scale) which fits the majority of matches. Hence, matches assigned to that bin are considered as good 
                                matches and the rest of the matches are discarded.
                        \end{enumerate}
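                        The steps above can be sketched as follows, assuming each keypoint is reduced to an $(x, y, s)$ triple and using arbitrary illustrative bin widths:

```python
from collections import defaultdict

def refine_matches(matches, pos_bin=16.0, scale_bin=0.5):
    """matches: list of keypoint pairs ((x1, y1, s1), (x2, y2, s2)).
    Bin each match by its translation and scale difference; keep only
    the matches that fall into the dominant (peak) bin."""
    bins = defaultdict(list)
    for (x1, y1, s1), (x2, y2, s2) in matches:
        key = (round((x2 - x1) / pos_bin),
               round((y2 - y1) / pos_bin),
               round((s2 - s1) / scale_bin))
        bins[key].append(((x1, y1, s1), (x2, y2, s2)))
    # The peak bin holds the transformation consistent with most matches.
    return max(bins.values(), key=len)
```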

                        After applying the filtering process, the Lucene score used to rank the list of retrieved images is replaced by the number of matches remaining after
                        filtering. Images are ranked in descending order of the number of matches (i.e. the image which has the largest number of matches with the query is ranked
                        at the top of the list).
                        \\
                                
                        By inspecting individual retrieval results from several experiments, the \emph{fanning} effect described in Section \ref{ss:searching} is still observed to degrade the quality of the
                        results. Optimizing the Lucene scoring function reduces the effect; however, it does not prevent it completely.
                        \\

                        In order to deal with this effect through match refinement, the refinement method is modified by enforcing a constraint on
                        the assignment of matches to the bins of the Hough transform histogram:
                        no two matches assigned to the same bin may share the same source keypoint.
                        \\

                        Figure \ref{fig:illustfan} shows a situation in which the fanning effect occurs. Four matches share the same source keypoint
                        $K_{s1}$. After assigning the first match $m_{1}: (K_{s1},K_{t1})$ to the corresponding histogram bin, each time a new match $m_{i}$, where $2\leq i\leq 4$,
                        is to be assigned to the same bin, its source keypoint is checked against that of the first match, $K_{s1}$; since they share the same keypoint,
                        the new match is discarded.
                        \\
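                        This constraint amounts to a small change in the binning step of the refinement, sketched below with keypoints reduced to $(x, y, s)$ triples and illustrative bin widths:

```python
from collections import defaultdict

def refine_matches_no_fans(matches, pos_bin=16.0, scale_bin=0.5):
    """Hough-style binning where a bin accepts at most one match per
    source keypoint, suppressing 'fans' in which a single query patch
    matches many target patches."""
    bins = defaultdict(list)
    seen = defaultdict(set)      # source keypoints already present per bin
    for src, tgt in matches:
        (x1, y1, s1), (x2, y2, s2) = src, tgt
        key = (round((x2 - x1) / pos_bin),
               round((y2 - y1) / pos_bin),
               round((s2 - s1) / scale_bin))
        if src in seen[key]:
            continue             # a match from this source is already binned
        seen[key].add(src)
        bins[key].append((src, tgt))
    return max(bins.values(), key=len)
```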

                        \begin{figure}[!htbp]
                        \centering
                                \includegraphics[width=6cm]{pics/fangraph.png}
                                \includegraphics[width=6cm]{pics/fangraphrem.png}
                                \caption{Illustration of \emph{fans} removal}
                                \label{fig:illustfan}
                        \end{figure}

                        Another modified version of the match refinement, based on the original Hough transform method, is also applied to the search results.
                        Here, the original method is extended by including the rotation difference, together with the position and scale differences, in building the Hough space histogram.
                        A detailed description of the experiments performed and their results is included in the following chapter.
                        
                        