\chapter{Background}
\label{chap:Background}
        In this chapter, a literature review of basic concepts like image features, text searching and clustering is provided. 
        This chapter also includes an overview of some related work and existing techniques for tackling the image retrieval problem.
        \\

        Content Based Image Retrieval involves comparing a query image with target images from the database, in order to decide how relevant they are to each other.
        Such a comparison requires a low-level representation of the information contained in each image. Hence comes the role of \emph{feature
        extraction} as one of the main stages of a typical image retrieval system.
        \\
        
        After the feature extraction stage, it is a common approach to use data partitioning algorithms in order to group the extracted features,
        such that \emph{similar} features are assigned to the same group \cite{Sivic2003,Jegou2008,Nister2006}.
        \\
        
        Matching a pair of images is based on matching the features extracted from the two images. This requires an efficient method for storing and
        organizing the mapping between images and their extracted features. Indexing is used for this purpose: the extracted features are stored in
        an index which efficiently maps matching features to matching images.
        \\

        However, due to several factors discussed in later sections of this chapter, the matching between features is not always ideal.
        This leads to the presence of some bad matches which need to be filtered out. 
        Hence a \emph{match refinement} process is often needed as a post-processing stage applied to the initial matching results.
        
        
        \section{Image Features}
        \label{s:ImageFeatures}

                As described previously, feature extraction is an essential stage of the image retrieval process.
                Several types of features exist for describing images. In general two main classes of image features can be identified:
                \begin{enumerate}
                 \item \textbf{Global Image Features}: aim at detecting the \emph{gist} of the scene captured by the image. These features describe the high-level
                        properties of the image \cite{Oliva2006}. They are mainly based on the color distribution across the pixels of the image. A common
                        example of such features is the color histogram.
                 \item \textbf{Local Image Features}: aim at detecting localized regions of interest in images. These features describe the properties of 
                        specific components of the scene. Examples of such features are blobs, edges and corners.
                \end{enumerate}
                
                According to the above definitions, local image features are more tolerant to changes in lighting, occlusions and background differences,
                since matching two images using local features is based on matching individual local regions. On the contrary, global features
                can be highly affected by such changes and hence do not provide the robustness needed for the matching process.
                

                \subsection{\ac{SIFT} Detector and Descriptor}
                \label{ss:SIFT}
                        The Scale Invariant Feature Transform, introduced by David Lowe \cite{lowe99}, aims at detecting, localizing and describing local image features.
                        The approach is based on convolving the image with a \ac{DoG} function. The image is convolved with a set of Gaussian kernels at
                        different values of the standard deviation $\sigma$ \cite{lowe2004}. This step produces several blurred images, simulating the effect of different scale factors.
                        \\
        
                        The response of the \ac{DoG} function is obtained by taking the difference between each pair of consecutive blurred images. Interest points are identified by detecting local maxima and
                        minima of the resulting \ac{DoG} response across the different scales, which yields interest points that are approximately invariant to scale
                        changes.
                        \\
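                        To make the scale-space idea concrete, the following Python sketch (an illustration only, not part of any \ac{SIFT} implementation) applies the same steps to a one-dimensional signal: blur with two Gaussian kernels of different $\sigma$, subtract to obtain the \ac{DoG} response, and mark its local extrema.

```python
import math

def gaussian_kernel(sigma, radius=None):
    # Discrete Gaussian kernel, normalized to sum to 1.
    if radius is None:
        radius = int(3 * sigma)
    vals = [math.exp(-(x * x) / (2 * sigma * sigma))
            for x in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def convolve(signal, kernel):
    # 1-D convolution with edge clamping.
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += signal[idx] * k
        out.append(acc)
    return out

def dog_extrema(signal, sigma1=1.0, sigma2=1.6):
    # Difference-of-Gaussians between two blur levels, then detection
    # of the local maxima and minima of the DoG response.
    b1 = convolve(signal, gaussian_kernel(sigma1))
    b2 = convolve(signal, gaussian_kernel(sigma2))
    dog = [a - b for a, b in zip(b1, b2)]
    extrema = [i for i in range(1, len(dog) - 1)
               if (dog[i] > dog[i - 1] and dog[i] > dog[i + 1])
               or (dog[i] < dog[i - 1] and dog[i] < dog[i + 1])]
    return dog, extrema
```

                        An isolated peak in the signal produces a strong \ac{DoG} extremum at its position, mirroring how blob-like structures are detected in the 2-D case.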
                        
                        An orientation is also assigned to each interest point. This orientation is estimated by creating a \ac{HoG} of the points in 
                        a local region around each interest point. The peaks of this histogram correspond to the dominant directions of the local gradients.
                        \\
                        
                        The detected interest regions are referred to as \ac{SIFT} patches. Each \ac{SIFT} patch is described by:
                        \begin{itemize}
                         \item a descriptor: a 128 dimensional feature vector.
                         \item a keypoint: each keypoint describes the position of the patch in the image (x \& y), a scale reflecting the detected size of the patch
                        and an orientation.
                        \end{itemize}

                        \begin{figure}[ht]
                        \centering
                                \includegraphics[width=5cm]{pics/sift-138700.jpg}
                                \caption{A visualized example of \ac{SIFT} patches extracted from a sample image. 
                                Using the keypoint associated with each patch, each patch is represented by a circle, centered at the x \& y positions of the keypoint,
                                and with a radius reflecting the scale of the keypoint.}
                                \label{fig:sift}
                        \end{figure}
        
        \section{Clustering}
        \label{s:clustering}
                Clustering is one of the most important unsupervised learning problems. The goal of clustering is to partition a set of data objects into groups. 
                Data objects are often points in a high dimensional space. Each group is hence a collection of points which are - according to certain criteria - \emph{similar} to each other.
                Each of these groups is referred to as a \emph{cluster}. 
                \\

                Assigning data points to a certain cluster is done according to a \emph{distance} measure. This measure determines to which cluster a
                particular data point is \emph{closer}. The higher the dimensionality of the data space, the more complex searching and
                comparing data points becomes. This is why partitioning the data space into groups which capture
                the similarity between data points is beneficial.                
                \\
                
                The advantage of clustering lies in speeding up the retrieval of data points which are similar (i.e. close) to a query point, since the problem is 
                reduced to finding the cluster to which the query point should be assigned. The data points assigned to this same cluster are then the closest
                to the query point. Moreover, by labeling clusters, the label of a cluster can be used to describe the data points assigned to it, consequently replacing
                the original high dimensional vectors and reducing the effects of the dimensionality problem.
                \\
                
                In the context of the work done in this thesis, the feature vectors associated with \ac{SIFT} patches extracted from images can be used as the data points
                fed to the clustering process.
                
                \subsection{K-Means Clustering}
                \label{ss:kmeans}
                        K-Means clustering is one of the algorithms which solve the clustering problem; it aims at partitioning the data into $k$ clusters.
                        The algorithm describes an iterative process which interleaves the optimization of two quantities:
                        \begin{enumerate}
                         \item the centers of clusters: also referred to as the \emph{centroids}, that is the position of the $k$ centroids in the data space. The centroid
                                of a cluster is the mean  $\boldsymbol{\mu_{i}}$ of the set of data points assigned to this cluster. 
                         \item the clusters membership: that is the cluster to which each data point is assigned. This assignment is done according to a distance measure which determines 
                                the closest cluster center to a particular data point.
                        \end{enumerate}
                        
                        The algorithm works as follows (the number of clusters $k$ must be predefined):
                        \begin{enumerate}
                         \item a set of $k$ initial data points is chosen. These represent the initial centroids of the clusters.
                         \item each data point is assigned to the closest of the $k$ centroids.
                         \item the position of each of the $k$ centroids is updated to the mean of the data points assigned to its cluster.
                         \item steps 2 \& 3 are repeated until convergence.
                        \end{enumerate}
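                        The steps above can be sketched in Python as follows (a minimal illustration of the algorithm using squared Euclidean distance; the initialization and convergence test are deliberately simplistic):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    # points: list of equal-length tuples; returns (centroids, assignments).
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # step 1: initial centroids
    assign = []
    for _ in range(iters):
        # step 2: assign each point to its closest centroid
        assign = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(pt, centroids[c])))
                  for pt in points]
        # step 3: move each centroid to the mean of its assigned points
        new = []
        for c in range(k):
            members = [pt for pt, a in zip(points, assign) if a == c]
            if members:
                new.append(tuple(sum(d) / len(members) for d in zip(*members)))
            else:
                new.append(centroids[c])       # keep an empty cluster in place
        if new == centroids:                   # step 4: stop on convergence
            break
        centroids = new
    return centroids, assign
```

                        For well-separated data the assignments stabilize after a few iterations, with each point labeled by the index of its nearest centroid.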

                \subsection{Hierarchical K-Means Clustering}
                \label{ss:hkmeans}
                        \ac{HIKM} is a variation of the K-Means clustering algorithm discussed in section \ref{ss:kmeans}. It differs
                        in that it produces a tree structure of clusters instead of a flat structure.
                        \\

                        The algorithm first partitions data into $k$ clusters, then recursively applies K-Means clustering on each of those $k$ clusters.
                        The number $k$ hence denotes the \emph{branching factor} of the tree. This step is repeated until reaching a predefined
                        \emph{depth} (or equivalently until reaching a certain number of leaf clusters).
                        \\
                        
                        Figure \ref{fig:hierarchy} shows an illustration of the hierarchical clustering process. The process starts by partitioning the 
                        data space into three big clusters ($k$ = 3). Each of the three clusters is then in turn divided into three more clusters, and so on.
                        The predefined \emph{depth} is set to 3, hence this step is repeated three times.
                        \\

                        The resulting tree of clusters shown in Figure \ref{fig:tree} consists of 3 levels. A common approach is to label the leaf clusters 
                        with an index which encodes the path from the root to that leaf cluster, in order to facilitate accessing the clusters efficiently.
                        \\

                        The clustering process is controlled by several parameters which affect the produced tree of clusters:
                        \begin{itemize}
                         \item Branching factor: determines the number of child clusters into which each cluster is split at every tree level.
                         \item Depth: determines the total number of levels and hence the total number of leaf clusters in the tree.
                         \item Number of data points: the total number of data points fed to the clustering algorithm to train the clusters tree. Balancing this number
                                against the total number of clusters gives a rough estimate of the number of points per cluster,
                                which helps avoid very fine or very coarse clusters.
                        \end{itemize}
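                        The recursive partitioning, including the path-based leaf labels mentioned above, can be sketched in Python as follows (an illustration only, not the implementation used in this work; the flat K-Means step is the same compact routine as in section \ref{ss:kmeans}):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    # Compact flat K-Means (squared Euclidean distance).
    rng = random.Random(seed)
    cents = rng.sample(points, k)
    assign = []
    for _ in range(iters):
        assign = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, cents[c])))
                  for p in points]
        new = []
        for c in range(k):
            g = [p for p, a in zip(points, assign) if a == c]
            new.append(tuple(sum(d) / len(g) for d in zip(*g)) if g else cents[c])
        cents = new
    return cents, assign

def hikm(points, k, depth, prefix=()):
    # Recursively partition `points` into k clusters until `depth` levels
    # are built. Returns {path: centroid} for the leaf clusters, where
    # `path` is the tuple of branch indices from the root, i.e. the label
    # encoding that allows efficient access to each leaf cluster.
    if depth == 0 or len(points) < k:
        return {prefix: tuple(sum(d) / len(points) for d in zip(*points))}
    cents, assign = kmeans(points, k)
    tree = {}
    for c in range(k):
        members = [p for p, a in zip(points, assign) if a == c]
        if members:
            tree.update(hikm(members, k, depth - 1, prefix + (c,)))
    return tree
```

                        With branching factor $k$ and depth $d$, the tree has at most $k^{d}$ leaf clusters, each identified by its root-to-leaf path.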

                        \begin{figure}[ht]
                                \centering
                                \includegraphics[width=14cm]{pics/shape.png}
                                \caption{Hierarchical K-Means Clustering illustration. For clarity, partitioning is shown for only one cluster per level.}
                                \label{fig:hierarchy}
                        \end{figure}
                        

                        \begin{figure}[ht]
                                \centering
                                \includegraphics[width=12cm]{pics/tree2.png}
                                \caption{Clusters tree illustration. For clarity, only one branch is shown in detail; the others are omitted.}
                                \label{fig:tree}
                        \end{figure}
                        
                        \clearpage
        \section{\ac{LSH}}
        \label{s:lsh}
                        \acf{LSH} is a method introduced by Indyk \& Motwani \cite{indyk98} to solve the \ac{NNS} problem: given a set of data points, find the subset which
                        is closest to a query point. This problem becomes more challenging and time consuming in the case of high dimensional data points.
                        \\
                        
                        Assuming a data point $p$, the idea of this approach is to use a group $H(p)$ of $k$ randomized hash functions $(h_{1}(p),...,h_{k}(p))$.
                        A \emph{collision} of two data points occurs when both of them are assigned to the same hash bin. The randomized hash functions are
                        chosen so that they guarantee a high probability of collision for two close data points, and a low probability of collision for 
                        two far-apart data points. The number $k$ is referred to as the number of \emph{bits}.
                        \\

                        Hash functions are chosen based on random projections, where the high dimensional data points are projected on several randomly
                        chosen directions (vectors) of lower dimension.                      
                        Each data point $p$ is hence projected using the group of projections $H(p) = (h_{1}(p),...,h_{k}(p))$, resulting in $k$ different projections.
                        For each projection (hash function) $h_{i}(p)$, the set of projected data points is quantized into bins. Hence, each data point will be assigned to $k$ different
                        bins $(b_{1}(p),...,b_{k}(p))$. Two close data points will hence have close projections, and hence they are more likely
                        to fall into the same hash bin.
                        \\
                        
                        The resulting hash table includes every data point indexed by its corresponding hash code $(b_{1}(p),...,b_{k}(p))$. This process can be repeated for 
                        $l$ different groups of hash functions $H_{1}(p),...,H_{l}(p)$, and hence producing $l$ different hash tables, which allows more error tolerance.
                        \\
                        
                        The method is hence affected by two main parameters:
                        \begin{enumerate}
                                \item Number of bits
                                \item Number of tables
                        \end{enumerate}
                        These parameters affect the performance of the system in terms of quality and speed. Increasing the number of bits means increasing the number of hash bins. This 
                        leads to a decrease in the number of data points assigned to each bin. Consequently, search becomes faster due to the lower number of data points per bin, but the retrieval
                        becomes less accurate, since close data points are more likely to be separated into different bins.
                        \\
                        
                        As for the number of tables, increasing this parameter leads to more accurate retrieval results. This increase in accuracy results from covering several splittings of the data space,
                        each leading to a different assignment of data points to bins. However, it also decreases the search speed, due to the increase in the total number of colliding data points across the different tables.
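                        The scheme above can be sketched in Python as follows (an illustration under simplifying assumptions: Gaussian random projection directions, a fixed quantization bin width of 1.0, and a plain dictionary per hash table; not the implementation evaluated in this work):

```python
import random

def make_hash_group(dim, k, bin_width=1.0, seed=0):
    # One group H(p) of k hash functions: k random Gaussian directions;
    # each hash value is the quantized projection onto one direction.
    rng = random.Random(seed)
    dirs = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(k)]
    def code(p):
        return tuple(int(sum(a * b for a, b in zip(p, d)) // bin_width)
                     for d in dirs)
    return code

def build_tables(points, dim, k, l):
    # l independent hash tables H_1,...,H_l over the same data points.
    tables = []
    for t in range(l):
        h = make_hash_group(dim, k, seed=t)
        table = {}
        for i, p in enumerate(points):
            table.setdefault(h(p), []).append(i)
        tables.append((h, table))
    return tables

def query(tables, q):
    # Candidate neighbours of q: every point colliding with q in any table.
    cands = set()
    for h, table in tables:
        cands.update(table.get(h(q), []))
    return cands
```

                        A query point is hashed once per table, and the union of its colliding bins across the $l$ tables forms the candidate set for the nearest-neighbour search.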
                        

        \subsection{Spectral Hashing}       
        \label{s:hashing}
                In order to evaluate other approaches for feature space partitioning, the spectral hashing method introduced by Weiss, Torralba \& Fergus \cite{weiss08}
                is used and evaluated as an alternative to the \ac{HIKM} clustering method. 
                \\

                The main difference between clustering and the \ac{LSH} approach described in Section \ref{s:lsh}
                is that clustering is data driven: the partitioning of the feature space is closely related to the data distribution. \ac{LSH}, on the other hand, is based
                on completely randomized hash functions, so the partitioning of the feature space is independent of the data distribution.
                \\

                Spectral Hashing is an extension of \ac{LSH}, which aims at making \ac{LSH} more data driven. This aim is achieved by using a statistical method in order
                to estimate hash functions which are adapted to the data distribution, instead of using completely random ones. \ac{PCA} is the statistical method used for
                this purpose.
                \\

                \ac{PCA} aims at reducing the dimensionality of a dataset in which there are a large number of interrelated
                variables, while retaining as much as possible of the variation present in the dataset \cite{jolliffe86}. This reduction is done by extracting 
                a smaller set of variable combinations (vectors), referred to as the \emph{principal components} of the dataset, which are sorted in order of decreasing amount 
                of variation captured by each component.
                \\

                The principal components are extracted by computing the eigenvectors and the eigenvalues of the covariance matrix of the dataset.
                The eigenvectors are then sorted in the order of decreasing eigenvalue, where the largest eigenvalue corresponds to the most significant eigenvector which is considered the 1st
                principal component.
                \\

                Weiss et al. \cite{weiss08} used \ac{PCA} in order to detect the directions which capture the most variation of the data. Then a set of $k$ different sinusoidal eigenfunctions 
                $(\Phi_{1}(x),...,\Phi_{k}(x))$ is computed for each principal component. Assuming $n$ principal components are extracted, $n \times k$ eigenfunctions
                are computed in total. The number $k$ corresponds to the number of bits used in the original \ac{LSH} approach. Similar to \ac{LSH}, each data point $p$ is
                assigned a binary code $(b_{1}(p),...,b_{k}(p))$, which is generated as follows:
                        \[
                                b_{i}(p) = \left\{ 
                                \begin{array}{l l}
                                1 & \quad \text{if $\Phi_{i}(p) > 0$}\\
                                0 & \quad \text{otherwise}\\
                                \end{array} \right.
                        \]
                
                Spectral hashing is used to replace the clustering phase. The main difference between the two methods is that in the case of spectral hashing,
                each \ac{SIFT} patch can be represented by several hash codes (analogous to the visual words in the case of clustering). This is made possible by drawing $k$ random samples from the $n \times k$ eigenfunctions
                computed previously to produce each hash table.
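                The binarization step can be sketched in Python as follows (an illustration only: the sinusoidal eigenfunction form assumes a uniform distribution of the projected data over a range $[a, b]$, following Weiss et al., and the \ac{PCA} projection is reduced to a single given direction):

```python
import math

def spectral_bits(x, direction, a, b, modes):
    # Project x onto one principal direction, evaluate one sinusoidal
    # eigenfunction per mode over the data range [a, b], and threshold
    # each response at zero to obtain the binary code.
    t = sum(u * v for u, v in zip(x, direction))
    bits = []
    for k in range(1, modes + 1):
        phi = math.sin(math.pi / 2 + k * math.pi * (t - a) / (b - a))
        bits.append(1 if phi > 0 else 0)
    return bits
```

                The sign of each eigenfunction response plays the role of $b_{i}(p)$ in the formula above.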
                

        \section{Text Search}
        \label{s:textsearch}
                Text searching is a form of information retrieval, dealing with textual information.
                Considering a set of documents stored in a database, the main goal of a text search engine is to retrieve from the database those documents
                which are relevant to a certain input query, where a query typically consists of a sequence of one or more words. Google is a famous
                example of a text search engine used for retrieving documents and textual information spread all over the Internet.
                \\

                After performing a query $q$, the search engine should return a list of relevant documents, sorted according to their relevance to the query.
                \\Hence, a text search engine typically provides a technique for ranking documents by calculating a score $s_{i}$ for each document $d_{i}$ in the database.
                This score reflects how relevant the document $d_{i}$ is to the query $q$.
                
                
                
                \subsection{Inverted Index}
                \label{ss:inverted}
                An index is used in text search engines in order to optimize the performance of document retrieval. Without such an index, every
                document stored in the database would have to be scanned in order to determine its relevance to the query.
                \\

                In an inverted index, every word is stored, associated with a list of IDs of the 
                documents in which this word occurs. Hence, using such a structure provides a direct and efficient access to the documents in which query words occur
                (i.e. documents which are relevant to the query).
                \\

                Table \ref{t:invindex} shows a small part of such an inverted index, where each word is stored together with the IDs of the documents in which it occurs.
                
                \begin{table}[ht]
                \centering
                        \begin{tabular}{|l|l|}
                        
                                \hline 
                                \textbf{Word} & \textbf{docID} \tabularnewline
                                \hline
                                \hline 
                                ``cat'' & doc1, doc4, doc5 \tabularnewline
                                \hline 
                                ``cloud'' & doc2, doc4, doc7 \tabularnewline
                                \hline
                        \end{tabular}
                \caption{Example of an inverted index of text documents and words. The example shows that the word ``cat'' occurs in three documents: doc1, doc4 and doc5.}
                \label{t:invindex}
                \end{table}
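                Building and querying such an inverted index can be sketched in Python as follows (an illustration only; tokenization is reduced to lowercasing and whitespace splitting):

```python
def build_inverted_index(docs):
    # docs: {doc_id: text}; returns {word: sorted list of doc IDs}.
    index = {}
    for doc_id, text in docs.items():
        for word in set(text.lower().split()):
            index.setdefault(word, []).append(doc_id)
    for ids in index.values():
        ids.sort()
    return index

def lookup(index, query):
    # Documents containing at least one query word.
    result = set()
    for word in query.lower().split():
        result.update(index.get(word, []))
    return result
```

                A lookup is then a direct dictionary access per query word, instead of a scan over every document in the database.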
                
                \subsection{Similarity Measure}
                \label{ss:sim}
                As mentioned previously in this section, text search engines usually include a scoring function which is used to calculate a score for each relevant
                document. This score reflects the degree of similarity between a particular document and the query.
                \\
                
                The Vector Space Model describes documents in terms of their constituent terms. A document $d_{j}$ containing $t$ terms is modeled as a vector of weights $(w_{1,j},...,w_{t,j})$,
                where each weight corresponds to one of the terms which occurs in the document.
                
                The Term Frequency-Inverse Document Frequency (tf-idf) weighting is a commonly used scheme to weight terms in the Vector Space Model. It consists of two parameters:
                \begin{enumerate}
                  \item \ac{tf}: this parameter reflects how often a term occurs within a single document.
                                 It is used to weight up terms which are very frequent in a document and hence have a high discriminating power.
                  \item \ac{idf}: this parameter reflects how often a term occurs across all the documents in the database.
                                 It is used to weight down terms which are common in many documents and hence have a low discriminating
                                 power (e.g. words like ``the''). Given a set of documents $D$ and an arbitrary term $t$, 
                                 the inverse document frequency is computed as follows: 
                                \[idf(t,D) = \log\frac{|D|}{|\{d \in D : t \in d\}|}\]
                                 Where: \begin{itemize}
                                        \item $|D|$: the number of documents in the set $D$.
                                        \item $|\{d \in D : t \in d\}|$: the number of documents in which the term $t$ occurs at least once.
                                       \end{itemize}
                                Hence, the idf of a term is inversely proportional to the number of documents in which the term occurs.
                \end{enumerate}

                Using this scheme, given a set of documents $D$, the term weight $w_{i,j}$ of an arbitrary term $t_{i}$ occurring in a document $d_{j} \in D$ is computed as: $w_{i,j} = tf(t_{i},d_{j}) \times idf(t_{i},D)$
                \\

                Using this vector representation of documents, a common method for calculating the similarity between two documents is to calculate the cosine of the angle
                between the vectors representing these documents. This method is referred to as the \emph{Cosine Similarity}.
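                The tf-idf weighting and the Cosine Similarity can be sketched together in Python (an illustration only; documents are assumed to be already tokenized, and the idf formula is the one given above):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # docs: list of token lists; returns one sparse {term: weight} vector per doc.
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))   # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    # Cosine of the angle between two sparse term-weight vectors.
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

                Two documents sharing no terms have similarity 0, while a document compared with itself has similarity 1.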

                \subsection{Lucene search library}
                \label{ss:lucene}
                The Lucene text search library is an open source project, providing a full-featured text search API that can be easily extended and customized. 
                Lucene uses a highly efficient indexing mechanism based on an inverted index structure (discussed in Section \ref{ss:inverted}). Lucene also applies 
                optimization techniques such as incremental indexing, where the index is divided into segments to which new documents are added. Segments can then be merged
                periodically. This process makes additions efficient because it minimizes physical index modifications \cite{Hatcher2004}.
                \\
                   
                Figure \ref{fig:scoring} shows the Lucene scoring function which is based on the \emph{Cosine Similarity} method mentioned previously. The formula describes how the Lucene similarity score ($score(q,d)$) between two documents $q$ and $d$ is calculated.
                Among the main factors included in this scoring function:
                \begin{enumerate}
                  \item Query Norm ($queryNorm(q)$): this factor is used to make the scores of different queries comparable. It is fixed for all documents and hence
                                   does not affect the ranking of the documents.
                  \item Length Norm ($norm(d)$): the document score is normalized by the document length, consequently weighting up shorter documents over longer ones.
                  \item Coordination factor ($coord(q,d)$): this factor reflects how many terms of the query $q$ occur in the target document $d$.
                \end{enumerate}

                \begin{figure}[ht]
                        \centering
                        \includegraphics[width=14cm]{pics/scoring.png}
                        \caption{Lucene scoring function}
                        \label{fig:scoring}
                \end{figure}

                The \ac{tf} parameter can be incorporated in the index: for each word, the corresponding list of document IDs is extended with the \ac{tf} value
                of the word in each document, i.e. the number of occurrences of the word in that document.
                
                Table \ref{t:invindextf} shows the inverted index of Table \ref{t:invindex} after extending it with the \ac{tf} parameter:
                \begin{table}[ht]
                
                \centering
                        \begin{tabular}{|l|l|}
                                \hline 
                                \textbf{Word} & \textbf{[docID, \ac{tf}]} \tabularnewline
                                \hline
                                \hline 
                                ``cat'' & [doc1, 2], [doc4, 1], [doc5, 3] \tabularnewline
                                \hline 
                                ``cloud'' & [doc2, 4], [doc4, 2], [doc7, 1] \tabularnewline
                                \hline
                        \end{tabular}
                \caption{Inverted index extended with the \ac{tf} parameter for each document}
                \label{t:invindextf}
                \end{table}

        \section{Match Refinement}
        \label{s:matchref}
                Matching two images is based on matching individual local features. A match hence consists of a pair of local features $(f_{1},f_{2})$ having similar descriptor vectors.
                The similarity between two descriptor vectors can be estimated through a distance measure between them, as described in section \ref{s:clustering}.
                \\

                As described previously, clustering algorithms are used to partition the feature space into groups (clusters), where each group should contain features which are
                similar to each other. Hence, each pair of features which are assigned to the same cluster can be considered as a matching pair.
                \\

                However, the clustering process is affected by several parameters, so the assignment of feature descriptors to a certain cluster can vary. Such parameters
                are, for example, the number of levels and the branching factor of the clusters tree in the case of \ac{HIKM} clustering. This leads to cases where
                two descriptor vectors are assigned to the same cluster although the corresponding local features are not good matches. Hence comes the need
                for a mechanism to differentiate between \emph{true} (good) matches and \emph{false} (bad) matches. 
                \\
                
                Figure \ref{fig:matchrefcomparison} shows a comparison between how two images should ideally match (picture on the left) and how the actual result
                looks (picture on the right). The picture on the right clearly shows \emph{bad} matches which need to be discarded.
                \begin{figure}[!htbp]
                        \centering
                                \includegraphics[width=5cm]{pics/expected.png}
                                \includegraphics[width=5cm]{pics/actual.png}
                                \caption{Comparison between the ideal (on the left) and the actual (on the right) matching of two images}
                                \label{fig:matchrefcomparison}
                \end{figure}
                
                By discarding most false matches, and keeping most good ones, the retrieved list of images can be re-ranked according to the refined results,
                hence enhancing the overall quality of search results.
                \\

                In order to detect whether a match is a good one or a bad one, several existing approaches make use of geometrical information and the spatial
                consistency of matches in order to decide whether a match should be kept or discarded. 
                The main idea behind spatial consistency is that the positions of good matches are correlated while the positions of false matches are not, i.e. 
                the matching local features will most probably have a similar spatial arrangement in both the query and the target image.
                \\

                One existing approach to deciding about the spatial consistency of local features is to estimate a global transformation (in terms of
                rotation, translation and scaling) between the target and the query image. Matches which are consistent with this transformation are considered good
                matches; the others are considered bad matches and can be discarded. Xu, Kangling and Liu \cite{Xu2010} used the \ac{RANSAC} algorithm to estimate
                such a transformation and to filter matches accordingly.
                \\

                \ac{RANSAC} is an iterative method for estimating the parameters of a mathematical model from a set of data points containing both
                inliers and outliers. It basically aims at finding the model which best fits the observed inliers.
                
                The main drawback of this approach is that the estimated transformation is affected by noisy matches, especially if the fraction of noisy matches
                is high. 
                
                Moreover, if the query and target images contain several objects undergoing different transformations, this approach will fail to 
                estimate one global transformation which fits all the matching objects.
                \\
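A minimal sketch of this filtering scheme is given below (not the implementation used in \cite{Xu2010}; the iteration count, inlier tolerance and the complex-number parametrization of the similarity transform are assumptions for illustration). Two correspondences suffice to fix a 2-D similarity transform, and matches inconsistent with the best transform found are discarded:

```python
import random, cmath

# RANSAC sketch for filtering feature matches against a global 2-D
# similarity transform (rotation + scale + translation). Points are
# complex numbers; a match is a (query_point, target_point) pair.
# Thresholds and iteration counts below are illustrative assumptions.

def estimate_similarity(p1, q1, p2, q2):
    """Similarity q = a*p + t from two correspondences (complex form)."""
    a = (q2 - q1) / (p2 - p1)        # 'a' encodes rotation and scale
    t = q1 - a * p1                  # 't' is the translation
    return a, t

def ransac_filter(matches, iters=200, tol=2.0, seed=0):
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        (p1, q1), (p2, q2) = rng.sample(matches, 2)
        if p1 == p2:                 # degenerate sample, skip
            continue
        a, t = estimate_similarity(p1, q1, p2, q2)
        inliers = [(p, q) for p, q in matches if abs(a * p + t - q) < tol]
        if len(inliers) > len(best):
            best = inliers
    return best

# Synthetic example: 8 matches follow one known transform, 3 are outliers.
a_true, t_true = 1.5 * cmath.exp(1j * 0.3), complex(10, -5)
good = [(complex(x, y), a_true * complex(x, y) + t_true)
        for x, y in [(0, 0), (4, 1), (2, 7), (9, 3),
                     (5, 5), (1, 8), (7, 2), (3, 6)]]
bad = [(complex(0, 1), complex(50, 50)), (complex(2, 2), complex(-30, 9)),
       (complex(6, 6), complex(0, 99))]
kept = ransac_filter(good + bad)
print(len(kept))  # the 8 consistent matches survive; the outliers do not
```

The drawbacks noted above are visible in this sketch: a high outlier fraction lowers the chance of sampling two good correspondences, and a scene with two independently moving objects has no single `(a, t)` that fits both groups of matches.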

                Another approach filters matches according to their translation, scale and rotation consistency, since good matches will most probably undergo 
                a consistent change in scale and rotation. 
                \\

                Jegou, Douze and Schmid \cite{Jegou2008} introduced a method named \ac{WGC}, in which scale and rotation information is used to build a histogram. 
                Each matched pair of local features is assigned to a bin according to its difference in scale and rotation. The peak of this
                histogram represents the dominant scale and rotation difference; the matches assigned to this bin are considered good matches and
                the others are discarded. This method is evaluated in the experiments included in Chapter \ref{chap:eval}.
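The voting idea can be sketched as follows (a simplification, not the original \ac{WGC} implementation: matches are assumed to arrive as precomputed log-scale and rotation differences, and the bin widths are illustrative):

```python
import math
from collections import Counter

# Sketch of the weak-geometry idea behind WGC: vote each match into a
# (scale-difference, rotation-difference) histogram and keep only the
# matches falling into the peak bin. Bin widths are assumptions.

def wgc_filter(matches, scale_bin=0.5, angle_bin=math.pi / 8):
    """matches: list of (log_scale_diff, angle_diff) per matched pair."""
    def key(m):
        ds, da = m
        return (round(ds / scale_bin), round(da / angle_bin))
    votes = Counter(key(m) for m in matches)
    peak, _ = votes.most_common(1)[0]       # dominant scale/rotation change
    return [m for m in matches if key(m) == peak]

# Seven matches agree on roughly the same scale/rotation change; two do not.
consistent = [(0.41, 0.30), (0.39, 0.28), (0.44, 0.33), (0.40, 0.31),
              (0.38, 0.29), (0.42, 0.27), (0.43, 0.32)]
noise = [(1.90, -1.2), (-0.75, 2.4)]
kept = wgc_filter(consistent + noise)
print(len(kept))  # only the consistent matches share the peak bin
```

Unlike the global-transformation approach, this voting scheme does not require estimating a model first, so a single gross outlier cannot bias the result; it only adds a stray vote to some non-peak bin.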

                \clearpage

        \section{Related Work}
        \label{s:related}

                There is a considerable body of previous research on image searching using local image features.
                Sivic and Zisserman \cite{Sivic2003} introduced the bag-of-features representation of images. \ac{SIFT}
                descriptors of the extracted interest regions are grouped into clusters using K-Means clustering. \ac{SIFT} patches are then labeled with the index of the corresponding cluster,
                and each image is represented by the frequency histogram of these cluster indexes. They also adapted text retrieval methods for image retrieval, such as similarity
                scoring with \emph{tf-idf} weighting and the inverted file structure.
                \\
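The combination of \emph{tf-idf} weighting with an inverted file can be sketched as follows (a toy illustration in the spirit of this approach, not the authors' implementation; image names, vocabulary indices and the scoring details are assumptions):

```python
import math
from collections import Counter, defaultdict

# Sketch of tf-idf scoring over a visual vocabulary: each image is a bag
# of visual-word indices, an inverted file maps every word to the images
# containing it, and scoring a query only touches those posting lists.

database = {
    "img1": [0, 0, 1, 2, 2, 2],
    "img2": [1, 1, 3, 3, 4],
    "img3": [0, 2, 2, 4, 4],
}

n_images = len(database)
inverted = defaultdict(list)          # visual word -> [(image, term freq)]
for img, words in database.items():
    for w, f in Counter(words).items():
        inverted[w].append((img, f / len(words)))

# idf: rare visual words are more discriminative
idf = {w: math.log(n_images / len(p)) for w, p in inverted.items()}

def score(query_words):
    """Accumulate tf-idf similarity through the inverted file."""
    scores = defaultdict(float)
    for w, fq in Counter(query_words).items():
        for img, f in inverted.get(w, []):
            scores[img] += (fq / len(query_words)) * idf[w] * f * idf[w]
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = score([2, 2, 0])
print(ranking[0][0])                  # img1 shares the most weighted words
```

Note that images sharing no visual word with the query (here \texttt{img2}) are never touched during scoring, which is what makes the inverted file efficient for large databases.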

                Based on this work, Nister and Stewenius \cite{Nister2006} used hierarchical clustering instead of flat clustering in order
                to provide a more efficient retrieval process when using large vocabularies of visual words, hence increasing the ability of the system to scale up to
                larger databases of images. However, they did not consider any information about the geometric layout of visual words.
                \\
                
                Jegou, Douze and Schmid \cite{Jegou2008} used Hamming embedding in order to increase the accuracy of clustering. They used binary signatures to divide each cluster into
                smaller sub-clusters, hence providing a compromise between coarse quantizers (few clusters, but more dense) and fine quantizers (many clusters, but less dense).
                They also used geometrical information in the post-processing phase in order to enhance the quality of the retrieval results.
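The core of this refinement can be sketched as follows (a simplification, not the original implementation; the signature length and the Hamming-distance threshold are illustrative assumptions):

```python
# Sketch of the Hamming-embedding refinement: two descriptors quantized
# to the same coarse cluster count as a match only if their binary
# signatures are close in Hamming distance. Threshold is an assumption.

def hamming_distance(a: int, b: int) -> int:
    """Hamming distance between two bit signatures stored as ints."""
    return bin(a ^ b).count("1")

def he_match(cluster_a, sig_a, cluster_b, sig_b, threshold=10):
    """Coarse quantizer agreement plus the fine binary-signature test."""
    return cluster_a == cluster_b and hamming_distance(sig_a, sig_b) <= threshold

# Same cluster, nearly identical signatures -> accepted.
print(he_match(7, 0b1011_0010_1100, 7, 0b1011_0010_1101))   # True
# Same cluster, but very different signatures -> rejected.
print(he_match(7, 0b1111_1111_1111, 7, 0b0000_0000_0000))   # False
```

The signature test refines a coarse quantizer without paying the cost of a fine one: cluster assignment stays cheap, while the binary comparison discards matches that only coincide at the coarse level.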