\chapter{Background}

\label{chp:background}

\section{Similarity}

Judging similarity, a central part of cognition, comes easily to humans. Why, for example, a fir tree is similar to a pine tree is intuitively clear to us. Formalizing our similarity judgment process, however, proves to be a very hard task. In the following sections, we therefore summarize some influential models for measuring similarity that have been proposed so far.

There is no grand unified theory of the human perception of similarity. The complexity of the proposed similarity models varies greatly, yet no single model is general enough to be usable for all tasks.

Similarity is a relationship between objects. This definition seems extremely vague, but it is a good way to accommodate the numerous uses of similarity. The exact properties of the relationship are not known in general. As we will see later, even properties as plausible as symmetry do not always hold, e.g. an apple need not be as similar to an orange as an orange is to an apple. The type of relationship is also not known in general.

Similarity is often represented by a binary relationship. Experiments with groups of identical and differing objects show that this is not accurate in all cases (e.g. the experiments carried out on pigeons by Wasserman \cite{wasserman1995pigeons}).
 
\subsection{Geometric Models}

Geometric models were among the earliest models of similarity. Despite some shortcomings, they are still widely used today, owing to their clarity and intuitiveness. The main idea is to place objects in $N$ dimensions and measure their distance with a metric. Arguably the most famous metric is the Euclidean distance:

\begin{equation}
\label{formula:eud}
d(x,y)= \sqrt{\sum\limits_{i=1}^N (x_i - y_i)^2};\ \ x,y \in \mathbb{R}^N
\end{equation}

Distance functions used in geometric models usually satisfy the axioms of a metric $d: X \times X \rightarrow \mathbb{R}$. We will later take a look at the evidence against these properties, which are required to hold $\forall x,y,z \in X$:

\begin{itemize}

\item Non-Negativity: $d(x,y) \geq 0$ 
\item Identity: $d(x,x)=0$
\item Symmetry: $d(x,y)=d(y,x)$
\item Triangle inequality: $d(x,z) \leq d(x,y) + d(y,z)$

\end{itemize}
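As a minimal sketch, the Euclidean distance from Equation \ref{formula:eud} can be implemented and checked against the four properties; the point values below are arbitrary examples:

```python
import math

def euclidean(x, y):
    """Euclidean distance between two points of equal dimension N."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

x, y, z = (0.0, 0.0), (3.0, 4.0), (6.0, 8.0)

# Non-negativity and identity
assert euclidean(x, y) >= 0
assert euclidean(x, x) == 0
# Symmetry
assert euclidean(x, y) == euclidean(y, x)
# Triangle inequality
assert euclidean(x, z) <= euclidean(x, y) + euclidean(y, z)

print(euclidean(x, y))  # 5.0
```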

Multi Dimensional Scaling (MDS) is a more advanced geometric approach. It takes any measure of pairwise proximity as input and constructs a geometric model of the data. The distance between every two objects in the geometric model is related to their distance in the input (cf. \cite[p.~5]{goldstone2005similarity}). 

The main purpose of MDS is to represent proximity data in a readable way. It is often easier to look at a plot rather than all the values in a proximity matrix. Another purpose is data compression. 

If we have $N$ objects, at most $N-1$ dimensions are needed for our MDS to perfectly reconstruct the proximity in the input. However, we can reduce the number of dimensions if we accept some loss of proximity information. Let us look at an example (cf. \cite[p.~6]{goldstone2005similarity}). Assume we have the following similarity ratings:

\begin{itemize}
\item $Similarity(Russia, Cuba) = 7$
\item $Similarity(Russia, Jamaica) = 1$
\item $Similarity(Cuba, Jamaica) = 8$
\end{itemize}

If we build a one-dimensional model, we cannot place Russia near Cuba and far away from Jamaica, because Cuba is near Jamaica. However, if we build a two-dimensional model, this is possible. MDS provides information about the cognitive process: often, the axes can be labeled with interpretations. In our example, the labels are ``political affiliation'' and ``climate''.

The goal of MDS is to minimize the badness of fit between the distance of the inputs $\delta_{i,j}$ and the distance of the outputs $||x_i-x_j||$, as shown in Equation \ref{formula:minmds}. We could, for example, use a pairwise dissimilarity matrix as input and measure distance between the outputs with Euclidean distance. 

\begin{equation}
\label{formula:minmds}
\begin{aligned}
& \underset{x_1,...,x_n}{\text{arg min}}
& & \sum\limits_{i<j} (||x_i-x_j||-\delta_{i,j})^2 \\
\end{aligned}
\end{equation}

Note that the solution $x_1,...,x_n$ is not unique: we can translate, rotate and reflect the points without changing their pairwise distances.
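The objective in Equation \ref{formula:minmds} is straightforward to state in code. The sketch below evaluates the badness of fit of a candidate configuration; the dissimilarities and coordinates are made-up examples:

```python
import math
from itertools import combinations

def stress(points, delta):
    """Sum of squared differences between model distances and the
    input dissimilarities delta[(i, j)] for i < j (Equation minmds)."""
    total = 0.0
    for i, j in combinations(range(len(points)), 2):
        d = math.dist(points[i], points[j])
        total += (d - delta[(i, j)]) ** 2
    return total

# Made-up dissimilarities that a 2-D configuration reproduces exactly
delta = {(0, 1): 3.0, (0, 2): 4.0, (1, 2): 5.0}
points = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
print(stress(points, delta))  # 0.0 -- a perfect fit

# Translating all points leaves the badness of fit unchanged
shifted = [(x + 1.0, y - 2.0) for x, y in points]
assert stress(shifted, delta) == stress(points, delta)
```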

\subsection{Featural Models}

Featural models are motivated by evidence against the properties of the metrics used in geometric models. For example, in \cite{podgorny1979reaction}, if two identical letters are shown side by side, the reaction time for deciding whether the letters are the same varies from letter to letter. This is a violation of the identity property of geometric models.

Using psychological experiments, Tversky \cite{tversky1977features} found a good example for a violation of the symmetry property: North Korea is judged more similar to Red China than Red China is to North Korea.

According to the triangle inequality assumption, in Figure \ref{fig:triangle}, the distance between $A$ and $C$ cannot be larger than the distance between $A$ and $E$ plus the distance between $C$ and $E$.

\begin{figure}[h!]
	\centering
	\includegraphics[scale=0.65]{triangle.png}
	\caption{Triangle Inequality}
	\label{fig:triangle}
\end{figure}

According to the segmental additivity assumption, if $A$, $B$ and $C$ lie on a straight line, then the distance between $A$ and $C$ must equal the distance between $A$ and $B$ plus the distance between $B$ and $C$.

If segmental additivity is combined with the triangle assumption, and $E$ forms a right triangle when combined with $A$ and $C$, then one of two cases must hold:

\begin{enumerate}
\item The distance between $A$ and $E$ must be larger than or equal to the distance between $A$ and $B$. Additionally, the distance between $E$ and $C$ must be larger than or equal to the distance between $B$ and $C$.
\item The distance between $A$ and $E$ must be larger than or equal to the distance between $B$ and $C$. Additionally, the distance between $E$ and $C$ must be larger than or equal to the distance between $A$ and $B$.
\end{enumerate}

Again, Tversky carried out psychological experiments and found a violation of the triangle assumption if it was combined with the segmental additivity assumption (\cite{tversky1982similarity}). Test subjects had to assess the similarity of color/length-pairs to each other. The pairs were chosen according to Figure \ref{fig:triangle}, e.g. $E$ had the same size as $A$ and the same redness as $C$. People's dissimilarity ratings were not consistent with what we just derived, e.g. the distance between $A$ and $E$ was rated smaller than the distance between $A$ and $B$.

\begin{equation}
\label{formula:cm}
S(a,b)=xf(a \cap b)-yf(a  \setminus  b)-zf(b  \setminus  a)
\end{equation}

The contrast model addresses the problems we just described. It was proposed in \cite{tversky1977features} and is defined by Equation \ref{formula:cm}. $S$ is the resulting similarity, $f$ measures the salience of a set of features, and $x$, $y$ and $z$ weight the different components. $a \cap b$ denotes the features that are in both $a$ and $b$. $a \setminus b$ denotes the features that are in $a$, but not in $b$. $b \setminus a$ works the other way round.
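With feature sets and the set cardinality as a simple choice for $f$ (an assumption; the model allows other salience measures), Equation \ref{formula:cm} can be sketched as follows; the feature sets and weights are made up:

```python
def contrast_similarity(a, b, x=1.0, y=0.5, z=0.5, f=len):
    """Tversky's contrast model (Equation cm): weighted common features
    minus weighted distinctive features of a and b."""
    return x * f(a & b) - y * f(a - b) - z * f(b - a)

# Made-up feature sets for illustration
fir = {"tree", "needles", "cones", "flat_needles", "short"}
pine = {"tree", "needles", "cones", "long_needles"}

print(contrast_similarity(fir, pine))  # 1.0*3 - 0.5*2 - 0.5*1 = 1.5

# Unequal weights y != z reproduce asymmetric similarity judgments
assert contrast_similarity(fir, pine, y=0.9, z=0.1) != \
       contrast_similarity(pine, fir, y=0.9, z=0.1)
```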

\subsection{Transformational Models}

The idea of transformational models is that the representation of one object can be transformed into the representation of another. The easier this transformation is, the more similar the objects are. How the difficulty of the transformation is measured, and which transformations are available, depends on the specific model.

We will take the Imai model \cite{imai1977pattern} as an example, which uses strings of $X$s and $O$s as stimuli. Four transformations are possible:

\begin{itemize}
\item Mirror image (e.g. $XXXXXOO \to OOXXXXX$)
\item Phase shift (e.g. $XXXXXOO \to XXXXOOX$)
\item Reversal (e.g. $XXXXXOO \to OOOOOXX$)
\item Wave length (e.g. $XXOOXXOO \to XOXOXOXO$)
\end{itemize} 

The more transformations we need to make two strings identical, the less similar they are. Additionally, strings that can be made identical in multiple ways are more similar.
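The transformations above can be sketched in code, together with a breadth-first search for the minimum number of transformations; the wave-length change is omitted here, since it is only defined for periodic patterns:

```python
from collections import deque

# Three of the Imai transformations on X/O strings
def mirror(s):      # e.g. XXXXXOO -> OOXXXXX
    return s[::-1]

def phase_shift(s): # e.g. XXXXXOO -> XXXXOOX
    return s[1:] + s[0]

def reversal(s):    # e.g. XXXXXOO -> OOOOOXX
    return s.translate(str.maketrans("XO", "OX"))

def transformation_distance(start, goal):
    """Minimum number of transformations turning start into goal
    (breadth-first search; None if no sequence exists)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        s, steps = queue.popleft()
        if s == goal:
            return steps
        for t in (mirror(s), phase_shift(s), reversal(s)):
            if t not in seen:
                seen.add(t)
                queue.append((t, steps + 1))
    return None

assert mirror("XXXXXOO") == "OOXXXXX"
assert phase_shift("XXXXXOO") == "XXXXOOX"
assert reversal("XXXXXOO") == "OOOOOXX"
print(transformation_distance("XXXXXOO", "OOXXXXX"))  # 1
```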

Transformational models sometimes lack versatility for practical applications. Let us, for example, compare two digital images with a transformational model. Even with very sophisticated transformation operations, there would always be many possible sequences of transformations, making the time complexity large. In addition, we would often be forced to change the intensity of individual pixels, because no other identity-producing transformation exists.

\section{Taxonomic and Thematic Thinking}

\label{sec:taxthe}

This work is about a rather new model for measuring similarity: the DPM. Its theoretical foundations are taxonomic and thematic thinking, which are introduced in this section.

Taxonomic thinking tries to identify common features between objects. The more common features can be identified, the larger the similarity. Let us, for example, compare a table to a chair. A possible common feature is ``number of legs''. Because both have four legs, they are similar if we use taxonomic thinking.

Thematic thinking tries to find a theme that connects the objects. This theme is then used for comparison. In our example, a possible theme is ``furniture''. The likelihood of finding a table at a furniture store is nearly as high as that of finding a chair there. Therefore, tables and chairs are similar.

The usual definition of a theme is any ``temporal, spatial, causal or functional relation between things'' \cite[p.~3]{estes2011thema}.  In our introductory Figure \ref{fig:triad}, the theme is size. In other words, the two objects triangle and square are thematically related via their size. Different viewers of the test figure might find different thematic relations in it, e.g. color, occurrence in a company logo and also subjective ones like beauty.

Table \ref{tab:propstt} summarizes the properties of taxonomic and thematic thinking. 

Because we only have our senses to experience reality (at least according to a positivist world-view), objects that we compare are represented as stimuli. Stimuli are said to be separable if they are either represented as 0/1-values (i.e. predicates) or can be counted. Other kinds of stimuli are referred to as integral stimuli. Humans can identify separable stimuli quickly and easily.

If we think back to our introductory example in Figure \ref{fig:triad}, the number of edges was a separable stimulus, while the object size was integral. Separable stimuli are associated with taxonomic thinking, while integral stimuli belong to the world of thematic thinking. We can argue that if we compare images, we have both separable and integral stimuli to compare at the same time. This is further evidence in support of DPMs.

Taxonomic thinking and thematic thinking are not synonyms for similarity and distance. It is possible to measure distance taxonomically and vice versa. We could have said the chair and the table in our example have a ``number of legs'' difference of zero. Analogously, we could have said that both chair and table have the common feature ``sold at furniture store''.

However, it makes sense to associate taxonomic thinking with similarity, because similarity measurement works best on separable stimuli. In an analogous manner, distance measurement works best on integral stimuli (cf. \cite[p.~537]{eidenberger2012handbook}).

To measure distance between two concepts, it is often helpful to find a commonality first. We call such concepts alignable. For example, while comparing a car and a motorcycle, we can find the commonality ``wheels'' and use it to measure distance. Concepts with no meaningful commonalities are called nonalignable.

Highly alignable concepts tend to lead to taxonomic thinking, because there are enough common features to make a comparison. Poorly alignable concepts tend to lead to thematic thinking, because there are not enough common features for taxonomic thinking to work with. Humans even go as far as inventing unlikely thematic relationships in this case. Distance can be converted into similarity with a generalization function, which allows us to mix similarity and distance in DPMs. The exact form of the generalization function is still being disputed.

Depending on whether we measure or count, different measures (for want of a better name) are used for taxonomic and thematic thinking. The dot product and the number of co-occurrences are typical choices for taxonomic thinking. The city block distance and the Hamming distance (features that are in one object, but not the other) are typical choices for thematic thinking.

\begin{table}
\centering
\begin{tabular}{|c||c|c|} \hline
\textbf{Property} & \textbf{Taxonomic} & \textbf{Thematic} \\ \hline \hline
Stimuli & Separable & Integral \\ \hline
Concern & Similarity & Distance \\ \hline
Measurement & Dot Product & City Block Distance \\ \hline
Counting & Co-Occurrences & Hamming Distance \\ \hline
\end{tabular}
\caption{Properties of Taxonomic and Thematic Thinking (cf. \cite[p.~537]{eidenberger2012handbook})}
\label{tab:propstt}
\end{table}

\section{Generalization Functions}

\label{sec:genf}

We already heard that, in nature, similar objects often share the same properties, and how this is useful for cognition. This can be formalized with generalization functions: the greater the difference between objects, the lower the probability that they belong to the same class.

Identical objects should belong to the same class with a probability of $p=1$. From this point, the probability should fall as distance increases. How exactly it should fall is still being disputed. Various generalization functions have been proposed that had varying success with different types of stimuli.

Generalization functions allow us to convert distance into similarity, because we can simply use the probability of belonging to the same class as a measure of  similarity. This is an important part of DPMs, because we need a way of combining similarity and distance. 

Note that we handle negative distances with a symmetry assumption that simplifies distance measurement: the distance between object $x$ and $y$ yields the same probability as the distance between object $y$ and $x$.

\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=4cm]{dirac.png}
  \caption{Boxing}
  \label{fig:dirac}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=4cm]{gaussian.png}
  \caption{Gaussian}
  \label{fig:gaussian}
\end{subfigure}%

\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=4cm]{shepard.png}
  \caption{Shepard}
  \label{fig:shepard}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=4cm]{tenenbaum.png}
  \caption{Tenenbaum}
  \label{fig:tenenbaum}
\end{subfigure}%
\caption{Generalization Functions}
\label{fig:gens}
\end{figure}

Figure \ref{fig:dirac} shows the Boxing generalization function. The form that we show allows some distance between objects that are rated as equal.

Figure \ref{fig:gaussian} shows the Gaussian generalization function. The basic idea of this function is that the probability of belonging to the same class should decay exponentially with distance.

Figure \ref{fig:shepard} shows the Shepard generalization function. Compared to the Gaussian generalization function, it falls off more steeply near the peak: a rise in distance leads to a faster decline in probability. The Shepard generalization function represents Shepard's Universal Law of Generalization \cite{shepard1987toward}. It states that the probability that a stimulus is associated with a certain response is proportional to $e^{-distance}$ in an appropriate psychological space.
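The Gaussian and Shepard decays can be compared numerically. The sketch below assumes unit scale parameters, an arbitrary choice:

```python
import math

def gaussian_generalization(distance):
    """Probability decays with the squared distance (unit scale assumed)."""
    return math.exp(-distance ** 2)

def shepard_generalization(distance):
    """Shepard's Universal Law: probability proportional to e^{-distance}."""
    return math.exp(-distance)

# Identical objects belong to the same class with probability 1
assert gaussian_generalization(0) == 1.0
assert shepard_generalization(0) == 1.0

# Near the peak, the Shepard function falls off more steeply ...
assert shepard_generalization(0.5) < gaussian_generalization(0.5)
# ... while at larger distances the Gaussian decays faster
assert shepard_generalization(2.0) > gaussian_generalization(2.0)
```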

Figure \ref{fig:tenenbaum} shows the Tenenbaum generalization function. It can be seen as a more general version of the Shepard generalization function, because it is based on Bayesian inference and extends generalization to multiple consequential stimuli. We can see that its peak is broader than the peak of Shepard's generalization function. This is an intuitive result of dealing with multiple stimuli. If we get more positive examples for learning, we get surer about the classification of new examples.

Details about the Tenenbaum generalization function can be found in \cite{tenenbaum2001generalization}. Note that, because the concrete size of the peak depends on the concrete series of stimuli, the Tenenbaum generalization function is logically very similar to our Boxing generalization function.

\section{Measures}

We already mentioned that we need measures for our DPM in Equation \ref{formula:simple-dpm}. In the following section, we are going to introduce two classes of measures. The distinction is made on a rather technical basis: the function signature. Bear in mind that this is not related to the distinction into taxonomic and thematic thinking, which is discussed in Section \ref{sec:taxthe}.

\subsection{Quantitative Similarity Measures}

The parameters of quantitative measures are at least interval-scaled. In simple terms, this means that they can be compared and subtracted, while divisions need not be possible. The operation results are not required to be interval-scaled themselves. In even simpler terms, quantitative similarity measures take real numbers as inputs.

We already introduced Euclidean distance in Equation \ref{formula:eud}. It is a quantitative measure for distance and therefore associated with thematic thinking. Another classical example for a quantitative measure is the dot product defined in Equation \ref{formula:dot}. Unlike Euclidean distance, it measures similarity and is therefore associated with taxonomic thinking.

\begin{equation}
\label{formula:dot}
s(x,y)= \sum\limits_{i=1}^N x_i y_i;\ \ x,y \in \mathbb{R}^N
\end{equation}
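A small sketch contrasting the two quantitative measures, the dot product from Equation \ref{formula:dot} and the Euclidean distance from Equation \ref{formula:eud}; the sample vectors are arbitrary:

```python
import math

def dot_product(x, y):
    """Quantitative similarity measure (Equation dot) -- taxonomic."""
    return sum(xi * yi for xi, yi in zip(x, y))

def euclidean(x, y):
    """Quantitative distance measure (Equation eud) -- thematic."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
print(dot_product(x, y))  # 1*4 + 2*5 + 3*6 = 32.0
print(euclidean(x, y))    # sqrt(9 + 9 + 9), about 5.196
```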

\subsection{Predicate-based Similarity Measures}

\label{sec:pred}

Predicate-based measures work with nominal parameters. Because one predicate is as good as another, it does not make sense to work with them individually. Therefore, two description vectors $f_1$ and $f_2$ are combined into variables:

\begin{itemize}
\item $a$ counts the number of predicates that are present in both description vectors
\item $b$ counts the number of predicates that are present in $f_1$, but not in $f_2$
\item $c$ counts the number of predicates that are present in $f_2$, but not in $f_1$
\item $d$ counts the number of predicates that are present in neither description vector
\end{itemize}

Variables $a$, $b$, $c$ and $d$ are the parameters of a predicate-based similarity measure. As before, there is no need for the result to be interval-scaled. Also note that with $K$ the number of feature vector elements, $d=K-a-b-c$. Predicate-based measures are allowed to ignore some of the variables; $d$ in particular is not used in many measures.

\begin{equation}
\label{formula:hamming}
d(a,b,c,d)= b+c
\end{equation}

Let us conclude this section with two examples. The Hamming distance in Equation \ref{formula:hamming} focuses on the differences between two feature vectors and on thematic thinking. With $f_1 = \left( \begin{smallmatrix} 1\\ 1\\ 0 \end{smallmatrix} \right)$ and $f_2 = \left( \begin{smallmatrix} 0\\ 1\\ 0 \end{smallmatrix} \right)$, we get $a=1$, $b=1$, $c=0$ and $d=K-a-b-c=3-1-1-0=1$, and eventually a Hamming distance of $b+c=1$.

The number of co-occurrences in Equation \ref{formula:co} focuses on similarity and taxonomic thinking. With our feature vectors from before, $s=a=1$.

\begin{equation}
\label{formula:co}
s(a,b,c,d)=a
\end{equation}
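The two worked examples can be reproduced with a short sketch that derives $a$, $b$, $c$ and $d$ from the binary description vectors:

```python
def predicate_counts(f1, f2):
    """Combine two binary description vectors into (a, b, c, d)."""
    a = sum(1 for p, q in zip(f1, f2) if p and q)          # in both
    b = sum(1 for p, q in zip(f1, f2) if p and not q)      # only in f1
    c = sum(1 for p, q in zip(f1, f2) if not p and q)      # only in f2
    d = sum(1 for p, q in zip(f1, f2) if not p and not q)  # in neither
    return a, b, c, d

def hamming(a, b, c, d):         # Equation hamming -- thematic
    return b + c

def co_occurrences(a, b, c, d):  # Equation co -- taxonomic
    return a

f1, f2 = (1, 1, 0), (0, 1, 0)
a, b, c, d = predicate_counts(f1, f2)
assert (a, b, c, d) == (1, 1, 0, 1)
assert d == len(f1) - a - b - c   # d = K - a - b - c
print(hamming(a, b, c, d))        # 1
print(co_occurrences(a, b, c, d)) # 1
```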

\section{Feature Extraction}

In his seminal article \cite{attneave1954some}, Attneave argues that visual information reaching the eye is highly redundant and calls for a more economical representation of this information. He also states that attaining this compression is one of the major goals of visual perception. Figure \ref{fig:cat} shows an example for compression. The drawing was made by abstracting 38 points of maximum curvature and connecting them with appropriate straight lines.

\begin{figure}[h!]
	\centering
	\includegraphics[scale=0.2]{cat.png}
	\caption{Contours of a Sleeping Cat (cf. \cite[p. 185]{attneave1954some})}
	\label{fig:cat}
\end{figure}

We can see that, despite a lot of information being lost during the compression, the sleeping cat is still clearly recognizable. The points of maximum curvature can be seen as a summary of the original media object that is still detailed enough for the task at hand.

We call such summaries features, and the process of generating them is called feature extraction. While data compression is beneficial, it is not the most useful effect of feature extraction: feature vectors make objects comparable. For example, if we compare two digital images bit by bit, we will only get a good similarity assertion if they are exactly equal.

Even the slightest difference in camera angle, lighting, etc. could lead to very different binary representations and thus to unusable similarity judgments. By using feature vectors as an intermediate format, our similarity judgments become more robust.

We used image feature extraction as an example, but the idea is not restricted to images. Many algorithms exist for video and sound feature extraction. The concrete algorithm depends heavily on the issue to be solved. It is a reasonable assumption that, in the future, a feature extraction algorithm will be found that works equally well for all types of media objects and all domains. Humans are able to make similarity judgments regardless of stimulus type, so this grand goal should be realizable.

Usually, features span more than one dimension. The cat in our example could be identified by the Cartesian coordinates of the points in the precise order that lines should be drawn between them. Data like these are best stored in a vector with numeric elements that we call a feature vector.  

Usually, feature vectors are compared element-wise. This causes problems if two feature vectors have different dimensions or the objects they summarize have different dimensions. Most feature extraction algorithms always produce feature vectors of equal length -  sometimes simply by filling missing dimensions with zeros. 

The problem of different input dimensions can be solved by splitting the input into equal sizes or by using an algorithm that is independent of the input dimensions.  

In the following section, we are going to learn a concrete algorithm to extract feature vectors from images.  

\section{Histogram of Oriented Gradients}

HOG divides images into parts and creates a histogram for each part. The histograms describe the gradient orientations of each part, which can be calculated from the horizontal and vertical gradients. HOG does not know about the concrete edge positions - it is only concerned with the distribution of gradient orientations in the histograms (cf. \cite[p.~2--3]{dalal2005histograms}).

For a task like human detection, the high-level approach of HOG is well suited, because depending on body position, camera position and lighting, humans look very different in images. Determining the exact location of all body parts would be too much effort for a binary classification, while HOG can be calculated efficiently and robustly. The simplified steps of the HOG algorithm from input image to person/non-person classification are:

\begin{enumerate}
\item Computation of gradients
\item Voting into spatial and orientation cells
\item Normalization over overlapping blocks
\item Support vector machine classification
\end{enumerate}

The original HOG paper \cite{dalal2005histograms} explores many possibilities of implementing the algorithm. To keep things simple, we will describe one variant with good detection performance here and visualize the steps with the HOG Glasses project\footnote{\url{http://saturday.csail.mit.edu:8080} last accessed 2015-02-09}. Figure \ref{img:example} shows our input image.

First, gradients are computed with the centered 1-D masks $filter_{h}=\left( \begin{smallmatrix} -1 & 0 & 1\end{smallmatrix} \right)$ and $filter_{v}=\left( \begin{smallmatrix} -1 & 0 & 1\end{smallmatrix} \right)^T$. Gradient calculation is done for each color channel, but only the gradient vector with the largest norm is used. Figure \ref{img:gradient} shows an example of a gradient.
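This first step can be sketched for a single-channel image; the per-channel maximum-norm selection and border handling are omitted for brevity:

```python
import math

def gradients(img):
    """Apply the centered [-1, 0, 1] masks to a single-channel image given
    as a list of rows; border pixels are skipped for brevity. Returns
    per-pixel gradient magnitude and orientation in [0, 180) degrees."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    ang = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = img[r][c + 1] - img[r][c - 1]  # horizontal mask
            gy = img[r + 1][c] - img[r - 1][c]  # vertical mask
            mag[r][c] = math.hypot(gx, gy)
            ang[r][c] = math.degrees(math.atan2(gy, gx)) % 180.0
    return mag, ang

# A vertical edge: dark left half, bright right half
img = [[0, 0, 9, 9]] * 4
mag, ang = gradients(img)
print(mag[1][1], ang[1][1])  # 9.0 0.0 -- horizontal gradient across the edge
```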

\begin{figure}
\centering
\begin{subfigure}{.33\textwidth}
  \centering
  \includegraphics[height=2.5cm]{h1.jpg}
  \caption{Input Image}
  \label{img:example}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
  \centering
  \includegraphics[height=2.5cm]{h2.jpg}
  \caption{Image Gradient}
  \label{img:gradient}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
  \centering
  \includegraphics[height=2.5cm]{h3.jpg}
  \caption{HOG Visualization}
  \label{img:hog}
\end{subfigure}%
\caption{HOG Algorithm Stages}
\label{fig:test}
\end{figure}

Second, each pixel casts a vote with its gradient magnitude for a histogram bin. The bins are evenly spaced between 0\degree~and 180\degree~into 9 parts. The gradient orientation determines the bin to vote for. A cell is a small subpart of an image and each cell has its own histogram. Anti-aliasing is done between neighboring bins.

Third, the cells are grouped into larger units: the so-called blocks. Blocks are rectangular and overlapping, so that each cell is part of more than one block. The concatenation of all histograms within a block forms a vector $v$, which is normalized with the L2-norm $v_{normal}=\frac{v}{\sqrt{||v||_2^2+\epsilon^2}}$, $\epsilon$ being a small constant. Figure \ref{img:hog} visualizes this step.
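Steps two and three can be sketched as follows; splitting each vote linearly between the two nearest bins is one way to implement the anti-aliasing:

```python
import math

NUM_BINS, BIN_WIDTH = 9, 180.0 / 9   # 9 bins over [0, 180) degrees

def vote(hist, magnitude, angle):
    """Split the magnitude vote linearly between the two nearest bins
    (one way to implement the anti-aliasing between neighboring bins)."""
    pos = (angle / BIN_WIDTH) - 0.5   # position relative to bin centers
    lo = math.floor(pos)
    frac = pos - lo
    hist[lo % NUM_BINS] += magnitude * (1 - frac)
    hist[(lo + 1) % NUM_BINS] += magnitude * frac

def normalize_block(block_hists, eps=1e-5):
    """Concatenate all cell histograms of a block and L2-normalize:
    v / sqrt(||v||^2 + eps^2)."""
    v = [x for hist in block_hists for x in hist]
    norm = math.sqrt(sum(x * x for x in v) + eps ** 2)
    return [x / norm for x in v]

hist = [0.0] * NUM_BINS
vote(hist, magnitude=1.0, angle=40.0)  # 40 deg lies between two bin centers
print(hist[1], hist[2])                # 0.5 0.5

normalized = normalize_block([hist])
assert abs(sum(x * x for x in normalized) - 1.0) < 1e-6
```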

Finally, a SVM with a linear kernel is trained with HOG descriptors. The resulting SVM model can then be used to classify new images. More information about SVM can be found in Section \ref{sec:svm}.

\section{MPEG-7 Visual Standard}

In this section, we will describe two additional ways of extracting feature vectors taken from the MPEG-7 Visual standard. In principle, HOG alone is enough to produce satisfactory results. However, HOG creates quantitative feature vectors with many decimal places because of the normalization step. The integral numbers created by the two following MPEG-7 descriptors are much easier to turn into predicates and allow us to test predicate-based measures, too.

The aim of the MPEG-7 Visual standard is to provide standardized descriptions of images and video. The descriptions alone are not very useful, but they help us to identify, categorize and filter media objects. MPEG-7 Visual should make it easier to implement multimedia applications (cf. \cite[p.~696]{sikora2001mpeg}).  

MPEG-7 descriptors can be classified into general and domain-specific descriptors. This work deals with the following general descriptors, that will be used during the implementation in Chapter \ref{chp:imp}:

\begin{enumerate}
\item Scalable Color Descriptor (SCD): Creates a histogram of the color distribution of an image
\item Edge Histogram Descriptor (EHD): Captures the spatial distribution of the edges of an image
\end{enumerate} 

\subsection{Scalable Color Descriptor}

SCD divides the HSV (hue-saturation-value) color space into 256 bins of uniform size. This includes bins for hue, saturation and value. Each pixel is quantized into its nearest hue, saturation and value bins. This is called a color histogram. Algorithm \ref{alg:hist} shows a simplified algorithm for its generation.

The histogram values are truncated to 11-bit integers and nonlinearly mapped into a 4-bit representation. This step makes small values, which have a low probability of occurring in an image, more significant. At this point, the descriptor can be used, although there is one drawback: its space requirement, which is also related to the time it takes to compare two descriptors. This is important, because hardly any practical task needs just one descriptor without performing some comparison process.

To make the SCD smaller, a Haar transformation is employed (cf. \cite[p.~9]{ohm2001mpeg}). Neighboring bin values are combined into one bin value, which from then on represents a Haar coefficient. Each pass halves the number of bins and therefore the histogram size. SCD derives its name from the fact that different compression levels can be compared to each other.
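A single pass of this idea can be sketched as follows. The sketch keeps plain sums and differences and omits the rescaling a full Haar transformation would apply:

```python
def haar_pass(hist):
    """One Haar transformation pass: neighboring bin values are combined
    into sums (the halved histogram) and differences (the coefficients
    needed to undo the pass). Simplified sketch without rescaling."""
    sums = [hist[i] + hist[i + 1] for i in range(0, len(hist), 2)]
    diffs = [hist[i] - hist[i + 1] for i in range(0, len(hist), 2)]
    return sums, diffs

hist = [4, 2, 5, 5, 0, 8, 1, 3]  # 8-bin toy histogram
sums, diffs = haar_pass(hist)
print(sums)  # [6, 10, 8, 4] -- each pass halves the number of bins

# The original histogram is recoverable from sums and diffs
restored = [x for s, d in zip(sums, diffs) for x in ((s + d) / 2, (s - d) / 2)]
assert restored == [float(v) for v in hist]
```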

Usually, SCD-comparisons arise from image retrieval tasks. The specification suggests using the L1-norm in the Haar domain or directly in the histogram. In Chapter \ref{chp:imp}, we will also explore the use of other measures in the latter domain.

\begin{algorithm}
\KwIn{$h$, $s$, $v$ HSV image channels}
\KwOut{$hist$ color histogram}
	\While{x=h->next() \&\& y=s->next() \&\& z=v->next()}
	{
		++hist[getNearestBin(x, HUE\_BIN\_SIZE)];\\
		++hist[getNearestBin(y, SATURATION\_BIN\_SIZE)+OFFSET1];\\
		++hist[getNearestBin(z, VALUE\_BIN\_SIZE)+OFFSET2];\\
	}
\caption{Example for Generating a Color Histogram}
\label{alg:hist} % \label has to be placed AFTER \caption to produce correct cross-references.
\end{algorithm}

\subsection{Edge Histogram Descriptor}

EHD searches for edges in an image and puts information about them into a histogram. The image is divided into 16 identically-sized blocks. Each block occupies 5 bins in the histogram: 0\degree~(horizontal), 45\degree, 90\degree~(vertical), 135\degree~and isotropic (not oriented). The blocks are further subdivided until a level of $2\times2$ pixels is reached.

On this micro-level, simple edge detection operators are applied to the intensity values, e.g. the operator for the 45\degree-bin $op_{edge} = \left( \begin{smallmatrix} 0 & \sqrt{2} \\ -\sqrt{2} & 0 \end{smallmatrix} \right)$. If the result of the maximal operator exceeds a certain threshold, the average intensity value of the micro-level is added to the appropriate bin. Finally, the bin values are normalized and compressed to 3 bits to keep descriptor size low.
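The micro-level classification can be sketched as follows. The 45\degree-operator is the one from the text; the remaining four operators are assumptions following the same pattern:

```python
import math

R2 = math.sqrt(2)
# Edge detection operators for a 2x2 micro-block; the 45-degree operator
# is taken from the text, the other four are assumed for this sketch
OPERATORS = {
    "horizontal": [[1, 1], [-1, -1]],
    "vertical":   [[1, -1], [1, -1]],
    "45deg":      [[0, R2], [-R2, 0]],
    "135deg":     [[R2, 0], [0, -R2]],
    "isotropic":  [[2, -2], [-2, 2]],
}

def classify_microblock(block, threshold=1.0):
    """Apply all operators to the 2x2 intensity block and return the
    winning edge type, or None if the strongest response does not
    exceed the threshold (then the block casts no vote)."""
    responses = {
        name: abs(sum(op[r][c] * block[r][c] for r in range(2) for c in range(2)))
        for name, op in OPERATORS.items()
    }
    winner = max(responses, key=responses.get)
    return winner if responses[winner] > threshold else None

print(classify_microblock([[9, 9], [0, 0]]))  # horizontal
print(classify_microblock([[5, 5], [5, 5]]))  # None -- a flat block
```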

As before, the specification suggests using the L1-norm for EHD-comparisons - this time directly with the 3 bit bins. Again, we are going to explore the use of other measures in Chapter \ref{chp:imp}.

\section{Support Vector Machines}

\label{sec:svm}

Now that we know how to transform an image into a feature vector, we return to our binary classification task. We could directly calculate distances between a known positive feature vector (e.g. contains a pedestrian) and an unknown feature vector. But with this approach, we have to set an arbitrary boundary and the classification performance would not be very high. Therefore, we introduce a classic machine learning algorithm for binary classification: the Support Vector Machine.

\subsection{Overview}

SVMs try to separate data by drawing a line between the two classes. If the data are separable in this way, there are usually many ways to draw this line. The question is: which line to take? Some algorithms, like the Perceptron algorithm, just take the first one they find. SVMs answer this question by maximizing the distance between the line and both classes.

\begin{figure}
\centering
\includegraphics[height=6cm]{svm.png}
\caption{Principle of a SVM (cf. \cite[p.~327]{bishop2006pattern})}
\label{img:svm}
\end{figure}

The rationale of SVMs is to place this line (i.e. decision boundary) as far away from both classes as possible. This reduces the misclassification risk of new data points. The principle of the SVM algorithm can be seen in Figure \ref{img:svm}. The light and dark blue points represent class A, while the orange and red points represent class B. The solid red line is the decision boundary. New data will be classified as A if it is below the boundary and as B otherwise. 

The optimization goal of the algorithm is to find the boundary that maximizes this distance; the data points lying closest to the boundary span it and are called support vectors. Our image shows them in light blue and orange. Vapnik, inventor of the first SVM, gives a good introduction in \cite{vapnik2000nature}.

SVMs are defined by the linear model in Equation \ref{formula:svmmodel}. Parameters $\vec{w}$ and $b$ make up a SVM model. $N$ training data points are stored in $\vec{x_1}, ... \vec{x_N}$ with ground truths $t_1,...,t_N$, where $t_n \in \{-1,1\}$ are the class labels.

\begin{equation}
\label{formula:svmmodel}
y(\vec{x})=\vec{w}^T\vec{x}+b
\end{equation}

A detailed, step-by-step solution is out of scope for this work; it is provided, for example, in \cite[pp.~326--336]{bishop2006pattern}. We find the parameters by formulating the problem in Figure \ref{img:svm} as a constrained optimization problem. We use the Lagrange approach and work with the dual form of the problem. Finally, the Karush--Kuhn--Tucker conditions help us to find a solution with quadratic programming.
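As a concrete illustration, the following sketch fits a (nearly) hard-margin linear SVM on a toy data set and reads off the parameters $\vec{w}$ and $b$ of Equation \ref{formula:svmmodel}. The use of scikit-learn and the specific toy points are assumptions of this example, not part of the thesis experiments:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable classes with labels t_n in {-1, 1}.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
              [3.0, 3.0], [4.0, 3.0], [3.0, 4.0]])
t = np.array([-1, -1, -1, 1, 1, 1])

# A very large penalty C approximates a hard-margin SVM.
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, t)

# The learned model y(x) = w^T x + b.
w, b = clf.coef_[0], clf.intercept_[0]

# Classify a new point by the sign of y(x).
y_new = np.sign(w @ np.array([3.5, 3.5]) + b)
```

New points are then assigned to the class given by the sign of $y(\vec{x})$, exactly as in the linear model above.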

\subsection{Kernels}

\begin{equation}
\label{formula:lagrange}
L_{dual}=\sum\limits_{n=1}^N \lambda_n - \frac{1}{2} \sum\limits_{n=1}^N \sum\limits_{m=1}^N \lambda_n \lambda_m t_n t_m \vec{x}_n^T \vec{x}_m
\end{equation}

The dual Lagrangian needed for finding the SVM parameters is given in Equation \ref{formula:lagrange}. It has the useful property that the data points $\vec{x}_n$ appear only in pairs linked by multiplication, i.e. as a dot product. We already defined the dot product in Equation \ref{formula:dot}. It also has a geometric definition with $\theta$ the angle between the two vectors, given by Equation \ref{formula:dotgeo}.

\begin{equation}
\label{formula:dotgeo}
s(\vec{x}, \vec{y})=||\vec{x}||\ ||\vec{y}||\ \cos(\theta)
\end{equation}

If we think about the dot product like this, we see that it is a very natural similarity measure: it measures the directional similarity of two vectors. Consider two orthogonal vectors. Because $\cos(90\degree)=0$, their similarity is $0$. If we have parallel vectors $\vec{x}$ and $\vec{y}$, then, because of $\cos(0\degree)=1$, their similarity is $||\vec{x}||\ ||\vec{y}||$.
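The orthogonal and parallel cases can be checked numerically. This small sketch (the vectors are arbitrary examples) confirms the two limiting values of the dot product as a similarity measure:

```python
import numpy as np

x = np.array([2.0, 0.0])
y = np.array([0.0, 3.0])   # orthogonal to x
z = np.array([4.0, 0.0])   # parallel to x

s_orth = x @ y             # cos(90 deg) = 0  ->  similarity 0
s_par = x @ z              # cos(0 deg)  = 1  ->  ||x|| * ||z|| = 2 * 4 = 8
```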

In Equation \ref{formula:lagrange}, we take the dot product $\vec{x}_n^T \vec{x}_m$ and replace it by some function $k(\vec{x}_n, \vec{x}_m)$ that we call a kernel function. Any function can be used as a kernel function, as long as it is symmetric and positive semidefinite. There is evidence that even the definiteness requirement for kernel functions can be violated, e.g. \cite{ong2004learning}.
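Both requirements can be checked empirically on a finite sample: the Gram matrix of a valid kernel must be symmetric and have no negative eigenvalues. A minimal sketch, assuming a Gaussian kernel and random sample points chosen only for illustration:

```python
import numpy as np

def gaussian_kernel(x, y, a=1.0):
    d = x - y
    return np.exp(-a * (d @ d))

# Gram matrix on a small sample: for a valid kernel it must be
# symmetric and positive semidefinite.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K = np.array([[gaussian_kernel(xi, xj) for xj in X] for xi in X])

symmetric = np.allclose(K, K.T)
psd = np.linalg.eigvalsh(K).min() >= -1e-10
```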

This replacement is called the kernel trick. It is a powerful idea in machine learning and works with any algorithm for vector data that can be expressed in terms of dot products between vectors. The data is implicitly transformed to a higher-dimensional space to make separation easier.

Often, data is not linearly separable in the original space; we can see an example of this in Figure \ref{fig:nsep}. One possibility is to allow some separation errors and to include them in the optimization problem with a penalty function. Another approach is to use a kernel function that maps the data to a higher-dimensional space where separation is possible. An example of this is shown in Figure \ref{fig:sep}.
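A classic instance of this situation is one class forming a ring around the other. The following sketch (scikit-learn and its synthetic circles data set are assumptions of this example) compares a linear SVM with a Gaussian-kernel SVM on such data:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# A ring of one class around the other: not linearly separable in 2-D.
X, t = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# Training accuracy of a linear boundary vs. a Gaussian (RBF) kernel.
linear_acc = SVC(kernel="linear").fit(X, t).score(X, t)
rbf_acc = SVC(kernel="rbf").fit(X, t).score(X, t)
```

The linear SVM can do little better than guessing here, while the kernelized SVM separates the classes almost perfectly.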

\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=4cm]{nsep.png}
  \caption{Not Possible}
  \label{fig:nsep}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=4cm]{sep.png}
  \caption{Possible}
  \label{fig:sep}
\end{subfigure}%
\caption{Linear Class Separation}
\label{fig:sepnsep}
\end{figure}

Using kernel functions is a simple and efficient way to add non-linearity to linear algorithms and is as such not restricted to SVMs. The simplest SVMs are linear algorithms in the sense that they try to find a linear separation in the input space. If the input space is transformed to a higher-dimensional feature space with the kernel trick, we can add non-linearity to the SVM algorithm without changing it and without additional computational cost (cf. \cite[p.~12]{vert2004primer}).

Another useful aspect of the kernel trick is that we are not required to know explicitly the function that maps the original data to the higher-dimensional feature space. This is not a problem, because there is no need to apply this mapping function to the original data. It is sufficient for us to be able to compute dot products, and thus distances, in the higher-dimensional space. These alone can often be computed very efficiently.
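This can be made concrete for a degree-2 polynomial kernel on 2-D vectors, where the implicit mapping $\phi$ is small enough to write down. The sketch below (the vectors are arbitrary examples) verifies that evaluating the kernel in the input space gives the same value as an ordinary dot product after the explicit 6-D mapping:

```python
import numpy as np

def poly_kernel(x, y):
    # Degree-2 polynomial kernel, evaluated directly in input space.
    return (1.0 + x @ y) ** 2

def phi(x):
    # Explicit mapping of a 2-D vector into the 6-D feature space
    # in which the polynomial kernel is an ordinary dot product.
    x1, x2 = x
    return np.array([1.0, np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1**2, np.sqrt(2) * x1 * x2, x2**2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

k_implicit = poly_kernel(x, y)   # a few multiplications in 2-D
k_explicit = phi(x) @ phi(y)     # dot product in 6-D
```

The kernel never constructs the 6-D vectors, yet computes exactly their dot product.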

Kernel functions can be visualized by plotting them against the similarity of two vectors. Notable kernel functions are:

\begin{itemize}
\item Linear kernels $k(\vec{x}, \vec{y})=\vec{x}' \vec{y}$: They are equivalent to performing no kernel substitution and creating no additional dimensions. Similarity is measured with the dot product. An example of a linear kernel is shown in Figure \ref{fig:linear}.
\item Gaussian kernels $k(\vec{x}, \vec{y})=e^{-a||\vec{x}-\vec{y}||^2}$: They correspond to an infinite-dimensional feature space, but are evaluated directly in the input space. The gestalt of these kernels is very similar to the Tenenbaum and to the Shepard generalization function that we can use in the thematic part of our DPM. They measure distance at first, but convert it into similarity with exponential decay. An example of a Gaussian kernel is shown in Figure \ref{fig:gaussiank}.
\item Polynomial kernels $k(\vec{x}, \vec{y})=(1+\vec{x}' \vec{y})^a$: They map the data to a feature space whose dimensionality grows with the degree $a$, to better separate the data. Again, similarity is measured with the dot product. An example of a polynomial kernel is shown in Figure \ref{fig:polyk}.
\item Perceptron kernels $k(\vec{x}, \vec{y})=\tanh(w_0 \vec{x}' \vec{y} + w_1)$: They imitate a neuron that fires until it has reached its highest potential. They measure similarity. An example of a Perceptron kernel is shown in Figure \ref{fig:perceptron}.
\end{itemize}
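The four kernels above can be sketched in a few lines. The parameter names ($a$, $w_0$, $w_1$) follow the list; the concrete parameter values and test vectors are arbitrary choices for illustration:

```python
import numpy as np

def linear(x, y):
    return x @ y

def gaussian(x, y, a=0.5):
    d = x - y
    return np.exp(-a * (d @ d))

def polynomial(x, y, a=3):
    return (1.0 + x @ y) ** a

def perceptron(x, y, w0=1.0, w1=0.0):
    return np.tanh(w0 * (x @ y) + w1)

x = np.array([1.0, 1.0])
# Gaussian similarity of a vector to itself is maximal (1) and
# decays with distance, like the generalization functions above.
self_sim = gaussian(x, x)
```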

\begin{figure}
\centering

\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=4cm]{linear.png}
  \caption{Linear}
  \label{fig:linear}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=4cm]{gaussiank.png}
  \caption{Gaussian}
  \label{fig:gaussiank}
\end{subfigure}%

\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=4cm]{polynomial.png}
  \caption{Polynomial}
  \label{fig:polyk}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=4cm]{perceptron.png}
  \caption{Perceptron}
  \label{fig:perceptron}
\end{subfigure}%

\caption{Kernel Functions}
\label{fig:kernels}
\end{figure}


Remember that the motivation for this thesis is the insight that the traditional, either exclusively taxonomic or exclusively thematic, approaches to similarity models are not very close to the way humans measure similarity. Kernels can be thought of as similarity measures: they become large if two vectors are similar and small if they are dissimilar (cf. \cite[pp.~6--7]{vert2004primer}).

Existing kernels usually neglect thematic or taxonomic thinking. For example, consider the linear kernel of an SVM. It is equivalent to the dot product, which is a similarity measure and therefore concerned with taxonomic thinking. Our idea is to replace the similarity measurement done by a kernel with a dual process model that combines taxonomic and thematic thinking.

Because we need similarity (related to taxonomic thinking) in the end, we introduce difference (related to thematic thinking) by converting it into similarity with a generalization function. Therefore, we can use our DPMs directly as kernels, i.e. we can set $k=m_{dpm}$. The DPM kernels in this work usually perform only similarity measurement and no mapping. One exception appears in Equation \ref{formula:poly-dpm}, but we excluded it from the experiments to keep the number of tested DPMs manageable.
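Mechanically, plugging a custom similarity into an SVM is straightforward: scikit-learn, for instance, accepts any callable that returns a Gram matrix as the kernel. The function `dpm_kernel` below is a hypothetical stand-in for $m_{dpm}$ (here simply a Gaussian kernel, since the actual DPM measures are defined elsewhere in this work); scikit-learn and the circles data set are likewise assumptions of this sketch:

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

def dpm_kernel(A, B):
    # Hypothetical stand-in for a DPM similarity m_dpm: here just a
    # Gaussian kernel; a real DPM would mix taxonomic and thematic
    # measures. scikit-learn expects the full Gram matrix between
    # the rows of A and the rows of B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-1.0 * sq)

X, t = make_circles(n_samples=100, factor=0.3, noise=0.05, random_state=0)
acc = SVC(kernel=dpm_kernel).fit(X, t).score(X, t)
```

The SVM algorithm itself is untouched; only the similarity measurement is swapped out, which is exactly the substitution $k=m_{dpm}$ described above.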

In conclusion, this chapter introduced some of the models for measuring similarity that have been proposed. We explained taxonomic and thematic thinking, because these aspects are often not accounted for in similarity models. We took a look at generalization functions, because they enable us to mix similarity and difference. The distinction between quantitative and predicate-based measures was explained because both kinds can be used in a DPM. We also covered the technical aspects that we need later in this work to test DPMs in a real-world situation: Feature extraction in general, concrete feature extraction algorithms, support vector machines and kernels. This concludes the discussion of the background. 
