\chapter{Implementation}

\label{chp:imp}

\section{Overview}

The most important outcome of this thesis is a working implementation of the DPM idea. It should be straightforward to use as a basis for other works. To meet this requirement, the implementation is divided into two separate parts: pedestrian detection and SVM.

\begin{figure}[ph!]
	\centering
	\includegraphics[scale=0.4]{flowchart.png}
	\caption{Implementation Overview}
	\label{fig:flowchart}
\end{figure}

Pedestrian detection deals with all the domain-specific problems and keeps track of the results. There is not much novelty value in this part of the implementation. Plenty of sample code for this task is freely available\footnote{e.g. \url{http://iica.de/pd/demos.py} last accessed 2015-02-15}. As feature extraction is not the focus of this work, we delegate work to existing libraries wherever possible.

However, understanding and controlling the parts done by libraries, as opposed to, e.g., calling a black-box function \texttt{DetectPedestrians()}, proved surprisingly intricate. The main practical problem was to find a library that performs all the involved steps separately and comprehensibly. This problem was eventually solved.

The second part, SVM, deals with all the domain-independent issues. It has a higher novelty value than the other part, because, to the best of the author's knowledge, no existing classification algorithm implementation supports DPMs out of the box.

The SVM part is independent of pedestrian detection and not even restricted to media descriptions. Although we only tested its performance with pedestrian detection, it is likely to have the same applications as classical SVMs.  

Figure \ref{fig:flowchart} gives an overview of the implementation. The most important program activities are shown in white, while the most important result files are shown in yellow. The implementation reads training data in the form of images and generates feature vectors from them. These are used to train a SVM. 

While training, the SVM uses a DPM kernel instead of a normal one. The result of the training is a SVM model file. At this point, we can discard the training data and start working with separate test data - also in the form of feature vectors. The classification process tells us which images from the test data contain pedestrians and which do not. We end up with a file with the classification results and a file with test statistics.  

The complete source code of the implementation is freely available\footnote{\url{https://code.google.com/p/dual-process-model} last accessed 2015-02-23}. The rest of this chapter takes a more detailed look at the two major parts of the implementation. Next, we explain how to work with the implementation. A discussion of how the experiments were carried out concludes the chapter.

\section{Pedestrian Detection}

Enabling machines to ``see'' humans is a promising task with a large number of practically relevant applications such as robotics, surveillance and content retrieval. A large amount of research has been conducted in this area and a respectable number of algorithms exists. However, no near-perfect algorithm has been proposed so far.

The aim of this work is not to come up with a new, high-performing image detection algorithm. Rather, the effect of using a DPM for measuring distance in existing algorithms is examined. We selected the HOG algorithm (as described in \cite{dalal2005histograms}), because it is representative of a line of research called gradient-based algorithms.

Empirical research suggests combining HOG with other feature extraction methods to obtain a stronger algorithm: ``\textit{While no single feature has been shown to outperform HOG, additional features can provide complementary information.}'' \cite[p.~10]{dollar2012pedestrian}

The Scalable Color Descriptor (SCD) and Edge Histogram Descriptor (EHD) from MPEG-7 Visual provide this additional information to our implementation. Furthermore, turning these descriptor values into predicate based data is straightforward.

SCD and EHD do not return predicates (i.e. zero/one values). To be able to work with predicates, the values returned by SCD and EHD are put into evenly sized bins by Algorithm \ref{alg:bins}. Note that the algorithm does something different than creating a histogram. The output is an array of values that are either zero or one.

\begin{algorithm}
\KwIn{$binSize$ size of one bin, $minT$ smallest possible value, $binNum$ number of bins to create, $values$ array of values to be binned}
\KwOut{$binnedValues$ array of binned values}
\For{i=0; i < values.size(); ++i}
{
	\For{b = 0; b < binNum; ++b}
	{
		val = values.at(i)\;
		\If{val >= minT + b * binSize \&\& val < minT + (b+1) * binSize}
		{
			binnedValues[i*binNum+b]=1\;
		}
	}
}
\caption{How to Turn Quantitative Values Into Predicates}
\label{alg:bins} % \label has to be placed AFTER \caption to produce correct cross-references.
\end{algorithm}
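Algorithm \ref{alg:bins} could be sketched in C++ as follows; the function and variable names are illustrative and not taken from the actual source code:

```cpp
#include <vector>

// Sketch of the binning step: each quantitative descriptor value is mapped
// to binNum zero/one predicates, of which at most one is set (a value is
// only binned if it falls into [minT, minT + binNum * binSize)).
std::vector<int> binValues(const std::vector<double>& values,
                           double minT, double binSize, int binNum) {
    std::vector<int> binnedValues(values.size() * binNum, 0);
    for (std::size_t i = 0; i < values.size(); ++i) {
        for (int b = 0; b < binNum; ++b) {
            double lo = minT + b * binSize;
            if (values[i] >= lo && values[i] < lo + binSize)
                binnedValues[i * binNum + b] = 1;
        }
    }
    return binnedValues;
}
```

For example, with $minT=0$, $binSize=1$ and two bins, the values $0.5$ and $1.5$ become the predicate array $\{1,0,0,1\}$.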

The whole pedestrian detection part has been written in C++. Image loading and HOG feature vector calculation is done using the Open Computer Vision library (Open CV)\footnote{\url{http://opencv.org} last accessed 2015-02-09}. A good introduction to the library features can be found in \cite{pulli2012real}.

OpenCV comes with classes to compute HOG feature vectors and with classes to work with support vector machines. The latter were deliberately not used. The rationale is that machine learning is not a core part of OpenCV; it is only included because some computer vision algorithms need it. For solving a pure machine learning task, OpenCV is not the best choice because of the library size and its numerous dependencies, most of which are not needed for machine learning.

Therefore, and to make it easier for users from non-computer vision domains, another library has been used for the machine learning part. Refer to Section \ref{sec:kernel} for details. Similarly, MPEG-7 feature vector calculation is done using the BilVideo-7 project (see \cite{BilVideo7-MM2010}).

The pedestrian detection part first reads all training images with pedestrians (positive images) from one directory. Training images without pedestrians (negative images) are taken from another directory. There are virtually no constraints on the number of images, but too few or too many images might yield a poorly trained classifier because of under- or over-fitting. Experience showed that 20 to 400 training images usually lead to good classification results.

The first program version kept all images and feature vectors in memory. This limited the number of images that could be processed. The current version frees images and feature vectors from main memory as soon as possible. Furthermore, all data is kept in temporary text files. With this approach, memory usage is nearly independent of the number of images.

There is no need for the user to delete these files manually. The files are stored in directory \texttt{gen}. The most important files are the training data (\texttt{feature.txt}), the SVM model (\texttt{svm.txt}), the test data (\texttt{test.txt}), the classification results (\texttt{clz.txt}) and the test statistics (\texttt{pr*.txt}).

First, the program reads all training images. Not all image detection algorithms can process images of arbitrary size; the ability to do so is called scale invariance. HOG, in the version we use, is not scale-invariant, while EHD and SCD are. Therefore, we have to ``resize'' our images to the correct size.

After this step, feature vectors for training are available. Every feature vector starts with the class label. For us, $1$ means the file contains at least one pedestrian, while $-1$ means there is no pedestrian in the image. For test files, $0$ is used as a label if the class is currently unknown. The class label is followed by the binned scalable color descriptor, binned edge histogram descriptor and finally the unchanged HOG descriptor. Every feature vector element except the class label carries a one-based index.
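The described layout could be assembled as sketched below. The SVM$^{light}$ training file format is one line per example, \texttt{label index:value ...} with one-based indices; the function name and part sizes are illustrative:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Sketch: build one training line in SVM^light's sparse format. The ordering
// (binned SCD, binned EHD, raw HOG) follows the text; every element after the
// class label carries a one-based index.
std::string makeFeatureLine(int label,
                            const std::vector<double>& scdBins,
                            const std::vector<double>& ehdBins,
                            const std::vector<double>& hog) {
    std::ostringstream out;
    out << label;
    int idx = 1;  // feature indices are one-based
    for (const auto* part : {&scdBins, &ehdBins, &hog})
        for (double v : *part)
            out << ' ' << idx++ << ':' << v;
    return out.str();
}
```

A positive example with two SCD predicates, one EHD predicate and one HOG value would thus serialize as \texttt{1 1:1 2:0 3:0 4:0.25}.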

We train a modified SVM with the generated feature vectors. This results in a SVM model file containing the support vectors that create an optimal separation of the training data. If we do not want to carry out the training ourselves, we could get similar support vectors from the pre-trained SVM for pedestrian detection that is part of OpenCV.

The pedestrian detection part reads all test images in the same way as the training images. As soon as this is done, the image classification starts. This part is not computationally expensive anymore. We keep track of the classification result for each image and create a file with test statistics in the end.

\label{sec:pr}

We use one of the most straightforward methods for evaluating classifier quality: the correct classification rate. More advanced evaluation methods exist. One of them is analyzing precision and recall.

We calculate precision and recall as shown in Equations \ref{formula:prec} and \ref{formula:rec}. ${TP}$ (true positives) is the number of positive images with a positive classification result. ${FP}$ (false positives) is the number of negative images with a positive classification result. $FN$ (false negatives) is the number of positive images with a negative classification result.

\begin{equation}
\label{formula:prec}
precision=\frac{{TP}}{{TP}+{FP}}
\end{equation}

\begin{equation}
\label{formula:rec}
recall=\frac{{TP}}{{TP}+{FN}}
\end{equation}

Precision measures how relevant our results are. For example, in an image retrieval task, if we search for ``pedestrians'' and get ten images of pedestrians and nothing else, then our precision is as high as possible. Yet, we do not know how effective our algorithm is. This is measured by recall. Continuing with our example, if there were thousands of pedestrian images and our algorithm found only ten, then it had a low recall.

Sometimes, machine learning algorithms can be improved by setting a better threshold. We already described briefly how a SVM works. From the optimization goal, we can deduce that the threshold with the most correct classifications is $\Delta b=0$. It is not sensible to analyze the sensitivity of the correct classification rate to classification threshold changes, because we already know the optimal threshold.

However, the sensitivity of precision and recall to threshold changes is relevant. If we, for example, wanted to be sure to find all images of pedestrians, we could ``lower our standards'' and set the limit for positive classification to something lower than zero. If we, to formulate another example, wanted to avoid false positives as much as possible, we could ``raise our standards'' and set the limit to something much higher than zero.

To analyze the sensitivity of precision and recall to classification threshold changes, we just vary the threshold, redo classification and calculate new precision and recall values.  The result of this analysis is called a precision/recall-curve. The test statistics files contain such a curve.
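The threshold sweep could be sketched as follows, assuming the raw SVM decision values for the positive and negative test images are available; the struct and function names are illustrative:

```cpp
#include <vector>

struct PRPoint { double threshold, precision, recall; };

// Sketch of the precision/recall sweep: reclassify at each threshold from
// -3 to 3 in 0.1 steps and record the resulting precision and recall.
std::vector<PRPoint> prCurve(const std::vector<double>& positiveScores,
                             const std::vector<double>& negativeScores) {
    std::vector<PRPoint> curve;
    for (int step = -30; step <= 30; ++step) {
        double t = step / 10.0;
        int tp = 0, fp = 0, fn = 0;
        for (double s : positiveScores) (s >= t ? tp : fn)++;
        for (double s : negativeScores) if (s >= t) ++fp;
        // Guard against empty denominators at extreme thresholds.
        double prec = (tp + fp) ? static_cast<double>(tp) / (tp + fp) : 1.0;
        double rec  = (tp + fn) ? static_cast<double>(tp) / (tp + fn) : 1.0;
        curve.push_back({t, prec, rec});
    }
    return curve;
}
```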

Figure \ref{fig:prsample} shows an example for such a precision/recall-curve. The first column in the test statistics file is the threshold used, the second column is the resulting precision and the third the resulting recall. In our implementation, the threshold always runs from $-3$ to $3$ and is increased in $0.1$-steps.

\begin{figure}[h!]
	\centering
	\includegraphics[scale=0.4]{prsample.png}
	\caption{Example for a Precision/Recall-Curve}
	\label{fig:prsample}
\end{figure}

The precision/recall-curves in our implementation allow us to analyze DPM kernel behavior in special tasks where high precision or high recall is required. Precision/recall-curves usually resemble stairs, because there is a tradeoff between precision and recall.

This finishes our tour through the pedestrian detection part. We will now continue with the SVM part. It is important to understand that the pedestrian detection part includes all the ``glue code''. That means that program execution starts in this part and uses the SVM part for training and classification. When this is done, the pedestrian detection part collects the classification results and puts them in a statistic. The SVM part can still be used standalone as a classification algorithm.

\section{Classification by a SVM}

\label{sec:kernel}It makes little sense to implement a whole new library every time an algorithm is modified. Therefore, we built DPM capabilities into SVM$^{light}$\footnote{\url{http://svmlight.joachims.org} last accessed 2015-02-09}. It is an implementation of the original SVM idea by Vapnik with numerous improvements.

SVM$^{light}$ is widely used in academia because it is fast, free and easy to modify. The only requirement for using it is citing \cite{Joachims/98c}, which conveniently provides a good overview of the library.

SVM$^{light}$ consists of a learning module and a classification module, both implemented in C. The binary \texttt{svm.exe} acts as a front end to both. Custom kernels, like our DPM kernels, are added in function \texttt{custom\_kernel} in file \texttt{kernel.h}.

\label{list:measures}

In Section \ref{sec:svm}, we already heard about kernel functions in general. Equation \ref{formula:simple-dpm} describes the form of our new kernel function. The remaining question is: Which functions do we plug in? A turnkey list of quantitative and predicate-based measures was found in \cite[p. 587--591]{eidenberger2012handbook}. Our DPM kernel supports all measures in this list. Measure indexes are consistent with the numbers from the source, as long as one subtracts $1$ from them, because C arrays are 0-based.

\label{list:gen}We need generalization functions for our model as well. The implemented generalization functions are Boxing, Gaussian, Shepard and None. They can  be used by passing the program one of the indexes $\{0, 1, 2, 4\}$. The missing index has been reserved for the Tenenbaum generalization function that has not been implemented yet.
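The generalization functions could take the following forms. Note that the concrete formulas and the boxing threshold shown here are plausible assumptions for illustration, not taken from the source code:

```cpp
#include <cmath>

// Assumed forms of the four implemented generalization functions g(d),
// applied to the thematic distance before it enters the DPM kernel.
double gBoxing(double d, double theta = 1.0) { return d <= theta ? 1.0 : 0.0; }
double gGaussian(double d) { return std::exp(-d * d); }  // Gaussian decay
double gShepard(double d)  { return std::exp(-d); }      // Shepard's exponential law
double gNone(double d)     { return d; }                 // pass the distance through
```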

This freedom to select all parts of the DPM kernel from a list leads to a large number of possible kernels.  One of the goals was to make the implementation as dynamic as possible. Therefore, the existing SVM$^{light}$ parameters are ``misused'' to select and control a DPM kernel. 

The disadvantage of this approach is that the parameter names do not have any deeper meaning in the DPM-context. However, the large advantage is that our modified version of  SVM$^{light}$ stays fully compatible with the canonical version. This approach is also suggested in the documentation.

\label{list:combine}

DPMs stipulate the use of quantitative and predicate-based measures to represent taxonomic and thematic thinking. We do not mandate which type of measure to use for which type of thinking. We can combine a quantitative measure for taxonomic thinking with a predicate-based measure for thematic thinking or we can use only quantitative or only predicate-based measures.

The concrete combination can be one of the following:

\begin{itemize}
\item Quantitative measure for taxonomic thinking. Predicate-based measure for thematic thinking. Selected with index 0.
\item Quantitative measure for taxonomic thinking. Quantitative measure for thematic thinking. Selected with index 1.
\item Predicate-based measure for taxonomic thinking. Predicate-based measure for thematic thinking. Selected with index 2.
\item Predicate-based measure for taxonomic thinking. Quantitative measure for thematic thinking. Selected with index 3.
\end{itemize}
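The four combinations could be encoded as an enumeration; the first and second constant names match those used in Algorithm \ref{alg:custom}, the others are analogous:

```cpp
// The four measure combinations, indexed as in the list above.
// Naming scheme: <taxonomic measure type>_<thematic measure type>.
enum Combination {
    QUANTITATIVE_PREDICATE    = 0,
    QUANTITATIVE_QUANTITATIVE = 1,
    PREDICATE_PREDICATE       = 2,
    PREDICATE_QUANTITATIVE    = 3
};
```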

Furthermore, the implementation supports a polynomial DPM kernel that is shown in Equation \ref{formula:poly-dpm}. It supports the same combinations as before with the indexes $\{4, 5, 6, 7\}$.

\begin{equation}
\label{formula:poly-dpm}
s_{dpm_{poly}} = [1+\alpha\  s_{taxonomic} + (1-\alpha)\ g(d_{thematic})]^3
\end{equation}

The pedestrian-detection part always produces feature vectors with both quantitative data and predicates. The custom SVM needs to know which parts of the input are quantitative and which are predicative. This is done by setting the directive 
\texttt{FIRST\_N\_PREDICATES}. It means that the first $N$ elements of each feature vector in the input should be treated as predicates. 
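The convention could be sketched as follows; the constant's value and the helper names are illustrative:

```cpp
#include <vector>

// Sketch of the FIRST_N_PREDICATES convention: the first N elements of a
// feature vector are treated as predicates, the remainder as quantitative
// values. The value 3 is purely illustrative.
#define FIRST_N_PREDICATES 3

std::vector<double> predicatePart(const std::vector<double>& fv) {
    return {fv.begin(), fv.begin() + FIRST_N_PREDICATES};
}
std::vector<double> quantitativePart(const std::vector<double>& fv) {
    return {fv.begin() + FIRST_N_PREDICATES, fv.end()};
}
```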

It would not pose a problem to detect the feature vector type automatically. But we are implementing a kernel function - which could well be called thousands of times while searching for a solution. Therefore, we have to make sure it terminates quickly by removing any overhead like this automatic type detection.

Internally, measures and generalization functions are stored in function pointer arrays. \texttt{q} is an array of quantitative measures, while \texttt{p} is an array of predicate-based measures. Analogously, \texttt{g} is an array of generalization functions. The parameters supplied by the users then select the correct DPM kernel parts from the array. Algorithm \ref{alg:custom} shows a prototype of this concept. 

\begin{algorithm}
\KwIn{$svm\_light\_parameters$ arguments passed to the SVM, $\alpha$ importance of taxonomic thinking, $a$, $b$ feature vectors}
\KwOut{$result$ DPM similarity score for feature vectors}
	lhs = svm\_light\_parameters->coef\_lin;\\
	rhs = svm\_light\_parameters->poly\_degree;\\
	combination = svm\_light\_parameters->coef\_const;\\
	gi = svm\_light\_parameters->rbf\_gamma;\\
	\If{combination==QUANTITATIVE\_QUANTITATIVE}
	{
		result = $\alpha$ * q[lhs](a, b) + (1-$\alpha$) * g[gi](q[rhs](a, b));
	}
	\ElseIf{combination==PREDICATE\_QUANTITATIVE}
	{
		result = $\alpha$ * p[lhs](a, b) + (1-$\alpha$) * g[gi](q[rhs](a, b));
	}
//...other selections
\caption{Prototype for Custom Kernel Selection}
\label{alg:custom} % \label has to be placed AFTER \caption to produce correct cross-references.
\end{algorithm}
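The function-pointer dispatch of Algorithm \ref{alg:custom} could be reduced to the following minimal sketch. The measures here are scalar placeholders standing in for the real vector-valued measures; only the array-and-index mechanism is the point:

```cpp
#include <cmath>

// Toy signatures standing in for the real measure and generalization types.
using Measure = double (*)(double, double);
using Generalization = double (*)(double);

double diff(double a, double b) { return std::fabs(a - b); }  // placeholder measure 0
double prod(double a, double b) { return a * b; }             // placeholder measure 1
double identity(double d) { return d; }                       // placeholder g 0

Measure q[] = { diff, prod };
Generalization g[] = { identity };

// s_dpm = alpha * s_taxonomic + (1 - alpha) * g(d_thematic),
// with all parts selected by index, as in the real kernel.
double dpmKernel(double alpha, int lhs, int rhs, int gi, double a, double b) {
    return alpha * q[lhs](a, b) + (1.0 - alpha) * g[gi](q[rhs](a, b));
}
```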

\begin{algorithm}
\KwIn{$a$, $b$ feature vectors}
\KwOut{$result$ Euclidean distance}
	register double sum = 0;\\
	register WORD *ai, *bj;\\
	register int k = 0;\\
	ai = a->words;\\
	bj = b->words;\\
	\While{ai->wnum \&\& bj->wnum}
	{
		sum += pow(fabs(ai->weight - bj->weight), 2);\\
		++k;\\
		++ai;\\
		++bj;\\
	}
	result = sqrt(sum / k);
\caption{Example Implementation of a Quantitative Measure}
\label{alg:quant} % \label has to be placed AFTER \caption to produce correct cross-references.
\end{algorithm}

\begin{algorithm}
\KwIn{$a$ common features in both vectors, $b$ features only in the one vector, $c$ features only in the other vector, $d$ features in neither vector}
\KwOut{$result$ Hamming distance}
	result = b + c;
\caption{Example Implementation of a Predicate-based Measure}
\label{alg:pred} % \label has to be placed AFTER \caption to produce correct cross-references.
\end{algorithm}
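The two example measures could look as follows in plain C++, stripped of the SVM$^{light}$ linked-list details. The quantitative measure mirrors Algorithm \ref{alg:quant}, including its division by the number of compared dimensions; the predicate-based measure receives the contingency counts as in Algorithm \ref{alg:pred}:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Quantitative measure: Euclidean-style distance over dense vectors,
// normalized by the number of compared dimensions k (as in alg:quant).
double euclidean(const std::vector<double>& x, const std::vector<double>& y) {
    double sum = 0.0;
    std::size_t k = std::min(x.size(), y.size());
    for (std::size_t i = 0; i < k; ++i)
        sum += (x[i] - y[i]) * (x[i] - y[i]);
    return std::sqrt(sum / k);
}

// Predicate-based measure: Hamming distance from the contingency counts.
// a = common features, b/c = features in only one vector, d = in neither.
double hamming(int a, int b, int c, int d) {
    (void)a; (void)d;  // common and absent features do not contribute
    return b + c;      // features present in exactly one vector
}
```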

The implementation of quantitative and predicate-based measures is straightforward. Algorithms \ref{alg:quant} and \ref{alg:pred} show example measure implementations. Note that, from a purely technical point of view, the distinction into thematic and taxonomic measures is not relevant, because they are implemented in the same way.

Quantitative measures are called with the feature vectors in linked lists. Predicate-based measures are a little different. Because they are  based on common features (Section \ref{sec:pred}), they are already invoked with variables that describe them.

\section{Usage}

The binary file \texttt{dpm.exe} performs pedestrian detection with a customized SVM that employs a DPM kernel. The usage is outlined in Algorithm \ref{alg:dpmexe}. It carries out all necessary steps for the training of a support vector machine for pedestrian detection and keeps track of the results. It operates in so-called ``modes'':

\begin{enumerate}
\item \texttt{features}: Calculates feature vectors.
\item \texttt{train}: Uses the features vectors to train a support vector machine.
\item \texttt{classify}: Performs classification.
\end{enumerate} 

If no mode is given, all steps are performed in succession. This is the usual program invocation, unless one wants to examine the intermediate results or keep track of program output. Passing parameters to SVM$^{light}$ only makes sense in mode \texttt{train}, because before it, no SVM is used and after it, a SVM model file already exists from which the parameters are taken.

\begin{algorithm}
dpm.exe [mode] [svm-light-parameters]
\caption{Program Invocation}
\label{alg:dpmexe} % \label has to be placed AFTER \caption to produce correct cross-references.
\end{algorithm}

The binary passes any parameters other than the first to SVM$^{light}$. This is used to select the specific DPM kernels. If we do not pass any parameters, we get a linear kernel. To work with DPM kernels, the parameter \texttt{-t 4} is needed to tell SVM$^{light}$ to use the custom kernel, which is exactly where we placed all the DPM logic. In this case, some of the standard parameters change their meaning:

\begin{itemize}
\item \texttt{-g [0...4]} Index of generalization function (refer to Sections \ref{sec:genf} and \ref{list:gen}).
\item \texttt{-s [0...74]} Index of taxonomic measure (refer to Sections \ref{sec:taxthe} and \ref{list:measures}).
\item \texttt{-d [0...74]} Index of thematic measure (refer to Sections \ref{sec:taxthe} and \ref{list:measures}).
\item \texttt{-r [0...7]} Index telling us how to combine the measures (refer to Section \ref{list:combine}).
\end{itemize}

We implemented $75$ predicate-based measures and $19$ quantitative measures. Note that because we can theoretically use predicate-based measures for both the thematic and the taxonomic part of our DPM, both indexes run to $74$. If a quantitative measure is used, it is important not to exceed index $18$.

We conclude this section with the usage example shown in Algorithm \ref{alg:commands}. It employs the Boxing generalization function to combine the predicate-based McConnaughey measure for taxonomic thinking with the predicate-based Hamming distance for thematic thinking. To demonstrate compatibility, we also use a standard SVM$^{light}$ parameter to limit the number of optimizations to $10000$.

\begin{algorithm}
dpm.exe features\\
dpm.exe train "-t 4 -g 0 -s 33 -d 2 -r 2 -\# 10000"\\
dpm.exe classify
\caption{Example Usage}
\label{alg:commands} % \label has to be placed AFTER \caption to produce correct cross-references.
\end{algorithm}

\section{Experiments}

DPMs can work with arbitrary data, but the aim of a DPM is to imitate the process underlying human similarity perception. Therefore,  we need to perform experiments in an inherently human domain. As already mentioned, the selected domain is the detection of pedestrians in images.

Because much research in this area has been conducted, many datasets exist. We selected the INRIA dataset\footnote{\url{http://pascal.inrialpes.fr/data/human} last accessed 2015-02-24} with upright images of persons in everyday situations. The dataset is 970 MB in size and contains thousands of images. Example images are shown in Figures \ref{fig:ex1} and \ref{fig:ex2}.

\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=2.5cm]{ex1.jpg}
  \caption{Positive Image}
  \label{fig:ex1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
  \centering
  \includegraphics[height=2.5cm]{ex2.jpg}
  \caption{Negative Image}
  \label{fig:ex2}
\end{subfigure}%
\caption{Examples From the Dataset}
\label{fig:ex}
\end{figure}

To keep computation time low, the INRIA dataset is sub-sampled. Training is performed with 140 positive and 160 negative samples. Testing is done with 50 positive and 50 negative images. Positive training images are restricted to a size close to $96\times160$ pixels, while negative training images can have any size larger than this. The reason is that the training algorithm works with one correctly sized, randomly chosen sub-image for each negative training image.

Technically, test images could have any size as long as we work with correctly sized sub-images, for example created by a simple iterative algorithm. To save computation time, this was not done during the experiments. Therefore, positive test images also have a size close to $96\times160$ pixels. Again, because of the random sub-images, the size of the negative test images does not matter.

During SVM training, the number of allowed iterations without progress was restricted to $3000$.

\label{part:class}

We perform a manual classification into thematic and taxonomic measures. If there is a contrast (i.e. $x-y$, $\frac{x}{y}$, $\frac{a}{.}$, $\frac{.}{-a}$, $\frac{.}{b}$  or $\frac{.}{c}$), then a measure is thematic and belongs on the right-hand side of Equation \ref{formula:simple-dpm}. Otherwise, it is taxonomic and belongs on the left-hand side.

The importance of taxonomic thinking was set to $\alpha=\frac{1}{2}$ during all experiments. This means we simulate a person that values taxonomic thinking as much as thematic thinking. The fixed importance factor does not pose a large restriction, because our image classification task can be performed correctly by virtually all adults, regardless of their preference for taxonomic or thematic thinking. This makes it very likely that our $\alpha$ simulates a large subset of all humans. Furthermore, $\alpha$ can easily be changed in our implementation. 

With all implemented quantitative and predicate-based measures classified into taxonomic and thematic, we run pedestrian detection with the described dataset for:

\begin{itemize}
\item All combinations of quantitative measure/quantitative measure/generalization function.
\item All combinations of predicate-based measure/predicate-based measure/generalization function.
\item The most promising combinations of predicate-based measure/quantitative measure/generalization function.
\item The most promising combinations of quantitative measure/predicate-based measure/generalization function.
\end{itemize}

By ``the most promising combinations'' we mean that only those predicate-based and quantitative measures were used that were part of a purely predicate-based or purely quantitative DPM performing at least as well as the linear kernel. The reason for this restriction is to make the search space smaller and thus decrease the already large runtime of the experiments.

To be able to compare our DPMs to the current state-of-the-art, we also ran pedestrian detection with the described dataset for the linear, polynomial, sigmoid and radial kernels. Additionally, the thematic and taxonomic parts of each DPM were used on their own to classify the test data set. 

Furthermore, we carried out five experiments with selected DPMs and a larger dataset. This dataset contained 2379 positive and 1231 negative training samples. Testing in this case was done with 900 samples.

Some experiments did not terminate while training the support vector machine. These models have been omitted from the result discussion.

In this chapter, we took a closer look at the implementation details. The implementation consists of two main parts: the pedestrian detection part and the SVM part. While the former ``glues'' everything together, the latter can be used on its own to carry out experiments in other domains and to solve machine learning tasks other than pedestrian detection.

We saw how to work with the implementation and took a closer look at the experimental setup. We will discuss the outcome of those experiments in the next chapter. At this point, let us think back to our goals from the introductory chapter. We wanted to provide an implementation of the DPM idea that could perform binary classification and was adaptable to other tasks.

Clearly, this goal was fulfilled. The question of how a DPM can be integrated into machine learning was answered. Thanks to the very useful concept of kernel functions, it turned out to be more of a technical than a conceptual question.



