\chapter{Results}

\label{chp:results}

We now turn to the second aim of this work and assess the performance of DPMs for pedestrian detection. Using the experimental setup described earlier, we tested most of the DPMs that can be formulated with Equation \ref{formula:simple-dpm}.

\section{Viability of the Dual Process}

Using a Dual Process Model adds logical complexity to any machine learning task. Furthermore, depending on the actual measures used, it worsens time complexity. Is the overhead worth it?

We compare the classification performance of each DPM with that of its single process models, i.e. the taxonomic part $m_{taxonomic}$ and the thematic part $g(m_{thematic})$. Note that both parts must measure similarity; otherwise they could not be used as kernel functions.
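As a minimal sketch of how such a kernel can be assembled, assume (per Equation \ref{formula:simple-dpm}) a weighted sum of the taxonomic part and the generalized thematic part; the concrete measures below (cosine, Euclidean distance, Shepard-style decay) are illustrative stand-ins, not the definitive implementation:

```python
import numpy as np

def dpm_kernel(x, y, m_taxonomic, m_thematic, g, alpha=0.5):
    """Dual process similarity: weighted sum of the taxonomic part and
    the generalized thematic part (assumed form of the DPM equation)."""
    return alpha * m_taxonomic(x, y) + (1.0 - alpha) * g(m_thematic(x, y))

# Illustrative stand-in measures:
cosine = lambda x, y: float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
euclidean = lambda x, y: float(np.linalg.norm(np.asarray(x) - np.asarray(y)))
shepard = lambda d: float(np.exp(-d))   # similarity decays with distance

x, y = np.array([1.0, 0.0, 1.0]), np.array([1.0, 1.0, 0.0])
score = dpm_kernel(x, y, cosine, euclidean, shepard)   # a value in (0, 1)
```

With $\alpha=\frac{1}{2}$, this reduces to the fixed-importance DPM used throughout this chapter.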

The aggregated results of this comparison are shown in Table \ref{tab:spm}. It was created by measuring the classification performance of each possible DPM and comparing it to the classification performances of its two single process models.

\begin{table}[h] % placement specifier
 \centering
\begin{tabular}{ | l | r | }
  \hline
  \textbf{Aspect} & \textbf{\% of Cases} \\ \hline \hline
Taxonomic Thinking Better Than Thematic Thinking & 71.12 \\ \hline
Thematic Thinking Better Than Taxonomic Thinking & 19.03 \\ \hline
Single Process Model Better Than or Equal to DPM (Not Viable) & 73.77 \\ \hline
Single Process Model Worse Than DPM (Viable) & 26.23 \\ \hline
\end{tabular}
\caption{Viability of DPMs}
\label{tab:spm}
\end{table}

The first two rows show that taxonomic thinking on its own often performed better than thematic thinking on its own. This can be seen as a clue that taxonomic thinking has a larger impact on image detection performance than thematic thinking. However, it is equally plausible that this difference is caused by the way we combined taxonomic and thematic thinking in our model, or by the choice of algorithm for feature vector extraction.

The last two rows are more important: they tell us that DPMs are not necessarily better than single process models. In other words, not every DPM created with Equation \ref{formula:simple-dpm} makes sense. In about 74\% of the search space, classification performance gains nothing from the use of a DPM with fixed importance factor $\alpha=\frac{1}{2}$ and may even decrease. Therefore, when formulating a DPM, it is essential to verify that it improves performance for the task at hand.

Let us call this performance improvement viability. For the remainder of the results discussion, we will exclude all DPMs that are not viable. It should be mentioned that the Euclidean distance (a special case of the Minkowski distance) often resulted in strong classification performance; because of our viability constraint, however, this measure rarely appears in the following results.
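The viability check itself can be automated in a few lines; the accuracy values below are placeholders for illustration:

```python
def is_viable(acc_dpm, acc_taxonomic, acc_thematic):
    """A DPM counts as viable only if it strictly outperforms both of
    its single process models on the classification task."""
    return acc_dpm > max(acc_taxonomic, acc_thematic)

# Placeholder accuracies for illustration:
viable = is_viable(0.94, 0.92, 0.85)       # DPM beats both parts: viable
not_viable = is_viable(0.92, 0.92, 0.85)   # no improvement: not viable
```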

\section{Comparison to Existing Models}

Are DPMs better for image classification than the current state of the art? To answer this question, we compared them to linear, radial, polynomial, and sigmoid kernels. Table \ref{tab:existing} shows a summary of the classification performance for our pedestrian detection task.

\begin{table}[h] % placement specifier
 \centering
\begin{tabular}{ | l | r | }
  \hline
  \textbf{Kernel} & \textbf{\% Correct} \\ \hline \hline
Sigmoid & 50 \\ \hline
Radial & 50 \\ \hline
Linear & 92 \\ \hline
Polynomial & 92 \\ \hline \hline
Baroni+Shepard(Normalization) & 96 \\ \hline
Histogram+Shepard(Minkowski) & 95 \\ \hline
Tanimoto Index+Boxing(Complement of Hamming Distance) & 94 \\ \hline
Russel \& Rao-Minkowski & 94 \\ \hline
... & ... \\ \hline
\end{tabular}
\caption{Comparison to the State of the Art}
\label{tab:existing}
\end{table}

A striking characteristic of our comparison is that the values 50\% and 92\% appear more often than expected. The reason lies in the dataset, which contains an equal number of positive and negative images. Suboptimal DPM kernels tend to classify all images as positive or as negative; in these cases, the classification score is obviously biased, e.g. $y=300$ or $y=-500$. With our experimental setup, a kernel that produces only one classification result will therefore always achieve a correct classification rate of exactly 50\%.
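This effect is easy to reproduce; the sketch below assumes a balanced test set of 100 images:

```python
# On a balanced dataset, a kernel that collapses to a single output
# class is right on exactly half of the images.
labels = [1] * 50 + [-1] * 50      # balanced ground truth
predictions = [1] * 100            # degenerate kernel: everything positive
accuracy = sum(p == t for p, t in zip(predictions, labels)) / len(labels)
# accuracy == 0.5
```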

Another property of the dataset is that it contains some images that are fairly difficult to classify. Classification errors cluster on these images: the linear and the polynomial kernel, for example, make their classification errors on the same images, shown in Figure \ref{fig:errors}. Since the classification scores of the linear and the polynomial kernel nevertheless differ, it is not very plausible that this is a bug in the implementation or in the experimental setup.

\begin{figure}
\centering
\begin{subfigure}{.25\textwidth}
  \centering
  \includegraphics[height=2cm]{error1.jpg}
  \caption{}
  \label{fig:err1}
\end{subfigure}%
\begin{subfigure}{.25\textwidth}
  \centering
  \includegraphics[height=2cm]{error2.png}
  \caption{}
  \label{fig:err2}
\end{subfigure}%
\begin{subfigure}{.25\textwidth}
  \centering
  \includegraphics[height=2cm]{error3.png}
  \caption{}
  \label{fig:err3}
\end{subfigure}%
\begin{subfigure}{.25\textwidth}
  \centering
  \includegraphics[height=2cm]{error4.png}
  \caption{}
  \label{fig:err4}
\end{subfigure}%

\begin{subfigure}{.25\textwidth}
  \centering
  \includegraphics[height=2cm]{error5.png}
  \caption{}
  \label{fig:err5}
\end{subfigure}%
\begin{subfigure}{.25\textwidth}
  \centering
  \includegraphics[height=2cm]{error6.jpg}
  \caption{}
    \label{fig:err6}
\end{subfigure}%
\begin{subfigure}{.25\textwidth}
  \centering
  \includegraphics[height=2cm]{error7.jpg}
  \caption{}
  \label{fig:err7}
\end{subfigure}%
\begin{subfigure}{.25\textwidth}
  \centering
  \includegraphics[height=2cm]{error8.png}
  \caption{}
  \label{fig:err8}
\end{subfigure}%

\caption{Difficult Test Images}
\label{fig:errors}
\end{figure}

The difficult images are representative of situations that are hard for most image detection algorithms: bad lighting (Figures \ref{fig:err1}, \ref{fig:err2}, \ref{fig:err4} and \ref{fig:err8}) and entangled shapes (Figures \ref{fig:err3}, \ref{fig:err5}, \ref{fig:err6} and \ref{fig:err7}).

Note that the principal goal of the experimental setup and the implementation was not image classification that is as optimized as possible. It would have been counterproductive to tune the feature vectors until all test images could be classified correctly.

Some remaining errors are needed to examine the effect of DPMs. All results therefore have to be seen relative to the existing kernels, not to state-of-the-art image classification algorithms. In particular, we did not perform multiple training rounds; that is, we did not add the images from Figure \ref{fig:errors} to our training data after learning that they are hard to classify.

How many correct classifications do we expect from a kernel? The error tolerance in real-world tasks is often low, so we expect a correct classification rate of at least 90\% from a well-performing kernel. This threshold is somewhat arbitrary, but some threshold is needed: it makes no sense to examine kernels that barely work.

Experiments showed that the sigmoid and radial kernels performed poorly, while the widely used linear and polynomial kernels performed well. Many of the tested DPMs classified less than 90\% of the test data correctly; however, about 9\% of all DPMs reached or exceeded that threshold. Refer to Section \ref{sec:perf} for more information about these DPMs.

The precision/recall-curve of all well-performing models from Table \ref{tab:existing} is shown in Figure \ref{fig:dvse}. We did not find remarkable behavior of DPM kernels in the high precision or high recall areas. However, a more thorough quantitative analysis of the precision/recall-curves was out of scope for this work.

\begin{figure}[h!]
	\centering
	\includegraphics[width=14cm]{DvsE.png}
	\caption{Precision/Recall-Curve for Conventional Kernels and DPM Kernels}
	\label{fig:dvse}
\end{figure}

\section{Generalization}

All DPMs were tested with different generalization functions (refer to Section \ref{sec:genf} and Chapter \ref{sec:general} in the appendix for details): Boxing, Gaussian, Shepard and None.
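As a rough sketch, the four generalization functions can be pictured as follows; these are common textbook forms given here as assumptions, since the exact definitions used in the experiments are stated in the appendix:

```python
import numpy as np

def shepard(d):
    """Exponential decay of similarity with distance."""
    return float(np.exp(-d))

def gaussian(d, sigma=1.0):
    """Gaussian decay; a wider sigma means broader generalization."""
    return float(np.exp(-(d ** 2) / (2 * sigma ** 2)))

def boxing(d, threshold=1.0):
    """Hard cut-off: full similarity inside the box, none outside."""
    return 1.0 if d <= threshold else 0.0

def identity(d):
    """'None': the raw measure is passed through unchanged."""
    return d
```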

We group all DPMs with a classification performance of at least  90\% by their generalization function. The results can be seen in Table \ref{tab:gen}.

\begin{table}[h] % placement specifier
 \centering
\begin{tabular}{ | l | r| }
  \hline
  \textbf{Generalization Function} & \textbf{\% of High-performing DPMs} \\ \hline \hline
Shepard & 51.0 \\ \hline
None & 23.5 \\ \hline
Gaussian & 23.5 \\ \hline
Boxing & 2.0 \\ \hline
\end{tabular}
\caption{Percentage of High-performing DPMs per Generalization Function}
\label{tab:gen}
\end{table}

The Shepard generalization function was most often part of high-performing DPMs. Surprisingly, using no generalization function at all proved as successful as using the Gaussian generalization function. The Boxing function did not work well, but we found that its performance increased when additional iterations were allowed during SVM training.

The data show that if a combination of taxonomic and thematic measure is successful with one generalization function, it tends to be successful with other generalization functions, too. The probability of arriving at a high-performing DPM is highest when using the Shepard generalization function. However, good classification performances could be obtained with most generalization functions, as long as fitting thematic and taxonomic measures were chosen.

Based on the experiments, selecting the taxonomic and thematic measures seems to have a much larger impact on the classification performance of DPMs than selecting the generalization function. In the opinion of the author, generalization will only begin to have a larger impact once we reach a level of model complexity where the generalization function is constructed dynamically from observed generalization gradients.

\section{Quantitative and Predicate-based Measures}

As already discussed, quantitative measures operate on real-valued feature vectors, while predicate-based measures operate on 0/1 values. To keep the number of experiments manageable, we first tested all purely quantitative and all purely predicate-based DPMs. Only the most promising measures from these experiments were then tested in combination. This approach is inspired by genetic algorithms.
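To illustrate the distinction, here is one standard measure of each type; both definitions are common textbook forms and are assumptions insofar as the exact variants used in the experiments are defined in the appendix:

```python
import numpy as np

def histogram_intersection(x, y):
    """Quantitative: operates on real-valued feature vectors."""
    return float(np.minimum(x, y).sum())

def russel_rao(x, y):
    """Predicate-based: fraction of positions where both 0/1 predicates hold."""
    x, y = np.asarray(x), np.asarray(y)
    return float(np.logical_and(x == 1, y == 1).sum()) / len(x)

hi = histogram_intersection(np.array([0.2, 0.5]), np.array([0.4, 0.1]))  # 0.2 + 0.1
rr = russel_rao([1, 1, 0, 0], [1, 0, 1, 0])                              # 1 match of 4
```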

The experiments indicated that if quantitative measures are used exclusively, their performance is slightly better than the exclusive use of predicate-based measures. Table \ref{tab:pred} states the classification performance of the best DPMs that use only predicate-based measures.

\begin{table} % placement specifier
 \centering
\begin{tabular}{ | l | l |  l || l | }
  \hline
  \textbf{Taxonomic} & \textbf{Generalization} & \textbf{Thematic} & \textbf{\%} \\ \hline \hline
Sorgenfrei & Shepard &  Batagelj \& Bren & 90 \\ \hline
Hawkins \& Dotson & Shepard & Variance Dissimilarity & 90 \\ \hline
Baroni-Urbani \& Buser & Shepard & Baulieu Variant 2 & 90 \\ \hline
Coeff. of Arith. Means & Shepard & Baulieu Variant 2 & 90 \\ \hline
Proportion of Overlap & Shepard & Baulieu Variant 2 & 90 \\ \hline
\end{tabular}
\caption{Classification Performance of Predicate-based DPMs}
\label{tab:pred}
\end{table}

Like before, 90\% appears more often than it should. Again, this is explained by the difficult test images, which cause the same errors for all DPMs shown. Evidently, our predicate-based feature vector extraction is not discriminative enough.

Table \ref{tab:quant} shows the best DPMs that use only quantitative measures. Their correct classification rate is always a little bit higher than the rate of their predicate-based counterparts.

\begin{table} % placement specifier
 \centering
\begin{tabular}{ | l | l |  l || c | }
  \hline
  \textbf{Taxonomic} & \textbf{Generalization} & \textbf{Thematic} & \textbf{\%} \\ \hline \hline
Histogram Intersection & Shepard & Minkowski Distance, Meehl Index & 95 \\ \hline
Histogram Intersection & Shepard & Kullback \& Leibler,  Jeffrey Divergence & 95 \\ \hline
Histogram Intersection & Shepard & Exponential Divergence, Normalization & 95 \\ \hline
Histogram Intersection & Shepard & Kagan Divergence, Mahalanobis Distance & 95 \\ \hline
Tanimoto Index & Shepard & Minkowski Distance & 94 \\ \hline
Modified Dot Product & Gaussian & Minkowski Distance,  Mahalanobis Distance & 93 \\ \hline
Modified Dot Product & Shepard & Mahalanobis Distance & 93 \\ \hline
Cosine Measure & Gaussian & Minkowski Distance, Mahalanobis Distance & 93 \\ \hline
Cosine Measure & Shepard & Mahalanobis Distance & 93 \\ \hline
Tanimoto Index & Shepard & Normalization, Mahalanobis Distance & 92 \\ \hline
\end{tabular}
\caption{Classification Performance of Quantitative DPMs}
\label{tab:quant}
\end{table}

Figure \ref{fig:qvsp} compares purely quantitative to purely predicate-based DPMs by plotting their precision/recall-curves against each other. The first two measures in the legend, blue and red, are predicate-based; the last two, green and violet, are quantitative DPMs. Quantitative measures seem to have an advantage in the high recall area that vanishes quickly as more precision is required. Further analysis of this finding was out of scope for this work.

\begin{figure}
	\centering
	\includegraphics[width=14cm]{QvsP.png}
	\caption{Precision/Recall-Curve for Selected Quantitative and Predicate-based Measures}
	\label{fig:qvsp}
\end{figure}

Until now, our DPMs used either quantitative or predicate-based measures exclusively, but the two types of measures can be mixed. Table \ref{tab:mixed} summarizes the classification performance of the best mixed DPMs.

Note that some mixed DPMs appear in Table \ref{tab:mixed} whose parts were not among the best purely quantitative or purely predicate-based DPMs. The reason is that measures from DPMs that performed well but were not viable were also permitted in the mixed test round. All of the mixed DPMs themselves, however, are still required to be viable.

We can see that mixed DPMs work as well as quantitative or predicate-based DPMs. This is an encouraging result, because it allows us to select our DPM parts based on the feature vector type at hand.

\begin{table}% placement specifier
 \centering
\begin{tabular}{ | l | l |  l || c | }
  \hline
  \textbf{Taxonomic} & \textbf{Generalization} & \textbf{Thematic} & \textbf{\%} \\ \hline \hline
Baroni-Urbani \& Buser & Shepard & Normalization & 96 \\ \hline
Sorgenfrei & Shepard & Normalization & 95 \\ \hline
Coeff. of Arith. Means & Shepard & Normalization & 95 \\ \hline
Russel \& Rao &  Shepard &  Exponential Div. & 94 \\ \hline
Russel \& Rao &  None &  Minkowski Dist.& 94 \\ \hline
Russel \& Rao &  None &  Exponential Div. & 94 \\ \hline
Tanimoto Index &  Boxing, Gaussian & Compl. of Hamming Dist. & 94 \\ \hline
Tanimoto Index &  Shepard & Compl. of Hamming Dist. & 94 \\ \hline
Correlation Coefficient &  Shepard, None & Baulieu Var. 2& 94 \\ \hline
Correlation Coefficient &  Shepard, None & Batagelj \& Bren & 94 \\ \hline
Correlation Coefficient &  Shepard & Variance Dis. & 94 \\ \hline
Histogram Intersection &  None & Hamming Dist. & 94 \\ \hline
Russel \& Rao, Sorgenfrei &  Gaussian &  Normalization & 93 \\ \hline
Baroni-Urbani \& Buser &  Gaussian &  Normalization & 93 \\ \hline
Coeff. of Arith. Means &  Gaussian &  Normalization & 93 \\ \hline
Proportion of Overlap &  Gaussian, Shepard &  Normalization & 93 \\ \hline
Sorgenfrei &  None &  Normalization & 93 \\ \hline
Proportion of Overlap &  None &  Exponential Div. & 93 \\ \hline
Cosine Measure &  Boxing &   Compl. of Hamming Dist. & 93 \\ \hline
Mod. Dot Prod. &  Gaussian &  Variance Dis.,  Batagelj \& Bren & 93 \\ \hline
Cosine Measure &  Gaussian & Baulieu Var. 2 & 93 \\ \hline
Cosine Measure &  Gaussian & Compl. of Hamming Dist. & 93 \\ \hline
Cosine Measure &  Shepard, None & Variance Dis.,   Baulieu Var. 2& 93 \\ \hline
Cosine Measure &  Shepard, None &  Batagelj \& Bren & 93 \\ \hline
Cosine Measure &  Shepard, None &  Compl. of Hamming Dist. & 93 \\ \hline
Tanimoto Index &  Gaussian &  Baulieu Var. 2 & 93 \\ \hline
Tanimoto Index &  Shepard, None & Variance Dis.,  Baulieu Var. 2 & 93 \\ \hline
Tanimoto Index &  Shepard, None & Batagelj \& Bren & 93 \\ \hline
Russel \& Rao &  Gaussian &  Exponential Div. & 92 \\ \hline
Hawkins \& Dotson &  Shepard, None &  Exponential Div. & 92 \\ \hline
Baroni-Urbani \& Buser &  Shepard &  Exponential Div. & 92 \\ \hline
Coeff. of Arith. Means &  Shepard, None &  Exponential Div. & 92 \\ \hline
Proportion of Overlap &  Shepard &  Exponential Div. & 92 \\ \hline
Mod. Dot Prod. &  Gaussian & Baulieu Var. 2 & 92 \\ \hline
Mod. Dot Prod. &  Shepard, None & Variance Dis. & 92 \\ \hline
Hawkins \& Dotson &  Gaussian & Normalization & 91 \\ \hline
Sorgenfrei &  Shepard, None &  Exponential Div. & 91 \\ \hline
Hawkins \& Dotson &  Gaussian &  Exponential Div. & 90 \\ \hline
Hawkins \& Dotson &  Shepard &  Normalization & 90 \\ \hline
Coeff. of Arith. Means &  None &  Normalization & 90 \\ \hline
\end{tabular}
\caption{Classification Performance of Mixed DPMs}
\label{tab:mixed}
\end{table}

Counterintuitively, mixing quantitative measures with predicate-based measures (which performed slightly worse in general) still often led to improved classification performance. This is further empirical evidence in support of DPMs.

At the beginning of this section, we hinted at quantitative DPMs being slightly superior to their predicate-based counterparts. In light of the performance of mixed DPMs, this does not hold. The observed performance difference is likely caused by the algorithms used for feature vector extraction.

\section{Statistical Significance}

Because of the long runtime of each experiment, it was not possible to test each DPM many times with different subsamples and provide confidence intervals for the classification results. It is plausible that in other experiments, a few of the well-performing measures disappear or new ones are found. What we can show is that DPMs work in principle and our results are not just caused by the degrees of freedom our large search space provides.

Assume for a moment that DPMs do not work at all and their classifications are utterly random. In a binary classification problem, the chance of guessing a class label correctly is 50\%, which is quite high. Because our generic model allows for many different concrete DPMs, we would have many chances to ``guess'' a good test data classification.
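To put a number on this, consider the probability that a purely random classifier reaches our 90\% threshold; the sketch assumes a balanced test set of 100 images:

```python
from math import comb

# P(X >= 90) for X ~ Binomial(100, 0.5): the chance that coin-flip
# classification gets at least 90 of 100 images right.
p_tail = sum(comb(100, k) for k in range(90, 101)) / 2 ** 100
# p_tail is on the order of 1e-17, so even thousands of random DPMs
# would be very unlikely to reach 90% correct by chance alone.
```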

One argument against our findings being random is the good classification results of the best mixed DPMs. We restricted the search space for these experiments to DPM parts that had already shown good classification performance in purely quantitative or purely predicate-based DPMs. Still, we obtained good classification results again.

As further evidence, we ran two of the best DPMs again with a large sample. The classification results are shown in Table \ref{tab:good}.

\begin{table} % placement specifier
 \centering
\begin{tabular}{ | l | l |  l || c | }
  \hline
  \textbf{Taxonomic} & \textbf{Generalization} & \textbf{Thematic} & \textbf{\%} \\ \hline \hline
Baroni-Urbani \& Buser & Shepard & Normalization & 95.4 \\ \hline
Histogram Intersection & Shepard & Minkowski Distance & 94.6 \\ \hline
\end{tabular}
\caption{Classification Performance With a Larger Dataset}
\label{tab:good}
\end{table}

We can see that the classification performance remained stable for a much larger dataset with nine times as many classifications. This indicates that our results are not fragile in the sense that different DPMs show up with high classification performance for every dataset. It has to be noted, however, that much stronger claims would have been possible if the size and total runtime of the experimental setup had permitted the calculation of confidence intervals.

\section{Constructing DPMs}\label{sec:perf}

Until now, we always stated concrete combinations of taxonomic measure, thematic measure and generalization function. Restricting ourselves to exactly these combinations would be too narrow: the data show that certain measures consistently work well but are often dragged down by being combined with poorly performing measures.

Therefore, we provide instructions for building well-performing DPMs by listing all measures and generalization functions that were part of at least one viable model  with a classification performance of at least 90\%. Our options for well-performing DPMs are shown in Tables \ref{tab:i1}, \ref{tab:i2}, \ref{tab:i3}, \ref{tab:i4} and \ref{tab:i5}.

For clarity, we split the options into quantitative and predicate-based DPMs, but they can be mixed at will. They are intended to be plugged into Equation \ref{formula:simple-dpm}. Refer to Chapter \ref{chp:formulas} in the appendix for exact definitions.

This concludes the results chapter. Unsurprisingly, using Equation \ref{formula:simple-dpm} does not work with all measures. Quite to the contrary, most measures barely work at all when used in a DPM. 

One possible reason is that the classification into taxonomic and thematic measures might not be as simple as described in Section \ref{part:class}. Our experimental setup labeled every given measure either as taxonomic or as thematic, but perhaps measures exist that are neither. Such measures would show up in our results with very poor performance, which would explain the myriad of DPMs with poor performance that we observed during the experiments.

An alternative explanation has to do with the structure of Equation \ref{formula:simple-dpm} itself. It can be described as adding taxonomic and thematic similarity to arrive at a total similarity score. It is not a given that taxonomic and thematic similarity have the same magnitude for all existing measures. As an example of what could go wrong, assume that $s_{dpm}=100+3$. A similarity of three could mean a huge similarity for the taxonomic measure, while a similarity of $100$ only means ``somewhat similar'' for the thematic measure.
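One pragmatic remedy, not tested in our experiments, would be to min-max rescale each measure over a reference sample before summing, so that both parts contribute on a comparable scale; a sketch with a hypothetical measure:

```python
import numpy as np

def rescaled(m, reference_pairs):
    """Wrap measure m so that, over a reference sample of item pairs, its
    outputs are min-max scaled to [0, 1]. This puts taxonomic and thematic
    magnitudes on a comparable footing before they are added."""
    values = np.array([m(x, y) for x, y in reference_pairs])
    lo, hi = values.min(), values.max()
    return lambda x, y: float((m(x, y) - lo) / (hi - lo))

# Hypothetical measure whose raw outputs range over [0, 200]:
raw = lambda x, y: x + y
scaled = rescaled(raw, [(0, 0), (100, 100)])
# scaled(100, 100) == 1.0, scaled(0, 0) == 0.0
```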

On the plus side, we found some concrete DPMs that perform well. From this, we can conclude that psychological research in the area of thematic and taxonomic thinking has useful applications in machine learning.
 
The practitioner, when faced with a specific machine learning task, can simply select two measures from our tables and check whether the resulting DPM kernel works better than the linear or polynomial kernel. The decision between predicate-based and quantitative measures is purely technical and depends only on the format of the input data.
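In practice, any SVM library that accepts a precomputed kernel matrix can host such a DPM kernel; the sketch below uses a plain dot product as a stand-in for a DPM built from the tables:

```python
import numpy as np

def gram_matrix(X, kernel):
    """Precompute the Gram matrix for a custom kernel, e.g. for
    scikit-learn's SVC(kernel='precomputed')."""
    n = len(X)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = kernel(X[i], X[j])
    return K

# Stand-in kernel; a real DPM kernel from the tables would go here.
dot = lambda x, y: float(np.dot(x, y))
K = gram_matrix(np.eye(2), dot)   # orthonormal inputs give the identity
```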

The question of the best-performing DPMs has not been answered fully. We know that, following our building instructions, the performance of the linear and polynomial kernels can usually be matched. With our dataset and feature extraction, both conventional kernels and DPM kernels reach roughly 90\% correct classifications. How DPMs compare to the state of the art on other machine learning tasks, or with other feature extraction algorithms, cannot be said yet and is left to future research.

\begin{table}[h!]
\parbox{.3\linewidth}{
\centering

\begin{tabular}{ | l | }
  \hline
\textbf{g} \\ \hline \hline
Boxing  \\ \hline
Gaussian  \\ \hline
Shepard  \\ \hline
None  \\ \hline
\end{tabular}
\caption{Generalization Functions}
\label{tab:i1}
}
\hfill
\parbox{.3\linewidth}{
\centering

\begin{tabular}{ | l | }
  \hline
$\boldsymbol{m_{taxonomic}}$ \\ \hline \hline
Histogram Intersection  \\ \hline
Modified Dot Product  \\ \hline
Tanimoto Index  \\ \hline
Correlation Coefficient  \\ \hline
Cosine Measure  \\ \hline
\end{tabular}

\caption{Quantitative Taxonomic Measures}
\label{tab:i2}
}
\hfill
\parbox{.3\linewidth}{
\centering
\begin{tabular}{ | l | }

  \hline
$\boldsymbol{m_{thematic}}$ \\ \hline \hline
Meehl Index  \\ \hline
Minkowski Distance \\ \hline
Exponential Divergence  \\ \hline
Kagan Divergence  \\ \hline
Jeffrey Divergence  \\ \hline
Kullback Leibler  \\ \hline
Mahalanobis Distance \\ \hline
Normalization  \\ \hline
\end{tabular}

\caption{Quantitative Thematic Measures}
\label{tab:i3}
}
\end{table}


\begin{table}[h!]
\parbox{.45\linewidth}{
\centering

\begin{tabular}{ | l | }
  \hline
$\boldsymbol{m_{taxonomic}}$ \\ \hline \hline
Hawkins Dotson \\ \hline
Sorgenfrei \\ \hline
Russel \& Rao \\ \hline
Baroni Urbani Buser \\ \hline
Proportion of Overlap \\ \hline
Coefficient of Arithmetic Means \\ \hline
\end{tabular}

\caption{Predicate-based Taxonomic Measures}
\label{tab:i4}
}
\hfill
\parbox{.45\linewidth}{
\centering

\begin{tabular}{ | l | }
  \hline
$\boldsymbol{m_{thematic}}$ \\ \hline \hline
Variance Dissimilarity \\ \hline
 Batagelj Bren \\ \hline
 Baulieu Variant 2 \\ \hline
Hamming Distance \\ \hline
Complement of Hamming Distance \\ \hline
\end{tabular}

\caption{Predicate-based Thematic Measures}
\label{tab:i5}
}
\end{table}

