\chapter{Conclusion and Future Work}

\label{chp:summary}

In this thesis, we briefly reviewed existing models of similarity, such as geometric, featural and transformational models. DPMs are meta-models for measuring similarity that combine taxonomic and thematic thinking.

Taxonomic thinking tries to identify common features between two objects: the more features they share, the larger the similarity. Taxonomic thinking is best represented by similarity measures, i.e.\ measures whose value grows with the similarity of two stimuli.

Thematic thinking tries to find a common theme that connects two objects; this theme is then used for comparison. Thematic thinking is best represented by distance measures, i.e.\ measures whose value shrinks as the similarity of two objects grows.

Using a generalization function, distance can be converted into similarity. In Equation~\ref{formula:simple-dpm}, we used this to formulate a simple DPM that measures similarity.
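The conversion can be illustrated with a minimal Python sketch. It assumes Shepard's exponential decay as the generalization function and a constant importance factor; all function names (`shepard`, `simple_dpm`, `alpha`) are illustrative, not part of the thesis's implementation:

```python
import math

def shepard(distance):
    """Shepard-style generalization: similarity decays exponentially
    with distance, mapping a distance into the range (0, 1]."""
    return math.exp(-distance)

def simple_dpm(sim, dist, x, y, alpha=0.5):
    """Sketch of a simple DPM: a weighted sum of a similarity measure
    (taxonomic part) and a generalized distance measure (thematic part).
    `alpha` is the constant importance factor."""
    return alpha * sim(x, y) + (1 - alpha) * shepard(dist(x, y))
```

For identical stimuli a sensible similarity measure returns its maximum and the distance is zero, so both components contribute their maximum.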

Psychological research suggests that a combination of both taxonomic and thematic thinking is needed to model human similarity assessment adequately. Similarity is often a central component of machine learning algorithms; however, one of the two ways of thinking is often neglected, especially in models formulated by computer scientists.

We implemented pedestrian detection in images to be able to test DPMs in a real-world task. Input images have to be converted into feature vectors; we used a combination of three algorithms for this task: Histogram of Oriented Gradients, Scalable Color Descriptor and Edge Histogram Descriptor. The feature vectors were used to train an SVM, a binary classification algorithm. The trained SVM then classifies test images into ``contains a pedestrian'' and ``contains no pedestrian''.
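The overall pipeline can be sketched as follows. Since compact reference implementations of the three descriptors are not reproduced here, synthetic vectors stand in for the concatenated HOG/SCD/EHD features, and scikit-learn's \texttt{SVC} stands in for the SVM library actually used; both substitutions are purely illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for descriptor output: in the real pipeline each image yields
# one long vector by concatenating HOG, SCD and EHD features. Here we
# draw two separable Gaussian clouds to keep the sketch self-contained.
pos = rng.normal(loc=1.0, size=(50, 32))   # "contains a pedestrian"
neg = rng.normal(loc=-1.0, size=(50, 32))  # "contains no pedestrian"
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [0] * 50)

# Train a binary SVM classifier on the feature vectors.
clf = SVC(kernel="linear").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

With real images, only the feature-extraction step changes; the training and classification calls stay the same.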

SVMs use a kernel function to measure similarity. Some kernel functions also map the input data to a higher-dimensional space to achieve better data separation. We provided DPM kernel functions for the widely used SVM$^{light}$ library. The implementation lets us choose from 5 generalization functions and 94 measures. It is freely available; the main goal of this work was to provide such an implementation to foster further work in the area of DPMs.
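The general idea of a DPM kernel can be demonstrated without SVM$^{light}$: many SVM implementations accept a callable that returns the Gram matrix between two sets of vectors. The sketch below uses scikit-learn and an illustrative DPM built from a linear similarity (taxonomic) plus an exponentially generalized Euclidean distance (thematic); since both summands are positive semi-definite kernels, their sum is a valid kernel:

```python
import numpy as np
from sklearn.svm import SVC

def dpm_kernel(X, Y):
    """Illustrative DPM kernel: linear similarity plus the Laplacian
    generalization exp(-||x - y||) of the Euclidean distance."""
    sim = X @ Y.T
    # Pairwise squared Euclidean distances between rows of X and Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return sim + np.exp(-np.sqrt(np.clip(d2, 0.0, None)))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1, size=(40, 16)), rng.normal(-1, size=(40, 16))])
y = np.array([1] * 40 + [0] * 40)

# The callable is evaluated on (train, train) during fitting and on
# (test, train) during prediction.
clf = SVC(kernel=dpm_kernel).fit(X, y)
```

The clipping guards against tiny negative values that floating-point cancellation can produce in the distance computation.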

We tested our DPMs on a subsample of a standard dataset and obtained some interesting results. Any DPM is a combination of two measures; if a single measure performs as well as or better than the combination, the DPM is not viable. Only 14\% of our DPMs were found to be viable. The takeaway is that when using a DPM, one should always check that the classification performance does not come from one measure alone.
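Such a viability check can be sketched as a comparison of cross-validated scores for the combined kernel and for each component kernel on its own (illustrative component kernels and synthetic data; the thesis's actual measures and dataset differ):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def lin(X, Y):   # taxonomic component alone
    return X @ Y.T

def lap(X, Y):   # thematic component alone (generalized distance)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-np.sqrt(np.clip(d2, 0.0, None)))

def dpm(X, Y):   # the combination under test
    return lin(X, Y) + lap(X, Y)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.5, size=(60, 8)), rng.normal(-0.5, size=(60, 8))])
y = np.array([1] * 60 + [0] * 60)

scores = {name: cross_val_score(SVC(kernel=k), X, y, cv=5).mean()
          for name, k in [("taxonomic", lin), ("thematic", lap), ("dpm", dpm)]}
# The DPM counts as viable only if it outperforms both baselines.
viable = scores["dpm"] > max(scores["taxonomic"], scores["thematic"])
```

On real data this comparison should of course be repeated over several splits before drawing conclusions.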

DPMs can be formulated with quantitative measures (i.e. real values), predicate-based measures (i.e. countable or 0/1 values) and with a mix of both types of measures. We did not find conclusive evidence that a certain measure type (e.g. measuring taxonomic and thematic thinking with quantitative measures) works better than any other type.

We discovered DPMs that performed as well as or better than the existing linear and polynomial kernels. As an example, consider combining the Russell and Rao similarity measure with the Exponential Divergence difference measure using Shepard's generalization function, as shown in Equation~\ref{formula:exampledpm}.

\begin{equation}
\label{formula:exampledpm}
m_{dpm}=\frac{a}{a+b+c+d}+e^{-\left|\sum_i x_i \log^2\!\left(\frac{x_i}{y_i}\right)\right|}
\end{equation}

Russell and Rao's measure calculates the percentage of shared features, a simple improvement over merely counting co-occurrences. The Exponential Divergence measure represents an entirely different way of thinking about similarity by measuring information gain.
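The example DPM can be written out directly. The sketch below assumes, for illustration, that the Russell and Rao part operates on binary feature vectors while the Exponential Divergence part operates on strictly positive quantitative vectors; how the two views are derived from one stimulus is an implementation choice not fixed here:

```python
import math

def russell_rao(x, y):
    """Russell and Rao similarity for binary vectors: the fraction of
    features present in both objects, a / (a + b + c + d)."""
    a = sum(1 for xi, yi in zip(x, y) if xi and yi)
    return a / len(x)

def exp_divergence(x, y):
    """Exponential Divergence: sum_i x_i * log(x_i / y_i)^2,
    defined here over strictly positive components."""
    return sum(xi * math.log(xi / yi) ** 2 for xi, yi in zip(x, y))

def example_dpm(xb, yb, xq, yq):
    """Russell-Rao on the binary view plus Shepard's generalization
    e^{-|d|} of the Exponential Divergence on the quantitative view."""
    return russell_rao(xb, yb) + math.exp(-abs(exp_divergence(xq, yq)))
```

For identical quantitative vectors the divergence vanishes, so the thematic term contributes its maximum of 1.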

However, it has to be mentioned that this is not conclusive proof that DPMs outperform the current state of the art. With the algorithms we used, meaningful feature vectors contain thousands of elements per image, which makes SVM training (and even classification) slow for conventional kernels as well as DPM kernels. In addition, the search space is large, because many concrete DPMs can be formulated with Equation~\ref{formula:simple-dpm}.

Conclusive evidence would require statistical testing to establish the significance of differences in classification performance. For this, every specific DPM would have to be tested many times, which in turn requires improved runtime.

Future work could either use algorithms that yield good classification results with much smaller feature vectors, or focus on only a few promising DPMs. To support the latter, we provided a construction kit for well-performing DPMs.

Another interesting direction for further research is applying DPMs in other domains, for example audio and text retrieval, or non-multimedia problems such as recommender systems and computational finance. Because DPMs are kernel functions, they can readily be used with algorithms other than SVMs, such as Gaussian processes, ridge regression and spectral clustering.

In this work, a constant importance factor controlled the relative weight of taxonomic and thematic thinking; we assumed it can be determined with psychological tests. It is very likely, however, that the importance factor is situation-dependent rather than constant. Making the importance factor a function of the features is another promising direction of research.
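Such a feature-dependent importance factor could look like the following sketch, where the constant $\alpha$ is replaced by a function of the two stimuli. The concrete choice of function here (weighting taxonomic thinking more for sparse, predicate-like vectors) is purely a hypothetical example:

```python
import math

def adaptive_dpm(sim, dist, alpha_fn, x, y):
    """DPM with a situation-dependent importance factor: the weight is
    computed from the stimuli instead of being a fixed constant."""
    a = alpha_fn(x, y)
    return a * sim(x, y) + (1 - a) * math.exp(-dist(x, y))

def sparsity_alpha(x, y):
    """Toy importance function: the sparser the two vectors, the more
    weight goes to the taxonomic (feature-matching) component."""
    zeros = sum(1 for v in list(x) + list(y) if v == 0)
    return zeros / (len(x) + len(y))
```

For entirely sparse inputs this function returns 1, so the DPM reduces to the similarity measure alone; for dense inputs it leans on the generalized distance.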

Hopefully, this work provides a solid basis for further work on DPMs. Improved similarity measurement would not only lead to large improvements on the application side, but also bring us closer to the noble goal of understanding similarity as a concept.

