\documentclass[a4paper, oneside, BCOR1mm]{scrreprt}

\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[english]{babel}
\usepackage[pdftex]{graphicx}

% \setkomafont{sectioning}{\rmfamily} % enable serif headings

\usepackage[sc,osf]{mathpazo}
\usepackage{bm}
\linespread{1.05}
\renewcommand\sfdefault{uop}
\usepackage{courier}

\usepackage{amsmath}
\usepackage{amsxtra}
\usepackage[protrusion, expansion]{microtype}

% \usepackage{textcomp} % what is this?!
% \usepackage{ellipsis, fixltx2e} % what is this!?
% \usepackage{ellipsis}
\usepackage{jurabib}


\usepackage{svn-multi}
\usepackage{hyperref}
\hypersetup{colorlinks=true, linkcolor=red, urlcolor=blue}
% Header and footer --------------------------
\usepackage[automark]{scrpage2}
\pagestyle{scrheadings}
\clearscrheadings
\clearscrplain
\lohead{\headmark}
% \lofoot{Revision \svnrev}
\cofoot{\pagemark}
\rohead{Revision \svnrev, \svndate}
% \setheadsepline{.4pt} % rule under the header
% \setfootsepline{.4pt} % rule at the very bottom
%---------------------------------------------



\svnidlong
{$HeadURL: http://svnthss.googlecode.com/svn/randomforest.tex $}
{$LastChangedDate: 2008-08-21 19:29:18 +0000 (Thu, 21 Aug 2008) $}
{$LastChangedRevision: 21 $}
{$LastChangedBy: knollmueller $}


\usepackage{pgf}
\usepackage{tikz}

%opening
\titlehead{{\Large University of Regensburg \hfill 2008 \\}
            Department of Biophysics \\
            Computational Intelligence and Machine Learning\\
            Albertus Magnus Straße xxx\\
            93055 Regensburg

            }
\subject{Diploma Thesis}
\title{Multivariate Outlier Detection using Machine Learning Techniques}
\author{Philipp Knollmüller}
\publishers{Supervised by Prof. Dr. rer. nat. Elmar Wolfgang Lang\thanks{University of Regensburg}}
\setlength{\parskip}{1.5ex plus 0.5ex minus 0.5ex} % avoid widows and orphans
\newcommand{\vect}[1]{\mathbf{#1}} 
\dedication{For my parents.}

\begin{document}


\maketitle
\tableofcontents

\chapter{Introduction}
\chapter{Theory}
\section{Artificial Outlier Generation}
A problem in evaluating the performance of one-class classifiers is that most of the time the training data is not labeled. Often it is very expensive or even impossible to obtain the labels. For example, Infineon performs a ``Burn-In-Test'' on wafers it would like to have labeled. Simply put, this means stress testing the chips until some of them start to fail. Depending on whether they pass or fail the test, the chips get their label. Obviously, this is extremely expensive, because it consumes much time and it is questionable whether the chips can still be sold afterwards. So Infineon avoids the ``Burn-In-Test'' to save money and time.

Given the case that there are no labels for a test set, there seems to be no way to create a ROC curve for evaluating different classifiers. One option is to generate outliers synthetically and use them as the second class. Now there are two labels, and classifiers can be rated.

The following methods for creating artificial outliers are used in this work. One is generating data uniformly inside a hypercube that contains the training data. Another method is to generate the data uniformly inside a hypersphere that, again, contains the data.

\subsection{Hypersphere}
The first method presented places a hypersphere around the original data and distributes data uniformly inside this sphere. The intention is that the sphere fits tightly around the data points, because with increasing dimension the volume of the sphere decreases.

A naive approach is to draw randomly from a hypercube that contains the hypersphere and test whether the new data points lie inside or outside of the sphere. The problem with this approach is that with a rising number of dimensions $d$ the volume of the hypercube increases as $2^d$, while the volume of the hypersphere converges to zero. So the probability that a point sampled randomly from the hypercube lies inside the hypersphere vanishes.

Therefore, it is not trivial to generate a uniform distribution in a hypersphere. The key idea behind another method is to start from a $d$-dimensional Gaussian distribution and then rescale the norms of the object vectors. This is accomplished by the following steps:
\begin{enumerate}
    \item Generate data $\vect{x}$ from a Gaussian distribution (with zero mean and unit variance):
    \begin{equation}
        \vect{x}\sim \mathcal{N}(\vect{0},\vect{1})
    \end{equation}
    \item Calculate the squared Euclidean distance $r^2$ from each sample to the origin. This squared distance follows a $\chi^2$ distribution with $d$ degrees of freedom \ref{lit:xxxUllman}~(Ullman, 1978).
    \begin{equation}
        r^2 = \Vert \vect{x} \Vert^2
    \end{equation}
    
    \item Use the cumulative distribution function of $\chi^2_d$, denoted $X^2_d$, to transform the $r^2$ distribution into a uniform distribution between 0 and 1. This yields a uniformly distributed $\rho^2$:
    \begin{equation}
        \rho^2=X^2_d(r^2) = X^2_d(\Vert\vect{x}\Vert^2)
    \end{equation}
    
    \item Rescale $\rho^2$ via $r'^2 = (\rho^2)^{\frac{2}{d}}$, i.e.\ $r' = (\rho^2)^{\frac{1}{d}}$, such that $r'$ has cumulative distribution $r^d$ for $r$ between 0 and 1.
    \begin{equation}
        r'^2 = (\rho^2)^{\frac{2}{d}} = \left(X^2_d(\Vert\vect{x}\Vert^2)\right)^{\frac{2}{d}} 
    \end{equation}
    \item Rescale all objects $\vect{x}$ with this factor:
    \begin{equation}
        \vect{x'} = \frac{r'}{\Vert\vect{x}\Vert}\vect{x}
    \end{equation}
\end{enumerate}

Now the $\vect{x'}$ are uniformly distributed in a unit hypersphere in $d$ dimensions. This method is considerably faster than the naive approach. The synthetic outliers can now be distributed around any $d$-dimensional data distribution. This is very useful, as the number of dimensions in this thesis ranges from about 20 to 200.
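The five steps above can be sketched in a few lines of Python; the function name and the use of NumPy/SciPy are illustrative choices, not part of the thesis implementation:

```python
import numpy as np
from scipy.stats import chi2

def sample_unit_hypersphere(n, d, rng=None):
    """Draw n points uniformly from the unit hypersphere in d dimensions."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal((n, d))            # step 1: Gaussian samples
    r2 = np.sum(x ** 2, axis=1)                # step 2: squared norms, chi^2_d
    rho2 = chi2.cdf(r2, df=d)                  # step 3: uniform in [0, 1]
    r_new = rho2 ** (1.0 / d)                  # step 4: radius with CDF r^d
    return x * (r_new / np.sqrt(r2))[:, None]  # step 5: rescale each sample
```

To place the outliers around a real data set, the unit-sphere samples are scaled and shifted to the data's bounding sphere.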


\subsection{Hypercube}
A simpler method to generate $d$-dimensional artificial outliers is to draw a box around the $M$ data points and sample uniformly inside the box.
This is done by following these steps:
\begin{enumerate}
    \item Determine the minimum and maximum value of every variable $\vect{x}_i$:
    \begin{equation}
        x^{\mathrm{max}}_i = \mathrm{max}(\vect{x}_i)
    \end{equation}
    \begin{equation}
        x^{\mathrm{min}}_i = \mathrm{min}(\vect{x}_i)
    \end{equation}
    
    \item Calculate the mean of every variable $\vect{x}_i$ over the $M$ data points:
    \begin{equation}
        \bar{x}_i = \frac{1}{M}\sum_{j=1}^M{x_i(j)}
    \end{equation}

    \item To ensure that the data points lie well inside the hypercube and do not touch its borders, the box is expanded by a factor of $1.25$. The new borders are determined as follows:
    \begin{equation}
        x_i^{\mathrm{boxmax}} = 1.25 \cdot \left(x^{\mathrm{max}}_i - \bar{x}_i\right)+\bar{x}_i
    \end{equation}
    \begin{equation}
        x_i^{\mathrm{boxmin}} = 1.25 \cdot \left(x^{\mathrm{min}}_i - \bar{x}_i\right)+\bar{x}_i
    \end{equation}

\end{enumerate}

Now the bounding box of the data is known. Therefore it is possible to place every $d$-dimensional data set inside such a hypercube. The only task left is to draw uniformly distributed synthetic outliers at random inside this hypercube.

This method is easy to implement, and the outliers are computed very fast.
A drawback for larger dimensions $d$: the volume of the hypercube grows exponentially with $d$, so most of the artificial data points end up far away from the real data points.
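The bounding-box construction can be sketched as follows (a NumPy illustration; the function name and signature are hypothetical):

```python
import numpy as np

def sample_bounding_box(data, n, factor=1.25, rng=None):
    """Draw n uniform points from an expanded bounding box around
    the (M x d) data matrix, expanded around the per-variable mean."""
    rng = np.random.default_rng(rng)
    x_min = data.min(axis=0)               # minimum of every variable
    x_max = data.max(axis=0)               # maximum of every variable
    mean = data.mean(axis=0)               # mean over the M data points
    box_max = factor * (x_max - mean) + mean
    box_min = factor * (x_min - mean) + mean
    return rng.uniform(box_min, box_max, size=(n, data.shape[1]))
```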



\subsection{PERHAPS::Sample from marginal Distribution}


\section{Receiver Operating Characteristic Graphs}
After applying a classifier to a data set, it is important to evaluate the resulting classifications in order to decide which classifier is most appropriate for the given task. A useful technique to visualize the performance of classifiers is the Receiver Operating Characteristic (ROC). The area under a ROC curve (AUC) is a useful measure to compare different classifiers. The ROC curve is a plot of false positives (FP) against true positives (TP).
In the following, only two class problems are considered.
\subsection{Confusion Matrix and Classifier Performance} 
% In a \emph{confusion matrix} the classifiers performance is somehow measured.
A classifier maps an instance $I$ to one element of the set $\{Y,N\}$, its classification result. The instance has a label that shows its true class membership, $\{p,n\}$. Some classifiers produce a continuous output, to which a threshold has to be applied to predict the class membership, while others produce discrete outcomes.
 
Given a classifier and an instance, there are four possible outcomes. If the instance is positive and it is classified as positive, it is counted as a \emph{true positive} (TP). If the instance is negative and classified as negative, it is counted as a \emph{true negative} (TN). If a negative instance is misclassified as positive, it is counted as a \emph{false positive} (FP). Likewise, a positive instance misclassified as negative is counted as a \emph{false negative} (FN).
 
Given a classifier and a set of instances (the test set), a two-by-two \emph{confusion matrix} represents the dispositions of the set of instances, as in table \ref{tab:confusion matrix}. The numbers on the diagonal are the correct decisions made by the classifier, while the numbers off the diagonal are wrong classifications.

\begin{table}[htb]
\centering

\begin{tabular}{r|cc}
    
        &    positive class & negative class \\
        \hline
    classified positive & True Positives     & False Positives \\
    classified negative & False Negatives    & True Negatives \\
\end{tabular}
\caption{Confusion Matrix}
\label{tab:confusion matrix}
\end{table}


The \emph{True Positive rate} of a classifier, also called \emph{hit rate} or \emph{recall}, is estimated as:
\begin{equation*}
    \text{TP rate} = \frac{\text{True Positives}}{\text{All Positives}}
\end{equation*}

The \emph{False Positive rate} of a classifier, also called \emph{false alarm rate}, is estimated as:
\begin{equation*}
    \text{FP rate} = \frac{\text{False Positives}}{\text{All Negatives}}
\end{equation*}
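Both estimates can be computed directly from labels and predictions; a minimal sketch (labels encoded here as 1 for positive and 0 for negative, the function name is hypothetical):

```python
def tp_fp_rates(y_true, y_pred):
    """Estimate TP rate and FP rate from true labels and predictions
    (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    pos = sum(1 for t in y_true if t == 1)   # all positives
    neg = sum(1 for t in y_true if t == 0)   # all negatives
    return tp / pos, fp / neg
```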




\subsection{ROC Space}
The ROC graph is a plot of the \emph{TP rate} on the $Y$ axis against the \emph{FP rate} on the $X$ axis in a two-dimensional space. A ROC graph shows the relative trade-off between benefits (TP) and costs (FP). Given a classifier with a known confusion matrix, its point in the ROC space can be calculated. For example, the ``Perfect'' classifier in figure \ref{fig:roc space} has a TP rate of one and a FP rate of zero, while a random classifier would produce points along the diagonal through the origin. A classifier that performs worse than random would be located below this diagonal, but can be turned into a better-than-random classifier by inverting its class label predictions.

\begin{figure}[htb]

\centering

\begin{tikzpicture}

% \draw[very thin, color=gray, step=0.8] (0.0,0.0) grid (3.9,3.9);
\draw[->] (0,0) -- node[below] {FP rate} (4.2,0) ;
\draw[->] (0,0) -- node[rotate=90, above] {TP rate} (0,4.2) ;

% \draw[color=red] plot[id=diag, domain=0:4] function{x};
\draw[color=red] (0,0) -- (4,4);

\filldraw (0,4) circle (2pt) node[right] {Perfect};
\filldraw (3,3) circle (2pt) node[right] {Random};
\filldraw (0.2,3.2) circle (2pt) node[right] {Good};
\filldraw (1.5,0.5) circle (2pt) node[right]{Worse than random};
% \draw   (0,0) .. controls (1,3)  .. (4,4);
\end{tikzpicture}
\caption{A basic ROC graph showing some discrete classifiers with different performances.}
\label{fig:roc space}
\end{figure}


\subsection{ROC Curves and Area Under Curve}
A classifier with a known confusion matrix is a point in the ROC space. For classifiers that produce a continuous outcome, a threshold has to be applied to obtain the actual class prediction. A ROC curve can easily be created by sweeping the threshold from the smallest to the largest outcome; every possible confusion matrix of this specific classifier can be calculated. This results in multiple points, which can be interpolated to a ROC curve in the ROC space representing the classifier's performance. This ROC curve can, for example, be used to find the best working point for a specific classification task. The resulting area under the curve (AUC) is used to measure the overall performance of a classifier.

Looking at figure \ref{fig:roc curves}, the different ROC curves are created by several classifiers. A random classifier has the worst performance and would be represented by the diagonal through the origin. The better a classifier performs, the more its ROC curve differs from the random case.
 
This implies that classifiers with a larger area beneath their curves perform better. The area under the ROC curve (AUC) is often used to compare the performance of different classifiers, as it is a single number and can be ranked. Adjusting the threshold to find the best working point is a separate task, for which the AUC is of no help.
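The threshold sweep described above can be sketched as follows (a plain NumPy illustration with hypothetical names; the AUC is obtained with the trapezoid rule):

```python
import numpy as np

def roc_curve_and_auc(scores, labels):
    """Sweep the decision threshold over the continuous outputs and
    return the ROC points (FP rate, TP rate) and the area under the curve."""
    order = np.argsort(-np.asarray(scores, dtype=float))  # descending scores
    labels = np.asarray(labels)[order]
    tps = np.cumsum(labels == 1)          # true positives as threshold drops
    fps = np.cumsum(labels == 0)          # false positives as threshold drops
    tpr = np.concatenate(([0.0], tps / max(tps[-1], 1)))
    fpr = np.concatenate(([0.0], fps / max(fps[-1], 1)))
    auc = float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))  # trapezoid rule
    return fpr, tpr, auc
```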

\begin{figure}[htb]
    
\centering


\begin{tikzpicture}
% \draw[very thin, color=gray, step=0.8] (0.0,0.0) grid (3.9,3.9);
\draw[->] (0,0) -- node[below] {FP rate} (4.2,0) ;
\draw[->] (0,0) -- node[rotate=90, above] {TP rate} (0,4.2) ;

% \draw[color=red] plot[id=diag, domain=0:4] function{x};
\draw[color=gray] (0,0) -- (4,4);

\draw   (0,0) .. controls (0,4) .. (4,4);
\draw   (0,0) .. controls (0.5,3.5)  .. (4,4);
\draw   (0,0) .. controls (1,3)  .. (4,4);
\draw   (0,0) .. controls (1.5,2.5)  .. (4,4);
\draw[->, color=red]   (2.5,1.5) -- node[fill=white, sloped] {better}  (0.2,3.8);
\end{tikzpicture}

\caption{Different ROC Curves}
\label{fig:roc curves}
\end{figure}


\section{Support Vector Machines}
A Support Vector Machine creates a hyperplane that separates the given input data with the largest possible margin. This hyperplane cuts the vector space into two halves. Classification is done by testing in which half space a new data point lies.
\subsection{Optimal Separating Hyperplane}
A hyperplane separates a space into two half spaces. Given a two-class problem, the hyperplane separates the space in such a way that each class of data points is located in its own half space.
 
Consider a training data set consisting of two classes living in an $n$-dimensional space, and assume every class forms its own cluster that does not overlap with the other. In this case, there are hyperplanes that can separate the two classes, with one class on one side and the other class on the other side of the hyperplane.
  
There are many hyperplanes that can separate the two classes, as seen in figure \ref{pic:hyperplane}. Which of these hyperplanes separates the data best? The assumption is that a hyperplane that maximizes the margin to the data points is best suited, as it is the most robust choice. A hyperplane with maximum margin is more robust to noise, outliers, and measurement instabilities, since small changes of the data points can be tolerated: despite the small variations, the points remain on the same side of the hyperplane.

In fact, a support vector machine creates a maximal margin hyperplane. The hyperplane is described by only a few data points, called the support vectors. These support vectors are the points located exactly on the margin; the other points are irrelevant for describing the hyperplane.



% \begin{enumerate}
%     \item Geometric interpretation of a hyperplane
%     \item why hyperplane
%     \item not anyone but the best
%     \item optimal margin hyperplane
% \end{enumerate}

\subsection{Support Vector Machines Algorithm}
This section describes the Support Vector Machine algorithm for finding a hyperplane that separates two classes of non-overlapping data points with maximal margin. The hyperplane is constructed from only a few data points, the support vectors, which reside exactly on the margin.

Given a dot product space $\mathcal{H}$ and a set of data points $(\mathbf{x}_1,y_1),\ldots,(\mathbf{x}_m,y_m)$, $\mathbf{x}_i\in \mathcal{H}$, $y_i \in\{-1,+1\}$, any hyperplane in $\mathcal{H}$ can be described as
\begin{equation}
    \label{eqn:hyperplane}
    \{\mathbf{x} \in \mathcal{H} | \langle\mathbf{w},\mathbf{x}\rangle + b =0 \}, \mathbf{w} \in \mathcal{H}, b \in \mathbb{R}.
\end{equation}
The vector $\mathbf{w}$ is a normal vector; it is orthogonal to the hyperplane. The fraction $\frac{|b|}{\lVert\mathbf{w}\rVert}$ is the distance of the hyperplane to the origin. By rescaling $\mathbf{w}$ and $b$, the following can be achieved:
\begin{equation}
     |\langle\mathbf{w},\mathbf{x}_i\rangle+b|\geq1, \; \textrm{for}\; i=1,\ldots,m
\end{equation}
This is called the \emph{canonical form} of the hyperplane. This condition means that the points closest to the hyperplane have the distance $\frac{1}{\lVert\mathbf{w}\rVert}$. These closest points are called \emph{support vectors}.

The goal is to find a decision function
\begin{equation}
\label{eqn:decisionfct}
 f_{\mathbf{w},b}(\mathbf{x}) = \text{sgn}(\langle\mathbf{w},\mathbf{x}\rangle+b)   
\end{equation}
  satisfying
\begin{equation}
    f_{\mathbf{w},b}(\mathbf{x_i}) = y_i
\end{equation}
This decision function tests on which side of the hyperplane \eqref{eqn:hyperplane} the data point $\mathbf{x}$ lies. Since the data is linearly separable (the clusters do not overlap), two parallel separating hyperplanes can be chosen so that no data points lie between them. The task left is maximizing their distance $\frac{2}{\lVert\mathbf{w}\rVert}$. Therefore $\lVert\mathbf{w}\rVert$ has to be minimized with $y_i(\langle\mathbf{w},\mathbf{x}_i\rangle+b)\geq1$ as constraints.
This leads to the following optimization problem:
\begin{align}
    \min_{\mathbf{w},b}\;&\lVert\mathbf{w}\rVert \\
    \textrm{subject to} \;\;&y_i(\langle\mathbf{w},\mathbf{x}_i\rangle+b)\geq1
\end{align}
The objective $\lVert\mathbf{w}\rVert$ is inconvenient to optimize directly. Fortunately the problem can be turned into an equivalent convex quadratic optimization problem, which is easier to solve, by substituting $\lVert\mathbf{w}\rVert$ with $\frac{1}{2}\lVert\mathbf{w}\rVert^2$. The new optimization problem is called the \emph{primal optimization problem}.
\begin{align}
    \label{eqn:primal}
    \min_{\mathbf{w},b}\;&\frac{1}{2}\lVert\mathbf{w}\rVert^2 \\
    \textrm{subject to} \;\;&y_i(\langle\mathbf{w},\mathbf{x}_i\rangle+b)\geq1
    \label{eqn:primalsubject}
\end{align}

This problem can be written in a \emph{dual form}, which has the same solutions as \eqref{eqn:primal}, but it turns out that it is more convenient to deal with the dual.
The starting point of the derivation is the Lagrangian,
\begin{equation}
    \label{eqn:lagrangian}
    L(\mathbf{w},b,\bm{\alpha}) = \frac{1}{2}\lVert\mathbf{w}\rVert^2 - \sum_{i=1}^m\alpha_i(y_i(\langle\mathbf{w},\mathbf{x}_i\rangle+b)-1),
\end{equation}
with Lagrange multipliers $\alpha_i \geq 0$. The Lagrangian $L$ has to be maximized with respect to $\alpha_i$ and minimized with respect to $\mathbf{w}$ and $b$. At this saddle point, the derivatives of $L$ have to vanish:
\begin{align}
    \frac{\partial}{\partial b}L(\mathbf{w},b,\bm{\alpha}) = 0 \\
    \frac{\partial}{\partial \mathbf{w}}L(\mathbf{w},b,\bm{\alpha}) = 0
\end{align}
which leads to
\begin{align}
    \label{eqn:derv1}
    \sum_{i=1}^m\alpha_iy_i=0 \\
    \mathbf{w} = \sum_{i=1}^m\alpha_iy_i\mathbf{x}_i
    \label{eqn:derv2}
\end{align}
The patterns $\mathbf{x}_i$ for which $\alpha_i > 0$ are called \emph{support vectors}. The normal vector $\mathbf{w}$ of the hyperplane can be expressed solely in terms of the support vectors \eqref{eqn:derv2}.
By inserting \eqref{eqn:derv1} and \eqref{eqn:derv2} into the Lagrangian \eqref{eqn:lagrangian} the \emph{dual form} can be written as:
\begin{align}
    \label{eqn:dual}
    \max_{\bm{\alpha}\in\mathbb{R}^m} & \;\;W(\bm{\alpha}) = \sum_{i=1}^m\alpha_i - \frac{1}{2}\sum_{i,j=1}^m \alpha_i\alpha_jy_iy_j\langle\mathbf{x}_i,\mathbf{x}_j\rangle, \\
    \textrm{subject to} & \;\;\alpha_i \geq 0,\; i=1,\ldots,m, \\
    \textrm{and} & \;\;\sum_{i=1}^m\alpha_iy_i=0. 
\end{align}
This is a convex optimization problem, and therefore, it is possible to find the global optimum. By substituting \eqref{eqn:derv2} into the decision function \eqref{eqn:decisionfct} a new expression of this decision function is obtained:
\begin{equation}
    f(\mathbf{x})=\textrm{sgn}\left(\sum_{i=1}^m\alpha_iy_i\langle\mathbf{x},\mathbf{x}_i\rangle+b\right)
\end{equation}
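As a small illustration of this decision function, consider a toy problem whose dual solution can be worked out by hand: for the two support vectors $(1,1)$ with $y=+1$ and $(-1,-1)$ with $y=-1$, the conditions \eqref{eqn:derv1} and the canonical form give $\alpha_1=\alpha_2=\frac{1}{4}$ and $b=0$. A sketch (the function name is hypothetical):

```python
import numpy as np

def svm_decision(x, alphas, ys, support_vectors, b):
    """Decision function f(x) = sgn(sum_i alpha_i y_i <x, x_i> + b)."""
    s = sum(a * y * np.dot(x, xi)
            for a, y, xi in zip(alphas, ys, support_vectors))
    return int(np.sign(s + b))

# Toy example solved by hand: alpha_1 = alpha_2 = 1/4, b = 0.
sv = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
ys = [+1, -1]
alphas = [0.25, 0.25]
b = 0.0

print(svm_decision(np.array([2.0, 3.0]), alphas, ys, sv, b))    # +1
print(svm_decision(np.array([-0.5, -2.0]), alphas, ys, sv, b))  # -1
```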


% \begin{enumerate}
%     \item mathematical deduction of a maximal margin classifier
%     \item dual problem
%     \item support vectors
%     \item convex optimization problem -> global minimum
% \end{enumerate}

\subsection{Nonlinear Support Vector Classifier}

In the previous section only linearly separable problems have been treated. In reality, however, the data points are not necessarily linearly separable. There are two extensions of the SVM algorithm that deal with this problem: soft margin hyperplanes and kernels. A soft margin hyperplane is a hyperplane that separates most of the data, while a few points on the wrong side are tolerated, as in figure \ref{fig:softmargin}. A kernel, on the other hand, transforms the input vectors into a higher-dimensional space. In this high-dimensional space the input vectors are linearly separable again, and a hyperplane can be created there.

xxxPictures

% \begin{enumerate}
%     \item Linear separable problem -> no problem
%     \item Non linear separable problem -> slack variable and kernel
%     \item Cooool images
% \end{enumerate}

\subsubsection{Soft Margin Hyperplane}
Sometimes it is not possible to separate all data points linearly. A soft margin hyperplane separates most of the data points, while some are allowed to lie on the wrong side.
A soft margin hyperplane allows data points to violate \eqref{eqn:primalsubject} by introducing \emph{slack variables} \ref{lit:cortezvapnik}
\begin{equation}
    \label{eqn:cprimal1}
    \zeta_i\geq0,\; i=1,\ldots,m
\end{equation}
and use modified separation constraints \eqref{eqn:primalsubject},
\begin{equation}
    \label{eqn:cprimal2}
    y_i(\langle\mathbf{w},\mathbf{x}_i\rangle+b)\geq1-\zeta_i,\; i=1,\ldots,m.
\end{equation}
With a large enough $\zeta_i$ the constraints can always be met. To penalize too large $\zeta_i$, the term $\sum_i\zeta_i$ is added to \eqref{eqn:primal}. The new objective, with $C\geq0$, is
\begin{equation}
    \label{eqn:cprimal}
    \min_{\mathbf{w},b}\;\frac{1}{2}\lVert\mathbf{w}\rVert^2 + \frac{C}{m}\sum_{i=1}^m\zeta_i,
\end{equation}
subject to the constraints \eqref{eqn:cprimal1} and \eqref{eqn:cprimal2}. $C$ regulates the trade off between maximizing the margin and minimizing the training error. $C$ is a problem dependent parameter and there is no obvious way to choose it.

So, instead of using $C$, another way to incorporate the soft margin hyperplane is the $\nu$-SV classifier. The optimization problem is stated as follows,
\begin{align}
    \label{eqn:nuprimal}
    \min_{\mathbf{w},b}\;&\frac{1}{2}\lVert\mathbf{w}\rVert^2 -\nu\rho + \frac{1}{m}\sum_{i=1}^m\zeta_i \\
    \textrm{subject to} \;\;&y_i(\langle\mathbf{w},\mathbf{x}_i\rangle+b)\geq\rho-\zeta_i \\
    \textrm{and} \;\; & \zeta_i\geq0, \rho\geq0.
    \label{eqn:nuprimalsubject}
\end{align}
The $\nu$ parameter has some convenient properties. It is a lower bound on the fraction of support vectors and an upper bound on the fraction of margin errors. So $\nu$ regulates how many points may lie inside the margin. It is essential for the one-class version of the support vector machine, as it states an upper bound on the fraction of outliers that can be found.
% 
% \begin{enumerate}
%     \item Tradeoff: missclassification and simple hyperplane -> parameter
%     \item linear -> nonlinear, works only for simple problems
%     \item soft margin deals with outlier in two class classification
%     \item is essential later on for one class classification
% \end{enumerate}

\subsubsection{Kernel Trick}
A kernel implicitly transforms the input data into a high-dimensional feature space and calculates dot products there.
Let the mapping $\Phi$ be this transformation of data from input space $x \in \mathcal{X}$ to a higher dimensional feature space $\mathcal{H}$
\begin{align}
    \Phi: \mathcal{X} &\to \mathcal{H} \\
    x &\mapsto \Phi(x).
\end{align}
Most of the time the mapping function $\Phi$ is not known explicitly, but its dot products $\langle\Phi(x),\Phi(x')\rangle$ are. The kernel function $k$ calculates this dot product:
\begin{equation}
    k(\mathbf{x},\mathbf{x}')=\left\langle\Phi(x),\Phi(x')\right\rangle.
\end{equation}
The dual optimization problem \eqref{eqn:dual} consists only of dot products. By replacing the dot products with a kernel, the data is implicitly mapped to a higher-dimensional space. It is then possible to separate the data linearly in this high-dimensional space. The actual mapping is not important, since only the dot product in the feature space is needed.

Possible kernels are the \emph{polynomial} kernel of degree $d$
\begin{equation}
    k(\mathbf{x},\mathbf{x}') = \langle\mathbf{x},\mathbf{x}'\rangle^d,
\end{equation}
the \emph{radial basis function} (RBF) or \emph{Gaussian} kernel of width $\sigma > 0$
\begin{equation}
    k(\mathbf{x},\mathbf{x}') = \exp(-\lVert\mathbf{x}-\mathbf{x}'\rVert^2/\sigma^2),
\end{equation}
and \emph{neural network} kernel with $\tanh$ activation function,
\begin{equation}
    k(\mathbf{x},\mathbf{x}') = \tanh(\kappa\langle\mathbf{x},\mathbf{x}'\rangle+\Theta).
\end{equation}
Both the kernel PCA and the one-class SVM in this work use the RBF kernel.
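The three kernels above translate directly into code; a brief sketch (function names chosen here for illustration):

```python
import numpy as np

def polynomial_kernel(x, y, d=2):
    """Polynomial kernel of degree d: <x, y>^d."""
    return np.dot(x, y) ** d

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel: exp(-||x - y||^2 / sigma^2)."""
    return np.exp(-np.sum((x - y) ** 2) / sigma ** 2)

def tanh_kernel(x, y, kappa=1.0, theta=0.0):
    """Neural network kernel: tanh(kappa * <x, y> + theta)."""
    return np.tanh(kappa * np.dot(x, y) + theta)
```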



\begin{enumerate}
    \item in dual problem: everything is written as dot products
    \item mercer theorem, replace dot product by kernel
    \item kernel calculates dot product in higher dimensional space
    \item trick because mapping is not needed to know
    \item different kernels: polynomial, gaussian(rbf)
\end{enumerate}




\subsection{Support Vector Machines for Outlier Detection: One Class Support Vector Machine}
\begin{enumerate}
    \item no labels
    \item svms separate some points from others.
    \item approach, use origin
    \item separate points with maximum margin from origin
    \item slack variable sets percentage of data points lying on other side of hyperplane
    \item outlier if on wrong side, problem due to fixed percentage
    \item solution, use distance from hyperplane as measure and for creating ROC
\end{enumerate}
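The outline above is available, for instance, as OneClassSVM in scikit-learn (an external library, assumed here for illustration; the parameter values are arbitrary). The last two points of the outline correspond to using the continuous decision function instead of the fixed fraction $\nu$:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train on unlabeled "normal" data; nu bounds the fraction of outliers.
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 2))

occ = OneClassSVM(kernel="rbf", nu=0.1, gamma=0.5).fit(train)

# decision_function gives the signed distance from the hyperplane; it can
# be thresholded freely, e.g. to build a ROC curve, instead of relying on
# the fixed percentage nu.
scores = occ.decision_function([[0.0, 0.0], [8.0, 8.0]])
print(scores[0] > scores[1])  # the far-away point scores lower
```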



\section{Kernel Principal Component Analysis}

Kernel Principal Component Analysis (Kernel PCA) is an extension to normal PCA where the dot product is replaced by a kernel function. PCA uses the covariance matrix as in equation \eqref{eqn:covariance} and so only finds features that are linearly related to the input variables. By using a kernel function instead of a dot product, the Kernel PCA looks for features that are also \emph{nonlinearly} related to the input variables.

\subsection{Principal Component Analysis}
Principal Component Analysis (PCA) is an often-used technique for extracting structure from a possibly high-dimensional data set. PCA is an orthogonal transformation of the coordinate system of the data. The directions, or \emph{principal components}, of the new coordinate system are those that capture the most variance of the data. Often a small number of principal components is sufficient to describe the data very well.

The standard PCA algorithm is not formulated exclusively in terms of dot products. Kernel PCA, however, needs such a formulation, because the mapping function $\Phi(x)$ is in general not known, while the dot product $\langle\Phi(x),\Phi(y)\rangle$ is.

The reformulated algorithm is derived as follows. Consider a data set $x_i \in \mathbb{R}^N$, $i=1,\ldots,m$, centered such that $\sum_{i=1}^m x_i =0$. By diagonalizing the covariance matrix 
\begin{equation}
    \label{eqn:covariance}
    C = \frac{1}{m}\sum_{i=1}^m x_ix_i^T
\end{equation}
PCA finds the principal components. $C$ is positive semidefinite, so its eigenvalues are all nonnegative. Retrieving the principal components, i.e.\ the eigenvectors $v \in \mathbb{R}^N\backslash\{0\}$ with eigenvalues $\lambda \geq 0$, is done by solving the eigenvalue equation
\begin{equation}
    \label{eqn:eigenvalue}
    \lambda v = Cv.
\end{equation}

By substituting $C$ with \eqref{eqn:covariance} in this equation,
\begin{equation}
    \lambda v = C v = \frac{1}{m}\sum_{i=1}^m\langle x_i, v\rangle x_i, 
\end{equation}
it can be seen that all solutions $v$ with $\lambda \neq 0$, the principal components, can be expressed in terms of $x_1,\ldots,x_m$.



\subsection{Kernel PCA as an Eigenvalue Problem}

Kernel PCA performs a linear PCA in the feature space $\mathcal{H}$. This feature space is related to the input space (for instance, $\mathbb{R}^N$) by a possibly nonlinear map
\begin{align*}
    \Phi:\mathbb{R}^N   &\to \mathcal{H} \\
     x                  &\mapsto \Phi(x).
\end{align*}
The feature space $\mathcal{H}$ may be infinite-dimensional. Assuming the data is centered, $\sum_{i=1}^m \Phi(x_i) = 0$, the covariance matrix takes the form
\begin{equation}
    \mathbf{C} = \frac{1}{m}\sum_{i=1}^m\Phi(x_i)\Phi(x_i)^T.
\end{equation}
Now again, the problem is finding the eigenvalues $\lambda \geq 0$ and the nonzero eigenvectors $\mathbf{v} \in \mathcal{H}\backslash \{0\}$ satisfying
\begin{equation}
    \lambda\mathbf{v} = \mathbf{Cv}.
\end{equation}
All solutions $\mathbf{v}$ with $\lambda \neq 0$ are again in the span of $\Phi(x_1),\ldots,\Phi(x_m)$ and the equation can be rewritten as
\begin{equation}
    \label{eqn:alpha1}
    \lambda\langle\Phi(x_i),\mathbf{v}\rangle = \langle\Phi(x_i),\mathbf{Cv}\rangle
\end{equation}
and there are coefficients $\alpha_i$ such that
\begin{equation}
    \label{eqn:alpha2}
    \mathbf{v} = \sum_{i=1}^m\alpha_i\Phi(x_i).
\end{equation}
By combining the two equations \eqref{eqn:alpha1} and \eqref{eqn:alpha2}, the result for all $n=1,\ldots,m$ is
\begin{equation}
    \lambda\sum_{i=1}^m \alpha_i\langle\Phi(x_n),\Phi(x_i)\rangle = \frac{1}{m}\sum_{i=1}^m \alpha_i \left\langle\Phi(x_n),\sum_{j=1}^m\Phi(x_j)\langle\Phi(x_j),\Phi(x_i)\rangle\right\rangle
\end{equation}
The Gram matrix $\mathbf{K}_{ij} := \langle\Phi(x_i),\Phi(x_j)\rangle$ simplifies this equation to
\begin{equation}
    \label{eqn:evprob}
    m\lambda \mathbf{K} \bm{\alpha} = \mathbf{K}^2\bm{\alpha}
\end{equation}
with $\bm{\alpha}$ being a column vector with entries $\alpha_1,\ldots,\alpha_m$. Solving the dual eigenvalue problem
\begin{equation}
    m\lambda\bm{\alpha} = \mathbf{K}\bm{\alpha}
\end{equation}
for nonzero eigenvalues yields all solutions of equation \eqref{eqn:evprob} that are of interest.
The eigenvalues $\lambda_1\geq\lambda_2\geq\ldots\geq\lambda_m$ and the corresponding complete set of eigenvectors $\bm{\alpha}^1,\ldots,\bm{\alpha}^m$ solve this equation.

From now on denote:
\begin{align}
    \mathbf{V} &= [\mathbf{v}^1\cdots\mathbf{v}^m] \\
    \mathbf{A} &= [\bm{\alpha}^1\cdots\bm{\alpha}^m] \\
    \mathbf{x} &= [\mathbf{x}_1\cdots \mathbf{x}_m] \\
    \bm{\Phi}(\mathbf{x})  &= [\Phi(x_1)\cdots \Phi(x_m)]
\end{align}
The basis vector matrix $\mathbf{V}$ is computed from the training data $\bm{\Phi}$ by
\begin{equation}
    \mathbf{V} = \bm{\Phi}\mathbf{A}.
\end{equation}
Each column of $\mathbf{V}$ is an eigenvector, a linear combination of the training data, and each column of $\mathbf{A}$ contains the coefficients of that linear combination.
The data $\bm{\Phi}$ has to be centered:
\begin{equation}
    \bm{\Phi}_c = \bm{\Phi}(\mathbb{I} - \frac{1}{m} \mathbf{j}_m \mathbf{j}_m^T) = \bm{\Phi}\mathbf{M}
\end{equation}
With $\mathbf{j}_m^T = [1,\ldots,1]$ and $\mathbf{M}$ being the centering matrix.

The $m\times m$ kernel matrix of the centered data can be written as
\begin{equation}
    \mathbf{K}_c = \bm{\Phi}_c^T\bm{\Phi}_c = \mathbf{M}\bm{\Phi}^T\bm{\Phi}\mathbf{M} = \mathbf{MKM}.
\end{equation}
By computing the eigendecomposition $\mathbf{K}_c = \mathbf{UDU}^T$, it is possible to derive $\mathbf{A}$:
\begin{equation}
    \label{eqn:kpca-A}
    \mathbf{A} = \mathbf{MUD}^{-1/2}
\end{equation}
It is possible to keep only the $l\leq m$ largest eigenvalues and still be able to represent most of the data's information. The eigenvalues are the diagonal elements of $\mathbf{D}$.
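The route from kernel matrix to coefficient matrix $\mathbf{A} = \mathbf{MUD}^{-1/2}$ can be sketched as follows (a NumPy illustration; the function name is hypothetical and no optimizations are attempted):

```python
import numpy as np

def kernel_pca_fit(X, kernel, l=None):
    """Compute the coefficient matrix A = M U D^{-1/2} from the
    centered kernel matrix K_c = M K M, via K_c = U D U^T."""
    m = X.shape[0]
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])  # Gram matrix
    M = np.eye(m) - np.ones((m, m)) / m                       # centering matrix
    Kc = M @ K @ M
    eigval, U = np.linalg.eigh(Kc)              # symmetric eigendecomposition
    order = np.argsort(eigval)[::-1]            # sort eigenvalues descending
    eigval, U = eigval[order], U[:, order]
    if l is not None:                           # keep only the l largest
        eigval, U = eigval[:l], U[:, :l]
    eigval = np.clip(eigval, 1e-12, None)       # guard tiny/negative values
    A = M @ U @ np.diag(eigval ** -0.5)
    return A, K

rbf = lambda x, y, s=1.0: np.exp(-np.sum((x - y) ** 2) / s ** 2)
```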

\subsection{Projecting data}
Now it is easy to calculate the new representation of a data vector $\mathbf{y}$ in the feature space. First the data has to be mapped and centered:
\begin{equation}
    \Phi_c(\mathbf{y}) = \Phi(\mathbf{y}) - \frac{1}{m}\bm{\Phi}\mathbf{j}_m
\end{equation}
Then the projection of the centered data is calculated by
\begin{equation}
    \mathbf{z}=\mathbf{V}^T\Phi_c(\mathbf{y}) = \mathbf{A}^T\bm{\Phi}^T\left(\Phi(\mathbf{y})-\frac{1}{m}\bm{\Phi}\mathbf{j}_m\right)
\end{equation}
Substituting \eqref{eqn:kpca-A} for $\mathbf{A}$ and expanding $\mathbf{M}$ yields
\begin{equation}
    \label{eqn:z}
    \mathbf{z} = \mathbf{D}^{-1/2}\mathbf{U}^T\left(\bm{\Phi}^T\Phi(\mathbf{y}) - \frac{1}{m}\bm{\Phi}^T\bm{\Phi}\mathbf{j}_m-\frac{1}{m}\mathbf{j}_m\mathbf{j}_m^T\bm{\Phi}^T\Phi(\mathbf{y}) + \frac{1}{m^2}\mathbf{j}_m\mathbf{j}_m^T\bm{\Phi}^T\bm{\Phi}\mathbf{j}_m\right).
\end{equation}
This equation has four parts and these can be interpreted as follows:
\begin{enumerate}
    \item The beginning term of the equation is
    \begin{equation*}
        \bm{\Phi}^T\Phi(\mathbf{y}) = k_z,
    \end{equation*}
        with $k_z = [\Phi(\mathbf{x}_1)^T\Phi(\mathbf{y})\cdots \Phi(\mathbf{x}_m)^T\Phi(\mathbf{y})]^T $. This is a column vector; its entries are simply the kernel function evaluated on the training data $\mathbf{x}_i$ and the arbitrary data vector $\mathbf{y}$.
    \item It is followed by this term
    \begin{equation*}
        \frac{1}{m}\bm{\Phi}^T\bm{\Phi}\mathbf{j}_m = \frac{1}{m}\mathbf{K}\mathbf{j}_m,
    \end{equation*}
        where $\mathbf{K}$ is the kernel matrix of the training data set. It is a column vector; each entry of $\mathbf{K}\mathbf{j}_m$ is the sum over one row of the kernel matrix. Note that this term is independent of $\mathbf{y}$.
    
    \item The next term is
    \begin{equation*}
        \frac{1}{m}\mathbf{j}_m\mathbf{j}_m^T\bm{\Phi}^T\Phi(\mathbf{y}) =  \frac{1}{m} \mathbf{j}_m\mathbf{j}_m^Tk_z
    \end{equation*}
        This is a vector in which the sum of all elements of $k_z$ is repeated in all of its entries.
     \item Finally, the last term looks like this:
     \begin{equation*}
         \frac{1}{m^2}\mathbf{j}_m\mathbf{j}_m^T\bm{\Phi}^T\bm{\Phi}\mathbf{j}_m = \frac{1}{m^2}\mathbf{j}_m\mathbf{j}_m^T\mathbf{K}\mathbf{j}_m.
     \end{equation*}
        This vector's entries are all the same: the sum of all elements of the kernel matrix. Note that this term is also independent of $\mathbf{y}$.
\end{enumerate}
So the projection of the data is expressed only in terms of dot products and can be calculated explicitly.
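The four-term projection above can be sketched directly in NumPy; `kpca_project` is a hypothetical helper, and `A` is assumed to be the coefficient matrix from the eigendecomposition of $\mathbf{K}_c$.

```python
import numpy as np

def kpca_project(K, k_z, A):
    """Project a new point onto the kernel principal components.
    K   : m x m kernel matrix of the training data
    k_z : length-m vector with entries k(x_i, y)
    A   : coefficient matrix A = M U D^{-1/2}
    Since the centering matrix M inside A is idempotent, applying the
    four-term centered expression below is equivalent to the text's
    formula z = D^{-1/2} U^T ( ... )."""
    m = K.shape[0]
    j = np.ones(m)
    centered = (k_z
                - K @ j / m                    # sum over rows of K, independent of y
                - j * (j @ k_z) / m            # repeated sum of the entries of k_z
                + j * (j @ K @ j) / m**2)      # repeated sum of all entries of K
    return A.T @ centered
```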

\subsection{Reconstruction error as novelty measure}
The measure of outlyingness or novelty will be the reconstruction error. The idea is to generate a simplified model of the distribution of the training data. The distribution is modeled by kernel PCA, which computes the PCA in feature space.
 
The simplified model is then compared to the original data in feature space. The amount by which a data point deviates from its original place is the measure of novelty. Decision boundaries are iso-potential curves or surfaces of the reconstruction error.

The reconstruction error in feature space is \ref{lit:xxxdiamantaras}
\begin{equation}
    \label{eqn:recerror}
    p\left(\Phi_c(\mathbf{y})\right) = \lVert\Phi_c(\mathbf{y})-\hat{\Phi}_c(\mathbf{y})\rVert^2=\Phi_c^T(\mathbf{y})\Phi_c(\mathbf{y})-2\Phi_c^T(\mathbf{y})\hat{\Phi}_c(\mathbf{y})+\hat{\Phi}_c^T(\mathbf{y})\hat{\Phi}_c(\mathbf{y})
\end{equation}
$\Phi_c(\mathbf{y})$ is a vector that originates from the center of the distribution in feature space, and $\hat\Phi_c(\mathbf{y})$ is the vector reconstructed without using all eigenvectors. This reconstructed vector is compared to the original vector.
\begin{align}
    \Phi_c(\mathbf{y})&= \Phi(\mathbf{y})-\frac{1}{m}\bm{\Phi}(\mathbf{x})\mathbf{j}_m \\
    \hat\Phi_c(\mathbf{y})&=\mathbf{V}\mathbf{z} \\
    \mathbf{z}&= \mathbf{V}^T \Phi_c(\mathbf{y})
\end{align}
Evaluating the reconstruction error \eqref{eqn:recerror} further yields:
\begin{align}
    \Phi_c^T(\mathbf{y})\Phi_c(\mathbf{y})-2\Phi_c^T(\mathbf{y})\hat{\Phi}_c(\mathbf{y})+\hat{\Phi}_c^T(\mathbf{y})\hat{\Phi}_c(\mathbf{y}) &= 
    \Phi_c^T(\mathbf{y})\Phi_c(\mathbf{y})-2\Phi_c^T(\mathbf{y})\mathbf{Vz}+\mathbf{z}^T\mathbf{V}^T\mathbf{V}\mathbf{z} \\ &=    
    \Phi_c^T(\mathbf{y})\Phi_c(\mathbf{y})-2\mathbf{z}^T\mathbf{z}+\mathbf{z}^T\mathbf{V}^T\mathbf{V}\mathbf{z} \\ &= 
    \Phi_c^T(\mathbf{y})\Phi_c(\mathbf{y})- \mathbf{z}^T\mathbf{z}
\end{align}
The last step uses $\mathbf{V}^T\mathbf{V}=\mathbb{I}$. So the reconstruction error is calculated by evaluating two terms. The first one is
\begin{align*}
    \Phi_c^T(\mathbf{y})\Phi_c(\mathbf{y})&= \left(\Phi(\mathbf{y})-\frac{1}{m}\bm{\Phi}\mathbf{j}_m\right)^T\left(\Phi(\mathbf{y})-\frac{1}{m}\bm{\Phi}\mathbf{j}_m\right) \\ &= k(\mathbf{y},\mathbf{y}) - \frac{2}{m}k_z^T\mathbf{j}_m+\frac{1}{m^2}\mathbf{j}_m^T\mathbf{K}\mathbf{j}_m
\end{align*}
It consists of three terms:
\begin{enumerate}
    \item The first is constant as
    \begin{equation*}
        k(\mathbf{y},\mathbf{y}) = 1
    \end{equation*}
        for an RBF kernel, which is used in the context of kernel PCA for novelty detection: $k(\mathbf{y},\mathbf{y})=\exp(-\lVert \mathbf{y}-\mathbf{y}\rVert^2/2\sigma^2) = \exp(0) = 1 $.
    \item The second term is
    \begin{equation*}
        \frac{2}{m}k_z^T\mathbf{j}_m.
    \end{equation*}
        This is the sum of all elements of $k_z$, multiplied by $2/m$.
     \item Finally, the third term is
     \begin{equation*}
         \frac{1}{m^2}\mathbf{j}_m^T\mathbf{K}\mathbf{j}_m.
     \end{equation*}
        This is just the sum over all elements of the kernel matrix $\mathbf{K}$, divided by $m^2$.
\end{enumerate}
So the first and the third term are constant and independent of $\mathbf{y}$.

The second term of the reconstruction error is $\mathbf{z}^T\mathbf{z}$. Luckily $\mathbf{z}$ is already known from \eqref{eqn:z} and can be expressed in terms of kernel evaluations only.
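Putting the pieces together, the reconstruction error can be evaluated from kernel quantities alone. A minimal sketch, assuming an RBF kernel (so $k(\mathbf{y},\mathbf{y})=1$) and the coefficient matrix $\mathbf{A}$ from above; the function name is illustrative:

```python
import numpy as np

def reconstruction_error(K, k_z, k_yy, A):
    """Reconstruction error p(Phi_c(y)) = Phi_c^T Phi_c - z^T z,
    expressed through kernel evaluations only.
    k_yy : k(y, y), equal to 1 for an RBF kernel."""
    m = K.shape[0]
    j = np.ones(m)
    z = A.T @ (k_z - K @ j / m)               # projection; M inside A handles centering
    phi_norm = k_yy - 2 * (k_z @ j) / m + (j @ K @ j) / m**2
    return phi_norm - z @ z
```

For a training point with all nonzero eigenvectors retained, the reconstruction is exact and the error vanishes.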

\section{Classification and Regression Tree}
The decision tree used by Random Forest is the Classification and Regression Tree (CART). For outlier detection only the classification property is needed. A CART is a binary decision tree that is constructed by repeatedly splitting a node into two child nodes, beginning with the root node that contains the whole learning sample. Binary means that every node has exactly two child nodes. So to construct a CART, the problem reduces to finding the optimal binary split for every node. There are several criteria for finding this ``best'' split. Random Forest CARTs use the Gini criterion.


\subsection{Tree Growing}

The basic idea of tree growing is to choose a split among all the possible splits at each node so that the resulting child nodes are the ``purest''. In the context of Random Forest, only univariate splits are considered, so each split depends on the value of only one predictor variable. The set of all possible splits consists of all possible splits of each predictor. A tree is grown starting from the root node by repeatedly applying the following steps to each node.
\begin{enumerate}
    \item Find each predictor's best split:
    
    Sort each predictor's entries by increasing value. Iterate over all values of the sorted predictor and find the candidate for the best split, that is, the value that maximizes the splitting criterion.
    \item Find the node's best split:
    
    To actually perform the split, compare all evaluated predictors from step 1 and choose the split that maximizes the splitting criterion.
    
    \item Let $s$ be this best split of the winning predictor. All $x\leq s$ are sent to the left node and all $x>s$ to the right node.
\end{enumerate}

The sorting of the values in the first step makes the tree more robust to outliers and noise. If noise changes the values of one sample slightly, the order of the sequence is likely unchanged.

So, constructing a CART is accomplished by finding the best split, which means trying every possibility, calculating the ``goodness'' of every possible split and choosing the best one. CARTs repeat this for every node until each one is ``pure'', i.e.\ it only contains samples of the same class.
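The exhaustive search over one predictor can be sketched as follows; the splitting criterion is passed in as a callable (the Gini decrease of section \ref{sec:gini} in this work), and all names are illustrative:

```python
import numpy as np

def best_split_for_predictor(x, y, decrease):
    """Scan one predictor for its best binary split.
    x        : values of a single predictor for all samples in the node
    y        : class labels of those samples
    decrease : callable (y, y_left, y_right) -> splitting criterion value
    Returns (best threshold, best criterion value)."""
    order = np.argsort(x)                     # step 1: sort by predictor value
    xs, ys = x[order], y[order]
    best_s, best_d = None, -np.inf
    for i in range(len(xs) - 1):
        if xs[i] == xs[i + 1]:                # no split possible between ties
            continue
        s = (xs[i] + xs[i + 1]) / 2           # candidate threshold
        d = decrease(ys, ys[:i + 1], ys[i + 1:])
        if d > best_d:
            best_s, best_d = s, d
    return best_s, best_d
```

Step 2 of the procedure then simply compares the returned criterion values over all predictors.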


\subsection{Splitting and Gini criterion}
\label{sec:gini}
For every split $s$ at node $t$ a splitting criterion $\Delta i(s|t)$ is calculated.
The best split $s$ at node $t$ maximizes this splitting criterion $\Delta i(s|t)$.

Consider a node $t$ with estimated class probabilities $p(j|t)$, where $j=1,\ldots,J$ is the class label. A measure of impurity for node $t$ is:
\begin{equation}
    \label{eqn:gini}
    i(t)=1-\sum_j p(j|t)^2 = \sum_{j\neq k} p(j|t)p(k|t)
\end{equation}
The best split is the one that reduces this measure the most. This impurity measure is called the Gini criterion.

For a two class problem, $J$ equals 2. This reduces equation \eqref{eqn:gini} to:
\begin{equation}
    i(t)=2p(1|t)p(2|t)
\end{equation}

To see which split decreases the Gini criterion the most, the splitting criterion $\Delta i$ is defined as
\begin{equation}
    \label{eqn:ginidecrease}
    \Delta i(s|t) = i(t) - p_L i(t_L) - p_R i(t_R)
\end{equation}
where $p_L$ and $p_R$ are the probabilities of sending a case to the left child node $t_L$ and to the right child node $t_R$, respectively. The contribution of the left child node's impurity is weighted by $p_L = p(t_L)/p(t)$ and that of the right child node by $p_R = p(t_R)/p(t)$. That is, $p_L$ is the fraction of cases of node $t$ that will be sent to the left child node $t_L$.

So, to find the best split, only the decrease of the Gini criterion, given by equation \eqref{eqn:ginidecrease}, has to be calculated. Basically this is just counting the samples of every class that are in node $t$ and the samples that go to the left node $t_L$ and to the right node $t_R$.
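A minimal sketch of the Gini impurity \eqref{eqn:gini} and its decrease \eqref{eqn:ginidecrease}, with illustrative function names:

```python
import numpy as np

def gini(y):
    """Gini impurity i(t) = 1 - sum_j p(j|t)^2 of a node's labels."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_decrease(y, y_left, y_right):
    """Impurity decrease Delta i = i(t) - p_L i(t_L) - p_R i(t_R),
    where p_L and p_R are the fractions of cases sent left and right."""
    p_left = len(y_left) / len(y)
    p_right = len(y_right) / len(y)
    return gini(y) - p_left * gini(y_left) - p_right * gini(y_right)
```

A perfect split of a balanced two class node yields the maximal decrease of $0.5$, while a split that leaves both children as mixed as the parent yields $0$.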

\section{Random Forest}

A random forest is an ensemble of tree predictors. The trees used are CARTs, and each of them gets a different bootstrap sample of the original data and a random selection of features to split on. Every tree learns its own dataset with its random samples and features. The number of trees has a nice property: with an increasing number of trees the generalisation error converges to a limit.
\subsection{Construction of a Random Forest}
The construction of a random forest is quite simple and can be parallelized easily, as the trees are independent of each other. Assume a dataset consists of $N$ cases with $M$ features divided into two classes. A random forest is constructed by the following steps.
\begin{enumerate}
    \item Parameter Selection:
    
    Choose a number $m \leq M$ of features to randomly select for each tree and a number $n$ that represents the number of trees to grow.
    \item Create dataset for tree:
    
    Take a bootstrap sample of the $N$ cases, so about two thirds of the cases are chosen. Then randomly select $m$ features.
    \item Grow a CART using the bootstrap sample and the $m$ randomly selected features.
    \item Repeat the steps 2 and 3 $n$ times.
\end{enumerate}

So there are now $n$ trees, which form the random forest. To classify a new object $x$, put $x$ down each of the $n$ trees. Each tree gives a classification for $x$. The forest chooses the class that receives the most of the $n$ votes.
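The construction and voting steps can be sketched as follows. This is an illustration only: `train_tree` stands in for a full CART learner and is an assumption, as are all names.

```python
import numpy as np
from collections import Counter

def grow_forest(X, y, n_trees, m_features, train_tree, rng):
    """Grow n_trees trees, each on a bootstrap sample of the N cases
    and a random subset of m_features feature indices.
    train_tree(X, y) -> classifier with a .predict(X) method (assumed)."""
    N, M = X.shape
    forest = []
    for _ in range(n_trees):
        boot = rng.randint(0, N, size=N)              # bootstrap sample (~2/3 unique cases)
        feats = rng.choice(M, size=m_features, replace=False)
        tree = train_tree(X[np.ix_(boot, feats)], y[boot])
        forest.append((tree, feats))
    return forest

def forest_predict(forest, x):
    """Majority vote of all trees for a single case x."""
    votes = [tree.predict(x[feats].reshape(1, -1))[0] for tree, feats in forest]
    return Counter(votes).most_common(1)[0][0]
```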

\subsection{Error estimation}
% A convenient property of a random forest is, that while learning the trees, an error estimation can be retrieved.
Random forests have a convenient property: an estimate of the generalisation error is obtained while growing the trees. The generalisation error is calculated from the training data alone, so no additional test set is needed. This error is calculated in the following steps:

\begin{enumerate}
    \item At each bootstrap iteration select the samples that are not in the bootstrap sample (``out of bag'', OOB).
    \item Let the grown CART predict its OOB data.
    \item Aggregate the OOB predictions of the different trees and calculate the error rate.
\end{enumerate}

This error estimate is quite accurate, given that enough trees are used in the forest. So there is no need for cross-validation such as leave-one-out, and the random forest still yields a good estimate of the generalisation error.
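The three steps above can be sketched like this; class labels are assumed to be coded $0,\ldots,J-1$, and `train_tree` is again an assumed stand-in for a CART learner:

```python
import numpy as np

def oob_error(X, y, n_trees, train_tree, rng):
    """Out-of-bag error estimate: each tree predicts only the cases
    left out of its bootstrap sample; the aggregated majority votes
    are compared with the true labels (labels assumed in 0..J-1)."""
    N = len(y)
    votes = np.zeros((N, len(np.unique(y))), dtype=int)
    for _ in range(n_trees):
        boot = rng.randint(0, N, size=N)
        oob = np.setdiff1d(np.arange(N), boot)        # cases not in the bootstrap
        tree = train_tree(X[boot], y[boot])
        for i, pred in zip(oob, tree.predict(X[oob])):
            votes[i, pred] += 1                       # aggregate OOB votes per case
    voted = votes.sum(axis=1) > 0                     # cases that received any OOB vote
    pred = votes.argmax(axis=1)
    return np.mean(pred[voted] != y[voted])
```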


\subsection{Variable Importance}
The generalisation error estimate is not the only property inherent to a random forest. Another is the estimation of variable importance.
Since it is often necessary to know which variables are important for the classification, random forests provide different measures of variable importance.

\subsubsection{Mean Decrease Gini}
The splitting criterion used is the Gini criterion, as stated in section \ref{sec:gini}. At every split one of the $m$ variables is used to form the split, and there is a resulting decrease in Gini impurity. The sum of all decreases in the forest due to a given variable, normalized by the number of trees $n$, forms this measure. So the variables that decrease the Gini index the most are assumed to be the most important.

\subsubsection{Mean Decrease Accuracy}
Another estimate is the mean decrease in accuracy. In the left-out cases of the $k$th tree, randomly permute all values of the $l$th variable, put these new covariate values down the tree and get a classification.

Proceed as though computing a new internal error rate. The amount by which this new error exceeds the original test set error is defined as the importance of the $l$th variable.
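For a single tree, this permutation test can be sketched as follows; the `.predict` interface and all names are assumptions:

```python
import numpy as np

def permutation_importance(tree, X_oob, y_oob, l, rng):
    """Mean decrease in accuracy for variable l on one tree:
    permute column l of the OOB cases and compare the resulting
    error rate with the unpermuted one."""
    base_err = np.mean(tree.predict(X_oob) != y_oob)
    X_perm = X_oob.copy()
    X_perm[:, l] = rng.permutation(X_perm[:, l])      # destroy the variable's information
    perm_err = np.mean(tree.predict(X_perm) != y_oob)
    return perm_err - base_err                        # importance of variable l
```

An uninformative variable leaves the predictions unchanged and gets importance zero, while permuting a decisive variable raises the error.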

\subsection{Proximity Matrix}
A random forest has yet another property: it can create a proximity matrix. The proximity matrix is also needed to detect outliers. It is an $N\times N$ matrix representing the ``proximity'' of observations.

The $(i,j)$ element of this proximity matrix $P \in \mathbb{R}^{N\times N}$ is the fraction of trees in which the elements $i$ and $j$ end up in the same terminal node. The intuition behind this is simple: ``similar'' observations should end up in the same terminal node. To end up in the same terminal node, they need to answer every split question of the tree in the same way. So those samples have to be similar.

This proximity matrix can be used to identify structure in the data, and it is needed for unsupervised learning. In the context of this work it will mainly be used for outlier detection.
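Given the terminal-node index of every case in every tree, the proximity matrix can be sketched as follows (an illustrative helper, not part of the thesis code):

```python
import numpy as np

def proximity_matrix(leaf_indices):
    """Proximity matrix from an (n_trees x N) array of terminal-node
    indices: P[i, j] is the fraction of trees in which cases i and j
    end up in the same terminal node."""
    n_trees, N = leaf_indices.shape
    P = np.zeros((N, N))
    for leaves in leaf_indices:
        P += leaves[:, None] == leaves[None, :]       # same leaf in this tree
    return P / n_trees
```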



\subsection{Outlier Detection}
The problem with using a random forest to detect outliers is that there are no counter examples; only examples of the ``good'' class are available. A random forest consists of an ensemble of tree structured predictors, CARTs. In this situation there is only one class, and a CART sees the root node as already pure. So, a CART can only work if the data is labeled.

To solve this problem, the second class is generated artificially. All data cases are assigned to the first class, and the cases of the second class are artificially created. A property of the second class should be that its cases are distributed differently from the first class cases.

\subsubsection{Artificial Outlier Examples}
There are two common ways for random forest to create the examples of the second class:
\begin{enumerate}
    \item The class two cases are randomly chosen to be inside a hypercube enclosing the original data.
    \item The class two cases are created by independent sampling from the one dimensional marginal distributions of the original data.
\end{enumerate}
The second method is used by Leo Breiman's current random forest algorithm. An advantage of the second method is its insensitivity to skewness of the data, as the marginal distribution of the second class is the same as that of class one.
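The second method can be sketched as independent sampling from each empirical marginal; the function name is illustrative:

```python
import numpy as np

def synthetic_second_class(X, rng):
    """Create artificial class-two cases by sampling each feature
    independently from the empirical marginal distribution of the
    original data. The marginals are preserved, but the dependency
    structure between the features is destroyed."""
    N, M = X.shape
    X2 = np.empty_like(X)
    for j in range(M):
        X2[:, j] = rng.choice(X[:, j], size=N)        # independent sampling per feature
    return X2
```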




\subsubsection{Outlyingness}
Finally, the random forest can use the proximity matrix to calculate an outlyingness measure.
\begin{equation}
   \hat{O} = (\hat{O}_1,\ldots,\hat{O}_N)
\end{equation}

with

\begin{equation}
   \hat{O}_i = \frac{N}{\sum_{j\neq i}^N P_{ij}^2}
\end{equation}
This measure only has to be ``centered'' and ``scaled'' in a robust way, that is, by subtracting the median and then dividing by the mean absolute deviation.
 \begin{equation}
     O = (O_1,\ldots,O_N)
 \end{equation}
with
 
\begin{equation}
    O_i = \frac{\hat{O}_i-\text{med}(\hat{O})}{\text{mad}(\hat{O})}
\end{equation}

Applying this measure, cases with $O > 10$ should be declared outliers, as stated in the literature \ref{lit:xxxbreiman}. But this may vary depending on the data set.
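The outlyingness computation can be sketched directly from the proximity matrix; the robust normalization (mean absolute deviation from the median) follows the text, and the function name is an assumption:

```python
import numpy as np

def outlyingness(P):
    """Normalized outlyingness O from the proximity matrix P:
    O_hat_i = N / sum_{j != i} P_ij^2, then robustly centered by the
    median and scaled by the mean absolute deviation from the median."""
    N = P.shape[0]
    P2 = P ** 2
    np.fill_diagonal(P2, 0.0)                          # sum runs over j != i
    O_hat = N / P2.sum(axis=1)
    med = np.median(O_hat)
    mad = np.mean(np.abs(O_hat - med))                 # mean absolute deviation
    return (O_hat - med) / mad
```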


\chapter*{Declaration}

I hereby declare that I have written this diploma thesis independently and that I have used no aids other than those stated in the thesis.\par

\vspace{2em}
\thispagestyle{empty}
\noindent Regensburg, den 1. xxx 2008 
% \vspace{3cm} \\
\hfill \footnotesize\underline{\phantom{Platz Philipp Knollmüller Platz}}\\
\phantom{a}\hfill \phantom{Platz} Philipp Knollmüller \phantom{Platz}



\end{document}
