\documentclass[a4paper,10pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath}
\usepackage{verbatim}
\usepackage{listings}
\usepackage{appendix}
\usepackage{color}
\usepackage{subfig}
\usepackage{float}

\addtolength{\parskip}{\baselineskip}


\begin{document}
\title{DME Mini-Project: \\ Network Intrusion Detection Data}
\author{Barbara Cervantes, Carlos Garcia, Petko Nikolov \\ (s1145092) \quad \quad (s1065405) \quad \quad (s1144605) \\ \\ \\ \\University of Edinburgh \\ School of Informatics}
\graphicspath{{figures/}}

\maketitle
\thispagestyle{empty}

\quad \\
\quad \\
\quad \\
\quad \\
\quad \\
\quad \\
\quad \\
\quad \\
\quad \\

\begin{abstract}
This mini-project looks at the detection of network intrusions as a classification problem. The goal is to identify whether a network connection is an attack. Attacks are divided into four classes: \textit{DoS}, \textit{Probe}, \textit{U2R} and \textit{R2L}. Misclassifications of these attacks are not equally weighted, so a cost matrix is considered. The data comes from the KDD'99 competition and has a very unbalanced class distribution; moreover, the rarest classes are the ones with the highest misclassification costs. In general, the classifiers are able to accurately classify connections with no attacks and DoS attacks. The difficulty of the classification task lies in identifying the rarest classes without compromising the accuracy on the frequent classes. After some experiments we confirm that a simple 1-nearest-neighbour classifier can achieve a low average cost. Generalized Boosted Regression and Random Forest classifiers also perform well, the latter being the best classifier found for this task.
\end{abstract}

\clearpage
\setcounter{page}{1}
\tableofcontents
\newpage

\section{Introduction}
The ACM Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD) holds an annual competition called the KDD Cup, where multiple participants showcase different methods for extracting knowledge from data. The aim of the competition is to analyse specific datasets given specific goals and constraints. Each year the dataset is different, and so are the challenges. The KDD Cup held in 1999 used a dataset consisting of network connections obtained from a simulated military network environment. The purpose of the competition was to build a predictor which could distinguish malicious connections (intrusions or attacks) from benign connections and classify them. The real challenge, however, was to overcome the difficulties of handling high misclassification costs.

\subsection{The Dataset}
Each record in the dataset represents an entire network communication session and does not specify individual packets or network transactions. The analysed traffic is a regular flow of TCP/IP data with known start and end times. All communications are between one source host and a single target host. On average, each record amounts to about 100 bytes. The dataset has been derived from data captured in 1998 by the DARPA evaluation program\cite{Lippmann2000}. The original data consists of 4 gigabytes of uncompressed tcpdump\footnote{\textit{tcpdump} is a program that captures TCP/IP data in its entirety and outputs it in its raw format.} data. The data was acquired over a period of seven weeks: the data generated in the first five weeks is used for training, while the last two weeks are used for testing. The full dataset consists of 8,050,290 records with 41 features and one label. The training set contains 4,940,000 records and the test set 3,110,290.

There are a total of 5 different labels assigned to the connections. Benign connections are simply labelled as \textit{Normal}. Malicious connections, on the other hand, have more specific labels: \textit{DoS} (Denial of Service), \textit{Probe}, \textit{R2L} (Remote to Local) and \textit{U2R} (User to Root). Furthermore, attacks are subdivided into 22 categories, each belonging to one of the four classes previously mentioned.

\subsubsection{Denial of Service Attacks}
These attacks focus on using as many resources as possible from a target system or service. The real intention is to consume the available resource with illegitimate requests and petitions so as to deny legitimate user access. These attacks usually originate from multiple sources and may be difficult to stop, without affecting legitimate users, once they have been launched. Some associated symptoms of these attacks are slow network performance, service instability and unavailability.

\subsubsection{Remote to Local Attacks}
These attacks are considered to be the most harmful in general. They occur when a user with no access to a remote system is able to obtain illegal access as either a normal user or an administrator. Remote attacks are performed against vulnerable services exposed to the general public.

\subsubsection{User to Root Attacks}
Attacks of this type occur when a user who already has access to a system (either remotely or locally) tries to access restricted resources. The usual goal of these attacks is to gain unrestricted root (or administrator) access to a system. These attacks may occur from within the network itself or through restricted remote access previously gained through a remote-to-local attack.

\subsubsection{Probing Attacks}
Probing attacks are technically not attacks. These mechanisms try to extract as much information as possible from network and computer resources. Harvesting information, however, is usually motivated by the intention to circumvent security measures.

\subsection{Feature Groups}
The 41 features of the dataset can be divided into three groups: basic features, traffic features and content features. The type and a brief description of each feature can be found in Appendix A.

The \textit{Basic Features} represent information extracted from the TCP/IP communication protocol. These features are common to all connection types and represent only statistical data.

The \textit{Traffic Features} extract information from specific time intervals of the network communication process. These features are further subdivided into \textit{same host features} and \textit{same service features}. The former group gathers information from the connections established within the last 2 seconds that used the same destination host as the current connection. The latter gathers information from the same time span, but only from connections that used the same service as the current connection.

\textit{Content Features} extract information from the data inside the packets that might be important for classification. \textit{DoS} and \textit{Probing} attacks can be detected by observing only patterns in the number of connections or in the flow of data. \textit{R2L} and \textit{U2R} attacks, on the other hand, consist of one or a few connections that embed the attack in the data carried by the network packets. The information extracted from the packets includes, among other things, the number of login attempts, whether a root shell was spawned and whether particular shell commands were used.

\subsection{Challenges}
The main challenge of analysing this dataset is related to how the normal and malicious connections are distributed. The classes are not balanced; in fact, 79.24 \% of the examples in the training set belong to only one class (DoS). The rest of the data is split into 19.69 \% normal connections, 0.83 \% probe attacks, 0.23 \% remote-to-local attacks and an extremely small 0.01 \% of user-to-root attacks.

The problem is aggravated when we look at the cost matrix for missing or wrongly classifying an attack. Given the nature of the classification problem, the misclassification penalties are not the same for every class. For example, classifying a remote-to-local attack as normal traffic carries a much higher penalty than the opposite mistake. For this reason, the cost matrix in Table (\ref{tab:cost_matrix}) is given to represent the seriousness of each mistake. The rows of this matrix represent the real class while the columns represent the predicted class. For instance, classifying a remote-to-local attack (R2L) as normal traffic incurs a penalty of 4, while a probe attack classified as normal traffic has an associated cost of 1.
\begin{table}[ht]
\centering
  \begin{tabular}{c|ccccc}
  & normal & probe & DoS & U2R &R2L\\
  \hline
  normal & 0 & 1 & 2 & 2 & 2\\
  probe  & 1 & 0 & 2 & 2 & 2\\
  DoS    & 2 & 1 & 0 & 2 & 2\\
  U2R    & 3 & 2 & 2 & 0 & 2\\ 
  R2L    & 4 & 2 & 2 & 2 & 0\\
  \end{tabular}
  \caption{Costs associated with different classification mistakes.}
  \label{tab:cost_matrix}
\end{table}
We can see that the most costly attacks are also the least frequent, so it is crucial to classify them correctly in order to achieve a lower average cost.
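The average classification cost follows directly from this matrix: each (true, predicted) pair is looked up and the penalties are averaged. A minimal sketch in Python (the class ordering and values are taken from Table \ref{tab:cost_matrix}; the function and variable names are illustrative):

```python
import numpy as np

# Cost matrix from Table 1: rows = true class, columns = predicted class.
CLASSES = ["normal", "probe", "DoS", "U2R", "R2L"]
COST = np.array([
    [0, 1, 2, 2, 2],  # true normal
    [1, 0, 2, 2, 2],  # true probe
    [2, 1, 0, 2, 2],  # true DoS
    [3, 2, 2, 0, 2],  # true U2R
    [4, 2, 2, 2, 0],  # true R2L
])

def average_cost(y_true, y_pred):
    """Mean misclassification penalty over all instances."""
    idx = {c: i for i, c in enumerate(CLASSES)}
    costs = [COST[idx[t], idx[p]] for t, p in zip(y_true, y_pred)]
    return sum(costs) / len(costs)

# Example: one R2L attack missed as normal (penalty 4), three correct.
print(average_cost(["R2L", "DoS", "normal", "probe"],
                   ["normal", "DoS", "normal", "probe"]))  # 1.0
```

This is the evaluation measure used for all the classifiers below.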

As previously mentioned, the malicious attacks are further subdivided into 22 different subcategories, and the testing data contains an additional 14 new categories. Experts believe that almost all new attacks are variations of previously seen attacks; therefore, from the signatures of old attacks, it should be possible to detect new types.

\section{Data Preparation}

After the KDD Cup '99 finished, the organizers published the labels of the test set. As we wanted as much training data as possible, we decided to use the test set as a validation set, while training our algorithms on the rest of the data. Due to the large size of both datasets, we considered it impractical to use all of their instances. Instead, we decided to work with the 10 \% subsets available on the competition website. It is noteworthy that these subsets were not sampled proportionally to the attack types of the original ones: the organizers preserved all examples from the rare classes (U2R, R2L), while performing standard subsampling from the others (Probe, DoS, Normal).

The datasets were labelled in terms of attack types, but every attack type belongs to one of the five classes described earlier. Therefore, our first preprocessing step was to add the true classes to the instances in the training and testing datasets. The test set contained a high number of unknown attack types such as \textit{apache2}, \textit{mailbomb}, \textit{processtable} and \textit{udpstorm}. This makes the dataset suitable for anomaly detection, where the objective is to distinguish the known from the unknown types. However, experts in the field are able to determine the class membership of these attacks and, as we wanted to preserve as much testing data as possible, we did so too \cite{Eskin1999,Shirazi2009}.

One important observation about the network intrusion dataset is that it has a very high number of duplicate entries, more precisely 75 \% in the training set and 78 \% in the test set. This high redundancy may be detrimental to the performance of the learning algorithms, as they will be biased towards the frequent classes, harming the detection of the infrequent classes which, in our case, have higher misclassification costs. This argument motivated us to remove the duplicate entries from the training and testing sets. However, the processed test set was used only to compare the performance of our algorithms on a dataset with and without duplicates.
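Duplicate removal is straightforward once each record is treated as a tuple of its features plus label. A hypothetical sketch (the field values are illustrative, not the actual preprocessing scripts we used):

```python
def remove_duplicates(records):
    """Keep the first occurrence of each identical record, preserving order."""
    seen = set()
    unique = []
    for rec in records:
        key = tuple(rec)  # a record becomes hashable once tupled
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [("tcp", "http", 215, "normal"),
           ("tcp", "http", 215, "normal"),   # exact duplicate
           ("icmp", "ecr_i", 1032, "smurf")]
deduped = remove_duplicates(records)
print(len(deduped))  # 2
```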

After duplicate removal, the distribution of the classes changed dramatically. The most frequent class became \textit{Normal} with 60.33 \%, while the percentage of the \textit{DoS} attacks dropped to 37.48 \%. The infrequent attack classes did not contain many duplicates, so their percentages increased to 1.46 \%, 0.04 \% and 0.69 \% for \textit{probe}, \textit{R2L}, and \textit{U2R}, respectively. Figure (\ref{fig:data_distribution}) shows the resulting class distribution with the frequencies stacked together.

\begin{figure} [ht]
\centering
\includegraphics[scale=0.70]{figures/data_distribution}
\caption{Distribution of the instances of the training dataset.}
\label{fig:data_distribution}
\end{figure}

The network intrusion dataset is a mixture of continuous and categorical attributes. Moreover, some of the categorical attributes are very important, as they correlate well with the class attribute. However, to use these attributes in learning algorithms which only work with numeric types (Gaussian Processes, Neural Networks, SVMs, Logistic Regression), a conversion procedure is needed. We decided to apply a mapping between the values of the categorical attributes and the set of non-negative integers. For example, the attribute \textit{protocol\_type} can take the values \textit{udp}, \textit{tcp} and \textit{icmp}, which we converted to 0, 1 and 2, respectively. This mapping allows us to calculate the correlation between the converted categorical attributes and the numerical ones. Another option considered was representing the categorical attributes with a 1-of-M encoding, but the drawbacks of this encoding became evident immediately after calculating the correlations between the attributes: it ends up creating too many highly correlated binary attributes.
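The integer mapping can be built directly from the observed category values. A short sketch (the exact value ordering is arbitrary, as noted above, so any consistent assignment works; here categories are numbered in order of first appearance):

```python
def encode_categorical(values):
    """Map each distinct category to a small integer, in order of first appearance."""
    mapping = {}
    encoded = []
    for v in values:
        if v not in mapping:
            mapping[v] = len(mapping)
        encoded.append(mapping[v])
    return encoded, mapping

protocols = ["udp", "tcp", "icmp", "tcp", "udp"]
codes, mapping = encode_categorical(protocols)
print(codes)    # [0, 1, 2, 1, 0]
print(mapping)  # {'udp': 0, 'tcp': 1, 'icmp': 2}
```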

Besides providing a convenient representation, normalizing the data and converting the categorical attributes into numerical ones has three other advantages:
\begin{itemize}
\item[(a)] Some learning algorithms like SVMs and Multilayer Perceptrons work faster with data normalized in the interval [-1,1] or [0,1].
\item[(b)] It is often the case that simple linear normalization may improve the accuracy of the classifiers.
\item[(c)] Normalized data is easier to visualize. This may help in finding some interesting patterns.
\end{itemize}

We can divide our numerical attributes into two groups: \textit{short range} and \textit{long range} attributes. The first group includes all attributes with a low maximum value, such as \textit{duration [0,58329]}, \textit{num\_failed\_logins [0,5]} and \textit{count [0,511]}. The second group includes only the attributes \textit{src\_bytes} and \textit{dst\_bytes}, whose values go up to 1.3 billion. A linear transformation was applied to the short range group and a base-10 logarithmic transformation to the long range group. As a result, there are 38 attributes in the interval \textit{[0,1]}, one attribute in \textit{[0,7.80]} (\textit{src\_bytes}) and another in \textit{[0,6.71]} (\textit{dst\_bytes}). Figure (\ref{fig:boxplots}) shows the results of the logarithmic normalization of these two features.

\begin{figure} [ht]
\centering
\fbox{
  \subfloat[\textit{src\_bytes}]{\label{fig:1_a_3d}\includegraphics[scale=0.35]{figures/src_bytes.pdf}}
  \hspace{4mm}
  \subfloat[\textit{dst\_bytes}]{\label{fig:1_a_map}\includegraphics[scale=0.35]{figures/dst_bytes.pdf}}
}
\caption{Boxplots for the long range group of attributes after normalization.}
\label{fig:boxplots}
\end{figure}
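The two transformations can be sketched as follows. This is a minimal illustration, assuming a $\log_{10}(x+1)$ transform so that zero byte counts stay defined (the exact handling of zeros is an assumption; the ranges are those reported above):

```python
import math

def linear_normalize(x, lo, hi):
    """Min-max scale a short-range attribute into [0, 1]."""
    return (x - lo) / (hi - lo)

def log_normalize(x):
    """Base-10 log transform for a long-range attribute; +1 keeps zero defined."""
    return math.log10(x + 1)

print(linear_normalize(255, 0, 511))   # ~0.499, e.g. for the count attribute
print(log_normalize(0))                # 0.0
print(log_normalize(999_999))          # ~6.0, e.g. for src_bytes
```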

As a result of the above data transformations, we obtained 8 different datasets for training and testing. However, we decided to proceed with the following:
\begin{itemize}
\item[(a)] \textit{Training datasets}: normalized and non-normalized datasets without duplicates. We used both to train the algorithms that can work with numerical and categorical attributes, while we used the normalized version for SVMs, Generalized Boosted Logistic Regression and the Multilayer Perceptron.
\item[(b)] \textit{Testing datasets}: Normalized and non-normalized datasets with duplicates. These datasets are used as validation sets to compare the results from different classifiers and from the KDD'99 competition.
\end{itemize}


\section{Data Analysis and Statistics}

The features extracted from the network connections are of a mixed nature: they represent different forms of data in different formats; therefore, we expect some of the information to be redundant or unnecessary. Some forms of dimensionality reduction will have an impact on the dataset in general; other forms will only be used for specific classifiers. In this section we perform the feature selection that affects the data used for all predictors. Later, in the sections on specific classifiers, we perform additional feature selection that only affects those classifiers.

Evaluating the attributes with information gain immediately revealed two attributes which lack any information: \textit{is\_host\_login} and \textit{num\_outbound\_cmds}. These attributes were removed, reducing the number of features from 41 to 39. Analysing the rest of the attributes required an alternative to information gain, since some of the categorical features have many distinct values and information gain favours them for this very reason. To solve this problem we chose to rank features using the gain ratio criterion instead. In general, features with a gain ratio lower than 0.08 were not considered.
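Both measures can be computed from simple counts: information gain is the reduction in class entropy after splitting on an attribute, and gain ratio divides it by the attribute's own entropy, penalising many-valued attributes. A self-contained sketch on toy data (not the KDD attributes):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(attr, labels):
    """Reduction in class entropy after splitting on the attribute."""
    n = len(labels)
    split = {}
    for a, y in zip(attr, labels):
        split.setdefault(a, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

def gain_ratio(attr, labels):
    """Information gain divided by the split's own entropy, which penalises
    attributes with many distinct values."""
    return info_gain(attr, labels) / entropy(attr)

# A perfectly predictive two-valued attribute: gain ratio 1.0.
attr = ["tcp", "tcp", "udp", "udp"]
labels = ["normal", "normal", "DoS", "DoS"]
print(gain_ratio(attr, labels))  # 1.0
```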

Empirical evidence suggests that applying dimensionality reduction algorithms such as PCA, FA or LDA to this particular dataset worsens the results. To apply these algorithms, the categorical attributes must first be transformed into numerical ones, and the numeric values are chosen arbitrarily, as there is no ordering among the categorical elements. Instead of performing feature selection based on linear transformations of the features, we chose to further reduce the dimensionality by applying correlation-based feature selection \cite{Hall1999}.

\subsection{Correlation-Based Feature Analysis}
The most useful set of features for classification is one which contains many attributes that correlate with the class variable but do not correlate with one another. Low correlation between the attributes means that they can explain the class variable from different aspects. These observations motivated us to perform correlation analysis over the feature space.

We discovered that the features that correlate the most with each other are all features that describe the occurrence of an error on the host and service connections. Their correlation coefficient is very close to 1. This can be explained by the fact that whenever the error rate on the host side increases, due to rejection or any other reason, a reconnection on the service side is spawned. High error rates also increase the rate of reconnection to particular services or hosts. Moreover, the error attributes are highly correlated with the class: the correlation between \textit{class} and the \textit{flag} feature is 0.8136, while with \textit{dst\_host\_serror\_rate} it is 0.77. Whenever an attack is performed, a lot of errors appear on the host side, which is also reflected in the error flag. Figure (\ref{fig:ClassFlag}) shows a scatter plot of the above-mentioned attributes.

\begin{figure}[ht]
\centering
\includegraphics[width=100mm]{class_flag.png}
\caption{Scatter plot of attributes \textit{flag}(X) and \textit{class}(Y)}
\label{fig:ClassFlag}
\end{figure}

MATLAB was used to calculate the Pearson partial correlation coefficient between one error attribute and the class, controlling for the other error attributes. The correlation dropped almost to 0, which means that one error variable explains the same variance in the class as all the others. From these results we conclude that keeping only a few error attributes in the feature set will not harm the performance of the classifiers. However, since we did not know exactly which ones to keep, we delegated this task to feature selection algorithms such as Information Gain, Gain Ratio and Subset Evaluation \cite{Kayack}.
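The partial correlation of two variables controlling for a third can be obtained by correlating the residuals after regressing both on the controlled variable, which is what MATLAB's \textit{partialcorr} computes. A NumPy sketch on synthetic data (the variables here only mimic the redundant-error-attribute situation described above):

```python
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation of x and y after removing the linear effect of z."""
    def residual(v, z):
        # Least-squares fit v ~ a*z + b, then return the residuals.
        A = np.column_stack([z, np.ones_like(z)])
        coef, *_ = np.linalg.lstsq(A, v, rcond=None)
        return v - A @ coef
    rx, ry = residual(x, z), residual(y, z)
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
z = rng.normal(size=500)             # a shared "error rate" driver
x = z + 0.01 * rng.normal(size=500)  # one error attribute
y = z + 0.01 * rng.normal(size=500)  # another, nearly redundant one

print(float(np.corrcoef(x, y)[0, 1]))  # close to 1
print(partial_corr(x, y, z))           # close to 0: y adds nothing given z
```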

It is noteworthy that the attribute that correlates the most with the class is \textit{same\_srv\_rate}, which stands for the number of connections to one specific service. This attribute is very useful for recognizing the rare attack types, whose nature is to concentrate on one specific service of interest on the host side. Figure (\ref{fig:ClassSameSrvRate}) illustrates this with a scatter plot; one can observe that almost all instances of \textit{R2L} and \textit{U2R} have a normalized \textit{same\_srv\_rate} value of 1. The figure uses jitter over the locations of the instances to show the concentration of points around the classes.

\begin{figure}[ht]
\centering
\includegraphics[width=100mm]{class_same_srv_rate.png}
\caption{Scatter plot of attribute \textit{same\_srv\_rate}(X) against the \textit{class}(Y)}
\label{fig:ClassSameSrvRate}
\end{figure}

There is one group of features that has almost zero correlation with the class: the content-based features. These are domain-knowledge features that try to identify particularities in the contents of the TCP packets. They have no relation to the most frequent classes, \textit{DoS} and \textit{Normal}, but are necessary for recognizing the rare classes. Moreover, some of these attributes are useful for recognizing attacks of a particular subtype; for example, the \textit{land} feature is very powerful for recognizing attacks of type \textit{land}.

\subsection{Baseline}
After determining the minimal set of features useful for building classifiers, we now consider a simple model which will act as a baseline for future predictions. Because of the unbalanced nature of this classification problem, all classifiers and ranking algorithms need to consider the costs of making different mistakes, as given in Table (\ref{tab:cost_matrix}). The accuracy of the classification is not as important as the average cost per instance incurred by the predictions. For example, if we always predict \textit{DoS} as the class label over the full test dataset, we end up with the relatively high accuracy of 73.90 \%. For the baseline, therefore, the average classification cost is used, which implicitly favours high accuracy. Average costs become high when high misclassification rates are observed on classes with high costs, or when the overall accuracy is low. The lower the average cost, the better the classifier.

\begin{table}[ht]
\centering
  \begin{tabular}{c | c | c}
  \textbf{Class} & \textbf{Average Cost} & \textbf{Distribution in the Dataset} \\ \hline
  DoS    & 0.5219 & 73.90 \%\\  
  Probe  & 1.0394 & 01.34 \%\\
  Normal & 1.7023 & 19.48 \%\\ 
  U2R    & 1.8949 & 00.07 \%\\ 
  R2L    & 1.9995 & 05.20 \%\\
  \end{tabular}
  \caption{Average cost of classifying the full testing dataset using only one class.}
  \label{tab:avg_costs}
\end{table}

The second column of Table (\ref{tab:avg_costs}) shows the baseline values used for comparing the performance of different classifiers. These are the average costs obtained by trivially classifying everything in the testing dataset as a single class. The best average cost is obtained when everything is classified as a \textit{DoS} attack. The distribution of labels in the test dataset is shown in the third column of Table (\ref{tab:avg_costs}). The baseline average cost is $\mathbf{0.5219}$.
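Each baseline value is simply the expectation of the corresponding cost-matrix column under the test-set class distribution. A sketch using the matrix from Table \ref{tab:cost_matrix} and the rounded percentages from Table \ref{tab:avg_costs} (small differences from the reported figures are expected, since the percentages are themselves rounded):

```python
# Cost matrix (rows = true class, columns = predicted) and the test-set
# class frequencies, both as reported in the text.
CLASSES = ["normal", "probe", "DoS", "U2R", "R2L"]
COST = [
    [0, 1, 2, 2, 2],
    [1, 0, 2, 2, 2],
    [2, 1, 0, 2, 2],
    [3, 2, 2, 0, 2],
    [4, 2, 2, 2, 0],
]
FREQ = {"normal": 0.1948, "probe": 0.0134, "DoS": 0.7390,
        "U2R": 0.0007, "R2L": 0.0520}

def baseline_cost(predicted):
    """Average cost of labelling every test instance as `predicted`."""
    j = CLASSES.index(predicted)
    return sum(FREQ[c] * COST[i][j] for i, c in enumerate(CLASSES))

print(round(baseline_cost("DoS"), 4))  # 0.5218, close to the reported 0.5219
print(min(CLASSES, key=baseline_cost))  # DoS
```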

\section{Learning Mechanisms}
\subsection{Basic Classification}

\subsubsection{Na\"ive Bayes}

For Na\"ive Bayes classification it is possible to use either the original or the normalized training dataset. To create the Na\"ive Bayes models we used the CORElearn R package, which allows us to create a model that we can then use to predict the class labels and evaluate the results using the previously specified cost matrix (Table \ref{tab:cost_matrix}).

The classification models learned from the data are accurate at detecting \textit{Normal} connections and fairly accurate for \textit{Probe} and \textit{U2R} attacks. However, they fail to identify most of the \textit{DoS} attacks, which constitute the majority of the test instances, and the \textit{R2L} attacks, which are the most costly. For this reason the average cost is very high: 1.306 and 1.322 for the original and normalized datasets, respectively.

We performed dimensionality reduction according to the gain ratio of the attributes. These ratios are calculated with the \textit{attrEval} function in CORElearn; as the estimation parameter we chose \textit{GainRatioCost}, which computes a cost-sensitive version of the gain ratio. New Na\"ive Bayes models were created considering only the top 10 and the top 5 features. Table \ref{tab:nb} shows the average costs of these models.

For the original training set, the average cost remains around 1.3; in fact, it gets worse when fewer features are used. This happens because the attributes with high gain ratio have large value ranges which are not evenly distributed. Thus, the model tends to classify everything as \textit{Normal}, because it is the most frequent class and dominates through its prior. Something similar happens with the normalized data, where approximately 80 \% of the instances are classified as normal. In particular, the detection of \textit{Probe} and \textit{U2R} attacks decreased dramatically after the dimensionality reduction. This method's performance is below the baseline.

\begin{table}[ht]
\centering
  \begin{tabular}{c|ccc}
  \textbf{Training Set} & \textbf{All Features} & \textbf{Top 10} & \textbf{Top 5} \\
  \hline
  Original &   1.3060 & 1.3185 & 1.3292\\  
  Normalized & 1.3227 & 1.3359 & 1.3338\\
  \end{tabular}
  \caption{Average costs of different configurations of the Na\"ive Bayes classifier.}
  \label{tab:nb}
\end{table}
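The Na\"ive Bayes model itself is simple enough to sketch: class priors multiplied by per-feature likelihoods, with add-one smoothing. A stdlib-only illustration on toy categorical data (CORElearn's implementation differs in details such as its handling of continuous attributes):

```python
import math
from collections import Counter

class NaiveBayes:
    """Categorical Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = {c: math.log(y.count(c) / len(y)) for c in self.classes}
        # counts[c][j][v] = how often feature j takes value v within class c
        self.counts = {c: [Counter() for _ in X[0]] for c in self.classes}
        self.values = [set() for _ in X[0]]
        for row, c in zip(X, y):
            for j, v in enumerate(row):
                self.counts[c][j][v] += 1
                self.values[j].add(v)
        return self

    def predict(self, row):
        def log_post(c):
            n_c = sum(self.counts[c][0].values())  # instances of class c
            return self.prior[c] + sum(
                math.log((self.counts[c][j][v] + 1) / (n_c + len(self.values[j])))
                for j, v in enumerate(row))
        return max(self.classes, key=log_post)

X = [("tcp", "SF"), ("tcp", "SF"), ("icmp", "S0"), ("icmp", "S0")]
y = ["normal", "normal", "DoS", "DoS"]
model = NaiveBayes().fit(X, y)
print(model.predict(("icmp", "S0")))  # DoS
```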
 
\subsubsection{KNN}

The KDD'99 competition results pointed out that a 1-nearest-neighbour classifier achieved very good performance. We confirmed this result by creating a KNN model in R with the CORElearn package; this model was indeed one of the best for the task. The model was created using the unnormalized training dataset and the average classification cost obtained was 0.2461.

\begin{table}[ht]
\centering
  \begin{tabular}{c|c}
  \textbf{Class} & \textbf{Accuracy}\\ \hline
  DoS    & 97.00 \%\\  
  Normal & 99.46 \%\\
  Probe  & 69.44 \%\\
  R2L    & 06.07 \%\\
  U2R    & 22.85 \%\\
  \end{tabular}
  \caption{Accuracy of each class in the 1-nearest neighbour model.}
  \label{tab:knn}
\end{table}

Table (\ref{tab:knn}) summarizes the accuracy achieved by this model on the individual classes. This classifier is able to distinguish the most prevalent classes with high accuracy; as the frequency of a class drops, so does the accuracy. It is evident that simple methods can perform well; nonetheless, the real challenge is to increase the prediction capabilities of the classifiers for the least frequent classes while still preserving good accuracy on the other classes.
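A 1-nearest-neighbour prediction reduces to finding the closest training instance under some distance. A minimal sketch over numeric feature vectors (CORElearn handles mixed categorical/numeric distances internally; the toy points below are illustrative):

```python
def one_nn_predict(train_X, train_y, query):
    """Label of the training point with the smallest squared Euclidean distance."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(train_X)), key=lambda i: sq_dist(train_X[i], query))
    return train_y[best]

train_X = [(0.0, 0.1), (0.9, 1.0), (1.0, 0.9)]
train_y = ["normal", "DoS", "DoS"]
print(one_nn_predict(train_X, train_y, (0.1, 0.0)))    # normal
print(one_nn_predict(train_X, train_y, (0.95, 0.95)))  # DoS
```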

\subsection{Advanced Classification}
\subsubsection{SVM}
Support Vector Machines (SVMs) are a widely used approach for classification tasks. They maximize the margin between the training instances and the class boundary while keeping only a small number of training vectors as support vectors. Depending on the kernel used, they can perform linear or non-linear classification.

We performed experiments using the statistical software \textit{R} and the package \textit{e1071}. This package handles multinomial SVM classification using a one-against-one voting scheme, which turned out to be very helpful for this classification task. Different kernels were used to perform classification: radial basis function, sigmoid, polynomial and linear. The parameters of the models were tuned using grid search on a subset of 20,000 randomly chosen instances from the training set. In particular, we were interested in finding good values for the regularization cost term and gamma\footnote{The kernel coefficient, which controls the degree of non-linearity of the SVM model.}. The values obtained by the grid search were 0.5 for gamma and 2 for the regularization cost. The methods were tested on the full-featured dataset and on one with 18 features selected by information gain.

The tests performed over the reduced feature space had a better overall performance. Table (\ref{tab:svm_measures}) summarizes the results obtained with SVMs. The results are still worse, by a high margin, than the KNN results. SVMs with the radial basis kernel predicted almost every point as \textit{Normal}, the most frequent class in the training dataset. This behaviour is observed because SVMs are biased towards the frequent classes. To overcome this problem, different class weights were applied to compensate for the class distribution. Additionally, we varied the misclassification costs for the SVMs, though this did not improve the original results. The best average classification cost, \textbf{0.3704}, was achieved with the sigmoid kernel using the parameters found by grid search; the number of support vectors found was 20,401.

\begin{table}[ht]
\centering
\begin{tabular}{c | c | c | c}
\textbf{ \# of Features Kept} & \textbf{Kernel method} & $\mathbf{\mathcal{ACC}}$ & \textbf{Accuracy} \\ \hline
18  & linear & 0.5058 & 74.70 \%\\  
18  & radial & 1.7020 & 00.22 \%\\  
18  & polynomial & 0.4421 & 77.90 \%\\  
18  & sigmoid & 0.3704 & 85.81 \% \\
\end{tabular}
\caption{SVM measures}
\label{tab:svm_measures}
\end{table}

The per-class accuracies show that SVMs are very good at detecting \textit{DoS} attacks, but this comes from the fact that most of the examples were classified as belonging to that class. As a result, the false positive rate of the classifier is extremely high and impractical.

\begin{table}[ht]
\centering
\begin{tabular}{c | c | c | c | c | c}
\textbf{Kernel method} & \textbf{Normal} & \textbf{Probe} & \textbf{DoS} & \textbf{U2R} & \textbf{R2L} \\ \hline
linear & 4.19 \%& 0.05 \% & 99.99  \% & 0.0 \% & 0.0 \% \\  
polynomial & 20.60 \% & 1.22 \% & 99.97 \% & 0.0 \% & 0.0 \% \\  
sigmoid & 71.03 \% & 0.456 \% & 97.39 \% & 0.0 \% & 0.0 \% \\
\end{tabular}
\caption{SVMs class accuracies}
\label{tab:svm_acc}
\end{table}
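The grid search used for tuning is just a loop over (cost, gamma) pairs, each scored on the held-out subset. A schematic sketch with a stand-in evaluation function (the actual experiments used e1071's tuning facilities in R; \textit{mock\_evaluate} below is illustrative, not a trained SVM):

```python
import itertools

def grid_search(costs, gammas, evaluate):
    """Return the (cost, gamma) pair with the lowest evaluation score."""
    best_params, best_score = None, float("inf")
    for c, g in itertools.product(costs, gammas):
        score = evaluate(c, g)
        if score < best_score:
            best_params, best_score = (c, g), score
    return best_params, best_score

# Stand-in for "train an SVM with these parameters and compute its average
# classification cost on the 20,000-instance validation subset".
def mock_evaluate(cost, gamma):
    return abs(cost - 2) + abs(gamma - 0.5)

params, score = grid_search([0.5, 1, 2, 4], [0.1, 0.5, 1.0], mock_evaluate)
print(params)  # (2, 0.5)
```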

\subsubsection{Decision Trees}
Decision trees seem a suitable candidate for this classification problem, as the features come from different sources and represent completely different things. The attribute selection mechanisms employed by this technique cope well with multi-class classification and, at the same time, implicitly select useful attributes. The algorithms were implemented in R with the CORElearn package.

Both the normalized and non-normalized datasets were used to evaluate the performance of this classifier. Tests using the normalized dataset resulted in better predictions. Table (\ref{tab:dt}) summarizes the predictions obtained on each dataset under different feature selection approaches. The top features were selected according to the information gain (IG) and gain ratio (GR) measures.

\begin{table}[ht]
\centering
  \begin{tabular}{c|cccc}
  \textbf{Training Dataset} & \textbf{All Features} & \textbf{Top 10 GR} & \textbf{Top 5 GR} & \textbf{Top 10 IG}\\ \hline
  Original   & 0.305 & 0.291 & 0.293 & 0.291\\  
  Normalized & 0.262 & 0.274 & 0.274 & 0.270\\
  \end{tabular}
  \caption{Average classification costs of Decision Trees. }
  \label{tab:dt}
\end{table}

The best average classification cost obtained was $0.2619$. This is a great improvement over the baseline and Na\"ive Bayes results, but not enough to surpass KNN. Nonetheless, this classifier was able to predict \textit{Normal} connections and \textit{DoS} attacks with high confidence. Table (\ref{tab:dt_best_accuracies}) summarizes the accuracy achieved on the individual classes.

\begin{table}[ht]
\centering
  \begin{tabular}{c|c}
  \textbf{Class} & \textbf{Accuracy} \\ \hline
  Normal & 91.62 \% \\  
  Probe & 64.38 \% \\
  DoS & 97.50 \% \\
  U2R & 0.0 \% \\
  R2L & 3.30 \% \\
  \end{tabular}
  \caption{Accuracy of individual classes for the best Decision Tree classifier.}
  \label{tab:dt_best_accuracies}
\end{table}


\subsubsection{Random Forest}
Decision trees proved successful in classifying the three major classes in the dataset (\textit{Normal}, \textit{DoS} and \textit{Probe}); however, there is still room for improvement, as the least frequent classes were classified with only moderate success. Random forests combine different decision trees, each built using a number of attributes randomly selected from the available ones. Each decision tree votes on the class label of a test instance. If certain features relate more to the least frequent classes, the random selection of smaller feature subsets might favour these classes. In the following experiments the random forests use 50 trees and 15 random attributes per tree; the depth of the generated trees is not constrained.

It is important to remove useless features for this classifier to work at its full capacity: decision trees that randomly select irrelevant features will have suboptimal performance. In this implementation of random forests, the attribute splits inside the decision trees are chosen using information gain, so information gain was also used to select the most relevant features. To find the best number of features to keep, evaluations were performed keeping 10, 15, 25, 30, 35 and 40 features. Table (\ref{tab:random_forest_evaluation}) summarizes the accuracy and average classification cost ($\mathcal{ACC}$) of these tests.

\begin{table}[ht]
\centering
\begin{tabular}{c | c | c}
\textbf{ \# of Features Kept} & $\mathbf{\mathcal{ACC}}$ & \textbf{Accuracy} \\ \hline
10    & 0.4219 & 86.40 \%\\  
15    & 0.2615 & 91.10 \%\\
25    & 0.2427 & 92.26 \%\\ 
30    & 0.2391 & 92.42 \%\\ 
35    & 0.2530 & 92.51 \%\\
40    & 0.4610 & 84.35 \%\\
\end{tabular}
\caption{Evaluation of Random Forests keeping different amounts of features.}
\label{tab:random_forest_evaluation}
\end{table}

Keeping the top 30 features ranked by information gain yields the best results. Both the overall accuracy and the average classification cost are better than those of any decision tree evaluation from the previous section. This algorithm would have ranked fourth in the final results of the KDD'99 competition.

\subsubsection{Generalized Boosted Regression Model}
The Generalized Boosted Regression Model (GBM) is a model consisting of regression trees trained using gradient boosting. The general idea is to iteratively train simple regression trees, where the trees constructed on each iteration are fit to the prediction residuals of the previous trees. The regression trees, just like their classification counterparts, perform binary splits over one attribute at each non-leaf node, but use function estimators instead of class labels at the leaves. Usually, in the GBM setting, the regression trees have very simple structures and contain only a few nodes.

We used \textit{R} and the \textit{gbm} package to perform these experiments. Since we are dealing with a multi-label classification problem (with 5 different classes), a separate GBM classifier is trained for every pair of class values, resulting in 10 binary classifiers. Each of them was trained only on the training instances belonging to one of its two attack classes. The GBMs were trained to optimize the Bernoulli-logistic log-likelihood function. Additionally, every training instance was weighted according to the cost matrix in Table (\ref{tab:cost_matrix}). The predictions on the test data were made by observing the probabilities produced by every classifier and applying the majority vote rule.

Some additional settings used by this classifier are the number of trees for the boosted regression, which corresponds to the number of boosting iterations; the minimum number of observations that the leaves of a tree must have; and the interaction depth, which controls the depth of the trees.
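The pairwise (one-vs-one) voting scheme described above can be sketched as follows. The 10 binary classifiers here are hypothetical constant stand-ins, not trained GBMs; each is assumed to return the probability that the instance belongs to the first class of its pair:

```python
from collections import Counter
from itertools import combinations

CLASSES = ["Normal", "Probe", "DoS", "U2R", "R2L"]

def one_vs_one_predict(pairwise, x):
    """Each of the 10 binary classifiers votes for one of its two classes;
    the class collecting the most votes wins."""
    votes = Counter()
    for (a, b) in combinations(CLASSES, 2):
        p_a = pairwise[(a, b)](x)   # P(class == a | x) from the (a, b) classifier
        votes[a if p_a >= 0.5 else b] += 1
    return votes.most_common(1)[0][0]

# hypothetical classifiers: every pair involving "DoS" votes for DoS
pairwise = {pair: (lambda x, p=pair: 1.0 if p[0] == "DoS" else 0.0)
            for pair in combinations(CLASSES, 2)}
print(one_vs_one_predict(pairwise, None))  # DoS
```

With 5 classes, $\binom{5}{2} = 10$ binary problems are trained, which matches the 10 classifiers mentioned above.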

First, we observed that the performance of the algorithm improved as the number of trees created during training was increased. We used up to 6,000 trees in our experiments and noticed that the loss function decreases monotonically as more trees are added. This behaviour is illustrated in Figure \ref{fig:BeurnolliLossGBM}. We believe that using more than 6,000 trees would improve the performance further, but the gain would likely be barely noticeable and not worth the extra computation.

\begin{figure}[ht]
\centering
\includegraphics[width=150mm]{gbm_test.png}
\caption{Bernoulli loss for GBM}
\label{fig:BeurnolliLossGBM}
\end{figure}

To limit the high computational cost of training the GBMs, we restricted the number of training examples used by each classifier to 40,000. This allowed us to use a high number of iterations/trees, which proved more important for obtaining good results. Also, through empirical experimentation, we concluded that the optimal value for the interaction depth parameter is 3.
We achieved results that are very close to those of the K-nearest neighbour algorithm: the best average cost obtained was \textbf{0.2498}, about 0.004 higher than that of KNN. However, GBM obtained the best overall accuracy at 92.60 \%. Table \ref{tab:GBM} summarizes the experiments that we performed and their results.

\begin{table}[ht]
\centering
\begin{tabular}{c | c | c | c}
\textbf{ \# of Trees} & \textbf{ \# of Features Kept} & $\mathbf{\mathcal{ACC}}$ & \textbf{Accuracy} \\ \hline
4000  & 12 & 0.2650 & 89.5 \%\\  
4000  & ALL & 0.2507 & 92.2 \% \\
6000  & ALL & 0.2498 & 92.6 \%\\
\end{tabular}
\caption{GBM Experiments}
\label{tab:GBM}
\end{table}



We also report the accuracy of the classifier on each attack class in Table \ref{tab:GBMClass}.

\begin{table}[ht]
\centering
\begin{tabular}{c | c | c | c | c}
\textbf{Normal} & \textbf{Probe} & \textbf{DoS} & \textbf{U2R} & \textbf{R2L}\\\hline
99.424 \% & 83.5 \% & 97.58 \% & 11.42 \% & 0.4 \%\\ 
\end{tabular}
\caption{GBM accuracy on different attack classes}
\label{tab:GBMClass}
\end{table}

\section{Evaluation}
The main metric used to evaluate every algorithm is the average classification cost (ACC). This metric is defined as
\begin{align*}
\mathcal{ACC} = \frac{1}{\mathcal{N}} \sum_{i = 1}^{5} \sum_{j = 1}^{5} \mathcal{M}_{ij} \times \mathcal{C}_{ij},
\end{align*}
where $\mathcal{N}$ represents the number of test instances, $\mathcal{M}$ the confusion matrix generated by the classifier, and $\mathcal{C}$ the cost matrix as defined in Table (\ref{tab:cost_matrix}). Some algorithms had cost-sensitive versions that take a cost matrix into account during training. Algorithms without this capability were still evaluated using the same metric. 
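The metric is a direct double sum over the two matrices. A minimal sketch on a toy 2-class confusion and cost matrix (not the actual 5-class KDD'99 matrices):

```python
def average_classification_cost(confusion, cost, n):
    """ACC = (1/N) * sum_ij M_ij * C_ij, with M the confusion matrix
    (rows = true class, cols = predicted class) and C the cost matrix."""
    return sum(m * c
               for m_row, c_row in zip(confusion, cost)
               for m, c in zip(m_row, c_row)) / n

# toy example: 90 correct predictions (cost 0), 10 errors at cost 2 each
M = [[90, 10],
     [0,   0]]
C = [[0, 2],
     [2, 0]]
print(average_classification_cost(M, C, 100))  # 0.2
```

Because correct predictions sit on the zero-cost diagonal, only the off-diagonal entries of $\mathcal{M}$ contribute to the score, weighted by how severe each kind of mistake is.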

Almost every classification algorithm evaluated was able to predict the \textit{Normal} and \textit{DoS} attack instances with high confidence, and the \textit{Probe} attacks with moderate success. The best classifiers were those that were able to maximize their prediction capabilities for the least frequent attacks while maintaining the same accuracy level for the frequent classes.

The results obtained by all the classifiers are summarized in Table (\ref{tab:acc_comparison}). We were successful in finding a classification algorithm that performed better than KNN and was better than all but three entries in the original KDD'99 competition. Random Forests were able to improve the accuracy on the least frequent classes by a small margin, which was enough to yield a considerable reduction in the average classification cost.

\begin{table}[ht]
\centering
\begin{tabular}{c | c | c}
\textbf{Classifier} & \textbf{Best $\mathbf{\mathcal{ACC}}$ Achieved} & \textbf{Accuracy} \\ \hline
Na\"ive Bayes & 1.3060 & 38.25 \%\\  
KNN    & 0.2461 & 92.31 \%\\
Generalized Boosted Regression & 0.2498 & 92.60 \%\\ 
SVM    & 0.3704 & 85.81 \%\\ 
Decision Trees    & 0.2619 & 90.93 \%\\
Random Forest & 0.2391 & 92.42 \%\\
\end{tabular}
\caption{Comparisons between the average classification cost of the examined classifiers.}
\label{tab:acc_comparison}
\end{table}

The per-class predictions of the Random Forest classifier are given in Table (\ref{tab:rf_best_accuracies}). Notice how each class was predicted with a higher accuracy than with KNN. The biggest impact comes from the improved predictions on the two least frequent classes.

\begin{table}[ht]
\centering
\begin{tabular}{c|c}
\textbf{Class} & \textbf{Accuracy} \\ \hline
Normal & 99.1 \% \\  
Probe & 78.0 \% \\
DoS & 97.5 \% \\
U2R & 24.0 \% \\
R2L & 8.0 \% \\
\end{tabular}
\caption{Accuracy of individual classes for Random Forest.}
\label{tab:rf_best_accuracies}
\end{table}

Different classifiers performed best at predicting different classes. Even though not every classifier was able to improve its average classification cost over KNN, some classifiers predicted certain classes with high confidence (high accuracy with low false positive rates). SVMs, for example, were extremely good at predicting \textit{DoS} attacks. Table (\ref{tab:ensemble}) shows which classifier predicted each class best. These results suggest that an ensemble method could dramatically improve both the classification accuracy and the average classification cost.

\begin{table}[ht]
\centering
\begin{tabular}{c|cc}
\textbf{Classifier} & \textbf{Class} & \textbf{Accuracy} \\ \hline
KNN & Normal & 99.46 \% \\  
Boosted Regression & Probe & 83.50 \% \\
SVM & DoS & 99.99 \% \\
Random Forest & U2R & 24.00 \% \\
Random Forest & R2L & 8.00 \% \\
\end{tabular}
\caption{The best classifiers at predicting different classes.}
\label{tab:ensemble}
\end{table}
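One hypothetical ensemble mechanism along these lines (a sketch only; we did not evaluate it) is to query each per-class specialist in order of its class's misclassification cost and let the first specialist that claims its own class decide:

```python
def specialist_ensemble(specialists, x):
    """Ask each per-class specialist in priority order (costliest class first);
    the first one predicting its own class wins, else fall back to the last."""
    for cls, clf in specialists:
        if clf(x) == cls:
            return cls
    return specialists[-1][1](x)

# hypothetical specialists, ordered by the misclassification cost of their class;
# each stand-in returns a fixed prediction for the current instance
specialists = [
    ("U2R",    lambda x: "Normal"),  # the U2R specialist does not claim this instance
    ("R2L",    lambda x: "Normal"),
    ("Probe",  lambda x: "Probe"),   # the Probe specialist claims it
    ("DoS",    lambda x: "DoS"),
    ("Normal", lambda x: "Normal"),
]
print(specialist_ensemble(specialists, None))  # Probe
```

Such a cascade would only pay off if the specialists' false positive rates are as low as the table suggests; otherwise the high-cost classes would absorb too many benign connections.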

\section{Conclusion}
The KDD'99 competition provided a very challenging dataset suitable for both classification and anomaly detection tasks. While both tasks are challenging, this project, like the competition itself, focused on the classification of different attack types into five general classes. The original competition was a very good example of how powerful a simple learning algorithm can be, and our results confirm that statement. Although we managed to successfully improve on our simple baselines, and on 1-Nearest Neighbours in particular, some of the models we used to achieve this involved complex structures and a large number of parameters. In general, one should be aware of the trade-off between parameter complexity and performance gain when selecting a model. 

We showed that using feature selection algorithms to find a good feature subset significantly improved the performance of most of the classifiers, while linear transformations like PCA and LDA were less effective. We also presented interesting relationships between the attack classes and the error attributes in the dataset.

We achieved most of our good results using tree-based algorithms, which demonstrates the power of these approaches. Decision Trees performed slightly worse than 1-NN, while Random Forests (RF), with appropriate tuning, achieved better results than it. It is also noteworthy that Generalized Boosted Regression (GBM) obtained results comparable to those of 1-NN and RF; however, GBMs are more complex to train and require a very large number of iterations to converge.

We believe that the best classifier would be one that combines the strengths of the current state-of-the-art systems in the field. Having five different learners, each good at predicting one particular class with very high accuracy and a very low false positive rate, we could apply an ensemble mechanism that would result in a classifier more accurate than any of them taken individually.

%Appendix
\newpage
\appendix
\appendixpage

\newcommand{\specialcell}[2][c]{%
\begin{tabular}[#1]{@{}c@{}}#2\end{tabular}}

\section{Dataset Features}
\subsection{Basic Features}
\begin{table}[ht]
\centering    
\begin{tabular}{|  c | p{9cm} | c |} \hline
\textbf{Feature Name } & \textbf{Description}   & \textbf{Data Type}\\ \hline
duration   & length (number of seconds) of the connection   & continuous\\ \hline
protocol\_type   & type of the protocol, e.g. tcp, udp, etc.   & discrete\\ \hline
service   & network service on the destination, e.g., http, telnet, etc.   & discrete\\ \hline
src\_bytes   & number of data bytes from source to destination   & continuous\\ \hline
dst\_bytes   & number of data bytes from destination to source   & continuous\\ \hline
flag   & normal or error status of the connection   & discrete \\ \hline
land   & 1 if connection is from/to the same host/port; 0 otherwise   & discrete\\ \hline
wrong\_fragment   & number of ``wrong'' fragments   & continuous\\ \hline
urgent   & number of urgent packets   & continuous\\ \hline
\end{tabular}
\end{table}

\newpage
\subsection{Traffic Features}
\subsubsection{Same Host Connections}

\begin{table}[ht]
\centering    
\begin{tabular}{|  c | p{9cm} | c |} \hline
\textbf{Feature Name } & \textbf{Description}   & \textbf{Data Type}\\ \hline
count   & number of connections to the same host as the current connection in the past two seconds  & continuous\\  \hline
serror\_rate   & \% of connections that have ``SYN'' errors   & continuous\\  \hline
rerror\_rate   & \% of connections that have ``REJ'' errors   & continuous\\  \hline
same\_srv\_rate   & \% of connections to the same service   & continuous\\  \hline
diff\_srv\_rate   & \% of connections to different services   & continuous\\  \hline
srv\_count   & number of connections to the same service as the current connection in the past two seconds   & continuous\\  \hline
\end{tabular}
\end{table}

\subsubsection{Same-Service Connections}

\begin{table}[ht]
\centering    
\begin{tabular}{|  c | p{9cm} | c |} \hline
\textbf{Feature Name } & \textbf{Description}   & \textbf{Data Type}\\ \hline
srv\_serror\_rate   & \% of connections that have ``SYN'' errors   & continuous\\ \hline
srv\_rerror\_rate   & \% of connections that have ``REJ'' errors   & continuous\\ \hline
srv\_diff\_host\_rate   & \% of connections to different hosts   & continuous \\ \hline
\end{tabular}
\end{table}

\subsection{Content Features}
\begin{table}[ht]
\centering    
\begin{tabular}{|  c | p{9cm} | c |} \hline
\textbf{Feature Name } & \textbf{Description}   & \textbf{Data Type}\\ \hline
hot   & number of ``hot'' indicators  & continuous\\  \hline
num\_failed\_logins   & number of failed login attempts   & continuous\\  \hline
logged\_in   & 1 if successfully logged in; 0 otherwise   & discrete\\  \hline
num\_compromised   & number of ``compromised'' conditions   & continuous\\  \hline
root\_shell   & 1 if root shell is obtained; 0 otherwise   & discrete\\  \hline
su\_attempted   & 1 if ``su root'' command attempted; 0 otherwise   & discrete\\  \hline
num\_root   & number of ``root'' accesses   & continuous\\  \hline
num\_file\_creations   & number of file creation operations   & continuous\\  \hline
num\_shells   & number of shell prompts   & continuous\\  \hline
num\_access\_files   & number of operations on access control files   & continuous\\  \hline
num\_outbound\_cmds  & number of outbound commands in an ftp session   & continuous\\  \hline
is\_hot\_login   & 1 if the login belongs to the ``hot'' list; 0 otherwise   & discrete\\  \hline
is\_guest\_login   & 1 if the login is a ``guest'' login; 0 otherwise   & discrete\\  \hline
\end{tabular}
\end{table}


%Bibliography
\bibliographystyle{plain}
\bibliography{library}

\end{document}
