\section{Classification}
\label{sec:classification}

Our first task was to classify our chosen dataset. To do so, we had to select classifiers, perform some preprocessing steps, and finally train the chosen classifiers and test the results with different approaches.
Because our dataset contains roughly 10,000 instances, we considered this a manageable size. On these grounds we decided not to do any subsampling.

\subsection{Classification Algorithms}

According to the assignment specification we had to choose two significantly different algorithms for classifying the dataset. We therefore explored the algorithms supported by Weka by reading their documentation. The challenge was to find two algorithms reasonably suited to our kind of data with respect to performance and accuracy. We tried different implementations and ended up with more or less similar results. Depending on the attributes chosen for classification, the results were either quite bad or reasonably accurate. In line with the focus of the lecture, we finally decided to concentrate on one classifier based on a tree structure and one classifier based on the work of Thomas Bayes.

\subsubsection{REPTree}

REPTree is a fast decision tree learning algorithm. It builds a decision tree using information gain and prunes it using reduced-error pruning. It only sorts values for numeric attributes once. Missing values are dealt with using C4.5's method of fractional instances \cite{frank:reptree}.
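
The information-gain criterion used for splitting can be sketched as follows. This is only a minimal illustration, not Weka's implementation; the toy labels and the perfect split in the example are made up.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Entropy reduction achieved by splitting `labels` into `groups`."""
    total = len(labels)
    remainder = sum(len(g) / total * entropy(g) for g in groups)
    return entropy(labels) - remainder

# Toy example: a hypothetical binary attribute that separates the
# classes perfectly removes all uncertainty, so the gain is 1 bit.
labels = ["yes", "yes", "no", "no"]
groups = [["yes", "yes"], ["no", "no"]]
print(information_gain(labels, groups))  # -> 1.0
```

The tree builder evaluates this gain for every candidate attribute and picks the split that reduces entropy the most.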

With respect to our dataset, it is also relevant that this classifier supports nominal as well as numeric classes and can handle numeric, nominal, and binary attributes as well as missing values.

Some more advantages are \cite{hromkovic:design}:
\begin{itemize}
\item It handles both continuous and categorical values.
\item It makes complex relationships between variables easier to understand.
\item It minimizes the effect of incorrect or missing values in the final representation of the tree.
\end{itemize}
But the most important reason why we chose this classifier is that it gave us some of the best results in preliminary tests with different preprocessing steps. More on that in the following sections.
 

\subsubsection{Na\"iveBayes}

The Na\"ive Bayes classifier classifies the data by evaluating its probability of belonging to each class and assigns the data to the class with the highest probability. Probabilities that are not known are estimated from the training data.

Advantages of Na\"ive Bayes classifier\cite{bishop:pattern}:
\begin{itemize}
\item It scans the data just once.
\item Fast to train.
\item Fast to classify.
\item A basic implementation that is not overloaded with features.
\end{itemize}
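
The classification rule described above can be sketched as follows. This is a minimal illustration for nominal attributes with Laplace smoothing; the training instances and class labels in the example are hypothetical, and this is not Weka's implementation.

```python
from collections import defaultdict

def train(instances, labels):
    """Count class priors and per-attribute conditional frequencies
    from nominal training data."""
    priors = defaultdict(int)
    cond = defaultdict(int)   # (class, attribute index, value) -> count
    for x, y in zip(instances, labels):
        priors[y] += 1
        for i, v in enumerate(x):
            cond[(y, i, v)] += 1
    return priors, cond, len(labels)

def predict(x, priors, cond, n):
    """Assign x to the class with the highest (unnormalized) posterior."""
    best, best_p = None, -1.0
    for y, count in priors.items():
        p = count / n  # class prior P(y)
        for i, v in enumerate(x):
            # Laplace smoothing: unseen (class, value) pairs get a small
            # probability instead of zero.
            p *= (cond[(y, i, v)] + 1) / (count + 2)
        if p > best_p:
            best, best_p = y, p
    return best

# Hypothetical yes/no answers with class labels "1" and "4".
data = [("yes", "no"), ("yes", "yes"), ("no", "no")]
labels = ["1", "1", "4"]
priors, cond, n = train(data, labels)
print(predict(("yes", "no"), priors, cond, n))  # predicts "1"
```

The independence assumption is visible in the product over attributes: each attribute contributes its conditional probability independently of the others.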
The main reason for choosing this classifier was that it performs well on independent attributes, and in our view the Internet usage data appear to be highly independent.

\subsection{Preprocessing}

The given data had to be transformed into a specific form depending on the chosen classifier. For each algorithm we tried different filters. In this section we describe the parameters and steps used for each classifier. We evaluated the different results by interpreting the output of 10-fold cross-validation.

\subsubsection{General preprocessing}

The preprocessing steps that were necessary to make the data readable by Weka are described in the introduction of this paper. In addition, we performed the following transformation steps as preparation for both classifiers, using the filter functionality of Weka.

\textbf{weka.filter.unsupervised.attribute.Remove}\\
As mentioned in the introduction, we decided to keep the attribute 'RID' in the dataset. During our experiments we noticed that 'RID' did not give us any benefit in the classification results. We removed this attribute because its unique values had a negative effect on the performance of both classifiers.

We also removed the attribute 'Actual\_Time' because we were not able to find any information about what this attribute was used for in the questionnaire. In addition, the attribute had no influence on the performance or the results of the classifiers.
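
Outside of Weka, this removal step could be reproduced as a simple column filter; the column names below match our dataset, while the row values are made up for illustration.

```python
def drop_columns(rows, header, to_drop):
    """Remove the named columns from a CSV-like table."""
    keep = [i for i, name in enumerate(header) if name not in to_drop]
    new_header = [header[i] for i in keep]
    new_rows = [[row[i] for i in keep] for row in rows]
    return new_header, new_rows

header = ["RID", "Age", "Actual_Time", "Race"]
rows = [["1", "25", "12:03", "1"], ["2", "40", "09:41", "3"]]
header, rows = drop_columns(rows, header, {"RID", "Actual_Time"})
print(header)  # -> ['Age', 'Race']
```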

\textbf{weka.filter.unsupervised.attribute.Discretize}\\
The attribute 'Age' is difficult to classify, and matching it to a specific class is hard for the classifier because of its many distinct values. Considering the content of the attribute, we decided to discretize it into five classes. We used the filter 'Discretize' with attribute index 1 and five bins. Weka automatically calculated the ranges for each bin.
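
The effect of this filter can be sketched as equal-width binning, which, to our understanding, is the default behaviour of Weka's unsupervised Discretize filter; the age values in the example are hypothetical.

```python
def equal_width_bins(values, n_bins):
    """Assign each value to one of n_bins equal-width intervals."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    bins = []
    for v in values:
        # The maximum value would land in bin n_bins, so clamp it
        # into the last interval.
        b = min(int((v - lo) / width), n_bins - 1)
        bins.append(b)
    return bins

ages = [18, 22, 35, 47, 63, 80]  # made-up ages
print(equal_width_bins(ages, 5))  # -> [0, 0, 1, 2, 3, 4]
```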

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/img_classification_before_discretize.png}
  	\caption{Age before discretization}
\end{figure}

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/img_classification_after_discretize.png}
  	\caption{Age after discretization}
\end{figure}

\textbf{weka.filter.unsupervised.attribute.NumericToNominal}\\
To be able to use the Na\"ive Bayes classifier we had to transform the numeric values into nominal values. Because REPTree can handle both forms, we applied this transformation for all our tests. NumericToNominal is a filter for turning numeric attributes into nominal ones. Unlike discretization, it simply takes all numeric values and adds them to the list of nominal values of that attribute. This is useful after CSV imports to force certain attributes to become nominal, e.g., the class attribute containing values from 1 to 5 \cite{frac:nominal}.
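
The difference to discretization can be sketched in a few lines: every distinct numeric value becomes its own nominal label, and no binning takes place. The example codes are made up.

```python
def numeric_to_nominal(values):
    """Turn each distinct numeric value into its own nominal label,
    mirroring the idea of Weka's NumericToNominal (no binning)."""
    return [str(v) for v in values]

codes = [1, 5, 1, 99, 3]
nominal = numeric_to_nominal(codes)
print(nominal)                 # -> ['1', '5', '1', '99', '3']
print(sorted(set(nominal)))    # the resulting nominal value set
```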


\subsubsection{Scaling}
\label{subsub:scaling}

We did not find any reason to scale the whole dataset or samples of it. Most answers were given as "Yes" or "No", which was translated to 1 and 0. There are no weight-like quantities that could be compared. There is also no argument for standardization or normalization, because there is hardly any difference between the largest and the smallest values and no variances we would have to reduce.

\subsection{Training}

We ran several tests with different attributes as the class attribute. We decided to investigate the attribute "Race" in more detail because it gave us acceptable values for the classifier quality measures.
The attribute "Race" had the following answer possibilities:
\begin{itemize}
\item not say=0
\item White=1
\item Hispanic=2
\item Asian=3
\item Black=4
\item Latino=5
\item Indigenous=6
\item American=7
\item Korean=8
\item other=99
\end{itemize}

\subsubsection{Na\"ive Bayes}

\textbf{Parameters}\\
We started our first experiment with the standard parameters of the Na\"ive Bayes classifier. The result was an accuracy of 85.4538\%. This is not a particularly good value, but it is acceptable, and compared to other randomly chosen class attributes it is satisfactory.
A more detailed look at the precision and recall values revealed that class 1 has quite good values (recall=0.905, precision=0.948), but all other classes were not predicted precisely. For instance, the recall and precision of class 4 were 0.363 and 0.285. Although 270 persons answered the question of what they consider themselves with "Asian", just one instance was classified correctly. Building the model with default parameters took 0.01 seconds.

Next we changed the parameter "useKernelEstimator" from "False" to "True". We did not notice any differences compared to the default value: accuracy, precision, and recall were exactly the same, and the time needed again came to 0.01 seconds.

We then changed the parameter "useSupervisedDiscretization" from "False" to "True". Again, no differences in the quality of the classifier were noticeable. Only the time increased to 0.05 seconds, but when we ran the test with the same parameters again, it decreased to 0.02 seconds. In any case, all three time values are too small to be significant.

\textbf{Scaling}\\
As described in section~\ref{subsub:scaling}, we did not find any useful reason to scale the dataset. According to the assignment specification, however, we should also test a non-useful scaling. We therefore decided to use standardization. Because our data had already been transformed to nominal values, standardization was no longer possible. We had to load the initial CSV file again, repeat the preprocessing, and apply the filter \textbf{unsupervised.attribute.Standardize} before binning the age and transforming the data to nominal values.
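
Standardization rescales each attribute to zero mean and unit variance. A minimal z-score sketch of the idea (the 0/1 answer values are illustrative, and this is not necessarily identical to Weka's exact computation):

```python
from statistics import mean, pstdev

def standardize(values):
    """Rescale values to zero mean and unit variance (z-scores),
    using the population standard deviation."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

# A binary yes/no attribute encoded as 1/0.
print(standardize([0, 1, 1, 0]))  # -> [-1.0, 1.0, 1.0, -1.0]
```

As the example shows, a binary attribute is merely shifted and stretched, which is why standardization changes nothing of substance for our data.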

After that we tested our data again with class "Race" and Na\"ive Bayes as classifier. The accuracy (85.2247\%) was not significantly lower than in our earlier tests. We observed the same behaviour for the precision and recall values (0.905, 0.945), and the time needed to build the model was again 0.01 seconds.

\textbf{Training/test set splits}\\
The results of our experiments so far suggested that we should run our training/test set tests with the default parameters of Na\"ive Bayes and without scaling any data. According to the assignment specification, we should test the outcome and the performance of the classifier with different splits between training and test set. The following table lists the results of these experiments. The column F-Measure shows the weighted average value, and the column Class the class with the best F-Measure.

The F-measure can be viewed as a compromise between recall and precision. It is high only when both recall and precision are high. It is equivalent to recall when $\alpha$ = 0 and to precision when $\alpha$ = 1. The F-measure assumes values in the interval [0,1]. It is 0 when no relevant instances have been retrieved, and 1 if all retrieved instances are relevant and all relevant instances have been retrieved \cite{ling:encycldbs}.
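
With $P$ denoting precision and $R$ recall, this weighted formulation can be written as
\[
F_\alpha = \frac{1}{\frac{\alpha}{P} + \frac{1-\alpha}{R}},
\]
which reduces to $R$ for $\alpha = 0$, to $P$ for $\alpha = 1$, and to the common balanced F-measure $\frac{2PR}{P+R}$ for $\alpha = \frac{1}{2}$.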

 \begin{tabular}{ccccc}
 \textbf{Split} & \textbf{Accuracy} & \textbf{F-Measure} & \textbf{Class} & \textbf{Performance} \\
  5\% & 86.5233\% & 0.828 & 1 & 0.01 s\\
  15\% & 84.2105\% & 0.823  & 1 & 0.01 s\\
  25\% & 84.2455\% & 0.824 & 1 & 0.01 s\\
  35\% & 84.7486\% & 0.830 & 1 & 0.01 s\\
  45\% & 84.5833\% & 0.830 & 1 & 0.01 s\\
  55\% & 84.7908\% & 0.834 & 1 & 0.01 s\\
  65\% & 85.1409\% & 0.836 & 1 & 0.01 s\\
  75\% & 85.6517\% & 0.844 & 1 & 0.01 s\\
  85\% & 86.6534\% & 0.855 & 1 & 0.01 s\\
  95\% & 87.2510\% & 0.857 & 1 & 0.01 s\\               
 \end{tabular}
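
The percentage-split evaluation used in the table can be sketched as follows: train on the first part of the data, test on the remainder, and report the accuracy on the held-out part. The classifier here is a deliberately trivial majority-class placeholder, only to make the sketch runnable; it stands in for Weka's actual classifiers.

```python
def percentage_split(instances, labels, train_pct, fit, predict_fn):
    """Train on the first train_pct percent of the data, test on the
    remainder, and return the accuracy on the held-out part."""
    cut = int(len(instances) * train_pct / 100)
    model = fit(instances[:cut], labels[:cut])
    test_x, test_y = instances[cut:], labels[cut:]
    correct = sum(predict_fn(model, x) == y for x, y in zip(test_x, test_y))
    return correct / len(test_y)

# Placeholder "classifier": always predict the majority training class.
def fit(xs, ys):
    return max(set(ys), key=ys.count)

def predict_fn(model, x):
    return model

# Hypothetical labels: 8 instances of class "1", 2 of class "4".
labels = ["1"] * 8 + ["4"] * 2
acc = percentage_split(list(range(10)), labels, 50, fit, predict_fn)
print(acc)  # -> 0.6
```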

\subsubsection{REPTree}
\textbf{Parameters}\\
We started our experiments again with the default parameters of REPTree. The accuracy of 88.2634\% was slightly better than in our first experiment with Na\"ive Bayes. Not surprisingly, class 1 again resulted in good values for precision and recall (precision = 0.886 and recall = 0.997), and again the classification of the other classes was not satisfactory. Interestingly, however, the precision values are much better than the recall values, e.g. class 3: precision = 0.629, recall = 0.144 and class 4: precision = 0.333, recall = 0.005; this behaviour did not occur in our Na\"ive Bayes experiments. Building the model took 0.42 seconds. The reported tree size was 121.

Next we looked at the different parameters of REPTree. We first disabled pruning (\textbf{noPruning=True}) because we expected this to produce different results. The output indeed differed from our first experiment, but the values were also disappointing: the accuracy decreased to 83.7202\%, and the recall and precision values of all classes were reduced as well. The build time decreased to 0.35 seconds. As expected, the tree size increased to 3613.

The parameter \textbf{numFolds} determines the amount of data used for pruning. We ran several experiments with different values. Increasing this value had no significant effect on accuracy, but above a certain value (>500) the classifier assigns all data to class 1, which is simply wrong.

Playing around with the parameter \textbf{seed}, which is responsible for randomizing the data, resulted in different tree sizes. Neither the performance nor the accuracy changed significantly.

The tree had a depth of 5 with default parameter settings. Forcing a maximum depth with the parameter \textbf{maxDepth} resulted in a noticeably smaller tree. Significant changes in performance or accuracy were not noticeable.

We also ran some experiments with different values for \textbf{minNum}, which sets the minimum number of instances per leaf, and could not find any interesting differences in the results.

The last parameter, \textbf{minVarianceProp}, had no effect on our data sample, because it sets the minimum proportion of the training set's numeric class variance required for a split. As we had no numeric values left in our dataset, and testing the classifier with numeric values would make no sense in our case, we did not examine this parameter further.

\textbf{Scaling}\\
As already mentioned in section~\ref{subsub:scaling}, scaling does not make much sense for our kind of dataset. But again, to also demonstrate a non-useful scaling, we chose the same preprocessing setting as in the Na\"ive Bayes experiment and standardized all values.

The tests with this preprocessed dataset showed the already known results, e.g. an accuracy of 88.3033\%. The reason is that every possible answer is a single enumerated element; there are no comparable dimensions for which standardization would make sense.

\textbf{Training/test set splits}\\
As for the Na\"ive Bayes classifier, we also wanted to investigate the effect of different training and test set splits. The table below has the same structure as the table above: the column F-Measure shows the weighted average value, and the column Class the class with the best F-Measure.


 \begin{tabular}{ccccc}
 \textbf{Split} & \textbf{Accuracy} & \textbf{F-Measure} & \textbf{Class} & \textbf{Performance} \\
  5\% & 88.0545\% & 0.825 & 1 & 0.15 s\\
  15\% & 88.0670\% & 0.825 & 1 & 0.22 s\\
  25\% & 88.2173\% & 0.831 & 1 & 0.16 s\\
  35\% & 88.2128\% & 0.827 & 1 & 0.15 s\\
  45\% & 88.1703\% & 0.826 & 1 & 0.16 s\\
  55\% & 88.5765\% & 0.838 & 1 & 0.16 s\\
  65\% & 88.3291\% & 0.829 & 1 & 0.16 s\\
  75\% & 88.7206\% & 0.834 & 1 & 0.20 s\\
  85\% & 90.1726\% & 0.859 & 1 & 0.16 s\\
  95\% & 89.2430\% & 0.851 & 1 & 0.16 s\\               
 \end{tabular}
 
