\section{Conclusions}

In this paper we discussed several data mining tasks: preprocessing a dataset, classification, handling missing values, and clustering. For this assignment we chose a dataset from a 1997 survey of internet users. Our first challenge was to convert the raw data into a format that our data mining software, WEKA, could handle. We had to overcome difficulties such as missing headers, rows with too many attributes, and columns shifted by illegal characters. After reaching the limits of Notepad++ we used Microsoft Excel to complete the formatting and saved the result as a comma-separated values (CSV) file so that it could be imported into WEKA. 

After applying some further preprocessing strategies, which are provided as filters in WEKA, we obtained a suitable ARFF file and proceeded with the classification task. 
%Classification
A reader who only glances at the values in the tables of the classification chapter~\ref{sec:classification} might get the impression that both classifiers work quite well. A closer look behind the scenes, however, reveals that only instances of class ``1'' are assigned correctly by both classifiers. Recall as well as precision values are in most cases not acceptable; for instance, the average recall and precision for class 3 with the Na\"ive Bayes classifier are only 0.313 and 0.294, respectively. The weighted average F-measures in the tables do not reveal this problem: most interviewed persons were white, and the ``weighted'' component disguises this imbalance. We therefore conclude that most errors and the poor values for the other classes derive from instances being wrongly assigned to class ``1''. In order to obtain a reliable prediction, it is important to use a dataset of higher quality and greater variance. 
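To make this effect concrete, the following Java sketch computes per-class precision, recall, and F-measure from a confusion matrix and then the frequency-weighted average F-measure. The matrix below is an invented toy example (not our survey data): one dominant class is classified well, the minority classes poorly, yet the weighted average still looks respectable.

```java
// Illustrative only: this confusion matrix is made up, not taken from
// our experiments. Rows = actual class, columns = predicted class.
public class WeightedFMeasure {
    public static void main(String[] args) {
        int[][] cm = {
            {90,  5,  5},   // majority class 1 dominates the dataset
            {12,  5,  3},   // minority class 2
            {11,  4,  5}    // minority class 3
        };
        int n = cm.length, total = 0;
        for (int[] row : cm) for (int v : row) total += v;

        double weightedF = 0.0;
        for (int c = 0; c < n; c++) {
            int tp = cm[c][c], actual = 0, predicted = 0;
            for (int j = 0; j < n; j++) {
                actual += cm[c][j];      // row sum: instances of class c
                predicted += cm[j][c];   // column sum: predictions of class c
            }
            double recall = (double) tp / actual;
            double precision = predicted == 0 ? 0 : (double) tp / predicted;
            double f = (precision + recall == 0) ? 0
                     : 2 * precision * recall / (precision + recall);
            weightedF += f * actual / (double) total;  // weight by class frequency
            System.out.printf("class %d: precision=%.3f recall=%.3f F=%.3f%n",
                              c + 1, precision, recall, f);
        }
        // The majority class drags the weighted average up even though
        // both minority classes are classified poorly.
        System.out.printf("weighted F=%.3f%n", weightedF);
    }
}
```

Here the weighted average is dominated by the well-classified majority class, which is exactly why the per-class values in our tables deserve a closer look than the weighted summary row.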

%missing values
Following the classification, we examined the effect of missing values on the results of the Na\"ive Bayes classifier in chapter~\ref{sec:missingvalues}. In order to do that we wrote a Java-based script which allowed us to generate missing values either across the whole dataset or for a specified column only. The second functionality our tool provides is the replacement of missing values according to one of three strategies. \\
First we generated eight datasets with different distributions and percentages of missing values and used the Na\"ive Bayes algorithm in WEKA for classification. We found that the missing values did not have a strong impact on the outcome.\\
Subsequently we examined the three replacement strategies for missing values and found that ignoring the columns containing missing values did not affect the classification results much either. The second strategy, replacing missing values with the attribute's mean or median, did produce some differences in the results, but for most settings these remained within a very limited range. A distinctly more noticeable effect appeared with the third option, which replaces a missing value with the mean or median of the attribute restricted to the instance's class. There we found that choosing the mean combined with a high percentage of missing values led to distinctly improved results. Furthermore, we could also observe that the higher an attribute's information gain, the higher the number of correctly classified instances.
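The third strategy can be sketched as follows. This is a simplified illustration with hypothetical method and variable names, not the actual code of our tool: a missing numeric value is replaced by the mean of that attribute over the instances sharing the same class label.

```java
import java.util.*;

// Simplified sketch of the class-mean replacement strategy.
// Data layout and method names are hypothetical, not our actual tool.
public class ClassMeanImputer {
    // values[i] is the attribute value of instance i (null = missing),
    // labels[i] is that instance's class label.
    static Double[] imputeByClassMean(Double[] values, String[] labels) {
        Map<String, Double> sum = new HashMap<>();
        Map<String, Integer> count = new HashMap<>();
        // first pass: accumulate per-class sums and counts of present values
        for (int i = 0; i < values.length; i++) {
            if (values[i] != null) {
                sum.merge(labels[i], values[i], Double::sum);
                count.merge(labels[i], 1, Integer::sum);
            }
        }
        // second pass: fill each gap with the mean of its instance's class
        Double[] out = values.clone();
        for (int i = 0; i < out.length; i++) {
            if (out[i] == null) {
                out[i] = sum.get(labels[i]) / count.get(labels[i]);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Double[] v = {1.0, 3.0, null, 10.0, null};
        String[] c = {"a", "a", "a", "b", "b"};
        // class "a" mean = (1+3)/2 = 2.0, class "b" mean = 10.0
        System.out.println(Arrays.toString(imputeByClassMean(v, c)));
    }
}
```

Using the class-conditional mean rather than the global mean keeps the imputed values consistent with the class-conditional distributions Na\"ive Bayes estimates, which is a plausible reason this strategy improved the results.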

%Clustering
After analyzing the results of the clustering algorithms and running them with different parameter settings in chapter~\ref{sec:clustering}, we reached the following conclusions:\\
As the number of clusters \textit{k} increases, the performance of K-Means and EM improves. In general the accuracy of both algorithms is not very good; it is expected to improve with larger datasets \cite{abbas:clustering}. 
From our experiments we can infer that the K-Means algorithm takes less time than the EM algorithm to build the model and uses less memory. Both clustering algorithms give almost the same results even when the number of clusters is changed, but we can conclude that the clusters determined with the EM algorithm have a finer granularity than those obtained with the K-Means method. 
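The iterative structure behind these runtime observations can be seen in a minimal K-Means loop. The toy one-dimensional version below is our own sketch for illustration, not WEKA's SimpleKMeans implementation: each iteration performs only distance comparisons and mean updates, which is why K-Means is cheaper than EM's probabilistic expectation and maximization steps.

```java
import java.util.*;

// Toy one-dimensional K-Means, sketched for illustration only;
// this is not the implementation used inside WEKA.
public class KMeans1D {
    static double[] cluster(double[] data, double[] centroids, int iters) {
        int k = centroids.length;
        double[] c = centroids.clone();
        for (int it = 0; it < iters; it++) {
            double[] sum = new double[k];
            int[] cnt = new int[k];
            // assignment step: each point joins its nearest centroid
            for (double x : data) {
                int best = 0;
                for (int j = 1; j < k; j++)
                    if (Math.abs(x - c[j]) < Math.abs(x - c[best])) best = j;
                sum[best] += x;
                cnt[best]++;
            }
            // update step: move each centroid to the mean of its points
            for (int j = 0; j < k; j++)
                if (cnt[j] > 0) c[j] = sum[j] / cnt[j];
        }
        return c;
    }

    public static void main(String[] args) {
        double[] data = {1.0, 1.2, 0.8, 9.0, 9.5, 8.5};
        // two well-separated groups; centroids converge near 1 and 9
        System.out.println(Arrays.toString(
                cluster(data, new double[]{0.0, 5.0}, 10)));
    }
}
```

EM generalizes this by replacing the hard nearest-centroid assignment with soft, probability-weighted memberships, which accounts for both its higher cost and the finer-grained clusters we observed.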

By working on this assignment we gained a good insight into data mining. Exploring the data and getting an impression of internet usage from almost two decades ago was an interesting task. We understand that the results improve with experience, because different datasets call for different approaches. We had essentially no problems using WEKA and found it to be a very good and straightforward open-source tool. Furthermore, the software worked fine on all our platforms, including Windows, Mac OS X, and Linux. 