\section{Missing Values}
\label{sec:missingvalues}

After experimenting with different preprocessing and classification algorithms, our next task was to take a deeper look at missing values. More precisely, we first analysed how different distributions of missing values in the dataset affect the classification outcome. Subsequently, we replaced those missing values using various strategies and again assessed their effect on the classification outcome. 

\subsection{Initially missing values}

For this purpose we explored the dataset right at the beginning, before we even started classification. As described previously, the dataset contained several erroneous entries which prevented us from importing it directly into Weka. After importing the file into Microsoft Excel we performed a first short analysis of the initially missing values, using Excel's functions to search for empty fields in the spreadsheet. Naturally, due to the few corrupt entries, most of the columns contained at least one missing value (see Figure~\ref{fig:initially_missing_values_dirty1}). 

As explained in the first section of the paper, we then cleaned the dataset by removing all rows which did not conform to the csv file's structure (67 entries, to be precise). We then had another look at the distribution of missing values in the dataset and found that only one column still contained incomplete entries: for the attribute 'Primary Computing Platform', 2,675 entries were empty, which amounts to about 27\% of the attribute's values, as can be seen in Figure~\ref{fig:initially_missing_values_cleaned1}. 

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/initially_missing_values_dirty.png}
  	\caption{Initially missing values in the raw dataset}
  	\label{fig:initially_missing_values_dirty1}
\end{figure}

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/initially_missing_values_cleaned.png}
  	\caption{Missing values after cleaning the dataset}
  	\label{fig:initially_missing_values_cleaned1}
\end{figure}

\subsection{Missing value script}

In order to experiment with the effect of missing values on the classification results we wrote a Java-based tool which allows us to manipulate the dataset. We packaged it as a jar file named 'MissingValueEngine.jar'; it has two main functions, namely randomly creating missing values and replacing missing values following different strategies. 

The Java program can be executed via the console using the command 'java -jar MissingValueEngine.jar' (it should be noted that 'java.exe' needs to be on the path). The script's input is defined via command line parameters. As a number of parameters can and must be defined, there is a help function, invoked with the parameter '-?', which prints some basic information on the program's functionality and its parameters (see Figure~\ref{fig:missingvalueengine_help}):

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/missingvalueengine_help.png}
  	\caption{Help output of the MissingValueEngine}
  	\label{fig:missingvalueengine_help}
\end{figure}

In the following we briefly describe the general parameters, which are independent of the functionality used:
\begin{itemize}
\item -? \\
Calls the help function as was demonstrated in \mbox{Figure \ref{fig:missingvalueengine_help}}.

\item -i \\
Defines the (absolute) path to the input file which has to be a file with comma separated values.

\item -o \\
Defines the (absolute) path to the output file. Be aware that existing files are overwritten.

\item -m \\
The mode of the MissingValueEngine states whether missing values should be generated ('generate') or existing missing values replaced ('replace').

\item -noheader \\
Appending this parameter simply states that the input file does not contain an attribute header line.
	
\end{itemize}
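The parameter handling described above can be sketched as follows. This is an illustrative reconstruction, not the actual source of the MissingValueEngine; the class and variable names are chosen here for readability only.

```java
// Hypothetical sketch of the general parameter parsing of a tool like
// MissingValueEngine; not the actual implementation.
public class ArgsSketch {

    static String parse(String[] args) {
        String input = null, output = null, mode = null;
        boolean header = true; // by default the input file has an attribute header line
        for (int i = 0; i < args.length; i++) {
            switch (args[i]) {
                case "-i":        input = args[++i]; break;  // path to the input csv file
                case "-o":        output = args[++i]; break; // output path (overwritten if it exists)
                case "-m":        mode = args[++i]; break;   // 'generate' or 'replace'
                case "-noheader": header = false; break;     // input has no header line
                // '-?' would print the help text and exit; omitted here for brevity
            }
        }
        return input + " " + output + " " + mode + " header=" + header;
    }

    public static void main(String[] args) {
        System.out.println(parse(new String[]{
            "-i", "dataset.csv", "-o", "out.csv", "-m", "generate", "-noheader"}));
        // prints: dataset.csv out.csv generate header=false
    }
}
```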

\subsubsection{Generating missing values}

The first task for the script was to provide the functionality of randomly generating a specified percentage of missing values for a given dataset. The requirements stated that two different strategies should be possible: either the generated missing values are distributed randomly across all attributes, or column indices are specified for which individual percentages are applied. The output of the generation function of our tool is again a dataset in which, according to the chosen option, the placeholder '?' is inserted to signal a missing value.

To specify how missing values should be generated, the following parameters can or need to be used:
\begin{itemize}
\item -all \\
This parameter should be used when the first option is chosen. After the parameter a percentage value (greater than 0 and less than 100) needs to be specified which is the percentage of missing values that are generated across all columns of the dataset.
\item -classindex \\
Optionally in addition to '-all' the column index of the class attribute (for classification) can be specified which excludes this attribute from the generation of missing values.
\item -p \\
When the second generation option is selected the user can specify single columns for which missing values should be generated, using a parameter list which states which percentage of missing values should be generated for which column index. The syntax is \mbox{\textless columnindex\textgreater :\textless percentage\textgreater}. For more than one attribute, several of those tuples separated by commas can be specified (e.g. 1:5,2:10,3:5).

\item -r \\
Defines the seed for the random number generator used for selecting which values to turn into missing values. This initialization parameter needs to be stated in order to ensure the repeatability of the results obtained.
	
\end{itemize}
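A minimal sketch of such a seeded generation step is shown below. This illustrates the approach rather than reproducing the tool's actual source; all names are chosen for this example.

```java
import java.util.Random;

// Illustrative sketch: blank out a given percentage of the values in one
// column with the '?' placeholder, using a fixed seed (cf. the '-r'
// parameter) so that runs are repeatable. Not the actual tool source.
public class GenerateMissingValues {

    static void generate(String[][] rows, int columnIndex, double percentage, long seed) {
        Random random = new Random(seed); // seeded for repeatability
        int target = (int) Math.round(rows.length * percentage / 100.0);
        int generated = 0;
        while (generated < target) {
            int row = random.nextInt(rows.length); // pick a random row
            if (!"?".equals(rows[row][columnIndex])) {
                rows[row][columnIndex] = "?";      // mark the value as missing
                generated++;
            }
        }
    }

    public static void main(String[] args) {
        String[][] data = new String[10][1];
        for (int i = 0; i < 10; i++) data[i][0] = "v" + i;
        generate(data, 0, 80, 123456L); // 80% missing values in column 0
        int missing = 0;
        for (String[] r : data) if ("?".equals(r[0])) missing++;
        System.out.println(missing); // prints 8
    }
}
```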

\subsubsection{Replacing missing values}

The second functionality the script provides is the replacement of missing values in a dataset following one of three defined strategies. 
The first strategy is to simply ignore attributes with missing values, which means that every column containing at least one missing value is removed from the dataset. This could well make sense for the one attribute that already had about 27\% initially missing values, as shown in Figure \ref{fig:initially_missing_values_cleaned1}.\\
The next replacement strategy is a bit more sophisticated: missing values of an attribute are replaced by the mean or median of the respective attribute. It should be noted that it explicitly needs to be stated whether the mean or the median is used, as this can influence the result distinctly. \\
The final approach we implemented is to replace a missing value by the mean or median of the attribute within the instance's class. In other words, if an entry in the dataset is empty, it is checked which class the particular instance belongs to, and the mean or median of the attribute calculated for this class is then inserted.

For the replacement functionality, too, certain parameters need to be defined:
\begin{itemize}
\item -ignore \\
When this parameter is included when executing the MissingValueEngine the first of the three replacement strategies is chosen meaning that every column containing at least one missing value is removed.

\item -attribute \\
Similar to the last parameter when '-attribute' is selected the second replacement strategy is chosen where missing values are replaced by the attribute's mean or median.

\item -class \\
When this parameter is selected missing values are replaced by the mean or median of that attribute in the respective class.

\item -classindex \\
For the options '-attribute' and '-class' the column index of the class attribute needs to be stated.

\item -type \\
For the parameters '-attribute' and '-class' this parameter needs to be included to state whether the mean or the median of the respective attribute should be used for replacements.

\end{itemize}
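The 'class replacement strategy' ('-class' with '-type mean') can be sketched as follows. This is again an illustrative reconstruction rather than the tool's actual source, and it assumes a numeric attribute for simplicity.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of replacing missing values by the mean of the
// attribute within the instance's class. Hypothetical names; not the
// actual MissingValueEngine source.
public class ClassMeanReplacer {

    static void replaceByClassMean(String[][] rows, int attrIndex, int classIndex) {
        Map<String, Double> sum = new HashMap<>();
        Map<String, Integer> count = new HashMap<>();
        // First pass: per-class sums and counts over the non-missing values.
        for (String[] row : rows) {
            if (!"?".equals(row[attrIndex])) {
                sum.merge(row[classIndex], Double.parseDouble(row[attrIndex]), Double::sum);
                count.merge(row[classIndex], 1, Integer::sum);
            }
        }
        // Second pass: fill each missing value with the mean of its own class.
        for (String[] row : rows) {
            if ("?".equals(row[attrIndex])) {
                double mean = sum.get(row[classIndex]) / count.get(row[classIndex]);
                row[attrIndex] = String.valueOf(mean);
            }
        }
    }

    public static void main(String[] args) {
        String[][] data = {          // column 0: attribute, column 1: class
            {"10", "A"}, {"20", "A"}, {"?", "A"},
            {"40", "B"}, {"?", "B"}
        };
        replaceByClassMean(data, 0, 1);
        System.out.println(data[2][0] + " " + data[4][0]); // prints 15.0 40.0
    }
}
```

The 'attribute replacement strategy' ('-attribute') works the same way, except that the mean or median is computed over the whole column instead of per class.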

\subsection{Generating missing value datasets}

After creating the MissingValueEngine tool, the next task was to generate datasets with different distributions of missing values for different attributes. To that end we first had a look at the information gain of all attributes with respect to the previous classification task. To obtain the attributes with the highest and lowest information gain we analyzed the dataset in Weka using the attribute evaluator \\ 'weka.attributeSelection.InfoGainAttributeEval'. Prior to that we excluded the column 'RID', as we believe this artificial attribute makes little sense when evaluating information gain in the context of classification. The results of the evaluation with the class 'Race' selected (as in the previous classification task) are shown in Figure \ref{fig:weka_information_gain1}.

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/weka_information_gain1.png}
  	\caption{Information gain analysis}
  	\label{fig:weka_information_gain1}
\end{figure}
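For reference, the measure computed by this evaluator is the standard information gain, i.e. the reduction in the entropy of the class variable $C$ achieved by observing an attribute $A$:
\[
IG(C, A) = H(C) - H(C \mid A), \qquad H(C) = -\sum_{c} p(c) \log_2 p(c)
\]
Attributes whose values reveal little about the class thus receive scores close to zero.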

We found that the information gain is relatively low for all attributes, but that there is still a difference of about a factor of 1000 between the attributes with the highest and the lowest information gain. For the attribute 'Country' we obtained an information gain of 0.103795, which was not surpassed by any other attribute. At the other end of the spectrum, the attribute \\'How\_You\_Heard\_About\_Survey\_Mailing\_List' had the lowest information gain with a value of 0.000106. Using cross validation (with 10 folds) for the information gain evaluation did not produce a different ranking of the attributes in our dataset.\\
After the evaluation we selected three attributes from the dataset for the generation of missing values. In addition to the attribute with the highest information gain ('Country') and the one with the lowest \\('How\_You\_Heard\_About\_Survey\_Mailing\_List') we also selected an attribute with a moderate information gain and therefore chose 'Gender' with an information gain value of 0.002189. \\
With these attributes selected we generated eight different datasets with the following specifics:
\begin{itemize}
\item 'dataset\_o1': \\
80\% missing values for the attribute 'Country'
\item 'dataset\_o2': \\
10\% missing values for the attribute 'Country'
\item 'dataset\_o3': \\
80\% missing values for the attribute 'Gender'
\item 'dataset\_o4': \\
10\% missing values for the attribute 'Gender'
\item 'dataset\_o5': \\
80\% missing values for the attribute \\
'How\_You\_Heard\_About\_Survey\_Mailing\_List'
\item 'dataset\_o6': \\
10\% missing values for the attribute \\
'How\_You\_Heard\_About\_Survey\_Mailing\_List'
\item 'dataset\_o7': \\
80\% missing values distributed randomly across all attributes in the dataset
\item 'dataset\_o8': \\
10\% missing values distributed randomly across all attributes in the dataset
\end{itemize}

To generate the missing values we used the script we wrote; for our first generated dataset we got the following output:\\

\texttt{java -jar MissingValueEngine.jar -i dataset.csv \newline
-o dataset\_o1.csv -m generate -p 11:80 -r 123456\newline
\newline
\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# \\
\#\#\#\#\#\#\#\#\#\#\#\#\# MissingValueEngine 1.0 \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\newline
\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# \\
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ \\
Start reading the input file data (with header)... \\
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ \\
Finished reading the input file data: \\
        -) Data: 70 columns \\
        -) Data: 10037 rows \\
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ \\
Initially missing values: \\
        -) Column \#36: 2675 \\
\\
A total of 2675 missing values are missing. \\
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ \\
Starting to execute missing value generation... \\
\\
Missing values generated: \\
        -) Column \#11: 8030 \\
\\
A total of 8030 missing values were created. \\
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ \\
Finished writing the output file: \\
        -) Data: 70 columns \\
        -) Data: 10037 rows \\
\\
Exit. \\
\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\# \\
}

For the other seven datasets we proceeded in a similar way and in the end had created 'dataset\_o1.csv' to 'dataset\_o8.csv', ready to be used for further analyses. 

\subsection{Classification of missing value datasets}

After we created eight datasets with different percentages of missing values for different attributes we went back to Weka for classification. We used the Na\"ive Bayes classifier (with the same settings as during the classification task) on all datasets and got the results shown in Table \ref{table:classificationResultNaiveBayesMissingValues}.

\begin{table}[h!]
\centering
 \begin{tabular}{| c | c | c | c | c |} 
 \hline
 \textbf{Set} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F-measure} \\ [0.5ex] 
 \hline\hline
 1 & 85.4339\% & 0.821 &  0.854 & 0.836 \\ 
 \hline
 2 & 85.4538\% & 0.823 & 0.855 & 0.837 \\
 \hline
 3 & 85.4538\% & 0.823 & 0.855 & 0.837 \\
 \hline
 4 & 85.3841\% & 0.822 & 0.854 & 0.836 \\
 \hline
 5 & 85.4538\% & 0.823 & 0.855 & 0.837 \\
 \hline
 6 & 85.4339\% & 0.823 & 0.854 & 0.836 \\
 \hline
 7 & 87.6568\% & 0.795 & 0.877 & 0.831 \\
 \hline
 8 & 85.5848\% & 0.821 & 0.856 & 0.835 \\
 \hline
\end{tabular}
\caption{Classification results for the missing value datasets}
\label{table:classificationResultNaiveBayesMissingValues}
\end{table}

As one can see, the classification results for the various datasets did not show much variance, as most of the values we obtained were in about the same range. We know that the Na\"ive Bayes classifier works well with missing values, which is why we assume that the large number of attributes and instances in our dataset enables the algorithm to compensate for the missing values.\\
As already described in the previous classification section of this report, we got an accuracy of 85.4538\% for our dataset (without missing values) with the Na\"ive Bayes classifier, together with a precision of 0.823, a recall of 0.855 and an F-measure of 0.837. Table \ref{table:classificationResultComparisonNaiveBayesMissingValues} compares those results with the data we obtained from classifying the datasets with missing values. As can be seen, the values hardly differ, with one exception: dataset 7, the dataset with 80\% missing values randomly distributed across all attributes. Its precision is smaller, meaning that a larger fraction of the instances assigned to each class does not actually belong to it, but on the other hand the recall increased, meaning that most of the relevant instances were recognized by the Na\"ive Bayes algorithm. We concluded that this increase in correctly classified instances for dataset 7 is due to outliers carrying less weight; apart from that we did not observe a major effect of missing values on the classification results.
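For clarity, the per-class measures reported by Weka (and averaged over all classes in our tables) are defined in terms of the true positives ($TP$), false positives ($FP$) and false negatives ($FN$) of the respective class as:
\[
\mathit{Precision} = \frac{TP}{TP + FP}, \qquad
\mathit{Recall} = \frac{TP}{TP + FN}, \qquad
F = \frac{2 \cdot \mathit{Precision} \cdot \mathit{Recall}}{\mathit{Precision} + \mathit{Recall}}
\]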

\begin{table}[h!]
\centering
 \begin{tabular}{| c | c | c | c | c |} 
 \hline
 \textbf{Set} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F-measure} \\ [0.5ex] 
 \hline\hline
 1 & $-0.0199$\% & $-0.002$ &  $-0.001$ & $-0.001$ \\ 
 \hline
 2 & $\pm 0.0000$\% & $\pm 0.000$ & $\pm 0.000$ & $\pm 0.000$ \\
 \hline
 3 & $\pm 0.0000$\% & $\pm 0.000$ & $\pm 0.000$ & $\pm 0.000$ \\
 \hline
 4 & $-0.0697$\% & $-0.001$ & $-0.001$ & $-0.001$ \\
 \hline
 5 & $\pm 0.0000$\% & $\pm 0.000$ & $\pm 0.000$ & $\pm 0.000$ \\
 \hline
 6 & $-0.0199$\% & $\pm 0.000$ & $-0.001$ & $-0.001$ \\
 \hline
 7 & +2.2030\% & $-0.028$ & +0.022 & $-0.006$ \\
 \hline
 8 & +0.1310\% & $-0.002$ & +0.001 & $-0.002$ \\
 \hline
\end{tabular}
\caption{Difference between classification results for the original dataset and the missing value datasets}
\label{table:classificationResultComparisonNaiveBayesMissingValues}
\end{table}

To test our earlier theory that the sheer number of attributes 'blurred' the effect of missing values, we removed all but 18 attributes from the dataset, leaving us mainly with data on the user and not the results of the survey. When we applied the Na\"ive Bayes classifier again we obtained 87.0479\% correctly classified instances, an overall improvement of about 1.6 percentage points. We then again generated 80\% missing values for the attribute 'Country' (which had the highest information gain) using our script. The renewed classification did not yield an outcome much different from before. We therefore concluded that a (well-thought-out) reduction of attributes in fact increases the accuracy of the classification but has little effect on the influence of missing values.

\subsection{Experimenting with replacement strategies}

As the classification of the datasets containing missing values did not show a noticeable difference in performance (neither positive nor negative), we were unsure what to expect from the replacement strategies, and again used our MissingValueEngine tool to handle the missing values.\\
First we selected the 'ignore' strategy, which means removing all columns containing missing values. As we had expected, none of the first six datasets' classification results changed much compared to our previous trial. The last two datasets, of course, were completely erased, as every column contained missing values and therefore all columns were deleted.\\
Next we had a look at the strategy which replaces missing values by the mean or median of the respective attribute. For the datasets with missing values in only one column, the classification results again did not differ much from the original results without missing values. However, when we selected datasets 7 and 8, which have missing values across the whole dataset (all attributes), we observed partly drastic differences. Especially when replacing by the attribute's median with a high fraction of missing values, the percentage of correctly classified instances got close to 100\%. We realized that those results stemmed from the fact that our class attribute 'Race' also had many missing values, which were replaced by its median. As a consequence, almost all values were (correctly) classified to this now dominating class. We decided that classification does not make much sense if there are missing values within the class attribute, so we went back to our missing value tool and remodelled it to exclude the class attribute when missing values are generated across the whole dataset.\\
After doing so we generated datasets 7 and 8 again with the new settings, but could not notice big differences in the classification results compared to the values we initially obtained in the classification section. After applying the attribute replacement strategy, however, the values normalized considerably compared to before, when we had extremely high accuracy due to the replacement of class attribute values. Table \ref{table:classificationResultComparisonNaiveBayesMissingValuesReplaceAttr} summarizes the results of the classification after we replaced all missing values by the attribute's mean or median respectively. One can see that in most cases the accuracy decreased due to the replacement, and we therefore concluded that this replacement strategy is probably not optimal. Furthermore, we noted that in most cases replacing by the mean rather than the median yielded a slightly better result.

\begin{table}[h!]
\centering
 \begin{tabular}{| c | c | c | c | c |} 
 \hline
 \textbf{Set} & \textbf{Mean} & \textbf{$\Delta$ Mean} & \textbf{Median} & \textbf{$\Delta$ Median} \\ [0.5ex] 
 \hline\hline
 1 & 85.6830\% & +0.2491\% & 85.6232\% & +0.1893\% \\ 
 \hline
 2 & 85.3542\% & $-0.0996$\% & 85.2645\% & $-0.1893$\% \\
 \hline
 3 & 85.3741\% & $-0.0797$\% & 85.3442\% & $-0.1096$\% \\
 \hline
 4 & 85.3143\% & $-0.0698$\% & 85.2645\% & $-0.1196$\% \\
 \hline
 5 & 85.3542\% & $-0.0996$\% & 85.2844\% & $-0.1694$\% \\
 \hline
 6 & 85.3442\% & $-0.0897$\% & 85.2944\% & $-0.1395$\% \\
 \hline
 7 & 87.5859\% & $-0.0709$\% & 87.6059\% & $-0.0509$\% \\
 \hline
 8 & 85.7029\% & +0.1181\% & 85.8723\% & +0.2875\% \\
 \hline
\end{tabular}
\caption{Difference between classification results for the missing value datasets and after using the 'attribute replacement strategy'}
\label{table:classificationResultComparisonNaiveBayesMissingValuesReplaceAttr}
\end{table}

Finally we created datasets using the third replacement strategy, where every missing value is replaced by the mean or median of that attribute in the respective class. As the previous two strategies did not bring distinctly improved classification performance compared to the datasets with missing values, we were hopeful to get better results with this strategy. Once again we used our Java tool to perform the replacement and subsequently conducted the classification tasks in Weka. As hoped, the percentage of correctly classified instances increased for most datasets, for some even dramatically, as shown in Table \ref{table:classificationResultComparisonNaiveBayesMissingValuesReplaceClass}. Just like before, we observed that using the attribute's mean for replacement worked far better than its median.\\
The first dataset contained 80\% missing values for 'Country', the attribute with the highest information gain. In our previous analyses the fact that we chose the attribute with the highest information gain never really showed. With the class replacement strategy we were able to accomplish a classification accuracy of more than 93\%, a clear improvement. Comparing with the second dataset, we found that the percentage of correctly classified instances after applying the class replacement strategy also correlates with the percentage of previously missing values: the more missing values, the better the strategy works, as more instances fit their classes.

\begin{table}[h!]
\centering
 \begin{tabular}{| c | c | c | c | c |} 
 \hline
 \textbf{Set} & \textbf{Mean} & \textbf{$\Delta$ Mean} & \textbf{Median} & \textbf{$\Delta$ Median} \\ [0.5ex] 
 \hline\hline
 1 & 93.6634\% & +8.2301\% & 91.7007\% & +6.2668\% \\ 
 \hline
 2 & 86.6992\% & +1.2454\% & 85.3741\% & $-0.0797$\% \\
 \hline
 3 & 86.3505\% & +0.8967\% & 85.3442\% & $-0.1096$\% \\
 \hline
 4 & 86.3405\% & +0.9564\% & 85.2645\% & $-0.1196$\% \\
 \hline
 5 & 86.3704\% & +0.9166\% & 85.2844\% & $-0.1694$\% \\
 \hline
 6 & 86.3605\% & +0.9266\% & 85.2944\% & $-0.1395$\% \\
 \hline
 7 & 100.0000\% & +12.3432\% & 99.9402\% & +12.2834\% \\
 \hline
 8 & 91.0930\% & +5.5082\% & 85.4937\% & $-0.0911$\% \\
 \hline
\end{tabular}
\caption{Difference between classification results for the missing value datasets and after using the 'class replacement strategy'}
\label{table:classificationResultComparisonNaiveBayesMissingValuesReplaceClass}
\end{table}

The other two lines which immediately catch one's eye are the results for datasets 7 and 8, which prior to the replacement contained missing values across all attributes. As with all datasets in our tests, the 'class replacement strategy' worked better the higher the percentage of replaced missing values. For dataset 7 we even obtained the almost unbelievable value of 100\% classification accuracy when using the attribute's mean. We had a closer look but could not identify any error in our procedure. The large number of missing values was replaced using the means/medians of a rather small sample of remaining values. As can be seen in Figure \ref{fig:dataset7_replace_class_distribution_age} for the attribute 'Age', the distributions of the attributes became extremely uneven for datasets where a vast amount of missing values was replaced. These disparities, and the fact that we have about 70 attributes in our dataset, apparently enable the Na\"ive Bayes algorithm to correctly classify all values in the dataset and explain the seemingly unreal result. 

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/dataset7_replace_class_distribution_age.png}
  	\caption{Distribution of 'Age' for dataset 7 after using the 'class replacement strategy' with the mean}
  	\label{fig:dataset7_replace_class_distribution_age}
\end{figure}

We can conclude that replacing missing values by the attribute's mean within the respective class clearly works best among the strategies we compared. However, it is important to mind the amount and distribution of missing values, as a very high percentage of replaced missing values will likely create instances with unequal attribute distributions and therefore artificially good classification results.