\section{Introduction}

The dataset we used for our analysis contains multivariate data on internet usage, collected over approximately five weeks in 1997 by the Graphics and Visualization Unit (GVU) at Georgia Tech. The data comprise general demographic information on survey participants from all over the world, covering gender, age, language, ethnic origin, education, expertise, technical experience and hobbies. Fittingly, the survey was conducted as an internet survey: since there is no central registry of all Internet users, this was the simplest way to reach a hopefully representative subset of all users.\footnote{\url{http://www.cc.gatech.edu/gvu/user_surveys/survey-1997-10/\#methodology}}
The dataset\footnote{\url{https://archive.ics.uci.edu/ml/machine-learning-databases/internet_usage-mld/}} was made available to the public in 1999 by Iowa State University and has since been the basis of numerous analyses and papers.

Our requirements were to choose a dataset from the UCI Machine Learning Repository\footnote{\url{https://archive.ics.uci.edu/ml}} with at least 1000 instances and a minimum of 15 attributes, including class labels for the classification task. The internet usage dataset we chose has a total of 10104 instances with 72 attributes and therefore easily meets these requirements. The description of the set states that there are no missing values; however, we found not only a few erroneous instances but also one column with more than 25\% missing values, whereas all other columns are complete.
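A missing-value check like this is easy to automate. The following is a minimal Python sketch, assuming the rows have already been split into lists of fields and that missing values are encoded as empty strings (the dataset's actual encoding may differ):

```python
def missing_fraction(rows, missing_token=""):
    """Return the fraction of missing values for each column."""
    n = len(rows)
    return [sum(1 for row in rows if row[col] == missing_token) / n
            for col in range(len(rows[0]))]

# Toy example: the second column is missing in half of the rows.
rows = [["a", ""], ["b", "x"], ["c", ""], ["d", "y"]]
print(missing_fraction(rows))  # [0.0, 0.5]
```

Any column whose fraction exceeds a chosen threshold (such as the 25\% observed here) can then be flagged for special treatment.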

The dataset can be downloaded as a simple ASCII flat file with one observation per line and spaces separating the fields. In order to perform the classification and clustering tasks we had to reformat this file, as Weka naturally has limited options for importing data files. Given the flat file's structure, we chose to convert it into a comma-separated file, one of the formats Weka is able to read.
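Such a conversion can also be scripted instead of performed by hand in an editor. A minimal sketch in Python (the sample input is illustrative, not actual dataset content):

```python
import csv
import io

def space_to_csv(text: str) -> str:
    """Convert a space-separated flat file (one observation per line) to CSV."""
    out = io.StringIO()
    writer = csv.writer(out)
    for line in text.splitlines():
        # Split on single spaces only, so that doubled spaces surface
        # as empty fields instead of silently merging columns.
        writer.writerow(line.split(" "))
    return out.getvalue()

print(space_to_csv("56 K Modem yes"), end="")  # prints "56,K,Modem,yes"
```

Using the \texttt{csv} module rather than a plain string replacement also takes care of quoting, should a field itself contain a comma.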

Our first approach was to use the editor Notepad++\footnote{\url{http://notepad-plus-plus.org/}} to make all necessary changes. First we added a header line with all attribute names separated by commas, as the class labels came in a separate file. Next we simply tried to replace all white spaces with commas using Notepad++'s search-and-replace function. When trying to import the file into Weka, however, we got an error message, as can be seen in Figure \ref{fig:wekaImportError1}.

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/weka_import_error_1.png}
  	\caption{Weka import error}
  	\label{fig:wekaImportError1}
\end{figure}

After several unsuccessful iterations of changes and retries we realized that, for one, the dataset was partly corrupt, and in addition Notepad++ apparently could not handle search and replace on a file that big (which, to be honest, surprised us a bit). In a few rows it seemed as if characters were added, deleted or even changed during the replacement. Although this ostensibly affected only an extremely small number of rows, we could not work with a faulty dataset.

Due to these problems we decided to use Microsoft Excel instead to format the dataset. Excel has an import option where it is possible to choose a text file as source and specify that the values are separated by white space, as shown in Figure \ref{fig:excelImportOptions1} (note that the option to treat consecutive separators as one must not be checked).

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/excel_import_options1.png}
  	\caption{Excel import options}
  	\label{fig:excelImportOptions1}
\end{figure}

After looking at the data in the spreadsheet with the Weka error message(s) in mind, we discovered a few instances that appeared to possess too many attributes; in other words, these rows were 'too long' for the header. To get a better overview of the erroneous rows, we used Excel to sort all data entries by the apparently faulty additional columns. As can be observed clearly in Figure \ref{fig:corruptCsvfile1}, there are 8 rows which seem to be shifted to the right and contain illegal characters such as dots or random white spaces. Scrolling down a bit in the spreadsheet, we discovered another skip, meaning that there were 59 more rows with an additional column. Due to the partial ambiguity of the data we could not easily restore those values and rows, and we therefore decided to delete all 67 inaccurate instances in order to avoid working on a dataset containing wrong values. As the deleted rows amount to only about 0.66\% of the total dataset (67 of 10104 instances), we are confident that no significant insights were lost.
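This kind of check can also be automated before importing, by comparing each row's field count against the number of header fields. A small sketch, assuming the fields are space separated as in the raw file:

```python
def flag_bad_rows(lines, expected_fields):
    """Return (line number, line) pairs whose field count deviates."""
    return [(i, line) for i, line in enumerate(lines)
            if len(line.split(" ")) != expected_fields]

rows = ["a b c", "d e f", "g h i j"]   # third row has one extra field
print(flag_bad_rows(rows, 3))          # [(2, 'g h i j')]
```

Rows flagged this way can then be inspected individually and either repaired or dropped, as we did.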

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/corrupt_csvfile1.png}
  	\caption{Corrupt CSV file}
  	\label{fig:corruptCsvfile1}
\end{figure}

In the final step we copied the attribute names into the dataset in Excel, as those values were delivered in a separate file. Additionally, we discovered that one column label was missing; the values in the last unlabeled column appeared to be some kind of ID, as they are all distinct. We therefore named the attribute RID (row ID) and decided not to delete it, knowing that we would probably have to remove or ignore it later anyway to get meaningful results for the classification tasks. To complete the formatting, we saved the Excel sheet as a 'comma separated value' file (.csv) in order to be able to import it into Weka.
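Whether a column really behaves like a row ID can be verified by checking that all of its values are distinct. A quick sketch on comma-separated rows:

```python
def last_column_is_unique(rows):
    """True if the last field of every row is distinct, i.e. ID-like."""
    values = [row.split(",")[-1] for row in rows]
    return len(values) == len(set(values))

print(last_column_is_unique(["a,b,1", "c,d,2", "e,f,3"]))  # True
print(last_column_is_unique(["a,b,1", "c,d,1"]))           # False
```

A column that passes this test carries no predictive information and should indeed be removed or ignored for classification.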

It should be noted that we also had to change the computer's regional settings in order to save the CSV file with normal commas as separators, because with German regional settings a semicolon is used. Of course, a search and replace on the plain text file could have done the job, but we did not want to risk any wrong replacements again.
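Writing the file programmatically sidesteps the locale issue entirely, because the delimiter is stated explicitly in code instead of being taken from the regional settings. A minimal sketch with Python's \texttt{csv} module (the column names are illustrative):

```python
import csv
import io

out = io.StringIO()
# The delimiter is fixed in code, so the system locale (e.g. a German
# setting under which Excel would use ';') cannot change it.
writer = csv.writer(out, delimiter=",")
writer.writerow(["age", "gender", "RID"])        # illustrative header
writer.writerow(["25", "female", "1"])
print(out.getvalue(), end="")
```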