\section{Clustering}
\label{sec:clustering}

Clustering divides data into groups (clusters) of similar objects. In this chapter two clustering algorithms are discussed: the widely used K-Means method and the Expectation Maximization method. Both algorithms are categorized as partitional methods, i.e. each object belongs to exactly one cluster, and objects may move between clusters while the clusters are iteratively refined. Partitional methods have a better computational performance and are therefore more suitable for big data analysis \cite{gupta:dm}. 

The general reasons for selecting these two algorithms are:
\begin{itemize}
\item Popularity
\item Flexibility
\item Handling high dimensionality
\end{itemize}
Detailed reasons for choosing each algorithm are stated in the corresponding sections.


\subsection{K-Means}

We chose K-Means because it is the most common and most easily implemented clustering method. The \textit{K} stands for the number of clusters, and each cluster is represented by the mean of its objects, the centroid, which explains the name K-Means. In each iteration every object is assigned to its closest centroid, and the centroids are then recomputed from the new assignments. This process is repeated until the cluster assignments no longer change. The size of the clusters is not fixed, so there may be large and small clusters, but clusters cannot overlap. The main challenge of this method is to find a suitable number of clusters and good seeds, the initial centroids of the clusters. If the user has no specific knowledge about the data, the seeds can be chosen randomly; unfortunately, the quality of the results depends on this choice. Further drawbacks of the K-Means method are that it is not robust against outliers and that it may be very slow on big data \cite{gupta:dm}. 
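The iterative procedure just described can be sketched in a few lines of plain Python (a toy illustration on two-dimensional points, not WEKA's implementation; all names are our own):

```python
import random

def kmeans(points, k, max_iter=100, seed=0):
    """Minimal K-Means on tuples of coordinates; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # seeds: k randomly chosen objects
    for _ in range(max_iter):
        # assignment step: each object joins its closest centroid
        labels = [min(range(k), key=lambda c: dist2(p, centroids[c])) for p in points]
        # update step: recompute every centroid as the mean of its members
        new_centroids = []
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                new_centroids.append(tuple(sum(x) / len(members) for x in zip(*members)))
            else:
                new_centroids.append(centroids[c])  # keep an empty cluster's centroid
        if new_centroids == centroids:  # stop when assignments no longer change
            break
        centroids = new_centroids
    return centroids, labels

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))
```

On two well-separated groups of points, the assignments converge after a few iterations regardless of which objects happen to be drawn as seeds.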

 
\subsection{Expectation Maximization}

The Expectation Maximization (EM) method assumes that all attribute values are normally distributed and independent of each other. This approach differs from the K-means algorithm, where picking the seeds only encodes the assumption that the objects in a group are similar. EM instead assigns objects to clusters according to probabilities, so as to maximize an expectation \cite{gupta:dm}. This expectation is also called the likelihood; it expresses how relevant each attribute value is to one cluster compared to the attributes of the other clusters. 
The EM process consists of two steps that are iterated several times: first, in the expectation step, the probability distributions are estimated; then, in the maximization step, the optimal parameters, such as the means, are determined so as to maximize the likelihood \cite{prajwala:em}. 
We chose this method because of its statistical foundation, in contrast to the simpler K-Means algorithm. 
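The alternation of the two steps can be illustrated on a toy one-dimensional mixture of two Gaussians (an illustrative sketch under the normality assumption above, not WEKA's EM clusterer; initialization and names are our own):

```python
import math

def gauss(x, m, s):
    """Normal density with mean m and standard deviation s."""
    return math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def em_gmm_1d(xs, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture; returns (means, std devs, weights)."""
    xs = sorted(xs)
    # crude initialisation: place the means at the lower and upper quartile
    mu = [xs[len(xs) // 4], xs[3 * len(xs) // 4]]
    sigma = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: estimate each component's responsibility for each point
        resp = []
        for x in xs:
            w = [pi[k] * gauss(x, mu[k], sigma[k]) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate the parameters to maximise the expected likelihood
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sigma[k] = max(1e-3, math.sqrt(var))  # floor keeps a component from collapsing
            pi[k] = nk / len(xs)
    return mu, sigma, pi
```

Run on data drawn around two separated centers, the estimated means settle on those centers after a handful of iterations.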


\subsection{Cluster analysis with WEKA}

The open source tool WEKA is used for comparing the two algorithms described above. For the cluster analysis we only consider the following attributes; all others are ignored.

\textbf{Attributes:}

\begin{itemize}
\item Age
\item Country
\item Gender
\item Race
\item Education\_Attainment
\item Household\_Income
\item Major\_Occupation
\item Marital\_Status
\item Opinions\_on\_Censorship
\item Primary\_Place\_of\_WWW\_Access
\item Registered\_to\_Vote
\item Sexual\_Preference
\item Web\_Ordering
\item Web\_Page\_Creation
\item Willingness\_to\_Pay\_Fees
\item Years\_on\_Internet
\end{itemize}

\textbf{Preprocessing:}

To make the visualization of the clusters with respect to the attribute Age more readable, we used the preprocessing filter \textit{Discretize} to split its values into 5 bins. Additionally, we applied the filter \textit{NumericToNominal} to all attributes. 
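Equal-width binning of the kind performed by the \textit{Discretize} filter can be sketched as follows (a simplified stand-in written for illustration, not the WEKA filter itself):

```python
def discretize(values, n_bins=5):
    """Equal-width binning: map each numeric value to a bin index 0 .. n_bins-1."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1  # guard against a constant attribute
    # the maximum value falls into the last bin rather than a bin of its own
    return [min(int((v - lo) / width), n_bins - 1) for v in values]
```

For example, ages spread evenly over 0 to 50 end up in five bins of width 10, with the maximum value clamped into the last bin.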

\subsubsection{EM clusterer}
\label{sec:emclusterer}

For the EM clusterer in WEKA we can adjust the following parameter settings:
\begin{itemize}
\item maxIterations: maximum number of iterations
\item minStdDev: minimum allowable standard deviation
\item numClusters: number of clusters; -1 means that the number of clusters is determined automatically by cross validation
\item seed: random number seed to be used
\end{itemize}

The default setting is shown in figure~\ref{fig:em-default}. 
\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/em-default.png}
  	\caption{EM default parameters}
\label{fig:em-default}
\end{figure}

In WEKA the log-likelihood is computed, i.e. the natural logarithm of the likelihood function, which is more convenient to work with numerically. A higher (less negative) log-likelihood means that the model assigns a higher probability to the observed data, i.e. the clusters fit the data better. 
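Why the logarithm is more convenient can be seen in a few lines of Python: the product of many small per-instance likelihoods underflows in floating point, while the sum of their logarithms stays in a workable range (the probability values below are made up purely for illustration):

```python
import math

probs = [1e-5] * 100  # per-instance likelihoods of a hypothetical model

# multiplying many small probabilities underflows to 0.0 in floating point
product = 1.0
for p in probs:
    product *= p
print(product)  # 0.0 -- the raw likelihood is unusable

# summing logarithms keeps the same information in a workable range
log_likelihood = sum(math.log(p) for p in probs)
print(log_likelihood)  # about -1151.3
```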

The following experiments are executed with different numbers of clusters:

\begin{enumerate}

\item Experiment

\begin{itemize}
\item Clusters: -1 
\item Clustered Instances:\\
 0        538 (  5\%)\\
 1        980 ( 10\%)\\
 2        771 (  8\%)\\
 3        224 (  2\%)\\
 4        686 (  7\%)\\
 5       1076 ( 11\%)\\
 6        623 (  6\%)\\
 7        225 (  2\%)\\
 8       1495 ( 15\%)\\
 9        551 (  5\%)\\
10        490 (  5\%)\\
11         50 (  0\%)\\
12        121 (  1\%)\\
13        281 (  3\%)\\
14        634 (  6\%)\\
15        290 (  3\%)\\
16        507 (  5\%)\\
17        495 (  5\%)\\
\item Log likelihood: -19.66737
\item Time taken to build model: 566.43 seconds
 \end{itemize}
\item Experiment
\begin{itemize}
\item Clusters: 2 
\item Clustered Instances:\\
0       5860 ( 58\%)\\
1       4177 ( 42\%)\\
\item Log likelihood: -20.45813
\item Time taken to build model: 1.78 seconds
 \end{itemize}
\item Experiment
\begin{itemize}
\item Clusters: 5
\item Clustered Instances:\\
0       2970 ( 30\%)\\
1       1550 ( 15\%)\\
2       1713 ( 17\%)\\
3       2791 ( 28\%)\\
4       1013 ( 10\%)\\
\item Log likelihood: -44.24334
\item Time taken to build model: 13.25 seconds
 \end{itemize}
\item Experiment
\begin{itemize}
\item Clusters: 10
\item Clustered Instances:\\
 0        997 ( 10\%)\\
 1       1095 ( 11\%)\\
 2        811 (  8\%)\\
 3       1157 ( 12\%)\\
 4       1024 ( 10\%)\\
 5       1354 ( 13\%)\\
 6       1519 ( 15\%)\\
 7        221 (  2\%)\\
 8       1316 ( 13\%)\\
 9        543 (  5\%)\\
\item Log likelihood: -19.76999
\item Time taken to build model: 3.2 seconds
 \end{itemize}
\item Experiment
\begin{itemize}
\item Clusters: 8
\item Clustered Instances:\\
0        876 (  9\%)\\
1       1746 ( 17\%)\\
2       2618 ( 26\%)\\
3        985 ( 10\%)\\
4       1275 ( 13\%)\\
5        814 (  8\%)\\
6        491 (  5\%)\\
7       1232 ( 12\%)\\
\item Log likelihood: -43.73527
\item Time taken to build model: 15.91 seconds
 \end{itemize}
\item Experiment
\begin{itemize}
\item Clusters: 6 
\item Clustered Instances:\\
0       2420 ( 24\%)\\
1       2483 ( 25\%)\\
2        907 (  9\%)\\
3       1264 ( 13\%)\\
4       1282 ( 13\%)\\
5       1681 ( 17\%)\\
\item Log likelihood: -19.97362
\item Time taken to build model: 2.57 seconds
 \end{itemize}

 \end{enumerate}

From these experiments we can see that, in general, the more clusters we use, the longer it takes to build the model; the automatic setting is by far the most expensive, since it determines the number of clusters by cross validation. The log-likelihood does not depend monotonically on the number of clusters: the best (highest) value of -19.67 was obtained with the automatically determined number of clusters, closely followed by the runs with 10 and 6 clusters, whereas with 5 and 8 clusters the log-likelihood is roughly twice as negative (about -44), indicating a considerably worse fit. 
As we can see, the distribution of instances over the clusters is quite even. The purity of the clusters, on the other hand, is not very high. 

Furthermore, some examples where the clusters can be interpreted fairly easily are provided:

In figure~\ref{fig:em-age-maritalstatus} we can see that age has an effect on the marital status\footnote{Marital status:\\
not say=0
divorced=1
living with another=2
married=3
separated=4
single=5
widowed=6}. 
\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/em-age-maritalstatus.png}
  	\caption{EM: Age and Marital Status}
\label{fig:em-age-maritalstatus}
\end{figure}
Most people between 16 and 32 are single, and as the age increases the frequency of singles decreases. The status married behaves the other way around. The values divorced and widowed likewise increase with age, which is only natural. 

The next figure~\ref{fig:em-age-occupation} shows that young people fall very purely into the occupation class education\footnote{
Occupation:\\
computer=0
management=1
professional=2
education=3}. 

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/em-age-occupation.png}
  	\caption{EM: Age and Occupation}
\label{fig:em-age-occupation}
\end{figure}

%\begin{figure}[H]
%  \includegraphics[width=0.47\textwidth]{images/em-education-occupation.png}
%  	\caption{Education and occupation}
%\end{figure}

The income\footnote{Income:\\
under \$10=1
\$10-19=2
\$20-29=3
\$30-39=4
\$40-49=5
\$50-74=6
\$75-99=7
Over \$100=8} also correlates with the education\footnote{Education:\\
grammar=0
high school=1
professional=2
some college=3
college=4
masters=5
doctoral=6
special=7} and the occupation, as seen in figures~\ref{fig:em-income-occupation} and~\ref{fig:em-income-education}.

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/em-income-occupation.png}
  	\caption{EM: Income and Occupation}
\label{fig:em-income-occupation}
\end{figure}

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/em-income-education.png}
  	\caption{EM: Income and Education}
\label{fig:em-income-education}
\end{figure}

The purity of the clusters in figure~\ref{fig:em-age-webordering} is very low, but we can still tell that the frequency of web ordering\footnote{Web Ordering:\\
yes=1
no=2
don't know=98} decreases as the age increases, i.e. young people order on the web considerably more often than older people. 


\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/em-age-webordering.png}
  	\caption{EM: Age and Web Ordering}
\label{fig:em-age-webordering}
\end{figure}

Comparing the age and the years on the internet\footnote{Years on Internet:\\
Under 6 mo=0
6-12 mo=1
1-3 yr=2
4-6 yr=3 
Over 7 yr=4}, the clusters are significantly purer than in the previous figure on web ordering. Younger people have already spent more time on the internet than older people, see figure~\ref{fig:em-age-yearsoninternet}. The occupation also has an influence on the years spent on the internet, as shown in figure~\ref{fig:em-occupation-yearsoninternet}.

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/em-age-yearsoninternet.png}
  	\caption{EM: Age and Years on Internet}
\label{fig:em-age-yearsoninternet}
\end{figure}

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/em-occupation-yearsoninternet.png}
  	\caption{EM: Occupation and Years On Internet}
\label{fig:em-occupation-yearsoninternet}
\end{figure}

In figure~\ref{fig:em-education-webpagecreation} it is obvious that the education has an impact on the ability to create web pages\footnote{Web Page Creation:\\
yes=1
no=2
don't know=98}. Especially the purity of the red cluster is very high, which means that this cluster corresponds well to the provided class. \\

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/em-education-webpagecreation.png}
  	\caption{EM: Education and Web Page Creation}
\label{fig:em-education-webpagecreation}
\end{figure}


\subsubsection{SimpleKMeans clusterer}

The following parameters can be set in WEKA for the SimpleKMeans clusterer:
\begin{itemize}
\item distanceFunction: either Euclidean or Manhattan distance
\item dontReplaceMissingValues: if set to true, missing values are not replaced with the mean/mode
\item maxIterations: maximum number of iterations
\item numClusters: number of clusters (>0)
\item seed: random number seed to be used
\end{itemize}
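The two choices for the distanceFunction parameter can be contrasted in a short sketch (plain Python for illustration, not WEKA code):

```python
import math

def euclidean(a, b):
    """Straight-line distance; large differences in one attribute weigh in quadratically."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """Sum of absolute per-attribute differences; less sensitive to a single outlying attribute."""
    return sum(abs(x - y) for x, y in zip(a, b))

p, q = (0, 0), (3, 4)
print(euclidean(p, q))  # 5.0
print(manhattan(p, q))  # 7
```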

The default setting is shown in figure~\ref{fig:km-default}. 
\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/km-default.png}
  	\caption{K-Means default parameters}
\label{fig:km-default}
\end{figure}


Experiments with different parameter settings are executed as follows:

\begin{enumerate}

\item Experiment

\begin{itemize}
\item Clusters: 2
\item Seed: 10
\item Clustered Instances:\\
0       5490 ( 55\%)\\
1       4547 ( 45\%)\\
\item Number of iterations: 7
\item Sum of squared errors: 167284.0
\item Time taken to build model: 1.12 seconds
\end{itemize}

\begin{itemize}
\item Clusters: 2
\item Seed: 100
\item Clustered Instances:\\
0       7459 ( 74\%)\\
1       2578 ( 26\%)\\
\item Number of iterations: 7
\item Sum of squared errors: 170580.0
\item Time taken to build model: 0.97 seconds
\end{itemize}

For 2 clusters, the better of the two runs was obtained with seed 10: with seed 100 the sum of squared errors within the clusters increases and the distribution of instances over the clusters becomes much more uneven. 

\begin{itemize}
\item Clusters: 5
\item Seed: 10
\item Clustered Instances:\\
0       2830 ( 28\%)\\
1       2396 ( 24\%)\\
2       1335 ( 13\%)\\
3       1467 ( 15\%)\\
4       2009 ( 20\%)\\
\item Number of iterations: 8
\item Sum of squared errors: 157104.0
\item Time taken to build model: 1.08 seconds
\end{itemize}

\begin{itemize}
\item Clusters: 5
\item Seed: 100
\item Clustered Instances:\\
0       2589 ( 26\%)\\
1       1270 ( 13\%)\\
2       2452 ( 24\%)\\
3       1800 ( 18\%)\\
4       1926 ( 19\%)\\
\item Number of iterations: 6
\item Sum of squared errors: 154937.0
\item Time taken to build model: 1.06 seconds
\end{itemize}

Using 5 clusters we achieved the better result by choosing 100 as the seed. The distribution of instances over the clusters is again fairly even. 

\begin{itemize}
\item Clusters: 10
\item Seed: 10
\item Clustered Instances:\\
 0       1802 ( 18\%)\\
 1       1430 ( 14\%)\\
 2        300 (  3\%)\\
 3        947 (  9\%)\\
 4        705 (  7\%)\\
 5        854 (  9\%)\\
 6       1070 ( 11\%)\\
 7       1170 ( 12\%)\\
 8       1019 ( 10\%)\\
 9        740 (  7\%)\\
\item Number of iterations: 10
\item Sum of squared errors: 148898.0
\item Time taken to build model: 1.79 seconds
\end{itemize}

\begin{itemize}
\item Clusters: 10
\item Seed: 40
\item Clustered Instances:\\
 0       1544 ( 15\%)\\
 1        816 (  8\%)\\
 2       1096 ( 11\%)\\
 3        498 (  5\%)\\
 4       1315 ( 13\%)\\
 5       1531 ( 15\%)\\
 6        587 (  6\%)\\
 7        796 (  8\%)\\
 8       1089 ( 11\%)\\
 9        765 (  8\%)\\
\item Number of iterations: 6
\item Sum of squared errors: 147110.0
\item Time taken to build model: 1.33 seconds
\end{itemize}

The above experiments show that the sum of squared errors decreases as the number of clusters increases. With 10 clusters the best result was obtained with seed 40, although the distribution of instances over the clusters is rather uneven. 
In general the clusters do not correspond very well to the provided classes, i.e. the purity of the clusters is quite low. 

\end{enumerate}
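Since the outcome depends on the seed, a common remedy is to restart K-Means with several seeds and keep the run with the lowest sum of squared errors. The sketch below illustrates this on made-up one-dimensional data (a toy illustration, not WEKA's SimpleKMeans):

```python
import random

def kmeans_sse(xs, k, seed):
    """One K-Means run on 1-D data; returns the within-cluster sum of squared errors."""
    rng = random.Random(seed)
    centroids = rng.sample(xs, k)  # seeds: k randomly chosen values
    for _ in range(100):
        # assign every value to its closest centroid
        labels = [min(range(k), key=lambda c: (x - centroids[c]) ** 2) for x in xs]
        # recompute each centroid as the mean of its members
        new = []
        for c in range(k):
            members = [x for x, l in zip(xs, labels) if l == c]
            new.append(sum(members) / len(members) if members else centroids[c])
        if new == centroids:
            break
        centroids = new
    return sum((x - centroids[l]) ** 2 for x, l in zip(xs, labels))

xs = [1.0, 1.1, 0.9, 8.0, 8.1, 7.9, 15.0, 15.1]
# restart with a range of seeds and keep the best (lowest-SSE) run
best_seed = min(range(30), key=lambda s: kmeans_sse(xs, 3, s))
```

A run whose seeds all land in the same group converges to a clearly higher SSE, so comparing several restarts filters out unlucky initializations.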

For the following examples we used 5 clusters and seed 100 so that the results can be compared with those from the EM algorithm. There is hardly any difference between the EM results shown before and the following K-Means results; at most we can say that the clusters determined with the EM algorithm are slightly more precise. See figures~\ref{fig:km-age-occupation}, \ref{fig:km-income-occupation}, \ref{fig:km-age-webordering}, \ref{fig:km-age-yearsoninternet}, \ref{fig:km-education-webpagecreation} and~\ref{fig:km-occupation-yearsoninternet} compared to the figures in section~\ref{sec:emclusterer}. 

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/km-age-occupation.png}
  	\caption{K-Means: Age and Occupation}
\label{fig:km-age-occupation}
\end{figure}


\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/km-income-occupation.png}
  	\caption{K-Means: Income and Occupation}
\label{fig:km-income-occupation}
\end{figure}

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/km-age-webordering.png}
  	\caption{K-Means: Age and Web Ordering}
\label{fig:km-age-webordering}
\end{figure}

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/km-age-yearsoninternet.png}
  	\caption{K-Means: Age and Years on Internet}
\label{fig:km-age-yearsoninternet}
\end{figure}

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/km-education-webpagecreation.png}
  	\caption{K-Means: Education and Web Page Creation}
\label{fig:km-education-webpagecreation}
\end{figure}

\begin{figure}[H]
  \includegraphics[width=0.47\textwidth]{images/km-occupation-yearsoninternet.png}
  	\caption{K-Means: Occupation and Years On Internet}
\label{fig:km-occupation-yearsoninternet}
\end{figure}


\subsection{Scaling} %punkt 3

One important aspect for the significance of a cluster analysis is scaling, in particular normalization. Running the algorithms on normalized and on non-normalized data yields different results: normalization affects not only the performance of the algorithms but also the quality of the results.
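For numeric attributes, min-max normalization is one common scaling method; a minimal sketch (illustrative only):

```python
def min_max_normalize(values):
    """Rescale numeric values to [0, 1] so that no attribute dominates distance computations."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant attribute: map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```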

In our case scaling was not necessary, as we only have nominal attributes. \\


%
%\subsection{Clustering Conclusions}
%
%After analyzing the results of the clustering algorithms and running them with different parameter settings in chapter ~\ref{sec:clustering}, the following conclusions were achieved:
%
%As the number of clusters \textit{k} increases the performance of K-Means and EM become better. The accuracy is not so good for both algorithms in general. It is supposed to get better with larger datasets \cite{abbas:clustering}. 
%From our experiments we can infer that the K-Means algorithm takes less time than the EM algorithm to build the model and uses not as much memory. However, running the clustering algorithms give almost the same results even when changing the number of clusters, but we can say that the clusters determined with the EM algorithm are more precise than those we got from the K-Means method. 

