We ran the Expectation-Maximization algorithm a number of times. 
The resulting error rates for different numbers of Gaussian mixture components,
evaluated on a held-out test set comprising 25\% of the data,
are as follows:

\begin{tabular}{c c} 
Number of Gaussian Mixtures & Error Rate \\
\hline
1 & 0.106 \\
2 & 0.010 \\
3 & 0.006   \\
4 & 0.004   \\
5 & 0.004   \\
6 & 0.004   \\
7 & 0.014   \\
8 & 0.010   \\
9 & 0.016   \\
10 & 0.016  \\
20 & 0.016  \\
30 & 0.006  \\
100 & 0.006
\end{tabular}

Figure \ref{fig:errorPlot} shows these results in a plot. 

\begin{figure}[b]
    \centering
    \includegraphics[width=\textwidth]{resources/error}
    \caption{Error rate as a function of the number of mixture components.}
\label{fig:errorPlot}
\end{figure}

Notice how the error rate when using only one Gaussian component, 
as was done in exercise \ref{sec:one}, is significantly larger than the 
error rate for any larger number of components. This is, of course, to 
be expected, since even two or three Gaussian components describe the data 
far better (see figure \ref{fig:c3}). 

\begin{figure}[b]
    \centering
    \includegraphics[width=\textwidth]{resources/C3}
    \caption{Contour plots of three converged Gaussian components for data set A.}
\label{fig:c3}
\end{figure}

\begin{verbatim}
Function E-step:
for k = 1:K
    Q(:, k) = pi_k * N(X, mu_k, sigma_k)
end
% log-likelihood of the current parameters
LL = sum(log(sum(Q, 2)))
% normalise so that each row of Q sums to one
Q = Q ./ sum(Q, 2)
\end{verbatim}
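The E-step above can be sketched in NumPy (Python rather than the MATLAB-style pseudocode; the function names, argument layout, and the explicit density helper are illustrative, not the original implementation):

```python
import numpy as np

def gaussian_pdf(X, mu, sigma):
    # Multivariate normal density evaluated at every row of X.
    d = X.shape[1]
    diff = X - mu
    inv = np.linalg.inv(sigma)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma))
    quad = np.einsum('ni,ij,nj->n', diff, inv, diff)
    return np.exp(-0.5 * quad) / norm

def e_step(X, weights, means, covs):
    # Responsibilities Q (N x K) and the log-likelihood LL.
    N, K = X.shape[0], len(weights)
    Q = np.empty((N, K))
    for k in range(K):
        Q[:, k] = weights[k] * gaussian_pdf(X, means[k], covs[k])
    LL = np.sum(np.log(Q.sum(axis=1)))  # computed before normalising
    Q /= Q.sum(axis=1, keepdims=True)   # each row now sums to one
    return Q, LL
```

Note that the log-likelihood is accumulated from the unnormalised weighted densities, exactly as in the pseudocode, before the rows of Q are normalised into responsibilities.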

\begin{verbatim}
Function M-step:
for k = 1:K
    % effective number of points assigned to component k
    Nk = sum(Q(:, k))
    % calculate new MU
    mu_k = (1/Nk) * sum(Q(:, k) .* X)
    % calculate new SIGMA
    sigma = zeros(2)
    for n = 1:size(X, 1)
        sigma = sigma + Q(n, k) * (X(n, :) - mu_k)' * (X(n, :) - mu_k)
    end
    sigma = sigma / Nk;
    % keep the previous covariance if the update is near-singular
    if cond(sigma) < 10^10
        sigma_k = sigma;
    end
    % calculate new PI
    pi_k = Nk / size(X, 1);
end
\end{verbatim}
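A corresponding NumPy sketch of the M-step (again, names and the exact interface are assumptions; the previous covariances are passed in so that a near-singular update can be rejected, mirroring the condition-number check in the pseudocode):

```python
import numpy as np

def m_step(X, Q, prev_covs, cond_threshold=1e10):
    # Re-estimate mixing weights, means, and covariances from
    # the responsibility matrix Q (N x K).
    N, d = X.shape
    K = Q.shape[1]
    weights = np.empty(K)
    means = np.empty((K, d))
    covs = np.array(prev_covs, dtype=float)  # fall back to old covariances
    for k in range(K):
        Nk = Q[:, k].sum()                   # effective count for component k
        means[k] = Q[:, k] @ X / Nk          # responsibility-weighted mean
        diff = X - means[k]
        cov = (Q[:, k, None] * diff).T @ diff / Nk
        # keep the previous covariance if the update is near-singular
        if np.linalg.cond(cov) < cond_threshold:
            covs[k] = cov
        weights[k] = Nk / N
    return weights, means, covs
```

The weighted outer-product sum over all points is expressed here as a single matrix product instead of the explicit loop over n, which computes the same covariance update.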
