\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
 & classified as class 1 & classified as class 2\\
\hline
true class 1 & 250 & 0\\
true class 2 & 4 & 246\\
\hline
\end{tabular}
\caption{Confusion matrix of the MoG classifier, using three Gaussians.}
\label{tab:conf2}
\end{table}


The EM algorithm was implemented following the pseudocode in the
book\footnote{Pattern Recognition and Machine Learning, by Christopher M. Bishop}.
Run \texttt{ex2(C)}, where \texttt{C} is the desired number of mixture components, to see the code
in action. With $C > 2$, the classifier almost always classifies 99.20\% of the
samples correctly. With $C = 20$, this even rises to 99.60\%. Our experience is, however, that
lower complexity is generally better when building models, so $C > 10$ is not
recommended.
The resulting confusion matrix is shown in Table~\ref{tab:conf2}, and can be
reproduced by running \texttt{ex2(3)}.

The error rate of the classifier is 0.008, i.e.\ 4 misclassified samples out of 500.
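Since the report only points to \texttt{ex2} for the implementation, the EM loop described above can be sketched as follows. This is a minimal illustration in the spirit of Bishop's pseudocode, not the actual \texttt{ex2} code; the function name \texttt{em\_mog}, the regularization term, and the iteration count are all assumptions.

```python
import numpy as np

def em_mog(X, C, n_iter=100):
    """EM for a mixture of C Gaussians (illustrative sketch, not the ex2 code)."""
    n, d = X.shape
    # Initialize the means evenly along the diagonal of the data range.
    lo, hi = X.min(), X.max()
    mu = np.tile(np.linspace(lo, hi, C)[:, None], (1, d))
    # Start every component from the overall data covariance (regularized).
    sigma = np.stack([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(C)])
    pi = np.full(C, 1.0 / C)
    for _ in range(n_iter):
        # E-step: responsibilities gamma[i, c] proportional to pi_c * N(x_i | mu_c, Sigma_c)
        gamma = np.empty((n, C))
        for c in range(C):
            diff = X - mu[c]
            inv = np.linalg.inv(sigma[c])
            norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma[c]))
            quad = np.einsum('ij,jk,ik->i', diff, inv, diff)  # Mahalanobis terms
            gamma[:, c] = pi[c] * np.exp(-0.5 * quad) / norm
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: re-estimate pi, mu and Sigma from the responsibilities.
        Nc = gamma.sum(axis=0)
        pi = Nc / n
        mu = (gamma.T @ X) / Nc[:, None]
        for c in range(C):
            diff = X - mu[c]
            sigma[c] = (gamma[:, c, None] * diff).T @ diff / Nc[c] + 1e-6 * np.eye(d)
    return pi, mu, sigma
```

The \texttt{einsum} call computes the quadratic form $(x - \mu)^\top \Sigma^{-1} (x - \mu)$ for all points at once; the small multiple of the identity added to each $\Sigma$ guards against the singular-covariance problem mentioned in the discussion below.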

Here follows the discussion asked for in the assignment:
\begin{enumerate}
    \item The initialized values should not be too far from the actual
    values of the data. That is why, in this approach, $\mu$ is initialized
    over the range from the minimum to the maximum of the data. This is done in
    such a way that all $\mu$'s lie on a diagonal line, evenly spaced over the
    coordinates from $(\min(x, y), \min(x, y))$ to $(\max(x, y), \max(x, y))$, where
    $\min(x, y)$ denotes the minimum over all of the $x$ and $y$ coordinates. This
    results in two positive outcomes:
    \begin{itemize}
        \item no $\mu$ is ever initialized equal to another $\mu$ in this MoG;
        \item every $\mu$ is always close to the data points, meaning that no
        unreasonable values for $\Sigma$ can be calculated in the M-step.
    \end{itemize}
    \item As stated above, no real difference can be noticed for $C > 2$.
    Since the data lies on a sort of banana-shaped curve, it can be said that intuitively
    $C = 1$ is not ``going to cut it''. It typically results in a Gaussian with
    $\mu$ at the mean of all the data, which would not be able to perform
    better than the classifier created in exercise 1.
    \item The initialization does not make a great difference for the
    results, since it is not directly used when computing the new values for $\mu$,
    $\pi$ and $\Sigma$ in the M-step. In our case, initialization with very
    high values, such as $\mu = \left[ 100 \; 100 \right]$, results in $\Sigma$
    becoming NaN (Not a Number) because of the calculations made in the M-step.
    Initializing some $\mu_i$ equal to another $\mu_j$, with the same $\Sigma$ and $\pi$,
    is also considered bad, because those components will not become different after converging.
\end{enumerate}
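The diagonal initialization of the means described in the first item can be sketched as follows. The function name \texttt{init\_means\_diagonal} is hypothetical and not part of \texttt{ex2}; the sketch assumes 2-D data, as in the assignment.

```python
import numpy as np

def init_means_diagonal(X, C):
    """Hypothetical sketch of the diagonal mean initialization (not the ex2 code)."""
    # min/max taken over ALL coordinates, both x and y, as described above
    lo, hi = X.min(), X.max()
    steps = np.linspace(lo, hi, C)          # C evenly spaced positions
    return np.column_stack([steps, steps])  # each mean lies on the diagonal

# Example: three means for data whose coordinates span 0..10
X = np.array([[0.0, 2.0], [5.0, 10.0], [3.0, 4.0]])
print(init_means_diagonal(X, 3))  # means at (0,0), (5,5) and (10,10)
```

Because \texttt{np.linspace} produces strictly increasing values whenever the data range is non-degenerate, no two initial means coincide, which is the first of the two positive outcomes listed above.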

