\documentclass{article}
%\documentclass[journal]{IEEEtran}
%\documentclass{report}
%\documentclass{acta}

\usepackage{graphicx}

\begin{document}

\title{EFME LU Exercise 3\\Perceptron Report}

\author{Tuscher Michaela \and Geyer Lukas \and Winkler Gernot}

\maketitle

\begin{figure}
    \centering
    \includegraphics[width=4.0in]{Bilder/BooleanPerceptron}
    \caption{Perceptron applied to boolean operations}
    \label{fig:boolean}
\end{figure}

\begin{figure}
    \centering
    \includegraphics[width=4.0in]{Bilder/FilePerceptron}
    \caption{Perceptron applied to the given datasets}
    \label{fig:file}
\end{figure}

\begin{abstract}
For this exercise we implemented a function \texttt{perco()} that learns the weights of a perceptron.
\end{abstract}

\section{Applied to boolean operations}
Since a single perceptron applied to a 2-dimensional dataset is only a linear classifier, the weight-learning algorithm converges only if the dataset can be separated by a line. For this reason the \texttt{perco()} function accepts a \texttt{maxEpoches} parameter, which limits the number of iterations to a finite value.
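A minimal Python sketch of such a batch perceptron learner (our actual \texttt{perco()} may differ in details; here we assume targets coded as $\pm 1$ and a bias input $x_0 = 1$ prepended to each vector):

```python
def perco(X, t, max_epochs=100, eta=1.0):
    """Batch perceptron learning (a sketch; the real perco() may differ).

    X: list of input vectors, t: targets in {-1, +1}.
    Returns (weights including bias, epochs used, converged flag).
    """
    Xb = [[1.0] + list(x) for x in X]          # prepend bias input x0 = 1
    w = [0.0] * len(Xb[0])                     # zero initial weights
    for epoch in range(1, max_epochs + 1):
        # a point is misclassified when t * (w . x) is not strictly positive
        mis = [i for i, x in enumerate(Xb)
               if t[i] * sum(wj * xj for wj, xj in zip(w, x)) <= 0]
        if not mis:
            return w, epoch, True              # converged: all points correct
        for i in mis:                          # batch update over misclassified
            w = [wj + eta * t[i] * xj for wj, xj in zip(w, Xb[i])]
    return w, max_epochs, False                # maxEpoches reached, gave up
```

On a separable problem such as OR the loop terminates early with a valid separating line; on XOR it runs until \texttt{max\_epochs} is exhausted.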

Our method solves the OR problem within 6 iterations and the AND problem within 9 iterations, but XOR cannot be solved, since no line separates the two classes (see Figure~\ref{fig:boolean}). In fact, since we use zeros as initial weights (as suggested in the lecture slides), every input vector is misclassified in each iteration, so the batch update sums all vectors, which again yields the all-zero weight vector. The algorithm never converges, and the separation line is invalid, since $0 \cdot x + 0 \cdot y = 0$ is not a line equation.
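Concretely, assuming the common $t_i \in \{-1, +1\}$ target coding and a bias input $x_0 = 1$, the batch update from zero weights over the four XOR points cancels term by term:
\begin{align*}
\Delta \mathbf{w} = \sum_i t_i \mathbf{x}_i
= -(1,0,0) + (1,0,1) + (1,1,0) - (1,1,1) = (0,0,0),
\end{align*}
so the weights remain zero after every epoch.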

\section{Applied to given files}
The given dataset with the targets stored in \texttt{perceptrontarget1.dat} is separable by a line; the solution is found after 9 epochs. With \texttt{perceptrontarget2.dat} the algorithm does not converge, but the resulting line is a good approximation (see Figure~\ref{fig:file}).

\section{Conclusion}
The perceptron is not a very powerful classifier, since it is only applicable to linearly separable problems; for other cases the model is too simple (underfitting). However, since a single perceptron is quite simple, it can easily be combined with further perceptrons to improve performance (multi-layer perceptron, neural network).

\end{document}
