\newpage
\section{The Machine Learning Tools}

We will review powerful mathematical machinery during the course, but as
a base case for comparison we would like to consider a particular type
of unsupervised learning: Self-Organizing Maps, an artificial neural
network invented by Kohonen and inspired by models of the retina
(\cite{cbook}, \cite{kohonen-som}, \cite{yin}). The SOM can be
regarded as a dimension reducer, in the sense that it arranges the input
vectors (from $R^n$, with $n$ potentially high) into vectors of $R^1$
or $R^2$ (typically the latter), which serve as vessels for the
neurons that form the output
layer of the network (connections between output neurons and inputs
are represented by weight vectors of $R^n$). Relationships that occurred
(due to proximity) in the input space are preserved in the output map,
where they can be understood more easily. \\

The learning algorithm for the SOM goes as follows: \\

\begin{figure}[H]
  \centering
  \includegraphics[width=11cm]{online-alg}
  \caption{Online (serial) learning algorithm for the SOM}
\end{figure}
\hfill

Intuitively, it consists of finding, for each training vector, the
neuron whose weight vector is closest to it; we then make that winning
weight vector even more similar to the input, and do the same for its
neighbors in the output map. The closer a neuron is to the winner, the
more it is affected. At the beginning the network is more elastic and
large changes are allowed, but the degree of change is reduced
(exponentially) over time to allow for convergence. \\
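As a concrete sketch of the serial update just described (in Python with
NumPy; the parameter names, the Gaussian neighborhood, and the exponential
decay constants are our own illustrative choices, not taken from the cited
references):

```python
import numpy as np

def train_som_online(data, grid_shape=(10, 10), epochs=20,
                     lr0=0.5, sigma0=3.0, seed=0):
    """Online (serial) SOM training: one weight update per input vector."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    # One weight vector in R^n per output neuron.
    weights = rng.random((rows * cols, data.shape[1]))
    # 2-D grid coordinates of each neuron, used for neighborhood distances.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data:
            # Exponential decay of learning rate and neighborhood radius.
            frac = t / n_steps
            lr = lr0 * np.exp(-3.0 * frac)
            sigma = sigma0 * np.exp(-3.0 * frac)
            # Winner: the neuron whose weight vector is closest to the input.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighborhood: neurons closer to the winner on the
            # output map are affected more.
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))
            # Pull the winner and its neighbors toward the input.
            weights += lr * h[:, None] * (x - weights)
            t += 1
    return weights.reshape(rows, cols, -1)
```

Note the recursive dependency: each update reads the weights written by the
previous one, which is exactly what makes this version hard to parallelize.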

This serial version of the algorithm is also called ``on-line''
learning, as there is an immediate recursive dependency between
iterations. That dependency makes this version unsuitable for high
parallelism and distributed computing, so we researched an option that
better fits our high-performance intentions: the ``off-line''
(a.k.a. batch) version of the algorithm (\cite{batch-som1},
\cite{batch-som2}): \\

\begin{figure}[H]
  \centering
  \includegraphics[width=13cm]{offline-alg}
  \caption{Off-line learning algorithm for the SOM}
\end{figure}
\hfill

In this variant of the algorithm, the whole training set needs to be
processed (in parallel) on each epoch\footnote{The number of times the
same training set is fed to the neural network, aiming at eventual
convergence.}, at the end of
which we compute the new state of the SOM just once: for each
neuron, we adjust the weights
considering not only the closest input vectors, but also the
neighborhood (whose radius decreases exponentially with time). This
could be interpreted graphically as computing the
Voronoi sets (centered on the neuron weights); and, as suggested, it
exhibits a high degree of parallelism (input vectors can be associated
with their closest neurons independently).\\
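The epoch-level update above can be sketched as follows (again in Python
with NumPy; the decay schedule and parameter names are illustrative
assumptions, not taken from \cite{batch-som1} or \cite{batch-som2}):

```python
import numpy as np

def train_som_batch(data, grid_shape=(10, 10), epochs=20,
                    sigma0=3.0, seed=0):
    """Batch (off-line) SOM training: the whole training set is processed
    each epoch, and the weights are recomputed once per epoch."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    weights = rng.random((rows * cols, data.shape[1]))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

    for epoch in range(epochs):
        sigma = sigma0 * np.exp(-3.0 * epoch / epochs)
        # Voronoi step: assign every input to its closest neuron.
        # This is embarrassingly parallel across input vectors.
        dists = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
        bmu = np.argmin(dists, axis=1)
        # Neighborhood weights between all pairs of neurons on the grid.
        d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=2)
        h = np.exp(-d2 / (2.0 * sigma ** 2))
        # Single update per epoch: each neuron becomes a weighted mean of
        # the inputs, each input contributing through the neighborhood of
        # its winning neuron.
        num = h[:, bmu] @ data                       # shape: (neurons, n_features)
        den = h[:, bmu].sum(axis=1)[:, None]
        # Keep the old weights for neurons with no nearby winners.
        weights = np.where(den > 0, num / np.maximum(den, 1e-12), weights)
    return weights.reshape(rows, cols, -1)
```

Only the final weighted-mean step needs the aggregated sums, so in a
distributed setting the Voronoi assignments (and partial sums) can be
computed independently per data shard and reduced once per epoch.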

Although the off-line learning flavor seems more suitable for big-data
applications, we still need to assess whether it yields good-quality
results for the two problems chosen. Another consideration is that we
intend to complement, or even compare, the SOM results with those
obtained from the different techniques reviewed during the course. \\

