\section{The Software Tools}

A few years ago, the dominant paradigm in distributed frameworks on
commodity clusters was Hadoop \cite{hadoop}. The Map-Reduce model it
offers can be understood as a composition of the higher-order
functions ``map'' and ``fold'', present in functional languages such as
Haskell. The interesting aspect of such a computation is that both the
mapping and the reduction can be performed automatically in a
distributed fashion (given a proper encoding of the data in a file
that lives on a distributed file system). \\

We initially considered Hadoop for building a distributed version of the
serial on-line algorithm reviewed before. Each iteration would become a
chain of two map-reduce cycles: \\

\begin{itemize}
\item Map1: Calculate the distance between each neuron's weights and the input.
\item Reduce1: Compute the neuron with minimum distance (the winner).
\item Map2: Update the neuron weights using the winning neuron.
\item Reduce2: Identity function (nothing else to be done).
\end{itemize}
\hfill
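The chain above can be sketched in plain Python, substituting the
built-in \texttt{map} and \texttt{functools.reduce} for the Hadoop
Streaming machinery; the neuron layout, learning rate and neighbourhood
function (restricted here to the winner alone) are illustrative
placeholders, not our actual implementation:

```python
# Illustrative sketch of one on-line SOM iteration as two map-reduce
# cycles, using plain map/reduce in place of Hadoop Streaming.
from functools import reduce

def iteration(weights, x, lr=0.5):
    # Map1: squared distance between each neuron's weights and the input.
    dists = map(lambda kv: (kv[0], sum((w - xi) ** 2
                for w, xi in zip(kv[1], x))), weights.items())
    # Reduce1: neuron with minimum distance (the winner).
    winner = reduce(lambda a, b: a if a[1] <= b[1] else b, dists)[0]

    # Map2: move each neuron's weights towards the input, scaled by a
    # neighbourhood factor around the winner (here: the winner only).
    def update(kv):
        k, w = kv
        h = 1.0 if k == winner else 0.0
        return (k, [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)])
    # Reduce2: identity -- just collect the updated weights.
    return dict(map(update, weights.items()))

weights = {0: [0.0, 0.0], 1: [1.0, 1.0]}
weights = iteration(weights, [0.9, 0.8])
```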

Given that we wanted to use Hadoop with Python (due to its high-level
and RAD capabilities, as well as the existing frameworks), we would
need to code the above chain as one shell script iterating over the
trainset (calling the Hadoop Streaming driver twice on each cycle). The
batch paradigm imposed by Hadoop makes each iteration rather
expensive: consecutive map-reduce cycles cannot talk directly to each
other but need a file in between, and the same occurs across
iterations, which must communicate the previous state through a file. We
considered the approach unsuitable for high-performance computing
(even with an optimized distributed file system, such as HDFS). \\

Further research showed that the current trend in
distributed computing points towards the Spark framework \cite{spark},
which not only can work with Hadoop HDFS files but also offers a more
flexible paradigm that better fits iterative algorithms: it is based on
distributed collections that can be operated on in parallel (called
Resilient Distributed Datasets, or RDDs). Furthermore, it offers
shared-variable mechanisms (broadcast variables and accumulators) that,
combined with the off-line version of the learning algorithm, promise
a more efficient implementation. \\

With RDDs, the trainset can be broadcast (read-only) to all workers,
and each one can process an independent piece on its own, accumulating
the values needed to update each neuron. At the end of each epoch, we
need to recompute the whole SOM state entirely (not update it), which
can be accomplished with the standard map-reduce operators that Spark
also offers. No files are directly involved, except for storing the
trainset.\footnote{For the moment, the SOM state is assumed
to fit into memory, so the ``big-data'' aspect of our project lies
on the trainset side.} \\
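The epoch just described can be sketched without Spark itself: the
minimal pure-Python simulation below mirrors the pattern (per-partition
workers accumulate per-neuron sums, a reduce merges the accumulators,
and the whole state is recomputed from scratch). The neighbourhood
function is omitted for brevity, so each neuron simply becomes the mean
of the inputs it won; all names and data are illustrative.

```python
# Sketch of one off-line (batch) SOM epoch in the Spark style:
# accumulate per-neuron sums per partition, merge, recompute state.
from functools import reduce

def bmu(weights, x):
    # Best-matching unit: neuron index with minimum squared distance.
    return min(weights, key=lambda k: sum((w - xi) ** 2
               for w, xi in zip(weights[k], x)))

def partial_sums(weights, partition):
    # "Map": each worker accumulates (sum of inputs, count) per winner.
    acc = {}
    for x in partition:
        k = bmu(weights, x)
        s, n = acc.get(k, ([0.0] * len(x), 0))
        acc[k] = ([si + xi for si, xi in zip(s, x)], n + 1)
    return acc

def merge(a, b):
    # "Reduce": combine the accumulators of two workers.
    out = dict(a)
    for k, (s, n) in b.items():
        s0, n0 = out.get(k, ([0.0] * len(s), 0))
        out[k] = ([u + v for u, v in zip(s0, s)], n0 + n)
    return out

def epoch(weights, partitions):
    acc = reduce(merge, (partial_sums(weights, p) for p in partitions))
    # Recompute the whole state from the accumulated values.
    new = {}
    for k, w in weights.items():
        if k in acc:
            s, n = acc[k]
            new[k] = [si / n for si in s]
        else:
            new[k] = w  # neuron won no inputs: keep its old weights
    return new

weights = epoch({0: [0.0, 0.0], 1: [1.0, 1.0]},
                [[[0.1, 0.0]], [[0.9, 1.0], [1.1, 1.0]]])
```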

Furthermore, we found that Spark is just another layer, on top of which
the Apache Foundation provides a higher-level framework for machine
learning (MLlib \cite{mllib}). We considered, though, that for an introductory
course it would not be didactic to simply use ready-made algorithms, but
rather to implement them ourselves. Hence the usage of Spark is
justified, but not that of MLlib. \\

Another important tool we will use, which actually contributed to
picking Python, is the Natural Language Toolkit (NLTK) \cite{nltk}, useful
for preprocessing the data sets. It includes sentence and word
tokenization, POS-tagging (used as a pre-classification of words),
trigram computation, etc. \\
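As a small illustration of the preprocessing primitives named above,
the sketch below runs word tokenization and trigram computation on a
made-up sentence; \texttt{nltk.pos\_tag} and \texttt{nltk.sent\_tokenize}
follow the same pattern but additionally require downloading their
models first.

```python
# Word tokenization and trigram computation with NLTK.
# (TreebankWordTokenizer works without any model downloads.)
from nltk.tokenize import TreebankWordTokenizer
from nltk.util import trigrams

tokens = TreebankWordTokenizer().tokenize("The map outputs a distance.")
tris = list(trigrams(tokens))  # sliding windows of three tokens
```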

Finally, it is worth mentioning the usage of NumPy \cite{numpy} for
speeding up vector computations; this is especially important due to the
high dimensionality reached in the two selected problems (the encoding
implies weight vectors with thousands of entries, as they cannot be
sparse like the input vectors). \\
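The point can be sketched as follows: storing the whole SOM as a single
NumPy array turns the per-neuron distance loop into one vectorized
expression (the sizes below are illustrative, not those of our
problems).

```python
# Vectorized distance computation over all neurons at once.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((100, 5000))  # 100 neurons, 5000-dim weight vectors
x = rng.random(5000)               # one (dense) encoded input vector

# Broadcasting subtracts x from every row; one norm call per epoch step
# replaces a Python loop over thousands of entries per neuron.
dists = np.linalg.norm(weights - x, axis=1)
winner = int(np.argmin(dists))     # best-matching unit
```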

