\newpage
\section{The (Data) Problems}
\IEEEPARstart{P}{revious} academic exposure to the field of
Natural Language Processing led us to choose the subtopic of automatic
classification, a very active area in Machine Learning. There are
two particular problems we would like to attack first: building semantic maps
of words (matched to their syntactic categories),
and automatic classification of web texts \cite{kohonen-websom}.

For the first case, syntactic classification of words, we want to use
a variant of the encoding used for Kohonen's Semantic Maps
\cite{kohonen-semap}, where the
author mimics the way humans learn to associate meaning with symbols:
the symbol is presented repeatedly with variants of a set of
attributes as companion context, until the human brain can eventually
classify the symbol alone, without the help of context attributes. Kohonen
takes a formally written source of natural language, e.g.\ books, and extracts
the trigrams (sequences of three words); the left and right context words become the
``attributes'' of the word in the middle (which becomes the
symbol). Orthogonal vectors are a good choice for representing unrelated
symbols, and such symbols are used to encode
not only the middle word of each trigram, but also its surrounding
contexts. The variants we want to introduce are the following:

\begin{itemize}
\item Optimize storage of the training-set encoding by representing both
the word symbol and its contexts (symbols too) as sparse vectors. This
representation fits the data naturally, as orthogonal vectors are mostly zeroes.
\item Avoid averaged encodings of words, in order to mimic more
closely the way humans learn (there is no pre-processing of samples,
only repeated exposure to raw cases).
\item Consider sentence boundaries when computing the document trigrams.
\end{itemize}
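As a minimal sketch of these variants (in Python, with hypothetical function names that are not part of any committed design), each vocabulary word can be given an orthogonal one-hot dimension, stored sparsely as a single integer index, and trigrams can be extracted per sentence so that no trigram crosses a sentence boundary:

```python
# Sketch only: illustrative names, not a final implementation.
# Each distinct word gets its own orthogonal (one-hot) dimension;
# storing just the index is the sparse representation, since the
# dense vector would be all zeroes except one entry.

def build_vocab(sentences):
    """Assign each distinct word an orthogonal (one-hot) dimension."""
    vocab = {}
    for sentence in sentences:
        for word in sentence:
            vocab.setdefault(word, len(vocab))
    return vocab

def trigrams(sentences, vocab):
    """Extract trigrams without crossing sentence boundaries."""
    for sentence in sentences:
        for i in range(1, len(sentence) - 1):
            left, mid, right = sentence[i - 1], sentence[i], sentence[i + 1]
            # Only the three non-zero indices are kept, instead of
            # three dense, mostly-zero vectors.
            yield (vocab[left], vocab[mid], vocab[right])

sentences = [["the", "cat", "sleeps"], ["the", "dog", "barks", "loudly"]]
vocab = build_vocab(sentences)
samples = list(trigrams(sentences, vocab))
```

Note that the raw trigram samples are yielded as-is, one per occurrence, with no averaging step: repeated exposure to raw cases is exactly the training regime described above.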

The second problem is more ambitious and has more variants, as it is
based on the WebSOM project \cite{kohonen-websom}. In that project,
Kohonen et al.\ use word histograms to encode documents
and discover an automatic classification of them. The possible
variants we are planning include:

\begin{itemize}
\item Avoid attempts to reduce the dimension of the input space;
rather, opt for sparse representations of vectors whenever possible
(similar to the approach for word classification).
\item Consider not only the histogram of words, but also their
relative positions. For example, we can build a matrix that represents
a particular relationship for each pair of words in the vocabulary
under consideration. Such a relationship could be based on relative distance
within the document (among other metrics). Finally, the matrix could be
unrolled into a big (sparse) vector to represent a single document.
\end{itemize}
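The word-pair variant can be sketched as follows (a toy illustration under assumed details: the relationship is taken to be the minimum relative distance between occurrences, and the vocabulary and helper name are hypothetical). The $V \times V$ matrix is never materialized densely; only its non-zero entries are stored, keyed by the unrolled index:

```python
# Sketch only: one assumed choice of pairwise relationship
# (minimum relative distance between occurrences within a document).

def pair_distance_vector(tokens, vocab):
    """Sparse dict mapping unrolled index i*V + j -> min distance."""
    V = len(vocab)
    # Collect the positions where each word occurs in the document.
    positions = {}
    for pos, word in enumerate(tokens):
        positions.setdefault(vocab[word], []).append(pos)
    # Fill only the non-zero entries of the unrolled V x V matrix.
    sparse = {}
    for i, pi in positions.items():
        for j, pj in positions.items():
            if i == j:
                continue
            d = min(abs(a - b) for a in pi for b in pj)
            sparse[i * V + j] = d
    return sparse

vocab = {"neural": 0, "network": 1, "training": 2}
doc = ["neural", "network", "training"]
vec = pair_distance_vector(doc, vocab)
```

Since most word pairs never co-occur in a single document, the unrolled vector stays extremely sparse even for large vocabularies, which is what makes this encoding tractable without dimensionality reduction.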

For both problems, perhaps the main innovation that we are suggesting
is the usage of modern distributed computing frameworks, which can run
on commodity clusters (as opposed to the expensive specialized
supercomputers that researchers like Kohonen needed to use in the
80's and 90's). More details about this appear in the next sections.
