\subsection{Natural language processing}

\begin{comment}
\subsubsection{Summarise theory and results from the literature needed
to understand your problem and your solution. If you developed theory
of your own, include it here.}

\subsubsection{Explain your method using pseudo-code.  (Actual code
segments you consider significant can be shown in an Appendix).}

\subsubsection{Problems you encountered, and your solutions (one
subsection per problem).}

\subsubsection{How did you measure progress of the project? Give
your results.}

\subsubsection{What benchmarks did you use to evaluate the correctness
and performance of your programs?  If you used none, why not, and how
did you then evaluate your programs?}
\end{comment}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsubsection{Theory and results}
The NLP part of this project consists of extracting (potentially)
interesting entities from the text accompanying the pictures.

\subsubsection{Method}
To recognize entities within the text, we decided to
\emph{part-of-speech} (POS) tag the sentences and focus only on nouns. This was accomplished using the NLTK toolkit \cite{nltk}.

\subsubsection{Encountered problems}
At first we tried to use the \textit{off-the-shelf} POS tagger from the
NLTK framework, but testing showed that its accuracy on our data was
unreliable: it frequently assigned the wrong tag to words.

Our solution was to write our own POS taggers, based on other methods
included in NLTK, and train them on a corpus of our choice. The
accuracy of a POS tagger relies heavily on its training data set; the
training data should be as similar as possible to the actual working
data.

The taggers we wrote combine some of the following
techniques:

\begin{description}
\item [Unigram tagging] A unigram tagger finds the most likely tag for
  each word in the training corpus and uses that information to assign
  tags to new tokens.
\item [Bigram tagging] A bigram tagger works similarly to the unigram
  tagger, except it finds the most likely tag for each word given the
  preceding tag.
\item [Trigram tagger] This tagger works like the bigram tagger,
  except it uses the two preceding tags. In general, an $N$-gram
  tagger assigns tags to new tokens depending on the $N-1$ preceding
  tags.
\item [Affix tagging] The affix tagger is similar to the unigram
  tagger. It takes some fixed-size substring of a word and finds the
  most likely tag for that substring.
\item [Regexp tagging] This tagger assigns tags to tokens based on
  regular expressions.
\item [Brill tagging] The Brill tagger starts by running an initial
  tagger, and then improves the result by applying a list of
  transformation rules. These rules are automatically learned from the
  training corpus.
\end{description}

\subsubsection{Benchmarking}
The taggers were trained on 6000 sentences from different topics
covered by the \textit{Brown Corpus}. To test the accuracy, we ran the
POS taggers on 6000 new, untagged sentences from the same topics as
the training corpus \cite{streamhacker}.

\begin{table}[ht!]
  \begin{center}
  \begin{tabular}{| l | l |}
    \hline
    \textbf{Tagger} & \textbf{Accuracy} \\
    \hline
    UBT Tagger       & 82.30\% \\
    \hline
    AUBT Tagger      & 87.58\% \\
    \hline
    RAUBT Tagger     & 87.73\% \\
    \hline
    BRAUBT Tagger    & 88.69\% \\
    \hline
    POS Tagger       & 58.76\% \\
    \hline
  \end{tabular}
  \caption{Accuracy of the part-of-speech taggers
    \label{tab:postaggers}}
  \end{center}
\end{table}

The abbreviations tell which tagger is the \emph{main tagger} and
which taggers are used as \emph{backoff taggers}. For example,
UBT means that a trigram tagger is run first; if it fails, it falls
back to the bigram tagger, which in turn falls back to the unigram
tagger. The last tagger in Table~\ref{tab:postaggers} uses
the same training method as NLTK's \emph{off-the-shelf} tagger.

