\section{Survey}
% example with a cited article -- include images; nothing with
% children's books as sources?

\subsection{Automatic Image Annotation}
The task at hand might fall into the category of what Wikipedia refers
to as \emph{automatic image annotation}~\cite{wiki}: automatically
labeling images or parts of images.

Early published material in the field includes the 1995 article
\emph{Vision texture for annotation}~\cite{picard} by Picard et al.,
in which they describe a system that, given an image region with a
labeled texture (e.g.\ water), tries to identify similar textures in
other images.

Closely related is the work presented by Duygulu et al.\ at the 7th
European Conference on Computer Vision in 2002, titled \emph{Object
  Recognition as Machine Translation: Learning a Lexicon for a Fixed
  Image Vocabulary}~\cite{duygulu}. They train their system on a large
number of images, each annotated with 4--5 keywords.

\begin{figure}[h!]
    \begin{center}
    \includegraphics[width=0.9\textwidth]{duygulu.png}
    \caption{\label{duy} Some examples of successful labelling of
      elements in images by Duygulu et al.~\cite{duygulu}.}
    \end{center}
\end{figure}

To measure the performance of their system, Duygulu et al.\ trained it
on 4500 images with 371 keywords from the \emph{Corel} data set. They
then measured the number of correct labels in 100 images, achieving
correctness typically in the range 30\% to 80\%. Here, a correctness
of 70\% for the keyword \emph{ocean} means that when an image part was
labeled \emph{ocean}, the label was correct 70\% of the time.
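The per-keyword measure described above can be sketched as a small
function. The label names and counts below are made-up illustrations,
not data from Duygulu et al.:

```python
def keyword_precision(predictions):
    """predictions: list of (predicted_keyword, is_correct) pairs.

    Returns, for each keyword, the fraction of its uses that were
    correct -- the per-keyword correctness described in the text."""
    totals, correct = {}, {}
    for keyword, ok in predictions:
        totals[keyword] = totals.get(keyword, 0) + 1
        correct[keyword] = correct.get(keyword, 0) + int(ok)
    return {k: correct[k] / totals[k] for k in totals}

# Hypothetical labellings: "ocean" predicted 10 times, 7 correct.
preds = [("ocean", True)] * 7 + [("ocean", False)] * 3
print(keyword_precision(preds))  # {'ocean': 0.7}
```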

In 2007, Deschacht and Moens focused on mining texts with associated
images (in this case 100 image--text pairs from Yahoo! News). Their
system tries to estimate the probability that a word represents an
object present in the image. They call the method with the best
results \emph{NER+DYN}, Named Entity Recognition with dynamic
cut-off. Their work is published as \emph{Text Analysis for Automatic
  Image Annotation}~\cite{deschacht-moens}.

As far as we know, no one has attempted to use children's books as
material for automatically labeling images. However, the recent work
titled \emph{Combining image captions and visual analysis for image
  concept classification} by Kliegr et al.\ in 2008~\cite{kliegr}
comes quite close. In many ways, the text in books for young children
is little more than image captions. We will, however, try to use both
image and text information from the entire book when labeling an
image.

\subsection{Natural Language Processing}
For the NLP part we need a way to identify entities in the raw text
accompanying the images in the book. As mentioned before, we found a
good book on how to do this in Python~\cite{nltk}.

\begin{figure}[h!]
    \begin{center}
    \includegraphics[width=0.9\textwidth]{ie-architecture.png}
    \caption{\label{nltk} Information extraction
      architecture\cite{nltk}.}
    \end{center}
\end{figure}

To extract information from the raw text accompanying the pictures in
the book(s), we first split it into sentences using a sentence
segmenter. Next, we tokenize the sentences and tag the tokens with
\textit{part-of-speech} tags. All methods for doing this are provided
by the NLTK. Once we have detected the entities in the text, we need a
way to identify which entities refer to the same \textit{object}. For
example, we might encounter sentences such as ``This is Bob. He is a 9
year old monkey.'' The word \textit{he} refers to \textit{Bob} and is
described as an anaphoric reference to \textit{Bob}; \textit{Bob} is
referred to as the \textbf{antecedent} of \textit{he}. To complicate
things further, sentence-level anaphora is also possible: in ``I saved
a cat from drowning today. It was my duty as a citizen.'', \textit{it}
is an anaphoric reference to the verb phrase \textit{saved a cat from
  drowning}. Fortunately, the NLTK has a module for
\textit{coreference}.
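The pipeline above can be sketched in plain Python. This is a minimal
illustration only: the regex sentence splitter, the
capitalized-word entity heuristic, and the nearest-antecedent pronoun
resolution are crude stand-ins for NLTK's trained segmenter, tagger,
and coreference components, not the method we will actually use.

```python
import re

# Pronouns whose antecedents we try to resolve (illustrative subset).
PRONOUNS = {"he", "she", "it", "they", "him", "her", "them"}

def split_sentences(text):
    """Naive sentence segmenter: split after ., ! or ?."""
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def tokenize(sentence):
    """Split a sentence into word tokens, dropping punctuation."""
    return re.findall(r"[A-Za-z0-9']+", sentence)

def resolve_pronouns(text):
    """Link each pronoun to the most recently seen capitalized,
    non-sentence-initial token -- a crude antecedent heuristic."""
    links = {}
    last_entity = None
    for sentence in split_sentences(text):
        for i, tok in enumerate(tokenize(sentence)):
            if tok.lower() in PRONOUNS:
                if last_entity is not None:
                    links[tok] = last_entity
            elif i > 0 and tok[0].isupper():
                last_entity = tok
    return links

print(resolve_pronouns("This is Bob. He is a 9 year old monkey."))
# {'He': 'Bob'}
```

On the example from the text, the heuristic links \textit{He} to
\textit{Bob}; it would of course fail on sentence-level anaphora such
as the drowning-cat example, which is exactly why a real coreference
module is needed.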
