\section{Background and Overview}
This project belongs to the field of \emph{automatic image annotation},
that is, the automatic labelling of images or parts of images.

Early published work in the field includes the 1995 article
\emph{Vision texture for annotation}~\cite{picard} by Picard et al.,
which describes a system that, given an image region with a labelled
texture (e.g.\ water), attempts to identify similar textures in other
images.

The work presented by Duygulu et al.\ at the 7th European Conference
on Computer Vision in 2002, titled \emph{Object Recognition as Machine
Translation: Learning a Lexicon for a Fixed Image Vocabulary}
\cite{duygulu}, is often cited and used for comparisons. They worked
with the Corel dataset, which then consisted of 5000 images, each
manually assigned 1--5 keywords from a vocabulary of 371 words. Their
approach was to first segment each image into blobs, then compare these
blobs across images tagged with similar keywords. Expectation
maximization was then used to deduce which blobs were likely to
represent which keywords.
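The correspondence step can be illustrated with a small
expectation-maximization procedure in the spirit of word alignment in
machine translation. The sketch below is our own simplification under
assumed inputs (discrete blob tokens per image, a Model-1-style
translation table), not Duygulu et al.'s exact model:

```python
from collections import defaultdict

def em_blob_word(images, n_iters=20):
    """Estimate p(word | blob) from (blobs, words) pairs by EM.

    images: list of (blobs, words), where blobs and words are lists of
    tokens. Simplified illustration, not Duygulu et al.'s exact model.
    """
    vocab = {w for _, words in images for w in words}
    # Uniform initialisation of the translation table t[word][blob].
    t = defaultdict(lambda: defaultdict(lambda: 1.0 / len(vocab)))
    for _ in range(n_iters):
        count = defaultdict(lambda: defaultdict(float))
        total = defaultdict(float)
        for blobs, words in images:
            for w in words:
                # E-step: distribute each word over the blobs of its image,
                # in proportion to the current translation probabilities.
                norm = sum(t[w][b] for b in blobs)
                for b in blobs:
                    c = t[w][b] / norm
                    count[w][b] += c
                    total[b] += c
        # M-step: renormalise the expected counts into probabilities.
        for w in count:
            for b in count[w]:
                t[w][b] = count[w][b] / total[b]
    return t
```

On toy data where one blob token consistently co-occurs with one
keyword, the table concentrates probability on that pairing after a few
iterations.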

\begin{figure}[h!]
    \begin{center}
    \includegraphics[width=0.9\textwidth]{duygulu.png}
    \caption{\label{duy} Examples of successful labelling of
      image elements by Duygulu et al.~\cite{duygulu}.}
    \end{center}
\end{figure}

Duygulu et al.\ introduced two performance measures that have since
been used to evaluate many image annotation systems. \emph{Precision}
for a keyword is defined as the number of images correctly assigned
the keyword, divided by the total number of images predicted to have
the keyword. \emph{Recall} is defined as the number of images correctly
assigned the keyword, divided by the number of images carrying the
keyword in the original, manually tagged dataset. Performance typically
varies considerably between keywords, so it is normally reported as an
average over all keywords. When measuring performance in this way, the
system is first trained on a large subset of the dataset (the
blob--keyword correspondence itself is learned without supervision),
and performance is then measured on the remaining images, where the
system is forced to assign five keywords to each image.
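In symbols, for a keyword $w$ these definitions read:

```latex
\begin{align*}
  \mathrm{precision}(w) &=
    \frac{\#\{\text{images correctly assigned } w\}}
         {\#\{\text{images predicted to have } w\}}, \\
  \mathrm{recall}(w) &=
    \frac{\#\{\text{images correctly assigned } w\}}
         {\#\{\text{images manually tagged with } w\}}.
\end{align*}
```

The mean precision and mean recall reported below are averages of these
quantities over the keyword vocabulary.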
\begin{comment}
To measure the performance of their system, Duygulu et al trained the system
on 4500 images. They then measured the number of correct labels in 100 of
the remaining 500 images. They achieved correctness typically in the range
30\% to 80\%. In this case this meant that if the correctness for the keyword
\emph{ocean} was 70\%, then when an image part was labeled \emph{ocean}, it
was correct 70\% of the time.

The main difference between the approach taken by Duygulu et al and that taken
by us is that we also incorporate an NLP system that does keyword extraction.
Thus, we get a system that does not require a dataset tagged with keywords in
advance.
\end{comment}

A system showing promising results in keyword extraction was developed
by Deschacht and Moens. In 2007, they focused on mining texts with
associated images (in this case 100 image--text pairs from Yahoo!
News). Their system estimates the probability that a word represents
an object that might be present in the image. They call the method
with the best results \emph{NER+DYN}, Named Entity Recognition with
dynamic cut-off. Their work is published as \emph{Text Analysis for
Automatic Image Annotation} \cite{deschacht-moens}.

As far as we know, no one has attempted to use children's books as
material for automatically labelling images. However, the recent work
titled \emph{Combining image captions and visual analysis for image
  concept classification} by Kliegr et al.\ from 2008~\cite{kliegr}
comes quite close: in many ways, the text in books for young children
is little more than image captions.

\subsection{State of the Art}
In 2008, after a decade of intense research in the field resulting
in ever more complex solutions, Makadia et al.~\cite{makadia2008}
introduced a deliberately simple baseline algorithm to be used for
comparison when developing new solutions. The algorithm first finds
the $k$ nearest neighbours of an image (in terms of low-level image
features) and then assigns labels using a greedy label-transfer
procedure. Surprisingly, this baseline outperformed all previously
designed systems.
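A minimal sketch of this kind of nearest-neighbour label transfer is
given below. It is illustrative only and uses plain Euclidean distance;
Makadia et al.'s actual baseline combines several global colour and
texture features with tailored distance measures:

```python
import numpy as np

def knn_label_transfer(features, labels, query, k=5, n_labels=5):
    """Annotate `query` by greedy label transfer from its k nearest
    training images. Illustrative sketch, not Makadia et al.'s exact
    baseline.

    features: (n, d) array of training image feature vectors
    labels:   list of n label sets, one per training image
    query:    (d,) feature vector of the image to annotate
    """
    # Rank training images by distance to the query.
    dists = np.linalg.norm(features - query, axis=1)
    neighbours = np.argsort(dists)[:k]
    assigned = []
    # Greedy step 1: take the labels of the single nearest neighbour.
    for lab in labels[neighbours[0]]:
        if len(assigned) < n_labels:
            assigned.append(lab)
    # Greedy step 2: fill any remaining slots by label frequency
    # among the other neighbours.
    freq = {}
    for i in neighbours[1:]:
        for lab in labels[i]:
            if lab not in assigned:
                freq[lab] = freq.get(lab, 0) + 1
    for lab in sorted(freq, key=freq.get, reverse=True):
        if len(assigned) >= n_labels:
            break
        assigned.append(lab)
    return assigned
```

The greedy structure, transferring first from the closest image and
then by neighbourhood frequency, is what makes the baseline both cheap
and hard to beat.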

Using the work by Makadia et al.\ as a starting point, Guillaumin et al.\
developed \emph{TagProp}~\cite{tagprop}, which delivers state-of-the-art
performance. On the Corel dataset, the achieved mean precision was
$33\%$ and the mean recall was $42\%$.

\begin{comment}
\subsection{Set the problem in context.  What are the related areas
of computer science, AI, or other fields?}

\subsection{What have others already done, either with your chosen
task, or with closely related ones?  Give references to literature
and tools (listed at the end of the report). }

\subsection{Relevant bits from the proposal, describing your first
approaches, trial programs, hypotheses, experiments, first results,
etc.}
\end{comment}
