\subsection{Reasoning system}
To solve the problem of correctly labelling blobs with words, a
reasoning system was developed.

Our study of previous work in the area \cite{duygulu} suggested
that the Expectation Maximization (EM) algorithm could be a good
foundation for designing a reasoning system. However, as work
progressed it became clear that there was not enough time both to
fully understand the EM algorithm and to implement it for our
particular situation. Instead we chose a more naive approach to the
problem of building relations.

\subsubsection{The Reasoning System}
Presented here is the reasoning system used in the final product.
 
The system assumes two functions: one for comparing two blobs,
\emph{blobCmp}, and one for comparing two words, \emph{wordCmp}. Each
of the two functions returns a floating point value between $0.0$ and
$1.0$; $1.0$ indicates that the blobs or words are exactly equal,
and $0.0$ indicates that they have nothing in common.
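As an illustration, the two comparison functions might be sketched as
follows. The concrete measures are our assumptions, since the text
does not fix them: a string-similarity ratio for words, and a
distance on feature vectors for blobs.

```python
from difflib import SequenceMatcher

def wordCmp(w1, w2):
    # Similarity in [0.0, 1.0]; 1.0 means the words are exactly equal.
    # (difflib's ratio is a stand-in for whatever word metric is used.)
    return SequenceMatcher(None, w1, w2).ratio()

def blobCmp(b1, b2):
    # Hypothetical: blobs represented as feature vectors, similarity
    # derived from Euclidean distance and clamped to [0.0, 1.0].
    dist = sum((a - b) ** 2 for a, b in zip(b1, b2)) ** 0.5
    return max(0.0, 1.0 - dist)
```

Both functions return $1.0$ for identical inputs and approach $0.0$
as the inputs have less in common, matching the contract above.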

Using $blobCmp$ and $wordCmp$, the system is then able to build two
relation dictionaries, one for blobs and one for words. The relation
dictionaries use blobs or words as keys, and the values stored are new
dictionaries, where the keys again are blobs or words, but the values
are the floating point values obtained from the comparison functions
$wordCmp$ and $blobCmp$. For example, to determine how similar
$blob_1$ is to $blob_2$, the following call is made.

\begin{verbatim}
 blobrelations[blob_1][blob_2]
\end{verbatim}
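Building such a nested dictionary can be sketched as below; the
function name \texttt{build\_relations} and the example word list are
our own, and \texttt{wordCmp} stands in for either comparison
function.

```python
from difflib import SequenceMatcher

def wordCmp(w1, w2):
    # Assumed word metric (see the comparison-function contract above).
    return SequenceMatcher(None, w1, w2).ratio()

def build_relations(items, cmp):
    # relations[a][b] holds the similarity of item a to item b.
    return {a: {b: cmp(a, b) for b in items} for a in items}

wordrelations = build_relations(["cat", "cats", "dog"], wordCmp)
sim = wordrelations["cat"]["cats"]  # how similar "cat" is to "cats"
```

The same construction applied with \texttt{blobCmp} yields the blob
relation dictionary.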

These relation dictionaries are then used to \emph{cluster} both the
words and the blobs into groups.

Because the clustering of the blobs and the words is done
separately, but in exactly the same manner, the process of clustering
will be discussed in more general terms, simply referring to
\emph{items}, where items may be either blobs or words.

To build clusters, the system chooses an arbitrary item from a list of
all items; this will be the pivot item. The system then creates a new
\emph{group} for this item, also removing the item from the list of
items.

The system then iterates through the list of items, moving an item
from the list to the group if its relation to the pivot item is
greater than some threshold $\theta$. Good values for $\theta$ allow
for some flexibility while still remaining accurate; values around
$0.8$ usually work quite well.

After the first group is built, a new pivot item is selected from the
list of remaining items, and the process is iterated until the list of
items is empty.
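The clustering steps above can be sketched as a greedy pivot loop; the
function name and the small word example are ours, with the same
assumed word metric as before.

```python
from difflib import SequenceMatcher

def wordCmp(w1, w2):
    # Assumed word metric for the example relations below.
    return SequenceMatcher(None, w1, w2).ratio()

def cluster(items, relations, theta=0.8):
    # Greedy pivot clustering: take a pivot, move every remaining item
    # whose relation to the pivot exceeds theta into the pivot's
    # group, and repeat until the list of items is empty.
    remaining = list(items)
    groups = []
    while remaining:
        pivot = remaining.pop(0)
        group = [pivot]
        rest = []
        for item in remaining:
            if relations[pivot][item] > theta:
                group.append(item)
            else:
                rest.append(item)
        remaining = rest
        groups.append(group)
    return groups

words = ["cat", "cats", "dog"]
relations = {a: {b: wordCmp(a, b) for b in words} for a in words}
groups = cluster(words, relations, theta=0.8)
```

Note that the result depends on which items are chosen as pivots, so
this scheme is order-sensitive, unlike EM-style approaches.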

When the clustering process is applied to both the list of all
blobs and the list of all words in a book, the relation between blobs
and words is made by saying that the largest cluster of
blobs correlates to the largest cluster of words, and so forth.
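This size-ordered matching can be sketched as follows; the function
name and the toy cluster data are ours.

```python
def match_clusters(blob_groups, word_groups):
    # Sort both cluster lists by size, largest first, and pair the
    # largest blob cluster with the largest word cluster, and so on.
    blobs = sorted(blob_groups, key=len, reverse=True)
    words = sorted(word_groups, key=len, reverse=True)
    return list(zip(blobs, words))

pairs = match_clusters(
    [["b1"], ["b2", "b3", "b4"], ["b5", "b6"]],
    [["house", "home", "building"], ["dog"], ["sun", "sky"]],
)
# pairs[0] pairs the largest blob cluster with the largest word cluster
```

If the two lists differ in length, the surplus clusters are simply
left unmatched by \texttt{zip}.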
