\subsection{Image recognition}
Duygulu et al.~\cite{duygulu} created a signature for each
blob in the form of a vector of real-valued image features. The main
features used were colour, variation in colour, size, and the relation
between size and contour length. In their 2005 paper, Yavlinsky et
al.~\cite{yavlinsky} make the case that such simple image features can
reliably be used for annotating images. State-of-the-art systems, such
as TagProp~\cite{tagprop} and the system by Makadia et
al.~\cite{makadia2008}, continue to rely on such simple features; in
particular, they all make use of colour histograms.

In our system, image recognition boils down to the task of comparing two
image regions or \emph{blobs} and determining how similar they are. We
call this similarity $p(blob_a, blob_b) \in [0,1]$, which we define as:

\begin{equation}
\label{blob}
p(blob_a, blob_b) = \sum_{n=1}^{N} a_n \cdot x_n
\end{equation}
where 
\begin{equation}
\sum_{n=1}^{N} a_n = 1.0
\end{equation}
and $x_n \in [0,1]$. Here $x_n$ is a measure of similarity when
comparing a specific feature $n$ of $blob_a$ and $blob_b$, while $a_n$
is the relative importance of that feature. We consider the following
features:
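As a sketch, the weighted combination of Equation~\ref{blob} can be expressed in a few lines of Python. The feature names and weight values below are illustrative only (the $x$ values are taken from the example in the figure; the weights are not the ones used in our system):

```python
# Sketch of the weighted similarity of Equation (1).
def blob_similarity(feature_scores, weights):
    """Combine per-feature similarities x_n in [0, 1] with weights a_n
    that sum to 1, returning p(blob_a, blob_b) in [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * feature_scores[name] for name in weights)

# illustrative scores and weights (not the system's tuned values)
scores = {"hue": 0.78, "shape": 0.84, "box_prop": 0.56}
weights = {"hue": 0.4, "shape": 0.4, "box_prop": 0.2}
p = blob_similarity(scores, weights)  # a weighted mean, still in [0, 1]
```

Because the weights sum to one and each $x_n$ lies in $[0,1]$, the result is guaranteed to stay in $[0,1]$.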

\subsubsection{Colour histograms}
Colour images are represented by values in multiple colour channels.
All images in this project are originally RGB (red, green, blue)
images, but for comparison we also convert blobs to HSV (hue,
saturation, value). We then compare each channel separately by
comparing their histograms.

When creating image histograms, the relative occurrence of pixels with
values within a certain range is measured. We create histograms that
divide the pixels into 8 ranges and then compare the histograms using
the OpenCV~\cite{opencv} function \texttt{compareHist}, which
calculates the correlation between histograms $H_1$ and $H_2$:

\begin{equation}
d(H_1, H_2) = \frac{\sum_{i=1}^{N} H'_1(i) \cdot H'_2(i)}{\sqrt{
\sum_{i=1}^{N} H'_1(i)^2 \cdot \sum_{i=1}^{N} H'_2(i)^2
}}
\end{equation}

where $N$ is the number of histogram bins (8) and

\begin{equation}
H'_k(i) = H_k(i) - \frac{1}{N} \sum_{j=1}^{N} H_k(j)
\end{equation}
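For illustration, the same correlation can be computed directly in NumPy. The sketch below mirrors what \texttt{compareHist} computes with its correlation method (\texttt{HISTCMP\_CORREL}); it is not the OpenCV implementation itself:

```python
import numpy as np

def hist_correlation(h1, h2):
    """Correlation between two histograms, as in Equation (3):
    mean-centre each histogram, then take the normalised dot product."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    d1 = h1 - h1.mean()          # H'_1(i)
    d2 = h2 - h2.mean()          # H'_2(i)
    denom = np.sqrt((d1 ** 2).sum() * (d2 ** 2).sum())
    return float((d1 * d2).sum() / denom)

# an 8-bin histogram compared with itself correlates perfectly
h = [0.50, 0.20, 0.10, 0.10, 0.05, 0.03, 0.01, 0.01]
same = hist_correlation(h, h)  # numerically 1.0
```

The score is $1$ for identical histograms and decreases towards $-1$ as the histograms diverge, so it needs rescaling before use as an $x_n \in [0,1]$.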

\subsubsection{Contour length vs object area}
Comparing the shapes of two image blobs can be done in various ways. We
combine three different methods. The first compares the relation
between the object's contour length and its area:

\begin{equation}
x_{AreaVsLength}(blob_a, blob_b) = \frac{\min(rel_a, rel_b)}{\max(rel_a,
rel_b)}
\end{equation}
where
\begin{equation}
rel_k = \frac{contourLength(blob_k)}{area(blob_k)}
\end{equation}
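A minimal sketch of this score, with blobs represented as hypothetical (contour length, area) pairs rather than actual image regions:

```python
def rel(contour_length, area):
    """rel_k = contourLength(blob_k) / area(blob_k)."""
    return contour_length / area

def area_vs_length(blob_a, blob_b):
    """x_AreaVsLength: ratio of the smaller to the larger relation,
    giving a similarity score in (0, 1]."""
    ra, rb = rel(*blob_a), rel(*blob_b)
    return min(ra, rb) / max(ra, rb)

# hypothetical blobs given as (contour length, area) pairs
x = area_vs_length((120.0, 900.0), (200.0, 2500.0))
```

Dividing the smaller relation by the larger one keeps the score in $(0,1]$, with $1$ for blobs whose contour-length-to-area relations match exactly.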

\subsubsection{Object shape}
We compare the shapes of two blobs using their contours:
\begin{equation}
I(A,B) = \sum_{i=1}^{7} |m^A_i - m^B_i|
\end{equation}
where 
\begin{equation}
m^A_i = \operatorname{sign}(h^A_i) \cdot \log|h^A_i|
\end{equation}
\begin{equation}
m^B_i = \operatorname{sign}(h^B_i) \cdot \log|h^B_i|
\end{equation}
 and \(h^A_i\), \(h^B_i\) are the Hu moments\cite{hu1962visual} of the
two contours.
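A sketch of this distance, assuming the seven Hu moments have already been computed (in practice they would come from OpenCV's \texttt{HuMoments}); the guard for zero-valued moments is our own assumption, and the moment vectors in the usage example are placeholders:

```python
import math

def _sign(v):
    """sign(v) in {-1, 0, 1}."""
    return (v > 0) - (v < 0)

def log_transform(hu):
    """m_i = sign(h_i) * log|h_i|, mapping the tiny Hu moments
    onto a comparable scale; vanishing moments map to 0."""
    return [_sign(v) * math.log(abs(v)) if v != 0 else 0.0 for v in hu]

def shape_distance(hu_a, hu_b):
    """I(A, B): sum of absolute differences of the transformed moments."""
    return sum(abs(ma - mb)
               for ma, mb in zip(log_transform(hu_a), log_transform(hu_b)))

# placeholder Hu moment vectors for two similar contours
hu_a = [2.1e-3, 1.5e-6, 4.0e-9, 2.2e-9, -1.0e-17, 5.0e-12, -3.0e-17]
d = shape_distance(hu_a, hu_a)  # identical shapes give distance 0
```

Since $I(A,B)$ is a distance ($0$ for identical shapes, growing with dissimilarity), it must be mapped onto $[0,1]$ before use as the $x_{shape}$ score.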

\subsubsection{Object proportions}
We compare object proportions just as we compare contour length vs
object area, but instead of length and area, we look at the relation
between width and height of the smallest box that can be drawn around
the image blob.
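This score can be sketched in the same way as the area-vs-length score, with hypothetical bounding-box dimensions standing in for the boxes our system extracts:

```python
def box_proportion(box_a, box_b):
    """x_box-prop: compare the width/height ratios of the smallest
    enclosing boxes, smaller ratio over larger, giving (0, 1]."""
    ra = box_a[0] / box_a[1]   # width / height of blob a's box
    rb = box_b[0] / box_b[1]   # width / height of blob b's box
    return min(ra, rb) / max(ra, rb)

# hypothetical bounding boxes as (width, height) pairs
x = box_proportion((40.0, 80.0), (30.0, 50.0))
```

As before, two blobs with identical proportions score $1$, and increasingly different proportions push the score towards $0$.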

\begin{figure}[h!]
    \begin{center}
    \includegraphics[width=0.5\textwidth]{blobcomp}
    \caption{Comparison of two detected blobs from \emph{The Brave Monkey
Pirate}\cite{tbmp} using the described methodology. Images reproduced with the
author's permission.
$p(blob_{left}, blob_{right}) = 0.55$ with 
$x_{red} = 0.43$, $x_{green} = 0.34$, $x_{blue} = 0.24$,
$x_{AreaVsLength} = 0.58$, $x_{shape} = 0.84$, $x_{box-prop} = 0.56$,
$x_{hue} = 0.78$, $x_{sat} = 0.52$ and $x_{val} = 0.46$.}
    \end{center}
\end{figure}


\subsubsection{Parameters}
When using this method to determine object similarity, the result $p$
depends strongly on the parameters and on the weights $a_n$. We expect
the optimal settings to vary considerably between datasets (sets of
images).

Here, the parameter settings have been optimized by hand to work well
with the current dataset. Given a suitable training dataset, the
parameter settings would be an excellent candidate for optimization
with AI techniques such as genetic algorithms.
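To illustrate the idea, a minimal genetic algorithm could search for weights $a_n$ that minimize the error against human-labelled similarity targets. Everything below (the fitness definition, the labelled-pair format, population size, mutation scale) is a hypothetical sketch, not something we have implemented:

```python
import random

def normalise(w):
    """Rescale weights so they satisfy sum(a_n) = 1 (Equation 2)."""
    s = sum(w)
    return [v / s for v in w]

def fitness(weights, labelled_pairs):
    """Mean absolute error between the weighted similarity p and a
    human-provided target, over (feature-score vector, target) pairs.
    Lower is better."""
    err = 0.0
    for scores, target in labelled_pairs:
        p = sum(a * x for a, x in zip(weights, scores))
        err += abs(p - target)
    return err / len(labelled_pairs)

def evolve_weights(labelled_pairs, n_features, generations=200, pop_size=30):
    """Tiny elitist GA: keep the better half, breed the rest by
    averaging two parents (crossover) and nudging one weight (mutation)."""
    random.seed(0)  # fixed seed for reproducibility of the sketch
    pop = [normalise([random.random() for _ in range(n_features)])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, labelled_pairs))
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]            # crossover
            i = random.randrange(n_features)
            child[i] = max(1e-6, child[i] + random.gauss(0, 0.1))  # mutation
            children.append(normalise(child))
        pop = parents + children
    return min(pop, key=lambda w: fitness(w, labelled_pairs))
```

Normalising after every crossover and mutation keeps each candidate on the constraint surface $\sum_n a_n = 1$, so the search never leaves the space of valid weightings.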


