\section{Project management}
The four main project parts each require some degree of
specialization.  We will therefore distribute the tasks among us:

\begin{itemize}
	\item Object Detection and Image Recognition - Elias Johansson \&
      Sebastian Johansson.
	\begin{itemize}
		\item Detect objects - course week 4.
		\item Extract the detected objects from the picture - course
          week 5.
		\item Compare and recognize objects - course week 5.
	\end{itemize}
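As a first sketch, "pulling out" a detected object can be reduced to labeling connected foreground regions in a binary mask and cropping their bounding boxes. The following self-contained Python sketch illustrates the idea on a 0/1 mask; the function name and the mask input are our own illustration, a toy stand-in for whatever detector we end up using:

```python
from collections import deque

def extract_objects(image):
    """Find connected foreground regions (4-connectivity) in a binary
    image and return each region's bounding box as
    (top, left, bottom, right).

    `image` is a list of rows of 0/1 values.  Toy sketch only: the real
    detector would work on feature maps, not a hand-made mask.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                # Flood-fill one component, tracking its extent.
                top, left, bottom, right = r, c, r, c
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes
```

Each returned box can then be cropped out and passed on to the recognition step.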
	\item Natural Language Processing - Martin Hardselius.
          \begin{itemize}
          \item Research text analysis - course week 4
          \item Write a program that fetches interesting entities from
            the raw text of our book - course week 5
          \item Finish up coreferencing and adding my work to the rest
            of the project - course week 5 \& 6
          \end{itemize}
          Implementing a good way of solving coreference problems will
          likely be hard, even though the NLTK has support for it.
          The main reason is that I have never worked with it before.
          Another reason is that implicit references can be really
          hard to spot, as in ``Abe pulled out his matches, but found
          \textit{the box} was empty.''  A possible limitation is to
          exclude such references, since children's books rarely
          contain a lot of implicit facts.
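To make the entity-fetching and coreference steps concrete, here is a deliberately naive Python sketch: it treats capitalized tokens as candidate entities and resolves a handful of pronouns to the most recently seen entity. All names here are our own illustration; the real pipeline would use NLTK's tokenizer, tagger and chunker, and implicit references like ``the box'' are left unresolved, as argued above:

```python
import re

PRONOUNS = {"he", "his", "him", "she", "her", "hers"}

def fetch_entities(text):
    """Return (entities, resolutions): capitalized tokens collected as
    candidate entities, and each pronoun paired with the most recently
    seen entity.

    A toy recency heuristic, not a real coreference resolver.
    """
    entities, resolutions = [], []
    last = None
    for token in re.findall(r"[A-Za-z]+", text):
        if token[0].isupper():
            if token not in entities:
                entities.append(token)
            last = token
        elif token.lower() in PRONOUNS and last is not None:
            resolutions.append((token, last))
    return entities, resolutions
```

The recency rule obviously fails on sentence-initial capitals and on pronouns that skip back over an intervening entity, which is exactly the kind of case the NLTK machinery should handle better.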

	\item Reasoning System - Andreas Granstr\"om.

      The main problem with the reasoning system will be to find a
      dataset representation of the output from both the NLP part and
      the Image part of the project that allows the use of the
      Expectation Maximization \emph{(EM)} algorithm as used in
      \cite{duygulu}.

      The framework \emph{PyMix}~\cite{pymix} provides an
      implementation of the Expectation Maximization algorithm that
      can perhaps be used in our project. PyMix also helps with
      creating Mixture Models, which are used in \cite{duygulu}.
      Mixture Models can be regarded as a type of unsupervised
      learning \cite{mixtmod}.
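      To make the role of EM and mixture models concrete, here is a
      from-scratch Python sketch that fits a two-component
      one-dimensional Gaussian mixture with EM. In the project we
      would call PyMix rather than hand-roll this; the function name,
      initialization and iteration count are our own illustrative
      choices:

```python
import math

def em_gmm(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM.

    Illustrative sketch: crude initialization (split the sorted data
    in half), fixed iteration count, no convergence test.
    """
    data = sorted(data)
    half = len(data) // 2
    mu = [sum(data[:half]) / half, sum(data[half:]) / (len(data) - half)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            w = [pi[k] * pdf(x, mu[k], var[k]) for k in (0, 1)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate weights, means and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against collapse
    return pi, mu, var
```

      The same E-step/M-step alternation carries over to the
      word--image mixtures of \cite{duygulu}, only with a different
      likelihood.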

      Main tasks within the Reasoning System:
      \begin{enumerate}
        \item Obtain an implementation of the Expectation Maximization
          algorithm and Mixture Models.
        \item Figure out what our datasets will look like.
        \item Use mixture models to create probabilistic relationships
          between our images and the words.
        \item Use the EM-algorithm, our mixture models and the
          datasets to compute which word corresponds to which image.
      \end{enumerate}
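      Steps 3 and 4 above can be sketched as an EM loop that learns a
      translation table $p(\mathit{word} \mid \mathit{blob})$ from
      images annotated with word sets, in the spirit of
      \cite{duygulu}. The Python below is our own toy version with
      symbolic blob labels; real blobs would be clustered feature
      vectors, and the function name is illustrative:

```python
from collections import defaultdict

def align_words_to_blobs(pairs, iters=20):
    """Estimate p(word | blob) by EM from (blobs, words) pairs.

    `pairs` is a list of (blobs, words) tuples, one per image.  Toy
    sketch of a translation-style alignment model.
    """
    # Uniform initialization of the translation table t[word][blob].
    t = defaultdict(lambda: defaultdict(lambda: 1.0))
    for _ in range(iters):
        count = defaultdict(lambda: defaultdict(float))
        total = defaultdict(float)
        for blobs, words in pairs:
            for w in words:
                # E-step: distribute each word over the image's blobs
                # in proportion to the current translation table.
                norm = sum(t[w][b] for b in blobs)
                for b in blobs:
                    c = t[w][b] / norm
                    count[w][b] += c
                    total[b] += c
        # M-step: renormalize expected counts into probabilities.
        for w in count:
            for b in count[w]:
                t[w][b] = count[w][b] / total[b]
    return t
```

      Words and blobs that consistently co-occur end up with high
      translation probability, which is exactly the word--image
      correspondence step 4 asks for.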
\end{itemize}

\subsection{Time line}
What should be done during the \emph{corresponding course week}:

\begin{itemize}
    \item[3.] Problem formulation and current work research.
    \item[4.] Continued research; programming of the four program
      parts.
    \item[5.] Working program parts. Hand-in of the problem set.
    \item[6.] First version of a complete and working program. First
      draft of report.
    \item[7.] Finalize program. Collect evaluation data.
    \item[8.] Finalize report and presentation / demo.
\end{itemize}

