



\chapter{Conclusions and future work}

In this final chapter we conclude the thesis by drawing conclusions from our study and, in light of the results obtained, suggesting a number of topics on which further research would be beneficial.

\section{Conclusions}

This thesis was intended as a preliminary study for the creation of a portable camera-projector device providing a human-computer interface, aimed at the identification of text-based research objects. As stated earlier, this project is part of a wider three-year framework and aims to lay a firm foundation for the work that will be carried out in the coming years.
From this point of view, the significance of our contribution is two-fold. First, we have presented the results of our investigation of new and challenging technologies that could be used for the final device implementation. In particular, to the best of our knowledge, ours is the first investigation of the exploitation of a depth camera from a mobile device. In this context, a considerable amount of time was spent analysing low-level data and programming at the kernel and driver level, constantly contending with bugs and unexpected behaviours of the platforms analysed.
Second, we succeeded in creating a working prototype that the user can easily control through simple in-air gestures. We have shown that, with appropriate settings, the device is able to recognise the artefacts of interest in near real time despite the limited resolution of the camera at our disposal. Furthermore, since the software was developed against an open standard API (OpenNI), it will also work with all future compliant devices without changing a line of code. Finally, this work belongs to an area, the recognition of text in natural scene images, in which relatively little research has been carried out, which increases the value of our study.
Considering all the results obtained, we believe the original aims and objectives of the project were met.

\section{Extensions}
Before the final device can be built by putting together all the various components, some improvements are required.

\subsection{Databases and algorithms for related information retrieval}
In our experiments we used only a local text file containing the search keys as a database. Complex, real-world applications cannot do without a modern DBMS and/or remote information sources.

We implemented an algorithm based on the Levenshtein distance, but with large databases this may not be enough to find the correct match. To achieve good results in these cases as well, it will be necessary to explore other approaches, such as:
\begin{itemize}
	\item tf-idf indexing
	\item Hopfield networks
	\item Levenshtein automata
	\item BK-trees
	\item LCS distance
	\item $n$-gram distance
\end{itemize}
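As an illustration of how such structures speed up approximate matching, a BK-tree built over the Levenshtein metric lets a query prune every subtree whose edge distance violates the triangle inequality, so only a fraction of the database keys are compared. The following is a minimal sketch, not the implementation used in our prototype; the key set and search radius in the usage example are hypothetical.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


class BKTree:
    """BK-tree over a discrete metric: each child hangs off its parent
    at a fixed distance, so a radius-r query only descends into edges
    whose label lies within r of the query's distance to the node."""

    def __init__(self, dist):
        self.dist = dist
        self.root = None            # (word, {edge_distance: child_node})

    def add(self, word):
        if self.root is None:
            self.root = (word, {})
            return
        node = self.root
        while True:
            d = self.dist(word, node[0])
            child = node[1].get(d)
            if child is None:
                node[1][d] = (word, {})
                return
            node = child

    def query(self, word, radius):
        """Return sorted (distance, word) pairs within the given radius."""
        if self.root is None:
            return []
        results, stack = [], [self.root]
        while stack:
            w, children = stack.pop()
            d = self.dist(word, w)
            if d <= radius:
                results.append((d, w))
            # triangle inequality: only edges in [d - radius, d + radius]
            for edge, child in children.items():
                if d - radius <= edge <= d + radius:
                    stack.append(child)
        return sorted(results)
```

For example, a tree built from the (hypothetical) keys \texttt{kinect}, \texttt{project}, \texttt{protect} answers a radius-1 query for the misspelling \texttt{projct} by returning only \texttt{project}.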

\subsection{Projection and occlusion avoidance}
During our study we did not focus on the projection area. Future work should therefore identify optimal projection locations over the artefacts of interest, or locations chosen by the user through pointing mechanisms (e.g. an arm-pointing technique). Actuators could be used to move the device's cameras and projector, either to keep the user in the field of view or to interact with other areas without moving the device.
To mitigate the occlusion problem, several devices could be used in parallel. The OpenNI API already allows different cameras to track the same user, so this should not be difficult to achieve.

\subsection{Text recognition improvements}
It would probably be possible to find a better overall combination of text tracking, OCR and preprocessing techniques, leading to better results. The maximum resolution of the Kinect RGB camera is 1280$\times$1024, and only at 10 FPS; since resolution is a key factor in the text recognition problem, using a higher-quality camera would considerably improve the results obtained. This task entails some new issues, because the pixels of the new RGB camera would have to be calibrated against those of the depth camera.
Another feature that could be added is automatic contrast and brightness adjustment, by means of techniques such as histogram equalization.
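For a greyscale frame, histogram equalization amounts to remapping each intensity level through the normalised cumulative histogram. The following sketch uses only NumPy; the sample image in the test is hypothetical, and a production version would need to guard against constant images (where the denominator below is zero).

```python
import numpy as np


def equalize_hist(img: np.ndarray) -> np.ndarray:
    """Histogram equalization for an 8-bit greyscale image: build the
    cumulative histogram, rescale it to the full 0..255 range, and use
    it as a look-up table for every pixel."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first occupied grey level
    # classic formula: stretch the CDF so the darkest used level maps
    # to 0 and the brightest to 255
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]
```

After equalization the output spans the full dynamic range, which tends to sharpen the stroke/background contrast that OCR engines rely on.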

\subsection{New interface and functionalities}
More functionalities could be added. These should be more context-oriented and would imply the adoption of new gestures, with related studies to identify the most suitable ones.
Furthermore, functionalities could be added to compare different research objects or to annotate the objects of interest.

Recently, Microsoft released its official Kinect drivers for Windows, with a new skeleton-tracking system that does not require a user calibration pose.
This is an important innovation, but we believe that OpenNI remains the best solution to adopt, both because the Microsoft solution is constrained to the Windows operating system and because removal of the calibration pose is also planned for the OpenNI framework.


\subsection{Miniaturization}
Our prototype currently uses a standard laptop for processing. As we stated, very small laptops and other, even smaller devices worth investigating are available on the market. Other efforts (currently carried out by members of our PATINA group) concern the miniaturization of the remaining components of the system (e.g. the Kinect).













