\chapter{Results}
This chapter describes in detail the tests carried out on our prototype and the results we obtained.

\section{Experiment settings}

As we saw in the implementation chapter, we used Tesseract as the OCR engine and a Levenshtein-distance-based algorithm to retrieve related data.
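The kind of matching we rely on can be sketched as follows. This is a minimal illustration, not our actual implementation: the function names are ours, and the database here is just a list of strings.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance, row by row."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def best_match(ocr_text: str, database: list[str]) -> str:
    """Return the database entry closest to the (possibly noisy) OCR output."""
    return min(database, key=lambda entry: levenshtein(ocr_text.lower(), entry.lower()))
```

The edit distance tolerates the typical OCR noise (dropped, inserted or substituted characters), which is why a noisy sample can still retrieve the right entry.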

With the current OpenNI implementation, correctly tracking the user while simultaneously calibrating the depth image against the RGB one requires a minimum frame rate. This meant we could not exploit the devices analysed in Chapter 4, since the maximum frame rate they reach is too low. We therefore used an HP Pavilion dv5-1110el, a Microsoft Kinect depth camera and, as projector, a Texas Instruments DLP Pico Projector Development Kit Version 2.0. 

We tried our method on both books and sheets of paper of different colours, sizes, shapes, fonts and backgrounds, placing the device in different positions and measuring the device-object distance each time. We furthermore carried out the experiments trying all the options offered by the application, setting the available parameters to different values. Finally, we tested them on small databases of different sizes and word/sentence lengths.  


\begin{figure}[!h]	
	  \centering	  	  
	  \subfloat[]{\label{fig:tests-01}\includegraphics[scale=0.07]{images/dany_test02.jpg}}          
	  \hspace{1em}%	   
	  \subfloat[]{\label{fig:tests-02}\includegraphics[scale=0.07]{images/dany_test03.jpg}}		  
  	  \hspace{1em}%	   
	  \subfloat[]{\label{fig:tests-03}\includegraphics[scale=0.165]{images/dany_test_01.png}} 
	  
	  \caption{Some of the tested objects}
	  \label{fig:test-settings}  
\end{figure}

\section{Human-computer interface accuracy analysis}
Our entire human-gesture-based interface depends heavily on the accurate identification of the skeleton joint points. In the following subsections, we discuss the accuracy of our method in more detail.

\subsection{Poses}
All the poses described in the design chapter were correctly identified when the user faced the device and no objects occluded the body joint points. Obviously, in order to track all the joints, the user must be entirely within the Kinect camera's field of view.
On slow machines the frame rate can decrease; below 10 FPS, user-tracking losses can arise and the pose-identification success rate can drop, especially with rapid movements.

\subsection{Hand tracking}
In general, the tracking of hand-held objects works well when at least one part of the hand is visible. It can also work when the hand is totally occluded by the object of interest, provided that part of the forearm remains visible. Figure~\ref{fig:hand-held-tracking} shows a successful tracking of a book held with the right hand.

\subsection{Arm pointer}
The observed results showed that the (fore)arm pointer works really well when the pointed target is close to the user and the user is close to the camera (see Figure~\ref{fig:arm-pointer-mode}). When these two conditions are not met, correct target identification can become harder to achieve. This behaviour is due to the fact that, currently, the NITE skeleton generator used by the system does not always return the same point for a given joint. For the hand, for example, it can sometimes return a point on a finger and at other times a point on the wrist. 

In the arm-pointer case there can be more occlusion problems than in the hand-tracking case. The reason lies in the action of pointing itself, which requires the user to orient towards the target point; the two points relevant to arm pointing (the hand and the elbow) can then be occluded by other parts of the body more frequently.
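The geometry behind the arm pointer can be sketched as follows. This is a simplification under the assumption that the target lies on the ray from the elbow through the hand, with joint coordinates given as $(x, y, z)$ triples; the function names are illustrative, not those of our implementation.

```python
import math

def pointing_ray(elbow, hand):
    """Origin (the hand) and unit direction of the elbow->hand pointing ray."""
    d = tuple(h - e for h, e in zip(hand, elbow))
    norm = math.sqrt(sum(c * c for c in d))
    if norm == 0:
        raise ValueError("elbow and hand coincide; cannot point")
    return hand, tuple(c / norm for c in d)

def target_at_depth(elbow, hand, depth):
    """Hypothetical target point obtained by extending the ray
    until its z coordinate reaches the given depth."""
    origin, d = pointing_ray(elbow, hand)
    if d[2] == 0:
        raise ValueError("ray is parallel to the image plane")
    t = (depth - origin[2]) / d[2]
    return tuple(o + t * c for o, c in zip(origin, d))
```

Because the ray is anchored at two joints, any jitter in the returned hand or elbow position is amplified with distance, which is consistent with the degradation we observed for far-away targets.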

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.24]{images/dany-img031.png}
    \end{center}
    \caption[Correct hand-held object tracking]{Correct hand-held object tracking.}
    \label{fig:hand-held-tracking}
\end{figure}

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.25]{images/dany-img030.png}
    \end{center}
    \caption[Arm-pointer mode]{Arm-pointer mode.}
    \label{fig:arm-pointer-mode}
\end{figure}

\subsection{Other options}
The \textit{depth-zoom} and \textit{depth-size} options work correctly. The best results are obtained when the user calibrates them for the particular context. Figure~\ref{fig:depth-zoom} and Figure~\ref{fig:depth-size} show their application.

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.24]{images/dany-img033.png}
    \end{center}
    \caption[Depth-zoom applied to Arm-pointer]{Depth-zoom applied to Arm-pointer.}
    \label{fig:depth-zoom}
\end{figure}

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.24]{images/dany-img032.png}
    \end{center}
    \caption[Depth-size for hand-tracking]{Depth-size for hand-tracking.}
    \label{fig:depth-size}
\end{figure}

\section{Object-recognition results}
The general accuracy of our system in the task of retrieving information about a specific target indicated by the user (both with arm pointing and with hand tracking) depended mainly on:
\begin{itemize}
	\item the precision of the pointing/tracking mechanism.
	\item the extracted subimage.
	\item the OCR engine.
	\item the algorithm used to find the object of interest in the database. 
	\item the size and the specific elements composing the database.
\end{itemize}

\subsection{Identification quality analysis}
For both the hand-tracking and the arm-pointer scenarios, the experiments indicated that the best results occur when:
\begin{itemize}
	\item high resolutions are used;
	\item text-only subimages are extracted;
	\item the artefacts are close to the device;
	\item the user does not move too much;
	\item zooming operations are performed for far-away objects and for small-sized texts;
	\item the number of OCR samples is increased. The impact of this factor is relevant, especially when the size of the database grows, although it can also increase the artefact-identification time;
	\item the contrast and brightness factors are adjusted appropriately;
	\item the text-tracking mechanism is activated;
	\item the value of the \textit{target frame to skip} parameter is increased when the target point starts to change too often (especially in the arm-pointer case, as we saw in Subsection 6.2.3). This improves the success in keeping the trackers active, but the downside is a decrease in the OCR samples/second ratio.
\end{itemize}
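The effect of the \textit{target frame to skip} parameter on tracker stability can be sketched as follows. This is a strong simplification of the real tracker logic, with hypothetical names: a candidate target is accepted only after it has persisted for a number of consecutive frames, so short-lived jumps of the joint points do not retarget the tracker.

```python
class TargetSmoother:
    """Accept a new target only after it has persisted for
    `frames_to_skip` consecutive frames (debouncing)."""

    def __init__(self, frames_to_skip: int):
        self.frames_to_skip = frames_to_skip
        self.current = None      # currently accepted target
        self._counter = 0        # consecutive frames with a different candidate

    def update(self, candidate):
        if self.current is None:
            self.current = candidate
        elif candidate != self.current:
            self._counter += 1
            if self._counter > self.frames_to_skip:
                self.current = candidate   # the change persisted long enough
                self._counter = 0
        else:
            self._counter = 0
        return self.current
```

Raising \texttt{frames\_to\_skip} keeps the trackers locked on the same target for longer, at the cost of reacting more slowly, which matches the trade-off with the OCR samples/second ratio noted above.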

Depending on the above factors, a certain quantity of noisy text (e.g., misspellings, transliteration differences, meaningless long character sequences) was present. 

In general, every OCR sample is obtained from a subimage of the stream that is almost always slightly different from the previous one (because of user movements, the current NITE skeleton-tracking implementation, lighting-condition changes and other factors). Even with small differences between subimages, we observed very different results, with variations in the quantity and type of noisy text.

As we explained before, in order to properly identify an object (minimising the impact of noisy text), the identification algorithm we used needed a certain number of OCR samples. 
Figure~\ref{fig:object-identification} shows the last two OCR samples of a successful object-identification process. In the first image, one of the noisy-text problems (a meaningless long character sequence) can be seen alongside the correct target-object text (a sheet of paper hung on the wall, with the text \textit{HELLO WORLD} printed on it). In the second, only some characters of the target-object text were recognised; they were nevertheless sufficient, in addition to the other samples, for the correct identification of the object.

Finally, Figure~\ref{fig:interaction} shows how our prototype can be used to interact with the research space through a pico-projector.

\begin{figure}[!h]	
	  \centering	  	  
	  \subfloat[Intermediate OCR sample with noisy text]{\label{fig:object-identification1}\includegraphics[width=0.45\textwidth]{images/dany_intermediate_frame.png}}          
	  \hspace{1em}%	   
	  \subfloat[Last OCR sample for the identification]{\label{fig:object-identification2}\includegraphics[width=0.45\textwidth]{images/dany-img034.png}}  
	  
	  \caption{Successful object identification}
	  \label{fig:object-identification}  
\end{figure}

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.25]{images/dany-img035.jpg}
    \end{center}
    \caption[Interaction in the research space]{Interaction in the research space.}
    \label{fig:interaction}
\end{figure}
 
 


