\chapter{System Design}

The aim of this chapter is to provide the reader with our proposed system design, at both the hardware and the software level. 

\section{A general overview of the hardware system}
Figure~\ref{fig:architecture} shows the overall hardware architecture of the final device. 

\begin{figure}[!ht]
    \begin{center}
        \includegraphics[scale=0.4]{images/dany-img007.png}
    \end{center}
    \caption[Overall hardware architecture]{Overall hardware architecture.}
    \label{fig:architecture}
\end{figure}

The final device will be composed of the following components:
\begin{itemize}
	\item An RGB camera to capture the input text data.
	\item A depth camera to support the user-interaction and text-localisation tasks.
	\item A processing unit to execute our main software.
	\item A pico-projector to display the software output.
	\item A wireless communication unit, necessary to interact with the back-end system.
	\item A battery for power supply.
	\item Some actuators that allow the cameras and the projector to cover a wider area.
\end{itemize}

As stated earlier, given the limited time available, our goal is not to build the final physical device but a first working prototype, concentrating our efforts on identifying the most suitable combination of depth camera and processing unit.


\section{System software behaviour}
The state diagram in Figure~\ref{fig:system-state-diagram} describes the general behaviour of the proposed system software. Once started, the system uses the depth camera to detect new users (\textit{NO USER IN THE FIELD OF VIEW} state). Once a user has been detected, the system must calibrate in order to acquire the correct body proportions. It therefore enters the \textit{WAIT FOR POSE} state and remains there until it recognises a user calibration pose (\textit{calibration pose} event). When this happens, the application moves to the \textit{CALIBRATION START} state. In this state, the user must remain still for the calibration to succeed. If the user moves too much, or if the system is otherwise unable to complete the calibration, a \textit{calibration failed} event is generated and the system returns to the previous state. 

\begin{figure}[!ht]
    \begin{center}
        \includegraphics[scale=0.25]{images/dany-img008.png}
    \end{center}
    \caption[System state diagram]{System state diagram.}
    \label{fig:system-state-diagram}
\end{figure}

If instead the system succeeds in this task, the application moves to the \textit{USER TRACKING} state and the user who performed the calibration is designated as the current user of the system. In this state, the application starts tracking the user's body joints in order to recognise specific gestures. This is also the state in which the user decides which of the system's functionalities he wants to use. There are two main options: hand-tracking mode (if the user performs \textit{user pose 1}, the system moves to the \textit{HAND TRACKING MODE} state) and arm-pointer mode (if the user performs \textit{user pose 2}, the system moves to the \textit{ARM POINTER MODE} state). The system lets the user choose which arm/hand to use (left or right). As in the previous situation, the user chooses between two possibilities by performing a specific gesture:

\begin{itemize}
	\item From the \textit{HAND TRACKING MODE} state:
	\begin{itemize}
		\item \textit{user pose 3} leads the system to the \textit{LEFT HAND TRACK ON} state.
		\item \textit{user pose 4} leads the system to the \textit{RIGHT HAND TRACK ON} state.
	\end{itemize}

	\item From the \textit{ARM POINTER MODE} state:
	\begin{itemize}
		\item \textit{user pose 4} leads the system to the \textit{LEFT ARM POINTER ON} state.
		\item \textit{user pose 5} leads the system to the \textit{RIGHT ARM POINTER ON} state.
	\end{itemize}
\end{itemize}

In all four states (\textit{LEFT HAND TRACK ON}, \textit{RIGHT HAND TRACK ON}, \textit{LEFT ARM POINTER ON}, \textit{RIGHT ARM POINTER ON}), the system starts performing its main work: it processes the user data coming from the depth camera and finds a target point (which depends on the selected mode and on the environment) that is used as a reference for extracting a rectangular part of the corresponding RGB image. The size of the extracted image (sub-image) depends on some parameters and on the distance between the device and the target point. To ease the text-recognition process, this sub-image is then enhanced using different techniques (which may take the device--target distance into account) and processed by a text-tracking/OCR engine. The results are then compared with the entries of the back-end database (the system can also be configured to scan several sub-images in order to obtain better results) and, if a match is found, the related information is shown to the user by means of the projector. 
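The database-matching step can be sketched as follows. This is only a minimal illustration, assuming the OCR engine returns plain strings and the back-end database exposes a list of artefact names; the function names, the averaging strategy and the 0.75 similarity cut-off are our own assumptions, not part of the design:

```python
import difflib

def match_database(ocr_samples, database_names, threshold=0.75):
    """Match noisy OCR readings against the known artefact names.

    ocr_samples: strings produced by the OCR engine over several
    scanned sub-images; database_names: names stored in the back-end
    database. Returns the best-matching name, or None if no entry
    is similar enough.
    """
    if not ocr_samples:
        return None
    best_name, best_score = None, 0.0
    for name in database_names:
        # Average the similarity of every OCR sample against this
        # entry, so a single garbled sample does not dominate.
        score = sum(
            difflib.SequenceMatcher(None, s.lower(), name.lower()).ratio()
            for s in ocr_samples
        ) / len(ocr_samples)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

Averaging over several OCR samples is one simple way to exploit the "scan different sub-images" option mentioned above.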
If the user wants to switch to another mode (among the remaining three), he has to perform a specific gesture (\textit{user pose 6}); the system then moves to the \textit{END TRACKING} state. In this state, if the user performs the \textit{calibration pose} again, he becomes the current user once more and the system returns to the \textit{USER TRACKING} state without the need for recalibration. If instead another user in the device's field of view performs the calibration pose, the system moves to the \textit{WAIT FOR POSE} state.
The system recognises when a new user enters the field of view and when a user leaves it. However, if the user involved is not the current one, the application state does not change (apart from the \textit{NO USER IN THE FIELD OF VIEW} state). If instead the user who leaves is the current one, the application, regardless of its current state, moves to the \textit{LOST CURRENT USER} state. In this state the system searches for users in its field of view: if any are found, it moves to the \textit{WAIT FOR POSE} state; otherwise, it returns to the starting state (\textit{NO USER IN THE FIELD OF VIEW}).
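The behaviour just described can be summarised as a finite-state machine. The following sketch encodes a representative subset of the transitions as a lookup table; the state and event names mirror the diagram, but the table structure itself is only an illustration of one possible implementation:

```python
from enum import Enum, auto

class State(Enum):
    NO_USER_IN_FOV = auto()
    WAIT_FOR_POSE = auto()
    CALIBRATION_START = auto()
    USER_TRACKING = auto()
    HAND_TRACKING_MODE = auto()
    ARM_POINTER_MODE = auto()
    LEFT_HAND_TRACK_ON = auto()
    RIGHT_HAND_TRACK_ON = auto()
    LEFT_ARM_POINTER_ON = auto()
    RIGHT_ARM_POINTER_ON = auto()
    END_TRACKING = auto()
    LOST_CURRENT_USER = auto()

# Representative subset of the transitions, keyed by (state, event).
TRANSITIONS = {
    (State.NO_USER_IN_FOV, "new user"): State.WAIT_FOR_POSE,
    (State.WAIT_FOR_POSE, "calibration pose"): State.CALIBRATION_START,
    (State.CALIBRATION_START, "calibration failure"): State.WAIT_FOR_POSE,
    (State.CALIBRATION_START, "calibration success"): State.USER_TRACKING,
    (State.USER_TRACKING, "user pose 1"): State.HAND_TRACKING_MODE,
    (State.USER_TRACKING, "user pose 2"): State.ARM_POINTER_MODE,
    (State.HAND_TRACKING_MODE, "user pose 3"): State.LEFT_HAND_TRACK_ON,
    (State.HAND_TRACKING_MODE, "user pose 4"): State.RIGHT_HAND_TRACK_ON,
    (State.ARM_POINTER_MODE, "user pose 4"): State.LEFT_ARM_POINTER_ON,
    (State.ARM_POINTER_MODE, "user pose 5"): State.RIGHT_ARM_POINTER_ON,
    # From any of the four active modes, user pose 6 ends the tracking.
    (State.LEFT_HAND_TRACK_ON, "user pose 6"): State.END_TRACKING,
    (State.RIGHT_HAND_TRACK_ON, "user pose 6"): State.END_TRACKING,
    (State.LEFT_ARM_POINTER_ON, "user pose 6"): State.END_TRACKING,
    (State.RIGHT_ARM_POINTER_ON, "user pose 6"): State.END_TRACKING,
    (State.END_TRACKING, "calibration pose"): State.USER_TRACKING,
    (State.LOST_CURRENT_USER, "research user success"): State.WAIT_FOR_POSE,
    (State.LOST_CURRENT_USER, "research user failed"): State.NO_USER_IN_FOV,
}

def step(state, event):
    """Return the next state. Losing the current user overrides any
    state; unlisted (state, event) pairs leave the state unchanged."""
    if event == "lost current user":
        return State.LOST_CURRENT_USER
    return TRANSITIONS.get((state, event), state)
```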

\subsection{User poses}
The system must recognise seven gestures. Figure~\ref{fig:recognized-gestures} shows how we designed them. To describe the correct positions, we use the following terminology:
\begin{itemize}
	\item UP: the arm is raised in an L position.
	\item IN: the hand is in front of the chest.
	\item OUT: the hand is held out, away from the chest.
	\item LEFT: left hand/arm.
	\item RIGHT: right hand/arm.
	\item CLOSE: the hands are close to each other.
\end{itemize}
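To give an idea of how this terminology could translate into code, the following sketch classifies joint positions returned by a skeleton tracker. The coordinate convention (metres, with the y axis pointing upwards) and all thresholds are illustrative assumptions, not measured values:

```python
import math

# Joints are (x, y, z) tuples in metres, with y pointing upwards,
# as provided by a typical skeleton-tracking middleware (assumption).

def is_up(hand, elbow):
    """UP: the forearm is roughly vertical, with the hand above the elbow."""
    return hand[1] > elbow[1] and abs(hand[0] - elbow[0]) < 0.10

def is_in(hand, torso):
    """IN: the hand is in front of the chest, near the torso centre."""
    return math.dist(hand[:2], torso[:2]) < 0.20

def is_out(hand, shoulder):
    """OUT: the hand is held out, well to the side of the shoulder."""
    return abs(hand[0] - shoulder[0]) > 0.30

def hands_close(left_hand, right_hand):
    """CLOSE: the two hands are near each other."""
    return math.dist(left_hand, right_hand) < 0.15
```

A pose such as \textit{user pose 6} (right up, left up and close) would then be the conjunction of these predicates over the two arms.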

\begin{figure}[!h]	
	  \centering	  	  
	  \subfloat[\textit{Calibration Pose}]{\label{fig:calibration-pose}\includegraphics[scale=0.7]{images/dany-img009.jpg}}          
	  \hspace{2em}%	   
	  \subfloat[Right up, left in (\textit{userpose 1})]{\label{fig:userpose_1}\includegraphics[scale=0.7]{images/dany-img010.jpg}}		  
  	  \hspace{2em}%	   
	  \subfloat[Right in, left up  (\textit{userpose 2})]{\label{fig:userpose_2}\includegraphics[scale=0.7]{images/dany-img012.jpg}}   	
	  \hspace{2em}%	       
	  \subfloat[Right out, left up (\textit{userpose 3})]{\label{fig:userpose_3}\includegraphics[scale=0.7]{images/dany-img011.jpg}}	
	  	
	  \subfloat[Right in, left in  (\textit{userpose 4})]{\label{fig:userpose_4}\includegraphics[scale=0.7]{images/dany-img013.jpg}}         	      \hspace{3em}%	
	  \subfloat[Right up, left out (\textit{userpose 5}) ]{\label{fig:userpose_5}\includegraphics[scale=1.0]{images/dany-img014.png}}	
	  \hspace{3em}%	  
	  \subfloat[Right up, left up and close (\textit{userpose 6})]{\label{fig:userpose_6}\includegraphics[scale=0.7]{images/dany-img015.png}}         
	  
	  \caption{Recognized gestures.}
	  \label{fig:recognized-gestures}  
\end{figure}

\subsection{Events}
The possible events that can occur during the execution of the software are the following:
\begin{itemize}
	\item \textit{Gesture X detected}: one of the poses above has been detected by the system.
	\item \textit{Calibration failure}: occurs when the initial calibration fails for some reason.
	\item \textit{Calibration success}: occurs when the calibration ends successfully.
	\item \textit{New user}: occurs when the system detects a new user in the depth camera field of view (see Figure~\ref{fig:new-user-in-FOV}).
	\item \textit{Lost user}: occurs when the application loses track of a user (see Figure~\ref{fig:user-out-FOV}). 
	\item \textit{Research user success}: occurs when the application finds users in the field of view.
	\item \textit{Research user failed}: occurs when the application does not find any user in the field of view (Figure~\ref{fig:no-user-in-FOV}).
\end{itemize}

\begin{figure}[!h]	
	  \centering	  	  
	  \subfloat[No users in the device FOV]{\label{fig:no-user-in-FOV}\includegraphics[scale=0.8]{images/dany-img016.jpg}}          
	  \hspace{3em}%	   
	  \subfloat[New user in the FOV]{\label{fig:new-user-in-FOV}\includegraphics[scale=0.8]{images/dany-img017.jpg}}		  
  	  \hspace{3em}%	   
	  \subfloat[User out of the FOV]{\label{fig:user-out-FOV}\includegraphics[scale=0.8]{images/dany-img018.jpg}} 
	  
	  \caption{Recognized events.}
	  \label{fig:recognized-events}  
\end{figure}




\section{System software features}
Our software provides the user with two main modalities, plus a series of options and parameters that can be set and tuned depending on the particular environment and on the type of the searched artefact. The two main modalities are hand tracking and arm pointer; in both, the user can choose to use either the right or the left hand.

\subsection{Hand-tracking mode}
In this modality the user is assumed to hold an object (e.g.~a book) that presents some text on its surface. The system then computes the hand position and a rectangular area corresponding to it. This area is used to extract a sub-image, which is then processed in order to retrieve the related data. 

\subsection{Arm-pointer mode}
In this modality the target point, around which the extraction rectangle is computed, is instead derived from a target text region that the user selects by using his forearm as a pointer. Specifically, the direction the algorithm uses to compute the target point is obtained from two body joints: the elbow and the hand.
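As a sketch of this computation, assuming the surface carrying the text lies at a known depth (in the real system the target would be found on the depth map rather than on a flat plane), the elbow--hand direction can be extended until it reaches that depth:

```python
def target_point(elbow, hand, surface_depth):
    """Extend the elbow->hand direction until it reaches the plane
    z = surface_depth (the device-target depth measured by the
    depth camera). elbow and hand are (x, y, z) joint positions.
    Returns the pointed-at (x, y, z), or None when the forearm is
    parallel to the surface or pointing away from it."""
    dx, dy, dz = (hand[i] - elbow[i] for i in range(3))
    if abs(dz) < 1e-6:      # forearm parallel to the surface
        return None
    t = (surface_depth - hand[2]) / dz
    if t < 0:               # pointing away from the surface
        return None
    return (hand[0] + t * dx, hand[1] + t * dy, hand[2] + t * dz)
```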

\subsection{Other options and parameters} 
The configurable options and parameters are:
\begin{itemize}
	\item \textit{Depth-zoom}: the content of the target rectangle is zoomed according to its distance from the camera: the farther an object is, the smaller its rectangular area appears, so its content is magnified to compensate.
	\item \textit{Depth-size}: the dimensions of the target rectangle change with the distance from the camera: the farther an object is, the smaller the rectangle will be.
	\item \textit{Normal zoom}: a user-configurable factor for resizing the content of the target rectangle. It can also be combined with the depth-zoom.
	\item \textit{Normal-size increment}: a user-configurable modification of the target rectangle dimensions. It can also be combined with the depth-size factor.
	\item \textit{Brightness}: adjusts the brightness of the target image.
	\item \textit{Contrast}: adjusts the contrast of the target image.
	\item \textit{OCR samples number}: the number of OCR samples the algorithm uses to find the searched object name in the database.
	\item \textit{Target frames to skip}: the number of frames to skip between two processed frames.
\end{itemize}
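As an illustration of how the depth-size and normal-size increment parameters could interact, the following sketch scales the target rectangle inversely with the device--target distance; the reference distance of 1~m, the parameter names and the fractional increment convention are assumptions for the sake of the example:

```python
def rectangle_size(base_w, base_h, distance,
                   reference_distance=1.0, size_increment=0.0):
    """Compute the extraction rectangle dimensions in pixels.

    Depth-size: scale the base dimensions inversely with the
    device-target distance, so a farther target yields a smaller
    rectangle. Normal-size increment: a user-set fractional
    enlargement applied on top (e.g. 0.5 = +50%)."""
    scale = reference_distance / max(distance, 1e-6)
    factor = scale * (1.0 + size_increment)
    return int(round(base_w * factor)), int(round(base_h * factor))
```

For example, a 200x100 base rectangle at twice the reference distance shrinks to 100x50 pixels.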
 
