\chapter{Software implementation}
Software implementation is the process of turning the requirements and the design into code. This chapter discusses the practical implementation details of our application.

\section{A general overview}
We developed our application in C++ and, as anticipated in the previous chapter, we chose to build it by modifying the Bristol text-tracking software \cite{10}. Several small devices do not support OpenGL; for this reason we decided to use the OpenCV GUI, HighGUI. It provides simple methods for displaying images on screen and accepting user input, and it is also the GUI used by the text-tracking software we chose.

The most important files of our application are \textit{tracker\_main.cpp} and \textit{particleTracker\_main.cpp}. We used the OpenNI API and the NITE skeleton-tracking modules. Our first versions did not use threads and were almost unusable: skeleton tracking loses the user very easily when the frame rate of the Kinect depth camera is low, in particular while the depth image is being calibrated against the RGB one at the same time.
In the final version we instead used several callbacks and three POSIX threads, with much better results. To increase performance on slow systems, it is possible to assign different priorities to the threads (using \textit{pthread\_setschedparam}), giving the highest priority to the one containing the main OpenNI features.

Their names and main tasks are:
\begin{itemize}
	\item \textit{Main}: the principal thread, responsible for processing the information coming from the Kinect and for managing the input/output of the program.
	\item \textit{GrabFrame}: the thread that receives the Kinect's data and pre-processes it.
	\item \textit{retrieveData}: the thread that retrieves the data from the database.
\end{itemize}
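A minimal sketch of how a worker thread can be started and prioritised with the POSIX API mentioned above (the entry-point name and the shared flag are illustrative; the real signatures live in \textit{tracker\_main.cpp}):

```cpp
#include <pthread.h>
#include <sched.h>
#include <cstddef>

// Illustrative worker: the real GrabFrame thread loops until the
// main thread ends, pre-processing every Kinect frame.
static void* grabFrame(void* arg)
{
    *static_cast<int*>(arg) = 1;   // pretend a frame was grabbed
    return NULL;
}

// Creates the grabber thread and tries to raise its priority, as
// suggested above for slow systems. SCHED_FIFO usually requires
// root privileges, so a failure here is treated as non-fatal.
int run_demo()
{
    int frame_ready = 0;
    pthread_t grabber;
    if (pthread_create(&grabber, NULL, grabFrame, &frame_ready) != 0)
        return -1;

    struct sched_param param;
    param.sched_priority = sched_get_priority_max(SCHED_FIFO);
    pthread_setschedparam(grabber, SCHED_FIFO, &param); // best effort

    pthread_join(grabber, NULL);
    return frame_ready;
}
```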

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.4]{images/dany-img028.png}
    \end{center}
    \caption[Control window ]{Control window.}
    \label{fig:control-window}
\end{figure}

In addition, we used some callbacks to manage the events coming from the Kinect and to allow the user to change the application's parameters by means of trackbars. Since we are using threads, we used global variables for communication among the threads.

We provide the user with a control window (\textit{Control Window}, Figure~\ref{fig:control-window}) where the information about the status of the application, the enabled/disabled options, and the results of the OCR engine and of the data-retrieval algorithm are shown. This is the window that is projected.

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.5]{images/dany-img029.png}
    \end{center}
    \caption[Settings window]{Settings window.}
    \label{fig:settings-window}
\end{figure}

Another window (\textit{Settings Window}, Figure~\ref{fig:settings-window}) is provided to allow the user to change the application's parameters. 

Two optional windows can be shown. The first helps to understand how the system is actually working: it is an RGB image on which, if a user is tracked, the user's skeleton, the target rectangle and the direction in which the user is pointing (if one of the two arm-pointer modes has been chosen) are drawn. The second is a black-and-white image modified according to the options and parameter values chosen by the user; it is the image that is sent to the OCR engine to find the text.
All the options are configurable, including the maximum number of trackers and the maximum number of trackers per frame.
The next sections analyse the implementation of the threads and of the callbacks in more detail.

\section{The main thread}
After the initialization process, the main thread launches the thread responsible for collecting the data from the Kinect and waits until the first data arrives. It then enters a loop in which the application first waits for new Kinect data. When the data is ready, the RGB image and the skeleton joint data (if a user is being tracked) are copied. At this point the controls and the skeleton model are drawn (drawing line segments between specific joints with the \textit{drawlimb} function). Then, if a user is tracked, the \textit{CheckAppStatus} function is called; it compares the current joint positions with the poses defined in Chapter 3 (excluding the calibration pose, which is handled by the OpenNI library). If a pose is recognised, the application status is changed in accordance with the state diagram of Figure~\ref{fig:system-state-diagram}.
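A hypothetical fragment in the spirit of \textit{CheckAppStatus}: poses are detected by comparing skeleton joint positions, discarding joints with low confidence. The joint layout and the "hand above head" test are our illustration, not the actual poses of Chapter 3.

```cpp
// Minimal joint representation: OpenNI reports real-world
// coordinates in millimetres plus a per-joint confidence value.
struct Joint {
    float x, y, z;      // real-world coordinates (mm)
    float confidence;   // 0..1, how reliable the joint position is
};

// Example pose test: the hand is raised above the head. Joints the
// tracker is not confident about are ignored, as in the real code.
bool handAboveHead(const Joint& hand, const Joint& head)
{
    if (hand.confidence < 0.5f || head.confidence < 0.5f)
        return false;
    // In real-world coordinates, y grows upwards.
    return hand.y > head.y;
}
```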

If the system enters one of the two states \textit{LEFT HAND TRACK ON} or \textit{RIGHT HAND TRACK ON}, the \textit{HandTracking} function is called. After checking the accuracy of the skeleton point relative to the selected hand, the function computes the position and size of a rectangular region in the RGB image. These calculations are based on the position of the hand but are also influenced by the options and parameter values chosen by the user.
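The rectangle computation can be sketched as follows: a box of user-chosen size, centred on the projected hand position and clamped to the image borders. The function name and the clamping policy are our illustration, not the exact code.

```cpp
struct Rect { int x, y, w, h; };

// Sketch of the HandTracking rectangle: centred on the hand's
// projected pixel coordinates, with the side length taken from the
// user-configurable size parameter, shifted back inside the image
// when it would cross a border.
Rect targetRect(int handX, int handY, int size, int imgW, int imgH)
{
    Rect r;
    r.x = handX - size / 2;
    r.y = handY - size / 2;
    r.w = size;
    r.h = size;
    if (r.x < 0) r.x = 0;
    if (r.y < 0) r.y = 0;
    if (r.x + r.w > imgW) r.x = imgW - r.w;
    if (r.y + r.h > imgH) r.y = imgH - r.h;
    return r;
}
```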

If instead the state is \textit{LEFT ARM POINTER ON} or \textit{RIGHT ARM POINTER ON}, the \textit{ArmPointer} function is called. For every pixel of the Kinect image (projective space), the OpenNI API provides a way to obtain the corresponding real-world coordinates in millimetres, and also a way to convert them back into projective space. A rectangular area is calculated in this case too, but two joint points are now used (after checking their accuracy): the hand and the elbow of the same arm.

The idea underlying the algorithm is that these two points (in real-world space) can be used to create a direction vector. Starting from the hand point, a certain number of points (up to the maximum distance) are taken along this vector. They are then converted into projective space (all together, because the \textit{ConvertRealWorldToProjective} function responsible for the conversion performs better this way) and, for every sampled point, the z coordinate is compared with the depth value coming from the Kinect at the corresponding projected pixel. When the algorithm finds a measured depth smaller than the expected one (according to the direction vector), the target point is found and the rectangle can be calculated accordingly. In this case too, the dimensions of the rectangle depend on the options chosen by the user.
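The stepping-and-comparison idea can be sketched against a synthetic depth map. The pinhole \texttt{project} helper below is a simplified stand-in for \textit{ConvertRealWorldToProjective} (which the real code calls once for all sampled points), and the focal length, step size and function names are illustrative.

```cpp
#include <cmath>
#include <vector>

struct Pt3 { float x, y, z; };

// Simplified stand-in for OpenNI's real-world-to-projective
// conversion: a pinhole model with a made-up focal length, mapping
// to a 640x480 image centred at (320, 240).
static void project(const Pt3& p, int& px, int& py)
{
    const float f = 525.0f;  // illustrative focal length
    px = static_cast<int>(p.x * f / p.z) + 320;
    py = 240 - static_cast<int>(p.y * f / p.z);
}

// Walks from the hand along the elbow-to-hand direction. The first
// sample whose expected depth exceeds the measured depth means the
// ray has hit a surface: that is the target point.
bool findTarget(const std::vector<float>& depth,  // w*h depths, mm
                int w, int h, Pt3 hand, Pt3 elbow,
                float maxDist, Pt3& hit)
{
    Pt3 d = { hand.x - elbow.x, hand.y - elbow.y, hand.z - elbow.z };
    float n = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (n == 0.0f) return false;
    d.x /= n; d.y /= n; d.z /= n;

    const float step = 20.0f;  // mm between samples
    for (float t = step; t <= maxDist; t += step) {
        Pt3 p = { hand.x + d.x * t, hand.y + d.y * t, hand.z + d.z * t };
        int px, py;
        project(p, px, py);
        if (px < 0 || px >= w || py < 0 || py >= h) return false;
        if (depth[py * w + px] < p.z) { hit = p; return true; }
    }
    return false;  // nothing hit within maxDist millimetres
}
```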

If and when the rectangular target area is calculated, the RGB image is copied and converted into a greyscale image. At this point, a sub-image is extracted in correspondence with the rectangular area found by the arm-pointer or hand-tracking modes and, depending on the options chosen, transformed (zoomed, resized, or adjusted in contrast and/or brightness).

The resulting image is then processed with the \textit{frame} function (or \textit{frameWithoutTracking}, if the user does not want to employ the text-tracking feature) to locate textual data in the image and to keep track of it in the following frames. Once the text has been tracked, we extract the region that encloses it (using the graphBox feature of the text-tracking library; there can be more than one tracker) and we launch the Tesseract engine with this \textit{sub-sub-image} as input. Tesseract produces a text file with the results, which we draw in the \textit{ControlWindow}.

OCR results are usually not 100\% accurate and are very often noisy. In order to identify the right database entry (i.e. the research object), we designed an algorithm (discussed below) that uses a certain number of OCR samples to improve the quality of the results. Once this number of samples is reached, the function that implements the algorithm is launched (in another thread). After this, the output is shown and keyboard input is read (to change the various settings or to quit). When the user quits, all the memory is released and the program exits.

\section{The grabFrame thread}
As mentioned, this thread deals with the task of collecting the data from the Kinect. Without calibration, the RGB camera and the depth sensor are not aligned: the depth camera pixels do not correspond to those of the RGB camera. Some calibration is therefore necessary to align the two data streams. Usually this process takes some time but, fortunately, OpenNI provides a function that performs this task in an easy way.
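The alignment relies on OpenNI's alternative viewpoint capability; a fragment along these lines (node creation and error handling omitted; \texttt{depth} and \texttt{image} are the \texttt{xn::DepthGenerator} and \texttt{xn::ImageGenerator} created from the context) registers the depth stream to the RGB camera:

```cpp
// Make every depth pixel correspond to the same pixel of the RGB
// camera, when the device supports it.
if (depth.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT))
{
    depth.GetAlternativeViewPointCap().SetViewPoint(image);
}
```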
We used an external XML file (see \textit{Appendix B}) to configure the Kinect parameters and designed all the code taking into account both available Kinect RGB resolutions (1280x1024 and 640x480). 
The OpenNI skeleton implementation generates some events, so we created and registered some callbacks to manage them. 

After this initialization phase, the thread starts the Kinect data transfer (\textit{context.StartGeneratingAll}) and enters a loop that finishes only when the main one does. 
The data coming from the Kinect is in a raw format, so we had to convert it in order to make it usable by OpenCV (\textit{convertImageToCVImage}). At every iteration, if there is a tracked user, the information about the skeleton joints is updated (using \textit{SkeletonJointsUpdate}). 
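The pixel conversion can be sketched as follows: the Kinect delivers packed RGB24 while OpenCV stores pixels in BGR order, so the first and third channel of every pixel are swapped. This is only the channel shuffle; the real \textit{convertImageToCVImage} also wraps the buffer in an OpenCV image structure, and the function name below is ours.

```cpp
#include <cstddef>

// Swap RGB24 (Kinect order) into BGR24 (OpenCV order), pixel by
// pixel. src and dst are buffers of 3 * nPixels bytes.
void rgbToBgr(const unsigned char* src, unsigned char* dst,
              std::size_t nPixels)
{
    for (std::size_t i = 0; i < nPixels; ++i) {
        dst[3 * i]     = src[3 * i + 2];  // B
        dst[3 * i + 1] = src[3 * i + 1];  // G
        dst[3 * i + 2] = src[3 * i];      // R
    }
}
```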

In the code, we use the terms LEFT and RIGHT to indicate their opposites, in order to maintain the same OpenNI convention. Finally, when the main thread ends, the loop is broken and the memory can be freed. 

\section{The retrieveData thread}
We implemented a simple local database using a plain text file. As said before, this thread is started when a certain number of OCR samples has been collected. In order to retrieve the most likely candidate element of the database, our algorithm computes the distance from every OCR sample to each element of the database. We chose the Levenshtein distance, defined as \textit{the minimum number of edits needed to transform one string into the other, with the allowable edit operations being insertion, deletion, or substitution of a single character}. 

For every sample, the algorithm chooses as most likely candidate the element of the database with the smallest distance (or more than one, if several elements are at the same distance) and keeps track of the minimum value reached. Each of these candidates is then compared with the others and, at the end, the element (or elements, if several candidates have the same distance) with the best chance of corresponding to the object of interest is returned. 
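The distance computation and the per-sample candidate selection can be sketched as follows, with a standard dynamic-programming Levenshtein implementation and the database reduced to a vector of strings (the function names are ours):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Standard dynamic-programming Levenshtein distance, keeping only
// two rows of the edit matrix.
int levenshtein(const std::string& a, const std::string& b)
{
    std::vector<int> prev(b.size() + 1), cur(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = j;
    for (std::size_t i = 1; i <= a.size(); ++i) {
        cur[0] = i;
        for (std::size_t j = 1; j <= b.size(); ++j) {
            int sub = prev[j - 1] + (a[i - 1] != b[j - 1]);
            int del = prev[j] + 1;
            int ins = cur[j - 1] + 1;
            cur[j] = std::min(sub, std::min(del, ins));
        }
        prev = cur;
    }
    return prev[b.size()];
}

// For one OCR sample, returns the database entries at minimum
// distance (more than one in case of a tie), as described above.
std::vector<std::string> bestCandidates(const std::string& sample,
                                        const std::vector<std::string>& db)
{
    std::vector<std::string> best;
    int min = -1;
    for (std::size_t i = 0; i < db.size(); ++i) {
        int d = levenshtein(sample, db[i]);
        if (min < 0 || d < min) { min = d; best.clear(); }
        if (d == min) best.push_back(db[i]);
    }
    return best;
}
```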

\section{Callbacks}
As stated above, there are two types of callbacks. Some are associated with the OpenNI API; the others are responsible for modifying the behaviour of the system as a consequence of a change in the corresponding trackbar. \textit{update\_brightcont, update\_rectsize, update\_ocrsamples, update\_zoomfactor, update\_depthzoomfactor, update\_target\_frame\_to\_skip} are all callbacks of this last type. All of them check the values provided by the related trackbars and implement the functions described in the design chapter. 
\textit{User\_NewUser, User\_LostUser, UserPose\_PoseDetected, UserCalibration\_CalibrationStart, UserCalibration\_CalibrationEnd} are the callbacks that have to be implemented to manage the OpenNI events. They change the application status as described in the design chapter.
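One trackbar callback might look like the sketch below: HighGUI calls it with the new slider position, and the callback validates the value and stores it in a global read by the main thread. The variable name, the lower bound and the registration call in the comment are illustrative.

```cpp
#include <cstddef>

// Global shared with the main thread, as discussed above. In the
// real code, registration uses OpenCV's HighGUI, roughly:
//   cv::createTrackbar("OCR samples", "Settings Window",
//                      &g_ocr_samples, 50, update_ocrsamples);
int g_ocr_samples = 5;

// Callback in the spirit of update_ocrsamples: clamp the slider
// value so that at least one OCR sample is always required.
void update_ocrsamples(int pos, void* /*userdata*/)
{
    g_ocr_samples = (pos > 0) ? pos : 1;
}
```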

