\chapter{Technology comparison and analysis}

There is a multitude of hardware and software choices on the market. This chapter summarises our research into the technologies we considered worthy of investigation.

\section{Text recognition and tracking}
100\%~OCR accuracy is difficult to achieve even with text-only documents. The situation gets much worse when dealing with images of real environments containing both text and background. With this type of image, the only way to improve the quality of the results is to locate the text and perform the OCR process on it alone.

\subsection{OCR}
Most of the good-quality OCR engines available are not free. After some research, however, we found an open-source engine of commercial quality called Tesseract. This engine was originally developed as proprietary software at Hewlett-Packard between 1985 and 1995 and was released as open-source software under the Apache License v.~2.0 by Hewlett-Packard and UNLV in 2005. Tesseract development is currently sponsored by Google.
The first Tesseract versions could recognise text in English only, but it currently supports about 40 different languages, can be trained to work with others and can use custom dictionaries containing words linked to a particular context. The actual Tesseract character recognition works very well.
In addition, the engine also supports output text formatting, hOCR positional information and page layout analysis. 
There are still problems with languages that have complex characters or connected scripts, such as Arabic, and with right-to-left texts.

We tried the engine with several indoor images containing text, obtaining poor results, often without a single character recognised.
We then tried to locate the text manually, extracting the area around it with a photo-editing program. As expected, Tesseract provided better results, even if not always correct ones. The quality of the results depends mainly on the scanning resolution and the quality of the original text, but it is also influenced by other factors such as brightness, contrast, text size and angle (OCR software often cannot read text skewed by more than 10 degrees).

\subsection{Automated text tracking}
Finding open-source text-tracking software was not as simple as it was for OCR. We initially could not find any acceptable free solution, but fortunately we had the possibility to try the text-tracking software described in the background chapter, developed here at the University of Bristol.
There were several compilation and linking errors, due mainly to the fact that the software relied on old library versions (i.e.\ OpenCV 1.0) while we had installed the new ones. The problem was that the new versions are not backward compatible, so we had to modify the code in order to use them.
Once all these problems were solved, we tried the software with a standard webcam. On numerous occasions we noticed that it found the text regions, although the region contours were not always precise. Depending on the background, a varying number of false-positive regions was found.
As we saw in the previous chapter, our final software will extract regions of the RGB camera image based on information taken from a depth camera. Applying the text-tracking algorithm to this sub-image decreased the number of false positives considerably.
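The idea can be sketched as follows: find the bounding box of the depth readings closer than a threshold and run the text tracking only on the RGB pixels inside it. This is a minimal illustration of the principle, not our actual implementation; the function names and the simple single-threshold segmentation are ours.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Bounding box (in pixel coordinates) of all depth readings closer
// than a threshold; the text tracker is then run only inside this box.
struct Box { int x0, y0, x1, y1; bool valid; };

Box nearRegion(const std::vector<uint16_t>& depth, int w, int h,
               uint16_t maxDepth) {
    Box b{w, h, -1, -1, false};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            uint16_t d = depth[y * w + x];
            if (d > 0 && d < maxDepth) {          // 0 means "no reading"
                b.x0 = std::min(b.x0, x); b.y0 = std::min(b.y0, y);
                b.x1 = std::max(b.x1, x); b.y1 = std::max(b.y1, y);
                b.valid = true;
            }
        }
    return b;
}
```

The RGB image cropped to this box is what gets passed to the text-tracking algorithm, which is why the background (and its false positives) largely disappears.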


\subsection{Cameras}
As we have stated before, a depth camera could be very useful in this type of application.
Initially, we planned to use a camera based on the time-of-flight principle, the PMD-Vision CamCube, but, for reasons that will be explained later in this chapter, we decided to opt for the Microsoft Kinect, a device that uses the structured-light approach.

\subsection{PMD-Vision CamCube 3}
The PMD-Vision CamCube 3.0 is one of the best time-of-flight cameras on the market today; in particular, it has the highest resolution among all the cameras based on the TOF principle.

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.17]{images/dany-img020.jpg}
    \end{center}
    \caption[PMD camera and projector]{PMD camera and projector.}
    \label{fig:PMD-camera-and-projector}
\end{figure}

The main features of this device are:
\begin{itemize}
	\item Simultaneous capture of grayscale images and distance information. 
	\item Depth resolution: 204x204 (41616 pixels).
	\item Measurement range: from 0.3 to 7 meters.
	\item Power supply: between 10.8 V and 13.2 V.
	\item Frame rate:
	\begin{itemize}
		\item 40 fps at 200x200 pixels.
		\item 60 fps at 176x144 pixels.
		\item 80 fps at 160x120 pixels.
	\end{itemize}
	\item Field of view: 40° x 40°.
	\item Interface: USB 2.0.
	\item High resistance to background light thanks to the integrated SBI circuit.
	\item Price: around 6000 euros at the time of writing.
\end{itemize}

The camera comes with an SDK, called PMDSDK2, that provides a simple C API and allows us to get the 2D and 3D data from the device. In particular, it permits retrieving the distance between the camera and the object, the signal strength of the active illumination, the associated greyscale image and the range data in Cartesian coordinates.
I wrote some C++ programs in order to test the camera's capabilities, obtaining good results, but the news of the release of open-source drivers for the Microsoft Kinect camera changed our original plans.

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.17]{images/dany-img019.jpg}
    \end{center}
    \caption[One of the experiments]{One of the experiments.}
    \label{fig:one-experiment}
\end{figure}

\subsection{Microsoft Kinect}
The Microsoft Kinect (previously known as Project Natal) is a horizontal bar equipped with depth sensors, an RGB camera and a multi-array microphone. It is intended to be used with the Microsoft Xbox 360 video game platform, enabling users to control and interact with it without the need to touch a game controller, using just gestures and spoken commands instead.
In November 2010, Adafruit Industries offered a bounty for an open-source driver for the device. On the 10th of the same month, Adafruit announced that there was a winner: Héctor Martín. He created a driver for Linux that allowed the use of both the RGB camera and the depth-sensing functions of the device.
The Kinect depth-sensing reference design is based on the PrimeSensor, a technology produced by a company called PrimeSense \cite{28}, the manufacturer of the camera reference design used by Microsoft to create the Kinect. In December 2010, this company released both its own open-source driver and a motion-tracking middleware called NITE.

The Kinect features:
\begin{itemize}
	\item A depth sensor: monochrome CMOS sensor.
	\item Depth Image resolution: 640x480 with 11-bit depth (which provides 2,048 levels of sensitivity) at 30 FPS. 
	\item Depth sensor range: 1.2m - 3.5m (the tracking could be maintained through a range of around 0.7m - 6m).
	\item Field of view: 57° (horizontal) x 43° (vertical).
	\item An RGB camera that uses 8 bits per channel.
	\item RGB camera resolution: 
			\begin{itemize}
				\item 640x480 at 30 FPS.
				\item 1280x1024 at 15 FPS.
			\end{itemize}
	\item A laser projector: a class 1 laser with a wavelength near the infrared, 830 nm, with a constant output.
	\item A motorized pivot: it provides the possibility to tilt the bar up to 27° either up or down.
	\item A microphone array: each channel processes 16-bit audio (sampling rate of 16 kHz).
	\item Interface: USB 2.0.
	\item Power demand: 12W.
	\item Price: around £130 at the time of writing.
\end{itemize}

The Kinect principle is a slight variant of the Structured Light approach described in the background chapter. 

\subsection{Kinect vs PMD-Vision CamCube 3}
After the Kinect open-source driver was released, we tried the device. Even though the PMD camera has a better depth-sensor range and has been tested for longer, we decided to adopt the Kinect for a variety of reasons.
The biggest advantage of the Kinect over the PMD CamCube 3 is the depth-camera resolution: 640x480 for the former and 204x204 for the latter. Even if it is possible to do user tracking with a resolution of 204x204 pixels, a higher resolution ensures much more accurate results. Among the other benefits are the wider field of view, the lower weight and the possibility of exploiting the other devices integrated with it (RGB camera, microphones, motorized tilt). Last but not least, the price: the Kinect currently costs around 40 times less than the PMD camera.

\subsection{OpenKinect vs PrimeSense Sensor}
Another choice we had to make concerned the Kinect driver. As I wrote earlier, at the moment, there are two possible solutions: OpenKinect and OpenNI.  

\subsubsection{OpenKinect Libfreenect}
OpenKinect is an open community of people interested in making use of the Kinect hardware with computers and other devices. Their software, called Libfreenect, was the first open-source driver to be released (dual Apache 2.0/GPL 2.0 licence) and aims to provide a simple, straightforward C interface to the Kinect.
Currently it is possible to access the depth, colour and IR image data, and also to use the tilt motor and the LED, but unfortunately there are no high-level recognition features. Furthermore, a proof-of-concept audio driver has been released recently.
The driver provides an asynchronous interface by means of callbacks, but a synchronous interface (c\_sync) has also been released. There are wrappers for C++, C\#, Java, Python, Ruby and ActionScript.

\subsubsection{PrimeSense Sensor, OpenNI and NITE}
PrimeSense released their open-source driver Sensor (LGPLv3 licence), a multi-language, cross-platform framework called OpenNI and binaries for their NITE skeletal-tracking module.
Their open-source driver was written for the PrimeSensor, a Kinect-like depth sensor, but thanks to some modifications it allows the Kinect depth sensor and RGB camera to be used with a computer. There is no support for the audio, the tilt motor or the LED. Though more heavyweight than the Libfreenect driver (because it attempts to implement all the features of the PrimeSense reference design), it can interface with the OpenNI framework.
OpenNI is not only an API, it is also an ``industry-led, not-for-profit organization formed to certify and promote the compatibility and interoperability of Natural Interaction devices, applications and middleware. One of the OpenNI organization goals is to accelerate the introduction of Natural Interaction applications into the marketplace'' \cite{29}.
OpenNI supplies a set of APIs to be implemented by sensor devices, and a set of APIs to be implemented by  middleware components (the software components that analyse the audio and visual data from the scene, and comprehend it, e.g. tracking the position of a hand from the depth image). 

\subsubsection{Kinect drivers comparison}
The two solutions are quite different and choosing one driver over the other has not been an easy task.
From the miniaturization point of view, using small computation units implies the adoption of slow CPUs, and in this case the simplicity and lightness of the Libfreenect approach is preferable to the heaviness of the Sensor driver. Nevertheless, the high-level features offered by the OpenNI API, in particular skeleton tracking, are very important from our point of view. Reimplementing these features would require a considerable amount of time, work beyond the intent of this thesis. Using OpenNI also means that all the software we write will work with every sensor that supports it: if, in the future, a sensor better than the Kinect is released, modifying the code to work with it will be very easy.
Regarding the Sensor driver's performance, we should add that, since the OpenNI and Sensor code is open source, it is theoretically possible to strip out many unneeded features and thereby improve the driver's efficiency.
Considering all these aspects, we decided to implement our code using the PrimeSense Sensor driver and the OpenNI API.

\subsection{RGB camera}
As we saw, depth cameras like the Microsoft Kinect and the PMD-Vision CamCube 3.0 have, besides the depth sensors, another integrated camera: RGB in the first case, grayscale in the other.
Exploiting this integrated camera means that we do not need an additional one, which provides many advantages in terms of smaller dimensions, lower weight, lower cost and probably lower energy consumption.
We will conduct our experiments with this integrated camera, while taking into account that an additional external camera may be needed: the maximum resolution of the Kinect RGB camera is in fact only 1280x1024.
 
\section{Processing Unit}
In order to make our software work we need a device to execute it. Ideally, it should be small and lightweight, with a fast CPU, inexpensive and with low power consumption, and it should provide networking support and the interfaces for the peripherals we need (cameras, projector).
As always in these cases, it is very hard to find something that meets all the requirements, so a trade-off is necessary. Needing a small device, we decided to try the Nokia N900 and the Gumstix Overo Fire with the Tobi expansion board, and to take into account the possibility of using a small laptop: bigger in size and weight, but better in every other respect.
The main challenge is to make the Kinect, or in general other depth cameras, work with such devices, because the Kinect was never intended to work with machines other than the Microsoft Xbox and, to the best of our knowledge, there is no literature in this topic area.

\subsection{Nokia N900}
The Nokia N900 \cite{30} is a mobile phone that runs the Maemo OS \cite{31}. Unlike Android and other platforms, Maemo is a complete GNU/Linux operating system (the default kernel is 2.6.28-omap1). This means easy portability of almost every application that runs on standard desktop Linux computers (including the Kinect drivers, our application and the libraries they require). We chose it as our primary test platform partly because it was already available in our laboratories and so ready to test, but above all because it presents a lot of really interesting features:
\begin{itemize}
	\item A Texas Instruments OMAP 3430 ARM Cortex-A8 CPU running at 600 MHz with a Imagination Technologies PowerVR SGX530 GPU.
	\item 256 MB of RAM.
	\item 32 GB eMMC and 256 MB NAND non-removable storage.
	\item Battery of 1320 mAh capacity (Third party extended batteries up to 2400 mAh capacity).
	\item A 5 megapixel back camera with autofocus.
	\item 3.5 inch touch-sensitive widescreen display with an 800x480 pixel resolution and 16M colours.
	\item Infrared sensors and Bluetooth v2.1.
	\item Wi-Fi b/g connectivity with support for WEP, WPA and WPA2 (AES/TKIP) security protocols.
	\item An autonomous GPS with optional A-GPS functionality.
	\item A High-Speed USB 2.0 USB Micro-B connector (with support for USB On-The-Go using a modified version of the kernel).
	\item Dimensions: 110.9mm × 59.8mm × 18mm (19.55mm at thickest part).
	\item Weight: approximately 181g.
	\item Cost: about 350\$.
\end{itemize}

\subsubsection{N900 general development settings}
The Nokia N900 has a much slower CPU than today's standard desktop computers and laptops. In order to make the development of Maemo applications easier, it is necessary to use a cross-compilation toolkit called Scratchbox \cite{32} and the Maemo SDK. A cross-compiler is a compiler capable of creating binary executables for a platform other than the one on which the compiler runs. When the target system's processor is much slower than the host's, the gain in compilation speed is considerable.

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.17]{images/dany-img022.jpg}
    \end{center}
    \caption[N900 with an USB memory stick]{N900 with an USB memory stick.}
    \label{fig:n900-usb-stick}
\end{figure}

Scratchbox supports ARM (used by the N900) and x86 targets, and it also provides a full set of tools to integrate and cross-compile an entire Linux distribution, the Maemo one in our case. Thanks to it, you can test applications in conditions practically identical to running them on a Nokia N900. There is in fact also an X11 server that provides a device screen for the developer, so that you can see all the Maemo application windows and visuals on your host machine.
Adding software to Maemo is officially done through packages. Since Maemo is based on the Debian operating system, when creating a package we used tools and techniques borrowed from Debian.
The phone does not officially support the USB OTG mode necessary to connect other devices (depth camera, projector and so on) to it. USB OTG (On-The-Go) \cite{33} is a specification that adds host functionality (when paired with another USB device) to devices that generally fulfil the role of a USB slave. In order to meet the deadlines for production and USB certification \cite{34}, Nokia did not enable USB OTG support, so by default the phone's USB capabilities only allow backups, file exchange and related tasks with a computer.
Even though the N900 lacks the pin required for automatic role switching, there has been an ongoing community effort to add this support \cite{35}. The main part of this work has been the creation of kernel patches to make USB host mode work properly, which means it is possible to use this mode only by recompiling the official kernel with these patches or by using another kernel that includes them, such as the Power-kernel \cite{36} (unfortunately, there have been no new official kernel updates from Nokia).
A user-friendly interface to enable host mode for a particular device has also been created; this software is currently in beta.

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.8]{images/dany-img021.jpg}
    \end{center}
    \caption[Kinect and N900]{Kinect and N900.}
    \label{fig:kinect-and-N900}
\end{figure}

Proper host mode would automatically recognize the speed of the USB slave device, but this version of the software requires manual speed switching, so you need to know what type of device you are dealing with (the possible choices are high, full and low speed).
Apart from this, you also need a female-to-female (A-A) USB adapter and the stock USB cable.
We decided to test it with a USB memory stick.
We connected the female-to-female USB adapter to the USB memory stick and the N900 stock cable, and installed the Power-kernel and the H-E-N software (the user-friendly interface). As you can see from Figure~\ref{fig:n900-usb-stick}, the device worked and we were able to perform file-transfer operations.

\subsubsection{Text-recognition porting}
Before trying the Kinect camera we decided to port one of the first versions of our text-recognition software (without depth camera support) and test it.
As suggested by Nokia \cite{37}, the Scratchbox environment as well as the Maemo SDK were installed on an HP Pavilion dv5-1110el \cite{38} to exploit the advantages of cross-compilation. We installed the required libraries both in the Scratchbox development environment and on the phone. We needed the latest versions of Tesseract, libdc1394 and OpenCV, a library of programming functions mainly aimed at real-time computer vision.
We were able to port the software to the phone successfully and test it with the Nokia N900's integrated camera. Due to the slow 600 MHz CPU, the software was obviously not always able to reach the same results obtained on the laptop, but we decided to go ahead with the N900 testing because the software version tested was clearly unoptimized. One other option that we took into account (as a last resort) was overclocking the phone's CPU. N900 overclocking has been tested by many people and is generally considered safe, even at high frequencies.
In Scratchbox we installed the following libraries:  \textit{libcv4 libcv-dev opencv-doc libcvaux-dev libcvaux4 python-opencv libhighgui-dev libhighgui4}. 
In the N900 instead: \textit{libcv4 libcvaux4 libhighgui}.

\subsubsection{OpenKinect porting: first steps}
In order to make the Kinect work with our mobile phone, first of all we needed to compile the Kinect driver for the ARM platform. We decided to start with the OpenKinect driver because of its simplicity.
The driver requires some libraries to work: \textit{cmake, libusb-1.0-0, libusb-1.0-0-dev, pkg-config} and if you want to compile and execute some OpenGL examples also \textit{libglut3} and \textit{libglut3-dev}.
With the N900 it is possible to use only OpenGL ES, a low-level and lightweight API for advanced embedded graphics that uses well-defined subset profiles of OpenGL \cite{39}. The OpenGL version required by some examples is the standard one, so we decided to create our examples using OpenCV.
In the Maemo repositories there were only the cmake and pkg-config packages, so we created the others.

Because we used the first OpenKinect versions, we had to deal with some driver bugs (fixed later on by the OpenKinect developers). We had to modify some cmake configuration files in order to skip the provided examples, and we also had some problems with the compilation and installation process.
We solved them by commenting out some lines in the \textit{libfreenect/build/src/cmake\_install.cmake} file and by manually creating links to the library where required.
Once the driver was installed, the first thing to do was to connect the Kinect to the N900, in the same way we did with the USB memory stick. We did so and used the H-E-N software, selecting the high-speed mode. The Kinect was successfully recognised and, as you can see from Figure~\ref{fig:kinect-and-N900}, the Kinect LED started to blink green.
After this encouraging result we started to write some C++ code in order to test the performance.
Since our test examples were very simple, they did not take much time to compile. We therefore preferred to compile and install them manually, without the Scratchbox environment. In doing so we encountered error messages caused by the fact that, by default, Maemo reserves too little space for the /tmp directory. Setting the TMPDIR variable to point to a directory on a partition with more free space solved the problem.
The time finally came to test the example, and this is where the problems started. The error messages we got (after activating the \textit{libfreenect} debugging flags) were:
\begin{code}
 Failed to submit isochronous transfer 0: -1
 Failed to submit isochronous transfer 1: -1
 ...
 Failed to submit isochronous transfer 15: -1
 Failed to submit isochronous transfer 0: -1
 Failed to submit isochronous transfer 1: -1
 ...
 Failed to submit isochronous transfer 15: -1
\end{code}

This message came from the Libfreenect driver, specifically from the \textit{usb\_libusb10.c} file. It showed that there was an error: the return value of \textit{libusb\_submit\_transfer} (the libusb function used to submit the data) was negative. This function is called in \textit{fnusb\_start\_iso}, the libfreenect function responsible for starting the transfer. We discovered that the number of these messages (32 in total) was due to the number of buffers used for the libusb transfers (16 in our case, on Linux; defined in the \textit{usb\_libusb10.h} file) and to the fact that we started two isochronous transfers (the RGB camera and the depth camera).

In order to investigate the possible cause of the problem, we started to gather low-level information about the Kinect. A good starting point was the analysis of the information collected by lsusb, a Linux command that displays information about the USB buses in the system and the devices connected to them. We used the -vv option (very verbose) in order to gain some useful information about the depth/RGB cameras.
The lsusb output showed that there were three devices: Xbox NUI Camera, Xbox NUI Audio and Xbox NUI Motor.
From our point of view the most interesting was obviously the first one. 

The camera device has two isochronous endpoints of IN type (i.e.\ data going into the computer), each of which carries two 960-byte packets per microframe. Microframes are time units used in the USB protocol for a number of purposes, in particular as a timing reference for transfers. For non-high-speed transfers, time units called frames are used instead. The difference between them lies in the length of the interval: 1 ms for frames and 125 µs for microframes (8 microframes per millisecond).
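As a quick sanity check, the raw per-second capacity of such an endpoint follows directly from these numbers. The sketch below is ours, not part of the driver:

```cpp
// 8 microframes per 1 ms frame -> 8000 microframes per second.
constexpr long microframesPerSecond = 8000;

// Bytes per second moved by a high-speed isochronous endpoint doing
// `packets` transactions of `bytes` bytes in every microframe.
constexpr long isoBytesPerSecond(long packets, long bytes) {
    return packets * bytes * microframesPerSecond;
}

// Each camera endpoint, at 2 x 960 bytes per microframe, can carry
// at most isoBytesPerSecond(2, 960) = 15360000 bytes/s.
```

This roughly 15 MB/s ceiling per endpoint is comfortably above the data rates the Kinect streams actually need, which matters for the bandwidth analysis that follows.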

Isochronous transfer is one of the four transfer/endpoint types defined by the Universal Serial Bus specification \cite{40}. Isochronous transfers are streaming, real-time transfers that typically contain time-sensitive information; they are useful when data must arrive at a constant rate and occasional errors can be tolerated \cite{41}.
Activating the debugging flags in the libusb library as well, we discovered that the error we got corresponded to a \textit{LIBUSB\_ERROR\_IO}, in particular \textit{ERRNO 28} (\textit{ENOSPC}). This means that there is not enough free bandwidth available to schedule the isochronous packets: a USB controller problem. It could lie in the kernel driver, in the hardware, or in both.

As we saw, with the N900 default kernel provided by Nokia it was not possible to use the phone as a USB host. Analysing the source code, we noticed that there was also no support for high-bandwidth isochronous transfers:

In the \textit{kernel-power-2.6.28/drivers/usb/musb/musb\_host.c}  file, in the \textit{musb\_urb\_enqueue} function, you can find these lines of code:
\begin{code}
/* no high bandwidth support yet */ 
  if (qh->maxpacket & 0x7ff) { 
      ret = -EMSGSIZE; 
      goto done; 
  }
\end{code}

High-speed isochronous transactions can transfer up to 1024 bytes. An isochronous endpoint (\textit{a uniquely addressable portion of a USB device that is the source or sink of information in a communication flow between the host and device} \cite{40}) that requires more than 1024 bytes per microframe, as in our case (1920 bytes for the colour camera and 1760 bytes for the depth camera), can request two or three transactions per microframe. An endpoint that requires multiple transactions per microframe is called a high-bandwidth endpoint, and this gives a maximum possible isochronous or interrupt transfer rate of 192 Mb/s.
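The number of transactions per microframe is encoded, together with the packet size, in the endpoint descriptor's wMaxPacketSize field: bits 10:0 give the packet size and bits 12:11 the number of \textit{additional} transactions. The following sketch decodes it; the struct and function names are our own:

```cpp
#include <cstdint>

// Decoded view of a USB 2.0 endpoint's wMaxPacketSize field.
// More than one transaction per microframe marks a high-bandwidth endpoint.
struct EndpointLoad { int packetSize; int transactions; int bytesPerMicroframe; };

EndpointLoad decodeMaxPacket(uint16_t wMaxPacketSize) {
    int size  = wMaxPacketSize & 0x07FF;       // bits 10:0 - packet size
    int extra = (wMaxPacketSize >> 11) & 0x3;  // bits 12:11 - extra transactions
    return { size, extra + 1, size * (extra + 1) };
}
// A wMaxPacketSize of 0x0BC0 decodes to 960-byte packets with 2
// transactions per microframe, i.e. the 2 x 960 = 1920 bytes reported
// by lsusb for the Kinect camera endpoints.
```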

The phone is delivered with a modified 2.6.28-omap kernel. Nokia wrote a patch of 380,000 lines of code (\textit{kernel\_2.6.28-20100903+0m5-orig.diff}) to make the kernel work with the N900 hardware. The problem with this approach is that the patches that apply to mainline kernels cannot be applied directly. After the phone's launch, new kernel versions were released, providing new features and fixing several bugs. The only way to add these fixes and features to the N900 is to modify the kernel manually, taking into account all the Nokia changes. This process requires considerable time, particularly when trying to backport (apply a patch to an older version of the software than the one it was initially created for) the latest features.

We tried to solve the problem by creating and applying a patch for the \textit{musb\_urb\_enqueue} function. To do so, we used the quilt, diff and patch commands; we then added the name of our patch to \textit{kernel-power-2.6.28/debian/patches/series}, installed all the needed dependencies and compiled the kernel using:
\begin{code}
 fakeroot apt-get build-dep kernel-power
 dpkg-buildpackage -rfakeroot -b
\end{code}

In order to solve the problem it was also necessary to modify other files, because we discovered that the configuration set by Nokia had no endpoint matching the required size. Indeed, all the endpoints defined were at most 1024 bytes, while both the colour camera and the depth camera need 2x960 (1920) bytes.

We found out that a patch for the musb high-bandwidth support already existed \cite{42}. This patch added bigger endpoints, modifying the following files:

\begin{code}
 kernel-power-2.6.28/drivers/usb/musb/musb_host.h
 kernel-power-2.6.28/drivers/usb/musb/musb_host.c
 kernel-power-2.6.28/drivers/usb/musb/musb_core.h
 kernel-power-2.6.28/drivers/usb/musb/musb_core.c
\end{code}

We backported that patch (with some modifications made necessary by the Nokia patches), applied it and recompiled the kernel. With this new kernel we got rid of half of the \textit{Failed to submit isochronous transfer} messages.
We ran some tests and discovered that the remaining problem was that the patch added just one high-bandwidth isochronous endpoint, so only one device at a time, either the colour camera or the depth camera, could work.

A USB device can have up to 32 endpoints: 16 into the host controller and 16 out of it. What we needed to modify was the size of the endpoints, in order to allow enough bandwidth for both devices to work. We tried different configurations, taking into account the constraints imposed by the controller hardware (i.e.\ the sum of all endpoint sizes must not exceed 16 KB) and the endpoint sizes that other devices might need (e.g.\ a new colour camera or a projector), and we ended up with a configuration (see Appendix A) that solved this initialization problem.
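The hardware constraint we checked while trying configurations amounts to a simple sum over the endpoint sizes. The sketch below is illustrative only: the endpoint sizes in the comments are hypothetical, and our actual configuration is the one listed in Appendix A.

```cpp
#include <numeric>
#include <vector>

// The musb controller shares a 16 KB FIFO among all endpoints, so any
// candidate configuration must keep the sum of the sizes within budget.
bool fitsFifoBudget(const std::vector<int>& endpointBytes) {
    const int budget = 16 * 1024;
    return std::accumulate(endpointBytes.begin(), endpointBytes.end(), 0)
           <= budget;
}
// e.g. two 1920-byte camera endpoints plus a few 512-byte ones fit;
// five 4096-byte endpoints would not.
```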

\subsubsection{OpenKinect porting: driver internals}
In order to understand our next steps, it is necessary to provide some basic information about how the driver works internally, specifically how it manages the RGB and depth information. Currently there is no official documentation, so the following information is the fruit of the reverse-engineering work of the OpenKinect community \cite{43} and of my own studies.
Basically, for every stream (RGB and/or depth) there is an associated data structure (packet\_stream) that contains information about it (such as the next expected packet sequence number, the number of packets necessary for a complete frame, the number of packets received, etc.). This structure is updated every time a packet arrives, by means of functions located in the cameras.c file. Each Kinect packet has a header with these main fields:
\begin{itemize}
	\item magic: 2 bytes of information about the validity of the current packet. A packet is valid only if the content of this field is equal to a specific code. In the case of RGB/depth frame packets, for example, these two bytes must represent the code \textit{RB}.
	\item flag: a 1-byte field containing a code that represents the packet type. From our point of view, the most interesting ones are:
		\begin{itemize}
			\item 0x71: First packet of a new RGB frame (SOF Start Of Frame).
			\item 0x72: A packet of the current RGB frame (excluding the first and the last one).
			\item 0x75: Last packet of the current RGB frame (EOF End Of Frame).
			\item 0x81: First packet of a new depth frame (SOF Start of Frame).
			\item 0x82: A packet of the current depth frame (excluding the first and the last one).
			\item 0x85: Last packet of the current depth frame (EOF End Of Frame).
		\end{itemize}
	\item seq: a 1-byte field used to identify the order of the packets sent by the Kinect, so that the data can be reconstructed in order.
	\item timestamp: 32 bits representing the current value of the Kinect timestamp clock.
\end{itemize}
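The fields above can be modelled, in simplified form, as follows. This struct is our own sketch, not the libfreenect definition: the real header also contains padding and still-unidentified bytes, so it keeps only the fields just described.

```cpp
#include <cstdint>
#include <cstring>

// Simplified model of the Kinect packet header fields described above.
// Note: not packed and missing the unknown bytes, so it must not be
// cast directly over raw packet data.
struct PktHeader {
    uint8_t  magic[2];   // must be "RB" for RGB/depth frame packets
    uint8_t  flag;       // packet type: 0x71/0x81 = SOF, 0x75/0x85 = EOF
    uint8_t  seq;        // per-stream sequence number
    uint32_t timestamp;  // Kinect timestamp clock
};

bool isValidFramePacket(const PktHeader& h) {
    return std::memcmp(h.magic, "RB", 2) == 0;
}

bool isStartOfFrame(const PktHeader& h) {
    return h.flag == 0x71 || h.flag == 0x81;  // RGB SOF or depth SOF
}
```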

Once the user chooses the cameras data format output (resolutions etc.) and passes this information to the driver, the latter communicates with the Kinect writing in its registers the code necessary to start the isochronous transfers.
For each stream, every time a packet arrives the driver first checks the magic field to establish whether the packet is valid. If it is not valid the packet is dropped; if it is valid, the driver checks whether the stream is synchronized (the stream can fall out of synchrony if, for example, too many packets are lost). If there is no synchronization, it drops all the packets until it reaches a new SOF. At this point, it compares the packet's sequence number with the expected one. If they match, after some data size and flag verifications, the packet data is copied into a buffer (the position is calculated by means of information held in the stream structure). If they do not match, there are two cases. If only a few packets were lost (the difference between the expected sequence number and the packet's one is less than 5), the packet is copied anyway into the right position of the buffer (after the size and flag checks); the frame delivered will then contain stale data in the pixels corresponding to the lost packets. In the other case (more than 4 packets lost) the driver marks the stream as desynchronized.
When all the packets of a (depth or RGB) frame have arrived, the driver generates a (depth or RGB) callback with the current frame data, which can then be processed by the user software.
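The per-packet logic just described can be sketched as follows. All names here (stream\_state, handle\_packet, etc.) are ours, not the driver's; the real implementation lives in cameras.c and performs more size and flag checking than shown.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified sketch of the per-packet reassembly logic. */
struct stream_state {
    int      synced;         /* cleared after too many lost packets      */
    uint8_t  seq;            /* next expected sequence number            */
    int      pkts_per_frame; /* packets needed for a complete frame      */
    int      pkt_size;       /* payload bytes of a full packet           */
    int      got;            /* packets accounted for since the last SOF */
    uint8_t *frame;          /* frame reassembly buffer                  */
};

enum { PKT_DROPPED, PKT_STORED, PKT_FRAME_DONE };

static int handle_packet(struct stream_state *s, uint8_t seq,
                         int sof, int eof, const uint8_t *data, int len)
{
    if (sof) {                        /* a SOF always starts a new frame */
        s->synced = 1;
        s->got = 0;
        s->seq = seq;
    }
    if (!s->synced)                   /* desynchronized: wait for a SOF  */
        return PKT_DROPPED;

    uint8_t lost = (uint8_t)(seq - s->seq);   /* 1-byte arithmetic wraps */
    if (lost > 4 || s->got + (int)lost >= s->pkts_per_frame) {
        s->synced = 0;                /* too many packets lost: resync   */
        return PKT_DROPPED;
    }
    /* up to 4 lost packets are tolerated: the skipped area of the
       buffer simply keeps the stale data of the previous frame */
    s->got += (int)lost;
    memcpy(s->frame + (size_t)s->got * s->pkt_size, data, (size_t)len);
    s->got++;
    s->seq = (uint8_t)(seq + 1);
    return eof ? PKT_FRAME_DONE : PKT_STORED;
}

/* self-check: reassemble a 3-packet frame, then lose too many packets */
static int stream_demo(void)
{
    uint8_t buf[12];
    struct stream_state s = { 0, 0, 3, 4, 0, buf };
    const uint8_t a[4] = { 'A','A','A','A' }, b[4] = { 'B','B','B','B' },
                  c[4] = { 'C','C','C','C' };
    int r1 = handle_packet(&s, 0, 1, 0, a, 4);      /* SOF   */
    int r2 = handle_packet(&s, 1, 0, 0, b, 4);      /* body  */
    int r3 = handle_packet(&s, 2, 0, 1, c, 4);      /* EOF   */
    int ok = (r1 == PKT_STORED) && (r2 == PKT_STORED) &&
             (r3 == PKT_FRAME_DONE) &&
             memcmp(buf, "AAAABBBBCCCC", 12) == 0;
    int r4 = handle_packet(&s, 0, 1, 0, a, 4);      /* new SOF        */
    int r5 = handle_packet(&s, 9, 0, 0, b, 4);      /* 8 packets lost */
    return ok && r4 == PKT_STORED && r5 == PKT_DROPPED && s.synced == 0;
}
```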
Considering the 640x480 resolution, there are 242 packets per frame for the depth camera (including the 0x71 and 0x75 packets). All packets are 1760 bytes except the 0x75 packet (1144 bytes). In total, a complete frame consists of 425304 bytes of data. With a 30 Hz frame rate this means 12759120 bytes/sec (12672000 excluding the headers), so around 12 MB/s.
There are 162 packets per frame for the colour camera (including the 0x81 and 0x85 packets). All packets are 1920 bytes except the 0x85 packet (24 bytes). In total, a complete frame consists of 309144 bytes of data. With a 30 Hz frame rate this means 9274320 bytes/sec, so around 9 MB/s.
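The arithmetic above can be checked mechanically. The helper below is ours (not part of the driver) and reproduces the frame sizes and 30 Hz data rates just quoted; the 12-byte per-packet header is the value implied by the figures in the text.

```c
#include <assert.h>

/* frame size in bytes: (n-1) full packets plus one shorter final packet */
static long frame_bytes(long pkts, long full_size, long last_size)
{
    return (pkts - 1) * full_size + last_size;
}
```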

\subsubsection{OpenKinect porting: packet loss problem}
Unfortunately, the \textit{Failed to submit isochronous transfer} message was not the only problem. During the execution of our examples, the RGB and depth callbacks (which are called when all the packets necessary to generate a complete frame have arrived) were never invoked, and we got these warning messages:

\begin{code}
...
Creating EP 81 transfer #14 
Creating EP 81 transfer #15 
... 
Write Reg 0x0005 <= 0x00 
Control cmd=0003 tag=0000 len=0004: 12 
Control reply: 12 
[Stream 70] Invalid magic ffff 
... 
[stream 70] Not synced yet... 
...
[stream 70] Lost 8 packets 
[Stream 70] Lost too many packets, resyncing... 
...
[Stream 70] Invalid magic c458 
[Stream 70] Invalid magic af39
\end{code}

Analysing the driver source code \cite{39} and various N900 and Kinect outputs, we discovered that the callbacks were never called because there were not enough valid packets to create a complete image (for both the colour and the depth camera).
 
We tested the system with different combinations of frame rate, test duration and quantity of debug messages (a lot of debugging output slows the system down), using only the RGB camera, only the depth camera, or both cameras.
From 35\% to 50\% of the total packets, depending on the particular test, showed an invalid code in the magic field. It seemed, however, that even when the magic field was wrong, the other header fields were (often) as they should have been. As a result of this observation, we modified the driver to remove the lines of code responsible for checking the magic field.
Even with this modification the output of the tests was not satisfactory: still no callbacks, and a lot of packet loss messages.
Just to see some video output, we modified the driver again, deleting all the packet loss checks and adding some new checks on the maximum number of packets per frame, and finally we got our callbacks. As expected, the output was not acceptable: at a glance, both the RGB and the depth output windows showed random stripes of different colours (different shades of grey in the depth camera case).
This strange output was due to the fact that most of the frames were considered complete even when only a few packets had arrived (sometimes fewer than 10), because the driver generated the callback as soon as it found an EOF packet.
From the outputs we also noticed that sometimes the difference between the sequence numbers of two successive packets was very large: 100 packets or more. Although this strange behaviour was noticed more often between consecutive packets with invalid magic fields, it was also present in the case of two successive packets with valid magic fields. Moreover, often after such a weird sequence number \textit{jump}, the sequence numbers of the subsequent packets continued to follow the first packet's sequence number rather than the anomalous one.
The following packet sequence numbers are examples of the outputs we observed:

\begin{code}
....,1,2,3,4,5,33,8,9,10,....
....,1,2,3,4,5,35,3,9,10,....
....,1,2,3,4,45,46,47,48,....
....,1,2,3,4,79,8,9,10,11,...
\end{code}
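Since seq is a single byte, the distance between two sequence numbers has to be evaluated modulo 256. This minimal helper (ours, for illustration only) computes the gap the driver would perceive for jumps like the ones in the traces above:

```c
#include <assert.h>
#include <stdint.h>

/* perceived gap between the expected and the received sequence number;
   unsigned 8-bit subtraction handles the wrap at 255 automatically */
static uint8_t seq_gap(uint8_t expected, uint8_t got)
{
    return (uint8_t)(got - expected);
}
```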

Considering all these aspects, we heavily modified the driver source code several times in order to get a better output.
We tried to manage the \textit{jumping} sequence number behaviour by combining different solutions: dropping those packets (i.e.\ considering them corrupted), inserting different synchronization code, changing the number of buffers (used by libusb) and of packets per buffer, adding timestamp, magic and size checks, considering the frame complete after a certain number of packets from a SOF packet, and so on.
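The last workaround listed, declaring a frame complete after a fixed number of packets following a SOF rather than trusting the EOF flag, can be sketched as follows (names and structure are ours):

```c
#include <assert.h>

/* count packets since the last SOF; report the frame as complete once a
   threshold is reached, ignoring the (unreliable) EOF flag */
struct frame_counter {
    int since_sof;
    int threshold;
};

static int packet_seen(struct frame_counter *fc, int is_sof)
{
    if (is_sof)
        fc->since_sof = 0;
    fc->since_sof++;
    return fc->since_sof >= fc->threshold;   /* 1 = frame complete */
}

/* self-check: with a threshold of 3, the third packet closes the frame */
static int counter_demo(void)
{
    struct frame_counter fc = { 0, 3 };
    int d1 = packet_seen(&fc, 1);    /* SOF               */
    int d2 = packet_seen(&fc, 0);
    int d3 = packet_seen(&fc, 0);    /* reaches threshold */
    return !d1 && !d2 && d3;
}
```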
 
\begin{figure}[!h]	
	  \centering	  	  
	  \subfloat[RGB Kinect experiment]{\label{fig:rgb-kinect-experiment}\includegraphics[width=0.45\textwidth]{images/dany-img024.jpg}}          
	  \hspace{1em}%	   
	  \subfloat[Kinect and PC output]{\label{fig:rgb-kinect-pc-output}\includegraphics[width=0.45\textwidth]{images/dany-img023.png}}  
	  
	  \caption{Kinect experiments.}
	  \label{fig:kinect-experiments}  
\end{figure}
 
\begin{figure}[!h]	
	  \centering	  	  
	  \subfloat[Output 1]{\label{fig:kinect-N900-output_1}\includegraphics[width=0.45\textwidth]{images/dany-img025.png}}          
	  \hspace{1em}%	   
	  \subfloat[Output 2]{\label{fig:kinect-N900-output_2}\includegraphics[width=0.45\textwidth]{images/dany-img026.png}}  
	  
	  \caption{Kinect and N900 output.}
	  \label{fig:kinect-N900-output}  
\end{figure}

The output after these fixes became better, but still not acceptable. As you can see from Figure~\ref{fig:rgb-kinect-experiment}, we put the Kinect in front of a printed paper. We planned to compare the RGB images coming from the Kinect camera when driven by the N900 with those obtained with a laptop.
The output coming from the laptop is shown in Figure~\ref{fig:rgb-kinect-pc-output}, while the one coming from the N900 is shown in Figure~\ref{fig:kinect-N900-output}. As you can see, only the first rows are equal in the two images. After the first rows the image starts to become incomprehensible, with a lot of differently coloured stripes.

While with the notebook the images taken at different time intervals were always the same (as they were supposed to be), with the N900 each frame taken with the camera pointed towards the same target was different from the previous one (you can see this effect by comparing Figure~\ref{fig:rgb-kinect-pc-output} with Figure~\ref{fig:kinect-N900-output_1}).
Initially we hypothesized that it was just a shifting problem and tried to find a possible cause, but in the end we discovered that the problem was bigger.
We noticed from the various test outputs that the difference between one SOF packet's sequence number and the next SOF's (or between a SOF and an EOF) was never correct. For example, in the case of RGB images, the difference was at most 105 packets, while it should always be exactly 162. Furthermore, with a computer all the packets belonging to the same image carry the same timestamp, whereas here there were at most 105 packets with the same timestamp. It seemed as if the Kinect firmware was not working properly.
It was a very big problem because with this data it was impossible to get any complete frame: the packet header contains no information about which pixels of the complete image correspond to the data contained in the packet.

\subsubsection{OpenKinect porting: working with registers and overclocking}
It is known that using the Kinect in parallel with other USB devices reduces the available bandwidth, which decreases performance and can also provoke initialization errors. We thought that the problems we encountered could be related to bandwidth issues, so we decided to check whether it was possible to decrease the frame rate or the resolution, just to see if the situation improved.
We modified the driver again in order to configure the Kinect with the lowest possible parameters (FPS and resolution). The communication between the Kinect and the OpenKinect driver takes place by means of registers; the OpenKinect community was able to build the driver after discovering the meanings of the (most important) Kinect registers. Unfortunately, during the period I spent testing the Kinect's settings there was not much information available about the configurations allowed by the Kinect's firmware, so I had to test them myself.

In the function \textit{int freenect\_start\_video(freenect\_device *dev)} of \textit{cameras.c}, we replaced these lines:

\begin{code}
 write_register(dev, 0x0c, 0x00); // Bayer 
 write_register(dev, 0x0d, 0x01); // 640x480 
 write_register(dev, 0x0e, 0x1e); // 30Hz 
\end{code}
with 
\begin{code}
 write_register(dev, 0x0c, 0x00); // Bayer 
 write_register(dev, 0x0d, 0x00); // 320x240 
 write_register(dev, 0x0e, 0x0f); // 15Hz
\end{code}
but we got error messages. The best combination accepted by the RGB camera was the following:
\begin{code}
 write_register(dev, 0x0c, 0x00); // Bayer 
 write_register(dev, 0x0d, 0x01); // 640x480 
 write_register(dev, 0x0e, 0x0f); // 15Hz
\end{code}

With our Kinect's current firmware we were not able to reduce either the frame rate or the resolution of the depth camera. Even with these modifications, and even using just the RGB camera (with a theoretical bandwidth usage of 4.4 MB/s), there were no improvements.
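The 4.4 MB figure can be reconstructed from the numbers given earlier: a 640x480 Bayer frame travels as 309144 bytes on the wire, so at 15 Hz the RGB stream alone needs:

```c
#include <assert.h>

/* RGB-only bandwidth at 15 Hz, using the 309144-byte frame size
   computed earlier in the chapter */
static long rgb_bandwidth_15hz(void)
{
    return 309144L * 15;   /* bytes per second, roughly 4.4 MB/s */
}
```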

We then decided to overclock the N900 CPU in order to improve the phone's performance. Overclocking is never 100\% safe:
\begin{itemize}
	\item At high speeds the overclocked device can become unstable (application crashes, file corruption, etc.).
	\item The device may fail to boot.
	\item Theoretically, the lifetime of the device may be reduced.
	\item If constantly used at high speed for a long period, it may overheat, with possible hardware damage.
\end{itemize}

Furthermore, there is no CPU temperature sensor installed in the phone: there is just a battery temperature sensor, which all the N900 temperature monitoring software relies on, so we had to pay attention to any sign of instability.
To overclock the phone it is necessary to use the \textit{kernel-power} kernel and a utility package called \textit{kernel-power-settings}.
The Nokia N900 default settings let the CPU scale dynamically between 250 MHz and 600 MHz. We could instead choose the upper and lower scaling limits, selecting them between 125 MHz and 1150 MHz, as well as different voltages.
We tried different voltages and lower/upper limits (up to the maximum speed, 1150 MHz) but, in this case too, we did not get any visible improvement.

If not in the CPU, the problem may still lie in the USB controller hardware or kernel driver, or in the Nokia USB cable plus female-to-female adapter configuration we were using.
We did not try a Micro USB to Mini USB 2.0 M/F adapter because using it would put too much pressure on the USB port, very probably causing hardware damage.
With regard to the USB kernel driver, we found a recent patch \cite{45} that should ensure better transfer performance. In order to exploit it, it is necessary to backport it. We tried to do so, but there is too much work involved because the kernel code has undergone too many modifications in the meantime. Another way to exploit that patch is to use the MeeGo OS on the N900. MeeGo \cite{46} is another Linux-based operating system intended to run on a variety of hardware platforms including mobile phones, handheld devices and notebooks. Currently the latest MeeGo kernel is 2.6.37, and with this kernel there should be no problems with the patch. The downside of this approach is that MeeGo unfortunately does not support many of the features available on Maemo (one for all, USB OTG). Both solutions, backporting the patch to Maemo or adding the missing features to MeeGo, would require too much time, so we preferred to explore other options.

\subsubsection{Nokia N900 and Kinect: PrimeSense Sensor driver}
As we saw, PrimeSense is the company behind the Kinect camera reference design. For this reason, and despite the problems with the OpenKinect driver, we thought that its driver could be worth a try.
The Sensor driver developed by PrimeSense is actually not for the Kinect: it was designed for the PrimeSense PrimeSensor, but since the two devices are very similar it has not been hard to create a modified version for the Kinect \cite{48}.
Currently, in order to install the Sensor driver, the installation of OpenNI (the unstable branch) is required.
This driver also uses the libusb library, so we had to install it (\textit{libusb 1.0.8} and \textit{libusb-1.0-0-dev}) on our system. In order to create the documentation and make the provided examples work, the following libraries/packages should also be installed:
\begin{itemize}
	\item Python 2.6/2.7/3.0/3.1 
	\item freeglut3 (freeglut3-dev)
	\item Doxygen \& GraphViz
\end{itemize}

The OpenNI compilation process assumes a processor with SSE3 (Streaming SIMD Extensions 3) support \cite{49}. The OMAP3430 CPU of the N900 unfortunately does not support it, so during the compilation process we encountered these error messages:
\begin{code}
cc1plus: error: unrecognized command line option "-malign-double" 
cc1plus: error: unrecognized command line option "-msse3"
\end{code}
We solved this problem by removing the SSE3 compiler flags from the makefiles (\textit{OpenNI/Platform/Linux-x86/Build/CommonMakefile}). In order to port this driver to our ARM platform we also had to modify the makefiles to skip the compilation of some examples that used libraries unavailable for the N900, and to copy some header files manually.
We tried the simplest example but unfortunately encountered errors similar to the ones we got with OpenKinect.

\subsection{Gumstix}
Besides the N900, we also planned to test some embedded computers built on a single circuit board, due to their small dimensions. We opted for the Gumstix Overo series, in particular for the Overo Fire, the most powerful model available. It was also necessary to buy a dual 70-pin expansion board, the Tobi model, which provides the connectors needed for I/O functions such as USB, power input, HDMI, etc. Every Overo model runs Linux, in our case the Angstrom distribution; it is however also possible to install Ubuntu, Android, Windows CE and other operating systems.

\subsubsection{Gumstix hardware features}
In more detail, the Overo Fire features:
\begin{itemize}
	\item A Texas Instruments OMAP 3530 processor.
		\begin{itemize}
			\item ARM Cortex-A8 CPU.
			\item C64x+ digital signal processor (DSP) core.
			\item POWERVR SGX for 2D and 3D graphics acceleration.
			\item 720 MHz clock.
		\end{itemize}
	\item 256 MB RAM.
	\item 256 MB Flash. 
	\item 802.11b/g wireless communications.
	\item Bluetooth communications.
	\item MicroSD card slot.
	\item TPS65950 Power Management. 
	\item Dimensions: 17mm x 58mm x 4.2mm.
	\item Weight: 5.6g (excluding antenna).
	\item Cost: \$219.
\end{itemize}

The main features of the expansion board instead are:
\begin{itemize}
	\item 10/100baseT Ethernet.
	\item DVI-D (HDMI).
	\item USB OTG mini-AB.
	\item USB host standard A.
	\item Stereo audio in/out.
	\item USB Serial Console.
	\item Dimensions: 105mm x 40mm.
	\item Weight: 29.0g.
	\item Cost: \$69.
\end{itemize}

\subsubsection{Gumstix and Kinect}
First of all, we connected the Overo Fire to the Tobi expansion board (via the two 70-pin connectors located on the bottom side of the Overo unit). The next step was to connect the device to a battery supply, because the power supplied from the USB port is not enough (other models can work with just the power supplied via the USB port). In order to set up the device we used our laptop as a terminal, connecting it to the Tobi USB console port via a mini-B to standard-A USB cable, with kermit \cite{50} as terminal emulator.

\begin{figure}[!h]
    \begin{center}
        \includegraphics[scale=0.15]{images/dany-img027.jpg}
    \end{center}
    \caption[Kinect and Gumstix board]{Kinect and Gumstix board.}
    \label{fig:kinect-gumstix-board}
\end{figure}

Once we had installed the drivers and the required libraries (this time compiling directly on the device), we connected the Kinect and tried the simplest examples.
With OpenKinect we got a frame rate of 6-7 Hz, up to 8 Hz using the depth camera only, while with OpenNI we got 1.5 FPS.
During these experiments the CPU was working at full speed, which means that in order not to lose more FPS it is better to avoid any additional CPU load. In spite of everything, these results were much better than those obtained with the N900.


\subsection{Other platforms}
Laptops, although not as small as phones like the N900 or as Gumstix boards, certainly offer better performance. We developed and tested the application on our HP Pavilion dv5-1110el; since our application worked perfectly on it, it should work on all other laptops with the same main features and, with high probability, also on less powerful ones.
Currently on the market you can also find really tiny laptops that are worth a try. The Viliv N5 \cite{51}, for example, weighs 399g, measures 172[W] x 86[L] x 25[H] mm and features an Intel Atom Z52 1.33 GHz CPU, a GMA500 GPU, GPS, WiFi, Bluetooth, 3G availability, SSD options and one USB host port.

\section{Other Hardware Components}
Due to the limited time at our disposal, we did not pay much attention to the other hardware components, although we did try some pico-projectors. The one that most captured our attention was the Texas Instruments DLP Pico Projector Development Kit Version 2.0 \cite{52}, due to its small dimensions and quality.