\chapter{Background Information}


This chapter describes and analyses the different aspects of, and possible solutions for, the realization of the system we aim to design. We will then see the pros and cons of the relevant hardware and software aspects that should be taken into account when dealing with these types of devices, together with a review of the research that has been undertaken in this area.

\section{Hardware}
In this section we will analyse the possible solutions for the hardware components of the device under study. These types of systems are basically composed of one or more cameras and projectors, together with some kind of processing unit.

\subsection{RGB Cameras}
In this type of device the camera serves three main purposes: it is used to identify the artefacts, to interact with the user (by gesture tracking, for example) and to identify optimal projection places over the artefacts of interest.

A wearable device must be comfortable to wear, so it should be neither heavy nor cumbersome. This implies that its components should preferably be small and lightweight, and the camera is no exception.

There are many different types of cameras available on the market, differing in size, price and quality. Camera phones are becoming more and more accurate, with good-quality optics that can reach 12 or more megapixels of resolution (like the Nokia N8 and the Samsung Pixon 12, for instance). It is therefore not unreasonable to evaluate the possibility of exploiting a camera phone, and the phone CPU itself, for our purpose. This solution could save a lot of space and weight but, of course, it is necessary to deal with the requirements of the project software (e.g. a particular operating system, a certain CPU performance or any other feature not available on the phone).

\subsection{Depth cameras}
A 2D RGB camera supports a wide range of image processing operations, but it is a well-known fact that some image processing tasks are very hard to achieve with this kind of sensor alone.

Recently, a new technology that makes many image processing tasks much simpler has become widely available. It is implemented in devices called depth cameras, also known as 3D scanners: devices that analyse and record the three-dimensional shape of real-world physical objects and environments.
Like normal 2D RGB cameras, 3D scanners have a cone-like field of view, but instead of collecting colour information about surfaces they collect distance information, measuring the distance from the scanner to the scanned surface.
A well-accepted categorization \cite{2} classifies them into two types: contact and non-contact 3D scanners. The first type works in physical contact with the scanned object, while the second performs the measurements without touching it.
In addition, it is possible to distinguish between two types of non-contact 3D scanners: active and passive.
Active scanners work by emitting some kind of radiation or light (commonly a laser) onto the scanned object and detecting its reflection, with very good results.
Passive scanners, instead, do not emit any radiation themselves but work by detecting reflected ambient radiation (for example infrared light), offering a good trade-off between results and cost.
For our purposes, which require precision and a high level of accuracy, the most interesting scanners are the active ones. In the next subsections we will give an overview of the main active scanning techniques, focusing in particular on those based on the time-of-flight and structured light principles.

\subsubsection{Time of flight  (TOF)}
Time-of-flight laser scanners emit a pulse of laser light that is reflected off the object to be scanned and detected by a sensor. The time taken by the pulse to reach the target and return to the sender is measured and, since the speed of light in air under different environmental conditions is well known, the device can easily calculate the distance to the object.
The accuracy of this type of scanner depends on how precisely the round-trip time is measured and is affected mainly by the amount of active light that arrives at every pixel, by the illumination, spectral sensitivity, modulation contrast and active area of each pixel and, obviously, also by the quality of the optics. Some systems can cover ranges of some tens of metres with very good results, providing up to 100 frames per second at resolutions that can reach 484 x 648 pixels \cite{3}.
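The distance computation itself is straightforward. The following is a minimal sketch; the constant for the speed of light in air is an approximation, since, as noted above, the exact value depends on environmental conditions that real devices must compensate for:

```python
# Approximate speed of light in air at sea level (m/s); the exact
# value varies slightly with temperature, pressure and humidity.
SPEED_OF_LIGHT_AIR = 299_702_547.0

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the target from the measured round-trip time.

    The pulse travels to the object and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_LIGHT_AIR * round_trip_time_s / 2.0

# A pulse that returns after 20 nanoseconds corresponds to a target
# roughly 3 metres away.
print(round(tof_distance(20e-9), 2))
```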


\begin{figure}[!ht]
    \begin{center}
        \includegraphics[scale=0.8]{images/dany-img003.png}
    \end{center}
    \caption[Time-of-flight principle]{Time-of-flight principle.}
    \label{fig:time-of-flight-principle}
\end{figure}

A time-of-flight camera consists of the following components:
\begin{itemize}
	\item Illumination unit: It is the unit responsible for illuminating the scene. It normally works with infrared light in order to make the illumination unobtrusive. It is a critical part of the system because the light has to be modulated at high speeds, up to 100 MHz, which means that only LEDs or laser diodes are feasible.
	
	\item Receiver optics: The device contains a lens that gathers the reflected light and makes an image of the scene. A filter is applied in order to suppress the background light.
	
	\item Image sensor: As said before, this component measures the time needed by the light pulse to travel from the transmitter to the receiver after being reflected off the target. Current sensors have a rather small resolution in comparison to standard RGB sensors.

	\item Driver electronics: Both the illumination unit and the image sensor have to be controlled and synchronised with high-speed signals.

	\item Computation unit: The computation unit's main tasks are to calculate the distances and to apply calibration algorithms.
	
	\item Interface: Currently this type of camera provides USB or Ethernet interfaces.
\end{itemize}
\begin{figure}[!ht]
    \begin{center}
        \includegraphics[scale=0.5]{images/dany-img004.jpg}
    \end{center}
    \caption[A TOF camera (PMDvision CamCube)]{A TOF camera (PMDvision CamCube).}
    \label{fig:TOF-camera}
\end{figure}
TOF cameras offer various benefits: 
\begin{itemize}
	\item No mechanical moving parts are needed.

	\item Distance information is very easy to obtain.
	
	\item Suitable for use in real-time applications due to their speed.
\end{itemize}
There are also disadvantages:
\begin{itemize}
	\item Low resolution (compared to other types of devices).

	\item Systematic errors when calculating the distance (due to the fact that the theoretically required signal shape is practically not achievable).

	\item Possible interference when running several TOF cameras at the same time.
	
	\item The distance measurement may be influenced by multiple reflections and by the total amount of incident light.
\end{itemize}



\subsubsection{Triangulation}
This technique consists in projecting a laser dot onto an object and then capturing its reflection with a camera in order to locate the dot. Using the mathematical relations between the direction of the emitted laser beam and that of the detected reflection, it is possible to obtain depth information about the object. The technique is called triangulation because the laser dot, the camera and the laser emitter form a triangle.
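The geometry reduces to the law of sines applied to the emitter-camera-dot triangle. The sketch below is a simplified illustration assuming a known baseline and the two measured angles; real systems must additionally handle lens distortion and calibration:

```python
import math

def triangulate_depth(baseline_m: float, laser_angle: float,
                      camera_angle: float) -> float:
    """Perpendicular distance of the laser dot from the baseline.

    baseline_m   -- distance between laser emitter and camera (metres)
    laser_angle  -- angle between the baseline and the emitted beam (rad)
    camera_angle -- angle between the baseline and the observed ray (rad)
    """
    # Law of sines: the third angle of the triangle, at the laser dot,
    # is pi - laser_angle - camera_angle.
    dot_angle = math.pi - laser_angle - camera_angle
    camera_to_dot = baseline_m * math.sin(laser_angle) / math.sin(dot_angle)
    # Project the camera-to-dot distance perpendicular to the baseline.
    return camera_to_dot * math.sin(camera_angle)

# Example: 10 cm baseline, both angles at 60 degrees -> the dot sits
# about 8.7 cm away from the baseline.
print(round(triangulate_depth(0.10, math.radians(60), math.radians(60)), 3))
```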

\begin{figure}[!ht]
    \begin{center}
        \includegraphics[scale=0.8]{images/dany-img005.png}
    \end{center}
    \caption[Triangulation principle]{Triangulation principle.}
    \label{fig:triangulation-principle}
\end{figure}

\subsubsection{Structured light}
Structured light 3D scanners require one or more cameras and a projector. The technique is based on projecting a light pattern and viewing the illuminated scene from one or more points of view. Since the pattern is coded, point correspondences between the projector and the observing device can easily be extracted by calculating a code for each observed pixel. Codewords (simple numbers) are assigned to sets of pixels; they can be encoded in various ways (grey levels, colours or geometric information) and there is a direct mapping from each codeword to the coordinates of the corresponding pixel in the pattern.
The number of correspondences that can be recovered in a single image depends basically on the kind of pattern projected and, therefore, on the light emitter chosen.
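As a minimal illustration of codeword decoding, the hypothetical sketch below assumes a temporally multiplexed Gray-code stripe pattern (one of several possible encodings, not one tied to any specific system discussed here) and recovers the projector column that illuminated a camera pixel from its sequence of bright/dark observations:

```python
def gray_to_binary(gray: int) -> int:
    """Convert a reflected (Gray) code to its plain binary index."""
    binary = gray
    while gray:
        gray >>= 1
        binary ^= gray
    return binary

def decode_pixel(observed_bits) -> int:
    """Recover the projector column that lit this camera pixel.

    observed_bits -- one boolean per projected pattern, widest
    stripe (most significant bit) first, forming a Gray codeword.
    """
    gray = 0
    for bit in observed_bits:
        gray = (gray << 1) | int(bit)
    return gray_to_binary(gray)

# A pixel that sees the stripe sequence bright-bright-dark (Gray code
# 110) lies in projector column 4.
print(decode_pixel([1, 1, 0]))
```

Gray codes are popular here because adjacent columns differ in a single bit, so a small decoding error displaces the correspondence by at most one stripe.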

There are two main methods of pattern generation:
\begin{itemize}
	\item Laser interference: by changing the angle between two planar laser beams it is possible to create different types of patterns.

	\item Projection: patterns are generated by a display (LCD, LCoS or DLP) within a projector using incoherent light.
\end{itemize}

In addition to light pattern calibration, structured light techniques also require calibration of the light sources and light beams. These scanners are very fast, with some models providing up to 120 frames per second, but they too have problems with reflections and transparent surfaces.

\subsection{Projectors}
Recently there has been growing interest in projector miniaturization. These new devices, which can be smaller than a smartphone or even integrated into one, are known by different names: handheld projectors, mobile projectors, pico projectors, pocket projectors or micro projectors.

\begin{figure}[!ht]
    \begin{center}
        \includegraphics[scale=0.6]{images/dany-img006.jpg}
    \end{center}
    \caption[A pico projector]{A pico projector.}
    \label{fig:pico-projector}
\end{figure}

These devices are composed of different components:
\begin{itemize}
	\item Electronics system: its task is to transform the image into an electronic signal.

	\item Light source: the light can assume different colours and intensities.

	\item Combiner optics: this unit is responsible for combining the different light paths into a single path.

	\item Scanning mirror: the image is built up and projected one pixel at a time by steering the combined beam across the projection surface.

	\item Battery:  currently most pico projectors can run for about an hour and can recharge in less than four hours.
\end{itemize}

In the following subsections we will describe the three main technologies that can be employed in the construction of these devices, often combined with colour-sequential (RGB) LEDs or lasers.
Laser projectors produce images that are always in focus, no matter how distant the projection surface is.
 
\subsubsection{DLP (Digital Light Processing)}
This type of pico projector exploits a semiconductor chip (called a DMD, Digital Micromirror Device) in order to control the light. The chip uses millions of microscopically small mirrors, each of which can represent one or more image pixels. The number of mirrors corresponds to the resolution of the projected image, and it is not uncommon to see models that reach a 1920x1080 resolution.
There are at present two DLP versions: single-chip and three-chip. Both assure higher quality than LCD technology, great ANSI contrast and lower power consumption, but neither eliminates a problem known as the rainbow effect: flashes of red, blue and green shadows that can be perceived mostly when high-contrast images moving from bright to dark colours are projected, but also at any given instant in time. Fortunately not everyone perceives this phenomenon, or not always, but for those who can see it, it is very distracting.

\subsubsection{Beam steering}
This technology is associated with lasers. Laser beam steering projectors create the image one pixel at a time: three laser beams have to be combined using optics and guided using mirrors. Like other laser-based projectors, laser beam steering systems assure images that are always in focus. Their main disadvantages are high speckle noise (noisy-looking images) and thermal instability.

\subsubsection{LCoS (Liquid Crystal on Silicon)}
As the name suggests, this is a technology based on liquid crystals instead of mirrors. The crystals are applied directly to the surface of a silicon chip coated with a reflective aluminium layer and a polyimide alignment layer.
LCoS projectors first of all generate an intense light beam. In order to be focused, and to let only the visible light through, the beam has to pass through a condenser lens. After this stage, the light is separated into its coloured components, each of which hits a specific LCoS device. The light reflected from these devices is passed through a prism and directed through a projection lens.
These devices can reach high resolutions, meaning that no individual pixels are visible when projecting, and they do not present issues like the rainbow effect; however, their contrast performance is lower compared to other solutions and they also have a limited lamp life.

\subsection{Processing Unit}
Some computer vision and image processing operations require a fast CPU and, in addition, GPU resources. Most of today's notebooks and netbooks can meet these requirements.
The CPUs of new mobile phones are increasingly fast. One-GHz and dual-core mobile phones are now a reality, and 2-GHz CPUs will soon be available, so the idea of using a mobile phone CPU to execute image processing tasks is no longer inconceivable.




\section{Software}
The device will have a lot to do with the identification and location of books, so it is necessary to explain some fundamental concepts about Optical Character Recognition (OCR), tracking and general object recognition.

\subsection{Optical Character Recognition (OCR)}
OCR systems are programs that aim to convert analogue text-based resources into digital ones. Some of them, apart from extracting the text, can also extract and reproduce the page layout.
They require calibration to read a specific font. During this phase the systems are given examples of images containing text in some format (ASCII, for example).
Every OCR software package can be trained to understand one or more languages. This is done by providing it with a dictionary containing the complete words of the selected language.
The Latin alphabet is the most widely used alphabet in the world. Currently, typewritten texts based on it are recognised with a high level of accuracy. With other alphabets, hand printing and cursive handwriting the accuracy is considerably lower. In some of these cases, without the use of contextual analysis at the word, sentence or paragraph level, text recognition could be impossible.
OCR software products first of all preprocess the input images, eliminating non-textual information and determining character blocks. These are further separated into basic components, pattern-recognised and compared to the words contained in the dictionaries.

Good recognition accuracy depends on several factors:
\begin{itemize}
	\item Quality of the dictionary.

	\item Text alignment.

	\item Text uniformity.

	\item Text clarity: words in a ruined or dirty book, or words containing faded characters, can be very complicated to recognise.

	\item Layout: multiple-column text and the presence of other graphical elements can cause issues.

	\item Bit depth: scanned images should use a colour depth of at least 8 bits.
\end{itemize}
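As a toy illustration of the matching step described above (observed character blocks compared against stored patterns), the sketch below classifies a 3x3 binary glyph by nearest-template Hamming distance. The templates are hypothetical and vastly simpler than the features a real OCR engine uses:

```python
# Hypothetical 3x3 binary glyph templates, flattened row by row.
TEMPLATES = {
    "I": (0, 1, 0,
          0, 1, 0,
          0, 1, 0),
    "L": (1, 0, 0,
          1, 0, 0,
          1, 1, 1),
    "T": (1, 1, 1,
          0, 1, 0,
          0, 1, 0),
}

def recognise(glyph):
    """Return the template character with the smallest Hamming distance."""
    def distance(template):
        return sum(a != b for a, b in zip(glyph, template))
    return min(TEMPLATES, key=lambda ch: distance(TEMPLATES[ch]))

# A slightly degraded "T" (one faded pixel) is still closest to "T".
noisy_t = (1, 1, 1,
           0, 1, 0,
           0, 0, 0)
print(recognise(noisy_t))
```

This also makes the text-clarity factor above concrete: each faded pixel increases the distance to the correct template, until some other template becomes the nearest match.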

 
\subsection{Tracking and object recognition}
At some point we will need object recognition and tracking mechanisms. The first is the task of recognising objects in images or video sequences, possibly estimating their position. By tracking, instead, we mean the precise and continuous position-finding of targets; from our point of view there are many possible applications, starting from text tracking.

\subsubsection{Object recognition techniques}
Object recognition algorithms generally adopt representations or models to find and capture the desired objects. There are three main approaches:
\begin{itemize}
	\item Geometry based: using 3D geometric models of objects it is possible to predict their projected shape in 2D images, but recent studies have shown that these methods work well only under limited lighting and pose conditions.
	\item Appearance based: central to these methods is the computation of eigenvectors from a set of vectors, each of which consists of grey-scale pixel values and represents one particular object. Each object image can then be represented by a linear combination of these eigenvectors (the well-known eigenfaces, in the case of face images). It has been shown that these methods are effective under different viewpoints and illumination changes.
	\item Feature based: recognition is viewed as a classification problem. Feature-based algorithms construct representations of the images and find the nearest models, which are directly comparable.
\end{itemize}
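The core computation of the appearance-based approach (finding the principal directions of variation of a set of image vectors) can be sketched with power iteration. This is a toy illustration on tiny vectors, not the implementation of any system cited here:

```python
def dominant_eigenvector(images, steps=100):
    """Power iteration on the covariance of mean-centred image vectors.

    images -- list of equal-length grey-level pixel tuples.  The result
    is the first eigenvector ("eigenface"): the direction along which
    the training images vary the most.
    """
    n = len(images[0])
    mean = [sum(img[i] for img in images) / len(images) for i in range(n)]
    centred = [[img[i] - mean[i] for i in range(n)] for img in images]

    v = [1.0] * n
    for _ in range(steps):
        # Apply the covariance matrix implicitly: C v = sum_x (x . v) x
        w = [0.0] * n
        for x in centred:
            dot = sum(xi * vi for xi, vi in zip(x, v))
            for i in range(n):
                w[i] += dot * x[i]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v

# Two-pixel "images" that vary only in their first component: the
# dominant eigenvector points along that axis.
images = [(0.0, 5.0), (2.0, 5.0), (4.0, 5.0)]
v = dominant_eigenvector(images)
print([round(abs(c), 3) for c in v])  # -> [1.0, 0.0]
```

Projecting a new image onto a handful of such eigenvectors yields the compact linear-combination representation mentioned above.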

\subsubsection{Visual Tracking techniques}
The whole process of object tracking is carried out by components called trackers. In a typical visual tracker we can distinguish two major components:
\begin{itemize}
	\item Target representation and localization: this is mostly a bottom-up process and usually does not require high computational resources.
	\item Filtering and data association: this is mostly a top-down process and involves more complex computation.
\end{itemize}
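To make the filtering component concrete, the sketch below implements a toy one-dimensional particle filter, a common filtering approach in visual tracking. The noise model and all parameters are illustrative assumptions:

```python
import math
import random

def particle_filter_step(particles, weights, measurement, noise=1.0, rng=None):
    """One predict-update-resample cycle of a toy 1-D particle filter.

    particles   -- hypothesised target positions
    weights     -- current belief in each hypothesis
    measurement -- observed target position for this frame
    """
    rng = rng or random.Random(0)
    # Predict: diffuse every hypothesis with process noise.
    particles = [p + rng.gauss(0.0, noise) for p in particles]
    # Update: reweight each particle by its agreement with the
    # measurement (Gaussian likelihood, sigma = 1), then normalise.
    weights = [w * math.exp(-((p - measurement) ** 2) / 2.0)
               for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    particles = rng.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

# Track a target near position 50 for three frames, starting from a
# uniform belief over [0, 100].
rng = random.Random(42)
particles = [rng.uniform(0.0, 100.0) for _ in range(500)]
weights = [1.0 / 500] * 500
for measurement in (50.0, 51.0, 52.0):
    particles, weights = particle_filter_step(particles, weights,
                                              measurement, rng=rng)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # the particle cloud concentrates near the target
```

In a 2D tracker the state would be a position (and possibly scale) in the image, and the likelihood would come from comparing the target representation at each hypothesised position with the current frame.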



\section{Literature review}
Camera-projector systems have been used for different purposes. In this section we give a brief overview of the relevant literature on the aspects covered by the project.

\subsection{Underkoffler and Ishii.}
Underkoffler and Ishii wrote some of the first relevant papers on camera-projector systems. In \cite{4}, the authors proposed a general-purpose optics simulator called \textit{Illuminating Light}. It made use of components like lasers, mirrors and lenses and, in particular, of camera-projectors called I/O Bulbs: devices that simultaneously projected high-resolution information and captured video data of the same region.

\subsection{Cauchard, Fraser, Alexander and Subramanian.}
In \cite{5}, the authors illustrated a technique to dynamically offset the throw angle of a mobile projector from the handset's screen. Currently, many manufacturers place these projectors above the screen. The paper showed that a different approach, like the one they described, could offer greater benefits by allowing the user to choose where to display the information.

\subsection{Baldauf and Fröhlich.}
Most camera-projector systems have relied on a desktop or laptop computer, sometimes carried in a backpack when used on the move. In order to be portable, a device should preferably be very small. In \cite{6} the authors proposed a wearable camera-projector system that supported hand-gesture manipulation of the projected content through a Symbian OS based phone (a Nokia N95). During their experiments the authors attached coloured markers to the user's fingers to simplify gesture and finger detection. The device was able to detect the presence of a marker in a frame and its change of position across frames. From this information it was possible to recognise several gestures.

\subsection{Crasto, Kale and Jaynes.}
The past literature has also dealt with camera-projector systems applied to books and libraries. In \cite{7}, Crasto et al. discussed a method to monitor the state of a real-world library shelf by means of a projector-camera device. The described system supports different tasks, such as user queries for the presence of books on the shelf and the projection of related information about the selected book. It does not make use of Optical Character Recognition techniques to match book spines but relies on a different approach based on the planar parallax technique.

\subsection{Chen, Tsai, Girod, Hsu, Kim and Singh.}
Another method used to identify books on bookshelves is the one proposed in \cite{8}. Their system is composed of a smartphone and a server. The first is used to take pictures of a bookshelf and to transmit them to the second. The server's task is to analyse the photo in order to recognise individual book spines and to transmit information about a particular book back to the phone.
The technique used to recognise book spines is feature-based and consists in analysing line structures and comparing the results against a vocabulary-tree-structured database of spines.

\subsection{Löchtefeld,  Gehring,  Schöning and Krüger.}
The aim of the system proposed in \cite{9} is to help the user search for a desired object on a shelf and to assist him with additional projected information about it, using a semantic zoom technique. In order to identify the books, the authors tagged each of them with visual markers containing the ISBN, while to project the information they used a pico projector connected to a phone, the Nokia N95 8GB.

\subsection{Majid Mirmehdi, Carlos Merino}
Text extraction from indoor and outdoor environments has rarely been dealt with in the past literature. In \cite{10} the authors proposed a near real-time text tracking system suitable for both indoor and outdoor environments. The method is based on extracting text regions using a novel tree-based connected component filtering approach, combined with the Eigen-Transform texture descriptor. Every time a new text entity (a group of one or more words that appear together in an image as a salient feature) is detected, particle filter tracking is used to follow it from frame to frame. The component of the system responsible for tracking a certain text entity is called a tracker. Whenever the text can no longer be detected the tracker is removed; this means that, in case of full occlusion, a new tracker is started once the text is back in view.
Their experiments showed that the system's performance when using tracking was close to 10 FPS on average, and up to 15 FPS on simple scenes with little background. Complex backgrounds also affect the number of false positives.

\subsection{Conclusions.}
As we have seen, camera-projector systems and the task of recognising books, in particular on bookshelves, have been considered before, but none of these studies has explored the aims pursued by this project.



