\documentclass[a4paper]{article}

\usepackage[utf8]{inputenc}
\usepackage[intlimits]{amsmath}
\usepackage{amssymb}
\usepackage{verbatim}
\usepackage{graphicx}
\usepackage{pdfpages}
\usepackage{float}
\usepackage[all]{xy}
\usepackage{parskip}

\addtolength{\voffset}{-35pt}
\addtolength{\textheight}{75pt}

\newcommand{\unit}[1]{\ensuremath{\, \mathrm{#1}}}
\newcommand{\vect}[1]{\boldsymbol{#1}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\title{Humanoid robotics 2010 (TIF160)\\ Assignment 5}
\author{Group 2: Sebastian Johansson \and Joakim Odengård \and Ronnie Sjögren
\and Ilya Zorikhin-Nilsson}
\date{\today}

\maketitle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{5.1 Skin pixel detection}
The solution to assignment 5.1 is divided into two separate programs. The first
program, found in \texttt{SkinColorApp\_5\_1}, is used for analyzing images, and
creating rules for skin-pixel detection. The second program, found in
\texttt{BinocularVision\_5\_1}, is used for real-time skin pixel detection in webcam
images.

\subsection*{Skin colors in YCbCr}
The program reads images in which all non-skin pixels have been masked out
with white and saves the RGB values of the remaining skin pixels into a list
named the ``skin color database''. For each image that is loaded, it sweeps
over all pixels and checks whether they already exist in the database, which
can take a long time for large images. The colors in the list are plotted in
a graph where the x-coordinate corresponds to the Cb-value of the color,
shifted to start at zero (i.e.\ the minimum Cb-value is subtracted from the
true Cb-value), and the same is done with the Cr-value, which is taken as the
y-coordinate pointing downwards. There are also buttons for saving and
loading the color database.
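The conversion behind the graph coordinates can be sketched as follows. This is a Python illustration only (the actual program is written in C\#), and it assumes the standard BT.601 full-range coefficients, which the report does not state explicitly:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert an 8-bit RGB triple to (Y, Cb, Cr) using BT.601 full-range coefficients."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def graph_coords(colors):
    """Shift (Cb, Cr) pairs so the smallest Cb and Cr map to the graph origin."""
    cbs = [rgb_to_ycbcr(*c)[1] for c in colors]
    crs = [rgb_to_ycbcr(*c)[2] for c in colors]
    cb_min, cr_min = min(cbs), min(crs)
    # x = Cb - min(Cb), y = Cr - min(Cr), with the y-axis pointing downwards
    return [(cb - cb_min, cr - cr_min) for cb, cr in zip(cbs, crs)]
```

A neutral gray maps to the center of the chroma plane, $(Cb, Cr) = (128, 128)$, which is a convenient sanity check for the coefficients.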

To identify skin pixels, the program checks whether their Cb,Cr-values fall
inside rectangles defined on the graph and stored in a list named ``skin
color areas''. One can manually add a rectangle by pressing the button
labeled ``Add rule'' and clicking inside the graph, and delete the last
rectangle in the list with the ``Delete last rule'' button. The rectangles
can also be drawn in the graph. Drawing the rectangles does not erase any
already drawn ones, so the program window has to be minimized or covered by
another window to clear them. The rules can also be saved and loaded; the
save button additionally writes a text document (besides the raw data) with
the coordinates and dimensions of the rectangles.
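The rule check itself reduces to point-in-rectangle tests against the rule list. A minimal Python sketch (the field names are illustrative; in our program the rules live in the \texttt{SkinColorAreas} class):

```python
class SkinRect:
    """An axis-aligned rectangle in the (Cb, Cr) graph; field names are illustrative."""
    def __init__(self, cb, cr, width, height):
        self.cb, self.cr = cb, cr              # top-left corner (y-axis points down)
        self.width, self.height = width, height

def matches_rules(cb, cr, rules):
    """True if the (Cb, Cr) point lies inside any rectangle in the rule list."""
    return any(r.cb <= cb <= r.cb + r.width and
               r.cr <= cr <= r.cr + r.height
               for r in rules)
```

`any()` stops at the first matching rectangle, which mirrors the early loop exit described below for the robot implementation.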

To try out the new rules, one can load an image from which the program tries
to identify skin pixels by calculating their Cb,Cr-values and checking
whether they fall inside the rectangles. For the implementation on the robot,
the separate functions have been merged into the detection function in order
to minimize the number of function calls. The time this check takes depends
on the number of rectangles defined and on the amount of skin pixels in the
image, since the loop over the rectangles breaks as soon as a pixel is
classified as skin. Detection speed was not an issue when tested. Many poor
images, over-exposed or shot in bad lighting conditions, were used when
building the color database, which is why the skin-pixel area is quite large.
A check should also be added to exclude too dark and too bright pixels
(i.e.\ an interval for the Y-component).
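Put together, the per-image detection loop can be sketched as below in Python. The conversion is inlined to keep function calls to a minimum, the rectangle loop exits on the first hit, and the Y-interval bounds shown are hypothetical placeholders, since no tuned values exist yet:

```python
def detect_skin(image, rules, y_min=40, y_max=230):
    """Return a same-sized mask with True for pixels classified as skin.

    image: rows of (r, g, b) tuples; rules: list of (cb0, cr0, w, h) rectangles.
    The Y-interval defaults are illustrative, not tuned values.
    """
    mask = []
    for row in image:
        mask_row = []
        for r, g, b in row:
            # inline BT.601 YCbCr conversion to avoid per-pixel function calls
            y  =       0.299    * r + 0.587    * g + 0.114    * b
            cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
            cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
            skin = False
            if y_min <= y <= y_max:            # reject too dark / too bright pixels
                for cb0, cr0, w, h in rules:
                    if cb0 <= cb <= cb0 + w and cr0 <= cr <= cr0 + h:
                        skin = True
                        break                  # early exit on first matching rule
            mask_row.append(skin)
        mask.append(mask_row)
    return mask
```

The worst case (no skin in the image) tests every rectangle for every pixel, which matches the observation that the cost depends on both the rule count and the amount of skin present.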

Let $f(x,y)$ be the undistorted image and $h(x,y)$ a blurring function; we
want to recover $f(x,y)$ from the degraded image
$g(x,y) = f(x,y) * h(x,y) + n(x,y)$, where $n(x,y)$ is additive noise.
Taking the Fourier transform turns the convolution into a product,
$G(u,v) = F(u,v)H(u,v) + N(u,v)$, so dividing by $H(u,v)$ gives the
pseudo-inverse estimate

\begin{equation}
\label{pseudo_inverse} G(u,v) / H(u,v) = F(u,v) + N(u,v)/H(u,v),
\end{equation}

which shows that the noise term is amplified wherever $H(u,v)$ is small.
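As a quick numerical check of equation (\ref{pseudo_inverse}), the NumPy sketch below blurs a random image with a circular $3 \times 3$ box kernel and recovers it by dividing in the frequency domain. It covers only the noise-free case and guards frequencies where $|H(u,v)|$ is numerically tiny; the kernel and threshold are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
f = rng.random((n, n))                  # "true" image f(x, y)
h = np.zeros((n, n))
h[:3, :3] = 1.0 / 9.0                   # 3x3 box blur (circular convolution)

H = np.fft.fft2(h)
G = np.fft.fft2(f) * H                  # g = f * h, here without noise

# pseudo-inverse: F = G / H, but only where |H| is safely above zero
eps = 1e-6
safe = np.abs(H) > eps
F_hat = np.zeros_like(G)
F_hat[safe] = G[safe] / H[safe]
f_hat = np.real(np.fft.ifft2(F_hat))    # recovered image
```

With noise present, the $N(u,v)/H(u,v)$ term blows up at exactly the frequencies the guard protects, which is why a plain pseudo-inverse is rarely usable without regularization (e.g.\ a Wiener filter).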

\subsection*{Real-time skin-pixel detection}
The program uses images from two connected webcams and filters out all pixels
that could represent skin. For the skin-pixel detection, either the rules
from the Skin Color App (which can be imported) or a simple YCbCr-rule is used.
The program structure is based on the GPRBS concept with one cognitive brain
process, which does skin-pixel detection. For details see appendix \ref{bp1}.

The source code is based on the Binocular Vision example, with an Image
Processing Library and a main project with the brain structure. The Image
Processing Library has been modified to include the YCbCr skin-color detection
(method: \texttt{GetSkinPixelsYCbCr()}) and YCbCr skin-color detection using
rules generated by the Skin Color App (method: \texttt{GetSkinPixelsYCbCr2()}
and class: \texttt{SkinColorAreas}). The cognitive brain process is in class:
\texttt{FaceDetection}.

\section*{5.2 Face detection}
The solution to assignment 5.2 can be found in \texttt{BinocularVision\_5\_2}.
5.2 is an extension of the ``Real-time skin-pixel detection'' part of 5.1. In
addition to detecting skin, the program detects faces. It uses two brain
processes, one for speech synthesis and one for face detection. The speech
synthesis process is very simple. It calls the speech synthesis library and then
deactivates itself.

The brain process for face detection first uses the skin detection from 5.1,
including a binarization of the images. After removing any single pixels,
the images are searched for connected components. The connected-components
search is implemented with a two-pass algorithm (with a third pass introduced
for simplicity and efficiency). Small connected components are then filtered
out, and finally the shape of the remaining components is examined. Any
component with the right proportions is considered a face. If a face appears
in front of any of the cameras, the speech synthesis is activated. For more
details on the brain processes, see appendix \ref{bp2}.
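The two-pass labelling can be outlined as follows. This is a Python sketch of the general technique only (our implementation is in C\#, in \texttt{ConnectedComponent} and \texttt{DisjointSets}): the first pass assigns provisional labels and records equivalences in a union-find forest, and the second pass replaces each label with its root:

```python
def label_components(img):
    """Two-pass connected-component labelling (4-connectivity) on a binary image."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]                            # union-find over label ids; 0 = background

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    next_label = 1
    # Pass 1: assign provisional labels and record label equivalences.
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y > 0 else 0
            left = labels[y][x - 1] if x > 0 else 0
            if not up and not left:
                parent.append(next_label)   # start a new set
                labels[y][x] = next_label
                next_label += 1
            elif up and left:
                ru, rl = find(up), find(left)
                labels[y][x] = min(ru, rl)
                if ru != rl:
                    parent[max(ru, rl)] = min(ru, rl)   # union the two sets
            else:
                labels[y][x] = up or left
    # Pass 2: resolve every provisional label to its root.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

After labelling, filtering small components and checking proportions amounts to collecting each label's pixel count and bounding box and comparing the width/height ratio against a face-like range.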

The source code is based on the ``Real-time skin-pixel detection'' source code
from 5.1. The ImageProcessingLibrary is here extended further. It includes code
for doing connected components analysis (in class: \texttt{ConnectedComponent}
which also uses class: \texttt{DisjointSets}). A library for speech synthesis
(our solution from assignment 2) is also included. The cognitive brain process
for face detection is in class: \texttt{FaceDetection} and the brain process
for speech synthesis is in class: \texttt{TextToSpeechBrainProcess}.

\newpage
\appendix
\section{Brain processes - Skin pixel detection}
\label{bp1}
\begin{verbatim}
5.1
==========

BrainProcess: SkinDetection (cognitive)
[FaceDetection.cs]
---------------------------------------
State 0: Read new Image, IL, from left webcam.
         Read new Image, IR, from right webcam.
         GOTO State 1

State 1: Detect skinpixels in IL and IR.
         Save Images with skinpixels (and black background)
            to public PIL and PIR.
         GOTO State 0
\end{verbatim}

\newpage
\section{Brain processes - Face detection}
\label{bp2}
\begin{verbatim}
5.2
==========

BrainProcess: FaceDetection (cognitive)
[FaceDetection.cs]
---------------------------------------
State 0: SetStatevar(faceDetectedPreviously, false)
         GOTO State 1

State 1: Read new Image, IL, from left webcam.
         Read new Image, IR, from right webcam.
         GOTO State 2

State 2: Detect skinpixels in IL and IR. (Binarize)
         GOTO State 3

State 3: Remove single pixels.
         GOTO State 4

State 4: Extract connected components from IL and IR.
         Remove small connected components.
         Remove components that do not have face shape.
         GOTO State 5

State 5: face in IL if IL has any components left
         face in IR if IR has any components left
         Let faceDetected = (face in IL OR face in IR)
         if ( faceDetected AND (NOT faceDetectedPreviously) )
            Save Images with pixels of detected face and black
                background to public images PIL and PIR.
            GOTO State 6
         else if ( faceDetected AND faceDetectedPreviously )
            Save Images with pixels of detected face and black
                background to public images PIL and PIR.
            GOTO State 1
         else
            Save Images with black background only to public
                images PIL and PIR.
            GOTO State 0

State 6: SetUtility(SpeechHello, 1)
         SetStatevar(faceDetectedPreviously, true)
         GOTO State 1

BrainProcess: SpeechHello (output-sound)
[TextToSpeechBrainProcess.cs]
----------------------------------------
State 0: Generate sound and speak.
         GOTO State 1

State 1: SetUtility(SpeechHello, -1)
\end{verbatim}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}

