\documentclass[a4paper]{article}

\usepackage[utf8]{inputenc}
\usepackage[intlimits]{amsmath}
\usepackage{amssymb}
\usepackage{verbatim}
\usepackage{graphicx}
\usepackage{pdfpages}
\usepackage{float}
\usepackage[all]{xy}
\usepackage{parskip}

\addtolength{\voffset}{-35pt}
\addtolength{\textheight}{75pt}

\newcommand{\unit}[1]{\ensuremath{\, \mathrm{#1}}}
\newcommand{\vect}[1]{\boldsymbol{#1}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\title{Humanoid robotics 2010 (TIF160)\\ Assignment 4}
\author{Group 2: Sebastian Johansson \and Joakim Odengård \and Ronnie Sjögren
\and Ilya Zorikhin-Nilsson}
\date{\today}

\maketitle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{4.1 Motion Detection}
The solution to 4.1 can be found in \texttt{Hubert\_4\_1}. The program uses
images from a single webcam to detect motion and uses speech synthesis to notify
the surroundings of a large enough detected motion. 

The program structure is based on the GPRBS concept with one cognitive brain
process for motion detection and one brain process for speech synthesis. There
is also a motor brain process, but it lacks states and its sole function is to
move the robot to a starting position at initialization.
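The GPRBS structure above can be pictured as a set of state machines whose activation is governed by utility values. The following Python sketch is purely illustrative of that idea; it is not the code used in \texttt{Hubert\_4\_1}, and all names in it are our own:

```python
class BrainProcess:
    """One GPRBS-style brain process: a state machine gated by a utility value."""
    def __init__(self, name):
        self.name = name
        self.utility = 0.0   # the process runs only while utility > 0
        self.state = 0

    def step(self):
        """Execute one tick of the current state; override per process."""
        raise NotImplementedError

class Scheduler:
    """Runs every active brain process once per tick (pseudo-parallelism)."""
    def __init__(self, processes):
        self.processes = {p.name: p for p in processes}

    def set_utility(self, name, value):
        self.processes[name].utility = value

    def tick(self):
        for p in self.processes.values():
            if p.utility > 0:
                p.step()
```

A process deactivates itself by setting its own utility to a negative value, which is how the speech synthesis process below switches off after speaking.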

The brain process for speech synthesis is very simple: it calls the speech
synthesis library (from assignment 2) and then deactivates itself.

The brain process for motion detection starts by reading a new image from the
webcam and converting it to gray-scale. The image is then binarized, with
white pixels marking motion against a black background. This is done through
frame differencing with a running average; that is, the current frame is
compared with a slowly updated background. After binarization, isolated
pixels are removed and the image is searched for connected components. If
there exists a component larger than a threshold $T$, motion is detected and
the speech synthesis is activated. For more details on the brain processes,
see appendix \ref{bp1}.
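The binarization step can be illustrated with a short Python sketch of frame differencing against a running average. The sketch is ours, and the parameter values (\texttt{alpha}, \texttt{diff\_threshold}) are illustrative, not those used in \texttt{Hubert\_4\_1}:

```python
import numpy as np

def frame_diff_running_average(frame, background, alpha=0.05, diff_threshold=30):
    """Binarize motion by comparing a frame to a slowly updated background.

    frame, background: 2-D grayscale arrays. Returns (motion mask, new background).
    """
    if background is None:
        # First frame: no motion yet, adopt the frame as the background.
        return np.zeros_like(frame, dtype=bool), frame.astype(float)
    # Pixels that differ strongly from the background are marked as motion.
    mask = np.abs(frame - background) > diff_threshold
    # Running average: the background drifts slowly toward the current frame.
    background = (1 - alpha) * background + alpha * frame
    return mask, background
```

Because the background updates slowly, a person who stops moving fades into the background after a while and no longer triggers detection.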

\subsection*{Source Code}
The source code is based on the HubertTestProgram, with an Image Processing
Library, a Text to Speech Library, and a main project containing the brain
structure. The Image Processing Library has been extended and now includes
motion detection with a running average (method
\texttt{FrameDiffRunningAverage}) and connected-component labelling (method
\texttt{ConnectedComponents}, which uses the class \texttt{DisjointSets}). The
brain process for speech synthesis is in the class
\texttt{TextToSpeechBrainProcess} and the brain process for motion detection is
in the class \texttt{DetectMovingPerson}.
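The labelling can be sketched in Python with a union-find structure, mirroring the \texttt{ConnectedComponents}/\texttt{DisjointSets} split described above. This is our own sketch, not the library code:

```python
def connected_components(mask):
    """Label 4-connected foreground pixels with a disjoint-set (union-find).

    mask: list of rows of 0/1 values. Returns a dict: root pixel -> component size.
    """
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving keeps trees shallow
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    h, w = len(mask), len(mask[0])
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            parent.setdefault((y, x), (y, x))
            # Merge with already-visited foreground neighbours (left, above).
            if x > 0 and mask[y][x - 1]:
                union((y, x), (y, x - 1))
            if y > 0 and mask[y - 1][x]:
                union((y, x), (y - 1, x))
    sizes = {}
    for p in parent:
        r = find(p)
        sizes[r] = sizes.get(r, 0) + 1
    return sizes
```

The size of the largest component is then compared against the threshold $T$ to decide whether motion was detected.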

\newpage
\section*{4.2 Surveillance}
The solution to 4.2 can be found in \texttt{Hubert\_4\_2}. It is based on the
solution to 4.1. The brain structure is mostly the same, but the motor brain
process now has states and controls the activation of the speech synthesis
process.

The motor brain process controls the motion of the Hubert robot. It activates
the (cognitive) motion detection process when the head webcam is at a
standstill. If the motion detection process detects motion, the robot will
turn its body and point at the motion. The head also turns, so that the
webcam remains pointed in the same direction while the body moves. If no
motion is detected, the motion detection process is stopped, the head is
turned either left or right, and the motion detection process is then
restarted. During all movements, the motor process maintains an estimate of
the absolute (relative to the mount point) pose of the webcam. For more
details, see appendix \ref{bp2}.
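The pose bookkeeping amounts to simple angle arithmetic. Assuming, for illustration, that base and head rotate in the same plane and angles simply add (the function names are ours, not from the program):

```python
def webcam_absolute_angle(base_angle, head_angle):
    """The webcam's absolute direction (relative to the mount point) is the
    base rotation plus the head rotation relative to the base."""
    return base_angle + head_angle

def compensate_head_for_base(base_delta, head_angle):
    """Keep the webcam pointing at the same absolute direction while the base
    rotates by base_delta: rotate the head by the opposite amount."""
    return head_angle - base_delta
```

This is the relation that lets the body turn toward a detected person while the webcam keeps watching the same spot.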

\subsection*{Source Code}
The motor brain process is implemented in the class \texttt{Movement}. It uses
the method \texttt{RotateHeadRelativeToCurrentPose} to rotate the head and the
webcam; if needed to reach the desired pose, the base is rotated as well. To
rotate the base so that the robot can point in the direction of a detected
movement, the method \texttt{RotateBaseToAbsolutePose} is used. This method
also moves the head to compensate for the base movement.

Since the Hubert robot will only accept a command every 200 ms or so, the
classes \texttt{SerialCommunication} and \texttt{BasicStamp} have been
modified. Commands to the robot are now buffered, and a separate thread sends
one buffered command every 300 ms.
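The buffering scheme can be sketched in Python with a queue and a background thread. This sketch only illustrates the pacing idea behind the modified \texttt{SerialCommunication}/\texttt{BasicStamp} classes; the \texttt{send} callback stands in for the real serial write:

```python
import queue
import threading
import time

class BufferedSerial:
    """Queue robot commands and send one every `interval` seconds from a
    background thread, so callers never exceed the robot's command rate."""
    def __init__(self, send, interval=0.3):
        self._queue = queue.Queue()
        self._send = send          # placeholder for the real serial write
        self._interval = interval
        self._running = True
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def enqueue(self, command):
        """Callers return immediately; the thread paces the actual sends."""
        self._queue.put(command)

    def _run(self):
        while self._running:
            try:
                cmd = self._queue.get(timeout=self._interval)
            except queue.Empty:
                continue
            self._send(cmd)
            time.sleep(self._interval)  # enforce the gap between commands

    def close(self):
        self._running = False
```

Decoupling command production from transmission lets the brain processes issue movements at their own pace without stalling on the serial link.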

\newpage
\appendix
\section{Brain processes - Motion Detection}
\label{bp1}
\begin{verbatim}
4.1
============

BrainProcess: DetectMovingPerson (cognitive)
--------------------------------------------
State 0: Set background image, BI to null
         GOTO State 1

State 1: Read new Image, I, from webcam.
         Convert I to grayscale.
         GOTO State 2

State 2: Frame diff with running average (7.24 & 7.25)
            Diff from I and BI.
         GOTO State 3

State 3: Find connected components.
         Find largest component.
         if largest component > T
              GOTO State 4
         else
              GOTO State 1

State 4: SetUtility(TextToSpeechBrainProcess, 1)
         GOTO State 1

BrainProcess: TextToSpeechBrainProcess (output-sound)
-----------------------------------------------------
State 0: Generate sound and speak.
         GOTO State 1

State 1: SetUtility(TextToSpeechBrainProcess, -1)
\end{verbatim}

\section{Brain processes - Surveillance}
\label{bp2}
\begin{verbatim}
4.2
============

BrainProcess: DetectMovingPerson (cognitive)
----------------------------------------------------
State 0: Set background image, BI to null.
         GOTO State 1

State 1: Read new Image, I, from webcam.
         Read current estimated absolute direction of webcam.
         Convert I to grayscale.
         GOTO State 2

State 2: Frame diff with running average (7.24 & 7.25)
            Diff from I and BI.
         GOTO State 3

State 3: Find connected components.
         Find largest component.
         if largest component > T
              GOTO State 4
         else
              GOTO State 1

State 4: Find direction to center of largest component.
         Calculate absolute direction, d, to largest component.
         Set directionToPerson = d
         Set personDetected = true

BrainProcess: TextToSpeechBrainProcess (output-sound)
-----------------------------------------------------
State 0: Generate sound and speak.
         GOTO State 1

State 1: SetUtility(TextToSpeechBrainProcess, -1)

BrainProcess: Movement (motor)
-----------------------------------------------------
State 0: if TimeInState > 1.0 s
             SetUtility(DetectMovingPerson, 1)
         if TimeInState > 4.0 s
             SetUtility(DetectMovingPerson, -1)
             if DetectMovingPerson.personDetected
                 GOTO State 1
             else
                 GOTO State 2

State 1: Move base to point at detected person
         Move head to maintain direction
         Move arm to pointing position
         SetUtility(TextToSpeechBrainProcess, 1)
         SetUtility(Movement, -1)

State 2: if turnLeft
             Move head left
             if MaxLeft
                 Set turnLeft = false
             else
                 GOTO State 0
         else
             Move head right
             if MaxRight
                 Set turnLeft = true
             else
                 GOTO State 0
\end{verbatim}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}

