\documentclass{article}
\usepackage{KJN}
\usepackage{a4wide,changebar}
\usepackage[numbered,framed]{mcode}

\title{Advanced Vision Assignment}
\author{Daniel Mankowitz: S1128165, George Dita: S1136415}
\date{3/4/2012}

\begin{document}
\maketitle

\section{Introduction}
\label{sec:introduction}
This report details the algorithms used to extract and overlay planes from a Kinect range-image video. These include detecting the background and transferring an image onto it. In addition, foreground person detection is detailed, as well as detection of the briefcase that the person carries while walking past the camera. A video sequence is transferred onto the briefcase as the person walks, and the algorithm used to perform this procedure is also detailed. The performance of each algorithm is analysed and improvements are suggested.\\


\section{Background Detection}
\label{sec:backDetect}

The first stage in replacing the background with an image of our choosing is identifying the back plane. Computing the plane equation for the back wall requires a selection of 3D points from that region. We first manually select the four corner points that define the quadrilateral section of the back plane onto which the image will be overlaid. We selected the corner points (41, 184), (41, 427), (473, 453) and (472, 158), as seen in Figure \ref{fig:corners}. The next step is selecting a sample of points for estimating the back plane. We draw a random sample of 100 points from the largest vertical inscribed rectangle, as seen in Figure \ref{fig:samplepoints}. We expect the noise at the edges of the quadrilateral defined by the corners to be significantly greater than in the centre. In addition, there is no range data at the top and bottom edges (the maximum Z depth is recorded there), which would skew the plane estimate considerably if points from those areas were used for fitting. The background is completely unobstructed in the first several frames of the video, and since the camera is fixed we can collect the 3D coordinates for plane estimation from the first frame alone.\\

\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/Background/StaticPoints.png}
    \caption{The corner points selected manually defining the back quadrilateral}
     \label{fig:corners}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/Background/samplepoints.png}
    \caption{The subset of points selected to determine the equation of the back plane} 
    \label{fig:samplepoints}
\end{minipage}
\end{figure}

We then use the 3D coordinates of the sampled points to compute the plane equation $\vec{n}\cdot\vec{x}+d=0$. We must take into account that the range data is noisy and the back wall is not a perfect plane, so the problem has no exact solution. We therefore treat it as an error minimisation task and use a total least squares (TLS) method to find $\vec{n}$ and $d$. The TLS approximation is computed by applying the singular value decomposition (SVD) to the scatter matrix $S = P^{T}P$, where $P$ is the $100 \times 4$ matrix of the 100 sampled points expressed in homogeneous coordinates. The plane parameters are given by the eigenvector of $S$ corresponding to the smallest eigenvalue. The output of this component is a $4 \times 1$ vector $v =[n_1, n_2, n_3, d]$ that is later used to separate the background from the foreground.\\
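The fitting step can be sketched as follows. This is a minimal illustration in Python with NumPy rather than our Matlab code, and the function name is ours; taking the SVD of $P$ directly yields the same null vector as eigen-decomposing the scatter matrix $S = P^{T}P$.

```python
import numpy as np

def fit_plane_tls(points):
    """Total least squares plane fit n.x + d = 0.

    points: (N, 3) array of 3D samples. Returns v = [n1, n2, n3, d],
    the right singular vector of the homogeneous point matrix P with
    the smallest singular value (equivalently, the eigenvector of
    S = P^T P with the smallest eigenvalue), scaled to a unit normal.
    """
    P = np.hstack([points, np.ones((points.shape[0], 1))])  # N x 4
    _, _, Vt = np.linalg.svd(P)
    v = Vt[-1]                        # vector for the smallest singular value
    return v / np.linalg.norm(v[:3])  # scale so the normal has unit length
```

With a unit normal, the perpendicular distance of a point $\vec{x}$ to the fitted plane is simply $|\vec{n}\cdot\vec{x}+d|$, which is the quantity thresholded in the later stages.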



%1. determin the 4 points statically/ use the first frame
%2. choose a reduced number of points from the inside of the plane and then find the plane equation using svd
%pics: image with the 4 points used
%pics: image with the number of points used in detecting the plane

\section{Foreground Person Extraction and Image Transfer}
\label{sec:imageTransfer}

The image transfer and the foreground person detection are performed simultaneously. We first compute the homography between the background picture (\textit{field.jpg}) and the projection of the back plane on $XOY$. This returns the projection matrix $P$ that maps $[i, j, 1]$ points in the video to $[x, y, 1]$ image points in \textit{field.jpg}. The \textit{field.jpg} scene is defined by the corner coordinates (1, 1), (1, 338), (450, 338), (450, 1), which are packed into an $XY$ matrix, while the projected region in the video is given by the four manually selected points from the previous stage, packed into a $UV$ matrix.\\

The $P$ matrix is computed using the direct linear transformation algorithm that solves an equation of the form: $x_k \propto P y_k$ for $k = 1, \ldots, N$. In our case $P$ is a $3 \times 3$ matrix with $N=4$ point matches. The equation can be re-written as $X = PY$, where the matrices $X$ and $Y$ contain the vectors $x_k$ and $y_k$ in their columns. \\
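The DLT step can be sketched as follows (a Python/NumPy illustration; our implementation is in Matlab, and the function name here is ours). Each point match contributes two rows to a design matrix $A$, and the flattened $P$ is the right singular vector of $A$ with the smallest singular value.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transformation sketch: dst ~ H @ src.

    src, dst: (N, 2) arrays of matching 2D points, N >= 4.
    Each match contributes two rows to A; the flattened 3x3 matrix is
    the right singular vector of A with the smallest singular value.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)  # defined up to scale
```

Since the result is defined only up to scale, mapped points must be de-homogenised (divide by the third coordinate) before use.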

A point is transferred from \textit{field.jpg} onto the back plane if two requirements are met. The first condition is that the Euclidean distance from the point to the estimated back plane is below 0.05. The Euclidean distances from the back plane to all points are computed and stored in a \textit{dist} matrix. The second condition is that the projection $[x, y]$ of the $[i, j]$ point in the video frame falls within the boundaries of the \textit{field.jpg} image. If both conditions are met, the RGB values of point $[x, y]$ are overlaid on the $[i, j]$ point. The result of an image overlay can be seen in Figure \ref{fig:overlayback}.

\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/Background/05back_orig.png}
    \caption{The original image of frame 5 before the image transfer.}
     \label{fig:originalback}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/Background/05back.png}
    \caption{The processed image of frame 5 after the image transfer.} 
    \label{fig:overlayback}
\end{minipage}
\end{figure}

It should be noted that the 0.05 threshold separates the back wall from the foreground person. However, several other sections of the scene also satisfy the restriction. As seen in Figure \ref{fig:planefit}, a large area on the left of the image is matched as belonging to the plane. This is a scene characteristic: the two components appear to belong to the same wall and are separated by a pillar or column. The smaller red area in the top right is attributed to noise. These components are filtered out thanks to the second restriction (the boundary check) imposed on the $[x, y]$ projection. \\

\begin{figure}[ht!]
%\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.7\textwidth]{../Drawings/Background/02plane.png}
    \caption{The red section represents background points that are within 0.05 Euclidean distance from the estimated back plane equation}
     \label{fig:planefit}
%\end{minipage}
\end{figure}

Our decision to loop through all the image points when performing the image transfer is justified by the need for the matrix of Euclidean distances ($dist$) from the back plane to each point. This matrix is used later for extracting the briefcase (\secref{sec:planCorners}): one method for detecting the briefcase thresholds the Euclidean distance from the back plane to extract a binary representation of several briefcase points. The approach is also an optimisation that decreases CPU load by performing several processing tasks (image transfer, computing distances, thresholding) in a single pass through the image points.

\section{Foreground Plane Detection}
\label{sec:forePlane}
In order to detect the rectangular briefcase in the foreground of the image, a number of steps need to be performed. These include finding the equation of the plane of the briefcase as well as the four corners of the briefcase. Once this has been achieved, the relevant video frame is then transferred onto the briefcase. These procedures will be discussed in depth in the sections to follow.\\

\subsection{Find the Plane and Detecting the briefcase}
\label{sec:planCorners}
%1. Find the closest point in the image (Z coordinate)
%2. Find points within a certain range of the closest point. Threshold those points
The first phase in calculating the equation of the plane is to generate relevant points that can be used to estimate it. It should be noted that the briefcase is the object closest to the camera. Thus, in order to find the plane equation for the briefcase, the pixels whose depth values are nearest the camera are used for the calculation. Initially, the pixel whose $(x,y,z)$ point is closest to the camera (i.e.\ the pixel with the most negative $Z$ value) is determined. This should correspond to a point on the briefcase. Using this point as a reference, pixels whose $Z$ values are at most a distance of $0.1$ from the reference point are selected. This effectively selects a subset of $(x,y,z)$ points that lie on the briefcase. The image is thresholded to obtain the pixels corresponding to these points. An example of this image is given for frame $20$ in \figref{fig:image1}. \\
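The candidate selection can be sketched as follows (a plain Python illustration of the thresholding idea; the function name and the no-data sentinel are ours):

```python
def briefcase_candidates(Z, band=0.1, no_data=None):
    """Select pixels whose depth lies within `band` of the closest point.

    Z: 2D list of depth values, where the most negative value is the
    point closest to the camera; `no_data` marks pixels without range
    data and is never selected. Returns (z_min, boolean mask).
    """
    z_min = min(z for row in Z for z in row if z != no_data)
    mask = [[(z != no_data) and (z - z_min <= band) for z in row]
            for row in Z]
    return z_min, mask
```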

It is important to note that these points come from noisy measurements, so a large subset of points needs to be sampled from this set in order to calculate the plane equation accurately. The number of points sampled is the total number of points within a range of $0.1$ of the nearest point, divided by $5$. This value was chosen so that a sufficient number of points is available to determine the plane equation. The sampled points, $P_{s_{i}}$, are then used to determine the equation of the plane. \\

\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/20briefcasePlanOrig20.jpg}
    \caption{The thresholded image containing points on the briefcase}
     \label{fig:image1}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/20briefcasePlane20.jpg}
    \caption{The subset of points selected to determine the equation of the plane} 
    \label{fig:image2}
\end{minipage}
\end{figure}

%3. Using a subset of random thresholded points, select a subset of points to determine the plane.
%3.1 Use the total number of thresholded points divided by 5
%4. Find the xyz coordinates of the thresholded points
%5. Use these points to fit a plane
In calculating the equation of the plane, the set of all sampled points $\overrightarrow{x_{i}}$ $(i = 1, \ldots, N)$ is used to find the normal $\overrightarrow{n}$ and the constant $d$ that best approximate the equation $\overrightarrow{n} \cdot \overrightarrow{x_{i}} + d = 0$ for all $i$. This is achieved by performing singular value decomposition on the scatter matrix of the points and choosing the eigenvector with the smallest eigenvalue as the parameter vector containing the normal $\overrightarrow{n}$ and the constant $d$. \\




%6. Find a region of points that includes all of the briefcase. The threshold
%value of closest point - 0.3 is used.
The above procedure produces an estimate of the plane along the visible face of the briefcase. The next step is to find all the pixels in the image whose depth values lie on, or within a small tolerance of, the briefcase plane. Initially, the $Z$ matrix, containing the depth value for each pixel in the image, is thresholded in order to find all points that lie within a range of $0.3$ of the closest point in the image. This selects all points on the briefcase as well as points that lie slightly further away (such as the person's arm, trousers and head). Points that lie far from the briefcase (such as the background walls) are filtered out, leaving only the briefcase and some segments of the person. An example of the pixels corresponding to these points is shown in \figref{fig:image3}. Only the points that survive this filter (corresponding to the pixels in the figure) are then tested against the briefcase plane. Testing this small subset of the image points results in faster, optimised code. \\

The pixels are then tested to see whether or not they lie on the plane. A pixel must fulfil two constraints to be accepted. The first constraint is that the pixel must lie within a certain perpendicular distance from the plane; this has been set to $0.05$. The second constraint is that the pixel must be near a pixel that already lies on the plane. This is checked by comparing each potential plane pixel, $P_{p}$, to the pixels used for sampling the plane, $P_{s_{i}}$, mentioned previously. If the distance between $P_{p}$ and at least one of the sampled pixels, $P_{s_{j}}$, is within $0.5$, then $P_{p}$ is selected as a plane pixel. The plane pixels are used to construct a binary image. An example is shown in \figref{fig:image4}.\\
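The two constraints can be sketched as a single membership test (Python illustration; the function name is ours, and the tolerances default to the values quoted above):

```python
import math

def is_plane_pixel(p, plane, samples, dist_tol=0.05, near_tol=0.5):
    """Two-constraint plane membership test.

    p: candidate (x, y, z) point; plane: (n1, n2, n3, d) with a unit
    normal; samples: the (x, y, z) points sampled on the briefcase.
    """
    n1, n2, n3, d = plane
    x, y, z = p
    # Constraint 1: perpendicular distance to the plane below dist_tol.
    if abs(n1 * x + n2 * y + n3 * z + d) > dist_tol:
        return False
    # Constraint 2: near at least one sampled plane point.
    return any(math.dist(p, s) <= near_tol for s in samples)
```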

\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/beforeSelection20.jpg}
    \caption{The pixels whose coordinates lie within the $0.3$ threshold}
     \label{fig:image3}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/20wholeBriefcaseBeforeOpen20.jpg}
    \caption{The pixels that lie on the briefcase plane within a certain tolerance} 
    \label{fig:image4}
\end{minipage}
\end{figure}

As can be seen in the image, the pixels corresponding to the points that lie on the briefcase plane still tend to include segments that are not on the briefcase, such as the person's arm. This is because the arm carrying the briefcase and the briefcase itself tend to be at similar distances from the camera. In order to fix this problem, the arm needs to be removed from the image. This is achieved by noting that the arm's dominant colours lie in the red channel, due to the colour of the person's skin. To remove the arm, the image is thresholded along the red colour channel. The value of the threshold depends on the mean intensity of the image histogram: images with mean intensities below $40$ are given higher thresholds, since these images are darker on average. This ensures that most of the arm is removed from the image. An example is presented in \figref{fig:image5}. \\
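The intensity-dependent thresholding can be sketched as follows; the specific threshold values below are illustrative placeholders rather than the tuned values from our code, and the function name is ours:

```python
def arm_removal_mask(red, mean_intensity, dark_mean=40,
                     t_dark=120, t_bright=90):
    """Threshold the red channel to suppress skin pixels.

    red: 2D list of red-channel values (0-255). Darker images
    (mean_intensity < dark_mean) use the higher threshold, as
    described in the text. True = pixel kept (likely briefcase).
    """
    t = t_dark if mean_intensity < dark_mean else t_bright
    return [[v < t for v in row] for row in red]
```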

\begin{figure}[h!] 
  \centering
    \includegraphics[width=0.5\textwidth]{../Drawings/20briefcaseThresholded20.jpg}
    \caption{The resulting binary image after thresholding along the red colour channel}
    \label{fig:image5}
\end{figure}

%1. Find the closest point in the image (Z coordinate)
%2. Find points within a certain range of the closest point. Threshold those points
%3. Using a subset of random thresholded points, select a subset of points to determine the plane.
%3.1 Use the total number of thresholded points divided by 5
%4. Find the xyz coordinates of the thresholded points
%5. Use these points to fit a plane
%6. Find a region of points that includes all of the briefcase. The threshold
%value of closest point - 0.3 is used.
%7. Test all of these points to find the points that lie on the plane calculated previuously within
% a certain tolerance.
%7.1 This will probably include parts of the persons arm. Perform element-wise multiplication
%with the red channel and the image which includes the briefcase and the person's arm.
%8. Since the arm contains colours that are mainly in the red channel, the arm can be removed using
%a suitable threshold. 
%8.1 Calculate the mean intensity of the intensity histogram. Choose a threshold based on the mean 
%intensity value.
%9. Dilate the image to fill in holes created from the threholding procedure
%10. Using bwareaopen, remove all thresholded regions that are smaller than the suitacase

\subsection{Finding the Four Corners of the Briefcase}
\label{sec:corners}

Once the briefcase has been found, the four corners of the briefcase need to be determined in order to perform the image transfer. In order to find the four corners of the briefcase, the RANSAC algorithm is performed on the binary image of the briefcase. The points used to find the RANSAC lines are the boundary points of the briefcase. These points are found using the function \textit{bwboundaries} in Matlab. The points are fed into the RANSAC function \textit{ransacline} which outputs the parameters $t$ and $d$ used to construct a line of the form shown in \eqnref{eqn:ransacline}. Here, $r$ and $c$ correspond to the row and column in the image respectively.\\

\begin{equation}
\sin(t)\, r + \cos(t)\, c = d
\label{eqn:ransacline}
\end{equation}

This line needs to be converted to the standard straight-line form $y = ax + b$, with the column coordinate $c$ playing the role of $y$ and the row coordinate $r$ the role of $x$. Solving \eqnref{eqn:ransacline} for $c$ gives the conversions shown in \eqnref{eqn:conversion} (note the negative sign on $a$).

\begin{eqnarray}
a &=& -\frac{\sin(t)}{\cos(t)}\\
b &=& \frac{d}{\cos(t)}
\label{eqn:conversion}
\end{eqnarray}
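The conversion can be sketched as follows (Python illustration; the function name is ours, and it assumes the line is not vertical in this parameterisation, i.e. $\cos(t) \neq 0$):

```python
import math

def ransac_line_to_slope(t, d):
    """Convert sin(t)*r + cos(t)*c = d to c = a*r + b.

    Solving for the column coordinate gives a = -sin(t)/cos(t) and
    b = d/cos(t); assumes cos(t) != 0.
    """
    return -math.sin(t) / math.cos(t), d / math.cos(t)
```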

Four RANSAC lines are constructed, resulting in four $a$ and $b$ values corresponding to each of the four lines respectively. In order to ensure that RANSAC lines are not repeated, the points used to construct the current RANSAC line are removed from the boundary points vector before the next RANSAC line is calculated. An example of a set of RANSAC lines for the briefcase is shown in \figref{fig:ransaclines}.\\

\begin{figure}[h!] 
  \centering
    \includegraphics[width=0.5\textwidth]{../Drawings/20briefcaseRansac20.jpg}
    \caption{The RANSAC lines calculated for the image}
    \label{fig:ransaclines}
\end{figure}

Once the RANSAC lines have been calculated, the corners of the briefcase can then be determined by finding the points of intersection of each of the four lines. All possible intersections of the four lines are computed and the valid corner points are selected. In order to determine whether or not the point of intersection is a valid corner point on the briefcase, the centroid of the briefcase is calculated. Any point of intersection whose Euclidean distance from the centroid is less than $100$ pixels is valid. This will output four corner points, but their ordering information is not yet available. That is, the corner of the briefcase that each intersection point corresponds to needs to be determined. \\

The centroid is used to order the intersection points. The centroid coordinates $(x_{c}, y_{c})$ are subtracted from each of the intersection points $(x_{i}, y_{i})$. These differences are shown in \eqnref{eqn:diff}.\\

\begin{eqnarray}
x_{ci} &=& x_{i}  - x_{c}\\
y_{ci}&=& y_{i} - y_{c}
\label{eqn:diff}
\end{eqnarray}

The product of the differences is then obtained. If the product is positive, then the intersection point is a corner in the first or third quadrant of the briefcase, as shown in \figref{fig:quad}. If the product is negative, then the intersection point is a corner in the second or fourth quadrant. \\

\begin{figure}[h!] 
  \centering
    \includegraphics[width=0.5\textwidth]{../Drawings/quad.jpg}
    \caption{The sign of the intersection points relative to the centroid of the briefcase for each quadrant}
    \label{fig:quad}
\end{figure}

Once this has been performed, the intersection points are placed in their corresponding corner positions based on their row coordinates relative to the centroid. For example, if the product is positive for an intersection point, then the point lies in the first or third quadrant. If the row coordinate of the intersection point is smaller than the centroid row coordinate, then the point is in the first quadrant; otherwise, it lies in the third quadrant. This is performed for each of the intersection points, yielding the four corners of the briefcase. An example of the four corners is plotted in \figref{fig:briefcorners}. \\

\begin{figure}[h!] 
  \centering
    \includegraphics[width=0.5\textwidth]{../Drawings/20cornerPoints20.jpg}
    \caption{The corner points plotted on the briefcase}
    \label{fig:briefcorners}
\end{figure}
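The ordering rule can be sketched as follows (Python illustration; the function name and the dictionary output format are ours):

```python
def order_corners(points, centroid):
    """Assign each intersection point to a briefcase quadrant.

    points: (row, col) intersection points; centroid: (row, col).
    A positive product of the coordinate differences selects the
    first/third quadrant pair, a negative product the second/fourth;
    the row coordinate relative to the centroid then disambiguates.
    """
    rc, cc = centroid
    quadrant = {}
    for r, c in points:
        if (r - rc) * (c - cc) > 0:
            quadrant[3 if r > rc else 1] = (r, c)
        else:
            quadrant[4 if r > rc else 2] = (r, c)
    return quadrant
```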

However, on some occasions the RANSAC lines are not calculated correctly, which results in some intersection points being assigned coordinates of $(0,0)$. This would result in an incorrect image transfer. To prevent this, the bounding box of the briefcase is calculated. The bounding box gives a rough estimate of the corner vertices of the briefcase; an example is shown in \figref{fig:boundingBox}. By ensuring that every intersection point has a value that corresponds, at least roughly, to a corner vertex of the briefcase, an image transfer remains possible even when RANSAC fails.\\ 

\begin{figure}[h!] 
  \centering
    \includegraphics[width=0.5\textwidth]{../Drawings/20boundingBox20.jpg}
    \caption{The bounding box of the briefcase}
    \label{fig:boundingBox}
\end{figure}

Thus, once the intersection points have been ordered as described above, they are tested to see whether their Euclidean distance from the centroid is less than $100$ pixels. If so, the point is a valid corner point. If the point lies too far from the centroid, the corresponding bounding box vertex is used instead to represent the corner of the briefcase. This ensures that a fairly robust image transfer can be performed.\\
%Next section: Detecting the edges of the briefcase.
%1. Perform RANSAC on the briefcase in order to identify the edges of the briefcase.
%2. This should identify four lines, outputting the t and d values for the line
%3. Convert the t and d values to a and b values for the equation y = ax + b 
%4. Find the centroid of the image
%5. Find the point of intersection of each of the four RANSAC lines. Write equation
%5.1 Find the distance between the point of intersection and the centroid. If this distance
%is below a certain threshold, then the point is a corner of the briefcase.
%5.2 Calculate the bounding box of the image
%5.3 Check for incorrect corner detections. If the detections are above a certain distance
%from the centroid, then use the corresponding bounding box vertex instead.

\section{Video Transfer}
\label{sec:videoTrans}
Once the corners of the briefcase have been identified, the relevant video frame is then transferred onto the plane of the briefcase. This is performed using projective image transfer. Initially, the relevant video frame is loaded into Matlab. The dimensions of the video frame are set as the XY source points. The UV points, onto which the projection occurs, are set as the corner vertices calculated in \secref{sec:corners}. \\

The projection matrix which maps from the XY coordinate system to the UV coordinate system is then calculated using the function \textit{esthomog} in Matlab. This function uses the XY and UV points respectively in order to estimate the projection matrix $P$.\\

This is achieved by setting up the $A$ matrix as defined in \cite{LectureNotes}. Singular Value Decomposition is then performed on $A$ and the eigenvector corresponding to the smallest eigenvalue is used to represent the projection matrix.\\

Once the projection matrix has been calculated, the video frame needs to be mapped onto the briefcase's plane. Since the briefcase is the only area of interest for this projection, only the pixels that lie within the bounding box of the briefcase are tested, which is computationally efficient. To test whether a pixel's corresponding range coordinate lies on the briefcase plane, the coordinate's Euclidean distance from the plane, $d_{plane}$, is calculated. The $(u,v)$ pixel coordinate in the original image is then converted to the video frame's corresponding $(x,y)$ coordinate, as shown in \figref{fig:project}. If $d_{plane}$ is less than $0.1$ and $(x,y)$ is a valid coordinate on the video frame, then the pixel from the video frame corresponding to $(x,y)$ is projected onto the original image's $(u,v)$ coordinate, as shown in \figref{fig:projectFinal}. This is repeated for every pixel contained within the bounding box.\\
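The per-pixel transfer loop can be sketched as follows (Python illustration of the logic just described; the function name and argument layout are ours):

```python
def transfer_video_frame(image, frame, H, bbox, dist_plane, tol=0.1):
    """Project `frame` pixels onto `image` inside the briefcase bbox.

    H: 3x3 projection mapping homogeneous (u, v, 1) image coordinates
    to video frame coordinates; dist_plane[u][v]: Euclidean distance
    of pixel (u, v) from the briefcase plane; bbox = (u0, u1, v0, v1).
    """
    u0, u1, v0, v1 = bbox
    rows, cols = len(frame), len(frame[0])
    for u in range(u0, u1 + 1):
        for v in range(v0, v1 + 1):
            if dist_plane[u][v] >= tol:
                continue  # range point is off the briefcase plane
            xh = H[0][0] * u + H[0][1] * v + H[0][2]
            yh = H[1][0] * u + H[1][1] * v + H[1][2]
            wh = H[2][0] * u + H[2][1] * v + H[2][2]
            x, y = round(xh / wh), round(yh / wh)
            if 0 <= x < rows and 0 <= y < cols:  # valid frame coordinate
                image[u][v] = frame[x][y]
    return image
```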



\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/vidTransfer.jpg}
    \caption{The mapping of u,v points from the original image to x,y points on the video frame}
     \label{fig:project}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/vidTransferFinal.jpg}
    \caption{Mapping of a pixel to the original image if it satisfies the constraints} 
    \label{fig:projectFinal}
\end{minipage}
\end{figure}

This procedure results in the video frame being projected onto the briefcase. An example image is shown in \figref{fig:projection}. As can be seen in the figure, the person is surrounded by a border of black pixels. These pixels represent information lost by the Kinect sensor. To remove them, a background noise removal procedure is performed, discussed in the section to follow.

\begin{figure}[h!] 
  \centering
    \includegraphics[width=0.5\textwidth]{../Drawings/20videoFrameWithNoise20.jpg}
    \caption{A video frame projected onto the briefcase}
    \label{fig:projection}
\end{figure}

%1. Load the corresponding frame from the video sequence.
%2. Use the corner points of the briefcase as the UV points onto which the projection occurs.
%3. Set the dimensions of the video frame that is to be projected onto the briefcase.
%4. Estimate a homography mapping from XY to UV using Singular Value Decomposition.
%5. For each pixel within the bounding box surrounding the briefcase, determine if the pixel
%lies on the plane of the briefcase.
%5.1 If the pixel is within 0.1 of the plane, then project the pixel from the frame onto the image
%using the projection matrix P.
%6. Output the RGB image


\subsection{Removing Black Background Noise}
\label{sec:noiseRemoval}
In order to remove the background noise in the image, all the black points first need to be identified. This is performed by searching for all pixels that have a value of $0$ in all three R, G and B colour channels. This selects a large subset of pixels, since the image is surrounded by a black border. The pixels are therefore filtered further by discarding all pixels that lie in the black border region. This is done for computational reasons, as we are only interested in the black pixels surrounding the person. \\

Once the final subset of pixels has been created, each pixel is assigned an intensity based on an average of the pixels in its neighbourhood. A $5 \times 5$ neighbourhood is chosen with the black pixel at its centre; the average intensity of the pixels in the neighbourhood is calculated and assigned to the black pixel. This is performed for the entire subset of selected pixels, and results in the black pixels surrounding the person being effectively removed from the image, as shown in \figref{fig:noNoise}.\\
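The neighbourhood averaging can be sketched per colour channel as follows (Python illustration; the function name is ours, and whether the centre pixel contributes its zero to the mean is an implementation detail, here it does):

```python
def neighborhood_mean(channel, r, c, n=2):
    """Mean of the (2n+1) x (2n+1) neighbourhood centred on (r, c).

    channel: 2D list for one colour channel; the window is clipped
    at the image borders. With n = 2 this is the 5 x 5 average used
    to recolour a black pixel.
    """
    rows, cols = len(channel), len(channel[0])
    window = [channel[i][j]
              for i in range(max(0, r - n), min(rows, r + n + 1))
              for j in range(max(0, c - n), min(cols, c + n + 1))]
    return sum(window) / len(window)
```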

  \begin{figure}[h!] 
  \centering
    \includegraphics[width=0.5\textwidth]{../Drawings/20videoFrameWithoutNoise20.jpg}
    \caption{Black noise removed from the image}
    \label{fig:noNoise}
\end{figure}

%1. Find all points that are black along all three colour channels
%2. Filter out all points that are not in the main image. These never change.
%3. For all the remaining black points in the image, determine their intensity
%by averaging the pixels in a (2n + 1)x(2n + 1) neighborhood surrounding the speciified 
%black pixel.
%3.1 Perform this for each colour channel

\section{Performance}
\label{sec:conclusion}

The algorithms yielded good, robust performance. Only one of the frames resulted in an incorrect projection of the video frame onto the briefcase. Processing each image frame takes on average $13.85$ seconds. Each processing stage will now be detailed. \\

\subsection{Background Detection}
\label{sec:backDetectsub}

The task of detecting the background has a very small cost relative to the rest of the computation. The plane equation is estimated only once, regardless of the length of the sequence. The total duration of this stage is 0.04 seconds and the least-squares fitting error for the plane is 0.0091. \\

\subsection{Foreground person extraction and Image Transfer}
\label{sec:imageTransfersub}

The person extraction and the image transfer are performed in the same computational stage. The image transfer performs well in every frame without overlapping any part of the foreground person. Also, there are no background sections left uncovered by the new texture. The transformations for frames 5, 15 and 20 can be seen in \figref{fig:back05_orig} to \figref{fig:back20}. \\

The performance in terms of runtime is measured at an average of 5.64 seconds per frame for person extraction, image transfer and briefcase detection preprocessing. The entire sequence is processed in 272 seconds on an Intel 2.5 GHz Dual Core processor. \\

\begin{figure}[h]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/Background/05back_orig.png}
    \caption{Original view of frame 5}
     \label{fig:back05_orig}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/Background/05back.png}
    \caption{Transformed view of frame 5} 
    \label{fig:back05}
\end{minipage}
\end{figure}
\begin{figure}[h]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/Background/15back_orig.png}
    \caption{Original view of frame 15} 
    \label{fig:back15_orig}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/Background/15back.png}
    \caption{Transformed view of frame 15} 
    \label{fig:back15}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/Background/20back_orig.png}
    \caption{Original view of frame 20} 
    \label{fig:back20_orig}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/Background/20back.png}
    \caption{Transformed view of frame 20} 
    \label{fig:back20}
\end{minipage}
\end{figure}

The foreground person is extracted using only the $xyz$ measurements obtained from the Kinect sensor. However, the measurements are not very precise. The consequence is that some pixels just outside the human silhouette either have 3D measurements matching the foreground person or are missing colour information. The missing colour information is addressed in \secref{sec:noiseRemoval}, which uses interpolation to colour the black spots. The result is a silhouette that is 2--3 pixels thicker in certain regions, as seen in \figref{fig:personedges}. The spurs can be removed either by excluding 2--3 pixels from the edge of the extracted silhouette or by removing contour pixels that are similar in colour to the background. The second option shows the most promise, as it is discriminative with regard to the pixels excluded; the first method would remove contour pixels regardless of the 3D data accuracy in that region. \\

\begin{figure}
  \centering
    \includegraphics[width=0.4\textwidth]{../Drawings/Background/20edges.png}
    \caption{The edge of the detected person is marked in yellow}
     \label{fig:personedges}

\end{figure}

\subsection{Foreground Plane Detection}
\label{sec:forePlanesub}
The performance of the foreground plane detection algorithm will now be detailed. This includes detection of the briefcase, finding the four corners of the briefcase as well as the video transfer routine. The algorithm takes, on average, $4.58$ seconds to perform all of these routines. Frames $14$ to $28$ have a fully visible briefcase. The video frames are projected onto the briefcase during the above-mentioned frames in the original sequence.\\


\subsection{Detecting the Briefcase}
\label{sec:performDetectBriefcase}
The first important step in detecting the foreground plane is the detection of the briefcase. The briefcase was sufficiently detected in all but one of the frames. Some of the good detections are shown in \figref{fig:brief20} to \figref{fig:brief23}, corresponding to frames $20$ to $23$ of the image sequence. As can be seen in the images, the briefcase is always detected. However, the top of the briefcase generally contains a segment of the person's hand. This is not a serious problem unless a large segment of the person's arm is detected, in which case the RANSAC lines will not approximate the edges of the briefcase very well, as shown in \secref{sec:performfindCorners}. Frame $26$ is the only frame in which a large portion of the person's arm is detected; it is shown in \figref{fig:problembrief26}. The edges of the briefcase tend to be jagged as a result of the image processing algorithm, which affects the placement of the corner points.\\

In addition, since the person's hand is detected along with the briefcase, it is possible that points are sampled from this person's hand in order to calculate the plane of the briefcase. This will produce a slightly inaccurate estimate of the plane equation for the briefcase since the points on the person's hand do not lie on the briefcase plane. However, since the person's hand is relatively close to the briefcase plane, it does not cause a noticeably large error when estimating the plane equation.\\   

\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/briefCase20.jpg}
    \caption{The detected briefcase in frame 20}
     \label{fig:brief20}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/briefCase21.jpg}
    \caption{The detected briefcase in frame 21} 
    \label{fig:brief21}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/briefCase22.jpg}
    \caption{The detected briefcase in frame 22} 
    \label{fig:brief22}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/briefCase23.jpg}
    \caption{The detected briefcase in frame 23} 
    \label{fig:brief23}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/problemBoundingBox26.jpg}
    \caption{The briefcase with a large segment of the person's arm in frame $26$} 
    \label{fig:problembrief26}
\end{minipage}
\end{figure}

\subsection{Finding the Four Corners of the Briefcase}
\label{sec:performfindCorners}
The RANSAC method does a very good job of detecting the edges of the briefcase, as shown in \figref{fig:ransac20} to \figref{fig:ransac23}. RANSAC performs well even if a small portion of the person's hand is thresholded with the briefcase, as shown in the figures. This is because the hand does not fulfil the constraints required by the RANSAC algorithm and will therefore not be selected as a line segment.\\

However, if too large a portion of the person's arm is selected, the RANSAC algorithm may select an incorrect line segment, as shown in \figref{fig:problemRansac26}. Here, line segments are chosen along the person's arm, creating a line that does not lie along the edge of the briefcase. This leads to an incorrect calculation of the intersection points, and in turn an incorrect video transfer to the briefcase. This occurs only once during the video sequence.\\
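The line-fitting step can be sketched as a standard RANSAC loop. This is an illustrative Python version (the report's implementation is MATLAB); the iteration count and inlier tolerance are assumed values:

```python
import random

def ransac_line(points, iters=200, tol=2.0, seed=0):
    """Return the largest inlier set found for a 2D line through `points`.

    points: list of (x, y) tuples. `iters`, `tol` and `seed` are
    illustrative parameters, not taken from the report.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        # Line through the two samples in implicit form a*x + b*y + c = 0.
        a, b = y2 - y1, x1 - x2
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue  # degenerate sample (coincident points)
        c = -(a * x1 + b * y1)
        # Count points within perpendicular distance `tol` of the line.
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

A small cluster of hand pixels cannot outvote the long briefcase edge, which is why the method tolerates minor contamination.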
 
\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/ransac20.jpg}
\caption{The detected RANSAC lines for frame 20}
     \label{fig:ransac20}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/ransac21.jpg}
\caption{The detected RANSAC lines for frame 21} 
    \label{fig:ransac21}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/ransac22.jpg}
\caption{The detected RANSAC lines for frame 22} 
    \label{fig:ransac22}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/ransac23.jpg}
\caption{The detected RANSAC lines for frame 23} 
    \label{fig:ransac23}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/problemRansac26.jpg}
    \caption{An incorrect RANSAC line segment as a result of the person's arm} 
    \label{fig:problemRansac26}
\end{minipage}
\end{figure}

\subsection{Calculating the Corners}
\label{sec:corners}
Once the RANSAC lines have been calculated, their points of intersection are found in order to determine the corner points of the briefcase. The corner points have been plotted on a sequence of images from frames $20$ to $23$ and are presented in \figref{fig:intersect20} to \figref{fig:intersect23}. As can be seen in the figures, the corner points accurately represent the corners of the briefcase. \\
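Each corner is the intersection of two RANSAC lines. With lines in implicit form $ax + by + c = 0$ this reduces to a small linear solve; a Python sketch (names illustrative, the report's code is MATLAB):

```python
def intersect(l1, l2):
    """Intersect two lines given as (a, b, c) with a*x + b*y + c = 0.

    Returns (x, y), or None for (near-)parallel lines, which cannot
    yield a briefcase corner.
    """
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    # Cramer's rule on  a*x + b*y = -c  for each line.
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)
```

Adjacent edge lines of the briefcase are roughly perpendicular, so the determinant is well away from zero and the corners are numerically stable.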

\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/points20.jpg}
    \caption{The plotted intersection points for frame 20}
     \label{fig:intersect20}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/points21.jpg}
    \caption{The plotted intersection points for frame 21} 
    \label{fig:intersect21}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/points22.jpg}
    \caption{The plotted intersection points for frame 22} 
    \label{fig:intersect22}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/points23.jpg}
    \caption{The plotted intersection points for frame 23} 
    \label{fig:intersect23}
\end{minipage}
\end{figure}

There are, however, a number of situations in which RANSAC fails to estimate good lines for the edges of the briefcase, in which case incorrect points of intersection are determined. An example of this is shown in \figref{fig:problemCorner26}. The point of intersection for the bottom-left corner of the briefcase does not satisfy the constraint of lying within 100 pixels of the centroid, leaving only three corner points; the fourth is found at the origin $(0,0)$. To remedy this, the fourth corner point uses the estimate of the bounding box's bottom-left vertex. This enables a fairly accurate video transfer to be performed: the transferred video frame is distorted to some degree, but the entire frame is prevented from becoming unrecognisable. The new corner point can be seen in \figref{fig:problemCorner261}.\\

\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/missedCorners16.jpg}
    \caption{Three of the four corner points have been correctly plotted}
     \label{fig:problemCorner26}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/missedCornersFixed16.jpg}
    \caption{Using the estimate of the bounding box vertex, a rough estimate of the corner point can be recovered} 
    \label{fig:problemCorner261}
\end{minipage}
\end{figure}


\subsection{Video Transfer}
\label{sec:videoTranssub}
The video transfer algorithm depends on the previous algorithms providing it with good corner points and a well-defined briefcase. As can be seen in \figref{fig:transfer20} to \figref{fig:transfer23}, the video frames are accurately transferred onto the briefcase plane, producing consistent projections.\\
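The transfer rests on a planar homography mapping the video frame's corners onto the four detected briefcase corners. A self-contained Python sketch of the four-point estimate follows (the report's implementation is MATLAB; all helper names here are illustrative):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 homography from four (x, y) pairs, with h33 fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # From u = (h11 x + h12 y + h13) / (h31 x + h32 y + 1), and v likewise.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b)
    return [h[0:3], h[3:6], [h[6], h[7], 1.0]]

def apply_h(H, p):
    """Map point p = (x, y) through H in homogeneous coordinates."""
    x = H[0][0] * p[0] + H[0][1] * p[1] + H[0][2]
    y = H[1][0] * p[0] + H[1][1] * p[1] + H[1][2]
    w = H[2][0] * p[0] + H[2][1] * p[1] + H[2][2]
    return (x / w, y / w)
```

Warping then amounts to mapping each video-frame pixel through the homography into the briefcase quadrilateral.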

\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/withNoise20.jpg}
    \caption{The video transfer for frame 20}
     \label{fig:transfer20}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/withNoise21.jpg}
    \caption{The video transfer for frame 21} 
    \label{fig:transfer21}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/withNoise22.jpg}
    \caption{The video transfer for frame 22} 
    \label{fig:transfer22}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/withNoise23.jpg}
    \caption{The video transfer for frame 23} 
    \label{fig:transfer23}
\end{minipage}
\end{figure}

The video transfer algorithm produces a distorted image in frame $26$ as a result of the corner points on the briefcase not being accurately constructed. The distorted projection is shown in \figref{fig:distorted}.\\

\begin{figure}[h!] 
  \centering
    \includegraphics[width=0.5\textwidth]{../Drawings/problemVideoFrameWithNoise.jpg}
    \caption{The distorted video frame projected onto the briefcase}
    \label{fig:distorted}
\end{figure}


\subsection{Background Noise Removal}
\label{sec:backNoiseRemoval}

The black pixels that surround the person in the image have been effectively removed using the neighbourhood averaging routine. The results of this procedure are shown in \figref{fig:final20} to \figref{fig:final23}. These images are output to the final video sequence. \\
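A minimal Python sketch of the neighbourhood-averaging idea (the actual routine is MATLAB; treating intensity 0 as "black" is a simplifying assumption):

```python
def fill_black(img):
    """Replace each black pixel with the mean of its non-black 8-neighbours.

    img: 2D list of grey values, where 0 marks a missing (black) pixel.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] == 0:
                vals = [img[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w
                        and img[y + dy][x + dx] != 0]
                if vals:
                    out[y][x] = sum(vals) / len(vals)
    return out
```

Isolated black pixels vanish in one pass; thicker bands would need the pass repeated until no fillable pixels remain.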

\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/frame20.jpg}
    \caption{The final image without black pixels for frame 20}
     \label{fig:final20}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/frame21.jpg}
    \caption{The final image without black pixels for frame 21} 
    \label{fig:final21}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/frame22.jpg}
    \caption{The final image without black pixels for frame 22} 
    \label{fig:final22}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
  \centering
    \includegraphics[width=0.8\textwidth]{../Drawings/performance/frame23.jpg}
    \caption{The final image without black pixels for frame 23} 
    \label{fig:final23}
\end{minipage}
\end{figure}


\section{Discussion}
\label{sec:discussion}

The approach for detecting the back plane is efficient and robust. It makes use of prior knowledge about the scene and the input data: the camera is stationary and the first frames contain the unobstructed background. Under this assumption, the plane estimation needs to be run only once, on one of the first frames, for the entire sequence. The manually selected boundaries for the plane simplify the problem considerably. Without user input, a different method would have been needed. One solution could be to extract image patches, compute the best-fit plane for each of them, and then choose the patch with the least mean error as valid. A more computationally expensive method that could solve the problem is region growing, followed by selecting the largest region. \\
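Both the back-plane estimate and the patch-based alternative rely on least-squares plane fitting. The fit itself can be sketched as follows (Python for illustration, solving the normal equations of the model $z = ax + by + c$ directly; the report's implementation is MATLAB):

```python
def fit_plane(pts):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) samples."""
    sxx = sum(x * x for x, y, z in pts); sxy = sum(x * y for x, y, z in pts)
    syy = sum(y * y for x, y, z in pts); sx = sum(x for x, y, z in pts)
    sy = sum(y for x, y, z in pts); n = float(len(pts))
    rhs = [sum(x * z for x, y, z in pts),
           sum(y * z for x, y, z in pts),
           sum(z for x, y, z in pts)]
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Cramer's rule on the 3x3 normal equations.
    d = det3(M)
    out = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for j in range(3):
            Mi[j][i] = rhs[j]
        out.append(det3(Mi) / d)
    return tuple(out)
```

The patch comparison would then amount to running this fit per patch and ranking patches by mean residual.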

The image transfer algorithm performs well for all the frames. The accuracy of the transfer benefits from the quality of the range data. The only issue for this component concerns the contour of the extracted silhouette, detailed in \secref{sec:imageTransfersub}. This is a common issue, as processing stages that deal with raw data often require procedures for handling noise and inconsistencies. The transferred background image looks natural. The resolution of the original image (\textit{field.jpg}) is greater than the resolution of the quadrilateral onto which it was transferred. If a good ratio is not maintained in this respect, the end result can appear skewed and of poor quality, as the homographic transfer cannot deal with occlusions in the scene.\\

%Speed of algorithm - failurish 4.58
The algorithm for detecting the briefcase is robust and uniquely detects the briefcase in all but one of the frames (in the problem frame, frame $26$, a portion of the arm is selected in addition to the briefcase). The corner points and the video transfer routines enable the video frames to be accurately projected onto the briefcase. The problem frame has a distorted projection. This can be remedied by adjusting the red channel threshold for that particular frame such that only the briefcase is identified, which would result in a correct video transfer.\\

The algorithm for detecting the foreground plane, as well as projecting video frames onto this plane, takes on average $13.85$ seconds to complete. This is a relatively slow procedure and can be improved through a number of optimisations. One improvement is to threshold a smaller subset of points when calculating the plane of the briefcase. This can be achieved by setting the threshold range to a smaller value or by using fewer sampled points from the current subset.\\

Another problem is that different frames have different light intensities. One way to deal with this is normalisation; however, a different technique has been chosen for this implementation. The average intensity of each image frame is calculated from the intensity histogram values, as mentioned in \secref{sec:planCorners}. Darker images are thresholded at higher intensities to ensure that the briefcase is thresholded correctly.\\
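The intensity-adaptive threshold can be sketched as follows. This is an illustrative Python sketch only (the report's implementation is MATLAB), and both the base threshold and the reference mean of 120 are assumed constants, not taken from the report:

```python
def dark_threshold(frame, base=40, ref_mean=120.0):
    """Scale a dark-pixel threshold by frame brightness.

    frame: 2D list of grey values. Darker frames (lower mean intensity)
    receive a higher, more permissive threshold; brighter frames a lower one.
    """
    pixels = [v for row in frame for v in row]
    mean = sum(pixels) / len(pixels)
    return base * ref_mean / max(mean, 1.0)
```

The inverse scaling realises the rule stated above: as the frame mean drops below the reference, the threshold rises proportionally.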

On occasion, a large portion of the arm is selected, causing incorrect RANSAC lines to be generated and, subsequently, an incorrect video transfer. To remedy this problem, an alternative image processing technique has been proposed to better detect the briefcase, implemented in the function \textit{fillBriefcase}. To detect the briefcase, the RGB image is thresholded based on the colour of the briefcase: all pixels with intensities below $40$ on all three colour channels are selected. In addition, these pixels' range coordinates must be a certain distance from the background plane, to ensure that only briefcase pixels are included in the thresholded image. A number of dilations and erosions are performed on the image, and small regions of connected pixels are removed using \textit{bwareaopen}. This results in a small region of the briefcase being selected, which is then used to estimate the plane equation of the briefcase. An example image is shown in \figref{fig:sampleTest}.\\
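The colour-and-depth test at the heart of this step can be sketched in Python (the actual \textit{fillBriefcase} is MATLAB; reducing the background-plane distance to a simple depth gap, and the \texttt{min\_gap} value, are simplifying assumptions):

```python
def briefcase_mask(rgb, depth, back_depth, dark=40, min_gap=0.5):
    """Mark pixels that are dark on all three channels and well in front
    of the back plane.

    rgb: 2D list of (r, g, b); depth: 2D list of z values; back_depth:
    depth of the back plane (here approximated as a constant).
    """
    h, w = len(rgb), len(rgb[0])
    return [[1 if all(c < dark for c in rgb[y][x])
                  and (back_depth - depth[y][x]) > min_gap
             else 0
             for x in range(w)] for y in range(h)]
```

The morphological clean-up (dilation, erosion, \textit{bwareaopen}) would then run on this binary mask.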


\begin{figure}[h!] 
  \centering
    \includegraphics[width=0.5\textwidth]{../Drawings/Method2/sampledRegion.jpg}
    \caption{A region of the briefcase used to estimate the plane with method two}
    \label{fig:sampleTest}
\end{figure}

A plane is then fitted to the briefcase using points from this region. All the range points in the image are then tested to see whether they lie on the plane of the briefcase. If a point lies on the plane (within a tolerance of $0.045$) and its corresponding RGB pixel intensities are below $50$ for each colour channel, then the pixel is selected as part of the briefcase. Some erosion and dilation are performed, and the resulting detected briefcase is shown in \figref{fig:detectedTest}.\\
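The plane-membership test reduces to a point-to-plane distance check; a Python sketch (the report's code is MATLAB, but the $0.045$ tolerance is the one it states):

```python
def on_plane(p, plane, tol=0.045):
    """True if point p = (x, y, z) lies within `tol` of the plane
    a*x + b*y + c*z + d = 0, given as plane = (a, b, c, d)."""
    a, b, c, d = plane
    # Perpendicular distance from the point to the plane.
    dist = abs(a * p[0] + b * p[1] + c * p[2] + d) / (a * a + b * b + c * c) ** 0.5
    return dist <= tol
```

Combined with the per-channel intensity test below $50$, this gives the final briefcase pixel classification.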

\begin{figure}[h!] 
  \centering
    \includegraphics[width=0.5\textwidth]{../Drawings/Method2/briefcaseFinal.jpg}
    \caption{The detected briefcase using method two}
    \label{fig:detectedTest}
\end{figure}

The same RANSAC algorithm and video transfer routine are then applied to the image. Testing showed that the briefcase detections were of good quality, but resulted in two failures rather than one in performing undistorted video transfers. An example of a good detection and video transfer is shown in \figref{fig:finalTest}.\\

\begin{figure}[h!] 
  \centering
    \includegraphics[width=0.5\textwidth]{../Drawings/Method2/finalTest.jpg}
    \caption{The final image for method two}
    \label{fig:finalTest}
\end{figure}

Using RANSAC lines to find the corners of the briefcase proved to be a very efficient and robust technique. It is fairly immune to small deformations along the briefcase (such as detecting the hand at the top of the briefcase). So long as the deformation is not too large, very accurate corner points can be determined. \\

Another successful aspect of the algorithm is using the bounding box to estimate a corner vertex when the vertex is not detected using RANSAC. This prevents video frames from becoming unrecognisable after video transfer and often provides a good estimate of the corner vertex. The estimate worsens as the tilt of the briefcase increases: the bounding box is always rectangular and remains parallel to the horizontal and vertical axes of the image, which results in increasingly inaccurate corner estimates as the briefcase tilts.\\
%Improving the detections for the briefcase - failurish - try to fix the corresponding sequence


%Average intensity... some success

%Bounding box - success

%RANSAC lines. Success

%Discussion on performance

\bibliographystyle{witseie}
\bibliography{bibliography}
 \newpage
\onecolumn
\appendix
\setcounter{table}{0}
\setcounter{figure}{0}
\setcounter{subsection}{0}
\makeatletter \renewcommand{\thefigure}{A.\@arabic\c@figure} \renewcommand{\thetable}{A.\@arabic\c@table} \renewcommand{\thesection}{A.\@arabic\c@section} \makeatother
\section*{APPENDIX A}

\section{Processing the Range Data}
\label{app:rangeData}
\lstinputlisting{../../transformData.m}

\subsection{Convert Kinect Data to RGB Images}
\label{app:kinect2rgb}
\lstinputlisting{../../kinect2rgb.m}

\subsection{Convert Kinect Data to XYZ Matrices}
\label{app:kinect2xyz}
\lstinputlisting{../../kinect2xyz.m}

\section{Background Detection and Main Method}
\label{app:main}
\lstinputlisting{../../main.m}

\lstinputlisting{../../loadTransformedData.m}


\section{Foreground Plane Detection}
\label{app:background}
%Add the matlab code to this file...
%Example of usage
\subsection{Main Briefcase Detection Method}
\label{app:method1}
\lstinputlisting{../../findBriefcase.m}

\subsection{Alternative Briefcase Detection Method}
\label{app:method2}
\lstinputlisting{../../fillBriefcase.m}

\subsubsection{Calculate the Bounding Box}
\label{app:boundBox}
\lstinputlisting{../../Helpers/calcBoundingBox.m}

\subsubsection{Threshold Image with range data}
\label{app:threshold}
\lstinputlisting{../../Helpers/thresholdRangeImage.m}

\subsubsection{Get Euclidean Distance}
\label{app:euclidean}
\lstinputlisting{../../getEuclideanDistance.m}

 
\end{document}

