\documentclass[12pt, letterpaper]{article}
\usepackage{setspace}
\doublespacing
\usepackage[margin=1.0in]{geometry}
\usepackage{graphicx}
\usepackage{times}

%opening
\title{Depth Map Using a Four-Camera Array}
\author{Brian Fehrman and Scott Logan}

\begin{document}

\maketitle

\section{Introduction}
\input{introduction}

For our image processing project we would like to use a recently developed four-camera array. This camera array employs four commonly available webcams connected through a powered USB hub, which provides a single-cord USB interface to the host PC. The cameras are arranged in a $2\times2$ grid with equal spacing between adjacent cameras. This setup allows four images to be captured nearly simultaneously, with different camera settings for each image if desired. The camera array is shown in Figure \ref{fig:camera_array}.

\begin{figure}[h!]
\centering
\includegraphics[width= 0.3\textwidth]{img/camera_array}
\caption{$2\times2$ camera array that will be used for the project}
\label{fig:camera_array}
\end{figure}

The main idea behind the proposed project is to process the images for ``stereo'' vision applications. In this case, we will use four cameras instead of the typical two. The hope is that having more views of a given scene available at once will increase the accuracy of discerning depth and other information.

The overall goal is to estimate the depth of points in a scene using the images captured from the camera array. The depth information will then be used to display a depth map. This will ideally be performed in near real-time as the images are being captured. OpenCV and C++ will be used as the main coding platform.

The first step of the algorithm will be to perform stereo camera calibration using OpenCV's built-in stereo calibration functions. Calibration yields the parameters needed to adjust the input images for intrinsic camera properties and for misalignment of the cameras, and it also gives the focal length of each camera, which is essential for generating a depth map. Calibration will be performed on four camera pairs: top horizontal, bottom horizontal, left vertical, and right vertical. A smoothing filter will be applied to acquired images to reduce the effects of noise, and images will be converted to gray-scale before being processed; both steps will use built-in OpenCV functions.

The next step is to acquire images and find correspondence points between image pairs. The correspondence problem will be handled with two approaches: a brute-force template-matching approach and a higher-level feature-based matching approach. The overall speed and accuracy of each approach will be compared.

For the template matching, a small area centered on some $(x_1, y_1)$ will be selected from one image. The program will then attempt to find where that template is centered in the second image, giving some $(x_2, y_2)$ coordinate. The search area will be constrained based upon which camera pair is being considered: for the horizontal camera pairs the main shift should be in the $x$ direction, so the algorithm will go to $(x_1, y_1)$ in the second image and search left and right by a chosen amount; the vertical pairs will instead search in the $y$ direction. The distance between the template and each candidate area will be measured as the Euclidean distance, and the smallest distance within the chosen search neighborhood will be taken as the correspondence point.
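For a horizontal pair, the brute-force search described above can be sketched as follows (the patch radius and search range parameters are illustrative assumptions, and images are stored as flat row-major arrays):

```cpp
#include <cmath>
#include <limits>
#include <vector>

// Euclidean distance between the (2r+1)x(2r+1) patch centered at
// (x1, y1) in imgA and the patch centered at (x2, y1) in imgB.
double patchDist(const std::vector<double>& a, const std::vector<double>& b,
                 int w, int x1, int y1, int x2, int r) {
    double d = 0.0;
    for (int dy = -r; dy <= r; ++dy)
        for (int dx = -r; dx <= r; ++dx) {
            double diff = a[(y1 + dy) * w + (x1 + dx)]
                        - b[(y1 + dy) * w + (x2 + dx)];
            d += diff * diff;
        }
    return std::sqrt(d);
}

// For a horizontal camera pair, search left/right of (x1, y1) in the
// second image and return the x with the smallest patch distance.
int matchHorizontal(const std::vector<double>& a, const std::vector<double>& b,
                    int w, int x1, int y1, int r, int range) {
    int best = x1;
    double bestDist = std::numeric_limits<double>::max();
    for (int x2 = x1 - range; x2 <= x1 + range; ++x2) {
        if (x2 - r < 0 || x2 + r >= w) continue;  // stay inside the image
        double d = patchDist(a, b, w, x1, y1, x2, r);
        if (d < bestDist) { bestDist = d; best = x2; }
    }
    return best;
}
```

A vertical pair would use the same routine with the roles of $x$ and $y$ swapped.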

The feature-based matching will use OpenCV's built-in SURF routine to both detect and describe regions of interest. An adaptive non-maximum suppression algorithm will be implemented to get a good spatial spread of features. From there, OpenCV's FLANN routine will be used to match the features between images.
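The adaptive non-maximum suppression step could look something like the following sketch, which operates on feature locations and response strengths of the kind SURF produces (the `Feature` struct and the $O(n^2)$ radius computation are simplifying assumptions): each feature's suppression radius is its distance to the nearest stronger feature, and keeping the $n$ largest radii spreads the kept features across the image.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Feature { double x, y, response; };

// Adaptive non-maximum suppression: keep the n features whose distance
// to the nearest stronger feature is largest, giving an even spread.
std::vector<Feature> anms(const std::vector<Feature>& feats, std::size_t n) {
    std::vector<double> radius(feats.size(),
                               std::numeric_limits<double>::max());
    for (std::size_t i = 0; i < feats.size(); ++i)
        for (std::size_t j = 0; j < feats.size(); ++j)
            if (feats[j].response > feats[i].response)
                radius[i] = std::min(radius[i],
                                     std::hypot(feats[i].x - feats[j].x,
                                                feats[i].y - feats[j].y));
    std::vector<std::size_t> order(feats.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
        return radius[a] > radius[b];  // largest suppression radius first
    });
    std::vector<Feature> kept;
    for (std::size_t i = 0; i < order.size() && i < n; ++i)
        kept.push_back(feats[order[i]]);
    return kept;
}
```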

As correspondence points are found, the distance between the points can be computed and stored in a matrix. This will result in four distance matrices, one for each camera pair. At this point the program can begin generating depth maps. First, a single distance matrix from one of the horizontal pairs will be used to generate a traditional two-camera stereo vision depth map. The formula to be used is $z = \frac{b \times f}{d}$, where $z$ is depth, $b$ is the baseline between cameras (well known from the array's design), $f$ is the focal length (determined by calibration), and $d$ is the disparity, i.e., the distance between corresponding pixels. The underlying concept is parallax: points that shift more between views are typically closer, while points that shift less are farther away \cite{Harltley2003}. The next step will be to average the four distance matrices together and use that combined information to generate a depth map with the same formula. The depth values in each case will be plotted using a manually implemented pseudo-color mapping scheme.
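The depth formula and one possible manual pseudo-color ramp can be sketched as below; the near-red/far-blue ramp is an illustrative assumption, since the exact color scheme has not been fixed:

```cpp
#include <cmath>

struct Rgb { double r, g, b; };

// Depth from disparity: z = (baseline * focal) / disparity.
double depthFromDisparity(double baseline, double focal, double disparity) {
    return baseline * focal / disparity;
}

// Simple manual pseudo-color ramp over [zMin, zMax]: near points map
// toward red, far points toward blue, with green peaking in between.
Rgb pseudoColor(double z, double zMin, double zMax) {
    double t = (z - zMin) / (zMax - zMin);  // 0 = near, 1 = far
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return { 1.0 - t, 1.0 - std::fabs(2.0 * t - 1.0), t };
}
```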

The results from the two-camera and four-camera cases will be compared to see whether there is any advantage to using more than two cameras. The comparison will be based upon the smoothness of the depth maps produced and the overall accuracy of the distance estimates.
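One way the smoothness comparison could be quantified (the specific metric is an assumption on our part, not yet decided) is the mean absolute difference between horizontally adjacent depth values, where a lower score indicates a smoother map:

```cpp
#include <cmath>
#include <vector>

// Candidate smoothness score: mean absolute difference between
// horizontally adjacent depth values (lower = smoother depth map).
double smoothness(const std::vector<double>& depth, int w, int h) {
    double total = 0.0;
    int count = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x + 1 < w; ++x) {
            total += std::fabs(depth[y * w + x + 1] - depth[y * w + x]);
            ++count;
        }
    return total / count;
}
```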


{\small
\bibliographystyle{unsrt}
\bibliography{references}
}

\end{document}
