\documentclass{article}

\usepackage[hmargin=.75in,vmargin=1in]{geometry}
\usepackage[american]{babel}
\usepackage[T1]{fontenc}
\usepackage{times}
\usepackage{caption}

%%% Class name, option, and packages above are mandatory for generating an appropriate format 
%%% suitable for the WorldComp style. Therefore, do not make any changes unless you know 
%%% what you are doing.
%%% However, if you need to use the subfig package, you must call it BEFORE the caption package.
%%% (NOTE: the subfig package probably will work but has not been tested.)

%%% The worldcomp.cls is derived (in a quite dirty and quick manner) from the IEEEtrans.cls.
%%% At least the following packages are incompatible with the worldcomp.cls:
%%% <DO NOT USE THEM> setspace, titlesec, amsthm
%%% There may be more, so if you use a package that produces a lot of errors or weird results, 
%%% be advised to avoid that package.

%%% Below packages are recommended to use for better results and compatible with the worldcomp.cls
\usepackage{textcomp}
\usepackage{epsfig,graphicx}
\usepackage{xcolor}
\usepackage{amsfonts,amsmath,amssymb}
\usepackage{fixltx2e} % Fixing numbering problem when using figure/table* 
\usepackage{booktabs}
\usepackage{subfigure}
%%% Below packages are probably useful for some table-formatting purposes. Compatibility is not yet
%%% tested but probably fine.
%\usepackage{tabularx}
%\usepackage{tabulary}

%%% Using the hyperref package is not really necessary for conference papers, but if your paper includes
%%% a lot of URLs, and you wish them to be line-breakable, it might be useful.  When you need to use the
%%% hyperref package, make sure you set <colorlinks option> = true and all link colors black as shown in
%%% the sample below (the sample calls the ifpdf package, too).
%\usepackage{ifpdf} 
%\ifpdf
%\usepackage[pdftex,naturalnames,breaklinks=true,colorlinks=true,linkcolor=black,citecolor=black,filecolor=black,menucolor=black,urlcolor=black]{hyperref}
%\else
%\usepackage[dvips,naturalnames,breaklinks=true]{hyperref}
%\fi

\columnsep 6mm  %%% DO NOT CHANGE THIS
\newcommand{\bd}{\begin{displaymath}}
\newcommand{\ed}{\end{displaymath}}
\newcommand{\bcen}{\begin{center}}
\newcommand{\ecen}{\end{center}}
\newcommand{\bea}{\begin{eqnarray}}
\newcommand{\eea}{\end{eqnarray}}
\newcommand{\beq}{\begin{equation}}
\newcommand{\ba}{\begin{array}}
\newcommand{\eeq}{\end{equation}}
\newcommand{\ea}{\end{array}}

\title{\bf Video-Based Vehicle Detection and Its Application in Transportation Systems}           %%%% Replace with your title.

%%%% Replace the author and institution/affiliation names. 
%%%% Make sure the author names are boldface.
\author{
{\bfseries Naveen Chintalacheruvu and Venkatesan Muthukumar} \\
Dept. of Electrical and Computer Engineering, University of Nevada, Las Vegas, NV, USA\\
}

\begin{document}


\maketitle                        %%%% To set Title and Author names.


\section{Vehicle Detection and Tracking Algorithm}

In this section, the Harris-Stephens corner detection algorithm is used to determine interest points in the image. Vehicles are tracked by establishing point correspondence across frames with a deterministic method, and spatial and temporal characteristics are used to derive vehicle counts. The speed of a vehicle is determined by vector mapping and scaling of interest points across frames.

The Harris-Stephens corner detection algorithm is based on the auto-correlation function of a signal, where the local auto-correlation function measures the local changes of the signal when patches are shifted by a small amount in different directions. The Harris-Stephens (HS) detector improves upon Moravec's corner detector, whose main drawback is that it is not isotropic \cite{}. Instead of using shifted patches, the HS detector considers the differential of the corner score (auto-correlation) with respect to the shift direction. 

Let us consider a 2-D image $I$, with an image patch at $(x,y)$ shifted by $(\Delta x, \Delta y)$. The weighted sum of squared differences (SSD), or auto-correlation, between the two image patches, denoted $C(x,y)$, is given as:
\[ C(x,y) = \sum_x \sum_y w(x,y) {[I(x+\Delta x, y+\Delta y) - I(x,y)]}^2 \]
The shifted image $I(x+\Delta x, y+\Delta y)$ can be approximated by a first-order Taylor expansion as follows:
\[ I(x+\Delta x, y+\Delta y) \approx I(x, y) + \Delta x\, I_x(x, y) + \Delta y\, I_y(x, y)\]
where $I_x$ and $I_y$ are the partial derivatives of $I$ with respect to $x$ and $y$, respectively.
Therefore, the auto-correlation function can be expressed both as an equation and in matrix form as follows:
\[ C(x,y) = \sum_x \sum_y w(x,y) {[\Delta x\, I_x(x, y) + \Delta y\, I_y(x, y)]}^2 \]
\[ C(x,y) = (\Delta x \ \ \Delta y)\, A\, {(\Delta x \ \ \Delta y)}^T \]
The matrix A (Harris Matrix), captures the intensity structure of the local neighborhood and $w(x,y)$ is a smooth circular Gaussian window defined as follows:
\[ w(x,y) = \exp\left(-\frac{x^2+y^2}{2\sigma^2}\right)\]
The Harris matrix is expressed as:
\[ A = \sum_x \sum_y w(x,y) 
\begin{pmatrix}
{I_x}^2  & I_x I_y \\
I_x I_y & {I_y}^2 \\
\end{pmatrix} = 
\begin{pmatrix}
\alpha  & \gamma \\
\gamma & \beta \\
\end{pmatrix} \]
A corner or interest point is characterized by a large variation of $C$ in all directions of the shift vector $(\Delta x, \Delta y)$. 

Let $\lambda_1$, $\lambda_2$ be the eigenvalues of the matrix A. By analyzing the eigenvalues of A, the following inferences can be made:
\begin{itemize}
\item If $\lambda_1 \approx 0$ and $\lambda_2 \approx 0$, then the auto-correlation function at pixel $(x,y)$ is flat and the pixel contains no interest point.
\item If $\lambda_1 \approx 0$ and $\lambda_2$ is some large positive value, then the auto-correlation function is ridge-shaped and the interest point is an edge.
\item If $\lambda_1$ and $\lambda_2$ are both large positive values, then the auto-correlation function is sharply peaked and the interest point is a corner.
\end{itemize}
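These three cases can be checked numerically. The sketch below (assuming NumPy; the patch size, $\sigma$, and the synthetic patches are illustrative choices, not taken from our implementation) builds the Gaussian-weighted Harris matrix for a flat patch, an edge, and a corner, and prints the resulting eigenvalues:

```python
import numpy as np

def harris_matrix(patch, sigma=1.0):
    """Build the Harris matrix A for one square image patch.

    Gradients come from np.gradient; the squared gradients and their
    product are weighted by the circular Gaussian window w(x, y)."""
    iy, ix = np.gradient(patch.astype(float))   # np.gradient returns (d/dy, d/dx)
    n = patch.shape[0]
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    w = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
    a = np.sum(w * ix * ix)   # alpha
    b = np.sum(w * iy * iy)   # beta
    g = np.sum(w * ix * iy)   # gamma
    return np.array([[a, g], [g, b]])

# Three synthetic 7x7 patches: flat, vertical edge, corner.
flat   = np.zeros((7, 7))
edge   = np.zeros((7, 7)); edge[:, 4:] = 1.0
corner = np.zeros((7, 7)); corner[4:, 4:] = 1.0

for name, p in [("flat", flat), ("edge", edge), ("corner", corner)]:
    l1, l2 = sorted(np.linalg.eigvalsh(harris_matrix(p)))
    print(f"{name:6s}  lambda1={l1:6.3f}  lambda2={l2:6.3f}")
```

The flat patch yields two near-zero eigenvalues, the edge yields one near-zero and one large eigenvalue, and the corner yields two positive eigenvalues, matching the three cases above.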

Since the exact computation of the eigenvalues of the matrix is computationally expensive, computation of the function $R$ has been suggested in \cite{}. $R$ is also referred to as the interest point confidence value.
\[ R = \lambda_1 \lambda_2 - \kappa {( \lambda_1 + \lambda_2)}^2 = \det(A) - \kappa\, \mathrm{trace}^2(A) \]
The above expression reduces the problem of determining the eigenvalues of the matrix A to evaluating the determinant and trace of the matrix A to determine the interest points or the corner points of the object/vehicle.
\[ \mathrm{trace}(A) = \alpha + \beta = \lambda_1  + \lambda_2 \]
\[ \det(A) = \alpha \beta - \gamma^2 = \lambda_1  \lambda_2 \]
The interest points are marked by thresholding $R$ and applying non-maximal suppression. The value of $\kappa$ has to be determined empirically; values in the range 0.04--0.15 have been suggested in the literature. 
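As an illustrative sketch of this pipeline (not our deployed implementation; NumPy is assumed, and the values of $\kappa$, $\sigma$, the window radius, and the threshold below are hypothetical), the response $R$, thresholding, and non-maximal suppression can be written as:

```python
import numpy as np

def harris_response(img, sigma=1.0, k=0.05, radius=2):
    """Per-pixel Harris-Stephens response R = det(A) - k * trace(A)^2,
    using a Gaussian-weighted (2*radius+1)^2 window around each pixel."""
    iy, ix = np.gradient(img.astype(float))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
    h, wd = img.shape
    R = np.zeros_like(img, dtype=float)
    for y in range(radius, h - radius):
        for x in range(radius, wd - radius):
            sl = np.s_[y - radius:y + radius + 1, x - radius:x + radius + 1]
            a = np.sum(w * ixx[sl])      # alpha
            b = np.sum(w * iyy[sl])      # beta
            g = np.sum(w * ixy[sl])      # gamma
            R[y, x] = a * b - g * g - k * (a + b) ** 2
    return R

def corner_points(R, thresh, radius=2):
    """Threshold R and keep only local maxima (non-maximal suppression)."""
    pts = []
    h, w = R.shape
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            win = R[y - radius:y + radius + 1, x - radius:x + radius + 1]
            if R[y, x] > thresh and R[y, x] == win.max():
                pts.append((x, y))
    return pts

# A bright 8x8 square on a dark 20x20 background: the four corners of
# the square should yield the strongest responses.
img = np.zeros((20, 20))
img[6:14, 6:14] = 1.0
R = harris_response(img)
print(corner_points(R, thresh=0.01))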

Object or vehicle tracking can be formulated as the correspondence of interest points across frames. In this work, we employ a deterministic method for interest point correspondence. Deterministic methods typically define a cost function: the cost of associating each object or vehicle in frames $j$ and $j+k$ under a set of motion constraints. Minimization of the correspondence cost is usually posed as an optimization problem. For the vehicle tracking application, however, the correspondence cost is modeled as a combination of proximity and motion constraints. In our work, the correspondence cost involves matching object or vehicle centroids within lanes, subject to common and smooth motion constraints \cite{}.
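A minimal sketch of such a deterministic matcher, assuming NumPy and a greedy nearest-neighbor assignment (the gating distance and motion weight below are illustrative values, not those used in our system), is:

```python
import numpy as np

def match_centroids(prev_pts, curr_pts, prev_vel, max_dist=30.0, w_motion=0.5):
    """Greedy correspondence of vehicle centroids between two frames.

    Cost per pair = proximity term (Euclidean distance from the previous
    centroid) plus a smooth-motion term (deviation from the position
    predicted by the previous velocity).  max_dist gates implausible
    matches; each current centroid is assigned to at most one track."""
    matches = {}
    used = set()
    for i, p in enumerate(prev_pts):
        pred = (p[0] + prev_vel[i][0], p[1] + prev_vel[i][1])
        best, best_cost = None, float("inf")
        for j, c in enumerate(curr_pts):
            if j in used:
                continue
            d_prox = np.hypot(c[0] - p[0], c[1] - p[1])      # proximity
            d_smooth = np.hypot(c[0] - pred[0], c[1] - pred[1])  # smooth motion
            cost = d_prox + w_motion * d_smooth
            if d_prox <= max_dist and cost < best_cost:
                best, best_cost = j, cost
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

# Two vehicles moving right at ~10 px/frame; centroids in (x, y).
prev_pts = [(100, 50), (200, 80)]
prev_vel = [(10, 0), (10, 0)]
curr_pts = [(210, 81), (110, 50)]
print(match_centroids(prev_pts, curr_pts, prev_vel))  # {0: 1, 1: 0}
```

A full optimization (e.g. solving the assignment problem exactly) would replace the greedy loop, but the cost structure of proximity plus motion smoothness is the same.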

If $c_j(x_i, y_i)$ denotes the set of $n$ interest points (corner points) determined by the Harris-Stephens corner detection algorithm for frame $j$, then the centroid of the object is determined as follows:
\[ M_j(x,y) = \left( \frac{1}{n}\sum_i x_i ,\ \frac{1}{n}\sum_i y_i \right) \]
Given the proximity, common motion, and smooth motion constraints, this approach is suitable for determining the centroid of the object or vehicle in the bounding region or vehicle detection zone. 
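The centroid computation itself is a simple coordinate mean; a sketch (assuming NumPy, with hypothetical corner coordinates) is:

```python
import numpy as np

def object_centroid(points):
    """Centroid M_j of the n interest points detected for one object in
    frame j: the mean of the x coordinates and the mean of the y coordinates."""
    pts = np.asarray(points, dtype=float)
    return float(pts[:, 0].mean()), float(pts[:, 1].mean())

# Illustrative corner points of one vehicle inside a detection zone.
print(object_centroid([(10, 4), (14, 4), (10, 9), (14, 9)]))  # (12.0, 6.5)
```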

Our next formulation explains the approach for determining the speed of the object or vehicle. Let $N$ be the number of frames per second captured for video processing, and let $d_r$ (in pixels) be the on-screen distance between the reference points $r(x_1,y_1)$ and $r(x_2,y_2)$, evaluated with the Euclidean distance formula below. The vehicle centroid is denoted $c_j(x_1, y_1)$ and $c_{j+k}(x_2, y_2)$ for frames $j$ and $j+k$, respectively, and the centroid displacement is determined as:
\[ d_v = \sqrt{ (x_2 - x_1)^2 + (y_2 - y_1)^2} \]
If $D$ (in miles) is the real-world distance between the reference points, then a vehicle whose centroid is displaced by $d_v$ pixels between frames $j$ and $j+k$ travels $d_v D/d_r$ miles in $k/N$ seconds, so its speed is:
\[ v = \frac{d_v\, D}{d_r} \times \frac{3600\, N}{k} \ \mathrm{mph} \]  
Figure \ref{} shows the reference points $r(x_1,y_1)$ and $r(x_2,y_2)$, the reference distance $d_r$, and the vehicle centroids at frames $j$ and $j+k$.
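The speed formula can be sketched directly; all numbers below are purely illustrative, not measurements from our test site:

```python
def vehicle_speed_mph(d_v, d_r, D, k, N):
    """Speed from pixel displacement:
    d_v * D / d_r  -- miles travelled between frames j and j+k
    k / N          -- elapsed time in seconds
    divide miles by hours (seconds / 3600) to get mph."""
    miles = d_v * D / d_r
    hours = (k / N) / 3600.0
    return miles / hours

# Assumed numbers: reference points 0.05 mi apart spanning 400 px, a
# 40 px centroid shift over 10 frames at 30 frames/sec.
print(vehicle_speed_mph(d_v=40, d_r=400, D=0.05, k=10, N=30))  # ~54 mph
```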

\begin{figure}[hbtp]
\centering
\includegraphics[width=3in]{./Gaussian_Kernel}
\caption{The Gaussian window $w(x,y)$.}
\label{fig:Gaussian_Kernel}
\end{figure}


\begin{figure}[ht]
\centering
\subfigure[Caption of subfigure 1]{
\includegraphics[width=2in]{Detection_1.eps}
\label{fig:subfig1}
}
\subfigure[Caption of subfigure 2]{
\includegraphics[width=2in]{Detection_2.eps}
\label{fig:subfig2}
}
\subfigure[Caption of subfigure 3]{
\includegraphics[width=2in]{Detection_3.eps}
\label{fig:subfig3}
}
\caption[Optional caption for list of figures]{Caption of subfigures \subref{fig:subfig1}, \subref{fig:subfig2} and \subref{fig:subfig3}}
\label{fig:subfigureExample}
\end{figure}

\begin{figure}
\centering
\mbox{\subfigure{\includegraphics[width=3in]{Detection_1.eps}}\quad
\subfigure{\includegraphics[width=3in]{Detection_2.eps} }}
\caption{Text pertaining to both graphs ...} \label{fig12}
\end{figure}

\end{document}