\section{System Architecture}
\label{architecture}
\subsection{Low level feature extraction}
	In our system, we extract optical flow from temporally consecutive images. Optical flow represents the motion of individual pixels. We employ Farneback's method \cite{Farneback:2003} for dense optical flow estimation.
%By applying
%a polynomial expansion transform to the neighbourhood of
%each image frame, the translation vector for each
%pixel can then be obtained through the expansion coefficients. 
To reduce noise, increase processing speed, and make the motion more salient, optical flow is computed across 5 frames. 

\begin{figure}[htb] 
\begin{minipage}[b]{1.0\linewidth}
 \centering
 \centerline{\includegraphics[width=7cm]{architecture_2}}
%  \vspace{2.0cm}
\end{minipage}
\caption{Architecture of the Integral HOOF/HOG descriptor }
\label{fig:architecture1}
%%
\end{figure}
	Histogram features have advantages over raw color or optical flow data: they are invariant to translations and to rotations smaller than the orientation bin size, and the contrast normalization in HOG improves its robustness to illumination changes. Similar to HOG, HOOF is based on well-normalized local histograms of flow orientations over a dense grid. The flow is decomposed into two components, $d_x$ and $d_y$; the direction and magnitude of each flow vector are computed from the angle between the two components and their strength. To suppress small, low-magnitude noise caused by lighting effects, we simply threshold the flow, eliminating pixels whose motion magnitude is less than $1/2$. 
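The conversion of the two flow components into magnitude and orientation, with the $1/2$ magnitude threshold applied, can be sketched as follows (the function name and array layout are our own, for illustration):

```python
import numpy as np

def flow_polar(dx, dy, min_mag=0.5):
    """Convert flow components to magnitude/orientation and drop
    low-magnitude vectors (threshold 1/2, as in the text)."""
    mag = np.hypot(dx, dy)
    ang = np.mod(np.arctan2(dy, dx), 2 * np.pi)  # angle in [0, 2*pi)
    mag = np.where(mag < min_mag, 0.0, mag)      # suppress noisy pixels
    return mag, ang

# Small example: two strong flow vectors, two below-threshold ones.
dx = np.array([[1.0, 0.1], [0.0, -2.0]])
dy = np.array([[0.0, 0.1], [0.3, 0.0]])
mag, ang = flow_polar(dx, dy)
```

The surviving magnitudes and angles are what feed the orientation binning of the HOOF histograms.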
\subsection{Shared Integral Histogram Calculation Framework}

	We first introduce the basic idea of HOG, which applies directly to HOOF. The architecture is shown in Fig.~\ref{fig:architecture1}. In HOG, each detection window is divided into cells, and each cell yields a 9-D vector, where 9 is the default number of bins. The default cell size is $8\times8$ pixels, and every $2\times2$ cells form a block, with blocks sliding across the cells. Each block overlaps its neighbouring blocks by a certain stride and contains a 36-D feature vector. We then perform L2-normalization on each block, to limit the influence of large variances, before concatenating the blocks into the detection-window descriptor. For a detection window of size $32\times32$, the feature vector has 324 dimensions.  
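The descriptor length follows directly from this geometry; a short sanity check (parameter names are ours):

```python
def hog_descriptor_length(win=32, cell=8, cells_per_block=2, bins=9,
                          stride_cells=1):
    """Feature length for a square detection window, with blocks of
    cells_per_block x cells_per_block cells sliding one cell at a time."""
    cells = win // cell                                     # 32/8 = 4 cells per side
    blocks = (cells - cells_per_block) // stride_cells + 1  # 3 block positions per side
    return blocks * blocks * cells_per_block ** 2 * bins    # 9 blocks * 36-D

print(hog_descriptor_length())  # 324, matching the text
```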
	
	HOG is successful at accurately detecting objects, but it is often criticized for the massive computation time spent on binning. Applying the idea of the ``integral image'' is a solution that achieves better speed. \cite{Porikli:2005} suggested computing integral histograms over arbitrary rectangular image regions, and \cite{Zhu:2006vi} proposed the integral HOG, which reduces the computation time of the iterative histogram-counting loop. First, each pixel's orientation is discretized into $n$ bins according to its angle. Trilinear interpolation is difficult to apply directly to the 2-D integral vectors, so we first linearly interpolate each pixel between its two neighbouring bins, the low bin and the high bin (Eq.~\ref{interpolation}), where $L$ denotes the low bin, $H$ the high bin, and $\delta x$ the weight used to distribute the magnitude $f(x)$ between the two bins.
	
\begin{equation}
\label{interpolation}
f(L+\delta x) = (1-\delta x)\,f(L) + \delta x\,f(H), \qquad \delta x \in (0,1)
\end{equation}	
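The voting step described by this interpolation can be sketched as follows (the bin-center convention and the function name are our own illustrative choices):

```python
import numpy as np

def vote(angle, magnitude, n_bins=9, hist=None, unsigned=True):
    """Distribute a pixel's magnitude between its two neighbouring
    orientation bins with linear interpolation."""
    if hist is None:
        hist = np.zeros(n_bins)
    period = np.pi if unsigned else 2 * np.pi
    x = (angle % period) / period * n_bins  # continuous bin coordinate
    low = int(np.floor(x - 0.5)) % n_bins   # low bin L (bin centers at k + 0.5)
    high = (low + 1) % n_bins               # high bin H
    dx = (x - 0.5) % 1.0                    # weight delta-x in [0, 1)
    hist[low] += (1 - dx) * magnitude       # (1 - dx) * f(L)
    hist[high] += dx * magnitude            # dx * f(H)
    return hist
```

Whatever the split, the two weights sum to one, so the full magnitude of each pixel is always preserved in the histogram.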

After voting, the vector at each bin is passed through the $7\times7$ convolution kernel introduced by \cite{wang:2009}, which achieves the same result as Dalal's trilinear interpolation. The speed is increased further by using the Fast Fourier Transform (FFT). Finally, we compute the integral histogram for each bin; the final matrix has dimensions $(W+1)\times(H+1)\times B$, where $W$ and $H$ are the width and height of the image and $B$ is the number of bins.
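The per-bin integral histogram, and the four-lookup rectangle query it enables, can be sketched in a few lines (array names are ours; `votes` stands for the per-pixel binned magnitudes after voting):

```python
import numpy as np

def integral_histogram(votes):
    """votes: (H, W, B) per-pixel binned magnitudes.
    Returns the (H+1, W+1, B) integral histogram."""
    H, W, B = votes.shape
    ih = np.zeros((H + 1, W + 1, B))
    ih[1:, 1:, :] = votes.cumsum(axis=0).cumsum(axis=1)
    return ih

def region_hist(ih, y0, x0, y1, x1):
    """Histogram of the rectangle [y0:y1, x0:x1) from four lookups."""
    return ih[y1, x1] - ih[y0, x1] - ih[y1, x0] + ih[y0, x0]

rng = np.random.default_rng(0)
v = rng.random((16, 16, 9))
ih = integral_histogram(v)
h = region_hist(ih, 2, 3, 10, 11)  # equals v[2:10, 3:11].sum(axis=(0, 1))
```

Any cell histogram is thus obtained in constant time per bin, independent of the cell size.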

	To reduce the redundancy of computing cell histograms, we propose using a cells map for both HOG and HOOF before constituting the descriptor. By eliminating redundant computation of cell features, the complexity of calculating the blocks and the descriptor decreases: since the location of the cells within each block is fixed, we only need to assign cells according to their locations, which we hard-code in a codebook. In Eq.~\ref{pos}, the $n$-th block contains the cells given by the equation, where $y$ and $x$ refer to the position in the cells map.   
\begin{equation}
\mathrm{pos}(y,x) = (y \cdot \mathit{nblock} + x) \cdot (\mathit{cellperblock}^2 \cdot \mathit{binnumber})
\label{pos}
\end{equation}
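A minimal sketch of this codebook lookup, assuming a row-major cells map with `n_block` blocks per row (parameter names are ours, mirroring the symbols in the equation):

```python
def pos(y, x, n_block, cell_per_block=2, bin_number=9):
    """Offset of block (y, x) in a flat descriptor buffer, assuming a
    row-major layout with n_block blocks per row."""
    return (y * n_block + x) * (cell_per_block ** 2 * bin_number)

# Example: with 3 blocks per row, block (1, 2) starts at offset
# (1*3 + 2) * (4*9) = 180 in the concatenated descriptor.
offset = pos(1, 2, n_block=3)
```

Because the offsets are fixed by the window geometry, they can be precomputed once and reused for every detection window.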
\subsection{Support Vector Machine Learning}

The detection window produces a feature vector consisting of cell histograms. The features are fed into a classifier, which is trained on annotated data. The classifiers popular in human detection are the Support Vector Machine (SVM) and variants of boosted decision trees. The head of a person is labeled as a positive sample, while other body parts and the background are labeled as negative samples. In our experiments, the training data is normalized by Z-score. In SVM, three kernels are frequently applied under different scenarios: the linear, the polynomial, and the RBF (Radial Basis Function) kernel; see \cite{libsvm} for details. 
We also propose an augmented feature that combines HOG and HOOF (HOOF-HOG) into a 624-D feature vector. After segmentation, the feature descriptor is fed into the SVM. 
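The Z-score normalization applied to the training data can be sketched as follows (the function is our own illustration; rows are samples, columns are features):

```python
import numpy as np

def zscore(X, eps=1e-12):
    """Z-score each feature column of the training matrix X."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / (sigma + eps), mu, sigma

X = np.array([[1.0, 10.0],
              [3.0, 20.0],
              [5.0, 30.0]])
Xn, mu, sigma = zscore(X)
# Each column of Xn now has zero mean and (near-)unit variance.
```

The same `mu` and `sigma` computed on the training set must be reused to normalize test samples, so that training and prediction see features on the same scale.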
%The decision function of SVM is (\ref{SVM}). Y stands for the input vector, $X_k$ is the support vectors. 
%In linear SVM, the output is the inner product of support vectors and the input vector. It has the advantage of lower run time but acceptable predicting performance.
%\begin{equation}
%\label{SVM}
% f(Y) =\sum_{k=1}^n \alpha_k <Y , X_k>
%%f(x) = \beta +\sum_{k=1}^l
%\end{equation}
 
\subsection{Segmentation of the Region of Interest}
\begin{figure}[htb] 
\begin{minipage}[b]{1.0\linewidth}
 \centering
 \centerline{\includegraphics[width=7cm]{segment}}
%  \vspace{2.0cm}
\end{minipage}
\caption{Using HOOF to segment moving regions of interest}
\label{fig:segment}
%%
\end{figure} 
In this stage, we introduce a method for automatically segmenting moving regions. With the aid of the cells map, we compute connected components directly from the map, using the rule that when the sum of the values in a cell exceeds a threshold $T_s$, the corresponding pixel in the binary image is set to one. For each patch among the connected-component labeling (CCL) candidates, the enclosed area is calculated, and the patch is rejected if its area is below a second threshold $T_a$. Both thresholds are determined empirically. Some examples are shown in Fig.~\ref{fig:segment}.  
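The thresholding, labeling, and area filtering can be sketched as follows (a simple BFS-based 4-connected labeling; names and connectivity choice are ours):

```python
import numpy as np
from collections import deque

def segment(cells_map, t_s, t_a):
    """Binarize the cells map with threshold t_s, label 4-connected
    components by BFS, and keep those whose area is at least t_a."""
    binary = cells_map > t_s
    labels = np.zeros(binary.shape, dtype=int)
    regions, next_label = [], 1
    for sy, sx in zip(*np.nonzero(binary)):
        if labels[sy, sx]:
            continue  # pixel already belongs to a labeled component
        queue, area = deque([(sy, sx)]), 0
        labels[sy, sx] = next_label
        while queue:
            y, x = queue.popleft()
            area += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
        if area >= t_a:
            regions.append((next_label, area))  # keep sufficiently large patches
        next_label += 1
    return labels, regions
```

Isolated noisy responses produce tiny components that fall below $T_a$ and are discarded, leaving only coherent moving regions.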
\begin{table*}[ht]
\begin{center}
{\small
\hfill{}
\begin{tabular}{|c|r|c|r|r|r|r|r|}
\hline
Data&Bins&Kernel&Precision(\%)&Recall(\%)&F-Score&Accuracy(\%)&Rank\\
%\cline{3-9}
\hline
HOG&9	&\textit{Polynomial}	&64.64	&73.19	&0.6865&65.26&9\\
HOG&6&\textit{RBF}&\textbf{60.39}&\textbf{87.13}&\textbf{0.7134}&\textbf{65.52}&7\\
HOG&4&\textit{RBF}&64.48&75.13&0.6938&65.51&8\\		
\hline
HOOF&9&\textit{RBF}&78.35&88.71&0.8321&87.24&3\\
HOOF&6&\textit{RBF}&\textbf{81.07}&\textbf{88.36}&\textbf{0.8456}&\textbf{88.48}&\textbf{1}\\
HOOF&4&\textit{Polynomial}&79.94&87.13&0.8338&87.54&2\\
\hline
HOOF-HOG&9&\textit{RBF}&77.85&86.77&0.8207&86.12&6\\										
HOOF-HOG&6&\textit{Polynomial}&78.91&86.42&0.8249&86.15&5\\
HOOF-HOG&4&\textit{RBF}&\textbf{78.06}&\textbf{86.60}&\textbf{0.8211}&\textbf{86.26}&4\\
\hline
\end{tabular}}
\hfill{}
\caption{Results of the two proposed features and of HOG. The rank is the average rank across the columns. The best configuration in each feature set is shown in bold.}
\label{table:experiment_result}
\end{center}
\end{table*}