\section*{Introduction}
The present image analysis project was introduced as part of our master's training at Lyon 1 University. Our tutor gave us the choice among three image analysis subjects~: region growing, foreground extraction, and human body part segmentation. I chose to treat the second topic, foreground extraction.

\subsection*{State of art}
The common way to extract moving objects is to use background subtraction methods.
These methods have different steps: background modeling, background initialization, background maintenance and
foreground detection~\cite{bouwmans2009modeling}. The foreground detection therefore depends on the background modelling.\\
In the literature, a large number of methods have been developed to perform the crucial background subtraction step~\cite{bouwmans2009subspace}. These methods can be classified into three main categories~: basic background modelling, statistical background modelling and background estimation~\cite{bouwmans2009modeling}.\\
The first category gathers the most naive approaches to model the background, based on frame differencing. At each new image, a difference frame is computed between the new image and the previous one. For additional accuracy, a mean filter can be added~: during a learning period, the background is modelled by averaging the frames of the learning sequence, and the difference frame is then computed between the background model and the new frame~\cite{tamersoy2009background,belaroussi2010traitement}.
Since the first category is robust neither to dynamic environments nor to camera movement, the methods of the second category adopt a probabilistic way to model the background. The main idea is that the foreground is detected by considering, at each pixel, the background average and variance. Many methods exist in this category~\cite{bouwmans2009modeling,tamersoy2009background}.
The third category, background estimation, is based on a multidimensional (range and color) mixture of Gaussians, which can be applied to sequences containing substantial foreground elements. Segmentation of the foreground is performed via background comparison in range and normalized color~\cite{gordon1999background}.

\subsection*{Chosen technique}
Since this is a training project, I chose to implement basic background modelling in the first instance.
In order to detect the foreground, the preliminary step is to model the background. The first approach uses frame differencing~: let an \emph{8-bit} depth grey-scale image sequence be a map $I : \mathbb{N}^3 \rightarrow \{0,1,...,255\}$, where $I(x,y,t)$ gives the pixel intensity, $t$ indexes time and $(x,y)$ the pixel position. The difference frame is
\[\Delta I(x,y,t)=|I(x,y,t)-I_{ref}(x,y,t)|, \forall(x,y)\]
where $I(x,y,t)$ is the current captured frame and $I_{ref}(x,y,t)=I(x,y,t-1)$ the previous frame.
The obtained $\Delta I$ can then be thresholded~:
\[
\Delta I_{thresh}(x,y,t)=
\left\{
  \begin{array}{l l}
    255 & \text{if $ \Delta I(x,y,t) > thresh$} \\
    0 & \text{if $ \Delta I(x,y,t) \leq thresh$}\\
  \end{array}
\right.
\]
If the scene illumination varies abruptly or the camera moves during the sequence, the difference image will not be reliable. A more robust movement detection can be implemented using the average of the $N$ previous frames as the reference frame~\cite{belaroussi2010traitement}~:
\[
I_{ref}(x,y,t)=\frac{1}{N}\sum_{i=1}^{N}I(x,y,t-i)
\] 
This implementation requires storing a history of $N$ frames.\\
In order to save memory when $N$ is large, we can store an approximation of the running average in a \emph{16-bit} depth image $I_{\backsim}$, which represents an approximation of the average of up to $256$ frames. The number of frames in the running average is denoted $\backsim_{size}$, and $I_{\backsim}$ is re-computed at each frame~:
\[
I_{\backsim}=
\left\{
 \begin{array}{l l}
   I_{\backsim} + I & \text{if } t \leq \backsim_{size} \\
   I_{\backsim} - \dfrac{I_{\backsim}}{\backsim_{size}} + I & \text{if } t > \backsim_{size} \\
 \end{array}
\right.
\]
$\forall (x,y)$, where $I$ and $I_{\backsim}$ are simplified notations for $I(x,y,t)$ and $I_{\backsim}(x,y,t)$ respectively.
When $t$ reaches $\backsim_{size}$, $I_{\backsim}$ is full and we start subtracting $\dfrac{I_{\backsim}}{\backsim_{size}}$ from $I_{\backsim}$. Notice that $\backsim_{size}=\dfrac{2^{\backsim_{depth}}}{2^{depth}}$, where $depth$ is the image depth and $\backsim_{depth}= depth + \log_2(\backsim_{size})$, and that the memory cost equals $height \times width \times \backsim_{depth}$. The memory cost per pixel thus grows in $\log_2(N)$, compared to the true running average, which grows in $N$.\\
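The accumulator update above can be sketched as a small C++ structure. This is an illustrative sketch only; the type name \texttt{RunningAverage} and its member names are assumptions, and the background estimate is read back as $I_{\backsim}/\backsim_{size}$.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Approximate running average kept in a 16-bit accumulator, following the
// update rule above: accumulate the first approxSize frames, then subtract
// acc/approxSize before adding each new frame.
struct RunningAverage {
    std::vector<uint16_t> acc;   // I~ : 16-bit accumulator image
    int approxSize;              // ~size : averaging window length
    int t = 0;                   // number of frames seen so far

    RunningAverage(std::size_t pixels, int size)
        : acc(pixels, 0), approxSize(size) {}

    void update(const std::vector<uint8_t>& frame) {
        ++t;
        for (std::size_t i = 0; i < acc.size(); ++i) {
            if (t <= approxSize)
                acc[i] = uint16_t(acc[i] + frame[i]);                      // fill phase
            else
                acc[i] = uint16_t(acc[i] - acc[i] / approxSize + frame[i]); // steady state
        }
    }

    // Background estimate at pixel i: accumulator divided by the window size.
    uint8_t background(std::size_t i) const {
        return uint8_t(acc[i] / approxSize);
    }
};
```

On a static scene the accumulator stabilizes: once full, each update removes one "average frame's worth" of intensity and adds the new frame, so the estimate drifts only slowly toward any change.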
Therefore, for a low memory cost, we obtain a sufficient approximation of the running average.
With this method, if the global illumination of the scene is stable and $\backsim_{size}$ is sufficiently large, $I_{\backsim}$ tends to model the background correctly by erasing the moving objects.\\
But if the scene illumination varies abruptly, for example indoors when a main light is turned off, then during a period depending on $\backsim_{size}$ the results will be incorrect.
In order to solve this problem, we can make $\backsim_{size}$ inversely related to the square root of the global illumination variance $Var_{\gamma}$ over a period $N$~:
\[
\gamma_i=\dfrac{\sum_{x,y}I(x,y,i)}{height \times width}
\]
\[
\bar{\gamma}=\frac{1}{N}\sum_{i=t-N}^{t}\gamma_i
\]
\[
Var_{\gamma}=\frac{1}{N} \sum_{i=t-N}^{t}(\gamma_i - \bar{\gamma})^2
\]
\[
\backsim_{size}=
\dfrac{2^{\backsim_{depth}}}{2^{depth}}
\times
\left( 1- \dfrac {\sqrt{Var_{\gamma}}} {2^{depth}} \right)
\]
where $\gamma_i$ is the global illumination at time $i$.\\
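The adaptive window size can be sketched as follows, using the default depths from the text ($depth=8$, $\backsim_{depth}=16$). The function names \texttt{globalIllumination} and \texttt{adaptiveSize} are assumptions of this sketch.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <deque>
#include <numeric>
#include <vector>

// gamma_i : mean intensity of frame i (the global illumination).
double globalIllumination(const std::vector<uint8_t>& frame) {
    double sum = std::accumulate(frame.begin(), frame.end(), 0.0);
    return sum / double(frame.size());
}

// ~size = (2^~depth / 2^depth) * (1 - sqrt(Var_gamma) / 2^depth),
// computed over a history of the last N global illumination values.
int adaptiveSize(const std::deque<double>& gammaHistory,
                 int depth = 8, int approxDepth = 16)
{
    double mean = std::accumulate(gammaHistory.begin(), gammaHistory.end(), 0.0)
                  / double(gammaHistory.size());
    double var = 0.0;
    for (double g : gammaHistory) var += (g - mean) * (g - mean);
    var /= double(gammaHistory.size());

    double maxSize = std::pow(2.0, approxDepth - depth);        // 2^~depth / 2^depth
    double scale   = 1.0 - std::sqrt(var) / std::pow(2.0, depth);
    return int(maxSize * scale);
}
```

With a stable illumination the variance is zero and the window stays at its maximum (256 frames here); strong flicker shrinks the window so the model re-adapts faster.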
To be more reactive when a fast change occurs in the scene, we can weight the terms of $Var_{\gamma}$ with a \emph{Gaussian} function $\varphi$ of parameter $\sigma$~:
\[
\varphi(t)=\dfrac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{t}{\sigma}\right)^2\right)
\]
\[
Var_{\gamma}=\sum_{i=t-N}^{t}\varphi(t-i)(\gamma_i - \bar{\gamma})^2
\]
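The Gaussian-weighted variance can be sketched as below: the most recent frame sits at the peak of the Gaussian ($t-i=0$), so a sudden illumination change contributes with the largest weight. The names \texttt{gaussianWeight} and \texttt{weightedVariance}, and the choice of $\sigma$, are assumptions of this sketch.

```cpp
#include <cassert>
#include <cmath>
#include <deque>
#include <numeric>

// phi(x) : Gaussian density of standard deviation sigma, centred at 0.
double gaussianWeight(double x, double sigma) {
    const double pi = std::acos(-1.0);
    return std::exp(-0.5 * (x / sigma) * (x / sigma))
           / (sigma * std::sqrt(2.0 * pi));
}

// Var_gamma = sum_i phi(t - i) * (gamma_i - mean)^2, where the last element
// of gammaHistory is the current frame t and earlier elements are older.
double weightedVariance(const std::deque<double>& gammaHistory, double sigma) {
    double mean = std::accumulate(gammaHistory.begin(), gammaHistory.end(), 0.0)
                  / double(gammaHistory.size());
    double var = 0.0;
    int n = int(gammaHistory.size());
    for (int i = 0; i < n; ++i) {
        double age = double(n - 1 - i);   // t - i : 0 for the newest frame
        double dev = gammaHistory[i] - mean;
        var += gaussianWeight(age, sigma) * dev * dev;
    }
    return var;
}
```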
\emph{The presented method is adapted in the project implementation to RGB images. A method description is detailed in appendix~\ref{app:One}.}
%Color case -> OK
%Movement history image
%Movement concentration image
%Gaussian filter
\section*{Implementation}
%Talk about used technology :
%C++ with Qt4 and OpenCV
%Multithreading and/or OpenCL?
%Talk template about design

\subsection*{Application architecture}
%Creat a class diagram and talk about it

\subsection*{Graphical user interface}
%Explain the GUI utilisation

\section*{Discussion}
\subsection*{Performances}
\subsection*{Limitations}

\section*{Conclusion}

%\begin{algorithmic}
%\If {$t \leq 255$}
%    \State $I_{\backsim}(x,y,t) \gets I_{\backsim}(x,y,t) + I(x,y,t)$
%\Else
%    \State $I_{\backsim}(x,y,t) \gets I_{\backsim}(x,y,t) - (I_{\backsim}(x,y,t)/255) + I(x,y,t)$
%\EndIf
%\end{algorithmic}

%\begin{figure}[h]
%   \includegraphics[width=\textwidth]{mainwindow.png}
%   \caption{Graphical user interface : Main window modeling}
%\end{figure}
