\documentclass{report}

% Comments between the % sign and the end of line

\usepackage[utf8]{inputenc}
\usepackage{fullpage} 

\usepackage{amsmath}   %Package containing useful math symbols

\usepackage[english,vietnam]{babel} 

\usepackage{graphicx}

\graphicspath{{Resources/}}

\usepackage{setspace}
%\singlespacing
\onehalfspacing
%\doublespacing
%\setstretch{1.1}

\title{This is our title}
\date{April 3, 2012}
\author{Nguyen Gia Duy}

\begin{document}

\begin{figure}[h!]
  \centering
    \includegraphics[width=0.7\textwidth]{logo}
\end{figure}

\begin{center}
{\scshape\LARGE University of Bordeaux 1}\\[2cm]
\end{center}

\begin{center}
{\scshape\Large Final Report M2 Project}\\
{\scshape\Large Master of Software Engineering (2011-2013)}\\
\end{center}

\begin{center}
\line(1,0){460}
\end{center}

\begin{center}
\textbf{\rmfamily\Huge Computing The Frequent Items In A Stream Of Data}
\end{center}

\begin{center}
\textbf{\rmfamily\Huge Using Small Memory Space}
\end{center}

\begin{center}
\line(1,0){460}\\[3cm]
\end{center}

\begin{center}
 \begin{tabular}{lr}
   \emph{Authors}:\hspace{7cm}  &   \emph{Supervisors}: \\[5pt]
  Phan Quoc Trung & Sofian Maabout\\
  Nguyen Gia Duy \\[5cm]
 \end{tabular}
\end{center}

\begin{center}
{April 30, 2013}
\end{center}

\selectlanguage{english}

\renewcommand{\abstractname}{\rmfamily\Huge Abstract}
\begin{abstract}

{\large The frequent items problem is one of the most heavily studied problems in mining data streams, dating back to the 1980s. Many other applications rely directly or indirectly on finding the frequent items, and implementations are in use in large-scale industrial systems. Papers on this topic are still common; one notable survey is the paper of Cormode and Hadjieleftheriou [1], which presents the most important algorithms for the frequent items problem in a common framework. The authors created baseline implementations of the algorithms and used them to perform a thorough experimental study of their properties, giving empirical evidence that there is considerable variation in the performance of frequent items algorithms.}\\

{\large In this project, we analyze the paper of Cormode and Hadjieleftheriou, together with some other related documents, in order to gain a deep understanding of this problem. Based on this analysis, we implement and test some of the techniques proposed in the paper, focusing on measuring the execution time and the memory used.}

\end{abstract}

\tableofcontents

\chapter{Introduction}
{\Huge (draft)}
\section{Project summary}

{\large In this project, the data structure of a graph has been provided by the client, Professor Olivier Baudon. This structure includes many basic graph interfaces. Based on this structure, we analyze and implement some classic graph algorithms such as \emph{Dijkstra, Bellman-Ford, Floyd-Warshall, Ford-Fulkerson, Kruskal, Prim, Maximum Flow, Compute Minimum Cutsets, etc.} Most of the algorithms follow the book \emph{T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein. Introduction to Algorithms, Third Edition. The MIT Press, 2009}, as required by the client.}

\section{Domain analysis}

{\large Graph algorithms are thoroughly analyzed in many books and other materials. In practice, however, few projects synthesize and implement all of these algorithms in one place. Providing an overview of graph algorithms, and making it easy to check and compare their structure and performance, is the reason we undertook this project.}


\section{Result}

{\large We built a small graph program that implements most of the standard graph algorithms. In addition, the program provides a set of test cases to check the correctness and performance of the algorithms.}

\chapter{Analysis}

\section{Counter-based algorithms}

A common feature of these algorithms is that, when given a new item, they test whether it is one of the \emph{k} items being stored by the algorithm and, if so, increment its count.
The cost of supporting this \emph{“dictionary”} operation depends on the model of computation assumed. There are many ways to implement it, but here we simply count the number of \emph{“dictionary”} operations performed by the algorithms.

\subsection{The Majority algorithm}

Invented in 1980 by \emph{Boyer} and \emph{Moore} [6], the Majority algorithm can be stated as follows:

\begin{itemize}
\item Store the first item and a counter, initialized to 1.
\item For each subsequent item:
	\begin{itemize}
		\item If it is the same as the currently stored, increment the counter.
		\item If it differs, and the counter is zero, then store the new item and set the counter to 1.
		\item Else, decrement the counter.
	\end{itemize}
\end{itemize}
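The steps above can be sketched in a few lines of Python (a sketch for illustration, not the original authors' implementation):

```python
def majority_candidate(stream):
    """Boyer-Moore majority vote: one stored item and one counter."""
    item, count = None, 0
    for x in stream:
        if count == 0:       # counter is zero: store the new item
            item, count = x, 1
        elif x == item:      # same as the stored item: increment
            count += 1
        else:                # different item: decrement
            count -= 1
    return item
```

If some item occupies a strict majority of the stream, it is guaranteed to be the one left stored at the end; otherwise the returned candidate must be verified with a second pass over the data.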

\subsection{The Frequent algorithm}

This algorithm was first proposed by \emph{Misra} and \emph{Gries} as \emph{“Algorithm 3”} in 1982 [22]. It is a generalization of the Majority algorithm.
Instead of keeping a single counter and item from the input, the Frequent algorithm stores \emph{k - 1} (item, counter) pairs.
The natural generalization of the Majority algorithm is to compare each new item against the stored items \emph{T}, and increment the corresponding counter if it is among them.
Else, if there is some counter with a zero count, it is allocated to the new item, and the counter is set to 1.
If all \emph{k - 1} counters are allocated to distinct items, then all are decremented by 1. A grouping argument shows that any item which occurs more than \emph{n/k} times must be stored by the algorithm when it terminates.

Pseudo code:

\begin{figure}[h!]
    	\centering
     	\includegraphics[width=0.5\textwidth]{p_F}
\end{figure}
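A minimal Python sketch of the same logic, using a dictionary for the \emph{k - 1} counters (the names are ours, not from the paper):

```python
def frequent(stream, k):
    """Misra-Gries: keep at most k-1 (item, counter) pairs.
    Any item occurring more than n/k times survives to the end."""
    counters = {}
    for x in stream:
        if x in counters:                # stored item: increment
            counters[x] += 1
        elif len(counters) < k - 1:      # a free slot: store with count 1
            counters[x] = 1
        else:                            # all slots taken: decrement all
            for item in list(counters):
                counters[item] -= 1
                if counters[item] == 0:  # drop counters that reach zero
                    del counters[item]
    return counters
```

Note that the stored counts are only lower bounds; an exact-count verification pass is needed if the true frequencies are required.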

\subsection{The LossyCounting algorithm}

Proposed by \emph{Manku} and \emph{Motwani} in 2002 [19].
The algorithm stores tuples which comprise an item, a lower bound on its count, and a \emph{“delta”} value which records the difference between the upper bound and the lower bound.

When processing the \emph{ith} item in the stream:

\begin{itemize}
	\item If information is currently stored about the item then its lower bound is increased by one
	\item Else, a new tuple for the item is created with the lower bound set to one, and \emph{“delta”} set to $\lfloor i/k \rfloor$.
\end{itemize}

Periodically, all tuples whose upper bound is less than $\lfloor i/k \rfloor$ are deleted. These are correct upper and lower bounds on the count of each item, so at the end of the stream, all items whose count exceeds \emph{n/k} must be stored.

Pseudo code:

\begin{figure}[h!]
    	\centering
     	\includegraphics[width=0.5\textwidth]{p_LC}
\end{figure}
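Following the description above ($\Delta = \lfloor i/k \rfloor$ at insertion, pruning when the upper bound falls below $\lfloor i/k \rfloor$), a Python sketch might look like this; the pruning schedule of every \emph{k} items is our simplification of "periodically":

```python
def lossy_counting(stream, k):
    """Lossy Counting as described above: each tuple stores a lower
    bound and a delta fixed to floor(i/k) at insertion time; tuples
    whose upper bound falls below floor(i/k) are pruned periodically."""
    counts = {}                      # item -> (lower_bound, delta)
    for i, x in enumerate(stream, start=1):
        if x in counts:              # known item: raise the lower bound
            low, delta = counts[x]
            counts[x] = (low + 1, delta)
        else:                        # new item: bound 1, delta = floor(i/k)
            counts[x] = (1, i // k)
        if i % k == 0:               # periodic pruning (every k items here)
            bound = i // k
            for item in list(counts):
                low, delta = counts[item]
                if low + delta < bound:   # upper bound below floor(i/k)
                    del counts[item]
    return counts
```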

\subsection{The SpaceSaving  algorithm}

Introduced in 2005 by \emph{Metwally et al.}
In this algorithm, \emph{k} (item, count) pairs are stored, initialized with the first \emph{k} distinct items and their exact counts.
As usual, when the next item in the sequence corresponds to a monitored item, its count is incremented. When the next item does not match a monitored item, the (item, count) pair with the smallest count has its item value replaced with the new item, and the count is incremented.

Pseudo code:

\begin{figure}[h!]
    	\centering
     	\includegraphics[width=0.5\textwidth]{p_SS}
\end{figure}
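The replacement rule can be sketched as follows. This is a simplified sketch: the original Stream-Summary structure finds the minimum in $O(1)$ time, whereas this dictionary version scans for it.

```python
def space_saving(stream, k):
    """SpaceSaving: k (item, count) pairs; an unmonitored item evicts
    the pair with the smallest count and inherits that count plus one."""
    counts = {}
    for x in stream:
        if x in counts:
            counts[x] += 1
        elif len(counts) < k:                     # still room: exact count
            counts[x] = 1
        else:
            victim = min(counts, key=counts.get)  # smallest counter
            counts[x] = counts.pop(victim) + 1    # replace and increment
    return counts
```

Each stored count overestimates the true frequency by at most the count inherited at the last eviction, which is what makes the algorithm's error bound possible.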

\section{Sketch  algorithms}

(draft)

The sketch algorithms described here use hash functions to define a (very sparse) linear projection of the input.
Because of this linearity, updates with negative values can easily be accommodated by such sketching methods.
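To make the linearity concrete, here is a minimal Count-Min sketch in Python (an illustrative sketch: the salts and Python's built-in tuple hashing stand in for the pairwise-independent hash functions used in the analysis). Every update adds a possibly negative value $c$ to exactly one counter per row, so the sketch of a sum of streams equals the sum of their sketches.

```python
import random

class CountMinSketch:
    """Minimal Count-Min sketch: d rows of w counters. Every update
    touches exactly one counter per row, so the map is linear."""
    def __init__(self, w, d, seed=0):
        rng = random.Random(seed)
        self.w = w
        self.salts = [rng.randrange(1 << 30) for _ in range(d)]
        self.table = [[0] * w for _ in range(d)]

    def update(self, x, c=1):
        # c may be negative: the sketch is a linear projection of the input
        for row, salt in zip(self.table, self.salts):
            row[hash((salt, x)) % self.w] += c

    def estimate(self, x):
        # min over rows; an overestimate when all true counts are non-negative
        return min(row[hash((salt, x)) % self.w]
                   for row, salt in zip(self.table, self.salts))
```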

\subsection{The CountSketch  algorithm}

Pseudo code:

\begin{figure}[h!]
    	\centering
     	\includegraphics[width=0.7\textwidth]{p_CK}
\end{figure}

\subsection{The CountMin Sketch algorithm}

Pseudo code:

\begin{figure}[h!]
    	\centering
     	\includegraphics[width=0.7\textwidth]{p_CMK}
\end{figure}

\chapter{Experiments}

\section{Counter-based algorithms}
\subsection{The Frequent algorithm}

\begin{itemize}
  \item Memory used:

   \begin{figure}[h!]
    \centering
     \includegraphics[width=0.9\textwidth]{logo}
   \end{figure}

  \item Execution time:

   \begin{figure}[h!]
    \centering
     \includegraphics[width=0.9\textwidth]{logo}
   \end{figure}

\end{itemize}

\subsection{The LossyCounting algorithm}

\begin{itemize}
  \item Memory used:

   \begin{figure}[h!]
    \centering
     \includegraphics[width=0.9\textwidth]{logo}
   \end{figure}

  \item Execution time:

   \begin{figure}[h!]
    \centering
     \includegraphics[width=0.9\textwidth]{logo}
   \end{figure}

\end{itemize}

\subsection{The SpaceSaving algorithm}

\begin{itemize}
  \item Memory used:

   \begin{figure}[h!]
    \centering
     \includegraphics[width=0.9\textwidth]{logo}
   \end{figure}

  \item Execution time:

   \begin{figure}[h!]
    \centering
     \includegraphics[width=0.9\textwidth]{logo}
   \end{figure}

\end{itemize}

\subsection{Comparisons}

\subsection{Conclusions}

\section{Sketch algorithms}
\subsection{The CountSketch  algorithm}

\begin{itemize}
  \item Memory used:

   \begin{figure}[h!]
    \centering
     \includegraphics[width=0.9\textwidth]{logo}
   \end{figure}

  \item Execution time:

   \begin{figure}[h!]
    \centering
     \includegraphics[width=0.9\textwidth]{logo}
   \end{figure}

\end{itemize}

\subsection{The CountMin Sketch  algorithm}

\begin{itemize}
  \item Memory used:

   \begin{figure}[h!]
    \centering
     \includegraphics[width=0.9\textwidth]{logo}
   \end{figure}

  \item Execution time:

   \begin{figure}[h!]
    \centering
     \includegraphics[width=0.9\textwidth]{logo}
   \end{figure}

\end{itemize}

\subsection{Comparisons}

\subsection{Conclusions}

\chapter{Conclusion}

\begin{thebibliography}{}

   \bibitem{CLRS10} % Short name used for the \cite command
     T.H. Cormen, C.E. Leiserson, R.L. Rivest and C. Stein,
     \emph{Introduction to Algorithms},
     MIT Press, 3rd Edition, 2010. % Necessary!

   \bibitem{Even79} % Short name used for the \cite command
     Shimon Even,
     \emph{Graph Algorithms},
     Computer Science Press, 1979. % Necessary!

\end{thebibliography}
\end{document}

