\documentclass[10pt, conference, compsocconf]{IEEEtran}

\ifCLASSINFOpdf
  % \usepackage[pdftex]{graphicx}
  % declare the path(s) where your graphic files are
  % \graphicspath{{../pdf/}{../jpeg/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
  % \DeclareGraphicsExtensions{.pdf,.jpeg,.png}
\else
  % or other class option (dvipsone, dvipdf, if not using dvips). graphicx
  % will default to the driver specified in the system graphics.cfg if no
  % driver is specified.
  % \usepackage[dvips]{graphicx}
  % declare the path(s) where your graphic files are
  % \graphicspath{{../eps/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
  % \DeclareGraphicsExtensions{.eps}
\fi

\hyphenation{op-tical net-works semi-conduc-tor}


\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{Hand Gesture Recognition and Its Practical Applications}
% author names and affiliations
% use a multiple column layout for up to two different
% affiliations
\author{\IEEEauthorblockN{Tran Duy Quang}
\IEEEauthorblockA{0912369\\
HCMUS, 09 Honor Class\\
Email: duyquang195@gmail.com}
\and
\IEEEauthorblockN{Phan Anh Tuan}
\IEEEauthorblockA{0912516\\
HCMUS, 09 Honor Class\\
Email: tuan\_anh64208@yahoo.com\\}
}

% make the title area
\maketitle


\begin{abstract}
Human-computer interaction (HCI) is a growing field in which researchers study novel ways to help humans interact with computers naturally and intuitively. One of the most widely researched topics in this field is hand gesture recognition, where hand movements are used to control computers and even mobile devices. However, current methods suffer from several problems, including an inability to run in real-time interactive applications. The goal of this project is to survey, implement, and evaluate several current gesture recognition methods, and then to propose a stable real-time hand gesture recognition algorithm that can recognize several gestures for use in computer interface interaction. Using this approach, we are able to recognize the 'spread', 'left', 'flat', and 'right' gestures, as well as the number of extended fingers, with over 90\% accuracy, which also makes the method applicable for practical use.

\end{abstract}

\begin{IEEEkeywords}
gesture detection; gesture recognition; skin detection; hand tracking

\end{IEEEkeywords}


% For peer review papers, you can put extra information on the cover
% page as needed:
% \ifCLASSOPTIONpeerreview
% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
% \fi
%
% For peerreview papers, this IEEEtran command inserts a page break and
% creates the second title. It will be ignored for other modes.
\IEEEpeerreviewmaketitle



\section{INTRODUCTION}
% no \IEEEPARstart
Humans communicate with a mix of speech and gestures. Gestures are made with the hands, the head, or any other part of the body; what matters is that with a gesture, someone moves in order to communicate. Human-computer interaction (HCI) is typically carried out via typing, mouse pointing, and clicking. However, computer recognition of hand gestures can provide a more intuitive user-machine interface and can be useful for a wide range of applications [1].

So why develop gesture technology? Why develop speech technology? Why search for a 'more natural way to communicate with computers'? There are perhaps as many answers as there are people involved in these developments. Here are some of the most common:
\begin{itemize}
\item To enable the transformation of the computer from a tool to an assistant, or a robot.
\item To enable a hands-free or device-free interaction with a computer, expanding the range of possibilities for computer usage.
\item Because under special circumstances it would be very handy.
\item To allow those who can only use their voice, eyes, or mouth to use computers; think of RSI sufferers or Stephen Hawking.
\item Because of a personal disliking of keyboards and mice.
\item And, most importantly, because we can.
\end{itemize}
As computers and camera systems continue to advance, new and interesting applications for them have emerged, including new ways to interact with computer systems through cameras. Most recently, the release of the Kinect system for the XBOX 360 indicates that there is growing interest in such interaction. Inspired by this, we decided to experiment with and create such an interface for vision-based computer interaction. Our interface allows users to perform specific actions, such as controlling the mouse or media playback, or building an autonomous picture-taking system on a mobile device, simply with movements and gestures made by their hand. Our intention is to create an interface that needs only a single webcam and standard computing equipment to operate. Since webcams are built into many laptops and mobile devices these days, and the remaining hardware is easy to come by, the interface would be available to most people. We believe that if a reliable mouse-control system can be developed on top of a stable hand motion and gesture interface, many people would be interested in using it for games and other activities.

The remainder of this report is organized as follows. In Section II, we review prior work on feature detection and hand gesture recognition algorithms. In Section III, our proposed hand gesture recognition method for some practical applications is introduced in more detail. In Section IV, simulation and experimental results from our MATLAB and C++/OpenCV implementations are discussed. Finally, the report is concluded in Section V.


%\subsection{Subsection Heading Here}
%Subsection text here.
%
%
%\subsubsection{Subsubsection Heading Here}
%Subsubsection text here.

\section{PRIOR AND RELATED WORK}
Hand gestures can be classified as macro gestures and micro gestures. Macro gestures vary with different positions of the hands relative to the human body. Micro gestures vary with the relative positions of the fingers of the hand. Various algorithms to detect hand gestures have been proposed in research [2-10].

Macro hand gestures can be detected by a vertical or horizontal histogram of the preview image [2]. This method assumes a static background. When a person enters the view, a binary mask of the person is obtained and its vertical histogram is analyzed. When one or both hands are raised to different positions, the peaks in the histogram change, and different histogram patterns correspond to different hand gestures. This algorithm is easy to implement, but it only works for a small number of people against a static background.
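The vertical-histogram idea of [2] can be sketched in a few lines. This is a minimal NumPy illustration (not the code from [2]); the helper name \texttt{vertical\_histogram} and the toy mask are our own for demonstration:

```python
import numpy as np

def vertical_histogram(mask):
    """Column-wise count of foreground pixels in a binary person mask."""
    return mask.sum(axis=0)

# Toy binary mask: a 'body' column plus a raised 'hand' to its right.
mask = np.zeros((6, 8), dtype=np.uint8)
mask[:, 2] = 1        # body: all 6 rows set in column 2
mask[0:2, 6] = 1      # raised hand: 2 rows set in column 6

hist = vertical_histogram(mask)
peaks = np.flatnonzero(hist > 0)   # columns containing foreground
```

Raising or lowering a hand shifts and reshapes the peaks of \texttt{hist}, and matching the resulting peak pattern against stored templates yields the macro gesture.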

One way to detect micro hand gestures is to employ the local orientation histogram of Freeman and Roth [3]. This algorithm builds on the orientation-histogram feature vector used in the pattern recognition system of McConnell [4]. It forms a histogram based on the orientation of edges in an image. The major limitation of this algorithm is that the object should occupy a considerable amount of area in the image, because small objects have little impact on the orientation histogram. Another representative approach is the use of Adaboost with SIFT by Wang and Wang [5], in which the discrete Adaboost learning algorithm is employed together with Lowe's scale-invariant feature transform (SIFT). Bretzner also proposed a hand gesture recognition scheme using multi-scale color features, hierarchical models, and particle filtering [6]. Fang and Wang used scale-space feature detection, such as blob and ridge detection, to recognize certain hand gestures [7]. MacLean and Herpers used skin-tone blobs around the face area to detect the hands and the number of fingers to differentiate gestures [8].
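The orientation-histogram feature of Freeman and Roth [3] can be illustrated with a small NumPy sketch. This is an assumed, simplified formulation (gradient orientations folded to $[0,\pi)$, weighted by gradient magnitude, then normalized), not the authors' original code:

```python
import numpy as np

def orientation_histogram(img, n_bins=8):
    """Histogram of local edge orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))       # gradients along rows, cols
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # fold to [0, pi): direction only
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-9)             # normalize: illumination invariance

# Toy image with a vertical step edge -> purely horizontal gradient -> orientation 0.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
hist = orientation_histogram(img)
```

Because the histogram is normalized and pools orientations over the whole image, it is fairly robust to lighting, but, as noted above, a small hand contributes little mass to it.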

Another approach is to label hand pixels using RGB data. One way to do this is to have a person wear a single colored glove, and label those pixels whose RGB values are close to that color as hand pixels [9]. A less robust method that does not require the use of a glove labels pixels based on their likeness to common skin colors [10].
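The skin-color labeling of [10] amounts to a per-pixel rule on RGB values. The sketch below uses one commonly cited rule-of-thumb bound set; the exact thresholds in [10] may differ, and real systems tune or learn them:

```python
import numpy as np

def skin_mask(rgb):
    """Label pixels as skin with a simple rule-based RGB test.

    The numeric bounds are a common heuristic, not the values from [10].
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = rgb.max(axis=-1).astype(int) - rgb.min(axis=-1).astype(int)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))

pixels = np.array([[[220, 160, 130],    # a typical light skin tone
                    [30, 120, 40]]],    # green background
                  dtype=np.uint8)
mask = skin_mask(pixels)
```

The glove-based variant [9] replaces the rule with a distance test against the known glove color, which is why it is more robust: the target color is controlled rather than assumed.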

In practice, however, these algorithms are either too complex to implement in real time on low-end hardware and mobile devices, or are not robust enough and are limited to specific trained gestures.

\section{METHODOLOGY}
In our experiments, we tried a number of ideas, many of which failed to produce good results for hand gesture recognition. To simplify the problem, we separate the inputs for hand gesture recognition into two main types: static images and sequences of frames. Below are our proposed approaches for each input type, which are the best among the methods we tested.
\subsection{Hand gesture with static inputs}
For static inputs, which are single images, we propose to reuse the Viola-Jones algorithm, which was found to be very successful for face detection under variable lighting conditions and was further enhanced for hand recognition under some rotation restrictions [11]. However, mapping the Viola-Jones face detector to hand gestures introduces a few challenges.
\begin{enumerate}
\item Hand gestures are more difficult to characterize than faces due to their finer-grained details.
\item Two different gestures are more similar to each other than a face is to a non-face object.
\item A single hand gesture may have many different postures that are perceived as the same gesture, unlike the Viola-Jones face detector, which is limited to faces looking straight ahead.
\end{enumerate}
To simplify, we assume that only the four hand gestures 'spread', 'left', 'flat', and 'right' can appear in the scene. The approach includes two main steps: training the Viola-Jones classifier and using it for detection.
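The machinery that makes Viola-Jones detection fast is the integral image, which lets any rectangular Haar-like feature be evaluated in constant time. The sketch below illustrates only this building block (not the full trained cascade or our training pipeline); the helper names are ours:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle at top-left (y, x): four lookups,
    regardless of rectangle size."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, y, x, h, w):
    """Haar-like feature: left half minus right half of a 2w-wide window."""
    return rect_sum(ii, y, x, h, w) - rect_sum(ii, y, x + w, h, w)

img = np.zeros((4, 8))
img[:, :4] = 1.0                      # bright left half, dark right half
ii = integral_image(img)
f = two_rect_feature(ii, 0, 0, 4, 4)  # strong response on this vertical edge
```

During training, Adaboost selects a small set of such features as weak classifiers; during detection, the cascade applies them to every sliding window, rejecting non-hand windows early.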
\subsection{Hand gesture with dynamic inputs}
For dynamic input data, such as videos or sequences of frames from a camera, one idea is to first locate the hands using a hand tracking method consisting of the following steps. First, we compute the absolute difference between the gray-scale version of the current frame and that of the previous frame. Second, we apply a threshold and basic image morphology to clean up the image. Finally, we use the location of the maximum value to track the hands. This works if the hands are the fastest-moving objects in the scene, but this is often not the case. The implementation also suffers from stability problems, since the hands are not always moving; issues arise when objects other than the hands move.
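The frame-differencing tracker described above can be sketched as follows. This is a minimal NumPy illustration rather than our C++/OpenCV implementation; the morphological clean-up step is omitted for brevity, and `track_hand` is a name chosen here for demonstration:

```python
import numpy as np

def track_hand(prev_gray, curr_gray, thresh=30):
    """Return the (row, col) of the strongest motion between two frames,
    or None if nothing exceeds the noise threshold."""
    diff = np.abs(curr_gray.astype(int) - prev_gray.astype(int))
    motion = np.where(diff > thresh, diff, 0)   # threshold out sensor noise
    if motion.max() == 0:
        return None                             # nothing moved this frame
    # In the full pipeline, morphology would clean up 'motion' before this step.
    return np.unravel_index(motion.argmax(), motion.shape)

prev = np.zeros((10, 10), dtype=np.uint8)
curr = prev.copy()
curr[3, 7] = 200                                # a fast-moving bright blob
pos = track_hand(prev, curr)
```

The `None` return path exposes exactly the instability discussed above: when the hand stops moving, the tracker has no signal, and any other moving object captures the maximum instead.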

% conference papers do not normally have an appendix

\section{EXPERIMENTS AND RESULTS}
Not complete ... Updating in Progress...

% use section* for acknowledgement
\section*{Acknowledgment}
%
%
The authors would like to thank Dr. Tran Minh Triet and Mr. Le Thanh Tam for teaching us useful knowledge and image processing techniques and for guiding us through this project.
%more thanks here


% trigger a \newpage just before the given reference
% number - used to balance the columns on the last page
% adjust value as needed - may need to be readjusted if
% the document is modified later
%\IEEEtriggeratref{8}
% The "triggered" command can be changed if desired:
%\IEEEtriggercmd{\enlargethispage{-5in}}

% references section

% can use a bibliography generated by BibTeX as a .bbl file
% BibTeX documentation can be easily obtained at:
% http://www.ctan.org/tex-archive/biblio/bibtex/contrib/doc/
% The IEEEtran BibTeX style support page is at:
% http://www.michaelshell.org/tex/ieeetran/bibtex/
%\bibliographystyle{IEEEtran}
% argument is your BibTeX string definitions and bibliography database(s)
%\bibliography{IEEEabrv,../bib/paper}
%
% <OR> manually copy in the resultant .bbl file
% set second argument of \begin to the number of references
% (used to reserve space for the reference number labels box)
%\begin{thebibliography}{1}
%
%\bibitem{IEEEhowto:kopka}
%H.~Kopka and P.~W. Daly, \emph{A Guide to \LaTeX}, 3rd~ed.\hskip 1em plus
%  0.5em minus 0.4em\relax Harlow, England: Addison-Wesley, 1999.
%
%\end{thebibliography}

\begin{thebibliography}{12}
\bibitem{}S. Mitra and T. Acharya, \textit{"Gesture recognition: a survey,"} IEEE Transactions on Systems, Man, and Cybernetics, Part C, 2007. http://ieeexplore.ieee.org/xpls/abs\_all.jsp?arnumber=4154947


\bibitem{}W.T. Freeman, M. Roth, \textit{"Orientation histograms for hand gesture recognition,"} IEEE Intl. Workshop on Automatic Face and Gesture Recognition, Zurich, Jun. 1995.

\bibitem{}R.K. McConnell, \textit{"Method of and apparatus for pattern recognition,"} U.S. Patent No. 4,567,610, Jan. 1986.

\bibitem{}C.C. Wang, K.C. Wang, \textit{"Hand posture recognition using Adaboost with SIFT for human robot interaction,"}

\bibitem{}L. Bretzner, I. Laptev, T. Lindeberg, \textit{"Hand gesture recognition using multi-scale color features, hierarchical models and particle filtering,"}

\bibitem{}Y. Fang, K. Wang, J. Cheng, H. Lu, \textit{"A real-time hand gesture recognition method,"}

\bibitem{}J. MacLean, R. Herpers, C. Pantofaru, L. Wood, K. Derpanis, D. Topalovic, J. Tsotsos, \textit{"Fast hand gesture recognition for real-time teleconferencing applications,"} Proc. of the IEEE ICCV Workshop on Recognition, Analysis and Tracking of Faces and Gestures in Real-Time Systems, RATFG-RTS 2001, pp. 133-144, 2001.

\bibitem{}P. Viola, M. Jones, \textit{"Rapid object detection using a boosted cascade of simple features,"}

\bibitem{}R. Lienhart, J. Maydt, \textit{"An extended set of Haar-like features for rapid object detection,"}

\bibitem{}T. Cour, F. Benezit, J. Shi, \textit{"Spectral segmentation with multiscale graph decomposition,"} IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005.

\bibitem{}L. E. Baum, \textit{"An Inequality and Associated Maximization Technique in Statistical Estimation for Probabilistic Functions of Markov Processes."} Inequalities, vol. 3, 1972: 1-8.

\bibitem{}M. Kolsch, M. Turk, \textit{"Analysis of rotational robustness of hand detection with a Viola-Jones detector,"} http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=\&arnumber=1334480


\end{thebibliography}


% that's all folks
\end{document}


