\chapter{The Proposed System}
\label{sec:proposedSystem}
\section{Aims of the System}
\label{sec:AimsOfTheSystem}
%The previous chapter showed the current trends in sketch recognition. 
This thesis introduces a new approach to the sketch recognition problem. Particle Swarm Optimization (PSO) is used to correctly segment strokes into curves and lines. Currently implemented systems segment strokes based either on simple curvature \cite{meanshift10,earlySketchbased4} or on speed information \cite{earlySketchbased4}.


The main contribution of the system is the attempt to generate the optimal segmentation using PSO, which has exhibited superiority over similar algorithms in various other applications \cite{PolygonApproximationPSO}. The system attempts to find the optimal decomposition of the input stroke using PSO with the help of curvature and speed information. The use of the PSO algorithm helps eliminate the effects of input noise and the over- and under-segmentation problems that occurred in previous systems. The proposed symbol recognizer uses the generated segmentation to compute the feature vector, which is used to identify more complex symbols. This enables the system to scale to more complex shapes.


%the performance of most algorithms is based on user style. 
% Stroke segmentation is one of the most complex problems in the sketch recognition system. It is dividing the   
%In stroke segmentation problem is Particle swarm optimization is used as the 
\section{System Overview}
\label{sec:AnOverviewOfTheSystem}
   This research solves the sketch segmentation problem using the Particle Swarm Optimization (PSO) algorithm. Due to the sloppiness of users and hardware glitches, the captured points must be processed to remove noise and redundancy before the system proceeds to compute the speed and curvature information. After that, a set of possible corner or critical points is computed to guide the segmentation procedure. A set of features is then extracted from the segmented strokes to be used by the classifier.  %The stroke is the path of points from the instant the pen is down till it is up.
The block diagram in Figure \ref{fig:Blockdiagram} shows the main system blocks. The next section describes each block in detail.
 %Due to sloppiness of users and hardware glitches the system processes the data before further
\begin{figure}[]
	\centering
	
\begin{center}
	\includegraphics[scale=0.9]{images/AllBlockDiagram.eps}
	\caption[The System Block Diagram]{Block diagram of the system}
	\label{fig:Blockdiagram}
\end{center}
\end{figure}
 

%\subsection{System Components}
%\label{sec:SystemComponents}

The \textbf{Possible Dominant Point Extraction} step is responsible for capturing the input data and removing noise from it. As shown in Figure \ref{fig:Blockdiagram}, Possible Dominant Point Extraction consists of computing the time differences, direction, speed and curvature of every point in the stroke. After computing this information, the local minimum and local maximum of each curve are extracted. This process estimates a set of \textit{Possible Dominant Points} $P_{pd}$ that guide the segmentation process. Later, other geometrical and statistical quantities are computed from the stroke. %The system then proceeds to estimate to help in the segmentation process.   \\
 
  
The next step is \textbf{Segmentation}; the goal of the segmentation stage is to divide strokes into segments that are either curves or lines. The segmentation problem can be summarized as finding the best decomposition with the fewest segments, each representing a geometric primitive. This is an optimization problem, which evolutionary computation can solve efficiently. A genetic algorithm was used in \cite{CruveDivisionSwarm} to optimally divide digital curves into lines and curves. Chen et al. \cite{CruveDivisionSwarm} used digital curves scanned from paper as input to the system and did not take advantage of the curvature or local geometric properties of the digital curve. Furthermore, comparisons between PSO and GA \cite{ComparePSOGA05} show that PSO converges to the optimal solution faster than GA.  

 As shown in Figure \ref{fig:Blockdiagram}, an attempt is first made to fit the stroke points into a curve or an ellipse using a minimum square error fitting algorithm \cite{chernov}. If the stroke proves to be an elliptical arc, the segmentation process ends and the system proceeds to the next step. Otherwise, the stroke is passed to two particle swarm algorithms that divide the stroke into either a polygon or a set of lines and curves. The algorithms take the stroke points along with the possible dominant points $P_{pd}$ computed during Possible Dominant Point Extraction and produce a set of dominant points connected by either lines or curves. The two algorithms generate two segmentations; the system chooses the one with the minimum segmentation error.%

% First an attempts the segmentation process is divided into two steps ellipse fitting and curve segmentation. In the first step, the ellipse fitting process tries to fit the stroke into an ellipse. If the system fails it passes the stroke to the second step; curve segmentation which consists of two PSO segmentation algorithms. The two algorithms will generate two segmentations, the system will choose the segmentation that has the minimum segmentation error.%  generate the segmentations,  the minimum error will be the chosen segmentation. % is the ellipse fitting

 %the stroke first If the ellipse detection fails the stroke is passed to the segmentation algorithms which will pass it to the two PSO algorithms described below the segmentation with the minimum error will be the chosen segmentation.  The segmentation is then added to the set of un-recognized segments in the system. \\% this part is repeated for each stroke. 
 %the clustering algorithm starts to group segments together after the segmentation step. The system let the user draws the symbol by using any number of strokes, a set of unrecognized segments is passed to the clustering algorithm to generate a symbol and compute a feature vector for it.
 
 In the \textbf{Feature Extraction} step, the approximated segments computed in the previous step are appended to a list of segments. A feature vector consisting of a set of statistical, geometrical and spatial features is extracted from the segmented strokes. The feature vector is used as input for the next step.   %the system computes composite set of features some are statistical other are spatial features based on the type primitives.% The system extract segment based and statistical based features from the set of segments that the clustering algorithms produce. 
  
   The final step is \textbf{Classification} using an SVM classifier. The classifier attempts to classify the symbol into one of the previously trained classes.
   % c  that will use the features computed to classify the segments into one of the previously trained classes. % Or to determine the symbol of the given segments from the set of preciously trained symbols. 
 %with a symbol from the training set.% The system compute composite set of features some are statistical other are spatial features based on the type primitives. Finally, the strokes are classified into the corresponding classes.% the classifier identifies the strokes into a symbol from the set of known symbols.\\
%\chapter{System Details}
%\label{sec:SystemDetails}
\section{Possible Dominant Point Extraction}
\label{sec:Preprocessing}
%Some data is extracted from the points in the stroke.
The Possible Dominant Point Extraction step captures the points from the pointing device and then computes the information needed to determine a set of \textit{Possible Dominant Points $P_{pd}$} which guides the segmentation step. It is observed that, as a person draws a shape, the pen slows down near corners and picks up speed when drawing straight lines. Therefore, speed information is widely used to identify shape corners and edges \cite{earlyprocess}. Curvature information is used to determine the points with high angular change along the path of points \cite{meanshift10}. These observations help detect the dominant points as the points with low speed and high curvature. 

The time difference between samples was also used in \cite{polygonfeedback31}, as it provides more distinct maxima than the speed information. Agar et al. \cite{polygonfeedback31} mention that the sampling rate of the pointing device (for example, the mouse) is the reason for this phenomenon. The points are sampled at regular time intervals while the pointing device is moving, and no points are sampled while it is stationary. This leads to a nearly constant time difference between samples while the pen is moving and a large difference while the pen is stationary. In contrast, because users draw at varying speeds, the speed information contains considerable noise. Figure \ref{fig:speed2Distance} shows the time difference and speed graphs for the stroke drawn in Figure \ref{fig:orignalStroke}. Similarly, direction information is used as it provides more distinctive maxima for the corners than the curvature information \cite{meanshift10}. Figure \ref{fig:curvatures} shows the direction and curvature graphs for the stroke drawn in Figure \ref{fig:orignalStroke}.

 %The system compute speed, curvature, time difference and direction data then generate a set of possible corners that will be used in the segmentation as an initial solution. 
\subsection{Preliminary Calculation}
\label{sec:CurvatureCalculation}
  
 
 In this research, the time difference, direction, speed and curvature of each point along the stroke are computed. The experiments in Section \ref{sec:PSO} show that computing all of this information provides better segmentation results than using only one of the graphs. The time difference is calculated as $\Delta t = t_{i+1} - t_i$, where $t_i$ is the time of point $i$ and $t_{i+1}$ is the time of point $i+1$. The speed is calculated as $v=\Delta s/\Delta t$, where $\Delta t$ is the time difference between two points and $\Delta s$ is the distance between them. 
 
\begin{figure}
	\centering
		\includegraphics{images/curvatureCal.jpg.eps}
	\caption{Direction Calculation}
	\label{fig:curvatureCal.jpg}
\end{figure}

  The direction is calculated as the angle between the vector $Q_i$ and the $x$-axis (vector $Px_i$) as in Equation \ref{eq:direction} (Figure \ref{fig:curvatureCal.jpg} illustrates the direction calculation). We use an estimation of curvature that defines it as the change in direction with respect to length, i.e. $c= \Delta d/\Delta s$.
  
 \begin{equation}
\label{eq:direction}
	\vartheta  = \cos ^{ - 1} \left( {\frac{{\overrightarrow {Q_i}  \cdot \overrightarrow {Px_i } }}{{\left\| {\overrightarrow {Q_i} } \right\| \times \left\| {\overrightarrow {Px_i} } \right\|}}} \right)
\end{equation}
where $Q_i=\overrightarrow {P_{i - 1} P_i }$ is the vector from point $P_{i - 1}$ to point $P_i$, and $Px_i$ represents the $x$-axis at this point (Figure \ref{fig:curvatureCal.jpg}). $\|{\overrightarrow {Q_i }}\|$ is the norm of the vector $\overrightarrow{Q_i}$.
 
 
 
 %system compute speed, distance data for all points in the stroke. Curvature is computed using estimation used in [] where direction computed is angle between two lines. Curvature is 
 %$\Delta d/  \Delta S $ where d is the difference direction of point and s is difference in distance between points. 
 All these calculations are performed in real time while the user draws the strokes. The complexity of the computation is $O(n)$, where $n$ is the number of points. Figures \ref{fig:speed2Distance} \& \ref{fig:curvatures} show the computed information for the stroke drawn in Figure \ref{fig:AnotherorignalStroke}.% After computing the graphs the system  compute the average, minimum and maximum values of all curves.  \\% the lower speed points correspond to vertex and dominant point's location. The higher the direction and curvature data correspond to location with higher change in curvature witch promote the location for vertices'. \\
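To make the preliminary calculations concrete, the per-point signals described above can be sketched as follows. This is a minimal illustration in Python; the point format and function name are assumptions, not the thesis implementation.

```python
import math

def stroke_signals(points):
    """Per-point time difference, speed, direction and curvature for a
    stroke given as (x, y, t) samples. Illustrative sketch only."""
    dt, lengths, speed, direction = [], [], [], []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        d_t = t1 - t0                        # time difference dt = t_{i+1} - t_i
        d_s = math.hypot(x1 - x0, y1 - y0)   # segment length ds
        dt.append(d_t)
        lengths.append(d_s)
        speed.append(d_s / d_t if d_t > 0 else 0.0)     # v = ds/dt
        direction.append(math.atan2(y1 - y0, x1 - x0))  # angle to the x-axis
    # curvature estimated as change in direction over length: c = dd/ds
    curvature = [
        (d1 - d0) / ds if ds > 0 else 0.0
        for d0, d1, ds in zip(direction, direction[1:], lengths[1:])
    ]
    return dt, speed, direction, curvature
```

All four signals are produced in a single pass, matching the $O(n)$ complexity stated above.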
%Description of input data. 
%how to calculate speed, curvature, area, bounding box.
%how to remove noise. 
%Finally how to compute primarily dominant points. 

\begin{figure}[]
	\centering
		\includegraphics[scale=0.8]{images/stroke3.jpg.eps}
	\caption[Example of Input Stroke]{Example of an input stroke to the system.}
	\label{fig:orignalStroke}
\end{figure}


\begin{figure}
	\centering
			\subfigure[ Speed Graph]{	\includegraphics[scale=0.5]{images/vel3orginal.png.eps}}
			\hfill
			\subfigure[ Time Difference Graph] {\includegraphics[scale=0.5]{images/td3.jpg.eps}}
	\caption{Speed and Time Difference Graphs}
	\label{fig:speed2Distance}
\end{figure}

\begin{figure}[]
	\centering
		\includegraphics[scale=0.8]{images/orignalStroke.eps}
	\caption[Another Example of Input Stroke]{Another example of an input stroke to the system.}
	\label{fig:AnotherorignalStroke}
\end{figure}

\begin{figure}[]
%\begin{minipage}[b]{0.8\linewidth}
	\centering

	%	\subfigure[ The Speed of data] {\includegraphics[scale=0.35]{images/speed2.eps}}
		
		\subfigure[ Direction Graph ] {\includegraphics[scale=0.35]{images/direction2.eps}}
					%\end{minipage}
					%\begin{minipage}[b]{0.8\linewidth}
			\subfigure[Curvature Graph] {\includegraphics[scale=0.35]{images/curvature2.eps}}
		
			%\end{minipage}
	
	\caption{Curvature and Direction Graphs}%{The data of the stroke}
	
	\label{fig:curvatures}
\end{figure}


\subsection{Critical Point Detection}
\label{sec:CriticalPointDetection}
After computing the speed, time difference and curvature information, the system proceeds to detect the points with low velocity and high curvature. Using simple differentiation to detect local extreme points resulted in false points due to the non-smooth curves. Hence, the system adopts a process presented in \cite{earlyprocess}, where the mean of the curve is calculated. A threshold \textit{th} is then used to separate the curve into regions; each region $Region_i$ is defined as a range of points where the curve values are either above or below the threshold \textit{th}. These regions are further processed to find the maximum point $Max(Region_i)$ of each region $Region_i$. The stroke points $p_i(x,y)$ that correspond to those maximum values are labeled as \textit{possible dominant points} $P_{pd}$. Figures \ref{fig:MaxRegioi} and \ref{fig:speed2Distance} show an example of the regions $Region_i$ of the speed curve and the dominant points $P_{pd}$, which correspond to the minimum point in each region $Min(Region_i)$.  % (as shown are redundant)
\begin{figure}
	\centering
		\includegraphics[scale=0.5]{images/vel3.jpg.eps}
	\caption{Possible Dominant Points Calculations}
	\label{fig:MaxRegioi}
\end{figure}

   %After the threshold is applied the curve is divided into regions. Each region is defined by a part in the data curve between tow intersection point of the threshold line and the curve.  % to find the extreme points. Hence, the mean of the direction curve is calculated and used as threshold.   \\
   %If we tried to differentiate the curve the result will be false threshold values we divide the data into regions of data higher or lower that the threshold. This will let us only look for high data only. The maximum of each region is then computed and reported as a possible vertex point. 
  
The system repeats this process for the curvature, direction, time difference and speed curves. For each curve, the points labeled as possible dominant points $P_{pd}$ are saved into a single array. Points that are repeatedly labeled are given a higher score than the other points. This score is then used to sort the $P_{pd}$ array, which is used as input for the next segmentation step. Figure \ref{fig:LabelsPPD} shows the points labeled as possible dominant points $P_{pd}$ by the Possible Dominant Point Extraction step; note the redundancy of some $P_{pd}$ points. 
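The region-based extremum detection and scoring described above can be sketched as follows. This is an illustrative Python sketch: the mean is used as the threshold \textit{th}, and the flag selecting minima (for the speed curve) is an assumed parameterization.

```python
from collections import Counter

def region_extrema(values, use_min=False):
    """Split a signal into regions above (or below) its mean and return
    the index of the extremum of each region. Sketch with assumed names."""
    th = sum(values) / len(values)   # threshold th = mean of the curve
    picks, region = [], []
    def flush():
        if region:
            choose = min if use_min else max
            picks.append(choose(region, key=lambda i: values[i]))
            region.clear()
    for i, v in enumerate(values):
        if (v < th) if use_min else (v > th):
            region.append(i)   # point belongs to the current region
        else:
            flush()            # region ended: keep its extremum
    flush()
    return picks

def rank_candidates(per_curve_picks):
    # points labeled by several curves score higher and are sorted first
    score = Counter(i for picks in per_curve_picks for i in picks)
    return [i for i, _ in score.most_common()]
```

For the speed curve `use_min=True` picks region minima, while curvature, direction and time-difference curves use region maxima, mirroring the description above.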


\begin{figure}
	\centering
		\includegraphics{images/ppd.eps}
	\caption{Possible dominant points}% a) Possible dominant point b) Particle encoding  
	\label{fig:LabelsPPD}
\end{figure}

\section{Segmentation}
\label{sec:Segmentation}
The segmentation step tries to divide the stroke into a set of primitives. As shown in Figure \ref{fig:Blockdiagram}, an attempt is first made to fit the stroke points into a curve or an ellipse using a minimum square error fitting algorithm \cite{chernov}. If the stroke proves to be an elliptical arc, the segmentation process ends and the system proceeds to the next step. Otherwise, the stroke is passed to two particle swarm algorithms that divide the stroke into either lines or lines and curves. The algorithms take the stroke points along with the possible dominant points $P_{pd}$ computed during Possible Dominant Point Extraction and produce a set of dominant points connected by either lines or curves. The next sections describe the ellipse detection algorithm and the two particle swarm algorithms used to divide the stroke.
%If the stroke proved to be an ellipse then the segmentation process ends and the system proceeds to the next step. Otherwise, the stroke is passed into two particle swarm algorithms that will divide the stroke to either lines or lines and curves (see the block diagram in fig. \ref{fig:Blockdiagram} ). 
%The algorithms takes the stroke points along with the possible dominant points computed then produce a set of dominant points which are connected with either lines or curves (see fig. \ref{fig:Blockdiagram}).  The next sections will describe the ellipse detection algorithm and both the two particle swarm algorithms used to divide the stroke.
%After computing the primary data the system now tries \\
%Paragraphs describe the segmentation algorithm.  


\subsection{Ellipse Fitting }
\label{sec:EllipseDetection}

%The process starts by computing the center of the stroke bounding box. The bounding box center point is used as the first estimation of the ellipse center. The axes of the ellipse are estimated as $width/2$ and $height/2$ of the stroke bounding box. The least square fitting algorithm is used to minimize the fitting error of the ellipse equation \cite{chernov-2003}.  
This process tries to fit the stroke points into an elliptical arc; it starts by computing the center of the stroke bounding box. The bounding box center point is used as the first estimate of the center of the ellipse, and the axes of the ellipse are estimated as $width/2$ and $height/2$ of the stroke bounding box. The least square fitting algorithm \cite{chernov} is used to minimize the fitting error of the ellipse, Equation (\ref{eq:circleFit}).  

\begin{equation}
E = \sum\limits_{i = 0}^N {\left( \frac{(x_i - x_0 )^2}{a^2}  + \frac{(y_i - y_0 )^2}{b^2}  - 1 \right)} 
\label{eq:circleFit}
\end{equation}

 where $N$ is the number of points in the stroke, $a$ and $b$ are the lengths of the ellipse axes, $x_0$ \& $y_0$ are the coordinates of the center point, and $x_i$ \& $y_i$ are the coordinates of point $i$ in the stroke. A list of new values for $x_0$, $y_0$, $a$ and $b$ is generated randomly from the older values with small increments after each loop. After a few iterations, the final fit error of the estimated ellipse is reported. Another measure is used to compute the efficiency of the final estimated ellipse. Equation \ref{eq:circleError} ensures that the drawn percentage of the ellipse is considered. This prevents fitting a line into a very large ellipse but allows small ellipses to be fitted as a partial or full ellipse. 


  \begin{equation}
eff = \left\{ {\begin{array}{*{20}c}
   {\frac{{P_{percent} }}{E}} & {\text{if } P_{percent}  < 0.5}  \\
   {\frac{{P_{percent} }}{{E \times 2}}} & {\text{if } P_{percent}  \ge 0.5}  \\
\end{array}} \right.
%eff= \begin{cases} 
% \frac{P_{percent}}{E},&\mbox{if} P_{percent}\mbox{< 0.5} \\
% \frac{P_{percent}}{2\times E},&\mbox{if} P_{percent}\mbox{\geq0.5} 
%\end{cases}
\label{eq:circleError}
\end{equation}
 \begin{equation}
P_{percent}  = L_{stroke} /P_{ellipse} 
\label{eq:ErrorArea}
\end{equation}
where $E$ is the error computed by Equation (\ref{eq:circleFit}), $L_{stroke}$ is the total length of the stroke and $P_{ellipse}$ is the perimeter of the estimated ellipse. Figure \ref{fig:ellipseFitExamples} shows different $eff$, $P_{ellipse}$ and $E$ values for strokes drawn by users. If $eff$ exceeds the threshold $th_{Ellipse}$\footnote{By trial and error, the best threshold found was $th_{Ellipse}=0.25$.}, the stroke is segmented as an ellipse; otherwise the system proceeds to the next step. 
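The efficiency test can be sketched as follows. This Python sketch follows Equations (\ref{eq:circleFit}), (\ref{eq:circleError}) and (\ref{eq:ErrorArea}); the Ramanujan perimeter approximation and the function name are assumptions, since the text does not specify how $P_{ellipse}$ is evaluated.

```python
import math

def ellipse_efficiency(points, x0, y0, a, b):
    # fit error E: sum of algebraic ellipse-equation residuals
    E = sum((x - x0) ** 2 / a ** 2 + (y - y0) ** 2 / b ** 2 - 1
            for x, y in points)
    # total stroke length L_stroke (sum of chord lengths)
    L = sum(math.hypot(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(points, points[1:]))
    # ellipse perimeter P_ellipse (Ramanujan approximation, an assumption)
    h = ((a - b) / (a + b)) ** 2
    P = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
    p_percent = L / P                        # drawn fraction of the ellipse
    # eff halves the score once more than half of the ellipse is drawn
    return p_percent / E if p_percent < 0.5 else p_percent / (2 * E)
```

A stroke would then be accepted as an ellipse when the returned value exceeds $th_{Ellipse}=0.25$.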

\begin{figure}
	\centering
		\includegraphics[scale=0.7]{images/ellipseFitError.eps}
	\caption{Ellipse Fitting Error}
	\label{fig:ellipseFitExamples}
\end{figure}


%Describe the ellipse detection algorithms 
%This process tries to fit the stroke points into an ellipse arc; it starts with computing the center of the stroke bounding box. The bounding box center point is used as the first estimation of the center of the ellipse. The axes of the ellipse are estimated as the $w/2$ and $h/2$ of the stroke bounding box where $w$ is width and $h$ is height of the bonding box. The least square fitting algorithm \cite{chernov-2003} is used to minimize the fitting error of the ellipse Equation (\ref{eq:circleFit})  
%
%\begin{equation}
%E = \sum\limits_{i = 0}^N {\frac{{(x_i - x_0 )}}{{a^2 }}^2  + \frac{{(y_i - y_0 )}}{{b^2 }}^2  - 1} 
%\label{eq:circleFit}
%\end{equation}
%
% where $N$ is number of points in the stroke, $a,b$ are the length of ellipse axes, $x_0$ \& $y_0$ are the coordinates of the center point, $x_i$ \& $y_i$ are the coordinates of point $i$ in the stroke. A list of new values for $x_0$ , $y_0$ ,$a$ and $b$ are generated randomly from the older values with small increments after each loop. After few loops, the final fit error and the ellipse confidence value are computed. The confidence value $Conf$ computed by Equation (\ref{eq:circleError})
% \begin{equation}
%Conf = E_{ellipse}+E_{area}
%\label{eq:circleError}
%\end{equation}
% \begin{equation}
%E_{area}  = \sqrt {\left( Area(stroke) - Area(ellipse)\right)^2 }
%\label{eq:ErrorArea}
%\end{equation}
%
%   where $E_{ellipse}$ is the error computed by Equation(\ref{eq:circleFit}) and $E_{area}$ is computed as the difference between the actual area of the stroke and the area of the estimated ellipse Equation(\ref{eq:ErrorArea}). If confidence value $Conf$ is less than threshold $th_{Ellipse}$\footnote{ Using trial and error the best threshold used is $th_{Ellipse}=0.7$ } the stroke is segmented as an ellipse otherwise the system proceeds to the two DPSO segmentation algorithms.  % The confidence value is used to label the stroke as ellipse or un segmented stroke. If the stroke as un-segmented the next process will be to pass the stroke into curve segmentation algorithm.    
   
 %  The next section describes the \textit{Discrete Particle Swarm Algorithm (DPSO)} then it proceeds to detail the two DPSO segmentation algorithms. 
   
   %to check if the stroke can be labeled as ellipse or not.\\ %is computed to check if stroke is an ellipse. \\
%We found that if the confidante is above threshold then the probability of ellipse is highest otherwise the stroke is passed to the next section to get the divisions of stroke and test its error.  


%\subsection{Discrete Particle Swarm Algorithm}
%\label{sec:ParticleSwarmAlgorithm}
%%\section{Particle Swarm Algorithm}
%%\label{PSO}
%%What is particle swarm algorithm and how it was used in related researches. 
%The main idea of \textit{Particle Swarm Algorithm (PSO)} is to represent each agent with a particle from the solution space \cite{PSOFirst}. Each agent moves the particle with a direction and velocity $v_{ij}$ based on equations \ref{eq:Swarm} \& \ref{eq:Swarm1}.
%\begin{equation}
%%\[
%p_{ij}=p_{ij}+v_{ij},
%%\
%\label{eq:Swarm1}
%\end{equation}
%where $p_{ij}$ represent the $jth$ particle in the $ith$ agent and $v_{ij}$ is the velocity of the $jth$ particle in the $ith$ agent.
% %Equation [\ref{eq:Swarm}] shows how velocity and direction of each particle are computed
% \begin{equation}
%v_{ij}  = v_{ij}  + c_1 r_1 (lbest_{ij}  - p_{ij} ) + c_2 r_2 (gbest_{ij}  - p_{ij} )
%\label{eq:Swarm}
%\end{equation}
% where $lbest_{ij}$ is the local best particle, $gbest_{ij}$ is the global best particle, $r_1$ \& $r_2$ are random variables and $c_1$ \& $c_2$ are the swarm system variables.
% After each iteration the global best $g_{best}$ particle and the agent local best $l_{best}$ particle are evaluated based on the maximum fitness functions of all particles in the solution space. The solution is found after achieving a specific number of iteration or after an error threshold is achieved.
%Equation \ref{eq:descrite}  
%\begin{equation}
%   P(i)\Leftarrow 
%\{
%\begin{array}{c} 
%1 \quad \quad if\quad r_{3}>p_{i}  \\
%
%0 \quad \quad if\quad r_{3}<p_{i} 
%\label{eq:descrite}
%\end{array}\}
%\end{equation}
% where $p_{ij}$ is the numerical values of the particle and $r_{3}$ is a random variable, is used to change the general swarm algorithm into binary particle (\textit{Discrete Particle Swarm Algorithm DPSO}) which handles particle values of either $0$ or $1$ \cite{PSODisceret}.  
\subsection{Non Ellipse Fitting}
\label{sec:SwarmSegmentation}
Two DPSO algorithms generate two different segmentations for each stroke. Each DPSO algorithm tries to find the best particle in the solution space. Both algorithms have the same problem formulation but different fitness functions. After generating the two segmentations, the system chooses the better of the two DPSO outputs. The details of the problem formulation and fitness functions are given in the following sections.
%In this process the system generate two stroke segmentation using two PSO segmentation algorithm. The system generates segmentation from both algorithms and then chooses the segmentation with the minimum error value. The problem definition is the same in both algorithms but they differ in the fitness function and the error functions. %formation is nearly the same in both. 
\subsubsection{Problem Formulation}
\label{sec:ProblemFormulation}
The input stroke with $N$ points can be represented by the set $S = \left\{ {x_1 ,x_2  \ldots x_N } \right\}$, where $x_i$ is the location of point $i$. The swarm algorithms consist of $M$ particles represented by the set 
$A  = \left\{ {P_i \left| {i = 1,2 \cdots M} \right.} \right\}$, where $P_i$ is a single solution particle from the solution space. Each particle encodes the problem as a binary array with the same length $N$ as the input stroke (Figure \ref{fig:CodingSwarm}).   
\begin{figure}
	\centering
		\includegraphics{images/CodingSwarm.eps}
	\caption{An Example Stroke and the Coding}
	\label{fig:CodingSwarm}
\end{figure}
% The particles of the swarm represent a single solution of the solution space. For this problem the, each particle %will give a different segmentation for the input stroke. Firstly, we will define the stroke with N points by the set %S where.  We define the arc %An array with the same length as the number of points of the strokes.
The system represents each particle $P_i$ by $P_i = \left\{ {p_{ij} \left| {j = 1,2 \cdots N} \right.} \right\}$, where $p_{ij}$ is either 0 or 1; $p_{ij}=1$ means that point $j$ is a dominant point (see Figure \ref{fig:CodingSwarm}). Thus, the particle represents the points chosen as dominant points for this segmentation. 
The goal of the DPSO algorithm is to find the solution $P_i$ that generates the minimum set of dominant points defining the stroke with minimum segmentation error. In other words, the system tries to find the fewest number of 1's, and their locations, in the particle array that give the minimum segmentation error.  
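The binary encoding can be illustrated as follows. In this Python sketch, seeding bits from the $P_{pd}$ candidates and forcing the stroke endpoints to 1 are initialization assumptions; the text only states that $P_{pd}$ guides the search.

```python
import random

def random_particle(n, p_pd=(), seed_prob=0.5):
    """A particle is a binary array of length N; 1 marks a dominant point.
    Sketch with assumed initialization, not the thesis implementation."""
    p = [0] * n
    p[0] = p[-1] = 1               # assumption: endpoints kept as dominant
    for i in p_pd:                 # bias the particle toward P_pd candidates
        if random.random() < seed_prob:
            p[i] = 1
    return p

def dominant_points(particle):
    # decode: indices j with p_ij = 1 are the chosen dominant points
    return [j for j, bit in enumerate(particle) if bit == 1]
```

Decoding a particle yields the candidate dominant-point indices whose connecting segments are then scored by the fitness function.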


The fitness function and error calculations are different in each algorithm. The first fitness and error function are described below. 

\subsubsection{Polygon Division Algorithm \textsl{AlgS1}}
\label{sec:PolygonDivisionAlgorithm}
In the first algorithm, the stroke is segmented as a polygon. The final solution is a set of line segments that best defines the input stroke. Given the input stroke, define the set of points in the stroke as $S = \left\{ {x_1 ,x_2  \ldots x_N } \right\}$, the set of consecutive points from the start of the stroke to its end. The arc $\widehat{x_ix_j}$ is defined as the consecutive set of points $x_i,x_{i+1} \cdots,x_j$. The line
$\overline{x_i x_j} $ is the straight line connecting point $x_i$ to point $x_j$. The approximation error is computed by Equation \ref{eq:ErrorSwarm1} 
\begin{equation}
E=\sum\nolimits_{i = 0}^M e ( \widehat{x_ix_{i+1}},\overline{x_i x_{i+1}})
\label{eq:ErrorSwarm1}
\end{equation}
 where $M$ is the number of dominant points in this solution. The error $ e ( \widehat{x_ix_j},\overline{x_i x_j})$ is computed as the sum of squared perpendicular distances from every point along the arc $\widehat{x_ix_j}$ to the line $\overline{x_i x_j}$ \cite{PolygonApproximationPSO}. Figure \ref{fig:DPSOERROR} shows a graphical representation of the error. The fitness is computed using Equation \ref{eq:fitnessSwarm1} 
\begin{equation}
\max fitness(p_i ) = \left\{ {\begin{array}{*{20}c}
   { - E/\varepsilon N} & {\text{if } E > \varepsilon ,}  \\
   {D/\sum\limits_{j = 1}^N {p_{ij} } } & {\text{otherwise}}  \\
\end{array}} \right.
\label{eq:fitnessSwarm1}
\end{equation}%\]
where $N$ is the number of points in the stroke, $D$ is the number of points in the solution that were previously labeled as $P_{pd}$, $E$ is the computed error and $\varepsilon$ is the error threshold, computed as 20\% of the input stroke area. If the solution produces an error exceeding the threshold $\varepsilon$, the fitness function assigns a negative value to the solution to express its infeasibility; otherwise, a value inversely proportional to the number of produced vertices is used to assess the solution fitness in terms of the minimum number of vertices.   

This fitness function optimizes two goals: the first is to move the solution into the feasible solution space with acceptable error bounds, and the second is to fly the particle to a new position which may result in a polygon with fewer vertices \cite{PolygonApproximationPSO}. The two optimization goals are pursued simultaneously, since the DPSO evolves with swarm particles, each of which may invoke a different fitness evaluation depending on the incurred approximation error. 
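The AlgS1 error and fitness evaluation can be sketched as follows. This Python sketch uses illustrative names, and it assumes both stroke endpoints are selected so that the dominant points tile the whole stroke.

```python
def perp_error(points, i, j):
    # sum of squared perpendicular distances from arc points to line x_i x_j
    (x1, y1), (x2, y2) = points[i], points[j]
    dx, dy = x2 - x1, y2 - y1
    norm2 = dx * dx + dy * dy or 1.0       # guard against coincident endpoints
    return sum(((x - x1) * dy - (y - y1) * dx) ** 2 / norm2
               for x, y in points[i:j + 1])

def fitness(particle, points, p_pd, eps):
    """Piecewise fitness: negative when the error exceeds eps, otherwise
    reward solutions with few vertices that reuse P_pd points."""
    dom = [j for j, bit in enumerate(particle) if bit == 1]
    E = sum(perp_error(points, a, b) for a, b in zip(dom, dom[1:]))
    n = len(points)
    if E > eps:
        return -E / (eps * n)                 # infeasible: -E/(eps*N)
    D = sum(1 for j in dom if j in p_pd)      # vertices also labeled P_pd
    return D / sum(particle)                  # D / sum_j p_ij
```

A perfect polygonal fit through $P_{pd}$ points thus scores 1, while any fit violating the error bound scores below zero.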
\begin{figure}
	\centering
  \includegraphics[scale=0.8]{images/pso1.eps}			
	\caption{AlgS1 Error}% a) Possible dominant point b) Particle encoding  
	\label{fig:DPSOERROR}
\end{figure}
 %The error is chosen to in favor of larger than the threshold is given a -ve value to lower the value of solution otherwise the system will favor the lower number of vertices'.\\ %we will want to lower the number of vertices'. \\
%if we say that s
%Alg1:sinc
%Alg2: 
\subsubsection{Hybrid Curve Algorithm \textsl{AlgS2}}
\label{sec:HybridCurveAlgorithm}

The second algorithm has the same problem formulation but different fitness and error functions. It was previously introduced in \cite{CruveDivisionSwarm}, where genetic programming was used as the optimization algorithm. The particle is represented using the array $P_i = \left\{ {p_{ij} \left| {j = 1,2 \cdots N} \right.} \right\}$, where $p_{ij}$ is either 0 or 1. Let us denote by $\widehat{x_ix_j}$ the segment consisting of the consecutive points $x_i,x_{i+1} \cdots,x_j$. The algorithm attempts to fit each segment $\widehat{x_ix_j}$ to both a line and a circle; the type of the segment is determined by the fit with the minimum error. The final error of a solution is the sum of the errors of all segments.  %The goal of the algorithm is to fit the segments between the solution vertices' into straight lines or a circular arc. 
\paragraph{Fitting segments into straight line.}
\label{sec:FittingSegmentIntoStraightLine}
According to \cite{CruveDivisionSwarm}, each segment $\widehat{x_ix_j}$, given the set of points $x_i,x_{i+1}, \cdots,x_j$, can be fitted to the line $y=kx+c$, where $k$ and $c$ are the slope and the intercept of the line respectively. Equations \ref{eq:Linek} and \ref{eq:LineC} 
\begin{equation}
\label{eq:Linek}
k = \frac{{N\sum\limits_{i = 1}^N {x_i y_i }  - \sum\limits_{i = 1}^N {x_i } \sum\limits_{i = 1}^N {y_i } }}{{N\sum\limits_{i = 1}^N {x_i^2 }  - \left( {\sum\limits_{i = 1}^N {x_i } } \right)^2 }}
\end{equation}
\begin{equation}
\label{eq:LineC}
c = \frac{{\sum\limits_{i = 1}^N {x_i^2 } \sum\limits_{i = 1}^N {y_i }  - \sum\limits_{i = 1}^N {x_i } \sum\limits_{i = 1}^N {x_i y_i } }}{{N\sum\limits_{i = 1}^N {x_i^2 }  - \left( {\sum\limits_{i = 1}^N {x_i } } \right)^2 }}
\end{equation}
where $N$ is the number of points in the segment and $(x_i, y_i)$ are the coordinates of point $i$, are used to fit the segment $\widehat{x_ix_j}$ to the straight line.
\begin{equation}
\label{eq:ds}
 d_s  = \frac{{\sum\limits_{i = 1}^N {\left| {(kx_i  + c) - y_i } \right|} }}{{N\sqrt {k^2  + 1} }}
\end{equation}
The distance $d_s$ (Equation \ref{eq:ds}) is the average distance from the segment points $(x_i, y_i)$ to the estimated line, and it is used as the error of the line approximation.
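A minimal sketch of this line-fitting step, computing $k$, $c$, and $d_s$ directly from Equations \ref{eq:Linek}, \ref{eq:LineC}, and \ref{eq:ds}. The degenerate vertical-segment case, where the denominator vanishes, is deliberately left unhandled here.

```python
import math

def fit_line(points):
    """Least-squares fit of a segment to the line y = k*x + c,
    returning (k, c, d_s) where d_s is the mean point-to-line distance."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx          # zero for vertical segments
    k = (n * sxy - sx * sy) / denom
    c = (sxx * sy - sx * sxy) / denom
    # mean perpendicular-style distance from each point to the fitted line
    d_s = sum(abs((k * x + c) - y) for x, y in points) / (n * math.sqrt(k * k + 1))
    return k, c, d_s
```

For points lying exactly on a line, $d_s$ is zero, matching the discussion of segment typing below.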
\paragraph{Fitting segments into circular arc.}
\label{sec:FittingSegmentcirculararc}
Each segment is also fitted to a circular arc $(x - a)^2  + (y - b)^2  = R^2$, where $(a,b)$ are the coordinates of the center of the circle and $R$ is its radius. They are estimated using the following set of equations: 
 
\begin{equation}
a = \frac{{b_{1}a_{22} - b_{2}a_{12}}}{\Delta},
\end{equation}
\begin{equation}
	b = \frac{{b_{2}a_{11} - b_{1}a_{21}}}{\Delta},
\end{equation}
\begin{equation}
R = \sqrt {\frac{1}{N}(\sum\limits_{i = 1}^N {x_i^2 }  - 2\sum\limits_{i = 1}^N {x_i a}  + Na^2  + \sum\limits_{i = 1}^N {y_i^2  - 2} \sum\limits_{i = 1}^N {y_i b + Nb^2 } )} ,
\end{equation}

where 
\begin{equation}
a_{11}  = 2\left[ {\left( {\sum\limits_{i = 1}^N {x_i } } \right)^2  - N\sum\limits_{i = 1}^N {x_i^2 } } \right],
\end{equation}
\begin{equation}
a_{12}  = a_{21}  = 2\left( {\sum\limits_{i = 1}^N {x_i } \sum\limits_{i = 1}^N {y_i }  - N\sum\limits_{i = 1}^N {x_i y_i } } \right),
\end{equation}
\begin{equation}
 a_{22}  = 2\left[ {\left( {\sum\limits_{i = 1}^N {y_i } } \right)^2  - N\sum\limits_{i = 1}^N {y_i^2 } } \right],
\end{equation}
\begin{equation}
b_1  = \sum\limits_{i = 1}^N {x_i^2 } \sum\limits_{i = 1}^N {x_i }  - N\sum\limits_{i = 1}^N {x_i^3 }  + \sum\limits_{i = 1}^N {x_i } \sum\limits_{i = 1}^N {y_i^2 }  - N\sum\limits_{i = 1}^N {x_i y_i^2 } , 
\end{equation}
\begin{equation}
b_2  = \sum\limits_{i = 1}^N {x_i^2 } \sum\limits_{i = 1}^N {y_i }  - N\sum\limits_{i = 1}^N {y_i^3 }  + \sum\limits_{i = 1}^N {y_i } \sum\limits_{i = 1}^N {y_i^2 }  - N\sum\limits_{i = 1}^N {x_i^2 y_i } ,
\end{equation}
\begin{equation}
\Delta  = a_{11}a_{22}-a_{12}a_{21}.
\end{equation}
 The error of the circle estimation is measured using the distance $d_c$, calculated using Equation \ref{eq:circleE}:
 \begin{equation}
 \label{eq:circleE}
d_c  = \frac{{\sum\limits_{i = 1}^N {\left| {\sqrt {(x_i  - a)^2  + (y_i  - b)^2 }  - R} \right|} }}{N}
\end{equation}
Thus, $d_c$ is the average distance from the segment points $(x_i,y_i)$ to the estimated circle with center $(a,b)$ and radius $R$.
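The closed-form circle fit above can be sketched as follows; this is an illustrative implementation of the normal equations, and the degenerate case of collinear points (where $\Delta = 0$) is not handled.

```python
import math

def fit_circle(points):
    """Closed-form least-squares circle fit for a segment, following the
    normal equations above. Returns (a, b, R, d_c) where d_c is the mean
    radial error of the fit."""
    n = len(points)
    sx = sum(x for x, _ in points);   sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sxxx = sum(x ** 3 for x, _ in points); syyy = sum(y ** 3 for _, y in points)
    sxyy = sum(x * y * y for x, y in points); sxxy = sum(x * x * y for x, y in points)
    a11 = 2 * (sx * sx - n * sxx)
    a12 = 2 * (sx * sy - n * sxy)      # a21 == a12 (symmetric system)
    a22 = 2 * (sy * sy - n * syy)
    b1 = sxx * sx - n * sxxx + sx * syy - n * sxyy
    b2 = sxx * sy - n * syyy + sy * syy - n * sxxy
    delta = a11 * a22 - a12 * a12      # zero for collinear points (unhandled)
    a = (b1 * a22 - b2 * a12) / delta  # Cramer's rule for the 2x2 system
    b = (b2 * a11 - b1 * a12) / delta
    r = math.sqrt((sxx - 2 * a * sx + n * a * a
                   + syy - 2 * b * sy + n * b * b) / n)
    d_c = sum(abs(math.hypot(x - a, y - b) - r) for x, y in points) / n
    return a, b, r, d_c
```

For points lying exactly on a circle, the fit recovers the center and radius exactly and $d_c$ is zero.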

\paragraph{Determining The Segment Type.} 
\label{sec:DeterminigSegmentType}
If the average distance $d_s$ equals zero, the points of segment $\widehat{x_ix_j}$ lie exactly on the approximated straight line. Likewise, if the distance $d_c$ equals zero, the segment points lie exactly on the estimated circular arc. Since the user rarely draws an exact line or circle, the distances are used as the fitting errors. The minimum distance determines the type of the current segment: if $d_s$ is smaller than $d_c$, the segment is labeled as a line segment; otherwise it is labeled as a circular arc. 


After each segment type is determined, the particle segmentation error is computed by Equation \ref{eq:errorSwarm2}
\begin{equation}
E=\sum\nolimits_{i = 1}^M D_i 
\label{eq:errorSwarm2}
\end{equation}
where $M$ is the number of segments in the solution and $D_i$ is the approximation error of segment $i$, taken as $\min(d_c,d_s)$ as computed in Equations \ref{eq:circleE} and \ref{eq:ds} \cite{CruveDivisionSwarm}. The fitness is computed by Equation \ref{eq:fitnessSwarm2} 
\begin{equation}
\max fitness(P_i ) = \frac{1}{{E \times M^k }}
\label{eq:fitnessSwarm2}
\end{equation} where $E$ is the error, $M$ is the number of segments, and $k$ is a parameter tuned to obtain a minimum number of segments; $k$ is selected as 0.5 \cite{CruveDivisionSwarm}. 
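The per-segment labeling and the AlgS2 fitness can be sketched as follows, assuming the per-segment errors $\min(d_s, d_c)$ have already been computed by the line and circle fits:

```python
def segment_type(d_s, d_c):
    """Label a segment by the smaller fitting error."""
    return "line" if d_s < d_c else "arc"

def particle_fitness(segment_errors, k=0.5):
    """Sketch of the AlgS2 fitness: segment_errors holds min(d_s, d_c)
    for each segment; fitness = 1 / (E * M^k) rewards both low total
    error and few segments."""
    m = len(segment_errors)
    e = sum(segment_errors)
    return 1.0 / (e * m ** k)
```

With $k = 0.5$, splitting a stroke into more segments only pays off if it reduces the total error enough to offset the $M^{0.5}$ penalty.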

\subsubsection{Solution Refinement Procedure} 
%\cite{PolygonApproximationPSO} used a merge and divide algorithm after each loop of the swarm system to refine the solution but we used another enhancement method.
   After each iteration of the swarm algorithms (\textsl{AlgS1} and \textsl{AlgS2}), each particle is refined using the following procedure. For each particle $P_i$, every dominant point $P_{ij}$ is checked to determine whether it was previously labeled as a \textit{possible dominant point} $P_{pd}$ (computed as in Section \ref{sec:Preprocessing}). If it was not, the point $P_{ij}$ is moved to the nearest labeled point. This ensures that all of the points generated by the DPSO are possible dominant points $P_{pd}$. After that, the particles are tested to make sure that the distance between every two successive dominant points is larger than the constant $min_D$. If two points are nearer than $min_D$\footnote{5\% of the total stroke length}, one of them is removed. 
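A sketch of this refinement step, under the simplifying assumption (for illustration only) that points are represented by their indices along the stroke and that arc length is approximated by index spacing:

```python
def refine(particle, possible_dominant, stroke_length, min_frac=0.05):
    """Per-iteration particle refinement (sketch).

    Snap each dominant point to the nearest point previously labeled a
    possible dominant point, then drop points closer than min_D, taken
    as 5% of the total stroke length."""
    # snap every particle point to the nearest possible dominant point
    snapped = sorted({min(possible_dominant, key=lambda q: abs(q - p))
                      for p in particle})
    min_d = min_frac * stroke_length
    kept = []
    for p in snapped:                      # enforce minimum spacing min_D
        if not kept or p - kept[-1] >= min_d:
            kept.append(p)
    return kept
```

Snapping guarantees every surviving point is a $P_{pd}$, and the spacing pass removes near-duplicate corners.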
   
\subsection{Choosing the Best Fit}
\label{sec:bestFit}
The two PSO algorithms produce two segmentation solutions. The system then evaluates each solution to select the best segmentation. This evaluation is based on the area of both the segmentation and the input stroke. The segmentation error $E_{alg}$ represents the sum of the square root errors of the distance from each point on the stroke to the estimated curve or line. For \textsl{AlgS1}, this is the same as the error computed in Equation \ref{eq:ErrorSwarm1}, but for the second algorithm \textsl{AlgS2} it differs from the error computed in Equation \ref{eq:errorSwarm2}. The additional computation is necessary to compare the two segmentations fairly. The system chooses the segmentation with the minimum error as the final segmentation of the stroke. 


After the best-fit segmentation is chosen, the parameters of each primitive are computed. For example, if the stroke is represented as an ellipse, the center and axes of the ellipse are computed and confirmed; for lines, the slope, equation, start point, and end point are computed. These values and parameters help determine the spatial relations between different segments in the feature extraction step. 
  
\section{Hybrid Segmentation Algorithm \textsl{Alg3}}
\label{sec:BenchMarckAlgorithm}

In this research, we developed another algorithm as a reference for the results obtained by the DPSO algorithms. The algorithm was first introduced by Sezgin et al. \cite{earlyprocess}. It is divided into the following steps: 
\begin{enumerate}
	\item Compute the curvature and the speed at each point in the stroke. 
	\item Use a percentage of the maximum speed and curvature values as thresholds.
	\item Mark the points whose values are higher than the threshold in two new arrays: $L_c$, the list of corner points generated from the curvature curve, and $L_s$, the list of points generated from the speed curve.
	\item Compute a confidence value for each list based on the type of curve, then sort the lists $L_s$ and $L_c$ by confidence. 
	\item Find the points common to both lists and generate a sample solution $h_{curr}$. 
	\item Use $h_{curr}$ as the initial solution and loop on the following:
		\begin{enumerate}
			\item Generate a new solution $h_{s}$ using $h_{curr}$ and the first point in the $L_s$ list.
			\item Generate a new solution $h_{c}$ using $h_{curr}$ and the first point in the $L_c$ list.
			\item Test the solutions $h_{s}$ and $h_c$ to find the one with minimum error.
			\item If $h_s$ is the minimum, add it to the list of hybrid solutions $Solutions_{set}$, remove the first point from the list $L_s$, and set $h_s$ as $h_{curr}$. 
			\item If $h_c$ is the minimum, add it to the list of hybrid solutions $Solutions_{set}$, remove the first point from the list $L_c$, and set $h_c$ as $h_{curr}$. 
		\end{enumerate}
	\item Choose the final solution from the list $Solutions_{set}$ as a tradeoff between minimum error and number of corners.  
	\item Check whether each segment in the solution can be estimated as a line segment; if not, fit it with a higher-order B\'ezier curve. 
\end{enumerate}
More details about the algorithm are in \cite{earlyprocess}.
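The greedy hybrid loop above can be sketched as follows. This is an illustration under stated assumptions: `error_of` stands in for the fitting-error evaluation of a candidate solution, candidate points are simple values, and the final tradeoff selection over $Solutions_{set}$ is left out.

```python
def hybrid_segmentation(l_s, l_c, error_of):
    """Sketch of the Sezgin-style hybrid loop. l_s and l_c are candidate
    corner lists sorted by confidence; error_of(solution) is an assumed
    fitting-error function. Returns the list of hybrid solutions built."""
    l_s, l_c = list(l_s), list(l_c)
    h_curr = sorted(set(l_s) & set(l_c))        # intersection as the seed
    solutions = [h_curr]
    while l_s or l_c:
        # extend the current solution with the next speed / curvature point
        h_s = sorted(set(h_curr) | {l_s[0]}) if l_s else None
        h_c = sorted(set(h_curr) | {l_c[0]}) if l_c else None
        if h_c is None or (h_s is not None and error_of(h_s) <= error_of(h_c)):
            l_s.pop(0)
            h_curr = h_s
        else:
            l_c.pop(0)
            h_curr = h_c
        solutions.append(h_curr)
    return solutions
```

Each iteration consumes one candidate from either list, so the loop terminates after $|L_s| + |L_c|$ steps, yielding a nested family of solutions to choose from.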


%\section{Stroke Clustring Algorithms }
%\label{sec:ClustringAlgoirthm}
%
%When the user finishes drawing a stroke the segmentation algorithm generate a list of segments that represent this stroke. The system then groups the segments generated from the segmentation algorithm. %These segments are grouped into a symbol and tested if they can be classified into one of the known symbols. If no identification is achieved the segments are added into a list of unclassified segments. When a user draws another stroke the segmented stroke is added to the list and the process is repeated until all the segments are classified to a known symbol.\\
%
%After the user draw all strokes of the symbol he has to wait 10 seconds or press finish button beside the drawing area. The set of unrecognized strokes is grouped together along with their segmentation as input to the feature extraction process. 

\section{Feature Extraction Methods}%%Feature Set
\label{sec:FeatureExtraction}

After segmentation, each stroke is represented as a list of segments $L_s$, where a segment can be an ellipse, a line, or a circular arc. When the user draws another stroke, its segmentation is appended to the list, so when the user finishes drawing the symbol, the segment list $L_s$ contains all segments of the drawn symbol. The relations between and properties of the different segments are then extracted from $L_s$. This step constitutes the feature extraction, which is described in detail in the next sections. 

The system uses a composite set of 70 features, which includes global shape properties such as the Rubine feature set \cite{gestureexample12}, ink density \cite{GeometryAndDomain102}, appearance-based properties such as Zernike moments \cite{HeloiseBeautification}, and stroke-based structural information such as the number of perpendicular lines, the number of parallel lines, and the types of primitives in each symbol. The following list gives details on all the features used in the system.


\subsection{Structural and Geometrical Features (FS1)}
These features define the structure of the geometrical symbol. Section \ref{sec:steprec} describes in detail how these 13 features are computed.  
%\begin{description}
% \item [Structural and geometrical Features (FS1)] 
  \begin{itemize}
	 \item \emph{Segments:} Number of segments in the symbol.
	 \item \emph{Strokes:} Number of strokes or partial strokes that created the symbol.  
	 \item \emph{Primitives:} Number of primitives in the symbol. This feature helps when identifying symbols with mixed geometric primitives, such as cylinders and callouts.  
	 \item \emph{Curves:} Number of curves or ellipses in the symbol. 
	 \item \emph{Lines:} Number of lines in the symbol. 
	 \item \emph{Perpendicular lines:} Number of perpendicular lines. 
	 \item \emph{Parallel lines:} Number of parallel lines. 
	 \item \emph{Intersections:} Number of intersections between lines and curves. 
	 \item \emph{T intersections:} Number of T intersections. 
	 \item \emph{L intersections:} Number of L intersections. 
	 \item \emph{X intersections:} Number of X intersections.
	 \item \emph{Min Radius:} Minimum radius of all curves in the symbol.
	 \item \emph{Max Radius:} Maximum radius of all curves in the symbol.
\end{itemize}
\subsection{Rubine Feature Set (FS2)} Features introduced by Rubine \cite{gestureexample12} for single-stroke gestures. These features represent the global shape of the drawn symbol. To compute them, we append the points of all segments into a single path $ink_{path}$, and the features are computed on this generated path.
\begin{enumerate}
	\item Cosine of the starting angle.
	\item Sine of the starting angle.
	\item Length of the diagonal of the bounding box; it gives an idea of the size of the bounding box.
	\item Angle of the diagonal; it gives an idea of the shape of the bounding box (long, tall, or square).
	\item Distance from the start point to the end point.  
	\item Total stroke length.
	\item Change in rotation (arctangent); it gives the directional angle.
	\item Absolute rotation. 
	\item Rotation squared. 
	\item The maximum speed reached (squared). 
	\item Total time of the stroke. 
\end{enumerate}
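A sketch of a few of these features computed over the appended path $ink_{path}$; the exact normalizations used in the system are assumptions here.

```python
import math

def rubine_subset(points):
    """Compute four of the Rubine-style global features over an ink path
    given as a list of (x, y) points (illustrative sketch)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = max(xs) - min(xs)              # bounding-box width
    h = max(ys) - min(ys)              # bounding-box height
    length = sum(math.hypot(points[i + 1][0] - points[i][0],
                            points[i + 1][1] - points[i][1])
                 for i in range(len(points) - 1))
    return {
        "bbox_diagonal": math.hypot(w, h),   # size of the bounding box
        "bbox_angle": math.atan2(h, w),      # shape of the bounding box
        "start_end_dist": math.hypot(points[-1][0] - points[0][0],
                                     points[-1][1] - points[0][1]),
        "stroke_length": length,
    }
```

The remaining Rubine features (angles, rotation sums, speed, and timing) follow the same pattern over consecutive point pairs.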
\subsection{Statistical Features (FS3)} 
\begin{enumerate}
	\item \textsl{Zernike moments:} Zernike moments of order $n$\footnote{We tried different values of $n$; the results include an analysis of each order.} \cite{HeloiseBeautification}. For $n=10$ there are 32 moment features. 
\end{enumerate}
\subsection{Global Shape Properties Set (FS4)}
These features are composed of different geometrical and statistical properties. These 13 features represent the global shape of the drawn symbol.

 	\begin{itemize}
	\item \emph{Size Ratio:} Ratio of the width to the height of the symbol.
	\item \emph{Log of Size Ratio:} Log of the ratio of the width to the height of the symbol.
	\item \emph{Ink Density:} Density of the symbol points inside its bounding box \cite{GeometryAndDomain102}.   
	\item \emph{Convex Hull Area:} Area of the convex hull with respect to the area of the bounding box of the symbol.
	\item \emph{Convex Hull Perimeter:} Perimeter of the convex hull with respect to the total length of the symbol.
	\item \emph{Convex Hull Points:} Number of points of the convex hull with respect to the number of points of the symbol.
	\item \emph{Mean Centroidal Radius:} The mean of the centroidal radius, which is the distance from each point in the symbol to the center of gravity.
	\item \emph{Center of Gravity:} The center of gravity of the symbol (in both the $x$ and $y$ directions).
	\item \emph{Mean Time:} The mean of the time values between every two successive points in the symbol.
	\item \emph{Mean Time Difference:} The mean of the time differences between every two successive points in the symbol.
	\item \emph{Log of Area:} Log of the area of the symbol.
	\item \emph{Diff Area:} Absolute difference between the area of the bounding box and the area of the symbol.
  \end{itemize}
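A sketch of three of these global features: the size ratio, its log, and a simple ink density. The exact normalization of ink density used in the system is an assumption; here it is taken as points per unit bounding-box area.

```python
import math

def global_shape_subset(points):
    """Compute size ratio, log size ratio, and a simple ink density
    for a symbol given as a list of (x, y) points (illustrative sketch)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = max(xs) - min(xs)              # bounding-box width
    h = max(ys) - min(ys)              # bounding-box height
    ratio = w / h
    return {
        "size_ratio": ratio,
        "log_size_ratio": math.log(ratio),
        "ink_density": len(points) / (w * h),  # assumed normalization
    }
```

The convex hull and centroidal features follow similarly from the same point list, using a standard convex hull routine.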
  
 Section \ref{sec:featuresComparisions} presents comparative results for the different feature sets built from these features.

\section{Classification}%%Feature Set
\label{sec:Classification}
The system uses a support vector machine (SVM) classifier with a linear kernel \cite{libsvm}. SVM is a binary classifier that maximizes the margin between the decision boundary and the support vectors. To enable the SVM to learn multiple classes, we used the one-versus-one strategy for combining a set of binary classifiers. This strategy trains $\frac{n(n-1)}{2}$ binary classifiers, each distinguishing between only two shapes; the final result is based on the number of votes each shape receives from all classifiers. We used the LibSVM library \cite{libsvm} and its implementation of the one-versus-one strategy. Section \ref{Sec:SVMdetail} describes the classifier in detail, how classes are determined, and the reasons behind using this specific classification method.

\subsection{Training and Testing}
A training set containing positive samples of each category in the data set is used to train the SVM classifier. Training starts after computing the feature vector of each sample. The system trains $\frac{n(n-1)}{2}$ binary classifiers, each distinguishing between only two categories. The goal of the training process is to generate a linear SVM plane that maximizes the margin between the two categories. A cross-validation procedure is used to choose the best value for the $C$ parameter of the SVM classifier ($C=100$ gave the best results).

For classification, the feature vector of the unknown symbol is presented to the trained $\frac{n(n-1)}{2}$ binary classifiers, and the final decision is made using a simple voting strategy. 



\subsection{Support Vector Machines}
\label{Sec:SVMdetail}
The foundations of Support Vector Machines (SVM) were developed by Vapnik \cite{svmintroduce} in 1995. SVMs have gained increasing popularity due to their high performance and efficient implementations. The goal of SVM algorithms is to generate a separating plane between two different classes, each represented by a set of $n$-dimensional vectors. The generated plane is then used as a classifier for unseen examples. However, there can be more than one plane that separates the data, so SVM attempts to find the plane that maximizes the margin (the distance between the classifier and the nearest samples) between the two classes (Figure \ref{fig:svm1}). To calculate the margin, two different hyperplanes are constructed, one on each side of the data; these two planes are then moved apart to generate the maximum possible margin.  
\begin{figure}
	\centering
		\includegraphics{afterDefense/svm1.jpg.eps}
	\caption{Different Classifier Planes}
	\label{fig:svm1}
\end{figure}


Let us consider a two-class classification task with data points $x_i$, $i=1,\cdots,m$, having corresponding labels $y_i=\pm1$, where $y_i=1$ if $x_i$ belongs to the first class and $y_i=-1$ if it belongs to the second class. Each data point $x_i$ is represented in an $N$-dimensional input or attribute space. Let the classification plane be represented by the function 
\begin{equation}
 y=f(x)=sign(\omega \cdot x - b)
\label{eq:planeEq}
\end{equation}

 The orientation of the separating plane is determined by the vector $\omega$, and the offset of the plane from the origin is determined by the scalar $b$. Let us assume that the data points are linearly separable, i.e., there are infinitely many planes that separate and correctly classify the training data. This idea is illustrated in Figure \ref{fig:svm1}, where two different separating planes correctly classify the data. It is intuitively clear from the figure that the solid line is more likely to generalize to new data, meaning less classification error will be introduced. Geometrically, this is the plane that is furthest from both classes.
 
The main problem of SVM is thus constructing the plane that is furthest from both classes. The main approach is to use two parallel supporting planes and maximize the margin between them; a plane supports a class if all points of that class lie on one side of it. Therefore, for the points with class label $+1$ we would like there to exist $\omega$ and $b$ such that $\omega \cdot x_i > b$, or $\omega \cdot x_i - b > 0$.
Suppose that the smallest value of $\left| \omega \cdot x_i - b \right|$ is $\kappa$, so that $\omega \cdot x_i - b \geq \kappa$. The argument inside the decision function is invariant under positive rescaling, so we implicitly fix a scale by requiring $\omega \cdot x_i - b \geq 1$. Similarly, the points with class label $-1$ require $\omega \cdot x_i - b \leq -1$. To find the plane furthest from both classes, we need to maximize the distance between the supporting planes of the two classes, as illustrated in Figure \ref{fig:svm2}. To accomplish this, both planes are pushed apart until they bump into a small number of data points (the support vectors) of each class. Figure \ref{fig:svm2} shows the support vectors as filled data points. 
\begin{figure}
	\centering
		\includegraphics{afterDefense/svm2.jpg.eps}
	\caption{Margin of SVM Support planes }
	\label{fig:svm2}
\end{figure}


The distance between these supporting planes, 
\begin{equation}
 \omega \cdot x=b+1, \qquad \omega \cdot x=b-1 ,
\end{equation}
is obtained by taking any point $x_+$ on the first plane and measuring its distance to the second: 
\begin{equation}
 \gamma = \frac{(\omega \cdot x_+) - (b-1)}{\parallel \omega \parallel} = \frac{(b+1)-(b-1)}{\parallel \omega \parallel} = \frac{2}{\parallel \omega \parallel}
\end{equation}

Thus, maximizing the margin is equivalent to minimizing $\frac{{\parallel \omega \parallel}^2}{2}$, leading to the following quadratic program: 

\begin{equation}
 \min_{\omega,b}  \frac{{\parallel \omega \parallel}^2}{2}
\end{equation}
\begin{equation}	
\begin{array}{cc}
 s.t. \ \ \   \omega \cdot x_i \geq b+1 &    y_i = +1 \\ 
 \ \ \ \ \ \   \omega \cdot x_i \leq b-1 &    y_i = -1 
  \end{array} 
\end{equation}

The constraints can be simplified to $y_i(\omega \cdot x_i-b)\geq 1$. 

The support vectors that are bumped into by the supporting planes determine the margin. This is a primal-form Quadratic Programming (QP) problem. The concept of duality can also be used to convert the problem into the Lagrangian dual QP problem in Equation \ref{eq:dual}. 
% maximum margin between parallel 
\begin{equation}
\begin{array}{c}
\min_{\alpha} \ \ \   \frac{1}{2} \sum_{i=1}^{m}{\sum_{j=1}^{m}{y_iy_j\alpha_i\alpha_jx_ix_j}} - \sum_{i=1}^{m}{\alpha_i} \\
s.t. \ \ \sum_{i=1}^{m}{y_i\alpha_i}=0   \\
  \alpha_i \geq 0  \ \ \ i=1,\cdots , m 
  \end{array} 
\label{eq:dual}
\end{equation}

Both the dual form and the primal form yield the same plane $\omega=\sum_{i=1}^{m}y_i\alpha_i x_i$ and threshold $b$, determined by the support vectors (those for which $\alpha_i > 0$). 

\subsubsection{Theoretical foundation}

Statistical learning theory provides bounds on the generalization error on future points, not only on the training set \cite{BennettSVMP2000}. The generalization error bound is a function of the misclassification error on the training data and of terms that measure the complexity, or capacity, of the classifier. For linear functions, maximizing the margin of separation reduces the complexity of the function class. Thus, by explicitly maximizing the margin we are at the same time minimizing the bound on the generalization error, and can therefore expect better generalization with high probability. The size of the margin does not depend on the dimensionality of the data, which results in good performance for very high-dimensional data (i.e., data with a very large number of attributes). In a sense, this also reduces the problems caused by overfitting of high-dimensional data. A large body of literature, e.g., \cite{bookSVMoverfit11,statisticalSVM1}, provides a more technical discussion of the statistical theory. 

%Using geometric arguments we can gain insights to some of these results. Classification function that have capacity to fit the training data are more likely to over fit resulting in poor generalization.  

\subsubsection{Linearly Inseparable Case}
In the previous sections we assumed that the problem of classifying the two sets is linearly separable. This assumption does not hold in most cases, and trying to construct a plane that separates the two data sets will then fail. Figure \ref{fig:svm5} illustrates a case where no plane can separate the two sets correctly. The constraints for constructing the planes must be relaxed, since the QP task is infeasible in the linearly inseparable case. Consider the linearly inseparable problem in Figure \ref{fig:svm5}: in the ideal case no point would be misclassified and no point would fall inside the margin, but we must relax the constraints that ensure that each point is on the appropriate side of its supporting plane. Every point that falls on the wrong side of its supporting plane is considered an error. The problem is now to maximize the margin while minimizing the error.  
 
\begin{figure}
	\centering
		\includegraphics{afterDefense/svm5.jpg.eps}
	\caption{Example of SVM Linearly Inseparable Problem}
	\label{fig:svm5}
\end{figure}
A minor change in the supporting-plane QP problem can accomplish this. A non-negative slack or error variable $z_i$ is added to each constraint, and a weighted penalty term is added to the objective as follows:
 \begin{equation}
 \begin{array}{c}
\min_{w,b,z}  \quad \frac{{\parallel \omega \parallel}^2}{2}+C \sum_{i=1}^{l}z_i  \\
s.t.  \quad y_i(\omega \cdot x_i -b)+z_i \geq 1  \\
    z_i \geq 0, \quad i=1,\cdots,m
\end{array} 
\label{eq:inseperable}
\end{equation} 

The primal relaxed supporting-plane method is equivalent to a dual problem. The Lagrangian dual of the QP task is:

\begin{equation}
\begin{array}{c}
\min_{\alpha} \ \ \   \frac{1}{2} \sum_{i=1}^{m}{\sum_{j=1}^{m}{y_iy_j\alpha_i\alpha_jx_ix_j}} - \sum_{i=1}^{m}{\alpha_i} \\
s.t. \ \ \sum_{i=1}^{m}{y_i\alpha_i}=0   \\
 C \geq \alpha_i \geq 0  \ \ \ i=1,\cdots , m 
 \end{array} 
\label{eq:dualslack}
\end{equation}



\subsubsection{Nonlinear Functions Using Kernels}
Figure \ref{fig:svm6} shows a classification problem that cannot be separated in the original linear space; a higher-order surface works better in such cases. A linear classification algorithm can be converted into a nonlinear one by adding attributes to the data that are nonlinear functions of the original data. The expanded feature space can then be separated using an existing linear classification algorithm, and the linear classification applied in the feature space produces a nonlinear classification in the original input space. For example, in a two-dimensional vector space with attributes $r$ and $s$, to construct a quadratic discriminant simply map the original two-dimensional input space $\left[r,s\right]$ to the five-dimensional feature space $\left[r,s,rs,r^2,s^2\right]$ and construct a linear discriminant in that space. Specifically, define $\vartheta(x):R^2 \rightarrow R^5$; then 
\begin{equation}
\begin{array}{l}
	x=[r,s] \\
	\omega \cdot x=\omega_1 r+ \omega_2 s \\
	\downarrow \\
	\vartheta(x)=\left[r,s,rs,r^2,s^2\right] \\
	
	 \omega \cdot \vartheta(x)= \omega_1 r+ \omega_2 s + \omega_3 rs+ \omega_4 r^2+\omega_5 s^2 
\end{array}
\label{eq:kernel}
\end{equation}  

The resulting classification function, 
\begin{equation}
\begin{array}{l}
f(x)=sign\left( \omega \cdot\vartheta(x)-b \right) \\
= sign\left( \omega_1 r+ \omega_2 s + \omega_3 rs+ \omega_4 r^2+\omega_5 s^2 -b \right) 
\end{array}
\label{eq:kern2}
\end{equation}  
is linear in the mapped five dimensional feature space but is quadratic in the two dimensional input space. 

\begin{figure}
	\centering
		\includegraphics{afterDefense/svm6.jpg.eps}
	\caption{Example of an non Linearly SVM Problem }
	\label{fig:svm6}
\end{figure}

For higher-dimensional data sets, this method of nonlinear mapping has two potential problems, stemming from the fact that the dimensionality of the feature space explodes exponentially. The first problem is that overfitting becomes a concern; however, since SVMs rely on margin maximization, they are largely immune to it, provided an appropriate value of the parameter $C$ is chosen. The second concern is the computational cost of computing $\vartheta(x)$, but SVMs use kernels to get around this problem.  

Examine what happens when the nonlinear mapping is introduced into Equation \ref{eq:dual}. Let us define $\vartheta(x):R^n\rightarrow R^{n'}$ with $n' \gg n$; we then need to optimize

\begin{equation}
\begin{array}{c}
\min_{\alpha} \ \ \   \frac{1}{2} \sum_{i=1}^{m}{\sum_{j=1}^{m}{y_iy_j\alpha_i\alpha_j\vartheta(x_i)\vartheta(x_j)}} - \sum_{i=1}^{m}{\alpha_i} \\
s.t. \ \ \sum_{i=1}^{m}{y_i\alpha_i}=0   \\
 C \geq \alpha_i \geq 0  \ \ \ i=1,\cdots , m 
 \end{array} 
\label{eq:dualkernel}
\end{equation}
Notice that the mapped data occur only as inner products in the objective. Now we apply a little mathematical magic known as Hilbert-Schmidt kernels, first applied to SVM in \cite{svmintroduce}. By Mercer's theorem, we know that for certain mappings $\vartheta$ and any two points $u$ and $v$, the inner product of the mapped points can be evaluated using a kernel function without ever explicitly computing the mapping, i.e., $\vartheta(u)\cdot\vartheta(v)\equiv K(u,v)$. Some of the more popular kernels are given below; new kernels are being developed to fit domain-specific requirements. 
\begin{center}
	\begin{tabular}{cc}
	Kernel  & $K(u,v)$ \\ \hline 
	Degree $d$ polynomial & $\left(u\cdot v +1\right)^d$ \\ 
	Radial Basis Function & $\exp \left( - \frac{\left\|u-v\right\|^2}{2\sigma}\right) $ \\ 
	Two-layer Neural Network & $ \mathrm{sigmoid} \left(\eta\left(u\cdot v +1\right)+c\right)$ \\ 
	\end{tabular}
\end{center}
To change from a linear to a nonlinear classifier, one must only substitute a kernel evaluation for the original dot product in the objective. Thus, by changing kernels we obtain different highly nonlinear classifiers. No algorithmic changes are required beyond this substitution, and all the benefits of the original linear SVM method are maintained. We can train a highly nonlinear classification function, such as a polynomial machine, a radial basis function machine, or a sigmoid neural network, using robust, efficient algorithms that have no problems with local minima. By kernel substitution, a linear algorithm (only capable of handling linearly separable data) is turned into a general nonlinear algorithm. 
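The kernel trick amounts to replacing dot products with kernel evaluations in the decision function. A sketch with a polynomial kernel follows; the support vectors, multipliers $\alpha_i$, and threshold $b$ shown in the test are placeholders, not a trained model.

```python
def poly_kernel(u, v, d=2):
    """Degree-d polynomial kernel K(u, v) = (u . v + 1)^d."""
    return (sum(ui * vi for ui, vi in zip(u, v)) + 1) ** d

def svm_decision(x, support_vectors, labels, alphas, b, kernel=poly_kernel):
    """Kernelized decision function f(x) = sign(sum_i y_i a_i K(x, x_i) - b).

    The support vectors, multipliers, and threshold are assumed to come
    from a trained SVM; this routine only evaluates the decision rule."""
    s = sum(y * a * kernel(x, sv)
            for sv, y, a in zip(support_vectors, labels, alphas))
    return 1 if s - b > 0 else -1
```

Swapping `poly_kernel` for an RBF or sigmoid kernel changes the classifier's shape without any other algorithmic change, which is exactly the point made above.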
%\begin{equation}
%\begin{array}{cc}
%\underline{&}\\  
% & \left(u\cdot v +1\right)^d \\
%Radial Basis Function & \exp \left( - \frac{\left\|u-v\right\|^2}{2\sigma}\right) \\
%Two layer Neural Network & sigmoid \left(\eta\left(u\cdot v +1\right)+c\right) 
% \end{array} 
%\label{eq:diffkernel}
%\end{equation}
 
%The foundations of Support Vector Machines (SVM) have
\subsubsection{SVM  Classification}
The resulting SVM method can be summarized as follows:
\begin{enumerate}
	\item Select the SVM classifier parameters. The parameter $C$ represents the tradeoff between minimizing the training-set error and maximizing the margin. 
	\item Select a kernel function and its kernel parameters. For example, for the radial basis function kernel one must select the width $\sigma$ of the Gaussian. 
	\item Solve the dual QP in Equation \ref{eq:dualkernel}, or an alternative SVM formulation, using an appropriate quadratic or linear programming algorithm. 
	\item Recover the support vectors and the threshold variable $b$. 
	\item Classify a new point $x$ using $f(x)=\mathrm{sign}\left(\sum_{i}y_i\alpha_i K\left(x,x_i\right)-b\right)$.
\end{enumerate}
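A minimal sketch of the classification rule in step 5, assuming an RBF kernel; the support vectors, multipliers, and threshold below are hypothetical toy values, not output of a trained model:

```python
import math

def rbf(u, v, sigma=1.0):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(u, v)) / (2.0 * sigma))

def svm_decision(x, sv, y, alpha, b, kernel=rbf):
    """f(x) = sign(sum_i y_i alpha_i K(x, x_i) - b),
    summing only over the support vectors sv."""
    s = sum(yi * ai * kernel(x, xi) for xi, yi, ai in zip(sv, y, alpha)) - b
    return 1 if s >= 0 else -1

# Toy model: one positive support vector at the origin, threshold b = 0.5.
sv, y, alpha, b = [(0.0, 0.0)], [1], [1.0], 0.5
svm_decision((0.0, 0.0), sv, y, alpha, b)    # near the support vector -> +1
svm_decision((10.0, 10.0), sv, y, alpha, b)  # far away -> -1
```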
The parameters in step 1 are selected using cross-validation if sufficient data are available. %However, recent model  selectin strategies can 
\subsubsection {Multi Class Classifiers}
 
 There are two main popular methods for turning the binary SVM classification problem into a multi-class classification problem. The first is the One-Versus-All (OVA) strategy, in which $M$ binary classifiers are constructed, one classifier for each class. The $i$th classifier is trained using the examples of the $i$th class as positive and the examples of all other classes as negative. The output function $p_i$ represents the distance of the example from the decision boundary of the $i$th classifier. For a new example $x$, the OVA strategy assigns it to the class with the largest value of $p_i$. 
 
 The second method is the One-Versus-One (OVO) strategy, in which a binary classifier is constructed for every pair of distinct classes, so that for $M$ classes $\frac{M(M-1)}{2}$ binary classifiers are built altogether. Each binary classifier $C_{ij}$ is trained taking the examples from class $i$ as positive and the examples from class $j$ as negative. Classification is based on voting: for a new example $x$, if classifier $C_{ij}$ says $x$ is in class $i$, then the vote for class $i$ is increased by one; otherwise, the vote for class $j$ is increased by one. After each of the $\frac{M(M-1)}{2}$ binary classifiers has voted, the OVO strategy assigns $x$ to the class with the largest number of votes. 
 
 Both methods have proved competitive with each other, and there is no clear superiority of one method over the other \cite{Duan05whichis}. Some claim that the OVO strategy trains in less time than the OVA strategy because each binary classifier trains on only a portion of the dataset \cite{libsvm}, but this claim has not yet been verified experimentally.  
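The OVO voting scheme can be sketched as below; the pairwise classifiers here are hypothetical 1-D threshold functions, chosen only to exercise the voting logic:

```python
from itertools import combinations

def ovo_predict(x, classes, classifiers):
    """One-vs-One voting over M(M-1)/2 pairwise classifiers.
    classifiers[(i, j)](x) returns a positive value to vote for class i,
    and a non-positive value to vote for class j."""
    votes = {c: 0 for c in classes}
    for i, j in combinations(classes, 2):
        winner = i if classifiers[(i, j)](x) > 0 else j
        votes[winner] += 1
    return max(votes, key=votes.get)

# Hypothetical 1-D pairwise classifiers (thresholds picked for illustration).
clfs = {(0, 1): lambda x: 1 if x < 0.5 else -1,
        (0, 2): lambda x: 1 if x < 1.0 else -1,
        (1, 2): lambda x: 1 if x < 1.5 else -1}
```

For three classes there are three pairwise votes; e.g., `ovo_predict(0.0, [0, 1, 2], clfs)` collects two votes for class 0 and returns it.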


\subsubsection{ SVM in Machine Learning }
SVMs have been applied successfully to various applications, ranging from particle identification \cite{ParticleSVM7} to face detection \cite{faceSVM8,faceSVM7} and text categorization \cite{TextSVM7}. SVMs also suit the classification tasks that commonly arise in machine vision. Consider, for example, the face identification problem of \cite{faceSVM8}, where three different SVM classifiers were used on a training set of 200 images (5 images per person); the error rate was less than 5\%. Another example is the digit recognition problem, where an RBF network and SVM classifiers were compared on the USPS handwritten digit recognition database \cite{ORLDataset}; the SVM outperformed the RBF network on all digits.  

%\subsubsection {SVM vs. Other Learning Methods}

SVMs have also been applied to the larger MNIST handwritten digit recognition dataset of 60,000 training and 10,000 test examples \cite{IjdarSherifPaper}. Sherif and Ezzat performed a comparison between SVM and several other techniques on the MNIST dataset, and compared the same classification methods on Arabic handwritten digits (MADBase) \cite{ADBase9,IjdarSherifPaper}. Their results show that the SVM classifier outperforms all the other techniques on both the MNIST and MADBase datasets, achieving a recognition rate of over 98\%. Similar results were reported on the MNIST dataset by \cite{empiricalcomp11,SVMInvariantComDecoste02}. There are many other successful applications; SVMs have proven robust to noise and perform well on many tasks \cite{empiricalcomp11,Scholkopf97comparingsupport}. 


This superior performance arises because SVMs eliminate many of the problems experienced with other inference methodologies such as neural networks and decision trees. The first problem eliminated is local minima: an SVM can construct highly nonlinear classification and regression functions without any worry about getting stuck in a local minimum. Another problem eliminated is the number of parameters that must be picked to build the model; for example, an SVM classifier with an RBF kernel needs only two parameters, the cost parameter $C$ and the width $\sigma$ of the RBF kernel. Finally, the results are stable, reproducible, and largely independent of the specific algorithm used to optimize the SVM model: if two users apply the same SVM model with the same parameters to the same data, they will get the same solution modulo numerical issues. Compare this with neural networks, where the results depend on the particular algorithm and starting point used. 

SVM classifiers are also simple to use. One need not be an SVM expert to successfully apply existing SVM software to new problems. 

%different classifier on MNIST and a   


\subsubsection {SVM Implementations}
Applying the SVM approach requires solving a QP or LP problem, and mathematical programming has studied LP- and QP-type problems extensively. The solution techniques can be divided into three categories: methods that evaluate and discard kernel components during learning, decomposition methods in which evolving subsets of the data are used, and new optimization approaches that specifically exploit the structure of the SVM problem. 

In the first category, the most obvious approach is to sequentially update the $\alpha_i$; the Kernel Adatron (KA) algorithm \cite{kernelSVM4} takes this approach. 
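The sequential-update idea can be sketched as follows. This is a schematic illustration only, nudging one $\alpha_i$ at a time toward margin $y_i z_i = 1$ and clipping at zero; it is not the exact algorithm of \cite{kernelSVM4}.

```python
def adatron_sweep(alpha, X, y, kernel, eta=0.1):
    """One sequential pass over the data: for each i, compute the margin
    z_i = sum_j alpha_j y_j K(x_i, x_j), move alpha_i toward y_i*z_i = 1,
    and keep alpha_i >= 0. Schematic sketch of the sequential-update idea."""
    for i in range(len(X)):
        z = sum(alpha[j] * y[j] * kernel(X[i], X[j]) for j in range(len(X)))
        alpha[i] = max(0.0, alpha[i] + eta * (1.0 - y[i] * z))
    return alpha

dot = lambda u, v: sum(a * b for a, b in zip(u, v))  # linear kernel
alpha = adatron_sweep([0.0, 0.0], [(1.0, 0.0), (-1.0, 0.0)], [1, -1], dot)
```

Each update touches a single multiplier, so only one row of the kernel matrix is needed at a time; this is what lets such methods evaluate and discard kernel components during learning.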

Chunking and decomposition methods optimize the SVM with respect to subsets: they update the $\alpha_i$ in parallel, but using only a subset, or working set, of the data at each stage rather than updating the $\alpha_i$ sequentially. Decomposition strategies have been successfully coded in many libraries; SVM libraries such as SVMTorch \cite{svmTorch} and SVMLight \cite{SVMlight} are available online. 


The third approach is to attack the SVM problem directly from an optimization perspective and create an algorithm that explicitly exploits the structure of the problem. Keerthi et al. \cite{fastSVM6} proposed a very effective algorithm of this kind, based on the dual geometry of finding the two closest points in the convex hulls. 
 
%\subsubsection {Summary Of SVM}

% whate is support vector machines 
%%The classifcation problem can be restricted to consideration of the two-class problem
%without loss of generality. In this problem the goal is to separate the two classes by a
%function which is induced from available examples. The goal is to produce a classifier
%that will work well on unseen examples, i.e. it generalizes well. Consider the example
%in Figure 2.1. Here there are many possible linear classifiers that can separate the data,
%but there is only one that maximizes the margin (maximizes the distance between it
%and the nearest data point of each class). This linear classifier is termed the optimal
%separating hyper plane. Intuitively, we would expect this boundary to generalize well as
%opposed to the other possible boundaries.
%
%Let $x_i$ , $i$ = $1.2 \dots,N$ , be the feature vectors of the training set, $X$ . These
%belong to either of two classes that are linearly spereable $\omega_1=1$ and  $\omega_2=-1$.
%The goal, is to design a hyperplane which solve the equation 
%\begin{equation}
%g(x) = \omega^{T} x + \omega g = 0
%\label{eq:svmhyper}
%\end{equation}
 % Vladimir N. Vapnik. Statistical Learning Theory. John Wiley & Sons, Inc., New
% York, NY, September 1998.
 
%The above is a classic example of a linear classifier, i.e., a classifier that separates a set of objects into their respective groups (GREEN and RED in this case) with a line. Most classification tasks, however, are not that simple, and often more complex structures are needed in order to make an optimal separation, i.e., correctly classify new objects (test cases) on the basis of the examples that are available (train cases). This situation is depicted in the illustration below. Compared to the previous schematic, it is clear that a full separation of the GREEN and RED objects would require a curve (which is more complex than a line). Classification tasks based on drawing separating lines to distinguish between objects of different class memberships are known as hyperplane classifiers. Support Vector Machines are particularly suited to handle such tasks. 

% how decisions is made
%  how training is done 
% how testing is donel. 
% kernels 
% how multi classifier are implemented 
% how parameter for cost is measured. 
% advantages 
% disadavantages 
% importance and applications used in it. 


\section{Step by Step Example}
\label{secstepExample}
In this section a step-by-step example is presented to clarify the proposed system. Assume that the user draws a clock using two strokes. Figure \ref{fig:clock} shows the two strokes and how the user draws them. The following sections demonstrate the output of the system after each step: Section \ref{sec:stepseg} describes how the segmentation is done step by step, and Section \ref{sec:steprec} shows the details of feature calculation and the classification process. 
\begin{figure}
	\centering
	\subfigure[First Stroke]{\label{fig:clockstroke1}
		\includegraphics[scale=0.7]{afterDefense/clockstroke1.jpg.eps} }
		\subfigure[Second Stroke] {	\label{fig:clockstroke2}
		\includegraphics[scale=0.7]{afterDefense/clockstroke2.jpg.eps}}		
		\subfigure[Completed symbol]{
		\label{fig:Clock1}\includegraphics[scale=0.7]{afterDefense/ClockDirections.jpg.eps}	}
	\caption{Stroke Drawing Sequence: a) the user first draws the ellipse; b) the square wave is drawn inside the ellipse; c) the completed symbol, with arrows illustrating the directions the user followed while drawing.}
	\label{fig:clock}
\end{figure}

\subsection{Step by Step Segmentation}
\label{sec:stepseg}

In this step the user draws one or more strokes to form a symbol. Figure \ref{fig:clock} illustrates the sequence and direction of the user's drawing. In this section, each stroke the user draws is labeled as illustrated in Figure \ref{fig:clockstroke1} and Figure \ref{fig:clockstroke2}.  Table \ref{tab:StepsStroke1} shows the detailed output of the program at each step. The input of this step is a set of strokes, each consisting of a set of points, and the output is a list of segments that will be used in the recognition process. A more detailed example and the algorithms are given in Appendix \ref{ChapterstepExample}. \\

For each stroke the user draws, the system segments the stroke using the following procedure:

\begin{enumerate}
\item Extract $P_{dp}$, as in Section \ref{sec:Preprocessing} and Algorithm \ref{extractpdp}.
\item Ellipse fitting: attempt to fit the stroke as an ellipse, as detailed in Section \ref{sec:EllipseDetection} and Algorithm \ref{ellipseFitAlg} in Appendix \ref{ChapterstepExample}.
\item Segmentation AlgS1, as explained in Section \ref{sec:SwarmSegmentation}. The algorithm is similar to the PSO algorithm in Section \ref{sec:ParticleSwarmAlgorithm} (see Algorithm \ref{AlgS1alg} in Appendix \ref{ChapterstepExample}).
%he axes of the ellipse are estimated as the $width/2$ and $height/2$ of the stroke bounding box where $width$ is width and $height$ is height of the bonding box.
\item Segmentation AlgS2, as explained in Section \ref{sec:PolygonDivisionAlgorithm} (see Algorithm \ref{AlgS2alg} in Appendix \ref{ChapterstepExample}).
\item Append the best segmentation to the segmentation list ($L_s$) of this symbol. 
\end{enumerate}
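The final best-fit choice (step 5) amounts to keeping the segmentation hypothesis with the smallest error. A minimal Python sketch, using the error values reported for the second stroke in the ``Choosing best fit'' row of Table \ref{tab:StepsStroke2}:

```python
def choose_best_segmentation(candidates):
    """Keep the segmentation hypothesis with the lowest error score.
    candidates maps an algorithm label to its error value."""
    best = min(candidates, key=candidates.get)
    return best, candidates[best]

# Error values from the "Choosing best fit" row of the second-stroke table.
best, err = choose_best_segmentation({"AlgS1": 780.6652975044847,
                                      "AlgS2": 161.60692855994625})
# best is "AlgS2", matching the table's decision for stroke 2
```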


\begin{landscape}
The steps of the first stroke (details of the outputs are in  Appendix \ref{ChapterstepExample} in Table \ref{tab:StepsStroke1Details}):
\begin{scriptsize}
%\begin{table}
	%\centering 
%\scalebox{0.6}{		
\begin{longtable}{|p{2cm}|p{2cm}|p{5cm}|p{2cm}|p{5cm}|}
\caption{ Output of System in Each Step of Segmentation of First Stroke}
\label{tab:StepsStroke1} \\

\hline 
\multicolumn{1}{|p{2cm}|}{\textbf{Step}} & 
\multicolumn{1}{p{2cm}|}{\textbf{Description}} &
\multicolumn{1}{p{5cm}|}{\textbf{Inputs}} &
\multicolumn{1}{p{2cm}|}{\textbf{Details}} &
\multicolumn{1}{p{5cm}|}{\textbf{Output}} 
\\ \hline 
\endfirsthead

\hline
\multicolumn{5}{c}%
{{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\
\multicolumn{1}{|p{2cm}|}{\textbf{Step}} & 
\multicolumn{1}{p{2cm}|}{\textbf{Description}} &
 \multicolumn{1}{p{5cm}|}{\textbf{Inputs}} &
 \multicolumn{1}{p{2cm}|}{\textbf{Details}} &
 \multicolumn{1}{p{5cm}|}{\textbf{Output}} 
\\ \hline 
\endhead


 
%Step & Discription of Step & Input & Steps & Output \\ \hline
Stroke 1 & Extracting points and computing boundary data & -- &   & 	

	\includegraphics[scale=0.5]{afterDefense/clockstroke1.eps}


  \\ \hline
Possible dominant point extraction & for computation details see Section \ref{sec:Preprocessing} &  	

%	\includegraphics[scale=0.5]{afterDefense/clockstroke1.eps}
  &   &    \includegraphics[scale=0.5]{afterDefense/stroke1withpdp.jpg.eps}
\\ \hline 
Segmentation & Segmentation of Ellipse & $P_{dp}$   &  Ellipse fitting  & 
The circle is   a = 103.05715108084719 ,b = 103.06851874747797 , Center( 283.03365107691775,  181.05035287667172 )   with Error =  1.9471676673301757 , Percent =1.0695193086531243
eff  = 0.27463462099275116  

 \includegraphics[scale=0.5]{afterDefense/ellipse.jpg.eps}
 \\ \hline
 	\end{longtable}
 	\newpage
 	The steps of the second stroke (details of the outputs are in Appendix \ref{ChapterstepExample} in Table \ref{tab:StepsStroke1Details}):
 	
\begin{longtable}{|p{2cm}|p{2cm}|p{5cm}|p{2cm}|p{5cm}|}
\caption{ Output of System in Each Step of Segmentation of Second Stroke}
\label{tab:StepsStroke2} \\

\hline 
\multicolumn{1}{|p{2cm}|}{\textbf{Step}} & 
\multicolumn{1}{p{2cm}|}{\textbf{Description}} &
\multicolumn{1}{p{5cm}|}{\textbf{Inputs}} &
\multicolumn{1}{p{2cm}|}{\textbf{Details}} &
\multicolumn{1}{p{5cm}|}{\textbf{Output}} 
\\ \hline 
\endfirsthead


\multicolumn{5}{c}%
{{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\ \hline
\multicolumn{1}{|p{2cm}|}{\textbf{Step}} & 
\multicolumn{1}{p{2cm}|}{\textbf{Description}} &
 \multicolumn{1}{p{5cm}|}{\textbf{Inputs}} &
 \multicolumn{1}{p{2cm}|}{\textbf{Details}} &
 \multicolumn{1}{p{5cm}|}{\textbf{Output}} 
\\ \hline 
\endhead
 	
Stroke 2 & Extracting points and computing boundary data & -- &  see Section \ref{sec:Preprocessing} &
 	\includegraphics[scale=0.5]{afterDefense/clockstroke2.jpg.eps} \\ \hline
  & Possible dominant point extraction; for details see Section \ref{sec:Preprocessing} &   	%\includegraphics[scale=0.5]{afterDefense/clockstroke2.jpg.eps}
    & 1. Compute $P_{dp}$  &  
  \includegraphics[scale=0.5]{afterDefense/stroke2WithPdp.jpg.eps}
    \\ \hline
 
Segmentation & Segmentation of Ellipse & $P_{dp}$   &  Ellipse fitting  & 
  The Ellipse is   a = 125.5 ,b = 125.5 , Center( 320.5,  235.0 )   with Error =  7.074966523515405 , Percent =1.1481777694994926 certainty  =  0.08114368920921669  
 \\ \hline
 Segmentation & PSO Segmentation AlgS1 & $P_{dp}$  & AlgS1   & Global best fitness is 0.875 and error value is 729.248312158671 
 \includegraphics[scale=0.7]{afterDefense/Stroke2Seg1labeled.jpg.eps}
 \\ \hline 
  Segmentation & PSO Segmentation AlgS2 & $P_{dp}$  & AlgS2   & Error value is 8.61499750333895
 \includegraphics[scale=0.7]{afterDefense/Stroke2Segmentaion.jpg.eps}
 \\ \hline 
Choosing best fit & Compare AlgS1 and AlgS2 & First PSO and second PSO &   & The error of AlgS1 is 780.6652975044847 and of AlgS2 is 161.60692855994625; Best = AlgS2 

\includegraphics[scale=0.7]{afterDefense/Stroke2Seg2labeled.jpg.eps}	
  \\ \hline 
		\end{longtable}
\end{scriptsize}
\end{landscape}
The final output of this step is the list $L_s$ of segments. There are 9 segments in this symbol (Figure \ref{fig:FinalSegmentationLabeled}):

\begin{itemize}
	\item Segment S0 = Ellipse (Stroke 1)
	\item Segment S1 = Line (Stroke 2)
	\item Segment S2 = Line (Stroke 2)
	\item Segment S3 = Line (Stroke 2)
	\item Segment S4 = Line (Stroke 2)
	\item Segment S5 = Curve (Stroke 2)
	\item Segment S6 = Curve (Stroke 2)
	\item Segment S7 = Line (Stroke 2)
	\item Segment S8 = Line (Stroke 2)
\end{itemize}
\begin{figure}
	\centering
 		\includegraphics[scale=0.65]{afterDefense/FinalSegmentation.jpg.eps}
 
	\caption{Final Segmentation of the Symbol. The segments are labeled on the symbol.}
	\label{fig:FinalSegmentationLabeled}
\end{figure}

\subsection{Step by Step Recognition}
\label{sec:steprec}

	The input of this step is the segmentation list $L_s$, which is used to compute the feature vector. Figure \ref{fig:segmentatiofinal.jpg} shows the best segmentation as given by the previous step. This step can be summarized as first using $L_s$ to compute the feature vector (70 features in total) and then introducing this vector to the SVM classifier, which assigns it to one of the trained categories. Each step of the feature extraction is detailed in Algorithm \ref{FeatureAlg}. Feature extraction is divided into four steps; each feature set is computed separately, and all sets are then appended to construct the final feature vector. 
	
	
\begin{enumerate}
	\item Compute structural and geometrical features (FS1): the number of parallel and perpendicular lines and similar structural features [13 features]. Algorithm \ref{FeatureAlg} in Appendix \ref{ChapterstepExample} is used to determine these features.
 	\item Rubine feature set (FS2): compute the Rubine features as in \cite{gestureexample12} [12 features].
	\item Statistical features (FS3): compute the moments as in \cite{zernike61}; for $N=10$ there are 32 features.
	\item Global shape properties set (FS4): first compute the convex hull of the stroke points, then compute the rest of the features [13 features]. 
	\item Introduce the computed feature vector (all 70 features) to the classifier and read the label of the detected symbol. 
\end{enumerate}
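The assembly of the final vector is a simple concatenation of the four sets. A minimal sketch using the feature counts stated above; the zero-filled vectors are placeholders, not real feature values:

```python
def build_feature_vector(fs1, fs2, fs3, fs4):
    """Append the four feature sets (13 + 12 + 32 + 13 features, per the
    counts listed above) into the final 70-dimensional vector."""
    sizes = (13, 12, 32, 13)
    for fs, n in zip((fs1, fs2, fs3, fs4), sizes):
        assert len(fs) == n, "unexpected feature-set size"
    return list(fs1) + list(fs2) + list(fs3) + list(fs4)

# Placeholder feature sets, only to show the concatenation and the total size.
vec = build_feature_vector([0.0] * 13, [0.0] * 12, [0.0] * 32, [0.0] * 13)
```

The order of concatenation must match the order used during training, since the SVM classifier treats the vector positionally.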

\begin{landscape}

More details are in Table \ref{tab:StepsRegDetails} in Appendix \ref{ChapterstepExample} 

\begin{scriptsize}
	
 \begin{longtable}{|p{2cm}|p{2cm}|p{5cm}|p{2cm}|p{10cm}|}
\caption{Detailed Output of System in Each Step of Recognition}
\label{tab:StepsReg} \\

\hline 
\multicolumn{1}{|p{2cm}|}{\textbf{Step}} & 
\multicolumn{1}{p{2cm}|}{\textbf{Description}} &
\multicolumn{1}{p{5cm}|}{\textbf{Inputs}} &
\multicolumn{1}{p{2cm}|}{\textbf{Details}} &
\multicolumn{1}{p{10cm}|}{\textbf{Output}} 
\\ \hline 
\endfirsthead


\multicolumn{5}{c}%
{{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\  
\hline
\multicolumn{1}{|p{2cm}|}{\textbf{Step}} & 
\multicolumn{1}{p{2cm}|}{\textbf{Description}} &
 \multicolumn{1}{p{5cm}|}{\textbf{Inputs}} &
 \multicolumn{1}{p{2cm}|}{\textbf{Details}} &
 \multicolumn{1}{p{10cm}|}{\textbf{Output}} 
\\ \hline 
\endhead

 Feature Extraction of FS1 & Preprocessing of structural features  &  \includegraphics[scale=0.5]{afterDefense/FinalSegmentation.jpg.eps}
 &  (see the first algorithm in Section \ref{sec:steprec})  &    Line 1 is \textbf{parallel} to Line 3
 
 Line 1 \textbf{intersects} Line 2
  Line 2 \textbf{intersects} Line 3
   Line 3 \textbf{intersects} Line 4
    Line 4 \textbf{intersects} Line 5
 
\\ \hline 


 FS1 & Structural and geometrical Features & Segment List $L_s$  &  & 
 The final feature vector of FS1:
Feature Number of primitives 3.0, Feature Number of segments 9.0, Feature Line count 6.0, Feature Curves Count 3.0, Feature Ellipse Count 1.0, Feature Intersection T type -1.0, Feature Intersection L type 4.0, Feature Intersection X type -1.0, Feature Parallel Count 1.0, Feature Perpendicular Count -1.0, Feature Intersection Lines Count 4.0, Feature Min Radius 1.7976931348623157E308, Feature Max Radius 231.70851748053468
 
 \\ \hline
FS2  & Rubine Features & Segment List $L_s$  &   see \cite{gestureexample12}   & 
\begin{scriptsize}
 Feature Rubine 0 -1.0   ,    Feature Rubine 1 -1.0   ,    Feature Rubine 2 477.6107201476952   ,    Feature Rubine 3 0.6368010415482491   ,    Feature Rubine 4 169.49926253526885   ,    Feature Rubine 5 0.8908593331996381   ,    Feature Rubine 6 0.454279262624981   ,    Feature Rubine 7 1179.0172885029808   ,    Feature Rubine 8 4.712388980384688   ,    Feature Rubine 9 70.80157037633609   ,    Feature Rubine 10 55.4044569978846   ,    Feature Rubine 11 9.140625   ,    Feature Rubine 12 41783.0   ,   
\end{scriptsize}
 \\ \hline
 
FS3 & Moments Computation   &  Segment List $L_s$  &  see \cite{zernike61}  &

\begin{scriptsize}
   Feature Zernike moments 0 158.87520185460517, Feature Zernike moments 0 7.007557168632727, Feature Zernike moments 1 34.26844061831789, Feature Zernike moments 1 6.658086415801404, \dots
\end{scriptsize}
  
  \\ \hline
FS4   &  &Segment List $L_s$   &  &  \begin{scriptsize} 
 The convex hull    [ ]
 Area of convex hull =  -34160.5   perimeter of hull 656.1576413855265
 
 --------------------------------------------------------------------------------
 
Final Feature vector for the FS4: 
  Feature Centroid time 1.2504525968640884E12, Feature Centroid time difference 28319.088339222613, Feature area of convexhull 34160.5, Feature area of convexhull/area of symbol 1.2365344240932454, Feature N.Points ConvexHull/ N. points symbol 0.15302491103202848, Feature Convext Perimeter/symbol perimeter 0.556529278903672, \dots
  \end{scriptsize}
    \\ \hline
 Feature Vector & Append the selected feature sets FS1, FS2, FS3, FS4 & Previous steps  & & 
 
  [FS1,FS2,FS3,FS4]  \\ \hline
   Classification  & SVM classifier decision & Feature Vector  &  & 
   Symbol = Clock.  
 
 \\ \hline
   
 		\end{longtable}
%}

%\end{table}

\end{scriptsize}
\end{landscape}
