In this chapter we discuss the implementation of the less straightforward parts of the method described in the previous chapter.
\section{Analysis}
\subsection{Writing systems}
The writing systems discussed in this work are shown in Table \ref{tab:writingSystems}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{Writing system} & \textbf{Number of glyphs} & \textbf{Example glyphs} \\ \hline
Latin & 26 & a, b, c \\ \hline
Hebrew & 27 & \includegraphics[height=10pt]{img/hebrew0}, \includegraphics[height=10pt]{img/hebrew1}, \includegraphics[height=10pt]{img/hebrew2}\\ \hline
Burmese & 42 & \dots \\ \hline
Georgian & 42 & \dots \\ \hline
Armenian & 38 & \dots \\ \hline
Cyrillic & 25 & \dots \\ \hline
Hiragana & 85 & \dots \\ \hline
Katakana & 89 & \dots \\ \hline
\end{tabular}
\caption{The writing systems to be analyzed}
\label{tab:writingSystems}
\end{table}

\subsection{Representation}
For each glyph in each writing system a $256\times256$ binary image was created, with the white glyph centered on a black background. Each glyph was rendered in the Arial Unicode font, except for Burmese, for which Arial Unicode was unavailable; here the Padauk font \cite{Padauk} was used instead.
% The choice of a binary representation removes one degree of freedom with respect to a grayscale representation, leading to more compact data.

\subsection{Thinning}
The second thinning algorithm described in Guo and Hall, 1989 \cite{GuoHall1989} was implemented from scratch. Two separate subiterations are executed in alternating order until the result stops changing. The first subiteration deletes pixels from left to right and from top to bottom, while the second subiteration removes pixels from right to left and from bottom to top.

\subsection{Features}
Some of the features discussed in Chapter \ref{chp:method} have less than straightforward implementation. They are discussed here.

\subsubsection{Perimetric complexity, connected components and holes}
\label{subs:complexity}
The perimetric complexity is defined as the squared perimeter divided by the ink area. The ink area is simply the number of white pixels and thus easy to compute. Computing the perimeter, however, is more involved, as horizontal and vertical neighbours are closer together than diagonal neighbours. Pelli et al. used an approximation method based on dilation \cite{Pelli2006}, but this method was shown to produce significantly erroneous results in many cases (Watson, 2011 \cite{Watson2011}).
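In terms of the perimeter $P$ and the ink area $A$, the quantity being computed is

\[ \mathrm{PC} = \frac{P^2}{A}. \]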

We have opted for the OpenCV built-in function \verb!findContours!, which implements the border-following algorithm proposed by Suzuki et al., 1985 \cite{Suzuki1985}. This function returns all contours, including those of holes, in a two-level hierarchy. The number of top-level contours equals the number of connected components, while the number of second-level contours equals the number of holes. The length of each found contour can be computed using \verb!arcLength!; the sum of these lengths is the total perimeter of all connected components.

\subsubsection{Corners}
Corners are found using the hit-and-miss morphological operator, which locates all occurrences of a given pattern of foreground, background and don't-care pixels in a binary image.

\subsubsection{Center of gravity}
The center of gravity (COG) of a glyph is computed separately for the x-axis and the y-axis. For a single row or column it is defined as

\[ \mu = \frac{1}{N} \sum_{i=0}^{N-1} i\, p(i) \]

where $p(i)$ is the value of the $i$th pixel in the row or column, and $N$ the number of pixels in it. The COGs averaged over all rows and over all columns are two features used in the feature vector.

\subsubsection{Density}
Glyph density is defined as the fraction of the canvas which is set to one (white):

\[ d = \frac{1}{N} \sum_{i=0}^{N-1} p(i) \]

where $N$ is now the total number of pixels in the canvas. As this is the sole feature to take the size of the canvas into account, it is the only feature that captures the relative
size of the glyphs. % MAYBE REMOVE? ADD ACTUAL SIZE FEATURE?

\subsubsection{Convexity}
The true convexity of a glyph is defined as the ratio of the glyph area to the area of its convex hull. As the convex hull is
relatively expensive to compute, we instead use the ratio of the glyph area to the area of its bounding box.


\subsubsection{Gradient orientation}
A distribution of the line directions in a glyph can be obtained from the orientation of its gradient. The x-gradient and y-gradient are computed
using the Sobel operator; the gradient direction at each pixel then follows from Equation \ref{eq:gradDirection}.

\begin{equation}
\theta = \tan^{-1}\frac{\Delta y}{\Delta x}
\label{eq:gradDirection}
\end{equation}

The gradient directions of all pixels in a glyph are collected in a histogram with four bins. The sizes of these four bins contribute four features to the final feature vector.

\subsection{Machine Learning}
After generating a feature vector for each glyph, we feed the data into a supervised learning (SL) algorithm, set up to classify the glyphs
into their respective writing systems. We have experimented with a number of SL algorithms, settling on the Random Forest algorithm by Breiman \cite{Breiman2001}. A full examination of this algorithm is outside the scope of this work.
\section{Synthesis}
Using the information acquired in the previous section, we would like to generate new glyphs. % CONTINUE

\subsection{Simulated Annealing}


\subsection{Representation}
\section{Technical considerations}
All implementation has been carried out in C++ with the OpenCV library \cite{OpenCV}. Any machine learning experimentation was done with the WEKA data mining software \cite{WEKA}. Some batch scripting was done in Python.
