This work is divided into two parts: glyph analysis and glyph synthesis. The method for each part
is laid out in this section.

\section{Analysis}
For the purpose of analysis, the glyphs are thinned and then subjected to a number of feature extraction
methods, yielding one feature vector per glyph. These feature vectors are then fed into a classification algorithm. These steps are detailed in the
following subsections.
\subsection{Writing systems}
Analysis was carried out on the 10 writing systems listed below.
\begin{enumerate}
\item The Latin alphabet, 26 glyphs
\item The Hebrew alphabet, 27 glyphs
\item The Greek alphabet, 24 glyphs
\item The Burmese alphabet, 42 glyphs
\item The Georgian alphabet, 42 glyphs
\item The Armenian alphabet, 38 glyphs
\item The Cyrillic alphabet, 25 glyphs
\item The Hiragana syllabary, 85 glyphs
\item The Katakana syllabary, 89 glyphs
\item A selection from the Kanji logosyllabary, 64 glyphs
\end{enumerate}
All glyphs can be found in Appendix \ref{chp:glyphs}. For information on
representation, see Chapter \ref{chp:impl} on implementation.

\subsection{Thinning}
Before features are extracted, each glyph is subjected to a thinning algorithm
in order to remove thickness information. For this purpose the second algorithm described by Guo and Hall
\cite{GuoHall1989} is used. It consists of two subiterations that are executed alternately
until a full iteration produces no further changes. For each pixel a number of conditions is checked;
if all conditions are satisfied, the pixel is set to 0 in the next iteration. For implementation
details, consult Chapter \ref{chp:impl} or the original work.

\subsection{Feature extraction}
After thinning, a number of features are extracted from each glyph. These
features are discussed in more detail in this section. Features whose
implementation is less than straightforward are also covered in Chapter
\ref{chp:impl}.

\subsubsection{Perimetric complexity}
As mentioned before in Chapter \ref{chp:background}, the perimetric complexity
of a character gives a good relative estimate of recognition efficiency among
novice and experienced users.  It is computed as the ratio between squared
glyph perimeter $P$ and ink area $N$, where ink area is the number of white pixels. The computation
of glyph perimeter is not completely straightforward and is detailed in Section \ref{subs:complexity}.
\[ C = \frac{P^2}{N} \]
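As an illustrative sketch (not the actual implementation), the complexity can be computed in a few lines of Python. The perimeter estimate below simply counts the exposed edges of white pixels, a stand-in for the perimeter computation that Section \ref{subs:complexity} details:

```python
import numpy as np

def perimeter_estimate(glyph):
    """Approximate the perimeter by counting exposed edges of white pixels.

    An edge of a white pixel counts as 'exposed' when the neighbouring
    pixel in that direction is black (or lies outside the image).
    """
    g = np.pad(np.asarray(glyph, dtype=int), 1)  # add a black border
    white = g == 1
    edges = 0
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        # white pixels whose neighbour in this direction is black
        edges += np.sum(white & ~np.roll(white, shift, axis=(0, 1)))
    return int(edges)

def perimetric_complexity(glyph):
    """C = P^2 / N, with N the number of white (ink) pixels."""
    n_ink = int(np.sum(np.asarray(glyph) == 1))
    return perimeter_estimate(glyph) ** 2 / n_ink
```

For a solid $2 \times 2$ block of white pixels this yields $P = 8$, $N = 4$ and thus $C = 16$.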

\subsubsection{Center of Gravity}
The center of gravity (COG) of a glyph is computed separately for the x-axis and y-axis. It can be computed
for each row and for each column, and is defined as

\[ \mu = \frac{1}{N} \sum_{i=0}^{N-1} i\, p(i) \]

where $p(i)$ is 1 if the $i$th pixel in the row or column is white and 0
otherwise, and $N$ the number of pixels in the row or column. This feature is
defined with respect to the glyph's bounding box, making it scale independent.
The average COG for rows and columns are two features used in the feature
vector.

\subsubsection{Variance in Center of Gravity}
Using the previous feature, the variance in rowwise and columnwise center of gravity can be computed,
which gives a measure of how far on average white pixels deviate from the center of gravity. It is computed for one row or column as

\[ \sigma^2 = \frac{1}{N} \sum_{i=0}^{N-1} \left( i\, p(i) - \mu \right)^2 \]

The average variance in COG for rows and columns are two features used in the feature vector.
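Both the COG and its variance follow directly from the definitions above. The Python snippet below (illustrative, not the actual implementation) computes the four averaged features:

```python
import numpy as np

def cog_features(glyph):
    """Average centre of gravity (COG) and its variance, row- and column-wise.

    For one row (or column), mu is the mean of the values i * p(i), and the
    variance is the mean squared deviation of those values from mu.
    """
    g = np.asarray(glyph, dtype=float)

    def stats(lines):
        # `lines` holds one row (or column) of the glyph per array row
        idx = np.arange(lines.shape[1])
        v = lines * idx                              # values i * p(i)
        mu = v.mean(axis=1)                          # COG per line
        var = ((v - mu[:, None]) ** 2).mean(axis=1)  # variance per line
        return mu.mean(), var.mean()

    row_mu, row_var = stats(g)     # index runs along each row
    col_mu, col_var = stats(g.T)   # index runs along each column
    return row_mu, row_var, col_mu, col_var
```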

\subsubsection{Density}
Glyph density is defined as the fraction of the glyph's bounding box which is set to white:

\[ d = \frac{1}{N} \sum_{i=0}^{N-1} p(i) \]

\subsubsection{Height and Width}
The height and width of the glyph's bounding box relative to the size of the canvas are features in the feature vector.

\subsubsection{Aspect Ratio}
The aspect ratio of a glyph is defined as the ratio between the height and width of its bounding box.
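The bounding-box features of the last three subsections (density, relative height and width, and aspect ratio) can be sketched together; the canvas dimensions are passed in as parameters, and the function names are illustrative:

```python
import numpy as np

def bbox_features(glyph, canvas_h, canvas_w):
    """Density, relative height/width, and aspect ratio of the bounding box."""
    g = np.asarray(glyph)
    rows = np.any(g == 1, axis=1)
    cols = np.any(g == 1, axis=0)
    # first and last row/column containing a white pixel
    top, bottom = np.argmax(rows), len(rows) - 1 - np.argmax(rows[::-1])
    left, right = np.argmax(cols), len(cols) - 1 - np.argmax(cols[::-1])
    box = g[top:bottom + 1, left:right + 1]
    height, width = box.shape
    density = box.mean()          # fraction of white pixels in the box
    aspect = height / width       # bounding-box height over width
    return density, height / canvas_h, width / canvas_w, aspect
```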

\subsubsection{Connected Components}
The number of connected components is used as a feature in the feature vector. Two white pixels belong to different
components if and only if there is no 8-connected path of white pixels from one to the other. The computation
of this feature is not straightforward and is discussed in more detail in Chapter \ref{chp:impl}.
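As an illustration of the definition (not the implementation discussed in Chapter \ref{chp:impl}), a breadth-first flood fill over 8-neighbourhoods counts the components:

```python
from collections import deque

def connected_components(glyph):
    """Count 8-connected components of white (1) pixels via flood fill."""
    h, w = len(glyph), len(glyph[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if glyph[sy][sx] == 1 and not seen[sy][sx]:
                count += 1                      # a new component starts here
                queue = deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:                    # flood-fill this component
                    y, x = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and glyph[ny][nx] == 1
                                    and not seen[ny][nx]):
                                seen[ny][nx] = True
                                queue.append((ny, nx))
    return count
```

Note that under 8-connectivity two diagonally touching pixels belong to the same component.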

\subsubsection{Maximum segments}
This feature records the maximum number of segments per row and per column. For one row or column, a segment is defined as a
connected set of white pixels. For each row and column of a glyph the number of segments is computed, and the maximum is recorded for both
rows and columns. This renders two features per glyph to be added to the feature vector.
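A minimal sketch of this feature in Python, counting runs of white pixels per line:

```python
def max_segments(glyph):
    """Maximum number of white runs over all rows and over all columns."""
    def runs(line):
        count, prev = 0, 0
        for px in line:
            if px == 1 and prev == 0:   # a new run starts here
                count += 1
            prev = px
        return count

    columns = list(zip(*glyph))         # transpose to iterate over columns
    return max(runs(r) for r in glyph), max(runs(c) for c in columns)
```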

\subsubsection{Number of Corners}
A corner is defined as a $3 \times 3$ submatrix of the glyph which matches one of the patterns in Table \ref{tab:corners} exactly.

\begin{table}[!h]
\centering
\subfigure{\begin{tabular}{|c|c|c|} \hline 0 & 0 & 0 \\ \hline 0 & 1 & 1 \\ \hline 0 & 1 & 0 \\ \hline \end{tabular}}
\subfigure{\begin{tabular}{|c|c|c|} \hline 0 & 0 & 0 \\ \hline 1 & 1 & 0 \\ \hline 0 & 1 & 0 \\ \hline \end{tabular}}
\subfigure{\begin{tabular}{|c|c|c|} \hline 0 & 1 & 0 \\ \hline 1 & 1 & 0 \\ \hline 0 & 0 & 0 \\ \hline \end{tabular}}
\subfigure{\begin{tabular}{|c|c|c|} \hline 0 & 1 & 0 \\ \hline 0 & 1 & 1 \\ \hline 0 & 0 & 0 \\ \hline \end{tabular}}
\caption{Different corner rotations}
\label{tab:corners}
\end{table}

These corners are found using the hit-or-miss transform, the implementation of which is discussed in chapter \ref{chp:impl}.
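Because the patterns must match exactly, the hit-or-miss transform reduces here to exact matching of every $3 \times 3$ window. The sketch below (illustrative, not the implementation of Chapter \ref{chp:impl}) generates the four patterns of Table \ref{tab:corners} as rotations of the first:

```python
import numpy as np

# the four corner patterns (0 = black, 1 = white), as rotations of the first
CORNER_PATTERNS = [np.array([[0, 0, 0],
                             [0, 1, 1],
                             [0, 1, 0]])]
CORNER_PATTERNS += [np.rot90(CORNER_PATTERNS[0], k) for k in (1, 2, 3)]

def count_corners(glyph):
    """Count 3x3 windows that match one of the corner patterns exactly,
    i.e. a hit-or-miss transform whose structuring element is the full window."""
    g = np.asarray(glyph)
    h, w = g.shape
    hits = 0
    for y in range(h - 2):
        for x in range(w - 2):
            win = g[y:y + 3, x:x + 3]
            hits += any(np.array_equal(win, p) for p in CORNER_PATTERNS)
    return hits
```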

\subsubsection{Number of holes}
The number of holes, also called number of loops, is the number of areas of only black pixels completely surrounded by white pixels.
The implementation of this feature is discussed in more detail in Chapter \ref{chp:impl}.

\subsubsection{Gradient Orientation}
It would be informative to get an idea of the distribution of stroke directions
of a glyph. As a rough approximation the gradient orientation of the glyph is
recorded. To this end, the glyph is convolved with the x and y variants of the
Sobel kernel (see Table \ref{tab:sobel}) to get an approximate derivative $dx$ and $dy$ in x
and y direction respectively.
We can then compute the gradient direction of each pixel as described in Equation \ref{eq:gradDir}.
\begin{equation}
\label{eq:gradDir}
\theta = \operatorname{atan2}(dy, dx)
\end{equation}
This yields one value per pixel, in the range $[-\pi, \pi]$. These values are then distributed over 8 equally sized bins; the resulting bin counts contribute 8 features to the final feature vector.
\begin{table}[h]
\centering
\subfigure[x-direction]{\begin{tabular}{|c|c|c|} \hline -1 & 0 & 1 \\ \hline -2 & 0 & 2 \\ \hline -1 & 0 & 1 \\ \hline \end{tabular}}
\subfigure[y-direction]{\begin{tabular}{|c|c|c|} \hline -1 & -2 & -1 \\ \hline 0 & 0 & 0 \\ \hline 1 & 2 & 1 \\ \hline \end{tabular}}
\caption{The Sobel operator}
\label{tab:sobel}
\end{table}
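A sketch of this pipeline in Python, illustrative only: it uses correlation with the Sobel kernels (which for these antisymmetric kernels only mirrors the orientations relative to true convolution) and the two-argument arctangent, so the orientations span the full $[-\pi, \pi]$ range:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

def correlate3(img, kernel):
    """Same-size correlation with a 3x3 kernel, using zero padding."""
    p = np.pad(img, 1).astype(float)
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def orientation_histogram(glyph, bins=8):
    """Distribute per-pixel gradient orientations over equally sized bins."""
    g = np.asarray(glyph, dtype=float)
    dx = correlate3(g, SOBEL_X)
    dy = correlate3(g, SOBEL_Y)
    theta = np.arctan2(dy, dx)                 # orientation in [-pi, pi]
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi, np.pi))
    return hist
```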

\subsection{Classification}
The features described above are concatenated into one feature vector per glyph. The glyph's writing system is added to this feature vector as a class label.
This results in a labeled dataset of all glyphs. Using the machine learning tool WEKA \cite{WEKA} a large set of supervised learning algorithms
can then be applied to the full dataset or to a subset. After running the selected classification algorithm, WEKA returns a number of statistics
about the degree of success of the classification. In Chapter \ref{chp:results} we will use these statistics to evaluate the features listed above.

\section{Synthesis}
The distribution of feature values of a given writing system is used to generate new data points that fit that distribution.
If the features described are descriptive enough of the writing system this should result in new glyphs that are similar to the
original glyphs of the writing system.

\subsection{Fitting a Multivariate Normal Distribution}
The distribution of values of the feature vector for a certain writing system is approximated as a multivariate normal distribution.
The mean of this distribution is approximated as the \emph{sample mean}, which is the average of all datapoints (feature vectors) belonging
to the writing system:
\begin{equation}
\vect{\mu} = \frac{1}{N}\sum_{i=1}^N \vect{x}_i
\end{equation}
The covariance matrix is approximated as the \emph{sample covariance matrix}:
\begin{equation}
\vect{\Sigma} = \frac{1}{N - 1}\sum_{i=1}^N (\vect{x}_i - \vect{\mu})(\vect{x}_i - \vect{\mu})^\mathrm{T}
\end{equation}

It is now straightforward to compute the likelihood of any feature vector under this distribution. We use the log likelihood, where $k$ is the dimensionality of the feature vector:
\begin{equation}
\ln (L) = -\frac{k}{2}\ln(2 \pi) - \frac{1}{2}\ln |\vect{\Sigma} | - \frac{1}{2}(\vect{x} - \vect{\mu})^\mathrm{T} \vect{\Sigma}^{-1}(\vect{x} - \vect{\mu})
\end{equation}
As the first two terms are constant we can leave them out, which leaves us with
\begin{equation}
\label{eq:logLikeNoConstant}
f(\vect{x}) = - \frac{1}{2}(\vect{x} - \vect{\mu})^\mathrm{T} \vect{\Sigma}^{-1}(\vect{x} - \vect{\mu})
\end{equation}
The higher $f(\vect{x})$, the more likely $\vect{x}$ is under the fitted distribution. This function can be subjected to
non-linear optimization. The next section describes \emph{simulated annealing}, which is one way of approximating the global optimum of
a given function.
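The fitting and fitness steps can be sketched with NumPy (an illustrative sketch, not the actual implementation); `np.cov` already applies the $1/(N-1)$ sample normalization:

```python
import numpy as np

def fit_mvn(X):
    """Sample mean and sample covariance of feature vectors X (one per row)."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    sigma = np.cov(X, rowvar=False)   # 1/(N-1) sample covariance
    return mu, sigma

def fitness(x, mu, sigma):
    """Log likelihood with the constant terms dropped."""
    d = np.asarray(x, dtype=float) - mu
    # solve(sigma, d) avoids forming the explicit inverse of sigma
    return -0.5 * d @ np.linalg.solve(sigma, d)
```

The fitness is maximal (zero) at the sample mean and decreases as $\vect{x}$ moves away from it.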

\subsection{Simulated annealing}
As described in Chapter \ref{chp:background}, simulated annealing is a
heuristic approach to approximating the globally optimal solution to a given
problem.  Starting from an initial candidate solution, it mutates this
candidate according to certain rules and accepts or rejects each mutation
according to an energy function.  The specific configuration of these steps for
this work is discussed in the next few sections.

\subsubsection{Search space}
Given the fact that the glyphs of the original writing systems are represented
as matrices of pixels, it is theoretically possible to search the whole space
of binary images (of a certain size) for solutions with a high fitness.
However, as this search space is very large, and most of the candidates in this
space are not suitable, it would be wise to limit the search space in such a
way that samples found in that search space are more likely to be fit. To this
purpose, we take into account the prior information that glyphs consist of
\emph{strokes}. The search space is thus limited to only those images that
consist of strokes. The procedure listed in Algorithm \ref{alg:randToStroke}
takes a matrix of numbers between 0 and 1 and returns a binary image
consisting of strokes.

{\singlespacing
\begin{algorithm}[!h]
  \centering
  \begin{algorithmic}[1]
    \Procedure{RandToStroke}{$S, \alpha, \lambda$}
      \State $T \gets \mathrm{threshold}(S, \alpha)$\Comment{Threshold $S$ at $\alpha$ to create binary matrix}
      \State $T \gets \mathrm{Lanczos}(T, \lambda)$\Comment{Scale up $T$ by $\lambda$ using Lanczos interpolation \cite{Duchon1979}}
      \State $T \gets \mathrm{thin}(T)$\Comment{Thin $T$ using Guo and Hall's method \cite{GuoHall1989}}
      \State \textbf{return} $T$\Comment{$T$ is now a binary stroke image}
    \EndProcedure
  \end{algorithmic}
  \caption{Random matrix to stroke image}
  \label{alg:randToStroke}
\end{algorithm}
}
This method can generate a vast number of different images, which all consist
solely of strokes. By keeping $S$ as our state, mutating only
$S$, and computing the fitness of $T$, we are guaranteed to stay in the space
of stroke images, while keeping the method of mutation straightforward.

If $S$ is fully random, $\alpha$ dictates the ratio of white and black pixels after
the first step. In general this means that the higher $\alpha$ is set, the
lower the density of the final result. The size of $S$ defines the number of
degrees of freedom in the generation process. $\lambda$ controls the amount of
interpolation necessary. A high value for $\lambda$ means a high degree of
interpolation, resulting in rounder strokes than for low values of $\lambda$.

See Figure \ref{fig:randToStroke} for an example of all steps in the algorithm.

\begin{figure}[!h]
\centering
\subfigure[Initial matrix]{\includegraphics[width=0.2\textwidth]{img/rand16x16}}
\subfigure[Thresholding]{\includegraphics[width=0.2\textwidth]{img/rand16x16T05}}
\subfigure[Interpolation (displayed at a quarter size)]{\includegraphics[width=0.2\textwidth]{img/rand16x16T05R4}}
\subfigure[Thinning]{\includegraphics[width=0.2\textwidth]{img/rand16x16T05R4Th}}
\caption{Visualization of steps of Algorithm \ref{alg:randToStroke}}
\label{fig:randToStroke}
\end{figure}

\subsubsection{Initial candidate solution}
The method of generating an initial state from which the algorithm will start
optimizing must be chosen carefully. The closer the rendered initial solution
is to the optimum, the higher the probability will be that the algorithm will
actually find it and settle there.  One of the most straightforward
initializations is a blank matrix (all zeroes). A fully empty state will render
a fully empty solution, which will not fit the sample distribution well, and
thus the simulated annealing algorithm will move out of this state quickly.

Another possibility is to randomly initialize the state matrix with numbers
between 0 and 1, thus immediately generating more complex candidates using the
\verb!randToStroke! algorithm.

Both of these methods will be evaluated in Chapter \ref{chp:results}.

\subsubsection{Energy function}
To evaluate the fitness of a certain candidate solution, an energy function is
necessary. In this case the log likelihood with constants removed as shown in
Equation \ref{eq:logLikeNoConstant} is used. Usually, simulated annealing tries
to minimize the energy function, whereas this function needs to be maximized to
find the optimal solution.  Note that the state itself is not passed to the
function; rather, the result of passing the state through the
\verb!randToStroke! algorithm is used.

\subsubsection{Mutating}
Multiple mutation methods seem feasible. One method would be to add or
subtract a random value between 0 and 1 at a random pixel, but this method is
quite slow, meaning that a very large number of steps needs to be taken for the
algorithm to converge to a desirable solution. At the other extreme, adding a
random matrix of the same size as the state allows only global
changes. As many global changes are detrimental to the advancement of the
solution, it can take a long time until by chance a global change is generated
that improves the result (or worsens the result but is accepted anyway).

A compromise between the two is the random addition and subtraction of
\emph{bivariate Gaussians}, with randomized covariance and mean, scaled in such
a way that their range lies between predefined values. The randomized
covariance means both global and local changes are possible. This versatility means
that change can happen quickly, without removing the possibility of small local
changes -- a feature quite important for tweaking the glyph in a late stage to
find the local optimum. See Figure \ref{fig:bivGauss}.

\begin{figure}[!h]
\centering
\subfigure{\includegraphics[width=0.3\textwidth]{img/bivgauss0}}
\subfigure{\includegraphics[width=0.3\textwidth]{img/bivgauss1}}
\subfigure{\includegraphics[width=0.3\textwidth]{img/bivgauss2}}
\caption{Three random bivariate Gaussians; these are either subtracted from or added to the current state. Note that the values are scaled for visualization purposes.}
\label{fig:bivGauss}
\end{figure}
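One possible construction of such a mutation is sketched below; the peak amplitude bound, the clipping of the state to $[0, 1]$, and the way the random positive-definite covariance is generated are assumptions for illustration, not taken from the implementation:

```python
import numpy as np

def random_gaussian_bump(shape, amplitude=0.25, rng=None):
    """A random bivariate Gaussian on a grid, scaled to a fixed peak
    amplitude, with a random sign so it can be added or subtracted."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    mean = rng.uniform([0, 0], [h, w])       # random centre on the grid
    a = rng.uniform(-1, 1, size=(2, 2))
    cov = a @ a.T + np.eye(2)                # random positive-definite covariance
    inv = np.linalg.inv(cov)
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.stack([ys - mean[0], xs - mean[1]], axis=-1)
    quad = np.einsum('...i,ij,...j->...', d, inv, d)
    bump = np.exp(-0.5 * quad)
    sign = rng.choice([-1.0, 1.0])
    return sign * amplitude * bump / bump.max()

def mutate(state, rng=None):
    """Add a random Gaussian bump and clip the state back to [0, 1]."""
    return np.clip(state + random_gaussian_bump(state.shape, rng=rng), 0.0, 1.0)
```

A narrow covariance yields a local tweak, a wide one an almost global change, which is exactly the versatility described above.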

\subsubsection{Temperature}
Two important parameters of the simulated annealing algorithm are the starting
temperature and its cooling schedule. The ideal starting temperature is dependent on the
topography of the search space and the range of the energy function. By first
running the algorithm starting at a very high temperature ($T = 1 \times 10^{6}$)
and plotting the temperature decrease against the fitness, one can find at what temperature
the steps stop being completely random. This is set as the starting temperature.
As this point varies with each feature distribution, we repeat this process
for each writing system.

A vast number of cooling schedules exist. In this work, a simple exponential cooling schedule is used, where
the temperature is multiplied by a constant $\alpha$ at each step:
\begin{equation}
T_{t+1} = \alpha T_t ,
\label{eq:expCoolingSchedule}
\end{equation}
with $\alpha < 1$. This cooling schedule has been shown to be reasonably efficient \cite{Kirkpatrick1983}. As $\alpha$ approaches 1,
the probability of reaching the global optimum approaches 1. For this report, $\alpha$ has been set to $0.99$.

\subsubsection{Stopping criterion}
The algorithm stops as soon as the temperature drops below a certain threshold or when the maximum number of iterations has been reached.
The maximum number of iterations has been set to 10000.
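The annealing loop, configured with exponential cooling and both stopping criteria, can be sketched as follows. This is a generic maximizing variant under the assumptions above (Metropolis acceptance with probability $e^{\Delta/T}$ for worsening steps), not the actual implementation:

```python
import math
import random

def simulated_annealing(initial, mutate, energy, t_start, alpha=0.99,
                        t_min=1e-6, max_iter=10000, rng=random):
    """Maximizing simulated annealing with exponential cooling.

    A worse candidate is accepted with probability exp(delta / T), where
    delta = energy(candidate) - energy(current) is negative for worse ones.
    """
    current, f_current = initial, energy(initial)
    best, f_best = current, f_current
    t = t_start
    for _ in range(max_iter):
        if t < t_min:                    # temperature stopping criterion
            break
        candidate = mutate(current)
        f_candidate = energy(candidate)
        delta = f_candidate - f_current
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current, f_current = candidate, f_candidate
            if f_current > f_best:       # track the best state seen so far
                best, f_best = current, f_current
        t *= alpha                       # exponential cooling schedule
    return best, f_best
```

In the glyph setting, `mutate` would apply the Gaussian-bump mutation to the state $S$ and `energy` would score the rendered stroke image $T$ with $f(\vect{x})$.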
