Writing systems in general have been the subject of research in many different
areas of study. Given their ubiquitous presence in society through all ages,
their academic appeal is not surprising. This chapter discusses the studies
most relevant to the purpose of this work.

An extensive work on writing systems is `\textit{The World's
Writing Systems}' by Daniels and Bright, 1996 \cite{DanielsBright1996}.
Covering both linguistic and morphological features of writing, it gives a
thorough review of many of the world's historical and modern writing systems.
It stops short, however, of offering any sort of computational approach to
the analysis of writing.

One evident way of harnessing the power of computation with respect to
writing is \emph{optical character recognition}, or OCR: the process of
converting handwritten or printed text into digital form. Many different
methods of OCR have been proposed, based on many different high-level
approaches. Directly relevant to our cause are the methods based on
\emph{feature extraction}. These methods first break down the input
into separate, lower-dimensional features, and then compare these features to a
previously computed dictionary of feature vectors in order to classify the glyphs
represented by the input. A survey by Trier et al., 1996 \cite{Trier1996} of
different methods of feature extraction for character recognition shows
that this approach achieves varying levels of success. In this work, feature extraction
also lies at the basis of the approach, but the feature dictionary used for classification contains not a
vector for each glyph but a single vector for each writing system. A
more in-depth explanation of the approach taken can be found in Chapter
\ref{chp:method}.

Apart from the technicalities of the different approaches to handwriting
recognition based on feature extraction, the most important parameter of such
systems is the type of features used. Feature extraction is a broad subject
and can be based on a variety of inputs. This work focuses on methods that
extract features from a binary pixel representation of a glyph.

Pelli et al., 2006 \cite{Pelli2006} showed that a measure known as
\emph{perimetric complexity}, extracted from a given glyph, is directly
correlated with the efficiency of recognition of that glyph. It is defined as
the squared perimeter of the glyph divided by its ink area, where the ink area
is the number of white pixels (white being glyph and black background). This
measure is high for complex characters such as those found in Kanji, and low
for characters in less complex writing systems, such as Hebrew and Cyrillic
(see Figure \ref{fig:perimetricComplexity}).
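As an illustration, the measure can be sketched in a few lines of Python. The function name and the edge-counting estimate of the perimeter are our own choices for this sketch, not those of \cite{Pelli2006}:

```python
def perimetric_complexity(img):
    """Perimetric complexity of a binary glyph image.

    img: list of rows, 1 = glyph (white), 0 = background (black).
    The perimeter is estimated by counting exposed edges of glyph
    pixels in the 4-neighbourhood, a crude but simple approximation.
    """
    h, w = len(img), len(img[0])
    ink_area = sum(sum(row) for row in img)
    perimeter = 0
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                # An edge bordering the image boundary or a
                # background pixel belongs to the perimeter.
                if not (0 <= ny < h and 0 <= nx < w) or not img[ny][nx]:
                    perimeter += 1
    return perimeter ** 2 / ink_area
```

For a filled $2\times2$ square the sketch gives a perimeter of 8 and an ink area of 4, hence a complexity of 16; real implementations typically use a subtler perimeter estimate than raw edge counting.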

\begin{figure}[!h]
\centering
\subfigure[Kanji character. $c = 315.59$]{\includegraphics[width=0.3\textwidth]{img/kanjiExample}}
\subfigure[Cyrillic character. $c = 85.86$]{\includegraphics[width=0.3\textwidth]{img/cyrillicExample}}
\caption{Perimetric complexity gives a reliable measure of efficiency of recognition.}
\label{fig:perimetricComplexity}
\end{figure}

A set of features quite salient to this paper's purpose is that of
\emph{discrete features}: features extracted directly from the topology
of the glyph.

In \cite{Trier1996} the following set
of extractable discrete features, based on those found in \cite{Ramesh1989} and
\cite{Kundu1989}, is listed:
\begin{quote}
``the number of loops; the number of
T-joints; the number of X-joints; the number of bend
points; width-to-height ratio of enclosing rectangle;
presence of an isolated dot; total number of endpoints
and number of endpoints in each of the four directions
N, S, W and E; number of semi-circles in each of these
four directions; and number of crossings with vertical
and horizontal axes, respectively, the axes placed on
the center of gravity.''
\end{quote}
The advantage of these features is that it is straightforward to compute the
moments (e.g.\ mean, variance, skewness) of their distribution over some
character set. In the second part of this work, an attempt will be made to
synthesize glyphs based on these distributions.
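Computing these moments is indeed a few lines of code. The sketch below is our own illustration; it uses the population (biased) estimators, and reports a skewness of zero for a constant sample, where the quantity is otherwise undefined:

```python
def moments(values):
    """Mean, variance and skewness of a list of feature values,
    e.g. the number of endpoints of every glyph in a character set."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    sd = var ** 0.5
    # Skewness is undefined for a constant sample; report 0 then.
    skew = sum(((v - mean) / sd) ** 3 for v in values) / n if sd else 0.0
    return mean, var, skew
```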

Extracting these features from the topology of a given glyph is not completely
straightforward. Different typefaces can have very different appearances while
the topology of the glyph remains the same. To remove as much stylistic
information as possible, a process called skeletonization, or thinning, is
used: the process of removing all thickness from a given binary image while
leaving as much of the topology of the original intact as possible. This makes
sense for our purpose, as it would seem that the class to which a certain glyph
belongs stems from its topological structure rather than from its thickness.
Note, however, that when classifying for \textit{typeface}, thickness appears
to be quite a distinguishing feature. This issue is discussed more thoroughly
in Chapter \ref{chp:method}.
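As a small illustration of how such discrete features can be read off a finished skeleton, the sketch below counts endpoints: skeleton pixels with exactly one set neighbour. The function is our own and deliberately minimal; detecting T- and X-joints requires more careful neighbourhood-pattern analysis, since naive neighbour counting over-counts pixels adjacent to a junction:

```python
def count_endpoints(skel):
    """Count endpoints in a one-pixel-wide skeleton: glyph pixels
    (value 1) with exactly one set pixel in their 8-neighbourhood."""
    h, w = len(skel), len(skel[0])
    ends = 0
    for y in range(h):
        for x in range(w):
            if not skel[y][x]:
                continue
            n = sum(skel[ny][nx]
                    for ny in range(max(y - 1, 0), min(y + 2, h))
                    for nx in range(max(x - 1, 0), min(x + 2, w)))
            if n - 1 == 1:  # subtract the pixel itself
                ends += 1
    return ends
```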

Different methods of thinning exist, each with its own advantages and problems.
Although it is possible to perform skeletonization without losing any
topological information, such methods generally render results with an
abundance of dendrite-like structures near the edges, representing thickness.
These structures are generally not desirable for the kind of analysis sought
after in this work. Other methods do not offer full topological preservation
but result in cleaner skeletons, more appropriate for our scope.
See Figure \ref{fig:thinningComparison} for a comparison.

\begin{figure}[!h]
\centering
\subfigure[Original image.]{\includegraphics[width=0.3\textwidth]{img/bOrig}}
\subfigure[Thinning with dendrite-like structures. Full topological structure is preserved.]{\includegraphics[width=0.3\textwidth]{img/bSkeleton}}
\subfigure[Thinning with loss of topological structure. The result is cleaner.]{\includegraphics[width=0.3\textwidth]{img/bThinned}}
\caption{Difference in thinning results}
\label{fig:thinningComparison}
\end{figure}

Guo and Hall, 1989 \cite{GuoHall1989} proposed two subiteration algorithms for
thinning. The skeletons their method produces contain no thickness information
and as such display none of the dendrite-like structures just described. They
showed their method to be superior in speed to at least three other similar
approaches.\footnote{This method is also implemented in MATLAB, in the function
\texttt{bwmorph}.}
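A plain-Python sketch of the two-subiteration scheme is given below. The neighbour conditions follow a widespread reimplementation of the first Guo--Hall algorithm; readers should consult \cite{GuoHall1989} for the authoritative formulation:

```python
def guo_hall_thin(img):
    """Two-subiteration parallel thinning in the spirit of Guo &
    Hall (1989).  img: list of rows, 1 = glyph, 0 = background.
    Modified in place and returned."""
    h, w = len(img), len(img[0])

    def px(y, x):  # treat out-of-bounds pixels as background
        return img[y][x] if 0 <= y < h and 0 <= x < w else 0

    changed = True
    while changed:
        changed = False
        for it in (0, 1):  # the two subiterations
            marked = []
            for y in range(h):
                for x in range(w):
                    if not img[y][x]:
                        continue
                    # Neighbours p2..p9, clockwise starting north.
                    p2, p3, p4 = px(y-1, x), px(y-1, x+1), px(y, x+1)
                    p5, p6, p7 = px(y+1, x+1), px(y+1, x), px(y+1, x-1)
                    p8, p9 = px(y, x-1), px(y-1, x-1)
                    # Connectivity number C and neighbour count N.
                    c = ((not p2 and (p3 or p4)) + (not p4 and (p5 or p6)) +
                         (not p6 and (p7 or p8)) + (not p8 and (p9 or p2)))
                    n1 = (p9 or p2) + (p3 or p4) + (p5 or p6) + (p7 or p8)
                    n2 = (p2 or p3) + (p4 or p5) + (p6 or p7) + (p8 or p9)
                    n = min(n1, n2)
                    m = ((p6 or p7 or not p9) and p8) if it == 0 else \
                        ((p2 or p3 or not p5) and p4)
                    if c == 1 and 2 <= n <= 3 and not m:
                        marked.append((y, x))
            # Apply all deletions at once: the parallel step.
            for y, x in marked:
                img[y][x] = 0
            changed = changed or bool(marked)
    return img
```

Note that a one-pixel-wide line passes through unchanged (interior pixels fail the connectivity condition, endpoints the neighbour-count condition), which is exactly the fixed-point behaviour a thinning algorithm needs.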

The outputs of these feature-extraction algorithms are collected into a single
feature vector for each glyph. Adding the glyph's writing system as the class
label to such a feature vector enables one to feed those vectors into a
supervised machine learning (ML) algorithm aimed at classifying each vector
into its writing system. A very large number of supervised learning algorithms
are available, and a full review of them is outside the scope of this work. The
machine learning tool WEKA \cite{WEKA} was used for all machine learning
experiments. Two specific supervised ML algorithms used in this work are Random
Forest, introduced by Breiman, 2001 \cite{Breiman2001}, which generates a
multitude of decision trees and classifies the input as the mode of the outputs
of the individual trees, and Naive Bayes, a simple statistical classifier which
fits an independent multivariate normal distribution to the data. The latter is
interesting to include as it could give information about the salience of
dependence between features.
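To make the Naive Bayes assumption concrete, the toy sketch below fits an independent Gaussian per feature and class and classifies by the highest log-posterior. This is our own illustration, not the WEKA implementation used in the experiments, and the class labels are hypothetical:

```python
import math

def fit_gaussian_nb(X, y):
    """Fit per-class, per-feature mean and variance: the
    'independent multivariate normal' that Naive Bayes assumes."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        # Clamp the variance away from zero to keep the density finite.
        variances = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                     for col, m in zip(zip(*rows), means)]
        model[c] = (means, variances, len(rows) / len(X))
    return model

def classify(model, x):
    """Return the class with the highest log-posterior for x."""
    best, best_lp = None, -math.inf
    for c, (means, variances, prior) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, variances):
            # Log of an independent Gaussian density per feature.
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

Because the per-feature densities are simply multiplied (added in log space), any dependence between features is ignored; comparing such a classifier against Random Forest is one way to gauge how much that dependence matters.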

In the second part of this paper we concern ourselves with the synthesis of new characters
based on the distribution of the previously acquired features of glyphs in existing writing systems.
To this end, we harness the power of a non-linear global optimization algorithm
known as simulated annealing.

Simulated annealing is a heuristic approach to approximating the global optimum
of a given combinatorial problem. In many cases it provides lower running
times than exhaustive search, in exchange for lower accuracy. Introduced by
Kirkpatrick, Gelatt \& Vecchi \cite{Kirkpatrick1983}, it overcomes the problem
of getting stuck in local minima, often the bane of other heuristic
optimization procedures. It works by first making big random steps through the
sample space in search of coarse areas with many good solutions, and then
slowly decreasing both the step size and the amount of cost-increasing change
allowed, finally reducing to a steepest-descent algorithm.

Many different flavours of simulated annealing exist, but the basic steps
remain the same across versions. An initial random configuration is generated
and its cost is computed and noted. Then, in each iteration, a new
configuration is generated from the previous one using some mutation function.
This function can be anything and must be specified by the user. The cost of
the new configuration is compared to that of the previous one. If the cost
is lower, the new configuration replaces the old, and the next iteration is
started. If the cost is higher, however, the new configuration is not
automatically rejected, but judged based on some probability function. This
probability function depends not only on the difference in cost, but also on a
variable called the \emph{temperature} ($T$). At high temperatures, changes for
the worse are more likely to be accepted than at lower temperatures. By
sequentially lowering the system's temperature with each iteration, the system
rejects more and more cost-increasing changes as the process progresses. At
temperature 0, only improvements are accepted. A function often chosen for this
purpose is shown in Equation \ref{eq:simulProb}, where $e$ is the current cost,
$e_n$ the cost of the new configuration, and $T$ the temperature.
\begin{equation}
P(e, e_n, T) = \exp((e - e_n)/T)
\label{eq:simulProb}
\end{equation}

This mechanism of accepting changes for the worse under certain circumstances
allows the system to escape local minima and increases its chance of
finally settling on a configuration close to the global optimum. A pseudocode version of the algorithm can be found in Algorithm \ref{alg:simulatedAnnealing}.

{\singlespacing
\begin{algorithm}[!h]
  \centering
  \begin{algorithmic}[1]
    \State $s \gets s_0$\Comment{Initialize state $s$}
    \State $T \gets T_0$\Comment{Initialize temperature}
    \State $e \gets E(s)$\Comment{Compute energy of initial state}
    \State $i \gets 0$\Comment{Keep track of number of iterations}
    \While{$i < N$ \textbf{and} $T > 0$}
      \State $s_n \gets \mathrm{mutate}(s)$\Comment{Generate mutation of current state}
      \State $e_n \gets E(s_n)$\Comment{Compute energy of new state}
      \If{$e_n < e$}\Comment{If new energy is lower keep the new state regardless}
        \State $s \gets s_n$
        \State $e \gets e_n$
      \ElsIf{$P(e, e_n, T) > \mathrm{rand}()$}\Comment{Otherwise decide based on $T$, $e$ and $e_n$}
        \State $s \gets s_n$
        \State $e \gets e_n$
      \EndIf
      \State $\mathrm{decrease}(T)$\Comment{Decrease temperature}
      \State $i \gets i + 1$\Comment{Advance the iteration counter}
    \EndWhile
  \end{algorithmic}
  \caption{Simulated annealing}
  \label{alg:simulatedAnnealing}
\end{algorithm}
}
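The pseudocode translates almost directly into runnable code. The sketch below is a minimal Python rendering with a geometric cooling schedule and a toy one-dimensional cost function, both our own choices; it additionally remembers the best state seen, a common practical addition that the pseudocode omits:

```python
import math
import random

def anneal(s0, energy, mutate, T0=1.0, cooling=0.995, n_iter=10000):
    """Minimal simulated annealing: mutate the state, always accept
    improvements, accept worse states with probability
    exp((e - e_n) / T), and cool T geometrically each iteration."""
    s, e = s0, energy(s0)
    best_s, best_e = s, e
    T = T0
    for _ in range(n_iter):
        s_n = mutate(s)
        e_n = energy(s_n)
        # Accept improvements outright; otherwise apply the
        # Metropolis-style acceptance probability.
        if e_n < e or math.exp((e - e_n) / T) > random.random():
            s, e = s_n, e_n
            if e < best_e:
                best_s, best_e = s, e
        T *= cooling  # decrease(T): geometric cooling schedule
    return best_s, best_e

# Toy usage: minimise a multimodal one-dimensional function.
random.seed(0)
f = lambda x: x * x + 10 * math.sin(3 * x)
state, cost = anneal(5.0, f, lambda x: x + random.gauss(0, 0.5))
```

Note that the geometric schedule never actually reaches $T = 0$; in practice the acceptance probability for worse states simply underflows to zero, at which point the loop behaves as pure descent.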

To our knowledge, simulated annealing has never been used in conjunction with character generation.
In the context of shape generation in general, however, it has been used on
numerous occasions. In a 1993 paper by Cagan and Mitchell \cite{Cagan1993}, the
problem of generating shapes according to a certain shape grammar so as to
optimize a certain criterion was posed. Simulated annealing was shown to
generate shapes with high fitness, but not necessarily to converge to the
global optimum. This is a defining characteristic of simulated annealing: it
approximates the global optimum without ever giving a guarantee of truly
reaching it.
