\documentclass[a4paper]{article}
\usepackage[lofdepth,lotdepth]{subfig}
\usepackage{graphicx}
\usepackage{color}
\usepackage[margin=2.5cm]{geometry}
\usepackage[math]{iwona}
\usepackage[T1]{fontenc}
\usepackage[backref=page,colorlinks=true]{hyperref}

\hypersetup{
pdfauthor = {Moos Hueting},
pdftitle = {Alphabet Synthesis - A Literature Review},
pdfsubject = {Alphabet Synthesis}}

\title{Alphabet Synthesis - A Literature Review}
\author{Moos Hueting}

\begin{document}

\maketitle

% WHAT IS THE PROBLEM YOU WANT TO SOLVE
%Generating artificial alphabets that are well fit for handwriting

% HOW DID OTHERS APPROACH THIS PROBLEM / WHAT IS THE STATE OF THE ART
%No previous work on the specifics of alphabet generation, however a lot on handwriting synthesis and recognition

% WHAT WILL BE YOUR OWN APPROACH
%Working back from models used in handwriting recognition

% HOW DOES YOUR APPROACH RELATE TO PREVIOUS WORK

% WHAT WILL BE YOUR MEASURE OF SUCCESS

%% SECTIONS %%

% INTRODUCTION / MOTIVATION
\section{Introduction}
\label{sec:intro}
It is not easy to think of a more influential human invention than the invention of written language.
We have been using man-made symbols to convey information for millennia, and over time a significant
number of diverse systems of writing have been developed. These systems then evolved and combined
into the more sophisticated systems we use today. Many different types of systems can be distinguished, such
as \emph{alphabets}, in which each character (or sometimes combination of characters) represents a phoneme;
\emph{abjads}, where only consonants are represented and the vowel sounds are derived from context; and \emph{syllabaries},
where each character represents a syllable. Some examples of writing systems are shown in figure \ref{fig:writingSystems}.

\begin{figure}[h!tb]
\centering
\subfloat[Sample characters from the Roman alphabet]{\label{fig:romanAlphabet}\includegraphics[width=0.3\textwidth]{img/roman_alphabet}}
~
\subfloat[Sample characters from the Arabic abjad]{\label{fig:arabicAbjad}\includegraphics[width=0.3\textwidth]{img/arab_abjad}}
~
\subfloat[Sample characters from the Hiragana syllabary]{\label{fig:hiraganaSyllabary}\includegraphics[width=0.3\textwidth]{img/hiragana_logography}}
\caption{Different writing systems used throughout the world}
\label{fig:writingSystems}
\end{figure}

While all these systems differ strongly in appearance and are interpreted in different ways,
they have one important property in common: every system is devised to be \emph{used by humans}. More specifically,
humans must be able to \emph{produce} handwritten text with ease, and moreover should have no trouble
\emph{recognising} previously produced text.

Furthermore, the above examples have all grown into their current form by combination and evolution of previous writing systems.
Interestingly, a number of writing systems exist that were artificially created. Most of these scripts were
created for languages that had previously gone unwritten. Two examples still in use today are Hangul, the official constructed script of Korea,
and the Cherokee syllabary, devised in the early 19th century for the language of the same name.

We therefore know that the creation of usable artificial scripts is possible. The question that remains
is whether the process of artificial script creation can be automated.

\begin{figure}[h!tb]
\centering
\subfloat[Sample characters from the Hangul constructed alphabet]{\label{fig:hangulAlphabet}\includegraphics[width=0.5\textwidth]{img/hangul_alphabet}}
~
\subfloat[Sample characters from the Cherokee constructed syllabary]{\label{fig:cherokeeSyllabary}\includegraphics[width=0.5\textwidth]{img/cherokee_syllabary}}
\caption{Two constructed writing systems in use today}
\label{fig:constructedWritingSystems}
\end{figure}

\section{Related work}
\label{sec:relWork}
When we consult past research for such automatic methods of alphabet generation, we are confronted with a
gap in the literature. To our knowledge, no previous work covers the synthesis of artificial writing systems.
On the other hand, a large amount of research has been conducted on handwriting recognition, as well as
on the automatic generation of form in general. We will draw on these results in devising a method of artificial alphabet generation.
In this section we discuss past work that we deem relevant to our goal.

\subsection{`Inverse' handwriting recognition}
\label{sub:inverseHandwritingRec}
Many handwriting recognition methods use feature extraction (Trier et al.\ 1996, \cite{Trier1996}). The input to the algorithm is broken down into separate, lower-dimensional features
and compared to a dictionary of previously computed features. In some cases, the original input can be more or less reconstructed from
the extracted features. While we will not take this approach, using the
features of existing letters and modifying them in an intelligent way could be
an entry point for using recognition methods for alphabet synthesis.

\subsection{The generation of form: Shape grammars}
\label{sub:shapeGrammars}
Another field of research relevant to our focus is that of shape grammars. First formulated by Stiny in 1980
\cite{Stiny1980}, shape grammars provide a way of generating arbitrary shapes using a finite set
of rules that govern the way these shapes are built. He defines a \emph{shape} as ``a limited arrangement of
straight lines defined in a Cartesian coordinate system with real axes and an associated euclidean
metric''.  He then introduces the concept of a \emph{labelled shape}, which is a combination
$\langle s,
P \rangle$ consisting of a shape $s$ and a set of labelled points $P$. A labelled point $p:A$, in turn, is
defined as a location $p$ with respect to $s$, together with a symbol $A$. \emph{Shape grammars} then
allow generation algorithms to be defined directly in terms of labelled shapes.

Stiny defines a shape grammar to consist of the following four components:

\begin{enumerate}
\item $S$ is a finite set of shapes
\item $L$ is a finite set of symbols
\item $R$ is a finite set of \emph{shape rules} of the form $\alpha \rightarrow \beta$, where $\alpha$ is a labelled shape in $(S, L)^+$,
and $\beta$ is a labelled shape in $(S, L)^*$
\item $I$ is a labelled shape in $(S, L)^+$ called the \emph{initial shape}
\end{enumerate}

Using such a shape grammar, we can generate an arbitrary number of shapes. The generation process is described as follows:

\begin{quote}
``Labelled shapes are generated by a shape grammar by applying the shape rules one
at a time to the initial shape or to labelled shapes produced by previous applications
of shape rules. A given labelled shape $\gamma$ is generated by the shape grammar if there is
a finite series of labelled shapes beginning with the initial shape and ending with $\gamma$
such that each term in the series but the first is produced by applying a shape rule to
its immediate predecessor.''
\end{quote}

Take, for example, the shape grammar shown in figure \ref{fig:shapeGrammarExample}. The set of shapes
consists of a single square, which is therefore also the initial shape (\ref{fig:SG_initial}). The square is in fact a
labelled shape, with its single label positioned at the dot at the top. The
shape grammar has two shape rules, shown in \ref{fig:SG_rules}. Rule 1 extends the total
shape, while rule 2 stops the addition of shapes. One example application of this shape grammar is
shown in figure \ref{fig:SG_applied}.

\begin{figure}[h!tb]
\centering
\subfloat[The initial shape of the example shape grammar, as well as the full library of
shapes]{\label{fig:SG_initial}\includegraphics[scale=0.5]{img/SG_initial}}
\subfloat[The shape rules of this SG]{\label{fig:SG_rules}\includegraphics[scale=0.5]{img/SG_rules}}
\\
\subfloat[A sample application of the SG]{\label{fig:SG_applied}\includegraphics[scale=0.3]{img/SG_applied}}
\caption{A very simple shape grammar}
\label{fig:shapeGrammarExample}
\end{figure}
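To make the generation process concrete, the example grammar can be sketched as a small Python prototype. The grid representation, the rule names, and the stopping probability below are our own simplifications for illustration, not part of Stiny's formalism.

```python
import random

# A simplified stand-in for the example grammar: shapes are sets of unit
# squares on a grid, and the single label is the cell where a rule may apply.
def rule_extend(shape, label):
    """Rule 1: add a square at the labelled cell and move the label onwards."""
    x, y = label
    return shape | {(x, y)}, (x + 1, y)

def rule_stop(shape, label):
    """Rule 2: add a final square and erase the label, ending generation."""
    x, y = label
    return shape | {(x, y)}, None

def generate(initial_label=(0, 0), p_stop=0.3, rng=random.Random(0)):
    """Apply shape rules one at a time, starting from the initial shape."""
    shape, label = set(), initial_label
    while label is not None:
        rule = rule_stop if rng.random() < p_stop else rule_extend
        shape, label = rule(shape, label)
    return shape

print(sorted(generate()))  # a horizontal run of squares starting at (0, 0)
```

Each iteration of the loop is one rule application in Stiny's sense: the shape produced by the previous application is the input to the next, and generation terminates once the label is erased.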

Shape grammars thus provide a way to encode formal constraints on our to-be-generated writing system.
However, shape grammars lack any form of global evaluative mechanism: the rules only encode local
placement constraints and do not consider the shape as a whole.

\subsection{Genetic algorithms}
\label{sub:genetic}
Rosenman (1997) \cite{Rosenman1997} suggested using an evolutionary approach to guide the generation of form. The basic idea of genetic algorithms
can be summarised in three steps:
\begin{enumerate}
    \item Generate an initial population of solutions (genotypes) according to a set of rules
    \item Select members from this population for survival using a biased random selection function which evaluates the phenotype
    \item Generate new genotypes from the existing ones using several evolutionary mechanisms
\end{enumerate}

In Rosenman's case, the generation of the initial population is governed by shape grammars, as described previously.
The biased random selection function is a function of the properties of a sample generated according
to the genotype. These properties are, in accordance with biology, also called the
\emph{phenotype}. The function acts as a fitness evaluator, and would in the case of character
generation take into account factors such as readability and ambiguity. Another method of selection is
to give control back to the user, obtaining direct feedback on which samples are preferred over
others. Rosenman, who wanted to minimise the perimeter of the generated rooms, accordingly
evaluated the fitness of generated samples as a function of their total perimeter.

After selecting members for evolution, the genotypes of the surviving members are evolved using \emph{cross-over}
and \emph{mutation}. Cross-over is the process of taking two samples and exchanging a certain property. For example,
if we have two samples $s_1$ and $s_2$, each with the same set of properties $P$ but with different values, we take
some random property $\alpha \in P$ and exchange its value between $s_1$ and $s_2$.

Mutation is a mechanism used to escape local optima of the fitness function. It takes a certain percentage of the selected samples
and randomly changes one or more of their properties. If such a mutation increases the fitness of the
mutated sample, the sample is likely to survive further evolution rounds, thus increasing the overall
fitness of the generated samples.

For clarification, see table \ref{tab:evolutionaryMechanisms}, where we have listed an example of mutation and cross-over for a binary genotype.

\begin{table}[h!tb]
\centering
\begin{tabular}{|r|c||r|c|c|}
\hline
\multicolumn{5}{|c|}{Evolutionary mechanisms} \\
\hline
Before mutation & 1011010{\color{red} 0}1110 & Before cross-over& {\color{red} 1011}01001110 & {\color{red} 0011}01001111 \\
\hline
After mutation & 1011010{\color{red} 1}1110 & After cross-over& {\color{red} 0011}01001110 & {\color{red} 1011}01001111 \\
\hline
\end{tabular}
\caption{Mutation and cross-over. The bits highlighted in red are involved in the operation}
\label{tab:evolutionaryMechanisms}
\end{table}

Rosenman suggests working on multiple levels, i.e. generating low-level units using the approach
described above, and using these units as input shapes for the next level. We will consider this
approach in our project, as we will describe in section \ref{sec:projectPlan}.

\subsection{Feature extraction}
\label{sub:featureExtraction}
It makes sense to once again exploit the long-standing existence and evolution of real alphabets
to evaluate the fitness of generated characters. However, we cannot do a direct comparison of characters;
after all, we are not looking to copy existing alphabets. We thus need to come up with relevant
features to compare. In search of these features we turn to the survey by Trier et al.\ (1996) \cite{Trier1996}.
The authors state that there are different character representations from which features can be extracted:
they cover gray-level images, solid binary images, solid binary contours, and
character skeletons (a representation in which the character is thinned to a thickness of one pixel everywhere).
For each representation, different kinds of feature extraction techniques are available. One common method, \emph{template matching},
uses a library of training examples which are directly compared to test cases. As we are not interested in
creating characters identical to existing ones, we can dismiss this set of methods immediately.

From the representations mentioned in \cite{Trier1996}, the character skeletons are most applicable to our
project, as we are primarily concerned with the general form of the character and not with the precise
gray-levels of each pixel making up the character, nor with the thickness of the stroke.
The authors mention several possible feature extraction methods for character skeletons. We will
discuss the two methods most relevant to our work.

A particularly salient set of features for our project are \emph{discrete features}: features that
are directly computable from the pixel representation of the skeleton. Trier et al.\ list the following
set of extractable discrete features, found in \cite{Ramesh1989} and \cite{Kundu1989}:
\begin{quote}
``the number of loops; the number of
T-joints; the number of X-joints; the number of bend
points; width-to-height ratio of enclosing rectangle;
presence of an isolated dot; total number of endpoints
and number of endpoints in each of the four directions
N, S, W and E; number of semi-circles in each of these
four directions; and number of crossings with vertical
and horizontal axes, respectively, the axes placed on
the center of gravity.''
\end{quote}
The advantage of these features is that it is straightforward to compute the moments (e.g.\ mean,
variance, skewness) of their distribution
over some character set. Using those moments we can then compute the likelihood
that some new character belongs to the original character set.
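A few of these discrete features can be sketched directly on a pixel skeleton. The representation (a set of pixel coordinates) and the degree-based junction heuristics below are our own simplifications for illustration; robust joint detection on raster skeletons requires more care.

```python
def neighbours(p, skeleton):
    """All 8-connected skeleton pixels around p."""
    x, y = p
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0) and (x + dx, y + dy) in skeleton]

def discrete_features(skeleton):
    """Approximate a few discrete features of a 1-pixel-wide skeleton.

    Endpoints have one neighbour; T- and X-joints have three and four
    (a rough heuristic on raster skeletons, for illustration only).
    """
    degrees = [len(neighbours(p, skeleton)) for p in skeleton]
    xs = [x for x, _ in skeleton]
    ys = [y for _, y in skeleton]
    return {
        "endpoints": sum(d == 1 for d in degrees),
        "t_joints": sum(d == 3 for d in degrees),
        "x_joints": sum(d == 4 for d in degrees),
        "aspect": (max(xs) - min(xs) + 1) / (max(ys) - min(ys) + 1),
    }

# A vertical five-pixel stroke: two endpoints, no joints, aspect ratio 0.2.
stroke = {(0, y) for y in range(5)}
print(discrete_features(stroke))
```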

Another method of feature extraction is \emph{zoning}, in which the character image is divided into a set of $N$
different zones, on each of which smaller-scale feature extraction is performed. For example, Holbaek-Hanssen et al.\ (1986) measured
the length of the character in each zone and used these lengths as features. This method can be made scale independent
by dividing the length in each zone by the total length of the character. Again, moments of the distribution of these
features are easy to compute.
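The zoning idea can be sketched on the same skeleton representation. The $3 \times 3$ grid and the use of the per-zone pixel count as a proxy for stroke length are our own choices for this sketch.

```python
def zoning_features(skeleton, n=3):
    """Divide the bounding box into an n-by-n grid of zones and return the
    skeleton length (pixel count) in each zone, normalised by the total
    length to make the features scale independent."""
    xs = [x for x, _ in skeleton]
    ys = [y for _, y in skeleton]
    x0, w = min(xs), max(xs) - min(xs) + 1
    y0, h = min(ys), max(ys) - min(ys) + 1
    counts = [[0] * n for _ in range(n)]
    for x, y in skeleton:
        zx = min((x - x0) * n // w, n - 1)
        zy = min((y - y0) * n // h, n - 1)
        counts[zy][zx] += 1
    total = len(skeleton)
    return [c / total for row in counts for c in row]

# A diagonal stroke puts a third of its length in each diagonal zone.
diagonal = {(i, i) for i in range(9)}
print(zoning_features(diagonal))
```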

\section{Project plan}
\label{sec:projectPlan}
In this project, we plan to use shape grammars to automatically construct novel alphabets. As
mentioned before, shape grammars do not offer a method of global evaluation. Consequently, we would like to
combine these shape grammars with some form of genetic algorithms to improve upon previously
generated samples. The envisioned final result is a system which is capable of generating alphabets
either fully autonomously or with some form of user interaction.

We already mentioned that a shape grammar is composed of some set of basic elemental shapes, and a set
of rules which governs how to combine the elemental shapes into larger shapes. For our project, we
will start out with a set of manually crafted basic shapes. Making good use of the extensive evolution of
already existing character sets, we will extract these basic shapes from the Roman alphabet.

We cannot generate a usable new character set by simply pasting shapes randomly onto a canvas. We
need to take into account a number of constraints to ensure usability. As mentioned by Wang
\cite{PeiWang1985}, the ease of learning a new character is, among other things, inversely related
to the complexity of structures within the character and the degree of ambiguities between different
characters. We thus need to limit the number of shapes in each character and stop adding
elements to the shapes after a certain point. 

By intuition it seems a good idea to favour connections of shapes at their end points. For
example, the `A' consists of three lines, two of which are connected at their end points; the same
holds for the `F'. More generally, at least in the capital Roman alphabet, each letter is a
single connected component. Further shape rules will need to be found by experimentation.

We note, however, that just modifying the shape rules will not guarantee a satisfying result. We need to
come up with a way of evaluating the characters after their creation. This is where
genetic algorithms come in. The genetic algorithm will serve to evaluate characters generated from
the shape grammar according to some fitness function and, if necessary, change the parameters and weighting of the rules of the
shape grammar to favour more desirable characters.

There are several possibilities for the fitness function of the genetic algorithm. As the
character set will ultimately be used by humans, the fitness function could simply be supplied by human
intervention, where a user selects their favoured characters from a batch of generated
characters. The system can then modify the parameters of the shape grammar to favour these
characters over the rest of the character set.

Another possibility is to compare the generated characters with an existing character set. This
approach requires us to come up with a way of comparing characters in a meaningful way. Of course,
we cannot compare characters on a pixel-by-pixel basis, or in any other direct shape-based way. The distance
measure we come up with needs to compare a relevant set of features which ideally describe both
the recognisability and writeability of the character at hand.

As we do not know in advance which method will work best, we will implement feature extraction
for all features listed in section \ref{sub:featureExtraction}. We can then compute the distribution
of these features over existing character sets, such as the Roman alphabet, and use different statistical
moments (e.g.\ mean, variance) to compute the likelihood of newly generated characters. How salient each
feature is for evaluating generated characters remains to be seen. In addition, we will implement a user interface
enabling the user to directly select individuals from the population for survival and use in further
evolution.
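The statistical side of this evaluation can be sketched as follows, assuming independent features and a Gaussian model per feature. The endpoint counts below are made-up illustrative numbers, not measurements of a real alphabet.

```python
import math

def moments(samples):
    """Mean and (population) variance of a feature over a character set."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, var

def log_likelihood(value, mean, var):
    """Log-likelihood of a feature value under a Gaussian with those moments."""
    var = max(var, 1e-6)  # guard against degenerate, constant features
    return -0.5 * (math.log(2 * math.pi * var) + (value - mean) ** 2 / var)

# e.g. number of endpoints per character in a reference alphabet (made up)
endpoints = [2, 2, 3, 4, 2, 3, 2, 1]
mean, var = moments(endpoints)
# a generated character with 3 endpoints scores higher than one with 9
print(log_likelihood(3, mean, var), log_likelihood(9, mean, var))
```

Summing such log-likelihoods over all extracted features would give one candidate fitness function for the genetic algorithm.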

\subsection{Practical considerations}
We plan to implement the shape grammar and genetic algorithms in C++. Furthermore,
for human evaluation and display, we will use the Qt toolset to create a graphical user interface.
For prototyping purposes we might turn to MATLAB; however, our final product will be implemented in
C++ for reasons of speed and independence.

\section{Evaluation}
\label{sec:evaluation}
The final product of this work should be able to generate sets of characters that are easily written and recognized
by human beings. As such, the best method of evaluation would be to do a user study where we ask participants
to study a generated character set for a given time and see how well they can (1) produce text and (2) recognize text.
Furthermore, it would be interesting to ask the participants for their opinion on the final result,
marking readability and writeability on a subjective basis. As the final character set will be based
on shapes extracted from the Roman alphabet it might be of interest to see if participants who are used to
reading the Roman alphabet have an easier time learning the artificial character set than
participants who are not.
The problem with a user study, however, is that it takes quite some time to set up correctly; time we
most probably will not have available.

Alternatively, some computer-assisted method of evaluation could be employed. There are, however, some problems with this approach.
First, depending on the fitness function used in the genetic algorithm, it could be that the characters have already
been optimised to match a certain set of features drawn from some pre-existing set of characters. In this case, basing evaluation on those
features is futile: it would be like testing a classifier using its training
set. Furthermore, if the chosen optimised feature set is a good descriptor of readability
and writeability, the final result will by definition be satisfactory. More
importantly, however, this quality of the feature set is by no means
guaranteed.  The question then arises how to evaluate this feature set. No
matter how you pose the problem, the fact remains that we are trying to
evaluate human readability and writeability of the character set, properties
which are inextricably linked to humans on a subject-by-subject basis. 

\section{Discussion and Conclusion}
Even though both the artificial generation of form and handwriting in general
have been studied extensively, the literature is lacking when it comes to
artificial synthesis of character sets. We plan to start filling this gap by
using shape grammars and genetic algorithms to generate artificial alphabets.

The use of shape grammars and genetic algorithms together will give us both
local and global control over the generation of the character set. By splitting up
the shape generation into multiple stages, we plan to work our way from the
initial shapes through small elemental shapes to the final characters. This
gives us many possible points of evaluation.

How much of this evaluation will have to be done by humans remains to be seen.
It may very well be that a combination of human evaluation and comparison with
previous character sets proves to yield the quickest convergence to favourable
character sets.

Starting a project with more or less no previous work to lean on is exciting.
Nevertheless, the lack of foundation also poses a unique challenge. As in any
project, we will have to accommodate unforeseen problems along the way, but as
we are starting from scratch these problems might be greater in number than they
would be in a well-explored field of study.

Given the limited time in which this project will take place, a final product
where we are able to generate some form of description of an artificial character set
would be satisfactory. However, given time, some additional features could be inspected.
For example, by adding constraints we could try to force the character sets to
be cursive, i.e. writeable without lifting the pen. We could also try to use some
user-created characters as prior information, looking to generate a character set most
similar to the supplied glyphs.

%Shape grammars provide us with a way to encode the constraints of usable
%writing systems we have listed previously.  However, we still need to come up
%with a way of formalising these constraints. How can we formally define \emph{ease of production}
%and \emph{recognisability} for characters? 
%
%Shen-Pei Wang \cite{PeiWang1985} stated that the ease of learning a new
%character is, among other things, inversely related to (1) complexity of
%structures within the character and (2) degree of ambiguities. While complexity of structures is local
%to each character, the degree of ambiguities is defined by intercharacter similarities. As such,
%we can minimise structural complexity separately but will need to consider all characters within the set
%when minimising ambiguity.



%% RECOGNITION PAPERS
% KUHL AND GIARDANA 1982
%In Kuhl and Giardina 1982 \cite{Kuhl1982} the authors describe a method of computing elliptic Fourier descriptors from
%\emph{chain codes}. Using chain codes arbitrary closed contours can be approximated using a string of numbers, ranging from 0 to 7.
%
%Each number $a_i$ represents a line segment starting at the end of the previous line segment, in direction $\frac{\pi}{4}a$, and is of length $1$ or $\sqrt{2}$, depending on if $a$ is even or odd. Figure\footnote{Figure courtesy of Kuhl and Giardana 1982 \cite{Kuhl1982}} \ref{fig:chaincodes} gives a clear example.
%
%\begin{figure}[h!tb]
%\centering
%\includegraphics[width=0.2\textwidth]{img/chaincodes}
%\caption{This figure is represented by chaincode 0005676644422123.}
%\label{fig:chaincodes}
%\end{figure}
%
%Of course, chain codes give approximations to given closed contours, and are as-is not enough to do any form of handwriting recognition.
%In their work, they convert the chain codes to two separate time functions for $x$ and $y$. From these functions, they compute
%Fourier descriptors for $x$ and $y$ separately. The Fourier series expansion for $x$ is shown in equation \ref{eq:KGFourierX}.
%
%\begin{equation}
%x(t) = A_0 + \sum_{n = 1}^\infty a_n \cos \frac{2n\pi t}{T} + b_n \sin \frac{2n\pi t}{T}
%\label{eq:KGFourierX}
%\end{equation}

%% SYNTHESIS PAPERS
% WANG ET AL 2002
% SEPARATE LETTER STYLE AND CONCATENATION STYLE
% B-SPLINE FITTING USING CONTROL POINTS GATHERED FROM 1D GABOR FILTERS

% TRI-UNIT MODEL OF HEAD, BODY AND TAIL OF LETTER
% BACKGROUND / RELATED WORK

% OPTIONAL : OVERVIEW OF YOUR APPROACH

% DETAILS OF YOUR APPROACH

% EVALUATION

% DISCUSSION / CONCLUSION

\bibliographystyle{plain}
\bibliography{literature/bibliography}

\end{document}
