\documentclass{article}
\usepackage{nips07submit_e,times,amsmath}
\usepackage{graphicx}
\usepackage{hyperref}
%\documentstyle[nips07submit_09,times]{article}

% John's preamble stuff
\newcommand{\Expr}{\text{Expr}}
\newcommand{\NonExpr}{\text{NonExpr}}
\newcommand{\Word}{\text{Word}}
\newcommand{\Char}{\text{Char}}
\newcommand{\Number}{\text{Number}}
\renewcommand{\L}{\ \ | \ \ }
\newcommand{\ra}{\rightarrow \ }
\newcommand{\tS}[1]{^{\text{#1}}}
\newcommand{\ts}[1]{_{\text{#1}}}
\newcommand{\Score}{\text{Score}}
\newcommand{\Rule}{\text{Rule}}
\newcommand{\Tree}{\text{Tree}}
\newcommand{\bounds}{\text{bounds}}
\newcommand{\LHS}{\text{LHS}}
\newcommand{\RHS}{\text{RHS}}
\newcommand{\height}{\text{height}}
\newcommand{\abs}{\text{abs}}
\newcommand{\vecf}{\mathbf{f}}
\newcommand{\vecw}{\mathbf{w}}
\newcommand{\Data}{\text{Data}}
\newcommand{\Quantity}{\text{Quantity}}
\newcommand{\Operator}{\text{Operator}}
% End John's preamble stuff

\begin{document}

\title{Recognizing handwritten equations using a context-free grammar}
\author{Luke Moryl, Noah Jakimo, Eric Panigua, John Schulman}

\maketitle


\begin{abstract}
We design and implement algorithms for converting handwritten equations into \LaTeX. The problem is broken down into two independent problems: character recognition and layout parsing. In character recognition, groups of strokes are matched to characters. In layout parsing, the sizes and locations of the characters are used to infer the relation between the parts of the equation, such as superscripts and subscripts. A GUI canvas is used for input. The GUI has been implemented, and the character recognition and layout parsing are functional at a basic level.
\end{abstract}


\section{Introduction}
Write Your Own Math In New Glyphs (WYOMING) is both the name and the goal of our group effort to convert digitally handwritten mathematical expressions into the corresponding \LaTeX \ code. The three tasks are (i) providing a digital interface for handwriting these expressions, (ii) recognizing the symbols from a mathematical alphabet that best match a series of digitally handwritten strokes, and (iii) parsing the expression with a well-defined grammar for mathematical equations.

%The many degrees of freedom that are inherent to this problem are evident in the previous attempts by others to perform character and contextual recognition of digitally written mathematics. On one hand, an off-line approach receives images of typeset math equations that may be surrounded by regular paragraphical text. With this input system, one must one recognize when a mathematical expression occurs within a broader text, decompose overlapping characters, and ignore any impurities (i.e. scratches and dots) of the image that contribute no meaning to the expression. On the other hand, an on-line approach inputs a series of handwritten strokes penned by a user. At the cost of variation in the structure of inputs with identical semantics, the latter system yields temporal information that may be used to distinguish strokes and characters. 

Two types of equation-recognizing programs are possible, depending on the form of input: (1) images of equations, or (2) stroke information from a mouse or tablet. We take the latter approach. This makes it more difficult to obtain labeled training data, but easier to segment the input into a collection of symbols, thanks to the temporal and pen-up/pen-down information. Every written symbol, e.g. $x$ or $\sqrt{\ }$, consists of at most four consecutive strokes, where a stroke is the ordered sequence of positions between pen-down and pen-up. A chart summarizing current progress on both of these approaches can be found in \cite{chan_mathematical_2000}.

WYOMING is an on-line, top-down parser. In the current implementation, character recognition is performed independently of parsing. However, performance could be improved by a model that combines character recognition with parsing, and we may adopt such an approach later. The status of the software is as follows:
\begin{enumerate}
  \item[(i)] \textit{GUI}. Functional. Accepts and displays input through a mouse or tablet; generates stroke data to feed to the next stage of processing.
  \item[(ii)] \textit{Symbol recognition}. We have written a classifier to match sequences of strokes to characters, i.e., numbers, letters, and mathematical operators. The parameters of the classifier are determined by training data. We are currently tuning the preprocessing stages to improve classification performance, which is currently poor.
  \item[(iii)] \textit{Layout parsing}. We have written a minimal parsing algorithm that, given a set of symbols and their locations, will parse them into an expression with subscripts and superscripts. Its parameters are set by hand (not learned from training data).
\end{enumerate}

Our software is written in Python.

\section{Graphical User Interface}
The features of WYOMING's user interface extend the Python module PyGame, which captures desktop events and controls a window display. For the layout of the working GUI, refer to Figure \ref{GUI}. Note that the GUI currently functions as a testing environment: using the features laid out in this section, we can produce examples of handwritten expressions on which to independently train our character and contextual recognition algorithms.

To provide the user with a virtual pen, WYOMING interprets either the clicks and movement of a mouse or the contact of a stylus on a tablet PC. A stroke is initiated by a click or stylus contact within the writing space bounded by the displayed black dashed lines. The stroke then consists of the ordered sequence of positions that the user drags the cursor over, one per display frame, before picking up the stylus or releasing the mouse button. A small circle is displayed at each of these positions. If the stroke strays outside the writing space, all of its circles are erased; otherwise the stroke remains on display, and the ordered list of points defining it is appended to a list of strokes.

When a stroke is completed, rather than initiating another stroke, the user may click on other active regions of the interface. In the event that the red eraser is clicked, the last visible stroke that was scribed is erased and popped from the stroke list. The user may also click on the displayed red equals symbol, which clears the writing space and initiates the recognition algorithms. After future development of the recognition algorithms, the GUI will fill the gray box with the returned image of the rendered \LaTeX \ code produced by these algorithms.
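The stroke-capture behavior described above can be sketched as follows; the class and method names are illustrative rather than WYOMING's actual interface, and the PyGame display code is omitted:

```python
class StrokeCanvas:
    """Records strokes as ordered point lists, mimicking the GUI's capture logic."""

    def __init__(self, bounds):
        self.bounds = bounds      # (xmin, ymin, xmax, ymax) of the writing space
        self.strokes = []         # completed strokes
        self.current = None       # stroke in progress, or None

    def in_bounds(self, point):
        x, y = point
        xmin, ymin, xmax, ymax = self.bounds
        return xmin <= x <= xmax and ymin <= y <= ymax

    def pen_down(self, point):
        """A click or stylus contact inside the writing space starts a stroke."""
        if self.in_bounds(point):
            self.current = [point]

    def move(self, point):
        """Each dragged-over position extends the stroke; straying discards it."""
        if self.current is None:
            return
        if self.in_bounds(point):
            self.current.append(point)
        else:
            self.current = None   # stroke strayed outside: erase it entirely

    def pen_up(self):
        """Releasing the pen appends the finished stroke to the stroke list."""
        if self.current:
            self.strokes.append(self.current)
        self.current = None

    def erase_last(self):
        """The eraser button pops the most recently scribed stroke."""
        if self.strokes:
            self.strokes.pop()
```

In the actual GUI these methods would be driven by PyGame mouse events.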



\begin{figure}[h]
\begin{center}
\includegraphics[width=9cm]{wyomingGUI.png}
\caption{GUI application with input $e^{i\pi}+1=0$}
\label{GUI}
\end{center}
\end{figure}



\section{Character recognition}

We use a sequence of strokes to infer a sequence of characters. Each stroke is the sequence of points that the pen traces between pen-down and pen-up. One to four consecutive strokes make up each character; this assumes that the pen is lifted between characters, and that no character requires more than three lifts within it. It is not known initially how the strokes should be grouped together.

The classifier, which infers a character from a group of strokes (but does not address the grouping problem), is based on the preprocessing and clustering techniques of \cite{ouyang_visual_2009}. We train the classifier using the \emph{pidigits} dataset. Following \cite{ouyang_visual_2009}, the preprocessing steps are:

\begin{itemize}
  \item Spatial resampling.
  \item Centering and normalization: the center of mass goes to the origin, and the standard deviation goes to unity on each axis.
  \item Feature extraction: features include contours of various orientations at every point in the image, and stroke start and end points.
  \item Smoothing (convolution with a Gaussian kernel).
\end{itemize}
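The resampling and normalization steps can be sketched with NumPy; parameter values such as the number of resampled points are illustrative:

```python
import numpy as np

def resample(stroke, n=32):
    """Spatially resample a stroke to n points, evenly spaced by arc length."""
    pts = np.asarray(stroke, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    if cum[-1] == 0:                  # degenerate stroke (a single dot)
        return np.repeat(pts[:1], n, axis=0)
    t = np.linspace(0, cum[-1], n)
    x = np.interp(t, cum, pts[:, 0])
    y = np.interp(t, cum, pts[:, 1])
    return np.stack([x, y], axis=1)

def normalize(points):
    """Move the center of mass to the origin; scale each axis to unit std."""
    pts = points - points.mean(axis=0)
    std = pts.std(axis=0)
    std[std == 0] = 1.0               # avoid division by zero for flat strokes
    return pts / std
```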

This process results in five $12 \times 12$ feature images, which are vectorized into a single 720-dimensional vector and mapped into a 128-dimensional principal-component space.

The final classification uses a tree-based clustering technique based on Euclidean distance in this principal-component space.
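The tree structure serves to accelerate the search; a flat nearest-neighbor version conveys the idea (a sketch, not the actual implementation):

```python
import numpy as np

def pca_basis(train, k=128):
    """Mean and principal-component basis of the vectorized feature images."""
    mean = train.mean(axis=0)
    centered = train - mean
    # Rows of vt are principal directions, ordered by singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k].T             # shapes (d,) and (d, k)

def classify(x, train, labels, mean, basis):
    """Label of the nearest training example in principal-component space."""
    proj = (train - mean) @ basis     # project the training set
    q = (x - mean) @ basis            # project the query
    dists = np.linalg.norm(proj - q, axis=1)
    return labels[np.argmin(dists)]
```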

The grouping of strokes into characters is found by applying the classification algorithm to every contiguous subsequence of one to four strokes. Each subsequence is assigned a score, and the total score of a grouping is the sum of its subsequence scores. Using dynamic programming, we efficiently find the grouping that maximizes the total score.
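This dynamic program can be sketched as follows, with `classify_score(strokes)` standing in for the classifier's score of the best character match (a hypothetical interface):

```python
def best_grouping(strokes, classify_score, max_group=4):
    """Partition a stroke sequence into groups of 1..max_group consecutive
    strokes, maximizing the sum of per-group classifier scores."""
    n = len(strokes)
    best = [float("-inf")] * (n + 1)  # best[i] = best score of strokes[:i]
    best[0] = 0.0
    back = [0] * (n + 1)              # back[i] = start index of the last group
    for i in range(1, n + 1):
        for j in range(max(0, i - max_group), i):
            s = best[j] + classify_score(strokes[j:i])
            if s > best[i]:
                best[i], back[i] = s, j
    groups, i = [], n
    while i > 0:                      # walk the backpointers to recover groups
        groups.append(strokes[back[i]:i])
        i = back[i]
    return list(reversed(groups)), best[n]
```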

\section{Layout parsing}

Using the spatial information about the characters, i.e., their sizes and locations, we infer subscript and superscript relationships. A later version of the program will handle fractions, sums, integrals, and roots. We consider an equation grammar and attempt to parse the layout in terms of this grammar. A parse is assigned a score, which we seek to maximize. This approach draws on \cite{viola_learning_2005}, which discusses discriminative context-free grammars.

\subsection{Equation grammar}


The following toy grammar has been implemented and can infer the proper layout of hand-made inputs.

\begin{align*}
  \Expr \ra &\Expr \ \Expr \\
  \L &\Expr_{\Expr} \\
  \L &\Expr^{\Expr} \\
  \L & a,b,c,\dots,1,2,3,\dots,+,-,\times
\end{align*}


We plan to implement a larger grammar. Our implementation is data-driven, i.e. the program can work with an arbitrary grammar. However, our current implementation requires that the right-hand side of every rule has either two nonterminals or a terminal. A more realistic grammar will have ``non-normal'' rules with more than two nonterminals on the right-hand side, e.g. $\Expr \ra \frac{\Expr}{\Expr}$. Every grammar can be written so that there are only two items on the right-hand side of every rule. However, this will require us to add intermediate nonterminals and rules to the grammar. Specifying the features for these rules will be unwieldy, and some information is discarded in this process unless we make some changes to the algorithm. We have two choices:
\begin{enumerate}
  \item Specify features in terms of non-normal rules, but automatically generate a grammar in normal form that uses these features and corresponding weights.
  \item Change the parser so it can apply these non-normal rules by searching over partitions of the symbols into more than two subsets.
\end{enumerate}
Both methods introduce some complications and inefficiency.
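The first option can be sketched as a mechanical binarization pass, in the style of conversion to Chomsky normal form; the tuple-based rule representation here is illustrative, not our actual data format:

```python
def binarize(rules):
    """Rewrite rules whose right-hand side has more than two symbols into
    binary rules, introducing fresh intermediate nonterminals."""
    out, counter = [], 0
    for lhs, rhs in rules:
        while len(rhs) > 2:
            counter += 1
            mid = "%s_%d" % (lhs, counter)   # fresh intermediate nonterminal
            out.append((lhs, (rhs[0], mid)))
            lhs, rhs = mid, rhs[1:]
        out.append((lhs, tuple(rhs)))
    return out
```

A non-normal fraction rule then becomes two binary rules, at the cost of an intermediate symbol that carries no features of its own.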

Also, the more complex, realistic version of the grammar will take into account the structure of mathematical expressions and distinguish quantities from operators. For example, it might include a rule $\Quantity \ra \Quantity \ \Operator \ \Quantity$.

\subsection{Scoring function}

\begin{align}
  \Score(\Tree) &= \sum_i \Score(\Rule_i) \\
  \Score(\Rule_i) &= \sum_{k} w^{\Rule_i}_{k} f_k^{\Rule_i}(\bounds)
\end{align}

$w_k$ are the weight parameters, and feature functions $f_k$ are nonlinear functions of the bounding boxes of the subexpressions, for example 
\begin{align*}
&f_1(\bounds) = \abs(x_{\min,\LHS}-x_{\min,\RHS})/\height_{\LHS} < .3\\
&f_2(\bounds) = \height_{\RHS}/\height_{\LHS}<.5
\end{align*}
These are boolean-valued, so the output is taken to be $0$ or $1$.
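With bounding boxes as `(xmin, ymin, xmax, ymax)` tuples, the rule score is a weighted sum of boolean features; the feature definitions follow the examples above, and the exact thresholds are illustrative:

```python
def height(box):
    return box[3] - box[1]

def f1(lhs_box, rhs_box):
    """Left edges roughly aligned, relative to the LHS box's height."""
    return abs(lhs_box[0] - rhs_box[0]) / height(lhs_box) < 0.3

def f2(lhs_box, rhs_box):
    """RHS noticeably shorter than LHS, as in a sub- or superscript."""
    return height(rhs_box) / height(lhs_box) < 0.5

def rule_score(weights, features, lhs_box, rhs_box):
    """Score(Rule) = sum_k w_k * f_k(bounds); booleans count as 0 or 1."""
    return sum(w * f(lhs_box, rhs_box) for w, f in zip(weights, features))
```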


\subsection{Parsing algorithm}

Our parsing algorithm uses a top-down approach (with memoization) to find the optimal parse for an equation. A parser for a flat text input essentially works by parsing its subsequences, whereas our equation parser must parse arbitrary subsets of our collection of symbols, making the problem more complex. The pseudocode is given below.

{\tiny
\begin{verbatim}
function parse: nonterminal, symbol_subset -> parse_tree, score
   if symbol_subset has one element, return the terminal character and score 0
   for each rule replacing nonterminal with two other nonterminals:
      for each partition of symbol_subset into a subset and its complement:
          parse the subset and complement with the corresponding nonterminals
          total score is the score of this rule plus the scores of the two sub-parses
   return the parse tree and score from the rule and partition that gave the maximal score
\end{verbatim}}
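A runnable sketch of this memoized search, with symbol subsets represented as frozensets; the `score_rule` interface is a simplification of the scoring function above:

```python
from functools import lru_cache
from itertools import combinations

def make_parser(binary_rules, terminal_labels, score_rule):
    """binary_rules: {lhs: [(rhs1, rhs2), ...]}.
    terminal_labels: {symbol: nonterminal}.
    score_rule(lhs, rhs1, rhs2, left, right) -> float (hypothetical)."""

    @lru_cache(maxsize=None)          # memoize on (nonterminal, subset)
    def parse(nonterminal, symbols):
        """Best (tree, score) for parsing the frozenset `symbols`."""
        if len(symbols) == 1:
            (sym,) = symbols
            if terminal_labels.get(sym) == nonterminal:
                return sym, 0.0       # a terminal parses as itself, score 0
            return None, float("-inf")
        best_tree, best_score = None, float("-inf")
        pool = sorted(symbols)
        for r in range(1, len(pool)):     # every proper, nonempty split
            for left in combinations(pool, r):
                left = frozenset(left)
                right = symbols - left
                for rhs1, rhs2 in binary_rules.get(nonterminal, []):
                    t1, s1 = parse(rhs1, left)
                    t2, s2 = parse(rhs2, right)
                    s = s1 + s2 + score_rule(nonterminal, rhs1, rhs2, left, right)
                    if s > best_score:
                        best_tree, best_score = (nonterminal, t1, t2), s
        return best_tree, best_score

    return parse
```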

The complexity of this function is $O(2^{2n})$, where $n$ is the number of symbols in the equation. In contrast, flat text parsing has complexity $O(n^3)$: there are $O(n^2)$ subsequences, and each subsequence is a member of $O(n)$ pairs of adjacent subsequences, so each partial parse is used $O(n)$ times. In our layout parsing, there are $O(2^n)$ subsets, and each subset parse is used $O(2^n)$ times.

\subsection{Feature weights from training data}
This has not yet been implemented. Currently, the feature weights are coded into the program by hand. However, we plan to implement an algorithm that will determine the feature weights that maximize likelihood on a labeled dataset.

We can consider the ``probability'' of a parse, which is the product, over all rule applications in the parse, of the following probability (i.e., we assume our model is generative):

\begin{align}
P^{\Rule}(\vecf) &= \frac{1}{Z(\vecw_{\Rule})}\exp(\vecf \cdot \vecw_{\Rule})
\end{align}

By looking at all of our data, we have a set of rule applications for a given rule. The log-probability thus has the following form

\begin{align}
  \log P(\Data) = \sum_{\Rule \in \textsc{Rules}} \sum_i ( \vecf_i \cdot \vecw_{\Rule} -\log Z(\vecw_{\Rule}))
\end{align}

Since this is a log-linear model, setting the gradient of the log-likelihood to zero yields the usual moment-matching condition: at the maximum-likelihood parameters $\vecw_{\Rule}$, the expected feature vector under the model equals the empirical average,

\begin{align}
E_{\vecw_{\Rule}}[\vecf] = \bar{\vecf}
\end{align}

Thus we can choose our feature weights by matching these moments, averaged over the input data.
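For boolean features the model expectation can be computed by enumerating all $2^k$ feature vectors, so the weights can be fit by simple gradient ascent on the log-likelihood (a sketch, assuming a small number of features per rule):

```python
import itertools
import numpy as np

def fit_weights(observed, k, lr=0.5, steps=2000):
    """Gradient ascent for a log-linear model P(f) = exp(w.f)/Z(w) over all
    binary feature vectors of length k. `observed` is an (N, k) array of
    feature vectors gathered from applications of one rule."""
    space = np.array(list(itertools.product([0, 1], repeat=k)), dtype=float)
    target = observed.mean(axis=0)    # empirical feature means, f-bar
    w = np.zeros(k)
    for _ in range(steps):
        p = np.exp(space @ w)
        p /= p.sum()                  # model distribution over feature space
        model_mean = p @ space        # E_w[f]
        w += lr * (target - model_mean)   # gradient of mean log-likelihood
    return w
```

At convergence the model's expected features match the empirical averages, which is exactly the moment-matching condition.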



%P(\vecf_1,\vecf_2,\dots,\vecf_N)&=\prod_{n=1}^N P(\vecf)\\
%\log P(\Data) &= \sum_{n=1}^N P(\vecf_n)\\
%&= \sum_{n=1}^N \left[\sum_{i=1}^kf_i\cdot w_i - \log Z(\vecw)\right]


% ViolaNara05 uses a "perceptron algorithm". They cite a paper "Max-Margin parsing", which I ought to check out.

\nocite{*}

\bibliography{refs}
\bibliographystyle{unsrt}
\end{document}



