\documentclass{article}
\usepackage{nips07submit_e,times,amsmath}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{subfig}
\usepackage[usenames,dvipsnames]{color}


% John's preamble stuff
\newcommand{\Expr}{\text{Expr}}
\newcommand{\EWS}{\text{ExprWithScript}}
\renewcommand{\L}{\ \ | \ \ }
\newcommand{\ra}{\rightarrow \ }
\newcommand{\tS}[1]{^{\text{#1}}}
\newcommand{\ts}[1]{_{\text{#1}}}
\newcommand{\Score}{\text{Score}}
\newcommand{\Rule}{\text{Rule}}
\newcommand{\Tree}{\text{Tree}}
\newcommand{\bounds}{\text{bounds}}
\newcommand{\LHS}{\text{LHS}}
\newcommand{\RHS}{\text{RHS}}
\newcommand{\height}{\text{height}}
\newcommand{\abs}{\text{abs}}
\newcommand{\vecf}{\mathbf{f}}
\newcommand{\vecw}{\mathbf{w}}
\newcommand{\Data}{\text{Data}}
\newcommand{\BigOp}{\text{BigOp}}
\newcommand{\BinOp}{\text{BinaryOp}}
\newcommand{\THRESHOLD}{\text{THRESHOLD}}
\newcommand{\Strokes}{\text{Strokes}}
\newcommand{\Char}{\text{Char}}
\newcommand{\Segmentation}{\text{Segmentation}}
\newcommand{\op}[1]{\text{#1} \ }
\newcommand{\newtag}{\tag{\theequation}\addtocounter{equation}{1}}
\newcommand{\Parse}{\text{Parse}}
\newcommand{\li}{\text{LayoutInfo}}
\newcommand{\xmin}{\text{xmin}}
\newcommand{\xmax}{\text{xmax}}
\newcommand{\ymin}{\text{ymin}}
\newcommand{\ymax}{\text{ymax}}
% End John's preamble stuff


\begin{document}

\title{Recognizing handwritten equations using a context-free grammar}
\author{John Schulman, Noah Jakimo}

\maketitle


\begin{abstract}
We design and implement algorithms for converting handwritten equations into \LaTeX. The inference problem is broken down into three parts: stroke segmentation, character recognition, and layout parsing. Stroke segmentation separates the sequence of input strokes into groups of one to three strokes, each of which corresponds to a single character. Character recognition matches a group of strokes to a character. Layout parsing, which is specific to the equation recognition problem, finds the most likely parse tree for the equation given an equation grammar.

We have implemented algorithms for all three inference problems, along with a graphical program in which a user writes equations using a mouse or tablet and the program recognizes the input in real time.
\end{abstract}


\section{Stroke segmentation}

The input is a sequence of strokes. A stroke is the set of points that the pen crosses between pen-down and pen-up (see section \ref{ui} on user input). We could have instead used the image of the equation; we chose to use the raw strokes because this makes the segmentation problem easier: breaking the drawn equation into pieces, each of which corresponds to a character. We assume that each character is drawn all at once--the user cannot come back later to dot his i's and cross his t's. Thus we have to solve the following segmentation problem: given a sequence of strokes $(s_1,s_2,\dots,s_n)$, break the sequence into subsequences of one, two, or three strokes, e.g. $((s_1),(s_2,s_3),(s_4),\dots)$, where each subsequence matches a single character, e.g. $(a,x,\sum)$.

We tried two different approaches to this problem. Approach (1) uses a generative model for the stroke sequence and solves a maximum likelihood problem. Approach (2) uses a fragile heuristic. So far, approach (2) has been more successful. Approach (2) segments the strokes as follows:

Goal: segment strokes $(s_1,s_2,\dots,s_n)$. Stroke $s_i$, $i > 1$ is part of a new group if and only if
\begin{itemize}
\item $s_i$ does not cross $s_{i-1}$, and
\item The combined strokes $s_{i-1},s_i$ do not match any of the \textit{i-like} characters, which are defined as the characters with two disjoint parts, such as $i$, $j$, and $=$. More precisely, $\Score(s_{i-1},s_i|c) < \THRESHOLD$ for all $c \in \textit{i-like}$.
\end{itemize}
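A minimal sketch of this heuristic in Python follows; \texttt{crosses} and \texttt{score} are hypothetical stand-ins for our stroke-intersection test and character matcher, and the sketch ignores the limit of three strokes per group.

```python
# Sketch of the heuristic segmenter (approach 2). `crosses` and `score`
# are stand-ins for the stroke-intersection test and character matcher.
def segment(strokes, crosses, score, i_like, threshold):
    groups = [[strokes[0]]]
    for prev, cur in zip(strokes, strokes[1:]):
        # Merge into the previous group when the strokes overlap or
        # together resemble an i-like character (i, j, =, ...).
        merge = crosses(cur, prev) or any(
            score((prev, cur), c) >= threshold for c in i_like)
        if merge:
            groups[-1].append(cur)
        else:
            groups.append([cur])
    return groups
```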

This approach has serious drawbacks. Most importantly, two consecutive strokes that erroneously overlap (or fail to overlap) cause the segmentation to be wrong.

The probabilistic model we tried--approach (1)--is as follows. $N$, the number of characters, has geometric distribution $P(N)\propto \gamma^N$. For each character, the strokes are generated with probability distribution $p(\Strokes|\Char)$.

\begin{align}
p(\text{Chars,Strokes}) \propto \gamma^N \prod_{i=1}^N p(\Char_i)p(\Strokes_i|\Char_i) \label{allstrokes}
\end{align}

We segment the strokes to maximize the total likelihood of the stroke sequence. We take the max-marginal over the possible character identities. 

\begin{align}
\log P(\text{Strokes}|\text{Segmentation,Characters}) = N \log \gamma + \sum_{i=1}^N \log P(\Strokes_i|\Char_i)\\
\Score(\text{Segmentation}) = N \log \gamma + \sum_{i=1}^N \max_{\Char_i} \log P(\Strokes_i|\Char_i)
\end{align}
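Because groups contain at most three strokes, this score can be maximized over segmentations by dynamic programming over break points. A minimal sketch (our own, not the implementation; the hypothetical \texttt{log\_p} stands in for $\max_{\Char}\log P(\Strokes_i|\Char_i)$):

```python
import math

def best_segmentation(strokes, log_p, log_gamma):
    # best[i] = best score for segmenting strokes[:i] into groups of 1-3.
    n = len(strokes)
    best = [0.0] + [-math.inf] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for k in (1, 2, 3):
            if i - k < 0:
                continue
            s = best[i - k] + log_gamma + log_p(strokes[i - k:i])
            if s > best[i]:
                best[i], back[i] = s, i - k
    # Recover the group boundaries by following back-pointers.
    cuts, i = [], n
    while i > 0:
        cuts.append((back[i], i))
        i = back[i]
    return best[n], cuts[::-1]
```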

This approach failed for two reasons:
\begin{enumerate}
\item While the score function is good at distinguishing two characters for a fixed group of strokes, it does a poor job at comparing segmentations with different numbers of characters. Depending on the parameter $\gamma$, the algorithm either over-splits characters (it returns ``- 1'' instead of ``+'') or over-merges them (it thinks ``+'' is ``- 1''). The actual mistakes it makes are even worse than this, e.g. it groups three perfectly good single-stroke characters as a ``k''.
\item The score function above does not incorporate spatial information about separate characters, i.e. there is no penalty term that encourages overlapping strokes to be part of the same character, hence ``+'' can be mistakenly recognized as ``- 1''.
\end{enumerate}

We believe that a better approach to stroke segmentation will use a model like the one above, but modified as follows:
\begin{enumerate}
\item A better score function $\Score(\Char_i|\Strokes_i)$.
\item A penalty term that depends on the position of the strokes and whether or not they intersect.
\end{enumerate}

\section{Character recognition}

The character recognition engine has the following task: given a group of strokes, determine the $k$ characters that match it best, and assign a probability to each of these candidates.

We preprocess the strokes as follows. First the character is shifted and rescaled so that the minimum $x$ and $y$ coordinates are 0 and the longer dimension has a range of 1. Then we fit several line segments to each curve. The segment endpoints $s_1,s_2,\dots$ are selected from the list of all points in the strokes $p_1,p_2,\dots$:

\begin{itemize}
\item $s_1 = p_1$
\item $s_{n+1}$ is the last point $p_k$ (i.e. largest $k$) such that
\begin{align}
\max_i \text{Dist}(p_i,\overline{s_n p_k}) < \THRESHOLD
\end{align}
where the maximum is over all $i$ such that $p_i$ lies on the curve between $s_n$ and $p_k$.
\end{itemize}
In other words, the segment endpoints are chosen greedily so that the point-to-line distance between the segment and any point on the curve it summarizes is less than $\THRESHOLD$.
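A sketch of this greedy fit (the geometry helper and all names are ours, not the implementation's):

```python
import math

def point_seg_dist(p, a, b):
    # Distance from point p to the segment from a to b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def fit_segments(points, threshold):
    # Greedily extend each segment as far as possible while every
    # intermediate point stays within `threshold` of the chord.
    ends = [points[0]]
    start = 0
    while start < len(points) - 1:
        k = start + 1
        for j in range(len(points) - 1, start, -1):
            if all(point_seg_dist(points[i], points[start], points[j]) < threshold
                   for i in range(start + 1, j)):
                k = j
                break
        ends.append(points[k])
        start = k
    return ends
```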

Finally, each character is represented as a sequence of sequences of vectors $v_i = s_{i+1}-s_i$, with each sequence of vectors corresponding to a stroke. The number of vectors in each stroke may differ between instances of the same character. For a character to be recognized, there must be an example in the training data with the same number of segments in each stroke.

We use a mixture of Gaussians model, where each handwritten character is a noisy version of some example in the training data, and each example has a Gaussian distribution. Thus $P(\text{Char}|\text{Strokes})$ is the total probability density of the Gaussians corresponding to $\Char$ that have the correct number of strokes and dimensionality of each stroke (the length of the sequence of vectors). The mixing coefficients of the Gaussians are equal, so that
\begin{align}
P(\Char | \text{Strokes}) = \sum_{\text{Examples*}}\frac{1}{\text{NumExamples}} \frac{1}{(2\pi\sigma^2)^{\dim \mathbf{v}/2}}e^{-\|\mathbf{v}-\mathbf{v}_0\|^2/(2\sigma^2)}
\end{align}
where $\text{NumExamples}$ is the number of training examples of the character, and $\text{Examples*}$ is the set of examples with the correct dimensionality and number of strokes.
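As a sketch, the score for one character could be computed as follows (the names are ours; \texttt{v} is the flattened sequence of segment vectors, and only single-stroke characters are handled for brevity):

```python
import math

def char_score(v, examples, sigma):
    # Mixture of equal-weight Gaussians centered at the training examples
    # of this character; only examples with matching dimensionality count.
    matching = [e for e in examples if len(e) == len(v)]
    d = len(v)
    norm = (2 * math.pi * sigma ** 2) ** (d / 2)
    total = sum(
        math.exp(-sum((a - b) ** 2 for a, b in zip(v, e)) / (2 * sigma ** 2)) / norm
        for e in matching)
    return total / len(examples) if examples else 0.0
```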


\section{Layout parsing}

This is the crux of the equation recognition problem: inferring how the drawn characters are part of an expression with subscripts, superscripts, fractions, and other syntactical constructs. We assume that equations are generated by a probabilistic grammar. We try to find the parse that maximizes a likelihood, described in section \ref{prob-model}. We do inference using the spatial bounds information of each character and features extracted from this bounds information.

The input to this algorithm is a list of pairs (\textit{candidates}, \textit{bounds}), each of which corresponds to a character in the input:
\begin{itemize}
\item \textit{candidates} is a list of candidate characters for a location, along with their log probability scores. For example, this list might contain $(t,-4)$, $(+,-5)$, $(x,-9)$.
\item \textit{bounds} is the bounding box of the strokes that make up that character: $(x_{\min},x_{\max},y_{\min},y_{\max})$.
\end{itemize}

\subsection{Equation grammar}

Each grammatical rule has three parts: (1) a non-terminal on the left-hand side, (2) a layout operator, and (3) a list of nonterminals or terminals on the right-hand side. Each layout operator determines relationships between the sizes and positions of the operands (the items on the right-hand side of the rule).

\begin{align*}
  \Expr \ra &\op{right} \Expr\ \Expr \newtag\\
  \L &\op{sub} \EWS_{\Expr} \\
  \L &\op{sup} \EWS^{\Expr} \\
  \L &\op{subsup} \EWS^{\Expr}_{\Expr} \\
  \L &\op{frac} -\ \Expr\ \Expr\ \\
  \L &\op{comb} \Expr\ \BinOp\ \Expr \\
  \L &\op{sum1} \BigOp\ \Expr \\
  \L &\op{sum2} \BigOp\ \Expr\ \Expr \\
  \L &\op{sum3} \BigOp\ \Expr\ \Expr\ \Expr \\
  \L &\op{right2} (\  \Expr\ ) \\
  \L &\op{box} \sqrt{\ } \ \Expr \\
  \EWS \ra &\op{right2} (\ \Expr\ )
\end{align*}

There is another set of replacement rules that replaces nonterminals with terminal characters:

\begin{align*}
  \Expr& \ra a,b,c,\dots,1,2,3,\dots \newtag \\
  \EWS& \ra a,b,c,\dots,1,2,3,\dots \\
  \BigOp& \ra \sum, \prod, \int \\
  \BinOp& \ra +,-
\end{align*}

\subsection{Probability model}

\label{prob-model}The following generative model describes a probabilistic grammar:
\begin{itemize}
\item Start with a root-level nonterminal ($\Expr$). 
\item While the sentence contains a nonterminal, randomly select a rule with that nonterminal on the left-hand side and apply it. 
\end{itemize}
In our model of the equation grammar, at each rule application, LayoutInfo is generated. For example, we might have an $\Expr$ with bounding box $(x_{\min},x_{\max},y_{\min},y_{\max})=(0,1,0,1)$ and apply the rule $\Expr \ra \op{sub} \Expr\ \EWS$. The $\Expr$ and $\EWS$ produced have their own bounding boxes, e.g. $(0,.9,0,.9)$ and $(.9,1,.9,1)$. These new bounding boxes constitute the LayoutInfo. Thus the generative model specifies how to draw from the distribution $P(\Parse,\li)=P(\Parse)P(\li|\Parse)$.

We find the parse that maximizes the likelihood of LayoutInfo, i.e. we find $\max_{\Parse} P(\li|\Parse)$. This way, we avoid having to model $P(\Parse)$, which we would need if we wanted to find the most probable parse.

We use a log-linear model for the LayoutInfo corresponding to each rule application. Namely, each rule corresponds to a set of features $f$, which are independent $\pm 1$-valued random variables:
\begin{align}
f = \begin{cases} \label{f}
  +1& \text{with probability } e^{+w}/Z \\
  -1& \text{with probability } e^{-w}/Z \end{cases}
\end{align}
where $Z =e^w+e^{-w}$. Thus,
\begin{align} \label{logp-rule}
P(\li_{\Rule} | \Rule) = \prod_i e^{w_if_i}/Z_i \\
\log P(\li_{\Rule} | \Rule) = \sum_i (w_if_i - \log Z_i)
\end{align}

where $f_1,f_2,\dots$ are the set of features that correspond to $\Rule$, and $w_1,w_2,\dots$ are the corresponding probability parameters.
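This log likelihood is a direct sum over features; a one-line transcription in Python:

```python
import math

def rule_log_likelihood(features, weights):
    # features f_i in {+1, -1}; weights w_i; Z_i = e^{w_i} + e^{-w_i}.
    return sum(w * f - math.log(math.exp(w) + math.exp(-w))
               for f, w in zip(features, weights))
```

With all weights zero, every feature contributes $-\log 2$, as expected for a uniform distribution over $\pm 1$.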

In our implementation, each layout operator (e.g. right, sub, frac) corresponds to a different set of features. For example, the subscript operator corresponds to the features [LL1,BB1,TT1,R1L2,TT2,Bigger12,RR2], where 
\begin{align}
\text{LL1} &= I(\xmin_0 = \xmin_1) \\
\text{Bigger12} &= I\left(\frac{\height_1}{\height_2} > 2\right)
\end{align}
where the subscript $0$ refers to the bounding box of the whole expression $\Expr$, and the subscripts $1$ and $2$ refer to the bounding boxes of the first and second operands, $\Expr$ and $\EWS$, respectively, and
\begin{align}
I(\text{Predicate}) = \begin{cases} 
+1& \text{if Predicate is true}\\
-1& \text{if Predicate is false}
\end{cases}\end{align}

LL1's predicate is true if the left side of the first subexpression is aligned with the left side of the total expression, and Bigger12's predicate is true if the first subexpression is taller than the second subexpression (the subscript). Since we expect these predicates to be true for most subscripts, we expect the corresponding parameters $w > 0$.
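A sketch of how a few of these features could be computed from bounding boxes; apart from LL1 and Bigger12, the exact predicate definitions are not spelled out in the text, so R1L2 here is our guess:

```python
def I(pred):
    # +1 if the predicate holds, -1 otherwise.
    return 1 if pred else -1

def sub_features(b0, b1, b2, tol=1e-6):
    # b = (xmin, xmax, ymin, ymax). b0: whole expression; b1, b2: operands.
    h1 = b1[3] - b1[2]
    h2 = b2[3] - b2[2]
    return {
        "LL1": I(abs(b0[0] - b1[0]) < tol),  # left edges aligned
        "R1L2": I(b1[1] <= b2[0] + tol),     # operand 1 ends before operand 2 starts
        "Bigger12": I(h1 > 2 * h2),          # operand 1 at least twice as tall
    }
```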

The log likelihood for the whole parse is the sum of the log likelihood of all rules (equation \ref{logp-rule}):
\begin{align}
\log P(\text{All}\li|\Parse) = \sum_{\substack{r\in\Rule-\\\text{applications}}}\log P(\li_r|\Rule_r) \label{logp-total}
\end{align}

\subsection{Efficient parsing}

We would like to efficiently determine the parse of the input that maximizes the log-likelihood in equation \ref{logp-total}. As with other parsing problems, the parse of a subset of the terminals does not depend on the other terminals. Thus we only need to solve the parsing problem $(a,S)$ once, in which we are trying to parse subset $S$ using nonterminal $a$. Given a suitable restriction on what order the characters are written, the complexity of the equation parsing problem scales the same way as the complexity of linear parsing.

We briefly abstract the structure of these two parsing problems (equations and linear sentences) so that we can upper-bound the complexity. In each problem, we are trying to find the optimum solution (parse) on a set of items $S$. Suppose the following conditions hold:
\begin{itemize}
\item If we have already solved the problem on all \textit{special subsets} of $S$, then solving it on $S$ has time $A(n)$, where $n=|S|$. 
\item $A(n)$ is increasing in $n$.
\item If $S' \subset S$, then the \textit{special subsets} of $S'$ are also \textit{special subsets} of $S$.
\end{itemize}
We successively solve the problem on all of the \textit{special subsets} of $S$ in non-decreasing order of size. Each of these steps takes time at most $A(n)$, and there are $B(n)$ steps, where $B(n)$ is the number of \textit{special subsets} of $S$; thus the total time is $O(A(n)B(n))$.

In a parsing problem, we define the \textit{special subsets} as the subsets of terminals that can be produced by a single nonterminal. In linear parsing, assuming each grammatical rule has two items on the right-hand side,
\begin{itemize}
\item $A(n) \propto n$, since there are $O(n)$ different ways of breaking the sequence into two subsequences, corresponding to the items in the right-hand side of the rule.
\item $B(n) \propto n^2$, since the \textit{special subsets} are the subsequences, and there are $O(n^2)$ subsequences of $n$ terminals.
\end{itemize}
Thus the total complexity is $O(n^3)$.

In equation parsing, let us first assume that the characters that make up a nonterminal are all written consecutively. So if one writes $x+3$, one cannot return later to put an exponent on the $x$ to get $x^2+3$. However, the nonterminals themselves may be written in any order, e.g. $3$, $+$, $x^2$. Again let us assume that there are two items on the right-hand side of every rule. Then we still have $A(n) \propto n$ (the different orderings only contribute a constant factor), and $B(n) \propto n^2$. On the other hand, if we allow the characters to be written in a completely arbitrary order, then $B(n) \propto 2^n$, so the complexity goes as $n \cdot 2^n$.

Our implementation uses a top-down parse, with a recursive parsing function and memoization to cache the results of solved subproblems. See \url{http://code.google.com/p/wyoming/source/browse/layout_parse.py}. The performance is worse than $O(n^3)$ because our rules have up to three variable-length items on the right-hand side, so $A(n) \propto n^2$. The \textit{special subsets} are still the subsequences, so $B(n)\propto n^2$; thus the total complexity goes as $n^4$. In our Python implementation, the running time is fast (less than a second) for fewer than 8 characters, and it takes roughly 30 seconds after the twelfth character is drawn.
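The memoized top-down structure can be illustrated on a toy linear grammar with binary rules over contiguous spans; this is a simplified sketch in the spirit of layout\_parse.py, which additionally scores LayoutInfo and allows up to three items per rule:

```python
import functools
import math

def best_parse(nt, tokens, rules, lexicon):
    # rules: {N: [(A, B, rule_score), ...]}; lexicon: {N: {terminal: score}}.
    # Returns the best log-score for deriving `tokens` from nonterminal `nt`.
    @functools.lru_cache(maxsize=None)
    def solve(n, i, j):  # best parse of tokens[i:j] from nonterminal n
        best = -math.inf
        if j - i == 1:  # single terminal: look it up in the lexicon
            best = lexicon.get(n, {}).get(tokens[i], -math.inf)
        for a, b, s in rules.get(n, ()):
            for k in range(i + 1, j):  # O(n) split points per span
                best = max(best, s + solve(a, i, k) + solve(b, k, j))
        return best
    return solve(nt, 0, len(tokens))
```

Memoization guarantees each (nonterminal, span) subproblem is solved once, giving the $A(n)B(n)$ bound discussed above.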

\subsection{Determining parameters from data}

Each feature $f$ for each rule has parameter $w$ (equation \ref{f}). Suppose we are given a collection of labeled training data: each example includes the bounds information of the characters and the correct parse tree. We would like to set all parameters $w$ to maximize the likelihood of the LayoutInfo of all of the training data.

We take the product of the likelihood over the training examples, each of which has several rule-applications: 
\begin{align}
P(\text{Data}) &= \prod_{e\in\text{Examples}}\prod_{\substack{r\in\Rule-\\\text{applications}}}P(\li^e_r|\Rule_r) \\
&= \prod_{e\in\text{Examples}}\prod_{\substack{r\in\Rule-\\\text{applications}}}\prod_{f_i\in\text{Features}(r)} e^{w_if^{e}_i}/Z_i
\end{align}

We rearrange the product by grouping the terms that are due to a specific feature (from a specific rule):
\begin{align}
P(\text{Data}) &= \prod_{f_i} \prod_{n=1}^{N_i}  e^{w_if^{n}_i}/Z_i
\end{align}

Here $n$ indexes over the $N_i$ occurrences of the rule (corresponding to $f_i$) in the training data. Thus we can optimize all parameters $w_i$ independently. This is a log-linear model, so the following equation holds at the maximum-likelihood solution for $w$:
\begin{align}
\bar{f} = E_w f
\end{align}
where $\bar{f}$ is the empirical expectation--the average of $f$ over the data, and $E_wf$ is the expectation of $f$ given parameter $w$.
\begin{align}
E_wf = \frac{e^w-e^{-w}}{e^w+e^{-w}} = \tanh(w)\\
w_{\text{ML}} = \tanh^{-1} \bar{f}
\end{align}
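The fit is therefore closed-form per feature. A sketch (our addition: we clamp $\bar{f}$ so that $\tanh^{-1}$ stays finite when every training occurrence agrees):

```python
import math

def fit_weight(observed):
    # observed: the +1/-1 values of one feature over all occurrences of
    # its rule in the training data.
    fbar = sum(observed) / len(observed)
    fbar = max(-0.999, min(0.999, fbar))  # avoid atanh(+/-1) = infinity
    return math.atanh(fbar)
```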



\section{Implementation: User input}
\label{ui}
The features of WYOMING's user interface extend the Python module PyGame, which captures desktop events and controls a window display. For a view of the working GUI's layout, refer to Figure \ref{Demo}, which includes working demonstrations of our software. The GUI initially provided only a testing environment, but it is now completely functional. Using the features we lay out in this section, we can produce examples of handwritten expressions on which we may independently train and test our character and contextual recognition algorithms.

To provide a user with a virtual pen, WYOMING captures either the clicks and movement of a mouse or the contact of a stylus on a tablet personal computer. A stroke is initiated upon such contact or click within the writing space bounded by the displayed black dashed lines. The stroke then consists of the ordered sequence of positions on each desktop frame that the user drags the cursor over before either picking up the stylus or releasing the mouse button. For a natural view of this handwritten stroke, a thin line is drawn between its consecutive points. If the stroke strays from the writing space, all segments of this stroke are erased. Otherwise the stroke is left on display, and the ordered list of points by which the stroke is defined is appended to a list of strokes.

When a stroke is completed, rather than initiating another stroke, the user may click on other active regions of the interface. If the red eraser is clicked, the last visible stroke is erased and popped from the stroke list. In an earlier version, the user could also click on a displayed red equals symbol, which cleared the writing space and initiated the recognition algorithms. After further development of the recognition algorithms, the GUI now fills the gray box at each lift of the pen with the most likely corresponding \LaTeX{} code produced by these algorithms.

 
\begin{figure}[h]
  \centering
  \subfloat{\includegraphics[width=0.4\textwidth]{inf_series_demo.png}}\\
  \subfloat{\includegraphics[width=0.4\textwidth]{int_demo.png}}
  \subfloat{\includegraphics[width=0.4\textwidth]{sqrt_demo.png}}
  \caption{\label{Demo}The GUI in action}
\end{figure}


\nocite{*}

\bibliography{refs}
\bibliographystyle{unsrt}

\end{document}