\documentclass[11pt,a4paper]{article}
\usepackage[hyperref]{acl2017}
\usepackage{times}
\usepackage{latexsym}
\usepackage{graphicx}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{url}
\usepackage[procnames]{listings}
\usepackage{color}
\usepackage{amsmath}
\usepackage{verbatim}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{booktabs}
\usepackage{multirow}
\aclfinalcopy \def\aclpaperid{555}
\newcommand\BibTeX{B{\sc ib}\TeX}
\newcommand{\vect}[1]{\mathbf{#1}}
\newcommand{\set}[1]{\mathcal{#1}}
\newcommand{\struct}[1]{\boldsymbol{#1}}
\newcommand{\softmax}{\mathrm{softmax}}
\newcommand{\argmax}{\mathrm{argmax}}
\newcommand{\ensuretext}[1]{#1}
\newcommand{\ignore}[1]{}
\newcommand{\mycomment}[3]{\ensuretext{\textcolor{#3}{[#1 #2]}}}
\newcommand{\dycomment}[1]{\textcolor{red}{\bf \small [#1 --DY]}}
\newcommand{\cjdmarker}{\ensuretext{\textcolor{blue}{\ensuremath{^{\textsc{C}}_{\textsc{D}}}}}}
\newcommand{\cjd}[1]{\mycomment{\cjdmarker}{#1}{blue}}
\newenvironment{itemizesquish}{\begin{list}{\setcounter{enumi}{0}\labelitemi}{\setlength{\itemsep}{-0.25em}\setlength{\labelwidth}{0.5em}\setlength{\leftmargin}{\labelwidth}\addtolength{\leftmargin}{\labelsep}}}{\end{list}}
\lstset{language=Python, basicstyle=\ttfamily\tiny, keywordstyle=\color{blue}, commentstyle=\color{gray}, stringstyle=\color{red}, showstringspaces=false, identifierstyle=\color{black}, procnamekeys={def,class}}
\title{Program Induction by Rationale Generation:\\ Learning to Solve and Explain Algebraic Word Problems}
\author{ Wang Ling$^{\spadesuit}$ \qquad Dani Yogatama$^{\spadesuit}$ \qquad Chris Dyer$^{\spadesuit}$ \qquad Phil Blunsom$^{\spadesuit\diamondsuit}$ \\ $^{\spadesuit}$DeepMind \qquad $^{\diamondsuit}$University of Oxford\\ {\tt \{lingwang,dyogatama,cdyer,pblunsom\}@google.com} }
\begin{document} \maketitle
\begin{abstract}
Solving algebraic word problems requires executing a series of arithmetic operations---a program---to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating \emph{answer rationales}, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.
\end{abstract}
\section{Introduction}

Behaving intelligently often requires mathematical reasoning. Shopkeepers calculate change, tax, and sale prices; agriculturists calculate the proper amounts of fertilizers, pesticides, and water for their crops; and managers analyze productivity. Even determining whether you have enough money to pay for a list of items requires applying addition, multiplication, and comparison. Solving these tasks is challenging as it involves recognizing how goals, entities, and quantities in the real world map onto a mathematical formalization, computing the solution, and mapping the solution back onto the world. As a proxy for the richness of the real world, a series of papers have used natural language specifications of algebraic word problems, and solved these by either learning to fill in templates that can be solved with equation solvers~\cite{DBLP:conf/emnlp/HosseiniHEK14,kushman-EtAl:2014:P14-1} or inferring and modeling operation sequences (programs) that lead to the final answer~\cite{Roy2015SolvingGA}.
\begin{figure}[t] {\fontsize{8.5}{9}\selectfont \hspace{-2mm} \begin{tabular}{|p{75mm}|} \hline \underline{\textbf{Problem 1}}:\\ \textbf{Question}: Two trains running in opposite directions cross a man standing on the platform in 27 seconds and 17 seconds respectively and they cross each other in 23 seconds. The ratio of their speeds is:\\ \textbf{Options}: A) 3/7$\ \ \ $B) 3/2$\ \ \ $C) 3/88$\ \ \ $D) 3/8$\ \ \ $E) 2/2 \\ \textbf{Rationale}: Let the speeds of the two trains be x m/sec and y m/sec respectively. Then, length of the first train = 27x meters, and length of the second train = 17y meters. (27x + 17y) / (x + y) = 23 $\rightarrow$ 27x + 17y = 23x + 23y $\rightarrow$ 4x = 6y $\rightarrow$ x/y = 3/2. \\ \textbf{Correct Option}: B \\ \hline \end{tabular} \hspace{-2mm} \begin{tabular}{|p{75mm}|} \hline \underline{\textbf{Problem 2}}:\\ \textbf{Question}: From a pack of 52 cards, two cards are drawn together at random.
What is the probability of both the cards being kings?\\ \textbf{Options}: A) 2/1223$\ \ $ B) 1/122$\ \ $C) 1/221$\ \ $D) 3/1253$\ \ $E) 2/153 \\ \textbf{Rationale}: Let s be the sample space.\\ Then n(s) = 52C2 = 1326 \\ E = event of getting 2 kings out of 4 \\ n(E) = 4C2 = 6 \\ P(E) = 6/1326 = 1/221 \\ Answer is C \\ \textbf{Correct Option}: C \\ \hline \end{tabular} \hspace{-2mm} \begin{tabular}{|p{75mm}|} \hline \underline{\textbf{Problem 3}}:\\ \textbf{Question}: For which of the following does $p(a)-p(b)=p(a-b)$ for all values of $a$ and $b$?\\ \textbf{Options}: A) $p(x)=x^2$, B) $p(x)=x/2$, C) $p(x)=x+5$, D) $p(x)=2x-1$, E) $p(x)=|x|$ \\ \textbf{Rationale}: To solve this easiest way is just put the value and see that if it equals or not. \\ with option A. $p(a) = a^2$ and $p(b) = b^2$ \\ so L.H.S = $a^2 - b^2$ \\ and R.H.S = $(a-b)^2 \rightarrow a^2 + b^2 -2ab$. \\ so L.H.S not equal to R.H.S \\ with option B. $p(a) = a/2$ and $p(b) = b/2$ \\ L.H.S = $a/2 - b/2 \rightarrow 1/2(a-b)$ \\ R.H.S = $(a-b)/2$ \\ so L.H.S = R.H.S which is the correct answer. \\ answer:B\\ \textbf{Correct Option}: B \\ \hline \end{tabular} } \caption{Examples of solved math problems.} \label{fig:examples} \end{figure}

In this paper, we learn to solve algebraic word problems by inducing and modeling programs that generate not only the answer, but an \textbf{answer rationale}, a natural language explanation interspersed with algebraic expressions justifying the overall solution. Such rationales are what examiners require from students in order to demonstrate understanding of the problem solution; they play the very same role in our task. Not only do natural language rationales enhance model interpretability, but they also provide a coarse guide to the structure of the arithmetic programs that must be executed. In fact, the learner we propose (which relies on a heuristic search; \S\ref{sec:program_induction}) fails to solve this task without modeling the rationales---the search space is too unconstrained. This work is thus related to models that can explain or rationalize their decisions~\citep{hendricks:2016,DBLP:journals/corr/HarrisonER17}. However, the use of rationales in this work is quite different from the role they play in most prior work, where interpretation models are trained to generate plausible-sounding (but not necessarily accurate) post-hoc descriptions of the decision-making process they used. In this work, the rationale is generated as a latent variable that gives rise to the answer---it is thus a more faithful representation of the steps used in computing the answer.

This paper makes three contributions. First, we have created a new dataset with more than 100,000 algebraic word problems that includes both answers and natural language answer rationales (\S\ref{sec:dataset}). Figure~\ref{fig:examples} illustrates three representative instances from the dataset. Second, we propose a sequence to sequence model that generates a sequence of instructions that, when executed, generates the rationale; only after this is the answer chosen (\S\ref{sec:model}). Since the target program is not given in the training data (most obviously, its specific form will depend on the operations that are supported by the program interpreter), the third contribution is a technique for inferring programs that generate a rationale and, ultimately, the answer.
Even constrained by a text rationale, the search space of possible programs is quite large, and we employ a heuristic search to find plausible next steps to guide the search for programs (\S\ref{sec:program_induction}). Empirically, we show that state-of-the-art sequence to sequence models are unable to perform above chance on this task, but that our model doubles the accuracy of the baseline~(\S\ref{sec:exp}).

\section{Dataset}
\label{sec:dataset}

We built a dataset\footnote{Available at \url{https://github.com/deepmind/AQuA}} of over 100,000 problems with the annotations shown in Figure~\ref{fig:examples}. Each question is decomposed into four parts, two inputs and two outputs: the description of the problem, which we denote as the \textbf{question}, and the possible (multiple-choice) answer options, denoted as \textbf{options}. Our goal is to generate the description of the reasoning used to reach the correct answer, denoted as the \textbf{rationale}, and the \textbf{correct option} label. Problem 1 illustrates an example of an algebra problem, which must be translated into an expression (i.e., $(27x + 17y) / (x + y) = 23$) and then the desired quantity $(x/y)$ solved for. Problem 2 is an example that could be solved by the multi-step arithmetic operations proposed in~\cite{Roy2015SolvingGA}. Finally, Problem 3 describes a problem that is solved by testing each of the options, a strategy that has not been addressed in prior work.

\subsection{Construction}

We first collected a set of 34,202 seed problems consisting of multiple-choice math questions covering a broad range of topics and difficulty levels. Examples of exams with such problems include the GMAT (Graduate Management Admission Test) and the GRE (Graduate Record Examination). Many websites contain example math questions from such exams, where the answer is supported by a rationale.

Next, we turned to crowdsourcing to generate new questions. We created a task where Turkers are presented with a set of five questions from our seed dataset and asked to choose one of the questions and write a similar question. We also require the answers and rationale to differ from those of the original question, in order to avoid paraphrases of the original question. We manually check a subset of the jobs from each Turker for quality control. The types of questions generated using this method vary. Some Turkers propose small changes to the values of the questions (e.g., changing the equality $p(a)-p(b)=p(a-b)$ in Problem~3 to a different equality yields a valid new question, as long as the rationale and options are rewritten to reflect the change). We designate these as replica problems, as the natural language used in the questions and rationales tends to be only minimally altered. Other Turkers propose new problems on the same topic, where the generated questions tend to differ more radically from existing ones. Some Turkers also copy math problems available on the web; our instructions state that this is not allowed, as it would introduce multiple copies of the same problem into the dataset whenever two or more Turkers copy from the same resource. Such Turkers can be detected by checking nearest neighbours within the collected data, as problems obtained from online resources are frequently submitted by more than one Turker. Using this method, we obtained 70,318 additional questions.

\subsection{Statistics}

Descriptive statistics of the dataset are shown in Table~\ref{stats}. In total, we collected 104,519 problems (34,202 seed problems and 70,318 crowdsourced problems). We removed 500 problems as a held-out set (250 for development and 250 for testing). As replicas of the held-out problems may be present in the training set, these were removed manually by listing, for each held-out instance, the closest problems in the training set in terms of character-based Levenshtein distance; a minimal sketch of this filtering step is shown below.
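For concreteness, the following is a minimal sketch of this near-duplicate check, assuming each problem is represented as a plain string; the function names and the choice to inspect the ten closest candidates are illustrative rather than the exact procedure.

\begin{lstlisting}
# Illustrative sketch of the near-duplicate check: for each
# held-out problem, rank training problems by character-based
# Levenshtein distance and surface the closest candidates for
# manual inspection. Names here are hypothetical.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def nearest_training_problems(heldout, training, k=10):
    dists = [(levenshtein(heldout, t), t) for t in training]
    return sorted(dists)[:k]  # closest k candidates to inspect
\end{lstlisting}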
After filtering, 100,949 problems remained in the training set. Table~\ref{stats} also shows the average number of tokens (the total number of tokens in the question, options, and rationale) and the vocabulary size of the questions and rationales. Finally, we provide the same statistics separately for tokens that are numeric values and tokens that are not. Figure~\ref{histogram} shows the distribution of examples by total number of tokens. We can see that most examples consist of 30 to 500 tokens, but there are also extremely long examples with more than 1,000 tokens in our dataset.

\begin{table}[t] \centering \small \begin{tabular}{|l|l|c|c|} \hline \multicolumn{2}{|c|}{} & \textbf{Question} & \textbf{Rationale} \\ \hline\hline \multicolumn{2}{|c|}{Training Examples} & \multicolumn{2}{c|}{100,949}\\ \multicolumn{2}{|c|}{Dev Examples} & \multicolumn{2}{c|}{250}\\ \multicolumn{2}{|c|}{Test Examples} & \multicolumn{2}{c|}{250}\\ \hline\hline \multirow{2}{*}{\textbf{Numeric}}&Average Length& 9.6 & 16.6 \\ &Vocab Size & 21,009 & 14,745 \\ \hline \multirow{2}{*}{\textbf{Non-Numeric}}&Average Length & 67.8 & 89.1 \\ &Vocab Size & 17,849 & 25,034 \\ \hline\hline \multirow{2}{*}{\textbf{All}}&Average Length & 77.4 & 105.7 \\ &Vocab Size & 38,858 & 39,779 \\ \hline \end{tabular} \caption{Descriptive statistics of our dataset.}\label{stats} \end{table}

\begin{figure}[t] \begin{center} \centerline{\includegraphics[width=1.0\columnwidth,scale=0.22,clip=false,trim=0cm 0cm 0cm 0cm]{images/length_hist2.pdf}} \vspace{-0.5cm} \caption{Distribution of examples by length.} \label{histogram} \end{center} \end{figure}

\section{Model}
\label{sec:model}

Generating rationales for math problems is challenging, as it requires models that learn to perform math operations at a finer granularity, since each step within the solution must be explained. For instance, in Problem 1, the equation $(27x + 17y) / (x + y) = 23$ must be solved to obtain the answer. In previous work~\cite{kushman-EtAl:2014:P14-1}, this could be done by feeding the equation into an expression solver to obtain $x/y = 3/2$. However, this would skip the intermediate steps $27x + 17y = 23x + 23y$ and $4x = 6y$, which must also be generated in our problem. We propose a model that jointly learns to generate the text in the rationale and to perform the math operations required to solve the problem. This is done by generating a program containing both instructions that generate output and instructions that simply generate intermediate values used by subsequent instructions.

\subsection{Problem Definition}

In traditional sequence to sequence models~\cite{DBLP:journals/corr/SutskeverVL14,DBLP:journals/corr/BahdanauCB14}, the goal is to predict the output sequence $\boldsymbol{y}=y_1,\ldots,y_{|\boldsymbol{y}|}$ from the input sequence $\boldsymbol{x}=x_1,\ldots,x_{|\boldsymbol{x}|}$, with lengths $|\boldsymbol{y}|$ and $|\boldsymbol{x}|$. In our particular problem, we are given the problem description and the set of options, and wish to predict the rationale and the correct option. We set $\boldsymbol{x}$ as the sequence of words in the problem, concatenated with the words in each of the options, separated by a special tag. Note that knowledge about the possible options is required, as some problems are solved by the process of elimination or by testing each of the options (e.g., Problem 3). We wish to generate $\boldsymbol{y}$, which is the sequence of words in the rationale. We also append the correct option as the last word in $\boldsymbol{y}$, which is interpreted as the chosen option. For example, $\struct{y}$ in Problem 1 is ``Let the $\ldots$ = 3/2 . $\langle$EOR$\rangle$ B $\langle$EOS$\rangle$'', whereas in Problem 2 it is ``Let s be $\dots$ Answer is C $\langle$EOR$\rangle$ C $\langle$EOS$\rangle$'', where ``$\langle$EOS$\rangle$'' is the end-of-sentence symbol and ``$\langle$EOR$\rangle$'' is the end-of-rationale symbol. A minimal sketch of this input--output encoding is shown below.
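To make the encoding concrete, the following is a minimal sketch assuming whitespace-tokenized strings; the separator is written ``<O>'' to match the option tag in the program example of the next subsection, and the helper names are hypothetical.

\begin{lstlisting}
# Hypothetical helpers illustrating the encoding described above:
# x = question words + <O>-separated option words;
# y = rationale words + <EOR> + correct option + <EOS>.
def encode_input(question, options):
    x = question.split()
    for option in options:
        x += ["<O>"] + option.split()
    return x

def encode_output(rationale, correct_option):
    return rationale.split() + ["<EOR>", correct_option, "<EOS>"]

# Problem 2, abbreviated:
x = encode_input("From a pack of 52 cards , two cards are drawn ...",
                 ["A ) 2/1223", "B ) 1/122", "C ) 1/221",
                  "D ) 3/1253", "E ) 2/153"])
y = encode_output("Let s be the sample space . ... Answer is C", "C")
\end{lstlisting}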
\subsection{Generating Programs to Generate Rationales}

We wish to generate a latent sequence of \textbf{program instructions}, $\boldsymbol{z}=z_1,\ldots,z_{|\boldsymbol{z}|}$, with length ${|\boldsymbol{z}|}$, that will generate $\boldsymbol{y}$ when executed. We express $\boldsymbol{z}$ as a program that can access $\boldsymbol{x}$, $\boldsymbol{y}$, and the memory buffer $\boldsymbol{m}$. Upon finishing execution, we expect the sequence of output tokens to be placed in the output vector $\boldsymbol{y}$.
\begin{table}[t] \centering \small \begin{tabular}{@{}r|l|l|l|l@{}} \toprule $i$ & $\boldsymbol{x}$ & $\boldsymbol{z}$ & $\boldsymbol{v}$ & $\boldsymbol{r}$\\ \midrule
1 & From & {\tt Id}(``Let") & \emph{Let} & $y_{1}$ \\
2 & a & {\tt Id}(``s") & \emph{s} & $y_{2}$ \\
3 & pack & {\tt Id}(``be") & \emph{be} & $y_{3}$ \\
4 & of & {\tt Id}(``the") & \emph{the} & $y_{4}$ \\
5 & 52 & {\tt Id}(``sample") & \emph{sample} & $y_{5}$ \\
6 & cards & {\tt Id}(``space") & \emph{space} & $y_{6}$ \\
7 & , & {\tt Id}(``.") & \emph{.} & $y_{7}$ \\
8 & two & {\tt Id}(``$\backslash$n") & \emph{$\backslash$n} & $y_{8}$ \\
9 & cards & {\tt Id}(``Then") & \emph{Then} & $y_{9}$ \\
10 & are & {\tt Id}(``n") & \emph{n} & $y_{10}$ \\
11 & drawn & {\tt Id}(``(") & \emph{(} & $y_{11}$ \\
12 & together & {\tt Id}(``s") & \emph{s} & $y_{12}$ \\
13 & at & {\tt Id}(``)") & \emph{)} & $y_{13}$ \\
14 & random & {\tt Id}(``=") & \emph{=} & $y_{14}$ \\
15 & . & {\tt Str\_to\_Float}($x_5$) & $\boldsymbol{52}$ & $\underline{m_{1}}$ \\
16 & What & {\tt Float\_to\_Str}($m_1$) & \emph{52} & $y_{15}$ \\
17 & is & {\tt Id}(``C") & \emph{C} & $y_{16}$ \\
18 & the & {\tt Id}(``2") & \emph{2} & $y_{17}$ \\
19 & probability & {\tt Id}(``=") & \emph{=} & $y_{18}$ \\
20 & of & {\tt Str\_to\_Float}($y_{17}$) & $\boldsymbol{2}$ & $\underline{m_{2}}$ \\
21 & both & {\tt Choose}($m_1$,$m_2$) & $\boldsymbol{1326}$ & $\underline{m_{3}}$ \\
22 & cards & {\tt Float\_to\_Str}($m_3$) & \emph{1326} & $y_{19}$ \\
23 & being & {\tt Id}(``E") & \emph{E} & $y_{20}$ \\
24 & kings & {\tt Id}(``=") & \emph{=} & $y_{21}$ \\
25 & ? & {\tt Id}(``event") & \emph{event} & $y_{22}$ \\
26 & $<$O$>$ & {\tt Id}(``of") & \emph{of} & $y_{23}$ \\
27 & A) & {\tt Id}(``getting") & \emph{getting} & $y_{24}$ \\
28 & 2/1223 & {\tt Id}(``2") & \emph{2} & $y_{25}$ \\
29 & $<$O$>$ & {\tt Id}(``kings") & \emph{kings} & $y_{26}$ \\
30 & B) & {\tt Id}(``out") & \emph{out} & $y_{27}$ \\
31 & 1/122 & {\tt Id}(``of") & \emph{of} & $y_{28}$\\
\ldots & \ldots & \ldots & \ldots & \ldots \\
$|\boldsymbol{z}|$& & {\tt Id}(``$\langle$EOS$\rangle$") & $\langle$\emph{EOS}$\rangle$ & $y_{|\boldsymbol{y}|}$ \\
\bottomrule \end{tabular} \caption{Example of a program $\boldsymbol{z}$ that would generate the output $\boldsymbol{y}$.
In $\boldsymbol{v}$, \emph{italics} indicates string types; $\boldsymbol{bold}$ indicates float types. Refer to \S\ref{sec:instr} for a description of the variable names.\label{tab:code_example}} \end{table}

Table~\ref{tab:code_example} illustrates an example of a sequence of instructions that would generate an excerpt from Problem 2, where columns $\boldsymbol{x}$, $\boldsymbol{z}$, $\boldsymbol{v}$, and $\boldsymbol{r}$ denote the input sequence, the instruction sequence (program), the value produced by executing each instruction, and where each value $v_i$ is written (i.e., either to the output or to the memory). In this example, instructions at indices 1 to 14 simply fill each of the output positions $y_1,\ldots,y_{14}$ with a string, where the \texttt{Id} operation simply returns its parameter without applying any operation. As such, running such an instruction is analogous to generating a word by sampling from a softmax over a vocabulary. However, instruction $z_{15}$ reads the input word $x_5$, 52, and applies the operation \texttt{Str\_to\_Float}, which converts the word 52 into a floating-point number; the same is done by instruction $z_{20}$, which reads a previously generated output word $y_{17}$. Unlike instructions $z_{1},\ldots,z_{14}$, these operations write to the external memory $\boldsymbol{m}$, which stores intermediate values. A more sophisticated instruction---which shows some of the power of our model---is $z_{21}=\texttt{Choose}(m_1, m_2) \rightarrow m_3$, which evaluates ${m_1 \choose m_2}$ and stores the result in $m_3$. This process repeats until the model generates the end-of-sentence symbol. As noted previously, the last token of the program must generate the correct option label, from ``A'' to ``E''. By training the model to generate instructions that can manipulate existing tokens, we give it the additional expressiveness needed to solve math problems within the generation process.

In total we define 22 different operations, 13 of which are frequently used when solving math problems. These are: \texttt{Id}, \texttt{Add}, \texttt{Subtract}, \texttt{Multiply}, \texttt{Divide}, \texttt{Power}, \texttt{Log}, \texttt{Sqrt}, \texttt{Sine}, \texttt{Cosine}, \texttt{Tangent}, \texttt{Factorial}, and \texttt{Choose} (number of combinations). We also provide 2 operations to convert between \texttt{Radians} and \texttt{Degrees}, as these are needed for the sine, cosine, and tangent operations. There are 6 operations that convert floating-point numbers into strings and vice-versa. These include the \texttt{Str\_to\_Float} and \texttt{Float\_to\_Str} operations described previously, as well as operations that convert between floating-point numbers and fractions, since in many math problems the answers are in the form ``3/4''. For the same reason, an operation that converts between a floating-point number and a number grouped in thousands is also used (e.g., 1000000 to ``1,000,000'' or ``1.000.000''). Finally, we define an operation (\texttt{Check}) that, given an input string, searches through the list of options and returns a string with the option index in \{``A'', ``B'', ``C'', ``D'', ``E''\}. If the input value does not match any of the options, or if more than one option contains that value, the operation cannot be applied. For instance, in Problem 2, once the correct probability ``1/221'' is generated, applying \texttt{Check} to this value yields the correct option ``C''. A minimal sketch of an interpreter for a subset of these instructions is shown below.
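The following is a minimal sketch of such an interpreter, executing a program over the buffers $\boldsymbol{x}$, $\boldsymbol{y}$, and $\boldsymbol{m}$. The program encoding (triples of operation, arguments, and destination) and the substring-matching \texttt{Check} are illustrative assumptions rather than the exact implementation.

\begin{lstlisting}
# Illustrative interpreter for a subset of the 22 operations.
# Each step is (op, args, dest): an arg is ("lit", value) or a
# 0-indexed buffer reference ("x"|"y"|"m", i); dest is "y"
# (emit a token) or "m" (store an intermediate value).
from math import factorial

def run(program, x, options):
    y, m = [], []
    buf = {"x": x, "y": y, "m": m}
    for op, args, dest in program:
        vals = [a[1] if a[0] == "lit" else buf[a[0]][a[1]]
                for a in args]
        if op == "Id":                      # copy a token unchanged
            v = vals[0]
        elif op == "Str_to_Float":
            v = float(vals[0])
        elif op == "Float_to_Str":
            v = "%g" % vals[0]
        elif op == "Choose":                # n choose k
            n, k = int(vals[0]), int(vals[1])
            v = float(factorial(n) // (factorial(k) * factorial(n - k)))
        elif op == "Check":                 # map a value to an option label
            hits = [lab for lab, text in options if vals[0] in text]
            if len(hits) != 1:
                raise ValueError("Check is not applicable")
            v = hits[0]
        else:
            raise ValueError("unknown operation: " + op)
        buf[dest].append(v)
    return y

# Excerpt of the program shown in the table above (0-indexed refs):
prog = [("Id", [("lit", "Then")], "y"),
        ("Str_to_Float", [("x", 4)], "m"),      # x_5 = "52"
        ("Str_to_Float", [("lit", "2")], "m"),
        ("Choose", [("m", 0), ("m", 1)], "m"),  # 52 C 2 = 1326
        ("Float_to_Str", [("m", 2)], "y")]
x = "From a pack of 52 cards , ...".split()
opts = [("A", "2/1223"), ("B", "1/122"), ("C", "1/221")]
print(run(prog, x, opts))  # ['Then', '1326']
\end{lstlisting}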
\subsection{Generating and Executing Instructions}\label{sec:instr} \ignore{ In a sequence to sequence model, we predict the probability of $\boldsymbol{y}$ given $\boldsymbol{x}$ as $\log p(\boldsymbol{y} \mid \boldsymbol{x}) = \sum_{i} \log p(y_i \mid \boldsymbol{y}_{