{ "paper_id": "I13-1009", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:15:00.315318Z" }, "title": "The Complexity of Math Problems -Linguistic, or Computational?", "authors": [ { "first": "Takuya", "middle": [], "last": "Matsuzaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Informatics", "location": { "country": "Japan" } }, "email": "takuya-matsuzaki@nii.ac.jp" }, { "first": "Hidenao", "middle": [], "last": "Iwane", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fujitsu Laboratories Ltd", "location": { "country": "Japan" } }, "email": "iwane@jp.fujitsu.com" }, { "first": "Hirokazu", "middle": [], "last": "Anai", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fujitsu Laboratories Ltd", "location": { "country": "Japan" } }, "email": "anai@jp.fujitsu.com" }, { "first": "Noriko", "middle": [], "last": "Arai", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Institute of Informatics", "location": { "country": "Japan" } }, "email": "arai@nii.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a simple, logic-based architecture for solving math problems written in natural language. A problem is firstly translated to a logical form. It is then rewritten into the input language of a solver algorithm and finally the solver finds an answer. Such a clean decomposition of the task however does not come for free. First, despite its formality, math text still exploits the flexibility of natural language to convey its complex logical content succinctly. We propose a mechanism to fill the gap between the simple form and the complex meaning while adhering to the principle of compositionality. Second, since the input to the solver is derived by strictly following the text, it may require far more computation than those derived by a human, and may go beyond the capability of the current solvers. Empirical study on Japanese university entrance examination problems showed positive results indicating the viability of the approach, which opens up a way towards a true end-to-end problem solving system through the synthesis of the advances in linguistics, NLP, and computer math.", "pdf_parse": { "paper_id": "I13-1009", "_pdf_hash": "", "abstract": [ { "text": "We present a simple, logic-based architecture for solving math problems written in natural language. A problem is firstly translated to a logical form. It is then rewritten into the input language of a solver algorithm and finally the solver finds an answer. Such a clean decomposition of the task however does not come for free. First, despite its formality, math text still exploits the flexibility of natural language to convey its complex logical content succinctly. We propose a mechanism to fill the gap between the simple form and the complex meaning while adhering to the principle of compositionality. Second, since the input to the solver is derived by strictly following the text, it may require far more computation than those derived by a human, and may go beyond the capability of the current solvers. 
Empirical study on Japanese university entrance examination problems showed positive results indicating the viability of the approach, which opens up a way towards a true end-to-end problem solving system through the synthesis of the advances in linguistics, NLP, and computer math.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Development of an NLP system usually starts by decomposing the task into several sub-tasks. Such a modular design is mandatory not only for the reusability of the component technologies and the extensibility of the system, but also for the sound and steady advancement of the research field. Each module, however, has to attack its sub-task in isolation from the entirety of the task, usually with a quite limited form and amount of knowledge. The separated sub-task is hence not necessarily easy even for human. This problem has been investigated in various directions, including the solutions to the error-cascading in pipeline models (Finkel et al., 2006; Roth and Yih, 2007, e.g.) , the injection of knowledge into the processing modules (Koo et al., 2008; Pitler, 2012, e.g.) , and the invention of a novel way of modularization (Bangalore and Joshi, 2010, e.g.) .", "cite_spans": [ { "start": 637, "end": 658, "text": "(Finkel et al., 2006;", "ref_id": "BIBREF13" }, { "start": 659, "end": 684, "text": "Roth and Yih, 2007, e.g.)", "ref_id": null }, { "start": 742, "end": 760, "text": "(Koo et al., 2008;", "ref_id": "BIBREF18" }, { "start": 761, "end": 780, "text": "Pitler, 2012, e.g.)", "ref_id": null }, { "start": 834, "end": 867, "text": "(Bangalore and Joshi, 2010, e.g.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a simple pipeline architecture for natural language math problem solving, and investigate the issues regarding the separation of the semantic composition mechanism and the mathematical inference. Although the separation between these two may appear to be of different nature than the above-mentioned issues regarding the system modularization, as we will see later, the technical challenges there are also in the tension between the generality of an implemented theory as a reusable component, and its coverage over domain-specific phenomena.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the system, a problem is analyzed with a Combinatory Categorial Grammar (Steedman, 2001 ) coupled with a semantic representation based on the Discourse Representation Theory (Kamp and Reyle, 1993) to derive a logical form. The logical form is then rewritten to the input language of a solver algorithm, such as specialized math algorithms and theorem provers. The solver finally finds an answer through inference.", "cite_spans": [ { "start": 75, "end": 90, "text": "(Steedman, 2001", "ref_id": "BIBREF21" }, { "start": 177, "end": 199, "text": "(Kamp and Reyle, 1993)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Natural language problem solving in math and related domain is a classic AI task, which has served as a good test-bed for the integration of various AI technologies (Bobrow, 1964; Charniak, 1968; Gelb, 1971, e.g.) . 
Besides its attraction as a pure intellectual challenge, it has direct applications to the natural language interface for the formal systems such as databases, theorem provers, and formal proof checkers. The necessity of the interaction between language understanding and backend solvers has been pointed out in some of the classic works and also in closely related works Winograd's SHRDLU (1971) . A clear separation of the two layers is, however, an essential property for a wide-coverage problem solving system since we can extend it in a modular fashion, by the enhancement of the solver or the addition of different types of solvers.", "cite_spans": [ { "start": 165, "end": 179, "text": "(Bobrow, 1964;", "ref_id": "BIBREF9" }, { "start": 180, "end": 195, "text": "Charniak, 1968;", "ref_id": "BIBREF12" }, { "start": 196, "end": 213, "text": "Gelb, 1971, e.g.)", "ref_id": null }, { "start": 588, "end": 612, "text": "Winograd's SHRDLU (1971)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The research question in the current paper is thus summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Can we derive the logical form of the problems compositionally, with no intervention of mathematical inference, and how?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Can we solve such a direct translation of the text to a logical form with the current stateof-the-art automatic reasoning technology?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "After a brief overview of the system pipeline ( \u00a73), we present a technique for capturing the dynamic properties of the syntax-semantics mapping in the math problem text, which, at first sight, seem to call for mathematical inference during the derivation of a logical form ( \u00a74). We then describe remaining issues we found so far in the semantic analysis of math problem text ( \u00a75). Finally, the viability of the approach is empirically evaluated on real math problems taken from university entrance examinations. In the evaluation, we apply a solver to the logical forms derived through manually annotated CCG derivations and DRSs on the problem text ( \u00a76). In the current paper, we thus exclusively focus on the formal aspect of the semantic analysis, setting aside the problem of its automation and disambiguation. The final section concludes the paper and gives future prospects including the automatic processing of the math text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use a variant of Discourse Representation Structure (DRS) (Kamp and Reyle, 1993) for the semantic representation. DRS has been developed for the formal analysis of various discourse phenomena, such as anaphora and quantifier scopes beyond a single sentence. Fig. 1 shows the syntax of DRS used in this paper. 1 In the definitions, f and P respectively denote a function and a predicate symbol and v denotes a variable. The definition is slightly extended from that by van Eijck and Kamp (2011) for incorporating higher-order terms. 
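The following is a minimal sketch of the syntax in Fig. 1 as data types (hypothetical Python, not part of the system described here); the class names Var, Func, Lam, Pred, Neg, Impl and DRS are ours.

```python
from dataclasses import dataclass, field
from typing import List, Union

# Terms: variables, function applications, and object-level abstraction (Lam).
@dataclass
class Var:
    name: str

@dataclass
class Func:
    symbol: str                      # e.g. 'center_of'
    args: List['Term']

@dataclass
class Lam:                           # object-level lambda: functions and sets
    var: Var
    body: Union['Term', 'DRS']

Term = Union[Var, Func, Lam]

# Conditions: atomic predicates, negation of a DRS, implication between DRSs.
@dataclass
class Pred:
    symbol: str                      # e.g. 'monkey', 'coincide'
    args: List[Term]

@dataclass
class Neg:
    drs: 'DRS'

@dataclass
class Impl:
    antecedent: 'DRS'
    consequent: 'DRS'

Condition = Union[Pred, Neg, Impl]

# A DRS is a set of discourse referents plus a list of conditions.
@dataclass
class DRS:
    referents: set = field(default_factory=set)
    conditions: List[Condition] = field(default_factory=list)
```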
A term of the form \u039bv.M denotes lambda abstraction in the object language, which is used to represent (mathematical) functions and sets 2 ; we reserve \u03bb for denoting the abstraction over DRSs (and terms) for the composition of DRSs. We define the interpretation of a DRS D indirectly through its translation D \u2022 to a (higher-order) predicate logic as in Fig. 2 .", "cite_spans": [ { "start": 61, "end": 83, "text": "(Kamp and Reyle, 1993)", "ref_id": "BIBREF17" }, { "start": 312, "end": 313, "text": "1", "ref_id": null }, { "start": 485, "end": 496, "text": "Kamp (2011)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 261, "end": 267, "text": "Fig. 1", "ref_id": "FIGREF0" }, { "start": 889, "end": 895, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "As defined in Fig. 2 , a DRS D = (V, C) is basically interpreted as a conjunction of the conditions in C that is quantified existentially by all the variables in V. However, as in the second clause in Fig. 2 , the variables in the antecedent of an implication are universally quantified and their scopes also cover the succedent; this definition is utilized in the analysis of sentences including indefinite NPs, such as donkey sentences.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 20, "text": "Fig. 2", "ref_id": "FIGREF1" }, { "start": 201, "end": 207, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "The mechanism of the DRS composition in this paper is based on the formulation by van Eijck and Kamp (2011) . They use an operation called merge (denoted by \u2022) to combine two DRSs. Assuming no conflicts of variable names, it can be defined as:", "cite_spans": [ { "start": 96, "end": 107, "text": "Kamp (2011)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "(V 1 , C 1 ) \u2022 (V 2 , C 2 ) := (V 1 \u222a V 2 , C 1 \u222a C 2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "Roughly speaking, this operation amounts to form the conjunction of the conditions in C 1 and C 2 allowing the conditions in C 2 to refer to the variables in V 1 . Consider the following discourse:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "s 1 : A monkey x is sleeping. 
s 2 : It x holds a banana.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "Assuming the anaphoric relation indicated by the super/sub-scripts, we have their DRSs as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "D 1 = ({x}, {monkey(x), sleep(x)}) D 2 = ({y}, {banana(y), hold(x, y)})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "By merging them, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "D 1 \u2022D 2 = ( {x, y}, { monkey(x), sleep(x), banana(y), hold(x, y) }) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "which is translated to \u2203x.\u2203y. (monkey(x) S : ({x, x1, x2}, {x = [x1, x2] , x1 = center of(C1), x2 = center of(C2), coincide(x)}) > S/S : \u03bbQ. ({x, x1, x2}, {x = [x1, x2] , x1 = center of(C1), x2 = center of(C2), coincide(x)}) \u2192 Q Figure 3 : A part of CCG derivation tree ", "cite_spans": [ { "start": 30, "end": 40, "text": "(monkey(x)", "ref_id": null }, { "start": 45, "end": 72, "text": "({x, x1, x2}, {x = [x1, x2]", "ref_id": null }, { "start": 141, "end": 168, "text": "({x, x1, x2}, {x = [x1, x2]", "ref_id": null } ], "ref_spans": [ { "start": 229, "end": 237, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "\u2227 \u2022 \u2022 \u2022 \u2227 hold(x, y)) as expected. Assuming D1 = ({v1, . . . , v k }, {C1, . . . , Cm}), D \u2022 1 := \u2203v1 . . . \u2203v k . (C \u2022 1 \u2227 \u2022 \u2022 \u2022 \u2227 C \u2022 k ) (D1 \u2192 D2) \u2022 := \u2200v1 . . . \u2200v k . ((C \u2022 1 \u2227 \u2022 \u2022 \u2022 \u2227 C \u2022 m ) \u2192 D \u2022 2 ) (\u00acD) \u2022 := \u00acD \u2022 (P (t1, t2, . . . )) \u2022 := P (t \u2022 1 , t \u2022 2 , . . . ) (f (t1, t2, . . . )) \u2022 := f (t \u2022 1 , t \u2022 2 , . . . ) (\u039bv.D) \u2022 := \u039bv.(D \u2022 ) (\u039bv.t) \u2022 := \u039bv.(t \u2022 ) v \u2022 := v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "X/Y : f Y : a > X : f a X/Y : f Y /Z : g >B X/Z : \u03bbx.f (gx)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Representation Structure", "sec_num": "2.1" }, { "text": "Combinatory Categorial Grammar (CCG) (Steedman, 2001 ) is a lexicalized grammar formalism. In CCG, the association between a word w and its syntactic/semantic property is specified by a lexical entry of the form w := C : S, where C is the category of w and S is the semantic interpretation of w. A category is either a basic category (e.g., S, N, NP) or a complex category of the form X/Y or X\\Y . 
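To make the merge operation and the first translation clause of Fig. 2 concrete, the following minimal sketch (hypothetical Python; conditions are plain strings rather than the full term language) reproduces the monkey/banana example above:

```python
# A DRS as (referents, conditions); merge is pairwise union, assuming no
# clashes of variable names (as in the definition above).
def merge(d1, d2):
    return (d1[0] | d2[0], d1[1] + d2[1])

# Naive translation to a formula string: existential closure over the
# referents and conjunction of the conditions (first clause of Fig. 2).
def to_formula(drs):
    refs, conds = drs
    prefix = ''.join(f'exists {v}. ' for v in sorted(refs))
    return prefix + ' & '.join(conds)

d1 = ({'x'}, ['monkey(x)', 'sleep(x)'])      # A monkey^x is sleeping.
d2 = ({'y'}, ['banana(y)', 'hold(x,y)'])     # It_x holds a banana^y.

print(to_formula(merge(d1, d2)))
# exists x. exists y. monkey(x) & sleep(x) & banana(y) & hold(x,y)
```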
For instance, we can assign the following categories and semantic interpretations to the region notation \"[0, +\u221e)\" and a bare noun phrase \"positive number\":", "cite_spans": [ { "start": 37, "end": 52, "text": "(Steedman, 2001", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2.2" }, { "text": "[0, +\u221e) := NP : \u039bx.({}, {x \u2265 0}) positive number := N : \u03bbx.({}, {x > 0})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2.2" }, { "text": "since the region notation behaves as a proper noun and it can be represented by its characteristic function, while \"positive number\" functions like a common noun (recall that \u039b is for the abstraction in the object language and \u03bb stands for the abstraction for the DRS composition). A handful of combinatory rules define how the categories and the semantic interpretations of constituents are combined to derive a larger phrase. Fig. 4 shows two of the rules. A part of a derivation tree for \"When the centers of C 1 and C 2 coincide\" is shown in Fig. 3 . As shown in the figure, the semantic representation in DRS is composed by the beta reduction and the DRS merge operation. As we will see in \u00a74, there are certain types of discourse for which the basic DRS composition machinery described so far does not suffice. We will return to this after a brief description of the whole system.", "cite_spans": [], "ref_spans": [ { "start": 428, "end": 434, "text": "Fig. 4", "ref_id": "FIGREF2" }, { "start": 546, "end": 552, "text": "Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "Combinatory Categorial Grammar", "sec_num": "2.2" }, { "text": "The main result in the current paper is a mechanism of semantic composition and an empirical support for our overall design choice. Although the NLP modules for the automatic processing and disambiguation are still under development, we show a brief overview of the whole system to give a clear image on the different representations of a problem at different stages of the pipeline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "From text to logical form The system receives a problem text with L A T E X-style markup on the symbolic mathematical expressions: e.g.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "Let $a>0$, $b\u22640$, and $0 0, b \u2264 0, 0 < p, p < 1}) D 2 = ({}, {P = (p, p 2 ), on(P, \u039bx.ax \u2212 bx 2 )}) D 3 = Find(b \u2032 ) [ cc; \u2203 \u22121 a; \u2203 \u22121 p; b = b \u2032 ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "A discourse structure analyzer receives the DRSs and determines the logical relations among them while selecting an antecedent for each anaphoric expression. The net result of this stage is a large DRS that represents the whole problem. For the above example, we have their sequencing as the result: D 1 ; D 2 ; D 3 . The sequencing operator (;) basically means conjunction (merge) of the DRSs, but it is also used to connect the meanings of a declarative sentence and an imperative sentence. 
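As an illustration of this stage, the following sketch (hypothetical Python; the tuple encoding of directives is ours, and the cc variable and anti-quantifiers of D3 are ignored until Section 4) folds the sentence-level DRSs of the running example into a single problem-level structure, keeping the imperative Find directive outermost:

```python
# Sentence-level DRSs of the running example, as (referents, conditions) pairs.
d1 = (set(),  ['a > 0', 'b <= 0', '0 < p', 'p < 1'])
d2 = ({'P'},  ['P = (p, p^2)', 'on(P, Lam x. a*x - b*x^2)'])
d3 = ('Find', 'b1', ({'b1'}, ['b = b1']))   # b1 stands for the primed b in the text

def merge(x, y):
    return (x[0] | y[0], x[1] + y[1])

def sequence(sentences):
    # ';' over declaratives is merge; a directive absorbs the accumulated
    # declarative context into its body and stays outermost.
    context = (set(), [])
    for s in sentences:
        if isinstance(s[0], set):            # declarative DRS
            context = merge(context, s)
        else:                                # imperative directive
            kind, target, body = s
            return (kind, target, merge(context, body))
    return context

print(sequence([d1, d2, d3]))
# ('Find', 'b1', ({'P', 'b1'}, ['a > 0', 'b <= 0', '0 < p', 'p < 1',
#                               'P = (p, p^2)', 'on(P, Lam x. a*x - b*x^2)',
#                               'b = b1']))
```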
The large DRS is then translated by a process defined in the next section, giving a HOPL formula enclosed by a directive to the solver:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "Find(b \u2032 ) [ a > 0 \u2227 b \u2264 0 \u2227 0 < p \u2227 p < 1 \u2227 \u2203P. ( P = (p, p 2 ) \u2227 on(P, \u039bx.ax \u2212 bx 2 ) \u2227 b = b \u2032 ) ] ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "Find(v)[\u03d5]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "is a directive to find the value of variable v that satisfies the condition \u03d5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "From logical form to solver input Many of the current automatic reasoners operate on first-order formulas. To utilize them, we hence have to transform the HOPL formula in a directive to an equivalent first-order formula. Such transformation is of course not possible in general. However, we found that a greedy rewriting procedure suffices for that purpose on all of the high-school level math problems used in the experiment. In the rewriting procedure, we iteratively apply several equivalence-preserving transformations including the beta-reduction of \u039b-terms and rewriting of the predicates and functions using their definitions. For the above example, by using some trivial simplifications and the definition of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "on(\u2022, \u2022): \u2200x.\u2200y.\u2200f. (on((x, y), f ) \u2194 (y = f x)) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "we have the following directive holding a firstorder formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "Find(b \u2032 ) [ a > 0 \u2227 b \u2264 0 \u2227 0 < p \u2227 p < 1 \u2227 p 2 = ap \u2212 bp 2 \u2227 b = b \u2032 ] .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "Solver Algorithms In addition to the generic first-order theorem provers, we can use specific algorithms as the solver when the formula is expressible in certain theories. Among them, many mathematical and engineering problems can be naturally translated to formulas consisting of polynomial equations, inequalities, quantifiers (\u2200, \u2203) and boolean operators (\u2227, \u2228, \u00ac, \u2192, etc). 
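The following sketch illustrates this definition-unfolding and beta-reduction step on the running example, using SymPy purely as a convenient term engine; the described system relies on its own knowledge base of definitions and is not SymPy-based:

```python
from sympy import symbols, Lambda, Eq

a, b, p, x = symbols('a b p x')

# The curve of the example, as an object-level function  Lam x. a*x - b*x^2
curve = Lambda(x, a*x - b*x**2)

# Unfolding the definition  on((x, y), f) <-> y = f(x)  for the point
# P = (p, p^2) amounts to one beta reduction of the Lambda term:
def unfold_on(point, f):
    px, py = point
    return Eq(py, f(px))     # f(px) performs the beta reduction

print(unfold_on((p, p**2), curve))
# Eq(p**2, a*p - b*p**2)
```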
Such formulas construct sentences in the first-order theory of real closed fields (RCF).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "In his celebrated work, Tarski (1951) showed that RCF allows quantifier-elimination (QE): for any RCF formula \u03d5(x 1 , . . . , x n ), there exists an equivalent quantifier-free formula \u03c8(x 1 , . . . , x n ) in the same vocabulary. For example, the formula \u2203x.(x 2 + ax + b \u2264 c) can be reduced to a quantifier-free formula a 2 \u2212 4b + 4c \u2265 0 by QE.", "cite_spans": [ { "start": 24, "end": 37, "text": "Tarski (1951)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 184, "end": 205, "text": "\u03c8(x 1 , . . . , x n )", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "Automated theorem proving is usually very costly. For example, QE for RCF is doubly exponential on the number of quantifier alternations in the input formula. The problems containing only six variables may be hard for today's computer with the best algorithm known. However, several positive results have been attained as the result of extensive search for practical algorithms during the last decades (see (Caviness and Johnson, 1998)). Efficient software systems of QE have been developed on several computer algebra systems, such as SyNRAC (Iwane et al., 2013) .", "cite_spans": [ { "start": 543, "end": 563, "text": "(Iwane et al., 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "A Simple Pipeline for Natural Language Math Problem Solving", "sec_num": "3" }, { "text": "In this section, we first summarize the most prominent issues we found so far in the linguistic analysis of high-school/college level math problems and then present a solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Analysis of Math Problem Text", "sec_num": "4" }, { "text": "Context-dependent meanings of superlatives and their alike The meaning of a superlatives and semantically similar expressions such as \"maximum\" generally depends highly on the context. For example, the interpretation of \"John was the tallest\" depends on the group (of people) that is prominent in the discourse:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems", "sec_num": "4.1" }, { "text": "There were ten boys. John was the tallest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems", "sec_num": "4.1" }, { "text": "This context-dependency can be made more explicit by paraphrasing it to a comparative (Heim, 2000) : \"John was taller than anyone else,\" where \"anyone else\" refers, depending on the context, to the group against which John was compared.", "cite_spans": [ { "start": 86, "end": 98, "text": "(Heim, 2000)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Problems", "sec_num": "4.1" }, { "text": "In math text, however, we can usually determine the range of the \"anyone else\" without ambiguity:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems", "sec_num": "4.1" }, { "text": "Assume a + b = 3. 
Find the maximum value of ab.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems", "sec_num": "4.1" }, { "text": "Here, the set of values that should be compared against the maximum value is, with no ambiguity, all the possible values of ab that is determined by the preceding context. Once we have a representation of such a set, it is easy to write the semantic interpretation of the phrase \"maximum value of \u03b1.\" But, how can we obtain a representation of such a set without inference?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems", "sec_num": "4.1" }, { "text": "Discrimination between free/bound variable We can explicitly specify that a variable should be interpreted as being free, as in:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems", "sec_num": "4.1" }, { "text": "Let R be a square with perimeter l. Write the area of R in terms of l.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems", "sec_num": "4.1" }, { "text": "This discourse may be translated to since, assuming the proper definitions of the functions and predicates, the first one is equivalent to Find(a)[a = l 2 /16] but the second one is equivalent to Find(a)[a > 0]. How can we specify a variable be not bound?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems", "sec_num": "4.1" }, { "text": "Imperatives Math problems usually include imperatives such as \"Find/Write...,\" and \"Prove/Show...\". How can we derive correct interpretations of those imperatives, which depend on the semantic content of preceding declarative sentences, but are not a part of the declarative meaning of a discourse?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems", "sec_num": "4.1" }, { "text": "Although the above-mentioned phenomena are quite common in math problem text, we found it is difficult to derive the meanings of such expressions within the basic compositional DRS framework introduced in \u00a72. All of the examples above involve the manipulation and modification of the context in a discourse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution by iDRS", "sec_num": "4.2" }, { "text": "We present an extension of the DRS composition mechanism that covers expressions like the above examples. The basic idea is to introduce another layer of semantic representation called iDRS hereafter, which provides a device to manipulate terms t :: First we define the syntax of iDRS as in Fig. 5 . In the definition, the variables P, f, t, and v follows the same convention as in the DRS definition. In words, an iDRS represents either a DRS condition (the first row of the definition of I), a quantification \u2203v, which corresponds to a DRS having only one variable, ({v}, {}), a sequencing I 1 ; I 2 of two iDRSs, or the new ingredients in the rest of the definition that will be explained shortly.", "cite_spans": [], "ref_spans": [ { "start": 291, "end": 297, "text": "Fig. 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Solution by iDRS", "sec_num": "4.2" }, { "text": "= v | f (t1, . . . , t k ) | \u039bv.t | \u039bv.I iDRS I ::= P (t1, . . . , t k ) | \u00acI | I1 \u2192 I2 | \u2203v | I1; I2 | \u2203 \u22121 v | Find(v)[I] | Show[I] | cc", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution by iDRS", "sec_num": "4.2" }, { "text": "The \"anti-quantifier\" \u2203 \u22121 v means an operation that cancels the quantification on v that precedes \u2203 \u22121 v. 
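The following minimal sketch (hypothetical Python; the flat list encoding of an iDRS is ours) shows the intended effect of the anti-quantifier: every quantification on the cancelled variable that precedes it is deleted, so the variable is left free:

```python
# ('exists', v) introduces a quantification and ('cancel', v) removes the
# quantifications on v that precede it, leaving v free.
def cancel_quantifiers(elements):
    out = []
    for e in elements:
        if isinstance(e, tuple) and e[0] == 'cancel':
            out = [x for x in out if x != ('exists', e[1])]
        else:
            out.append(e)
    return out

idrs = [('exists', 'p'), '0 < p',
        ('exists', 'R'), 'is_rectangle(R)', 'perimeter_of(R) = p',
        ('cancel', 'p'),                      # the anti-quantifier on p
        ('exists', 'm'), 'a = m']

print(cancel_quantifiers(idrs))
# ['0 < p', ('exists', 'R'), 'is_rectangle(R)', 'perimeter_of(R) = p',
#  ('exists', 'm'), 'a = m']
```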
Find(v) [I] is a directive that requires to find the set of the values of variable v which satisfy the condition represented by I. Similarly, Show [I] is a directive that requires to prove the statement represented by I. Note that these two directives are not specific to any solvers; The choice of the solver depends on the theory (e.g., RCF) under which the formula in a directive is understood. The last element, cc, can be considered as a special 'variable', through which we can always retrieve an iDRS representation of the context that precedes the position marked by the cc.", "cite_spans": [ { "start": 115, "end": 118, "text": "[I]", "ref_id": null }, { "start": 254, "end": 257, "text": "[I]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Solution by iDRS", "sec_num": "4.2" }, { "text": "Using these new ingredients, we can now write, for instance, the semantic representation of the phrase \"maximum value\" as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution by iDRS", "sec_num": "4.2" }, { "text": "N/NP of : \u03bbx.\u03bbm.max (\u039by.(cc; y ", "cite_spans": [ { "start": 20, "end": 30, "text": "(\u039by.(cc; y", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Solution by iDRS", "sec_num": "4.2" }, { "text": "= x), m),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Solution by iDRS", "sec_num": "4.2" }, { "text": "assuming that the two-place predicate max(s, m) is defined to be true iff m is the maximum element in the set s (represented by a \u039b-term). A sentence \"the maximum value of x is m\" will thus have max(\u039by.(cc; y = x), m) as its semantic representation, which means that m is the maximum value of x that satisfies the condition specified by the preceding context. [I] or Show [I] includes only those elements that have a counterpart in the basic DRS except for the \"anti-quantifiers.\" We can hence convert it to a HOPL formula, by first canceling the quantifications \u2203v that precede \u2203 \u22121 v (i.e., deleting all occurrences of \u2203v that appear before an occurrence of \u2203 \u22121 v in the iDRS, and deleting \u2203 \u22121 v itself), then converting it to a DRS by replacing the sequencing operator ';' to the merge operator, and finally translating it to a HOPL formula according to Fig. 2 .", "cite_spans": [ { "start": 360, "end": 363, "text": "[I]", "ref_id": null }, { "start": 372, "end": 375, "text": "[I]", "ref_id": null } ], "ref_spans": [ { "start": 859, "end": 865, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Solution by iDRS", "sec_num": "4.2" }, { "text": "The mechanism presented in \u00a74 significantly enhanced the coverage of the analysis over real problems. We however found several phenomena that can not be handled now.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remaining Issues in the Semantic Analysis of Math Problem Text", "sec_num": "5" }, { "text": "Free/bound variable distinction without a cue phrase We have presented a mechanism to 'unbind' the variables specified by a cue phrase, such as \"(find x) in terms of (y).\" Some types of variables however have to be left free even without any explicit indication, e.g.:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remaining Issues in the Semantic Analysis of Math Problem Text", "sec_num": "5" }, { "text": "Let p > 0. 
Find the area of a circle with radius p, centered at the origin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remaining Issues in the Semantic Analysis of Math Problem Text", "sec_num": "5" }, { "text": "Assuming circle(x, y, r) denotes a circle with radius r and centered at (x, y), we want to derive This directive means to find the range of the areas of the circles with arbitrary radii, which is apparently not a possible reading of the problem. We found such cases in 3 out of the 32 test problems used in the experiment shown later.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remaining Issues in the Semantic Analysis of Math Problem Text", "sec_num": "5" }, { "text": "Scope inversion by a cue phrase The hierarchy of the quantifier scopes in math text mostly follows the linear order of the appearance of the variables (either overtly quantified or not). This general rule can however be superseded by the effect of a cue phrase, as shown in the example problem and its possible translation in Fig. 7 . In the figure, the formula inside the Show-directive mostly follows the discourse structure, in that the predicates from the first and the second sentence respectively form the antecedent and the succedent of the implication. The quantification on F is however dislocated from its default scope, i.e., the succedent, and moved to the outset of the formula by the effect of the underlined cue phrases. To handle such cases correctly, we would need a more involved mechanism for the manipulation of the context representation through the cc variable.", "cite_spans": [], "ref_spans": [ { "start": 326, "end": 332, "text": "Fig. 7", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Remaining Issues in the Semantic Analysis of Math Problem Text", "sec_num": "5" }, { "text": "Idiomatic expressions As in other text genres, idiomatic multiword expressions are also problematic as can be seen in the following example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remaining Issues in the Semantic Analysis of Math Problem Text", "sec_num": "5" }, { "text": "By choosing x sufficiently large, y = 1/x can be made as close to 0 as desired.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remaining Issues in the Semantic Analysis of Math Problem Text", "sec_num": "5" }, { "text": "As the example shows, a set phrase involving complex syntactic relations, e.g., \"can do X as Y as desired by choosing Z sufficiently W\" and \"X approaches Y as Z approaches W,\" can convey idiomatic meanings in math.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remaining Issues in the Semantic Analysis of Math Problem Text", "sec_num": "5" }, { "text": "We tested the feasibility of our approach on a set of problems selected from Japanese university entrance exams. Specifically, we wanted 1) to test the coverage of the semantic composition mechanism presented in \u00a74 on real problems, and 2) to verify that there is no significant loss in the capability of the system due to the additional computational cost incurred by the separation of the semantic analysis from the mathematical reasoning. The second point was confirmed by providing the ideal (100% correct) output from the (forthcoming) NLP components to a state-of-theart automatic reasoner and comparing the result against the performance of the reasoner on the input formulated by a human expert. 
Specifically, we manually gave the semantic representations of the problems as iDRSs or CCG derivation trees, and then automatically rewrote them into the language of RCF. The resulting formulas were fed to a solver to see whether the answers be returned in a realistic amount of time (30 seconds). The solver was implemented on SyN-RAC (Iwane et al., 2013) , which is an RCF-QE solver implemented as an add-on to Maple, and the (in)equation solving commands of Maple.", "cite_spans": [ { "start": 1041, "end": 1061, "text": "(Iwane et al., 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Empirical Results", "sec_num": "6" }, { "text": "The problems were taken from the entrance exams of five first-tier universities in Japan (Tokyo U., Kyoto U., Osaka U., Kyushu U., and Hokkaido U.) for fiscal year 2001, 2003, 2005, 2007, 2009 and 2011. There were 249 problems in total. From them, we first eliminated those that included almost no natural language text, such like calculation problems. We then chose, from the remaining non-straightforward word problems, all the problems which could be solved with SyNRAC and Maple when the input was formulated by an expert of computer algebra. The formulation by an expert was done, of course, with no manual calculation, but otherwise it was freely done including the division of the solving process into several steps of QE and (in)equation solving.", "cite_spans": [ { "start": 159, "end": 192, "text": "year 2001, 2003, 2005, 2007, 2009", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Empirical Results", "sec_num": "6" }, { "text": "As the result of that, we got 32 test problems, each of which contained 3.9 sentences on average. They include problems on algebra (of real and complex numbers), 3D and 2D geometry, calculus, and their combinations. For analyzing the result in more detail, we divided the problems into 78 sub-problems for which the correctness of the answers can be judged independently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Results", "sec_num": "6" }, { "text": "For the first experiment, we manually encoded the problems in the form of iDRSs. Each sentence in a problem was first encoded as a single iDRS, and the sentence-level iDRSs were combined (again manually) into a problem-level iDRS using the connectives defined in the iDRS syntax. In the manual encoding, the granularity of the representation, i.e., the smallest units of the semantic representation, was kept at the level of the actual words in the text whenever possible, intending that the resulting iDRSs closely match the representation Problem: Point P is on the circle x 2 + y 2 = 4 and lP is the normal line to the circle at P . Show that lP passes through a fixed point F irrespective of P . composed from word-level semantic representations. In the iDRS encoding of the 32 problems, the context-fetching mechanism through 'cc' variable was needed in 15 problems and the canceling of quantification was needed in 6 problems. These mechanisms thus significantly enhanced the coverage of the semantic composition machinery. After rewriting the iDRSs to RCF formulas 4 , we fed them to the solver and got perfect answers for 19 out of the 32 problems. Out of the 78 subproblems, 56 sub-problems (72%) were successfully solved. 12% of the sub-problems (9 subproblems) failed due to the timeout in the QE solver. 
Besides the timeout, a major cause of the failures (7 sub-problems) was the fractional power (mainly square root) in the formula. Although we can mechanically erase the fractional powers to get an RCF formula, it was not implemented in the solver. 5 The remaining 6 sub-problems needed the free/bound variable distinction without any cue phrase ( \u00a75). Although half of them could be solved by manually specifying the free variables, we did not count them as solved here.", "cite_spans": [ { "start": 1564, "end": 1565, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "From discourse analysis to the solution", "sec_num": "6.1" }, { "text": "We chose 14 problems from the 19 problems which were fully solved with the iDRS encod- 4 The knowledge-base used to rewrite the HOPL formulas to first-order RCF formulas included 230 axioms for 86 predicates and 98 functions. 5 In the formulation by the human expert, the use of square roots were avoided by encoding the conditions differently (e.g., x \u2265 0 \u2227 x 2 = 2 instead of \u221a x = 2).", "cite_spans": [ { "start": 87, "end": 88, "text": "4", "ref_id": null }, { "start": 226, "end": 227, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "From syntactic analysis to the answer", "sec_num": "6.2" }, { "text": "ings. We manually analyzed the text following the CCG-based analyses of basic Japanese constructions given by Bekki (2010) . We annotated the 44 sentences in the 14 problems with full CCG derivation trees and anaphoric links. We selected the 14 problems so that they cover different types of grammatical phenomena as much as possible.", "cite_spans": [ { "start": 110, "end": 122, "text": "Bekki (2010)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "From syntactic analysis to the answer", "sec_num": "6.2" }, { "text": "The final CCG lexicon contained 240 lexical entries (109 for function words and the rest for content words). The iDRS representations were then derived by (automatically) composing the semantic representations of the words according to the derivation trees and combining the sentence-level iDRSs to a problem-level iDRS as in the first experiment. Out of the 14 problems, we got fully correct answers for 13 problems. In the 14 problems, there were 33 sub-problems and we got correct answers for 32 of them; On only one subproblem, the solver could not return an answer within the time limit. Fig. 8 shows an English translation of one of the 13 problems successfully solved with the CCG derivation trees as the input. Overall, the results on the real exam problems were very promising: 72% of the sub-problems were successfully solved with the formula derived from a sentence-by-sentence, direct encoding of the problem. The experiment with manually annotated CCG derivation trees further showed that there was almost no additional cost introduced by the mechanical derivation of the logical forms from the word-level semantic representations.", "cite_spans": [], "ref_spans": [ { "start": 593, "end": 599, "text": "Fig. 8", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "From syntactic analysis to the answer", "sec_num": "6.2" }, { "text": "We have presented a logic-based architecture for automatic problem solving. 
The experiments on the university entrance exams showed positive results indicating the viability of the modular design.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Prospects", "sec_num": "7" }, { "text": "Future work includes the development of the processing modules, i.e., the symbolic expression analyzer, the parser, and the discourse structure analyzer. Another future work is to incorporate different types of solvers to the system for covering a wider range of problems, with the ability to choose a solver based on the content of a problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Prospects", "sec_num": "7" }, { "text": "Disjunction can be defined by using implication and negation: D1 \u2228 D2 := ({}, {\u00acD1}) \u2192 D2.2 We represent the application of a \u039b-term to another term, such as (\u039bx.D)t and (\u039bx.t1)t2, either by a special predicate App(f, x) \u2261 f x or a function app(f, x) := f x according to the type of f . Compound terms of the form t1t2 are hence not in the definitions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This approach shares much with a kind of dynamic semantics such as those byBekki (2000) andBrasoveanu (2012), in which a representation of the context can also be accessed in the semantic language. An important difference is that in their approaches the context is represented as a set of assignment functions, while we represent them directly as an iDRS. This difference is crucial for our purpose since we eventually need to obtain a (first-order) formula on which an automatic reasoner operates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "{ {I1; I2} }c := { {I1} }c", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "{ {I1; I2} }c := { {I1} }c;", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "{ {I1 \u2192 I2} }c := { {I2} } c", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "{ {I1 \u2192 I2} }c := { {I2} } c;[[I 1 ]]c { {I} }c := \u03f5 [[cc;", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Supertagging: Using Complex Lexical Descriptions in Natural Language Processing", "authors": [ { "first": "Srinivas", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "K", "middle": [], "last": "Aravind", "suffix": "" }, { "first": "", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Srinivas Bangalore and Aravind K. Joshi. 2010. Su- pertagging: Using Complex Lexical Descriptions in Natural Language Processing. Bradford Books. MIT Press.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Typed Dynamic Logic for Compositional Grammar", "authors": [ { "first": "Daisuke", "middle": [], "last": "Bekki", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daisuke Bekki. 2000. Typed Dynamic Logic for Com- positional Grammar. Ph.D. 
thesis, University of Tokyo.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Formal Theory of Japanese Syntax", "authors": [ { "first": "Daisuke", "middle": [], "last": "Bekki", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daisuke Bekki. 2010. Formal Theory of Japanese Syn- tax. Kuroshio Shuppan. (In Japanese).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Natural language input for a computer problem solving system", "authors": [ { "first": "Bobrow", "middle": [], "last": "Daniel Gureasko", "suffix": "" } ], "year": 1964, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gureasko Bobrow. 1964. Natural language in- put for a computer problem solving system. Ph.D. thesis, Massachusetts Institute of Technology.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The grammar of quantification and the fine structure of interpretation contexts", "authors": [ { "first": "Adrian", "middle": [], "last": "Brasoveanu", "suffix": "" } ], "year": 2012, "venue": "Synthese", "volume": "", "issue": "", "pages": "1--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adrian Brasoveanu. 2012. The grammar of quantifica- tion and the fine structure of interpretation contexts. Synthese, pages 1-51.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Quantifier Elimination and Cylindrical Algebraic Decomposition", "authors": [ { "first": "F", "middle": [], "last": "Bob", "suffix": "" }, { "first": "Jeremy", "middle": [ "R" ], "last": "Caviness", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bob F. Caviness and Jeremy R. Johnson, editors. 1998. Quantifier Elimination and Cylindrical Algebraic Decomposition. Springer-Verlag, New York.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Carps: a program which solves calculus word problems", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak. 1968. Carps: a program which solves calculus word problems. Technical report, Massachusetts Institute of Technology.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Solving the problem of cascading errors: approximate bayesian inference for linguistic annotation pipelines", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP '06", "volume": "", "issue": "", "pages": "618--626", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel, Christopher D. Manning, and An- drew Y. Ng. 2006. Solving the problem of cascad- ing errors: approximate bayesian inference for lin- guistic annotation pipelines. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP '06, pages 618-626, Stroudsburg, PA, USA. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Experiments with a natural language problem-solving system", "authors": [ { "first": "Jack", "middle": [ "P" ], "last": "Gelb", "suffix": "" } ], "year": 1971, "venue": "Proceedings of the 2nd international joint conference on Artificial intelligence, IJCAI'71", "volume": "", "issue": "", "pages": "455--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jack P. Gelb. 1971. Experiments with a natural lan- guage problem-solving system. In Proceedings of the 2nd international joint conference on Artificial intelligence, IJCAI'71, pages 455-462, San Fran- cisco, CA, USA. Morgan Kaufmann Publishers Inc.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Degree operators and scope", "authors": [ { "first": "Irene", "middle": [], "last": "Heim", "suffix": "" } ], "year": 2000, "venue": "Proceedings of Semantics and Linguistic Theory 10", "volume": "", "issue": "", "pages": "40--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irene Heim. 2000. Degree operators and scope. In Proceedings of Semantics and Linguistic Theory 10, pages 40-64. CLC Publications.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An effective implementation of symbolic-numeric cylindrical algebraic decomposition for quantifier elimination", "authors": [ { "first": "Hidenao", "middle": [], "last": "Iwane", "suffix": "" }, { "first": "Hitoshi", "middle": [], "last": "Yanami", "suffix": "" }, { "first": "Hirokazu", "middle": [], "last": "Anai", "suffix": "" }, { "first": "Kazuhiro", "middle": [], "last": "Yokoyama", "suffix": "" } ], "year": 2013, "venue": "Theoretical Computer Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hidenao Iwane, Hitoshi Yanami, Hirokazu Anai, and Kazuhiro Yokoyama. 2013. An effective implemen- tation of symbolic-numeric cylindrical algebraic de- composition for quantifier elimination. Theoretical Computer Science. (in press).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "From Discourse to Logic: Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory. Studies in Linguistics and Philosophy", "authors": [ { "first": "Hans", "middle": [], "last": "Kamp", "suffix": "" }, { "first": "Uwe", "middle": [], "last": "Reyle", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic: Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory. Studies in Linguistics and Philosophy. Kluwer Academic.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Simple semi-supervised dependency parsing", "authors": [ { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2008, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "595--603", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL-08: HLT, pages 595-603, Columbus, Ohio, June. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Attacking parsing bottlenecks with unlabeled data and relevant factorizations", "authors": [ { "first": "Emily", "middle": [], "last": "Pitler", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "768--776", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Pitler. 2012. Attacking parsing bottlenecks with unlabeled data and relevant factorizations. In Pro- ceedings of the 50th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 768-776, Jeju Island, Korea, July. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Global inference for entity and relation identification via a linear programming formulation", "authors": [ { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2007, "venue": "Introduction to Statistical Relational Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Roth and Wen-tau Yih. 2007. Global inference for entity and relation identification via a linear pro- gramming formulation. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. MIT Press.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The Syntactic Process. Bradford Books", "authors": [ { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Steedman. 2001. The Syntactic Process. Brad- ford Books. MIT Press.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A Decision Method for Elementary Algebra and Geometry", "authors": [ { "first": "Alfred", "middle": [], "last": "Tarski", "suffix": "" } ], "year": 1951, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alfred Tarski. 1951. A Decision Method for Elemen- tary Algebra and Geometry. University of Califor- nia Press, Berkeley.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Discourse representation in context", "authors": [ { "first": "Jan", "middle": [], "last": "Van Eijck", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Kamp", "suffix": "" } ], "year": 2011, "venue": "Handbook of Logic and Language", "volume": "", "issue": "", "pages": "181--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan van Eijck and Hans Kamp. 2011. Discourse rep- resentation in context. In Johan van Benthem and Alice ter Meulen, editors, Handbook of Logic and Language, Second Edition, pages 181-252. Elsevier.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Procedures as a representation for data in a computer program for understanding natural language", "authors": [ { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1971, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Winograd. 1971. Procedures as a representation for data in a computer program for understanding natural language. Technical report, Massachusetts Institute of Technology, Feb. 
MIT AI Technical Re- port 235.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "terms t ::= v | f (t1, . . . , t k ) | \u039bv.t | \u039bv.D conditions C ::= P (t1, . . . , t k ) | \u00acD | D1 \u2192 D2 DRSs D ::= ({v1, . . . , v k },{C1, . . . , Cm}) Syntax of DRS such as", "type_str": "figure" }, "FIGREF1": { "uris": null, "num": null, "text": "Translation of DRS to HOPL When S/S/S : \u03bbP.\u03bbQ.P \u2192 Q the centers of C1 and C2 S/(S\\NP) : \u03bbP.({x, x1, x2}, {x = [x1, x2], x1 = center of(C1), x2 = center of(C2)})\u2022P x coincide S\\NP : \u03bbx.({}, {coincide(x)}) >", "type_str": "figure" }, "FIGREF2": { "uris": null, "num": null, "text": "Example of combinatory rules", "type_str": "figure" }, "FIGREF3": { "uris": null, "num": null, "text": "", "type_str": "figure" }, "FIGREF4": { "uris": null, "num": null, "text": "Syntax of iDRS the representation of the preceding context during the semantic composition. 3", "type_str": "figure" }, "FIGREF5": { "uris": null, "num": null, "text": "Transformation from iDRS to directive sequence Let's take the following problem as an example:Let p > 0. R is a rectangle whose perimeter is p. Find the maximum value of the area of R as a function of p.We have its iDRS representation shown below, by parsing the sentences and composing the resulting iDRSs into one (in this case, just by sequencing the three sentences' iDRSs):\uf8ee \uf8f0 0 < p; is rectangle(R); perimeter of(R) = p; \u2203m; max(\u039bx. [cc; x = area of(R)] , m); Find(a)[cc; \u2203 \u22121 p; a = m] \uf8f9We then bind all free variables in the iDRS at their narrowest scopes: \uf8ee\uf8f0 \u2203p; 0 < p; \u2203R; is rectangle(R); perimeter of(R) = p; \u2203m; max(\u039bx. [cc; x = area of(R)] , m); Find(a)[cc; \u2203 \u22121 p; a = m] \uf8f9 This amounts to assume each variable appearing in a problem text is, unless it is explicitly quantified, interpreted to be existentially quantified as default, and to be universally quantified if it appears in the antecedent of an implication. The iDRS is then processed by the functions { {\u2022} } c and [[\u2022]] c defined in Fig. 6. In the definition, \u03f5 stands for an empty sequence. The function { {\u2022} } c extracts the imperative meaning from an iDRS, using [[\u2022]] c as a 'sub-routine' that extracts the declarative meaning from an iDRS. The suffix (c) of the two functions stands for the preceding context represented as an iDRS. When [[\u2022]] c processes a sequence I 1 ; I 2 or an implication I 1 \u2192 I 2 , the declarative content of I 1 (i.e., [[I 1 ]] c ) is appended to the preceding context c, and c; [[I 1 ]] c is passed as the preceding context when processing I 2 . When [[\u2022]] c finds a cc variable, it substitutes the cc with the current context stored in the suffix. By applying { {\u2022} } \u03f5 to the iDRS of a problem, we can extract the logical form of the problem as a sequence of directives. 
For the example problem, we have a single directive in which the quantification on p is cancelled by ∃ −1 p (so p is left free) and the value a = m is requested, where m is the maximum of the areas of the rectangles R with perimeter p.", "type_str": "figure" }, "FIGREF6": { "uris": null, "num": null, "text": "Find(a) [p > 0; a = area of(circle(0, 0, p))] , but our default variable binding rule gives Find(a) [∃p; p > 0; a = area of(circle(0, 0, p))] .", "type_str": "figure" }, "FIGREF7": { "uris": null, "num": null, "text": "Show[∃F. ∀P. ((P is on the circle x^2 + y^2 = 4 and lP is the normal line to the circle at P) → lP passes through F)] Scope inversion by cue phrases. Let O(0, 0), A(2, 6), B(3, 4) be 3 points on the coordinate plane. Draw the perpendicular to line AB through O, which meets AB at C. Let s, t be real numbers, and let P be such that $\\overrightarrow{OP} = s\\overrightarrow{OA} + t\\overrightarrow{OB}$. Answer the following questions. (1) Calculate the coordinates of point C, and write $|\\overrightarrow{CP}|^2$ in terms of s and t. (2) Let s be constant, and let t vary in the range t ≥ 0. Calculate the minimum of $|\\overrightarrow{CP}|^2$.", "type_str": "figure" }, "FIGREF8": { "uris": null, "num": null, "text": "Kyushu University 2009 (Science Course) Problem 1", "type_str": "figure" } } } }