|
{ |
|
"paper_id": "D07-1003", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:18:30.157303Z" |
|
}, |
|
"title": "What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA", |
|
"authors": [ |
|
{ |
|
"first": "Mengqiu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Pittsburgh", |
|
"location": { |
|
"postCode": "15213", |
|
"region": "PA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "mengqiu@cs.cmu.edu" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Pittsburgh", |
|
"location": { |
|
"postCode": "15213", |
|
"region": "PA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "nasmith@cs.cmu.edu" |
|
}, |
|
{ |
|
"first": "Teruko", |
|
"middle": [], |
|
"last": "Mitamura", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University Pittsburgh", |
|
"location": { |
|
"postCode": "15213", |
|
"region": "PA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "teruko@cs.cmu.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents a syntax-driven approach to question answering, specifically the answer-sentence selection problem for short-answer questions. Rather than using syntactic features to augment existing statistical classifiers (as in previous work), we build on the idea that questions and their (correct) answers relate to each other via loose but predictable syntactic transformations. We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D. Smith and Eisner, 2006), and parameterized by mixtures of a robust nonlexical syntax/alignment model with a(n optional) lexical-semantics-driven log-linear model. Our model learns soft alignments as a hidden variable in discriminative training. Experimental results using the TREC dataset are shown to significantly outperform strong state-of-the-art baselines.", |
|
"pdf_parse": { |
|
"paper_id": "D07-1003", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents a syntax-driven approach to question answering, specifically the answer-sentence selection problem for short-answer questions. Rather than using syntactic features to augment existing statistical classifiers (as in previous work), we build on the idea that questions and their (correct) answers relate to each other via loose but predictable syntactic transformations. We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D. Smith and Eisner, 2006), and parameterized by mixtures of a robust nonlexical syntax/alignment model with a(n optional) lexical-semantics-driven log-linear model. Our model learns soft alignments as a hidden variable in discriminative training. Experimental results using the TREC dataset are shown to significantly outperform strong state-of-the-art baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Open-domain question answering (QA) is a widelystudied and fast-growing research problem. Stateof-the-art QA systems are extremely complex. They usually take the form of a pipeline architecture, chaining together modules that perform tasks such as answer type analysis (identifying whether the correct answer will be a person, location, date, etc.), document retrieval, answer candidate extraction, and answer reranking. This architecture is so predominant that each task listed above has evolved into its own sub-field and is often studied and evaluated independently (Shima et al., 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 569, |
|
"end": 589, |
|
"text": "(Shima et al., 2006)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "At a high level, the QA task boils down to only two essential steps (Echihabi and Marcu, 2003) . The first step, retrieval, narrows down the search space from a corpus of millions of documents to a focused set of maybe a few hundred using an IR engine, where efficiency and recall are the main focus. The second step, selection, assesses each candidate answer string proposed by the first step, and finds the one that is most likely to be an answer to the given question. The granularity of the target answer string varies depending on the type of the question. For example, answers to factoid questions (e.g., Who, When, Where) are usually single words or short phrases, while definitional questions and other more complex question types (e.g., How, Why) look for sentences or short passages. In this work, we fix the granularity of an answer to a single sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 94, |
|
"text": "(Echihabi and Marcu, 2003)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Earlier work on answer selection relies only on the surface-level text information. Two approaches are most common: surface pattern matching, and similarity measures on the question and answer, represented as bags of words. In the former, patterns for a certain answer type are either crafted manually (Soubbotin and Soubbotin, 2001) or acquired from training examples automatically (Ittycheriah et al., 2001; Ravichandran et al., 2003; Licuanan and Weischedel, 2003) . In the latter, measures like cosine-similarity are applied to (usually) bag-of-words representations of the question and answer. Although many of these systems have achieved very good results in TREC-style evaluations, shallow methods using the bag-of-word representation clearly have their limitations. Examples of cases where the bag-of-words approach fails abound in QA literature; here we borrow an example used by Echihabi and Marcu (2003) . The question is \"Who is the leader of France?\", and the sentence \"Henri Hadjenberg, who is the leader of France 's Jewish community, endorsed ...\" (note tokenization), which is not the correct answer, matches all keywords in the question in exactly the same order. (The correct answer is found in \"Bush later met with French President Jacques Chirac.\")", |
|
"cite_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 333, |
|
"text": "(Soubbotin and Soubbotin, 2001)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 383, |
|
"end": 409, |
|
"text": "(Ittycheriah et al., 2001;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 436, |
|
"text": "Ravichandran et al., 2003;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 467, |
|
"text": "Licuanan and Weischedel, 2003)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 889, |
|
"end": 914, |
|
"text": "Echihabi and Marcu (2003)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This example illustrates two types of variation that need to be recognized in order to connect this question-answer pair. The first variation is the change of the word \"leader\" to its semantically related term \"president\". The second variation is the syntactic shift from \"leader of France\" to \"French president.\" It is also important to recognize that \"France\" in the first sentence is modifying \"community\", and therefore \"Henri Hadjenberg\" is the \"leader of ... community\" rather than the \"leader of France.\" These syntactic and semantic variations occur in almost every question-answer pair, and typically they cannot be easily captured using shallow representations. It is also worth noting that such syntactic and semantic variations are not unique to QA; they can be found in many other closely related NLP tasks, motivating extensive community efforts in syntactic and semantic processing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Indeed, in this work, we imagine a generative story for QA in which the question is generated from the answer sentence through a series of syntactic and semantic transformations. The same story has been told for machine translation (Yamada and Knight, 2001, inter alia) , in which a target language sentence (the desired output) has undergone semantic transformation (word to word translation) and syntactic transformation (syntax divergence across languages) to generate the source language sentence (noisy-channel model). Similar stories can also be found in paraphrasing (Quirk et al., 2004; Wu, 2005) and textual entailment (Harabagiu and Hickl, 2006; Wu, 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 269, |
|
"text": "(Yamada and Knight, 2001, inter alia)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 574, |
|
"end": 594, |
|
"text": "(Quirk et al., 2004;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 595, |
|
"end": 604, |
|
"text": "Wu, 2005)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 628, |
|
"end": 655, |
|
"text": "(Harabagiu and Hickl, 2006;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 656, |
|
"end": 665, |
|
"text": "Wu, 2005)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our story makes use of a weighted formalism known as quasi-synchronous grammar (hereafter, QG), originally developed by D. Smith and Eisner (2006) for machine translation. Unlike most synchronous formalisms, QG does not posit a strict isomorphism between the two trees, and it provides an elegant description for the set of local configurations. In Section 2 we situate our contribution in the context of earlier work, and we give a brief discussion of quasi-synchronous grammars in Section 3. Our version of QG, called the Jeopardy model, and our parameter estimation method are described in Section 4. Experimental results comparing our approach to two state-of-the-art baselines are presented in Section 5. We discuss portability to cross-lingual QA and other applied semantic processing tasks in Section 6.", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 146, |
|
"text": "Smith and Eisner (2006)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction and Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To model the syntactic transformation process, researchers in these fields-especially in machine translation-have developed powerful grammatical formalisms and statistical models for representing and learning these tree-to-tree relations (Wu and Wong, 1998; Eisner, 2003; Gildea, 2003; Melamed, 2004; Ding and Palmer, 2005; Quirk et al., 2005; Galley et al., 2006; Smith and Eisner, 2006, inter alia) . We can also observe a trend in recent work in textual entailment that more emphasis is put on explicit learning of the syntactic graph mapping between the entailed and entailed-by sentences (Mac-Cartney et al., 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 257, |
|
"text": "(Wu and Wong, 1998;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 271, |
|
"text": "Eisner, 2003;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 285, |
|
"text": "Gildea, 2003;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 300, |
|
"text": "Melamed, 2004;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 323, |
|
"text": "Ding and Palmer, 2005;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 343, |
|
"text": "Quirk et al., 2005;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 364, |
|
"text": "Galley et al., 2006;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 400, |
|
"text": "Smith and Eisner, 2006, inter alia)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 593, |
|
"end": 619, |
|
"text": "(Mac-Cartney et al., 2006)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, relatively fewer attempts have been made in the QA community. As pointed out by Katz and Lin (2003) , most early experiments in QA that tried to bring in syntactic or semantic features showed little or no improvement, and it was often the case that performance actually degraded (Litkowski, 1999; Attardi et al., 2001 ). More recent attempts have tried to augment the bag-ofwords representation-which, after all, is simply a real-valued feature vector-with syntactic features. The usual similarity measures can then be used on the new feature representation. For example, Punyakanok et al. (2004) used approximate tree matching and tree-edit-distance to compute a similarity score between the question and answer parse trees. Similarly, Shen et al. (2005) experimented with dependency tree kernels to compute similarity between parse trees. Cui et al. (2005) measured sentence similarity based on similarity measures between dependency paths among aligned words. They used heuristic functions similar to mutual information to assign scores to matched pairs of dependency links. Shen and Klakow (2006) extend the idea further through the use of log-linear models to learn a scoring function for relation pairs. Echihabi and Marcu (2003) presented a noisychannel approach in which they adapted the IBM model 4 from statistical machine translation (Brown et al., 1990; Brown et al., 1993) and applied it to QA. Similarly, Murdock and Croft (2005) adopted a simple translation model from IBM model 1 (Brown et al., 1990; Brown et al., 1993) and applied it to QA. Porting the translation model to QA is not straightforward; it involves parse-tree pruning heuristics (the first two deterministic steps in Echihabi and Marcu, 2003) and also replacing the lexical translation table with a monolingual \"dictionary\" which simply encodes the identity relation. This brings us to the question that drives this work: is there a statistical translation-like model that is natural and accurate for question answering? We propose Smith and Eisner's (2006) quasi-synchronous grammar (Section 3) as a general solution and the Jeopardy model (Section 4) as a specific instance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 108, |
|
"text": "Katz and Lin (2003)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 305, |
|
"text": "degraded (Litkowski, 1999;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 326, |
|
"text": "Attardi et al., 2001", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 746, |
|
"end": 764, |
|
"text": "Shen et al. (2005)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 850, |
|
"end": 867, |
|
"text": "Cui et al. (2005)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1087, |
|
"end": 1109, |
|
"text": "Shen and Klakow (2006)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1219, |
|
"end": 1244, |
|
"text": "Echihabi and Marcu (2003)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1354, |
|
"end": 1374, |
|
"text": "(Brown et al., 1990;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1375, |
|
"end": 1394, |
|
"text": "Brown et al., 1993)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1428, |
|
"end": 1452, |
|
"text": "Murdock and Croft (2005)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1505, |
|
"end": 1525, |
|
"text": "(Brown et al., 1990;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1526, |
|
"end": 1545, |
|
"text": "Brown et al., 1993)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1708, |
|
"end": 1733, |
|
"text": "Echihabi and Marcu, 2003)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 2033, |
|
"end": 2048, |
|
"text": "Eisner's (2006)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For a formal description of QG, we recommend Smith and Eisner (2006) . We briefly review the central idea here. QG arose out of the empirical observation that translated sentences often have some isomorphic syntactic structure, but not usually in entirety, and the strictness of the isomorphism may vary across words or syntactic rules. The idea is that, rather than a synchronous structure over the source and target sentences, a tree over the target sentence is modeled by a source-sentence-specific grammar that is inspired by the source sentence's tree. 1 This is implemented by a \"sense\"-really just a subset of nodes in the source tree-attached to each grammar node in the target tree. The senses define an alignment between the trees. Because it only loosely links the two sentences' syntactic structure, QG is particularly well-suited for QA insofar as QA is like \"free\" translation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 68, |
|
"text": "Eisner (2006)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quasi-Synchronous Grammar", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A concrete example that is easy to understand is a binary quasi-synchronous context-free grammar (denoted QCFG). Let V S be the set of constituent tokens in the source tree. QCFG rules would take the augmented form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quasi-Synchronous Grammar", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "X, S 1 \u2192 Y, S 2 Z, S 3 X, S 1 \u2192 w", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quasi-Synchronous Grammar", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where X, Y, and Z are ordinary CFG nonterminals, each S i \u2208 2 V S (subsets of nodes in the source tree to which the nonterminals align), and w is a targetlanguage word. QG can be made more or less \"liberal\" by constraining the cardinality of the S i (we force all |S i | = 1), and by constraining the relationships among the S i mentioned in a single rule. These are called permissible \"configurations.\" An example of a strict configuration is that a target parent-child pair must align (respectively) to a source parentchild pair. Configurations are shown in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 560, |
|
"end": 567, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quasi-Synchronous Grammar", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Here, following Smith and Eisner 2006, we use a weighted, quasi-synchronous dependency grammar. Apart from the obvious difference in application task, there are a few important differences with their model. First, we are not interested in the alignments per se; we will sum them out as a hidden variable when scoring a question-answer pair. Second, our probability model includes an optional mixture component that permits arbitrary featureswe experiment with a small set of WordNet lexicalsemantics features (see Section 4.4). Third, we apply a more discriminative training method (conditional maximum likelihood estimation, Section 4.5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quasi-Synchronous Grammar", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our model, informally speaking, aims to follow the process a player of the television game show Jeopardy! might follow. The player knows the answer (or at least thinks he knows the answer) and must quickly turn it into a question. 2 The question-answer pairs used on Jeopardy! are not precisely what we have in mind for the real task (the questions are not specific enough), but the syntactic transformation inspires our model. In this section we formally define this probability model and present the necessary algorithms for parameter estimation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 232, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Jeopardy Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The Jeopardy model is a QG designed for QA. Let q = q 1 , ..., q n be a question sentence (each q i is a word), and let a = a 1 , ..., a m be a candidate answer sentence. (We will use w to denote an abstract sequence that could be a question or an answer.) In practice, these sequences may include other information, such as POS, but for clarity we assume just words in the exposition. Let A be the set of candidate answers under consideration. Our aim is to choose:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u00e2 = argmax a\u2208A p(a | q)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Probabilistic Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "At a high level, we make three adjustments. The first is to apply Bayes' rule, p(a | q) \u221d p(q | a) \u2022 p(a). Because A is known and is assumed to be generated by an external extraction system, we could use that extraction system to assign scores (and hence, probabilities p(a)) to the candidate answers. Other scores could also be used, such as reputability of the document the answer came from, grammaticality, etc. Here, aiming for simplicity, we do not aim to use such information. Hence we treat p(a) as uniform over A. 3 The second adjustment adds a labeled, directed dependency tree to the question and the answer. The tree is produced by a state-of-the-art dependency parser (McDonald et al., 2005 ) trained on the Wall Street Journal Penn Treebank (Marcus et al., 1993) . A dependency tree on a sequence w = w 1 , ..., w k is a mapping of indices of words to indices of their syntactic parents and a label for the syntactic relation, \u03c4 : {1, ..., k} \u2192 {0, ..., k} \u00d7 L. Each word w i has a single parent, denoted w \u03c4 (i).par . Cycles are not permitted. w 0 is taken to be the invisible \"wall\" symbol at the left edge of the sentence; it has a single child (|{i : \u03c4 (i) = 0}| = 1). The label for w i is denoted \u03c4 (i).lab.", |
|
"cite_spans": [ |
|
{ |
|
"start": 522, |
|
"end": 523, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 680, |
|
"end": 702, |
|
"text": "(McDonald et al., 2005", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 754, |
|
"end": 775, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The third adjustment involves a hidden variable X, the alignment between question and answer words. In our model, each question-word maps to exactly one answer-word. Let x : {1, ..., n} \u2192 {1, ..., m} be a mapping from indices of words in q to indices of words in a. (It is for computational reasons that we assume |x(i)| = 1; in general x could range over subsets of {1, ..., m}.) Because we define the correspondence in this direction, note that it is possible for multple question words to map to the same answer word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Why do we treat the alignment X as a hidden variable? In prior work, the alignment is assumed to be known given the sentences, but we aim to discover it from data. Our guide in this learning is the structure inherent in the QG: the configurations between parent-child pairs in the question and their corresponding, aligned words in the answer. The hidden variable treatment lets us avoid commitment to any one x mapping, making the method more robust to noisy parses (after all, the parser is not 100% accurate) and any wrong assumptions imposed by the model (that |x(i)| = 1, for example, or that syntactic transformations can explain the connection between q and a at all). 4 Our model, then, defines", |
|
"cite_spans": [ |
|
{ |
|
"start": 676, |
|
"end": 677, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "p(q, \u03c4 q | a, \u03c4 a ) = x p(q, \u03c4 q , x | a, \u03c4 a ) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where \u03c4 q and \u03c4 a are the question tree and answer tree, respectively. The stochastic process defined by our model factors cleanly into recursive steps that derive the question from the top down. The QG defines a grammar for this derivation; the grammar depends on the specific answer. Let \u03c4 i w refer to the subtree of \u03c4 w rooted at w i . The model is defined by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "p(\u03c4 i q | q i , \u03c4 q (i), x(i), \u03c4 a ) = (3) p #kids (|{j : \u03c4 q (j) = i, j < i}| | q i , left) \u00d7p #kids (|{j : \u03c4 q (j) = i, j > i}| | q i , right) \u00d7 j:\u03c4q(j)=i m x(j)=0 p kid (q j , \u03c4 q (j).lab | q i , \u03c4 q (i), x(i), x(j), \u03c4 a ) \u00d7p(\u03c4 j q | q j , \u03c4 q (j), x(j), \u03c4 a )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Note the recursion in the last line. While the above may be daunting, in practice it boils down only to defining the conditional distribution p kid , since the number of left and right children of each node need not be modeled (the trees are assumed known)p #kids is included above for completeness, but in the model applied here we do not condition it on q i and therefore do not need to estimate it (since the trees are fixed). p kid defines a distribution over syntactic children of q i and their labels, given (1) the word q i , (2) the parent of q i , (3) the dependency relation between q i and its parent, (4) the answer-word q i is aligned to, (5) the answer-word the child being predicted is aligned to, and (6) the remainder of the answer tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Given q, the score for an answer is simply p(q, \u03c4 q | a, \u03c4 a ). Computing the score requires summing over alignments and can be done efficiently by bottom-up dynamic programming. Let S(j, ) refer to the score of \u03c4 j q , assuming that the parent of q j , \u03c4 q (j).par , is aligned to a . The base case, for leaves of \u03c4 q , is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "S(j, ) = (4) p #kids (0 | q j , left) \u00d7 p #kids (0 | q j , right) \u00d7 m k=0 p kid (q j , \u03c4 q (j).lab | q \u03c4 q(j) , , k, \u03c4 a )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Note that k ranges over indices of answer-words to be aligned to q j . The recursive case is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "S(i, ) = (5) p #kids (|{j : \u03c4 q (j) = i, j < i}| | q j , left) \u00d7p #kids (|{j : \u03c4 q (j) = i, j > i}| | q j , right) \u00d7 m k=0 p kid (q i , \u03c4 q (i).lab | q \u03c4q(i) , , k, \u03c4 a ) \u00d7 j:\u03c4q(j)=i S(j, k)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Solving these equations bottom-up can be done in O(nm 2 ) time and O(nm) space; in practice this is very efficient. In our experiments, computing the value of a question-answer pair took two seconds on average. 5 We turn next to the details of p kid , the core of the model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 212, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Our base model factors p kid into three conditional multinomial distributions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base Model", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p base kid (q i , \u03c4 q (i).lab | q \u03c4q(i) , , k, \u03c4 a ) = p(q i .pos | a k .pos) \u00d7 p(q i .ne | a k .ne) \u00d7p(\u03c4 q (i).lab | config(\u03c4 q , \u03c4 a , i))", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Base Model", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "where q i .pos is question-word i's POS label and q i .ne is its named-entity label. config maps question-word i, its parent, and their alignees to a QG configuration as described in Table 1 ; note that some configurations are extended with additional tree information. The base model does not directly predict the specific words in the questiononly their parts-of-speech, named-entity labels, and dependency relation labels. This model is very similar to Smith and Eisner (2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 466, |
|
"end": 479, |
|
"text": "Eisner (2006)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 190, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Base Model", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Because we are interested in augmenting the QG with additional lexical-semantic knowledge, we also estimate p kid by mixing the base model with a model that exploits WordNet (Miller et al., 1990) lexical-semantic relations. The mixture is given by:", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 195, |
|
"text": "(Miller et al., 1990)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base Model", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "p kid (\u2022 | \u2022) = \u03b1p base kid (\u2022 | \u2022)+(1\u2212\u03b1)p ls kid (\u2022 | \u2022) (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Base Model", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The lexical-semantics model p ls kid is defined by predicting a (nonempty) subset of the thirteen classes for the question-side word given the identity of its aligned answer-side word. These classes include WordNet relations: identical-word, synonym, antonym (also extended and indirect antonym), hypernym, hyponym, derived form, morphological variation (e.g., plural form), verb group, entailment, entailed-by, see-also, and causal relation. In addition, to capture the special importance of Whwords in questions, we add a special semantic relation called \"q-word\" between any word and any Wh-word. This is done through a log-linear model with one feature per relation. Multiple relations may fire, motivating the log-linear model, which permits \"overlapping\" features, and, therefore prediction of any of the possible 2 13 \u2212 1 nonempty subsets. It is important to note that this model assigns zero probability to alignment of an answer-word with any question-word that is not directly related to it through any relation. Such words may be linked in the mixture model, however, via p base kid . 6 (It is worth pointing out that log-linear models provide great flexibility in defining new features. It is straightforward to extend the feature set to include more domain-specific knowledge or other kinds of morphological, syntactic, or semantic information. Indeed, we explored some additional syntactic features, fleshing out the configurations in Table 1 in ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1449, |
|
"end": 1459, |
|
"text": "Table 1 in", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Lexical-Semantics Log-Linear Model", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The parameters to be estimated for the Jeopardy model boil down to the conditional multinomial distributions in p base kid , the log-linear weights inside of p ls kid , and the mixture coefficient \u03b1. 7 Stan-dard applications of log-linear models apply conditional maximum likelihood estimation, which for our case involves using an empirical distributionp over question-answer pairs (and their trees) to optimize as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "max \u03b8 q,\u03c4q,a,\u03c4ap (q, \u03c4 q , a, \u03c4 a ) log p \u03b8 (q, \u03c4 q | a, \u03c4 a ) P x p \u03b8 (q,\u03c4q,x|a,\u03c4a)", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Parameter Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Note the hidden variable x being summed out; that makes the optimization problem non-convex. This sort of problem can be solved in principle by conditional variants of the Expectation-Maximization algorithm (Baum et al., 1970; Dempster et al., 1977; Meng and Rubin, 1993; Jebara and Pentland, 1999) . We use a quasi-Newton method known as L-BFGS (Liu and Nocedal, 1989) that makes use of the gradient of the above function (straightforward to compute, but omitted for space).", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 226, |
|
"text": "(Baum et al., 1970;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 249, |
|
"text": "Dempster et al., 1977;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 271, |
|
"text": "Meng and Rubin, 1993;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 298, |
|
"text": "Jebara and Pentland, 1999)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 369, |
|
"text": "(Liu and Nocedal, 1989)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "To evaluate our model, we conducted experiments using Text REtrieval Conference (TREC) 8-13 QA dataset. 8", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The TREC dataset contains questions and answer patterns, as well as a pool of documents returned by participating teams. Our task is the same as Punyakanok et al. (2004) and Cui et al. (2005) , where we search for single-sentence answers to factoid questions. We follow a similar setup to Shen and Klakow (2006) by automatically selecting answer candidate sentences and then comparing against a human-judged gold standard.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 169, |
|
"text": "(2004)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 191, |
|
"text": "Cui et al. (2005)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 311, |
|
"text": "Shen and Klakow (2006)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We used the questions in TREC 8-12 for training and set aside TREC 13 questions for development (84 questions) and testing (100 questions). To generate the candidate answer set for development and testing, we automatically selected sentences from each question's document pool that contains one or more non-stopwords from the question. For generating the training candidate set, in addtion to the sentences that contain non-stopwords from the question, we also added sentences that contain correct tributions; \u03b1 is initialized to be 0.1. answer pattern. Manual judgement was produced for the entire TREC 13 set, and also for the first 100 questions from the training set TREC 8-12. 9 On average, each question in the development set has 3.1 positive and 17.1 negative answers. There are 3.6 positive and 20.0 negative answers per question in the test set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We tokenized sentences using the standard treebank tokenization script, and then we performed part-of-speech tagging using MXPOST tagger (Ratnaparkhi, 1996) . The resulting POS-tagged sentences were then parsed using MSTParser (McDonald et al., 2005) , trained on the entire Penn Treebank to produce labeled dependency parse trees (we used a coarse dependency label set that includes twelve label types). We used BBN Identifinder (Bikel et al., 1999) for named-entity tagging.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 156, |
|
"text": "(Ratnaparkhi, 1996)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 250, |
|
"text": "(McDonald et al., 2005)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 413, |
|
"end": 450, |
|
"text": "BBN Identifinder (Bikel et al., 1999)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "As answers in our task are considered to be single sentences, our evaluation differs slightly from TREC, where an answer string (a word or phrase like 1977 or George Bush) has to be accompanied by a supporting document ID. As discussed by Punyakanok et al. (2004) , the single-sentence assumption does not simplify the task, since the hardest part of answer finding is to locate the correct sentence. From an end-user's point of view, presenting the sentence that contains the answer is often more informative and evidential. Furthermore, although the judgement data in our case are more labor-intensive to obtain, we believe our evaluation method is a better indicator than the TREC evaluation for the quality of an answer selection algorithm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 263, |
|
"text": "Punyakanok et al. (2004)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To illustrate the point, consider the example question, \"When did James Dean die?\" The correct an-9 More human-judged data are desirable, though we will address training from noisy, automatically judged data in Section 5.4. It is important to note that human judgement of answer sentence correctness was carried out prior to any experiments, and therefore is unbiased. The total number of questions in TREC 13 is 230. We exclude from the TREC 13 set questions that either have no correct answer candidates (27 questions), or no incorrect answer candidates (19 questions). Any algorithm will get the same performance on these questions, and therefore obscures the evaluation results. 6 such questions were also excluded from the 100 manually-judged training questions, resulting in 94 questions for training. For computational reasons (the cost of parsing), we also eliminated answer candidate sentences that are longer than 40 words from the training and evaluation set. After these data preparation steps, we have 348 positive Q-A pairs for training, 1,415 Q-A pairs in the development set, and 1,703 Q-A pairs in the test set. swer as appeared in the sentence \"In 1955, actor James Dean was killed in a two-car collision near Cholame, Calif.\" is 1955. But from the same document, there is another sentence which also contains 1955: \"In 1955, the studio asked him to become a technical adviser on Elia Kazan's 'East of Eden,' starring James Dean.\" If a system missed the first sentence but happened to have extracted 1955 from the second one, the TREC evaluation grants it a \"correct and well-supported\" point, since the document ID matches the correct document ID-even though the latter answer does not entail the true answer. Our evaluation does not suffer from this problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We report two standard evaluation measures commonly used in IR and QA research: mean average precision (MAP) and mean reciprocal rank (MRR). All results are produced using the standard trec eval program.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We implemented two state-of-the-art answer-finding algorithms (Cui et al., 2005; Punyakanok et al., 2004) as strong baselines for comparison. Cui et al. (2005) is the answer-finding algorithm behind one of the best performing systems in TREC evaluations. It uses a mutual information-inspired score computed over dependency trees and a single alignment between them. We found the method to be brittle, often not finding a score for a testing instance because alignment was not possible. We extended the original algorithm, allowing fuzzy word alignments through WordNet expansion; both results are reported.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 80, |
|
"text": "(Cui et al., 2005;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 81, |
|
"end": 105, |
|
"text": "Punyakanok et al., 2004)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 159, |
|
"text": "Cui et al. (2005)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Systems", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The second baseline is the approximate treematching work by Punyakanok et al. (2004) . Their algorithm measures the similarity between \u03c4 q and \u03c4 a by computing tree edit distance. Our replication is close to the algorithm they describe, with one subtle difference. Punyakanok et al. used answer-typing in computing edit distance; this is not available in our dataset (and our method does not explicitly carry out answer-typing). Their heuristics for reformulating questions into statements were not replicated. We did, however, apply WordNet type-checking and approximate, penalized lexical matching. Punyakanok et al. (2004) ; +WN modifies their edit distance function using WordNet. We also report our implementation of Cui et al. (2005) , along with our WordNet expansion (+WN). The Jeopardy base model and mixture with the lexical-semantics log-linear model perform best; both are trained using conditional maximum likelihood estimation. The top part of the table shows performance using 100 manually-annotated question examples (questions 1-100 in TREC 8-12), and the bottom part adds noisily, automatically annotated questions 101-2,393. Boldface marks the best score in a column and any scores in that column not significantly worse under a a two-tailed paired t-test (p < 0.03).", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 84, |
|
"text": "Punyakanok et al. (2004)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 601, |
|
"end": 625, |
|
"text": "Punyakanok et al. (2004)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 722, |
|
"end": 739, |
|
"text": "Cui et al. (2005)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Systems", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Evaluation results on the development and test sets of our model in comparison with the baseline algorithms are shown in Table 2 . Both our model and the model in Cui et al. (2005) are trained on the manually-judged training set (questions 1-100 from TREC 8-12). The approximate tree matching algorithm in Punyakanok et al. (2004) uses fixed edit distance functions and therefore does not require training. From the table we can see that our model significantly outperforms the two baseline algorithmseven when they are given the benefit of WordNeton both development and test set, and on both MRR and MAP.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 180, |
|
"text": "Cui et al. (2005)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 330, |
|
"text": "Punyakanok et al. (2004)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 128, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Although manual annotation of the remaining 2,293 training sentences' answers in TREC 8-12 was too labor-intensive, we did experiment with a simple, noisy automatic labeling technique. Any answer that had at least three non-stop word types seen in the question and contains the answer pattern defined in the dataset was labeled as \"correct\" and used in training. The bottom part of Table 2 shows the results. Adding the noisy data hurts all methods, but the Jeopardy model maintains its lead and consistently suffers less damage than Cui et al. (2005) . (The TreeMatch method of Punyakanok et al. (2004) does not use training examples.)", |
|
"cite_spans": [ |
|
{ |
|
"start": 534, |
|
"end": 551, |
|
"text": "Cui et al. (2005)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 579, |
|
"end": 603, |
|
"text": "Punyakanok et al. (2004)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 382, |
|
"end": 389, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments with Noisy Training Data", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Unlike most previous work, our model does not try to find a single correspondence between words in the question and words in the answer, during training or during testing. An alternative method might choose the best (most probable) alignment, rather than the sum of all alignment scores. This involves a slight change to Equation 3, replacing the summation with a maximization. The change could be made during training, during testing, or both. Table 3 shows that summing is preferable, especially during training.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 445, |
|
"end": 452, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Summing vs. Maximizing", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "The key experimental result of this work is that loose syntactic transformations are an effective way to carry out statistical question answering. One unique advantage of our model is the mixture of a factored, multinomial-based base model and a potentially very rich log-linear model. The base model gives our model robustness, and the log-test set training decoding MAP MRR \u03a3 \u03a3 0.6029 0.6852 \u03a3 max 0.5822 0.6489 max \u03a3 0.5559 0.6250 max max 0.5571 0.6365 Table 3 : Experimental results on comparing summing over alignments (\u03a3) with maximizing (max) over alignments on the test set. Boldface marks the best score in a column and any scores in that column not significantly worse under a a two-tailed paired ttest (p < 0.03).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 456, |
|
"end": 463, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "linear model allows us to throw in task-or domainspecific features. Using a mixture gives the advantage of smoothing (in the base model) without having to normalize the log-linear model by summing over large sets. This powerful combination leads us to believe that our model can be easily ported to other semantic processing tasks where modeling syntactic and semantic transformations is the key, such as textual entailment, paraphrasing, and crosslingual QA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The traditional approach to cross-lingual QA is that translation is either a pre-processing or postprocessing step done independently from the main QA task. Notice that the QG formalism that we have employed in this work was originally proposed for machine translation. We might envision transformations that are performed together to form questions from answers (or vice versa) and to translatea Jeopardy! game in which bilingual players must ask a question in a different language than that in which the answer is posed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We described a statistical syntax-based model that softly aligns a question sentence with a candidate answer sentence and returns a score. Discriminative training and a relatively straightforward, barelyengineered feature set were used in the implementation. Our scoring model was found to greatly outperform two state-of-the-art baselines on an answer selection task using the TREC dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Smith and Eisner also show how QG formalisms generalize synchronous grammar formalisms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A round of Jeopardy! involves a somewhat involved and specific \"answer\" presented to the competitors, and the first competitor to hit a buzzer proposes the \"question\" that leads to the answer. For example, an answer might be, This Eastern European capital is famous for defenestrations. In Jeopardy! the players must respond with a queston: What is Prague?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The main motivation for modeling p(q | a) is that it is easier to model deletion of information (such as the part of the sentence that answers the question) than insertion. Our QG does not model the real-world knowledge required to fill in an answer; its job is to know what answers are likely to look like, syntactically.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "If parsing performance is a concern, we might also treat the question and/or answer parse trees as hidden variables, though that makes training and testing more computationally expensive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Experiments were run on a 64-bit machine with 2\u00d7 2.2GHz dual-core CPUs and 4GB of memory.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It is to preserve that robustness property that the models are mixed, and not combined some other way.7 In our experiments, all log-linear weights are initialized to be 1; all multinomial distributions are initialized as uniform dis-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We thank the organizers and NIST for making the dataset publicly available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors acknowledge helpful input from three anonymous reviewers, Kevin Gimpel, and David Smith.This work is supported in part by ARDA/DTO Advanced Question Answering for Intelligence (AQUAINT) program award number NBCHC040164.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": " David A. Smith and Jason Eisner. 2006 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 38, |
|
"text": "David A. Smith and Jason Eisner. 2006", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Selectively using relations to improve precision in question answering", |
|
"authors": [ |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Attardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Cisternino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Formica", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Simi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Tommasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellen", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Voorhees", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Harman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 10th Text REtrieval Conference (TREC-10)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Giuseppe Attardi, Antonio Cisternino, Francesco Formica, Maria Simi, Alessandro Tommasi, Ellen M. Voorhees, and D. K. Harman. 2001. Selectively using relations to improve precision in question answering. In Proceedings of the 10th Text REtrieval Conference (TREC-10), Gaithersburg, MD, USA.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains", |
|
"authors": [ |
|
{ |
|
"first": "Leonard", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Baum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Petrie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Soules", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Norman", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "The Annals of Mathematical Statistics", |
|
"volume": "41", |
|
"issue": "1", |
|
"pages": "164--171", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leonard E. Baum, Ted Petrie, George Soules, and Nor- man Weiss. 1970. A maximization technique occur- ring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statis- tics, 41(1):164-171.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "An algorithm that learns what\u015b in a name", |
|
"authors": [ |
|
{

"first": "Daniel",

"middle": [

"M"

],

"last": "Bikel",

"suffix": ""

},

{

"first": "Richard",

"middle": [],

"last": "Schwartz",

"suffix": ""

},

{

"first": "Ralph",

"middle": [

"M"

],

"last": "Weischedel",

"suffix": ""

}
|
], |
|
"year": 1999, |
|
"venue": "Machine Learning", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "211--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel M. Bikel, Richard Schwartz, and Ralph M. Weischedel. 1999. An algorithm that learns what\u015b in a name. Machine Learning, 34(1-3):211-231.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A statistical approach to machine translation", |
|
"authors": [ |
|
{

"first": "Peter",

"middle": [

"F"

],

"last": "Brown",

"suffix": ""

},

{

"first": "John",

"middle": [],

"last": "Cocke",

"suffix": ""

},

{

"first": "Stephen",

"middle": [

"A"

],

"last": "Della Pietra",

"suffix": ""

},

{

"first": "Vincent",

"middle": [

"J"

],

"last": "Della Pietra",

"suffix": ""

},

{

"first": "Frederick",

"middle": [],

"last": "Jelinek",

"suffix": ""

},

{

"first": "John",

"middle": [

"D"

],

"last": "Lafferty",

"suffix": ""

},

{

"first": "Robert",

"middle": [

"L"

],

"last": "Mercer",

"suffix": ""

},

{

"first": "Paul",

"middle": [

"S"

],

"last": "Roossin",

"suffix": ""

}
|
], |
|
"year": 1990, |
|
"venue": "Computational Linguistics", |
|
"volume": "16", |
|
"issue": "2", |
|
"pages": "79--85", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Frederick Jelinek, John D. Laf- ferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Computa- tional Linguistics, 16(2):79-85.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The mathematics of statistical machine translation: Parameter estimation", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estima- tion. Computational Linguistics, 19(2):263-311.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Question answering passage retrieval using dependency relations", |
|
"authors": [ |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Cui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Renxu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keya", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min-Yen", |
|
"middle": [], |
|
"last": "Kan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tat-Seng", |
|
"middle": [], |
|
"last": "Chua", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat- Seng Chua. 2005. Question answering passage re- trieval using dependency relations. In Proceedings of the 28th ACM-SIGIR International Conference on Re- search and Development in Information Retrieval, Sal- vador, Brazil.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Maximum likelihood from incomplete data via the EM algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Dempster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nan", |
|
"middle": [], |
|
"last": "Laird", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Rubin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "Journal of the Royal Statistical Society", |
|
"volume": "39", |
|
"issue": "1", |
|
"pages": "1--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arthur Dempster, Nan Laird, and Donald Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1-38.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Machine translation using probabilistic synchronous dependency insertion grammars", |
|
"authors": [ |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43st Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuan Ding and Martha Palmer. 2005. Machine trans- lation using probabilistic synchronous dependency in- sertion grammars. In Proceedings of the 43st Annual Meeting of the Association for Computational Linguis- tics (ACL), Ann Arbor, MI, USA.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A noisy-channel approach to question answering", |
|
"authors": [ |
|
{ |
|
"first": "Abdessamad", |
|
"middle": [], |
|
"last": "Echihabi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abdessamad Echihabi and Daniel Marcu. 2003. A noisy-channel approach to question answering. In Proceedings of the 41st Annual Meeting of the Associ- ation for Computational Linguistics (ACL), Sapporo, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Learning non-isomorphic tree mappings for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Eisner. 2003. Learning non-isomorphic tree map- pings for machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computa- tional Linguistics (ACL), Sapporo, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Scalable inference and training of context-rich syntactic translation models", |
|
"authors": [ |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Graehl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Deneefe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ignacio", |
|
"middle": [], |
|
"last": "Thayer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44st Annual Meeting of the Association for Computational Linguistics (COLING-ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceed- ings of the 21st International Conference on Computa- tional Linguistics and the 44st Annual Meeting of the Association for Computational Linguistics (COLING- ACL), Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Loosely tree-based alignment for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Gildea. 2003. Loosely tree-based alignment for machine translation. In Proceedings of the 41st An- nual Meeting on Association for Computational Lin- guistics (ACL), Sapporo, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Methods for using textual entailment in open-domain question answering", |
|
"authors": [ |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Hickl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sanda Harabagiu and Andrew Hickl. 2006. Methods for using textual entailment in open-domain question answering. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL), Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "IBM's statistical question answering system-TREC-10", |
|
"authors": [ |
|
{ |
|
"first": "Abraham", |
|
"middle": [], |
|
"last": "Ittycheriah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Franz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 10th Text REtrieval Conference (TREC-10)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abraham Ittycheriah, Martin Franz, and Salim Roukos. 2001. IBM's statistical question answering system- TREC-10. In Proceedings of the 10th Text REtrieval Conference (TREC-10), Gaithersburg, MD, USA.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Maximum conditional likelihood via bound maximization and the CEM algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Tony", |
|
"middle": [], |
|
"last": "Jebara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Pentland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "494--500", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tony Jebara and Alex Pentland. 1999. Maximum con- ditional likelihood via bound maximization and the CEM algorithm. In Proceedings of the 1998 Confer- ence on Advances in Neural Information Processing Systems II (NIPS), pages 494-500, Denver, CO, USA.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Selectively using relations to improve precision in question answering", |
|
"authors": [ |
|
{ |
|
"first": "Boris", |
|
"middle": [], |
|
"last": "Katz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the EACL-2003 Workshop on Natural Language Processing for Question Answering", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Boris Katz and Jimmy Lin. 2003. Selectively using relations to improve precision in question answering. In Proceedings of the EACL-2003 Workshop on Nat- ural Language Processing for Question Answering, Gaithersburg, MD, USA.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Trec2003 qa at bbn: Answering definitional questions", |
|
"authors": [ |
|
{ |
|
"first": "Jinxi", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ana", |
|
"middle": [], |
|
"last": "Licuanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 12th Text REtrieval Conference (TREC-12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinxi Xu Ana Licuanan and Ralph Weischedel. 2003. Trec2003 qa at bbn: Answering definitional questions. In Proceedings of the 12th Text REtrieval Conference (TREC-12), Gaithersburg, MD, USA.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Question-answering using semantic relation triples", |
|
"authors": [ |
|
{ |
|
"first": "Kenneth", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Litkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 8th Text REtrieval Conference (TREC-8)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth C. Litkowski. 1999. Question-answering us- ing semantic relation triples. In Proceedings of the 8th Text REtrieval Conference (TREC-8), Gaithers- burg, MD, USA.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "On the limited memory BFGS method for large scale optimization", |
|
"authors": [ |
|
{ |
|
"first": "Dong", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorge", |
|
"middle": [], |
|
"last": "Nocedal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Math. Programming", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "503--528", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Math. Programming, 45:503-528.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Learning to recognize features of valid textual entailments", |
|
"authors": [ |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Maccartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bill MacCartney, Trond Grenager, Marie-Catherine de Marneffe, Daniel Cer, and Christopher D. Manning. 2006. Learning to recognize features of valid textual entailments. In Proceedings of the Human Language Technology Conference of the North American Chap- ter of the Association for Computational Linguistics (HLT-NAACL), New York, NY, USA.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Building a large annotated corpus of english: the penn treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beat- rice Santorini. 1993. Building a large annotated cor- pus of english: the penn treebank. Computational Lin- guistics, 19(2):313-330.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Online large-margin training of dependency parsers", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernado", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43st Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Koby Crammer, and Fernado Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43st Annual Meeting of the Association for Computational Linguistics (ACL), Ann Arbor, MI, USA.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Maximum likelihood estimation via the ECM algorithm: A general framework", |
|
"authors": [ |
|
{ |
|
"first": "Xiao-Li", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Rubin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of the Conference on Theoretical and Methodological Issues in Machine Translation (TMI)", |
|
"volume": "80", |
|
"issue": "", |
|
"pages": "267--278", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Dan Melamed. 2004. Algorithms for syntax-aware statistical machine translation. In Proceedings of the Conference on Theoretical and Methodological Issues in Machine Translation (TMI), Baltimore, MD, USA. Xiao-Li Meng and Donald B. Rubin. 1993. Maximum likelihood estimation via the ECM algorithm: A gen- eral framework. Biometrika, 80:267-278.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "WordNet: an on-line lexical database", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Beckwith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "International Journal of Lexicography", |
|
"volume": "3", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A. Miller, Richard Beckwith, Christiane Fell- baum, Derek Gross, and Katherine J. Miller. 1990. WordNet: an on-line lexical database. International Journal of Lexicography, 3(4).", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A translation model for sentence retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Vanessa", |
|
"middle": [], |
|
"last": "Murdock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W. Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vanessa Murdock and W. Bruce Croft. 2005. A trans- lation model for sentence retrieval. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT-EMNLP), Vancouver, BC, USA.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Mapping dependencies trees: An application to question answering", |
|
"authors": [ |
|
{ |
|
"first": "Vasin", |
|
"middle": [], |
|
"last": "Punyakanok", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. Mapping dependencies trees: An application to ques- tion answering. In Proceedings of the 8th Interna- tional Symposium on Artificial Intelligence and Math- ematics, Fort Lauderdale, FL, USA.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Monolingual machine translation for paraphrase generation", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for paraphrase gen- eration. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), Barcelona, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Dependency treelet translation: Syntactically informed phrasal SMT", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arul", |
|
"middle": [], |
|
"last": "Menezes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Quirk, Arul Menezes, and Colin Cherry. 2005. De- pendency treelet translation: Syntactically informed phrasal SMT. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL), Ann Arbor, MI, USA.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "A maximum entropy partof-speech tagger", |
|
"authors": [ |
|
{ |
|
"first": "Adwait", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adwait Ratnaparkhi. 1996. A maximum entropy part- of-speech tagger. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Philadelphia, PA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Automatic derivation of surface text patterns for a maximum entropy based question answering system", |
|
"authors": [ |
|
{ |
|
"first": "Deepak", |
|
"middle": [], |
|
"last": "Ravichandran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abraham", |
|
"middle": [], |
|
"last": "Ittycheriah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Human Language Technology Conference and North American Chapter of the Association for Computational Linguistics (HLT-NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deepak Ravichandran, Abharam Ittycheriah, and Salim Roukos. 2003. Automatic derivation of surface text patterns for a maximum entropy based question an- swering system. In Proceedings of the Human Lan- guage Technology Conference and North American Chapter of the Association for Computational Linguis- tics (HLT-NAACL), Edmonton, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Exploring correlation of dependency relation paths for answer extraction", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Shen and Dietrich Klakow. 2006. Exploring corre- lation of dependency relation paths for answer extrac- tion. In Proceedings of the 21st International Confer- ence on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguis- tics (COLING-ACL), Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Exploring syntactic relation patterns for question answering", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geert-Jan", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kruijff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Second International Joint Conference on Natural Language Processing (IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Shen, Geert-Jan M. Kruijff, and Dietrich Klakow. 2005. Exploring syntactic relation patterns for ques- tion answering. In Proceedings of the Second Interna- tional Joint Conference on Natural Language Process- ing (IJCNLP), Jeju Island, Republic of Korea.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Modular approach to error analysis and evaluation for multilingual question answering", |
|
"authors": [ |
|
{ |
|
"first": "Hideki", |
|
"middle": [], |
|
"last": "Shima", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mengqiu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teruko", |
|
"middle": [], |
|
"last": "Mitamura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hideki Shima, Mengqiu Wang, Frank Lin, and Teruko Mitamura. 2006. Modular approach to error analysis and evaluation for multilingual question answering. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC), Genoa, Italy.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "more detail, but did not see any interesting improvements.) parent-child Question parent-child pair align respectively to answer parent-child pair. Augmented with the q.-side dependency label. child-parent Question parent-child pair align respectively to answer child-parent pair. Augmented with the q.-side dependency label. grandparent-child Question parent-child pair align respectively to answer grandparent-child pair. Augmented with the q.-side dependency label. same node Question parent-child pair align to the same answer-word. siblings Question parent-child pair align to siblings in the answer. Augmented with the tree-distance between the a.-side siblings. c-commandThe parent of one answer-side word is an ancestor of the other answer-side word. other A catch-all for all other types of configurations, which are permitted.", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Syntactic alignment configurations are partitioned into these sets for prediction under the Jeopardy model.", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Results on development and test sets. TreeMatch is our implementation of", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |