{ "paper_id": "N19-1029", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:58:38.902312Z" }, "title": "Simple Question Answering with Subgraph Ranking and Joint-Scoring", "authors": [ { "first": "Wenbo", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "wzhao1@andrew.cmu.edu" }, { "first": "Tagyoung", "middle": [], "last": "Chung", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "tagyoung@amazon.com" }, { "first": "Anuj", "middle": [], "last": "Goyal", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "anujgoya@amazon.com" }, { "first": "Angeliki", "middle": [], "last": "Metallinou", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Knowledge graph based simple question answering (KBSQA) is a major area of research within question answering. Although only dealing with simple questions, i.e., questions that can be answered through a single knowledge base (KB) fact, this task is neither simple nor close to being solved. Targeting on the two main steps, subgraph selection and fact selection, the research community has developed sophisticated approaches. However, the importance of subgraph ranking and leveraging the subject-relation dependency of a KB fact have not been sufficiently explored. Motivated by this, we present a unified framework to describe and analyze existing approaches. Using this framework as a starting point, we focus on two aspects: improving subgraph selection through a novel ranking method and leveraging the subject-relation dependency by proposing a joint scoring CNN model with a novel loss function that enforces the wellorder of scores. Our methods achieve a new state of the art (85.44% in accuracy) on the SimpleQuestions dataset. * Work conducted during an internship at Alexa AI, CA.", "pdf_parse": { "paper_id": "N19-1029", "_pdf_hash": "", "abstract": [ { "text": "Knowledge graph based simple question answering (KBSQA) is a major area of research within question answering. Although only dealing with simple questions, i.e., questions that can be answered through a single knowledge base (KB) fact, this task is neither simple nor close to being solved. Targeting on the two main steps, subgraph selection and fact selection, the research community has developed sophisticated approaches. However, the importance of subgraph ranking and leveraging the subject-relation dependency of a KB fact have not been sufficiently explored. Motivated by this, we present a unified framework to describe and analyze existing approaches. Using this framework as a starting point, we focus on two aspects: improving subgraph selection through a novel ranking method and leveraging the subject-relation dependency by proposing a joint scoring CNN model with a novel loss function that enforces the wellorder of scores. Our methods achieve a new state of the art (85.44% in accuracy) on the SimpleQuestions dataset. 
* Work conducted during an internship at Alexa AI, CA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Knowledge graph based simple question answering (KBSQA) is an important area of research within question answering, which is one of the core areas of interest in natural language processing (Yao and Van Durme, 2014; Yih et al., 2015; Dong et al., 2015; Khashabi et al., 2016; Zhang et al., 2018; Hu et al., 2018) . It can be used for many applications such as virtual home assistants, customer service, and chat-bots. A knowledge graph is a multi-entity and multi-relation directed graph containing the information needed to answer the questions. The graph can be represented as a collection of triples {(subject, relation, object)}. Each triple is called a fact, where a directed relational arrow points from the subject node to the object node. A simple question means that the question can be answered by extracting a single fact from the knowledge graph, i.e., the question has a single subject and a single relation, hence a single answer. For example, the question \"Which Harry Potter series did Rufus Scrimgeour appear in?\" can be answered by a single fact (Rufus Scrimgeour, book.book-characters.appears-in-book, Harry Potter and the Deathly Hallows). Given the simplicity of the questions, one would think this task is trivial. Yet it is far from being easy or close to being solved. The complexity lies in two aspects. One is the massive size of the knowledge graph, usually on the order of billions of facts. The other is the variability of the questions in natural language. Based on this anatomy of the problem, the solutions also consist of two steps: (1) selecting a relatively small subgraph from the knowledge graph given a question and (2) selecting the correct fact from the subgraph.", "cite_spans": [ { "start": 190, "end": 215, "text": "(Yao and Van Durme, 2014;", "ref_id": "BIBREF24" }, { "start": 216, "end": 233, "text": "Yih et al., 2015;", "ref_id": "BIBREF25" }, { "start": 234, "end": 252, "text": "Dong et al., 2015;", "ref_id": "BIBREF6" }, { "start": 253, "end": 275, "text": "Khashabi et al., 2016;", "ref_id": "BIBREF13" }, { "start": 276, "end": 295, "text": "Zhang et al., 2018;", "ref_id": "BIBREF28" }, { "start": 296, "end": 312, "text": "Hu et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different approaches have been studied to tackle the KBSQA problem. The common solution for the first step, subgraph selection (which is also known as entity linking), is to label the question with a subject part (mention) and a non-subject part (pattern) and then use the mention to retrieve related facts from the knowledge graph, constituting the subgraph. Sequence labeling models, such as a BiLSTM-CRF tagger, are commonly employed to label the mention and the pattern. To retrieve the subgraph, it is common to search all possible n-grams of the mention against the knowledge graph and collect the facts with matched subjects as the subgraph. The candidate facts in the subgraph may contain incorrect subjects and relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
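{ "text": "To make the retrieval step concrete, the following is a minimal Python sketch of n-gram based subgraph selection; the tiny fact list and the function names are illustrative assumptions, not the actual Freebase-scale implementation.
from collections import defaultdict

def ngrams(tokens):
    # all contiguous n-grams of a token list, n = 1 .. len(tokens)
    return {' '.join(tokens[i:j]) for i in range(len(tokens)) for j in range(i + 1, len(tokens) + 1)}

def build_index(facts):
    # map every n-gram of every subject name to the facts with that subject
    index = defaultdict(set)
    for fact in facts:
        for gram in ngrams(fact[0].lower().split()):
            index[gram].add(fact)
    return index

def candidate_subgraph(mention, index):
    # union of facts whose subject shares at least one n-gram with the mention
    candidates = set()
    for gram in ngrams(mention.lower().split()):
        candidates |= index.get(gram, set())
    return candidates

facts = [('Rufus Scrimgeour', 'book.book-characters.appears-in-book', 'Harry Potter and the Deathly Hallows'), ('Rufus Wainwright', 'music.singer.singer-of', \"I Don't Know What That Is\")]
print(candidate_subgraph('Rufus Scrimgeour', build_index(facts)))  # both facts are retrieved via the shared unigram 'rufus'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },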
{ "text": "In our running example, we first identify the mention in the question, i.e., \"Rufus Scrimgeour\", and then retrieve the subgraph, which could contain the following facts: {(Rufus Scrimgeour, book.book-characters.appears-in-book, Harry Potter and the Deathly Hallows), (Rufus Wainwright, music.singer.singer-of, I Don't Know What That Is)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For the second step, fact selection, a common approach is to construct models to match the mention with candidate subjects and match the pattern with candidate relations in the subgraph from the first step. For example, the correct fact is identified by matching the mention \"Rufus Scrimgeour\" with candidate subjects {Rufus Scrimgeour, Rufus Wainwright} and matching the pattern \"Which Harry Potter series did m appear in\" with candidate relations {book.book-characters.appears-in-book, music.singer.singer-of}. Different neural network models can be employed (Bordes et al., 2015; Dai et al., 2016; Yin et al., 2016; Yu et al., 2017; Petrochuk and Zettlemoyer, 2018) .", "cite_spans": [ { "start": 560, "end": 581, "text": "(Bordes et al., 2015;", "ref_id": "BIBREF2" }, { "start": 582, "end": 599, "text": "Dai et al., 2016;", "ref_id": "BIBREF5" }, { "start": 600, "end": 617, "text": "Yin et al., 2016;", "ref_id": "BIBREF26" }, { "start": 618, "end": 634, "text": "Yu et al., 2017;", "ref_id": "BIBREF27" }, { "start": 635, "end": 667, "text": "Petrochuk and Zettlemoyer, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Effective as these existing approaches are, there are three major drawbacks. (1) First, in subgraph selection, there is no effective way to deal with inexact matches and the facts in the subgraph are not ranked by relevance to the mention; however, we will later show that effective ranking can substantially improve the subgraph recall. (2) Second, the existing approaches do not leverage the dependency between mention-subjects and pattern-relations; however, mismatches of mention-subject can lead to incorrect relations and hence incorrect answers. We will later show that leveraging such dependency contributes to the overall accuracy. (3) Third, the existing approaches minimize the ranking loss (Yin et al., 2016; Lukovnikov et al., 2017; Qu et al., 2018) ; however, we will later show that the ranking loss is suboptimal.", "cite_spans": [ { "start": 698, "end": 716, "text": "(Yin et al., 2016;", "ref_id": "BIBREF26" }, { "start": 717, "end": 741, "text": "Lukovnikov et al., 2017;", "ref_id": "BIBREF16" }, { "start": 742, "end": 758, "text": "Qu et al., 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Addressing these points, the contributions of this paper are three-fold: (1) We propose a subgraph ranking method with a combined literal and semantic score to improve the recall of the subgraph selection. It can deal with inexact matches and achieves better performance compared to the previous state of the art. (2) We propose a low-complexity joint-scoring CNN model and a well-order loss to improve fact selection. It couples the subject matching and the relation matching by learning order-preserving scores and dynamically adjusting the weights of scores. 
(3) We achieve better performance (85.44% in accuracy) than the previous state of the art on the SimpleQuestions dataset, surpassing the best baseline by a large margin. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The methods for subgraph selection fall into two schools: parsing methods (Berant et al., 2013; Yih et al., 2015; Zheng et al., 2018) and sequence tagging methods (Yin et al., 2016) . The latter proves to be simpler yet effective, with the most effective model being BiLSTM-CRF (Yin et al., 2016; Dai et al., 2016; Petrochuk and Zettlemoyer, 2018) .", "cite_spans": [ { "start": 72, "end": 93, "text": "(Berant et al., 2013;", "ref_id": "BIBREF0" }, { "start": 94, "end": 111, "text": "Yih et al., 2015;", "ref_id": "BIBREF25" }, { "start": 112, "end": 131, "text": "Zheng et al., 2018)", "ref_id": "BIBREF30" }, { "start": 161, "end": 179, "text": "(Yin et al., 2016)", "ref_id": "BIBREF26" }, { "start": 276, "end": 294, "text": "(Yin et al., 2016;", "ref_id": "BIBREF26" }, { "start": 295, "end": 312, "text": "Dai et al., 2016;", "ref_id": "BIBREF5" }, { "start": 313, "end": 345, "text": "Petrochuk and Zettlemoyer, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The two categories of methods for fact selection are match-scoring models and classification models. The match-scoring models employ neural networks to score the similarity between the question and the candidate facts in the subgraph and then find the best match. For instance, Bordes et al. (2015) use a memory network to encode the questions and the facts into the same representation space and score their similarities. Yin et al. (2016) use two independent models, a character-level CNN and a word-level CNN with attentive max-pooling. Dai et al. (2016) formulate a two-step conditional probability estimation problem and use BiGRU networks. Yu et al. (2017) use two separate hierarchical residual BiLSTMs to represent questions and relations at different abstractions and granularities. Qu et al. (2018) propose an attentive recurrent neural network with a similarity-matrix-based convolutional neural network (AR-SMCNN) to capture the semantic-level and literal-level similarities. In the classification models, Ture and Jojic (2017) employ a two-layer BiGRU model. Petrochuk and Zettlemoyer (2018) employ a BiLSTM to classify the relations and achieve state-of-the-art performance. In addition, Mohammed et al. (2018) evaluate various strong baselines with simple neural networks (LSTMs and GRUs) or non-neural network models (CRF). Lukovnikov et al. (2017) propose an end-to-end word/character-level encoding network to rank subject-relation pairs and retrieve relevant facts.", "cite_spans": [ { "start": 421, "end": 438, "text": "Yin et al. (2016)", "ref_id": "BIBREF26" }, { "start": 537, "end": 554, "text": "Dai et al. (2016)", "ref_id": "BIBREF5" }, { "start": 643, "end": 659, "text": "Yu et al. (2017)", "ref_id": "BIBREF27" }, { "start": 789, "end": 805, "text": "Qu et al. (2018)", "ref_id": "BIBREF20" }, { "start": 1200, "end": 1222, "text": "Mohammed et al. (2018)", "ref_id": "BIBREF17" }, { "start": 1338, "end": 1362, "text": "Lukovnikov et al. 
(2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "However, the multitude of methods yield progressively smaller gains with increasing model complexity (Mohammed et al., 2018; Gupta et al., 2018) . Most approaches focus on fact matching and relation classification while assigning less emphasis to subgraph selection. They also do not sufficiently leverage the important signature of the knowledge graph-the subject-relation dependency, namely, incorrect subject matching can lead to incorrect relations. Our approach is similar to (Yin et al., 2016 ), but we take a different path by focusing on accurate subgraph selection and utilizing the subject-relation dependency.", "cite_spans": [ { "start": 101, "end": 124, "text": "(Mohammed et al., 2018;", "ref_id": "BIBREF17" }, { "start": 125, "end": 144, "text": "Gupta et al., 2018)", "ref_id": "BIBREF7" }, { "start": 481, "end": 498, "text": "(Yin et al., 2016", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Ranking and Joint-Scoring", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Answering with Subgraph", "sec_num": "3" }, { "text": "We provide a unified description of the KBSQA framework. First, we define Definition 1. Answerable Question A question is answerable if and only if one of its facts is in the knowledge graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "Let Q := {q | q is anwerable} be the set of answerable questions, and G := {(s, r, o) | s \u2208 S, r \u2208 R, o \u2208 O} be the knowledge graph, where S, R and O are the set of subjects, relations and objects, respectively. The triple (s, r, o) is a fact. By the definition of answerable questions, the key to solving the KBSQA problem is to find the fact in knowledge graph corresponding to the question, i.e., we want a map \u03a6 : Q \u2192 G. Ideally, we would like this map to be injective such that for each question, the corresponding fact can be uniquely determined (more precisely, the injection maps from the equivalent class of Q to G since similar questions may have the same answer, but we neglect such difference here for simplicity). However, in general, it is hard to find such map directly because of (1) the massive knowledge graph and (2) natural language variations in questions. Therefore, end-to-end approaches such as parsing to structured query and encodingdecoding models are difficult to achieve (Yih et al., 2015; Sukhbaatar et al., 2015; Kumar et al., 2016; He and Golub, 2016; Hao et al., 2017) . 
Instead, related works and this work mitigate the difficulties by breaking down the problem into the aforementioned two steps, as illustrated below:", "cite_spans": [ { "start": 1000, "end": 1018, "text": "(Yih et al., 2015;", "ref_id": "BIBREF25" }, { "start": 1019, "end": 1043, "text": "Sukhbaatar et al., 2015;", "ref_id": "BIBREF21" }, { "start": 1044, "end": 1063, "text": "Kumar et al., 2016;", "ref_id": "BIBREF15" }, { "start": 1064, "end": 1083, "text": "He and Golub, 2016;", "ref_id": "BIBREF10" }, { "start": 1084, "end": 1101, "text": "Hao et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "(1) Subgraph Selection:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "question \u2212\u2192 {mention, pattern}, mention \u2212\u2192 subgraph", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "(2) Fact Selection:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "match mention \u2194 subject and pattern \u2194 relation, \u2200(subject, relation) \u2208 subgraph \u21d2 (subject*, relation*) \u2212\u2192 object* (answer*)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "In the first step, the size of the knowledge graph is significantly reduced. In the second step, the variations of questions are confined to mention-subject variation and pattern-relation variation. Formally, we denote the questions as the union of mentions and patterns Q = M \u222a P and the knowledge graph as a subset of the Cartesian product of subjects, relations and objects G \u2286 S \u00d7 R \u00d7 O. In the first step, given a question q \u2208 Q, we find the mention via a sequence tagger \u03c4 : Q \u2192 M, q \u2192 m q . The tagged mention consists of a sequence of words m q = {w 1 , . . . , w n } and the pattern is the question excluding the mention p q = q\\m q . We denote the set of n-grams of m q as W n (m q ) and use W n (m q ) to retrieve the subgraph as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "S q \u00d7 R q \u00d7 O q \u2287 G q := {(s, r, o) \u2208 G | W n (s) \u2229 W n (m q ) \u2260 \u2205, n = 1, . . . , |m q |}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "Next, to select the correct fact (the answer) in the subgraph, we match the mention m q with candidate subjects in S q , and match the pattern p q with candidate relations in R q . 
Specifically, we want to maximize the log-likelihood", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "max_{s \u2208 S_q} log P(s | m_q), max_{r \u2208 R_q} log P(r | p_q)", "eq_num": "(1)" } ], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "The probabilities in (1) are modeled by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(s | m_q) = e^{h(f(m_q), f(s))} / \u03a3_{s\u2032 \u2208 S_q} e^{h(f(m_q), f(s\u2032))}", "eq_num": "(2)" } ], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(r | p_q) = e^{h(g(p_q), g(r))} / \u03a3_{r\u2032 \u2208 R_q} e^{h(g(p_q), g(r\u2032))}", "eq_num": "(3)" } ], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "where f : M \u222a S \u2192 R^d maps the mention and the subject onto a d-dimensional differentiable manifold embedded in the Hilbert space and, similarly, g : P \u222a R \u2192 R^d. Both f and g are in the form of neural networks. The map h : R^d \u00d7 R^d \u2192 R is a metric that measures the similarity of the vector representations (e.g., the cosine similarity). Practically, directly optimizing (1) is difficult because the subgraph G_q is large and computing the partition functions in (2) and (3) can be intractable. Alternatively, a surrogate objective, the ranking loss (or hinge loss with negative samples) (Collobert and Weston, 2008; Dai et al., 2016) , is minimized:", "cite_spans": [ { "start": 354, "end": 383, "text": "(Collobert and Weston, 2008;", "ref_id": "BIBREF4" }, { "start": 384, "end": 401, "text": "Dai et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_rank = \u03a3_{q \u2208 Q} ( \u03a3_{s^- \u2208 S_q} [h_f(m_q, s^-) - h_f(m_q, s^+) + \u03bb]_+ + \u03a3_{r^- \u2208 R_q} [h_g(p_q, r^-) - h_g(p_q, r^+) + \u03bb]_+ )", "eq_num": "(4)" } ], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "where h_f(\u00b7, \u00b7) = h(f(\u00b7), f(\u00b7)) and h_g(\u00b7, \u00b7) = h(g(\u00b7), g(\u00b7)); the signs + and - indicate a correct candidate and an incorrect candidate, [\u00b7]_+ = max(\u00b7, 0), and \u03bb > 0 is a margin term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" },
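{ "text": "As a concrete reference for (4), here is a minimal NumPy sketch of the hinge-style ranking loss for one question; the cosine similarity h, the margin value, and the toy embeddings are illustrative assumptions rather than the exact training setup.
import numpy as np

def hinge_ranking_loss(pos_score, neg_scores, margin=0.1):
    # [h(m_q, s-) - h(m_q, s+) + lambda]_+ summed over negative candidates
    return float(np.sum(np.maximum(neg_scores - pos_score + margin, 0.0)))

def cosine(u, v):
    # similarity map h on the embedded representations
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(0)
m = rng.normal(size=16)                            # embedded mention f(m_q)
s_pos = rng.normal(size=16)                        # embedded correct subject f(s+)
s_negs = [rng.normal(size=16) for _ in range(5)]   # embedded incorrect subjects f(s-)
loss = hinge_ranking_loss(cosine(m, s_pos), np.array([cosine(m, s) for s in s_negs]))
# an analogous term over (pattern, relation) pairs with h_g would be added", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" },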
{ "text": "Other variants of the ranking loss are also studied (Cao et al., 2006; Zhao et al., 2015; Vu et al., 2016) .", "cite_spans": [ { "start": 82, "end": 100, "text": "(Cao et al., 2006;", "ref_id": "BIBREF3" }, { "start": 101, "end": 119, "text": "Zhao et al., 2015;", "ref_id": "BIBREF29" }, { "start": 120, "end": 136, "text": "Vu et al., 2016)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Unified Framework", "sec_num": "3.1" }, { "text": "To retrieve the subgraph of candidate facts using n-gram matching (Bordes et al., 2015) , one first constructs the map from n-grams W n (s) to subject s for all subjects in the knowledge graph, yielding", "cite_spans": [ { "start": 66, "end": 87, "text": "(Bordes et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Subgraph Ranking", "sec_num": "3.2" }, { "text": "{W n (s) \u2192 s | s \u2208 S, n = 1, . . . , |s|}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subgraph Ranking", "sec_num": "3.2" }, { "text": "Next, one uses the n-grams of the mention W n (m) to match the n-grams of the subjects W n (s) and fetches the matched facts to compose the subgraph", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subgraph Ranking", "sec_num": "3.2" }, { "text": "{(s, r, o) \u2208 G | W n (s) \u2229 W n (m) \u2260 \u2205, n = 1, . . . , |m|}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subgraph Ranking", "sec_num": "3.2" }, { "text": "In our running example, for the mention \"Rufus Scrimgeour\", we collect the subgraph of facts with the bigrams and unigrams of subjects matching the bigram {\"Rufus Scrimgeour\"} and the unigrams {\"Rufus\", \"Scrimgeour\"}. One problem with this approach is that the retrieved subgraph can be fairly large. Therefore, it is desirable to rank the subgraph by relevance to the mention and only preserve the most relevant facts. To this end, different ranking methods are used, such as a surface-level matching score with added heuristics (Yin et al., 2016) , a relation detection network (Yu et al., 2017; Hao et al., 2018) , and a term frequency-inverse document frequency (TF-IDF) score (Ture and Jojic, 2017; Mohammed et al., 2018) . However, these ranking methods only consider matching surface forms and cannot handle inexact matches, synonyms, or polysemy (\"New York\", \"the New York City\", \"Big Apple\").", "cite_spans": [ { "start": 524, "end": 542, "text": "(Yin et al., 2016)", "ref_id": "BIBREF26" }, { "start": 572, "end": 589, "text": "(Yu et al., 2017;", "ref_id": "BIBREF27" }, { "start": 590, "end": 607, "text": "Hao et al., 2018)", "ref_id": "BIBREF8" }, { "start": 667, "end": 689, "text": "(Ture and Jojic, 2017;", "ref_id": "BIBREF22" }, { "start": 690, "end": 712, "text": "Mohammed et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Subgraph Ranking", "sec_num": "3.2" }, { "text": "This motivates us to rank the subgraph not only by literal relevance but also by semantic relevance. Hence, we propose a ranking score with literal closeness and semantic closeness. Specifically, the literal closeness is measured by the length of the longest common subsequence |\u03c3|(s, m) between a subject s and a mention m. The semantic closeness is measured by the co-occurrence probability of the subject s and the mention m", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subgraph Ranking", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(s, m) = P(s | m) P(m) = P(w_1, . . . , w_n | w\u0303_1, . . . , w\u0303_m) P(w\u0303_1, . . . , w\u0303_m) (5) = \u03a0_{i=1}^{n} P(w_i | w\u0303_1, . . . , w\u0303_m) P(w\u0303_1, . . . , w\u0303_m) (6) = \u03a0_{i=1}^{n} \u03a0_{k=1}^{m} P(w_i | w\u0303_k) P(w\u0303_1, . . . , w\u0303_m) (7) = \u03a0_{i=1}^{n} \u03a0_{k=1}^{m} P(w_i | w\u0303_k) \u03a0_{j=1}^{m-1} P(w\u0303_{j+1} | w\u0303_j) P(w\u0303_1)", "eq_num": "(8)" } ], "section": "Subgraph Ranking", "sec_num": "3.2" }, { "text": "where w_1, . . . , w_n are the words of the subject s and w\u0303_1, . . . , w\u0303_m are the words of the mention m; from (5) to (6) we assume conditional independence between the words in the subject and the words in the mention; from (6) to (7) and from (7) to (8) we factorize the terms using the chain rule with a conditional independence assumption. The marginal term P(w\u0303_1) is calculated from the word occurrence frequency. Each conditional term is approximated by P(w_i | w_j) \u2248 exp{\u0175_i^T \u0175_j}, where the \u0175_i are pretrained GloVe vectors (Pennington et al., 2014) . These vectors are obtained by taking into account the word co-occurrence probabilities of the surrounding context. Hence, the GloVe vector space encodes the semantic closeness. In practice, we use the log-likelihood as the semantic score to convert the multiplication in (8) to a summation, and normalize the GloVe embeddings into a unit ball. Then, the score for ranking the subgraph is the weighted sum of the literal score and the semantic score", "cite_spans": [ { "start": 417, "end": 442, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Subgraph Ranking", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score(s, m) = \u03c4 |\u03c3|(s, m) + (1 - \u03c4) log P(s, m),", "eq_num": "(9)" } ], "section": "Subgraph Ranking", "sec_num": "3.2" }, { "text": "where \u03c4 is a hyper-parameter whose value needs to be tuned on the validation set. Consequently, for each question q, we can get the top-n ranked subgraph G^n_{q\u2193} as well as the corresponding top-n ranked candidate subjects S^n_{q\u2193} and relations R^n_{q\u2193}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subgraph Ranking", "sec_num": "3.2" }, { "text": "Once we have the ranked subgraph, we next need to identify the correct fact in the subgraph. One school of conventional methods (Bordes et al., 2014, 2015; Yin et al., 2016; Dai et al., 2016) minimizes the surrogate ranking loss (4), where neural networks are used to transform the (subject, mention) and (relation, pattern) pairs into a Hilbert space and score them with an inner product. One problem with this approach is that it matches mention-subject and pattern-relation separately, neglecting the difference of their contributions to fact matching.", "cite_spans": [ { "start": 128, "end": 148, "text": "(Bordes et al., 2014", "ref_id": "BIBREF1" }, { "start": 149, "end": 171, "text": "2015;", "ref_id": "BIBREF2" }, { "start": 172, "end": 189, "text": "Yin et al., 2016;", "ref_id": "BIBREF26" }, { "start": 190, "end": 207, "text": "Dai et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "Figure 1: The model takes input pairs (mention, subject) and (pattern, relation) to produce the similarity scores. The loss dynamically adjusts the weights and enforces the order of positive and negative scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "
Given that the number of subjects (on the order of millions) is much larger than the number of relations (on the order of thousands), incorrect subject matching can lead to a larger error than incorrect relation matching. Therefore, matching the subjects correctly should be given more importance than matching the relations. Further, the ranking loss is suboptimal, as it does not preserve the relative order of the matching scores. We empirically find that the ranking loss tends to bring the matching scores to the neighborhood of zero (during training, the scores shrink to very small numbers), which is not functioning as intended.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "To address these points, we propose a joint-scoring model with a well-order loss (Figure 1 ). Together they learn to map joint-input pairs to order-preserving scores supervised by a well-order loss, hence the name. The joint-scoring model takes joint-input pairs, (subject, mention) or (relation, pattern), and produces the similarity scores directly. The well-order loss then enforces the well-order of the scores.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 87, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "A well-order, first of all, is a total order, i.e., a binary relation on a set that is antisymmetric, transitive, and connex; in our case it is just the \"\u2264\" relation. In addition, a well-order is a total order with the property that every non-empty subset has a least element. The well-order requires that the scores of correct matches are always larger than or equal to the scores of incorrect matches, i.e., \u2200i : \u2200j : S_i^+ \u2265 S_j^-, where S_i^+ and S_j^- denote the score of a correct match and the score of an incorrect match.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "We derive the well-order loss in the following way. Let S = {S_1, . . . , S_n} = S^+ \u222a S^- be the set of scores, where S^+ and S^- are the sets of scores of correct and incorrect matches. Let I = I^+ \u222a I^- be the index set of S, with |I^+| = n_1, |I^-| = n_2, and n = n_1 + n_2. Following the well-order relation,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "inf S^+ \u2265 sup S^- \u21d4 \u2200i^+ \u2208 I^+ : \u2200i^- \u2208 I^- : S^+_{i^+} - S^-_{i^-} \u2265 0 \u21d4 \u03a3_{i^+ \u2208 I^+} \u03a3_{i^- \u2208 I^-} (S^+_{i^+} - S^-_{i^-}) \u2265 0 (10) \u21d4 n_2 \u03a3_{i^+ \u2208 I^+} S^+_{i^+} - n_1 \u03a3_{i^- \u2208 I^-} S^-_{i^-} \u2265 0", "eq_num": "(11)" } ], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "where from (10) to (11) we expand the sums and reorder the terms. 
Consequently, we obtain the well-order loss", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_well-order(S^ms, S^pr) = [ |I^+| \u03a3_{i^- \u2208 I^-} S^ms_{i^-} - |I^-| \u03a3_{i^+ \u2208 I^+} S^ms_{i^+} + |I^+||I^-| \u03bb ]_+ + [ |J^+| \u03a3_{j^- \u2208 J^-} S^pr_{j^-} - |J^-| \u03a3_{j^+ \u2208 J^+} S^pr_{j^+} + |J^+||J^-| \u03bb ]_+", "eq_num": "(12)" } ], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "where S^ms and S^pr are the scores for the (mention, subject) and (pattern, relation) pairs of a question; I and J are the index sets of the candidate subjects and relations in the ranked subgraph; + and - indicate the correct and incorrect candidates; [\u00b7]_+ = max(\u00b7, 0); and \u03bb > 0 is a margin term. Then, the objective (1) becomes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min \u03a3_{q \u2208 Q, (s, r) \u2208 S^n_{q\u2193} \u00d7 R^n_{q\u2193}} [ |I^+| \u03a3_{i^- \u2208 I^-} h_f(m_q, s_{i^-}) - |I^-| \u03a3_{i^+ \u2208 I^+} h_f(m_q, s_{i^+}) + |I^+||I^-| \u03bb ]_+ + [ |J^+| \u03a3_{j^- \u2208 J^-} h_g(p_q, r_{j^-}) - |J^-| \u03a3_{j^+ \u2208 J^+} h_g(p_q, r_{j^+}) + |J^+||J^-| \u03bb ]_+", "eq_num": "(13)" } ], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "This new objective with the well-order loss differs from the ranking loss (4) in two ways, and plays a vital role in the optimization. First, instead of considering the match of mention-subjects and pattern-relations separately, (13) jointly considers both input pairs and their dependency. Specifically, (13) incorporates this dependency as the weight factors |I| (for subjects) and |J| (for relations). These controlling factors are adjusted automatically and dynamically, as they are the sizes of the candidate subject and relation sets. Further, the match of subjects, weighted by (I^+, I^-), will control the match of relations, weighted by (J^+, J^-). To see this, for a question and a fixed number of candidate facts in the subgraph, |I| = |J|, the number of incorrect subjects |I^-| is usually larger than the number of incorrect relations |J^-|, which causes a larger loss for mismatching subjects. As a result, the model is forced to match subjects more correctly, and in turn, prune the relations with incorrect subjects and reduce the size of J^-, leading to a smaller loss. 
Second, the well-order loss enforces the well-order relation of the scores, while the ranking loss does not have such a constraint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint-Scoring Model with Well-Order Loss", "sec_num": "3.3" }, { "text": "Here, we evaluate our proposed approach for the KBSQA problem on the SimpleQuestions benchmark dataset and compare it with baseline approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The SimpleQuestions (Bordes et al., 2015) dataset was released by Facebook AI Research. It is the standard dataset on which almost all previous state-of-the-art literature reported their numbers (Gupta et al., 2018; Hao et al., 2018) . It also represents the largest publicly available dataset for KBSQA, with its size several orders of magnitude larger than that of other available datasets. It has 108,442 simple questions with the corresponding facts from subsets of the Freebase (FB2M and FB5M). There are 1,837 unique relations. We use the default train, validation and test partitions (Bordes et al., 2015) with 75,910, 10,845 and 21,687 questions, respectively. We use FB2M with 2,150,604 entities, 6,701 relations and 14,180,937 facts.", "cite_spans": [ { "start": 20, "end": 41, "text": "(Bordes et al., 2015)", "ref_id": "BIBREF2" }, { "start": 198, "end": 218, "text": "(Gupta et al., 2018;", "ref_id": "BIBREF7" }, { "start": 219, "end": 236, "text": "Hao et al., 2018)", "ref_id": "BIBREF8" }, { "start": 587, "end": 608, "text": "(Bordes et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "For sequence tagging, we use the same BiLSTM-CRF model as the baseline (Dai et al., 2016) to label each word in the question as either subject or non-subject. The configurations of the model (Table 1) largely follow the baseline (Dai et al., 2016) .", "cite_spans": [ { "start": 71, "end": 89, "text": "(Dai et al., 2016)", "ref_id": "BIBREF5" }, { "start": 231, "end": 249, "text": "(Dai et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 191, "end": 200, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": "4.2" }, { "text": "For subgraph selection, we use only unigrams of the tagged mention to retrieve the candidate facts (see Section 3.2) and rank them by the proposed relevance score (9) with the tuned weight \u03c4 = 0.9 (hence emphasizing literal matching more). We select the facts with the top-n scores as the subgraph and compare the corresponding recalls with the baseline method (Yin et al., 2016) .", "cite_spans": [ { "start": 361, "end": 379, "text": "(Yin et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "4.2" }, { "text": "For fact selection, we employ a character-based CNN (CharCNN) model to score (mention, subject) pairs and a word-based CNN (WordCNN) model to score (pattern, relation) pairs (with model configurations shown in Table 2 ), which is similar to one of the state-of-the-art baselines, AMPCNN (Yin et al., 2016) . In fact, we first replicated the AMPCNN model and achieved comparable results, and then modified the AMPCNN model to take joint inputs and output scores directly (see Section 3.3 and Figure 1 ). Our CNN models have only two convolutional layers (versus six convolutional layers in the baseline) and have no attention mechanism, bearing much lower complexity than the baseline.", "cite_spans": [ { "start": 287, "end": 305, "text": "(Yin et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 210, "end": 217, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 491, "end": 499, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Models", "sec_num": "4.2" },
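{ "text": "To make the training objective concrete, below is a minimal NumPy sketch of one bracket of the well-order loss (12); the toy scores and the margin value are illustrative assumptions, and in the real model the scores would come from the joint-scoring CNNs.
import numpy as np

def well_order_term(pos_scores, neg_scores, margin=0.1):
    # one [.]_+ bracket of (12): |I+| * sum(S-) - |I-| * sum(S+) + |I+||I-| * lambda
    n_pos, n_neg = len(pos_scores), len(neg_scores)
    gap = n_pos * np.sum(neg_scores) - n_neg * np.sum(pos_scores) + n_pos * n_neg * margin
    return max(float(gap), 0.0)

# (mention, subject) scores plus (pattern, relation) scores for one question
loss = (well_order_term(np.array([0.9]), np.array([0.2, 0.1, 0.4]))
        + well_order_term(np.array([0.8]), np.array([0.3, 0.5])))
# dividing the bracket by |I+||I-| shows it vanishes exactly when the mean positive
# score exceeds the mean negative score by at least the margin, i.e., the well-order holds", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "4.2" },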
{ "text": "The CharCNN and WordCNN differ only in the embedding layer, the former using character embeddings and the latter using word embeddings. The optimizer used for training the models is Adam (Kingma and Ba, 2014). The learning configurations are shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 936, "end": 943, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Models", "sec_num": "4.2" }, { "text": "For the hyper-parameters shown in Tables 1, 2 and 3, we largely follow the settings in the baseline literature (Yin et al., 2016; Dai et al., 2016) to promote a fair comparison. Other hyper-parameters, such as the \u03c4 in the relevance score (9), are tuned on the validation set.", "cite_spans": [ { "start": 109, "end": 127, "text": "(Yin et al., 2016;", "ref_id": "BIBREF26" }, { "start": 128, "end": 145, "text": "Dai et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": "4.2" }, { "text": "Our proposed approach and the baseline approaches are evaluated in terms of (1) the top-n subgraph selection recall (the percentage of questions that have the correct subjects in the top-n candidates) and (2) the fact selection accuracy (i.e., the overall question answering accuracy).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "4.2" }, { "text": "Subgraph selection The subgraph selection results for our approach and one of the state-of-the-art baselines (Yin et al., 2016) are summarized in Table 4 . Both the baseline and our approach use unigrams to retrieve candidates. The baseline ranks the candidates by the length of the longest common subsequence with heuristics, while we rank the candidates by the joint relevance score defined in (9). We see that the literal score used in the baseline performs well and that using only the semantic score (the log-likelihood (8)) does not outperform the baseline (except for the top-50 case). This is due to the nature of how the questions in the SimpleQuestions dataset are generated: the majority of the questions only contain mentions matching the subjects in the Freebase at the lexical level, making the literal score sufficiently effective. However, we see that combining the literal score and the semantic score outperforms the baseline by a large margin. For top-1, 5, 10, 20 and 50 recall, our ranking approach surpasses the baseline by 11.9%, 5.4%, 4.6%, 3.9% and 4.1%, respectively. Our approach also surpasses other baselines (Lukovnikov et al., 2017; Yu et al., 2017; Qu et al., 2018; Gupta et al., 2018) under the same settings. We note that the recall is not monotonically increasing with the top-n. The reason is that, as opposed to conventional methods which rank the entire subgraph returned from unigram matching to select the top-n candidates, we choose only the first 200 candidates from the subgraph and then rank them with our proposed ranking score. This is more efficient, but at the price of potentially dropping the correct facts. 
One could trade efficiency for accuracy by ranking all the candidates in the subgraph.", "cite_spans": [ { "start": 108, "end": 126, "text": "(Yin et al., 2016)", "ref_id": "BIBREF26" }, { "start": 1119, "end": 1144, "text": "(Lukovnikov et al., 2017;", "ref_id": "BIBREF16" }, { "start": 1145, "end": 1161, "text": "Yu et al., 2017;", "ref_id": "BIBREF27" }, { "start": 1162, "end": 1178, "text": "Qu et al., 2018;", "ref_id": "BIBREF20" }, { "start": 1179, "end": 1198, "text": "Gupta et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 145, "end": 152, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "Fact selection The fact selection results for our approach and the baselines are shown in Table 5 . The object accuracy is the same as the overall question answering accuracy. Recall that in Section 3.3 we explained that the weight components in the well-order loss (13) are adjusted dynamically during training to impose a larger penalty for mention-subject mismatches and hence enforce correct matches. This can be observed by looking at the different loss components and weights as well as the subject and relation matching accuracies during training. As the weights for mention-subject matches increase, the losses for mention-subject matches also increase, while both the errors for mention-subject matches and pattern-relation matches are high. To reduce the errors, the model is forced to match mention-subject more correctly. As a result, the corresponding weights and losses decrease, and both mention-subject and pattern-relation match accuracies increase.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "Table 5: Fact Selection Accuracy (%). The object accuracy is the end-to-end question answering accuracy, while subject and relation accuracies refer to separately computed subject accuracy and relation accuracy. (Recoverable rows: 1 AMPCNN 76.4 (Yin et al., 2016); 2 BiLSTM 78.1 (Petrochuk and Zettlemoyer, 2018).)", "cite_spans": [ { "start": 58, "end": 76, "text": "(Yin et al., 2016)", "ref_id": "BIBREF26" }, { "start": 91, "end": 124, "text": "(Petrochuk and Zettlemoyer, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 127, "end": 134, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "Effectiveness of well-order loss and joint-scoring model The first and second rows of Table 5 are taken from the baselines AMPCNN (Yin et al., 2016) and BiLSTM (Petrochuk and Zettlemoyer, 2018) (the state of the art prior to our work 2 ). The third row shows the accuracy of the baseline with our proposed well-order loss, and we see a 1.3% improvement, demonstrating the effectiveness of the well-order loss. Further, the fourth row shows the accuracy of our joint-scoring (JS) model with the well-order loss, and we see a 3% improvement over the best baseline 3 , demonstrating the effectiveness of the joint-scoring model. Effectiveness of subgraph ranking The fifth row of Table 5 shows the accuracy of our joint-scoring model with the well-order loss and the top-50 ranked subgraph, and we see a further 4.3% improvement over our model without subgraph ranking (the fourth row), and a 7.3% improvement over the best baseline. In addition, the subject accuracy increases by 4.0%, which is due to the subgraph ranking. 
Interestingly, the relation accuracy increases by 7.8%, which supports our claim that improving subject matching can improve relation matching. This demonstrates the effectiveness of our subgraph ranking and joint-scoring approach. The sixth row shows the accuracy of our joint-scoring model with the well-order loss and only the top-1 subject. In this case, the subject accuracy is limited by the top-1 recall, which is 85.5%. Despite that, our approach outperforms the best baseline by 1.2%. Further, the relation accuracy increases by 7.1% over the fifth row, because restricting the subject substantially confines the choice of relations. This shows that a sufficiently high top-1 subgraph recall reduces the need for subject matching.", "cite_spans": [ { "start": 85, "end": 103, "text": "(Yin et al., 2016)", "ref_id": "BIBREF26" }, { "start": 115, "end": 148, "text": "(Petrochuk and Zettlemoyer, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 42, "end": 49, "text": "Table 5", "ref_id": null }, { "start": 626, "end": 633, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "3 At the time of submission we also found that Hao et al. (2018)", "cite_spans": [ { "start": 47, "end": 64, "text": "Hao et al. (2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "In order to analyze what constitutes the errors of our approach, we select the questions in the test set for which our best model has predicted wrong answers, and analyze the sources of errors (see Table 6 ). We observe that the errors can be categorized as follows: (1) Incorrect subject prediction; however, some subjects are actually correct, e.g., the prediction \"New York\" vs. \"New York City.\"", "cite_spans": [], "ref_spans": [ { "start": 197, "end": 204, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.4" }, { "text": "(2) Incorrect relation prediction; however, some relations are actually correct, e.g., the prediction \"fictional-universe.fictional-character.character-created-by\" vs. \"book.written-work.author\" in the question \"Who was the writer of Dark Sun?\" and \"music.album.genre\" vs. \"music.artist.genre.\" (3) Incorrect prediction of both. However, these three reasons only make up 59.43% of the errors. The other 40.57% of the errors are due to: (4) Ambiguous questions, which make up the majority of the errors, e.g., \"Name a species of fish.\" or \"What movie is a short film?\" These questions are too general and can have multiple correct answers. Such issues in the SimpleQuestions dataset are analyzed by Petrochuk and Zettlemoyer (2018) (see further discussion on this at the end of this section). (5) Non-simple questions, e.g., \"Which drama film was released in 1922?\" This question requires two KB facts instead of one to answer correctly. (6) Wrong-fact questions, where the reference fact is not relevant, e.g., \"What is an active ingredient in Pacific?\" is labeled with \"Triclosan 0.15 soap\". 
(7) Out-of-scope questions, which have entities or relations outside the scope of FB2M. (8) Spelling inconsistencies, e.g., the predicted answer \"Operation Shylock: A Confession\" vs. the reference answer \"Operation Shylock\", and the predicted answer \"Tom and Jerry: Robin Hood and His Merry Mouse\" vs. the reference answer \"Tom and Jerry\". For these cases, even when the models predict the subjects and relations correctly, these questions are fundamentally unanswerable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.4" }, { "text": "Although these issues are inherited from the dataset itself, given the large size of the dataset and the small proportion of problematic questions, the dataset is sufficient to validate the reliability and significance of our performance improvement and conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.4" }, { "text": "Answerable Questions Redefined Petrochuk and Zettlemoyer (2018) set an upper bound of 83.4% for the accuracy on the SimpleQuestions dataset. However, our models are able to do better than this upper bound. Are we doing something wrong? Petrochuk and Zettlemoyer (2018) claim that a question is unanswerable if there exist multiple valid subject-relation pairs in the knowledge graph, but we claim that a question is unanswerable if and only if there is no valid fact in the knowledge graph. There is a subtle difference between these two claims.", "cite_spans": [ { "start": 235, "end": 267, "text": "Petrochuk and Zettlemoyer (2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.4" }, { "text": "Based on the different definitions of answerable questions, we further claim that an incorrect subject or an incorrect relation can still lead to a correct answer. For example, for the question \"What is a song from Hier Komt De Storm?\" with the fact (Hier Komt De Storm: 1980-1990 live, music.release.track-list, Stephanie), our predicted subject \"Hier Komt De Storm: 1980-1990 live\" does not match the reference subject \"Hier Komt De Storm\", but our model predicts the correct answer \"Stephanie\" because it can deal with inexact matches of the subjects. In the second example, for the question \"Arkham House is the publisher behind what novel?\", our predicted relation \"book.book-edition.publisher\" does not match the reference relation \"book.publishing-company.books-published\", but our model predicts the correct answer \"Watchers at the Strait Gate\" because it can deal with paraphrases of relations. 
In the third example, for the question \"Who was the king of Lydia and Croesus's father?\", the correct subject \"Croesus\" ranks second in our subject predictions and the correct relation \"people.person.parents\" ranks fourth in our relation predictions, but our model predicts the correct answer \"Alyattes of Lydia\" because it reweighs the scores with respect to the subject-relation dependency, and the combined score of subject and relation ranks first.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.4" }, { "text": "To summarize, the reason that we are able to redefine answerable questions and achieve a significant performance gain is that we take advantage of the subgraph ranking and the subject-relation dependency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.4" }, { "text": "In this work, we propose a subgraph ranking method and a joint-scoring approach to improve the performance of KBSQA. The ranking method combines literal and semantic scores to deal with inexact matches and achieves better subgraph selection results than the state of the art. The joint-scoring model with the well-order loss couples subject matching and relation matching through their dependency and enforces the order of scores. Our proposed approach achieves a new state of the art on the SimpleQuestions dataset, surpassing the best baseline by a large margin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "In future work, one could further improve the performance on simple question answering tasks by exploring relation ranking, different embedding strategies and network structures, and ways of dealing with open questions and out-of-scope questions. One could also consider extending our approach to complex questions, e.g., multi-hop questions where more than one supporting fact is required. Potential directions may include ranking the subgraph by assigning each edge (relation) a closeness score and evaluating the length of the shortest path between any two path-connected entity nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Ture and Jojic (2017) reported better performance than us but neither Petrochuk and Zettlemoyer (2018) nor Mohammed et al. (2018) could replicate their result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the anonymous reviewers. The authors would also like to thank Nikko Str\u00f6m and other Alexa AI team members for their feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Semantic parsing on Freebase from question-answer pairs", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Chou", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Frostig", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1533--1544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533-1544.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Question answering with subgraph embeddings", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "615--620", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 615-620, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Large-scale simple question answering with memory networks", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.02075" ] }, "num": null, "urls": [], "raw_text": "Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Adapting ranking SVM to document retrieval", "authors": [ { "first": "Yunbo", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yalou", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Hsiao-Wuen", "middle": [], "last": "Hon", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "186--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yunbo Cao, Jun Xu, Tie-Yan Liu, Hang Li, Yalou Huang, and Hsiao-Wuen Hon. 2006. Adapting ranking SVM to document retrieval. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 186-193. ACM.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160-167. 
ACM.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "CFO: Conditional focused neural question answering with largescale knowledge bases", "authors": [ { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "800--810", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zihang Dai, Lei Li, and Wei Xu. 2016. CFO: Condi- tional focused neural question answering with large- scale knowledge bases. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics, volume 1: Long Papers, pages 800-810, Berlin, Germany. Association for Compu- tational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Question answering over Freebase with multicolumn convolutional neural networks", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "260--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2015. Question answering over Freebase with multi- column convolutional neural networks. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing, volume 1: Long Papers, pages 260-269.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Retrieve and re-rank: A simple and effective ir approach to simple question answering over knowledge graphs", "authors": [ { "first": "Vishal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Manoj", "middle": [], "last": "Chinnakotla", "suffix": "" }, { "first": "Manish", "middle": [], "last": "Shrivastava", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Fact Extraction and Verification (FEVER)", "volume": "", "issue": "", "pages": "22--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vishal Gupta, Manoj Chinnakotla, and Manish Shri- vastava. 2018. Retrieve and re-rank: A simple and effective ir approach to simple question answer- ing over knowledge graphs. In Proceedings of the First Workshop on Fact Extraction and Verification (FEVER), pages 22-27.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Pattern-revising enhanced simple question answering over knowledge bases", "authors": [ { "first": "Yanchao", "middle": [], "last": "Hao", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shizhu", "middle": [], "last": "He", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3272--3282", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanchao Hao, Hao Liu, Shizhu He, Kang Liu, and Jun Zhao. 2018. 
Pattern-revising enhanced simple question answering over knowledge bases. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3272-3282.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An end-to-end model for question answering over knowledge base with cross-attention combining global knowledge", "authors": [ { "first": "Yanchao", "middle": [], "last": "Hao", "suffix": "" }, { "first": "Yuanzhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shizhu", "middle": [], "last": "He", "suffix": "" }, { "first": "Zhanyi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "221--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanchao Hao, Yuanzhe Zhang, Kang Liu, Shizhu He, Zhanyi Liu, Hua Wu, and Jun Zhao. 2017. An end-to-end model for question answering over knowledge base with cross-attention combining global knowledge. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1: Long Papers, pages 221-231.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Character-level question answering with attention", "authors": [ { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "David", "middle": [], "last": "Golub", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1598--1607", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaodong He and David Golub. 2016. Character-level question answering with attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1598-1607.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Answering natural language questions by subgraph matching over knowledge graphs", "authors": [ { "first": "Sen", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Jeffrey", "middle": [ "Xu" ], "last": "Yu", "suffix": "" }, { "first": "Haixun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2018, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "30", "issue": "5", "pages": "824--837", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sen Hu, Lei Zou, Jeffrey Xu Yu, Haixun Wang, and Dongyan Zhao. 2018. Answering natural language questions by subgraph matching over knowledge graphs. IEEE Transactions on Knowledge and Data Engineering, 30(5):824-837.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bidirectional LSTM-CRF models for sequence tagging", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01991" ] }, "num": null, "urls": [], "raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging.
arXiv preprint arXiv:1508.01991.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Question answering via integer programming over semi-structured knowledge", "authors": [ { "first": "Daniel", "middle": [], "last": "Khashabi", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1145--1152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Peter Clark, Oren Etzioni, and Dan Roth. 2016. Question answering via integer programming over semi-structured knowledge. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 1145-1152. AAAI Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Ask me anything: Dynamic memory networks for natural language processing", "authors": [ { "first": "Ankit", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Ozan", "middle": [], "last": "Irsoy", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Ondruska", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Ishaan", "middle": [], "last": "Gulrajani", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Romain", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2016, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1378--1387", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing.
In International Conference on Machine Learning, pages 1378-1387.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Neural network-based question answering over knowledge graphs on word and character level", "authors": [ { "first": "Denis", "middle": [], "last": "Lukovnikov", "suffix": "" }, { "first": "Asja", "middle": [], "last": "Fischer", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Lehmann", "suffix": "" }, { "first": "S\u00f6ren", "middle": [], "last": "Auer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "1211--1220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denis Lukovnikov, Asja Fischer, Jens Lehmann, and S\u00f6ren Auer. 2017. Neural network-based question answering over knowledge graphs on word and character level. In Proceedings of the 26th International Conference on World Wide Web, pages 1211-1220. International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Strong baselines for simple question answering over knowledge graphs with and without neural networks", "authors": [ { "first": "Salman", "middle": [], "last": "Mohammed", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "291--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salman Mohammed, Peng Shi, and Jimmy Lin. 2018. Strong baselines for simple question answering over knowledge graphs with and without neural networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 2: Short Papers, pages 291-296.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "SimpleQuestions nearly solved: A new upperbound and baseline approach", "authors": [ { "first": "Michael", "middle": [], "last": "Petrochuk", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.08798" ] }, "num": null, "urls": [], "raw_text": "Michael Petrochuk and Luke Zettlemoyer. 2018. SimpleQuestions nearly solved: A new upperbound and baseline approach.
arXiv preprint arXiv:1804.08798.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Question answering over Freebase via attentive RNN with similarity matrix based CNN", "authors": [ { "first": "Yingqi", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Liangyi", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Qinfeng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Ye", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.03317" ] }, "num": null, "urls": [], "raw_text": "Yingqi Qu, Jie Liu, Liangyi Kang, Qinfeng Shi, and Dan Ye. 2018. Question answering over Freebase via attentive RNN with similarity matrix based CNN. arXiv preprint arXiv:1804.03317.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "End-to-end memory networks", "authors": [ { "first": "Sainbayar", "middle": [], "last": "Sukhbaatar", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2440--2448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2440-2448.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "No need to pay attention: Simple recurrent neural networks work!", "authors": [ { "first": "Ferhan", "middle": [], "last": "Ture", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Jojic", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2866--2872", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ferhan Ture and Oliver Jojic. 2017. No need to pay attention: Simple recurrent neural networks work! In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2866-2872, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Bi-directional recurrent neural network with ranking loss for spoken language understanding", "authors": [ { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" }, { "first": "Pankaj", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Heike", "middle": [], "last": "Adel", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2016, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "6060--6064", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ngoc Thang Vu, Pankaj Gupta, Heike Adel, and Hinrich Sch\u00fctze. 2016. Bi-directional recurrent neural network with ranking loss for spoken language understanding. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6060-6064.
IEEE.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Information extraction over structured data: Question answering with Freebase", "authors": [ { "first": "Xuchen", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "956--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuchen Yao and Benjamin Van Durme. 2014. Infor- mation extraction over structured data: Question an- swering with Freebase. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics, volume 1: Long Papers, pages 956-966.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Semantic parsing via staged query graph generation: Question answering with knowledge base", "authors": [ { "first": "Wentau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Minwei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1321--1331", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wentau Yih, Minwei Chang, Xiaodong He, and Jian- feng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowl- edge base. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Nat- ural Language Processing, volume 1: Long Papers, pages 1321-1331.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Simple question answering by attentive convolutional neural network", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1746--1756", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenpeng Yin, Mo Yu, Bing Xiang, Bowen Zhou, and Hinrich Sch\u00fctze. 2016. Simple question answering by attentive convolutional neural network. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Techni- cal Papers, pages 1746-1756, Osaka, Japan. 
The COLING 2016 Organizing Committee.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Improved neural relation detection for knowledge base question answering", "authors": [ { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Kazi", "middle": [ "Saidul" ], "last": "Hasan", "suffix": "" }, { "first": "Cicero", "middle": [], "last": "Dos Santos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "571--581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2017. Improved neural relation detection for knowledge base question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1: Long Papers, pages 571-581.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Variational reasoning for question answering with knowledge graph", "authors": [ { "first": "Yuyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hanjun", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Alexander", "middle": [ "J" ], "last": "Smola", "suffix": "" }, { "first": "Le", "middle": [], "last": "Song", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J Smola, and Le Song. 2018. Variational reasoning for question answering with knowledge graph. In Thirty-Second AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Deep semantic ranking based hashing for multi-label image retrieval", "authors": [ { "first": "Fang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yongzhen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tieniu", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "1556--1564", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fang Zhao, Yongzhen Huang, Liang Wang, and Tieniu Tan. 2015. Deep semantic ranking based hashing for multi-label image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1556-1564.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Question answering over knowledge graphs: Question understanding via template decomposition", "authors": [ { "first": "Weiguo", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Jeffrey", "middle": [ "Xu" ], "last": "Yu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the VLDB Endowment", "volume": "11", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiguo Zheng, Jeffrey Xu Yu, Lei Zou, and Hong Cheng. 2018.
Question answering over knowledge graphs: Question understanding via template decomposition. Proceedings of the VLDB Endowment, 11(11).", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Model Diagram (Section 3).", "type_str": "figure" }, "TABREF1": { "num": null, "type_str": "table", "text": "Matching Model Configurations", "content": "", "html": null }, "TABREF3": { "num": null, "type_str": "table", "text": "Learning Configurations", "content": "
", "html": null }, "TABREF5": { "num": null, "type_str": "table", "text": "Subgraph Selection Results", "content": "
", "html": null }, "TABREF7": { "num": null, "type_str": "table", "text": "reported 80.2% accuracy.", "content": "
Incorrect Sub. only    8.67
Incorrect Rel. only   16.26
Incorrect Sub. & Rel. 34.50
Other                 40.57
", "html": null }, "TABREF8": { "num": null, "type_str": "table", "text": "Error Decomposition (%). Percentages for total of 3157 errors.", "content": "", "html": null } } } }