{ "paper_id": "2018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:27:25.196952Z" }, "title": "Supporting Evidence Retrieval for Answering Yes/No Questions", "authors": [ { "first": "Meng-Tse", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": {} }, "email": "" }, { "first": "Yi-Chung", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": {} }, "email": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": {} }, "email": "kysu@iis.sinica.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes a new n-gram matching approach for retrieving the supporting evidence, which is a question related text passage in the given document, for answering Yes/No questions. It locates the desired passage according to the question text with an efficient and simple n-gram matching algorithm. In comparison with those previous approaches, this model is more efficient and easy to implement. The proposed approach was tested on a task of answering Yes/No questions of Taiwan elementary school Social Studies lessons. Experimental results showed that the performance of our proposed approach is 5% higher than the well-known Apache Lucene search engine.", "pdf_parse": { "paper_id": "2018", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes a new n-gram matching approach for retrieving the supporting evidence, which is a question related text passage in the given document, for answering Yes/No questions. It locates the desired passage according to the question text with an efficient and simple n-gram matching algorithm. In comparison with those previous approaches, this model is more efficient and easy to implement. The proposed approach was tested on a task of answering Yes/No questions of Taiwan elementary school Social Studies lessons. Experimental results showed that the performance of our proposed approach is 5% higher than the well-known Apache Lucene search engine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Supporting evidence retrieval is a key step in the question-answering task. It locates the related text passage from the given documents according to the question content so that the system can efficiently answer the question only based on the retrieved passage. The goal of supporting evidence retrieval is to merely keep necessary information (but filter out the irrelevant content as much as possible) to reduce the associated inference time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Previous supporting evidence retrieval approaches can be classified into three categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "(1) Term matching approaches (Chen, Fisch, Weston & Bordes, 2017) , (2) Syntactic/Semantic scoring approaches (Murdock, Fan, Lally, Shima & Boguraev, 2012; Jansen, Sharp, Surdeanu & Clark, 2017) , and (3) Translation model based approaches (Berger, Caruana, Cohn, Freitag & Mittal, 2000; Jeon, Croft & Lee, 2005; Xue, Jeon & Croft, 2008; Zhou, Cai, Zhao & Liu, 2011) . 
Term matching approaches, such as Lucene search 1 , used the vector space model and some language models adopted in Information Retrieval (Manning, Raghavan & Sch\u00fctze, 48 Meng-Tse Wu et al. 2008 ). On the other hand, syntactic/semantic scoring approaches (Murdock et al., 2012; Jansen et al., 2017) retrieved the supporting evidence by conducting the syntactic/semantic analysis of each document sentence. They detected certain terms or structures in the question and then weighted the candidates differently by the appearance of those terms or structures. Finally, approaches that utilize a translation model were widely adopted in the Community QA systems (Berger et al., 2000; Jeon et al., 2005; Xue et al., 2008; Zhou et al., 2011) . They used phrase-based or word-based translation models to find the similar historical questions from the new queried question. In the task of supporting evidence retrieval, we could let the question play the role of new queried question and the supporting evidence play the role of historical questions, and then adopt the translation model to find the supporting evidence.", "cite_spans": [ { "start": 29, "end": 65, "text": "(Chen, Fisch, Weston & Bordes, 2017)", "ref_id": "BIBREF2" }, { "start": 110, "end": 155, "text": "(Murdock, Fan, Lally, Shima & Boguraev, 2012;", "ref_id": "BIBREF8" }, { "start": 156, "end": 194, "text": "Jansen, Sharp, Surdeanu & Clark, 2017)", "ref_id": "BIBREF4" }, { "start": 240, "end": 287, "text": "(Berger, Caruana, Cohn, Freitag & Mittal, 2000;", "ref_id": "BIBREF0" }, { "start": 288, "end": 312, "text": "Jeon, Croft & Lee, 2005;", "ref_id": "BIBREF5" }, { "start": 313, "end": 337, "text": "Xue, Jeon & Croft, 2008;", "ref_id": "BIBREF11" }, { "start": 338, "end": 366, "text": "Zhou, Cai, Zhao & Liu, 2011)", "ref_id": "BIBREF12" }, { "start": 507, "end": 563, "text": "(Manning, Raghavan & Sch\u00fctze, 48 Meng-Tse Wu et al. 2008", "ref_id": null }, { "start": 624, "end": 646, "text": "(Murdock et al., 2012;", "ref_id": "BIBREF8" }, { "start": 647, "end": 667, "text": "Jansen et al., 2017)", "ref_id": "BIBREF4" }, { "start": 1027, "end": 1048, "text": "(Berger et al., 2000;", "ref_id": "BIBREF0" }, { "start": 1049, "end": 1067, "text": "Jeon et al., 2005;", "ref_id": "BIBREF5" }, { "start": 1068, "end": 1085, "text": "Xue et al., 2008;", "ref_id": "BIBREF11" }, { "start": 1086, "end": 1104, "text": "Zhou et al., 2011)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Term matching approaches are widely adopted in the search engine due to its efficiency. However, they do not consider the local context of each term, not even mentioning the associated syntactic/semantic information. Therefore, they usually result in low accuracy. On the other hand, syntactic/semantic scoring approaches utilize syntactic/semantic meaning of each document sentence. They can understand the questions more in the syntactic/semantic level. However, those approaches are not only time consuming but also task orientated. Finally, translation model based approaches are widely adopted in the Community QA systems. However, they need large training data to train the translation models, and are thus not suitable for the tasks with only small amount of training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To overcome the problems mentioned above, we aimed at the approach that is efficient, general and accurate enough. 
Therefore, the approach of term (most of them are unigrams) matching is still adopted in this paper for computation efficiency and generalization. However, to further consider the phrase and local context, it is extended into n-gram for considering the local dependency. It thus avoids the drawbacks of previous approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Given a question, our goal is to find a related passage, from the given corpus, that contains minimum but sufficient information to answer the question. In other words, good supporting evidence should include sufficient related information and less irrelevant and redundant information for the given question. On the other hand, supporting evidence can be extracted in different granularity. For instance, they are specified as top 5 articles in (Chen et al., 2017) . The smaller the granularity is, the harder the approach is to find the appropriate supporting evidence (since we need to locate it more accurately). In our task, we define the supporting evidence as a text passage with consecutive sentences in the same paragraph, which will be explained in Section 4.3. We propose two scoring functions for finding the supporting evidence: QE-BLUE and modified F-measure. QE-BLUE is converted from the CR-BLEU score (Papineni, Roukos, Ward & Zhu, 2002) which only considers n-gram precision and is used in evaluating the performance of a machine translation system. In contrast, the modified F-measure takes both recall and precision of n-grams into consideration.", "cite_spans": [ { "start": 446, "end": 465, "text": "(Chen et al., 2017)", "ref_id": "BIBREF2" }, { "start": 918, "end": 954, "text": "(Papineni, Roukos, Ward & Zhu, 2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Therefore, the modified F-measure is able to evaluate the portion of the matched terms in the question. In comparison with those term matching approaches, the proposed method provided better performance. On the other hand, in comparison with those semantic scoring approaches, the proposed method is more efficient, easy to implement and task independent. 
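To make the passage granularity concrete, the sketch below (a hypothetical Python fragment, not the original implementation) enumerates every candidate passage formed from consecutive sentences of a single paragraph; these are the candidates that the proposed scoring functions rank, as detailed later in Section 4.2.

```python
def candidate_passages(paragraph_sentences):
    # Enumerate all passages of consecutive sentences in one paragraph.
    # For a paragraph [A, B, C] this yields A, B, C, AB, BC and ABC,
    # which is the candidate set described in Step 2 of Section 4.2.
    n = len(paragraph_sentences)
    for length in range(1, n + 1):
        for start in range(n - length + 1):
            yield paragraph_sentences[start:start + length]
```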
In summary, we make the following contributions in this paper:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supporting Evidence Retrieval for Answering Yes/No Questions 49", "sec_num": null }, { "text": "\uf0b7 We studied the desired characteristics of extracted supporting evidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supporting Evidence Retrieval for Answering Yes/No Questions 49", "sec_num": null }, { "text": "\uf0b7 We proposed a novel scoring function for retrieving the supporting evidence by jointly considering precision and recall of n-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supporting Evidence Retrieval for Answering Yes/No Questions 49", "sec_num": null }, { "text": "\uf0b7 We adopted and tested several techniques for improving the supporting evidence retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supporting Evidence Retrieval for Answering Yes/No Questions 49", "sec_num": null }, { "text": "\uf0b7 We conducted the experiments to show the superiority of the proposed approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supporting Evidence Retrieval for Answering Yes/No Questions 49", "sec_num": null }, { "text": "The remainder of this paper is organized as follows. Section 2 illustrates the desired characteristics that an effective supporting evidence retrieval algorithm should possess. The proposed approach is introduced in Section 3. Section 4 shows the experimental result. The error analysis of the proposed approach is then given in Section 5. The related work is introduced in Section 6. Finally, Section 7 concludes this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supporting Evidence Retrieval for Answering Yes/No Questions 49", "sec_num": null }, { "text": "Question:\u6211\u5011|\u61c9\u8a72|\u5b8c\u5168|\u807d\u5f9e|\u7236\u6bcd|\u7684|\u5efa\u8b70|\uff0c|\u9078\u64c7|\u52a0\u5165|\u5b78\u6821|\u7684|\u5718\u968a|\u3002 \"We should fully follow the advice of parents for choosing which school group to join.\" Evidence:\u6211\u5011|\u53ef\u4ee5|\u4f9d\u7167|\u81ea\u5df1|\u7684|\u8208\u8da3|\uff0c|\u53c3\u8003|\u8001\u5e2b|\u548c|\u7236\u6bcd|\u7684|\u5efa\u8b70|\uff0c|\u9078\u64c7|\u52a0\u5165|\u4e0d\u540c|\u7684| \u5718\u968a|\u5b78\u7fd2|\u3002 \"We can consider our own interest and refer to the advice from the teachers and parents for choosing which learning group to join.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Desired Characteristics", "sec_num": "2." }, { "text": "From the question and its supporting evidence shown in Figure 1 , we can see that they share many words (which are marked in bold and underlined). This is because the questions usually use the same words or sentences to describe the same thing.", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 63, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Figure 1. A question and its corresponding supporting evidence", "sec_num": null }, { "text": "stand for the i-th matched word, stand for the j-th unmatched word, * stand for the j-th string which purely consists of an arbitrary number of unmatched words, and * denote the number of words contained in * . The desired characteristics of an effective supporting evidence retrieval algorithm are listed as follows. This preference is illustrated with the above Example-1. 
We prefer Candidate-2 here since it additionally mentions that \u5b89\u5e73\u53e4\u5821 (\"Fort Zeelandia\") has a long history which entails that it is a monument. As a result, we prefer more occurrences of a matching term because it may contain more information we need. Candidate-1 in Example-2 contains the extra information \" \u5305 \u62ec \u5927 \u5821 \u7901 \u548c \u7d05 \u6d77 \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Let", "sec_num": null }, { "text": "(\"including the Great Barrier Reef and the Red Sea\") which is irrelevant to our question. Therefore, we prefer Candidate-2 which contains less unmatched terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Characteristic-2: Prefer less unmatched terms", "sec_num": null }, { "text": "Candidate1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Characteristic-3: Prefer more different term-types", "sec_num": null }, { "text": "s 1 * s 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Characteristic-3: Prefer more different term-types", "sec_num": null }, { "text": "Suppose | * | = | * | and | * | = | * |, and both Candidate-1 and Candidate-2 match two terms in the above pattern. However, Candidate-1 has the same two terms s 1 but Candidate-2 has two different terms s 1 and s 2 . In this case we prefer Candidate-2 as the supporting evidence because it recalls more terms from the question. Consider the following Example-3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Characteristic-3: Prefer more different term-types", "sec_num": null }, { "text": "Question: \u96fb\u8166|\u548c|\u624b\u6a5f|\u5df2|\u6210\u70ba|\u73fe\u4ee3\u4eba|\u751f\u6d3b|\u7684|\u5fc5\u9700\u54c1|\u3002 \"Computers and mobile phones have become necessities for modern life.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 3", "sec_num": null }, { "text": "Candidate-1: \u96fb\u8166|\u5728|\u8a31\u591a|\u5de5\u4f5c|\u4e2d|\u5ee3\u6cdb|\u4f7f\u7528|\uff0c|\u96fb\u8166|\u4e5f|\u8b93|\u6211\u5011|\u751f\u6d3b|\u66f4\u52a0|\u9032\u6b65|\u3002 \"Computers are widely used in many jobs. Computers also make our lives more advanced.\" Candidate-2: \u96fb\u8166|\u5728|\u8a31\u591a|\u5de5\u4f5c|\u4e2d|\u5ee3\u6cdb|\u4f7f\u7528|\uff0c|\u624b\u6a5f|\u4e5f|\u8b93|\u6211\u5011|\u751f\u6d3b|\u66f4\u52a0|\u9032\u6b65|\u3002 \"Computers are widely used in many jobs and mobile phones make our lives more advanced.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 3", "sec_num": null }, { "text": "Candidate-1 only mentions the information about computer twice; however, Candidate-2 contains the information about both computer and mobile phone (which provide more question-related information). As the result, we prefer the candidate-2 that matches more term-types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 3", "sec_num": null }, { "text": "According to the desired characteristics of the supporting evidence mentioned above, \"Prefer more matching occurrences\" and \"Prefer less unmatched terms\" could be reflected through the precision-rate; and \"Prefer more different term-types\" could be reflected through the recall-rate. 
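The sketch below (hypothetical Python; the exact counting convention is an assumption, since the paper states the ratios only schematically) computes these two rates over word n-grams and reproduces question-case-1 of Table 1.

```python
def ngrams(tokens, n):
    # All n-grams of a token list, as tuples.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_precision_recall(question, candidate, n=1):
    # Precision: fraction of candidate n-grams that also occur in the question.
    # Recall:    fraction of question n-grams that also occur in the candidate.
    q_set, c_set = set(ngrams(question, n)), set(ngrams(candidate, n))
    cand, ques = ngrams(candidate, n), ngrams(question, n)
    precision = sum(1 for g in cand if g in q_set) / max(len(cand), 1)
    recall = sum(1 for g in ques if g in c_set) / max(len(ques), 1)
    return precision, recall

# Question-case-1 (Table 1): question = w1 w2 w3 s1 w4, candidate-1 = w5 s1 s1
p, r = ngram_precision_recall(['w1', 'w2', 'w3', 's1', 'w4'], ['w5', 's1', 's1'])
# p == 2/3 and r == 1/5, matching the first row of Table 1
```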
Following two cases (Table 1 and Table 2 ) illustrate the effect of precision and recall in retrieving the supporting evidence candidates.", "cite_spans": [], "ref_spans": [ { "start": 304, "end": 324, "text": "(Table 1 and Table 2", "ref_id": null } ], "eq_spans": [], "section": "Example 3", "sec_num": null }, { "text": "Candidate Terms Precision Recall 1 w 5 s 1 s 1 2/3 1/5 2 w 6 w 7 w 8 s 1 s 1 s 1 3/6 1/5 Table 1 shows that the precision-rate could truly reflect the desired Characteristic-1 and Characteristic-2. Therefore, with the precision-rate, we can successfully select the human desired Candidate-1 as the supporting evidence.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Table 1. Precision and Recall for question-case-1: Question: w 1 w 2 w 3 s 1 w 4", "sec_num": null }, { "text": "Candidate Terms Precision Recall", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Precision and Recall for question-case-2: Question: w 1 w 2 w 3 s 1 w 4", "sec_num": null }, { "text": "1 w 5 w 6 w 7 s 1 s 1 2/5 1/5 2 w 8 w 9 w 10 s 1 s 2 2/5 2/5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Precision and Recall for question-case-2: Question: w 1 w 2 w 3 s 1 w 4", "sec_num": null }, { "text": "However, the precision-rate alone is not enough to meet the desired Characteristic-3. For example, the precision-rate cannot tell the difference between two candidates in Case-2, since both the candidates match two terms. However, by measuring the recall-rate we can choose the better candidate that matches more terms of the question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Precision and Recall for question-case-2: Question: w 1 w 2 w 3 s 1 w 4", "sec_num": null }, { "text": "According to the above two cases, it clearly shows that both precision-rate and recall-rate should be involved in the scoring function for obtaining the best supporting evidence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2. Precision and Recall for question-case-2: Question: w 1 w 2 w 3 s 1 w 4", "sec_num": null }, { "text": "Intuitively, BLEU score (Papineni et al., 2002) , which is a widely used metric in evaluating machine translation quality via comparing the machine-translation output with human-translation references, could be adopted for this task as it can check the similarity between the question content and the passage of the supporting evidence. BLEU score (also called CR-BLEU score, where C stands for candidate and R stands for reference) is originally defined as: * \u220f", "cite_spans": [ { "start": 24, "end": 47, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU Scoring Function", "sec_num": "3.1" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU Scoring Function", "sec_num": "3.1" }, { "text": "1 / (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU Scoring Function", "sec_num": "3.1" }, { "text": "where p n is the modified n-gram precision between machine translation candidate and a set of human translation references, w n is the n-gram weight, r and c are the reference and candidate lengths, respectively. BP is the brevity penalty which penalizes the candidate that is shorter than the reference. 
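A sketch of the sentence-level CR-BLEU computation is given below; it follows the published definition of Papineni et al. (2002) and is an illustration only, not code from this work.

```python
import math
from collections import Counter

def cr_bleu(candidate, reference, max_n=4, weights=None):
    # BLEU = BP * exp(sum_n w_n * log p_n), with clipped n-gram precisions p_n
    # and brevity penalty BP = 1 if c > r else exp(1 - r / c).
    weights = weights or [1.0 / max_n] * max_n
    log_sum = 0.0
    for n, w in zip(range(1, max_n + 1), weights):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        matched = sum(min(count, ref[gram]) for gram, count in cand.items())
        p_n = matched / max(sum(cand.values()), 1)
        if p_n == 0.0:
            return 0.0  # without smoothing, one empty n-gram level zeroes the score
        log_sum += w * math.log(p_n)
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1.0 - r / max(c, 1))
    return bp * math.exp(log_sum)
```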
BLEU score combines each n-gram precision by multiplication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU Scoring Function", "sec_num": "3.1" }, { "text": "As shown in Equation 1, CR-BLEU score only cares about the precision-rate of a candidate. However, we actually more care about the recall-rate in retrieving supporting evidence. We thus adapt the original CR-BLEU metric by letting the given question plays the role of translation candidate and each possible supporting evidence as the translation reference. Therefore, we propose an alternative QE-BLEU score which is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU Scoring Function", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "QE-BLEU * \u220f (3) 1 1 /", "eq_num": "(4)" } ], "section": "QE-BLEU Scoring Function", "sec_num": "3.1" }, { "text": "where Q and E denote the question and the evidence passage, respectively. Question and evidence thus correspond to the candidate and the reference, respectively, in the original function of CR-BLEU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU Scoring Function", "sec_num": "3.1" }, { "text": "On the other hand, F-measure is a widely used evaluation metric in information retrieval which considers both precision and recall of the information retrieved (Chinchor, 1992; Sasaki, 2007) . We thus prefer the F-measure, instead of BLEU score, for this task as both precision and recall are required to meet the desired characteristics listed in Section 2.", "cite_spans": [ { "start": 160, "end": 176, "text": "(Chinchor, 1992;", "ref_id": "BIBREF3" }, { "start": 177, "end": 190, "text": "Sasaki, 2007)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Modified F-measure Scoring Function", "sec_num": "3.2" }, { "text": "Inspired by BLEU score metric, we also apply n-gram model to consider the word order information. Therefore, we proposed a new Modified F-measure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modified F-measure Scoring Function", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Modified F-measure \u2211", "eq_num": "(5)" } ], "section": "Modified F-measure Scoring Function", "sec_num": "3.2" }, { "text": "where p n and r n denote the n-gram precision and recall of the question passage, respectively; and w n is the corresponding n-gram weight as that in BLEU score; is an adjustable parameter ranging from 0 to 1. If is close to 0, Modified F-measure becomes more recall-oriented; on contrary, it becomes more precision-oriented when is close to 1. The adopted precision and recall are defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modified F-measure Scoring Function", "sec_num": "3.2" }, { "text": "# # (6) # # (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modified F-measure Scoring Function", "sec_num": "3.2" }, { "text": "4. 
Experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modified F-measure Scoring Function", "sec_num": "3.2" }, { "text": "We evaluate various approaches on a Taiwan elementary school social studies Yes/No question supporting evidence benchmark data set, which was created by two part-time workers and decided by the third person when there is a conflict. The original corpus consists of 178 lessons, and each lesson is composed of several paragraphs and then followed with its associated questions. We randomly divide those lessons into a development-set (124 lessons) and a test-set (54 lessons). Afterwards, we arbitrarily selected 202 and 414 questions from the development-set and the test-set, respectively. Afterwards, each question is annotated with its supporting evidence benchmark. The statistics of the benchmark is showed in Table 3 . ", "cite_spans": [], "ref_spans": [ { "start": 715, "end": 722, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Data Sets Adopted", "sec_num": "4.1" }, { "text": "Step 0: Preprocessing:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.2" }, { "text": "The raw texts of lessons and questions are segmented into words via HanLP 2 package. The punctuations are then eliminated after the segmentation (as the punctuations are only used for segmenting sentences). We had tested the case of eliminating stop words, but the result seems not much different. Therefore, we keep all the words in the following experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.2" }, { "text": "After the preprocessing process, we retrieve the supporting evidence via following four steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.2" }, { "text": "Step 1: Paragraph-based search Given a question and its corresponding lesson, we first locate the top-1 paragraph with Apache Lucene search engine. This step is used to cut down the search space of locating the supporting evidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.2" }, { "text": "Step 2: Sentence-level candidate generation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.2" }, { "text": "After the above paragraph-based search, we generate various supporting evidence candidates by increasingly concatenating the consecutive sentences (up to the whole paragraph). For example, if we have a paragraph with three consecutive sentences A, B and C in order, then we will generate the following six different candidates: A, B, C, AB, BC, and ABC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.2" }, { "text": "Step 3: Candidate scoring This step is the focus of our approach. We use either QE-BLEU or Modified F-measure to score each candidate according to the given question passage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.2" }, { "text": "Step 4: Select the top-1 candidate After scoring the candidates with a specific scoring function, we then choose the candidate with the highest score as the supporting evidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.2" }, { "text": "Smoothing: We adopt the package jbleu 3 , which uses the smoothing method-3 4 adopted in (Chen & Cherry, 2014) to smooth both QE-BLEU and Modified F-measure. 
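The sketch below gives one Python reading of the smoothed Modified F-measure. It assumes the standard weighted F-measure form p_n*r_n / (α*r_n + (1-α)*p_n), which matches the stated behaviour of α, and it replaces the jbleu smoothing with a simple non-zero fallback; it is an illustration under these assumptions, not the authors' implementation.

```python
def modified_f_measure(question, evidence, weights=(0.25, 0.25, 0.25, 0.25), alpha=0.5):
    # Assumed form of Equation (5): sum_n w_n * p_n * r_n / (alpha * r_n + (1 - alpha) * p_n)
    # p_n: fraction of evidence n-grams also found in the question (precision).
    # r_n: fraction of question n-grams also found in the evidence (recall).
    # The 0.5 ** n fallback merely stands in for the Chen & Cherry (2014)
    # smoothing applied via the jbleu package; it is not the exact method-3 values.
    def grams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    score = 0.0
    for n, w in enumerate(weights, start=1):
        q, e = grams(question, n), grams(evidence, n)
        if not q or not e:
            continue
        p = sum(1 for g in e if g in set(q)) / len(e)
        r = sum(1 for g in q if g in set(e)) / len(q)
        if p == 0.0 or r == 0.0:
            p, r = max(p, 0.5 ** n), max(r, 0.5 ** n)  # small non-zero value when no match
        score += w * (p * r) / (alpha * r + (1 - alpha) * p)
    return score
```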
After smoothing, they will get a small non-zero value (instead of zero) when there is no match for a given n-gram.", "cite_spans": [ { "start": 89, "end": 110, "text": "(Chen & Cherry, 2014)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments Settings", "sec_num": "4.3" }, { "text": "Weight optimization: Last, there are four n-gram weights in QE-BLEU; however, there are four n-gram weights and one additional parameter in Modified F-measure. These parameters affect the performance of the proposed scoring functions significantly. We adopt Particle Swarm Optimization 5 , which is known for being able to escape from the local maximum points, to automatically search for their optimal values on the development-set. We then use the obtained optimal parameters to evaluate the performance on the test-set. There are two values tested in the Modified F-measure approach. \u03b1=0.5 is the situation to weight precision and recall equally; \u03b1=0.13 is obtained by optimizing the Modified F-measure with equal n-gram weights. And finally, \u03b1=0.12 (without smoothing) and \u03b1=0.21 (with smoothing) are the optimal values obtained by jointly optimizing the n-gram weight and \u03b1 value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments Settings", "sec_num": "4.3" }, { "text": "For various reasons, there are some benchmarks that cannot be generated by our candidate generation procedure (Step-2). that no appropriate evidence can be found in the text. 12.8% of the selected top-1 paragraph is different with the desired paragraph. 13.8% of the benchmarks are not a consecutive passage within a paragraph. In order to focus on comparing the effectiveness of various scoring functions, we eliminate those types of questions that the desired benchmark cannot be included in the candidate-set, and only evaluate the performance on the remaining questions (total 237 questions remained) in the following tests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Results", "sec_num": "4.4" }, { "text": "The performances of various approaches are shown in Table 5 . Apache Lucene Core 5.5.0 is regarded as our baseline which uses the vector space model and a pre-specified scoring function for ranking. We adopted two widely used scoring functions, TF-IDF and BM25, as our baselines. The performances of equally weighting the n-gram are listed in the table \"Equal N-gram Weight\". The \"+Smoothing\" column shows the experiments that involve smoothing technique. The table \"Optimal Weight\" shows the experiments that adopt the optimized parameters which include various n-gram weights and the \u03b1 value (for Modified F-measure). Again, the columns labeled with \"+Smoothing\" are the experiments that adopt smoothing technique with optimal weights. Table 5 shows that the overall performance of both QE-BLEU and Modified F-measure with optimal weight and smoothing technique outperform the baseline Apache Lucene (TF-IDF) about 5%. ", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 59, "text": "Table 5", "ref_id": null }, { "start": 738, "end": 745, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Experiment Results", "sec_num": "4.4" }, { "text": "Apache Lucene: We find that Apache Lucene makes errors (for selecting the desired candidate) in the cases that the top-1 paragraph contains more sentences. 
This is mainly due to that IDF weight is adopted in both BM25 and TF-IDF, and IDF weight is based on the diversification of the documents to give the term weights. The term which appears in many documents is thus given a lower weight. However, various supporting evidence candidates are actually from the same paragraph (due to the way that they are created). Therefore, the term which appears in many candidates may actually be the key word (in the question) that we should pay attenuation to. As a result, Apache Lucene is not a preferable method for supporting evidence retrieval because it is related to the term distribution in the supporting evidence candidates. As shown in Table 5 , the performance of BM25 is lower than that of TF-IDF. The reason is that the IDF matrix in BM25 is more sensitive, which deteriorates the performance in this task.", "cite_spans": [], "ref_spans": [ { "start": 837, "end": 844, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis and Discussion", "sec_num": "5." }, { "text": "We find that most errors resulted from the QE-BLEU approach is due to the brevity penalty factor, as it penalizes the length of evidence candidates when the length of a candidate is longer than that of the question. In principle, the brevity penalty factor is mainly introduced to avoid involving unnecessary sentences in the evidence. However, as we mentioned in Section 1, the supporting evidence selection is only affected by the relevant and irrelevant information but not the question length. If we punish the evidence of which the length is larger than the question length, we tend to get the supporting evidence that is shorter, and might lose some relevant information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "Modified F-measure: As shown in Table 5 , we test two \u03b1 values: 0.5 (i.e., equally weighting precision and recall) and 0.13 (which is the optimal value obtained from the developoment-set). The performances are found about 8%~20% better when we adopt the optimal \u03b1 value. However, both QE-BLEU and Modified F-measure get the same performance in the \"+Smoothing\" column in \"Optimal Weight\" (Modified F-measure improves 14 cases against QE-BLEU, but it also deteriorates the same number of cases). Furthermore, the optimal \u03b1 value (\u03b1 = 0.13) shows that recall is more important than precision since \u03b1= 0.13 < 0.5. However, this model is found that it tends to find the evidence which is the longest among the candidates if we only consider recall. To avoid involving unnecessary sentences in the evidence, the proposed approaches actually adopt two different stategies: QE-BLEU relies on Brevity Penalty (which penalizes the longer passage regardless of its content) and Modified F-measure relies on Precision (which penalizes the passage with more irrelevant content). However, utilizing Precision is better than adopting Brevity Penalty since Brevity Penalty only penalizes the passage with the length being longer than that of the quesiton without considering its content. To show the effect of this issue, we further extend the experiments to test Top-N (intead of Top-1 only) accuracy-rates to demonstrate the superiority of Modified F-measure. Table 6 shows that the performances of Modified F-measure are better under the columns of Top-2 and Top-3. Where \"+Both\" means that the experiments are under the setting of the optimal N-gram weight and smoothing technique. 
Last, we further check 30 wrong cases from the Modified F-measure with optimal parameters along with smoothing technique. It is observed that those associated errors are mainly due to six different types as shown in Figure 2 . They will be further illustrated as follows. (1) Treat lexicons equally (30%): Since we match the terms without considering which terms are more important in the sentence, some error occurs due to the focusing-words are not weighted more. For example: Question: \u73ed\u7d1a|\u5e79\u90e8|\u8981\u4ee5|\u516c\u5e73|\uff0c|\u516c\u6b63|\u7684|\u614b\u5ea6|\uff0c|\u5f15\u5c0e|\u540c\u5b78|\u9075\u5b88|\u5718\u9ad4|\u79e9\u5e8f|\u3002 \"Class leaders should guide the classmates to abide by group order with a fair and just attitude.\"", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 39, "text": "Table 5", "ref_id": null }, { "start": 1447, "end": 1454, "text": "Table 6", "ref_id": "TABREF5" }, { "start": 1887, "end": 1895, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "Top-1 candidate: \u4e26\u4ee5|\u516c\u5e73|\uff0c|\u516c\u6b63|\u7684|\u614b\u5ea6|\uff0c|\u5f15\u5c0e|\u540c\u5b78|\u9075\u5b88|\u5718\u9ad4|\u79e9\u5e8f|\uff0c \"and guide them to abide by group order with a fair and just attitude.\" Benchmark: \u64d4\u4efb|\u73ed\u7d1a|\u5e79\u90e8|\uff0c|\u8981\u80fd|\u4f5c\u70ba|\u540c\u5b78|\u7684|\u699c\u6a23|\uff0c|\u4e26\u4ee5|\u516c\u5e73|\uff0c|\u516c\u6b63|\u7684|\u614b\u5ea6|\uff0c|\u5f15 \u5c0e|\u540c\u5b78|\u9075\u5b88|\u5718\u9ad4|\u79e9\u5e8f|\uff0c \"As a class leader, you should be able to serve as a role model for classmates and guide them to abide by group order with a fair and just attitude.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "The terms \"\u73ed\u7d1a\"and\"\u5e79\u90e8\" (\"Class leaders\") are the important topic words in this Yes/No question. However, they are interleaved with other unmatched words in the first half of the benchmark. The Top-1 candidate, instead of the benchmark, is thus selected because it possesses a higher precision-rate. This kind of error need a specific technique to find the focusing-words in the sentence and give different term weights according to the degree of importance of the terms in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "(2) Contradictory mismatch (17%): Some Yes/No questions are designed to describe the wrong fact. Therefore, the sentence which describes the wrong fact would not match the evidence sentence in the lesson, but this unmatched evidence sentence still should be regarded as a part of the supporting evidence. 
For example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "Question: \u5728|\u5f9e\u524d|\uff0c|\u8fb2\u6c11|\u53c3\u8207|\u6c11\u4fd7|\u85dd|\u9663|\u7684|\u76ee\u7684|\uff0c|\u662f|\u70ba\u4e86|\u53cd\u6297|\u653f\u5e9c|\u800c|\u96c6\u7d50|\u7d44\u6210|\u7684|\u3002 \"In the past, the purpose that farmers participate in folk art array was to assemble against the government.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "Top-1 candidate: \u81fa\u7063|\u7684|\u6c11\u4fd7|\u85dd|\u9663|\u5f9e\u524d|\u591a|\u662f|\u696d|\u9918|\u7684|\u7d44\u7e54|\uff0c|\u6751\u6c11|\u5229\u7528|\u8fb2\u9592|\u6642|\u53c3\u8207|\u85dd| \u9663|\uff0c \"Taiwanese folk art array used to be an amateur organization, and villagers used leisure time to participate in the folk art array.\" Benchmark: \u81fa\u7063|\u7684|\u6c11\u4fd7|\u85dd\u9663|\u5f9e\u524d|\u591a|\u662f|\u696d|\u9918|\u7684|\u7d44\u7e54|\uff0c|\u6751\u6c11|\u5229\u7528|\u8fb2\u9592|\u6642|\u53c3\u8207|\u85dd|\u9663|\uff0c |\u65e2|\u53ef|\u4f11\u9592|\u5a1b\u6a02|\u3001|\u7df4\u6b66|\u5f37\u8eab|\uff0c|\u4e5f|\u9593\u63a5|\u9023\u7d61|\u60c5\u8abc|\uff0c|\u51dd\u805a|\u5730\u65b9|\u7684|\u5411\u5fc3\u529b|\u3002 \"Taiwanese folk art array used to be an amateur organization, and villagers used leisure time to participate in the folk art array. The purpose of it is for leisure, martial arts and also connect with friendship, condense the centripetal force of the place.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "The sentence \" \u662f \u70ba \u4e86 \u53cd \u6297 \u653f \u5e9c \u800c \u96c6 \u7d50 \u7d44 \u6210 \u7684 \" (\"was assembled against the government\") is the wrong fact in the question which describe the incorrect purpose of forming \"\u6c11\u4fd7\u85dd\u9663\" (\"folk art array\"). Although the sentences \"\u65e2\u53ef\u4f11\u9592\u5a1b\u6a02\u3001\u7df4\u6b66\u5f37 \u8eab\uff0c\u4e5f\u9593\u63a5\u9023\u7d61\u60c5\u8abc\uff0c\u51dd\u805a\u5730\u65b9\u7684\u5411\u5fc3\u529b\" are not matched, they in fact provide the supporting evidence to conclude that the associated statement in the given question is incorrect. Therefore, they should be included in the supporting evidence. This kind of error also need to identify the focusing-words in the sentence, and emphasize them with larger weights. To deal with the errors of this category, we need to know that the sentences \"\u7576\u5730\u4eba\u5efa\u5edf\u796d \u7940\uff0c\u662f\u6f33\u5dde\u4eba\u7684\u4fdd\u8b77\u795e\" (\"the local people built temples to honor him, he is the protecting god of the people of Zhangzhou\") implies \"\u4fe1\u4ef0\" (\"believe\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "(4) Paraphrase mismatch (10%): Since we only count those \"exactly matched\" words, we Supporting Evidence Retrieval for Answering Yes/No Questions 61 cannot match two terms that describe similar concepts but use different word-types. 
For example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "Question: \u53c3\u89c0|\u540d\u52dd\u53e4\u8e5f|\u6642|\u8981|\u7dad\u8b77|\u74b0\u5883|\u7684|\u6574\u6f54|\u3002 \"We should maintain a clean environment when visiting famous places and monuments.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "Top-1 candidate: \u53e4\u8e5f|\u6642|\uff0c|\u61c9|\u9075\u5b88|\u898f\u5b9a|\u4e26|\u7dad\u8b77|\u74b0\u5883|\u6574\u6f54|\uff1b \"monuments, you should abide by the regulations and maintain a clean environment.\" Benchmark: \u62dc\u8a2a|\u540d\u52dd|\uff0c|\u53e4\u8e5f|\u6642|\uff0c|\u61c9|\u9075\u5b88|\u898f\u5b9a|\u4e26|\u7dad\u8b77|\u74b0\u5883|\u6574\u6f54|\uff1b \"When traveling to famous places and monuments, you should abide by the regulations and maintain a clean environment.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "The terms \"\u53c3\u89c0\" (\"visiting\") and \"\u62dc\u8a2a\" (\"traveling to\") have similar meaning but are not matched in string. Therefore, the capability of detecting paraphrasing is needed to deal with this kind of problems. Because the same string \"\u540d\u52dd\u53e4\u8e5f\"(\"historical sites\") is segmented differently in the question (as one word: \"\u540d\u52dd\u53e4\u8e5f\") and in the candidates (as two words: \"\u540d\u52dd\" and \"\u53e4\u8e5f\"), the system thus regards the second sentence in the benchmark as a purely irrelevant string. (6) Others (27%): The errors in this category are either the cases that are caused by multiple error types mentioned above or the errors that only occupy a small portion. In the following example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "Question: \u4f4d|\u5728|\u5c71\u5730|\u4e18\u9675|\u7684|\u5730\u65b9|\u9069\u5408|\u767c\u5c55|\u6797\u696d|\uff0c|\u755c\u7267\u696d|\u3002 \"It is suitable for the development of forestry and animal husbandry in the hilly areas.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "Top-1 candidate: \u5c71\u5730|\u4e18\u9675|\u7b49|\u5730\u65b9|\uff0c|\u767c\u5c55|\u51fa|\u6797\u696d|\uff0c|\u755c\u7267\u696d|\u7b49|\u6d3b\u52d5|\uff1b|\u800c|\u5c45\u4f4f|\u5728|\u5e73\u539f| \u5730\u5340|\u7684|\u5c45\u6c11|\uff0c \"In hilly areas where forestry, animal husbandry and other activities are developed; for those residents living in the plains,\" Benchmark: \u5c71\u5730|\u4e18\u9675|\u7b49|\u5730\u65b9|\uff0c|\u767c\u5c55|\u51fa|\u6797\u696d|\uff0c|\u755c\u7267\u696d|\u7b49|\u6d3b\u52d5|\uff1b \"In hilly areas where forestry, animal husbandry and other activities are developed;\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "The error is caused by multiple reasons. First, because we treat lexicons equally, the last sentence in the Top-1 candidate matches the stop words which are not important. Second, the last sentence in the Top-1 candidate cannot express a meaning completely by its own. We need to detect the coherent of the sentence to deal with this kind of problem. 
An another example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "Question: \u8fd1\u5e74\u4f86|\u5404\u7e23\u5e02|\u89aa\u6c34|\u6b65\u9053|\uff0c|\u6cb3\u6ff1\u516c\u5712|\u7684|\u8a2d\u7acb|\u90fd|\u662f|\u6cb3\u5ddd|\u6574\u6cbb|\u7684|\u6210\u679c|\uff0c|\u4e0d\u4f46| \u6539\u5584|\u4e86|\u6cb3\u6d41|\u7684|\u6c34\u8cea|\uff0c|\u4e5f|\u63d0\u9ad8|\u4e86|\u5c45\u6c11|\u7684|\u751f\u6d3b\u54c1\u8cea|\u3002 \"In recent years, some city's hydrophilic trails and the establishment of the riverside park are the result of river remediation, which not only improves the water quality of the river, but also improves the quality of life of the residents.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "Top-1 candidate: \u5728|\u6574\u6cbb|\u904e\u7a0b|\u5f8c|\uff0c|\u6539\u5584|\u4e86|\u6cb3\u6d41|\u7684|\u6c34\u8cea|\uff0c|\u4e5f|\u63d0\u9ad8|\u5c45\u6c11|\u7684|\u751f\u6d3b\u54c1\u8cea|\u3002 \"After the remediation process, the water quality of the river has been improved and the quality of life of the residents has also been improved.\" Benchmark: \u9ad8\u96c4\u5e02|\u7684|\u611b\u6cb3|\u66fe|\u906d\u53d7|\u56b4\u91cd|\u6c61\u67d3|\uff0c|\u5728|\u6574\u6cbb|\u904e\u7a0b|\u5f8c|\uff0c|\u6539\u5584|\u4e86|\u6cb3\u6d41|\u7684|\u6c34\u8cea |\uff0c|\u4e5f|\u63d0\u9ad8|\u5c45\u6c11|\u7684|\u751f\u6d3b\u54c1\u8cea|\u3002 \"The love river in Kaohsiung has been seriously polluted. After the remediation process, the water quality of the river has been improved and the quality of life of the residents has also been improved.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "In this case, we need an extra module to link \"\u9ad8\u96c4\u5e02\" (Kaohsiung) to \"\u5404\u7e23\u5e02\" (some city) because \"Kaohsiung\" is an instance of \"some city\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "QE-BLEU:", "sec_num": null }, { "text": "As mentioned in Section 1, the previous studies of retrieving supporting evidence can be grouped into three categories: matching terms, conducting syntactic/semantic analysis, and scoring with a translation model. Term matching approaches focus on retrieving the related query from a large scale of documents by using similarity functions and word weight functions. For example, DrQA system (Chen et al., 2017) was developed for large scale applications such as retrieving the relevant documents from Wikipedia. In their document retriever model, they evaluated the similarity of the articles and questions by the score of TF-IDF weighted bag-of-word vectors. They also improved the model by taking bi-gram counts. However, those approaches usually do not consider word order and local context. Syntactic/semantic scoring approaches are specially developed to deal with certain QA datasets. The DeepQA pipeline in IBM Watson system (Murdock et al., 2012) , which is used in the task Jeopardy! 6 , presented four passage-scoring algorithms to retrieve the supporting evidence by scoring the passages. The scoring algorithms operate on the syntactic-semantic graphs constructed from analyzing the syntactic and semantic information of the documents. The QA system in (Jansen et al., 2017) was developed for standardized science exams. They extracted the focus words according to their scores of the concrete concepts. 
The words are scored by the psycholinguistic concreteness and rated from 1 (highly abstract) to 5 (highly concrete) by human. Nonetheless, this kind of approaches is more complex and their operations are usually more time-consuming.", "cite_spans": [ { "start": 391, "end": 410, "text": "(Chen et al., 2017)", "ref_id": "BIBREF2" }, { "start": 932, "end": 954, "text": "(Murdock et al., 2012)", "ref_id": "BIBREF8" }, { "start": 1265, "end": 1286, "text": "(Jansen et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6." }, { "text": "Translation model based approaches are widely adopted in community Q&A tasks. They mainly check the similarity between the queried question and those historical questions kept in the archive with a translation model (in which a higher translation score implies that they are more similar). In our case, this approach translates the given question into the specified supporting evidence candidate via a translation model, and then assigns the obtained translation score as the associated score of that candidate. These approaches can be further categorized into word-based approaches and phrase-based approaches. Word-based approaches (Berger et al., 2000; Jeon et al., 2005; Xue et al., 2008) adopt word translation probabilities in a language model to rank the similarity. Zhou et al. (2011) further extended this model into a phrase-based one and obtained better performances. This kind of approaches clearly needs large benchmark data which is expensive to construct in our task.", "cite_spans": [ { "start": 634, "end": 655, "text": "(Berger et al., 2000;", "ref_id": "BIBREF0" }, { "start": 656, "end": 674, "text": "Jeon et al., 2005;", "ref_id": "BIBREF5" }, { "start": 675, "end": 692, "text": "Xue et al., 2008)", "ref_id": "BIBREF11" }, { "start": 774, "end": 792, "text": "Zhou et al. (2011)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6." }, { "text": "In comparison with previous term matching approaches, our proposed n-gram matching approaches further consider word order and local context, and thus improve the retrieval accuracy. On the other hand, for those syntactic/semantic scoring approaches, the proposed approaches can operate more efficiently due to the use of simple string matching. Last, comparing with those translation model based approaches, our approaches do not need large training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6." }, { "text": "Two different models are proposed in this paper to retrieve supporting evidence for the given Yes/No question: QE-BLEU and Modified F-measure. In comparison with previous approaches, the proposed approaches provide better accuracy and efficiency. Both of them adopt n-gram to incorporate phrases and local context; however, the Modified F-measure takes care of both precision and recall, while QE-BLEU only handles recall of the question. Experiment results have shown that both of them outperform Lucene Apache search engine by 5%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Our main contributions mainly are: (1) We proposed and tested two novel approaches to retrieve the supporting evidence, and have obtained better performances. 2We list the desired characteristics of the supporting evidence retrieved. 
3We implement and compare various refinement techniques, including smoothing and optimization, for the proposed approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "https://github.com/hankcs/HanLP", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "GitHub repository, https://github.com/jhclark/multeval/tree/master/src/jbleu 4 It basically assigns a geometric sequence to the n-gram that has 0 matches. 5 https://pythonhosted.org/pyswarm/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Jeopardy! is an American television game show.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Meng-Tse Wu et al.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Bridging the lexical chasm: statistical approaches to answer-finding", "authors": [ { "first": "A", "middle": [], "last": "Berger", "suffix": "" }, { "first": "R", "middle": [], "last": "Caruana", "suffix": "" }, { "first": "D", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "D", "middle": [], "last": "Freitag", "suffix": "" }, { "first": "V", "middle": [], "last": "Mittal", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "192--199", "other_ids": { "DOI": [ "10.1145/345508.345576" ] }, "num": null, "urls": [], "raw_text": "Berger, A., Caruana, R., Cohn, D., Freitag, D. & Mittal, V. (2000). Bridging the lexical chasm: statistical approaches to answer-finding. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, 192-199. doi: 10.1145/345508.345576", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A systematic comparison of smoothing techniques for sentence-level BLEU", "authors": [ { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "C", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3115/v1/W14-3346" ] }, "num": null, "urls": [], "raw_text": "Chen, B. & Cherry, C. (2014). A systematic comparison of smoothing techniques for sentence-level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation. doi: 10.3115/v1/W14-3346", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Reading Wikipedia to Answer Open-Domain Questions", "authors": [ { "first": "D", "middle": [], "last": "Chen", "suffix": "" }, { "first": "A", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "A", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.00051" ] }, "num": null, "urls": [], "raw_text": "Chen, D., Fisch, A., Weston, J. & Bordes, A. (2017). Reading Wikipedia to Answer Open-Domain Questions. 
Retrived from arXiv preprint arXiv:1704.00051", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The statistical significance of the MUC-4 results", "authors": [ { "first": "N", "middle": [], "last": "Chinchor", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 4th conference on Message understanding", "volume": "", "issue": "", "pages": "30--50", "other_ids": { "DOI": [ "10.3115/1072064.1072068" ] }, "num": null, "urls": [], "raw_text": "Chinchor, N. (1992). The statistical significance of the MUC-4 results. In Proceedings of the 4th conference on Message understanding, 30-50. doi: 10.3115/1072064.1072068", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Framing QA as Building and Ranking Intersentence Answer Justifications", "authors": [ { "first": "P", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "R", "middle": [], "last": "Sharp", "suffix": "" }, { "first": "M", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "P", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "2", "pages": "407--449", "other_ids": { "DOI": [ "10.1162/COLI_a_00287" ] }, "num": null, "urls": [], "raw_text": "Jansen, P., Sharp, R., Surdeanu, M. & Clark, P. (2017). Framing QA as Building and Ranking Intersentence Answer Justifications. Computational Linguistics, 43(2), 407-449. doi: 10.1162/COLI_a_00287", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Finding similar questions in large question and answer archives", "authors": [ { "first": "J", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 14th ACM international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "84--90", "other_ids": { "DOI": [ "10.1145/1099554.1099572" ] }, "num": null, "urls": [], "raw_text": "Jeon, J., Croft, W. B. & Lee, J. H. (2005). Finding similar questions in large question and answer archives. In Proceedings of the 14th ACM international conference on Information and knowledge management, 84-90. doi: 10.1145/1099554.1099572", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Supporting Evidence Retrieval for Answering Yes/No Questions 65", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Supporting Evidence Retrieval for Answering Yes/No Questions 65", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Introduction to information retrieval", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "P", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, C. D., Raghavan, P. & Sch\u00fctze, H. (2008). Introduction to information retrieval. 
New York, NY: Cambridge university press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Textual evidence gathering and analysis", "authors": [ { "first": "J", "middle": [ "W" ], "last": "Murdock", "suffix": "" }, { "first": "J", "middle": [], "last": "Fan", "suffix": "" }, { "first": "A", "middle": [], "last": "Lally", "suffix": "" }, { "first": "H", "middle": [], "last": "Shima", "suffix": "" }, { "first": "B", "middle": [ "K" ], "last": "Boguraev", "suffix": "" } ], "year": 2012, "venue": "IBM Journal of Research and Development", "volume": "56", "issue": "3-4", "pages": "", "other_ids": { "DOI": [ "10.1147/JRD.2012.2187249" ] }, "num": null, "urls": [], "raw_text": "Murdock, J. W., Fan, J., Lally, A., Shima, H. & Boguraev, B. K. (2012). Textual evidence gathering and analysis. IBM Journal of Research and Development, 56(3-4), 8:1-8:14. doi: 10.1147/JRD.2012.2187249", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W.-J", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting of the association for computational linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Papineni, K., Roukos, S., Ward, T. & Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the association for computational linguistics, 311-318. doi: 10.3115/1073083.1073135", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The truth of the F-measure. Teach Tutor mater", "authors": [ { "first": "Y", "middle": [], "last": "Sasaki", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sasaki, Y. (2007). The truth of the F-measure. Teach Tutor mater. Retrived from http://www.flowdx.com/F-measure-YS-26Oct07.pdf", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Retrieval models for question and answer archives", "authors": [ { "first": "X", "middle": [], "last": "Xue", "suffix": "" }, { "first": "J", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "475--482", "other_ids": { "DOI": [ "10.1145/1390334.1390416" ] }, "num": null, "urls": [], "raw_text": "Xue, X., Jeon, J. & Croft, W. B. (2008). Retrieval models for question and answer archives. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, 475-482. 
doi: 10.1145/1390334.1390416", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Phrase-based translation model for question retrieval in community question answer archives", "authors": [ { "first": "G", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "L", "middle": [], "last": "Cai", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "K", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "653--662", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou, G., Cai, L., Zhao, J. & Liu, K. (2011). Phrase-based translation model for question retrieval in community question answer archives. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 1, 653-662.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Error types of Modified F-measure Supporting Evidence Retrieval for Answering Yes/No Questions 59", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Require real-world knowledge (13%): This kind of errors is caused by the shortage of real-world knowledge. For example: Question: \u958b|\u6f33|\u8056\u738b|\u9673\u5143\u5149|\u56e0|\u958b\u767c|\u6f33\u5dde|\u6709|\u529f|\u800c|\u88ab|\u7576\u5730|\u4eba\u5011|\u6240|\u4fe1\u4ef0|\u3002 \"Chen Yuanguang, the Kaizhang Shengwang, was believed by the local people for his contribution in developing Zhangzhou.\" Top-1 candidate: \u5b9c\u862d\u7e23|\u58ef\u570d\u9109|\u958b|\u6f33|\u8056\u738b|\u5edf|\u796d\u7940|\u958b|\u6f33|\u8056\u738b|\u3002|\u56e0|\u5510\u671d|\u6b66\u9032\u58eb|\u9673\u5143\u5149| \u958b\u767c|\u6f33\u5dde|\u6709|\u529f|\uff0c \"In the Zhuangwei Township of Yilan County, the Kaizhang Shengwang Temple worship the Kaizhang Shengwang. Because Chen Yuanguang had contributed in developing Zhangzhou,\" Benchmark: \u5b9c\u862d\u7e23|\u58ef\u570d\u9109|\u958b|\u6f33|\u8056\u738b|\u5edf|\u796d\u7940|\u958b|\u6f33|\u8056\u738b|\u3002|\u56e0|\u5510\u671d|\u6b66\u9032\u58eb|\u9673\u5143\u5149|\u958b\u767c| \u6f33\u5dde|\u6709|\u529f|\uff0c|\u7576\u5730\u4eba|\u5efa|\u5edf|\u796d\u7940|\uff0c|\u662f|\u6f33\u5dde\u4eba|\u7684|\u4fdd\u8b77\u795e|\u3002 \"In the Zhuangwei Township of Yilan County, the Kaizhang Shengwang Temple worship the Kaizhang Shengwang. Because Chen Yuanguang had contributed in developing Zhangzhou, the local people built temples to honor him, he is the protecting god of the people of Zhangzhou.\"", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Inconsistent word segmentation (3%):This type of errors is caused by the inconsistent word segmentation between the word in questions and lessons. 
For example:Question: \u540d\u52dd\u53e4\u8e5f|\u7684|\u74b0\u5883|\u7dad\u8b77|\u662f|\u653f\u5e9c|\u7684|\u8cac\u4efb|\uff0c|\u8207|\u53c3\u8a2a|\u6c11\u773e|\u7121\u95dc|\u3002 \"The environmental maintenance of historical sites is the responsibility of the government and has nothing to do with the visitors.\"Top-1 candidate: \u7dad\u8b77|\u5bb6\u9109|\u7684|\u540d\u52dd|\uff0c|\u53e4\u8e5f|\uff0c|\u9700\u8981|\u653f\u5e9c|\u6a5f\u95dc|\u8207|\u6c11\u9593|\u6a5f\u69cb|\u7a4d\u6975\u5408\u4f5c|\uff0c \"The maintenance of historical sites in the hometown requires active cooperation between government agencies and private institutions.\" Benchmark: \u7dad\u8b77|\u5bb6\u9109|\u7684|\u540d\u52dd|\uff0c|\u53e4\u8e5f|\u9700\u8981|\u653f\u5e9c|\u6a5f\u95dc|\u8207|\u6c11\u9593|\u6a5f\u69cb|\u7a4d\u6975\u5408\u4f5c|\uff0c|\u52a0\u5f37|\u5c0d| \u540d\u52dd|\uff0c|\u53e4\u8e5f|\u7684|\u7ba1\u7406|\u8207|\u4fee\u5fa9|\uff0c|\u4e5f|\u9700\u8981|\u5c45\u6c11|\u5171\u540c|\u95dc\u5fc3|\u8207|\u611b\u8b77|\u3002 \"The maintenance of historical sites in the hometown requires active cooperation between government agencies and private institutions in order to strengthen the management and restoration of historical sites. It also requires the resident's care and protection.\"", "type_str": "figure", "num": null, "uris": null }, "TABREF2": { "text": "", "html": null, "num": null, "type_str": "table", "content": "
Data-Set | Development-Set | Test-Set
#Lesson | 124 | 54
#Question | 202 | 414
Averaged #paragraphs per lesson | 26.8 | 30.6
Averaged #sentences per paragraph | 3.7 | 3.6
Averaged #words per sentence | 5.0 | 5.0
Averaged #characters per sentence | 9.1 | 9.0
" }, "TABREF3": { "text": "briefly lists different reasons and their associated percentages. As shown inTable 4, 16.2% of the questions are originally marked as the case", "html": null, "num": null, "type_str": "table", "content": "
" }, "TABREF4": { "text": "", "html": null, "num": null, "type_str": "table", "content": "
No evidence in the text | 16.2% (67/414)
Non-Top-1 paragraph | 12.8% (53/414)
Non-consecutive passage | 13.8% (57/414)
Total | 42.8% (177/414)
Baseline:
Apache Lucene (TF-IDF) | 54.43%
Apache Lucene (BM25) | 46.84%
Equal N-gram Weight:
 | Equal N-gram Weight | +Smoothing
QE-BLEU | 37.13% | 52.32%
Modified F-measure (\u03b1=0.5) | 37.55% | 42.19%
Modified F-measure (\u03b1=0.13) | 58.23% | 50.63%
" }, "TABREF5": { "text": "", "html": null, "num": null, "type_str": "table", "content": "
" } } } }