{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:28:36.191995Z"
},
"title": "Transformer based Natural Language Generation for Question-Answering",
"authors": [
{
"first": "Imen",
"middle": [],
"last": "Akermi",
"suffix": "",
"affiliation": {},
"email": "imen.elakermi@orange.com"
},
{
"first": "Johannes",
"middle": [],
"last": "Heinecke",
"suffix": "",
"affiliation": {},
"email": "johannes.heinecke@orange.com"
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Herledan",
"suffix": "",
"affiliation": {},
"email": "frederic.herledan@orange.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper explores Natural Language Generation within the context of Question-Answering task. The several works addressing this task only focused on generating a short answer or a long text span that contains the answer, while reasoning over a Web page or processing structured data. Such answers' length are usually not appropriate as the answer tend to be perceived as too brief or too long to be read out loud by an intelligent assistant. In this work, we aim at generating a concise answer for a given question using an unsupervised approach that does not require annotated data. Tested over English and French datasets, the proposed approach shows very promising results.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper explores Natural Language Generation within the context of Question-Answering task. The several works addressing this task only focused on generating a short answer or a long text span that contains the answer, while reasoning over a Web page or processing structured data. Such answers' length are usually not appropriate as the answer tend to be perceived as too brief or too long to be read out loud by an intelligent assistant. In this work, we aim at generating a concise answer for a given question using an unsupervised approach that does not require annotated data. Tested over English and French datasets, the proposed approach shows very promising results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Question-Answering systems (QAS) aim at analyzing and processing user questions in order to provide relevant answers (Hirschman and Gaizauskas, 2001 ). The recent popularity of intelligent assistants has increased the interest in QAS which have become a key component of \"Human-Machine\" exchanges since they allow users to have instant answers to their questions in natural language using their own terminology without having to go through a long list of documents to find the appropriate answers.",
"cite_spans": [
{
"start": 117,
"end": 148,
"text": "(Hirschman and Gaizauskas, 2001",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the existing research work focuses on the major complexity of these systems residing in the processing and interpretation of the question that expresses the user's need for information, without considering the representation of the answer itself. Usually, the answer is either represented by a short set of terms answering exactly the question (case of QAS which extract answers from structured data), or by a text span extracted from a document which, besides the exact answer, can integrate other unnecessary information that are not relevant to the context of the question asked. The following presents two answers for Who is the thesis supervisor of Albert Einstein? possibly generated by two systems :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Alfred Kleiner Albert Einstein is a German-born theoretical physicist who developed the theory of relativity, one of the two pillars of modern physics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given the specificity of QAS which extract answers from structured data, users generally receive only a short and limited answer to their questions as illustrated by the example above. This type of answer representation might not meet the user expectations. Indeed, the type of answer given by the first system can be perceived as too brief not recalling the context of the question. The second system returns a passage which contains information that are out of the question's scope and might be deemed by the user as irrelevant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is within this framework that we propose in this article an approach which allows to generate a concise answer in natural language (e.g. The thesis superviser of Albert Einstein was Alfred Kleiner) that shows very promising results tested over French and English questions. This approach is a component of a QAS that we proposed in Rojas Barahona et al. (2019) and that we will briefly present in this article.",
"cite_spans": [
{
"start": 341,
"end": 363,
"text": "Barahona et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In what follows, we detail in section 3 the approach we propose for answer generation in Natural Language and we briefly discuss the QAS developed. We present in section 4 the experiments that we have conducted to evaluate this approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The huge amount of information available nowadays makes the task of retrieving relevant informa-tion complex and time consuming. This complexity has prompted the development of QAS which help spare the user the search and the information filtering tasks, as it is often the case with search engines, and directly return the exact answer to a question asked in natural language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The QAS cover mainly three tasks: question analysis, information retrieval and answer extraction (Lopez et al., 2011) . These tasks have been tackled in different ways, considering the knowledge bases used, the types of questions addressed (Iida et al., 2019; Zayaraz et al., 2015; Dwivedi and Singh, 2013; Lopez et al., 2011) and the way in which the answer is presented. In this article, we particularly focus on the answer generation process.",
"cite_spans": [
{
"start": 97,
"end": 117,
"text": "(Lopez et al., 2011)",
"ref_id": "BIBREF25"
},
{
"start": 240,
"end": 259,
"text": "(Iida et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 260,
"end": 281,
"text": "Zayaraz et al., 2015;",
"ref_id": "BIBREF50"
},
{
"start": 282,
"end": 306,
"text": "Dwivedi and Singh, 2013;",
"ref_id": "BIBREF14"
},
{
"start": 307,
"end": 326,
"text": "Lopez et al., 2011)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We generally notice two forms of representation addressed in literature. The answer can take the form of a paragraph selected from a set of text passages retrieved from the web (Asai et al., 2018; Du and Cardie, 2018; Wang and Jiang, 2016; Wang et al., 2017; Oh et al., 2016) , as it can also be the exact answer to the question extracted from a knowledge base (Wu et al., 2003; Bhaskar et al., 2013; Le et al., 2016) .",
"cite_spans": [
{
"start": 177,
"end": 196,
"text": "(Asai et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 197,
"end": 217,
"text": "Du and Cardie, 2018;",
"ref_id": "BIBREF13"
},
{
"start": 218,
"end": 239,
"text": "Wang and Jiang, 2016;",
"ref_id": "BIBREF47"
},
{
"start": 240,
"end": 258,
"text": "Wang et al., 2017;",
"ref_id": "BIBREF48"
},
{
"start": 259,
"end": 275,
"text": "Oh et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 361,
"end": 378,
"text": "(Wu et al., 2003;",
"ref_id": "BIBREF49"
},
{
"start": 379,
"end": 400,
"text": "Bhaskar et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 401,
"end": 417,
"text": "Le et al., 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Despite the abundance of work in the field of QAS, the answers generation issue has received little attention. A first approach indirectly addressing this task has been proposed in Brill et al. (2001 Brill et al. ( , 2002 . Indeed, the authors aimed at diversifying the possible answer patterns by permuting the question's words in order to maximise the number of retrieved documents that may contain the answer to the given question. Another answer representation approach based on rephrasing rules has also been proposed in Agichtein and Gravano (2000) ; Lawrence and Giles (1998) within the context of query expansion task for document retrieval and not purposely for the question-answering task.",
"cite_spans": [
{
"start": 181,
"end": 199,
"text": "Brill et al. (2001",
"ref_id": "BIBREF8"
},
{
"start": 200,
"end": 221,
"text": "Brill et al. ( , 2002",
"ref_id": "BIBREF7"
},
{
"start": 526,
"end": 554,
"text": "Agichtein and Gravano (2000)",
"ref_id": "BIBREF3"
},
{
"start": 557,
"end": 582,
"text": "Lawrence and Giles (1998)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The few works that have considered this task within the QAS framework have approached it from a text summary generation perspective (Ishida et al., 2018; Iida et al., 2019; Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; Miao and Blunsom, 2016; See et al., 2017; Oh et al., 2016; Sharp et al., 2016; . These works consist in generating a summary of a single or various text spans that contain the answer to a question. Most of these works have only considered causality questions like the ones starting with \"why\" and whose answers are para-graphs. To make these answers more concise, the extracted paragraphs are summed up.",
"cite_spans": [
{
"start": 132,
"end": 153,
"text": "(Ishida et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 154,
"end": 172,
"text": "Iida et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 173,
"end": 191,
"text": "Rush et al., 2015;",
"ref_id": "BIBREF37"
},
{
"start": 192,
"end": 212,
"text": "Chopra et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 213,
"end": 236,
"text": "Nallapati et al., 2016;",
"ref_id": "BIBREF29"
},
{
"start": 237,
"end": 260,
"text": "Miao and Blunsom, 2016;",
"ref_id": "BIBREF27"
},
{
"start": 261,
"end": 278,
"text": "See et al., 2017;",
"ref_id": "BIBREF40"
},
{
"start": 279,
"end": 295,
"text": "Oh et al., 2016;",
"ref_id": "BIBREF32"
},
{
"start": 296,
"end": 315,
"text": "Sharp et al., 2016;",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other approaches (Kruengkrai et al., 2017; Girju, 2003; Verberne et al., 2011; Oh et al., 2013) have explored this task as a classification problem that consists in predicting whether a text passage can be considered as an answer to a given question.",
"cite_spans": [
{
"start": 17,
"end": 42,
"text": "(Kruengkrai et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 43,
"end": 55,
"text": "Girju, 2003;",
"ref_id": "BIBREF15"
},
{
"start": 56,
"end": 78,
"text": "Verberne et al., 2011;",
"ref_id": "BIBREF46"
},
{
"start": 79,
"end": 95,
"text": "Oh et al., 2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "It should be noted that these approaches only intend to diversify as much as possible the answer representation patterns to a given question in order to increase the probability of extracting the correct answer from the Web and do not focus on the answer's representation itself. It should also be noted that these approaches are only applicable for QAS which extract answers as a text snippet and cannot be applied to short answers usually extracted from knowledge bases. The work presented in Pal et al. (2019) tried to tackle this issue by proposing a supervised approach that was trained on a small dataset whose questions/answers pairs were extracted from machine comprehension datasets and augmented manually which make generalization and capturing variation very limited.",
"cite_spans": [
{
"start": 495,
"end": 512,
"text": "Pal et al. (2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our answer generation approach differs from these works as it is unsupervised, can be adapted to any type of factual question (except for why) and is based only on easily accessible and unannotated data. Indeed, we build upon the intuitive hypothesis that a concise answer and easily pronounced by an intelligent assistant can in fact consist of a reformulation of the question asked. This approach is a part of a QAS that we have developed in Rojas Barahona et al. (2019) that extracts the answer to a question from structured data.",
"cite_spans": [
{
"start": 450,
"end": 472,
"text": "Barahona et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In what follows, we detail in section 3 the approach we propose for answer generation in Natural Language and we briefly discuss the QAS developed. We present in section 4 the experiments that we have conducted to evaluate this approach. and we conclude in section 5 with the limitations noted and the perspectives considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The answer generation approach proposed is a component of a system which was developed in Rojas Barahona et al. (2019) and which consists in a spoken conversational question-answering system which analyses and translates a question in natural language (French or English) in a formal representation that is transformed into a Sparql query 1 .",
"cite_spans": [
{
"start": 96,
"end": 118,
"text": "Barahona et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLG Approach for Answer Generation",
"sec_num": "3"
},
{
"text": "The Sparql query helps extracting the answer to the given question from an RDF knowledge base, in our case Wikidata 2 . The extracted answer takes the form of a list of URIs or values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLG Approach for Answer Generation",
"sec_num": "3"
},
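To make the extraction step concrete, here is a minimal sketch (our own, not the authors' implementation) of querying the public Wikidata SPARQL endpoint for the running example and collecting the short answer as a list of values; P184 ("doctoral advisor") and Q937 ("Albert Einstein") are real Wikidata identifiers.

```python
# Hedged sketch of the answer extraction step against Wikidata's SPARQL endpoint.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?advisorLabel WHERE {
  wd:Q937 wdt:P184 ?advisor .          # Albert Einstein -> doctoral advisor
  ?advisor rdfs:label ?advisorLabel .
  FILTER(LANG(?advisorLabel) = "en")
}
"""

response = requests.get(SPARQL_ENDPOINT,
                        params={"query": QUERY, "format": "json"},
                        headers={"User-Agent": "qa-nlg-sketch/0.1"})
bindings = response.json()["results"]["bindings"]
# The extracted answer is a list of values/URIs, here labels.
short_answer = [b["advisorLabel"]["value"] for b in bindings]
print(short_answer)  # e.g. ['Alfred Kleiner']
```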
{
"text": "Although the QAS that we have developed (Rojas Barahona et al., 2019) is able to find the correct answer to a question, we have noticed that its short representation is not user-friendly. Therefore, we propose an unsupervised approach which integrates the use of Transformer models such as BERT (Devlin et al., 2019) and GPT (Radford et al., 2018) . The choice of an unsupervised approach arises from the fact that there is no available training dataset associating a question with an exhaustive and concise answer at the same time. such dataset could have helped use an End-to-End learning neural architecture that can generate an elaborated answer to a question.",
"cite_spans": [
{
"start": 47,
"end": 69,
"text": "Barahona et al., 2019)",
"ref_id": null
},
{
"start": 295,
"end": 316,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 325,
"end": 347,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLG Approach for Answer Generation",
"sec_num": "3"
},
{
"text": "This approach builds upon the fact that we have already extracted the short answer to a given question and assumes that a user-friendly answer can consist in rephrasing the question words along with the short answer. This approach is composed of two fundamental phases: The dependency analysis of the input question and the answer generation using Transformer models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLG Approach for Answer Generation",
"sec_num": "3"
},
{
"text": "For the dependency analysis, we use an extended version of UDPipeFuture (Straka, 2018) which showed its state of the art performance by becoming first in terms of the Morphology-aware Labeled Attachment Score (MLAS) 3 metric at the CoNLL Shared Task of dependency parsing in 2018 (Zeman et al., 2018). UDPipeFuture is a POS tagger and graph parser based dependency parser using a BiLSTM, inspired by Dozat et al. (2017) .",
"cite_spans": [
{
"start": 72,
"end": 86,
"text": "(Straka, 2018)",
"ref_id": "BIBREF42"
},
{
"start": 400,
"end": 419,
"text": "Dozat et al. (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency parsing",
"sec_num": "3.1"
},
{
"text": "Our modification consisted in adding several contextual word embeddings (with respect to the language). In order to find the best configuration we experimented with models like multilingual BERT (Devlin et al., 2019) , XLM-R (Conneau et al., 2019) (for both, English and French), RoBERTA (Liu et al., 2019 ) (for English), FlauBERT (Le et al., 2020) and CamemBERT (Martin et al., 2019 ) (for French) during the training of the treebanks French-2 https://www.wikidata.org/ 3 MLAS is a metric which takes into account POS tags and morphological features. It is inspired by the Content-Word Labeled Attachment Score (CLAS, Nivre and Fang (2017) which differentiates between content word and function words. Both are derived from the standard Labeled Attachment Score (LAS) metric.",
"cite_spans": [
{
"start": 195,
"end": 216,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 288,
"end": 305,
"text": "(Liu et al., 2019",
"ref_id": "BIBREF24"
},
{
"start": 332,
"end": 349,
"text": "(Le et al., 2020)",
"ref_id": null
},
{
"start": 364,
"end": 384,
"text": "(Martin et al., 2019",
"ref_id": "BIBREF26"
},
{
"start": 620,
"end": 641,
"text": "Nivre and Fang (2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency parsing",
"sec_num": "3.1"
},
{
"text": "GSD and English-EWT 4 , of the Universal Dependencies project (UD) (Nivre et al., 2016) 5 . Adding contextual word embedding increases significantly the results for all metrics, LAS, CLAS and MLAS (cf. table 1). This is the case for all languages (of the CoNLL shared task), where language specific contextual embeddings or multingual ones (as BERT or XLM-R) improved parsing (Heinecke, 2020) French ( In order to parse simple, quiz-like questions, the training corpora of the two UD treebanks are not appropriate (enough), since both treebanks do not contain many questions, if at all 6 .",
"cite_spans": [
{
"start": 67,
"end": 89,
"text": "(Nivre et al., 2016) 5",
"ref_id": null
},
{
"start": 376,
"end": 392,
"text": "(Heinecke, 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency parsing",
"sec_num": "3.1"
},
{
"text": "An explanation for bad performance on questions of parser models trained on standard UD is the fact, that in both languages, the syntax of questions differs from the syntax of declarative sentences: apart from wh question words, in English the to do periphrasis is nearly always used in questions. In French, subject and direct objects can be inversed and the est-ce que construction appears frequently. Both, the English to do periphrasis and the French est-ce que construction are absent in declarative sentences. Table 2 shows the (much lower) results when parsing questions using models trained only on the standard UD treebanks.",
"cite_spans": [],
"ref_spans": [
{
"start": 516,
"end": 523,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Dependency parsing",
"sec_num": "3.1"
},
{
"text": "In order to get a better analysis, we decided to annotate additional sentences (quiz-like questions) and add this data to the basic treebanks. For English we annotated 309 questions (plus 91 questions for validation) from the QALD7 (Usbeck et al., 2017) and QALD8 corpora 7 . For French we translated the QALD7 questions into French and formulated others ourselves (276 train, 66 validation). For the annotations we followed the general UD guidelines 8 as well as the treebank specific guidelines of En-EWT and Fr-GSD.",
"cite_spans": [
{
"start": 232,
"end": 253,
"text": "(Usbeck et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency parsing",
"sec_num": "3.1"
},
{
"text": "As table 3 shows, the quality of the dependency analysis improves considerably. The contextual word embeddings CamemBERT (for French) and BERT (English) have the biggest impact. We rely on the UdpipeFuture version which we have improved with BERT (for English)/CamemBERT (for French) and which gives the best results in terms of dependency analysis, in order to proceed with the partitioning of the question into textual fragments (also called chunks):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency parsing",
"sec_num": "3.1"
},
{
"text": "Q = {c 1 , c 2 , . . . , c n }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency parsing",
"sec_num": "3.1"
},
{
"text": "If we take the example of the question What is the political party of the mayor of Paris?, the set of textual fragments would be Q = {What, is, the political party of the mayor of Paris }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency parsing",
"sec_num": "3.1"
},
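As an illustration of this chunking step, the following is a hypothetical sketch that uses spaCy in place of the authors' modified UDPipeFuture parser; the function name chunk_question is our own. The root token and the subtrees of its direct dependents become the textual fragments c_1, ..., c_n.

```python
# Hypothetical chunking sketch (spaCy stands in for the modified UDPipeFuture).
import spacy

nlp = spacy.load("en_core_web_sm")

def chunk_question(question: str) -> list[str]:
    doc = nlp(question)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")
    units = [[root]]
    for child in root.children:
        if child.is_punct:
            continue
        units.append(sorted(child.subtree, key=lambda t: t.i))
    units.sort(key=lambda toks: toks[0].i)  # restore surface order
    return [" ".join(tok.text for tok in toks) for toks in units]

# Depending on the parse, this yields the fragments from the example:
# ['What', 'is', 'the political party of the mayor of Paris']
print(chunk_question("What is the political party of the mayor of Paris?"))
```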
{
"text": "During this phase, we first carry out a first test of the set Q to check whether the text fragment which contains a question marker (exp: what, when, who etc.) represents the subject nsubj in the analysed question. If so, we simply replace that text fragment with the answer we identified earlier. Let us take the previous example What is the political party of the mayor of Paris?, the system automatically detects that the text fragment containing the question marker What represents the subject and will therefore be replaced directly by the exact answer The Socialist Party. Therefore, the concise answer generated will be The Socialist Party is the political party of the mayor of Paris.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
{
"text": "Otherwise, we remove the text fragment containing the question marker that we detected and we add the short answer R to Q:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
{
"text": "Q = {c 1 , c 2 , . . . , c n\u22121 , R}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
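A minimal sketch of the two rewriting rules just described, under the assumption that the dependency parse has already told us whether the marker chunk is the nsubj; the helper names and the marker list below are our own, not the paper's.

```python
# Hedged sketch of the two rewriting rules (helper names are ours).
QUESTION_MARKERS = {"what", "who", "whom", "when", "where", "which", "how"}

def rewrite(chunks: list[str], short_answer: str, marker_is_subject: bool) -> list[str]:
    marker = next(c for c in chunks
                  if c.split()[0].lower() in QUESTION_MARKERS)
    if marker_is_subject:
        # rule 1: substitute the short answer R for the subject chunk
        return [short_answer if c == marker else c for c in chunks]
    # rule 2: drop the marker chunk and add R, giving Q = {c_1,...,c_{n-1}, R}
    return [c for c in chunks if c != marker] + [short_answer]

chunks = ["What", "is", "the political party of the mayor of Paris"]
print(rewrite(chunks, "The Socialist Party", marker_is_subject=True))
# ['The Socialist Party', 'is', 'the political party of the mayor of Paris']
```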
{
"text": "Using the text fragments set Q, we proceed with a permutation based generation of all possible answer structures that can form the sentence answering the question asked:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
{
"text": "S = {s 1 (R, c 1 , c 2 , . . . , c n\u22121 ), s 2 (c 1 , R, c 2 , . . . , c n\u22121 ), . . . , s m (c 1 , c 2 , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
{
"text": ". . , c n\u22121 , R)} These structures will be evaluated by a Language Model (LM) based on Transformer models which will extract the most probable sequence of text fragments that can account for the answer to be sent to the user:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
{
"text": "structure * = s \u2208 S; p(s) = argmax s i \u2208S p(s i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
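The selection step can be sketched as follows (an assumed implementation, not the authors' code): following the listing of S above, the short answer R is tried at every position among the question chunks, and a Transformer LM (GPT-2 via Hugging Face here; the paper evaluates several LMs) scores each candidate. Taking the lowest average negative log-likelihood is equivalent to the argmax over p(s_i).

```python
# Assumed sketch of structure generation and LM-based selection.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_nll(sentence: str) -> float:
    # average negative log-likelihood under the LM (lower = more probable)
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def best_structure(chunks: list[str], short_answer: str) -> str:
    # try the short answer R at every position among the ordered chunks
    candidates = [" ".join(chunks[:i] + [short_answer] + chunks[i:])
                  for i in range(len(chunks) + 1)]
    return min(candidates, key=avg_nll)

print(best_structure(["is", "the political party of the mayor of Paris"],
                     "The Socialist Party"))
```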
{
"text": "Once the best structure is identified, we initiate the generation process of possible missing words. Indeed, we suppose that there could be some terms which do not necessarily appear in the question or in the short answer but which are, on the other hand, necessary to the generation of a correct grammatical structure of the final answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
{
"text": "This process requires that we set two parameters, the number of possible missing words and their positions within the selected structure. In this paper, we experiment the assumption that one word could be missing and that it is located before the short answer within the identified structure, as it could be the case for a missing article (the, a, etc.) or a preposition (in, at, etc.) for example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
{
"text": "Therefore, to predict this missing word, we use BERT as the generation model (GM) for its ability to capture bidirectionally the context of a given word within a sentence. In case when BERT returns a non-alphabetic character sequence, we assume that the optimal structure, as predicted by the LM, does not need to be completed by an additional word. The following example illustrates the different steps of the proposed approach:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
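This step can be sketched with a masked-LM pipeline (a hedged illustration; the exact decoding beyond what is described above is not specified in the paper): a [MASK] is inserted just before the short answer in the selected structure, BERT proposes a filler, and a non-alphabetic prediction is read as "no word missing".

```python
# Hedged sketch of the missing-word prediction step with BERT as the GM.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def complete(chunks: list[str], answer_index: int) -> str:
    # insert a mask just before the short answer within the structure
    masked = chunks[:answer_index] + ["[MASK]"] + chunks[answer_index:]
    prediction = fill_mask(" ".join(masked))[0]["token_str"].strip()
    if not prediction.isalpha():
        return " ".join(chunks)        # structure needs no additional word
    masked[answer_index] = prediction  # keep the predicted word
    return " ".join(masked)

# e.g. "Princess Diana died [MASK] 1997" -> "Princess Diana died in 1997"
print(complete(["Princess Diana died", "1997"], answer_index=1))
```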
{
"text": "Question: When did princess Diana die?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
{
"text": "1. Question parsing and answer extraction using the system proposed in Rojas Barahona et al. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer generation",
"sec_num": "3.2"
},
{
"text": "The existing QAS test sets are more tailored to systems which generate the exact short answer to a question or more focused on the Machine Reading Comprehension task where the answer consists of a text passage from a document containing the short answer. Therefore, we have created a dataset which maps questions extracted from the QALD-7 challenge dataset (Usbeck et al., 2017) with natural language answers which were defined by a linguist and which we individually reviewed. This dataset called QUEREO consists of 150 questions with the short answers extracted by the QAS that we described above. We denote an average of three possible gold sanswers in natural language for each question. French and English versions were created for this dataset. As illustrated in figure 1, two possible architectures of the approach proposed for answer generation have been evaluated. The first architecture A1 consists in generating all possible answer structures in order to have them evaluated afterwards by a LM which will identify the optimal answer structure to which we generate possible missing words. Architecture A2 starts with generating missing words for each structure in S which will then be evaluated by the LM. In this paper, we assume that there is only one missing word per structure.",
"cite_spans": [
{
"start": 357,
"end": 378,
"text": "(Usbeck et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
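Schematically, the two architectures differ only in the order of the two steps; the sketch below is our own formulation, with the LM scorer and the missing-word generator passed in as functions.

```python
# Schematic contrast of the two evaluated pipelines (our own formulation).
from typing import Callable

def a1(structures: list[str], nll: Callable[[str], float],
       fill: Callable[[str], str]) -> str:
    # A1: LM selects the best structure first, then the missing word is generated
    return fill(min(structures, key=nll))

def a2(structures: list[str], nll: Callable[[str], float],
       fill: Callable[[str], str]) -> str:
    # A2: a missing word is generated for every structure, then the LM selects
    return min((fill(s) for s in structures), key=nll)
```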
{
"text": "To evaluate the proposed approach, we have referred to standard metrics defined for NLG tasks such as Automatic Translation and Summarization, as they allow to assess to what extent a generated sentence is similar to the gold sentence. We con-sider three N-gram metrics (BLEU, METEOR and ROUGE) and the BERT score metric which exploits the pre-trained embeddings of BERT to calculate the similarity between the answer generated and the gold answer. To be able to compare the different configurations of the approach, we refer to Friedman's test (Milton, 1939) which allows to detect the performance variation of different configurations of a model evaluated by several metrics based on the average ranks. We also conducted a human evaluation study for the French and the English versions of the dataset, in which we asked 20 native speakers participants to evaluate the relevance of a generated answer (correct or not correct) regarding a given question while indicating the type of errors depicted (grammar, wrong preposition, word order, extra word(s), etc). Figure 3 presents the evaluation framework that we have implemented and provided to the participants. The results of each participant are saved in a json-file (figure 4). The inter-agreement rate between participants reached 70% which indicates a substantial agreement. Through the human evaluation study, we wanted to explore to what extent the standard metrics are reliable to assess NLG approaches within the context of question-answering systems. Table 4 (French dataset) represents the obtained results for the first three best models according to the human evaluation ranking and the Friedman test ranking. We indicate between brackets each model's rank according to the metric used.",
"cite_spans": [
{
"start": 545,
"end": 559,
"text": "(Milton, 1939)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 1061,
"end": 1069,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1512,
"end": 1519,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
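For reference, a small scoring sketch with common open-source implementations of the metrics named above (sacrebleu for BLEU, rouge-score for ROUGE, bert-score for the BERT score; METEOR is available in nltk) and SciPy's Friedman test; these are tooling assumptions, not necessarily the packages the authors used.

```python
# Illustrative metric computation for one generated answer (assumed tooling).
from sacrebleu import sentence_bleu
from rouge_score import rouge_scorer
from bert_score import score as bert_score
from scipy.stats import friedmanchisquare

gold = "The Socialist Party is the political party of the mayor of Paris"
generated = "The Socialist Party is the party of the mayor of Paris"

bleu = sentence_bleu(generated, [gold]).score
rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(gold, generated)["rougeL"].fmeasure
_, _, f1 = bert_score([generated], [gold], lang="en")
print(bleu, rouge_l, f1.item())

# Friedman test over per-configuration scores, one sequence per metric
stat, p = friedmanchisquare([0.8, 0.6, 0.7], [0.9, 0.5, 0.7], [0.7, 0.6, 0.8])
```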
{
"text": "We note that the highest human accuracy score for French of about 85% was scored with the first architecture coupled with BERT as the generation model (GM) and CamemBERT as the language model (LM). We also notice that the architecture A1, which considers the LM assessment of the structure before generating missing words, performs better. Surprisingly, as a generative model, the multi- Table 4 : Model ranking for French dataset according to the human evaluation study (best in bold) and the Friedman test (best in yellow). \"BT\" in Column GM stands for BERT-base-multilingual-cased. In column LM we use \"CmBT\" for CamemBERT-base, \"BT-ml-c\" for BERT-base-multilingual-cased, \"XRob\" for XLM-RoBERTa-base, \"FBT-s-c\" for FlauBERT-small-cased, \"FBT-b-uc\" for FlauBERT-base-uncased and \"clm-1024\" for XLM-clmenfr-1024",
"cite_spans": [],
"ref_spans": [
{
"start": 388,
"end": 395,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "lingual BERT model predicts missing words better than CamemBERT for French sentences. These findings are also confirmed by the Friedman test where we can clearly see that the first ranked configuration maps the best configuration selected according to the human accuracy, with a very slight difference for the other four configurations. Let us see if that means that the four metrics are correlated with the human accuracy. According to table 6 which presents the Pearson correlation (Benesty et al., 2009) of the human accuracy with the four metrics and to figure 2 which illustrates the ranking given by each evaluation metric along with the human judgement for each configuration (i.e. configuration = GM \u00d7 architecture \u00d7 LM) tested, we can clearly see that the human evaluation results are positively and strongly correlated with the BLEU, the METEOR and the BERT scores. These metrics are practically matching the human ranking and thus are obviously able to identify which configuration gives better results. The rouge metric, used for French question/answer evaluation, is moderately correlated with the human evaluation which means that we should not only rely on this metric when assessing such task. On the other hand, when the ROUGE metric is considered with the other metrics, it helps to get closer to the human judgement. Table 5 presents the results for the English dataset and shows that the best accuracy scored is about 72% with A1, BERT as the generative model and the Generative Pretrained Transformer (GPT) as the language model. According to the first three configurations, architecture A2 prevails and the GPT transformer takes over the other lan-guage models. These results are also confirmed by the Friedman test with a very slight difference on the ranking and also upheld with the correlation scores between the human assessment and each of the four metrics as shown by figure 5 and table 6.",
"cite_spans": [
{
"start": 484,
"end": 506,
"text": "(Benesty et al., 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 1336,
"end": 1343,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "These findings mean that we actually can rely on the use of these standard metrics to evaluate the answer generation task for question-answering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "We also tried to analyse the errors indicated by the participants. As we can note from figure 6, the most common error reported for both English and French datasets is the word order which sheds the light on a problem related to the language model assessment phase. The second most reported error addresses the generation process, whether to indicate that there are one or more missing words within the answer (French) or the presence of some odd words (English).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "When trying to get an insight on the answers generated by the current intelligent systems such as Google assistant and Alexa, we noted that these systems are very accurate when extracting the correct answer to a question and can sometimes generate user-friendly answers that help recall the question context, specially with Alexa. However, we noticed that most of the answers generated by these systems are more verbose than necessary, we also found out that when addressing yes/no questions, these systems generally settle for just a yes or no without elaborating, or, on the other hand, present a text span extracted from a Web page and let the user guess the answer. Let us take for example the following question Was US president Jackson involved in a war? Table 5 : Model ranking for English dataset according to the human evaluation study (best in bold) and the Friedman ranking (best in yellow). In Column GM we use \"BT-ml\" for BERT-base-multilingual-cased and \"BT\" for BERTlarge-cased. In column LM \"GPT\" stands for for OpenAI-GPT, \"GPT2-l\" for GPT2-large, \"GPT2-m\" for GPT2medium, \"GPT2\" for GPT2, \"BT-b-uc\" for BERT-base-uncased, \"mlm-2048\" for XLM-mlm-en-2048 and \"BT-l-c\" for BERT-large-cased.",
"cite_spans": [],
"ref_spans": [
{
"start": 761,
"end": 768,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "[Figure 4: Excerpt of a participant's saved results file, e.g.: { \"ID\": \"quereo_5.4\", \"QUESTION\": \"Quelles sont les companies d'\u00e9lectronique fond\u00e9es a Beijing ?\" (Which electronics companies were founded in Beijing?), \"SHORT_ANSWER\": [\"Xiaomi\", \"Lenovo\"], \"GENERATED_ANSWER\": \"Les companies d'\u00e9lectronique fond\u00e9es \u00e0 beijing sont xiao xiaomi et lenovo\", \"MISSING_WORD\": \"Xiao\", \"EVALUATION\": \"correcte\" (correct), \"ERROR\": [\"aucun\"] (none) } and { \"ID\": \"quereo_8.8\", \"QUESTION\": \"Combien de films a r\u00e9alis\u00e9 Park Chan-wook ?\" (How many films did Park Chan-wook direct?), \"SHORT_ANSWER\": [\"quatorze\"] (fourteen), \"GENERATED_ANSWER\": \"Quatorze films a r\u00e9alis\u00e9 park chan-wook\", \"MISSING_WORD\": \".\", \"EVALUATION\": \"incorrecte\" (incorrect), \"ERROR\": [\"ordre\", \"accord\"] (word order, agreement) }]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "[ \"ordre\", \"accord\" ], \"COMMENT\": \"\" }, ... ] Here's something I found on the Web. According to constitutioncenter.org: After the War of 1812, Jackson led military forces against the Indians and was involved in treaties that led to the relocation of Indians.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "The user has to focus on the returned text fragment in order to guess that the answer to his question is actually yes. This issue was particularly noted when addressing French questions. If we also take the example How many grandchildren did Jacques Cousteau have ? the two systems answer as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "Fabien Cousteau, Alexandra Cousteau, Philippe Cousteau Jr., C\u00e9line Cousteau. Jacques Cousteau's grandchildren were Philippe Cousteau Jr., Alexandra ousteau, C\u00e9line Cousteau, and Fabien Cousteau However, the user is not asking about the names of Cousteau's grand-children and has to guess by himself that the answer for this question is four.",
"cite_spans": [
{
"start": 7,
"end": 193,
"text": "Cousteau, Alexandra Cousteau, Philippe Cousteau Jr., C\u00e9line Cousteau. Jacques Cousteau's grandchildren were Philippe Cousteau Jr., Alexandra ousteau, C\u00e9line Cousteau, and Fabien Cousteau",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "A more accurate answer should indicate the exact answer to the question and then elaborate Jacques Cousteau had four grand-children. But these systems perform better in case when the terms employed in the question are not necessarily relevant to the answer. If we take the example of the question who is the wife of Lance Bass, the approach that we propose will generate The wife of Lance Bass is Michael Turchin. As we can note the answer generated was not adapted to the actual answer, while the other systems are able to detect such nuance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "Lance Bass is married to Michael Turchin. They have been married since 2014.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "This issue has still to be addressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "4"
},
{
"text": "We have put forward, in this paper, an approach for Natural Language Generation within the framework of the question-answering task that considers dependency analysis and probability distribution of words sequences. This approach takes part of a question/answering system in order to help generate a user-friendly answer rather than a short one. The results obtained through a human evaluation and standard metrics tested over French and English questions are very promising and shows a good correlation with human judgement. However, we intend to put more emphasis on the Language Model choice as reported by the human study and consider the generation of more than one missing word within the answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and perspectives",
"sec_num": "5"
},
{
"text": "https://www.w3.org/TR/sparql11-overview/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As for the Shared Task CoNLL 2018, we use version 2.2 to be able to compare with the official results 5 https://universaldependencies.org/ 6 At least for French a question treebank exists within the UD project (French-FQB,Seddah and Candito (2016)). However its questions are rather long and literary, not like thoses used in quizzes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/ag-sc/QALD 8 https://universaldependencies.org/guidelines.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A2/Bert-mlg/gpt2-large A2/Bert/gpt2-large",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A2/Bert-mlg/gpt2-large A2/Bert/gpt2-large",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "/Bert-mlg/gpt2-medium A2/Bert/gpt2-medium A2/Bert/openai-gpt",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "/Bert-mlg/gpt2-medium A2/Bert/gpt2-medium A2/Bert/openai-gpt",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "/Bert-mlg/gpt2-medium A2/Bert/xlnet-large",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "/Bert-mlg/gpt2-medium A2/Bert/xlnet-large",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Snowball: Extracting relations from large plain-text collections",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gravano",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the fifth ACM conference on Digital libraries",
"volume": "",
"issue": "",
"pages": "85--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the fifth ACM conference on Digi- tal libraries, pages 85-94.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multilingual extractive reading comprehension by runtime machine translation",
"authors": [
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akari Asai, Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2018. Multilingual extractive reading comprehension by runtime machine transla- tion. https://arxiv.org/abs/1809.03275.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Yiteng Huang, and Israel Cohen",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Benesty",
"suffix": ""
},
{
"first": "Jingdong",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {
"DOI": [
"10.1007/978-3-642-00296-0_5"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Benesty, Jingdong Chen, Yiteng Huang, and Is- rael Cohen. 2009. Pearson Correlation Coefficient, pages 1-4. Springer Berlin Heidelberg, Berlin, Hei- delberg.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A hybrid question answering system for Multiple Choice Question (MCQ). In Question Answering for Machine Reading Evaluation (QA4MRE) at CLEF",
"authors": [
{
"first": "Pinaki",
"middle": [],
"last": "Bhaskar",
"suffix": ""
},
{
"first": "Somnath",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Pakray",
"suffix": ""
},
{
"first": "Samadrita",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pinaki Bhaskar, Somnath Banerjee, Partha Pakray, Samadrita Banerjee, Sivaji Bandyopadhyay, and Alexander Gelbukh. 2013. A hybrid question answering system for Multiple Choice Question (MCQ). In Question Answering for Machine Read- ing Evaluation (QA4MRE) at CLEF 2013.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An analysis of the AskMSR question-answering system",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
}
],
"year": 2002,
"venue": "EMNLP 2002",
"volume": "",
"issue": "",
"pages": "257--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill, Susan Dumais, and Michele Banko. 2002. An analysis of the AskMSR question-answering sys- tem. In EMNLP 2002, pages 257-264. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Data-intensive question answering",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2001,
"venue": "TREC 2001",
"volume": "",
"issue": "",
"pages": "393--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill, Jimmy Lin, Michele Banko, Susan Dumais, and Andrew Ng. 2001. Data-intensive question an- swering. In TREC 2001, pages 393-400.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Abstractive sentence summarization with attentive recurrent neural networks",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL 2016",
"volume": "",
"issue": "",
"pages": "93--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with at- tentive recurrent neural networks. In NAACL 2016, pages 93-98.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised Cross-lingual Representation Learning at Scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grace",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grace, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised Cross-lingual Representation Learning at Scale. https://arxiv.org/abs/1911.02116.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL, pages 4171-4186, Minneapo- lis. ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Stanford's Graph-based Neural Dependency Parser at the CoNLL 2017 Shared Task",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "CoNLL 2017 Shared Task. Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "20--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford's Graph-based Neural Dependency Parser at the CoNLL 2017 Shared Task. In CoNLL 2017 Shared Task. Multilingual Parsing from Raw Text to Universal Dependencies, pages 20-30, Van- couver, Canada. ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Harvesting paragraph-level question-answer pairs from Wikipedia",
"authors": [
{
"first": "Xinya",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL 2018",
"volume": "",
"issue": "",
"pages": "1907--1917",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1177"
]
},
"num": null,
"urls": [],
"raw_text": "Xinya Du and Claire Cardie. 2018. Harvest- ing paragraph-level question-answer pairs from Wikipedia. In ACL 2018, pages 1907-1917, Mel- bourne, Australia. ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Research and reviews in question answering system",
"authors": [
{
"first": "K",
"middle": [],
"last": "Sanjay",
"suffix": ""
},
{
"first": "Vaishali",
"middle": [],
"last": "Dwivedi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2013,
"venue": "Procedia Technology",
"volume": "10",
"issue": "",
"pages": "417--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjay K. Dwivedi and Vaishali Singh. 2013. Research and reviews in question answering system. Procedia Technology, 10:417-424.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic detection of causal relations for question answering",
"authors": [
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL 2003",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roxana Girju. 2003. Automatic detection of causal re- lations for question answering. In ACL 2003, pages 76-83. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Hybrid enhanced Universal Dependencies parsing",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Heinecke",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies",
"volume": "",
"issue": "",
"pages": "174--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Heinecke. 2020. Hybrid enhanced Universal Dependencies parsing. In International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependen- cies, pages 174-180, Online. ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Natural language question answering: the view from here",
"authors": [
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2001,
"venue": "natural language engineering",
"volume": "7",
"issue": "4",
"pages": "275--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynette Hirschman and Robert Gaizauskas. 2001. Nat- ural language question answering: the view from here. natural language engineering, 7(4):275-300.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploiting background knowledge in compact answer generation for why-questions",
"authors": [
{
"first": "Ryu",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "Canasai",
"middle": [],
"last": "Kruengkrai",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Ishida",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Jong-Hoon",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Kloetzer",
"suffix": ""
}
],
"year": 2019,
"venue": "AAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "142--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryu Iida, Canasai Kruengkrai, Ryo Ishida, Kentaro Torisawa, Jong-Hoon Oh, and Julien Kloetzer. 2019. Exploiting background knowledge in compact an- swer generation for why-questions. In AAI Con- ference on Artificial Intelligence, volume 33, pages 142-151.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semi-distantly supervised neural model for generating compact answers to open-domain why questions",
"authors": [
{
"first": "Ryo",
"middle": [],
"last": "Ishida",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Jong-Hoon",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Ryu",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "Canasai",
"middle": [],
"last": "Kruengkrai",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Kloetzer",
"suffix": ""
}
],
"year": 2018,
"venue": "32nd AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryo Ishida, Kentaro Torisawa, Jong-Hoon Oh, Ryu Iida, Canasai Kruengkrai, and Julien Kloetzer. 2018. Semi-distantly supervised neural model for generat- ing compact answers to open-domain why questions. In 32nd AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving event causality recognition with multiple background knowledge sources using multi-column convolutional neural networks",
"authors": [
{
"first": "Canasai",
"middle": [],
"last": "Kruengkrai",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Chikara",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Kloetzer",
"suffix": ""
},
{
"first": "Jong-Hoon",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Masahiro",
"middle": [],
"last": "Tanaka",
"suffix": ""
}
],
"year": 2017,
"venue": "31st AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Canasai Kruengkrai, Kentaro Torisawa, Chikara Hashimoto, Julien Kloetzer, Jong-Hoon Oh, and Masahiro Tanaka. 2017. Improving event causal- ity recognition with multiple background knowl- edge sources using multi-column convolutional neu- ral networks. In 31st AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Context and page analysis for improved web search",
"authors": [
{
"first": "Steve",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "C. Lee",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Internet computing",
"volume": "2",
"issue": "4",
"pages": "38--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steve Lawrence and C. Lee Giles. 1998. Context and page analysis for improved web search. IEEE Inter- net computing, 2(4):38-46.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Alexandre Allauzen, Beno\u00eet Crabb\u00e9, Laurent Besacier, and Didier Schwab. 2020. FlauBERT: Unsupervised Language Model Pre-training for French",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Vial",
"suffix": ""
},
{
"first": "Jibril",
"middle": [],
"last": "Frej",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Segonne",
"suffix": ""
},
{
"first": "Maximin",
"middle": [],
"last": "Coavoux",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Lecouteux",
"suffix": ""
}
],
"year": null,
"venue": "LREC 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Le, Lo\u00efc Vial, Jibril Frej, Vincent Segonne, Max- imin Coavoux, Benjamin Lecouteux, Alexandre Al- lauzen, Beno\u00eet Crabb\u00e9, Laurent Besacier, and Didier Schwab. 2020. FlauBERT: Unsupervised Language Model Pre-training for French. In LREC 2020.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Answer extraction based on merging score strategy of hot terms",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Chunxia",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhendong",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2016,
"venue": "Chinese Journal of Electronics",
"volume": "25",
"issue": "4",
"pages": "614--620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Le, Chunxia Zhang, and Zhendong Niu. 2016. Answer extraction based on merging score strat- egy of hot terms. Chinese Journal of Electronics, 25(4):614-620.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Jingfei Adn Joshi",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Mandar Du, Jingfei adn Joshi, Danqi Chen, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretrain- ing Approach. https://arxiv.org/abs/1907.11692.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Is question answering fit for the semantic web?: a survey",
"authors": [
{
"first": "Vanessa",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Victoria",
"middle": [],
"last": "Uren",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Sabou",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Motta",
"suffix": ""
}
],
"year": 2011,
"venue": "Semantic Web",
"volume": "2",
"issue": "",
"pages": "125--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vanessa Lopez, Victoria Uren, Marta Sabou, and En- rico Motta. 2011. Is question answering fit for the semantic web?: a survey. Semantic Web, 2(2):125- 155.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "\u00c9ric Villemonte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Yoann",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Villemonte de la Clergerie",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Ortiz Su\u00e1rez, Yoann Dupont, Laurent Romary,\u00c9ric Ville- monte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2019. CamemBERT: a Tasty French Lan- guage Model. https://arxiv.org/abs/1911.03894.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Language as a latent variable: Discrete generative models for sentence compression",
"authors": [
{
"first": "Yishu",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sen- tence compression. EMNLP 2016.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A correction: The use of ranks to avoid the assumption of normality implicit in the analysis of variance",
"authors": [
{
"first": "Friedman",
"middle": [],
"last": "Milton",
"suffix": ""
}
],
"year": 1939,
"venue": "Journal of the American Statistical Association",
"volume": "34",
"issue": "205",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Friedman Milton. 1939. A correction: The use of ranks to avoid the assumption of normality implicit in the analysis of variance. Journal of the American Statis- tical Association, 34(205):109.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cicero Dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Aglar Gul\u00e7ehre",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "280--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, \u00c7 aglar Gul\u00e7ehre, Bing Xiang, et al. 2016. Abstrac- tive text summarization using sequence-to-sequence RNNs and beyond. In CoNLL 2016, pages 280-290.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Universal Dependency Evaluation",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Chiao-Ting",
"middle": [],
"last": "Fang",
"suffix": ""
}
],
"year": 2017,
"venue": "NoDaLiDa 2017 Workshop on Universal Dependencies",
"volume": "",
"issue": "",
"pages": "86--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Chiao-Ting Fang. 2017. Univer- sal Dependency Evaluation. In NoDaLiDa 2017 Workshop on Universal Dependencies, pages 86-95, G\u00f6teborg.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Universal Dependencies v1: A Multilingual Treebank Collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Manning",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2016,
"venue": "10th LREC",
"volume": "",
"issue": "",
"pages": "23--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Yoav Goldberg, Jan Haji\u010d, Man- ning Christopher D., Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A Multilingual Treebank Collection. In 10th LREC, pages 23-38, Portoro\u017e, Slovenia. ELRA.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A semi-supervised learning approach to whyquestion answering",
"authors": [
{
"first": "Jong-Hoon",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Chikara",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Ryu",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "Masahiro",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Kloetzer",
"suffix": ""
}
],
"year": 2016,
"venue": "Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jong-Hoon Oh, Kentaro Torisawa, Chikara Hashimoto, Ryu Iida, Masahiro Tanaka, and Julien Kloetzer. 2016. A semi-supervised learning approach to why- question answering. In Thirtieth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Why-question answering using intra-and inter-sentential causal relations",
"authors": [
{
"first": "Jong-Hoon",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Chikara",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Motoki",
"middle": [],
"last": "Sano",
"suffix": ""
},
{
"first": "Kiyonori",
"middle": [],
"last": "Stijn De Saeger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ohtake",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL 2013",
"volume": "",
"issue": "",
"pages": "1733--1743",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jong-Hoon Oh, Kentaro Torisawa, Chikara Hashimoto, Motoki Sano, Stijn De Saeger, and Kiyonori Ohtake. 2013. Why-question answering using intra-and inter-sentential causal relations. In ACL 2013, pages 1733-1743.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Answering naturally: Factoid to full length answer generation",
"authors": [
{
"first": "Vaishali",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Irshad",
"middle": [],
"last": "Bhat",
"suffix": ""
}
],
"year": 2019,
"venue": "2nd Workshop on New Frontiers in Summarization",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5401"
]
},
"num": null,
"urls": [],
"raw_text": "Vaishali Pal, Manish Shrivastava, and Irshad Bhat. 2019. Answering naturally: Factoid to full length answer generation. In 2nd Workshop on New Fron- tiers in Summarization, pages 1-9, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Sal- imans, and Ilya Sutskever. 2018. Im- proving language understanding by gen- erative pre-training. https://cdn.openai.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Emmanuel Mory, and Fr\u00e9d\u00e9ric Herl\u00e9dan. 2019. Spoken Conversational Search for General Knowledge",
"authors": [
{
"first": "Lina",
"middle": [
"M Rojas"
],
"last": "Barahona",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Bellec",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Besset",
"suffix": ""
},
{
"first": "Martinho Dos",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Heinecke",
"suffix": ""
},
{
"first": "Munshi",
"middle": [],
"last": "Asadullah",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Leblouch",
"suffix": ""
},
{
"first": "Jean-Yves",
"middle": [],
"last": "Lancien",
"suffix": ""
},
{
"first": "G\u00e9raldine",
"middle": [],
"last": "Damnati",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Mory",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Herl\u00e9dan",
"suffix": ""
}
],
"year": null,
"venue": "SIGdial Meeting on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "110--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lina M. Rojas Barahona, Pascal Bellec, Beno\u00eet Bes- set, Martinho Dos Santos, Johannes Heinecke, Mun- shi Asadullah, Olivier Leblouch, Jean-Yves Lancien, G\u00e9raldine Damnati, Emmanuel Mory, and Fr\u00e9d\u00e9ric Herl\u00e9dan. 2019. Spoken Conversational Search for General Knowledge. In SIGdial Meeting on Dis- course and Dialogue, pages 110-113, Stockholm. ACL.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Alexander M Rush",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. https://arxiv.org/abs/1509. 00685.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Attentive pooling networks",
"authors": [
{
"first": "Santos",
"middle": [],
"last": "Cicero Dos",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. https:// arxiv.org/abs/1602.03609.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Hard Time Parsing Questions: Building a QuestionBank for French",
"authors": [
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Candito",
"suffix": ""
}
],
"year": 2016,
"venue": "10th LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Djam\u00e9 Seddah and Marie Candito. 2016. Hard Time Parsing Questions: Building a QuestionBank for French. In 10th LREC, Portoro\u017e, Slovenia. ELRA.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Get to the point: Summarization with pointer-generator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Man- ning. 2017. Get to the point: Summarization with pointer-generator networks. https://arxiv.org/abs/ 1704.04368.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Creating causal embeddings for question answering with minimal supervision",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Sharp",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Hammond",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Sharp, Mihai Surdeanu, Peter Jansen, Pe- ter Clark, and Michael Hammond. 2016. Creating causal embeddings for question answering with min- imal supervision. EMNLP 2016.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2018,
"venue": "CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "197--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milan Straka. 2018. UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task. In CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal De- pendencies, pages 197-207, Brussels. ACL.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Improved representation learning for question answer matching",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Cicero Dos Santos",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL 2016",
"volume": "",
"issue": "",
"pages": "464--473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2016. Improved representation learning for question answer matching. In ACL 2016, pages 464- 473.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Semantic Web Challenges",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "59--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Semantic Web Challenges, pages 59-69, Cham. Springer International Publishing.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Learning to rank for why-question answering",
"authors": [
{
"first": "Suzan",
"middle": [],
"last": "Verberne",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Hans Van Halteren",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Theijssen",
"suffix": ""
},
{
"first": "Lou",
"middle": [],
"last": "Raaijmakers",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Boves",
"suffix": ""
}
],
"year": 2011,
"venue": "Information Retrieval",
"volume": "14",
"issue": "2",
"pages": "107--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suzan Verberne, Hans van Halteren, Daphne Theijssen, Stephan Raaijmakers, and Lou Boves. 2011. Learn- ing to rank for why-question answering. Informa- tion Retrieval, 14(2):107-132.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Machine comprehension using match-lstm and answer pointer",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuohang Wang and Jing Jiang. 2016. Machine com- prehension using match-lstm and answer pointer. https://arxiv.org/abs/1608.07905.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "A joint model for question answering and question generation",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong Wang, Xingdi Yuan, and Adam Trischler. 2017. A joint model for question answering and question generation. https://arxiv.org/abs/1706.01450.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Question answering by pattern matching, web-proofing, semantic form proofing",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xiaoyu",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Michelle",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tomek",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
}
],
"year": 2003,
"venue": "TREC 2003",
"volume": "",
"issue": "",
"pages": "500--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Wu, Xiaoyu Zheng, Michelle Duan, Ting Liu, Tomek Strzalkowski, and S Albany. 2003. Ques- tion answering by pattern matching, web-proofing, semantic form proofing. In TREC 2003, pages 500- 255.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Concept relation extraction using na\u00efve bayes classifier for ontologybased question answering systems",
"authors": [
{
"first": "Godandapani",
"middle": [],
"last": "Zayaraz",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of King Saud University-Computer and Information Sciences",
"volume": "27",
"issue": "1",
"pages": "13--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Godandapani Zayaraz et al. 2015. Concept relation extraction using na\u00efve bayes classifier for ontology- based question answering systems. Journal of King Saud University-Computer and Information Sciences, 27(1):13-24.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Zeman, Jan Haji\u010d, Martin Popel, Martin Pot- thast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 Shared Task: Mul- tilingual Parsing from Raw Text to Universal Depen- dencies. In CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-21, Brussels. ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Figure 1: Experiment framework"
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Correlation assessment between human evaluation and the Bleu, Meteor, Rouge and Bert scores -French Q/A (\"CmBert\" stands for CamemBERT)Figure 3: Screenshot of the evaluation tool"
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Extract of a human evaluation result 20 Correlation assessment between human evaluation and the Bleu, Meteor, rouge and Bert scores -English Q/A"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Distribution of generation errors Andrew Jackson, who served as a major general in the War of 1812, commanded U.S. forces in a five-month campaign against the Creek Indians, allies of the British."
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Dependency Analysis for English and French (UD v2.2) using different contextual word embeddings, best results in bold"
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Dependency Analysis of questions using models trained on the standard UD treebanks"
},
"TABREF5": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Dependency analysis of questions using models trained on enriched UD treebanks"
},
"TABREF10": {
"content": "<table><tr><td>% of all errors</td><td>20 40 60</td><td/><td>English French</td><td/></tr><tr><td/><td>0</td><td>extra words</td><td>grammar missing words</td><td>prepo-sitions</td><td>order word</td></tr><tr><td/><td/><td/><td colspan=\"2\">error categories</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Pearson Correlation of the four metrics with the human evaluation/judgement"
}
}
}
}