{
"paper_id": "A92-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:03:28.377208Z"
},
"title": "A Practical Methodology for the Evaluation of Spoken Language Systems",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Boisen",
"suffix": "",
"affiliation": {},
"email": "sboisen@bbn.com"
},
{
"first": "Madeleine",
"middle": [],
"last": "Bates",
"suffix": "",
"affiliation": {},
"email": "bates@bbn.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "A92-1023",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "A meaningful evaluation methodology can advance the state-of-the-art by encouraging mature, practical applications rather than \"toy\" implementations. Evaluation is also crucial to assessing competing claims and identifying promising technical approaches. While work in speech recognition (SR) has a history of evaluation methodologies that permit comparison among various systems, until recently no methodology existed for either developers of natural language (NL) interfaces or researchers in speech understanding (SU) to evaluate and compare the systems they developed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently considerable progress has been made by a number of groups involved in the DARPA Spoken Language Systems (SLS) program to agree on a methodology for comparative evaluation of SLS systems, and that methodology has been put into practice several times in comparative tests of several SLS systems. These evaluations are probably the only NL evaluations other than the series of Message Understanding Conferences (Sundheim, 1989; Sundheim, 1991) to have been developed and used by a group of researchers at different sites, although several excellent workshops have been held to study some of these problems (Palmer et al., 1989; Neal et al., 1991) . This paper describes a practical \"black-box\" methodology for automatic evaluation of question-answering NL systems. While each new application domain will require some development of special resources, the heart of the methodology is domain-independent, and it can be used with either speech or text input. The particular characteristics of the approach are described in the following section: subsequent sections present its implementation in the DARPA SLS community, and some problems and directions for future development.",
"cite_spans": [
{
"start": 417,
"end": 433,
"text": "(Sundheim, 1989;",
"ref_id": null
},
{
"start": 434,
"end": 449,
"text": "Sundheim, 1991)",
"ref_id": "BIBREF16"
},
{
"start": 612,
"end": 633,
"text": "(Palmer et al., 1989;",
"ref_id": "BIBREF14"
},
{
"start": 634,
"end": 652,
"text": "Neal et al., 1991)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of this research has been to produce a well-defined, meaningful evaluation methodology which is *The work reported here was supported by the Advanced Research Projects Agency and was monitored by the Off'ice of Naval Research under Contract No. 00014-89-C-0008. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of the Methodology",
"sec_num": "2.1"
},
{
"text": "\u2022 automatic, to enable evaluation over large quantities of data based on an objective assessment of the understanding capabilities of a NL system (rather than its user interface, portability, speed, etc.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of the Methodology",
"sec_num": "2.1"
},
{
"text": "\u2022 capable of application to a wide variety of NL systems and approaches",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of the Methodology",
"sec_num": "2.1"
},
{
"text": "\u2022 suitable for blind testing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of the Methodology",
"sec_num": "2.1"
},
{
"text": "\u2022 as non-intrusive as possible on the system being evaluated (to decrease the costs of evaluation)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of the Methodology",
"sec_num": "2.1"
},
{
"text": "\u2022 domain independent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of the Methodology",
"sec_num": "2.1"
},
{
"text": "The systems are assumed to be front ends to an interactive database query system, implemented in a particular common domain. The methodology can be described as \"black box\" in thai there is no attempt to evaluate the internal representations (syntactic, semantic, etc.) of a system. Instead, only the content of an answer relrieved from the database is evaluated: if the answer is correct, it is assumed that the system understood the query correctly. Comparing answers has the practical advantage of being a simple way to give widely var. ied systems a common basis for comparison. Although some recent work has suggested promising approaches (Black e, al., 1991) , system-internal representations are hard to com. pare, or even impossible in some cases where System X hm no level of representation corresponding to System Y's. I is easy, however, to define a simple common language fo~ representing answers (see Appendix A), and easy to ma~ system-specific representations into this common language.",
"cite_spans": [
{
"start": 644,
"end": 664,
"text": "(Black e, al., 1991)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of the Methodology",
"sec_num": "2.1"
},
{
"text": "This methodology has been successfully applied in the context of cross-site blind tests, where the evaluation i: based on input which the system has never seen before This type of evaluation leaves out many other important as pects of a system, such as the user interface, or the utilit: (or speed) of performing a particular task with a system tha includes a NL component (work by Tennant (1981) , Bate: and Rettig (1988) , and Neal et al. (1991) addresses some o these other factors).",
"cite_spans": [
{
"start": 382,
"end": 396,
"text": "Tennant (1981)",
"ref_id": "BIBREF17"
},
{
"start": 399,
"end": 422,
"text": "Bate: and Rettig (1988)",
"ref_id": null
},
{
"start": 425,
"end": 447,
"text": "and Neal et al. (1991)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of the Methodology",
"sec_num": "2.1"
},
{
"text": "Examples below will be taken from the current DARPt SLS application, the Airline Travel Information Systen (ATIS). This is a database of flights with information o the aircraft, stops and connections, meals, etc. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of the Methodology",
"sec_num": "2.1"
},
{
"text": "We assume an evaluation architecture like that in Figure 1 . The shaded components are common resources of the evaluation, and are not part of the system(s) being evaluated.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 58,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation Architecture and Common Resources",
"sec_num": "2.2"
},
{
"text": "Specifically, it is assumed there is a common database which all systems use in producing answers, which defines both the data tuples (rows in tables) and the data types for elements of these tuples (string, integer, etc.). Queries relevant to the database are collected under conditions as realistic as possible (see 2.4). Answers to the corpus of queries must be provided, expressed in a common standard format (Common Answer Specification, or CAS): one such format is exemplified in Appendix A. Some portion of these pairs of queries and answers is then set aside as a test corpus, and the remainder is provided as training material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Architecture and Common Resources",
"sec_num": "2.2"
},
{
"text": "In practice, it has also proved useful to include in the training data the database query expression (for example, an SQL expression) which was used to produce the reference answer: this often makes it possible for system developers to understand what was expected for a query, even if the answer is empty or otherwise limited in content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Architecture and Common Resources",
"sec_num": "2.2"
},
{
"text": "While the pairing of queries with answers provides the training and test corpora, these must be augmented by common agreement as to how queries should be answered. In practice, agreeing on the meaning of queries has been one of the hardest tasks. The issues are often extremely subtle, and interact with the structure and content of the database in sometimes unexpected ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreeing on Meaning",
"sec_num": "2.2.1"
},
{
"text": "As an example of the problem, consider the following request to an airline information system: List the direct flights from Boston to Dallas that serve meals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreeing on Meaning",
"sec_num": "2.2.1"
},
{
"text": "It seems straightforward, but should this include flights that might stop in Chicago without making a connection there? Should it include flights that serve a snack, since a snack is not considered by some people to be a full meal?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreeing on Meaning",
"sec_num": "2.2.1"
},
{
"text": "Without some common agreement, many systems would produce very different answers for the same questions, all of them equally right according to each system's own definitions of the terms, but not amenable to automatic intersystem comparison. To implement this methodology for such a domain, therefore, it is necessary to stipulate the meaning of potentiMly ambiguous terms such as \"mid-day\", \"meals\" , \"the fare of a flight\". The current list of such \"principles of interpretation\" for the ATIS domain contains about 60 specifications, including things like:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreeing on Meaning",
"sec_num": "2.2.1"
},
{
"text": "\u2022 which tables and fields in the database identify the major entities in the domain (flights, aircraft, fares, etc.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreeing on Meaning",
"sec_num": "2.2.1"
},
{
"text": "\u2022 how to interpret fare expressions like \"one-way fare\", \"the cheapest fare\", \"excursion fare\", etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreeing on Meaning",
"sec_num": "2.2.1"
},
{
"text": "\u2022 which cities are to be considered \"near\" an airport.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreeing on Meaning",
"sec_num": "2.2.1"
},
{
"text": "Some other examples from the current principles of interpretation are given in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreeing on Meaning",
"sec_num": "2.2.1"
},
{
"text": "It is not enough to agree on meaning of queries in the chosen domain. It is also necessary to develop a common understanding of precisely what is to be produced as the answer, or part of the answer, to a question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Answers",
"sec_num": "2.2.2"
},
{
"text": "For example, if a user asks \"What is the departure time of the earliest flight from San Francisco to Atlanta?\", one system might reply with a single time and another might reply with that time plus additional columns containing the carrier and flight number, a third system might also include the arrival time and the origin and destination airports. None of these answers could be said to be wrong, although one might argue about the advantages and disadvantages of terseness and verbosity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Answers",
"sec_num": "2.2.2"
},
{
"text": "While it is technically possible to mandate exactly which columns from the database should be returned for expressions, this is not practical: it requires agreement on a much larger set of issues, and conflicts with the principle that evaluation should be as non-intrusive as possible. Furthermore, it is not strictly necessary: what matters most is not whether a system provided exactly the same data as some reference answer, but whether the correct answer is clearly among the data provided (as long as no incorrect data was returned).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Answers",
"sec_num": "2.2.2"
},
{
"text": "For the sake of automatic evaluation, then, a canonical reference answer (the minimum \"right answer\") is developed for each evaluable query in the training set. The content of this reference answer is determined both by domainindependent linguistic principles (Boisen et al., 1989) and domain-specific stipulation. The language used to express the answers for the ATIS domain is presented in Appendix A.",
"cite_spans": [
{
"start": 260,
"end": 281,
"text": "(Boisen et al., 1989)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Answers",
"sec_num": "2.2.2"
},
{
"text": "Evaluation using the minimal answer alone makes it possible to exploit the fact that extra fields in an answer axe not penalized. For example, the answer ((\"AA\" 152 0920 1015 \"BOS .... CHI\" \"SNACK\" ) ) could be produced for any of the following queries:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Answers",
"sec_num": "2.2.2"
},
{
"text": "\u2022 \"When does American Airlines flight 152 leave?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Answers",
"sec_num": "2.2.2"
},
{
"text": "\u2022 \"What's the earliest flight from Boston to Chicago?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Answers",
"sec_num": "2.2.2"
},
{
"text": "\u2022 \"Does the 9:20 flight to Chicago serve meals?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Answers",
"sec_num": "2.2.2"
},
{
"text": "and would be counted correct. For the ATIS evaluations, it was necessary to rectify this problem without overly constraining what systems can produce as an answer. The solution arrived at was to have two kinds of reference answers for each query: a minimum answer, which contains the absolute minimum amount of data that must be included in an answer for it to be correct, and a maximum answer (that can be automatically derived from the minimum) containing all the \"reasonable\" fields that might be included, but no completely irrelevant ones. For example, for a question asking about the arrival time of a flight, the minimum answer would contain the flight 1D and the arrival time. The maximum answer would contain the airline name and flight number, but not the meal service or any fare information. In order to be counted correct, the answer produced by a system must contain at least the data in the minimum answer, and no more than the data in the maximum answer; if additional fields are produced, the answer is counted as wrong. This successfully reduced the incentive for systems to overgenerate answers in hope of getting credit for answering queries that they did not really understand.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference Answers",
"sec_num": "2.2.2"
},
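To make the minimum/maximum answer convention concrete, here is a minimal Python sketch of the acceptance test it implies. The set-based representation and the field names are illustrative assumptions for this example; the actual comparator operates on CAS tables rather than named fields.

```python
# Illustrative sketch (not the NIST comparator) of the min/max acceptance
# test described above: an answer is correct if it contains at least the
# minimum fields and nothing outside the maximum fields.

def judge_answer(hyp_fields, min_fields, max_fields):
    hyp = set(hyp_fields)
    return set(min_fields) <= hyp <= set(max_fields)

# Hypothetical field names for an arrival-time question.
minimum = {"flight_id", "arrival_time"}
maximum = minimum | {"airline_name", "flight_number"}

print(judge_answer({"flight_id", "arrival_time", "airline_name"}, minimum, maximum))  # True
print(judge_answer({"flight_id", "arrival_time", "fare"}, minimum, maximum))          # False: irrelevant extra field
```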
{
"text": "Another common resource is software to compare the reference answers to those produced by various systems. 1 This task is complicated substantially by the fact that the reference answer is intentionally minimal, but the answer supplied by a system may contain extra information, and cannot be assumed to have the columns or rows in the same order as the reference answer. Some intelligence is therefore needed to determine when two answers match: simple identity tests won't work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Software",
"sec_num": "2.2.3"
},
{
"text": "In the general case, comparing the atomic values in an answer expression just means an identity test. The only exception is real numbers, for which an epsilon test is performed, to deal with round-off discrepancies arising from different hardware precision. 2 The number of significant digits that are required to be the same is a parameter of the comparator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Software",
"sec_num": "2.2.3"
},
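As an illustration of the atomic-value test just described, the sketch below applies a plain identity test to everything except real numbers, which are compared only to a configurable number of significant digits; the function name and the default of 4 significant digits are assumptions made for the example.

```python
# Minimal sketch of atomic-value comparison: identity for strings, integers
# and booleans; real numbers agree if they match to a given number of
# significant digits (a parameter, as in the comparator described above).

def atoms_equal(ref, hyp, sig_digits=4):
    if isinstance(ref, float) or isinstance(hyp, float):
        fmt = "{:." + str(sig_digits) + "g}"          # round to significant digits
        return fmt.format(float(ref)) == fmt.format(float(hyp))
    return ref == hyp                                 # plain identity test

print(atoms_equal(109.00001, 109.00002))  # True: equal to 4 significant digits
print(atoms_equal(0.4512, 0.4519))        # False
print(atoms_equal("BOS", "BOS"))          # True
```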
{
"text": "Answer comparison at the level of tables require more sophistication, since column order is ignored, and the answer may include additional columns that are not in the specification. Furthermore, those additional columns can mean that the answer will include extra whole tuples not present in the specification. For example, in the ATIS domain, if the Concorde and Airbus are both aircraft whose type is \"JET\", they would together contribute only one tuple (row) to the simple list of aircraft types below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Software",
"sec_num": "2.2.3"
},
{
"text": "((\"JET\") ( \"TURBOPROP\" ) ( \"HELICOPTER\" ) (\"AMPHIBIAN\") (\"PROPELLER\"))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Software",
"sec_num": "2.2.3"
},
{
"text": "On the other hand, if aircraft names were included in the table, they would each appear, producing a larger number of tuples overall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Software",
"sec_num": "2.2.3"
},
{
"text": "( (\"AEROSPATIALE CONCORDE\"~. \"JET\") (\"AIRBUS INDUSTRIE .... JET\") ( \"LOCKHEED L18 8 ELECTRA .... TURBOPROP\" ] .o.)",
"cite_spans": [
{
"start": 97,
"end": 109,
"text": "TURBOPROP\" ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Software",
"sec_num": "2.2.3"
},
{
"text": "With answers in the form of tables, the algorithm explores each possible mapping from the required columns found in the reference answer (henceforth REF) to the actual columns found in the answer being evaluated (HYP). (Naturally, there must be at least as many columns in HYP as in REF, or the answer is clearly wrong.) For each such mapping, it reduces HYP according to the mapping, eliminating any duplicate tuples in the reduced (Boisen et al., 19891 . It has since been re-implementex and modified by NIST for the ATIS evaluations.",
"cite_spans": [
{
"start": 433,
"end": 454,
"text": "(Boisen et al., 19891",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Software",
"sec_num": "2.2.3"
},
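The paragraph above outlines the table-matching step; the rough sketch below shows the idea of trying each mapping of REF columns onto HYP columns, projecting HYP onto those columns, dropping duplicate tuples, and testing set-equivalence. It is a simplified illustration (no scalar special case, no ambiguous REF alternatives), not the NIST comparator itself.

```python
# Sketch of REF/HYP table matching as summarized above. Rows are tuples;
# column order carries no meaning, and HYP may have extra columns.
from itertools import permutations

def tables_match(ref, hyp):
    if not ref:
        return not hyp
    n_ref = len(ref[0])
    n_hyp = len(hyp[0]) if hyp else 0
    if n_hyp < n_ref:
        return False                     # HYP needs at least as many columns as REF
    ref_set = set(ref)
    for cols in permutations(range(n_hyp), n_ref):
        projected = {tuple(row[c] for c in cols) for row in hyp}  # reduce + dedupe
        if projected == ref_set:         # set-equivalence test
            return True
    return False

ref = [("JET",), ("TURBOPROP",)]
hyp = [("AEROSPATIALE CONCORDE", "JET"),
       ("AIRBUS INDUSTRIE", "JET"),
       ("LOCKHEED L188 ELECTRA", "TURBOPROP")]
print(tables_match(ref, hyp))  # True: the extra column collapses to the REF types
```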
{
"text": "2For the ATIS evaluations, this identity test has been relaxed somewhat so that, e.g., strings need not have quotes around their if they do not contain \"white space\" characters. See Appendix t for further details. A special answer token (NO_ANSWER) was also agreed to, so that when a system can detect that it doesn't have enough information, it can report that fact rather than guessing. This is based on the assumption that failing to answer is less serious than answering incorrectly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Software",
"sec_num": "2.2.3"
},
{
"text": "Expressing results can be almost as complicated as obtaining them. Originally it was thought that a simple \"X percent correct\" measure would be sufficient, however it became clear that there was a significant difference between giving a wrong answer and giving no answer at all, so the results are now presented as: Number right, Number wrong, Number not answered, Weighted Error Percentage (weighted so that wrong answers are twice as bad as no answer at all), and Score (100 -weighted error).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Answers",
"sec_num": "2.3"
},
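A short sketch of this reporting convention, assuming the weighted error counts each wrong answer twice and each unanswered query once, normalized by the total number of queries; the function and key names are illustrative.

```python
# Hypothetical scoring helper for the convention described above:
# weighted error = 100 * (2 * wrong + unanswered) / total, score = 100 - error.

def score(num_right, num_wrong, num_unanswered):
    total = num_right + num_wrong + num_unanswered
    weighted_error = 100.0 * (2 * num_wrong + num_unanswered) / total
    return {"right": num_right, "wrong": num_wrong,
            "unanswered": num_unanswered,
            "weighted_error": weighted_error,
            "score": 100.0 - weighted_error}

print(score(num_right=100, num_wrong=30, num_unanswered=15))
# weighted_error = 100 * (60 + 15) / 145 = 51.7..., score = 48.2...
```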
{
"text": "Whenever numeric measures of understanding are presented, they should in principle be accompanied by some measure of the significance and reliability of the metric. Although precise significance tests for this methodology are not yet known, it is clear that \"'black box\" testing is not a perfect measure. In particular, it is impossible to tell whether a system got a correct answer for the \"right\" reason, rather than through chance: this is especially true when the space of possible answers is small (yes-no questions are an extreme answer). Since more precise measures are much more costly, however, the present methodology has been considered adequate for the current state of the art in NL evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Answers",
"sec_num": "2.3"
},
{
"text": "Given that current weighted error rates for the DARPA ATIS evaluations range from 55%--18%, we can roughly estimate the confidence interval to be approximately 8%. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Answers",
"sec_num": "2.3"
},
{
"text": "Another source of variation in the scoring metric is the fact that queries taken from different speakers can vary widely in terms of how easy it is for systems to understand and answer them correctly. For example, in the February 1991 ATIS evaluations, the performance of BBN's Delphi SLS on text input from individual speakers ranged from 75% to 10% correcL The word error from speech recognition was also the highest for those speakers with the highest NL error rates, suggesting that individual speaker differences can strongly impact the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Answers",
"sec_num": "2.3"
},
{
"text": "3Assuming there is some probability of error in each trial (query), the variance in this error rate can be estimated using the formula where e is the error rate expressed as a decimal (so 55% error = 0.55), and n is the size of the test set. Taking e = 0.45 (one of the better scores from the February 91 ATIS evaluation), and n --145, differences in scores greater than 0.08 (8%) have a 95% likelihood of being significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Answers",
"sec_num": "2.3"
},
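The footnote's figure can be reproduced in a few lines under the binomial assumption it states; the only additions here are the helper name and the 1.96 z-value for a 95% interval.

```python
# Variance of the observed error rate is e*(1-e)/n for n independent queries;
# a 95% interval is roughly 1.96 standard deviations.
import math

def ci_halfwidth(e, n, z=1.96):
    return z * math.sqrt(e * (1.0 - e) / n)

print(round(ci_halfwidth(0.45, 145), 3))  # ~0.081, i.e. about 8%
```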
{
"text": "The methodology presented above places no a priori restrictions on how the data itself should be collected. For the ATIS evaluations, several different methods of data collection, including a method called \"Wizard scenarios\", were used to collect raw data, both speech and transcribed text (Hemphill, 1990) . This resulted in the collection of a number of human-machine dialogues. One advantage of this approach is that it produced both the queries and draft answers at the same time. It also became clear that the language obtained is very strongly influenced by the particular task, the domain and database being used, the amount and form of data returned to the user, and the type of data collection methodology used. This is still an area of active research in the DARPA SLS community.",
"cite_spans": [
{
"start": 290,
"end": 306,
"text": "(Hemphill, 1990)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Data",
"sec_num": "2.4.1"
},
{
"text": "Typically, some of the data which is collected is not suitable as test data, because:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying Data",
"sec_num": "2.4.2"
},
{
"text": "\u2022 the queries fall outside the domain or the database query application",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying Data",
"sec_num": "2.4.2"
},
{
"text": "\u2022 the queries require capabilities beyond strict NL understanding (for example, very complex inferencing or the use of large amounts of knowledge outside the domain)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying Data",
"sec_num": "2.4.2"
},
{
"text": "\u2022 the queries are overly vague (\"Tell me about ...\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying Data",
"sec_num": "2.4.2"
},
{
"text": "It is also possible that phenomena may arise in test data which falls outside the agreement on meanings derived from the training data (the \"principles of interpretation\"). Such queries should be excluded from the test corpus, since it is not possible to make a meaningful comparison on answers unless there is prior agreement on precisely what the answer should be.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying Data",
"sec_num": "2.4.2"
},
{
"text": "The methodology of comparing paired queries and answers assumes the query itself contains all the information necessary for producing an answer. This is, of course, often not true in spontaneous goal-directed utterances, since one query may create a context for another, and the full context is required to answer (e.g., \"Show me the flights ... \", 'Which of THEM ...\"). Various means of extending this methodology for evaluating context-dependent queries have been proposed, and some of them have been implemented in the ATIS evaluations (Boisen et al. (1989) , Hirschman et al. (1990) , Bates and Ayuso (1991) , Pallett (1991) ).",
"cite_spans": [
{
"start": 539,
"end": 560,
"text": "(Boisen et al. (1989)",
"ref_id": "BIBREF5"
},
{
"start": 563,
"end": 586,
"text": "Hirschman et al. (1990)",
"ref_id": "BIBREF11"
},
{
"start": 589,
"end": 611,
"text": "Bates and Ayuso (1991)",
"ref_id": "BIBREF1"
},
{
"start": 614,
"end": 628,
"text": "Pallett (1991)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Context",
"sec_num": "2.4.3"
},
{
"text": "The goal of the DARPA Spoken Language Systems program is to further research and demonstrate the potential utility of speech understanding. Currently, at least five major sites (AT&T, BBN, CMU, MIT, and SRI) are developing complete SLS systems, and another site (Paramax) is integrating its NL component with other speech systems. Representatives from these and other organizations meet regularly to discuss program goals and to evaluate progress. This DARPA SLS community formed a committee on evaluation 4, chaired by David Pallett of the National Institute of Standards and Technology (NIST). The committee was to develop a methodology for data collection, training data dissemination, and testing for SLS systems under development. The first community-wide evaluation using the first version of this methodology took place in June, 1990, with subsequent evaluations in February 1991 and February 1992.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The DARPA SLS Evaluations",
"sec_num": "3"
},
{
"text": "The emphasis of the committee's work has been on automatic evaluation of queries to an air travel information system (ATIS). Air travel was chosen as an application that is easy for everyone to understand. The methodology presented here was originally developed in the context of the need for SLS evaluation, and has been extended in important ways by the community based on the practical experience of doing evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The DARPA SLS Evaluations",
"sec_num": "3"
},
{
"text": "As a result of the ATIS evaluations, a body of resources has now been compiled and is available through NIST. This includes the ATIS relational database, a corpus of paired queries and answers, protocols for data collection, software for automatic comparison of answers, the \"Principles of Interpretation\" specifying domain-specific meanings of queries, and the CAS format (Appendix A is the current version). Interested parties should contact David Pallet of NIST for more information. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The DARPA SLS Evaluations",
"sec_num": "3"
},
{
"text": "Several benefits come from the use of this methodology:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": null
},
{
"text": "\u2022 It forces advance agreement on the meaning of critical terms and on some information to be included in the answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": null
},
{
"text": "\u2022 It is objective, to the extent that a method for selecting testable queries can be defined, and to the extent that the agreements mentioned above can be reached.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": null
},
{
"text": "\u2022 It requires less human effort (primarily in the creating of canonical examples and answers) than non-automatic, more subjective evaluation. It is thus better suited to large test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": null
},
{
"text": "\u2022 It can be easily extended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": null
},
{
"text": "Most of the weaknesses of this methodology arise from the fact that the answers produced by a database query system are only an approximation of its understanding capabilities. As with any black-box approach, it may give undue credit to a system that gets the right answer for the wrong reason (i.e., without really understanding the query), although this should be mitigated by using larger and more varied test corpora. It does not distinguish between merely acceptable answers and very good answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": null
},
{
"text": "Another limitation of this approach is that it does not adequately measure the handling of some phenomena, such as extended dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": null
},
{
"text": "This approach to evaluation shares many characteristics with the methods used for the DARPA-sponsored Message Understanding Conferences (Sundheim, 1989; Sundheim, 1991) . In particular, both approaches are focused on external (black-box) evaluation of the understanding capabilities of systems using input/output pairs, and there are many similar problems in precisely specifying how NL systems are to satisfy the application task.",
"cite_spans": [
{
"start": 136,
"end": 152,
"text": "(Sundheim, 1989;",
"ref_id": null
},
{
"start": 153,
"end": 168,
"text": "Sundheim, 1991)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other Evaluation Methodologies",
"sec_num": "5"
},
{
"text": "Despite these similarities, this methodology probably comes closer to evaluating the actual understanding capabilities of NL systems. One reason is that the constraints on both input and output are more rigorous. For database query tasks, virtually every word must be correctly understood to produce a correct answer: by contrast, much of the MUC-3 texts is irrelevant to the application task. Since this methodology focuses on single queries (.perhaps with additional context), a smaller amount of language is being examined in each individual comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Evaluation Methodologies",
"sec_num": "5"
},
{
"text": "Similarly, for database query, the database itself implicitly constrains the space of possible answers, and each answer is scored as either correct or incorrect. This differs from the MUC evaluations, where an answer template is a composite of many bits of information, and is scored along the dimensions of recall, precision, and overgeneration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Evaluation Methodologies",
"sec_num": "5"
},
{
"text": "Rome Laboratory has also sponsored a recent effort to define another approach to evaluating NL systems (Neal et al., 1991; Walter, 1992) . This methodology is focussed on human evaluation of interactive systems, and is a \"glassbox\" method which looks at the performance of the linguistic components of the system under review.",
"cite_spans": [
{
"start": 103,
"end": 122,
"text": "(Neal et al., 1991;",
"ref_id": "BIBREF12"
},
{
"start": 123,
"end": 136,
"text": "Walter, 1992)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Other Evaluation Methodologies",
"sec_num": "5"
},
{
"text": "The hottest topic currently facing the SLS community with respect to evaluation is what to do about dialogues. Many of the natural tasks one might do with a database interface involve extended problem-solving dialogues, but no methodology exists for evaluating the capabilities of systems attempting to engage in dialogues with users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Issues",
"sec_num": "6"
},
{
"text": "A Common Answer Specification (CAS) for the ATIS Application (Note: this is the official CAS specification for the DARPA ATIS evaluations, as distributed by NIST. It is domain independent, but not necessarily complete: for example, it assumes that the units of any database value are unambiguously determined by the database specification. This would not be sufficient for applications that allowed unit conversion, e.g. \"Show me the weight of ...\" where the weight could be expressed in tons, metric tons, pounds, etc. This sort of extension should not affect the ease of automatically comparing answer expressions, however.) Standard BNF notation has been extended to include two other common devices : \"A+\" means \"one or more A's\" and \"m*\" means \"zero or more A's\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Issues",
"sec_num": "6"
},
{
"text": "The formulation given above does not define char_except_whitespace and char. All of the standard ASCII characters count as members of char, and all but \"white space\" are counted as char_except_whitespace. Following ANSI \"C\", blanks, horizontal and vertical tabs, newlines, formfeeds, and comments are, collectively, \"white space\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Issues",
"sec_num": "6"
},
{
"text": "The only change in the syntax of CAS itself from the previous version is that now a string may be represented as either a sequence of characters not containing white space or as a sequence of any characters enclosed in quotation marks. Note that only non-exponential real numbers are allowed, and that empty tuples are not allowed (but empty relations are).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Issues",
"sec_num": "6"
},
{
"text": "The syntactic classes boolean-value, string, and numbervalue define the types \"boolean\", \"string\", and \"'number\", respectively. All the tuples in a relation must have the same number of values, and those values must be of the same respective types (boolean, string, or number).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Syntactic Constraints",
"sec_num": null
},
{
"text": "If a token could represent either a string or a number, it will be taken to be a number; if it could represent either a string or a boolean, it will be taken to be a boolean. Interpretation as a string may be forced by enclosing a token in quotation marks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Syntactic Constraints",
"sec_num": null
},
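The precedence rule just stated (number over string, boolean over string, quotation marks forcing the string reading) can be illustrated with a small token reader; this is a sketch of the rule, not the official CAS parser, and the function name is an assumption.

```python
# Read a single CAS token under the stated precedence: quoted -> string,
# else number if possible, else boolean if possible, else bare string.

def read_token(tok):
    if len(tok) >= 2 and tok.startswith('"') and tok.endswith('"'):
        return tok[1:-1]                 # quotes force interpretation as a string
    try:
        return int(tok)
    except ValueError:
        pass
    try:
        return float(tok)
    except ValueError:
        pass
    if tok.upper() in ("TRUE", "YES"):
        return True
    if tok.upper() in ("FALSE", "NO"):
        return False
    return tok                           # plain, unquoted string

print(read_token("0920"))    # 920 (taken as a number)
print(read_token('"0920"'))  # '0920' (string forced by quotes)
print(read_token("yes"))     # True (taken as a boolean)
```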
{
"text": "In a tuple, NIL as the representation of missing data is allowed as a special case for any value, so a legal answer indicating the costs of ground transportation in Boston would be ({\"L\" 5.00) (\"R\" nil) (\"A\" nil) (\"R\" nil))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Syntactic Constraints",
"sec_num": null
},
{
"text": "String comparison is case-sensitive, but the distinguished values (YES, NO, TRUE, FALSE, NO~ANSWEP~ and NIL) may be written in either upper or lower case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Elementary Rules for CAS Comparisons",
"sec_num": null
},
{
"text": "Each indexical position for a value in a tuple (say, the ith) is assumed to represent the same field or variable in all the tuples in a given relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Elementary Rules for CAS Comparisons",
"sec_num": null
},
{
"text": "Answer relations must be derived from the existing relations in the database, either by subsetting and combining relations or by operations like averaging, summation, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Elementary Rules for CAS Comparisons",
"sec_num": null
},
{
"text": "In In comparing two real number values, a tolerance will be allowed; the default is -t-.01%. No tolerance is allowed in the comparison of integers. In comparing two strings, initial and final sub-strings of white space are ignored. In comparing boolean values, TRUE and YES are equivalent, as are FALSE and NO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Elementary Rules for CAS Comparisons",
"sec_num": null
},
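The string and boolean rules above can be summarized in a short sketch: strings compare case-sensitively after trimming surrounding white space, and TRUE/YES and FALSE/NO are interchangeable; real numbers would use a tolerance test as sketched earlier. The function name and representation are assumptions for illustration.

```python
# Compare two CAS string tokens under the rules listed above.
BOOL = {"TRUE": True, "YES": True, "FALSE": False, "NO": False}

def cas_strings_equal(ref, hyp):
    r, h = ref.strip(), hyp.strip()          # surrounding white space ignored
    if r.upper() in BOOL and h.upper() in BOOL:
        return BOOL[r.upper()] == BOOL[h.upper()]   # TRUE==YES, FALSE==NO
    return r == h                                   # otherwise case-sensitive

print(cas_strings_equal("  BOS ", "BOS"))     # True
print(cas_strings_equal("YES", "true"))       # True
print(cas_strings_equal("Boston", "BOSTON"))  # False
```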
{
"text": "Interpretation Document for the ATIS Application (Note: these are excerpted from the official Principles of Interpretation document dated 11/20/91. The entire document is comprised of about 60 different points, and is available from David Pallet at NIST. The term \"annotator\" below refers to a human preparing training or test data by reviewing reference answers to queries.) INTERPETING ATIS QUERIES RE THE DATABASE 1 General Principles:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Some Examples from the Principles of",
"sec_num": null
},
{
"text": "1.1 Only reasonable interpretations will be used. An annotator or judge must decide if a linguistically possible interpretation is reasonable or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Some Examples from the Principles of",
"sec_num": null
},
{
"text": "1.2 The context will be used in deciding if an interpretation is reasonable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Some Examples from the Principles of",
"sec_num": null
},
{
"text": "...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Some Examples from the Principles of",
"sec_num": null
},
{
"text": "At present (11/18/91) a few specified exceptions to this principle are allowed, such as allowing boolean answers for yes/no questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Each interpretation must be expressible as one SQL statement.",
"sec_num": "1.3"
},
{
"text": "1.4 All interpretations meeting the above rules will be used by the annotators to generate possible reference answers. A query is thus ambiguous iff it has two interpretations that are fairly represented by distinct SQL expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Each interpretation must be expressible as one SQL statement.",
"sec_num": "1.3"
},
{
"text": "The reference SQL expression stands as a semantic representation or logical form. If a query has two interpretations that result in the same SQL, it will not be considered ambiguous. The fact that the two distinct SQL expressions may yield the same answer given the database is immaterial. The annotators must be aware of the usual sources of ambiguity, such as structural ambiguity, exemplified by cases like \"the prices of flights, first class, from X to Y\", in which the attachment of a modifier that can apply to either prices or flights is unclear. (This should be (ambiguously) interpreted both ways, as both \"the first-class prices on flights from X to Y\" and \"the prices on first-class flights from X to Y\".) More generally, if structural ambiguities like this could result in different (SQL) interpretations, they must be treated as ambiguous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Each interpretation must be expressible as one SQL statement.",
"sec_num": "1.3"
},
{
"text": "2 Specific Principles: In this arena, certain English expressions have special meanings, particularly in terms of the database distributed by TI in the spring of 1990 and revised in November 1990 and May 1991. Here are the ones we have agreed on: (In the following, \"A.B\" refers to field B of table A.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Each interpretation must be expressible as one SQL statement.",
"sec_num": "1.3"
},
{
"text": "A large class of tables in the database have entries that can be taken as defining things that can be asked for in a query. In the answer, each of these things will be identified by giving a value of the primary key of its table. These tables are: A \"one-way\" fare is a fare for which round_trip_required = \"NO\". A \"round-trip\" fare is a fare with a non-null value for fare.round_trip_cost. The \"cheapest fare\" means the lowest onedirection fare.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requests for enumeration.",
"sec_num": "2.1"
},
{
"text": "...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requests for enumeration.",
"sec_num": "2.1"
},
{
"text": "Questions about fares will always be treated as fares for flights in the maximal answer. The normal answer to otherwise unmodified \"when\" queries will be a time of day, not a date or a duration. The answer to queries like \"On what days does flight X fly\" will be a list of days.day.name fields.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requests for enumeration.",
"sec_num": "2.1"
},
{
"text": "Queries that refer to a time earlier than 1300 hours without specifying \"a.m.\" or \"p.m.\" are ambiguous and may be interpreted as either.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requests for enumeration.",
"sec_num": "2.1"
},
{
"text": "Periods of the day. The following table gives precise interpretations for some vague terms referring to time periods. The time intervals given do not include the end points. Items flagged with \"*\" are in the current (rdb3.3) database interval 2.9.1 With the particular exceptions noted below, requests for the \"meaning\" of something will only be interpretable if that thing is a code with a canned decoding definition in the database. In case the code field is not the key field of the table, informarion should be returned for all tuples that match on the code field. Here are the things so defined, with the fields containing their decoding: 2.11 Queries that are literally yes-or-no questions are considereal to be ambiguous between interpretation as a yes-or-no question and interpretation as the corresponding wh-question. For example, \"Are there flights from Boston to Philly?\" may be answered by either a boolean value (\"YES/TRUE/NO/FALSE\") or a table of flights from Boston to Philadelphia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requests for enumeration.",
"sec_num": "2.1"
},
{
"text": "2.15 When a query refers to an aircraft type such as \"BOE-ING 767\", the manufacturer (if one is given) must match the aircraft.manufacturer field and the type may be matched against either the aircraft.code field or the aircraft.basic_type field, ambiguously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requests for enumeration.",
"sec_num": "2.1"
},
{
"text": "2.16 Utterances whose answers require arithmetic computation are not now considered to be interpretable; this does not apply to arithmetic comparisons, including computing the maximum or minimum value of a field, or counting elements of a set of tuples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requests for enumeration.",
"sec_num": "2.1"
},
{
"text": "2.4 Times ....",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requests for enumeration.",
"sec_num": "2.1"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Methodology for Evaluating Near-Prototype NL Processors",
"authors": [
{
"first": "B",
"middle": [],
"last": "Ballard",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Ballard. A Methodology for Evaluating Near-Prototype NL Processors. Technical Report OSU--CISRC-TR-81-4, Ohio State University, 1981.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A proposal for incremental dialogue evaluation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ayuso",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Bates and D. Ayuso. A proposal for incremental dia- logue evaluation. In Proceedings of the Speech and Natural Language Workshop, San Mateo, California, February 1991.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "How to choose NL software. A/ Expert",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rettig",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Bates and M. Rettig. How to choose NL software. A/ Expert, July 1988.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A procedure for quantitatively comparing the syntactic coverage of English grammars",
"authors": [
{
"first": "E",
"middle": [],
"last": "Mack",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickenger",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Gdaniec",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hindle",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ingria",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Klavens",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Mack, S. Abney, D. Flickenger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, B. Ingria, F. Jelinek, J. Klavens, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. A procedure for quantitatively comparing the syntactic coverage of English grammars. In Proceedings of the Speech and Natural Language Workshop, San Ma- teo, California, February 1991. DARPA, Morgan Kaufmann Publishers, Inc.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A proposal for SLS evaluation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Boisen",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ayuso",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bates",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Boisen, L. Ramshaw, D. Ayuso, and M. Bates. A proposal for SLS evaluation. In Proceedings of the Speech and Nat- ural Language Workshop, San Marco, California, October 1989. DARPA, Morgan Kaufmann Publishers, Inc.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "DARPA. Proceedings of the Speech and Natural Language Workshop",
"authors": [],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DARPA. Proceedings of the Speech and Natural Language Workshop, San Mateo, California, June 1990. Morgan Kauf- mann Publishers, Inc.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "DARPA. Proceedings of the Speech and Natural Language Workshop",
"authors": [],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DARPA. Proceedings of the Speech and Natural Language Workshop, San Mateo, California, February 1991. Morgan Kaufmann Publishers, Inc.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "DARPA. Proceedings of the Third Message Understanding Conference (MUC-3)",
"authors": [],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DARPA. Proceedings of the Third Message Understand- ing Conference (MUC-3), San Marco, California, May 1991.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "DARPA. Proceedings of the Speech and Natural Language Workshop",
"authors": [],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DARPA. Proceedings of the Speech and Natural Language Workshop, San Mateo, California, February 1992. Morgan Kaufmann Publishers, Inc.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "TI implementation of corpus collection",
"authors": [
{
"first": "C",
"middle": [],
"last": "Hemphiu",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. HemphiU. TI implementation of corpus collection. In Proceedings of the Speech and Natural Language Workshop, San Marco, California, June 1990. DARPA, Morgan Kauf- mann Publishers, Inc.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A proposal for automatic evaluation of discourse",
"authors": [
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Dahl",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mckay",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Norton",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Linebarger",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Hirschman, D. Dahl, D. McKay, L. Norton, and M. Linebarger. A proposal for automatic evaluation of dis- course. In Proceedings of the Speech and Natural Language Workshop, San Marco, California, June 1990. DARPA, Mor- gan Kaufmann Publishers, Inc.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Workshop on the Evaluation of Natural Language Processing Systems",
"authors": [
{
"first": "J",
"middle": [],
"last": "Neal",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Montgomery",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Walter",
"suffix": ""
}
],
"year": 1991,
"venue": "RADC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Neal, T. Finin, R. Grishman, C. Montgomery, and S. Wal- ter. Workshop on the Evaluation of Natural Language Pro- cessing Systems. Technical Report (to appear), RADC, June 1991.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "DARPA Resource Management and ATIS benchmark test poster session",
"authors": [
{
"first": "D",
"middle": [
"S"
],
"last": "Pallett",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. S. Pallett. DARPA Resource Management and ATIS benchmark test poster session. In Proceedings of the Speech and Natural Language Workshop, San Mateo, California, February 1991. DARPA, Morgan Kaufmann Publishers, Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Workshop on the Evaluation of Natural Language Processing Systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Walter",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Palmer, T. Finin, and S. Walter. Workshop on the Eval- uation of Natural Language Processing Systems. Technical Report RADC-TR-89-302, RADC, 1989.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Plans for a task-oriented evaluation of natural language understanding systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sundheirn",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "197--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Sundheirn. Plans for a task-oriented evaluation of natural language understanding systems. In Proceedings of the Speech and Natural Language Workshop, pages 197-202, Philadelphia, PA, Februrary 1989.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Overview of the Third Message Understanding Evaluation and Conference",
"authors": [
{
"first": "B",
"middle": [
"M"
],
"last": "Sundheim",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Third Message Understanding Conference (MUC-3)",
"volume": "",
"issue": "",
"pages": "3--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. M. Sundheim. Overview of the Third Message Under- standing Evaluation and Conference. In Proceedings of the Third Message Understanding Conference (MUC-3), pages 3-16, San Marco, California, May 1991. DARPA, Morgan Kaufmann Publishers, Inc.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neal-Montgomery NLP system evaluation methodology",
"authors": [
{
"first": "H",
"middle": [],
"last": "Tennant",
"suffix": ""
},
{
"first": ";",
"middle": [
"S"
],
"last": "Walter",
"suffix": ""
}
],
"year": 1981,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Tennant. Evaluation of Natural Language Processors. PhD thesis, University of Illinois, 1981. S. Walter. Neal-Montgomery NLP system evaluation methodology. In Proceedings of the Speech and Natural Language Workshop, San Mateo, California, February 1992.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Issues and Red Herrings in Evaluating Natural Language Interfaces",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. M. Weischedel. Issues and Red Herrings in Evaluating Natural Language Interfaces. Pergamnon Press, 1986.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "!!iiiiiiiii!#:iiii:#~ii~ii~ii~!~ii~i~iiiii!!iiiiiiii Data ]Answer The evaluation process",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "4The primary members of the original committee are: Lyn Bates (BBN), Debbie Dahl (UNISYS), Bill Fisher (NIST), Lynette Hirschman (M1T), Bob Moore (SRI), and Rich Stern (CMU). Successor committees have also included Jared Bernstein (SRI), Kate Hunike-Smith (SRI), Patti Price (SRI), Alex Rudnicky (CMU), and Jay Wilpon (AT&T). Many other people have contributed to the work of these committees and their subcommittees.5David Pallet may be contacted at the National Institute of Standards and Technology, Technology Building, Room A216, Gaithersburg, MD 20899, (301)975-2944.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "matching an hypothesized (HYP) CAS form with a reference (REF) one, the order of values in the tuples is not important; nor is the order of tuples in a relation, nor the order of alternatives in a CAS form using OR. The scoring algorithm will use the re-ordering that maximizes the indicated score. Extra values in a tuple are not counted as errors, but distinct extra tuples in a relation are. A tuple is not distinct if its values for the fields specified by the REF CAS are the same as another tuple in the relation; these duplicate tuples are ignored. CAS forms that include alternate CAS's connected with OR are intended to allow a single HYP form to match any one of several REF CAS forms. If the HYP CAS form contains alternates, the score is undefined.",
"num": null
},
"TABREF0": {
"text": "table, and then compares REF against that reduced table, testing set-equivalence between the two. Special provision is made for single element answers, sc that a scalar REF and a HYP which is a table containing a single element are judged to be equivalent That is, a scalar REF will match either a scalar or a single elemenl 1The first implementation of this software was by Lance Ramshaw",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF1": {
"text": "HYP, and a REF which is a single element table specification will also match either kind of answer. For the ATIS evaluations, two extensions were made to this approach. A REF may be ambiguous, containing several sub expressions each of which is itself a REF: in this case, if HYP matches any of the answers in REF, the comparison succeeds.",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF2": {
"text": "",
"html": null,
"content": "<table><tr><td>Basic Syntax in BNF</td><td/><td/><td/></tr><tr><td>answer</td><td colspan=\"3\">, casl [ ( casl OR answer )</td></tr><tr><td>casl</td><td colspan=\"3\">, scalar-value [ relation ] NO.ANSWER</td></tr><tr><td/><td colspan=\"2\">I no_answer</td><td/></tr><tr><td>scalar-value</td><td colspan=\"2\">, boolean-value</td><td>I number-value</td><td>[</td></tr><tr><td/><td colspan=\"2\">string</td><td/></tr><tr><td>boolean-value</td><td/><td/><td/></tr><tr><td>number-value</td><td colspan=\"3\">, integer ] real-number</td></tr><tr><td>integer</td><td colspan=\"2\">, [sign] digit+</td><td/></tr><tr><td>sign</td><td>, +</td><td>-</td><td/></tr><tr><td>digit</td><td>,0</td><td colspan=\"3\">1 [ 2 [ 3 { 4 [ 5 { 6 I 7 I</td></tr><tr><td/><td colspan=\"2\">8 9</td><td/></tr><tr><td>real-number</td><td colspan=\"3\">, sign digit+, digit* [ digit+, digit*</td></tr><tr><td>string</td><td colspan=\"4\">, char_except_whitespace+ I \" char* \"</td></tr><tr><td>relation</td><td colspan=\"2\">, ( tuple* )</td><td/></tr><tr><td>tuple</td><td colspan=\"2\">~ ( value+ )</td><td/></tr><tr><td>value</td><td colspan=\"3\">, scalar-value [NIL</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF3": {
"text": "A request for a flight's stops will be interpreted as asking for the intermediate stops only, from the flight_stop table.",
"html": null,
"content": "<table><tr><td/><td>English Term(s)</td><td>Primary Key</td></tr><tr><td>aircraft</td><td>aircraft, equipment</td><td>aircraft _code</td></tr><tr><td>airline</td><td>airline</td><td>airline_code</td></tr><tr><td>airport</td><td>airport</td><td>airport_code</td></tr><tr><td>flight_stop</td><td>(intermed.) stops</td><td>flight_id, stop_number</td></tr><tr><td/><td/><td>high_flight_number</td></tr><tr><td>2.2 Flights.</td><td/><td/></tr><tr><td colspan=\"3\">2.2.1 A flight \"between X and Y\" means a flight \"from</td></tr><tr><td>X toY\".</td><td/><td/></tr><tr><td>\u00b0o.</td><td/><td/></tr><tr><td>2.2.3 .o.</td><td/><td/></tr><tr><td>2.3 Fares.</td><td/><td/></tr><tr><td>2.3.1</td><td/><td/></tr><tr><td>2.3.2</td><td/><td/></tr><tr><td>2.3.3</td><td/><td/></tr><tr><td>2.3.8</td><td/><td/></tr></table>",
"type_str": "table",
"num": null
},
"TABREF6": {
"text": "",
"html": null,
"content": "<table><tr><td/><td>Field</td><td>Decoding Field</td></tr><tr><td>aircraft</td><td>aircraft_code</td><td>aircraft_description</td></tr><tr><td>airline</td><td>airline_code</td><td>airline_name</td></tr><tr><td>airport</td><td>airport_code</td><td>airlx~_name</td></tr><tr><td>city</td><td>city_code</td><td>city_name</td></tr><tr><td>class_of_service</td><td colspan=\"2\">booking_class class_description</td></tr><tr><td colspan=\"2\">code_description code</td><td>description</td></tr><tr><td>.,.</td><td/><td/></tr></table>",
"type_str": "table",
"num": null
}
}
}
}