{ "paper_id": "A00-1045", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:12:13.220809Z" }, "title": "Improving Testsuites via Instrumentation", "authors": [ { "first": "Norbert", "middle": [ "Brsker" ], "last": "Eschenweg", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper explores the usefulness of a technique from software engineering, namely code instrumentation, for the development of large-scale natural language grammars. Information about the usage of grammar rules in test sentences is used to detect untested rules, redundant test sentences, and likely causes of overgeneration. Results show that less than half of a large-coverage grammar for German is actually tested by two large testsuites, and that i0-30% of testing time is redundant. The methodology applied can be seen as a re-use of grammar writing knowledge for testsuite compilation.", "pdf_parse": { "paper_id": "A00-1045", "_pdf_hash": "", "abstract": [ { "text": "This paper explores the usefulness of a technique from software engineering, namely code instrumentation, for the development of large-scale natural language grammars. Information about the usage of grammar rules in test sentences is used to detect untested rules, redundant test sentences, and likely causes of overgeneration. Results show that less than half of a large-coverage grammar for German is actually tested by two large testsuites, and that i0-30% of testing time is redundant. The methodology applied can be seen as a re-use of grammar writing knowledge for testsuite compilation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Computational Linguistics (CL) has moved towards the marketplace: One finds programs employing CLtechniques in every software shop: Speech Recognition, Grammar and Style Checking, and even Machine Translation are available as products. While this demonstrates the applicability of the research done, it also calls for a rigorous development methodology of such CL application products.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper,lI describe the adaptation of a technique from Software Engineering, namely code instrumentation, to grammar development. Instrumentation is based on the simple idea of marking any piece of code used in processing, and evaluating this usage information afterwards. The application I present here is the evaluation and improvement of grammar and testsuites; other applications are possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Both software and grammar development are similar processes: They result in a system transforming some input into some output, based on a functional specification (e.g., cf. (Ciravegna et al., 1998) for the application of a particular software design methodology to linguistic engineering). Although Grammar", "cite_spans": [ { "start": 174, "end": 198, "text": "(Ciravegna et al., 1998)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Software Engineering vs. Grammar Engineering", "sec_num": "1.1" }, { "text": "Engineering usually is not based on concrete specifications, research from linguistics provides an informal specification. 
Software Engineering developed many methods to assess the quality of a program, ranging from static analysis of the program code to dynamic testing of the program's behavior. Here, we adapt dynamic testing, which means running the implemented program against a set of test cases. The test cases are designed to maximize the probability of detecting errors in the program, i.e., incorrect conditions, incompatible assumptions on subsequent branches, etc. (for overviews, cf. (Hetzel, 1988; Liggesmeyer, 1990) ).", "cite_spans": [ { "start": 597, "end": 611, "text": "(Hetzel, 1988;", "ref_id": "BIBREF6" }, { "start": 612, "end": 630, "text": "Liggesmeyer, 1990)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Software Engineering vs. Grammar Engineering", "sec_num": "1.1" }, { "text": "How can we fruitfully apply the idea of measuring the coverage of a set of test cases to grammar development? I argue that by exploring the relation between grammar and testsuite, one can improve both of them. Even the traditional usage of testsuites to indicate grammar gaps or overgeneration can profit from a precise indication of the grammar rules used to parse the sentences (cf. Sec.4). Conversely, one may use the grammar to improve the testsuite, both in terms of its coverage (cf. Sec.3.1) and its economy (cf. Sec.3.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Instrumentation in Grammar Engineering", "sec_num": "1.2" }, { "text": "Viewed this way, testsuite writing can benefit from grammar development because both describe the syntactic constructions of a natural language. Testsuites systematically list these constructions, while grammars give generative procedures to construct them. Since there are currently many more grammars than testsuites, we may re-use the work that has gone into the grammars for the improvement of testsuites.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Instrumentation in Grammar Engineering", "sec_num": "1.2" }, { "text": "The work reported here is situated in a large cooperative project aiming at the development of large-coverage grammars for three languages. The grammars have been developed over years by different people, which makes the existence of tools for navigation, testing, and documentation mandatory. Although the sample rules given below are in the format of LFG, nothing of the methodology relies on the choice of linguistic or computational paradigm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Instrumentation in Grammar Engineering", "sec_num": "1.2" }, { "text": "Measures from Software Engineering cannot be simply transferred to Grammar Engineering, because the structure of programs is different from that of unification grammars. Nevertheless, the structure of a grammar allows the derivation of suitable measures, similar to the structure of programs; this is discussed in Sec.2.1. The actual instrumentation of the grammar depends on the formalism used, and is discussed in Sec.2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar Instrumentation", "sec_num": "2" }, { "text": "Consider the LFG grammar rule in Fig. 1 . 2 On first view, one could require of a testsuite that each such rule is exercised at least once. Further thought will indicate that there are hidden alternatives, namely the optionality of the NP and the PP. 
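For concreteness, these hidden alternatives can be enumerated mechanically. The following toy sketch (Python; the encoding of the rule as a list of optional category names is invented here purely for illustration) lists the phrase-structure variants induced by the two optional constituents of the sample rule:

from itertools import product

optional = ['NP', 'PP']   # optional constituents of the sample rule VP -> V NP? PP*
for choice in product([True, False], repeat=len(optional)):
    kept = [cat for cat, keep in zip(optional, choice) if keep]
    print(' '.join(['VP', '->', 'V'] + kept))   # prints four rule variants

Each of the four variants printed by this sketch calls for a test case of its own. 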
The rule can only be said to be thoroughly tested if test cases exist which test both presence and absence of optional constituents (requiring 4 test cases for this rule). In addition to context-free rules, unification grammars contain equations of various sorts, as illustrated in Fig.1 . Since these annotations may also contain disjunctions, a testsuite with complete rule coverage is not guaranteed to exercise all equation alternatives. The phrase-structure-based criterion defined above must be refined to cover all equation alternatives in the rule (requiring two test cases for the PP annotation). Even if we assume that (as, e.g., in LFG) there is at least one equation associated with each constituent, equation coverage does not subsume rule coverage: Optional constituents introduce a rule disjunct (without the constituent) that is not characterizable by an equation. A measure might thus be defined as follows: disjunct coverage The disjunct coverage of a testsuite is the quotient T_dis = (number of disjuncts tested) / (number of disjuncts in grammar). 2 Notation: ?/*/+ represent optionality/iteration including/excluding zero occurrences on categories. Annotations to a category specify equality (=) or set membership (∈) of feature values, or non-existence of features (¬); they are terminated by a semicolon ( ; ). Disjunctions are given in braces ({ ... | ... }). ↑ (↓) are metavariables representing the feature structure corresponding to the mother (daughter) of the rule. Comments are enclosed in quotation marks (\"... \"). Cf. (Kaplan and Bresnan, 1982) for an introduction to LFG notation.", "cite_spans": [ { "start": 1791, "end": 1817, "text": "(Kaplan and Bresnan, 1982)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 33, "end": 39, "text": "Fig. 1", "ref_id": "FIGREF0" }, { "start": 532, "end": 537, "text": "Fig.1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Coverage Criteria", "sec_num": "2.1" }, { "text": "where a disjunct is either a phrase-structure alternative, or an annotation alternative. Optional constituents (and equations, if the formalism allows them) have to be treated as a disjunction of the constituent and an empty category (cf. the instrumented rule in Fig.2 for an example).", "cite_spans": [], "ref_spans": [ { "start": 264, "end": 269, "text": "Fig.2", "ref_id": null } ], "eq_spans": [], "section": "Coverage Criteria", "sec_num": "2.1" }, { "text": "Instead of considering disjuncts in isolation, one might take their interaction into account. The most complete test criterion, doing this to the fullest extent possible, can be defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coverage Criteria", "sec_num": "2.1" }, { "text": "interaction coverage The interaction coverage of a testsuite is the quotient T_inter = (number of disjunct combinations tested) / (number of legal disjunct combinations). There are methodological problems in this criterion, however. First, the set of legal combinations may not be easily definable, due to far-reaching dependencies between disjuncts in different rules, and second, recursion leads to infinitely many legal disjunct combinations as soon as we take the number of usages of a disjunct into account. 
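The disjunct-coverage quotient itself, by contrast, is trivial to compute once every parsed test case is associated with the set of disjuncts it relies on. A minimal sketch in Python (the disjunct identifiers and the usage table are hypothetical stand-ins for the output of parsing with an instrumented grammar, cf. Sec.2.2):

def disjunct_coverage(grammar_disjuncts, usage_per_item):
    # T_dis = number of disjuncts tested / number of disjuncts in the grammar
    tested = set()
    for used in usage_per_item.values():
        tested.update(used)
    return len(tested & set(grammar_disjuncts)) / len(set(grammar_disjuncts))

grammar = {'DISJUNCT-%03d' % i for i in range(1, 7)}            # toy grammar with 6 disjuncts
usage = {'item-1': {'DISJUNCT-001', 'DISJUNCT-003'},            # disjuncts used per parsed item
         'item-2': {'DISJUNCT-001', 'DISJUNCT-004'}}
print(disjunct_coverage(grammar, usage))                        # 3 of 6 disjuncts used: 0.5
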
Requiring complete interaction coverage is infeasible in practice, similar to the path coverage criterion in Software Engineering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coverage Criteria", "sec_num": "2.1" }, { "text": "We will say that an analysis (and the sentence receiving this analysis) relies on a grammar disjunct if this disjunct was used in constructing the analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coverage Criteria", "sec_num": "2.1" }, { "text": "Basically, grammar instrumentation is identical to program instrumentation: For each disjunct in a given source grammar, we add grammar code that will identify this disjunct in the solution produced, iff that disjunct has been used in constructing the solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Instrumentation", "sec_num": "2.2" }, { "text": "Assuming a unique numbering of disjuncts, an annotation of the form DISJUNCT-nn = + can be used for marking. To determine whether a certain disjunct was used in constructing a solution, one only needs to check whether the associated feature occurs (at some level of embedding) in the solution. Alternatively, if set-valued features are available, one can use a set-valued feature DISJUNCTS to collect atomic symbols representing one disjunct each:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Instrumentation", "sec_num": "2.2" }, { "text": "DISJUNCT-nn 6 DISJUNCTS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Instrumentation", "sec_num": "2.2" }, { "text": "One restriction is imposed by using the unification formalism, though: One occurrence of the mark cannot be distinguished from two occurrences, since the second application of the equation introduces no new information. The markers merely unify, and there is no way of counting. (Frank et al., 1998) ). In this way, from the root node of each solution the set of all disjuncts used can be collected, together with a usage count. Fig. 2 shows the rule from Fig.1 with such an instrumentation; equations of the form DISJUNCT-nnE o* express membership of the disjunct-specific atom DISJUNCT-nn in the sentence's multiset of disjunct markers.", "cite_spans": [ { "start": 279, "end": 299, "text": "(Frank et al., 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 429, "end": 435, "text": "Fig. 2", "ref_id": null }, { "start": 456, "end": 461, "text": "Fig.1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Instrumentation", "sec_num": "2.2" }, { "text": "Tool support is mandatory for a scenario such as instrumentation: Nobody will manually add equations such as those in Fig. 2 to several hundred rules. Based on the format of the grammar rules, an algorithm instrumenting a grammar can be written down easily.", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 124, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Processing Tools", "sec_num": "2.3" }, { "text": "Given a grammar and a testsuite or corpus to compare, first an instrumented grammar must be constructed using such an algorithm. This instrumented grammar is then used to parse the testsuite, yielding a set of solutions associated with information about usage of grammar disjuncts. Up to this point, the process is completely automatic. 
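As an illustration of how little is involved in the marking step, the following sketch (Python) instruments the disjuncts of a single rule; the representation of a rule as a plain list of annotation strings is a simplification assumed here, not the actual format of the grammar development environment:

from itertools import count

_numbers = count(1)

def instrument_disjuncts(disjuncts):
    # append a unique DISJUNCT-nnn marker equation to every disjunct of a rule
    result = []
    for annotations in disjuncts:
        marker = 'DISJUNCT-%03d ∈ o*' % next(_numbers)   # record usage in the o* multiset
        result.append('%s %s;' % (annotations, marker))
    return result

# toy example: the two annotation alternatives of the PP in the sample rule
print(instrument_disjuncts(['(↑ OBL)=↓;', '↓ ∈ (↑ ADJUNCT);']))
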
The following two sections discuss two possibilities to evaluate this information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Processing Tools", "sec_num": "2.3" }, { "text": "This section addresses the aspects of completeness (\"does the testsuite exercise all disjuncts in the grammar?\") and economy of a testsuite (\"is it minimal?\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quality of Testsuites", "sec_num": "3" }, { "text": "Complementing other work on testsuite construction (cf. Sec.5), we will assume that a grammar is already available, and that a testsuite has to be constructed or extended. While one may argue that grammar and testsuite should be developed in parallel, such that the coding of a new grammar disjunct is accompanied by the addition of suitable test cases, and vice versa, this is seldom the case. Apart from the existence of grammars which lack a testsuite, and to which this procedure could be usefully applied, there is the more principled obstacle of the evolution of the grammar, leading to states where previously necessary rules silently lose their usefulness, because their function is taken over by some other rules, structured differently. This is detectable by instrumentation, as discussed in Sec.3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quality of Testsuites", "sec_num": "3" }, { "text": "On the other hand, once there is a testsuite, you want to use it in the most economic way, avoiding redundant tests. Sec.3.2 shows that there are different levels of redundancy in a testsuite, dependent on the specific grammar used. Reduction of this redundancy can speed up the test activity, and give a clearer picture of the grammar's performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quality of Testsuites", "sec_num": "3" }, { "text": "If the disjunct coverage of a testsuite is 1 for some grammar, the testsuite is complete w.r.t. this grammar. Such a testsuite can reliably be used to monitor changes in the grammar: Any reduction in the grammar's coverage will show up in the failure of some test case (for negative test cases, cf. Sec.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testsuite Completeness", "sec_num": "3.1" }, { "text": "If there is no complete testsuite, one can - via instrumentation - identify disjuncts in the grammar for which no test case exists. There might be either (i) appropriate, but untested, disjuncts calling for the addition of a test case, or (ii) inappropriate disjuncts, for which one cannot construct a grammatical test case relying on them (e.g., left-overs from rearranging the grammar). Grammar instrumentation singles out all untested disjuncts automatically, but cases (i) and (ii) have to be distinguished manually.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testsuite Completeness", "sec_num": "3.1" }, { "text": "Checking completeness of our local testsuite of 1787 items, we found that only 1456 out of 3730 grammar disjuncts in our German grammar were tested, yielding T_dis = 0.39 (the TSNLP testsuite containing 1093 items tests only 1081 disjuncts, yielding T_dis = 0.28). 3 Fig.3 shows an example of a gap in our testsuite (there are no examples of circumpositions), while Fig.4 shows an inappropriate disjunct thus discovered (the category ADVadj has been eliminated in the lexicon, but not in all rules). 
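The completeness check itself amounts to a set difference over the collected usage information. A small sketch in Python (identifiers are hypothetical, and the usage table again stands for the result of parsing the testsuite with the instrumented grammar):

def untested_disjuncts(grammar_disjuncts, usage_per_item):
    # grammar disjuncts on which no parsed test case relies
    used = set()
    for disjuncts in usage_per_item.values():
        used.update(disjuncts)
    return sorted(set(grammar_disjuncts) - used)

# whether an untested disjunct is a gap in the testsuite (case i) or an
# inappropriate left-over (case ii) still has to be decided manually
print(untested_disjuncts({'D-001', 'D-002', 'D-003'},
                         {'item-1': {'D-001'}, 'item-2': {'D-001', 'D-003'}}))   # ['D-002']
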
Another error class is illustrated by Fig.5 , which shows a rule that can never be used due to an LFG coherence violation; the grammar is inconsistent here. 4 3There are, of course, unparsed but grammatical test cases in both testsuites, which have not been taken into account in these figures. This explains the difference to the overall number of 1582 items in the German TSNLP testsuite. 4Test cases using a free dative pronoun may be in the testsuite, but receive no analysis since the grammatical function FREEDAT is not defined as such in the configuration section. ", "cite_spans": [], "ref_spans": [ { "start": 266, "end": 271, "text": "Fig.3", "ref_id": null }, { "start": 365, "end": 370, "text": "Fig.4", "ref_id": null }, { "start": 538, "end": 543, "text": "Fig.5", "ref_id": null } ], "eq_spans": [], "section": "Testsuite Completeness", "sec_num": "3.1" }, { "text": "Besides being complete, a testsuite must be economical, i.e., contain as few items as possible without sacrificing its diagnostic capabilities. Instrumentation can identify redundant test cases. Three criteria can be applied in determining whether a test case is redundant: similarity There is a set of other test cases which jointly rely on all disjunct on which the test case under consideration relies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testsuite Economy", "sec_num": "3.2" }, { "text": "equivalence There is a single test case which relies on exactly the same combination(s) of disjuncts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testsuite Economy", "sec_num": "3.2" }, { "text": "strict equivalence There is a single test case which is equivalent to and, additionally, relies on the disjuncts exactly as often as, the test case under consideration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testsuite Economy", "sec_num": "3.2" }, { "text": "For all criteria, lexical and structural ambiguities must be taken into account. Fig.6 shows some equivalent test cases derived from our testsuite: Example 1 illustrates the distinction between equivalence and strict equivalence; the test cases contain different numbers of attributive adjectives, but are nevertheless considered equivalent. Example 2 shows that our grammar does not make any distinction between adverbial usage and secondary (subject or object) predication. Example 3 shows test cases which should not be considered equivalent, and is discussed below.", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 86, "text": "Fig.6", "ref_id": null } ], "eq_spans": [], "section": "Testsuite Economy", "sec_num": "3.2" }, { "text": "The reduction we achieved in size and processing time is shown in 'He eats the schnitzel naked/raw/quickly.'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testsuite Economy", "sec_num": "3.2" }, { "text": "3 Otto versucht oft zu lachen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testsuite Economy", "sec_num": "3.2" }, { "text": "Otto versucht zu lachen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testsuite Economy", "sec_num": "3.2" }, { "text": "'Otto (often) tries to laugh.' Figure 6 : Sets of equivalent test cases ily selected), and one without similar test cases. 
The last was constructed using a simple heuristic: Starting with the sentence relying on the most disjuncts, working towards sentences relying on fewer disjuncts, a sentence was selected only if it relied on a disjunct on which no previously selected sentence relied. Assuming that a disjunct working correctly once will work correctly more than once, we did not consider strict equivalence. We envisage the following use of this redundancy detection: There clearly are linguistic reasons to distinguish all test cases in example 2, so they cannot simply be deleted from the testsuite. Rather, their equivalence indicates that the grammar is not yet perfect (or never will be, if it remains purely syntactic). Such equivalences could be interpreted as The level of equivalence can be taken as a limited interaction test: These test cases represent one complete selection of grammar disjuncts, and (given the grammar) there is nothing we can gain by checking a test case if an equivalent one was tested. Thus, this level of redundancy may be used for ensuring the quality of grammar changes prior to their incorporation into the production version of the grammar. The level of similarity contains much less test cases, and does not test any (systematic) interaction between disjuncts. Thus, it may be used during development as a quick rule-of-thumb procedure detecting serious errors only. Coming back to example 3 in Fig.6 , building equivalence classes also helps in detecting grammar errors: If, according to the grammar, two cases are equivalent which actually aren't, the grammar is incorrect. Example 3 shows two test cases which are syntactically different in that the first contains the adverbial oft, while the other doesn't. The reason why they are equivalent is an incorrect rule that assigns an incorrect reading to the second test case, where the infinitival particle \"zu\" functions as an adverbial.", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 39, "text": "Figure 6", "ref_id": null }, { "start": 1541, "end": 1546, "text": "Fig.6", "ref_id": null } ], "eq_spans": [], "section": "Testsuite Economy", "sec_num": "3.2" }, { "text": "To control overgeneration, appropriately marked ungrammatical sentences are important in every testsuite. Instrumentation as proposed here only looks at successful parses, but can still be applied in this context: If an ungrammatical test case receives an analysis, instrumentation informs us about the disjuncts used in the incorrect analysis. One (or more) of these disjuncts must be incorrect, or the sentence would not have received a solution. We exploit this information by accumulation across the entire test suite, looking for disjuncts that appear in unusually high proportion in parseable ungrammatical test cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negative Test Cases", "sec_num": "4" }, { "text": "In this manner, six grammar disjuncts are singled out by the parseable ungrammatical test cases in the TSNLP testsuite. 
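A sketch of this accumulation step is given below (Python). The text does not spell out the exact ranking statistic, so the proportion used here is merely one plausible choice, and the input format is assumed for illustration:

from collections import Counter

def suspicious_disjuncts(parses, min_bad=2):
    # parses: iterable of (disjuncts_used, is_grammatical) pairs, one per parseable test case
    bad, total = Counter(), Counter()
    for disjuncts, grammatical in parses:
        for d in disjuncts:
            total[d] += 1
            if not grammatical:
                bad[d] += 1
    # rank disjuncts by the share of ungrammatical items among all items relying on them
    ranked = [(bad[d] / total[d], d) for d in bad if bad[d] >= min_bad]
    return sorted(ranked, reverse=True)

print(suspicious_disjuncts([({'D-7'}, False), ({'D-7'}, False), ({'D-7', 'D-2'}, True)]))
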
The most prominent disjunct appears in 26 sentences (listed in Fig.7) , of which group 1 is really grammatical and the rest fall into two groups: A partial VP with object NP, interpreted as an imperative sentence (group 2), and a weird interaction with the tokenizer incorrectly handling capitalization (group 3).", "cite_spans": [], "ref_spans": [ { "start": 183, "end": 189, "text": "Fig.7)", "ref_id": null } ], "eq_spans": [], "section": "Negative Test Cases", "sec_num": "4" }, { "text": "Far from being conclusive, the similarity of these sentences derived from a suspicious grammar disjunct, and the clear relation of the sentences to only two exactly specifiable grammar errors make it plausible that this approach is very promising in reducing overgeneration. Although there are a number of efforts to construct reusable large-coverage testsuites, none has to my knowledge explored how existing grammars could be used for this purpose.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negative Test Cases", "sec_num": "4" }, { "text": "Starting with (Flickinger et al., 1987) , testsuites have been drawn up from a linguistic viewpoint, \"in-]ormed by [the] study of linguistics and [reflecting] the grammatical issues that linguists have concerned themselves with\" (Flickinger et al., 1987, , p.4) . Although the question is not explicitly addressed in (Balkan et al., 1994) , all the testsuites reviewed there also seem to follow the same methodology. The TSNLP project (Lehmann and Oepen, 1996) and its successor DiET (Netter et al., 1998) , which built large multilingual testsuites, likewise fall into this category. The use of corpora (with various levels of annotation) has been studied, but even here the recommendations are that much manual work is required to turn corpus examples into test cases (e.g., (Balkan and Fouvry, 1995) ). The reason given is that corpus sentences neither contain linguistic phenomena in isolation, nor do they contain systematic variation. Corpora thus are used only as an inspiration. (Oepen and Flickinger, 1998) stress the interdependence between application and testsuite, but don't comment on the relation between grammar and testsuite.", "cite_spans": [ { "start": 14, "end": 39, "text": "(Flickinger et al., 1987)", "ref_id": "BIBREF3" }, { "start": 115, "end": 158, "text": "[the] study of linguistics and [reflecting]", "ref_id": null }, { "start": 229, "end": 261, "text": "(Flickinger et al., 1987, , p.4)", "ref_id": null }, { "start": 317, "end": 338, "text": "(Balkan et al., 1994)", "ref_id": "BIBREF1" }, { "start": 435, "end": 460, "text": "(Lehmann and Oepen, 1996)", "ref_id": "BIBREF8" }, { "start": 484, "end": 505, "text": "(Netter et al., 1998)", "ref_id": "BIBREF10" }, { "start": 777, "end": 802, "text": "(Balkan and Fouvry, 1995)", "ref_id": "BIBREF0" }, { "start": 987, "end": 1015, "text": "(Oepen and Flickinger, 1998)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Negative Test Cases", "sec_num": "4" }, { "text": "The approach presented tries to make available the linguistic knowledge that went into the grammar for development of testsuites. Grammar development and testsuite compilation are seen as complementary and interacting processes, not as isolated modules. We have seen that even large testsuites cover only a fraction of existing large-coverage grammars, and presented evidence that there is a considerable amount of redundancy within existing testsuites. 
To empirically validate that the procedures outlined above improve grammar and testsuite, careful grammar development is required. Based on the information derived from parsing with instrumented grammars, the changes and their effects need to be evaluated. In addition to this empirical work, instrumentation can be applied to other areas in Grammar Engineering, e.g., to detect sources of spurious ambiguities, to select sample sentences relying on a disjunct for documentation, or to assist in the construction of additional test cases. Methodological work is also required for the definition of a practical and intuitive criterion to measure limited interaction coverage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Each existing grammar development environment undoubtely offers at least some basic tools for comparing the grammar's coverage with a testsuite. Regrettably, these tools are seldomly presented publicly (which accounts for the short list of such references). It is my belief that the thorough discussion of such infrastructure items (tools and methods) is of more immediate importance to the quality of the lingware than the discussion of open linguistic problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "1The work reported here was conducted during my time at the Institut fiir Maschinelle Sprachverarbeitung (IMS), Stuttgart University, Germany.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Corpus-based test suite generation. TSNLP-WP 2.2", "authors": [ { "first": "L", "middle": [], "last": "Balkan", "suffix": "" }, { "first": "F", "middle": [], "last": "Fouvry", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Balkan and F. Fouvry. 1995. Corpus-based test suite generation. TSNLP-WP 2.2, University of Essex.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Test Suite Design Annotation Scheme", "authors": [ { "first": "L", "middle": [], "last": "Balkan", "suffix": "" }, { "first": "S", "middle": [], "last": "Meijer", "suffix": "" }, { "first": "D", "middle": [], "last": "Arnold", "suffix": "" }, { "first": "D", "middle": [], "last": "Estival", "suffix": "" }, { "first": "K", "middle": [], "last": "Falkedal", "suffix": "" } ], "year": 1994, "venue": "TSNLP-WP2", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Balkan, S. Meijer, D. Arnold, D. Estival, and K. Falkedal. 1994. Test Suite Design Annotation Scheme. TSNLP-WP2.2, University of Essex.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Developing language reesources and applications with geppetto", "authors": [ { "first": "F", "middle": [], "last": "Ciravegna", "suffix": "" }, { "first": "A", "middle": [], "last": "Lavelli", "suffix": "" }, { "first": "D", "middle": [], "last": "Petrelli", "suffix": "" }, { "first": "F", "middle": [], "last": "Pianesi", "suffix": "" } ], "year": 1998, "venue": "Proc. 1st Int'l Con/. on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "28--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Ciravegna, A. Lavelli, D. Petrelli, and F. Pianesi. 1998. Developing language reesources and appli- cations with geppetto. In Proc. 1st Int'l Con/. 
on Language Resources and Evaluation, pages 619- 625. Granada/Spain, 28-30 May 1998.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Toward Evaluation o/ NLP Systems", "authors": [ { "first": "D", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "J", "middle": [], "last": "Nerbonne", "suffix": "" }, { "first": "I", "middle": [], "last": "Sag", "suffix": "" }, { "first": "T", "middle": [], "last": "Wasow", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Flickinger, J. Nerbonne, I. Sag, and T. Wa- sow. 1987. Toward Evaluation o/ NLP Systems. Hewlett-Packard Laboratories, Palo Alto/CA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Optimality theory style constraint ranking in large-scale lfg gramma", "authors": [ { "first": "A", "middle": [], "last": "Frank", "suffix": "" }, { "first": "T", "middle": [ "H" ], "last": "King", "suffix": "" }, { "first": "J", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "J", "middle": [], "last": "Maxwell", "suffix": "" } ], "year": 1998, "venue": "Proc. of the LFG98", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Frank, T.H. King, J. Kuhn, and J. Maxwell. 1998. Optimality theory style constraint ranking in large-scale lfg gramma. In Proc. of the LFG98", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The complete guide to software testing", "authors": [ { "first": "W", "middle": [ "C" ], "last": "Hetzel", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.C. Hetzel. 1988. The complete guide to software testing. QED Information Sciences, Inc. Welles- ley/MA 02181.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Lexicalfunctional grammar: A formal system for grammatical representation", "authors": [ { "first": "R", "middle": [ "M" ], "last": "Kaplan", "suffix": "" }, { "first": "J", "middle": [], "last": "Bresnan", "suffix": "" } ], "year": 1982, "venue": "The Mental Representation of Grammatical Relations", "volume": "", "issue": "", "pages": "173--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "R.M. Kaplan and J. Bresnan. 1982. Lexical- functional grammar: A formal system for gram- matical representation. In J. Bresnan and R.M. Kaplan, editors, The Mental Representation of Grammatical Relations, pages 173-281. Cam- bridge, MA: MIT Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "TSNLP -Test Suites for Natural Language Processing", "authors": [ { "first": "S", "middle": [], "last": "Lehmann", "suffix": "" }, { "first": "S", "middle": [], "last": "Oepen", "suffix": "" } ], "year": 1996, "venue": "Proc. 16th Int'l Con]. on Computational Linguistics", "volume": "", "issue": "", "pages": "711--716", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Lehmann and S. Oepen. 1996. TSNLP -Test Suites for Natural Language Processing. In Proc. 16th Int'l Con]. on Computational Linguistics, pages 711-716. Copenhagen/DK.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Modultest und Modulverij~ka-tion", "authors": [ { "first": "P", "middle": [], "last": "Liggesmeyer", "suffix": "" } ], "year": 1990, "venue": "Angewandte Informatik", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Liggesmeyer. 1990. Modultest und Modulverij~ka- tion. Angewandte Informatik 4. 
Mannheim: BI Wissenschaftsverlag.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Diet -diagnostic and evaluation tools for nlp applications", "authors": [ { "first": "K", "middle": [], "last": "Netter", "suffix": "" }, { "first": "S", "middle": [], "last": "Armstrong", "suffix": "" }, { "first": "T", "middle": [], "last": "Kiss", "suffix": "" }, { "first": "J", "middle": [], "last": "Klein", "suffix": "" }, { "first": "S", "middle": [], "last": "Lehman", "suffix": "" } ], "year": 1998, "venue": "Proc. 1st", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Netter, S. Armstrong, T. Kiss, J. Klein, and S. Lehman. 1998. Diet -diagnostic and eval- uation tools for nlp applications. In Proc. 1st", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "on Language Resources and Evaluation", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "28--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Int'l Con/. on Language Resources and Evalua- tion, pages 573-579. Granada/Spain, 28-30 May 1998.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Towards systematic grammar profiling:test suite techn. 10 years afte", "authors": [ { "first": "S", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "D", "middle": [ "P" ], "last": "Flickinger", "suffix": "" } ], "year": 1998, "venue": "Journal of Computer Speech and Language", "volume": "12", "issue": "", "pages": "411--435", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Oepen and D.P. Flickinger. 1998. Towards sys- tematic grammar profiling:test suite techn. 10 years afte. Journal of Computer Speech and Lan- guage, 12:411-435.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Sample Rule the choice of linguistic or computational paradigm." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "have used a special feature of our grammar development environment: Following the LFG spirit of different representation levels associated with each solution (so-called projections), it provides for a multiset of symbols associated with the complete solution, where structural embedding plays no role (so-called optimality projection; see" }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Figure 3: Appropriate untested disjunct ADVP=~ { { e DISJUNCT-021 E o*; I ADVadj 4=1` DISJUNCT-022 E o* \"unused disjunct\" ; } ADVstd 4=1\" DISJUNCT-023 E o, \"unused disjunct\" ; } I .,. } Figure 4: Inappropriate disjunct" }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "Figure 7: Sentences relying on a suspicious disjunct" }, "TABREF0": { "num": null, "html": null, "content": "
, which contains measurements for a test run containing only the parseable test cases, one without equivalent test cases (for every set of equivalent test cases, one was arbitrar-
", "type_str": "table", "text": "" }, "TABREF2": { "num": null, "html": null, "content": "
Dieselbe schlafen.
Die schlafen.
Das schlafen.
Eines schlafen.
Man schlafen.
Jede schlafen.
Dieser schlafen.
Dieses schlafen.
Ich schlafen.
Eine schlafen.
Der schlafen.
Meins schlafen.
Jeder schlafen.
Dasjenige schlafen.
Derjenige schlafen.
Jedes schlafen.
Jener schlafen.
Diejenige schlafen.
Keiner schlafen.
Jenes schlafen.
Derselbe schlafen.
Keines schlafen.
Er schlafen.
Dasselbe schlafen.
Irgendjemand schlafen.
", "type_str": "table", "text": "Der Test fg.llt leicht." } } } }