{
"paper_id": "A00-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:12:13.220809Z"
},
"title": "Improving Testsuites via Instrumentation",
"authors": [
{
"first": "Norbert",
"middle": [
"Brsker"
],
"last": "Eschenweg",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper explores the usefulness of a technique from software engineering, namely code instrumentation, for the development of large-scale natural language grammars. Information about the usage of grammar rules in test sentences is used to detect untested rules, redundant test sentences, and likely causes of overgeneration. Results show that less than half of a large-coverage grammar for German is actually tested by two large testsuites, and that i0-30% of testing time is redundant. The methodology applied can be seen as a re-use of grammar writing knowledge for testsuite compilation.",
"pdf_parse": {
"paper_id": "A00-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper explores the usefulness of a technique from software engineering, namely code instrumentation, for the development of large-scale natural language grammars. Information about the usage of grammar rules in test sentences is used to detect untested rules, redundant test sentences, and likely causes of overgeneration. Results show that less than half of a large-coverage grammar for German is actually tested by two large testsuites, and that i0-30% of testing time is redundant. The methodology applied can be seen as a re-use of grammar writing knowledge for testsuite compilation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Computational Linguistics (CL) has moved towards the marketplace: One finds programs employing CLtechniques in every software shop: Speech Recognition, Grammar and Style Checking, and even Machine Translation are available as products. While this demonstrates the applicability of the research done, it also calls for a rigorous development methodology of such CL application products.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper,lI describe the adaptation of a technique from Software Engineering, namely code instrumentation, to grammar development. Instrumentation is based on the simple idea of marking any piece of code used in processing, and evaluating this usage information afterwards. The application I present here is the evaluation and improvement of grammar and testsuites; other applications are possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Both software and grammar development are similar processes: They result in a system transforming some input into some output, based on a functional specification (e.g., cf. (Ciravegna et al., 1998) for the application of a particular software design methodology to linguistic engineering). Although Grammar",
"cite_spans": [
{
"start": 174,
"end": 198,
"text": "(Ciravegna et al., 1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Software Engineering vs. Grammar Engineering",
"sec_num": "1.1"
},
{
"text": "Engineering usually is not based on concrete specifications, research from linguistics provides an informal specification. Software Engineering developed many methods to assess the quality of a program, ranging from static analysis of the program code to dynamic testing of the program's behavior. Here, we adapt dynamic testing, which means running the implemented program against a set of test cases. The test cases are designed to maximize the probability of detecting errors in the program, i.e., incorrect conditions, incompatible assumptions on subsequent branches, etc. (for overviews, cf. (Hetzel, 1988; Liggesmeyer, 1990) ).",
"cite_spans": [
{
"start": 597,
"end": 611,
"text": "(Hetzel, 1988;",
"ref_id": "BIBREF6"
},
{
"start": 612,
"end": 630,
"text": "Liggesmeyer, 1990)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Software Engineering vs. Grammar Engineering",
"sec_num": "1.1"
},
{
"text": "Engineering How can we fruitfully apply the idea of measuring the coverage of a set of test cases to grammar development? I argue that by exploring the relation between grammar and testsuite, one can improve both of them. Even the traditional usage of testsuites to indicate grammar gaps or overgeneration can profit from a precise indication of the grammar rules used to parse the sentences (cf. Sec.4). Conversely, one may use the grammar to improve the testsuite, both in terms of its coverage (cf. Sec.3.1) and its economy (cf. Sec.3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instrumentation in Grammar",
"sec_num": "1.2"
},
{
"text": "Viewed this way, testsuite writing can benefit from grammar development because both describe the syntactic constructions of a natural language. Testsuites systematically list these constructions, while grammars give generative procedures to construct them. Since there are currently many more grammars than testsuites, we may re-use the work that has gone into the grammars for the improvement of testsuites.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instrumentation in Grammar",
"sec_num": "1.2"
},
{
"text": "The work reported here is situated in a large cooperative project aiming at the development of largecoverage grammars for three languages. The grammars have been developed over years by different people, which makes the existence of tools for navigation, testing, and documentation mandatory. Although the sample rules given below are in the format of LFG, nothing of the methodology relies on VP~V $=T; NP?$= (I\" OBJ); PP* {$= (T OBL); 156 ($ ADJUNCT);}. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instrumentation in Grammar",
"sec_num": "1.2"
},
{
"text": "Measures from Software Engineering cannot be simply transferred to Grammar Engineering, because the structure of programs is different from that of unification grammars. Nevertheless, the structure of a grammar allows the derivation of suitable measures, similar to the structure of programs; this is discussed in Sec.2.1. The actual instrumentation of the grammar depends on the formalism used, and is discussed in Sec.2.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar Instrumentation",
"sec_num": "2"
},
{
"text": "Consider the LFG grammar rule in Fig. 1 . 2 On first view, one could require of a testsuite that each such rule is exercised at least once. ~rther thought will indicate that there are hidden alternatives, namely the optionality of the NP and the PP. The rule can only be said to be thoroughly tested if test cases exist which test both presence and absence of optional constituents (requiring 4 test cases for this rule). In addition to context-free rules, unification grammars contain equations of various sorts, as illustrated in Fig.1 . Since these annotations may also contain disjunctions, a testsuite with complete rule coverage is not guaranteed to exercise all equation alternatives. The phrase-structure-based criterion defined above must be refined to cover all equation alternatives in the rule (requiring two test cases for the PP annotation). Even if we assume that (as, e.g., in LFG) there is at least one equation associated with each constituent, equation coverage does not subsume rule coverage: Optional constituents introduce a rule disjunct (without the constituent) that is not characterizable by an equation. A measure might thus be defined as follows: disjunct coverage The disjunct coverage of a testsuite is the quotient number of disjuncts tested Tdis = number of disjuncts in grammar 2Notation: ?/*/+ represent optionality/iteration including/excluding zero occurrences on categories. Annotations to a category specify equality (=) or set membership (6) of feature values, or non-existence of features (-1); they are terminated by a semicolon ( ; ). Disjunctions are given in braces ({... I-.. }). $ ($) are metavariables representing the feature structure corresponding to the mother (daughter) of the rule. Comments are enclosed in quotation marks (\"... \"). Cf. (Kaplan and Bresnan, 1982) for an introduction to LFG notation.",
"cite_spans": [
{
"start": 1791,
"end": 1817,
"text": "(Kaplan and Bresnan, 1982)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 33,
"end": 39,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 532,
"end": 537,
"text": "Fig.1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Coverage Criteria",
"sec_num": "2.1"
},
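The disjunct-coverage quotient can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' tooling; the function names and the representation (disjunct ids as strings, one set of used disjuncts per parsed test case) are assumptions.

```python
# Illustrative sketch: compute Tdis = (# disjuncts tested) / (# disjuncts
# in grammar) from per-sentence usage records, as produced by parsing a
# testsuite with an instrumented grammar.

def disjunct_coverage(grammar_disjuncts, usage_per_sentence):
    """grammar_disjuncts: set of all disjunct ids in the grammar.
    usage_per_sentence: one set of disjunct ids per parsed test case."""
    tested = set().union(*usage_per_sentence) if usage_per_sentence else set()
    return len(tested & grammar_disjuncts) / len(grammar_disjuncts)

def untested(grammar_disjuncts, usage_per_sentence):
    """Disjuncts no test case relies on: candidates for new test cases,
    or inappropriate left-overs from rearranging the grammar."""
    tested = set().union(*usage_per_sentence) if usage_per_sentence else set()
    return grammar_disjuncts - tested

# hypothetical data: four disjuncts, two parsed sentences
print(disjunct_coverage({"d1", "d2", "d3", "d4"}, [{"d1", "d2"}, {"d1"}]))
```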
{
"text": "where a disjunct is either a phrase-structure alternative, or an annotation alternative. Optional constituents (and equations, if the formalism allows them) have to be treated as a disjunction of the constituent and an empty category (cf. the instrumented rule in Fig.2 for an example).",
"cite_spans": [],
"ref_spans": [
{
"start": 264,
"end": 269,
"text": "Fig.2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Coverage Criteria",
"sec_num": "2.1"
},
{
"text": "Instead of considering disjuncts in isolation, one might take their interaction into account. The most complete test criterion, doing this to the fullest extent possible, can be defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage Criteria",
"sec_num": "2.1"
},
{
"text": "interaction coverage The interaction coverage of a testsuite is the quotient number of disjunct combinations tested Tinter = number of legal disjunct combinations There are methodological problems in this criterion, however. First, the set of legal combinations may not be easily definable, due to far-reaching dependencies between disjuncts in different rules, and second, recursion leads to infinitely many legal disjunct combinations as soon as we take the number of usages of a disjunct into account. Requiring complete interaction coverage is infeasible in practice, similar to the path coverage criterion in Software Engineering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage Criteria",
"sec_num": "2.1"
},
{
"text": "We will say that an analysis (and the sentence receiving this analysis) relies on a grammar disjunct if this disjunct was used in constructing the analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage Criteria",
"sec_num": "2.1"
},
{
"text": "Basically, grammar instrumentation is identical to program instrumentation: For each disjunct in a given source grammar, we add grammar code that will identify this disjunct in the solution produced, iff that disjunct has been used in constructing the solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instrumentation",
"sec_num": "2.2"
},
{
"text": "Assuming a unique numbering of disjuncts, an annotation of the form DISJUNCT-nn = + can be used for marking. To determine whether a certain disjunct was used in constructing a solution, one only needs to check whether the associated feature occurs (at some level of embedding) in the solution. Alternatively, if set-valued features are available, one can use a set-valued feature DISJUNCTS to collect atomic symbols representing one disjunct each:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instrumentation",
"sec_num": "2.2"
},
{
"text": "DISJUNCT-nn 6 DISJUNCTS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Instrumentation",
"sec_num": "2.2"
},
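The counting problem described here can be mimicked outside a unification formalism. The sketch below is hypothetical (it assumes a derivation reports the number of every disjunct it uses); it collects markers into a multiset, which preserves usage counts where unifying plain DISJUNCT-nn = + marks would collapse them.

```python
from collections import Counter

def collect_markers(used_disjunct_ids):
    """used_disjunct_ids: sequence of disjunct numbers used in one parse.
    A Counter models the multiset of disjunct markers at the root node:
    unlike a boolean mark, it distinguishes one use from two."""
    return Counter(f"DISJUNCT-{n}" for n in used_disjunct_ids)

markers = collect_markers([1, 3, 3])
# DISJUNCT-3 was used twice in this derivation; a set would lose that fact
print(markers["DISJUNCT-3"])
```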
{
"text": "One restriction is imposed by using the unification formalism, though: One occurrence of the mark cannot be distinguished from two occurrences, since the second application of the equation introduces no new information. The markers merely unify, and there is no way of counting. (Frank et al., 1998) ). In this way, from the root node of each solution the set of all disjuncts used can be collected, together with a usage count. Fig. 2 shows the rule from Fig.1 with such an instrumentation; equations of the form DISJUNCT-nnE o* express membership of the disjunct-specific atom DISJUNCT-nn in the sentence's multiset of disjunct markers.",
"cite_spans": [
{
"start": 279,
"end": 299,
"text": "(Frank et al., 1998)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 429,
"end": 435,
"text": "Fig. 2",
"ref_id": null
},
{
"start": 456,
"end": 461,
"text": "Fig.1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Instrumentation",
"sec_num": "2.2"
},
{
"text": "Tool support is mandatory for a scenario such as instrumentation: Nobody will manually add equations such as those in Fig. 2 to several hundred rules. Based on the format of the grammar rules, an algorithm instrumenting a grammar can be written down easily.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 124,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Processing Tools",
"sec_num": "2.3"
},
{
"text": "Given a grammar and a testsuite or corpus to compare, first an instrumented grammar must be constructed using such an algorithm. This instrumented grammar is then used to parse the testsuite, yielding a set of solutions associated with information about usage of grammar disjuncts. Up to this point, the process is completely automatic. The following two sections discuss two possibilities to evaluate this information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing Tools",
"sec_num": "2.3"
},
{
"text": "This section addresses the aspects of completeness (\"does the testsuite exercise all disjuncts in the grammar?\") and economy of a testsuite (\"is it minimal?\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Testsuites",
"sec_num": "3"
},
{
"text": "Complementing other work on testsuite construction (cf. Sec.5), we will assume that a grammar is already available, and that a testsuite has to be constructed or extended. While one may argue that grammar and testsuite should be developed in parallel, such that the coding of a new grammar disjunct is accompanied by the addition of suitable test cases, and vice versa, this is seldom the case. Apart from the existence of grammars which lack a testsuite, and to which this procedure could be usefully applied, there is the more principled obstacle of the evolution of the grammar, leading to states where previously necessary rules silently loose their usefulness, because their function is taken over by some other rules, structured differently. This is detectable by instrumentation, as discussed in Sec.3.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Testsuites",
"sec_num": "3"
},
{
"text": "On the other hand, once there is a testsuite, you want to use it in the most economic way, avoiding redundant tests. Sec.3.2 shows that there are different levels of redundancy in a testsuite, dependent on the specific grammar used. Reduction of this redundancy can speed up the test activity, and give a clearer picture of the grammar's performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Testsuites",
"sec_num": "3"
},
{
"text": "If the disjunct coverage of a testsuite is 1 for some grammar, the testsuite is complete w.r.t, this grammar. Such a testsuite can reliably be used to monitor changes in the grammar: Any reduction in the grammar's coverage will show up in the failure of some test case (for negative test cases, cf. Sec.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testsuite Completeness",
"sec_num": "3.1"
},
{
"text": "If there is no complete testsuite, one can -via instrumentation -identify disjuncts in the grammar for which no test case exists. There might be either (i) appropriate, but untested, disjuncts calling for the addition of a test case, or (ii) inappropriate disjuncts, for which one cannot construct a grammatical test case relying on them (e.g., left-overs from rearranging the grammar). Grammar instrumentation singles out all untested disjuncts automatically, but cases (i) and (ii) have to be distinguished manually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testsuite Completeness",
"sec_num": "3.1"
},
{
"text": "Checking completeness of our local testsuite of 1787 items, we found that only 1456 out of 3730 grammar disjuncts ir~ our German grammar were tested, yielding Tdis = 0.39 (the TSNLP testsuite containing 1093 items tests only 1081 disjuncts, yielding Tdis = 0.28). 3 Fig.3 shows an example of a gap in our testsuite (there are no examples of circumpositions), while Fig.4 shows an inapproppriate disjunct thus discovered (the category ADVadj has been eliminated in the lexicon, but not in all rules). Another error class is illustrated by Fig.5 , which shows a rule that can never be used due to an LFG coherence violation; the grammar is inconsistent here. 4 3There are, of course, unparsed but grammatical test cases in both testsuites, which have not been taken into account in these figures. This explains the difference to the overall number of 1582 items in the German TSNLP testsuite. 4Test cases using a free dative pronoun may be in the testsuite, but receive no analysis since the grammatical function FREEDAT is not defined as such in the configuration section. ",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 271,
"text": "Fig.3",
"ref_id": null
},
{
"start": 365,
"end": 370,
"text": "Fig.4",
"ref_id": null
},
{
"start": 538,
"end": 543,
"text": "Fig.5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Testsuite Completeness",
"sec_num": "3.1"
},
{
"text": "Besides being complete, a testsuite must be economical, i.e., contain as few items as possible without sacrificing its diagnostic capabilities. Instrumentation can identify redundant test cases. Three criteria can be applied in determining whether a test case is redundant: similarity There is a set of other test cases which jointly rely on all disjunct on which the test case under consideration relies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testsuite Economy",
"sec_num": "3.2"
},
{
"text": "equivalence There is a single test case which relies on exactly the same combination(s) of disjuncts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testsuite Economy",
"sec_num": "3.2"
},
{
"text": "strict equivalence There is a single test case which is equivalent to and, additionally, relies on the disjuncts exactly as often as, the test case under consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testsuite Economy",
"sec_num": "3.2"
},
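The three redundancy criteria, and the selection heuristic described later in this section, can be sketched as follows. This is an illustrative sketch under an assumed representation (each test case mapped to the list of disjuncts its analysis relies on, with repetitions); it is not the authors' implementation.

```python
from collections import Counter

def equivalent(reliance_a, reliance_b):
    """Equivalence: same combination of disjuncts, usage counts ignored."""
    return set(reliance_a) == set(reliance_b)

def strictly_equivalent(reliance_a, reliance_b):
    """Strict equivalence: same disjuncts, relied on exactly as often."""
    return Counter(reliance_a) == Counter(reliance_b)

def reduce_similar(cases):
    """Greedy heuristic from Sec.3.2: visit sentences by decreasing number
    of disjuncts relied on; keep a sentence only if it relies on a disjunct
    no previously kept sentence relies on."""
    kept, covered = [], set()
    for sent, used in sorted(cases.items(), key=lambda kv: -len(set(kv[1]))):
        if set(used) - covered:
            kept.append(sent)
            covered |= set(used)
    return kept
```

With this representation, two test cases differing only in the number of attributive adjectives come out equivalent but not strictly equivalent, matching example 1 in Fig.6.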
{
"text": "For all criteria, lexical and structural ambiguities must be taken into account. Fig.6 shows some equivalent test cases derived from our testsuite: Example 1 illustrates the distinction between equivalence and strict equivalence; the test cases contain different numbers of attributive adjectives, but are nevertheless considered equivalent. Example 2 shows that our grammar does not make any distinction between adverbial usage and secondary (subject or object) predication. Example 3 shows test cases which should not be considered equivalent, and is discussed below.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 86,
"text": "Fig.6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Testsuite Economy",
"sec_num": "3.2"
},
{
"text": "The reduction we achieved in size and processing time is shown in 'He eats the schnitzel naked/raw/quickly.'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testsuite Economy",
"sec_num": "3.2"
},
{
"text": "3 Otto versucht oft zu lachen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testsuite Economy",
"sec_num": "3.2"
},
{
"text": "Otto versucht zu lachen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testsuite Economy",
"sec_num": "3.2"
},
{
"text": "'Otto (often) tries to laugh.' Figure 6 : Sets of equivalent test cases ily selected), and one without similar test cases. The last was constructed using a simple heuristic: Starting with the sentence relying on the most disjuncts, working towards sentences relying on fewer disjuncts, a sentence was selected only if it relied on a disjunct on which no previously selected sentence relied. Assuming that a disjunct working correctly once will work correctly more than once, we did not consider strict equivalence. We envisage the following use of this redundancy detection: There clearly are linguistic reasons to distinguish all test cases in example 2, so they cannot simply be deleted from the testsuite. Rather, their equivalence indicates that the grammar is not yet perfect (or never will be, if it remains purely syntactic). Such equivalences could be interpreted as The level of equivalence can be taken as a limited interaction test: These test cases represent one complete selection of grammar disjuncts, and (given the grammar) there is nothing we can gain by checking a test case if an equivalent one was tested. Thus, this level of redundancy may be used for ensuring the quality of grammar changes prior to their incorporation into the production version of the grammar. The level of similarity contains much less test cases, and does not test any (systematic) interaction between disjuncts. Thus, it may be used during development as a quick rule-of-thumb procedure detecting serious errors only. Coming back to example 3 in Fig.6 , building equivalence classes also helps in detecting grammar errors: If, according to the grammar, two cases are equivalent which actually aren't, the grammar is incorrect. Example 3 shows two test cases which are syntactically different in that the first contains the adverbial oft, while the other doesn't. 
The reason why they are equivalent is an incorrect rule that assigns an incorrect reading to the second test case, where the infinitival particle \"zu\" functions as an adverbial.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 6",
"ref_id": null
},
{
"start": 1541,
"end": 1546,
"text": "Fig.6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Testsuite Economy",
"sec_num": "3.2"
},
{
"text": "To control overgeneration, appropriately marked ungrammatical sentences are important in every testsuite. Instrumentation as proposed here only looks at successful parses, but can still be applied in this context: If an ungrammatical test case receives an analysis, instrumentation informs us about the disjuncts used in the incorrect analysis. One (or more) of these disjuncts must be incorrect, or the sentence would not have received a solution. We exploit this information by accumulation across the entire test suite, looking for disjuncts that appear in unusually high proportion in parseable ungrammatical test cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negative Test Cases",
"sec_num": "4"
},
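The accumulation step described here can be sketched as a ranking over disjuncts. The sketch is illustrative (function name, data shapes, and the scoring ratio are assumptions); the idea is simply to surface disjuncts whose occurrences come disproportionately from parseable ungrammatical test cases.

```python
from collections import Counter

def suspicious_disjuncts(grammatical_usages, ungrammatical_usages, top=5):
    """Each argument: one set of disjunct ids per successfully parsed test
    case. Rank disjuncts by the fraction of their occurrences that stem
    from parseable *ungrammatical* test cases (likely overgeneration)."""
    total, bad = Counter(), Counter()
    for used in grammatical_usages:
        total.update(used)
    for used in ungrammatical_usages:
        total.update(used)   # ungrammatical parses also count toward total
        bad.update(used)
    score = {d: bad[d] / total[d] for d in bad}
    return sorted(score, key=score.get, reverse=True)[:top]
```

A disjunct scoring near 1 appears almost exclusively in analyses of ungrammatical input and is a prime candidate for inspection.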
{
"text": "In this manner, six grammar disjuncts are singled out by the parseable ungrammatical test cases in the TSNLP testsuite. The most prominent disjunct appears in 26 sentences (listed in Fig.7) , of which group 1 is really grammatical and the rest fall into two groups: A partial VP with object NP, interpreted as an imperative sentence (group 2), and a weird interaction with the tokenizer incorrectly handling capitalization (group 3).",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 189,
"text": "Fig.7)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Negative Test Cases",
"sec_num": "4"
},
{
"text": "Far from being conclusive, the similarity of these sentences derived from a suspicious grammar disjunct, and the clear relation of the sentences to only two exactly specifiable grammar errors make it plausible that this approach is very promising in reducing overgeneration. Although there are a number of efforts to construct reusable large-coverage testsuites, none has to my knowledge explored how existing grammars could be used for this purpose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negative Test Cases",
"sec_num": "4"
},
{
"text": "Starting with (Flickinger et al., 1987) , testsuites have been drawn up from a linguistic viewpoint, \"in-]ormed by [the] study of linguistics and [reflecting] the grammatical issues that linguists have concerned themselves with\" (Flickinger et al., 1987, , p.4) . Although the question is not explicitly addressed in (Balkan et al., 1994) , all the testsuites reviewed there also seem to follow the same methodology. The TSNLP project (Lehmann and Oepen, 1996) and its successor DiET (Netter et al., 1998) , which built large multilingual testsuites, likewise fall into this category. The use of corpora (with various levels of annotation) has been studied, but even here the recommendations are that much manual work is required to turn corpus examples into test cases (e.g., (Balkan and Fouvry, 1995) ). The reason given is that corpus sentences neither contain linguistic phenomena in isolation, nor do they contain systematic variation. Corpora thus are used only as an inspiration. (Oepen and Flickinger, 1998) stress the interdependence between application and testsuite, but don't comment on the relation between grammar and testsuite.",
"cite_spans": [
{
"start": 14,
"end": 39,
"text": "(Flickinger et al., 1987)",
"ref_id": "BIBREF3"
},
{
"start": 115,
"end": 158,
"text": "[the] study of linguistics and [reflecting]",
"ref_id": null
},
{
"start": 229,
"end": 261,
"text": "(Flickinger et al., 1987, , p.4)",
"ref_id": null
},
{
"start": 317,
"end": 338,
"text": "(Balkan et al., 1994)",
"ref_id": "BIBREF1"
},
{
"start": 435,
"end": 460,
"text": "(Lehmann and Oepen, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 484,
"end": 505,
"text": "(Netter et al., 1998)",
"ref_id": "BIBREF10"
},
{
"start": 777,
"end": 802,
"text": "(Balkan and Fouvry, 1995)",
"ref_id": "BIBREF0"
},
{
"start": 987,
"end": 1015,
"text": "(Oepen and Flickinger, 1998)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Negative Test Cases",
"sec_num": "4"
},
{
"text": "The approach presented tries to make available the linguistic knowledge that went into the grammar for development of testsuites. Grammar development and testsuite compilation are seen as complementary and interacting processes, not as isolated modules. We have seen that even large testsuites cover only a fraction of existing large-coverage grammars, and presented evidence that there is a considerable amount of redundancy within existing testsuites. To empirically validate that the procedures outlined above improve grammar and testsuite, careful grammar development is required. Based on the information derived from parsing with instrumented grammars, the changes and their effects need to be evaluated. In addition to this empirical work, instrumentation can be applied to other areas in Grammar Engineering, e.g., to detect sources of spurious ambiguities, to select sample sentences relying on a disjunct for documentation, or to assist in the construction of additional test cases. Methodological work is also required for the definition of a practical and intuitive criterion to measure limited interaction coverage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Each existing grammar development environment undoubtely offers at least some basic tools for comparing the grammar's coverage with a testsuite. Regrettably, these tools are seldomly presented publicly (which accounts for the short list of such references). It is my belief that the thorough discussion of such infrastructure items (tools and methods) is of more immediate importance to the quality of the lingware than the discussion of open linguistic problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "1The work reported here was conducted during my time at the Institut fiir Maschinelle Sprachverarbeitung (IMS), Stuttgart University, Germany.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Corpus-based test suite generation. TSNLP-WP 2.2",
"authors": [
{
"first": "L",
"middle": [],
"last": "Balkan",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Fouvry",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Balkan and F. Fouvry. 1995. Corpus-based test suite generation. TSNLP-WP 2.2, University of Essex.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Test Suite Design Annotation Scheme",
"authors": [
{
"first": "L",
"middle": [],
"last": "Balkan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Meijer",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Arnold",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Estival",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Falkedal",
"suffix": ""
}
],
"year": 1994,
"venue": "TSNLP-WP2",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Balkan, S. Meijer, D. Arnold, D. Estival, and K. Falkedal. 1994. Test Suite Design Annotation Scheme. TSNLP-WP2.2, University of Essex.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Developing language resources and applications with Geppetto",
"authors": [
{
"first": "F",
"middle": [],
"last": "Ciravegna",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lavelli",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Petrelli",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pianesi",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. 1st Int'l Conf. on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "619--625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Ciravegna, A. Lavelli, D. Petrelli, and F. Pianesi. 1998. Developing language resources and applications with Geppetto. In Proc. 1st Int'l Conf. on Language Resources and Evaluation, pages 619-625. Granada/Spain, 28-30 May 1998.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Toward Evaluation of NLP Systems",
"authors": [
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nerbonne",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sag",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wasow",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Flickinger, J. Nerbonne, I. Sag, and T. Wasow. 1987. Toward Evaluation of NLP Systems. Hewlett-Packard Laboratories, Palo Alto/CA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Optimality theory style constraint ranking in large-scale LFG grammars",
"authors": [
{
"first": "A",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "T",
"middle": [
"H"
],
"last": "King",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Maxwell",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the LFG98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Frank, T.H. King, J. Kuhn, and J. Maxwell. 1998. Optimality theory style constraint ranking in large-scale LFG grammars. In Proc. of the LFG98",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The complete guide to software testing",
"authors": [
{
"first": "W",
"middle": [
"C"
],
"last": "Hetzel",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W.C. Hetzel. 1988. The complete guide to software testing. QED Information Sciences, Inc. Wellesley/MA 02181.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Lexical-functional grammar: A formal system for grammatical representation",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Kaplan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bresnan",
"suffix": ""
}
],
"year": 1982,
"venue": "The Mental Representation of Grammatical Relations",
"volume": "",
"issue": "",
"pages": "173--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.M. Kaplan and J. Bresnan. 1982. Lexical-functional grammar: A formal system for grammatical representation. In J. Bresnan and R.M. Kaplan, editors, The Mental Representation of Grammatical Relations, pages 173-281. Cambridge, MA: MIT Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "TSNLP - Test Suites for Natural Language Processing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. 16th Int'l Conf. on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "711--716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Lehmann and S. Oepen. 1996. TSNLP - Test Suites for Natural Language Processing. In Proc. 16th Int'l Conf. on Computational Linguistics, pages 711-716. Copenhagen/DK.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Modultest und Modulverifikation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Liggesmeyer",
"suffix": ""
}
],
"year": 1990,
"venue": "Angewandte Informatik",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Liggesmeyer. 1990. Modultest und Modulverifikation. Angewandte Informatik 4. Mannheim: BI Wissenschaftsverlag.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "DiET - diagnostic and evaluation tools for NLP applications",
"authors": [
{
"first": "K",
"middle": [],
"last": "Netter",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Armstrong",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kiss",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lehman",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. 1st",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Netter, S. Armstrong, T. Kiss, J. Klein, and S. Lehman. 1998. DiET - diagnostic and evaluation tools for NLP applications. In Proc. 1st",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "on Language Resources and Evaluation",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "573--579",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Int'l Conf. on Language Resources and Evaluation, pages 573-579. Granada/Spain, 28-30 May 1998.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Towards systematic grammar profiling: Test suite technology 10 years after",
"authors": [
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "D",
"middle": [
"P"
],
"last": "Flickinger",
"suffix": ""
}
],
"year": 1998,
"venue": "Journal of Computer Speech and Language",
"volume": "12",
"issue": "",
"pages": "411--435",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Oepen and D.P. Flickinger. 1998. Towards systematic grammar profiling: Test suite technology 10 years after. Journal of Computer Speech and Language, 12:411-435.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Sample Rule the choice of linguistic or computational paradigm."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "have used a special feature of our grammar development environment: Following the LFG spirit of different representation levels associated with each solution (so-called projections), it provides for a multiset of symbols associated with the complete solution, where structural embedding plays no role (so-called optimality projection; see"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 3: Appropriate untested disjunct. Figure 4: Inappropriate disjunct (ADVP rule listing with DISJUNCT-021, DISJUNCT-022, and DISJUNCT-023 annotated as unused disjuncts)."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 7: Sentences relying on a suspicious disjunct"
},
"TABREF0": {
"num": null,
"html": null,
"content": "<table><tr><td>, which contains measurements for a test run containing only the parseable test cases, one without equivalent test cases (for every set of equivalent test cases, one was arbitrar-</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table><tr><td/><td>Dieselbe schlafen .</td></tr><tr><td>Die schlafen.</td><td>Das schlafen .</td></tr><tr><td/><td>Eines schlafen.</td></tr><tr><td>3 Man schlafen .</td><td>Jede schlafen .</td></tr><tr><td>Dieser schlafen .</td><td>Dieses schlafen.</td></tr><tr><td>Ich schlafen .</td><td>Eine schlafen .</td></tr><tr><td>Der schlafen.</td><td>Meins schlafen.</td></tr><tr><td>Jeder schlafen.</td><td>Dasjenige schlafen.</td></tr><tr><td>Derjenige schlafen .</td><td>Jedes schlafen .</td></tr><tr><td>Jener schlafen .</td><td>Diejenige schlafen.</td></tr><tr><td>Keiner schlafen .</td><td>Jenes schlafen.</td></tr><tr><td>Derselbe schlafen.</td><td>Keines schlafen .</td></tr><tr><td>Er schlafen.</td><td>Dasselbe schlafen.</td></tr><tr><td>Irgendjemand schlafen .</td><td/></tr></table>",
"type_str": "table",
"text": "Der Test fällt leicht."
}
}
}
} |