{
"paper_id": "W01-0703",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:01:49.014759Z"
},
"title": "Learning class-to-class selectional preferences",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": "",
"affiliation": {
"laboratory": "IXA NLP Group University of the Basque Country",
"institution": "",
"location": {
"addrLine": "649 pk. 20.080 Donostia",
"country": "Spain"
}
},
"email": ""
},
{
"first": "David",
"middle": [],
"last": "Martinez",
"suffix": "",
"affiliation": {
"laboratory": "IXA NLP Group University of the Basque Country",
"institution": "",
"location": {
"addrLine": "649 pk. 20.080 Donostia",
"country": "Spain"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Selectional preference learning methods have usually focused on wordto-class relations, e.g., a verb selects as its subject a given nominal class. This papers extends previous statistical models to class-to-class preferences, and presents a model that learns selectional preferences for classes of verbs. The motivation is twofold: different senses of a verb may have different preferences, and some classes of verbs can share preferences. The model is tested on a word sense disambiguation task which uses subject-verb and object-verb relationships extracted from a small sense-disambiguated corpus.",
"pdf_parse": {
"paper_id": "W01-0703",
"_pdf_hash": "",
"abstract": [
{
"text": "Selectional preference learning methods have usually focused on wordto-class relations, e.g., a verb selects as its subject a given nominal class. This papers extends previous statistical models to class-to-class preferences, and presents a model that learns selectional preferences for classes of verbs. The motivation is twofold: different senses of a verb may have different preferences, and some classes of verbs can share preferences. The model is tested on a word sense disambiguation task which uses subject-verb and object-verb relationships extracted from a small sense-disambiguated corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Previous literature on selectional preference has usually learned preferences for words in the form of classes, e.g., the object of eat is an edible entity. This paper extends previous statistical models to classes of verbs, yielding a relation between classes in a hierarchy, as opposed to a relation between a word and a class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The model is trained using subject-verb and object-verb associations extracted from Semcor, a corpus (Miller et al., 1993) tagged with WordNet word-senses (Miller et al., 1990) . The syntactic relations were extracted using the Minipar parser (Lin, 1993) . A peculiarity of this exercise is the use of a small sensedisambiguated corpus, in contrast to using a large corpus of ambiguous words. We think that two factors can help alleviate the scarcity of data: the fact that using disambiguated words provides purer data, and the ability to use classes of verbs in the preferences. Nevertheless, the approach can be easily extended to larger, nondisambiguated corpora.",
"cite_spans": [
{
"start": 101,
"end": 122,
"text": "(Miller et al., 1993)",
"ref_id": "BIBREF4"
},
{
"start": 155,
"end": 176,
"text": "(Miller et al., 1990)",
"ref_id": null
},
{
"start": 243,
"end": 254,
"text": "(Lin, 1993)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We have defined a word sense disambiguation exercise in order to evaluate the extracted preferences, using a sample of words and a sample of documents, both from Semcor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Following this short introduction, section 2 reviews selectional restriction acquisition. Section 3 explains our approach, which is formalized in sections 4 and 5. Next, section 6 shows the results on the WSD experiment. Some of the acquired preferences are analysed in section 7. Finally, some conclusions are drawn and future work is outlined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Selectional preferences try to capture the fact that linguistic elements prefer arguments of a certain semantic class, e.g. a verb like 'eat' prefers as object edible things, and as subject animate entities, as in, (1) \"She was eating an apple\". Selectional preferences get more complex than it might seem: (2) \"The acid ate the metal\", (3) \"This car eats a lot of gas\", (4) \"We ate our savings\", etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional preference learning",
"sec_num": "2"
},
{
"text": "Corpus-based approaches for selectional preference learning extract a number of (e.g. verb/subject) relations from large corpora and use an algorithm to generalize from the set of nouns for each verb separately. Usually, nouns are generalized using classes (concepts) from a lexical knowledge base (e.g. WordNet). Resnik (1992 Resnik ( , 1997 defines an informationtheoretic measure of the association between a verb and nominal WordNet classes: selectional association. He uses verb-argument pairs from Brown. Evaluation is performed applying intuition and WSD. Our measure follows in part from his formalization. Abe and Li (1995) follow a similar approach, but they employ a different informationtheoretic measure (the minimum description length principle) to select the set of concepts in a hierarchy that generalize best the selectional preferences for a verb. The argument pairs are extracted from the WSJ corpus, and evaluation is performed using intuition and PP-attachment resolution. Stetina et al. (1998) extract word-arg-word triples for all possible combinations, and use a measure of \"relational probability\" based on frequency and similarity. They provide an algorithm to disambiguate all words in a sentence. It is directly applied to WSD with good results.",
"cite_spans": [
{
"start": 314,
"end": 326,
"text": "Resnik (1992",
"ref_id": "BIBREF5"
},
{
"start": 327,
"end": 342,
"text": "Resnik ( , 1997",
"ref_id": "BIBREF6"
},
{
"start": 615,
"end": 632,
"text": "Abe and Li (1995)",
"ref_id": null
},
{
"start": 994,
"end": 1015,
"text": "Stetina et al. (1998)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional preference learning",
"sec_num": "2"
},
{
"text": "The model explored in this paper emerges as a result of the following observations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "3"
},
{
"text": "\u2022 Distinguishing verb senses can be useful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "3"
},
{
"text": "The examples for eat above are taken from WordNet, and each corresponds to a different word sense 1 : example (1) is from the \"take in solid food\" sense of eat, (2) from the \"cause to rust\" sense, and examples (3) and (4) from the \"use up\" sense. \u2022 If the word senses of a set of verbs are similar (e.g. word senses of ingestion verbs like eat, devour, ingest, etc.) they can have related selectional preferences, and we can generalize and say that a class of verbs has a particular selectional preference. Our formalization thus distinguishes among verb senses, that is, we treat each verb sense as a 1 1 A note is in order to introduce the terminology used in the paper. We use concept and class indistinguishably, and they refer to the so-called synsets in WordNet. Concepts in WordNet are represented as sets of synonyms, e.g. <food, nutrient>. A word sense in WordNet is a word-concept pairing, e.g. given the concepts a=<chicken, poulet, volaille> and b=<wimp, chicken, crybaby> we can say that chicken has at least two word senses, the pair chickena and the pair chicken-b. In fact the former is sense 1 of chicken, and the later is sense 3 of chicken. For the sake of simplicity we also talk about <chicken, poulet, volaille> being a word sense of chicken. different unit that has a particular selectional preference. From the selectional preferences of single verb word senses, we also infer selectional preferences for classes of verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "3"
},
{
"text": "Contrary to other methods (e.g. Li and Abe's), we don't try to find the classes which generalize best the selectional preferences. All possibilities, even the very low probability ones, are stored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "3"
},
{
"text": "The method stands as follows: we collect [noun-word-sense relation verb-word-sense] triples from Semcor, where the relation is either subject or object. As word senses refer to concepts, we also collect the triple for each possible combination of concepts that subsume the word senses in the triple. Direct frequencies and estimates of frequencies for classes are then used to compute probabilities for the triples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "3"
},
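{
"text": "A minimal sketch of this collection step (our illustration, not the authors' code), assuming NLTK's WordNet interface; the names collect, ancestors and triple_freq are ours:\n\nfrom collections import Counter\nfrom itertools import product\nfrom nltk.corpus import wordnet as wn\n\ntriple_freq = Counter()\n\ndef ancestors(synset):\n    # the synset itself plus all of its hypernym ancestors\n    return {synset} | set(synset.closure(lambda s: s.hypernyms()))\n\ndef collect(noun_synset, rel, verb_synset):\n    # count the disambiguated triple under every combination of\n    # subsuming noun and verb concepts, as described above\n    for cn, cv in product(ancestors(noun_synset), ancestors(verb_synset)):\n        triple_freq[(cn, rel, cv)] += 1\n\n# e.g. 'She was eating an apple': apple.n.01 as object of eat.v.01\ncollect(wn.synset('apple.n.01'), 'obj', wn.synset('eat.v.01'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "3"
},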
{
"text": "These probabilities could be used to disambiguate either nouns, verbs or both at the same time. For the time being, we have chosen to disambiguate nouns only, and therefore we compute the probability for a nominal concept, given that it is the subject/object of a particular verb. Note that when disambiguating we ignore the particular sense in which the governing verb occurs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "3"
},
{
"text": "As mentioned in the previous sections we are interested in modelling the probability of a nominal concept given that it is the subject/object of a particular verb:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalization",
"sec_num": "4"
},
{
"text": ") | ( v rel cn P i (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalization",
"sec_num": "4"
},
{
"text": "Before providing the formalization for our approach we present a model based on words and a model based on nominal-classes. Our class-to-class model is an extension of the second 2 . The estimation of the frequencies of classes are presented in the following section. 1 2 Notation: v stands for a verb, cn (cv) stand for nominal (verbal) concept, cn i (cv i ) stands for the concept linked to the i-th sense of the given noun (verb), rel could be any grammatical relation (in our case object or subject), \u2286 stands for the subsumption relation, fr stands for frequency and r f\u02c6.for the estimation of the frequencies of classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalization",
"sec_num": "4"
},
{
"text": "At this stage we do not use information of class subsumption. The probability of the first sense of chicken being an object of eat depends on how often does the concept linked to chicken 1 appear as object of the word eat, divided by the number of occurrences of eat with an object.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-to-word model: eat chicken i",
"sec_num": "4.1"
},
{
"text": ") ( ) ( ) | ( v rel fr v rel cn fr v rel cn P i i = (2) Note that instead of ) | ( v rel sense P i we use ) | ( v rel cn P i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-to-word model: eat chicken i",
"sec_num": "4.1"
},
{
"text": ", as we count occurrences of concepts rather than word senses. This means that synonyms also count, e.g. poulet as synonyms of the first sense of chicken.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-to-word model: eat chicken i",
"sec_num": "4.1"
},
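{
"text": "A minimal sketch of the word-to-word estimate of formula (2); pair_freq and rel_freq are hypothetical counters over the pairs extracted from Semcor, not the paper's implementation:\n\nfrom collections import Counter\n\npair_freq = Counter()  # pair_freq[(cn, rel, v)], e.g. (chicken_1, 'obj', 'eat')\nrel_freq = Counter()   # rel_freq[(rel, v)], e.g. ('obj', 'eat')\n\ndef p_word_to_word(cn, rel, v):\n    # P(cn_i | rel v) = fr(cn_i rel v) / fr(rel v)\n    if rel_freq[(rel, v)] == 0:\n        return 0.0\n    return pair_freq[(cn, rel, v)] / rel_freq[(rel, v)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-to-word model: eat chicken i",
"sec_num": "4.1"
},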
{
"text": "The probability of eat chicken 1 depends on the probabilities of the concepts subsumed by and subsuming chicken 1 being objects of eat. For instance, if chicken 1 never appears as an object of eat, but other word senses under <food, nutrient> do, the probability of chicken 1 will not be 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "word-to-class model: eat <food, nutrient>",
"sec_num": "4.2"
},
{
"text": "Formula 3shows that for all concepts subsuming cn i the probability of cn i given the more general concept times the probability of the more general concept being a subject/object of the verb is added. The first probability is estimated dividing the class frequencies of cn i with the class frequencies of the more general concept. The second probability is estimated as in 4.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "word-to-class model: eat <food, nutrient>",
"sec_num": "4.2"
},
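{
"text": "A sketch of formula (3); est_freq, est_freq_pair and est_rel_freq are hypothetical helpers standing for the class-frequency estimates $\\hat{fr}(cn)$, $\\hat{fr}(cn_i, cn)$ and $\\hat{fr}(cn\\ rel\\ v)$ of section 5, and rel_freq holds the direct counts fr(rel v) from the previous sketch:\n\ndef ancestors(synset):\n    # the synset itself plus all of its hypernym ancestors\n    return {synset} | set(synset.closure(lambda s: s.hypernyms()))\n\ndef p_word_to_class(cn_i, rel, v):\n    # formula (3): sum over all concepts cn subsuming cn_i of\n    # P(cn_i | cn) * P(cn | rel v)\n    if rel_freq[(rel, v)] == 0:\n        return 0.0\n    total = 0.0\n    for cn in ancestors(cn_i):\n        if est_freq(cn) == 0:\n            continue\n        total += ((est_freq_pair(cn_i, cn) / est_freq(cn))\n                  * (est_rel_freq(cn, rel, v) / rel_freq[(rel, v)]))\n    return total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "word-to-class model: eat <food, nutrient>",
"sec_num": "4.2"
},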
{
"text": "The probability of eat chicken 1 depends on the probabilities of all concepts above chicken 1 being objects of all concepts above the possible senses of eat. For instance, if devour never appeared on the training corpus, the model could infer its selectional preference from that of its ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "class-to-class model: <ingest, take in, \u2026> <food, nutrient>",
"sec_num": "4.3"
},
{
"text": "\u2211 \u2211 \u2287 \u2287 \u00d7 = \u00d7 = i cn cn i cn cn v rel fr v rel cn r f cn r f cn cn r f v rel cn P cn cn P v rel cn P i i i ) ( ) ( ) ( ) , ( ) | ( ) | ( ) | ( (3) \u2211 \u2211 \u2211 \u2211 \u2287 \u2287 \u2287 \u2287 \u00d7 \u00d7 = \u00d7 \u00d7 = i cn cn cv cv v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "class-to-class model: <ingest, take in, \u2026> <food, nutrient>",
"sec_num": "4.3"
},
{
"text": "v rel cn P max max ) ( ) ( ) ( ) , ( ) ( ) , ( ) | ( ) | ( ) | ( ) | ( (4) \u2211 \u2286 \u00d7 = cn i cn i i cn fr cn classes cn r f ) ( ) ( 1 ) ( (5) \uf8f4 \uf8f3 \uf8f4 \uf8f2 \uf8f1 \u2286 \u00d7 = \u2211 \u2286 otherwise cn cn if cn fr cn classes cn cn r f i i j j j i cn cn 0 ) ( ) ( 1 ) , ( (6) \u2211 \u2286 \u00d7 = cn i cn v rel cn fr cn classes v rel cn r f i i ) ( ) ( 1 ) ( (7) \u2211 \u2211 \u2286 \u2286 \u00d7 \u00d7 = cn i cn cn i cv i i i i cv rel cn fr cv classes cn classes cv rel cn r f ) ( ) ( 1 ) ( 1 ) ( (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "class-to-class model: <ingest, take in, \u2026> <food, nutrient>",
"sec_num": "4.3"
},
{
"text": "superclass <ingest, take in, ...>. As the verb can be polysemous, the sense with maximum probability is selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "class-to-class model: <ingest, take in, \u2026> <food, nutrient>",
"sec_num": "4.3"
},
{
"text": "Formula 4shows that the maximum probability for the possible senses (cv j ) of the verb is taken. For each possible verb concept (cv) and noun concept (cn) subsuming the target concepts (cn i ,cv j ), the probability of the target concept given the subsuming concept (this is done twice, once for the verb, once for the noun) times the probability the nominal concept being subject/object of the verbal concept is added.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "class-to-class model: <ingest, take in, \u2026> <food, nutrient>",
"sec_num": "4.3"
},
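{
"text": "A sketch of formula (4), reusing ancestors, est_freq and est_freq_pair from the previous sketches; est_rel_freq_cc and est_rel_marginal are hypothetical helpers for $\\hat{fr}(cn\\ rel\\ cv)$ and $\\hat{fr}(rel\\ cv)$:\n\ndef p_class_to_class(cn_i, rel, verb_sense_concepts):\n    # formula (4): maximum over the verb senses cv_j of a double sum\n    # over all noun classes cn and verb classes cv subsuming the targets\n    best = 0.0\n    for cv_j in verb_sense_concepts:\n        score = 0.0\n        for cn in ancestors(cn_i):\n            for cv in ancestors(cv_j):\n                if est_freq(cn) == 0 or est_freq(cv) == 0:\n                    continue\n                marginal = est_rel_marginal(rel, cv)  # fr^(rel cv)\n                if marginal == 0:\n                    continue\n                score += ((est_freq_pair(cn_i, cn) / est_freq(cn))\n                          * (est_freq_pair(cv_j, cv) / est_freq(cv))\n                          * (est_rel_freq_cc(cn, rel, cv) / marginal))\n        best = max(best, score)\n    return best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "class-to-class model: <ingest, take in, \u2026> <food, nutrient>",
"sec_num": "4.3"
},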
{
"text": "Frequencies for classes can be counted directly from the corpus when the class is linked to a word sense that actually appears in the corpus, written as fr(cn i ). Otherwise they have to be estimated using the direct counts for all subsumed concepts, written as ) (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of class frequencies",
"sec_num": "5"
},
{
"text": ". Formula (5) shows that all the counts for the subsumed concepts (cn i ) are added, but divided by the number of classes for which c i is a subclass (that is, all ancestors in the hierarchy). This is necessary to guarantee the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of class frequencies",
"sec_num": "5"
},
{
"text": "\u2211 \u2287 i cn cn cn cn P i ) | ( = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of class frequencies",
"sec_num": "5"
},
{
"text": "Formula (6) shows the estimated frequency of a concept given another concept. In the case of the first concept subsuming the second it is 0, otherwise the frequency is estimated as in (5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of class frequencies",
"sec_num": "5"
},
{
"text": "Formula (7) estimates the counts for [nominal-concept relation verb] triples for all possible nominal-concepts, which is based on the counts for the triples that actually occur in the corpus. All the counts for subsumed concepts are added, divided by the number of classes in order to guarantee the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of class frequencies",
"sec_num": "5"
},
{
"text": "\u2211 cn v subj cn P ) | ( =1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of class frequencies",
"sec_num": "5"
},
{
"text": "Finally, formula (8) extends formula (7) to [nominal-concept relation verbal-concept] in a similar way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of class frequencies",
"sec_num": "5"
},
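{
"text": "A sketch of the estimates in formulas (5) and (7), reusing the ancestors helper from the first sketch; word_freq and triple_word_freq are hypothetical stores of the direct Semcor counts. The 1/classes(cn_i) weight is what guarantees that the subsumption probabilities sum to one:\n\nfrom collections import Counter\n\nword_freq = Counter()         # direct counts fr(cn_i) for concepts seen in Semcor\ntriple_word_freq = Counter()  # direct counts fr(cn_i rel v)\n\ndef num_classes(cn_i):\n    # number of classes of which cn_i is a subclass, itself included\n    return len(ancestors(cn_i))\n\ndef est_freq(cn):\n    # fr^(cn), formula (5): counts of all subsumed concepts, each\n    # weighted by the inverse of its number of classes\n    return sum(f / num_classes(cn_i)\n               for cn_i, f in word_freq.items() if cn in ancestors(cn_i))\n\ndef est_rel_freq(cn, rel, v):\n    # fr^(cn rel v), formula (7), with the same 1/classes(cn_i) weighting\n    return sum(f / num_classes(cn_i)\n               for (cn_i, r, w), f in triple_word_freq.items()\n               if r == rel and w == v and cn in ancestors(cn_i))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of class frequencies",
"sec_num": "5"
},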
{
"text": "For training we used the sense-disambiguated part of Brown, Semcor, which comprises around 250.000 words tagged with WordNet word senses. The parser we used is Minipar. For this current experiment we only extracted verbobject and verb-subject pairs. Overall 14.471 verb-object pairs and 12.242 verb-subject pairs were extracted. For the sake of efficiency, we stored all possible class-to-class relations and class frequencies at this point, as defined in formulas (5) to (8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and testing on a WSD exercise",
"sec_num": "6"
},
{
"text": "The acquired data was tested on a WSD exercise. The goal was to disambiguate all nouns occurring as subjects and objects, but it could be also used to disambiguate verbs. The WSD algorithm just gets the frequencies and computes the probabilities as they are needed. The word sense with the highest probability is chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and testing on a WSD exercise",
"sec_num": "6"
},
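{
"text": "A sketch of the decision rule, assuming p_class_to_class from the earlier sketch and NLTK's WordNet (the paper does not give its implementation): the concept attached to each sense of the noun is scored and the highest-probability one is kept, ignoring the particular sense of the governing verb as explained in section 3:\n\nfrom nltk.corpus import wordnet as wn\n\ndef disambiguate(noun, rel, verb):\n    candidates = wn.synsets(noun, pos='n')           # one concept per noun sense\n    verb_sense_concepts = wn.synsets(verb, pos='v')\n    return max(candidates,\n               key=lambda cn_i: p_class_to_class(cn_i, rel, verb_sense_concepts))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and testing on a WSD exercise",
"sec_num": "6"
},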
{
"text": "Two experiments were performed: on the lexical sample we selected a set of 8 nouns at random 3 and applied 10fold crossvalidation to make use of all available examples. In the case of whole documents, they were withdrawn from the training corpus and tested in turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and testing on a WSD exercise",
"sec_num": "6"
},
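{
"text": "A small sketch of the lexical-sample protocol (our illustration, not the paper's code), assuming examples is the list of occurrences of one target noun; each example is tested exactly once across the ten folds:\n\ndef ten_fold(examples):\n    for k in range(10):\n        test = [e for i, e in enumerate(examples) if i % 10 == k]\n        train = [e for i, e in enumerate(examples) if i % 10 != k]\n        yield train, test",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and testing on a WSD exercise",
"sec_num": "6"
},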
{
"text": "3 This set was also used on a previous paper (Agirre & Martinez, 2000 Table 1 shows the data for the set of nouns. Note that only 19% (15%) of the occurrences of the nouns are objects (subjects) of any verb. Table 2 shows the average results using subject and object relations for each possible formalization. Each column shows respectively, the precision, the coverage over the occurrences with the given relation, and the recall. Random and most frequent baselines are also shown. Word-to-word gets the highest precision of all three, but it can only be applied on a few instances. Word-to-class gets slightly better precision than class-to-class, but class-to-class is near complete coverage and thus gets the best recall of all three. All are well above the random baseline, but slightly below the most frequent sense.",
"cite_spans": [
{
"start": 45,
"end": 69,
"text": "(Agirre & Martinez, 2000",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 1",
"ref_id": null
},
{
"start": 208,
"end": 215,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "1",
"sec_num": null
},
{
"text": "On the all-nouns experiment, we disambiguated the nouns appearing in four files extracted from Semcor. We observed that not many nouns were related to a verb as object or subject (e.g. in the file br-a01 only 40% (16%) of the polisemous nouns were tagged as object (subject)). Table 3 illustrates the results on this task. Again, word-to-word obtains the best precision in all cases, but because of the lack of data the recall is low. Class-to-class attains the best recall.",
"cite_spans": [],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "1",
"sec_num": null
},
{
"text": "We think that given the small corpus available, the results are good. Note that there is no smoothing or cut-off value involved, and some decisions are taken with very little points of data. Sure enough both smoothing and cut-off values will allow to improve the precision. On the contrary, literature has shown that the most frequent sense baseline needs less training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1",
"sec_num": null
},
{
"text": "In order to analyze the acquired selectional preferences, we wanted a word that did not occur too often and which had clearly distinguishable senses. The goal is to study the preferences that were applied in the disambiguation for all occurrences, and check what is the difference among each of the models. The selected word was church, which has three senses in WordNet, and occurs 19 times. Figure 1 shows the three word senses and the corresponding subsuming concepts. Table 4 shows the results of the disambiguation algorithm for church. In the word-to-word model, the model is unable to tag any of the examples 4 (all the verbs related to \"church\" were different). For church as object, both class-to-class and word-to-class have similar recall, but word-to-class has better precision. Notice that the majority of the examples with church as object were not tagged with the most frequent sense in Semcor, and therefore the MFS precision is remarkably low (21%). For church as subject, the class-to-class model has both better precision and coverage.",
"cite_spans": [],
"ref_spans": [
{
"start": 393,
"end": 402,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 473,
"end": 480,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of the acquired selectional preferences",
"sec_num": "7"
},
{
"text": "We will now study in more detail each of the examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the acquired selectional preferences",
"sec_num": "7"
},
{
"text": "There were 19 examples with church as object (15 tagged in Semcor with sense 2 and 4 with sense 1). Using the word-to-class model, 12 were tagged correctly, 5 incorrectly and 2 had not enough data to answer. In the class-to-class model 12 examples were tagged correctly and 7 incorrectly. Therefore there was no gain in recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Church as object",
"sec_num": "7.1"
},
{
"text": "First, we will analyze the results of the word-to-class model. From the 12 hits, 10 corresponded to sense 2 and the other 2 to sense 1. Here we show the 12 verbs and the superconcept of the senses of church that gets the highest selectional preference probability, and thus selects the winning sense, in this case, correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Church as object",
"sec_num": "7.1"
},
{
"text": "\u2022 Tagged with sense 2: look: <building, edifice> have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Church as object",
"sec_num": "7.1"
},
{
"text": "<artifact, artefact> \u2022 Tagged with sense 1 strengthen: <organization, organisation> turn_to: <organization, organisation> The five examples where the model failed revealed different types of errors. We will check each of the verbs in turn. 1. Attend (Semcor 2, word-to-class 1) 5 : We quote the whole sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Church as object",
"sec_num": "7.1"
},
{
"text": "From many sides come remarks that Protestant churches are badly attended and the large medieval cathedrals look all but empty during services .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Church as object",
"sec_num": "7.1"
},
{
"text": "We think that the correct sense should be 3 ( \"church services\" are attended, not the buildings). In any case, the class that gets the higher weight is <institution, establishment>, pointing to sense 1 of church and beating the more appropriate class <religious ceremony, religious ritual> because of the lack of examples in the training. 2. Join (Semcor 1, word-to-class 2): It seems that this verb should be a good clue for sense 1. But among the few occurrences of join in the training set there were \"join-obj-temple\" and \"join-obj-synagogue\". Both temple and synagogue have do not have organizationrelated concepts in WordNet and they were thus tagged with a concept under <building, 1 5 For each verb we list the sense in Semcor (the correct reference sense) and the sense assigned by the model. 10 8 2 0 .800 1.00 .800 subj word-to-word 10 0 0 10 .000 .000 .000 subj word-to-class 10 4 3 3 .571 .700 .400 subj class-to-class 10 6 4 0 .600 1.00 .600 Table 4 : Results disambiguating the word church.",
"cite_spans": [],
"ref_spans": [
{
"start": 956,
"end": 963,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Church as object",
"sec_num": "7.1"
},
{
"text": "edifice>. This implies that <place of worship, house of prayer, house of God, house of worship> gets most credit and the answer is sense 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Church as object",
"sec_num": "7.1"
},
{
"text": "The scarcity of training examples is very evident here. There are only 2 examples of imprison with an object, one of them wrongly selected by Minipar (imprison-obj-trip) that falls under <act, human action, human activity> and points to sense 3. 4. Empty (Semcor 2, word-to-class 1): The different senses of empty introduce misleading examples. The best credit is given to <group, grouping> (following an sense of empty which is not appropriate here) which selects the sense 1 of church. The correct sense of empty in this context relates with <object, physical object>, and would thus select the correct sense, but does not have enough credit. 5. Advance (Semcor 2, : the misleading senses of \"advance\" and the low number of examples point to sense 3.",
"cite_spans": [
{
"start": 656,
"end": 666,
"text": "(Semcor 2,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Imprison (Semcor 1, word-to-class 3):",
"sec_num": "3."
},
{
"text": "We thus identified 4 sources of error in the word-to-class model: A. Incorrect Semcor tag B. Wrongly extracted verb-object relations C. Scarcity of data D. Misleading verb senses",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Imprison (Semcor 1, word-to-class 3):",
"sec_num": "3."
},
{
"text": "The class-to-class model should help to mitigate the effects of errors type C and D. We would specially hope for the class-to-class model to discard misleading verb senses. We now turn to analyze the results of this model. From the 12 correct examples tagged using word-to-class, we observed that 3 were mistagged using class-to-class. The reason was that the class-to-class introduces new examples from verbs that are superclasses of the target verb, and these introduced noise. For example, we examined the verb turn_to (tagged in Semcor with sense 1): 1. turn-to (Semcor 1, word-to-class 1): there are fewer training examples than in the class-toclass model and they get more credit. The relation \"turn_to-obj-platoon\" gives weight to the class <organization, organisation>. 2. turn-to (Semcor 1, class-to-class 2): the relations \"take_up-obj-position\" and \"call_onobj-esprit_de_corps\" introduce noise and point to the class <artifact, artefact>. As a result, the sense 2 is wrongly selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Imprison (Semcor 1, word-to-class 3):",
"sec_num": "3."
},
{
"text": "From the 5 mistagged examples in class-toclass, only \"empty\" was tagged correctly using classes (in this case the class-to-class model is able to select the correct sense of the verb, discarding the misleading senses of empty): 1. Attend, Join, Advance: they had errors of type A and B (incorrect Semcor tag/ misleading verb-object relations) and we can not expect the \"class-to-class\" model to handle them. 2. Imprison: still has not enough information to make a good choice. 3. Empty (Semcor 2, : new examples associated to the appropriate sense of empty give credit to the classes <place of worship, house of prayer, house of God, house of worship> and <church, church building>. With the weight of these classes the correct sense 2 is correctly chosen.",
"cite_spans": [
{
"start": 486,
"end": 496,
"text": "(Semcor 2,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Imprison (Semcor 1, word-to-class 3):",
"sec_num": "3."
},
{
"text": "Finally, the 2 examples that received no answer in the \"word-to-class\" model were tagged correctly: 1. Flurry (Semcor 2, : the answer is correct although the choice is made with few data. The strongest class is <structure, construction>. 2. Rebuild (Semcor 2, class-to-class 2): the new information points to the appropriate sense.",
"cite_spans": [
{
"start": 110,
"end": 120,
"text": "(Semcor 2,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Imprison (Semcor 1, word-to-class 3):",
"sec_num": "3."
},
{
"text": "The class2class model showed a better behavior with the examples in which church appeared as subject. There were only 10 examples, 8 tagged with sense 1 and 2 with sense 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Church as subject",
"sec_num": "7.2"
},
{
"text": "In this case, the class-to-class model tagged in the same way the examples tagged by the class-to-word model, but it also tagged the 3 occurrences that had not been tagged by the word-to-class model (2 correctly and 1 incorrectly).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Church as subject",
"sec_num": "7.2"
},
{
"text": "We presented a statistical model that extends selectional preference to classes of verbs, yielding a relation between classes in a hierarchy, as opposed to a relation between a word and a class. The motivation is twofold: different senses of a verb may have different preferences, and some classes of verbs can share preferences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "The model is trained using subject-verb and object-verb relations extracted from a sense-disambiguated corpus using Minipar. A peculiarity of this exercise is the use of a small sense-disambiguated corpus, in contrast to using a large corpus of ambiguous words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "Contrary to other methods we do not try to find the classes which generalize best the selectional preferences. All possibilities, even the ones with very low probability, are stored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "Evaluation is based on a word sense disambiguation exercise for a sample of words and a sample of documents from Semcor. The proposed model gets similar results on precision but significantly better recall than the classical word-to-class model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "We plan to train the model on a large untagged corpus, in order to compare the quality of the acquired selectional preferences with those extracted from this small tagged corpora. The model can easily be extended to disambiguate other relations and POS. At present we are also integrating the model on a supervised WSD algorithm that uses decision lists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning Word Association Norms Using Tree Cut Pair Models",
"authors": [
{
"first": "H",
"middle": [],
"last": "Abe",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 13th International Conference on Machine Learning ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abe, H. & Li, N. 1996. Learning Word Association Norms Using Tree Cut Pair Models. In Proceedings of the 13th International Conference on Machine Learning ICML.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Decision lists and automatic word sense disambiguation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Martinez",
"suffix": ""
}
],
"year": 2000,
"venue": "Workshop on Semantic Annotation and Intelligent Content",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agirre E. and Martinez D. 2000. Decision lists and automatic word sense disambiguation. COLING 2000, Workshop on Semantic Annotation and Intelligent Content. Luxembourg.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Principle Based parsing without Overgeneration",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1993,
"venue": "31st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "112--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. 1993. Principle Based parsing without Overgeneration. In 31st Annual Meeting of the Association for Computational Linguistics. Columbus, Ohio. pp 112-120.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Semantic Concordance",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tengi",
"suffix": ""
},
{
"first": "R",
"middle": [
"T"
],
"last": "Bunker",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the ARPA Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G. A., C. Leacock, R. Tengi, and R. T. Bunker. 1993. A Semantic Concordance. Proceedings of the ARPA Workshop on Human Language Technology.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A class-based approach to lexical discovery",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Proceedings of the 30th Annual Meeting of the Association for Computational Linguists",
"volume": "",
"issue": "",
"pages": "327--329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik, P. 1992. A class-based approach to lexical discovery. In Proceedings of the Proceedings of the 30th Annual Meeting of the Association for Computational Linguists., . 327-329.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Selectional Preference and Sense Disambiguation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the ANLP Workshop ``Tagging Text with Lexical Semantics: Why What and How?''",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik,P. 1997. Selectional Preference and Sense Disambiguation.. In Proceedings of the ANLP Workshop ``Tagging Text with Lexical Semantics: Why What and How?''., Washington, DC.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "General Word Sense Disambiguation Method Based on a Full Sentential Context. In Usage of WordNet in Natural Language Processing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Stetina",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stetina J., Kurohashi S., Nagao M. 1998. General Word Sense Disambiguation Method Based on a Full Sentential Context. In Usage of WordNet in Natural Language Processing , Proceedings of COLING-ACL Workshop. Montreal (C Canada).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Word senses and superclasses for church"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td/><td/><td/><td># occ.</td><td># occ.</td></tr><tr><td>Noun</td><td colspan=\"2\"># sens # occ</td><td>as obj</td><td>as subj</td></tr><tr><td>account</td><td/><td>10 27</td><td>8</td><td>3</td></tr><tr><td>age</td><td/><td>5 104</td><td>10</td><td>9</td></tr><tr><td>church</td><td/><td>3 128</td><td>19</td><td>10</td></tr><tr><td>duty</td><td/><td>3 25</td><td>8</td><td>1</td></tr><tr><td>head</td><td/><td>30 179</td><td>58</td><td>16</td></tr><tr><td>interest</td><td/><td>7 140</td><td>31</td><td>13</td></tr><tr><td>member</td><td/><td>5 74</td><td>13</td><td>11</td></tr><tr><td>people</td><td/><td>4 282</td><td>41</td><td>83</td></tr><tr><td>Overall</td><td/><td colspan=\"2\">67 959 188</td><td>146</td></tr><tr><td/><td/><td>Obj</td><td/><td>Subj</td></tr><tr><td colspan=\"2\">Prec.</td><td colspan=\"2\">Cov. Rec Prec.</td><td>Cov. Rec.</td></tr><tr><td>Random</td><td colspan=\"4\">.192 1.00 .192 .192 1.00 .192</td></tr><tr><td>MFS</td><td colspan=\"4\">.690 1.00 .690 .690 1.00 .690</td></tr><tr><td>).</td><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table",
"text": "Data for the selected nouns. Word2word .959 .260 .249 .742 .243 .180 Word2class .669 .867 .580 .562 .834 .468 Class2class .666 .973 .648 .540 .995 .537 Average results for the 8 nouns."
},
"TABREF2": {
"num": null,
"content": "<table><tr><td/><td>Object</td><td/><td/><td/><td colspan=\"2\">Subject</td><td/></tr><tr><td colspan=\"8\">File Rand. MFS word2word word2class class2class Rand. MFS word2word word2class class2class</td></tr><tr><td>br-a01 .286 .746</td><td>.138</td><td>.447</td><td>.542</td><td>.313 .884</td><td>.312</td><td>.640</td><td>.749</td></tr><tr><td>br-b20 .233 .776</td><td>.093</td><td>.418</td><td>.487</td><td>.292 .780</td><td>.354</td><td>.580</td><td>.677</td></tr><tr><td>br-j09 .254 .645</td><td>.071</td><td>.429</td><td>.399</td><td>.256 .761</td><td>.200</td><td>.500</td><td>.499</td></tr><tr><td>br-r05 .269 .639</td><td>.126</td><td>.394</td><td>.577</td><td>.294 .720</td><td>.144</td><td>.601</td><td>.710</td></tr><tr><td/><td/><td/><td/><td>Sense 1</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"4\">church, Christian church, Christianity</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">=> religion, faith</td><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"4\">=> institution, establishment</td></tr><tr><td/><td/><td/><td/><td colspan=\"4\">=> organization, organisation</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">=> social group</td><td/></tr><tr><td/><td/><td/><td/><td/><td colspan=\"3\">=> group, grouping</td></tr><tr><td/><td/><td/><td/><td>Sense 2</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"3\">church, church building</td><td/></tr><tr><td/><td/><td/><td/><td colspan=\"4\">=> place of worship, house of prayer,</td></tr><tr><td/><td/><td/><td/><td colspan=\"4\">house of God, house of worship</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">=> building, edifice</td><td/></tr><tr><td/><td/><td/><td/><td colspan=\"4\">=> structure, construction</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">=> artifact, artefact</td><td/></tr><tr><td/><td/><td/><td/><td/><td colspan=\"3\">=> object, physical object</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"3\">=> entity, something</td></tr><tr><td/><td/><td/><td/><td>Sense 3</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"2\">church service, church</td><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"4\">=> service, religious service, divine service</td></tr><tr><td/><td/><td/><td/><td colspan=\"4\">=> religious ceremony, religious ritual</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">=> ceremony</td><td/><td/></tr><tr><td/><td/><td/><td/><td/><td>=> activity</td><td/><td/></tr><tr><td/><td/><td/><td/><td/><td colspan=\"3\">=> act, human action, human activity</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Average recall for the nouns in the four Semcor files."
}
}
}
} |