{
"paper_id": "I08-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:42:17.465596Z"
},
"title": "Automatic Estimation of Word Significance oriented for Speech-based Information Retrieval",
"authors": [
{
"first": "Takashi",
"middle": [],
"last": "Shichiri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ryukoku University Seta",
"location": {
"postCode": "520-2194",
"settlement": "Otsu",
"country": "Japan"
}
},
"email": "shichiri@nlp.i.ryukoku.ac.jp"
},
{
"first": "Hiroaki",
"middle": [],
"last": "Nanjo",
"suffix": "",
"affiliation": {},
"email": "nanjo@nlp.i.ryukoku.ac.jp"
},
{
"first": "Takehiko",
"middle": [],
"last": "Yoshimi",
"suffix": "",
"affiliation": {},
"email": "yoshimi@nlp.i.ryukoku.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic estimation of word significance oriented for speech-based Information Retrieval (IR) is addressed. Since the significance of words differs in IR, automatic speech recognition (ASR) performance has been evaluated based on weighted word error rate (WWER), which gives a weight on errors from the viewpoint of IR, instead of word error rate (WER), which treats all words uniformly. A decoding strategy that minimizes WWER based on a Minimum Bayes-Risk framework has been shown, and the reduction of errors on both ASR and IR has been reported. In this paper, we propose an automatic estimation method for word significance (weights) based on its influence on IR. Specifically, weights are estimated so that evaluation measures of ASR and IR are equivalent. We apply the proposed method to a speech-based information retrieval system, which is a typical IR system, and show that the method works well.",
"pdf_parse": {
"paper_id": "I08-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic estimation of word significance oriented for speech-based Information Retrieval (IR) is addressed. Since the significance of words differs in IR, automatic speech recognition (ASR) performance has been evaluated based on weighted word error rate (WWER), which gives a weight on errors from the viewpoint of IR, instead of word error rate (WER), which treats all words uniformly. A decoding strategy that minimizes WWER based on a Minimum Bayes-Risk framework has been shown, and the reduction of errors on both ASR and IR has been reported. In this paper, we propose an automatic estimation method for word significance (weights) based on its influence on IR. Specifically, weights are estimated so that evaluation measures of ASR and IR are equivalent. We apply the proposed method to a speech-based information retrieval system, which is a typical IR system, and show that the method works well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Based on the progress of spoken language processing, the main target of speech processing has shifted from speech recognition to speech understanding. Since speech-based information retrieval (IR) must extract user intention from speech queries, it is a typical speech understanding task. IR typically searches for appropriate documents such as newspaper articles or Web pages using statistical matching for a given query. To define the similarity between a query and documents, the word vector space model or \"bag-of-words\" model is widely adopted, and such statistics as the TF-IDF measure are introduced to consider the significance of words in the matching. Therefore, when using automatic speech recognition (ASR) as a front-end of such IR systems, the significance of the words should be considered in ASR; words that greatly affect IR performance must be detected with higher priority.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on such a background, ASR evaluation should be done from the viewpoint of the quality of mis-recognized words instead of their quantity. From this point of view, the word error rate (WER), which is the most widely used evaluation measure of ASR accuracy, is not an appropriate evaluation measure when we want to use ASR systems for IR, because all words are treated identically in WER. Instead of WER, the weighted WER (WWER), which considers the significance of words from the viewpoint of IR, has been proposed as an evaluation measure for ASR. Nanjo et al. showed that ASR based on the Minimum Bayes-Risk framework could reduce WWER and that the WWER reduction was effective for key-sentence indexing and IR (H. Nanjo et al., 2005).",
"cite_spans": [
{
"start": 702,
"end": 721,
"text": "Nanjo et al., 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To exploit ASR which minimizes WWER for IR, we should appropriately define weights of words. Ideal weights would give a WWER equivalent to IR performance degradation when a corresponding ASR result is used as a query for the IR system. After obtaining such weights, we can predict IR degradation by simply evaluating ASR accuracy, and thus, minimum WWER decoding (ASR) will be the most effective for IR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For well-defined IR tasks such as relational database retrieval (E. Levin et al., 2000), significant words (=keywords) are obvious. On the contrary, determining significant words for more general IR tasks (T. Misu et al., 2004) (C. Hori et al., 2003) is not easy. Moreover, even if significant words are given, the weight of each word is not clear. To properly and easily integrate an ASR system into an IR system, the weights of words should be determined automatically. Conventionally, they are determined by an experienced system designer. Actually, in conventional studies of minimum WWER decoding for key-sentence indexing (H. Nanjo and T.Kawahara, 2005) and IR (H. Nanjo et al., 2005), weights were defined based on the TF-IDF values used in the back-end indexing or IR systems. These values reflect word significance for IR, but they have been used without being proven suitable for IR-oriented ASR. In this paper, we propose an automatic estimation method of word weights based on their influence on IR.",
"cite_spans": [
{
"start": 63,
"end": 82,
"text": "Levin et al., 2000)",
"ref_id": null
},
{
"start": 204,
"end": 221,
"text": "Misu et al., 2004",
"ref_id": null
},
{
"start": 222,
"end": 244,
"text": ") (C.Hori et al., 2003",
"ref_id": "BIBREF1"
},
{
"start": 627,
"end": 654,
"text": "Nanjo and T.Kawahara, 2005)",
"ref_id": null
},
{
"start": 666,
"end": 685,
"text": "Nanjo et al., 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The conventional ASR evaluation measure, namely, word error rate (WER), is defined as Equation (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Word Error Rate (WWER)",
"sec_num": "2.1"
},
{
"text": "WER = (I + D + S) / N (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Word Error Rate (WWER)",
"sec_num": "2.1"
},
{
"text": "Here, N is the number of words in the correct transcript, I is the number of incorrectly inserted words, D is the number of deletion errors, and S is the number of substitution errors. For each utterance, DP matching of the ASR result and the correct transcript is performed to identify the correct words and calculate WER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Word Error Rate (WWER)",
"sec_num": "2.1"
},
{
"text": "Clearly, in WER, all words are treated uniformly, that is, with the same weight. However, there must be differences in the weight of errors, since keywords have more impact on IR or on the understanding of the speech than trivial function words. Based on this background, WER is generalized to the weighted WER (WWER), in which each word has a different weight that reflects its influence on IR:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Word Error Rate (WWER)",
"sec_num": "2.1"
},
{
"text": "(Example from Fig. 1: WWER = (V_I + V_D + V_S) / V_N, where V_N = v_a + v_c + v_d + v_f + v_g, V_I = v_b, V_D = v_g, V_S = max(v_d + v_e, v_d), and v_i is the weight of word i.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Word Error Rate (WWER)",
"sec_num": "2.1"
},
{
"text": "WWER = (V_I + V_D + V_S) / V_N (2); V_N = \u03a3_{w_i} v_{w_i} (3); V_I = \u03a3_{\u0175_i \u2208 I} v_{\u0175_i} (4); V_D = \u03a3_{w_i \u2208 D} v_{w_i} (5); V_S = \u03a3_{seg_j \u2208 S} v_{seg_j} (6); v_{seg_j} = max(\u03a3_{\u0175_i \u2208 seg_j} v_{\u0175_i}, \u03a3_{w_i \u2208 seg_j} v_{w_i})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Word Error Rate (WWER)",
"sec_num": "2.1"
},
{
"text": "Here, v_{w_i} is the weight of word w_i, which is the i-th word of the correct transcript, and v_{\u0175_i} is the weight of word \u0175_i, which is the i-th word of the ASR result. seg_j represents the j-th substituted segment, and v_{seg_j} is the weight of segment seg_j. For segment seg_j, the total weights of the correct words and of the recognized words are calculated, and the larger one is used as v_{seg_j}. In this work, we use the alignment for WER to identify the correct words and calculate WWER. Thus, WWER equals WER if all word weights are set to 1. In Fig. 1, an example of a WWER calculation is shown.",
"cite_spans": [],
"ref_spans": [
{
"start": 552,
"end": 558,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Weighted Word Error Rate (WWER)",
"sec_num": "2.1"
},
{
"text": "WWER calculated based on ideal word weights represents IR performance degradation when the ASR result is used as a query for IR. Thus, we must perform ASR to minimize WWER for speech-based IR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Word Error Rate (WWER)",
"sec_num": "2.1"
},
{
"text": "Next, a decoding strategy to minimize WWER based on the Minimum Bayes-Risk framework (V. Goel et al., 1998) is described.",
"cite_spans": [
{
"start": 89,
"end": 107,
"text": "Goel et al., 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Bayes-Risk Decoding",
"sec_num": "2.2"
},
{
"text": "In Bayesian decision theory, ASR is described with a decision rule \u03b4(X): X \u2192 \u0174. Using a real-valued loss function l(W, \u03b4(X)) = l(W, \u0174), the decision rule minimizing the Bayes risk is given as follows. It is equivalent to orthodox ASR (maximum likelihood ASR) when a 0/1 loss function is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Bayes-Risk Decoding",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b4(X) = argmin_{\u0174} \u03a3_W l(W, \u0174) \u2022 P(W|X)",
"eq_num": "(7)"
}
],
"section": "Minimum Bayes-Risk Decoding",
"sec_num": "2.2"
},
{
"text": "The minimization of WWER is realized by using WWER as the loss function (H. Nanjo and T.Kawahara, 2005) (H. Nanjo et al., 2005).",
"cite_spans": [
{
"start": 71,
"end": 97,
"text": "Nanjo and T.Kawahara, 2005",
"ref_id": null
},
{
"start": 104,
"end": 123,
"text": "Nanjo et al., 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Bayes-Risk Decoding",
"sec_num": "2.2"
},
{
"text": "A word weight should be defined based on its influence on IR. Specifically, weights are estimated so that WWER will be equivalent to an IR performance degradation. For an evaluation measure of IR performance degradation, IR score degradation ratio (IRDR), which is described in detail in Section 4.2, is introduced in this work. The estimation of weights is performed as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of Word Weights",
"sec_num": "3"
},
{
"text": "1. Query pairs of a spoken-query recognition result and its correct transcript are prepared as training data. For each query pair m, procedures 2 to 5 are performed. Practically, procedure 6 is defined to minimize the mean square error between the two evaluation measures (WWER and IRDR) as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of Word Weights",
"sec_num": "3"
},
{
"text": "F(x) = \u03a3_m (E_m(x) / C_m(x) \u2212 IRDR_m)^2 \u2192 min (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of Word Weights",
"sec_num": "3"
},
{
"text": "Here, x is a vector that consists of the weights of words. E_m(x) is a function that gives the sum of the weights of the mis-recognized words, and C_m(x) is a function that gives the sum of the weights of the correct transcript; E_m(x) and C_m(x) correspond to the numerator and denominator of Equation (2), respectively. In this work, we adopt the steepest descent method to determine the weights that minimize F(x). Initially, all weights are set to 1, and then each word weight x_k is iteratively updated based on Equation (9) until the mean square error between WWER and IRDR converges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of Word Weights",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x_k = x_k \u2212 \u03b1 if \u2202F/\u2202x_k > 0; x_k + \u03b1 else if \u2202F/\u2202x_k < 0; x_k otherwise",
"eq_num": "(9)"
}
],
"section": "Estimation of Word Weights",
"sec_num": "3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of Word Weights",
"sec_num": "3"
},
{
"text": "\u2202F/\u2202x_k = \u03a3_m 2 (E_m/C_m \u2212 IRDR_m) \u2022 (E_m/C_m)' = \u03a3_m 2 (E_m/C_m \u2212 IRDR_m) \u2022 (E_m' \u2022 C_m \u2212 E_m \u2022 C_m') / C_m^2 = \u03a3_m 2 (E_m/C_m \u2212 IRDR_m) \u2022 (1/C_m) (E_m' \u2212 C_m' \u2022 E_m/C_m) = \u03a3_m (2/C_m) (WWER_m \u2212 IRDR_m) (E_m' \u2212 C_m' \u2022 WWER_m), where ' denotes \u2202/\u2202x_k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of Word Weights",
"sec_num": "3"
},
{
"text": "Weight Estimation on Orthodox IR",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of Word Weights",
"sec_num": "3"
},
{
"text": "In this paper, weight estimation is evaluated with an orthodox IR system that searches for appropriate documents using statistical matching for a given query. The similarity between a query and documents is defined by the inner product of the feature vectors of the query and the specific document. In this work, a feature vector that consists of TF-IDF values is used. The TF-IDF value is calculated for each word t and document (query) i as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEB Page Retrieval",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "TF-IDF(t, i) = tf_{t,i} / (DL_i / avglen + tf_{t,i}) \u2022 log(N / df_t)",
"eq_num": "(10)"
}
],
"section": "WEB Page Retrieval",
"sec_num": "4.1"
},
{
"text": "Here, the term frequency tf_{t,i} represents the occurrence count of word t in a specific document i, and the document frequency df_t represents the total number of documents that contain word t. A word that occurs frequently in a specific document and rarely occurs in other documents has a large TF-IDF value. We normalize TF values using the length of the document (DL_i) and the average document length over all documents (avglen), because longer documents have more words and their TF values tend to be larger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEB Page Retrieval",
"sec_num": "4.1"
},
{
"text": "For evaluation data, the web retrieval task \"NTCIR-3 WEB task\", which is distributed by NTCIR, is used. The data include web pages to be searched, queries, and answer sets. For speech-based information retrieval, 470 query utterances by 10 speakers are also included.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WEB Page Retrieval",
"sec_num": "4.1"
},
{
"text": "For an evaluation measure of IR, discounted cumulative gain (DCG), described below, is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measure of IR",
"sec_num": "4.2"
},
{
"text": "DCG(i) = g(1) if i = 1; DCG(i\u22121) + g(i)/log(i) otherwise (11). g(i) = h if d_i \u2208 H; a else if d_i \u2208 A; b else if d_i \u2208 B; c otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measure of IR",
"sec_num": "4.2"
},
{
"text": "Here, d_i represents the i-th retrieval result (document). H, A, and B represent degrees of relevance: H is given to documents that are highly relevant to the query, and A and B are given to documents that are relevant and partially relevant to the query, respectively. \"h\", \"a\", \"b\", and \"c\" are the gains; in this work, (h, a, b, c) = (3, 2, 1, 0) is adopted. When the retrieved documents include many relevant documents that are ranked higher, the DCG score increases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measure of IR",
"sec_num": "4.2"
},
{
"text": "In this work, word weights are estimated so that WWER and IR performance degradation will be equivalent. For an evaluation measure of IR performance degradation, we define IR score degradation ratio (IRDR) as below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measure of IR",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "IRDR = 1 \u2212 H/R",
"eq_num": "(12)"
}
],
"section": "Evaluation Measure of IR",
"sec_num": "4.2"
},
{
"text": "Here, R represents the DCG score calculated from the IR results for the correct text query, and H represents the DCG score given by the ASR result of the spoken query. IRDR represents the ratio of DCG score degradation caused by ASR errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measure of IR",
"sec_num": "4.2"
},
{
"text": "In this paper, the ASR system is set up with the following acoustic model and language model, and the decoder Julius rev.3.4.2 (A. Lee et al., 2001). As the acoustic model, a gender-independent monophone model (129 states, 16 mixtures) trained on the JNAS corpus is used. Speech analysis is performed every 10 msec, and a 25-dimensional parameter vector is computed (12 MFCC + 12 \u0394MFCC + \u0394Power). For the language model, a word trigram model with a vocabulary of 60K words trained on WEB text is used. Generally, a triphone model is used as the acoustic model in order to improve recognition accuracy. However, a monophone model is used in this paper, since the proposed estimation method needs recognition errors (and IRDR).",
"cite_spans": [
{
"start": 116,
"end": 133,
"text": "Lee et al., 2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic speech recognition system",
"sec_num": "4.3"
},
{
"text": "We analyzed the correlations of conventional ASR evaluation measures with IRDR using appropriately selected test data, as follows. First, ASR is performed on the 470 spoken queries of the NTCIR-3 web task. Then, queries whose ASR results contain no recognition errors and queries for which no IR results are retrieved are eliminated. Finally, we selected 107 pairs of query transcripts and their ASR results as test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation between Conventional ASR and IR Evaluation Measures",
"sec_num": "4.4.1"
},
{
"text": "For all 107 pairs, we calculated WER and IRDR using the corresponding ASR results. Figure 2 shows the correlation between WER and IRDR. The correlation coefficient between them is 0.119; WER is not correlated with IRDR. Since our IR system only uses the statistics of nouns, WER is not an appropriate evaluation measure for IR. Conventionally, for such tasks, keyword recognition has been performed, and the keyword error rate (KER) has been used as an evaluation measure. KER is calculated by setting all keyword weights to 1 and the weights of all other words to 0 in the WWER calculation. Figure 3 shows the correlation between KER and IRDR. Although IRDR is more correlated with KER than with WER, KER is not significantly correlated with IRDR (correlation coefficient: 0.224). Thus, KER is not a suitable evaluation measure of ASR for IR. This fact shows that each keyword has a different influence on IR and should be given a different weight.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 86,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 576,
"end": 584,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Correlation between Conventional ASR and IR Evaluation Measures",
"sec_num": "4.4.1"
},
{
"text": "In ASR for IR, since some words are significant, each word should have a different weight. Thus, we assume that each keyword has a positive weight, and non-keywords have zero weight. WWER calculated with these assumptions is then defined as weighted keyword error rate (WKER).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation between WWER and IR Evaluation Measure",
"sec_num": "4.4.2"
},
{
"text": "Using the same test data (107 queries), keyword weights were estimated with the proposed estimation method. The correlation between IRDR and WKER calculated with the estimated word weights is shown in Figure 4. A high correlation between IRDR and WKER is confirmed (correlation coefficient: 0.969). The result shows that the proposed method works well and proves that giving a different weight to each word is significant.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 209,
"text": "Figure 4",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Correlation between WWER and IR Evaluation Measure",
"sec_num": "4.4.2"
},
{
"text": "The proposed method enables us to extend text-based IR systems to speech-based IR systems given typical text queries for the IR system, ASR results of the queries, and answer sets for each query. ASR results are not strictly necessary, since they can be substituted with simulated texts that can be automatically generated by replacing some words with others. On the contrary, text queries and answer sets are indispensable and must be prepared. Making answer sets manually costs too much, since we must consider whether each answer is relevant to the query. For these reasons, it is difficult to apply the method to a large-scale speech-based IR system, and an estimation method that does not require hand-labeled answer sets is strongly desired. Such a method, namely, unsupervised estimation of word weights, is also tested. Unsupervised estimation is performed as described in Section 3, except that the IR result (document set) obtained with the correct transcript is regarded as the answer set, namely, a presumed answer set, and is used for the IRDR calculation instead of a hand-labeled answer set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation between WWER and IR Evaluation Measure",
"sec_num": "4.4.2"
},
{
"text": "The result (the correlation between IRDR and WKER) is shown in Figure 5. Without hand-labeled answer sets, we obtained a high correlation between IRDR and WKER (correlation coefficient: 0.712). The result shows that the proposed estimation method is effective and widely applicable to IR systems, since it requires only typical text queries for the IR. With the WWER given by the estimated weights, IR performance degradation can be confidently predicted. It is confirmed that the ASR approach that minimizes such WWER, which is realized with decoding based on a Minimum Bayes-Risk framework (H. Nanjo and T.Kawahara, 2005) (H. Nanjo et al., 2005), is effective for IR.",
"cite_spans": [
{
"start": 584,
"end": 610,
"text": "Nanjo and T.Kawahara, 2005",
"ref_id": null
},
{
"start": 616,
"end": 635,
"text": "Nanjo et al., 2005)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Correlation between WWER and IR Evaluation Measure",
"sec_num": "4.4.2"
},
{
"text": "In this section, we discuss problems of word weight estimation. Although we obtained a high correlation between IRDR and WKER, the estimation may encounter the over-fitting problem when the estimation data are small. When we design a speech-based IR system, a sufficient number of typical queries is often prepared, and thus our proposed method can estimate appropriate weights for typical significant words. Moreover, this problem can be avoided by using a large amount of dummy data (pairs of a query and its IRDR) with unsupervised estimation. Although we obtained a correlation coefficient of 0.712 in unsupervised estimation, a much higher correlation is desirable; there is much room to improve the unsupervised estimation method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "In addition, since typical queries for an IR system change according to the users, the current topic, and so on, word weights should be updated accordingly. A reasonable approach is to update word weights with the small amount of training data that has recently been input to the system. For such an update scheme, our estimation method, which may over-fit small training data, may work like a cache model (P. Clarkson and A.J.Robinson, 1997), which gives higher language model probability to recently observed words.",
"cite_spans": [
{
"start": 431,
"end": 463,
"text": "Clarkson and A.J.Robinson, 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "We described the automatic estimation of word significance for IR-oriented ASR. The proposed estimation method requires only typical queries for the IR, and estimates the weights of words so that WWER, an evaluation measure for ASR, will be equivalent to IRDR, which represents the degree of IR degradation when an ASR result is used as a query for IR. The proposed estimation method was evaluated on a web page retrieval task. WWER based on the estimated weights is highly correlated with IRDR. It is confirmed that the proposed method is effective and that we can confidently predict IR performance with such WWER, which shows the effectiveness of our proposed ASR approach of minimizing such WWER for IR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "Acknowledgment: The work was partly supported by KAKENHI WAKATE(B).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Julius -an open source real-time large vocabulary recognition engine",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Shikano",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. EUROSPEECH",
"volume": "",
"issue": "",
"pages": "1691--1694",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.Lee, T.Kawahara, and K.Shikano. 2001. Julius -an open source real-time large vocabulary recognition en- gine. In Proc. EUROSPEECH, pages 1691-1694.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Deriving disambiguous queries in a spoken interactive ODQA system",
"authors": [
{
"first": "C",
"middle": [],
"last": "Hori",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hori",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Maeda",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Katagiri",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Furui",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. IEEE-ICASSP",
"volume": "",
"issue": "",
"pages": "624--627",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.Hori, T.Hori, H.Isozaki, E.Maeda, S.Katagiri, and S.Furui. 2003. Deriving disambiguous queries in a spoken interactive ODQA system. In Proc. IEEE- ICASSP, pages 624-627.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The AT&T-DARPA communicator mixedinitiative spoken dialogue system",
"authors": [
{
"first": "E",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Pieraccini",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Biatov",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bocchieri",
"suffix": ""
},
{
"first": "G",
"middle": [
"D"
],
"last": "Fabbrizio",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Eckert",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Pokrovsky",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rahim",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ruscitti",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E.Levin, S.Narayanan, R.Pieraccini, K.Biatov, E.Bocchieri, G.D.Fabbrizio, W.Eckert, S.Lee, A.Pokrovsky, M.Rahim, P.Ruscitti, and M.Walker. 2000. The AT&T-DARPA communicator mixed- initiative spoken dialogue system. In Proc. ICSLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A new ASR evaluation measure and minimum Bayes-risk decoding for open-domain speech understanding",
"authors": [
{
"first": "H",
"middle": [],
"last": "Nanjo",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kawahara",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. IEEE-ICASSP",
"volume": "",
"issue": "",
"pages": "1053--1056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H.Nanjo and T.Kawahara. 2005. A new ASR evalua- tion measure and minimum Bayes-risk decoding for open-domain speech understanding. In Proc. IEEE- ICASSP, pages 1053-1056.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Minimum Bayes-risk decoding considering word significance for information retrieval system",
"authors": [
{
"first": "H",
"middle": [],
"last": "Nanjo",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Misu",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kawahara",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. INTER-SPEECH",
"volume": "",
"issue": "",
"pages": "561--564",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H.Nanjo, T.Misu, and T.Kawahara. 2005. Minimum Bayes-risk decoding considering word significance for information retrieval system. In Proc. INTER- SPEECH, pages 561-564.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Language Model Adaptation using Mixtures and an Exponentially Decaying cache",
"authors": [
{
"first": "P",
"middle": [],
"last": "Clarkson",
"suffix": ""
},
{
"first": "A",
"middle": [
"J"
],
"last": "Robinson",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. IEEE-ICASSP",
"volume": "2",
"issue": "",
"pages": "799--802",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.Clarkson and A.J.Robinson. 1997. Language Model Adaptation using Mixtures and an Exponentially De- caying cache. In Proc. IEEE-ICASSP, volume 2, pages 799-802.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Confirmation strategy for document retrieval systems with spoken dialog interface",
"authors": [
{
"first": "T",
"middle": [],
"last": "Misu",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Komatani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kawahara",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. ICSLP",
"volume": "",
"issue": "",
"pages": "45--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T.Misu, K.Komatani, and T.Kawahara. 2004. Confirma- tion strategy for document retrieval systems with spo- ken dialog interface. In Proc. ICSLP, pages 45-48.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "LVCSR rescoring with modified loss functions: A decision theoretic perspective",
"authors": [
{
"first": "V",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Byrne",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. IEEE-ICASSP",
"volume": "1",
"issue": "",
"pages": "425--428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V.Goel, W.Byrne, and S.Khudanpur. 1998. LVCSR rescoring with modified loss functions: A decision the- oretic perspective. In Proc. IEEE-ICASSP, volume 1, pages 425-428.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Example of WWER calculation on IR, is introduced. WWER is defined as follows."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Correlation between ratio of IR score degradation"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Correlation between ratio of IR score degradation and KER should be given a different weight based on its influence on IR."
},
"FIGREF6": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Correlation between ratio of IR score degradation and WKER (supervised estimation)"
},
"FIGREF7": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Correlation between ratio of IR score degradation and WKER (unsupervised estimation)"
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>2. Perform IR with a correct transcript and calcu-</td></tr><tr><td>late IR score R m .</td></tr><tr><td>3. Perform IR with a spoken-query ASR result</td></tr><tr><td>and calculate IR score H m .</td></tr><tr><td>4. Calculate IR score degradation ratio</td></tr><tr><td>(IRDR m = 1 \u2212 Hm Rm ).</td></tr><tr><td>5. Calculate WWER m .</td></tr><tr><td>6.</td></tr></table>",
"num": null,
"text": "Estimate word weights so that WWER m and IRDR m are equivalent for all queries.",
"type_str": "table"
}
}
}
} |