diff --git "a/Full_text_JSON/prefixO/json/O00/O00-1003.json" "b/Full_text_JSON/prefixO/json/O00/O00-1003.json" new file mode 100644--- /dev/null +++ "b/Full_text_JSON/prefixO/json/O00/O00-1003.json" @@ -0,0 +1,1708 @@ +{ + "paper_id": "O00-1003", + "header": { + "generated_with": "S2ORC 1.0.0", + "date_generated": "2023-01-19T07:59:13.763922Z" + }, + "title": "The Improving Techniques for Disambiguating Non-alphabet Sense Categories", + "authors": [ + { + "first": "Feng-Long", + "middle": [], + "last": "Hwang", + "suffix": "", + "affiliation": { + "laboratory": "", + "institution": "National Chung-Hsing University", + "location": { + "postCode": "40227", + "settlement": "Taichung", + "country": "Taiwan" + } + }, + "email": "flhwang@mail.lctc.edu.tw" + }, + { + "first": "Ming-Shing", + "middle": [], + "last": "Yu", + "suffix": "", + "affiliation": { + "laboratory": "", + "institution": "National Chung-Hsing University", + "location": { + "postCode": "40227", + "settlement": "Taichung", + "country": "Taiwan" + } + }, + "email": "msyu@dragon.nchu.edu.tw" + }, + { + "first": "Min-Jer", + "middle": [], + "last": "Wu", + "suffix": "", + "affiliation": { + "laboratory": "", + "institution": "National Chung-Hsing University", + "location": { + "postCode": "40227", + "settlement": "Taichung", + "country": "Taiwan" + } + }, + "email": "" + } + ], + "year": "", + "venue": null, + "identifiers": {}, + "abstract": "Usually, there are various non-alphabet symbols (\"/\", \":\", \"-\", etc.) occurring in Mandarin texts. Such symbols may be pronounced more than one oral expression with respect to its sense category. In our previous works, we proposed the multi-layer decision classifier to disambiguate the sense category of non-alphabet symbols; the elementary feature is the statistical probability of token adopting the Bayesian rule. This paper adopts more features of tokens in sentences. Three techniques are further proposed to improve the performance. Experiments show that the proposed techniques can disambiguate the sense category of target symbols quite well, even with small size of data. The precision rates for inside and outside tests are upgraded to 99.6% and 96.5% by using more features of token and techniques.", + "pdf_parse": { + "paper_id": "O00-1003", + "_pdf_hash": "", + "abstract": [ + { + "text": "Usually, there are various non-alphabet symbols (\"/\", \":\", \"-\", etc.) occurring in Mandarin texts. Such symbols may be pronounced more than one oral expression with respect to its sense category. In our previous works, we proposed the multi-layer decision classifier to disambiguate the sense category of non-alphabet symbols; the elementary feature is the statistical probability of token adopting the Bayesian rule. This paper adopts more features of tokens in sentences. Three techniques are further proposed to improve the performance. Experiments show that the proposed techniques can disambiguate the sense category of target symbols quite well, even with small size of data. The precision rates for inside and outside tests are upgraded to 99.6% and 96.5% by using more features of token and techniques.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Abstract", + "sec_num": null + } + ], + "body_text": [ + { + "text": "Various homographs or non-alphabet symbols in the Mandarin (but not limited to) occur frequently. The patterns containing these symbols may be pronounced with respect to its semantic sense. 
The non-alphabet symbols are defined: the symbols which are not the Mandarin characters (\u5b57) and may be pronounced different oral expressions. We call such phenomenon oral ambiguity.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Introduction", + "sec_num": "1." + }, + { + "text": "The purpose of word sense disambiguation (WSD) is to identify the most possible category among candidate's sense category. It is important to disambiguate the word sense automatically for the natural language processing (NLP). Many works [Brown etc., 1991] , [Fujii and Inue,1998 ] and [Ide and Veronis,1998 ], addressed WSD problems in the past.", + "cite_spans": [ + { + "start": 238, + "end": 256, + "text": "[Brown etc., 1991]", + "ref_id": null + }, + { + "start": 259, + "end": 279, + "text": "[Fujii and Inue,1998", + "ref_id": null + }, + { + "start": 286, + "end": 307, + "text": "[Ide and Veronis,1998", + "ref_id": null + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Introduction", + "sec_num": "1." + }, + { + "text": "In our previous works [Hwang, etc., 1999a; Hwang, etc., 1999b] , we proposed the \u03c8 Correspondence author. multi-layer decision classifier (MLDC) to predict the sense category, in which the voting scheme is used to predict the final category. Even though the domains of sense in the paper just focus on three non-alphabet symbols, the proposed approach can be extended into other symbols in Mandarin and related ambiguity problems. The features of token and improving techniques described in this paper will be employed in the 2 nd layer classifier. The main domain will focus on the improvements for the 2 nd layer decision classifier. The model of our previous works is regarded as the baseline system. Comparing with the baseline model, the proposed features of token and techniques in this paper improve the performance of inside test from 97.8 to 99.6% and outside test from 93.0 to 96.6%.", + "cite_spans": [ + { + "start": 22, + "end": 42, + "text": "[Hwang, etc., 1999a;", + "ref_id": null + }, + { + "start": 43, + "end": 62, + "text": "Hwang, etc., 1999b]", + "ref_id": null + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Introduction", + "sec_num": "1." + }, + { + "text": "The paper is organized as follows: related information and previous works will be described first. Section 3 elaborates the principal techniques for 2 nd layer classifier in MLDC.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Introduction", + "sec_num": "1." + }, + { + "text": "Section 4 focuses on the evaluation for empirical features. Some improving techniques are proposed in section 5. The conclusions are presented in last Section.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Introduction", + "sec_num": "1." + }, + { + "text": "In this Section, we first describe the applications of word sense disambiguation. The precious literatures on WSD and several methods, which are used to disambiguate the sense categories and classification problems of ambiguity, will be introduced next. Finally we will illustrate our previous approach.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Description of Related Works", + "sec_num": "2." 
+ }, + { + "text": "The applications of WSD in natural language processing include the following domains:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Applications of Word Sense Disambiguation", + "sec_num": "2.1" + }, + { + "text": "\u2022 Content and thematic analysis Analyzing the distribution of pre-defined categories of words in text.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Applications of Word Sense Disambiguation", + "sec_num": "2.1" + }, + { + "text": "When querying information, in a standalone system or Internet environment, the system should identify the real meaning for the query; excluding unnecessary data then correctly return desirable information among heterogeneous data.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "\u2022 Information retrieval and extraction", + "sec_num": null + }, + { + "text": "We can first disambiguate the word sense categories, and then translate the word into correct semantic meanings associated with the target word.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "\u2022 Machine translation", + "sec_num": null + }, + { + "text": "Within the text analysis phase of TTS synthesis, the sense ambiguity of non-alphabet symbols or homographs should be resolved. The patterns containing such symbols can be translated into their oral expressions. The problem dealt with in our paper is very important for the precise speech output of TTS system.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "\u2022 Speech processing", + "sec_num": null + }, + { + "text": "A lot of literatures have been published on word sense disambiguation in the past. They range from dictionary-based to corpus-based approaches. The former is dependent on the definitions of machine readable dictionary (MRD) [Veronis, etc., 1990] while the later usually rely only on the frequency of word extracted from the text corpus to construct the feature database [Schutze, etc.,1995] . Corpus-based approach adopts the co-occurrence of words which are extracted from the large text corpora to construct the feature database [Leacock, 1993] and provides the advantage of being generally applicable to new text, domains and corpus without the costly, error-prone parsing and semantic analysis. However, corpus-based approach also has some weakness: the corpus is always hard to collect and is time-consuming. The situation is so called \"knowledge acquisition bottleneck\" [Gale etc., 1992] .", + "cite_spans": [ + { + "start": 224, + "end": 245, + "text": "[Veronis, etc., 1990]", + "ref_id": null + }, + { + "start": 370, + "end": 390, + "text": "[Schutze, etc.,1995]", + "ref_id": null + }, + { + "start": 531, + "end": 546, + "text": "[Leacock, 1993]", + "ref_id": "BIBREF8" + }, + { + "start": 876, + "end": 893, + "text": "[Gale etc., 1992]", + "ref_id": null + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Related Works", + "sec_num": "2.2" + }, + { + "text": "Based on the type of context in examples, the classifiers for word sense category use two contextual information: local and topical context. Hearst, etc. [1999] use local context with a narrow syntactic parse, in which the context is segmented into noun phrases, verb groups and other groups. 
Gale etc.[1992] developed a topical classifier, in which the Bayesian rule is used and the only information adopted is the co-occurrence of unordered word.", + "cite_spans": [ + { + "start": 141, + "end": 160, + "text": "Hearst, etc. [1999]", + "ref_id": null + }, + { + "start": 293, + "end": 308, + "text": "Gale etc.[1992]", + "ref_id": null + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Related Works", + "sec_num": "2.2" + }, + { + "text": "With respect to the contextual information, lexical information is formalized form of information involved in each surrounding word. Lee etc. [1997] adopt the discrimination score, based on maximum entropy of surrounding words in a sentence, to discriminate the word sense. Its precision rate is 80 % average. Yarowsky [1994 and 1997] build a classifier using the local context cues within k windows for target word. A log-likelihood ratio is generated, which stands for the strength of each clue of local context. The decision will be made for matching sorted ratio sequence to decide the sense category of target word. The average performance ranges from 96% to 97% while the domain size of sense is only 2 for all ambiguous questions.", + "cite_spans": [ + { + "start": 142, + "end": 148, + "text": "[1997]", + "ref_id": null + }, + { + "start": 319, + "end": 334, + "text": "[1994 and 1997]", + "ref_id": null + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Related Works", + "sec_num": "2.2" + }, + { + "text": "In contrast to 2-gram, 3-gram and n-gram language models, our previous paper [Hwang, etc., 1999a [Hwang, etc., , 1999b proposed an approach of multi-layer decision classifiers, which can resolve the category ambiguity of oral expression for non-alphabet symbols. A two-layer classifier has been developed. The first layer decision classifier can be viewed as decision tree based on the linguistic knowledge. Some impossible categories will be excluded while the remaining categories are all the possible categories. The second classifier employs a voting scheme to predict the final category with maximum probability score. The precision rates for inside and outside testing are 97.8% and 93.0% average.", + "cite_spans": [ + { + "start": 77, + "end": 96, + "text": "[Hwang, etc., 1999a", + "ref_id": null + }, + { + "start": 97, + "end": 118, + "text": "[Hwang, etc., , 1999b", + "ref_id": null + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Our Previous Works", + "sec_num": "2.3" + }, + { + "text": "At first, the data set and sense categories for three target symbols are described. In 2 nd decision classifier, a voting scheme, derived from Bayesian rule, is used to predict the portable sense category with maximum score.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Principal Techniques", + "sec_num": "3." + }, + { + "text": "The original data set is collected through different source, including: Academic Sinica Balance Corpus (ASBC), text files downloaded from Internet. ASBC is composed of 316 text files which contain 5.22M characters in Mandarin, English and other symbols totally [Huang, 1995; CKIP, 1995] . Only the sentence with such non-alphabet symbols will be extracted and appended into the empirical data set. Examples of three non-alphabet symbols slash (/), colon (:) and dash (-) are extracted and appended into our empirical data set. The sentence size of three non-alphabet symbols is 1115,1282 and 1685 respectively. The ratio of training and testing set is 4:1 appropriately. 
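As a rough illustration of the data preparation just described, the sketch below (Python, with hypothetical function and variable names; the shuffling and fixed seed are our own assumptions, not stated in the paper) keeps only the sentences containing a target symbol and splits them roughly 4:1 into training and testing sets:

```python
import random

TARGET_SYMBOLS = ["/", ":", "-"]

def build_data_set(sentences, symbol, train_ratio=0.8, seed=0):
    # Keep only sentences that contain the target non-alphabet symbol,
    # then split them into training and testing sets at roughly 4:1.
    examples = [s for s in sentences if symbol in s]
    random.Random(seed).shuffle(examples)
    cut = int(len(examples) * train_ratio)
    return examples[:cut], examples[cut:]
```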
These sentences will be classified into different sense category with respect to target symbols. The sense categories and their oral expressions are listed in Tables 1-3 . Less frequent (less than 1%) sense categories will be neglected.", + "cite_spans": [ + { + "start": 261, + "end": 274, + "text": "[Huang, 1995;", + "ref_id": "BIBREF4" + }, + { + "start": 275, + "end": 286, + "text": "CKIP, 1995]", + "ref_id": null + } + ], + "ref_spans": [ + { + "start": 830, + "end": 840, + "text": "Tables 1-3", + "ref_id": "TABREF0" + } + ], + "eq_spans": [], + "section": "Elementary Information of Data Set", + "sec_num": "3.1" + }, + { + "text": "Word segmentation paradigm is based on the Academia Sinica Chinese Electronic Dictionary (ASCED), which contains about 78,000 words. The words in ASCED are composed of one to 10 characters. Our principal rule of segmentation is first subject to maximal length of words and then to least number of words in a segmented pattern sequence.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Elementary Information of Data Set", + "sec_num": "3.1" + }, + { + "text": "The priority scheme is that the segmented word sequence, which contains a word of maximal length, will be chosen. If two sequences have same maximum length of words, we compare further the total number of words in such sequences; then the sequence that is composed of least number of words will be chosen. The same segmentation's priority will be adopted within the training phase and testing phase.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Elementary Information of Data Set", + "sec_num": "3.1" + }, + { + "text": "There are several categories which speech for non-alphabet symbol \"/\" are silence; the duration for silence in prosodic parameter is still different to other senses. During the synthesis processing in TTS system, the duration with respective to its category will be varied and decided with respect to prosody needed. The numbers of token and sentence for three target symbols in our feature database are listed in Table 4 . Table 4 : numbers of token and sentence for three target symbols. Table 5 displays several entries of token word \"\u516c\u8eca\" in the feature database. Twelve entries of token \"\u516c\u8eca\" are listed in Table 5 , in which each entry is composed of 5 tuples (w, l, count, s, pos) . Tag \"Na\" represents the common noun. The number in field l represents that the location of token w is preceding (negative number) or following (positive number) the target symbol respectively. It is possible that one token maybe occurs in more than two categories. Table 5 : the token word \"\u516c\u8eca\" in feature database occurs in sense category 1,3,6", + "cite_spans": [ + { + "start": 664, + "end": 685, + "text": "(w, l, count, s, pos)", + "ref_id": null + } + ], + "ref_spans": [ + { + "start": 414, + "end": 421, + "text": "Table 4", + "ref_id": null + }, + { + "start": 424, + "end": 431, + "text": "Table 4", + "ref_id": null + }, + { + "start": 490, + "end": 497, + "text": "Table 5", + "ref_id": null + }, + { + "start": 610, + "end": 617, + "text": "Table 5", + "ref_id": null + }, + { + "start": 953, + "end": 960, + "text": "Table 5", + "ref_id": null + } + ], + "eq_spans": [], + "section": "Elementary Information of Data Set", + "sec_num": "3.1" + }, + { + "text": ". 
", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Elementary Information of Data Set", + "sec_num": "3.1" + }, + { + "text": "The function of multiple decision classifiers (MLDC) can be described as follow:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Structure of MLDC", + "sec_num": "3.2" + }, + { + "text": "Suppose that E denotes the example with non-alphabet symbols, denote the 1 st and 2 nd classifier respectively. And possi_set is the set containing all possible categories induced by 1 st classifier. TScore(\uff0e) will compute the total score for a given category based on the voting criterion and statistical parameters schemes.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Structure of MLDC", + "sec_num": "3.2" + }, + { + "text": "(1)", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Structure of MLDC", + "sec_num": "3.2" + }, + { + "text": "where s j denotes the sense category for target symbols. possi_set contains all the possible sense categories. TScore(\uff0e) denotes the function of computing the total score for sense category.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Structure of MLDC", + "sec_num": "3.2" + }, + { + "text": "The segmentation task of testing phase adopts same criterions as that in training phase. A sentence will be divided into CH L and CH R , which are segmented into one to several basic tokens (Mandarin word or character). For each token in example, the probability of each category can be calculated and summed up based on the evidence (parameters found in feature database) respectively. It is called the voting scheme.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "w l count s pos w l count s pos \u516c\u8eca -6 1 1 Na \u516c\u8eca +5 4 3 Na \u516c\u8eca -5 2 1 Na \u516c\u8eca -7 1 6 Na \u516c\u8eca -2 1 1 Na \u516c\u8eca -5 2 6 Na \u516c\u8eca -1 3 1 Na \u516c\u8eca -2 1 6 Na \u516c\u8eca -4 2 3 Na \u516c\u8eca -1 2 6 Na \u516c\u8eca -1 1 3 Na \u516c\u8eca +5 8 6 Na W l count s pos \u516c\u8eca p 7 1 Na \u516c\u8eca p 3 3 Na \u516c\u8eca f 4 3 Na \u516c\u8eca f 5 6 Na \u516c\u8eca f 8 6 Na , _ ) ( 1 set possi E = \u03a6 2 1 and \u03a6 \u03a6 ) ( max arg ) _ ( _ 2 j set possi s s TScore set possi j \u2208 = \u03a6", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "Based on the voting schemes, each token in CH L and CH R have a statistical probability value, which looks like the voting suffrage, assigned to each category of the non-alphabet symbol. Like the political voting mechanism, the only candidate who gets the tickets in majority (maximum score in our approach) will become to be the predicted one. First the token unit we use is word with the location feature in CH L or CH R , in which the count of token occurred in same chunk (CH L or CH R ) will be summed up with respect to the sense category. The scheme with character token will be analyzed in Section 4.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "The prediction processing is based on the occurrence of each token inside training corpus for each category. 
The example E is composed of a word sequence W and contains three parts: chunk-L (CH L ), the non-alphabet symbol TS (target symbol) and chunk-R (CH R ). E, CH L and CH R can be expressed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Decision Classifier with voting schemes", "sec_num": "3.3" + }, + { + "text": "E = CH L + TS + CH R (3), CH L = w -m w -(m-1) ... w -1 (4) and CH R = w +1 w +2 ... w +n (5), where m and n are the total numbers of tokens in CH L and CH R .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Decision Classifier with voting schemes", "sec_num": "3.3" + }, + { + "text": "Let the category s max be the sense category with the maximum conditional probability of sense category s given the word sequence W. By the definition of the Bayesian rule, P(s|W) can be written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Decision Classifier with voting schemes", "sec_num": "3.3" + }, + { + "text": "P(s|W) = P(s) \u2022 P(W|s) / P(W) (6). MLDC needs to find the sense category s max with the maximum conditional probability P(s|W). Thus s max = argmax s P(s) \u2022 P(W|s) / P(W) = argmax s p(s) \u2022 p(w 1 , w 2 , \u2026, w M |s) / P(W) (7), where N and M denote the number of sense categories of the target symbol and the number of tokens (words) in word sequence W.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Decision Classifier with voting schemes", "sec_num": "3.3" + }, + { + "text": "Two problems should be considered for Eq. (7). One is that the probability p(w 1 , w 2 , \u2026, w M |s) needs large memory and computation for the word sequence W. The other is data sparseness caused by the small size of the data set, which usually leads to zero frequencies. Under our voting scheme of preference scoring, each token w in word sequence W can be regarded as independent of the other tokens. For the probability of sense category s given a token w, Eq. (7) can be modified as Eq. (8), where P(s|w i ) is the probability of sense category s given a token w i . Such probability can be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Decision Classifier with voting schemes", "sec_num": "3.3" + }, + { + "text": "Score(s|W) = P(s|w 1 ) + P(s|w 2 ) + ... + P(s|w M ) = \u2211 i P(s|w i ) (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Decision Classifier with voting schemes", "sec_num": "3.3" + }, + { + "text": "considered further as the score for token w to vote for sense category s. Eq. (8) can be expressed as Score(s|w) = P(s|w) = C(s,w) / TC(w) (9), where C(s,w) denotes the count of token w occurring in the feature database for a certain sense category s and TC(w) is the total count of token w in the feature database for the target symbol.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Statistical Decision Classifier with voting schemes", "sec_num": "3.3" + }, + { + "text": "Score(s|w) is the relative frequency, which can be regarded as the score of token w voting for sense category s in our voting approach. Eq. (9) satisfies the Bayesian rule and is easy to understand intuitively. When computing the probability score of each word w for sense category s, we just need the token count C(s,w) and the total count TC(w) with respect to the sense category s and the target symbol. 
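The voting computation of Eqs. (8)-(9) can be sketched as follows; the token counts shown are made-up examples, and the uniform fallback for unseen tokens is our reading of the additive discounting (count 0.5) mentioned in Section 3.4, not necessarily the authors' exact formula:

```python
from collections import defaultdict

# Hypothetical feature database built in the training phase:
# counts[(token, sense)] = C(s, w).
counts = defaultdict(int)
counts.update({("公車", 1): 3, ("公車", 3): 2, ("公車", 6): 4})

def score(sense, token, senses, unknown_count=0.5):
    # Score(s|w) = C(s,w) / TC(w)  (Eq. 9); an unseen token gets the same
    # small additive count for every sense, i.e. a uniform vote.
    total = sum(counts[(token, s)] for s in senses)
    if total == 0:
        return unknown_count / (unknown_count * len(senses))
    return counts[(token, sense)] / total

def total_score(sense, tokens, senses):
    # Score(s|W) = sum_i P(s|w_i)  (Eq. 8): every token in CH_L and CH_R
    # adds its relative frequency to the accumulated score of the sense.
    return sum(score(sense, w, senses) for w in tokens)
```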
So, the Score(s|w) can be computed easily for all tokens in the word sequence W of sentence. The probability can be regarded further as a score for each token in CH L and CH R to vote for each category of non-text symbol.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "Referring to the Eq. (10), the score 1 Score L and Score R of each token in CH L and CH R voting for sense category s j of non-text symbol can be computed as: , ", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "where J denotes the number of sense category for target symbol.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "By definition of score( ) above, can be regarded as the relative frequency which the will occur in the sense category s j . As the result, our voting schemes are based on such probability value.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "For the 2 nd decision classifier in MLDC, the total score TScore L (\u02d9)and TScore R (\u02d9) for all the tokens in substring CH L and CH R of example E to vote for sense category s j can be computed as:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "1", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "The resulting score of each token fall between 0 and 1, while it is possible that the accumulated scores of all tokens in sentence for certain sense category will be greater than 1.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "i i w w + \u2212 and ) , ( ) ( 1 \u2211 = \u2212 \u2212 = J j i j L i L w s C w TC 1 ) , ( , 1 ) , ( 1 1 = = \u2211 \u2211 = = J j R j R J j L j L w s Score w s Score ) , ( and ) , ( i j R i j L w s Score w s Score + \u2212 ) ( ) , ( ) , ( i R i j R i j R w TC w s C w s Score + + + = ) , ( ) ( 1 \u2211 = + + = J j i j R i R w s C w TC n i m i + \u2264 + \u2264 + \u2212 \u2264 \u2212 \u2264 \u2212 1 and 1 ) ( ) , ( ) , ( i L i j L i j L w TC w s C w s Score \u2212 \u2212 \u2212 = ) ( and ) ( i R i L w TC w TC + \u2212 ) ( ) , ( ) | ( ) | ( w TC w s C w s Score w s P = = ) , ( C and ) , ( i j R i j L w s w s C + \u2212 i i w w + \u2212 and i i w w + \u2212 and i i w w + \u2212", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "and 13In 2 nd decision classifier, total score TScore(\u02d9) of all tokens in example E for each sense category are displayed as: 14TScore(\u02d9) will be used in Eq. 
(2) by the multi-layer decision classifiers to predict the final sense category s j .", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Statistical Decision Classifier with voting schemes", + "sec_num": "3.3" + }, + { + "text": "Several well-known methods for probability of unknown words are described in [Su etc.,1996; Daniel et al.,2000] : additive discounting, Good-Turing and Back-Off . The principle reason is that there are a lot of tokens in natural language, usually more several ten thousands.", + "cite_spans": [ + { + "start": 77, + "end": 91, + "text": "[Su etc.,1996;", + "ref_id": null + }, + { + "start": 92, + "end": 111, + "text": "Daniel et al.,2000]", + "ref_id": "BIBREF7" + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "The Probability of Unknown token", + "sec_num": "3.4" + }, + { + "text": "New lexicons or tokens will be occurred in near future. Within natural language processing, it is so hard to collect all the words.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Probability of Unknown token", + "sec_num": "3.4" + }, + { + "text": "In our paper, the so-called unknown tokens can be considered that do not occur in our feature database, which have been generated in the training phase. It is so apparent that the distribution and total number of collected data set will affect the statistical parameters seriously, especially on the statistical models. Another situation is the data sparseness. The smoothing techniques can alleviate the problems. In this paper we use additive discounting and assign 0.5 to the count of unknown tokens.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "The Probability of Unknown token", + "sec_num": "3.4" + }, + { + "text": "The experiments with elementary approach and schemes are evaluated first. Two different scoring scheme adopted by our classifier are tested to decide which is better for WSD problems in this paper. We will compare the 2 nd classifier in MLDC with the well-known language model. The location effectiveness with respect to different token unit (Mandarin word or character) is also evaluated in final subsection.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluations", + "sec_num": "4." + }, + { + "text": "At first, we will describe the voting scheme with winner-take-all scoring then compare such two scoring schemes. In contrast to the so-called preference-scoring scheme described in Section 4.3, the voting scheme with winner-take-all scoring adopts a different scoring rule.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Two Scoring Schemes", + "sec_num": "4.1" + }, + { + "text": "Ho Lee etc [1997] . Lee employed the winner-take-all scoring scheme to word sense disambiguation, without comparison between these two schemes in his paper. Lee's precision rate was 80% average.", + "cite_spans": [ + { + "start": 11, + "end": 17, + "text": "[1997]", + "ref_id": null + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Two Scoring Schemes", + "sec_num": "4.1" + }, + { + "text": "For each token in sentence, will be assigned the score 1 to sense category s j * for token w -i and w +i and 0 to all the other sense categories. 
Eq.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Two Scoring Schemes", + "sec_num": "4.1" + }, + { + "text": ") , ( ) ( , ) , ( ) ( 1 1 i j n i R j R m i i j L j L w s", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Two Scoring Schemes", + "sec_num": "4.1" + }, + { + "text": "(10) should be rewritten as:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Two Scoring Schemes", + "sec_num": "4.1" + }, + { + "text": "EQUATION", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [ + { + "start": 0, + "end": 8, + "text": "EQUATION", + "ref_id": "EQREF", + "raw_str": "(15)", + "eq_num": "(16)" + } + ], + "section": "Evaluation for Two Scoring Schemes", + "sec_num": "4.1" + }, + { + "text": "where sense category is with respect to the category of which the have the maximum score among all categories for w -i and w +i . Based on the voting scheme with winner-take-all scoring, Eqs. (10) -(14) should not be modified.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Two Scoring Schemes", + "sec_num": "4.1" + }, + { + "text": "In case that several sense categories have the maximum score for token w, Eqs (15) and(16) should be revised. The total probability score 1 for token w will be shared by these sense categories. It means that the total score 1 will be divided by the number of sense categories with same maximum score.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Two Scoring Schemes", + "sec_num": "4.1" + }, + { + "text": "The first parameter to be evaluated is the scoring scheme for each token. Figure 6 displays an example of the accumulated score for 5 categories using two different scoring methods: preference and winner-take-all scoring on the Eqs 15and 16. The example E1contains 15 individual tokens (including symbol \":\"). Sense category time (s 2 ) gets the maximum score 6.92 in Figure 1 . Similarly, it still gets maximum score 7.0 by using the winner-take-all scoring. The sense category time (s 2 ) in (E1) gets maximum score in two scoring schemes, however, some other examples may not hold yet. Especially, while top 2 scores are so close it is possible that the sense category with second maximum score will precede the first category with maximum score by employing different scoring scheme. For instance, as shown in example (E2), the sense category date (s 1 ) got the maximum score and is predicted as the final category by using the winner-take-all scoring scheme. Instead of such scoring scheme, we use the preference scoring to predict the category and the result is correct. In fact, the substring \"1/3\" means \"one third\". This is an example that winner-take-all scoring makes a wrong prediction while preference scoring can finds the correct sense category. The scores for each sense category 2 are listed below example (E2). Table 7 lists the performances with two voting schemes: preference and winner-take-all scoring. Obviously, the former is superior uniformly to the later on both inside and outside testing for three symbols. So we adopt the voting scheme of preference scoring, excluding winner-take-all scoring, for all following experiments. Note that the 2 nd decision classifier in MLDC, based on the voting scheme of preference scoring with Mandarin word's token, is regarded as the baseline model in this paper. 
As shown in Table 7 , the net results are enhanced up 5.5% and 5.9% for inside and outside testing respectively. ", + "cite_spans": [], + "ref_spans": [ + { + "start": 74, + "end": 82, + "text": "Figure 6", + "ref_id": null + }, + { + "start": 368, + "end": 376, + "text": "Figure 1", + "ref_id": "FIGREF1" + }, + { + "start": 1330, + "end": 1337, + "text": "Table 7", + "ref_id": "TABREF6" + }, + { + "start": 1842, + "end": 1849, + "text": "Table 7", + "ref_id": "TABREF6" + } + ], + "eq_spans": [], + "section": "Evaluation for Two Scoring Schemes", + "sec_num": "4.1" + }, + { + "text": "(E1) \u4e09 \u6708 \u4e8c \u5341 \u65e5 \u65e9 \u4e0a \uff17 \uff1a \uff13 \uff10 \u65bc \u653f \u5927 \u5716 \u66f8 \u9928 \u524d \u96c6 \u5408 \u3002", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Two Scoring Schemes", + "sec_num": "4.1" + }, + { + "text": "(E2) \u50f9 \u683c \u6bd4 \u53f0 \u7063 \u4fbf \u5b9c \u7d04 \uff11\uff0f\uff13 \u5de6 \u53f3 \uff0c s n", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Two Scoring Schemes", + "sec_num": "4.1" + }, + { + "text": "In this Section, we will compare baseline defined in previous subsection with the n-gram (n=1, 2 in this experiments), widely used in various domains of natural language processing.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Comparing the 2 nd Classifier with n-gram Models", + "sec_num": "4.2" + }, + { + "text": "The base line model displays attractive empirical results. ", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Comparing the 2 nd Classifier with n-gram Models", + "sec_num": "4.2" + }, + { + "text": "In addition to our baseline model, we will analyze further the effectiveness of the 1 st classifier in MLDC. Two classifiers in MLDC could be merged together to improve the prediction rate.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Merging Two Layer Classifiers Together", + "sec_num": "4.3" + }, + { + "text": "For instance, example (E3) shows the effectiveness of merging the 1 st layer classifier into baseline (the 2 nd layer classifier). Exploiting the 1 st classifier to exclude some impossible categories first. As shown in example (E3), the sense category with maximum score (2.4), predicted by using the 2 nd layer classifier with voting scheme only, is date (s 1 ) and it is apparent that the prediction is incorrect. The number of w +1 token (32) in pattern \"3/32\" is larger than 31, which is the maximum number of date. Therefore sense category date 3 was excluded for target symbol \"/\" by the 1 st layer classifier. However, the category music time (s 3 )", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Merging Two Layer Classifiers Together", + "sec_num": "4.3" + }, + { + "text": "with second maximum score (1.8) was predicted as the final one among all remained categories correctly by the 2 nd layer classifier with voting scheme. The performances are attractive and listed in Table 9 . As shown, the final results for outside testing is 97.8, 95.6 and 92.1 for three symbols respectively by combining the 1 st and 2 nd classifier with voting scheme of preference scoring in 2 nd classifier. The numbers in parenthesis are the net results. The average net results by merging two classifiers are upgraded 0.5% and 4.5% (referring to Table 8 and Table 10 ). 
", + "cite_spans": [], + "ref_spans": [ + { + "start": 198, + "end": 205, + "text": "Table 9", + "ref_id": "TABREF10" + }, + { + "start": 553, + "end": 573, + "text": "Table 8 and Table 10", + "ref_id": "TABREF0" + } + ], + "eq_spans": [], + "section": "Merging Two Layer Classifiers Together", + "sec_num": "4.3" + }, + { + "text": "(E3) \u6f14 \u594f \u7684 \u66f2\u5b50 \u662f \uff13\uff0f\uff13\uff12 \u62cd \u4e14 \u70ba \uff24 \u5927 \u8abf \u3002 s n", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Merging Two Layer Classifiers Together", + "sec_num": "4.3" + }, + { + "text": "In previous Section, the location of each token is just labeled two types: preceding (p) and following (f) the target symbol. While the count for each token was statistically accumulated, we just consider whether the token is located within the chunk-L (CH L ) or chunk-R (CH R ) of sentence. Will the performance be improved by considering further the individual location of each token in CH L (w -i ) and CH R (w +i )? In this Section, the effect of individual location for each token (word) will be evaluated further.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for the Effect of Word's Location", + "sec_num": "4.4" + }, + { + "text": "In this Section Token unit is still Mandarin word. Instead of the two chunk types described previously, each token is labeled with the individual location in CH L and CH R , in which the count of each token occurred in same location will be summed up with respect to the sense category. So the technique is the word-based scheme with individual location.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for the Effect of Word's Location", + "sec_num": "4.4" + }, + { + "text": "The Eqs. (10)-(12) can be changed as follow:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for the Effect of Word's Location", + "sec_num": "4.4" + }, + { + "text": ",", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for the Effect of Word's Location", + "sec_num": "4.4" + }, + { + "text": ",", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for the Effect of Word's Location", + "sec_num": "4.4" + }, + { + "text": "where i is the location of word with respect to the non-text symbol , -m<= -i <=-1 and 1<=+i<=n. are the count of word w -i and w +i with the location -i and +i occurred in feature corpus for sense category s j respectively.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for the Effect of Word's Location", + "sec_num": "4.4" + }, + { + "text": "are the total count of word w -i and w +i occurred in the location -i and +i in feature database respectively Let's take a look at the example (E4), the sense category (date) is incorrectly predicted based on the chunk scheme whereas correctly predicted on individual location of each token.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for the Effect of Word's Location", + "sec_num": "4.4" + }, + { + "text": "(E4) \u66f9 \u9326 \u8f1d \u9084 \u6709 \u6a5f \u6703 \u5728 \uff11\uff11\uff0f\uff12\uff10 \u4ee3 \u8868 \u53f0 \u7063 \u5927 \u806f \u76df \u8207 \u7d71 \u4e00 \u968a \u6bd4 \u8cfd\u3002 Comparing two schemes of token (word) with individual and two chunks' location, the net precision rates of outside testing are 0.6%, 1.5% and -0.3% for three target symbols respectively. 
As Shown the Table 10 , the former is average superior to the later, in which the sentence is divided into two chunks (CH L or CH R ). Referring to the accumulated score for correct predicted sense category, although the rate of unknown words token in data set reaches about 45%, the former still make the prediction efficiently. However, it is easier for", + "cite_spans": [], + "ref_spans": [ + { + "start": 254, + "end": 262, + "text": "Table 10", + "ref_id": "TABREF0" + } + ], + "eq_spans": [], + "section": "Evaluation for the Effect of Word's Location", + "sec_num": "4.4" + }, + { + "text": ") ( ) , ( ) , ( i i i j i i j R w TC w s C w s Score + + + + + = ) ( ) , ( ) , ( i i i j i i j L w TC w s C w s Score \u2212 \u2212 \u2212 \u2212 \u2212 = ) , ( ) ( 1 \u2211 = \u2212 \u2212 \u2212 \u2212 = J j i j i i i w s C w TC ) , ( ) ( 1 \u2211 = + + + + = J j i j i i i w s C w TC 1 ) , ( , 1 ) , ( 1 1 = = \u2211 \u2211 = + = \u2212 J j i j R J j i j L w s Score w s Score ) , ( C and ) , ( i i j i j i w s w s C + + \u2212 \u2212 ) ( and ) ( i i i i w TC w TC + + \u2212 \u2212", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for the Effect of Word's Location", + "sec_num": "4.4" + }, + { + "text": "the techniques with voting scheme, which identify a half of total tokens in sentence, to make the correct prediction. The net precision rates for inside and outside testing are 0.2 and 0.6. \" 99.3(-0.2) 99.5 98.6(+0.7) 97.9", + "cite_spans": [], + "ref_spans": [ + { + "start": 190, + "end": 202, + "text": "\" 99.3(-0.2)", + "ref_id": null + } + ], + "eq_spans": [], + "section": "Evaluation for the Effect of Word's Location", + "sec_num": "4.4" + }, + { + "text": "\"\uff1a\" 99.2(+0.9) 98. ", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for the Effect of Word's Location", + "sec_num": "4.4" + }, + { + "text": "Until now, the sentence will be divided into two chunks: chunk-L(CH L ) and chunk-R(CH R ), which are in the left and right side of target symbol TS in sentence. Such chunks will be segmented into one to several words based the ASCED and segmentation scheme. In Mandarin Vocabulary, there are about 70000 frequent Mandarin words, which are composed of one to ten characters. For example, the number for 1-character token (Mandarin word) is 7522 and 48315 for 2-character token (Mandarin word) in ASCED while just 13053 for frequent Mandarin characters. It is apparent that segmented sentence will generate more unknown tokens for the same data set. The more unknown tokens are in sentence, the less precision rate will be. The process of word segmentation may generate possible mistake, which will also degrade the performance of prediction. Usually the situation becomes serious if the data set is sparse or volume of sentence is small.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "In this section, the sentence will not be segmented so each character in sentence is the voting token. The location of each character will be considered same as described in previous section. The token unit is character with the individual location in CH L or CH R , in which the count of each character occurred in same chunk (CH L or CH R ) will be summed up with respect to the sense category. So the technique is the character-based scheme with individual location. Example E is still composed of three parts: CH L , TS and CH R . 
Each chunk may comprise one to several characters. Note that the foreign words (such as: IBM, DR., Windows, etc.) within chunk will be regarded as a token.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "where c denotes the individual character in CH L and CH R and m, n the number of characters ", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "+ + + + =", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "\uff0e\uff0e\uff0e \uff0e\uff0e\uff0e in CH L and CH R respectively. The Eqs. (10)-(12) of probability scoring can be rewritten as:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "EQUATION", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [ + { + "start": 0, + "end": 8, + "text": "EQUATION", + "ref_id": "EQREF", + "raw_str": ", (21) ,", + "eq_num": "(22)" + } + ], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "where i is the location of character with respect to the non-text symbol , -m<= -i <=-1 and 1<=+i<=n. are the count of character c -i and c +i occurred in feature corpus with the location -i and +i for sense category s j respectively.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "are the total count of character c -i and c +i occurred with the location -i and +i in feature corpus respectively", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "The total score TScore L (\u02d9)and TScore R (\u02d9) for all individual characters of CH L and CH R", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "in example E to vote for sense category can be computed like Eqs. 13and 14. The method will be regarded as the character-based approach with location scheme.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "Until now, the adopted token unit of sentence is Mandarin word. There are some possible errors occurred during the segmentation process for generating the token (word). Based on the character 4 token unit with location scheme, there are fewer unknown token. The example (E5) in our data set is divided into two chunks, in which the individual token is the character without needing the word segmentation. The characters in CH L will be labeled with location -m~-1 and the characters in CH R labeled with +1~+n. 
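The character-based tokenization with location labels might look like the sketch below; the regular expression used to keep foreign words (e.g. "IBM") as single tokens is our own simplification of the rule stated above:

```python
import re

def char_tokens(chunk_left, chunk_right):
    # Split each chunk into single characters, but keep runs of alphanumeric
    # text (foreign words such as "IBM" or "Windows") as one token.
    def split(chunk):
        return re.findall(r"[A-Za-z0-9.]+|\S", chunk)
    left, right = split(chunk_left), split(chunk_right)
    # Characters in CH_L are labeled -m .. -1, those in CH_R are +1 .. +n.
    labeled = [(tok, -(len(left) - i)) for i, tok in enumerate(left)]
    labeled += [(tok, i + 1) for i, tok in enumerate(right)]
    return labeled

# char_tokens("結果", "那天") -> [('結', -2), ('果', -1), ('那', 1), ('天', 2)]
```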
(E5) is an example in which the correct sense category can't be predicted by using the scheme with word token, while it can be correctly predicted by using character as token.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "(E5) \u7d50 \u679c \uff11\uff10\uff0f\uff11\uff10 \u90a3 \u5929 \u6843 \u5712 \u7e23 \u5efa \u7bc9 \u5e2b \u518d \u4f86 \u8a8d \u5b9a \u6642 \uff0c Intuitively, in natural language processing of Mandarin, the token unit used is usually word, which is the basic unit containing complete and useful semantic information. Instead, 4 The Mandarin characters we use is 13053, which are collected in the BIG-5 character set. 5 In contract to our previous example, each Mandarin character here is regarded as a token, without word ) ( why the performance for character tokens is superior to that for word tokens both with individual location?", + "cite_spans": [ + { + "start": 223, + "end": 224, + "text": "4", + "ref_id": null + }, + { + "start": 314, + "end": 315, + "text": "5", + "ref_id": null + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "EQUATION", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [ + { + "start": 0, + "end": 8, + "text": "EQUATION", + "ref_id": "EQREF", + "raw_str": ") , ( ) , ( i i i j i i j R c TC c s C w s Score + + + + + = ) ( ) , ( ) , ( i i i j i i j L c TC c s C w s Score \u2212 \u2212 \u2212 \u2212 \u2212 = ) , ( ) ( 1 \u2211 = \u2212 \u2212 \u2212 \u2212 = J j i j i i i c s C c TC ) , ( ) ( 1 \u2211 = + + + + = J j i j i i i c s C c TC 1 ) , (", + "eq_num": ", 1" + } + ], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "Depending on our observations, there are three following reasons with respect to such phenomenon. First, it is not easy for the process of word segmentation to generate the most portable word sequence W. The second reason is the data sparseness; the situation exists in our WSD problem and more unknown tokens will happen. The third, related to the unknown token, is the token unit. The number for Mandarin character is approximately 13,000 whereas 70,000 for Mandarin word. It is obvious that adopting word's token will lead to more unknown tokens than that of character's token. Such situation will affect the performance. As described below, suppose that a two-character word \"\u6628\u5929(yesterday)\" occurred with specific location in our feature database. Now a token \"\u4eca\u5929(today)\" in a testing example occurs, labeled by same location of token \"\u6628\u5929\", and will be still regarded as a unknown token based on the token with scheme of individual location. However, the token \"\u6628\u5929\" can further be divided into two characters: \"\u6628\" and \"\u5929\". The second character of word \"\u6628\u5929\" and \"\u4eca \u5929\" is both \"\u5929\". So character \"\u5929\" is a known token and can provide the statistical information based on the character token with individual location. Referring to Table 10 , the average precision rates in Table 11 are upgraded 0.5% and 0.4% for inside and outside testing obtained from the individual location for each token (character). 
", + "cite_spans": [], + "ref_spans": [ + { + "start": 1230, + "end": 1238, + "text": "Table 10", + "ref_id": "TABREF0" + }, + { + "start": 1272, + "end": 1280, + "text": "Table 11", + "ref_id": "TABREF0" + } + ], + "eq_spans": [], + "section": "Evaluation for Effect of Token Unit", + "sec_num": "4.5" + }, + { + "text": "In this Section, we will discuss several features of token in example to improve the performance. At first, the weighting of token in different location with respect to the target symbol will be analyzed. We hope to find the effectiveness of weighting value for each individual token. Another technique is subject to the specific patterns contained in example.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Further Improvements", + "sec_num": "5." + }, + { + "text": "Such patterns represent a special semantic meaning. In the next subsection, we will discuss the difference of top 2 score for each example. A threshold value will be used to decide when the alternative technique can be used to improve the performance.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Further Improvements", + "sec_num": "5." + }, + { + "text": "It is our intuition that the nearer a token is to target symbol, the higher prediction capability to token is. So in this Section we will try to find the effect of the tokens in different locations. And possibly, we can assign different weights to tokens with respect to its location in sentence.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Weights for Individual Token", + "sec_num": "5.1" + }, + { + "text": "The function weight(i) denotes the weighting value for token unit with location i , which can be derived from experiments for three symbols. Therefore, the related Equations, Eqs. 13and 14, will be revised as:", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Weights for Individual Token", + "sec_num": "5.1" + }, + { + "text": "EQUATION", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [ + { + "start": 0, + "end": 8, + "text": "EQUATION", + "ref_id": "EQREF", + "raw_str": "(24)", + "eq_num": "(25)" + } + ], + "section": "Weights for Individual Token", + "sec_num": "5.1" + }, + { + "text": "In this subsection, we will discuss the patterns in text, which belong to the specific sense category and can be assigned directly. For instance, example (E6) contains the pattern \"42/7\", which is incorrectly predicted as category others (s 7 ) with maximum score 4.6 generated by", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Pattern Table", + "sec_num": "5.2" + }, + { + "text": "In fact, the pattern \"42/7\" stands for a name of network company. The target symbol \"/\" in \"24/7\" will be a silence. Therefore the pattern should be pronounced directly in Mandarin \"\u56db \u5341\u4e8c (shi si er), a silence and \u4e03 (chi)\". All such specific patterns, which are ambiguous and represent the specific term, such as a company name, specific date \"9/21\" etc., will be collected into the pattern table. Such table should be searched in front of adopting the MLDC.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "MLDC.", + "sec_num": null + }, + { + "text": "If the specific patterns of examples are found, its associated sense category will be assigned immediately without the prediction of MLDC. Currently, there are 12 entries collected in our pattern table. 
The use of pattern table can resolve several special cases and improve the performance by the amounts 0.6% ~ 1.0% for the three target symbols. (E6) \uff14\uff12\uff0f\uff17 \u53ef \u5354 \u52a9 \u7db2 \u7ad9 \u89e3 \u6c7a \u7db2 \u8def \u5ee3 \u544a \u5b58 \u8ca8 \u554f \u984c\u3002", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "MLDC.", + "sec_num": null + }, + { + "text": "In the previous section, we introduced the token schemes of word and character, which are based on the different token unit in sentence. Finally the best average precision rate of outside test are 97.83%, 98.46 and 92.37% for symbols \"/\", \":\" and \"-\" respectively using the character token scheme with location. One consideration is that whether the performance can be improved further by merging different token schemes or not? Although the token scheme of characters can obtain highest precision rate currently, what is the condition to adopt the alternative schemes to improve the performance further?", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Adopting the Alternative", + "sec_num": "5.3" + }, + { + "text": "The normalized difference is defined as: (score 1 -score 2 )/NT. score 1 and score 2 are the top 2 score computed by proposed approach for target symbols. TN denotes the token number of sentence and will be changed with different token schemes. TN will normalize the difference of top 2 scores.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Adopting the Alternative", + "sec_num": "5.3" + }, + { + "text": "Note that the Elementary approach here was described at the end of Section 4.5. The final empirical performances of inside and outside testing are 99.6% and 96.5% average, employing the improving techniques proposed in this Section.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Adopting the Alternative", + "sec_num": "5.3" + }, + { + "text": "We have developed an approach, which contains the multi-layer decision classifiers and can disambiguate the sense ambiguity of non-alphabet symbols in Mandarin effectively. In contract to the n-gram language models, the new approach just needs smaller size of corpus and still hold the linguistic knowledge for statistical parameters. The model with voting scheme (baseline) is superior to n-gram (n=1,2) model. Several techniques are proposed and evaluated in our elementary experiment. Some examples are displayed to illustrate for each technique. The precision rates are 99.4% and 95.5% for inside and outside testing.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Conclusions", + "sec_num": "6." + }, + { + "text": "Three techniques are proposed to improve the performance further: weights for token with individual location, pattern table and the alternative. The final precision rates of further improvements are 99.6% and 96.5% for inside and outside test respectively.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Conclusions", + "sec_num": "6." + }, + { + "text": "In addition to the target symbols \" /\", \":\" and \"-\" analyzed in the paper, there are some other symbols, such as *, %, [] and so on, in which the oral ambiguity problems will be incurred and should be resolved. Our approaches can be extended into related symbols. 1.6* 0.9 1.6* 0.6 0.1* 1.5 4.6 incorrect", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Conclusions", + "sec_num": "6." 
+ }, + { + "text": "All the sense categories for three target symbols discussed in our paper are displayed inTables 1-3.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "", + "sec_num": null + }, + { + "text": "In fact, the decision tree excludes three sense categories: date, computer term and version.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "", + "sec_num": null + } + ], + "back_matter": [], + "bib_entries": { + "BIBREF0": { + "ref_id": "b0", + "title": "Word Sense Disambiguation Using Statistical Methods", + "authors": [ + { + "first": "P", + "middle": [], + "last": "Brown", + "suffix": "" + }, + { + "first": "S", + "middle": [ + "Della" + ], + "last": "Pietra", + "suffix": "" + }, + { + "first": "V", + "middle": [ + "Della" + ], + "last": "Pietra", + "suffix": "" + }, + { + "first": "R", + "middle": [], + "last": "Mercer", + "suffix": "" + } + ], + "year": 1991, + "venue": "Proceeding of the 29th Annual Meeting of the Association for Computational Linguistics", + "volume": "", + "issue": "", + "pages": "264--270", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "P. Brown, S. Della Pietra, V. Della Pietra and R. Mercer. Word Sense Disambiguation Using Statistical Methods. In Proceeding of the 29th Annual Meeting of the Association for Computational Linguistics, Berkeley, pp. 264-270, 1991.", + "links": null + }, + "BIBREF1": { + "ref_id": "b1", + "title": "Selective Sampling for Example-base Word Sense Disambiguation", + "authors": [ + { + "first": "Atsushi", + "middle": [], + "last": "Fujii", + "suffix": "" + }, + { + "first": "Kentaro", + "middle": [], + "last": "Inui", + "suffix": "" + } + ], + "year": 1998, + "venue": "Computational Linguistics", + "volume": "24", + "issue": "4", + "pages": "573--597", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Atsushi Fujii, Kentaro Inui, Selective Sampling for Example-base Word Sense Disambiguation, Computational Linguistics, vol. 24, number 4, 1998,pp 573-597.", + "links": null + }, + "BIBREF2": { + "ref_id": "b2", + "title": "A Method for Disambiguating Word Sense in a Large corpus", + "authors": [ + { + "first": "William", + "middle": [], + "last": "Gale", + "suffix": "" + }, + { + "first": "W", + "middle": [], + "last": "Kenneth", + "suffix": "" + }, + { + "first": "David", + "middle": [], + "last": "Church", + "suffix": "" + }, + { + "first": "", + "middle": [], + "last": "Yarowsky", + "suffix": "" + } + ], + "year": 1992, + "venue": "Computer and the Humanities", + "volume": "26", + "issue": "", + "pages": "", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "William Gale, Kenneth W., Church and David Yarowsky, A Method for Disambiguating Word Sense in a Large corpus, Computer and the Humanities, 1992, Vol. 26.", + "links": null + }, + "BIBREF3": { + "ref_id": "b3", + "title": "A Bayesian hybrid method for Context-Sensitive Spelling Correction", + "authors": [ + { + "first": "A", + "middle": [ + "R" + ], + "last": "Golding", + "suffix": "" + } + ], + "year": 1995, + "venue": "Proceedings of the third workshop on Very Large Corpora", + "volume": "", + "issue": "", + "pages": "39--53", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "A. R.Golding, A Bayesian hybrid method for Context-Sensitive Spelling Correction, In Proceedings of the third workshop on Very Large Corpora, pp. 
39-53, Boston, USA, 1995.", + "links": null + }, + "BIBREF4": { + "ref_id": "b4", + "title": "Semantic Classification for Patterns Containing Non-alphabet Symbols in Mandarin Text, ROCLING XII", + "authors": [ + { + "first": "Huang", + "middle": [], + "last": "Chu Ren", + "suffix": "" + } + ], + "year": 1995, + "venue": "", + "volume": "", + "issue": "", + "pages": "55--66", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Chu Ren Huang, Introduction to the Academic Sinica Balance Corpus, Proceeding of ROCLLING VII, pp. 81-99, 1995. Feng-Long Hwang, Ming-Shing Yu, Min-Jer Wu and Shyh-Yang Hwang, Semantic Classification for Patterns Containing Non-alphabet Symbols in Mandarin Text, ROCLING XII, NCTU, 1999a, pp. 55-66.", + "links": null + }, + "BIBREF5": { + "ref_id": "b5", + "title": "Sense Disambiguation of Non-alphabet Symbols in Mandarin Text Using Multiple Layer Decision Classifiers", + "authors": [ + { + "first": "Feng-Long", + "middle": [], + "last": "Hwang", + "suffix": "" + }, + { + "first": "Ming-Shing", + "middle": [], + "last": "Yu", + "suffix": "" + }, + { + "first": "Min-Jer", + "middle": [], + "last": "Wu", + "suffix": "" + }, + { + "first": "Shyh-Yang", + "middle": [], + "last": "Hwang", + "suffix": "" + } + ], + "year": 1999, + "venue": "Proceedings of 5 th Natural Language Processing Pacific Rim Symposium (NLPRS)", + "volume": "", + "issue": "", + "pages": "334--339", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Feng-Long Hwang, Ming-Shing Yu, Min-Jer Wu and Shyh-Yang Hwang, Sense Disambiguation of Non-alphabet Symbols in Mandarin Text Using Multiple Layer Decision Classifiers, Proceedings of 5 th Natural Language Processing Pacific Rim Symposium (NLPRS), Beijing China, 1999b, pp. 334-339.", + "links": null + }, + "BIBREF6": { + "ref_id": "b6", + "title": "Introduction to the Special issue on Word Sense Disambiguation: The State of the Art", + "authors": [ + { + "first": "Nancy", + "middle": [], + "last": "Ide", + "suffix": "" + }, + { + "first": "Jean", + "middle": [], + "last": "Veronis", + "suffix": "" + } + ], + "year": 1998, + "venue": "Computational Linguistics", + "volume": "24", + "issue": "1", + "pages": "1--40", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Nancy Ide and Jean Veronis, Introduction to the Special issue on Word Sense Disambiguation: The State of the Art, Computational Linguistics, vol. 24, number 1,1998,pp 1-40.", + "links": null + }, + "BIBREF7": { + "ref_id": "b7", + "title": "Ho Lee, Dae-Ho Baek, Hae-Chang Rim, Word Sense Disambiguation Based on the Information Theory", + "authors": [ + { + "first": "Daniel", + "middle": [], + "last": "Jurafsky", + "suffix": "" + }, + { + "first": "James", + "middle": [ + "H" + ], + "last": "Martin", + "suffix": "" + } + ], + "year": 1997, + "venue": "Proceedings of ROCLING X International Conference, Research on Computational Linguistics", + "volume": "", + "issue": "", + "pages": "49--58", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Daniel Jurafsky, James H. Martin, Speech and Language Processing, Printice Hall, 2000. Ho Lee, Dae-Ho Baek, Hae-Chang Rim, Word Sense Disambiguation Based on the Information Theory, Proceedings of ROCLING X International Conference, Research on Computational Linguistics, Taiwan, pp. 
49-58,1997", + "links": null + }, + "BIBREF8": { + "ref_id": "b8", + "title": "Corpus-based Statistical sense Resolution", + "authors": [ + { + "first": "Claudia", + "middle": [], + "last": "Leacock", + "suffix": "" + }, + { + "first": "Geoffery", + "middle": [], + "last": "Towell", + "suffix": "" + }, + { + "first": "Ellen", + "middle": [ + "M" + ], + "last": "Voorhees", + "suffix": "" + } + ], + "year": 1993, + "venue": "proceedings of ARPA Workshop on Human Language Technology", + "volume": "", + "issue": "", + "pages": "", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Claudia Leacock, Geoffery Towell, and Ellen M. Voorhees, Corpus-based Statistical sense Resolution, In proceedings of ARPA Workshop on Human Language Technology, San Francisco, CA, Morgan Kaufman, 1993.", + "links": null + }, + "BIBREF9": { + "ref_id": "b9", + "title": "Ambiguity and Language Learning: Computational and Cognitive Models", + "authors": [ + { + "first": "Hinrich", + "middle": [], + "last": "Schutze", + "suffix": "" + } + ], + "year": 1995, + "venue": "", + "volume": "", + "issue": "", + "pages": "", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Hinrich Schutze, Ambiguity and Language Learning: Computational and Cognitive Models, Ph. D Thesis, and Standard University, 1995.", + "links": null + }, + "BIBREF10": { + "ref_id": "b10", + "title": "A Overview of Corpus-Based Statistical-Oriented (CBSO) Techniques for Natural Language Processing", + "authors": [ + { + "first": "K", + "middle": [ + "Y" + ], + "last": "Su", + "suffix": "" + }, + { + "first": "T", + "middle": [ + "H" + ], + "last": "Chiang", + "suffix": "" + }, + { + "first": "J", + "middle": [ + "S" + ], + "last": "Chang", + "suffix": "" + } + ], + "year": 1996, + "venue": "Computational Linguistics and Chinese Language Processing", + "volume": "1", + "issue": "", + "pages": "101--157", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "K. Y. Su, T. H. Chiang, J. S. Chang, A Overview of Corpus-Based Statistical-Oriented (CBSO) Techniques for Natural Language Processing, Computational Linguistics and Chinese Language Processing, vol. 1, no. 1, pp.101-157, August 1996.", + "links": null + }, + "BIBREF11": { + "ref_id": "b11", + "title": "Word sense Disambiguation with very large neural extracted from Machine Readable Dictionaries", + "authors": [ + { + "first": "Jean", + "middle": [], + "last": "Verious", + "suffix": "" + }, + { + "first": "Nancy", + "middle": [], + "last": "Ide", + "suffix": "" + } + ], + "year": 1990, + "venue": "proceeding of COLING-90", + "volume": "", + "issue": "", + "pages": "", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Jean Verious and Nancy Ide, Word sense Disambiguation with very large neural extracted from Machine Readable Dictionaries, in proceeding of COLING-90, 1990.", + "links": null + }, + "BIBREF12": { + "ref_id": "b12", + "title": "The Content of Academia Sinica Balanced Corpus(ASBC) , Sinica Academia", + "authors": [ + { + "first": ";", + "middle": [ + "R O C" + ], + "last": "David Yarowsky", + "suffix": "" + } + ], + "year": 1995, + "venue": "", + "volume": "", + "issue": "", + "pages": "157--172", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "David Yarowsky, Homograph Disambiguation in Text-to Speech Synthesis, pp.157-172, 1997. 
Chinese Knowledge Information Processing (CKIP) Group, Technical Report: The Content of Academia Sinica Balanced Corpus(ASBC) , Sinica Academia, R.O.C., 1995.", + "links": null + } + }, + "ref_entries": { + "FIGREF1": { + "uris": null, + "type_str": "figure", + "text": "(left) accumulated score of categories for non text symbol \":\" based on the preference scoring ; category time (s 2 ) gets maximum score 6.92. (right) based on winner-take-all scoring; the category time (s 2 )", + "num": null + }, + "TABREF0": { + "type_str": "table", + "html": null, + "content": "
category | lexical patterns with non-alphabet symbol \"\uff0f\" | oral expression in Mandarin | data dis. (%)
1. date | \uff13\uff0f\uff14 (March 4th) | \u4e09\u6708\u56db\u65e5 | 15.96
2. fraction | \uff13\uff0f\uff14 (three fourths) | \u56db\u5206\u4e4b\u4e09 | 8.88
3. time (music) | \uff13\uff0f\uff14 (three-four time) | \u56db\u5206\u4e4b\u4e09\u62cd | 17.52
4. path, directory | \uff0f\uff44\uff45\uff56\uff0f\uff4e\uff55\uff4c\uff4c | \u659c\u7dda\uff44\uff45\uff56\u659c\u7dda\uff4e\uff55\uff4c\uff4c | 25.69
5. computer words | \uff29\uff0f\uff2f | Silence or \u659c\u7dda | 2.04
6. production version | \uff36\uff21\uff38\uff0f\uff36\uff2d\uff33 | Silence (longer pause) or \u659c\u7dda | 5.52
7. others | \u4e2d\uff0f\u65e5\uff0f\u97d3\u6587 (China/Japan/Korea) | Silence (longer pause) | 25.45
", + "text": "Seven sense categories and their related oral expressions of the target symbol \"/\".", + "num": null + }, + "TABREF1": { + "type_str": "table", + "html": null, + "content": "
Sense category | lexical patterns with non-alphabet symbol \"\uff1a\" | oral expression in Mandarin | data dis. (%)
\uff11. punctuation | \u512a\u9ede\uff1a\u7d93\u6fdf\u7701\u6642 | \u512a\u9ede(silence)\u7d93\u6fdf\u7701\u6642 | 32.64
\uff12. time | \uff13\uff1a\uff12\uff10PM | \u4e0b\u5348\u4e09\u9ede\u4e8c\u5341\u5206 (three twenty PM) | 11.63
\uff13. versus | \uff13\uff1a\uff12\uff10 | \u4e09\u6bd4\u4e8c\u5341 (three versus twenty) | 13.39
\uff14. telephone | TEL\uff1a\uff14\uff12\uff16\uff14\uff18\uff15\uff16 | \u96fb\u8a71(silence)\uff14\uff12\uff16\uff14\uff18\uff15\uff16 | 8.50
\uff15. expression | \u6559\u7df4\u8868\u793a\uff1a\u7167\u5e38\u9032\u884c | \u6559\u7df4\u8868\u793a(silence)\u7167\u5e38\u9032\u884c | 33.43
Table \uff13: Seven sense categories and their related oral expressions of the target symbol \"-\".
Category | lexical patterns with non-alphabet symbol \"-\" | oral expression in Mandarin | data dis. (%)
\uff11. figure, address | \u5716\uff12-\uff11 (Figure 2-1) | \u5716\uff12\u4e4b\uff11 | 7.64
\uff12. interval | \uff16-\uff19\u6708\u4efd\u71df\u696d\u6536\u5165 | \uff16\u81f3\uff19\u6708\u4efd\u71df\u696d\u6536\u5165 | 21.05
\uff13. production | \uff50\uff43-\uff43\uff49\uff4c\uff4c\uff49\uff4f\uff4e | \uff50\uff43(silence)\uff43\uff49\uff4c\uff4c\uff49\uff4f\uff4e | 17.01
\uff14. computer term | \uff25-\uff2d\uff41\uff49\uff4c | \uff25(silence)\uff2d\uff41\uff49\uff4c | 5.91
\uff15. tel. fax | \u96fb\u8a71\uff1a\uff14\uff12\uff16-\uff14\uff18\uff15\uff16 | \u96fb\u8a71\uff1a\uff14\uff12\uff16(silence)\uff14\uff18\uff15\uff16 | 21.91
\uff16. hyphen | \u767b\u8a18\u5730\u9ede-\u5716\u66f8\u9928\u524d | \u767b\u8a18\u5730\u9ede(silence)\u5716\u66f8\u9928\u524d | 24.22
\uff17. minus | \u516c\u5f0f\uff1a\uff38-\uff12\uff1d\uff12\uff10 | \u516c\u5f0f\uff1a\uff38\u6e1b\uff12\u7b49\u65bc\uff12\uff10 | 2.23
", + "text": "Five sense categories and its related oral expressions of target symbol \":\".", + "num": null + }, + "TABREF2": { + "type_str": "table", + "html": null, + "content": "
represents the token occurrences when only the two location types CH_L and CH_R are considered. Field l indicates whether the token precedes (CH_L) or follows (CH_R) the non-alphabet symbol, neglecting the token order; p and f in field l denote the preceding and following locations respectively. In our experiments, the two location schemes will be evaluated in Sections 4 and 5.
", + "text": "", + "num": null + }, + "TABREF3": { + "type_str": "table", + "html": null, + "content": "", + "text": "The token \"\u516c\u8eca\" occurs in feature database; without regarding the individual location.", + "num": null + }, + "TABREF4": { + "type_str": "table", + "html": null, + "content": "
", + "text": "where , are labeled as the token w in CH L and CH R . are the count of token occurred in CH L and CH R for the category s j in feature database. stand for the total count of occurred in CH L and CH R , which can be computed as: ,", + "num": null + }, + "TABREF6": { + "type_str": "table", + "html": null, + "content": "
scoring scheme | preference | winner-take-all
precision rate (%) | inside test | outside test | inside test | outside test
\"\uff0f\" | 99.2 | 94.6 | 92.9 | 84.8
\"\uff1a\" | 95.7 | 91.1 | 91.5 | 84.1
\"-\" | 96.8 | 85.7 | 90.8 | 83.5
average (net) | 97.2 (+5.5) | 90.0 (+5.9) | 91.7 | 84.1
", + "text": "The performance of the 2 nd decision classifier (baseline) in MLDC; employing two scoring scheme.", + "num": null + }, + "TABREF7": { + "type_str": "table", + "html": null, + "content": "
We further compare the performance of the baseline and the n-gram models. The minimum difference between them is +1.4%, for the outside test of the target symbol \":\". The baseline is superior to the 2-gram model for all target symbols. The average net gains for the inside and outside tests are 0.5% and 4.7%.
", + "text": "indicates the performance of three models: baseline with voting scheme, uni-gram and 2-gram, on the same testing data set without employing the 1 st layer decision classifier or other techniques. Comparing the 2-gram with uni-gram, it is so apparent that the former is superior to the latter. The average net results for inside and outside test are 1.3% and 4.3% respectively.", + "num": null + }, + "TABREF8": { + "type_str": "table", + "html": null, + "content": "
The numbers in parentheses denote the net performance of the baseline compared with 2-gram.
scheme | inside test | outside test
symbols | baseline | uni-gram | 2-gram | baseline | uni-gram | 2-gram
\"\uff0f\" | 99.2 (+0.3) | 97.6 | 98.9 | 94.6 (+2.0) | 90.5 | 92.6
\"\uff1a\" | 95.7 (+0.5) | 92.2 | 95.2 | 91.1 (+1.4) | 79.9 | 89.7
\"-\" | 96.8 (+0.7) | 95.9 | 96.1 | 85.7 (+9.2) | 74.3 | 76.5
average (net) | 97.2 (+0.5) | 95.4 | 96.7 | 90.0 (+4.7) | 81.0 | 85.3
", + "text": "Comparisons between our base line and n-gram (n=1,2).", + "num": null + }, + "TABREF10": { + "type_str": "table", + "html": null, + "content": "
merging the 1st classifier? | inside testing | outside testing
\"\uff0f\" without merging | 99.2 | 94.6
\"\uff0f\" merging | 99.5 (+0.3) | 97.9 (+3.3)
\"\uff1a\" without merging | 95.7 | 91.1
\"\uff1a\" merging | 98.3 (+2.6) | 95.6 (+4.5)
\"-\" without merging | 96.8 | 85.7
\"-\" merging | 98.4 (+1.6) | 92.1 (+5.4)
average (merging) | 97.7 | 94.5
", + "text": "The effectiveness of merging the 1 st and 2 nd decision classifiers", + "num": null + }, + "TABREF11": { + "type_str": "table", + "html": null, + "content": "
inside testing | outside testing
individual | chunk | individual | chunk
\" \uff0f
", + "text": "The comparison of two location schemes for each token.", + "num": null + }, + "TABREF12": { + "type_str": "table", + "html": null, + "content": "
inside testing | outside testing
character | word | character | word
\"\uff0f\" | 99.6 (+0.3) | 99.3 | 98.3 (-0.3) | 98.6
\"\uff1a\" | 99.6 (+0.4) | 99.2 | 98.1 (+1.0) | 97.1
\"-\" | 99.2 (+1.0) | 98.2 | 92.4 (+0.6) | 91.8
average | 99.4 (+0.5) | 98.9 | 95.5 (+0.4) | 95.1
", + "text": "Two token units: word and character. Each token is labeled by individual location.Currently, the elementary experiments have been implemented and several schemes in our proposed approach were evaluated. The best performance for WSD problem based on such empirical parameters can be achieved. In summary, that are the following empirical features: preference scoring, merging the 1 st and 2 nd decision classifier together, individual location (-m~+n) of token, character token. The precision rates, obtained by using the techniques above, of outside testing are 98.3%, 98.1% and 92.4% (95.5% average) for the three target symbols respectively.", + "num": null + } + } + } +} \ No newline at end of file