diff --git "a/Full_text_JSON/prefixO/json/O01/O01-2001.json" "b/Full_text_JSON/prefixO/json/O01/O01-2001.json" new file mode 100644--- /dev/null +++ "b/Full_text_JSON/prefixO/json/O01/O01-2001.json" @@ -0,0 +1,2146 @@ +{ + "paper_id": "O01-2001", + "header": { + "generated_with": "S2ORC 1.0.0", + "date_generated": "2023-01-19T08:09:38.299987Z" + }, + "title": "Improving Translation Selection with a New Translation Model Trained by Independent Monolingual Corpora", + "authors": [ + { + "first": "Ming", + "middle": [], + "last": "Zhou", + "suffix": "", + "affiliation": { + "laboratory": "", + "institution": "Asia. + Tsinghua University", + "location": {} + }, + "email": "" + }, + { + "first": "Ding", + "middle": [], + "last": "Yuan", + "suffix": "", + "affiliation": {}, + "email": "" + }, + { + "first": "Changning", + "middle": [], + "last": "+1", + "suffix": "", + "affiliation": {}, + "email": "" + }, + { + "first": "", + "middle": [], + "last": "Huang", + "suffix": "", + "affiliation": { + "laboratory": "", + "institution": "Asia. + Tsinghua University", + "location": {} + }, + "email": "" + } + ], + "year": "", + "venue": null, + "identifiers": {}, + "abstract": "We propose a novel statistical translation model to improve translation selection of collocation. In the statistical approach that has been popularly applied for translation selection, bilingual corpora are used to train the translation model. However, there exists a formidable bottleneck in acquiring large-scale bilingual corpora, in particular for language pairs involving Chinese. In this paper, we propose a new approach to training the translation model by using unrelated monolingual corpora. First, a Chinese corpus and an English corpus are parsed with dependency parsers, respectively, and two dependency triple databases are generated. Then, the similarity between a Chinese word and an English word can be estimated using the two monolingual dependency triple databases with the help of a simple Chinese-English dictionary. This cross-language word similarity is used to simulate the word translation probability. Finally, the generated translation model is used together with the language model trained with the English dependency database to realize translation of Chinese collocations into English. To demonstrate the effectiveness of this method, we performed various experiments with verb-object collocation translation. The experiments produced very promising results.", + "pdf_parse": { + "paper_id": "O01-2001", + "_pdf_hash": "", + "abstract": [ + { + "text": "We propose a novel statistical translation model to improve translation selection of collocation. In the statistical approach that has been popularly applied for translation selection, bilingual corpora are used to train the translation model. However, there exists a formidable bottleneck in acquiring large-scale bilingual corpora, in particular for language pairs involving Chinese. In this paper, we propose a new approach to training the translation model by using unrelated monolingual corpora. First, a Chinese corpus and an English corpus are parsed with dependency parsers, respectively, and two dependency triple databases are generated. Then, the similarity between a Chinese word and an English word can be estimated using the two monolingual dependency triple databases with the help of a simple Chinese-English dictionary. This cross-language word similarity is used to simulate the word translation probability. 
+ "body_text": [
+ {
+ "text": "Selecting the appropriate word translation among several options is a key technology of machine translation. For example, the Chinese verb \"订\" is translated in different ways depending on its object, as shown in the following:",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Introduction",
+ "sec_num": "1."
+ },
+ {
+ "text": "订 报纸 → subscribe to a newspaper; 订 计划 → make a plan; 订 旅馆 → book a hotel; 订 车票 → reserve a ticket; 订 时间 → determine the time",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Introduction",
+ "sec_num": "1."
+ },
+ {
+ "text": "In recent years, there has been increasing interest in applying statistical approaches to various machine translation tasks, from MT system mechanisms to translation knowledge acquisition. For translation selection, most research has applied statistical translation models. In such statistical translation models, bilingual corpora are needed to obtain the word translation probabilities as well as the translation templates. However, for quite a few language pairs, large bilingual corpora rarely exist, while large monolingual corpora are easy to acquire. If we could use monolingual corpora to estimate the translation model, we could alleviate the burden of collecting bilingual corpora and find an alternative approach to translation selection.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Introduction",
+ "sec_num": "1."
+ },
+ {
+ "text": "We propose a novel approach to this problem in a Chinese-English machine translation module that is to be used for cross-language information retrieval. Our method is based on the intuition that although the Chinese and English languages have different definitions of dependency relations, the main dependency relations, such as subject-verb, verb-object, adjective-noun and adverb-verb, tend to have a strong, direct correspondence. This assumption can be used to estimate the word translation probability. Our proposed method works as follows. First, a Chinese corpus and an English corpus are parsed with a Chinese dependency parser and an English dependency parser, respectively, and two dependency triple databases are generated as the result. Second, the word similarity between a Chinese word and an English word is estimated from these two monolingual dependency triple databases with the help of a simple Chinese-English dictionary. This cross-language word similarity is used as a surrogate for the word translation model. At the same time, the probability of a triple in English can be estimated from the English triple database. Finally, the word translation model, working together with the triple probability, realizes a new translation framework. Our experiments showed that this new translation model achieved promising results in improving translation selection. The unique characteristics of our method are: 1) the use of two monolingual corpora to estimate the translation model, and 2) the use of dependency triples as the basis of the method.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Introduction",
+ "sec_num": "1."
+ },
+ {
+ "text": "The remainder of this paper is organized as follows. In Section 2, we give a detailed description of our new translation model. In Section 3, we describe the training process of the new model, focusing on the construction of the dependency triple databases for English and Chinese. The experiments and the evaluation of the new method are reported in Section 4. In Section 5, some related works are introduced. Finally, in Section 6, we draw conclusions and discuss future work.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Introduction",
+ "sec_num": "1."
+ },
+ {
+ "text": "In this section, we will describe the proposed translation model. First, we will report our observations from a sample word-aligned bilingual corpus in order to verify our assumption. After that, we will introduce the method for estimating the cross-language word similarity by means of two monolingual corpora. Finally, we will give a formal description of the new translation model.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "A New Statistical Machine Translation Model",
+ "sec_num": "2."
+ },
+ {
+ "text": "A dependency triple consists of a head, a dependant, and a dependency relation between the head and the dependant. Using a dependency parser, a sentence can be analyzed to obtain a set of dependency triples of the following form: trp = (w1, rel, w2), which means that word w1 has dependency relation rel with word w2.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Dependency Correspondence between Chinese and English",
+ "sec_num": "2.1"
+ },
+ {
+ "text": "For example, for the English sentence \"I have a brown dog\", a dependency parser obtains a set of triples as follows: (1) a. I have a brown dog b. (have, sub, I), (I, sub-of, have), (have, obj, dog), (dog, obj-of, have), (dog, adj, brown), (brown, adj-of, dog), (dog, det, a), (a, det-of, dog) 2. Similarly, for the Chinese sentence \"国家颁布了计划\", we can get the following dependency triples with a dependency parser: (2) a. 国家颁布了计划 b. (颁布, sub, 国家), (国家, sub-of, 颁布), (颁布, obj, 计划), (计划, obj-of, 颁布), (颁布, comp, 了), (了, comp-of, 颁布) 3.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Dependency Correspondence between Chinese and English",
+ "sec_num": "2.1"
+ },
+ {
+ "text": "2 The standard expression of the dependency parsing result is: (have, sub, I), (have, obj, dog), (dog, adj, brown), (dog, det, a).",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Dependency Correspondence between Chinese and English",
+ "sec_num": null
+ },
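To make the triple representation concrete, here is a minimal Python sketch (not from the paper); the parser output is hard-coded to the triples of example (1), and the `Triple` name is introduced here for illustration.

```python
from collections import namedtuple

# A dependency triple trp = (w1, rel, w2): head word, relation, dependent word.
Triple = namedtuple("Triple", ["w1", "rel", "w2"])

# Hypothetical parser output for "I have a brown dog", in the standard
# (head, rel, dependent) form used in the paper.
triples = [
    Triple("have", "sub", "I"),
    Triple("have", "obj", "dog"),
    Triple("dog", "adj", "brown"),
    Triple("dog", "det", "a"),
]

for t in triples:
    print(f"({t.w1}, {t.rel}, {t.w2})")
```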
+ {
+ "text": "Among all the dependency relations in Chinese and in English, the key dependency relations are subject-verb (denoted as sub), verb-object (denoted as obj), adjective-noun (denoted as adj) and adverb-verb (denoted as adv). Our intuitive assumption is that although the Chinese and English languages have different schemes of dependency relations, these key dependency relations tend to have a strong correspondence. For instance, a word pair with a subject-verb relation in Chinese can normally be translated into a word pair with a subject-verb relation in English. Formally speaking, for a triple (A, D, B) in Chinese, where A and B are words and D is one of the key dependency relations mentioned above, the translation of the triple into English can be expressed as (A', D', B') 4, where A' and B' are the translations of A and B, respectively, and D' is the dependency relation between A' and B' in English. Our assumption is that although D and D' may differ in denotation, they can be mapped directly in most cases.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Dependency Correspondence between Chinese and English",
+ "sec_num": "2.1"
+ },
+ {
+ "text": "In order to verify our assumption, we conducted an investigation with a Chinese-English bilingual corpus 5. The bilingual corpus, consisting of 60,000 pairs of Chinese and English sentences selected from newspapers, novels, general bilingual dictionaries and software product manuals, was aligned manually at the word level. An example of the word-aligned corpus is given in Table 1. Each word is identified with a number in order to indicate the word alignment information. 3 The standard expression of the dependency parsing result is: (颁布, sub, 国家), (颁布, obj, 计划), (颁布, comp, 了). 4 Sometimes, to get a better translation, a triple in one language is not translated into a triple in the other language; but except in very extreme cases, a triple-to-triple translation will still be acceptable. 5 This corpus, produced by Microsoft Research Asia, is currently reserved for Microsoft internal use only.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Dependency Correspondence between Chinese and English",
+ "sec_num": "2.1"
+ },
+ {
+ "text": "Table 1. The word-aligned bilingual corpus. Chinese sentence: 当/1 斯科特/2 抵达/3 南极/4 的/5 时候/6 ，/7 他/8 发现/9 阿蒙森/10 比/11 他/12 领先/13 。/14",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Dependency Correspondence between Chinese and English",
+ "sec_num": null
+ },
+ {
+ "text": "English sentence: When/1 Scott/2 reached/3 the/4 South/5 Pole/6 ,/7 he/8 found/9 Amundsen/10 had/11 anticipated/12 him/13 ./14",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Dependency Correspondence between Chinese and English",
+ "sec_num": null
+ },
+ {
+ "text": "Aligned word pairs: (1,5,6:1); (2:2); (3:3); (4:4,5,6); (7:7); (8:8); (9:9); (10:10); (11:nil); (12:13); (13:12); (14:14);",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Dependency Correspondence between Chinese and English",
+ "sec_num": null
+ },
+ {
+ "text": "To obtain statistics of the dependency relation correspondence, we parsed 10,000 sentence pairs with the English parser Minipar [Lin 1993, Lin 1994] and the Chinese parser BlockParser [Zhou 2000]. The parsing results were expressed as dependency triples. We then mapped the dependency relations so that we could count the correspondences between an English dependency relation and a Chinese dependency relation. More than 80% of the subject-verb, adjective-noun and adverb-verb dependency relations could be mapped, while the verb-object correspondence was not as high. We show the verb-object correspondence results in Table 2. \"E-C Positive\" means an English verb-object was translated into a Chinese verb-object; \"E-C Negative\" means an English verb-object was not translated into a Chinese verb-object. The E-C Positive Rate reached 64.8%, and the C-E Positive Rate reached 64.3%. These statistics show that our correspondence assumption is reasonable but not strong. Now we will examine the reasons why some of the dependency relations cannot be mapped directly.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Dependency Correspondence between Chinese and English",
+ "sec_num": "2.1"
+ },
+ {
+ "text": "Example from Table 3: ...found it pleasant to escape to a time when life, though hard, was relatively simple.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Dependency Correspondence between Chinese and English",
+ "sec_num": null
+ },
+ {
+ "text": "From Table 3, we can see that \"negative\" mapping has several causes. The most important are that a Chinese verb-object can be translated into a single English verb (e.g., an intransitive verb) or into a verb+prep+object construction. If these two mappings (as shown in Table 4) are also considered reasonable correspondences, then the mapping rate increases significantly. As seen in Table 5, the E-C Positive rate and the C-E Positive rate reached 82.71% and 83.87%, respectively. This implies that all four key dependency relations can be mapped very well, showing that our assumption is correct. 
This fact will be used to estimate the word translation model using two monolingual corpora. The method will be given in the following subsections.", + "cite_spans": [], + "ref_spans": [ + { + "start": 5, + "end": 12, + "text": "Table 3", + "ref_id": "TABREF1" + }, + { + "start": 274, + "end": 281, + "text": "Table 4", + "ref_id": "TABREF2" + }, + { + "start": 394, + "end": 401, + "text": "Table 5", + "ref_id": null + } + ], + "eq_spans": [], + "section": "I", + "sec_num": null + }, + { + "text": "We will next describe our approach to estimating the word translation likelihood based on the triple correspondence assumption with the help of a simple Chinese-English dictionary. The key idea is to calculate \"cross-language similarity\", which is an extension of word similarity within one language.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Cross-Language Word Similarity", + "sec_num": "2.2" + }, + { + "text": "Several statistical approaches to computing word similarity have been proposed. In these approaches, a word is represented by a word co-occurrence vector in which each feature corresponds to one word in the lexicon. The value of a feature specifies the frequency of joint occurrence of the two words in some particular relations and/or in a certain window size in the text. The degree of similarity between a pair of words is computed using a certain similarity (or distance) measure that is applied to the corresponding pairs of vectors. This similarity computation method relies on the assumption that the meanings of the words are related to their co-occurrence patterns with other words in the text. Given this assumption, we can expect that words which have similar co-occurrence patterns will resemble each other in meaning.", + "cite_spans": [], + "ref_spans": [], + "eq_spans": [], + "section": "Cross-Language Word Similarity", + "sec_num": "2.2" + }, + { + "text": "Different types of word co-occurrences have been examined with respect to computing word similarity. They can in general be classified into two types, which refer to the co-occurrence of words within the specified syntactic relations, and the co-occurrence of words that have non-grammatical relations in a certain window in the text. The set of co-occurrences of a word within syntactic relations strongly reflects its semantic properties. Lin [1998b] defined lexical co-occurrences within syntactic relations, such as subject-verb, verb-object, adj-noun, etc. These types of co-occurrences can be used to compute the similarity of two words.", + "cite_spans": [ + { + "start": 441, + "end": 452, + "text": "Lin [1998b]", + "ref_id": "BIBREF11" + } + ], + "ref_spans": [], + "eq_spans": [], + "section": "Cross-Language Word Similarity", + "sec_num": "2.2" + }, + { + "text": "While most methods proposed up to now are for computing the word similarity within one language, we believe that some of these ideas can be extended to computation of \"cross-language word similarity\". Cross-language word similarity denotes the commonality between one word in a language and one word in another language. In each language, a word is represented by a vector of features in which each feature corresponds to one word in the lexicon. 
The key to computing the cross-language similarity is to determine how to calculate the similarity of two vectors whose features are words in different languages.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
+ {
+ "text": "Based on the triple correspondence assumption made in Section 2.1, dependency triples can be used to compute the cross-language similarity. In each language, a word is represented by a vector of the dependency triples that co-occur with the word in a sentence. Our approach assumes that a word in one language is similar to a word in another language if their vectors are similar in some sense. In addition, we can use a bilingual lexicon to bridge the words in the two vectors when computing the cross-language similarity.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
+ {
+ "text": "Our similarity measure is an extension of the measure proposed in [Lin, 1998b], where the similarity between two words is defined as the amount of information contained in the commonality between the words, divided by the sum of the information in the descriptions of the two words.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
+ {
+ "text": "In [Lin, 1998b]'s work, a dependency parser was used to extract dependency triples. For a word w1, a triple (w1, rel, w2) represents a feature of w1, which means that w1 can be used in relation rel with word w2.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
+ {
+ "text": "The description of a word w consists of the frequency counts of all the dependency triples that match the pattern (w, *, *).",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
+ {
+ "text": "An occurrence of a dependency triple (w1, rel, w2) can be regarded as the co-occurrence of three events [Lin, 1998b]: A: a randomly selected word is w1; B: a randomly selected dependency type is rel; C: a randomly selected word is w2.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
+ {
+ "text": "According to [Lin, 1998b], if we assume that A and C are conditionally independent given B, then the information contained in a triple with frequency count ||w1, rel, w2|| = f(w1, rel, w2) = c can be computed as follows 6: I(w1, rel, w2) = log(P_MLE(A,B,C)) - log(P_MLE(B) × P_MLE(A|B) × P_MLE(C|B)) (1), where: P_MLE(A|B) = f(w1, rel, *) / f(*, rel, *) (2); P_MLE(C|B) = f(*, rel, w2) / f(*, rel, *) (3); P_MLE(B) = f(*, rel, *) / f(*, *, *) (4); P_MLE(A,B,C) = f(w1, rel, w2) / f(*, *, *) (5); and f(x) denotes the frequency of x, with * as a wildcard for all possible values.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
+ {
+ "text": "Finally, we have [Lin, 1998b]: I(w1, rel, w2) = log( f(w1, rel, w2) × f(*, rel, *) / ( f(w1, rel, *) × f(*, rel, w2) ) ) (6). Let T(w) be the set of pairs (rel, w') such that I(w, rel, w') is positive. Then the similarity between two words w1 and w2 within one language is defined as follows [Lin, 1998b]: Sim(w1, w2) = [ Σ_{(rel,w) ∈ T(w1) ∩ T(w2)} ( I(w1, rel, w) + I(w2, rel, w) ) ] / [ Σ_{(rel,w) ∈ T(w1)} I(w1, rel, w) + Σ_{(rel,w) ∈ T(w2)} I(w2, rel, w) ] (7).",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
+ {
+ "text": "Now let us see how this can be extended to the cross-language case. Similarly, for a Chinese word wC and an English word wE, let T(wC) be the set of pairs (relC, w'C) such that I(wC, relC, w'C) is positive, and let T(wE) be the set of pairs (relE, w'E) such that I(wE, relE, w'E) is positive. Then we can similarly define the cross-language word similarity as follows: Sim(wC, wE) = I_common(wC, wE) / [ Σ_{(relC,w'C) ∈ T(wC)} I(wC, relC, w'C) + Σ_{(relE,w'E) ∈ T(wE)} I(wE, relE, w'E) ] (8), where I_common(wC, wE) denotes the total information contained in the commonality between the features of wC and wE.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
+ {
+ "text": "6 Please see [Lin, 1998b] for the detailed derivation process of this formula.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": null
+ },
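The quantities in formulas (6) and (7) can be sketched as follows. This is a hedged illustration over a toy frequency table, not the authors' implementation; the names `freq`, `info`, `T` and `sim` are introduced here.

```python
import math
from collections import defaultdict

# Toy triple database: maps (w1, rel, w2) to its frequency count f(w1, rel, w2).
freq = defaultdict(int)
freq[("subscribe", "obj", "newspaper")] = 20
freq[("read", "obj", "newspaper")] = 50
freq[("read", "obj", "book")] = 120

def f(w1=None, rel=None, w2=None):
    """Marginal counts; a None argument plays the role of the * wildcard."""
    return sum(c for (a, r, b), c in freq.items()
               if (w1 is None or a == w1)
               and (rel is None or r == rel)
               and (w2 is None or b == w2))

def info(w1, rel, w2):
    """Formula (6): log( f(w1,rel,w2)*f(*,rel,*) / (f(w1,rel,*)*f(*,rel,w2)) )."""
    num = freq[(w1, rel, w2)] * f(rel=rel)
    den = f(w1=w1, rel=rel) * f(rel=rel, w2=w2)
    return math.log(num / den) if num > 0 and den > 0 else float("-inf")

def T(w):
    """Feature set of w: the pairs (rel, w') whose information is positive."""
    return {(r, b) for (a, r, b) in freq if a == w and info(a, r, b) > 0}

def sim(w1, w2):
    """Formula (7): information shared by w1 and w2 over their total information."""
    common = T(w1) & T(w2)
    num = sum(info(w1, r, b) + info(w2, r, b) for (r, b) in common)
    den = (sum(info(w1, r, b) for (r, b) in T(w1)) +
           sum(info(w2, r, b) for (r, b) in T(w2)))
    return num / den if den else 0.0

print(sim("subscribe", "read"))
```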
+ {
+ "text": "Actually, we have three different methods for calculating I_common(wC, wE), depending on the direction in which features are mapped through the bilingual lexicon. 1) Map Chinese into English: we define T_{C→E}(wC) = { (relE, w'E) | (relC, w'C) ∈ T(wC), relE corresponds to relC, w'E ∈ Trans_{C→E}(w'C) }, i.e., each Chinese feature of wC is mapped into the English feature space, and the matched features are those in T_{C→E}(wC) ∩ T(wE). 2) Map English into Chinese: symmetrically, we define T_{E→C}(wE) = { (relC, w'C) | (relE, w'E) ∈ T(wE), relC corresponds to relE, w'C ∈ Trans_{E→C}(w'E) }, and the matched features are those in T(wC) ∩ T_{E→C}(wE). 3) Map in both directions: we define T^C_{C↔E}(wC) = T(wC) ∪ T_{E→C}(wE) and T^E_{C↔E}(wE) = T(wE) ∪ T_{C→E}(wC), and a feature matched in either direction is counted.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
+ {
+ "text": "Then, we can define the cross-language word similarity of wC and wE in the following three ways, where the denominator in all three cases is the total information Σ_{(relC,w'C) ∈ T(wC)} I(wC, relC, w'C) + Σ_{(relE,w'E) ∈ T(wE)} I(wE, relE, w'E): Sim_{C→E}(wC, wE) = [ Σ_{(relC,w'C) ∈ T(wC) matched via C→E} I(wC, relC, w'C) + Σ_{(relE,w'E) ∈ T_{C→E}(wC) ∩ T(wE)} I(wE, relE, w'E) ] / denominator (9); Sim_{E→C}(wC, wE) = [ Σ_{(relC,w'C) ∈ T(wC) ∩ T_{E→C}(wE)} I(wC, relC, w'C) + Σ_{(relE,w'E) ∈ T(wE) matched via E→C} I(wE, relE, w'E) ] / denominator (10); Sim_{C↔E}(wC, wE) = [ Σ over the features of wC and wE matched in either direction ] / denominator (11).",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
+ {
+ "text": "Similarity (9) can be seen as the likelihood of translating a Chinese word into an English word, similarity (10) can be seen as the likelihood of translating an English word into a Chinese word, and similarity (11), a balanced and symmetric formula, can be seen as the \"neutral\" similarity between a Chinese word and an English word.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Cross-Language Word Similarity",
+ "sec_num": "2.2"
+ },
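A hedged sketch of the C→E variant (formulas (8)-(9)). It assumes the `info_c`/`T_c` and `info_e`/`T_e` functions of the previous sketch, built over the Chinese and English triple databases respectively; `rel_map` and `trans_ce` are a toy relation mapping and a toy C-E lexicon introduced here.

```python
rel_map = {"sub": "sub", "obj": "obj", "adv": "adv"}   # assumed direct mapping
trans_ce = {"报纸": ["newspaper", "paper"], "书": ["book"]}

def map_feature_ce(rel_c, w_c):
    """Image of a Chinese feature (rel_c, w_c) in the English feature space."""
    rel_e = rel_map.get(rel_c)
    return {(rel_e, w_e) for w_e in trans_ce.get(w_c, []) if rel_e}

def sim_c2e(w_c, w_e):
    """Formula (9)-style similarity: the information of the features matched
    through the C->E mapping, over the total information of both words."""
    matched = 0.0
    for (rel_c, wc2) in T_c(w_c):
        images = map_feature_ce(rel_c, wc2) & T_e(w_e)
        if images:
            matched += info_c(w_c, rel_c, wc2)
            matched += sum(info_e(w_e, r, w) for (r, w) in images)
    total = (sum(info_c(w_c, r, w) for (r, w) in T_c(w_c)) +
             sum(info_e(w_e, r, w) for (r, w) in T_e(w_e)))
    return matched / total if total else 0.0
```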
+ {
+ "text": "We will next discuss how to build a translation model that solves the translation selection problem in dependency triple translation. Suppose we want to translate a Chinese dependency triple c = (wC1, relC, wC2) into an English dependency triple e = (wE1, relE, wE2). This is equivalent to finding the triple e_max that maximizes P(e|c); according to the statistical translation model [Brown, 1993], e_max = argmax_e P(e) × P(c|e). P(e) can be estimated using formula (5), which can be rewritten as P_MLE(wE1, relE, wE2) = f(wE1, relE, wE2) / f(*, *, *). In addition, we have P(c|e) = P(wC1 | relC, e) × P(wC2 | relC, e) × P(relC | e). We suppose that the selection of a word in translation is independent of the type of dependency relation; therefore, we can assume that P(wC1 | relC, e) = P(wC1 | wE1) and P(wC2 | relC, e) = P(wC2 | wE2), and we simulate these word translation probabilities with the cross-language similarities. This yields the selection score 7: Likelihood(c|e) = Sim_{E→C}(wC1, wE1) × Sim_{E→C}(wC2, wE2) × P(relC | e) (14). P(relC | e) is a parameter that in principle depends on the specific words, but it can be simplified as P(relC | e) = P(relC | relE). Then we have Likelihood(c|e) = Sim_{E→C}(wC1, wE1) × Sim_{E→C}(wC2, wE2) × P(relC | relE). In this formula, we use the English dependency triple set to estimate P(e), and we use the English and Chinese dependency triple sets, which are independent of each other, to estimate the translation model based on our dependency correspondence assumption. In the whole process, no manually aligned or tagged corpus is needed.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Translation Selection Model Based on Cross-Language Similarity",
+ "sec_num": "2.3"
+ },
+ {
+ "text": "To estimate the cross-language similarity and the target-language triple probability, both the Chinese and the English dependency triple sets need to be built. Similar to [Lin 1998b], we use parsers to extract dependency triples from the text corpora. The workflow of constructing the dependency triple databases is depicted in Figure 1. (Figure 1. The flowchart of constructing the dependency triple database.)",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model Training",
+ "sec_num": "3."
+ },
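A minimal sketch of the Figure 1 workflow, assuming a `parse` hook that stands in for a dependency parser such as Minipar or BlockParser; the database is a simple frequency counter.

```python
from collections import Counter

def build_triple_db(sentences, parse):
    """Figure 1 workflow sketch: parse each sentence into dependency triples
    and accumulate one frequency count per triple type."""
    db = Counter()
    for sentence in sentences:
        for (w1, rel, w2) in parse(sentence):  # parse() stands in for the parser
            db[(w1, rel, w2)] += 1             # existing triple: frequency += 1
    return db

# Toy usage with a stub parser:
def toy_parse(sentence):
    return [("subscribe", "obj", "newspaper")] if "newspaper" in sentence else []

print(build_triple_db(["subscribe to a newspaper"], toy_parse))
```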
+ {
+ "text": "As shown in Fig. 1, each sentence from the text corpus is parsed by a dependency parser, and a set of dependency triples is generated. Each triple is put into the triple database. If an instance of a triple type already exists in the triple database, its frequency is incremented by one. After all the sentences have been parsed, we obtain a triple database with a large number of triples. Since the parser cannot be expected to be 100% correct, some parsing mistakes will inevitably be introduced into the triple database. It would be desirable to remove the noisy triples as Lin did [1998a], but in our experiment, we did not apply any noise filtering.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model Training",
+ "sec_num": "3."
+ },
+ {
+ "text": "Our English text corpus consists of 750 MB of text from the Wall Street Journal (1980-1990), and our Chinese text corpus contains 1,200 MB of text from the People's Daily (1980-1998). The English parser we used was Minipar [Lin 1993, Lin 1994]. Minipar is a broad-coverage, principle-based parser with a lexicon of more than 90,000 words. The Chinese parser we used was BlockParser [Zhou 2000], a robust rule-based parser that breaks a Chinese sentence up into \"blocks\" represented by headwords and then applies syntactic dependency analysis to the \"blocks\". It recognizes 17 POS tags and 19 grammatical relations, and 220,000 entries are registered in its parsing lexicon.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model Training",
+ "sec_num": "3."
+ },
+ {
+ "text": "The 750 MB English newspaper corpus was parsed within 50 hours on a machine with four Pentium III 800 MHz CPUs, and the 1,200 MB Chinese newspaper corpus was parsed in 110 hours on the same machine. We extracted the dependency triples from the parsed corpora. There were 19 million occurrences of dependency triples in the English parsed corpus and 33 million occurrences of dependency triples in the Chinese parsed corpus. As a result, we acquired two databases of dependency triples for the two languages. These two databases served as the information source for training the translation model and estimating the triple probability, as described in the sections above. Table 6. Corpus and triple database statistics (English: Wall Street Journal, 1980-1990, 750 MB, 19,000,000 triple occurrences, Minipar; Chinese: People's Daily, 1980-1998, 1,200 MB, 33,000,000 triple occurrences, BlockParser).",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model Training",
+ "sec_num": "3."
+ },
+ {
+ "text": "The E-C and C-E dictionaries used here are the bilingual lexicons used in the machine translation systems developed by the Harbin Institute of Technology 8. The E-C lexicon contains 78,197 entries, and the C-E lexicon contains 74,299 entries.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model Training",
+ "sec_num": "3."
+ },
+ {
+ "text": "Since in this paper we are primarily interested in the selection of translations of verbs, we utilized only three types of dependency relations for the similarity estimation, i.e., verb-object, verb-adverb and subject-verb 9. The symmetric triples \"object-of\", \"adverb-of\" and \"subject-of\" were also used in calculating the translation model and the triple probability. Table 7 shows the statistics of the occurrences of the three kinds of dependency relations. Each entry in the triple database records a headword, the dependency relation, the word w that constructs the dependency relation with the headword, and the frequency count. We then extracted the word lists from the Chinese triple set and the English triple set and calculated the similarity of each Chinese word and each English word; we only calculated the similarities between the verbs and between the nouns of the two languages. As a result, a large table recording the cross-language similarities was constructed, as shown in Table 8, where S(i,j) is the similarity between a Chinese word Ci and an English word Ej. Note that we only applied similarity formula (10), since we were interested in the translation likelihood from an English word to a Chinese word, as explained in the previous section.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model Training",
+ "sec_num": "3."
+ },
+ {
+ "text": "Table 8. The cross-language word similarity table. Rows correspond to the Chinese words C1, ..., Cn, columns to the English words E1, ..., Em, and the entry in row i and column j is S(i,j) = Sim(Ci, Ej).",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model Training",
+ "sec_num": "3."
+ },
+ {
+ "text": "In this paper, we focus on verb-object triple translation experiments to demonstrate how translation selection can be improved. We conducted a set of experiments with several translation models on verb-object translation. As the baseline experiment, Model A selected the highest-frequency translations of a verb and its object as the translation output. Model B utilized the target-language triple probability but did not apply the translation model. Model C utilized both the target-language triple probability and the translation model. The verb-object translation answer sets were built manually by English experts from the Department of Foreign Languages of Beijing University. For a given triple, all of the plausible translations are included in the translation evaluation set. Samples of the evaluation sets are shown in Table 9. To test our method, we conducted a series of translation experiments with incrementally enhanced resources. All the translation experiments reported in this paper were conducted on Chinese-English verb-object triple translation.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Translation Experiments",
+ "sec_num": "4."
+ },
+ {
+ "text": "As the baseline for our experiments, Model A simply selected the translation word in the bilingual lexicon that had the highest frequency in the English corpus. It translated the verb and the object separately; neither the triple probability nor the translation model was used.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model A (selecting the highest-frequency translation)",
+ "sec_num": null
+ },
+ {
+ "text": "Model B selected the English triple with the highest target-language triple probability: e_max = argmax_{wE1 ∈ Tran(wC1), wE2 ∈ Tran(wC2)} P(wE1, verb-object, wE2), where Tran(w) denotes the set of translations of w given by the bilingual lexicon.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model B (selecting the translation that best fits the triple probability)",
+ "sec_num": null
+ },
+ {
+ "text": "In Model C, both the translation model and the triple probability were considered. We have e_max = argmax_e P(e) × P(c|e) = argmax_{wE1 ∈ Tran(wC1), wE2 ∈ Tran(wC2)} P(wE1, verb-object, wE2) × Sim(wC1, wE1) × Sim(wC2, wE2).",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model C (selecting the translation which fits both the triple probability and the translation model best)",
+ "sec_num": null
+ },
+ {
+ "text": "We designed a series of evaluations to test the above models. In this subsection, the evaluation results will be reported. To achieve an objective evaluation, we designed three kinds of test sets: 1) high-frequency verbs with their objects, 2) low-frequency verbs with their objects, and 3) low-frequency verb-object triples.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Evaluation",
+ "sec_num": "4.2"
+ },
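A hedged sketch of Model C's selection rule; `tran`, `db`, `total` and `sim` are illustrative stand-ins for the bilingual lexicon, the English triple database, its total triple count, and the cross-language similarity function.

```python
def model_c(w_c1, w_c2, tran, db, total, sim):
    """Model C sketch: choose the English verb-object pair maximizing
    P(w_e1, obj, w_e2) * Sim(w_c1, w_e1) * Sim(w_c2, w_e2)."""
    best, best_score = None, 0.0
    for w_e1 in tran.get(w_c1, []):
        for w_e2 in tran.get(w_c2, []):
            p_e = db.get((w_e1, "obj", w_e2), 0) / total  # MLE triple probability
            score = p_e * sim(w_c1, w_e1) * sim(w_c2, w_e2)
            if score > best_score:
                best, best_score = (w_e1, "obj", w_e2), score
    return best

# Toy usage (illustrative values only):
db = {("subscribe", "obj", "newspaper"): 20, ("book", "obj", "newspaper"): 1}
print(model_c("订", "报纸", {"订": ["subscribe", "book"], "报纸": ["newspaper"]},
              db, 1000, lambda c, e: 0.5))
```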
+ {
+ "text": "Note that each selected verb should take a simple noun as its object; verbs like \"是\" (be), \"使\" (make), \"请\" (invite) and \"认为\" (think) were not used, since their translations do not rely directly on their objects.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Evaluation",
+ "sec_num": "4.2"
+ },
+ {
+ "text": "We wanted to observe the performance of these models in translating verb-object pairs whose verbs are high-frequency ones. We randomly selected 53 high-frequency verbs (see Appendix I) and randomly extracted verb-object triples containing them from the Chinese triple database; in total, 730 triples were extracted. The translation results obtained using the various models are shown in Table 10. From these results, we can see that Model B and Model C achieved considerably better translation precision than Model A did, and Model C worked a little better than Model B.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Case-I: High-frequency verbs with their objects",
+ "sec_num": null
+ },
+ {
+ "text": "We also tested the translation of triples composed of a low-frequency verb and a noun. We randomly selected 23 low-frequency verbs (see Appendix II) and randomly extracted 108 verb-object triples containing these words from the Chinese triple database. The translation results obtained using the various models are shown in Table 11.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Case-II: Translation of low-frequency verbs with their objects",
+ "sec_num": null
+ },
+ {
+ "text": "We also tested the translation of low-frequency triples. First, we selected the following objects: \"国家, 同志, 企业, 政府, 记者, 会议, 经济, 群众, 农民, 市场, 政策, 公司, 家, 条件, 地区, 基础, 书, 时间, 项目, 人员, 利益\". Then we selected the triples that contained the above words and occurred fewer than 5 times. Since the set of such low-frequency triples was very large, we randomly selected 340 triples as the evaluation set. The results are shown in Table 12. We can see that our methods obtained very promising results in all the cases.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Case-III: Translation of low-frequency triples",
+ "sec_num": null
+ },
+ {
+ "text": "One of the reasons for translation mistakes is the out-of-vocabulary (OOV) problem, i.e., the best translation is not covered by the lexicon, which seriously affects the translation quality. For example, \"展开\" has two translations in the translation lexicon: \"unfold\" and \"develop\". However, the triple (展开, verb-object, 进攻), which should be translated as (launch, verb-object, attack), cannot be produced properly with the translations given by the dictionary. 
To solve this problem, we used new methods to obtain additional possible translations based on the translations defined in the dictionary, and we obtained very interesting results.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Accommodating Lexical Gaps (OOV)",
+ "sec_num": "4.3"
+ },
+ {
+ "text": "For a Chinese verb-object triple c = (wC1, verb-object, wC2), we can expand the candidate translations by employing the E-C lexicon and the C-E lexicon in circles: let x be a Chinese word, let x' be an English translation of x defined in the C-E lexicon, let x'' be a Chinese translation of x' defined in the E-C lexicon, and let x''' be an English translation of x'' defined in the C-E lexicon; every such x''' is taken as an additional candidate translation of x. Taking \"说\" as an example, \"talk\" is one of its translations in the C-E lexicon, and \"说话\" is one Chinese translation of \"talk\" in the E-C lexicon. Looking up the C-E dictionary again, \"speak\" is one translation of \"说话\". In this way, \"说\" is translated as \"speak\" in addition to the original translation \"talk\". Model D can be described formally as follows: e_max = argmax_{wE1 ∈ Tran1(wC1), wE2 ∈ Tran1(wC2)} P(wE1, verb-object, wE2) × Sim(wC1, wE1) × Sim(wC2, wE2), where Tran1(w) is the set of translations of w expanded through the lexicon circles.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model D (Translation expansion using a bilingual lexicon)",
+ "sec_num": null
+ },
+ {
+ "text": "Model E expands the translations of the verb by using the English triple database directly: any English verb that takes a translation of the object as its object in the English triple database is added as a candidate translation of the Chinese verb. Formally, Tran1(wC1) = { wE1 | f(wE1, verb-object, wE2) > 0, wE2 ∈ Tran(wC2) } ∪ Tran(wC1). To reduce the negative impact of this blind translation expansion, we assign lower probabilities to the verbs that are expanded beyond the bilingual lexicon. We use the following method: the translations given by the bilingual lexicon share a probability of 0.6, and the other possible translations expanded by Model E share a probability of 0.4. Let P* be the probability assigned in this way, and suppose there are m translations given by the bilingual lexicon and n translations expanded by Model E. We have the following: P* = 0.6/m if the translation is obtained from the C-E lexicon, and P* = 0.4/n if the translation is obtained through the expansion of Model E. Then Model E can be described as: e_max = argmax_{wE1 ∈ Tran1(wC1), wE2 ∈ Tran(wC2)} P(wE1, verb-object, wE2) × Sim(wC1, wE1) × P*(wE1) × Sim(wC2, wE2).",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Model E (Translation expansion using the English triple database)",
+ "sec_num": null
+ },
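A hedged sketch of the two expansion steps; `ce`/`ec` are toy C-E and E-C lexicons, and `db` is the English triple database of the earlier sketches.

```python
def expand_d(w_c, ce, ec):
    """Model D sketch: expand translations through lexicon circles
    x -> x' (C-E) -> x'' (E-C) -> x''' (C-E)."""
    out = set(ce.get(w_c, []))
    for x1 in ce.get(w_c, []):          # x'   e.g. 说 -> talk
        for x2 in ec.get(x1, []):       # x''  e.g. talk -> 说话
            out |= set(ce.get(x2, []))  # x''' e.g. 说话 -> speak
    return out

def expand_e(w_c1, w_c2, ce, db):
    """Model E sketch: add any English verb seen with a translation of the
    object in the English triple database, then split the probability mass:
    0.6 over lexicon translations, 0.4 over expanded ones."""
    lex = set(ce.get(w_c1, []))
    objs = set(ce.get(w_c2, []))
    new = {v for (v, rel, o) in db if rel == "obj" and o in objs} - lex
    weights = {v: 0.6 / len(lex) for v in lex} if lex else {}
    if new:
        weights.update({v: 0.4 / len(new) for v in new})
    return weights

# Toy usage of the lexicon circles:
ce = {"说": ["talk"], "说话": ["speak", "talk"]}
ec = {"talk": ["说话"]}
print(expand_d("说", ce, ec))  # {'talk', 'speak'}
```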
+ {
+ "text": "The evaluation results obtained using the Case-I testing set are shown in Table 13. We find that both Model D and Model E improved the translation precision, and Model E was more powerful than Model D. We also found that the translation performance was influenced by the data sparseness of the triple database. Typically, when an English counterpart of a Chinese verb-object triple cannot be found, Model E yields 0 for P(wE1, verb-object, wE2). For example, \"eat twisted crullers\", which corresponds to \"吃油条\", did not appear anywhere in the English triple set. Such cases have a large negative influence on the results; we shall tackle this problem in the future.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Accommodating Lexical Gaps (OOV)",
+ "sec_num": "4.3"
+ },
+ {
+ "text": "The key to improving translation selection is to incorporate human translation knowledge into a computer system. One way is for translation experts to handcraft the translation selection knowledge in the form of selection rules and lexicon features. However, this method is time-consuming and cannot ensure high quality in a consistent way. Current commercial MT systems mainly rely on this method. Another way is to let the computer learn the translation selection knowledge automatically from a large parallel text. A good survey of this research is that of McKeown & Radev [2000]; some of its contents are quoted here in condensed form. Smadja et al. [1996] created a system called Champollion, which is based on Smadja's collocation extractor, Xtract. Champollion uses a statistical method to translate both flexible and rigid collocations between English and French using the Canadian Hansard corpus. Champollion's output is a bilingual list of collocations ready for use in a machine translation system. Smadja et al. indicated that 78% of the French translations of valid English collocations were judged to be correct based on three evaluations by human experts. 
Kupiec [1993] described an algorithm for the translation of a specific kind of collocation, namely, noun phrases. An evaluation of his algorithm showed that 90% of the 100 highest-ranking correspondences were correct.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Related Works",
+ "sec_num": "5."
+ },
+ {
+ "text": "Selecting the right word translation is related to word sense disambiguation. Most of the reported research has used supervised methods, which require sense-tagged corpora. Mooney [1996] gave a good quantitative comparison of various methods. Yarowsky [1995] reported an impressive unsupervised-learning result that trains decision lists for binary sense disambiguation. Schutze [1998] also proposed an unsupervised method, which in essence clusters the usages of a word. However, although both Yarowsky and Schutze minimized the amount of supervision, they reported results for only a few example words.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Related Works",
+ "sec_num": "5."
+ },
+ {
+ "text": "Another related field is computer-assisted bilingual lexicon (term) construction. A tool for the semi-automatic translation of collocations, Termight, was described by Dagan and Church [1994]. It can be used to aid translators in finding technical term correspondences in bilingual corpora. The method proposed by Dagan and Church extracts noun phrases in English and uses word alignment to align the head and tail words of the noun phrases with words in the other language. The word sequence between the words corresponding to the head and the tail is produced as the translation. Because it does not rely on statistical correlation metrics to identify the words of the translation, this method allows the identification of infrequent terms that would otherwise be missed owing to their low statistical significance. Fung [1995] used a pattern-matching algorithm to compile a lexicon of nouns and noun phrases between English and Chinese. Wu and Xia [1994] computed a bilingual Chinese-English lexicon; they used the EM algorithm to produce word alignments across parallel corpora and then applied various linguistic filtering techniques to improve the results.",
+ "cite_spans": [],
+ "ref_spans": [],
+ "eq_spans": [],
+ "section": "Related Works",
+ "sec_num": "5."
+ },
+ {
+ "text": "Since large aligned bilingual corpora are hard to acquire due to copyright restrictions and construction expenses, some researchers have proposed methods that do not rely on parallel corpora. Tanaka and Iwasaki [1996] demonstrated how to use nonparallel corpora to choose the best translations among a small set of candidates. 
Fung [1997] used similarities in the collocates of a given word to find its translation in the other language. Fung [1998] also explored using an IR approach to get translations of new words from non-parallel but comparable corpora. Dagan and Itai [1994] used a second-language monolingual corpus for word sense disambiguation, employing a target language model to find the correct word translations.", "cite_spans": [ { "start": 193, "end": 218, "text": "Tanaka and Iwasaki [1996]", "ref_id": "BIBREF1" }, { "start": 328, "end": 339, "text": "Fung [1997]", "ref_id": "BIBREF5" }, { "start": 439, "end": 450, "text": "Fung [1998]", "ref_id": "BIBREF6" }, { "start": 562, "end": 583, "text": "Dagan and Itai [1994]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "5." }, { "text": "Most methods for statistical machine translation obtain word translation probabilities by learning from large parallel corpora [Brown et al., 1993] . Very few researchers have tried to use monolingual corpora to train word translation probabilities. The work most similar to our approach is that of [Koehn and Knight, 2000] . Using two completely unrelated monolingual corpora and a bilingual lexicon, they constructed a word translation model for 3830 German and 6147 English noun tokens by estimating word translation probabilities with the EM algorithm. In their experiment, they assumed that the word order of English and German was the same, so that the language model of the target language could be used in the EM iteration step. However, their model was only used to test the translation of nouns; they did not conduct experiments on verb translation, and they did not consider syntactic relations. In addition, it is hard to extend their model to other language pairs, such as Chinese and English.", "cite_spans": [ { "start": 132, "end": 152, "text": "[Brown et al., 1993]", "ref_id": "BIBREF0" }, { "start": 302, "end": 326, "text": "[Koehn and Knight, 2000]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "5." }, { "text": "We have proposed a new statistical translation model. The unique characteristics of our model are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "1) The translation model is trained using two unrelated monolingual corpora. We have defined a cross-lingual word similarity, which enables us to compute the similarity between a source language word and a target language word with a simple bilingual lexicon, without using bilingual corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "2) The translation model is based on dependency triples, not on the word level as is typically done. It can overcome the long-distance dependency problem to some extent. Since the translation of a word is often decided by a syntactic member that may not be adjacent to the word, this method can hopefully improve translation precision compared with existing word-based models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "3) Based on the new translation model, we have further proposed new models for tackling the OOV issue. 
The experiments showed that Model E, which expands translations using an English triple database, is a promising model for solving the OOV issue. It is also very promising for cross-language information retrieval applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Our approach is completely unsupervised, so it is not necessary for the two corpora to be aligned in any way or to be tagged manually with any information. Such monolingual corpora are readily available for most languages, while parallel corpora rarely exist even for common language pairs. Our method can therefore help overcome the bottleneck of acquiring large-scale parallel corpora. Since this method does not rely on specific dependency triples, it can be used to translate other types of triples, such as adjective-noun, adverb-verb and verb-complement, in the same way. In addition, our method can be used to build a collocation translation lexicon for an automatic translation system. This triple-based translation approach can be further extended to sentence-level translation. Given a sentence, the main dependency triples can be extracted with a parser, and then each triple can be translated using our method. For dependency triples which are specific to the source language, we can apply a rule-based approach. After all the main triples are correctly translated, a target language grammar can be introduced to realize target language generation. This will hopefully enable us to realize a sentence-skeleton translation system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "There are some interesting topics for future research. First, since we use parsers, which inevitably introduce some parsing mistakes into the generated dependency triple databases, we need to find an effective way to filter out mistakes and perform the necessary automatic correction. Second, we need to find a more precise translation expansion method to overcome the OOV issue, which is caused by the limited coverage of the lexicon. For instance, we can try translation expansion using a thesaurus that is trained automatically from a large corpus, or using a pre-defined thesaurus like WordNet. Third, triple data sparseness is a big problem; to solve it, we need to apply approaches used in statistical language modeling, such as smoothing methods and class-based models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "Since the Likelihood is not normalized in [0,1], we do not call it a probability, to avoid confusion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These two lexicons are not publicly available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We did not use the adjective-noun dependency relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Let x be a Chinese word, let x' be the English translation of x defined in the C-E lexicon, let x'' be the Chinese translation of x' defined in the E-C lexicon, and let x''' be the English translation of x'' defined in the C-E lexicon. Taking \"说\" as an example, \"talk\" is one translation based on the C-E lexicon. 
Then looking up in the E-C lexicon, \"说话\" is one ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "Stephen", "middle": [ "A Della" ], "last": "Pietra", "suffix": "" }, { "first": "Vincent", "middle": [ "J Della" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brown P.F., Stephen A. Della Pietra, Vincent J. Della Pietra, Robert L. Mercer, \"The mathematics of statistical machine translation: parameter estimation\". Computational Linguistics, 19(2), 1993, pp. 263-311.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Extraction of lexical translations from non-aligned corpora", "authors": [ { "first": "K", "middle": [], "last": "Tanaka", "suffix": "" }, { "first": "H", "middle": [], "last": "Iwasaki", "suffix": "" } ], "year": 1996, "venue": "COLING-96: The 16th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "580--585", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tanaka K, H Iwasaki, \"Extraction of lexical translations from non-aligned corpora.\" COLING-96: The 16th International Conference on Computational Linguistics, Copenhagen, Denmark, 1996, pp. 580-585.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Termight: identifying and translating technical terminology", "authors": [ { "first": "I", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "K", "middle": [], "last": "Church", "suffix": "" } ], "year": 1994, "venue": "4th Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "34--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dagan I, K Church, \"Termight: identifying and translating technical terminology\". 4th Conference on Applied Natural Language Processing, Stuttgart, Germany, 1994, pp. 34-40.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Word sense disambiguation using a second language monolingual corpus", "authors": [ { "first": "I", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "A", "middle": [], "last": "Itai", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "4", "pages": "563--596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dagan I, A Itai, \"Word sense disambiguation using a second language monolingual corpus\", Computational Linguistics, 20(4), 1994, pp. 
563-596.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A pattern matching method for finding noun and proper noun translations from noisy parallel corpora", "authors": [ { "first": "P", "middle": [], "last": "Fung", "suffix": "" } ], "year": 1995, "venue": "33rd Annual Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "236--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fung P, \"A pattern matching method for finding noun and proper noun translations from noisy parallel corpora\", 33rd Annual Conference of the Association for Computational Linguistics, Cambridge, MA, 1995, pp. 236-243.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Using word signature features for terminology translation from large corpora", "authors": [ { "first": "P", "middle": [], "last": "Fung", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fung P, \"Using word signature features for terminology translation from large corpora\". Ph.D. dissertation, Columbia University, New York, 1997.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An IR approach for translating new words from nonparallel, comparable texts", "authors": [ { "first": "P", "middle": [], "last": "Fung", "suffix": "" }, { "first": "Yee", "middle": [], "last": "Lo Yuen", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "414--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fung P and Lo Yuen Yee, \"An IR approach for translating new words from nonparallel, comparable texts\". The 36th Annual Conference of the Association for Computational Linguistics, Montreal, Canada, August 1998, pp. 414-420.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Estimating word translation probabilities from unrelated monolingual corpora using the EM algorithm", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2000, "venue": "National Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, P. and K. 
Knight, \"Estimating word Translation probabilities from unrelated monolingual corpora using the EM Algorithm\", National Conference on Artificial Intelligence (AAAI), 2000, Austin, Texas.", + "links": null + }, + "BIBREF8": { + "ref_id": "b8", + "title": "Principle-based parsing without over-generation", + "authors": [ + { + "first": "D", + "middle": [], + "last": "Lin", + "suffix": "" + } + ], + "year": 1993, + "venue": "Proceedings of ACL-93", + "volume": "", + "issue": "", + "pages": "112--120", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Lin D., \"Principle-based parsing without over-generation\", Proceedings of ACL-93, 1993, pp 112-120, Columbus, Ohio.", + "links": null + }, + "BIBREF9": { + "ref_id": "b9", + "title": "Principar-an efficient, broad-coverage, principle-based parser", + "authors": [ + { + "first": "D", + "middle": [], + "last": "Lin", + "suffix": "" + } + ], + "year": 1994, + "venue": "Proceedings of COLING-94", + "volume": "", + "issue": "", + "pages": "482--488", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Lin D., \"Principar-an efficient, broad-coverage, principle-based parser\". Proceedings of COLING-94, pp. 482-488, Kyoto, Japan, 1994.", + "links": null + }, + "BIBREF10": { + "ref_id": "b10", + "title": "Extracting collocations from test corpora", + "authors": [ + { + "first": "D", + "middle": [], + "last": "Lin", + "suffix": "" + } + ], + "year": 1998, + "venue": "First Workshop on Computational Terminology", + "volume": "", + "issue": "", + "pages": "", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Lin D..1998a, \"Extracting collocations from test corpora\", First Workshop on Computational Terminology, Montreal, Canada, 1998.", + "links": null + }, + "BIBREF11": { + "ref_id": "b11", + "title": "Automatic retrieval and clustering of similar words", + "authors": [ + { + "first": "D", + "middle": [], + "last": "Lin", + "suffix": "" + } + ], + "year": 1998, + "venue": "COLING-ACL98", + "volume": "", + "issue": "", + "pages": "", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Lin D., 1998b, \"Automatic retrieval and clustering of similar words\", COLING-ACL98, Montreal, Canada, 1998.", + "links": null + }, + "BIBREF12": { + "ref_id": "b12", + "title": "Handbook of Natural Language Processing", + "authors": [ + { + "first": "K", + "middle": [], + "last": "Mckeown", + "suffix": "" + }, + { + "first": "\"", + "middle": [], + "last": "R, D R Radev", + "suffix": "" + }, + { + "first": "", + "middle": [], + "last": "Collocations", + "suffix": "" + } + ], + "year": 2000, + "venue": "", + "volume": "", + "issue": "", + "pages": "507--523", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Mckeown, K R, D R Radev, \"Collocations\", Handbook of Natural Language Processing, pp507-523, Edited by Robert Dale, Hermann Moisl, Harold Somers, 2000.", + "links": null + }, + "BIBREF13": { + "ref_id": "b13", + "title": "Comparative experiments on disambiguation word senses: An illustration of bias in machine learning", + "authors": [ + { + "first": "R", + "middle": [], + "last": "Mooney", + "suffix": "" + } + ], + "year": 1996, + "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", + "volume": "", + "issue": "", + "pages": "", + "other_ids": {}, + "num": null, + "urls": [], + "raw_text": "Mooney, R., \"Comparative experiments on disambiguation word senses: An illustration of bias in machine learning\", In Proceedings of the Conference on Empirical Methods in 
Natural Language Processing, EMNLP, 1996.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Translating collocations for bilingual lexicons: a statistical approach", "authors": [ { "first": "F", "middle": [], "last": "Smadja", "suffix": "" }, { "first": "K", "middle": [ "R" ], "last": "Mckeown", "suffix": "" }, { "first": "V", "middle": [], "last": "Hatzivassiloglou", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smadja F., K. R. McKeown, V. Hatzivassiloglou, \"Translating collocations for bilingual lexicons: a statistical approach\". Computational Linguistics, 22(1):1-38, 1996.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Automatic word sense discrimination", "authors": [ { "first": "H", "middle": [], "last": "Schutze", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "1", "pages": "97--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schutze, H., \"Automatic word sense discrimination\", Computational Linguistics, 24(1):97-123, 1998.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning an English-Chinese lexicon from a parallel corpus", "authors": [ { "first": "D", "middle": [], "last": "Wu", "suffix": "" }, { "first": "X", "middle": [], "last": "Xia", "suffix": "" } ], "year": 1994, "venue": "Technology Partnerships for Crossing the Language Barrier: Proceedings of the First Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "206--213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu D., X. Xia, \"Learning an English-Chinese lexicon from a parallel corpus\", Technology Partnerships for Crossing the Language Barrier: Proceedings of the First Conference of the Association for Machine Translation in the Americas, Columbia, MD, 1994, pp. 206-213.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Unsupervised word sense disambiguation rivaling supervised methods", "authors": [ { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "Proceedings of ACL-33", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarowsky, D., \"Unsupervised word sense disambiguation rivaling supervised methods\". In Proceedings of ACL-33, pp. 
189-196, 1995.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A Block-Based Robust Dependency Parser for Unrestricted Chinese Text", "authors": [ { "first": "M", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou M., \"A Block-Based Robust Dependency Parser for Unrestricted Chinese Text\", 2nd Workshop on Chinese Language Processing, Hong Kong, 2000.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "sub, 国家), (国家, sub-of, 颁布), (颁布, obj, 计划), (计划, obj-of, 颁布), (颁布, comp, 了), (了, comp-of, 颁布)", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "html": null, "content": "
Dependency Type | E-C Positive | E-C Negative | E-C Mapping Rate | C-E Positive | C-E Negative | C-E Mapping Rate
Verb-Object | 7,832 | 4,247 | 64.8% | 6,769 | 3,751 | 64.3%
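(Our reading of the columns, inferred rather than stated in the table: the Mapping Rate appears to be Positive / (Positive + Negative); e.g., 7,832 / (7,832 + 4,247) ≈ 64.8% for E-C and 6,769 / (6,769 + 3,751) ≈ 64.3% for C-E. The same reading matches Table 5 below.)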
", + "text": "Triple correspondence between Chinese and English.", + "num": null, + "type_str": "table" + }, + "TABREF1": { + "html": null, + "content": "
Chinese verb-object triple | English translation
够 开销 | be enough for
用 数字 | in numeral characters
用 货币 | change to currency
名叫 威廉_•_罗 | an Englishman, Willian Low
…觉得逃避到生活虽艰苦但比较简朴的年代里是件愉快的事。 ("...feels that escaping to an era in which life was hard but simpler is a pleasant thing.")
", + "text": "Negative examples of triple mapping.", + "num": null, + "type_str": "table" + }, + "TABREF2": { + "html": null, + "content": "
Chinese triple | English triple | Examples
Verb-Object | Verb (usually an intransitive verb) | 读-书 → read
Verb-Object | Verb+Prep-Object | 用-货币 → change to currency
Table 5. Triple correspondence between Chinese and English.
Type | E-C Positive | E-C Negative | E-C Mapping Rate | C-E Positive | C-E Negative | C-E Mapping Rate
Verb-Object | 9,991 | 2,088 | 82.71% | 8,823 | 1,697 | 83.87%
", + "text": "Extended mapping.", + "num": null, + "type_str": "table" + }, + "TABREF5": { + "html": null, + "content": "
Using Bayes' theorem, we can write
P(e|c) = P(e)P(c|e) / P(c)    (12)
Since the denominator P(c) is independent of e and is a constant for a given Chinese triple, we have
e_max = argmax_e P(e)P(c|e)    (13)
Here, P(e) is usually called the language model, which depends only on the target language; P(c|e) is usually called the translation model. In single triple translation, the P(e)
", + "text": "factor is a measure of the likelihood of the occurrence of a dependency triple e in the English language. It makes the output of e natural and grammatical.", + "num": null, + "type_str": "table" + }, + "TABREF8": { + "html": null, + "content": "
Language | Description | Size (bytes) | #Triples | Parser
Chinese | People's Daily 1980~1998 | 1,200M | 33,000,000 | Block Parser
English | Wall Street Journal
", + "text": "shows a summary of the corpora and parsers in Chinese and English.", + "num": null, + "type_str": "table" + }, + "TABREF9": { + "html": null, + "content": "
Language | Verb-Object | Verb-Adverb | Subject-Verb
Chinese | 14,327,358 | 10,783,139 | 8,729,639
English | 6,438,398 | 3,011,767 | 5,282,866
Therefore, a word w is represented by a co-occurrence vector {(rel, w'1, #), (rel, w'2, #), ...}, where each feature consists of a dependency relation rel ∈ {verb-object, verb-adverb, subject-verb}, another word w', and the co-occurrence count #.
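To make this representation concrete, the following is a minimal Python sketch (our illustration, not the paper's implementation; the toy triples and the tiny bilingual lexicon are hypothetical, and we use a plain cosine where the paper's similarity measure may be weighted differently). It builds co-occurrence vectors from dependency triples and estimates a cross-language word similarity by mapping Chinese features into English through the lexicon:

    from collections import defaultdict
    from math import sqrt

    def cooc_vectors(triples):
        # Build {word: {(relation, other_word): count}} from (w1, rel, w2) triples;
        # each triple contributes a feature to both of its words.
        vec = defaultdict(lambda: defaultdict(int))
        for w1, rel, w2 in triples:
            vec[w1][(rel, w2)] += 1
            vec[w2][(rel + "-of", w1)] += 1
        return vec

    def cross_sim(zh_vec, en_vec, lexicon):
        # Translate each Chinese feature's word through the bilingual lexicon,
        # then take the cosine of the mapped vector with the English vector.
        mapped = defaultdict(float)
        for (rel, w), c in zh_vec.items():
            for t in lexicon.get(w, []):
                mapped[(rel, t)] += c
        dot = sum(c * en_vec.get(f, 0) for f, c in mapped.items())
        n1 = sqrt(sum(c * c for c in mapped.values()))
        n2 = sqrt(sum(c * c for c in en_vec.values()))
        return dot / (n1 * n2) if n1 and n2 else 0.0

    zh = cooc_vectors([("订", "verb-object", "报纸"), ("订", "verb-object", "计划")])
    en = cooc_vectors([("subscribe", "verb-object", "newspaper"),
                       ("make", "verb-object", "plan")])
    lexicon = {"报纸": ["newspaper"], "计划": ["plan", "project"]}
    print(cross_sim(zh["订"], en["subscribe"], lexicon))  # > 0: shared object 'newspaper'
    print(cross_sim(zh["订"], en["make"], lexicon))       # > 0: shared object 'plan'

In this toy setup, 订 looks similar to both subscribe and make through its objects; over large triple databases, such similarities become the scores that simulate word translation probabilities.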
", + "text": "Statistics of the three main triples", + "num": null, + "type_str": "table" + }, + "TABREF10": { + "html": null, + "content": "
", + "text": "Cross-language word similarity matrix", + "num": null, + "type_str": "table" + }, + "TABREF11": { + "html": null, + "content": "
Verb | Noun | Translation
说 | 事 | talk business
用 | 手 | use hand
看 | 电影 | see film, see movie
看 | 电视 | watch TV
作 | 贡献 | make contribution
The performance was evaluated based on precision, which is defined as
precision = (# correct translations / # total verb-obj triples) × 100%
4.1 Various Translation Models
Suppose we want to translate the Chinese dependency triple c = (w_C1, rel_C, w_C2) into the English dependency triple e = (w_E1, rel_E, w_E2).
", + "text": "Evaluation sets prepared by human translators", + "num": null, + "type_str": "table" + }, + "TABREF12": { + "html": null, + "content": "
Formally, Model A can be expressed as
e_max = ( argmax_{w_E1 ∈ Trans(w_C1)} freq(w_E1), verb-object, argmax_{w_E2 ∈ Trans(w_C2)} freq(w_E2) )
We have
e_max = argmax_e P(e) = argmax_{w_E1 ∈ Trans(w_C1), w_E2 ∈ Trans(w_C2)} P(w_E1, verb-object, w_E2)
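As a concrete reading of these formulas, here is a minimal Python sketch (ours; the candidate sets, word frequencies, and triple counts are hypothetical toy data) contrasting Model A, which picks each word's most frequent translation independently, with Model B, which picks the candidate pair whose English triple is most frequent in the target-language triple database:

    def model_a(wc1, wc2, trans, freq):
        # Model A: choose each word's most frequent English translation independently.
        e1 = max(trans[wc1], key=lambda w: freq.get(w, 0))
        e2 = max(trans[wc2], key=lambda w: freq.get(w, 0))
        return (e1, "verb-object", e2)

    def model_b(wc1, wc2, trans, triple_freq):
        # Model B: choose the pair whose English verb-object triple is most frequent,
        # a maximum-likelihood stand-in for P(wE1, verb-object, wE2).
        pairs = [(e1, "verb-object", e2) for e1 in trans[wc1] for e2 in trans[wc2]]
        return max(pairs, key=lambda t: triple_freq.get(t, 0))

    trans = {"订": ["book", "subscribe", "order"], "报纸": ["newspaper", "paper"]}
    freq = {"book": 900, "subscribe": 40, "order": 300, "newspaper": 120, "paper": 500}
    triple_freq = {("subscribe", "verb-object", "newspaper"): 25,
                   ("order", "verb-object", "paper"): 3}
    print(model_a("订", "报纸", trans, freq))         # ('book', 'verb-object', 'paper'): unigram frequency misleads
    print(model_b("订", "报纸", trans, triple_freq))  # ('subscribe', 'verb-object', 'newspaper')

In the paper's full framework, such a target-language triple probability is further combined with the cross-language word similarity that simulates the translation model.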
", + "text": "Model B only used the triple probability in target language, neglecting the translation model. It selected the translation of the triple which was most likely to occur in the target language.", + "num": null, + "type_str": "table" + }, + "TABREF14": { + "html": null, + "content": "
Model | #Correct | Percentage
Model A | 393 | 53.8%
Model B | 512 | 70.1%
Model C | 519 | 71.1%
", + "text": "Evaluation on verbs of high frequency", + "num": null, + "type_str": "table" + }, + "TABREF15": { + "html": null, + "content": "
Model | #Correct | Percentage
Model A | 61 | 56.5%
Model B | 85 | 78.7%
Model C | 88 | 81.5%
Case III: Translation of low-frequency triples
", + "text": "Evaluation of verbs of low frequency", + "num": null, + "type_str": "table" + }, + "TABREF16": { + "html": null, + "content": "
Model | #Correct | Percentage
Model A | 182 | 53.5%
Model B | 283 | 83.2%
Model C | 289 | 85.0%
", + "text": "Evaluation of triples of low frequency", + "num": null, + "type_str": "table" + }, + "TABREF19": { + "html": null, + "content": "
Model | #Correct | Percentage
Model D | 526 | 71.8%
Model E | 587 | 80.1%
Using Model C, "展开进攻" could not be translated correctly, while Model E correctly gave the answer "launch attack". Table 14 and Appendix III give more examples of cases in which Model E correctly selected translations. (The English translations marked with * could not be found in the translation lexicon and were generated by Model E's expansion only.) A code sketch of this expansion idea follows Table 14.
Table 14. Translation results overcoming OOV
\u5c55\u5f00\u8fdb\u653blaunch* attack\u6253\u4e3b\u610fmake plan
\u91c7\u53d6\u884c\u52a8Take action\u6253\u57fa\u7840make foundation
\u91c7\u53d6\u529e\u6cd5adopt* method\u6253\u7403play ball
\u770b\u7535\u89c6watch television\u6253\u6d1emake hole
\u770b\u4e66Read book\u6253\u6298\u6263offer* discount
\u770b\u8282\u76eeSee program\u6253\u9523strike gong
\u6253\u7535\u62a5send telegram\u535a\u53d6\u540c\u60c5evoke* sympathy
", + "text": "Evaluation on verbs of high frequency", + "num": null, + "type_str": "table" + } + } + } +} \ No newline at end of file