{ "paper_id": "2004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:17:56.608858Z" }, "title": "Example-based Machine Translation using Structural Translation Examples", "authors": [ { "first": "Eiji", "middle": [], "last": "Aramaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": {} }, "email": "aramaki@kc.t.u-tokyo.ac.jp" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper proposes an example-based machine translation system which handles structural translation examples. Structural translation examples have the potential advantage of high usability. However, the technologies needed to build such translation examples are still under development. In this situation, a comparison between the proposed system and systems based on other approaches is meaningful. This paper presents the system algorithm and its performance on the IWSLT04 Japanese-English unrestricted task.", "pdf_parse": { "paper_id": "2004", "_pdf_hash": "", "abstract": [ { "text": "This paper proposes an example-based machine translation system which handles structural translation examples. Structural translation examples have the potential advantage of high usability. However, the technologies needed to build such translation examples are still under development. In this situation, a comparison between the proposed system and systems based on other approaches is meaningful. 
This paper presents the system algorithm and its performance on the IWSLT04 Japanese-English unrestricted task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We are developing an example-based machine translation (EBMT) [1] system using structural translation examples, which is potentially well suited to dealing with the infinite productivity of language. Structural translation examples have the advantage of high usability, and a system under this approach needs only a corpus of reasonable scale.", "cite_spans": [ { "start": 62, "end": 65, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "However, building structural translation examples requires several technologies, e.g., parsing and tree alignment, which are still under development. Therefore, a naive method that does not rely on such technologies may be effective in a limited domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In such a situation, we believe that a comparison between the proposed system and systems based on other approaches is meaningful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The proposed system entered the \"Japanese-English unrestricted\" task, but it used no extra bilingual corpus of the domain; it used only the training corpus provided for the IWSLT04, Japanese and English parsers, a Japanese thesaurus, and translation dictionaries. Figure 1 shows the system outline. The system consists of two modules: (1) an alignment module and (2) a translation module.", "cite_spans": [], "ref_spans": [ { "start": 264, "end": 272, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The alignment module estimates correspondences in the corpus using translation dictionaries. 
Then, the alignment results are stored in a translation memory, which is a database of translation examples. The translation module selects plausible translation examples for each part of an input sentence. Finally, the selected examples are combined to generate an output sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This paper is organized as follows. The next section presents our system algorithm. Section 3 reports experimental results. Then, Section 4 presents our conclusions. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "An EBMT system needs a large set of translation examples. In order to build them, we use the dictionary-based alignment method presented in [2].", "cite_spans": [ { "start": 140, "end": 143, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Alignment Module", "sec_num": "2.1." }, { "text": "First, sentence pairs are parsed by the Japanese parser KNP [3] and the English nl-parser [4]. The English parser outputs a phrase structure, which is then converted into a dependency structure by rules that decide the head word of each phrase. A Japanese phrase consists of sequential content words and the function words that follow them. An English phrase is a base-NP or a base-VP.", "cite_spans": [ { "start": 60, "end": 63, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 90, "end": 93, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Alignment Module", "sec_num": "2.1." }, { "text": "Then, correspondences are estimated by using translation dictionaries. We used four dictionaries: EDR, EDICT, ENAMDICT, and EIJIRO. These dictionaries have about two million entries in total. If there are out-of-dictionary phrases, they are merged into their parent correspondence. 
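As an illustrative sketch only (not the authors' implementation), the dictionary-based correspondence estimation described above can be outlined as follows; the phrase representation and the romanized toy dictionary are assumptions:

```python
# Hypothetical sketch of dictionary-based correspondence estimation.
# A dictionary maps source content words to sets of target words; two
# phrases are taken to correspond if any source word in one translates
# to a word appearing in the other.

def estimate_correspondences(src_phrases, tgt_phrases, dictionary):
    # src_phrases / tgt_phrases: lists of phrases, each a list of words
    correspondences = []
    for i, sp in enumerate(src_phrases):
        for j, tp in enumerate(tgt_phrases):
            translations = set()
            for w in sp:
                translations |= dictionary.get(w, set())
            if translations & set(tp):
                correspondences.append((i, j))
    return correspondences

# toy usage (romanized Japanese, purely for illustration)
d = {'shinbun': {'newspaper'}, 'kudasai': {'give', 'please'}}
src = [['shinbun'], ['kudasai']]
tgt = [['give', 'me'], ['a', 'newspaper']]
print(estimate_correspondences(src, tgt, d))  # [(0, 1), (1, 0)]
```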
A sample alignment result is shown in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 320, "end": 328, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Alignment Module", "sec_num": "2.1." }, { "text": "After alignment, the system generates all combinations of correspondences which are connected to each other. We call such a combination of correspondences a translation example. As a result, the 6 translation examples shown in Figure 3 are generated from the aligned sentence pair shown in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 233, "text": "Figure", "ref_id": null }, { "start": 290, "end": 298, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Alignment Module", "sec_num": "2.1." }, { "text": "Finally, these translation examples are stored in the translation memory. In this operation, the surrounding phrases (the parent and child phrases) are also preserved as contexts (described in the next section).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment Module", "sec_num": "2.1." }, { "text": "First, an input sentence is analyzed by the parser [3]. Then, for each phrase of the input sentence, the system selects plausible translation examples from the translation memory by using the following three measures.", "cite_spans": [ { "start": 51, "end": 54, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Module", "sec_num": "2.2." }, { "text": "If large parts of a translation example are equal to the input, we regard it as a reliable example. Equality is the number of translation-example phrases which are equal to the input. The system performs this equality check on content words and those function words which express sentence mood; differences in the other function words are disregarded. 
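A minimal sketch of the equality measure, under the assumption that each phrase is reduced to the set of words checked (content words plus mood-expressing function words); the representation and the toy phrases are hypothetical:

```python
# Hypothetical sketch: equality = number of translation-example phrases
# whose checked words (content words and mood-expressing function words)
# match some phrase of the input.

def equality(example_phrases, input_phrases):
    # each phrase is a list of checked words; compare as sets
    input_sets = [frozenset(p) for p in input_phrases]
    count = 0
    for p in example_phrases:
        if frozenset(p) in input_sets:
            count += 1
    return count

# toy usage: two of the three example phrases match the input
example = [['chinese'], ['newspaper'], ['give']]
inp = [['english'], ['newspaper'], ['give']]
print(equality(example, inp))  # 2
```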
In Figure 4, the translation example has an equality of 2.", "cite_spans": [], "ref_spans": [ { "start": 360, "end": 368, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Equality:", "sec_num": "1." }, { "text": "Context is an important clue for word selection. We regard the context as the surrounding phrases of the equal part. The similarity score between the surrounding phrases and their corresponding input phrases is calculated with a Japanese thesaurus (max=1.0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity:", "sec_num": "2." }, { "text": "We also take into account the alignment confidence. We define the alignment confidence as the ratio of content words which can be found in the dictionaries (max=1.0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confidence:", "sec_num": "3." }, { "text": "The detailed definitions of these measures are presented in [5]. These measures are weighted by a parameter \u03bb as follows 1 , and the system selects the translation example which has the highest score for each part of the input:", "cite_spans": [ { "start": 60, "end": 63, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Confidence:", "sec_num": "3." }, { "text": "(Equality + Similarity) \u00d7 (\u03bb + Confidence).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confidence:", "sec_num": "3." }, { "text": "If there is no matching translation example, the system uses the translation dictionaries to acquire target expressions. If the translation dictionaries have no entry either, the system stops the procedure and falls back to the shortcut (described in Section 2.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confidence:", "sec_num": "3." }, { "text": "After the selection of translation examples, the target expressions in the examples are combined into a target dependency tree and its word order is decided. 
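With lambda fixed to 1 (the value used in the authors' preliminary experiments), the selection score (Equality + Similarity) x (lambda + Confidence) can be sketched as follows; the candidate representation is an assumption:

```python
# Hypothetical sketch of translation-example selection. Each candidate
# carries a precomputed equality count, a similarity in 0..1, and an
# alignment confidence in 0..1; lambda is fixed to 1 as in the paper.

LAMBDA = 1.0

def score(candidate):
    return (candidate['equality'] + candidate['similarity']) * (LAMBDA + candidate['confidence'])

def select_best(candidates):
    # keep the highest-scoring example for the current input part
    return max(candidates, key=score)

# toy usage: ex1 scores (2+0.5)*(1+0.8)=4.5, ex2 scores (1+1.0)*(1+1.0)=4.0
cands = [
    {'id': 'ex1', 'equality': 2, 'similarity': 0.5, 'confidence': 0.8},
    {'id': 'ex2', 'equality': 1, 'similarity': 1.0, 'confidence': 1.0},
]
print(select_best(cands)['id'])  # ex1
```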
In this operation, the dependency relations and the word order are decided by the following principles. Figure 5 shows an example for a Japanese input which means \"give me a Chinese newspaper\", together with the selected examples and its target dependency tree.", "cite_spans": [], "ref_spans": [ { "start": 262, "end": 270, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Confidence:", "sec_num": "3." }, { "text": "As yet, there are no perfect alignment and parsing technologies, so the proposed system runs the risk of pre-processing errors. In view of this, we also prepare another translation method that requires no such pre-processing. We call this method a shortcut. The shortcut method searches for the most similar translation example using character-based DP matching, and outputs its target part as is.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shortcut", "sec_num": "2.3." }, { "text": "The shortcut is used in the following three situations. Almost Equal: An input has more than 90% similarity to a translation example, as calculated by character-based DP matching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shortcut", "sec_num": "2.3." }, { "text": "The system cannot acquire any target expressions from either the translation memory or the dictionaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "No expression:", "sec_num": null }, { "text": "The system generates ungrammatical expressions, e.g., a repeated word sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Un-grammatical:", "sec_num": null }, { "text": "We built translation examples from the training set for the IWSLT04. The training set consists of 20,000 Japanese and English sentence pairs. 
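The character-based DP matching similarity used by the shortcut (Section 2.3) can be sketched as follows; this is an assumed formulation (edit distance normalized by the longer string), and the romanized strings are toy stand-ins for Japanese character sequences:

```python
# Hypothetical sketch of character-based DP matching similarity.
# Classic Levenshtein edit distance, normalized to 0..1; an input is
# treated as 'almost equal' when similarity exceeds the 0.9 threshold.

def char_similarity(a, b):
    m, n = len(a), len(b)
    dist = list(range(n + 1))  # one-row DP table
    for i in range(1, m + 1):
        prev, dist[0] = dist[0], i
        for j in range(1, n + 1):
            cur = dist[j]
            cost = 0 if a[i - 1] == b[j - 1] else 1
            # deletion, insertion, substitution
            dist[j] = min(dist[j] + 1, dist[j - 1] + 1, prev + cost)
            prev = cur
    return 1.0 - dist[n] / max(m, n, 1)

# toy usage with romanized strings (assumption)
print(char_similarity('kitte', 'kitta'))  # 0.8
print(char_similarity('kitte', 'kitta') > 0.9)  # False: no shortcut
```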
The evaluation was conducted using the dev-set and test-set for the IWSLT04, each of which consists of about 500 Japanese sentences with 16 references.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Condition", "sec_num": "3.1." }, { "text": "The following five automatic evaluation results are shown in Table 1 and some translation samples are shown in Table 2.", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 68, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 111, "end": 118, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Result", "sec_num": "3.2." }, { "text": "The geometric mean of n-gram precision of the system output with respect to the reference translations [6].", "cite_spans": [ { "start": 103, "end": 106, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "BLEU:", "sec_num": null }, { "text": "NIST: A variant of BLEU using the arithmetic mean of weighted n-gram precision values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BLEU:", "sec_num": null }, { "text": "WER (word error rate): The edit distance between the system output and the closest reference translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BLEU:", "sec_num": null }, { "text": "PER (position-independent WER): A variant of WER which disregards word ordering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BLEU:", "sec_num": null }, { "text": "GTM (general text matcher): Harmonic mean of precision and recall measures for maximum matchings of aligned words in a bitext grid. The dev-set and test-set scores are similar because the system was not tuned on the dev-set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BLEU:", "sec_num": null }, { "text": "Then, we investigated the relation between the corpus size (the number of sentence pairs) and its performance (BLEU). The result is shown in Figure 6. 
The score has not saturated at x=20,000. Therefore, the system should achieve higher performance if we obtain more corpora.", "cite_spans": [], "ref_spans": [ { "start": 142, "end": 150, "text": "Figure 6", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "BLEU:", "sec_num": null }, { "text": "Most of the errors are classified into the following three problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3." }, { "text": "1. Function Words: Because the system selects translation examples using mainly content words, it sometimes generates unnatural function words, especially determiners and prepositions. For example, the system generates the output \"i 'd like to contact my japanese embassy\" using a translation example \"I 'd like to contact my bank\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3." }, { "text": "In the future, the system should deal with such translation examples more carefully.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3." }, { "text": "The word order between translation examples is decided by heuristic rules. The lack of rules leads to incorrect word order, for example, \"is there anything a like local cuisine?\" A target language model may be helpful for this problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Order:", "sec_num": "2." }, { "text": "3. Lack of a Subject: The proposed system sometimes generates an output without a subject, for example, \"has a bad headache\". This is because the input sentence often includes a zero-pronoun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Order:", "sec_num": "2." }, { "text": "In the future, we are planning to incorporate zero-pronoun resolution technology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Order:", "sec_num": "2." 
}, { "text": "In this paper, we described an EBMT system which handles structural translation examples. The experimental results show the basic feasibility of this approach. In the future, as the amount of corpora increases, the system should achieve higher performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4." }, { "text": "1. The dependency relations and the word order in a translation example are preserved. 2. The dependency relations between the translation examples are equal to the relations of their corresponding input phrases. 3. The word order between translation examples is decided by the rules governing both the dependency relation and the word order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\u03bb was determined by a preliminary experiment so as not to degrade the accuracy of the system. In the preliminary experiments, we set \u03bb to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "* The system without a corpus can generate translations using only the translation dictionaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A framework of a mechanical translation between Japanese and English by analogy principle", "authors": [ { "first": "M", "middle": [], "last": "Nagao", "suffix": "" } ], "year": 1984, "venue": "Artificial and Human Intelligence", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Nagao, \"A framework of a mechanical translation between Japanese and English by analogy principle,\" in Artificial and Human Intelligence, 1984, pp. 
173-180.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Finding translation correspondences from parallel parsed corpus for example-based translation", "authors": [ { "first": "E", "middle": [], "last": "Aramaki", "suffix": "" }, { "first": "S", "middle": [], "last": "Kurohashi", "suffix": "" }, { "first": "S", "middle": [], "last": "Sato", "suffix": "" }, { "first": "H", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 2001, "venue": "Proceedings of MT Summit VIII", "volume": "", "issue": "", "pages": "27--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Aramaki, S. Kurohashi, S. Sato, and H. Watanabe, \"Finding translation correspondences from parallel parsed corpus for example-based translation,\" in Proceedings of MT Summit VIII, 2001, pp. 27-32.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures", "authors": [ { "first": "S", "middle": [], "last": "Kurohashi", "suffix": "" }, { "first": "M", "middle": [], "last": "Nagao", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Kurohashi and M. Nagao, \"A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures,\" Computational Linguistics, vol. 20, no. 4, 1994.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A maximum-entropy-inspired parser", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "Proceedings of NAACL 2000", "volume": "", "issue": "", "pages": "132--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak, \"A maximum-entropy-inspired parser,\" in Proceedings of NAACL 2000, 2000, pp. 
132-139.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Word selection for EBMT based on monolingual similarity and translation confidence", "authors": [ { "first": "E", "middle": [], "last": "Aramaki", "suffix": "" }, { "first": "S", "middle": [], "last": "Kurohashi", "suffix": "" }, { "first": "H", "middle": [], "last": "Kashioka", "suffix": "" }, { "first": "H", "middle": [], "last": "Tanaka", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the HLT-NAACL 2003 Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond", "volume": "", "issue": "", "pages": "57--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Aramaki, S. Kurohashi, H. Kashioka, and H. Tanaka, \"Word selection for EBMT based on monolingual similarity and translation confidence,\" in Proceedings of the HLT-NAACL 2003 Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, 2003, pp. 57-64.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W.-J", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL 2002", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, \"Bleu: a method for automatic evaluation of machine translation,\" in Proceedings of ACL 2002, 2002, pp. 311-318.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "System Outline." }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Aligned Sentence Pair. * In this paper, a sentence structure is illustrated by locating its root node at the left." 
}, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "Equality and Similarity." }, "FIGREF3": { "num": null, "uris": null, "type_str": "figure", "text": "Example of Translation Flow." }, "FIGREF4": { "num": null, "uris": null, "type_str": "figure", "text": "Translation Examples." }, "FIGREF5": { "num": null, "uris": null, "type_str": "figure", "text": "Corpus Size and Performance (BLEU)." }, "TABREF0": { "html": null, "type_str": "table", "text": "Result.", "num": null, "content": "
BLEU NIST WER PER GTM
dev-set 0.38 7.86 0.52 0.45 0.66
test-set 0.39 7.89 0.49 0.42 0.67
" }, "TABREF1": { "html": null, "type_str": "table", "text": "", "num": null, "content": "
Result Samples
input
output it is a throbbing pain
ref I am suffering from a throbbing pain .
input
output where is the bus stop for the city hall
ref Where is the bus stop for buses going to city hall ?
input
output i would like to try this sweater for an cotton
ref Is it alright if I try on this cotton sweater ?
input
output where is the gate
ref Where is the passenger boarding gate ?
input
output could you send it to this japan
ref Could you send this to Japan ?
" } } } }