|
{ |
|
"paper_id": "2005", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:22:20.088845Z" |
|
}, |
|
"title": "Example-based Machine Translation Pursuing Fully Structural NLP", |
|
"authors": [ |
|
{ |
|
"first": "Sadao", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Tokyo", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Toshiaki", |
|
"middle": [], |
|
"last": "Nakazawa", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Tokyo", |
|
"location": {} |
|
}, |
|
"email": "nakazawa@kc.t.u-tokyo.ac.jp" |
|
}, |
|
{ |
|
"first": "Kauffmann", |
|
"middle": [], |
|
"last": "Alexis", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Tokyo", |
|
"location": {} |
|
}, |
|
"email": "alexis@kc.t.u-tokyo.ac.jp" |
|
}, |
|
{ |
|
"first": "Daisuke", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Tokyo", |
|
"location": {} |
|
}, |
|
"email": "kawahara@kc.t.u-tokyo.ac.jp" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We are conducting Example-Based Machine Translation research aiming at the improvement both of structural NLP and machine translation. This paper describes UTokyo system challenged IWSLT05 Japanese-English translation tasks.", |
|
"pdf_parse": { |
|
"paper_id": "2005", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We are conducting Example-Based Machine Translation research aiming at the improvement both of structural NLP and machine translation. This paper describes UTokyo system challenged IWSLT05 Japanese-English translation tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "We are conducting research on Example-Based Machine Translation, or EBMT [1] aiming at the improvement both of structural NLP and machine translation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 76, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Machine translation has been actively studied recently, and the major approach is Statistical Machine Translation, or SMT. EBMT and SMT have something in common and something different. The important common feature is to use bilingual corpus, or translation examples, for the translation of new inputs. Both methods exploit translation knowledge implicitly embedded in translation examples, and make MT system maintenance and improvement much easier compared with Rule-Based Machine Translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The difference is that SMT supposes bilingual corpus is the only available resource (but not a bilingual lexicon and parsers); EBMT does not consider such a constraint. SMT basically combines words or phrases (relatively small pieces) with high probability [2] ; EBMT tries to use larger translation examples. When EBMT tries to use larger examples, it had better handle examples which are discontinuous as a word-string, but continuous structurally. Accordingly, though it is not inevitable, EBMT naturally seeks syntactic information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 257, |
|
"end": 260, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The difference in the problem setting is important. SMT is a natural approach when linguistic resources such as parsers and a bilingual lexicon are not available. On the other hand, in case of such linguistic resources are available, it is also natural to see how accurate MT can be achieved using all the available resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We chose the latter problem setting and conducting EBMT research, and here we would like to mention two reasons we chose this setting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "One reason is that we are aiming at the improvement of structural NLP. We have been conducting research on Japanese morphological analyzer, parser, and anaphora/omission analyses. MT is considered as an application of these fundamental technologies. Amelioration of fundamental NLP technologies naturally improves applications, and applications give some feedback to fundamental NLP, pointing the shortcomings. Needless to say, MT is not the only NLP application, and monolingual NLP applications such as man-machine interface and information retrieval can benefit from the improvement of fundamental NLP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The second point is that, in practice, we often encounter cases to which EBMT problem setting is suitable. That is, there is no huge bilingual corpus which enables SMT, but some very similar translation examples are available, and it would be nice if automatic translation or translation assistance can be provided by exploiting the examples. For example, translation of manuals when translations of the old version manuals are available, and patent translation when translations of the related patents are available. Or, in the translation of an article, the translations to a certain point can be used effectively as translation memory step by step, because the same or similar expressions/sentences are often used in an article. In such cases, EBMT approach is suitable which tries to find larger translation examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "This paper describes our Japanese-English EBMT system, UTokyo, challenged to IWSLT05, and reports the evaluation results and discussion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Our system consists of two modules: an alignment module for parallel sentences and a translation module retrieving ap-propriate translation examples and combining them. First, we explain the alignment module.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment of Parallel Sentences", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The alignment of Japanese-English parallel sentences is achieved by the following steps, using a Japanese parser, an English parser, and a bilingual dictionary (see Figure 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 173, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Alignment of Parallel Sentences", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "1. Dependency analysis of Japanese and English sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment of Parallel Sentences", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "2. Detection of Word/phrase correspondences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment of Parallel Sentences", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "3. Disambiguation of correspondences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment of Parallel Sentences", |
|
"sec_num": "2." |
|
}, |
|
{

"text": "4. Handling of remaining words.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Alignment of Parallel Sentences",

"sec_num": "2."

},

{

"text": "Among the 20,000 IWSLT05 training pairs, some consist of two or more sentences. We utilized the pairs with the same number of Japanese and English sentences and separated them into one-to-one Japanese-English sentence pairs. As a result, we obtained 21,412 sentence pairs.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Alignment of Parallel Sentences",

"sec_num": "2."

},
|
{ |
|
"text": "We explain these alignment steps in detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of remaining words.", |
|
"sec_num": "4." |
|
}, |
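
{

"text": "As a minimal sketch (not our actual implementation), the following Python pipeline shows how the four steps fit together; the Node class and the helper functions are hypothetical stand-ins for JUMAN/KNP, the English parser, and the dictionary lookup, and only step 2 is spelled out.\n\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass Node:\n    content: str                                # content word heading this node\n    function_words: list = field(default_factory=list)\n    children: list = field(default_factory=list)\n\ndef walk(node):\n    yield node\n    for c in node.children:\n        yield from walk(c)\n\ndef align(j_tree, e_tree, dictionary):\n    # Step 2: detect word/phrase correspondences via the bilingual dictionary.\n    # Steps 3 and 4 (disambiguation, remaining words) would refine this list.\n    return [(j.content, e.content) for j in walk(j_tree) for e in walk(e_tree)\n            if e.content in dictionary.get(j.content, [])]\n\nj = Node('shingou', children=[Node('aoi')])      # (signal) with child (blue)\ne = Node('light', ['the'], [Node('green', ['was'])])\nprint(align(j, e, {'shingou': ['light'], 'aoi': ['green']}))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Alignment of Parallel Sentences",

"sec_num": "2."

},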
|
{ |
|
"text": "Japanese sentences are converted into dependency structures using a morphological analyzer, JUMAN, and a dependency analyzer, KNP [3] . These tools can detect Japanese sentence structures in high accuracy: for the news article domain, 99% for segmentation and POS-tagging, and 90% for dependency analysis. They are robust enough to handle travel domain conversations and the accuracy is almost the same with news article sentences. Japanese dependency structure consists of nodes which correspond with content words. Function words such as postpositions, affixes, and auxiliary verbs are included in content words' nodes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 133, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Analysis of Japanese and English Sentences", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "For English sentences, Charniak's nlparser is used to convert them into phrase structures [4] , and then they are transformed into dependency structures by rules defining head words for phrases. In the same way as Japanese, each content word composes a node of English dependency tree.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 93, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Analysis of Japanese and English Sentences", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "Charniak's nlparser was trained on Penn Treebank, and is not necessarily suitable for travel domain conversations. In some cases, basic English sentences were wrongly parsed by the parser.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Analysis of Japanese and English Sentences", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "Japanese word/phrase to English word/phrase correspondences are detected by two methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Detection of Word/Phrase Correspondences", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "One is to use a Japanese-English dictionary, EIJIRO [5] . The original EIJIRO contains about 1.5M entries, but we utilized about 0.9M entries excluding slang words/expressions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 55, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Detection of Word/Phrase Correspondences", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "The other method handles transliteration. For possible person names and geo names suggested by the morphological analyzer and Katakana words (Katakana is a Japanese alphabet usually used for loan words), their possible transliterations are produced and their similarity with words in the English sentence is calculated based on the edit distance. If there are similar words exceeding the threshold, they are handled as a correspondence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Detection of Word/Phrase Correspondences", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "For example, the following words can be corresponded by the transliteration module, which are rarely handled by the existing bilingual dictionary entries:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Detection of Word/Phrase Correspondences", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "\u2192 Shinjuku \u2194 Shinjuku (similarity:1.0) \u2192 rosuwain \u2194 rose wine (simi- larity:0.78)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Detection of Word/Phrase Correspondences", |
|
"sec_num": "2.2." |
|
}, |
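
{

"text": "A minimal sketch of the similarity computation, assuming a standard Levenshtein distance normalized by the longer string length; the romanization of katakana and the exact threshold are omitted, and the scores printed by this stand-in need not match the paper's values.\n\ndef edit_distance(a, b):\n    prev = list(range(len(b) + 1))\n    for i, ca in enumerate(a, 1):\n        cur = [i]\n        for j, cb in enumerate(b, 1):\n            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))\n        prev = cur\n    return prev[-1]\n\ndef similarity(a, b):\n    # 1.0 means identical strings, as in the Shinjuku example above.\n    return 1.0 - edit_distance(a, b) / max(len(a), len(b))\n\nprint(similarity('shinjuku', 'shinjuku'))        # 1.0\nprint(round(similarity('rosuwain', 'rose wine'), 2))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Detection of Word/Phrase Correspondences",

"sec_num": "2.2."

},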
|
{ |
|
"text": "The units of correspondences are nodes, and function words in nodes are included in the correspondences of content words. If the bilingual dictionary and transliteration module detect a correspondence with two or more content words, the correspondence of two or more nodes are generated accordingly. In Figure 1 , for example, the two Japanese nodes \" (cross)\" and \" (point) \" corresponds to the one English node \"at the intersection\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 311, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Detection of Word/Phrase Correspondences", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "The method described in the previous section sometimes detects ambiguous correspondences, that is, one-to-many or many-to-many correspondences. Such ambiguity is resolved based on harmonious criteria.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation of Correspondences", |
|
"sec_num": "2.3." |
|
}, |
|
{ |
|
"text": "Suppose there is a correspondence X with ambiguity, and there is an unambiguous correspondence Y with the distance n in the Japanese dependency tree and the distance m in the English dependency tree, we give the score 1/n + 1/m to the correspondence X, since we can consider that the nearer Y is to X, the more strongly Y supports X. Here we define the distance of correspondences as the number of traversing nodes in a dependency tree. For example, in Figure 1 , the distance between \"the car\" and \"came\" is 1, and that between \"the car\" and \"at the intersection\" is 2.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 453, |
|
"end": 461, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Disambiguation of Correspondences", |
|
"sec_num": "2.3." |
|
}, |
|
{ |
|
"text": "Then, we accept the ambiguous correspondence, the sum of whose neighboring correspondences' scores is the largest, and reject the others conflicting with the accepted one. This calculation is repeated until all the ambiguous correspondences are resolved.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation of Correspondences", |
|
"sec_num": "2.3." |
|
}, |
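
{

"text": "A small sketch of this resolution loop, with correspondences represented as (Japanese node, English node) pairs; the tree-distance functions are assumed given, and the conflict test is the simplest possible one.\n\ndef conflicts(x, y):\n    # Two candidates conflict when they claim the same node on either side.\n    return x[0] == y[0] or x[1] == y[1]\n\ndef disambiguate(ambiguous, unambiguous, jdist, edist):\n    ambiguous = list(ambiguous)\n    while ambiguous:\n        # Score each candidate X by sum(1/n + 1/m) over unambiguous neighbors Y.\n        best = max(ambiguous, key=lambda x: sum(\n            1.0 / jdist(x, y) + 1.0 / edist(x, y) for y in unambiguous))\n        unambiguous.append(best)\n        # Reject the remaining candidates that conflict with the accepted one.\n        ambiguous = [x for x in ambiguous\n                     if x is not best and not conflicts(x, best)]\n    return unambiguous\n\nd = lambda x, y: 1                               # stand-in tree distance\nprint(disambiguate([('J1', 'E1'), ('J1', 'E2')], [('J2', 'E2')], d, d))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Disambiguation of Correspondences",

"sec_num": "2.3."

},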
|
{ |
|
"text": "IWSLT05 training sentences are fairly short, and most correspondences are unambiguous. Ambiguous correspondences are only 4.8%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Disambiguation of Correspondences", |
|
"sec_num": "2.3." |
|
}, |
|
{ |
|
"text": "The alignment procedure so far found all corresponds in parallel sentences. Then, we merge the remaining nodes into existing correspondences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of Remaining Words", |
|
"sec_num": "2.4." |
|
}, |
|
{ |
|
"text": "First, the root nodes of the dependency trees are handled as follows. In the given training data, we suppose all parallel sentences have appropriate translation relation. Accordingly, if neither root nodes (of the Japanese dependency tree and the English dependency tree) are included in any correspondences, the new correspondence of the two root nodes are generated. If either root node is remaining, it is merged into the correspondence of the other root node.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of Remaining Words", |
|
"sec_num": "2.4." |
|
}, |
|
{ |
|
"text": "Then, both for Japanese remaining node and English remaining node, if it is within a base NP and another node in the NP is in a correspondence, it is merged into the correspondence. The other remaining nodes are merged into correspondences of their parent (or ancestor) nodes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of Remaining Words", |
|
"sec_num": "2.4." |
|
}, |
|
{

"text": "In the case of Figure 1, \" (that)\" is merged into the correspondence \" (car) \u2194 the car\", since it is within an NP. Then, \" (suddenly)\", \"at me\" and \"from the side\" are merged into their parent correspondence, \" (rush out) \u2194 came\".",

"cite_spans": [],

"ref_spans": [

{

"start": 15,

"end": 23,

"text": "Figure 1",

"ref_id": "FIGREF1"

}

],

"eq_spans": [],

"section": "Handling of Remaining Words",

"sec_num": "2.4."

},
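
{

"text": "A sketch of the base-NP and ancestor rules over simplified data structures (the root-node rule is omitted); corr_of maps a node to its correspondence id or None, while parent and np_mate encode the dependency tree and base-NP mates, and the node names are stand-ins.\n\ndef attach_remaining(nodes, corr_of, parent, np_mate):\n    for n in nodes:\n        if corr_of.get(n) is not None:\n            continue\n        mate = np_mate.get(n)\n        if mate is not None and corr_of.get(mate) is not None:\n            corr_of[n] = corr_of[mate]           # base-NP rule\n            continue\n        a = parent.get(n)\n        while a is not None and corr_of.get(a) is None:\n            a = parent.get(a)                    # climb to the nearest aligned ancestor\n        if a is not None:\n            corr_of[n] = corr_of[a]\n    return corr_of\n\ncorr = {'car': 1, 'came': 2, 'that': None, 'suddenly': None}\nparent = {'that': 'car', 'car': 'came', 'suddenly': 'came', 'came': None}\nprint(attach_remaining(['that', 'suddenly'], corr, parent, {'that': 'car'}))\n# {'car': 1, 'came': 2, 'that': 1, 'suddenly': 2}, as in the Figure 1 example",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Handling of Remaining Words",

"sec_num": "2.4."

},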
|
{ |
|
"text": "We call the correspondences constructed so far as basic correspondences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of Remaining Words", |
|
"sec_num": "2.4." |
|
}, |
|
{ |
|
"text": "Here, let us compare our alignment method with an EM based alignment. We tested an EM based tool, giza++ for the alignment of 20,000 training data [6] . We found many inappropriate word alignments in the giza++ results, and concluded that this size of training data might be too small for EM based alignment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 150, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with EM based Alignment", |
|
"sec_num": "2.5." |
|
}, |
|
{ |
|
"text": "On the other hand, our method using a 0.9M-entry bilingual dictionary and a transliteration module could find correspondences quite accurately. For the given training set, we could conclude that our proposed method is superior to the EM based method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with EM based Alignment", |
|
"sec_num": "2.5." |
|
}, |
|
{ |
|
"text": "However, the correspondence statistics in the whole training data must be an important information, and it is our future target to use a flat bilingual dictionary and the statistical information together.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison with EM based Alignment", |
|
"sec_num": "2.5." |
|
}, |
|
{ |
|
"text": "Once we detect basic correspondences in the parallel sentences, all basic correspondences and all combination of adjoining basic correspondences (both in Japanese and English dependency trees) are registered into the translation example database.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation Example Database", |
|
"sec_num": "2.6." |
|
}, |
|
{ |
|
"text": "From the parallel sentences in Figure 1 , the three basic correspondences and their combinations such as \" \u2194 came at me from the side at the intersection\" and \" \u2194 the car came at me from the side\" are registered.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 39, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Translation Example Database", |
|
"sec_num": "2.6." |
|
}, |
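
{

"text": "A sketch of the registration step; correspondences are dicts with a Japanese side (romanized stand-ins here) and an English side, the adjacency test is passed in as a function, and only pairwise combinations are shown (larger connected combinations would be enumerated the same way).\n\nfrom itertools import combinations\n\ndef register(basic, adjoining, db):\n    for c in basic:\n        db.setdefault(c['j'], []).append(c)\n    for a, b in combinations(basic, 2):\n        if adjoining(a, b):                      # neighbors in both dependency trees\n            merged = {'j': a['j'] + ' ' + b['j'], 'e': a['e'] + ' ' + b['e']}\n            db.setdefault(merged['j'], []).append(merged)\n    return db\n\ndb = register(\n    [{'j': 'kousaten', 'e': 'at the intersection'},\n     {'j': 'detekita', 'e': 'came at me from the side'}],\n    lambda a, b: True,                           # stand-in adjacency test\n    {})\nprint(sorted(db))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Translation Example Database",

"sec_num": "2.6."

},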
|
{ |
|
"text": "In the translation process, first, a Japanese input sentence is converted into the dependency structure as in the parallel sentence alignment. Then, translation examples for each subtrees are retrieved, the best translation examples are selected, and their English expressions are combined to generate the English translation (Figure 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 326, |
|
"end": 335, |
|
"text": "(Figure 2", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Translation", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "At first, the root of the input sentence is set to the retrieval root, and each sub-tree whose root is the retrieval root is retrieved step by step. If there is no translation example for a sub-tree, the retrieval for the current retrieval root stops. Then, each child node of the current retrieval root is set to the new retrieval root and its sub-trees are retrieved.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Retrieval of Translation Examples", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "In the case of Figure 2 , sub-trees from the root node \" (was)\" are retrieved: \" (was)\", \" (blue) (was)\", \" (signal) (was)\", \" (signal) (blue) (was)\" and so on. Then, sub-trees from \" (blue)\" and sub-trees from \" (signal) \" are retrieved step by step.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 23, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Retrieval of Translation Examples", |
|
"sec_num": "3.1." |
|
}, |
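
{

"text": "A sketch of this retrieval loop over a toy tree; sub-trees are represented as tuples of node words, and the example database and the sub-tree enumerator are tiny stand-ins for the real structures.\n\nclass N:\n    def __init__(self, word, children=()):\n        self.word, self.children = word, list(children)\n\ndef retrieve(root, subtrees_of, lookup):\n    found, frontier = [], [root]\n    while frontier:\n        r = frontier.pop()\n        for st in subtrees_of(r):\n            hits = lookup(st)\n            if not hits:\n                break                            # stop growing at the first miss\n            found.append((st, hits))\n        frontier.extend(r.children)              # children become new retrieval roots\n    return found\n\ndef subtrees_of(r):\n    # The root alone, then the root plus one child, and so on.\n    return [(r.word,)] + [tuple(sorted((r.word, c.word))) for c in r.children]\n\nexamples = {('was',): ['was'], ('blue', 'was'): ['was green'],\n            ('signal',): ['the light']}\nwas = N('was', [N('blue'), N('signal')])\nprint(retrieve(was, subtrees_of, lambda st: examples.get(st, [])))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Retrieval of Translation Examples",

"sec_num": "3.1."

},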
|
{ |
|
"text": "If no translation example is found for a Japanese node, the bilingual dictionary is looked up and its translation is used as if it is an translation example. (If there is no entry in the dictionary we output nothing for the node.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Retrieval of Translation Examples", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "Then, out of retrieved translation examples, good ones are selected to generate the English translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection of Translation Examples", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "The basic idea of example-based machine translation is to prefer to use larger translation example, which takes into consideration larger context and could provide an appropriate translation. According to this idea, our system also selects larger examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection of Translation Examples", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "The criterion is based on the size of translation example (the number of matching nodes with the input), plus the similarities of the neighboring outside nodes, ranging from 0.0 to 1.0 depending on the similarity calculated by a thesaurus. The similar outside node is used as a bond to combine two translation examples, as explained in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection of Translation Examples", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "For example, if the size of a translation example is two, and the outside parent node is similar to the outside parent node of the matching Japanese input sub-tree by 0.3 similarity, and one outside child node is also similar to the corresponding input by 0.4, the score of the translation example becomes 2.7. 1 The set of translation examples just enough for the input is searched in a greedy way. That is, the best translation example is selected among all the examples first, and then the next best example is selected for the remaining input nodes, and this process is repeated.", |
|
"cite_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 312, |
|
"text": "1", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection of Translation Examples", |
|
"sec_num": "3.2." |
|
}, |
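
{

"text": "A sketch of the scoring and the greedy search; candidates are (matched input nodes, outside node pairs, English expression) triples, and the thesaurus similarity is a stand-in table that reproduces the 2.7 example above.\n\ndef example_score(matched, outside_pairs, sim):\n    # Size in matched nodes plus 0.0-1.0 similarities of outside neighbors.\n    return len(matched) + sum(sim(a, b) for a, b in outside_pairs)\n\ndef select_examples(input_nodes, candidates, sim):\n    remaining, chosen = set(input_nodes), []\n    while remaining:\n        live = [c for c in candidates if set(c[0]) <= remaining]\n        if not live:\n            break\n        best = max(live, key=lambda c: example_score(c[0], c[1], sim))\n        chosen.append(best[2])\n        remaining -= set(best[0])\n    return chosen\n\ntable = {('house', 'intersection'): 0.3, ('child', 'input_child'): 0.4}\nsim = lambda a, b: table.get((a, b), 0.0)\nprint(example_score(['enter', 'when'],\n                    [('house', 'intersection'), ('child', 'input_child')],\n                    sim))                        # 2.7",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Selection of Translation Examples",

"sec_num": "3.2."

},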
|
{ |
|
"text": "It is easy to generate an English expression from a translation example, because it contains enough information of English dependency structure and word order. The problem is how to combine two or more translation examples. However, in most cases, the bond node is available outside the example, to which the adjoining example is attached. There are two types of bond nodes: a child bond and a parent bond.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination of Translation Examples", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "If there is a child node, it is easy to attach the adjoining example on it. For example, in Figure 2 , the translation example \" (enter) (when)\" has a child bond, \" (house) \", corresponding to \"a house\" in the English side. The adjoining example \" ( ) \u2194 (at) the intersection\" is attached on \" \", which means \"house\" is replaced with \"the intersection\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 100, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Combination of Translation Examples", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "On the other hand, a parent bond tells that the translation example modifies its head from the front or from behind, but there is no information about the order with the other children. Currently, we handle it as the first child if it modifies from the front; as the last child if it modifies from behind. In Figure 2 , \" \u2194 my \" has a parent bond, \" \u2194 sign\" and it tells that \"my\" should modify its head from the front. Then, \"my\" is put to the first child of \"the light\", before \"traffic\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 317, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Combination of Translation Examples", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "It is not often, but if there is no bond, the order of combining two translation examples is controlled by heuristic rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination of Translation Examples", |
|
"sec_num": "3.3." |
|
}, |
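
{

"text": "A sketch of the two attachment operations on flat word lists; the real combination works on the English dependency trees, so this only illustrates the bond logic, with an uppercase placeholder marking the child-bond slot.\n\ndef attach_child_bond(example_words, bond, sub_words):\n    # The bond node is replaced by the adjoining example.\n    i = example_words.index(bond)\n    return example_words[:i] + sub_words + example_words[i + 1:]\n\ndef attach_parent_bond(head_children, modifier, from_front):\n    # No order information among siblings: first child if it modifies from\n    # the front, last child if from behind.\n    return modifier + head_children if from_front else head_children + modifier\n\nprint(attach_child_bond(['when', 'entering', 'HOUSE'], 'HOUSE',\n                        ['the', 'intersection']))\nprint(attach_parent_bond(['traffic', 'light'], ['my'], True))\n# ['when', 'entering', 'the', 'intersection'] and ['my', 'traffic', 'light']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Combination of Translation Examples",

"sec_num": "3.3."

},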
|
{ |
|
"text": "Numerals in Japanese are translated into English in several ways.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of Numerals", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "\u2022 cardinal : 124 \u2192 one hundred twenty four At the time of parallel sentence alignment, it is checked in which type Japanese numerals are translated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of Numerals", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "Translation examples of non-numeral type are used only if the numerals match exactly (\"8 \u2192 August\" cannot be used to translate \"7 \"). However, translation examples of the other types can be used by generalizing numerals, and the input numeral is transformed according to the type. For example, \"2 \u2192 second\" can be used to translate \"13 \", transforming to the ordinal, \"thirteenth\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of Numerals", |
|
"sec_num": "3.4." |
|
}, |
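
{

"text": "A sketch of type-dependent numeral rendering; the word tables are tiny stand-ins, and the cardinal and two-figure types are elided.\n\nONES = 'zero one two three four five six seven eight nine'.split()\nORDINALS = {1: 'first', 2: 'second', 3: 'third', 13: 'thirteenth'}\n\ndef render(n, kind):\n    if kind == 'one-figure':                     # 124 -> one two four\n        return ' '.join(ONES[int(d)] for d in str(n))\n    if kind == 'ordinal':                        # 2 -> second\n        return ORDINALS[n]\n    raise ValueError('non-numeral types require an exact match')\n\n# Reusing an example of the ordinal type to translate 13:\nprint(render(13, 'ordinal'))                     # thirteenth\nprint(render(124, 'one-figure'))                 # one two four",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Handling of Numerals",

"sec_num": "3.4."

},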
|
{ |
|
"text": "In Japanese-English translation, omission of pronouns often causes problems. In conversational utterances, Japanese pronouns such as \" (I)\", \" (you)\", \" (this)\" are often omitted, and this could cause erroneous translations. Essentially, omissions in Japanese sentences should be analyzed appropriately (in the case of parallel sentences, referring to English translations). However, the current system handles this problem using a language model of English. There are two patterns when pronoun omission causes erroneous translations. One is that a pronoun is omitted in a translation example and not omitted in an input sentence. In such a case, there is no correspondence for the English pronoun, and it is merged into the other (usually predicate's) correspondence. If this merged pronoun is used in the translation, it overlaps with the pronoun from the input. For example, if the translation example \" (stomach) (ache) \u2194 I 've a stomachache\" is used to translate \" (I) (stomach) (ache)\", the translation becomes \"I I 've a stomachache\" naively. To solve this problem, the merged pronoun is marked at the alignment, and two translations with it and without it are generated and ranked using a language model of English.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of Pronoun Omission", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The opposite case also causes erroneous translations. That is, when a pronoun is in a translation example and is omitted in an input, the ungrammatical English sentence without pronoun is generated. For example, when \" (this) (Japan) (mail) \u2194 will you mail this to Japan\" is used to translate \" (Japan) (mail)\", the translation becomes \"will you mail to Japan\" by eliminating \" \u2194 this\". To handle such a problem, a bond node, which is not used for translation in a normal case, is used as a translation candidate when the bond node is a pronoun, and the best translation is selected using a language model of English.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of Pronoun Omission", |
|
"sec_num": "4." |
|
}, |
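
{

"text": "A sketch of the language-model tie-break; the toy bigram table below is a stand-in for the CMU-Cambridge model mentioned next, and the floor value for unseen bigrams is an assumption.\n\nBIGRAMS = {('i', \"'ve\"): 0.1, ('i', 'i'): 0.0001, ('a', 'stomachache'): 0.05}\n\ndef lm_score(sentence):\n    words = sentence.lower().split()\n    p = 1.0\n    for a, b in zip(words, words[1:]):\n        p *= BIGRAMS.get((a, b), 0.01)           # unseen bigrams get a floor value\n    return p\n\ndef choose(variants):\n    # Generate the translation with and without the marked pronoun and keep\n    # the one the language model prefers.\n    return max(variants, key=lm_score)\n\nprint(choose([\"I I 've a stomachache\", \"I 've a stomachache\"]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Handling of Pronoun Omission",

"sec_num": "4."

},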
|
{ |
|
"text": "In the IWSLT05, we used English sentences in 20,000 training data and Cam Toolkit by CMU for a English language model [8] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 121, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of Pronoun Omission", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Our Japanese-English translation system challenged to both manual manuscript translation and ASR output translation (for ASR output we just translated the best path, though). Our system utilized Japanese and English parsers and a bilingual dictionary, and it was categorized to \"supplied & tools\" data track. Table 1 shows evaluation scores for development set 1, development set 2, and the test set. Since we have not overtuned our system to development sets, IWSLT05 test set might be a bit tough task, which means that the coverage by training set is a bit small.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 316, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "When our system translates one test sentence (7.5 words/3.2 nodes on average), 1.8 translation examples of the size of 1.5 nodes, and 0.5 translation from the bilingual dic-tionary are used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "We examined the translation results and found out that it was not the case that there was a few major problems, but there were variety of problems, such as parsing errors of both languages, excess and deficiency of the bilingual dictionary, and the inaccurate and inflexible use of translation examples. Now, let us discuss the biggest question: \"is the current parsing technology useful and accurate enough for machine translation?\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "If the translation performance was significantly better than the other systems without parsing, we could answer \"YES\" to the question. However, unfortunately our performance is average and we cannot claim that. Currently, we can at least dispel the suspicion that parsing might cause sideeffects and lower translation performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "As we mentioned above, parsing errors are not a principal cause of translation errors, but these are not a few. One of the possible countermeasures is to reconsider the learning process of an English parser. The English parser used here is learned from Penn Treebank, and seems to be vulnerable to conversational sentences in travel domain. Furthermore, it is quite possible to improve parsing accuracies of both languages complementarily by taking advantage of the difference of syntactic ambiguities between the two languages [9] . This approach may not substantially improve the parsing accuracy of the travel domain sentences, because of their short length, but is promising for translating longer general sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 528, |
|
"end": 531, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "As we stated in Introduction, we not only aim at the development of machine translation through some evaluation measure, but also tackle this task from the comprehensive viewpoint including the development of structural NLP. The examination of translation errors revealed the problems, such as problems in parsing and inflexible matching of a Japanese input and Japanese translation examples. Resolving such problems is considered to be an important issue not only for MT but also for other NLP applications. We pursue the study of machine translation from this standpoint continuously.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "We proposed a method of selecting translation examples based on translation probability[7]. Though we used size and similarity based criteria for IWSLT05 because of time constraint, we are planning to use probability based criteria from now on.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A framework of a mechanical translation between Japanese and English by analogy principle", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Nagao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "Proceedings of the international NATO symposium on Artificial and human intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "173--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Nagao, \"A framework of a mechanical translation be- tween Japanese and English by analogy principle,\" in Proceedings of the international NATO symposium on Artificial and human intelligence, 1984, pp. 173-180.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Improved alignment models for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Tillmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the Joint Conference of Empirical Methods in Natural Language Processing and Very Large Corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "20--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. J. Och, C. Tillmann, and H. Ney, \"Improved align- ment models for statistical machine translation,\" in Pro- ceedings of the Joint Conference of Empirical Methods in Natural Language Processing and Very Large Cor- pora, 1999, pp. 20-28.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Nagao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Computational Linguistics", |
|
"volume": "20", |
|
"issue": "4", |
|
"pages": "507--534", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Kurohashi and M. Nagao, \"A syntactic analysis method of long Japanese sentences based on the detec- tion of conjunctive structures,\" Computational Linguis- tics, vol. 20, no. 4, pp. 507-534, 1994.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A maximum-entropy-inspired parser", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "132--139", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Charniak, \"A maximum-entropy-inspired parser,\" in Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguis- tics, 2000, pp. 132-139.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Electronic Diciotionary Project, EIJIRO 2nd Edition", |
|
"authors": [], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Electronic Diciotionary Project, EIJIRO 2nd Edition. ALC Press Inc., 2005.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A systematic comparison of various statistical alignment models", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "19--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. J. Och and H. Ney, \"A systematic comparison of var- ious statistical alignment models,\" Computational Lin- guistics, vol. 29, no. 1, pp. 19-51, 2003.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Probabilistic model for example-based machine translation", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Aramaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Kashioka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Tanaka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of MT Summit X", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "219--226", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Aramaki, S. Kurohashi, H. Kashioka, and H. Tanaka, \"Probabilistic model for example-based machine trans- lation,\" in Proceedings of MT Summit X, 2005, pp. 219- 226.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Statistical language modeling using the CMU-Cambridge toolkit", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Clarkson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rosenfeld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the European Conference on Speech Communication and Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2707--2710", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Clarkson and R. Rosenfeld, \"Statistical language mod- eling using the CMU-Cambridge toolkit,\" in Proceed- ings of the European Conference on Speech Communi- cation and Technology, 1997, pp. 2707-2710.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Structural matching of parallel texts", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ishimoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Utsuro", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "23--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Matsumoto, H. Ishimoto, and T. Utsuro, \"Structural matching of parallel texts,\" in Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 1993, pp. 23-30.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "An example of parallel sentence alignment. (The root of a tree is placed at the extreme left and phrases are placed from top to bottom. Correspondences of underlined words were detected by a bilingual dictionary.)" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "traffic light was green when entering the intersection." |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "An example of Japanese-English translation." |
|
}, |
|
"FIGREF5": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "ordinal (e.g., day) : 2 \u2192 second \u2022 two-figure (e.g., room number, year) : 124 \u2192 one twenty four \u2022 one-figure (e.g., flight number, phone number) : 124 \u2192 one two four \u2022 non-numeral (e.g., month) : 8 \u2192 August" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td/><td>BLEU</td><td>NIST</td></tr><tr><td>Development 1</td><td colspan=\"2\">0.4245 8.5655</td></tr><tr><td>Development 2</td><td colspan=\"2\">0.4056 8.4967</td></tr><tr><td colspan=\"3\">IWSLT05 manual 0.3718 7.8472</td></tr><tr><td>IWSLT05 ASR</td><td colspan=\"2\">0.3361 7.4157</td></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Evaluation results.", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |