|
{ |
|
"paper_id": "2004", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:24:04.482663Z" |
|
}, |
|
"title": "On Feature Selection in Maximum Entropy Approach to Statistical Concept-based Speech-to-Speech Translation", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IBM T. J. Watson Research Center", |
|
"location": { |
|
"postCode": "10598", |
|
"settlement": "Yorktown Heights", |
|
"region": "NY" |
|
} |
|
}, |
|
"email": "lianggu@us.ibm.com" |
|
}, |
|
{ |
|
"first": "Yuqing", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IBM T. J. Watson Research Center", |
|
"location": { |
|
"postCode": "10598", |
|
"settlement": "Yorktown Heights", |
|
"region": "NY" |
|
} |
|
}, |
|
"email": "yuqing@us.ibm.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Feature selection is critical to the performance of maximumentropy-based statistical concept-based spoken language translation. The source language spoken message is first parsed into a structured conceptual tree, and then generated into the target language based on maximum entropy modeling. To improve feature selection in this maximum entropy approach, a new concept-word feature is proposed, which exploits both concept-level and word-level information. It thus enables the design of concise yet informative concept sets and easies both annotation and parsing efforts. The concept generation error rate is reduced by over 90% on training set and 7% on test set in our speech translation corpus within limited domains. To alleviate data sparseness problem, multiple feature sets are proposed and employed, which achieves 10%-14% further error rate reduction. Improvements are also achieved in our experiments on speech-to-speech translation.", |
|
"pdf_parse": { |
|
"paper_id": "2004", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Feature selection is critical to the performance of maximumentropy-based statistical concept-based spoken language translation. The source language spoken message is first parsed into a structured conceptual tree, and then generated into the target language based on maximum entropy modeling. To improve feature selection in this maximum entropy approach, a new concept-word feature is proposed, which exploits both concept-level and word-level information. It thus enables the design of concise yet informative concept sets and easies both annotation and parsing efforts. The concept generation error rate is reduced by over 90% on training set and 7% on test set in our speech translation corpus within limited domains. To alleviate data sparseness problem, multiple feature sets are proposed and employed, which achieves 10%-14% further error rate reduction. Improvements are also achieved in our experiments on speech-to-speech translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Automatic spoken language translation is crucial to speech-tospeech (S2S) translation systems that facilitate communication between people who speak different languages. While substantial progress has been made over the past decades in research areas of speech recognition and machine translation, multilingual natural speech translation remains a grand challenge for human speech and language technologies [1, 2, 3, 4] . Compared to written-text messages, most conversational spoken messages are conveyed through casual spontaneous speech with strong disfluencies and imperfect syntax. In addition, the output from speech recognizers often contains recognition errors and no punctuations, which brings serious challenges to robust and accurate translation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 407, |
|
"end": 410, |
|
"text": "[1,", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 413, |
|
"text": "2,", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 416, |
|
"text": "3,", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 417, |
|
"end": 419, |
|
"text": "4]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In our prior work [5] , we presented a statistical spoken language translation framework based on tree-structured semantic/syntactic representations, or concepts, as illustrated in Figure 1 . In this example, the source English sentence and the corresponding Chinese translation are represented by a set of concepts -{PLACE, SUBJECT, WELLNESS, QUERY, PREPPH, BODY-PART}. Some of the concepts (such as PLACE, WELL-NESS and BODY-PART) are semantic representations while some of the concepts (such as PREPPH) are syntactic representations. There are also concepts (such as SUBJECT and QUERY) that represent both semantic and syntactic information. Note that although the source and target sentences share the same set of concepts, the tree structures are significantly different from each other because of the well-known distinct nature of these two languages (i.e., English and Chinese).", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 21, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 190, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The above concept tree is comparable to interlingua [1] -a language-independent representation of intended meanings that is commonly used in modern spoken language translation systems. In our approach, the intended meanings are represented by a set of language-independent concepts (same as conventional interlingua approach) organized in a language-dependent treestructure (different from conventional interlingua method). The process of this concept-based translation may be further divided into two cascaded sub-processes: a) the generation of conceptual tree structure, and b) the generation of words within each concept, in the target language. While the total number of concepts may usually be limited to alleviate data sparseness impacts (especially for new domains), there are no constraints on the structures of the conceptual trees. Therefore, compared to traditional interlingua-based speech translation approaches, our conceptualtree-based approach could achieve more flexible meaning preservation with wider coverage and, hence, higher robustness and accuracy on translation tasks in limited domains, at the cost of additional challenges in the appropriate transformation of conceptual trees between source and target languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 55, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
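{

"text": "To make the tree-structured concept representation concrete, the following minimal Python sketch shows one way such a concept tree could be encoded; the ConceptNode class and the example tree for the Figure 1 sentence are illustrative assumptions, not the actual MASTOR data structures.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ConceptNode:
    concept: str                                       # language-independent concept label, e.g. 'QUERY'
    words: List[str] = field(default_factory=list)     # surface words covered by this concept
    children: List['ConceptNode'] = field(default_factory=list)

    def concept_sequence(self) -> List[str]:
        # concept labels of the direct children, i.e. one sequence level of the tree
        return [child.concept for child in self.children]

# Hypothetical English-side tree for the Figure 1 sentence
# 'is he bleeding anywhere else besides his abdomen'
english_tree = ConceptNode('S', children=[
    ConceptNode('QUERY', words=['is']),
    ConceptNode('SUBJECT', words=['he']),
    ConceptNode('WELLNESS', words=['bleeding']),
    ConceptNode('PLACE', words=['anywhere', 'else']),
    ConceptNode('PREPPH', words=['besides'], children=[
        ConceptNode('BODY-PART', words=['his', 'abdomen']),
    ]),
])

print(english_tree.concept_sequence())   # ['QUERY', 'SUBJECT', 'WELLNESS', 'PLACE', 'PREPPH']
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "INTRODUCTION",

"sec_num": "1."

},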
|
{ |
|
"text": "Two principal challenges remain open in the design of conceptbased speech translation systems. One challenge is the design and selection of language-independent concepts, which usually depends on the domain in which the translation system is used. This is a lengthy, tedious but very important task. The concepts have to be not only broad enough to cover all intended meanings in the source sentence but also informative so that a target sentence can be generated with right word sense and in a grammatically correct manner. The size of the concept set is also important as too many concepts may result in data sparseness for training, while too few concepts could degrade the translation accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Another challenge is the generation of concepts in the target language via a natural concept generation (NCG) process. The purpose of NCG is to generate the correct concept structure in the target language corresponding to the concept structure in the source language. As explained before, the concept structures are language-dependent. Errors in concept generation could greatly distort or even ruin the meaning to be expressed in the target language, particularly in conversational speech translations where in most cases only a few concepts are conveyed in the messages to be translated. Therefore, accurate and robust NCG is viewed as an essential step towards high-performance conceptbased spoken language translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "While NCG approaches can be rule-based or statistical, we prefer the latter because of its trainability, scalability and portability. One such approach based on maximum-entropy (ME) criterion was presented in our previous work [5] . It was then improved in [6] and [7] by the employment of a series of algorithms such as forward-backward modeling and confidence measurement.", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 230, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 260, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 268, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "One critical problem remain in our ME-based translation approach is feature selection. In theory, the principle of maximum entropy does not directly concern itself with the issue of feature selection [8] . It merely provides a framework to combine constraints of both source and target language into a translation model. In reality, however, the feature selection problem is crucial to the performance of ME-based approaches, since the universe of possible constraints (or features) is typically in thousands or even millions for natural language processing. Some of these impacts on ME-based speech translation were preliminarily described in our previous work [6] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 203, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 662, |
|
"end": 665, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In this paper, to address the above concerns, we analyze and discuss in greater detail the feature selection issue in the design of ME-based statistical concept-based speech translation systems. In particular, a novel feature is proposed to use the combination of concept and word information to achieve higher NCG accuracy while minimize the total number of distinct concepts and hence greatly reduce the concept annotation and natural language understanding effort. A multiple feature selection algorithm is further employed to handle data sparseness issues. Experiments with these new algorithms are performed and analyzed on both the NCG accuracy and the overall speech translation performance. Figure 2 shows a general framework of our MASTOR speech translation system for applications in limited domains. A cascaded scheme of large-vocabulary conversational automatic speech recognition (ASR), statistical concept-based machine translation and concatenative text-to-speech (TTS) synthesis is applied by using state-of-the-art speech and language processing techniques. While each of these three functional units is crucial to the overall speech-to-speech translation quality, we are only concerned with the performance of statistical concept-based translation here.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 699, |
|
"end": 707, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The baseline statistical concept-based translation further consists of three cascaded functional components: natural language understanding (NLU), natural concept generation (NCG) and natural word generation (NWG). In our MASTOR system, the NLU function is performed via a decision-tree-based statistical semantic parser pre-trained on an annotated text corpus [9] . The NWG process generates words in the target language based on the generated structural concepts from NCG as well as a tag-based word-to-word multilingual dictionary [10] . Although these two components are very important to our statistical interlinguabased translation, they are, again, beyond the scope of this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 361, |
|
"end": 364, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 534, |
|
"end": 538, |
|
"text": "[10]", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A. Statistical Concept-based S2S Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The NCG process generates a set of structural concepts in the target language according to a concept-based semantic parse tree derived from the NLU process in the source language. The accuracy of the NCG process has a great impact on the final translation performance as any errors of inserted, missing, replaced or mistakenly ordered concepts may cause severe understanding problems or loss of meaning during multilingual speech communication. Therefore, highly accurate NCG is essential to our goal of meaning preservation in conversational speech translation. In this paper, we focus on improving the ME-based statistical NCG method, as explained next.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A. Statistical Concept-based S2S Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The baseline statistical NCG algorithm on sequence level was proposed in [5] as an extension from the \"NLG2\" algorithm described in [11] . During natural concept sequence generation, the concept sequences in the target language are generated sequentially according to the output of NLU parser. Each new concept is generated based on the local n-grams of the up-to-date generated concept sequence and the subset of the input concept sequence that has not yet appeared in the generated sequence. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 76, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 136, |
|
"text": "[11]", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "= V s k s s c s f g k k s s c s f g k n n m n n m k n n m k s s c s p 1 1 , , , , , , , , 1 , , \u03b1 \u03b1 ,", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where s is the concept candidate to be generated, n s and 1 \u2212 n s are the previous two concepts in S . V is the set of all possible concepts that can be generated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "( ) k k k k k s s c s f 1 0 1 , , , \u2212 + = 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "is the k-th feature. The selection of ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "( ) [ ] \u2211 \u2211 \u2211 = \u2208 \u2212 = L l q s m n n m k l s s c s p 1 1 , , log max arg \u03b1 \u03b1 ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "{ } L l q Q l \u2264 \u2264 = 1 ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "is the total set of concept sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The optimization process can be accomplished via the Improved Iterative Scaling algorithm using maximum entropy criterion described in [11] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 139, |
|
"text": "[11]", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
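{

"text": "As a rough illustration of equations (1) and (2), the sketch below computes the conditional probability of a concept candidate as a normalized product of feature weights. The indicator-style co-occurrence features and the dictionary of weights alpha are simplifying assumptions; the real weights would be estimated with Improved Iterative Scaling as in [11].

from typing import Dict, FrozenSet, List, Tuple

# A feature f_k is identified by the tuple of values it tests for: the candidate
# concept s, one unconsumed input concept c_0, and the previous two generated
# concepts s_n and s_{n-1}.
Feature = Tuple[str, str, str, str]

def active_features(s: str, remaining: FrozenSet[str], s_n: str, s_n1: str) -> List[Feature]:
    # every co-occurrence test that fires for candidate s in this context
    return [(s, c0, s_n, s_n1) for c0 in remaining]

def concept_probability(s: str, remaining: FrozenSet[str], s_n: str, s_n1: str,
                        vocab: List[str], alpha: Dict[Feature, float]) -> float:
    # p(s | c_1^m, s_n, s_{n-1}) as a normalized product of feature weights (eq. 1)
    def score(cand: str) -> float:
        w = 1.0
        for feat in active_features(cand, remaining, s_n, s_n1):
            w *= alpha.get(feat, 1.0)        # unseen features contribute a neutral weight
        return w
    z = sum(score(cand) for cand in vocab)   # normalizer over the concept vocabulary V
    return score(s) / z
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "B. ME-based Statistical NCG on Sequence Level",

"sec_num": null

},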
|
{ |
|
"text": "g is a binary test function defined as , ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "+ = n s s S 1 ; 3) If C s n \u2208 +1 , set 1 + \u2212 = n s C C (remove 1 + n s from C); ac- cordingly, let 1 \u2212 \u2190 M M ; 4) If 1 \u2265 M or N n \u2264 + 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ", repeat 2) and 3); Otherwise, stop and output generated concept sequence S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since the number of concepts generated in S could be different from the number of concepts in the input sequence in the source language, only a maximum number (denoted as N) of concepts may be generated. In our experiments, 11 = N .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
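{

"text": "A minimal sketch of the generation loop in steps 1)-4) above: concepts are generated greedily until the unconsumed input set C is empty or the cap of N = 11 concepts is reached. The prob argument stands in for a conditional model such as the one sketched earlier, and the START padding for the first two history positions is a simplification.

from typing import Callable, List, Set

def generate_sequence(input_concepts: List[str], vocab: List[str],
                      prob: Callable[[str, frozenset, str, str], float],
                      max_len: int = 11) -> List[str]:
    remaining: Set[str] = set(input_concepts)      # C, the unconsumed input concepts (M = len(remaining))
    seq: List[str] = ['START', 'START']            # history padding for s_{n-1}, s_n
    while remaining and len(seq) - 2 < max_len:    # repeat while M >= 1 and n + 1 <= N
        s_n, s_n1 = seq[-1], seq[-2]
        # step 2): pick the candidate with the highest conditional probability
        best = max(vocab, key=lambda s: prob(s, frozenset(remaining), s_n, s_n1))
        seq.append(best)
        # step 3): if the generated concept is in C, remove it (M decreases by one)
        remaining.discard(best)
    return seq[2:]                                 # drop the START padding
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "B. ME-based Statistical NCG on Sequence Level",

"sec_num": null

},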
|
{ |
|
"text": "An example of primary-level (or main level) concept sequence generation is depicted in Figure 3 when translating the English sentence in Figure 1 into Chinese.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 95, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 145, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B. ME-based Statistical NCG on Sequence Level", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The algorithms described above only deal with the concept generation issue of a single sequence. To tackle the generation problem of multiple sequences at different structural levels, a recursive structural concept sequence generation algorithm is proposed in [2, 3] as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 263, |
|
"text": "[2,", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 266, |
|
"text": "3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C. Structural Concept Sequence Generation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1) Traverse the semantic parse tree in a bottom-up left-to-right manner;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C. Structural Concept Sequence Generation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2) For each un-processed concept sequence in the parse tree, generate an optimal concept sequence in the target language based on the procedure described in sub-section 2.B; after each concept sequence is processed, mark the root-node of this sequence as visited;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C. Structural Concept Sequence Generation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3) Repeat step 2) until all parse braches in the source language are processed; 4) Replace nodes with their corresponding output sequence to form a complete concept tree for the output sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C. Structural Concept Sequence Generation", |
|
"sec_num": null |
|
}, |
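{

"text": "The recursive traversal in steps 1)-4) can be sketched as follows; it assumes the hypothetical ConceptNode class from the earlier sketch and a sequence-level generator (such as generate_sequence) passed in as reorder, and it simplifies by assuming concept labels are unique within one sequence.

from typing import Callable, List

def generate_tree(node: 'ConceptNode',
                  reorder: Callable[[List[str]], List[str]]) -> 'ConceptNode':
    # Bottom-up, left-to-right: process each child subtree first, then generate
    # the target-language order for this node's own child concept sequence.
    processed = [generate_tree(child, reorder) for child in node.children]
    source_order = [child.concept for child in processed]
    target_order = reorder(source_order)           # step 2): ME-based sequence generation
    by_label = {child.concept: child for child in processed}
    # step 4): replace the children with the reordered (target-language) sequence
    node.children = [by_label[c] for c in target_order if c in by_label]
    return node
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "C. Structural Concept Sequence Generation",

"sec_num": null

},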
|
{ |
|
"text": "An example of structural concept sequence generation is depicted in Figure 4 when translating the English sentence in Figure 1 and Figure 3 into Chinese.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 76, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 126, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 131, |
|
"end": 139, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "C. Structural Concept Sequence Generation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Earlier we introduced two basic challenges in the design of statistical maximum-entropy-based models for natural concept generation: 1) finding appropriate facts or features about the observed data; 2) optimally incorporate these features into the target models. In the previous section, we solved the second problem by using maximum-entropy principle in equations (1) (2) (3) (4) . In this section, we will attack the first challenge and improve natural concept generation performance by augmenting feature dimensions and combining various feature sets, as explained next.", |
|
"cite_spans": [ |
|
{ |
|
"start": 365, |
|
"end": 368, |
|
"text": "(1)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 372, |
|
"text": "(2)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 376, |
|
"text": "(3)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 380, |
|
"text": "(4)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A. Problem Statement and Baseline Features", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We begin with the basic four-dimensional feature set", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A. Problem Statement and Baseline Features", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "( ) ( ) k k k k k s s c s f 1 0 1 4 , , , \u2212 + = 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A. Problem Statement and Baseline Features", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "defined in equation 1and (2) , which was first proposed in [5] . In this feature set, the order of concepts in the input sequence is discarded to alleviate performance degradation caused by sparse training data. However, there exist many cases in which the same set of concepts need to be generated into two different concept sequences depending on the order of the input sequence. For these typical concept sequences, generation errors are inevitable with the features of the specific form no matter how the statistical model is optimized.", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 28, |
|
"text": "(2)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 59, |
|
"end": 62, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A. Problem Statement and Baseline Features", |
|
"sec_num": null |
|
}, |
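{

"text": "The limitation described above can be seen directly from how the baseline four-dimensional features are extracted: the unconsumed input concepts enter the feature only as an unordered set, so two inputs with the same concepts in different orders yield identical features. The sketch below is illustrative, not the exact feature encoding of [5].

from typing import List, Set, Tuple

def baseline_features(candidate: str, remaining: Set[str],
                      s_n: str, s_n1: str) -> List[Tuple[str, str, str, str]]:
    # one (s, c_0, s_n, s_{n-1}) tuple per unconsumed input concept; the order
    # of the input sequence never enters the tuple
    return [(candidate, c0, s_n, s_n1) for c0 in sorted(remaining)]

# Two source sentences whose concepts differ only in order produce the same features,
# so the ME model cannot learn to generate different target orders for them.
fa = baseline_features('PLACE', {'QUERY', 'SUBJECT', 'WELLNESS'}, 'START', 'START')
fb = baseline_features('PLACE', {'WELLNESS', 'SUBJECT', 'QUERY'}, 'START', 'START')
assert fa == fb
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A. Problem Statement and Baseline Features",

"sec_num": null

},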
|
{ |
|
"text": "To tackle this problem, we proposed in [6] ", |
|
"cite_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 42, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A. Problem Statement and Baseline Features", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "So far we tried to extract features on the concept level. However, as explained earlier, the definition and detection of concept itself is a very challenging task. On the one hand, the concepts are defined as concise as possible, since the smaller the number of total distinct concepts, the less the effort will be endeavored in the labor-extensive and time-consuming annotation procedure, and the higher the accuracy and robustness will be of the statistical natural language understanding algorithms. On the other hand, the concepts should be as informative as possible, because the concept generation accuracy will largely rely on the sufficient information provided by each concept. One possible solution to the above problem is to expand current concept set and thus make it more informative. When concept WHQ is divided into two sub-concepts: WHQ-what and WHQwhere, the previous confusion is removed as depicted in Figure 5 (b). Unfortunately, this also dramatically increase the total number of distinct, and therefore introduce more much burden on human annotation of these concepts in the training data. More importantly, the expansion of concepts may subject to much lower parsing accuracy during natural language understanding (NLU) due to the much worse data sparseness problem.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 921, |
|
"end": 930, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B. Conciseness versus Informativity of Concepts", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Instead of expanding concept sets with the above drawbacks, we propose a new approach based on a novel feature set that uses both concept and word level information. Compared to the expansion approach discussed in the previous sub-section, this concept-word-feature approach keeps the original concept set intact, and therefore maintain both high conciseness and Informativity of concepts by taking into account the word-level information when building maximum-entropy-based statistical models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C. Features using both Concept and Word Information", |
|
"sec_num": null |
|
}, |
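{

"text": "A minimal sketch of the proposed concept-word feature: the concept inventory is left intact, but each feature additionally carries a word (or short phrase) from inside the concept, which is enough to separate cases such as (WHQ, what) and (WHQ, where). Pairing each concept with a single head word is an illustrative assumption.

from typing import Dict, List, Set, Tuple

def concept_word_features(candidate: str, remaining: Set[str],
                          s_n: str, s_n1: str,
                          head_word: Dict[str, str]) -> List[Tuple[str, ...]]:
    # like the baseline feature, but each input concept c_0 is augmented with a
    # word drawn from the words it covers in the source sentence
    return [(candidate, c0, head_word.get(c0, '<none>'), s_n, s_n1)
            for c0 in sorted(remaining)]

# The WHQ concept itself is unchanged; the word information now distinguishes
# 'what did you eat yesterday' from 'where did you eat yesterday'.
what_feats = concept_word_features('TIME', {'WHQ', 'EAT'}, 'START', 'START',
                                   {'WHQ': 'what', 'EAT': 'eat'})
where_feats = concept_word_features('TIME', {'WHQ', 'EAT'}, 'START', 'START',
                                    {'WHQ': 'where', 'EAT': 'eat'})
assert what_feats != where_feats
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "C. Features using both Concept and Word Information",

"sec_num": null

},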
|
{ |
|
"text": "One concern to the above concept-word-feature is the much severe data sparseness problem. Given a typical concept vocabulary size of 70 and a word vocabulary of mere 3000, the total possible number of feature ( ) 0.7 % / 0.4 % 17.4 % / 11.4 % Table 1 . ME-NCG performance (sequence error rate / concept error rate) using different features with forward generation models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 250, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "D. Enhancing Robustness by Combining Multiple Feature Sets", |
|
"sec_num": null |
|
}, |
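{

"text": "One way to picture the combination of feature sets is a simple back-off: when the detailed concept-word feature was observed in training its weight is used, otherwise the model falls back to the coarser concept-only feature. This sketch is only an illustration of the idea, not the exact formulation of equation (7).

from typing import Dict, Sequence, Tuple

def combined_score(fine_feats: Sequence[Tuple], coarse_feats: Sequence[Tuple],
                   alpha_fine: Dict[Tuple, float],
                   alpha_coarse: Dict[Tuple, float]) -> float:
    # multiply feature weights, preferring the informative concept-word feature
    # and backing off to the robust concept-only feature when it is unseen
    score = 1.0
    for fine, coarse in zip(fine_feats, coarse_feats):
        if fine in alpha_fine:                 # detailed feature observed in training
            score *= alpha_fine[fine]
        else:                                  # back off to the concept-only feature
            score *= alpha_coarse.get(coarse, 1.0)
    return score
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "D. Enhancing Robustness by Combining Multiple Feature Sets",

"sec_num": null

},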
|
{ |
|
"text": "Training-set Test-set Baseline NCG with basic feature ( ) Table 2 . ME-NCG performance (sequence error rate / concept error rate) using different features with forward-backward generation models. n n m m k k 1 1 1 , , , , , , , , , , , , , , , , , , , , , , , , 1 ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 65, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 304, |
|
"text": "n n m m k k 1 1 1 , , , , , , , , , , , , , , , , , , , , , , , , 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ME-NCG Methods", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The performance of our new algorithms in ME-NCG and statistical concept-based spoken language translation was evaluated on the English-to-Chinese speech translation task within a limited domain of emergency medical care. Altogether 10,000 conversational in-domain parallel sentences in both English and Chinese were collected and annotated as the data corpus for evaluation. The vocabulary size is about 3000 in each language. 68 concepts were designed and used for data annotation, NLU model training and NLU parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "EXPERIMENTS", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The first set of experiments is carried out on the concept level to evaluate the performance of ME-based statistical NCG. A primary concept sequence is extracted from each annotated sentence, which represents the top-layer concepts in a semantic parser tree. Concept sequences containing only one concept are removed as they are easy to generate. To further simplify the problem, we train and test on parallel concept sequences that contain the same set of concepts in English and Chinese. In this specific case, NCG is performed to generate the correct order of concepts in the sequences of target language. More general and complex experiments are performed and shown in the next subsection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A. Experiments on ME-based statistical NCG", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "According to the above criterion, about 5600 concept sequences are selected as our experimental corpus. During experimentation, this corpus is randomly partitioned into training corpus containing 80% of the sequences and test corpus with the remaining 20%. This random process is repeated 50 times and the average performance is recorded. Two evaluation metrics were applied. A concept sequence is considered to have an error during measurement of sequence error rate if one or more errors occur in this sequence. Concept error rate, on the other hand, evaluates concept errors in concept sequences such as substitution, deletion and insertion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A. Experiments on ME-based statistical NCG", |
|
"sec_num": null |
|
}, |
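{

"text": "The two metrics can be computed as sketched below: the sequence error rate counts any sequence containing at least one error as wrong, and the concept error rate accumulates substitutions, deletions and insertions from a standard edit-distance alignment, normalized by the total number of reference concepts. This is a generic WER-style sketch, not the exact scoring tool used in the paper.

from typing import List, Sequence, Tuple

def edit_distance(ref: Sequence[str], hyp: Sequence[str]) -> int:
    # classic dynamic-programming edit distance over concept labels
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)]

def error_rates(pairs: List[Tuple[List[str], List[str]]]) -> Tuple[float, float]:
    # returns (sequence error rate, concept error rate) over (reference, hypothesis) pairs
    seq_err = sum(1 for ref, hyp in pairs if ref != hyp) / len(pairs)
    concept_err = (sum(edit_distance(ref, hyp) for ref, hyp in pairs)
                   / sum(len(ref) for ref, hyp in pairs))
    return seq_err, concept_err
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A. Experiments on ME-based statistical NCG",

"sec_num": null

},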
|
{ |
|
"text": "In the first experiment, various feature types were implemented, tested and compared on both the training and test corpus with basic forward generation models. The results are shown in Table 1 . As expected, the use of concept-word features dramatically reduced the sequence/concept generation error rate from 14.0% / 8.8% with baseline four-dimensional features, and 9.1% / 5.5 % with five-dimensional features on parallel corpora, to 0.7% / 0.4%, which represents a 95% and 92% error rate reduction, respectively. The improvement becomes smaller on the test-set error rate, which is 27.9% and 7.0%, respectively. After combining ( ) error reduction was achieved on the test data. These experimental results clearly demonstrate that the concept-word features are superior to our previous proposed features, especially when the multiple feature set algorithm is employed.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 194, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A. Experiments on ME-based statistical NCG", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the second experiments, the above features were evaluated with more advanced forward-backward generation models proposed in [7] . The results are listed in Table 2 . While similar huge improvements were recorded on the training set, the conceptword features alone did not obtain significant accuracy improvement on the test set over previously proposed parallel features ( ) . Even so, 10.7% error rate reduction was achieved when multiple feature sets of ( ) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 130, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 166, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A. Experiments on ME-based statistical NCG", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Experimental results on statistical concept-based text-to-text and speech-to-text translation are shown in Table 4 Table 3 . Improvement of Bleu score in S2S translation by using new algorithms in ME-NCG (the score may range from 0 .0 to 1.0, with 1.0 indicating best translation quality) score described in [12] , which measures MT performance by evaluating n-gram accuracy with a brevity penalty. It is now one of the most widely accepted evaluation metric in the machine translation society.", |
|
"cite_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 312, |
|
"text": "[12]", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 114, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 122, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B. Experiments on statistical concept-based S2S translation", |
|
"sec_num": null |
|
}, |
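{

"text": "For reference, a bare-bones, single-reference version of the Bleu idea from [12] is sketched below (modified n-gram precision combined with a brevity penalty, no smoothing); actual evaluations should rely on a standard Bleu implementation.

import math
from collections import Counter
from typing import List

def bleu(reference: List[str], hypothesis: List[str], max_n: int = 4) -> float:
    precisions = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        hyp_ngrams = Counter(tuple(hypothesis[i:i + n])
                             for i in range(len(hypothesis) - n + 1))
        # clipped n-gram matches divided by the number of hypothesis n-grams
        overlap = sum(min(count, ref_ngrams[g]) for g, count in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)   # tiny floor avoids log(0)
    # brevity penalty discourages hypotheses shorter than the reference
    brevity = (1.0 if len(hypothesis) > len(reference)
               else math.exp(1.0 - len(reference) / max(len(hypothesis), 1)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "B. Experiments on statistical concept-based S2S translation",

"sec_num": null

},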
|
{ |
|
"text": "277 unseen speech sentences are tested. The new methods proposed achieved better performance compared to both baseline 1 (NCG process described in [6] ) and baseline 2 (NCG methods proposed in [7] ). From Table 3 we can see that, while the improvement is significant, the relatively smaller gains of overall S2S performance compared with NCG gains in Table 1 imply the importance of other S2S functional units, and the importance of further algorithmic improvement in all of these units.", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 150, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 196, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 212, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 358, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B. Experiments on statistical concept-based S2S translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Feature selection is a critical functional component in our maximum-entropy-based statistical natural concept generation. A new concept-word feature is proposed in this paper that exploits both the concept-level and word-level information during the training and decoding of maximum entropy models. It is then combined with our previous proposed features to alleviate the data-sparseness-caused over-training problem. Significant improvements are achieved in both concept sequence generation and speech translation experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSION", |
|
"sec_num": "5." |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Janus-III: Speech-to-Speech Translation in Multiple Languages", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of ICASSP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Lavie, et al, \" Janus-III: Speech-to-Speech Translation in Multi- ple Languages,\" Proceedings of ICASSP, 1997.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Vermobile: Foundation of Speech-to-Speech Translation", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Wahlster", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Wahlster, ed., Vermobile: Foundation of Speech-to-Speech Translation, Springer, 2000.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Algorithms for Statistical Translation of Spoken Language", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Hey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "IEEE Trans. on Speech and Audio Processing", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Hey, et al, \" Algorithms for Statistical Translation of Spoken Language\" , IEEE Trans. on Speech and Audio Processing, vol.8, no.1, January 2002.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A Japanese-to-English Speech Translation System: ART-MATRIX", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Takezawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of ICSLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Takezawa, et al, \" A Japanese-to-English Speech Translation System: ART-MATRIX\" , Proceedings of ICSLP, 1998.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "MARS: A statistical semantic parsing and generation based multilingual automatic translation system", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Gao, et al, \" MARS: A statistical semantic parsing and genera- tion based multilingual automatic translation system\" , Machine Translation, 2004.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Improving Statistical Natural Concept Generation in Interlingua-based Speech-to-Speech Translation", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of Eurospeech", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Gu, et al, \" Improving Statistical Natural Concept Generation in Interlingua-based Speech-to-Speech Translation\" , Proceedings of Eurospeech, 2003.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Forward-Backward Modeling in Statistical Natural Concept Generation for Interlingua-based Speech-to-Speech Translation", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "IEEE Workshop on Automatic Speech Recognition and Understanding", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Gu, et al, \" Forward-Backward Modeling in Statistical Natural Concept Generation for Interlingua-based Speech-to-Speech Translation\" , IEEE Workshop on Automatic Speech Recognition and Understanding, 2003.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Maximum Entropy Approach to Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Berger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Computational Linguistics", |
|
"volume": "22", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Berger, et al, \" A Maximum Entropy Approach to Natural Lan- guage Processing\" , Computational Linguistics, vol.22, no.1, 1996.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Natural Language Parsing as Statistical Pattern Recognition", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Magerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Magerman. Natural Language Parsing as Statistical Pattern Recognition, Ph. D. thesis, Stanford Univ., 1994.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Use of Statistical N-Gram Models in Natural Language Generation for Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "F.-H", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of ICASSP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F.-H. Liu, et al, \" Use of Statistical N-Gram Models in Natural Language Generation for Machine Translation\" , Proceedings of ICASSP, 2003.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Trainable methods for surface natural language generation", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "First Meeting of the North American Chapter of the Association for computational Linguistics (NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Ratnaparkhi, \" Trainable methods for surface natural language generation\" , First Meeting of the North American Chapter of the Association for computational Linguistics (NAACL), Seattle, Washington, 2000.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Bleu: a Method for Automatic Evaluation of Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Papineni, et al, \" Bleu: a Method for Automatic Evaluation of Machine Translation\" , ACL 2002.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Example of Concept-based English-to-Chinese Translation" |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Let us assume that the source language concept sequence produced from NLU parser is concepts has already been generated in target language." |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "be discussed in the next section. k \u03b1 is a probability weight corresponding to each feature k fThe value of k \u03b1 is always positive and is optimized over a training corpus by maximizing the overall logarithmic likelihood, i.e.," |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "co-occurrence of the generated concept s and its context information of is generated by selecting the concept candidate with highest probability, i.e., Example of Concept Sequence Generation during translation of English sentence \"is he bleeding anywhere else besides his abdomen\" as illustrated inFigure" |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Example of Structural Concept Generation during translation of English sentence \"is he bleeding anywhere else besides his abdomen\" as illustrated in Figure 1 and Figure 3.s = START, where \" START\" is a pre-defined concept representing the start of the sequence; Set n = 0; Define initial set of generation sequence \u03c6 = S For each n, generate 1 + n s according to equation (3) and set { } 1 1" |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "are two adjacent concepts in the source concept sequence C . Accordingly, the conditional probability of a concept candidate and the probability weights are modified as pre-annotated parallel corpora during ME-based model training. Particularly, the optimization of (6) is performed upon a parallel tree-bank augmented feature strengthens the link between sequences in source and target languages, and can thereby improve NCG accuracy as reported in[6]." |
|
}, |
|
"FIGREF6": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "a) gives a real example in our medical speech translation domain (what did you eat yesterday vs. where did you eat yesterday) where insufficient information of concepts cause generation confusion. While the two input English sentences share exact the same set and order of concepts, the correct concept orders in Chinese are clearly different. Therefore, whether using feature errors are inevitable no matter how well the ME models are optimized. For this specific example, the reason is quite obvious: the concept WHQ is too concise that it is not informative enough to discriminate the different generation behavior between (WHQ,what) and (WHQ,where)." |
|
}, |
|
"FIGREF7": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "are two word phrases belong to k c 0 and k crespectively. Accordingly, the conditional probability of a concept candidate and the probability weights are modified as inFigure 5(a) as now two different feature sets are extracted for sentence {what did you eat yesterday} and sentence {where did you eat yesterday}." |
|
}, |
|
"FIGREF8": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": ". That is a 10 7 -times bigger space! To solve the resulted data sparseness issue, a combination of feature sets is proposed in ME-based concept generation. Multiple feature sets are extracted with various dimensions and concept/word constraints. In particular, we combine features ( ) in equation(7). These two sets of features adopted in the optimization of ME models as (a) Example of Concept-based English-to-Chinese Translation with concept information only (b) Example of Concept-based English-to-Chinese Translation with concept and sub-" |
|
}, |
|
"FIGREF11": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "based generation, when both feature sets are observed, the more informative feature ( ) combined ME models will back off to the more robust models defined in(5)." |
|
}, |
|
"FIGREF14": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "overtraining because of much larger feature space and the resulted data sparseness problem. This problem is alleviated by the proposed multiple feature sets which lead to a decent improvement over our best performance previously proposed." |
|
} |
|
} |
|
} |
|
} |