|
{ |
|
"paper_id": "2005", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:20:35.859411Z" |
|
}, |
|
"title": "Microsoft Research Treelet Translation System: IWSLT Evaluation",
|
"authors": [ |
|
{ |
|
"first": "Arul", |
|
"middle": [], |
|
"last": "Menezes", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Microsoft Research, One Microsoft Way, Redmond",
|
"location": { |
|
"postCode": "WA 98052"
|
} |
|
}, |
|
"email": "arulm@microsoft.com" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Microsoft Research, One Microsoft Way, Redmond",
|
"location": { |
|
"postCode": "WA 98052"
|
} |
|
}, |
|
"email": "chrisq@microsoft.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The Microsoft Research translation system is a syntactically informed phrasal SMT system that uses a phrase translation model based on dependency treelets and a global reordering model based on the source dependency tree. These models are combined with several other knowledge sources in a log-linear manner. The weights of the individual components in the log-linear model are set by an automatic parameter-tuning method. We give a brief overview of the components of the system and discuss its performance at IWSLT in two tracks: Japanese to English (supplied data and tools), and English to Chinese (supplied data and tools).",
|
"pdf_parse": { |
|
"paper_id": "2005", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The Microsoft Research translation system is a syntactically informed phrasal SMT system that uses a phrase translation model based on dependency treelets and a global reordering model based on the source dependency tree. These models are combined with several other knowledge sources in a log-linear manner. The weights of the individual components in the log-linear model are set by an automatic parameter-tuning method. We give a brief overview of the components of the system and discuss its performance at IWSLT in two tracks: Japanese to English (supplied data and tools), and English to Chinese (supplied data and tools).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The dependency treelet translation system developed at MSR is a statistical MT system that takes advantage of linguistic tools, namely a source language dependency parser, as well as a word alignment component. [1] To train a translation system, we require a sentence-aligned parallel corpus. First the source side is parsed to obtain dependency trees. Next the corpus is word-aligned, and the source dependencies are projected onto the target sentences using the word alignments. From the aligned dependency corpus we extract all treelet translation pairs, and train an order model and a bi-lexical dependency model.",
|
"cite_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 239, |
|
"text": "[1]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction",
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To translate, we parse the input sentence, and employ a decoder to find a combination and ordering of treelet translation pairs that cover the source tree and are optimal according to a set of models. In a now-common generalization of the classic noisy-channel framework, we use a log-linear combination of models [2] , as below:",
|
"cite_spans": [ |
|
{ |
|
"start": 337, |
|
"end": 340, |
|
"text": "[2]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction",
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "translation(S, F, \\Lambda) = \\arg\\max_{T} \\left\\{ \\sum_{f \\in F} \\lambda_f f(S, T) \\right\\}",

"eq_num": "(3)"
|
} |
|
], |
|
"section": "Introduction",
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Such an approach toward translation scoring has proven very effective in practice, as it allows a translation system to incorporate information from a variety of probabilistic or non-probabilistic sources. The weights \u039b = {\u03bb_f} are selected by discriminatively training against held-out data.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction",
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A brief word on notation: s and t represent source and target lexical nodes; S and T represent source and target trees; s and t represent source and target treelets (connected subgraphs of the dependency tree). Where the intent is clear, we will disregard the structure of these elements and consider these structures to be sets of lexical items: the expression \u2200t\u2208T refers to all the lexical items in the target language tree T. Similarly, |T| refers to the count of lexical items in T. We use subscripts to indicate selected words: T_n represents the nth lexical item in an in-order traversal of T. We employ several channel models: a direct maximum likelihood estimate of the probability of target given source, as well as an estimate of source given target and target given source using the word-based IBM Model 1 [6] . For MLE, we use absolute discounting to smooth the probabilities:",
|
"cite_spans": [ |
|
{ |
|
"start": 887, |
|
"end": 891, |
|
"text": "[ 6]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Details",
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P_{MLE}(\\mathbf{t}|\\mathbf{s}) = \\frac{c(\\mathbf{s}, \\mathbf{t}) - \\lambda}{c(\\mathbf{s}, *)}",
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "System Details",
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Here, c represents the count of instances of the treelet pair \u2329s, t\u232a in the training corpus, and \u03bb is determined empirically. For Model 1 probabilities we compute the sum over all possible alignments of the treelet without normalizing for length. The calculation of source given target is presented below; target given source is calculated symmetrically.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Details",
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P_{M1}(\\mathbf{t}|\\mathbf{s}) = \\prod_{t \\in \\mathbf{t}} \\sum_{s \\in \\mathbf{s}} P(t|s)",

"eq_num": "(5)"
|
} |
|
], |
|
"section": "Target language models",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "P_{surf}(T) = \\prod_{i=1}^{|T|} P_{trisurf}(T_i | T_{i-2}, T_{i-1}), \\quad P_{dep}(T) = \\prod_{i=1}^{|T|} P_{bidep}(T_i | parent(T_i)) (6)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Target language models",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "P_trisurf is a Kneser-Ney smoothed trigram language model trained on the target side of the training corpus, and P_bilex is a Kneser-Ney smoothed bigram language model trained on target language dependencies extracted from the aligned parallel dependency tree corpus.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Target language models",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The order model attempts to assign a probability to the position (pos) of each target node relative to its head, based on information in both the source and target trees:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P(order(T) | S, T) = \\prod_{t \\in T} P(pos(t, parent(t)) | S, T)",
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "Here, position is modeled in terms of closeness to the head in the dependency tree. The closest pre-modifier of a given head has position -1; the closest post-modifier has position 1. Figure 1 shows an example dependency tree pair annotated with head-relative positions. We use a small set of features reflecting local information in the dependency tree to model P(pos(t, parent(t)) | S, T):",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "\u2022 Lexical items of t and parent(t).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "\u2022 Lexical items of the source nodes aligned to t and head(t).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "\u2022 Part-of-speech (\"cat\") of the source nodes aligned to the head and modifier.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "\u2022 Head-relative position of the source node aligned to the source modifier. These features, along with the target feature, are gathered from the word-aligned parallel dependency tree corpus and used to train a decision tree.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "[9]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "2.3.4. Other models. In addition to these basic models, we also incorporate a variety of other information about the translation process.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "\u2022 Treelet count. This feature is a count of the treelets used to construct the candidate. It acts as a bias toward translations that use a smaller number of treelets, hence toward larger treelets incorporating more context.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "\u2022 Word count. We also include a count of the words in the target sentence. This feature helps to offset the bias of the target language model toward shorter sentences.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "\u2022 Whole-sentence Model 1 scores. We provide the system with both the probability of the whole source sentence given the whole target sentence and vice versa, as described in [ 10 ] :",
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 204, |
|
"text": "[ 10 ]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "P(S|T) = \\frac{\\epsilon}{(|T| + 1)^{|S|}} \\prod_{s \\in S} \\sum_{t \\in T} P(s|t) \\quad (8) \u2022 Deletion penalty. As in [11], we",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "D(S, T) = \\{ s \\in S \\mid \\forall t \\in T \\, . \\, P(t|s) < d \\}",
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "\u2022 Insertion penalty. An approximation of the number of insertions can be counted in the same manner:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "I(S, T) = \\{ t \\in T \\mid \\forall s \\in S \\, . \\, P(s|t) < i \\}",

"eq_num": "(10)"
|
} |
|
], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
|
{ |
|
"text": "Figure 1: Aligned dependency tree pair, annotated with head-relative positions",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "broken Japanese text. We also found it advantageous to remove (after parsing) certain lexical items, such as the topic marker (\"\u1b22\"), the politeness prefix \"\u14ee\", and the modal expression \"\u1b21\u1b1a\u1b0c\". And finally we normalize the sequence of a question particle and a period (\" \") into a single question mark character (\"?\").",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Order model",
|
"sec_num": "2.3.3." |
|
}, |
|
{ |
|
"text": "Japanese is commonly written in a combination of three distinct writing systems: kanji (ideographic), hiragana (syllabic), and katakana (syllabic). In most situations, there is a canonical spelling of a lexical item; however, certain items admit multiple spellings (e.g. \u1b02\u1b13\u1b08/\u0a05\u1b08 or \u1b42\u1b3d\u1b31\u1b0e\u1b46/\u0cbd\u1b3d\u1b31\u1b0e\u1b46). Such ambiguity exacerbates the data sparsity problem already evident in the small training corpus. In addition, the distribution of hiragana to kanji spellings in the final testing data is noticeably different from that of the training and development sets. We establish a baseline and propose two corrections for this situation. Baseline: No normalization. In this version of the system, no character set normalization is performed.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Japanese character set normalization",
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Method 1: Hiragana normalization. After parsing the input, we look up the dictionary headword of each node in the tree and, if present in the dictionary, replace that lexical item with its lemmatized hiragana form. The mapping is lossy in that distinct words with different kanji representations but identical hiragana representations are conflated. Also, morphological endings on words are lost. Method 2: Learned normalization. We observed that between the old (unbroken) and new (word-broken) distributions of the training data, a significant number of character set changes had been made to the data, in addition to the word-breaking. We used these differences to automatically acquire a character-set normalization table that was then applied to test and training data.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Japanese character set normalization",
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "This corpus contains a large number of repetitive phrases; to a large extent, this comes with the domain. Basic travel expressions are commonly short, simple phrases like \"all right\" or \"thanks\". To minimize errors on such stock phrases, we introduce a verbatim component, a type of translation memory. If the input sentence matches the source segment of a training pair, we return the target side of the training pair as the translation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verbatim translations",
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "2.6. Parameter training and tuning. There are several types of tunable parameters. First we have \u039b, a 12-dimensional weight vector: one real-valued weight for each feature function. This weight vector is determined by maximization of the BLEU score using n-best lists [12] . However, there are a variety of other parameters that cannot be tuned via n-best lists; instead these are optimized by grid search on BLEU score.",
|
"cite_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 315, |
|
"text": "[12]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verbatim translations",
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Maximum treelet size: keep treelet translation pairs of up to s nodes (for both JE and EC, the optimal value was 9).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verbatim translations",
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Treelet translation pair pruning cutoff: explore only the top k treelet translation pairs per input node (JE: 9; EC: 6).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verbatim translations",
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Decoder beam width: keep only the n best translated subtrees per input subtree (JE: 15; EC: 12).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verbatim translations",
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Exhaustive ordering threshold: fall back to greedy ordering when the count of children whose order is unspecified exceeds this limit; see [1] for details (JE: 7; EC: 6).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verbatim translations",
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 MLE channel model discount (JE and EC: 0.7).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verbatim translations",
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Deletion and insertion penalty cutoffs (JE and EC: 0.1).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verbatim translations",
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Default NULL translation probability (JE: 0; EC: 0.1).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verbatim translations",
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "We present results in two tracks: Japanese to English (supplied data and tools), and English to Chinese (supplied data and tools). Table 1 summarizes the results in these categories. Several trends are worthy of note. First, it seems that kanji/hiragana normalization is an important component of Japanese to English translation. Unfortunately, the more promising of the two methods on the development sets turned out to be less effective on the final test set.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion",
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Secondly, we find surprisingly mixed results from using verbatim translations. While their addition has a small but respectable impact on Japanese to English translation (which uses 16 reference translations), the impact is strongly negative on English to Chinese translation (which uses only 1 reference). Most likely this is because, even though the translations obtained in this manner are almost certainly good translations, they may not match the single reference. Finally, we note the tradeoff between optimal BLEU scores and optimal NIST scores in Japanese to English translation. While the NIST score has a harsh brevity penalty, BLEU is much more tolerant of very short translations; hence some systems may produce misleadingly large BLEU scores by producing very short translations. Considering both metrics simultaneously helps to identify when this situation is occurring. In this evaluation we observe this to be a major issue across all the participants. For example, in the Japanese to English Supplied Data track, ordering systems by NIST score vs. BLEU score produces a significantly different ranking. The system that ranks first by BLEU drops to fifth when ranked by NIST, whereas the systems ranking first and second by NIST drop to third and sixth place by BLEU.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion",
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "We would like to thank Hisami Suzuki and Chris Brockett for suggestions and improvements to the Japanese analysis components; Kevin Duh for analysis of Chinese output; Robert C. Moore for suggestions on smoothing; and the IWSLT organizers, especially Chiori Hori and Matthias Eck, for their feedback and assistance throughout the evaluation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements",
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "[1] Quirk, C., Menezes, A., and Cherry, C., \"Dependency Tree Translation: Syntactically Informed Phrasal SMT\", Proceedings of ACL 2005, Ann Arbor, MI, USA, 2005.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[2] Och, F. J., and Ney, H., \"Discriminative Training and Maximum Entropy Models for Statistical Machine Translation\", Proceedings of ACL 2002, Philadelphia, PA, USA, 2002.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[3] Heidorn, G., \"Intelligent writing assistance\", in Dale et al., Handbook of Natural Language Processing, Marcel Dekker, 2000.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[4] Och, F. J., and Ney, H., \"A Systematic Comparison of Various Statistical Alignment Models\", Computational Linguistics, 29(1): 19-51, March 2003.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[5] Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J., \"BLEU: a method for automatic evaluation of machine translation\", Proceedings of ACL 2002, Philadelphia, PA, USA, 2002.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[6] Brown, P. F., Della Pietra, S., Della Pietra, V. J., and Mercer, R. L., \"The Mathematics of Statistical Machine Translation: Parameter Estimation\", Computational Linguistics, 19(2): 263-311, 1993.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[7] Aue, A., Menezes, A., Moore, R., Quirk, C., and Ringger, E., \"Statistical Machine Translation Using Labeled Semantic Dependency Graphs\", Proceedings of TMI 2004, Baltimore, MD, USA, 2004.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[8] Collins, M., \"Three generative, lexicalised models for statistical parsing\", Proceedings of ACL 1997, Madrid, Spain, 1997.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[9] Chickering, D. M., \"The WinMine Toolkit\", Microsoft Research Technical Report MSR-TR-2002-103, Redmond, WA, USA, 2002.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[10] Och, F. J., Gildea, D., Khudanpur, S., Sarkar, A., Yamada, K., Fraser, A., Kumar, S., Shen, L., Smith, D., Eng, K., Jain, V., Jin, Z., and Radev, D., \"A Smorgasbord of Features for Statistical Machine Translation\", Proceedings of HLT/NAACL 2004, Boston, MA, USA, 2004.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[11] Bender, O., Zens, R., Matusov, E., and Ney, H., \"Alignment Templates: the RWTH SMT System\", IWSLT Workshop at INTERSPEECH 2004, Jeju Island, Korea, 2004.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "[12] Och, F. J., \"Minimum Error Rate Training for Statistical Machine Translation\", Proceedings of ACL 2003, Sapporo, Japan, 2003.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "References",
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Table 1: Summary of results (BLEU / NIST) under a variety of configurations. Final submission results are shown in boldface; best results in each category are also indicated. Japanese to English (best NIST parameters): Baseline 48.8/8.7, 48.4/8.4, 39.3/7.4, 39.6/7.4; Learned normalization 49.6/9.1, 48.9/9.1, 40.3/8.2, 41.1/8.2; Hiragana normalization 49.7/9.2, 49.3/9.2, 40.1/8.0, 40.6/8.0. Japanese to English (best BLEU parameters): Baseline 49.5/7.1, 48.1/7.3, 39.6/6.7, 40.1/6.8; Learned normalization 49.0/8.6, 49.2/8.6, 42.1/7.7, 42.9/7.8; Hiragana normalization 51.0/8.8, 50.3/8.5, 40.8/7.4, 41.2/7.4. English to Chinese: 14.6/3.8, 17.3/4.4, 22.0/6.5.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": {}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "approximate the number of deleted words using Model 1 probabilities with the following formula, where d is an empirically determined threshold:"
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "2.4. Small corpus optimizations. 2.4.1. Model 1 fallback translations. Due to the small size of the training corpus, we found that misalignments often prevented us from finding treelet translation pairs for rare tokens, or would lead to very poor translations. All too often, too much source or target context would be aligned with the rare word, thus leading to either untranslated source words in the output, or spurious additional words in the translation. Absolute discounting of the MLE channel model helps to solve the latter issue, but exacerbates the former. To work around this situation, at runtime we construct single-word treelet translation pairs for each input node from the top few entries in the Model 1 translation table. 2.4.2. NULL translations. We also create treelets that allow words to be translated as the empty token, i.e. to be deleted. These NULL translations are assigned a default MLE probability. 2.5. Corpus specific issues. 2.5.1. Data cleanup. We observed in many cases that individual training pairs consisted of several concatenated sentences in one or both languages. Since our system parses the source language, we prefer to process sentences individually; hence we broke these multi-sentence utterances into individual sentences and used simple positional heuristics to re-align the sentences. Japanese to English: Our Japanese parser prefers to do its own word-breaking, so we removed all spaces from the word-"
|
} |
|
} |
|
} |
|
} |