|
{ |
|
"paper_id": "I17-1013", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:38:09.890824Z" |
|
}, |
|
"title": "An Exploration of Neural Sequence-to-Sequence Architectures for Automatic Post-Editing", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Adam Mickiewicz University", |
|
"location": { |
|
"settlement": "Pozna\u0144" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Grundkiewicz", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Edinburgh", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this work, we explore multiple neural architectures adapted for the task of automatic post-editing of machine translation output. We focus on neural endto-end models that combine both inputs mt (raw MT output) and src (source language input) in a single neural architecture, modeling {mt, src} \u2192 pe directly. Apart from that, we investigate the influence of hard-attention models which seem to be well-suited for monolingual tasks, as well as combinations of both ideas. We report results on data sets provided during the WMT-2016 shared task on automatic post-editing and can demonstrate that dual-attention models that incorporate all available data in the APE scenario in a single model improve on the best shared task system and on all other published results after the shared task. Dual-attention models that are combined with hard attention remain competitive despite applying fewer changes to the input.", |
|
"pdf_parse": { |
|
"paper_id": "I17-1013", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this work, we explore multiple neural architectures adapted for the task of automatic post-editing of machine translation output. We focus on neural endto-end models that combine both inputs mt (raw MT output) and src (source language input) in a single neural architecture, modeling {mt, src} \u2192 pe directly. Apart from that, we investigate the influence of hard-attention models which seem to be well-suited for monolingual tasks, as well as combinations of both ideas. We report results on data sets provided during the WMT-2016 shared task on automatic post-editing and can demonstrate that dual-attention models that incorporate all available data in the APE scenario in a single model improve on the best shared task system and on all other published results after the shared task. Dual-attention models that are combined with hard attention remain competitive despite applying fewer changes to the input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Given the raw output of a (possibly unknown) machine translation system from language src to language mt, Automatic Post-Editing (APE) is the process of automatic correction of raw MT output (mt), so that a closer resemblance to human postedited MT output (pe) is achieved. While APE systems that only model mt \u2192 pe yield good results, the field has always strived towards methods that also integrate src in various forms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With neural encoder-decoder models, and multi-source models in particular, this can be now achieved in more natural ways than for previously popular phrase-based statistical machine transla-tion (PB-SMT) systems. Despite this, previously reported results for multi-source or dual-source models in APE scenarios are unsatisfying in terms of performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we explore a number of singlesource and dual-source neural architectures which we believe to be better fits to the APE task than vanilla encoder-decoder models with soft attention. We focus on neural end-to-end models that combine both inputs mt and src in a single neural architecture, modeling {mt, src} \u2192 pe directly. Apart from that, we investigate the influence of hard-attention models, which seem to be well-suited for monolingual tasks. Finally, we create combinations of both architectures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We report results on data sets provided during the WMT-2016 shared task on automatic postediting (Bojar et al., 2016) and compare our performance against the shared task winner, the system submitted by the Adam Mickiewicz University (AMU) team (Junczys-Dowmunt and Grundkiewicz, 2016) , and a more recent system by Pal et al. (2017) with the previously best published results on the same test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 117, |
|
"text": "(Bojar et al., 2016)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 244, |
|
"end": 284, |
|
"text": "(Junczys-Dowmunt and Grundkiewicz, 2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 332, |
|
"text": "Pal et al. (2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our main contributions are: (1) we perform a thorough comparison of end-to-end neural approaches to APE during which (2) we demonstrate that dual-attention models that incorporate all available data in the APE scenario in a single model achieve the best reported results for the WMT-2016 APE task, and (3) show that models with a hard-attention mechanism reach competitive results although they execute fewer edits than models relying only on soft attention.", |
|
"cite_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 305, |
|
"text": "WMT-2016 APE task, and (3)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of the paper is organized as follows: Previous relevant work is described in Section 2. Section 3 summarizes the basic encoderdecoder with attention architecture that is further extended with multiple non-standard attention mechanisms in Section 4. These attention mecha-nisms are: hard-attention in Section 4.1, a combination of hard attention and soft attention in Section 4.2, dual soft attention in Section 4.3 and a combination of hard attention and dual soft attention in Section 4.4. We describe experiments and results in Section 5 and conclude in Section 7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Before the application of neural sequence-tosequence models to APE, most APE systems would rely on phrase-based SMT following a monolingual approach first introduced by Simard et al. (2007) . B\u00e9chara et al. (2011) proposed a \"source-context aware\" variant of this approach where automatically created word alignments were used to create a new source language which consisted of joined MT output and source token pairs. The inclusion of source-language information in that form was shown to improve the automatic post-editing results (B\u00e9chara et al., 2012; Chatterjee et al., 2015) . The quality of the used word alignments plays an important role for this methods, as demonstrated for instance by Pal et al. (2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 189, |
|
"text": "Simard et al. (2007)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 213, |
|
"text": "B\u00e9chara et al. (2011)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 533, |
|
"end": 555, |
|
"text": "(B\u00e9chara et al., 2012;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 556, |
|
"end": 580, |
|
"text": "Chatterjee et al., 2015)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 697, |
|
"end": 714, |
|
"text": "Pal et al. (2015)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "During the WMT-2016 APE shared task two systems relied on neural models, the CUNI system (Libovick\u00fd et al., 2016) and the shared task winner, the system submitted by the AMU team (Junczys-Dowmunt and Grundkiewicz, 2016) . This submission explored the application of neural translation models to the APE problem and achieved good results by treating different models as components in a log-linear model, allowing for multiple inputs (the source src and the translated sentence mt) that were decoded to the same target language (post-edited translation pe). Two systems were considered, one using src as the input (src \u2192 pe) and another using mt as the input (mt \u2192 pe). A simple string-matching penalty integrated within the log-linear model was used to control for higher faithfulness with regard to the raw MT output. The penalty fired if the APE system proposed a word in its output that had not been seen in mt. The influence of the components on the final result was tuned with Minimum Error Rate Training (Och, 2003 ) with regard to the task metric TER.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 113, |
|
"text": "(Libovick\u00fd et al., 2016)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 219, |
|
"text": "(Junczys-Dowmunt and Grundkiewicz, 2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1009, |
|
"end": 1019, |
|
"text": "(Och, 2003", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Following the WMT-2016 APE shared task, Pal et al. (2017) published work on another neural APE system that integrated precomputed wordalignment features into the neural structure and en-forced symmetric attention during the neural training process. The result was the best reported single neural model for the WMT-2016 APE test set prior to this work. With n-best list re-ranking and combination with phrase-based post-editing systems, the authors improved their results even further. None of their systems, however, integrated information from src, all modeled mt \u2192 pe.", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 57, |
|
"text": "Pal et al. (2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Implementations of all models explored in this paper are available in the Marian 1 toolkit . The attentional encoderdecoder model in Marian is a re-implementation of the NMT model in Nematus (Sennrich et al., 2017) . The model differs from the standard model introduced by Bahdanau et al. (2015) by several aspects, the most important being the conditional GRU with attention. The summary provided in this section is based on the description in Sennrich et al. (2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 214, |
|
"text": "(Sennrich et al., 2017)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 295, |
|
"text": "Bahdanau et al. (2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 467, |
|
"text": "Sennrich et al. (2017)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentional Encoder-Decoder", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Given the raw MT output sequence (x 1 , . . . , x Tx ) of length T x and its manually post-edited equivalent (y 1 , . . . , y Ty ) of length T y , we construct the encoder-decoder model using the following formulations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentional Encoder-Decoder", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Encoder context A single forward encoder state \u2212 \u2192 h i is calculated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentional Encoder-Decoder", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2212 \u2192 h i = GRU( \u2212 \u2192 h i\u22121 , F[x i ]),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentional Encoder-Decoder", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where F is the encoder embeddings matrix. The GRU RNN cell (Cho et al., 2014 ) is defined as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 76, |
|
"text": "(Cho et al., 2014", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentional Encoder-Decoder", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "GRU (s, x) =(1 \u2212 z) s + z s,", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Attentional Encoder-Decoder", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "s = tanh (Wx + r Us) , r = \u03c3 (W r x + U r s) , z = \u03c3 (W z x + U z s) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentional Encoder-Decoder", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where x is the cell input; s is the previous recurrent state; W, U, W r , U r , W z , U z are trained model parameters 2 ; \u03c3 is the logistic sigmoid activation function. The backward encoder state is calculated analogously over a reversed input sequence with its own set of trained parameters. Let h i be the annotation of the source symbol at position i, obtained by concatenating the forward and backward encoder RNN hidden states,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentional Encoder-Decoder", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "h i = [ \u2212 \u2192 h i ; \u2190 \u2212 h i ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentional Encoder-Decoder", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": ", the set of encoder states C = {h 1 , . . . , h Tx } then forms the encoder context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentional Encoder-Decoder", |
|
"sec_num": "3" |
|
}, |
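
{

"text": "The following is a minimal NumPy sketch of the GRU recurrence and the bidirectional encoder context described above. It is an illustration only, not the Marian implementation; the parameter dictionary, state size and embedding size are assumptions made for the example.\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef gru(s, x, p):\n    # p holds the trained parameters W, U, W_r, U_r, W_z, U_z (biases omitted, as in the paper)\n    r = sigmoid(p['W_r'] @ x + p['U_r'] @ s)\n    z = sigmoid(p['W_z'] @ x + p['U_z'] @ s)\n    s_bar = np.tanh(p['W'] @ x + r * (p['U'] @ s))\n    return (1.0 - z) * s + z * s_bar\n\ndef encode(embeddings, p_fwd, p_bwd, state_size):\n    # bidirectional encoder: concatenate forward and backward states per position\n    fwd, bwd, s = [], [], np.zeros(state_size)\n    for x in embeddings:                  # left to right\n        s = gru(s, x, p_fwd)\n        fwd.append(s)\n    s = np.zeros(state_size)\n    for x in reversed(embeddings):        # right to left\n        s = gru(s, x, p_bwd)\n        bwd.append(s)\n    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd[::-1])]  # C = {h_1, ..., h_Tx}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Attentional Encoder-Decoder",

"sec_num": "3"

},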
|
{ |
|
"text": "The decoder is initialized with start state s 0 , computed as the average over all encoder states:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "s 0 = tanh W init Tx i=1 h i T x .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Conditional GRU with attention We follow the Nematus implementation of the conditional GRU with attention, cGRU att :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s j = cGRU att (s j\u22121 , E[y j\u22121 ], C) ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where s j is the newly computed hidden state, s j\u22121 is the previous hidden state, C the source context and E[y j\u22121 ] is the embedding of the previously decoded symbol y i\u22121 . The conditional GRU cell with attention, cGRU att , has a complex internal structure, consisting of three parts: two GRU layers and an intermediate attention mechanism ATT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Layer GRU 1 generates an intermediate representation s j from the previous hidden state s j\u22121 and the embedding of the previous decoded symbol E[y j\u22121 ]:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "s j = GRU 1 (s j\u22121 , E[y j\u22121 ]) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The attention mechanism, ATT, inputs the entire context set C along with intermediate hidden state s j in order to compute the context vector c j as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "c j =ATT C, s j = Tx i \u03b1 ij h i , \u03b1 ij = exp(e ij ) Tx k=1 exp(e kj ) , e ij =v a tanh U a s j + W a h i ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where \u03b1 ij is the normalized alignment weight between source symbol at position i and target symbol at position j, and v a , U a , W a are trained model parameters. Layer GRU 2 generates s j , the hidden state of the cGRU att , from the intermediate representation s j and context vector c j :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "s j = GRU 2 s j , c j .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
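
{

"text": "As a minimal illustration of the soft attention step ATT and the two-GRU structure of the conditional GRU, the NumPy sketch below computes the alignment weights and the context vector; v_a, U_a, W_a are assumed to be trained parameters, and gru() refers to the helper from the encoder sketch above. This is a sketch of the equations, not the Marian implementation.\n\nimport numpy as np\n\ndef att(C, s_prime, v_a, U_a, W_a):\n    # C: list of encoder states h_i; s_prime: intermediate decoder state s'_j\n    e = np.array([v_a @ np.tanh(U_a @ s_prime + W_a @ h_i) for h_i in C])\n    alpha = np.exp(e - e.max())           # softmax over alignment scores (max-shifted for stability)\n    alpha = alpha / alpha.sum()\n    return sum(a * h_i for a, h_i in zip(alpha, C))   # context vector c_j\n\n# One decoder step of the conditional GRU with attention:\n#   s_prime = gru(s_prev, E_y_prev, p_gru1)\n#   c_j     = att(C, s_prime, v_a, U_a, W_a)\n#   s_j     = gru(s_prime, c_j, p_gru2)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoder initialization",

"sec_num": null

},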
|
{ |
|
"text": "Deep output Finally, given s j , y j\u22121 , and c j , the output probability p(y j |s j , y j\u22121 , c j ) is computed by a softmax activation as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "p(y j |s j ,y j\u22121 , c j ) = softmax (t j W o ) , t j = tanh (s j W t 1 + E[y j\u22121 ]W t 2 + c j W t 3 ) . W t 1 , W t 2 , W t 3 , W o are the trained model pa- rameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This rather standard encoder-decoder model with attention is our baseline and denoted as CGRU.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder initialization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The following models reuse most parts of the architecture described above wherever possible, most differences occur in the decoder RNN cell and the attention mechanism. The encoders are identical, so are the deep output layers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoder-Decoder Models with APE-specific Attention Models", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Aharoni and Goldberg (2016) introduce a simple model for monolingual morphological reinflection with hard monotonic attention. This model looks at one encoder state at a time, starting with the left-most encoder state and progressing to the right until all encoder states have been processed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The target word vocabulary V y is extended with a special step symbol (V y = V y \u222a { STEP }) and whenever STEP is predicted as the output symbol, the hard attention is moved to the next encoder state. Formally, the hard attention mechanism is represented as a precomputed monotonic sequence (a 1 , . . . , a Ty ) which can be inferred from the target sequence (y 1 , . . . , y Ty ) (containing original target symbols and T x step symbols) as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "a 1 = 1, a j = a j\u22121 + 1 if y j\u22121 = STEP a j\u22121 otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
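
{

"text": "For clarity, the recurrence above can be written out directly; below is a small plain-Python sketch in which STEP is represented by the placeholder string '<step>' (an assumption for illustration).\n\ndef hard_attention_indices(target):\n    # target: output symbols y_1..y_Ty, including '<step>' tokens\n    a = [1]                                   # a_1 = 1 (1-based encoder positions)\n    for y_prev in target[:-1]:\n        a.append(a[-1] + 1 if y_prev == '<step>' else a[-1])\n    return a\n\nprint(hard_attention_indices(['ein', '<step>', '<step>', 'Wort']))  # [1, 1, 2, 3]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hard Monotonic Attention",

"sec_num": "4.1"

},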
|
{ |
|
"text": "For a given context C = {h 1 , . . . , h Tx }, the attended context vector at time step j is simply h a j . Following the description by Aharoni and Goldberg (2016) for their LSTM-based model, we adapt the previously described encoder-decoder model to incorporate hard attention. Given the sequence of attention indices (a 1 , . . . , a Ty ), the conditional GRU cell (Eq. 2) used for hidden state updates of the decoder is replaced with a simple GRU cell (Eq. 1) (thus removing the soft-attention mechanism):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s j = GRU s j\u22121 , E[y j\u22121 ]; h a j ,", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where the cell input is now a concatenation of the embedding of the previous target symbol E[y j\u22121 ] and the currently attended encoder state h a j . This model is labeled GRU-HARD. We find this architecture compelling for monolingual tasks that might require higher faithfulness with regard to the input. With hard monotonic attention, the translation algorithm can enforce certain constraints:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1. The end-of-sentence symbol can only be generated if the hard attention mechanism has reached the end of the input sequence, enforcing full coverage;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "2. The STEP symbol cannot be generated once the end-of-sentence position in the source has been reached. It is however still possible to generate content tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
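
{

"text": "A minimal sketch of how these two constraints can be imposed during decoding by masking logits; the symbol ids and the surrounding decoder loop are hypothetical placeholders, not the Marian implementation.\n\nimport numpy as np\n\ndef apply_hard_attention_constraints(logits, a_j, T_x, eos_id, step_id):\n    # logits: unnormalized scores over the extended vocabulary for the current step\n    constrained = logits.copy()\n    if a_j < T_x:\n        constrained[eos_id] = -np.inf     # 1. no end-of-sentence before full coverage\n    else:\n        constrained[step_id] = -np.inf    # 2. no further STEP once the source end is reached\n    return constrained",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hard Monotonic Attention",

"sec_num": "4.1"

},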
|
{ |
|
"text": "This model requires a target sequence with correctly inserted STEP symbols. For the described APE task, using the Longest Common Subsequence algorithm (Hirschberg, 1977) , we first generate a sequence of match, delete and insert operations which transform the raw MT out-", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 169, |
|
"text": "(Hirschberg, 1977)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "put (x 1 , \u2022 \u2022 \u2022 x Tx ) into the corrected post-edited se- quence (y 1 , \u2022 \u2022 \u2022 y Ty ) 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Next, we map these operations to the final sequence of steps and target tokens according to the following rules:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 For each matched pair of tokens x, y we produce symbols: STEP y;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 For each inserted target token y we produce the same token y;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 For each deleted source token x we produce STEP ;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Since at initialization of the model a 1 = 1, i.e. the first encoder state is already attended to, we discard the first symbol in the new sequence if it is a STEP symbol.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Monotonic Attention", |
|
"sec_num": "4.1" |
|
}, |
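
{

"text": "The sketch below illustrates the conversion described above. It uses Python's difflib as a stand-in for the LCS computation (the paper uses Hirschberg's algorithm, similar to GNU wdiff), and '<step>' stands for the STEP symbol; it is an illustration, not the exact preprocessing used in the experiments.\n\nfrom difflib import SequenceMatcher\n\ndef make_step_target(mt_tokens, pe_tokens, step='<step>'):\n    out = []\n    sm = SequenceMatcher(a=mt_tokens, b=pe_tokens, autojunk=False)\n    for op, i1, i2, j1, j2 in sm.get_opcodes():\n        if op == 'equal':                      # matched pair x, y -> STEP y\n            for y in pe_tokens[j1:j2]:\n                out.extend([step, y])\n        elif op == 'delete':                   # deleted source token x -> STEP\n            out.extend([step] * (i2 - i1))\n        elif op == 'insert':                   # inserted target token y -> y\n            out.extend(pe_tokens[j1:j2])\n        else:                                  # 'replace' = deletions followed by insertions\n            out.extend([step] * (i2 - i1))\n            out.extend(pe_tokens[j1:j2])\n    if out and out[0] == step:                 # a_1 = 1: the first position is already attended\n        out = out[1:]\n    return out\n\nprint(make_step_target('das ist ein Test'.split(), 'dies ist ein guter Test'.split()))\n# ['dies', '<step>', 'ist', '<step>', 'ein', 'guter', '<step>', 'Test']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hard Monotonic Attention",

"sec_num": "4.1"

},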
|
{ |
|
"text": "While the hard attention model can be used to enforce faithfulness to the original input, we would also like the model to be able to look at information anywhere in the source sequence which is a property of the soft attention model. By re-introducing the conditional GRU cell with soft attention into the GRU-HARD model while also inputting the hard-attended encoder state h a j , we can try to take advantage of both attention mechanisms. Combining Eq. 2 and Eq. 3, we get:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard and Soft Attention", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "s j = cGRU att s j\u22121 , E[y j\u22121 ]; h a j , C . (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard and Soft Attention", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The rest of the model is unchanged; the translation process is the same as before and we use the same target step/token sequence for training. This model is called CGRU-HARD.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard and Soft Attention", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Neural multi-source models (Zoph and Knight, 2016) seem to be a natural fit for the APE task as raw MT output and original source language input are available. Although applications to the APE problem have been reported (Libovick\u00fd and Helcl, 2017) , state-of-the-art results seem to be missing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 50, |
|
"text": "(Zoph and Knight, 2016)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 247, |
|
"text": "(Libovick\u00fd and Helcl, 2017)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft Dual-Attention", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In this section we give details about our dualsource model implementation. We rename the existing encoder C to C mt to signal that the first encoder consumes the raw MT output and introduce a structurally identical second encoder C src = {h src 1 , . . . , h src Tsrc } over the source language. To compute the decoder start state s 0 for the multiencoder model we concatenate the averaged encoder contexts before mapping them into the decoder state space:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft Dual-Attention", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "s 0 = tanh W init Tmt i=1 h mt i T mt ; Tsrc i=1 h src i T src .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft Dual-Attention", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In the decoder, we replace the conditional GRU with attention, with a doubly-attentive cGRU cell (Calixto et al., 2017) over contexts C mt and C src :", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 119, |
|
"text": "(Calixto et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft Dual-Attention", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "s j = cGRU 2-att s j\u22121 , E[y j\u22121 ], C mt , C src . (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft Dual-Attention", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The procedure is similar to the original cGRU, differing only in that in order to compute the context vector c j , we first calculate contexts vectors c mt j and c src j for each context and then concatenate 4 the results:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft Dual-Attention", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "s j =GRU 1 (s j\u22121 , E[y j\u22121 ]) , c mt j =ATT C mt , s j = Tmt i \u03b1 ij h mt i , c src j =ATT C src , s j = Tsrc i \u03b1 ij h src i , c j = c mt j ; c src j , s j =GRU 2 s j , c j .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft Dual-Attention", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "This could be easily extended to an arbitrary number of encoders with different architectures. During training, this model is fed with a triparallel corpus, and during translation both input sequences are processed simultaneously to produce the corrected output. This model is denoted as M-CGRU.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Soft Dual-Attention", |
|
"sec_num": "4.3" |
|
}, |
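
{

"text": "A minimal sketch of one decoder step of the doubly-attentive cGRU, reusing gru() and att() helpers like the ones sketched in Section 3 and assuming separate trained attention parameters per encoder; the names are illustrative and not the Marian API.\n\nimport numpy as np\n\ndef dual_attention_step(s_prev, E_y_prev, C_mt, C_src, p):\n    # p bundles the parameters of GRU_1, GRU_2 and the two attention mechanisms\n    s_prime = gru(s_prev, E_y_prev, p['gru1'])\n    c_mt = att(C_mt, s_prime, *p['att_mt'])      # soft attention over the raw MT output\n    c_src = att(C_src, s_prime, *p['att_src'])   # soft attention over the source sentence\n    c_j = np.concatenate([c_mt, c_src])          # concatenated context vector\n    return gru(s_prime, c_j, p['gru2'])          # new decoder state s_j",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Soft Dual-Attention",

"sec_num": "4.3"

},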
|
{ |
|
"text": "Analogously to the procedure described in section 4.2, we can extend the doubly-attentive cGRU to take the hard-attended encoder context as additional input:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Attention with Soft Dual-Attention", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "s j = cGRU 2-att s j\u22121 , E[y j\u22121 ]; h mt a j , C mt , C src .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Attention with Soft Dual-Attention", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In this formulation, only the first encoder context C mt is attended to by the hard monotonic attention mechanism. The target training data consists of the step/token sequences used for all previous hard-attention models. We call this model M-CGRU-HARD.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hard Attention with Soft Dual-Attention", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We perform all our experiments 5 with the official WMT-2016 (Bojar et al., 2016) automatic postediting data and the respective development and test sets. The training data consists of a small set of 12,000 post-editing triplets (src, mt, pe), where src is the original English text, mt is the raw MT output generated by an English-to-German system, and pe is the human post-edited MT output. The MT system used to produce the raw MT output is unknown, so is the original training data. The task consists of automatically correcting the MT output so that it resembles human 5 All experiments in this sections can be reproduced following the instructions on https://marian-nmt. github.io/examples/exploration/. post-edited data. The main task metric is TER (Snover et al., 2006) -the lower the betterwith BLEU (Papineni et al., 2002) as a secondary metric.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 80, |
|
"text": "(Bojar et al., 2016)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
|
{ |
|
"start": 755, |
|
"end": 776, |
|
"text": "(Snover et al., 2006)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 808, |
|
"end": 831, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training, Development, and Test Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To overcome the problem of too little training data, Junczys-Dowmunt and Grundkiewicz (2016) -the authors of the best WMT-2016 APE shared task system -generated large amounts of artificial data via round-trip translations. The artificial data has been filtered to match the HTER statistics of the training and development data for the shared task and was made available for download 6 . Table 1 summarizes the data sets used in this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training, Development, and Test Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To produce our final training data set we oversample the original training data 20 times and add both artificial data sets. This results in a total of slightly more than 5M training triplets. We validate on the development set for early stopping and report results on the WMT-2016 test set. The data is already tokenized. Additionally we truecase all files and apply segmentation into BPE subword units (Sennrich et al., 2016) . We reuse the subword units distributed with the artificial data set. For the hard-attention models, we create target training and development files following the LCS-based procedure outlined in section 4.1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 403, |
|
"end": 426, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training, Development, and Test Data", |
|
"sec_num": "5.1" |
|
}, |
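
{

"text": "For concreteness, a tiny sketch of how the final training set is assembled (20-fold oversampling of the official triplets plus both artificial sets); the function and variable names are placeholders, not the actual preprocessing scripts.\n\ndef build_training_set(official, artificial_sets, factor=20):\n    # official and each element of artificial_sets: lists of (src, mt, pe) triplets\n    data = official * factor\n    for extra in artificial_sets:\n        data = data + extra\n    return data",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training, Development, and Test Data",

"sec_num": "5.1"

},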
|
{ |
|
"text": "All models are trained on the same training data. Models with single input encoders take only the raw MT output (mt) as input, dual-encoder models use raw MT output (mt) and the original source (pe). The training procedures and model settings are the same whenever possible: Table 3 : Results for models explored in this work. Models with \u00d7 4 are ensembles of four models. The main WMT 2016 APE shared task metric was TER (the lower the better).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 282, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training parameters", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 All embedding vectors consist of 512 units; the RNN states use 1024 units. We choose a vocabulary size of 40,000 for all inputs and outputs. When hard attention models are trained the maximum sentence length is 100 to accommodate the additional step symbols, otherwise 50.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training parameters", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 To avoid overfitting, we use pervasive dropout (Gal and Ghahramani, 2016) over GRU steps and input embeddings, with dropout probabilities 0.2, and over source and target words with probabilities 0.2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 75, |
|
"text": "(Gal and Ghahramani, 2016)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training parameters", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 We use Adam (Kingma and Ba, 2014) as our optimizer, with a mini-batch size of 64. All models are trained with Asynchronous SGD (Adam) on three to four GPUs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training parameters", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 We train all models until convergence (earlystopping with a patience of 10 based on development set cross-entropy cost), saving model checkpoints every 10,000 minibatches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training parameters", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For different models we observed early stopping to be triggered between 600,000 and 900,000 mini-batch updates or between 8 and 11 epochs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training parameters", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 The best eight model checkpoints w.r.t. development set cross-entropy of each training run are averaged element-wise resulting in new single models with generally improved performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training parameters", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 For the multi-source models we repeat the mentioned procedure four times with different randomly initialized weights.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training parameters", |
|
"sec_num": "5.2" |
|
}, |
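
{

"text": "Element-wise checkpoint averaging can be sketched as below, assuming each checkpoint is a dict mapping parameter names to NumPy arrays; the actual models are Marian checkpoints, so this illustrates the idea rather than the real tooling.\n\nimport numpy as np\n\ndef average_checkpoints(checkpoints):\n    # checkpoints: e.g. the eight best checkpoints by development set cross-entropy\n    return {name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0)\n            for name in checkpoints[0]}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training parameters",

"sec_num": "5.2"

},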
|
{ |
|
"text": "Training time for one model on four NVIDIA GTX 1080 GPUs or NVIDIA TITAN X (Pascal) GPUs is between one and two days, depending on model complexity. The M-CGRU-HARD model is the most complex and trains longest. Table 2 contains relevant results for the WMT-2016 APE shared task -during the task and afterwards. WMT-2016 BASELINE-1 is the raw uncorrected MT output. BASELINE-2 is the result of a vanilla phrase-based Moses system (Koehn et al., 2007) trained only on the official 12,000 sentences. Junczys-Dowmunt and Grundkiewicz (2016) is the best system at the shared task. Pal In Table 3 we present the results for the models discussed in this work. Unsurprisingly, none of the single attention models can compete with the better systems reported in the literature. The encoder-decoder model with only hard monotonic attention (GRU-HARD) is the clear loser, while the comparison between CGRU and CGRU-HARD remains inconclusive. CGRU-HARD seems to generalize slightly better, but would not have been chosen based on the development set performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 429, |
|
"end": 449, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 218, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 583, |
|
"end": 590, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training parameters", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The dual-attention models each outperform the best WMT-2016 system and the currently reported best single-model Pal et al. (2017) SYMMETRIC. The ensembles also beat the system combination Pal et al. (2017) RERANKING in terms of TER (not in terms of BLEU though). The simpler dualattention model with no hard-attention M-CGRU reaches slightly better results on the test set than its counterpart with added hard attention M-CGRU-HARD, but the situation would have been less clear if only the development set were used to determine the best model. The hard-attention model with dual soft-attention benefits less from ensembling.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 129, |
|
"text": "Pal et al. (2017)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 205, |
|
"text": "Pal et al. (2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We postulated that the hard-attention models might have a potential for higher faithfulness. Since the APE task is a mostly monolingual task,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Faithfulness and Errors", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Mod. Imp. Det. we can verify this by comparing TER scores with regard to the reference post-edition (TER-pe) and TER scores with regard to the raw MT output (TER-mt). The lower the TER-mt score the fewer changes have been made to the input to arrive at the output, thus resulting in higher faithfulness. Table 4 contains this comparison for the WMT-2016 APE test set. The hard-attention models indeed make fewer changes than their softattention counterparts. This difference is especially dramatic for M-CGRU and M-CGRU-HARD, where only small differences in TER-pe occur, but a gap of more than two TER points for TER-mt. This shows that hard-attention models can reach similar TER scores to soft-attention models while performing fewer changes. It might also explain why ensembling has a lower impact on the hardattention models: higher faithfulness means less variety which results in smaller benefits from ensembles. Table 5 compares the number of modified, improved and deteriorated test set sentences (2000 in total) for all models. The majority of sentences is being modified. While the number of deteriorated sentences is comparable between models, the number of improved sentences increases for the dual-source architectures. Ensembles lower the number of deteriorated sentences rather than increasing the number of improved sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 304, |
|
"end": 311, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 920, |
|
"end": 927, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Figures 1 and 2 visualize the behavior of the presented attention variants examined in this work for the example sentences in Table 6 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 133, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Visualization of Attention Types", |
|
"sec_num": "6.2" |
|
}, |
|
{

"text": "For this sentence, the unseen MT system mistranslated the word \"Set\" as \"festlegen\". The monolingual mt \u2192 pe systems cannot easily correct the error as the original meaning is lost, but they improve grammaticality. In Figure 1, we see how the soft attention model (CGRU) follows the input roughly monotonically. The monotonic hard attention model (GRU-HARD) does this naturally. For CGRU-HARD, it is interesting to see how the monotonic attention now allows the soft attention mechanism to look around the input sentence more freely or to remain inactive instead of following the monotonic path. Both {mt, src} \u2192 pe systems take advantage of the src information and improve the input. The proposed modifications could be accepted as correct; one matches the reference. The highlighted rows and columns in Figure 2 show how the original source was used to reconstruct the missing word \"Satz\" and how both attention mechanisms interact. The attention over src seems to spend most of its time in a \"parking\" position at the sentence end unless it can provide useful information; the attention over mt follows the input closely.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Visualization of Attention Types",

"sec_num": "6.2"

},

{

"text": "Table 6: Example corrections for different models. Only the multi-source models manage to restore the missing translation for \"Set\" and insert quotes. The added particle \"aus\" does not appear in the reference, but is grammatically correct as well. | mt: W\u00e4hlen Sie einen Tastaturbefehlssatz im Men\u00fc festlegen . | src: Select a shortcut set in the Set menu . | CGRU: W\u00e4hlen Sie einen Tastaturbefehlssatz im Men\u00fc aus . | GRU-HARD: W\u00e4hlen Sie einen Tastaturbefehlssatz im Men\u00fc aus . | CGRU-HARD: W\u00e4hlen Sie einen Tastaturbefehlssatz im Men\u00fc aus . | M-CGRU: W\u00e4hlen Sie einen Tastaturbefehlssatz im Men\u00fc \" Satz \" aus . | M-CGRU-HARD: W\u00e4hlen Sie einen Tastaturbefehlssatz im Men\u00fc \" Satz . \" | pe: W\u00e4hlen Sie einen Tastaturbefehlssatz im Men\u00fc \" Satz . \"",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Visualization of Attention Types",

"sec_num": "6.2"

},
|
{ |
|
"text": "In this paper we presented several neural APE models that are equipped with non-standard attention mechanisms and combinations thereof. Among these, hard attention models have been applied to APE for the first time, whereas dual-soft attention models have been proposed before for APE tasks, but with non-conclusive results. This is the first work to report state-of-theart results for dual-attention models that integrate full post-edition triplets into a single end-to-end model. The ensembles of dual-attention models provide more than 1.52 TER points improvement over the best WMT-2016 system and 0.7 TER improvement over the best reported system combination for the same test set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We also demonstrated that while hard-attention models yield similar results to pure soft-attention models, they do so by performing fewer changes to the input. This might be a useful property in scenarios where conservative edits are preferred.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://github.com/marian-nmt/marian 2 Biases have been omitted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Similar to GNU wdiff.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Calixto et al. (2017) combine their two attention models by modifying their GRU cell to include another set of parameters that is multiplied with the additional context vector and summed in the GRU-components. Formally, both approaches give identical results, as for concatenation the original parameters have to grow in size to match the now longer input vector dimensions. The GRU cell itself does not need to be modified.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The artificial filtered data has been made available at https://github.com/emjotde/amunmt/wiki/ AmuNMT-for-Automatic-Post-Editing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was funded by the Amazon Academic Research Awards program. This project has received funding from the European Union's Horizon 2020 research and innovation program under grant 644333 (TraMOOC) and 645487 (Mod-ernMT). This work was partially funded by Facebook. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Facebook.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Sequence to sequence transduction with hard monotonic attention", |
|
"authors": [ |
|
{ |
|
"first": "Roee", |
|
"middle": [], |
|
"last": "Aharoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.01487" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roee Aharoni and Yoav Goldberg. 2016. Sequence to sequence transduction with hard monotonic atten- tion. arXiv preprint arXiv:1611.01487.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Represen- tations, San Diego, CA.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Statistical post-editing for a statistical MT system", |
|
"authors": [ |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "B\u00e9chara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanjun", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 13th Machine Translation Summit", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "308--315", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanna B\u00e9chara, Yanjun Ma, and Josef van Genabith. 2011. Statistical post-editing for a statistical MT system. In Proceedings of the 13th Machine Trans- lation Summit, pages 308-315, Xiamen, China.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "An evaluation of statistical post-editing systems applied to RBMT and SMT systems", |
|
"authors": [ |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "B\u00e9chara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rapha\u00ebl", |
|
"middle": [], |
|
"last": "Rubino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yifan", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanjun", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of COLING 2012", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "215--230", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanna B\u00e9chara, Rapha\u00ebl Rubino, Yifan He, Yanjun Ma, and Josef van Genabith. 2012. An evaluation of statistical post-editing systems applied to RBMT and SMT systems. In Proceedings of COLING 2012, pages 215-230, Mumbai, India.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Findings of the 2016 conference on machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ondrej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajen", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [ |
|
"Jimeno" |
|
], |
|
"last": "Yepes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varvara", |
|
"middle": [], |
|
"last": "Logacheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matteo", |
|
"middle": [], |
|
"last": "Negri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aurelie", |
|
"middle": [], |
|
"last": "Neveol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariana", |
|
"middle": [], |
|
"last": "Neves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Popel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Rubino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolina", |
|
"middle": [], |
|
"last": "Scarton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Turchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karin", |
|
"middle": [], |
|
"last": "Verspoor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcos", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "131--198", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aure- lie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Spe- cia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation, pages 131- 198, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Doubly-attentive decoder for multi-modal neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Iacer", |
|
"middle": [], |
|
"last": "Calixto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Campbell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iacer Calixto, Qun Liu, and Nick Campbell. 2017. Doubly-attentive decoder for multi-modal neural machine translation. CoRR, abs/1702.01287.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Exploring the planet of the APEs: a comparative study of state-of-the-art methods for MT automatic post-editing", |
|
"authors": [ |
|
{ |
|
"first": "Rajen", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marion", |
|
"middle": [], |
|
"last": "Weller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matteo", |
|
"middle": [], |
|
"last": "Negri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Turchi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "156--161", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rajen Chatterjee, Marion Weller, Matteo Negri, and Marco Turchi. 2015. Exploring the planet of the APEs: a comparative study of state-of-the-art meth- ods for MT automatic post-editing. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing, pages 156-161, Beijing, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proc. of Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learn- ing Phrase Representations Using RNN Encoder- Decoder for Statistical Machine Translation. In Proc. of Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A theoretically grounded application of dropout in recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Yarin", |
|
"middle": [], |
|
"last": "Gal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zoubin", |
|
"middle": [], |
|
"last": "Ghahramani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems (NIPS).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Algorithms for the longest common subsequence problem", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Hirschberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "J. ACM", |
|
"volume": "24", |
|
"issue": "4", |
|
"pages": "664--675", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel S. Hirschberg. 1977. Algorithms for the longest common subsequence problem. J. ACM, 24(4):664- 675.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Is neural machine translation ready for deployment? A case study on 30 translation directions", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomasz", |
|
"middle": [], |
|
"last": "Dwojak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 9th International Workshop on Spoken Language Translation (IWSLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? A case study on 30 translation directions. In Proceedings of the 9th Interna- tional Workshop on Spoken Language Translation (IWSLT), Seattle, WA.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Log-linear combinations of monolingual and bilingual neural machine translation models for automatic post-editing", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Grundkiewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "751--758", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2016. Log-linear combinations of monolingual and bilingual neural machine translation models for au- tomatic post-editing. In Proceedings of the First Conference on Machine Translation, pages 751- 758.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik P.", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 3rd International Conference on Learning Representations (ICLR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceed- ings of the 3rd International Conference on Learn- ing Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Moses: Open source toolkit for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Pro- ceedings of the 45th Annual Meeting of the Associa- tion for Computational Linguistics, pages 177-180. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Attention strategies for multi-source sequence-to-sequence learning", |
|
"authors": [ |
|
{ |
|
"first": "Jindrich", |
|
"middle": [], |
|
"last": "Libovick\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jindrich", |
|
"middle": [], |
|
"last": "Helcl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jindrich Libovick\u00fd and Jindrich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. CoRR, abs/1704.06567.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "CUNI system for WMT16 automatic post-editing and multimodal translation tasks", |
|
"authors": [ |
|
{ |
|
"first": "Jindrich", |
|
"middle": [], |
|
"last": "Libovick\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jindrich", |
|
"middle": [], |
|
"last": "Helcl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Tlust\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ondrej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Pecina", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "646--654", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jindrich Libovick\u00fd, Jindrich Helcl, Marek Tlust\u00fd, On- drej Bojar, and Pavel Pecina. 2016. CUNI sys- tem for WMT16 automatic post-editing and multi- modal translation tasks. In Proceedings of the First Conference on Machine Translation, pages 646- 654, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Minimum error rate training in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och. 2003. Minimum error rate train- ing in statistical machine translation. In Proceed- ings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160-167, Sap- poro, Japan. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Neural automatic post-editing using prior alignment and reranking", |
|
"authors": [ |
|
{ |
|
"first": "Santanu", |
|
"middle": [], |
|
"last": "Pal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sudip", |
|
"middle": [], |
|
"last": "Kumar Naskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihaela", |
|
"middle": [], |
|
"last": "Vela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "349--355", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Santanu Pal, Sudip Kumar Naskar, Mihaela Vela, Qun Liu, and Josef van Genabith. 2017. Neural auto- matic post-editing using prior alignment and rerank- ing. In Proceedings of the European Chapter of the Association for Computational Linguistics, pages 349-355.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "USAAR-SAPE: An English-Spanish statistical automatic post-editing system", |
|
"authors": [ |
|
{ |
|
"first": "Santanu", |
|
"middle": [], |
|
"last": "Pal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihaela", |
|
"middle": [], |
|
"last": "Vela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sudip", |
|
"middle": [], |
|
"last": "Kumar Naskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "216--221", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Santanu Pal, Mihaela Vela, Sudip Kumar Naskar, and Josef van Genabith. 2015. USAAR-SAPE: An English-Spanish statistical automatic post-editing system. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 216-221, Lisbon, Portugal. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "BLEU: A method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Com- putational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Nematus: a toolkit for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Hitschler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "L\u00e4ubli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio Valerio", |
|
"middle": [], |
|
"last": "Miceli Barone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jozef", |
|
"middle": [], |
|
"last": "Mokry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Nadejde", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "65--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexan- dra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel L\u00e4ubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: a toolkit for neural machine trans- lation. In Proceedings of the Software Demonstra- tions of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics, pages 65-68, Valencia, Spain. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics, pages 1715-1725, Berlin, Germany. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Statistical phrase-based post-editing", |
|
"authors": [ |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Simard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Goutte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Isabelle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "508--515", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michel Simard, Cyril Goutte, and Pierre Isabelle. 2007. Statistical phrase-based post-editing. In Proceed- ings of the Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 508-515, Rochester, New York. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A study of translation edit rate with targeted human annotation", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Snover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linnea", |
|
"middle": [], |
|
"last": "Micciulla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Makhoul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of Association for Machine Translation in the Americas", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Transla- tion in the Americas.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Multi-source neural translation", |
|
"authors": [ |
|
{ |
|
"first": "Barret", |
|
"middle": [], |
|
"last": "Zoph", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "30--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In Proceedings of the 2016 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 30-34, San Diego, Cali- fornia. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Behavior of different monolingual attention models (best viewed in color). Attention matrices for dual-soft-attention model M-CGRU (best viewed in color).", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"text": "Results from the literature for the WMT-2016 APE development and test set.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>dev 2016</td><td>test 2016</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"text": "", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>: TER w.r.t. the reference compared to TER</td></tr><tr><td>w.r.t. the input on test 2016. Lower results for</td></tr><tr><td>TER-mt indicate greater similarity to the input.</td></tr><tr><td>et al. (2017) SYMMETRIC is the currently best re-</td></tr><tr><td>ported result on the WMT-2016 APE test set for</td></tr><tr><td>a single neural model (single source), whereas Pal</td></tr><tr><td>et al. (2017) RERANKING -the overall best re-</td></tr><tr><td>ported result on the test set -is a system com-</td></tr><tr><td>bination of Pal et al. (2017) SYMMETRIC with</td></tr><tr><td>phrase-based models via n-best list re-ranking.</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"text": "Number of test set sentences modified, improved and deteriorated by each model.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |