Columns: id (string, 32-33 chars), x (citation context text, 41-1.75k chars), y (label, 4-39 chars)
12ab280d48ef6bfae0ff27a400e2ab_2
This session focused on experimental or planned approaches to human language technology evaluation and included an overview and five papers: two on experimental evaluation approaches [1, 2], and three on ongoing work in new annotation and evaluation approaches for human language technology [3, <cite>4,</cite> 5].
background
12ab280d48ef6bfae0ff27a400e2ab_3
The last three papers ([3, <cite>4,</cite> 5]) take various approaches to the issue of predicate-argument evaluation. (Footnote 1: The Penn Treebank parse annotations provide an interesting case where annotation supported evaluation.)
background
12ab280d48ef6bfae0ff27a400e2ab_4
The last three papers [3, <cite>4,</cite> 5] all reflect a concern to develop better evaluation methods for semantics, with a shared focus on predicate-argument evaluation.
background
12ab280d48ef6bfae0ff27a400e2ab_5
Both Marcus and Grishman argued that the Treebank annotation should directly support the MUC-style predicate-argument evaluation outlined in <cite>[4]</cite>, although the Treebank annotations may be a subset of what is used for MUC predicate-argument evaluation.
background
12c5d72fad925c8ec025cda87a0fd9_0
Verb-noun combinations (VNCs), consisting of a verb with a noun in its direct object position, are a common type of semantically-idiomatic MWE in English and cross-lingually (<cite>Fazly et al., 2009</cite> ).
background
12c5d72fad925c8ec025cda87a0fd9_1
In this paper we further incorporate knowledge of the lexico-syntactic fixedness of VNCs, automatically acquired from corpora using the method of <cite>Fazly et al. (2009)</cite>, into our various embedding-based approaches.
uses
12c5d72fad925c8ec025cda87a0fd9_2
Much research on MWE identification has focused on specific kinds of MWEs (e.g., Patrick and Fletcher, 2005; Uchiyama et al., 2005) , including English VNCs (e.g., <cite>Fazly et al., 2009</cite>; Salton et al., 2016) , although some recent work has considered the identification of a broad range of kinds of MWEs (e.g., Schneider et al., 2014; Brooke et al., 2014; Savary et al., 2017) .
background
12c5d72fad925c8ec025cda87a0fd9_3
Work on MWE identification has leveraged rich linguistic knowledge of the constructions under consideration (e.g., <cite>Fazly et al., 2009</cite>; Fothergill and Baldwin, 2012) , treated literal and idiomatic as two senses of an expression and applied approaches similar to word-sense disambiguation (e.g., Birke and Sarkar, 2006; Hashimoto and Kawahara, 2008) , incorporated topic models (e.g., Li et al., 2010) , and made use of distributed representations of words (Gharbieh et al., 2016) .
background
12c5d72fad925c8ec025cda87a0fd9_4
<cite>Fazly et al. (2009)</cite> formed a set of eleven lexico-syntactic patterns for VNC instances capturing the voice of the verb (active or passive), determiner (e.g., a, the), and number of the noun (singular or plural).
background
12c5d72fad925c8ec025cda87a0fd9_5
<cite>Fazly et al. (2009)</cite> formed a set of eleven lexico-syntactic patterns for VNC instances capturing the voice of the verb (active or passive), determiner (e.g., a, the), and number of the noun (singular or plural). <cite>They</cite> then determine the canonical form, C(v, n), for a given VNC as follows:
background
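The sentence in this row ends at a colon, so the actual definition is not shown. As a hedged sketch, assuming canonical forms are the patterns whose corpus frequency stands out statistically (Fazly et al. use a statistical measure over the eleven patterns; the z-score threshold, pattern names, and counts below are all hypothetical):

```python
from collections import Counter

def canonical_forms(pattern_counts, z_threshold=1.0):
    """Pick the canonical pattern(s) for one VNC type from its
    per-pattern corpus frequencies. Assumption: a pattern counts as
    canonical when the z-score of its frequency across the eleven
    patterns exceeds z_threshold."""
    counts = list(pattern_counts.values())
    n = len(counts)
    mean = sum(counts) / n
    std = (sum((c - mean) ** 2 for c in counts) / n) ** 0.5 or 1.0
    return {p for p, c in pattern_counts.items()
            if (c - mean) / std > z_threshold}

# Hypothetical counts for "blow the whistle" over three of the patterns:
counts = Counter({"v_act det:the n_sg": 920,
                  "v_act det:a n_sg": 40,
                  "v_pass det:the n_sg": 25})
print(canonical_forms(counts))  # {'v_act det:the n_sg'}
```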
12c5d72fad925c8ec025cda87a0fd9_6
<cite>Fazly et al. (2009)</cite> showed that idiomatic usages of a VNC tend to occur in that expression's canonical form, while literal usages do not.
background
12c5d72fad925c8ec025cda87a0fd9_7
We use the VNC-Tokens dataset (Cook et al., 2008), the same dataset used by <cite>Fazly et al. (2009)</cite> and Salton et al. (2016), to train and evaluate our models.
similarities uses
12c5d72fad925c8ec025cda87a0fd9_8
<cite>Fazly et al. (2009)</cite> and Salton et al. (2016) structured their experiments differently.
background
12c5d72fad925c8ec025cda87a0fd9_9
<cite>Fazly et al.</cite> report results over DEV and TEST separately.
background
12c5d72fad925c8ec025cda87a0fd9_10
We therefore use accuracy to evaluate our models following <cite>Fazly et al. (2009)</cite> because the classes are roughly balanced.
motivation uses
12c5d72fad925c8ec025cda87a0fd9_11
In Table 2 we report results on DEV and TEST for each model, as well as the unsupervised CForm model of <cite>Fazly et al. (2009)</cite> , which simply labels a VNC as idiomatic if it occurs in its canonical form, and as literal otherwise.
differences
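The CForm rule is fully specified by the row above (label a VNC instance idiomatic iff it occurs in a canonical form), so a two-line sketch suffices; the pattern names and the canonical-form set are hypothetical, e.g. as produced by the earlier canonical_forms() sketch:

```python
def cform_label(instance_pattern, canonical):
    """Unsupervised CForm rule: an instance is 'idiomatic' iff its
    lexico-syntactic pattern is one of the expression's canonical
    forms, and 'literal' otherwise."""
    return "idiomatic" if instance_pattern in canonical else "literal"

print(cform_label("v_act det:the n_sg", {"v_act det:the n_sg"}))  # idiomatic
print(cform_label("v_act det:a n_pl", {"v_act det:the n_sg"}))    # literal
```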
12c5d72fad925c8ec025cda87a0fd9_12
In line with the findings of <cite>Fazly et al. (2009)</cite> , CForm achieves higher precision and recall on idiomatic usages than literal ones.
similarities
13249ad2fd022b9b4f1d22d2ca77cd_0
The weights of this linear combination are usually trained to maximise some automatic translation metric (e.g. BLEU) [1] using Minimum Error Rate Training (MERT) [2,<cite> 3]</cite> or a variant of the Margin Infused Relaxed Algorithm (MIRA) [4, 5] .
background
13249ad2fd022b9b4f1d22d2ca77cd_1
The most common approach is an iterative algorithm, MERT <cite>[3]</cite>, which employs N-best lists (the best N translations decoded with a weight set from a previous iteration) as the candidate translations C. In this way, the loss function is constructed as $E(\bar{E}, \hat{E}) = \sum_{s=1}^{S} E(\bar{e}_s, \hat{e}_s)$, where $\bar{e}_s$ is the reference sentence, $\hat{e}_s$ is selected from the N-best lists by $\hat{e}_s = \arg\max_{e \in C} \sum_{k=1}^{K} w_k H_k(e, f_s)$, and $S$ is the number of sentences.
background
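As a worked illustration of the selection step $\hat{e}_s = \arg\max_{e \in C} \sum_k w_k H_k(e, f_s)$ in the row above, a minimal sketch with hypothetical features and weights:

```python
def select_hypothesis(nbest, weights):
    """Pick the highest-scoring candidate e from an N-best list,
    scoring each by the weighted sum of its feature values:
    e_hat = argmax_e sum_k w_k * H_k(e, f).
    `nbest` is a list of (translation, feature_vector) pairs."""
    return max(nbest, key=lambda ef: sum(w * h for w, h in zip(weights, ef[1])))[0]

# Hypothetical 3-best list with two features (LM score, phrase score):
nbest = [("he goes home", [-4.2, -1.1]),
         ("he go home",   [-5.0, -0.8]),
         ("home he goes", [-4.8, -1.5])]
print(select_hypothesis(nbest, [1.0, 2.0]))  # "he goes home"
```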
134baefab4d27e9dafd0c050c43775_0
The only exceptions we are aware of are the Groningen Meaning Bank and the Parallel Meaning Bank <cite>(Abzianidze et al., 2017)</cite> , two annotation efforts which use a graphical user interface for annotating sentences with CCG derivations and other annotation layers, and which have produced CCG treebanks for English, German, Italian, and Dutch.
background
13d1d79a4922d3b5d215d6f8f722ba_0
De Cao et al. <cite>[2]</cite> proposed a method to detect the set of suitable WordNet senses able to evoke the same frame by exploiting the hypernym hierarchies that capture the largest number of LUs in the frame.
background
13d1d79a4922d3b5d215d6f8f722ba_1
The only comparable evaluation available is reported in [5], and shows that our results are promising. De Cao et al. <cite>[2]</cite> reported better performance, particularly for recall, but the evaluation of their mapping algorithm relied on a gold standard of 4 selected frames having at least 10 LUs and a given number of corpus instantiations.
differences
13d3d973a4be832f66b049b364fea5_0
Several neural architectures have been employed including variants of Long Short-Term Memory (LSTM)<cite> (Alikaniotis et al., 2016</cite>; Taghipour and Ng, 2016) and Convolutional Neural Networks (CNN) (Dong and Zhang, 2016) .
background
13d3d973a4be832f66b049b364fea5_1
For instance, <cite>Alikaniotis et al. (2016)</cite> developed score-specific word embeddings (SSWE) to address the AA task on the ASAP dataset.
motivation
13d3d973a4be832f66b049b364fea5_2
We implement a CNN as the AA model and compare its performance when initialized with our embeddings, tuned based on natural writing errors, to the one obtained when bootstrapped with the SSWE, proposed by <cite>Alikaniotis et al. (2016)</cite> , that relies on random noisy contexts and script scores.
similarities uses
13d3d973a4be832f66b049b364fea5_3
<cite>Alikaniotis et al. (2016)</cite> assessed the same dataset by building a bidirectional double-layer LSTM which outperformed Distributed Memory Model of Paragraph Vectors (PV-DM) (Le and Mikolov, 2014) and Support Vector Machines (SVM) baselines.
background
13d3d973a4be832f66b049b364fea5_4
<cite>Alikaniotis et al. (2016)</cite> applied a similar idea; in their SSWE model, they trained word embeddings to distinguish between correct and noisy contexts in addition to focusing more on each word's contribution to the overall text score.
background motivation
13d3d973a4be832f66b049b364fea5_5
In this section, we describe three different neural networks to pre-train word representations: the model implemented by <cite>Alikaniotis et al. (2016)</cite> and the two error-oriented models we propose in this work.
uses
13d3d973a4be832f66b049b364fea5_6
We compare our pre-training models to the SSWE developed by <cite>Alikaniotis et al. (2016)</cite> .
similarities
13d3d973a4be832f66b049b364fea5_7
Table 2 demonstrates that learning from the errors [...] (Footnote 9: Using the same parameters as <cite>Alikaniotis et al. (2016)</cite>.)
similarities uses
1527ce2786adfe0decf8c926a3d846_0
The vast majority of prior methods assume a domain-independent context and rely on Wikipedia and Simple English Wikipedia, a subset of Wikipedia using simplified grammar and terminology, to learn simplifications <cite>(Biran et al., 2011</cite>; Paetzold and Specia, 2015), with translation-based approaches using an aligned version (Coster and Kauchak, 2011; Horn et al., 2014; Yatskar et al., 2010).
background
1527ce2786adfe0decf8c926a3d846_1
Further, some approaches work by detecting all pairs of words in a corpus and filtering to isolate synonym or hypernym-relationship pairs using WordNet<cite> (Biran et al., 2011)</cite> .
background
1527ce2786adfe0decf8c926a3d846_2
One approach identifies all pairwise permutations of 'content' terms and then applies semantic (i.e., WordNet) and simplicity filters to eliminate pairs that are not simplifications<cite> (Biran et al., 2011)</cite> .
background
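A hedged sketch of the pair-generation-plus-semantic-filter idea in the row above, using NLTK's WordNet interface (requires the wordnet data; the vocabulary and the exact filtering criteria of Biran et al. are assumptions, and the simplicity filter is omitted):

```python
from itertools import permutations
from nltk.corpus import wordnet as wn  # assumes nltk + wordnet data installed

def is_synonym_or_hypernym(w1, w2):
    """Keep a candidate pair only if the words share a synset
    (synonyms) or some sense of w2 is a hypernym of some sense of w1."""
    s1, s2 = set(wn.synsets(w1)), set(wn.synsets(w2))
    if s1 & s2:
        return True
    hypernyms = {h for s in s1 for h in s.closure(lambda x: x.hypernyms())}
    return bool(hypernyms & s2)

# Hypothetical content vocabulary; generate ordered pairs and filter:
vocab = ["canine", "dog", "utilize", "use"]
pairs = [(a, b) for a, b in permutations(vocab, 2) if is_synonym_or_hypernym(a, b)]
```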
1527ce2786adfe0decf8c926a3d846_3
Embeddings identify words that share context in an unsupervised, scalable way and are more efficient than constructing co-occurrence matrices<cite> (Biran et al., 2011)</cite> .
motivation background
1527ce2786adfe0decf8c926a3d846_4
To retain only rules of the form complex word → simple word, we calculate the corpus complexity C <cite>(Biran et al., 2011)</cite> of each word $w$ as the ratio between its frequency $f$ in the scientific versus the general corpus: $C_w = f_{w,\mathrm{scientific}} / f_{w,\mathrm{general}}$.
similarities
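A minimal sketch of the ratio $C_w = f_{w,\mathrm{scientific}} / f_{w,\mathrm{general}}$ from the row above; the frequency tables and the zero-count guard are assumptions:

```python
def corpus_complexity(word, freq_scientific, freq_general):
    """Corpus complexity C_w = f_w,scientific / f_w,general.
    The two dicts of raw frequencies are hypothetical; in practice
    zero counts would need smoothing (the max() guard here)."""
    return freq_scientific.get(word, 0) / max(freq_general.get(word, 0), 1)

# Hypothetical counts: "utilize" is relatively more frequent in the
# scientific corpus than "use", so it scores as more complex.
f_sci, f_gen = {"utilize": 120, "use": 300}, {"utilize": 10, "use": 900}
print(corpus_complexity("utilize", f_sci, f_gen))  # 12.0
print(corpus_complexity("use", f_sci, f_gen))      # ~0.33
```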
1527ce2786adfe0decf8c926a3d846_5
We require that the final complexity score of the first word in the rule be greater than that of the second. While this simplicity filter has been shown to work well in general corpora <cite>(Biran et al., 2011)</cite>, it is sensitive to very small differences in the frequencies with which both words appear in the corpora.
differences
1527ce2786adfe0decf8c926a3d846_6
In prior context-aware simplification systems, the decision of whether to apply a simplification rule in an input sentence is complex, involving several similarity operations on word co-occurrence matrices<cite> (Biran et al., 2011)</cite> or using embeddings to incorporate co-occurrence context for pairs generated using other means (Paetzold and Specia, 2015) .
background
1527ce2786adfe0decf8c926a3d846_7
The second is the cosine similarity of a minimum shared frequency co-occurrence matrix for the words in the pair and the co-occurrence matrix for the input sentence<cite> (Biran et al., 2011)</cite> .
background
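The comparison in the row above is a cosine similarity between co-occurrence representations; a minimal sketch (the vectors are hypothetical rows of a co-occurrence matrix, not Biran et al.'s exact construction):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two co-occurrence vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

print(cosine([1, 0, 2], [2, 1, 2]))  # ~0.894
```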
1527ce2786adfe0decf8c926a3d846_8
Our SimpleScience approach outperforms the original approach by<cite> Biran et al. (2011)</cite> applied to the Wikipedia and SEW corpus as well as to the scientific corpus (Table 1) .
similarities uses
1527ce2786adfe0decf8c926a3d846_9
Adding techniques to filter antonym rules, such as using co-reference chains<cite> (Adel and Schütze, 2014)</cite> , is important in future work.
future_work
1540b0b172971ac75771b414765f1d_0
Bollmann and Søgaard (2016) and <cite>Bollmann et al. (2017)</cite> recently showed that we can obtain more robust historical text normalization models by exploiting synergies across historical text normalization datasets and with related tasks.
background
1540b0b172971ac75771b414765f1d_1
Specifically, <cite>Bollmann et al. (2017)</cite> showed that multi-task learning with German grapheme-to-phoneme translation as an auxiliary task improves a state-of-the-art sequence-to-sequence model for historical text normalization of medieval German manuscripts.
background
1540b0b172971ac75771b414765f1d_2
We consider 10 datasets from 8 different languages: German, using the <cite>Anselm dataset</cite> (taken from <cite>Bollmann et al., 2017</cite>) and texts from the RIDGES corpus (Odebrecht et al., 2016) [...] <cite>Bollmann et al. (2017)</cite> to obtain a single dataset.
uses
1540b0b172971ac75771b414765f1d_3
Specifically, we evaluate a state-ofthe-art approach to historical text normalization <cite>(Bollmann et al., 2017)</cite> with and without various auxiliary tasks, across 10 historical text normalization datasets.
motivation uses
1540b0b172971ac75771b414765f1d_4
Model We use the same encoder-decoder architecture with attention as described in <cite>Bollmann et al. (2017)</cite> .
uses
1540b0b172971ac75771b414765f1d_5
The hyperparameters were set on a randomly selected subset of 50,000 tokens from each of the following datasets: English, German (<cite>Anselm</cite>), Hungarian, Icelandic, and Slovene (Gaj).
uses
1540b0b172971ac75771b414765f1d_6
<cite>Bollmann et al. (2017)</cite> also describe a multi-task learning (MTL) scenario where the encoder-decoder model is trained on two datasets in parallel.
background
1540b0b172971ac75771b414765f1d_7
<cite>Bollmann et al. (2017)</cite> also describe a multi-task learning (MTL) scenario where the encoder-decoder model is trained on two datasets in parallel. We perform similar experiments on pairwise combinations of our datasets.
similarities
1540b0b172971ac75771b414765f1d_8
There is a wide range of design questions and sharing strategies that we ignore here, focusing instead on the circumstances under which the approach advocated in <cite>(Bollmann et al., 2017)</cite> works.
uses
1542325bbf9bed87c22d34d12ee40e_0
LR-decoding algorithms exist for phrase-based (Koehn, 2004; Galley and Manning, 2010) and syntax-based (Huang and Mi, 2010; Feng et al., 2012) models, and also for hierarchical phrase-based models (Watanabe et al., 2006; <cite>Siahbani et al., 2013</cite>), which are our focus in this paper.
uses background
1542325bbf9bed87c22d34d12ee40e_1
Throughout this paper we abuse notation for simplicity and use the term GNF grammars for such SCFGs. This constraint drastically reduces the size of the grammar for LR-Hiero in comparison to the Hiero grammar (<cite>Siahbani et al., 2013</cite>).
uses
1542325bbf9bed87c22d34d12ee40e_2
This constraint drastically reduces the size of the grammar for LR-Hiero in comparison to the Hiero grammar (<cite>Siahbani et al., 2013</cite>).
background
1542325bbf9bed87c22d34d12ee40e_3
<cite>Siahbani et al. (2013)</cite> propose an augmented version of LR decoding to address some limitations in the original LR-Hiero algorithm in terms of translation quality and time efficiency.
background
1542325bbf9bed87c22d34d12ee40e_4
We introduce two improvements to LR decoding of GNF grammars: (1) We add queue diversity to the <cite>cube pruning algorithm for LR-Hiero</cite>, and (2) We extend the LR-Hiero decoder to capture all the hierarchical phrasal alignments that are reachable in CKY-Hiero (restricted to using GNF grammars).
extends uses
1542325bbf9bed87c22d34d12ee40e_5
Although LR-Hiero performs much faster than Hiero in decoding and obtains BLEU scores comparable to a phrase-based translation system on some language pairs, there is still a notable gap between CKY-Hiero and LR-Hiero (<cite>Siahbani et al., 2013</cite>).
uses
1542325bbf9bed87c22d34d12ee40e_6
LR-Hiero with CP was introduced in (<cite>Siahbani et al., 2013</cite>) .
extends background
1542325bbf9bed87c22d34d12ee40e_7
d=1 in standard cube pruning for LR-Hiero (<cite>Siahbani et al., 2013</cite>) .
uses
1542325bbf9bed87c22d34d12ee40e_8
The pop limit for Hiero and <cite>LRHiero+CP</cite> is 500, and the beam size for LR-Hiero is 500.
uses background
1542325bbf9bed87c22d34d12ee40e_9
We extend the LR-Hiero decoder to handle such cases by making the GNF grammar more expressive. The pop limit for Hiero and <cite>LRHiero+CP</cite> is 500, and the beam size for LR-Hiero is 500. Other extraction and decoder settings, such as maximum phrase length, were identical across settings.
uses
1542325bbf9bed87c22d34d12ee40e_10
The pop limit for Hiero and <cite>LRHiero+CP</cite> is 500, and the beam size for LR-Hiero is 500. Other extraction and decoder settings, such as maximum phrase length, were identical across settings.
uses background
1542325bbf9bed87c22d34d12ee40e_11
To make the results comparable we use the same feature set for all baselines, Hiero as well (including new features proposed by (<cite>Siahbani et al., 2013</cite>) ).
similarities uses
1542325bbf9bed87c22d34d12ee40e_12
We use 3 baselines: (i) our implementation of (Watanabe et al., 2006) : LR-Hiero with beam search (LR-Hiero) and (ii) LR-Hiero with cube pruning (<cite>Siahbani et al., 2013</cite>) : (<cite>LR-Hiero+CP</cite>); and (iii) Kriya, an open-source implementation of Hiero in Python, which performs comparably to other open-source Hiero systems (Sankaran et al., 2012) .
uses
1542325bbf9bed87c22d34d12ee40e_13
Row 3 is from (<cite>Siahbani et al., 2013</cite>). As we discussed in Section 2, <cite>LR-Hiero+CP</cite> suffers from severe search errors on Zh-En (1.5 BLEU), but using queue diversity (QD=15) we close this gap.
motivation uses differences
1542325bbf9bed87c22d34d12ee40e_14
Table 2a shows the translation quality of different systems in terms of BLEU score. Row 3 is from (<cite>Siahbani et al., 2013</cite>). As we discussed in Section 2, <cite>LR-Hiero+CP</cite> suffers from severe search errors on Zh-En (1.5 BLEU), but using queue diversity (QD=15) we close this gap. We achieve better results than <cite>our previous work</cite> (<cite>Siahbani et al., 2013</cite>) [...] type (c) rules.
differences
1542325bbf9bed87c22d34d12ee40e_15
We can see that for all language pairs (ab) consistently improves the performance of LR-Hiero, significantly better than <cite>LR-Hiero+CP</cite> and LR-Hiero (p-value<0.05) on Cs-En and Zh-En, evaluated by MultEval (Clark et al., 2011).
differences
1542325bbf9bed87c22d34d12ee40e_16
Table 2a shows the translation quality of different systems in terms of BLEU score. Row 3 is from (<cite>Siahbani et al., 2013</cite>). As we discussed in Section 2, <cite>LR-Hiero+CP</cite> suffers from severe search errors on Zh-En (1.5 BLEU), but using queue diversity (QD=15) we close this gap. Row 4 is the same translation system as row 3 (<cite>LR-Hiero+CP</cite>).
differences
1542325bbf9bed87c22d34d12ee40e_17
But modifying rule type (c) does not show any improvement due to spurious ambiguity created by [...]. (Footnote 5: We report results on Cs-En and De-En in (<cite>Siahbani et al., 2013</cite>).)
similarities uses
1542325bbf9bed87c22d34d12ee40e_18
Table 2a shows the translation quality of different systems in terms of BLEU score. We can see that for all language pairs (ab) consistently improves the performance of LR-Hiero, significantly better than <cite>LR-Hiero+CP</cite> and LR-Hiero (p-value<0.05) on Cs-En and Zh-En, evaluated by MultEval (Clark et al., 2011). But modifying rule type (c) does not show any improvement due to spurious ambiguity created by [...]. (Footnote 5: We report results on Cs-En and De-En in (<cite>Siahbani et al., 2013</cite>).)
similarities differences
1542325bbf9bed87c22d34d12ee40e_19
We achieve better results than <cite>our previous work</cite> (<cite>Siahbani et al., 2013</cite>) [...] type (c) rules.
differences
1542325bbf9bed87c22d34d12ee40e_20
In (<cite>Siahbani et al., 2013</cite>) we discuss that LR-Hiero with beam search (Watanabe et al., 2006) does not perform at the level of state-of-the-art Hiero (more LM calls and lower translation quality).
background
1542325bbf9bed87c22d34d12ee40e_21
In (<cite>Siahbani et al., 2013</cite>) we discuss that LR-Hiero with beam search (Watanabe et al., 2006) does not perform at the level of state-of-the-art Hiero (more LM calls and lower translation quality). As we can see in this figure, adding the new modified rules slightly increases the number of language model queries on Cs-En and De-En, so that <cite>LR-Hiero+CP</cite> still works 2 to 3 times faster than Hiero.
uses
1542325bbf9bed87c22d34d12ee40e_22
<cite>LRHiero+CP</cite> with our modifications works substantially faster than LR-Hiero while obtaining significantly better translation quality on Zh-En.
differences
1542325bbf9bed87c22d34d12ee40e_23
On Zh-En, <cite>LR-Hiero+CP</cite> applies queue diversity (QD=15), which reduces search errors and improves translation quality but also increases the number of generated hypotheses.
differences
155920441b8e81dff4e2b8e110383d_0
Here, we also try to mimic the word2vec <cite>(Mikolov et al., 2013)</cite> embeddings (i.e., they are the expected outputs of the model) to learn representations of rare words with complex morphology.
uses
155920441b8e81dff4e2b8e110383d_1
Classical word representation models such as word2vec <cite>(Mikolov et al., 2013)</cite> have been successful in learning word representations for frequent words.
background
155920441b8e81dff4e2b8e110383d_2
Bojanowski et al. (2017) introduce an extension to word2vec <cite>(Mikolov et al., 2013)</cite> that represents each word in terms of the vector representations of its n-grams, an idea earlier applied by Schütze (1993), who learned representations of fourgrams by applying singular value decomposition (SVD).
background
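A hedged sketch of the n-gram decomposition described in the row above; the n-gram range, boundary markers, and embedding table are assumptions in the style of fasttext, not the cited implementation:

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word with '<'/'>' boundary markers
    (the n range and markers follow common practice; assumptions)."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word, ngram_emb):
    """Represent a word as the sum of its n-gram vectors
    (ngram_emb: dict from n-gram to list of floats; hypothetical)."""
    vecs = [ngram_emb[g] for g in char_ngrams(word) if g in ngram_emb]
    return [sum(d) for d in zip(*vecs)] if vecs else []

print(char_ngrams("where")[:4])  # ['<wh', 'whe', 'her', 'ere']
```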
155920441b8e81dff4e2b8e110383d_3
For training, we use the pre-trained word2vec <cite>(Mikolov et al., 2013)</cite> vectors in order to minimize the cost between the learned and pre-trained vectors with the following objective function:
uses
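The objective function itself is truncated from the row above; as a placeholder, a minimal sketch of one plausible form (mean squared distance to the pre-trained word2vec vector), explicitly an assumption rather than the cited work's exact objective:

```python
import numpy as np

def mimic_loss(learned, pretrained):
    """One plausible form of the truncated objective: mean squared
    distance between the composed representation and the pre-trained
    word2vec vector (an assumption, not the published formula)."""
    learned, pretrained = np.asarray(learned), np.asarray(pretrained)
    return float(np.mean((learned - pretrained) ** 2))

print(mimic_loss([0.1, 0.4], [0.0, 0.5]))  # 0.01
```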
155920441b8e81dff4e2b8e110383d_4
For the pre-trained word vectors, we used the word vectors of dimension 300 that were obtained by training word2vec <cite>(Mikolov et al., 2013)</cite> .
uses
155920441b8e81dff4e2b8e110383d_6
However, our model performs better than both fasttext (Bojanowski et al., 2017) and word2vec <cite>(Mikolov et al., 2013)</cite> on Turkish despite the highly agglutinative morphological structure of the language.
differences
155920441b8e81dff4e2b8e110383d_7
For English, we used the syntactic relations section provided in the Google analogy dataset <cite>(Mikolov et al., 2013)</cite> that involves 10675 questions.
uses
155920441b8e81dff4e2b8e110383d_8
The results show that our model outperforms both word2vec <cite>(Mikolov et al., 2013)</cite> and fasttext (Bojanowski et al., 2017) on both Turkish and English.
differences
155920441b8e81dff4e2b8e110383d_11
Our morpheme-based model morph2vec learns better word representations for morphologically complex words compared to the word-based model word2vec <cite>(Mikolov et al., 2013)</cite> , character-based model char2vec (Cao and Rei, 2016) , and the character n-gram level model fasttext (Bojanowski et al., 2017) .
differences
15bacab4a8c520cfcdd7e7bd1e9ec5_1
We will introduce an in-depth case study of Generative Adversarial Networks for NLP, with a focus on dialogue generation <cite>(Li et al., 2017)</cite> .
background
15bacab4a8c520cfcdd7e7bd1e9ec5_2
Finally, we provide an in-depth case study of deploying two-agent GAN models for conversational AI <cite>(Li et al., 2017)</cite> .
uses
15c8ca572430c214d9c571fbe0db95_0
In this paper, we present a phrase-based unigram system similar to the one in (<cite>Tillmann and Xia, 2003</cite>), which is extended by a unigram orientation model.
similarities
15c8ca572430c214d9c571fbe0db95_1
Our baseline model is the unigram monotone model described in (<cite>Tillmann and Xia, 2003</cite>) .
uses
15c8ca572430c214d9c571fbe0db95_2
We use a DP-based beam search procedure similar to the one presented in (<cite>Tillmann and Xia, 2003</cite>) .
similarities
15c8ca572430c214d9c571fbe0db95_3
This is the model presented in (<cite>Tillmann and Xia, 2003</cite>) .
uses
15df1d107fb349f78c313b0c3342b8_0
The system that we propose builds on top of one of the latest neural MT architectures called the Transformer <cite>(Vaswani et al., 2017)</cite> .
extends
15df1d107fb349f78c313b0c3342b8_1
This section provides a brief high-level explanation of the neural MT approach that we are using as a baseline system, which is one of the strongest systems presented recently <cite>(Vaswani et al., 2017)</cite> , as well as a glance of its differences with other popular neural machine translation architectures.
motivation background
15df1d107fb349f78c313b0c3342b8_2
In this paper we make use of the third paradigm for neural machine translation, proposed in <cite>(Vaswani et al., 2017)</cite> , namely the Transformer architecture, which is based on a feed-forward encoder-decoder scheme with attention mechanisms.
uses
15df1d107fb349f78c313b0c3342b8_3
Equations and details about the Transformer system can be found in the original paper <cite>(Vaswani et al., 2017)</cite> and are beyond the scope of this paper.
background
167511f278a8596aed0124c3a4242b_0
Current Simultaneous Neural Machine Translation (SNMT) systems (Satija and Pineau, 2016; Cho and Esipova, 2016; <cite>Gu et al., 2017)</cite> use an AGENT to control an incremental encoder-decoder (or sequence to sequence) NMT model.
background
167511f278a8596aed0124c3a4242b_1
Current Simultaneous Neural Machine Translation (SNMT) systems (Satija and Pineau, 2016; Cho and Esipova, 2016; <cite>Gu et al., 2017)</cite> use an AGENT to control an incremental encoder-decoder (or sequence to sequence) NMT model. In this paper, we propose adding a new action to the AGENT: a PREDICT action that predicts what words might appear in the input stream.
extends
167511f278a8596aed0124c3a4242b_2
An agent-based framework whose actions decide whether to translate or wait for more input is a natural way to extend neural MT to simultaneous neural MT, and has been explored in (Satija and Pineau, 2016; <cite>Gu et al., 2017)</cite>. It contains two main components: the ENVIRONMENT, which receives the input words $X = \{x_1, \ldots, x_N\}$ from the source language and incrementally generates translated words $W = \{w_1, \ldots, w_M\}$ in the target language; and the AGENT, which decides an action $a_t$ for each time step.
background
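A minimal sketch of the ENVIRONMENT/AGENT loop described above, with READ and WRITE actions; the policy and incremental decoder are stubs, and the PREDICT action proposed in this paper would add a third branch that appends a predicted word to the read buffer:

```python
def simultaneous_decode(source_stream, policy, nmt_step, max_len=100):
    """Greedy simultaneous decoding loop: the AGENT's policy picks
    READ (consume a source word) or WRITE (emit a target word) at
    each step. `policy` and `nmt_step` are hypothetical stubs."""
    read_so_far, output = [], []
    while len(output) < max_len:
        action = policy(read_so_far, output)  # "READ" or "WRITE"
        if action == "READ":
            try:
                read_so_far.append(next(source_stream))
                continue
            except StopIteration:
                pass  # input exhausted: fall through to WRITE
        word = nmt_step(read_so_far, output)  # one incremental decoder step
        if word == "</s>":
            break
        output.append(word)
    return output
```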
167511f278a8596aed0124c3a4242b_3
The agent in the greedy decoding framework <cite>(Gu et al., 2017)</cite> was trained using reinforcement learning with the policy gradient algorithm (Williams, 1992); it observes the current state of the ENVIRONMENT at time step $t$ as $o_t = [c_t; s_t; w_m]$.
uses
167511f278a8596aed0124c3a4242b_4
The delay reward is smoothed using a target delay, a scalar constant denoted by $d^*$ <cite>(Gu et al., 2017)</cite>:
uses
167511f278a8596aed0124c3a4242b_5
Reinforcement Learning is used to train the AGENT using a policy gradient algorithm<cite> (Gu et al., 2017</cite>; Williams, 1992) which searches for the maximum in
uses
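The maximized quantity is truncated from the row above; assuming the usual REINFORCE objective $J(\theta) = \mathbb{E}_{\pi_\theta}[\sum_t r_t]$, a hedged sketch of one policy-gradient update (all names and shapes hypothetical):

```python
import numpy as np

def reinforce_update(theta, episodes, grad_log_pi, lr=0.01):
    """One REINFORCE step for J(theta) = E[sum_t r_t]: follow the
    gradient estimate sum_t grad log pi(a_t|o_t; theta) * R, with R
    the episode return. `episodes` is a list of (observations,
    actions, rewards) triples; grad_log_pi is the score function."""
    grad = np.zeros_like(theta)
    for obs, acts, rews in episodes:
        ret = sum(rews)
        for o, a in zip(obs, acts):
            grad += grad_log_pi(theta, o, a) * ret
    return theta + lr * grad / len(episodes)
```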
167511f278a8596aed0124c3a4242b_6
All sentences have been tokenized and the words are segmented using byte pair encoding (BPE) (Sennrich et al., 2016). Model configuration: for a fair comparison, we follow the settings that worked best for the greedy decoding model in <cite>(Gu et al., 2017)</cite> and set the target delay $d^*$ for the AGENT to 0.7.
uses
167511f278a8596aed0124c3a4242b_7
We modified the SNMT trainable agent in<cite> (Gu et al., 2017)</cite> and added a new non-trivial PREDICT action to the agent.
extends