{
"paper_id": "I17-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:38:52.755940Z"
},
"title": "Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": "",
"affiliation": {
"laboratory": "MIT Computer Science and Artificial Intelligence Laboratory",
"institution": "",
"location": {
"postCode": "02139",
"settlement": "Cambridge",
"region": "MA",
"country": "USA"
}
},
"email": "belinkov@mit.edu"
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HBKU",
"location": {
"settlement": "Doha",
"country": "Qatar"
}
},
"email": "lmarquez@qf.org.qa"
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HBKU",
"location": {
"settlement": "Doha",
"country": "Qatar"
}
},
"email": "hsajjad@qf.org.qa"
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HBKU",
"location": {
"settlement": "Doha",
"country": "Qatar"
}
},
"email": "ndurrani@qf.org.qa"
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "HBKU",
"location": {
"settlement": "Doha",
"country": "Qatar"
}
},
"email": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": "",
"affiliation": {
"laboratory": "MIT Computer Science and Artificial Intelligence Laboratory",
"institution": "",
"location": {
"postCode": "02139",
"settlement": "Cambridge",
"region": "MA",
"country": "USA"
}
},
"email": "glass@mit.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "While neural machine translation (NMT) models provide improved translation quality in an elegant framework, it is less clear what they learn about language. Recent work has started evaluating the quality of vector representations learned by NMT models on morphological and syntactic tasks. In this paper, we investigate the representations learned at different layers of NMT encoders. We train NMT systems on parallel data and use the models to extract features for training a classifier on two tasks: part-of-speech and semantic tagging. We then measure the performance of the classifier as a proxy to the quality of the original NMT model for the given task. Our quantitative analysis yields interesting insights regarding representation learning in NMT models. For instance, we find that higher layers are better at learning semantics while lower layers tend to be better for part-of-speech tagging. We also observe little effect of the target language on source-side representations, especially in higher quality models. 1",
"pdf_parse": {
"paper_id": "I17-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "While neural machine translation (NMT) models provide improved translation quality in an elegant framework, it is less clear what they learn about language. Recent work has started evaluating the quality of vector representations learned by NMT models on morphological and syntactic tasks. In this paper, we investigate the representations learned at different layers of NMT encoders. We train NMT systems on parallel data and use the models to extract features for training a classifier on two tasks: part-of-speech and semantic tagging. We then measure the performance of the classifier as a proxy to the quality of the original NMT model for the given task. Our quantitative analysis yields interesting insights regarding representation learning in NMT models. For instance, we find that higher layers are better at learning semantics while lower layers tend to be better for part-of-speech tagging. We also observe little effect of the target language on source-side representations, especially in higher quality models. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT) offers an elegant end-to-end architecture, while at the same time improving translation quality. However, little is known about the inner workings of these models and their interpretability is limited. Recent work has started exploring what kind of linguistic information such models learn on morphological (Vylomova et al., 2016; and syntactic levels (Shi et al., 2016; Sennrich, 2017) .",
"cite_spans": [
{
"start": 340,
"end": 363,
"text": "(Vylomova et al., 2016;",
"ref_id": "BIBREF31"
},
{
"start": 385,
"end": 403,
"text": "(Shi et al., 2016;",
"ref_id": "BIBREF29"
},
{
"start": 404,
"end": 419,
"text": "Sennrich, 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One observation that has been made is that lower layers in the neural MT network learn different kinds of information than higher layers. For example, Shi et al. (2016) and found that representations from lower layers of the NMT encoder are more predictive of word-level linguistic properties like part-ofspeech (POS) and morphological tags, whereas higher layer representations are more predictive of more global syntactic information. In this work, we take a first step towards understanding what NMT models learn about semantics. We evaluate NMT representations from different layers on a semantic tagging task and compare to the results on a POS tagging task. We believe that understanding the semantics learned in NMT can facilitate using semantic information for improving NMT systems, as previously shown for non-neural MT (Chan et al., 2007; Liu and Gildea, 2010; Gao and Vogel, 2011; Wu et al., 2011; Jones et al., 2012; Gildea, 2013, 2014) .",
"cite_spans": [
{
"start": 151,
"end": 168,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF29"
},
{
"start": 827,
"end": 849,
"text": "MT (Chan et al., 2007;",
"ref_id": null
},
{
"start": 850,
"end": 871,
"text": "Liu and Gildea, 2010;",
"ref_id": "BIBREF22"
},
{
"start": 872,
"end": 892,
"text": "Gao and Vogel, 2011;",
"ref_id": "BIBREF13"
},
{
"start": 893,
"end": 909,
"text": "Wu et al., 2011;",
"ref_id": "BIBREF33"
},
{
"start": 910,
"end": 929,
"text": "Jones et al., 2012;",
"ref_id": "BIBREF15"
},
{
"start": 930,
"end": 949,
"text": "Gildea, 2013, 2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the semantic (SEM) tagging task, we use the dataset recently introduced by Bjerva et al. (2016) . This is a lexical semantics task: given a sentence, the goal is to assign to each word a tag representing a semantic class. The classes capture nuanced meanings that are ignored in most POS tag schemes. For instance, proximal and distal demonstratives (e.g., this and that) are typically assigned the same POS tag (DT) but receive different SEM tags (PRX and DST, respectively), and proper nouns are assigned different SEM tags depending on their type (e.g., geopolitical entity, organization, person, and location). As another example, consider pronouns like myself, yourself, and herself. They may have reflexive or emphasizing functions, as in (1) and (2), respectively:",
"cite_spans": [
{
"start": 79,
"end": 99,
"text": "Bjerva et al. (2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Sarah bought herself a book",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Sarah herself bought a book Figure 1 : Illustration of our approach, after : (i) NMT system trained on parallel data; (ii) features extracted from pre-trained model; (iii) classifier trained using the extracted features. We train classifiers on either SEM or POS tagging using features from different layers (here: layer 2).",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In these examples, herself has the same POS tag (PRP) but different SEM tags: REF for a reflexive function and EMP for an emphasizing function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Capturing semantic distinctions of this sort can be important for producing accurate translations. For instance, example (1) would be translated to Spanish with the reflexive pronoun se, whereas (2) would be translated with the intensifier misma. Thus, a translation system needs to learn different representations of herself in the two sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to assess the quality of the representations learned by NMT models, we adopt the following methodology from Shi et al. (2016) and . We first train an NMT system on parallel data. Given a sentence, we extract representations from the pre-trained NMT model and train a word-level classifier to predict a tag for each word. Our assumption is that the performance of the classifier reflects the quality of the representation for the given task.",
"cite_spans": [
{
"start": 117,
"end": 134,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We compare POS and SEM tagging quality with representations from different layers or from models trained on different target languages, while keeping the English source fixed. Our results yield useful insights on representation learning in NMT:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Consistent with previous work, we find that lower layer representations are usually better for POS tagging. However, we also find that representations from higher layers are better at capturing semantics, even though these are word-level labels. This is especially true with tags that are more semantic in nature such as discourse functions or noun concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In contrast to previous work, we observe little effect of the target language on source-side representation. We find that the effect of target language diminishes as the size of data used to train the NMT model increases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a parallel corpus of source and target sentence pairs, we train an NMT system with a standard sequence-to-sequence model with attention (Bahdanau et al., 2014; Sutskever et al., 2014) . After training the NMT system, we fix its parameters and treat it as a feature generator for our classification task. Let h k j denote the output of the k-th layer of the encoder at the j-th word. Given another corpus of sentences, where each word is annotated with a label, we train a classifier that takes h k j as input features and maps words to labels. We then measure the performance of the classifier as a way to evaluate the quality of the representations generated by the NMT system. By extracting different NMT features we can obtain a quantitative comparison of representation learning quality in the NMT model for the given task. For instance, we may vary k in order to evaluate representations learned at different encoding layers.",
"cite_spans": [
{
"start": 142,
"end": 165,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 166,
"end": 189,
"text": "Sutskever et al., 2014)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "In our case, we first train NMT systems on parallel corpora of an English source and several target languages. Then we train separate classifiers for predicting POS and SEM tags using the features h k j that are obtained from the English encoder and evaluate their accuracies. Figure 1 illustrates the process.",
"cite_spans": [],
"ref_spans": [
{
"start": 277,
"end": 285,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "3 Data and Experimental Setup 3.1 Data MT We use the fully-aligned United Nations corpus (Ziemski et al., 2016) for training NMT models, which includes 11 million multi-parallel sentences in six languages: Arabic (Ar), Chinese (Zh), English (En), French (Fr), Spanish (Es), and Russian (Ru). We train En-to-* models on the first 2 million sentences of the train set, using the official train/dev/test split. This dataset has the benefit of multiple alignment of the six languages, which allows for comparable cross-linguistic analysis.",
"cite_spans": [
{
"start": 89,
"end": 111,
"text": "(Ziemski et al., 2016)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Note that the parallel dataset is only used for training the NMT model. The classifier is then trained on the supervised data (described next) and all accuracies are reported on the English test sets. Bjerva et al. (2016) introduced a new sequence labeling task, for tagging words with semantic (SEM) tags in context. This is a good task to use as a starting point for investigating semantics because: i) tagging words with semantic labels is very simple, compared to building complex relational semantic structures; ii) it provides a large supervised dataset to train on, in contrast to most available datasets on word sense disambiguation, lexical substitution, and lexical similarity; and iii) the proposed SEM tagging task is an abstraction over POS tagging aimed at being language-neutral, and oriented to multi-lingual semantic parsing, all relevant aspects to MT. We provide here a brief overview of the task and its associated dataset, and refer to (Bjerva et al., 2016; Abzianidze et al., 2017) for more details.",
"cite_spans": [
{
"start": 201,
"end": 221,
"text": "Bjerva et al. (2016)",
"ref_id": "BIBREF8"
},
{
"start": 957,
"end": 978,
"text": "(Bjerva et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 979,
"end": 1003,
"text": "Abzianidze et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "The semantic classes abstract over redundant POS distinctions and disambiguate useful cases inside a given POS tag. Examples (1-2) above illustrate how fine-grained semantic distinctions may be important for generating accurate translations. Other examples of SEM tag distinctions include determiners like every, no, and some that are typically assigned a single POS tag (e.g., DT in the Penn Treebank), but have different SEM tags, reflecting universal quantification (AND), negation (NOT), and existential quantification (DIS), respectively. The comma, whose POS tag is a punctuation mark, is assigned different SEM tags representing conjunction, disjunction, or apposition, according to its discourse function. Proximal and distant demonstratives (this vs. that) have different SEM tags but the same POS tag. Named-entities, whose POS tag is usually a single tag for proper nouns, are disambiguated into several classes such as geo-political entity, location, organization, person, and artifact. Other nouns are divided into \"role\" entities (e.g., boxer) and \"concepts\" (e.g., wheel), a distinction reflecting existential consistency: an entity can have multiple roles but cannot be two different concepts. The dataset annotation scheme includes 66 finegrained tags grouped in 13 coarse categories. We use the silver part of the dataset; see Table 1 for some statistics.",
"cite_spans": [],
"ref_spans": [
{
"start": 1345,
"end": 1352,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic tagging",
"sec_num": null
},
{
"text": "Part-of-speech tagging For POS tagging, we simply use the Penn Treebank with the standard split (parts 2-21/22/23 for train/dev/test); see Table 1 for statistics. There are 34 POS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic tagging",
"sec_num": null
},
{
"text": "Neural MT We use the seq2seq-attn toolkit (Kim, 2016) to train 4-layered long shortterm memory (LSTM) (Hochreiter and Schmidhuber, 1997 ) attentional encoder-decoder NMT systems with 500 dimensions for both word embeddings and LSTM states. We compare both unidirectional and bidirectional encoders and experiment with different numbers of layers. Each system is trained with SGD for 20 epochs and the model with the best loss on the development set is used for generating features for the classifier.",
"cite_spans": [
{
"start": 42,
"end": 53,
"text": "(Kim, 2016)",
"ref_id": "BIBREF18"
},
{
"start": 102,
"end": 135,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.2"
},
{
"text": "Classifier The classifier is modeled as a feedforward neural network with one hidden layer, dropout (ratio of 0.5), a ReLU activation function, and a softmax layer onto the label set size. 2 The hidden layer is of the same size as the input coming from the NMT system (i.e., 500 dimensions). The classifier has no explicit access to context other than the hidden representation gen- erated by the NMT system, which allows us to focus on the quality of the representation. We chose this simple formulation as our goal is not to improve the state-of-the-art on the supervised task, but rather to analyze the quality of the NMT representation for the task. We train the classifier for 30 epochs by minimizing the cross-entropy loss using Adam (Kingma and Ba, 2014) with default settings. Again, we use the model with the best loss on the development set for evaluation.",
"cite_spans": [
{
"start": 189,
"end": 190,
"text": "2",
"ref_id": null
},
{
"start": 740,
"end": 761,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.2"
},
{
"text": "Baselines and an upper bound we consider two baselines: assigning to each word the most frequent tag (MFT) according to the training set (with the global majority tag for unseen words); and training with unsupervised word embeddings (UnsupEmb) as features for the classifier, which shows what a simple task-independent distributed representation can achieve. For the unsupervised word embeddings, we train a Skip-gram negative sampling model (Mikolov et al., 2013) with 500 dimensional vectors on the English side of the parallel data, to mirror the NMT word embedding size. We also report an upper bound of directly training an encoder-decoder on word-tag sequences (Word2Tag), simulating what an NMTstyle model can achieve by directly optimizing for the tagging tasks. Table 3 : SEM and POS tagging accuracy using features from the k-th encoding layer of 4-layered NMT models trained with different target languages. \"En\" column is an English autoencoder. BLEU scores are given for reference. Statistically significant differences from layer 1 are shown at p < 0.001 (\u21e4) and p < 0.01 (\u21e4\u21e4) . See text for details. Table 3 summarizes the results of training classifiers to predict POS and SEM tags using features extracted from different encoding layers of 4layered NMT systems. 3 In the POS tagging results (first block), as the representations move above layer 0, performance jumps to around 91-92%. This is above the UnsupEmb baseline but only on par with the MFT baseline (Table 2) . We note that previous work reported performance above a majority baseline for POS tagging (Shi et al., 2016; , but used a weak global majority baseline (all words are assigned a single tag) whereas here we compare with a stronger baseline that assigns to each word the most frequent tag according to the training data. The results are also far below the Word2Tag upper bound (Table 2) . Comparing layers 1 through 4, we see that in 3/5 target languages (Ar, Ru, Zh), POS tagging accuracy peaks at layer 1 and does not improve at higher layers, with some drops at layers 2 and 3. In 2/5 cases (Es, Fr) the performance is higher at layer 4. This result is partially consistent with previous findings regarding the quality of lower layer representations for the POS tagging task (Shi et al., 2016; . One possible explanation for the discrepancy when using different target languages is that French and Spanish are typologically closer to English compared to the other languages. It is possible that when the source and target languages are more similar, they share similar POS characteristics, leading to more benefit in using upper layers for POS tagging.",
"cite_spans": [
{
"start": 442,
"end": 464,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF23"
},
{
"start": 1069,
"end": 1072,
"text": "(\u21e4)",
"ref_id": null
},
{
"start": 1086,
"end": 1090,
"text": "(\u21e4\u21e4)",
"ref_id": null
},
{
"start": 1279,
"end": 1280,
"text": "3",
"ref_id": null
},
{
"start": 1578,
"end": 1596,
"text": "(Shi et al., 2016;",
"ref_id": "BIBREF29"
},
{
"start": 2264,
"end": 2282,
"text": "(Shi et al., 2016;",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 771,
"end": 778,
"text": "Table 3",
"ref_id": null
},
{
"start": 1115,
"end": 1122,
"text": "Table 3",
"ref_id": null
},
{
"start": 1476,
"end": 1485,
"text": "(Table 2)",
"ref_id": "TABREF2"
},
{
"start": 1863,
"end": 1872,
"text": "(Table 2)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3.2"
},
{
"text": "Turning to SEM tagging (Table 3 , second block), representations from layers 1 through 4 boost the performance to around 87-88%, far above the UnsupEmb and MFT baselines. While these results are below the Word2Tag upper bound (Table 2) , they indicate that NMT representations contain useful information for SEM tagging.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "(Table 3",
"ref_id": null
},
{
"start": 226,
"end": 235,
"text": "(Table 2)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Effect of network depth",
"sec_num": "4.1"
},
{
"text": "Going beyond the 1st encoding layer, representations from the 2nd and 3rd layers do not consistently improve semantic tagging performance. However, representations from the 4th layer lead to significant improvement with all target languages except for Chinese. Note that there is a statistically significant difference (p < 0.001) between layers 0 and 1 for all target languages, and between layers 1 and 4 for all languages except for Chinese, according to the approximate randomization test (Pad\u00f3, 2006) .",
"cite_spans": [
{
"start": 493,
"end": 505,
"text": "(Pad\u00f3, 2006)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of network depth",
"sec_num": "4.1"
},
{
"text": "Intuitively, higher layers have a more global perspective because they have access to higher representations of the word and its context, while lower layers have a more local perspective. Layer 1 has access to context but only through one hidden layer which may not be sufficient for capturing semantics. It appears that higher representations are necessary for learning even relatively simple lexical semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of network depth",
"sec_num": "4.1"
},
{
"text": "Finally, we found that En-En encoder-decoders (that is, English autoencoders) produce poor representations for POS and SEM tagging (last column in Table 3 ). This is especially true with higher layer representations (e.g., around 5% below the MT models using representations from layer 4). In contrast, the autoencoder has excellent sentence recreation capabilities (96.6 BLEU). This indicates that learning to translate (to any foreign language) is important for obtaining useful representations for both tagging tasks. Table 4 : SEM and POS tagging accuracy using features extracted from the 4th NMT encoding layer, trained with different target languages on a smaller parallel corpus (200K sentences).",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 3",
"ref_id": null
},
{
"start": 521,
"end": 528,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of network depth",
"sec_num": "4.1"
},
{
"text": "Does translating into different languages make the NMT system learn different source-side representations? In previous work , we found a fairly consistent effect of the target language on the quality of encoder representations for POS and morphological tagging, with differences of \u21e02-3% in accuracy. Here we examine if such an effect exists in both POS and SEM tagging. Table 3 also shows results using features obtained by training NMT systems on different target languages (the English source remains fixed). In both POS and SEM tagging, there are very small differences with different target languages (\u21e00.5%), except for Chinese which leads to slightly worse representations. While the differences are small, they are mostly statistically significant. For example, at layer 4, all the pairwise comparisons with different target languages are statistically significant (p < 0.001) in SEM tagging, and all except for two comparisons (Ar vs. Ru and Es vs. Fr) are significant in POS tagging.",
"cite_spans": [],
"ref_spans": [
{
"start": 371,
"end": 378,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of target language",
"sec_num": "4.2"
},
{
"text": "The effect of target language is much smaller than that reported in for POS and morphological tagging. This discrepancy can be attributed to the fact that our NMT systems in the present work are trained on much larger corpora (10x), so it is possible that some of the differences disappear when the NMT model is of better quality. To verify this, we trained systems using a smaller data size (200K sentences), comparable to the size used in . The results are shown in Table 4 . In this case, we observe a variance in classifier accuracy of 1-2%, based on target language, which is consistent with our earlier findings. This is true for both POS and SEM tagging. The differences in POS tagging accuracy are statistically significant (p < 0.001) for all pairwise comparisons except for Ar vs. Ru; the differences in SEM tagging accuracy are significant for all comparisons except for Ru vs. Zh. Finally, we note that training an English autoencoder on the smaller dataset results in much worse representations compared to MT models, for both POS and SEM tagging (Table 4 , last column), consistent with the behavior we observed on the larger data (Table 3 , last column).",
"cite_spans": [],
"ref_spans": [
{
"start": 468,
"end": 475,
"text": "Table 4",
"ref_id": null
},
{
"start": 1060,
"end": 1068,
"text": "(Table 4",
"ref_id": null
},
{
"start": 1145,
"end": 1153,
"text": "(Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of target language",
"sec_num": "4.2"
},
{
"text": "The SEM tags are grouped in coarse-grained categories such as events, names, time, and logical expressions (Bjerva et al., 2016) . In Figure 2 (top lines), we show the results of training and testing classifiers on coarse tags. Similar trends to the fine-grained case arise, with higher absolute scores: significant improvement using the 1st encoding layer and some additional improvement using the 4th layer, both statistically significant (p < 0.001). Again, there is a small effect of the target language. Figure 3 shows the change in F 1 score (averaged over target languages) when moving from layer 1 to layer 4 representations. The blue bars describe the differences per coarse tag when directly predicting coarse tags. The red bars show the same differences when predicting fine-grained tags and micro-averaging inside each coarse tag. The former shows the differences between the two layers at distinguishing among coarse tags. The latter gives an idea of the differences when distinguishing between fine-grained tags within a coarse category. The first observation is that in the majority of cases there is an advantage for classifiers trained with layer 4 representations, i.e., higher layer representations are better suited for learning the SEM Figure 3 : Difference in F 1 when using representations from layer 4 compared to layer 1, showing F 1 when directly predicting coarse tags (blue) and when predicting fine-grained tags and averaging inside each coarse tag (red).",
"cite_spans": [
{
"start": 107,
"end": 128,
"text": "(Bjerva et al., 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 509,
"end": 517,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1257,
"end": 1265,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis at the semantic tag level",
"sec_num": "4.3"
},
{
"text": "Considering specific tags, higher layers of the NMT model are especially better at capturing semantic information such as discourse relations (DIS tag: subordinate vs. coordinate vs. apposition relations), semantic properties of nouns (roles vs. concepts, within the ENT tag), events and predicate tense (EVE and TNS tags), logic relations and quantifiers (LOG tag: disjunction, conjunction, implication, existential, universal, etc.) , and comparative constructions (COM tag: equatives, comparatives, and superlatives). These examples represent semantic concepts and relations that require a level of abstraction going beyond the lexeme or word form, and thus might be better represented in higher layers in the deep network.",
"cite_spans": [
{
"start": 356,
"end": 434,
"text": "(LOG tag: disjunction, conjunction, implication, existential, universal, etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis at the semantic tag level",
"sec_num": "4.3"
},
{
"text": "One negative example that stands out in Figure 3 is the prediction of the MOD tag, corresponding to modality (necessity, possibility, and negation). It seems that such semantic concepts should be better represented in higher layers following our previous hypothesis. Still, layer 1 is better than layer 4 in this case. One possible explanation is that words tagged as MOD form a closed class, with only a few and mostly unambiguous words (\"no\", \"not\", \"should\", \"must\", \"may\", \"can\", \"might\", etc.). It is enough for the classifier to memorize these words in order to predict this class with high F 1 , and this is something that occurs better in lower layers. One final case worth mentioning is the NAM category, which stands for different types of named entities (person, location, organization, artifact, etc.) . In principle, this seems a clear case of semantic abstractions suited for higher layers, but the results from layer 4 are not significantly better than those from layer 1. This might be signaling a limitation of the NMT system at learning this type of semantic classes. Another factor might be the fact that many named entities are out of vocabulary words for the NMT system.",
"cite_spans": [
{
"start": 765,
"end": 813,
"text": "(person, location, organization, artifact, etc.)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 40,
"end": 46,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis at the semantic tag level",
"sec_num": "4.3"
},
{
"text": "In this section, we analyze specific cases of disagreement between predictions using representations from layer 1 and layer 4. We focus on discourse relations, as they show the largest improvement when going from layer 1 to layer 4 representations (DIS category in Figure 3 ). Intuitively, identifying discourse relations requires a relatively large context so it is expected that higher layers would perform better in this case. There are three discourse relations in the SEM tags annotation scheme: subordinate (SUB), coordinate (COO), and apposition (APP) relations. For each of those, Figure 4 (examples 1-9) shows the first three cases in the test set where layer 4 representations correctly predicted the tag but layer 1 representations were wrong. Examples 1-3 have subordinate conjunctions (as, after, because) connecting a main and an embedded clause, which layer 4 is able to correctly predict. Layer 1 mistakes these as attribute tags (REL, IST) that are usually used for prepositions. In examples 4-5, the coordinate conjunction and is used to connect sentences/clauses, which layer 4 correctly tags as COO. Layer 1 wrongly predicts the tag AND, which is used for conjunctions connecting shorter expressions like words (e.g., \"murder and sabotage\" in example 1). Example 6 is probably an annotation error, as and connects the phrases \"lame gait\" and \"wrinkled skin\" and should be tagged as AND. In this case, layer 1 is actually correct. In examples 7-9, layer 4 correctly identifies the comma as introducing an apposition, while layer 1 predicts NIL, a tag for punctuation marks without semantic content (e.g., end-of-sentence period). As expected, in most of these cases identifying the discourse function requires a fairly large context.",
"cite_spans": [],
"ref_spans": [
{
"start": 265,
"end": 273,
"text": "Figure 3",
"ref_id": null
},
{
"start": 589,
"end": 612,
"text": "Figure 4 (examples 1-9)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analyzing discourse relations",
"sec_num": "4.4"
},
{
"text": "Finally, we show in examples 10-12 the first three occurrences of AND in the test set, where layer 1 was correct and layer 4 was wrong. Interestingly, two of these (10-11) are clear cases of and connecting clauses or sentences, which should have been annotated as COO, and the last (12) is a conjunction of two gerunds. The predictions from layer 4 in these cases thus appear justifiable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing discourse relations",
"sec_num": "4.4"
},
{
"text": "Here we consider two architectural variants that have been shown to benefit NMT systems: bidirectional encoder and residual connections. We also experiment with NMT systems trained with different depths. Our motivation in this section is to see if the patterns we observed thus far hold in different NMT architectures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other architectural variants",
"sec_num": "4.5"
},
{
"text": "Bidirectional encoder Bidirectional LSTMs have become ubiquitous in NLP and also give some improvement as NMT encoders (Britz et al., 2017) . We confirm these results and note improvements in both translation (+1-2 BLEU) and SEM tagging quality (+3-4% accuracy), across the board, when using a bidirectional encoder. Some of our bidirectional models obtain 92-93% accuracy, which is close to the state-of-the-art on this task (Bjerva et al., 2016) . We observed similar improvements on POS tagging. Comparing POS and SEM tagging (Table 5) , we note that higher layer representations improve SEM tagging, while POS tagging peaks at layer 1, in line with our previous observations.",
"cite_spans": [
{
"start": 119,
"end": 139,
"text": "(Britz et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 426,
"end": 447,
"text": "(Bjerva et al., 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 529,
"end": 538,
"text": "(Table 5)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Other architectural variants",
"sec_num": "4.5"
},
{
"text": "Residual connections Deep networks can sometimes be trained better if residual connections are introduced between layers. Such connections were also found useful for SEM tagging (Bjerva et al., 2016) . Indeed, we noticed small but consistent improvements in both translation (+0.9 BLEU) and POS and SEM tagging (up to +0.6% accuracy) when using features extracted from an NMT model trained with residual connections (Table 5) . We also observe similar trends as before: POS tagging does not benefit from features from the upper layers, while SEM tagging improves with layer 4 representations.",
"cite_spans": [
{
"start": 178,
"end": 199,
"text": "(Bjerva et al., 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 416,
"end": 425,
"text": "(Table 5)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Other architectural variants",
"sec_num": "4.5"
},
{
"text": "Shallower MT models In comparing network depth in NMT, Britz et al. (2017) found that encoders with 2 to 4 layers performed the best. For completeness, we report here results using features extracted from models trained originally with 2 and 3 layers, in addition to our basic setting of 4 layers. Table 6 shows consistent trends with our previous observations: POS tagging does not benefit from upper layers, while SEM tagging does, although the improvement is rather small in the shallower models.",
"cite_spans": [
{
"start": 55,
"end": 74,
"text": "Britz et al. (2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 298,
"end": 305,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Other architectural variants",
"sec_num": "4.5"
},
{
"text": "Techniques for analyzing neural network models include visualization of hidden units (Elman, 1991; Karpathy et al., 2015; K\u00e1d\u00e1r et al., 2016; Qian et al., 2016a) , which provide illuminating, but often anecdotal information on how the network works. A number of studies aim to ob-0 1 2 3 4 Uni POS 87.9 92.0 91.7 91.8 91.9 SEM 81.8 87.8 87.4 87.6 88.2 Bi POS 87.9 93.3 92.9 93.2 92.8 SEM 81.9 91.3 90.8 91.9 91.9",
"cite_spans": [
{
"start": 85,
"end": 98,
"text": "(Elman, 1991;",
"ref_id": "BIBREF12"
},
{
"start": 99,
"end": 121,
"text": "Karpathy et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 122,
"end": 141,
"text": "K\u00e1d\u00e1r et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 142,
"end": 161,
"text": "Qian et al., 2016a)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Res POS 87.9 92.5 91.9 92.0 92.4 SEM 81.9 88.2 87.5 87.6 88.5 Table 6 : POS and SEM tagging accuracy with features from different layers of 2/3/4-layer encoders, averaged over all non-English target languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 69,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "tain quantitative correlations between parts of the neural network and linguistic properties, in both speech (Wu and King, 2016; Alishahi et al., 2017; Wang et al., 2017) and language processing models (K\u00f6hn, 2015; Qian et al., 2016a; Adi et al., 2016; Linzen et al., 2016; Qian et al., 2016b) . Methodologically, our work is most similar to Shi et al. (2016) and , who also used hidden vectors from neural MT models to predict linguistic properties. However, they focused on relatively low-level tasks (syntax and morphology, respectively), while we apply the approach to a semantic task and compare the results with a POS tagging task.",
"cite_spans": [
{
"start": 109,
"end": 128,
"text": "(Wu and King, 2016;",
"ref_id": "BIBREF34"
},
{
"start": 129,
"end": 151,
"text": "Alishahi et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 152,
"end": 170,
"text": "Wang et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 202,
"end": 214,
"text": "(K\u00f6hn, 2015;",
"ref_id": "BIBREF20"
},
{
"start": 215,
"end": 234,
"text": "Qian et al., 2016a;",
"ref_id": "BIBREF26"
},
{
"start": 235,
"end": 252,
"text": "Adi et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 253,
"end": 273,
"text": "Linzen et al., 2016;",
"ref_id": "BIBREF21"
},
{
"start": 274,
"end": 293,
"text": "Qian et al., 2016b)",
"ref_id": "BIBREF27"
},
{
"start": 342,
"end": 359,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Our methodology is reminiscent of the approach taken by P\u00e9rez-Ortiz and Forcada (2001) , who trained a recurrent neural network POS tagger in two steps. However, their goal was to improve POS tagging while we use it as a task to evaluate neural MT models.",
"cite_spans": [
{
"start": 56,
"end": 86,
"text": "P\u00e9rez-Ortiz and Forcada (2001)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "While neural network models have improved the state-of-the-art in machine translation, it is difficult to interpret what they learn about language. In this work, we explore what kind of linguistic information such models learn at different layers. Our experimental evaluation leads to interesting insights about the hidden representations in NMT models such as the effect of layer depth and target language on part-of-speech and semantic tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In the future, we would like to extend this work to other syntactic and semantic tasks that require building relations such as dependency relations and predicate-argument structure or to evaluate semantic representations of multi-word expressions. We believe that understanding how semantic properties are learned in NMT is a key step for creating better machine translation systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our code is available at http://github.com/ boknilev/nmt-repr-analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use a non-linear classifier because previous work found that it outperforms a linear classifier, while showing very similar trends(Qian et al., 2016b;.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The results given are with a unidirectional encoder; in section 4.5 we compare with a bidirectional encoder and observe similar trends.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Parallel Meaning Bank: Towards a Multilingual Corpus of Translations Annotated with Compositional Meaning Representations",
"authors": [
{
"first": "Lasha",
"middle": [],
"last": "Abzianidze",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Evang",
"suffix": ""
},
{
"first": "Hessel",
"middle": [],
"last": "Haagsma",
"suffix": ""
},
{
"first": "Rik",
"middle": [],
"last": "Van Noord",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Ludmann",
"suffix": ""
},
{
"first": "Duc-Duy",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "242--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lasha Abzianidze, Johannes Bjerva, Kilian Evang, Hessel Haagsma, Rik van Noord, Pierre Ludmann, Duc-Duy Nguyen, and Johan Bos. 2017. The Par- allel Meaning Bank: Towards a Multilingual Cor- pus of Translations Annotated with Compositional Meaning Representations. In Proceedings of the 15th Conference of the European Chapter of the As- sociation for Computational Linguistics: Volume 2, Short Papers, pages 242-247. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1608.04207"
]
},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained Anal- ysis of Sentence Embeddings Using Auxiliary Pre- diction Tasks. arXiv preprint arXiv:1608.04207.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Encoding of phonology in a recurrent neural model of grounded speech",
"authors": [
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Barking",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Afra Alishahi, Marie Barking, and Grzegorz Chrupa\u0142a. 2017. Encoding of phonology in a recurrent neu- ral model of grounded speech. In Proceedings of the SIGNLL Conference on Computational Natural Language Learning, Vancouver, Canada. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural Machine Translation by Jointly Learning to Align and Translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantic Roles for String to Tree Machine Translation",
"authors": [
{
"first": "Marzieh",
"middle": [],
"last": "Bazrafshan",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "419--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marzieh Bazrafshan and Daniel Gildea. 2013. Seman- tic Roles for String to Tree Machine Translation. In Proceedings of the 51st Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 419-423, Sofia, Bulgaria. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Comparing Representations of Semantic Roles for String-To-Tree Decoding",
"authors": [
{
"first": "Marzieh",
"middle": [],
"last": "Bazrafshan",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1786--1791",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marzieh Bazrafshan and Daniel Gildea. 2014. Com- paring Representations of Semantic Roles for String-To-Tree Decoding. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1786-1791, Doha, Qatar. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "What do Neural Machine Translation Models Learn about Morphology?",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "861--872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Has- san Sajjad, and James Glass. 2017. What do Neural Machine Translation Models Learn about Morphol- ogy? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 861-872. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Analyzing Hidden Representations in End-to-End Automatic Speech Recognition Systems",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and James Glass. 2017. Analyzing Hidden Representations in End-to-End Automatic Speech Recognition Systems. In Advances in Neu- ral Information Processing Systems (NIPS).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semantic Tagging with Deep Residual Networks",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "3531--3541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Bjerva, Barbara Plank, and Johan Bos. 2016. Semantic Tagging with Deep Residual Networks. In Proceedings of COLING 2016, the 26th Inter- national Conference on Computational Linguistics: Technical Papers, pages 3531-3541, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Massive Exploration of Neural Machine Translation Architectures",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Britz",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Goldie",
"suffix": ""
},
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Britz, Anna Goldie, Thang Luong, and Quoc Le. 2017. Massive Exploration of Neural Machine Translation Architectures. ArXiv e-prints.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word Sense Disambiguation Improves Statistical Machine Translation",
"authors": [
{
"first": "Tou Hwee",
"middle": [],
"last": "Seng Yee Chan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seng Yee Chan, Tou Hwee Ng, and David Chiang. 2007. Word Sense Disambiguation Improves Sta- tistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association of Compu- tational Linguistics, pages 33-40. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder",
"authors": [
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, and Stephan Vogel. 2017. Understanding and Improving Morphological Learning in the Neu- ral Machine Translation Decoder. In Proceedings of the 8th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), Taipei, Taiwan. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Distributed representations, simple recurrent networks, and grammatical structure",
"authors": [
{
"first": "",
"middle": [],
"last": "Jeffrey L Elman",
"suffix": ""
}
],
"year": 1991,
"venue": "Machine learning",
"volume": "7",
"issue": "2-3",
"pages": "195--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L Elman. 1991. Distributed representations, simple recurrent networks, and grammatical struc- ture. Machine learning, 7(2-3):195-225.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Utilizing Target-Side Semantic Role Labels to Assist Hierarchical Phrase-based Machine Translation",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of Fifth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "107--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Gao and Stephan Vogel. 2011. Utilizing Target- Side Semantic Role Labels to Assist Hierarchical Phrase-based Machine Translation. In Proceedings of Fifth Workshop on Syntax, Semantics and Struc- ture in Statistical Translation, pages 107-115. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semantics-Based Machine Translation with Hyperedge Replacement Grammars",
"authors": [
{
"first": "Bevan",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Karl Moritz",
"middle": [],
"last": "Hermann",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2012,
"venue": "The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "1359--1376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bevan Jones, Jacob Andreas, Daniel Bauer, Moritz Karl Hermann, and Kevin Knight. 2012. Semantics-Based Machine Translation with Hyper- edge Replacement Grammars. In Proceedings of COLING 2012, pages 1359-1376. The COLING 2012 Organizing Committee.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Representation of linguistic form and function in recurrent neural networks",
"authors": [
{
"first": "Akos",
"middle": [],
"last": "K\u00e1d\u00e1r",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.08952"
]
},
"num": null,
"urls": [],
"raw_text": "Akos K\u00e1d\u00e1r, Grzegorz Chrupa\u0142a, and Afra Alishahi. 2016. Representation of linguistic form and func- tion in recurrent neural networks. arXiv preprint arXiv:1602.08952.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Visualizing and Understanding Recurrent Networks",
"authors": [
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Fei-Fei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.02078"
]
},
"num": null,
"urls": [],
"raw_text": "Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and Understanding Recurrent Networks. arXiv preprint arXiv:1506.02078.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Seq2seq-attn",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2016. Seq2seq-attn. https:// github.com/harvardnlp/seq2seq-attn.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adam: A Method for Stochastic Optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "What's in an Embedding? Analyzing Word Embeddings through Multilingual Evaluation",
"authors": [
{
"first": "Arne",
"middle": [],
"last": "K\u00f6hn",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2067--2073",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arne K\u00f6hn. 2015. What's in an Embedding? Analyz- ing Word Embeddings through Multilingual Evalu- ation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, pages 2067-2073, Lisbon, Portugal. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Transactions of the Association for Computational Linguistics, 4:521- 535.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semantic Role Features for Machine Translation",
"authors": [
{
"first": "Ding",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "716--724",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ding Liu and Daniel Gildea. 2010. Semantic Role Features for Machine Translation. In Proceedings of the 23rd International Conference on Computa- tional Linguistics (Coling 2010), pages 716-724. Coling 2010 Organizing Committee.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Distributed Representations of Words and Phrases and their Compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed Representa- tions of Words and Phrases and their Composition- ality. In Advances in Neural Information Processing Systems, pages 3111-3119.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "User's guide to sigf: Significance testing by approximate randomisation",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Pad\u00f3. 2006. User's guide to sigf: Sig- nificance testing by approximate randomisation. https://www.nlpado.de/\u02dcsebastian/ software/sigf.shtml.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Part-of-Speech Tagging with Recurrent Neural Networks",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Antonio P\u00e9rez-Ortiz",
"suffix": ""
},
{
"first": "Mikel",
"middle": [
"L"
],
"last": "Forcada",
"suffix": ""
}
],
"year": 2001,
"venue": "Neural Networks, 2001. Proceedings. IJCNN '01. International Joint Conference on",
"volume": "3",
"issue": "",
"pages": "1588--1592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Antonio P\u00e9rez-Ortiz and Mikel L. Forcada. 2001. Part-of-Speech Tagging with Recurrent Neural Net- works. In Neural Networks, 2001. Proceedings. IJCNN '01. International Joint Conference on, vol- ume 3, pages 1588-1592.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Analyzing Linguistic Knowledge in Sequential Model of Sentence",
"authors": [
{
"first": "Xipeng",
"middle": [],
"last": "Peng Qian",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "826--835",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016a. Analyzing Linguistic Knowledge in Sequential Model of Sentence. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 826-835, Austin, Texas. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Investigating Language Universal and Specific Properties in Word Embeddings",
"authors": [
{
"first": "Xipeng",
"middle": [],
"last": "Peng Qian",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1478--1488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qian, Xipeng Qiu, and Xuanjing Huang. 2016b. Investigating Language Universal and Specific Prop- erties in Word Embeddings. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1478-1488, Berlin, Germany. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "How Grammatical is Characterlevel Neural Machine Translation? Assessing MT Quality with Contrastive Translation Pairs",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "376--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich. 2017. How Grammatical is Character- level Neural Machine Translation? Assessing MT Quality with Contrastive Translation Pairs. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 2, Short Papers, pages 376-382. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Does String-Based Neural MT Learn Source Syntax?",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Inkit",
"middle": [],
"last": "Padhi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1526--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does String-Based Neural MT Learn Source Syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526-1534, Austin, Texas. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Sequence to Sequence Learning with Neural Networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc Vv",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to Sequence Learning with Neural Net- works. In Advances in Neural Information Process- ing Systems, pages 3104-3112.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Word Representation Models for Morphologically Rich Languages in Neural Machine Translation",
"authors": [
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Xuanli",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.04217"
]
},
"num": null,
"urls": [],
"raw_text": "Ekaterina Vylomova, Trevor Cohn, Xuanli He, and Gholamreza Haffari. 2016. Word Representa- tion Models for Morphologically Rich Languages in Neural Machine Translation. arXiv preprint arXiv:1606.04217.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Gate Activation Signal Analysis for Gated Recurrent Neural Networks and Its Correlation with Phoneme Boundaries",
"authors": [
{
"first": "Yu-Hsuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cheng-Tao",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Hung-Yi",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.07588"
]
},
"num": null,
"urls": [],
"raw_text": "Yu-Hsuan Wang, Cheng-Tao Chung, and Hung-yi Lee. 2017. Gate Activation Signal Analysis for Gated Recurrent Neural Networks and Its Corre- lation with Phoneme Boundaries. arXiv preprint arXiv:1703.07588.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Lexical Semantics for Statistical Machine Translation",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Pascale",
"middle": [
"N"
],
"last": "Fung",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Chi-kiu",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Yongsheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhaojun",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2011,
"venue": "Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu, Pascale N Fung, Marine Carpuat, Chi-kiu Lo, Yongsheng Yang, and Zhaojun Wu. 2011. Lex- ical Semantics for Statistical Machine Translation. In Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Investigating Gated Recurrent Networks for Speech Synthesis",
"authors": [
{
"first": "Zhizheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2016,
"venue": "Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "5140--5144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhizheng Wu and Simon King. 2016. Investigat- ing Gated Recurrent Networks for Speech Synthe- sis. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pages 5140-5144. IEEE.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The United Nations Parallel Corpus v1.0",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Ziemski",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Pouliquen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations Parallel Cor- pus v1.0. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "SEM tagging accuracy with fine/coarsegrained tags using features extracted from different encoding layers of 4-layered NMT models trained with different target languages."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Examples of cases of disagreement between layer 1 (L1) and layer 4 (L4) representations when predicting SEM tags. The correct tag is italicized and the relevant word is underlined."
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>quent tag; UnsupEmb: classifier using unsuper-</td></tr><tr><td>vised word embeddings; Word2Tag: upper bound</td></tr><tr><td>encoder-decoder.</td></tr></table>",
"text": "POS and SEM tagging accuracy with baselines and an upper bound. MFT: most fre-",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>k</td><td>Ar</td><td>Es</td><td>Fr</td><td>Ru</td><td>Zh</td><td>En</td></tr><tr><td/><td/><td colspan=\"4\">POS Tagging Accuracy</td><td/></tr><tr><td colspan=\"6\">0 88.0 SEM Tagging Accuracy</td><td/></tr><tr><td colspan=\"7\">0 81.9 1 87.9 87.7 87.8 87.9 87.7 84.5</td></tr><tr><td colspan=\"4\">2 87.4 BLEU</td><td/><td/><td/></tr><tr><td/><td colspan=\"6\">32.7 49.1 38.5 34.2 32.1 96.6</td></tr></table>",
"text": "shows baseline and upper bound results. The UnsupEmb baseline performs rather poorly on both POS and SEM tagging. In comparison, NMT word embeddings(Table 3, rows with k = 0) perform slightly better, suggesting that word embeddings learned as part of the NMT model are better syntactic and semantic representations. However, the results are still below the most frequent tag baseline (MFT), indicating that noncontextual word embeddings are poor representations for POS and SEM tags. \u21e4 87.9 \u21e4 87.9 \u21e4 87.8 \u21e4 87.7 \u21e4 87.4 \u21e4 1 92.4 91.9 92.1 92.1 91.5 89.4 2 91.9 \u21e4 91.8 91.8 91.8 \u21e4 91.3 88.3 3 92.0 \u21e4 92.3 \u21e4 92.1 91.6 \u21e4\u21e4 91.2 \u21e4 87.9 \u21e4 4 92.1 \u21e4 92.4 \u21e4 92.5 \u21e4 92.0 90.5 \u21e4 86.9 \u21e4 \u21e4 81.9 \u21e4 81.8 \u21e4 81.8 \u21e4 81.8 \u21e4 81.2 \u21e4 \u21e4 87.5 \u21e4 87.4 \u21e4 87.3 \u21e4 87.2 \u21e4 83.2 \u21e4 3 87.8 87.9 \u21e4 87.9 \u21e4\u21e4 87.3 \u21e4 87.3 \u21e4 82.9 \u21e4 4 88.3 \u21e4 88.6 \u21e4 88.4 \u21e4 88.1 \u21e4 87.7 \u21e4 82.1 \u21e4",
"html": null,
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td/><td>L1</td><td>L4</td><td/></tr><tr><td colspan=\"4\">1 Zimbabwe 's 6 REL SUB AND COO A Fox asked him , \" How can you pretend to prescribe for others , when you are unable to heal</td></tr><tr><td/><td/><td/><td>your own lame gait and wrinkled skin ? \"</td></tr><tr><td>7</td><td>NIL</td><td>APP</td><td>But Syria 's president , Bashar al-Assad , has already rejected the commission 's request [...]</td></tr><tr><td>8</td><td>NIL</td><td>APP</td><td>Hassan Halemi , head of the pathology department at Kabul University where the autopsies were</td></tr><tr><td/><td/><td/><td>carried out , said hours of testing Saturday confirmed [...]</td></tr><tr><td>9</td><td>NIL</td><td>APP</td><td>Mr. Hu made the comments Tuesday during a meeting with Ichiro Ozawa , the leader of Japan 's</td></tr><tr><td/><td/><td/><td>main opposition party .</td></tr><tr><td>10</td><td/><td/><td/></tr></table>",
"text": "President Robert Mugabe has freed three men who were jailed for murder and sabotage as they battled South Africa 's anti-apartheid African National Congress in 1988 . 2 REL SUB The military says the battle erupted after gunmen fired on U.S. troops and Afghan police investigating a reported beating of a villager . 3 IST SUB Election authorities had previously told Haitian-born Dumarsais Simeus that he was not eligible to run because he holds U.S. citizenship . 4 AND COO Fifty people representing 26 countries took the Oath of Allegiance this week ( Thursday ) and became U.S. citizens in a special ceremony at the Newseum in Washington , D.C. 5 AND COO But rebel groups said on Sunday they would not sign and insisted on changes . AND COO [...] abortion opponents will march past the U.S. Capitol and end outside the Supreme Court . 11 AND COO Van Schalkwyk said no new coal-fired power stations would be approved unless they use technology that captures and stores carbon emissions . 12 AND COO A MEMBER of the Kansas Legislature meeting a Cake of Soap was passing it by without recognition , but the Cake of Soap insisted on stopping and shaking hands .",
"html": null,
"num": null
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"5\">: POS and SEM tagging accuracy</td></tr><tr><td colspan=\"6\">with features from different layers of 4-layer</td></tr><tr><td colspan=\"6\">Uni/Bidirectional/Residual NMT encoders, aver-</td></tr><tr><td colspan=\"5\">aged over all non-English target languages.</td></tr><tr><td/><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td></tr><tr><td>4</td><td colspan=\"5\">POS 87.9 92.0 91.7 91.8 91.9 SEM 81.8 87.8 87.4 87.6 88.2</td></tr><tr><td>3</td><td colspan=\"4\">POS 87.9 92.5 92.3 92.4 SEM 81.9 88.2 88.0 88.4</td><td>--</td></tr><tr><td>2</td><td colspan=\"3\">POS 87.9 92.7 92.7 SEM 82.0 88.5 88.7</td><td>--</td><td>--</td></tr></table>",
"text": "",
"html": null,
"num": null
}
}
}
}